Getting Friendly With AI

Intro

I had heard it was coming, this AI thing. I didn’t think it could possibly do what I do each day. What follows is my sincere (and naive) approach to understanding and accepting what we can do with AI.

Problem 1 (AI only as good as what’s given it?)

Since I’d grown up with Google, I was very good at finding stuff on the Internet, and when I had programming questions I could usually find exactly what I needed, or else it simply didn’t exist out there. I don’t remember now what the problem was, but I had exhausted my Google searching, and a colleague at work said I should ask AI for the solution. I did, and it came up with the exact same few answers I had already found on the Net, which reinforced my belief that AI was only as good as the work prior humans had done posting on the Net.

Problem 2 (Does AI make mistakes?)

The next time I used AI, it was a more positive experience. I think I needed to knock out a quick Python application that could listen on the network for some text and save it to disk, and I needed it to use threading. I’d written stuff like this many times over, so I wanted to see what AI would come up with. It turns out it came up with something pretty close to what I’d done before. It did one thing I found funny (and knew was wrong): in the main loop it had this:

while True:
	pass

As a long-time Python coder, I knew this would eat up CPU cycles, and I was surprised the AI was suggesting it. I ended up using most of the AI’s code but fixed this one line with a time.sleep(1). The experience also taught me not to trust everything the AI tells me.
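For illustration, here is a rough sketch of that kind of program with the fixed main loop (my reconstruction, assuming a simple TCP listener; the host, port, and filename are made up, not the original code):

import socket
import threading
import time

def listen_and_save(host="0.0.0.0", port=5000, outfile="received.txt"):
    # Worker thread: accept connections and append incoming text to disk.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind((host, port))
        server.listen()
        while True:
            conn, _ = server.accept()
            with conn, open(outfile, "a") as f:
                data = conn.recv(4096)
                if data:
                    f.write(data.decode(errors="replace"))

if __name__ == "__main__":
    threading.Thread(target=listen_and_save, daemon=True).start()
    while True:
        time.sleep(1)  # idle politely instead of burning a CPU core with pass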

Problem 3 (Visual problems help with back and forth)

The next time I used AI, it was because I wasn’t remembering my math. I had an OpenGL application that displayed an air volume: I had dots for targets and wanted to display a projection of where a radar was pointing. I simply told the AI what libraries I was coding with (PySide2 and pyqtgraph) and said I needed to draw a radar “thing” given only az_center, el_center, az_width, el_width, range_min, and range_max. It put out some code and called the shape a “frustum”. I quickly tried it and got results. It was amazing; I didn’t have to go to the Systems guys to get the right math.

Then I tried changing range_min/range_max and the results didn’t look like what I expected. I could actually see the shape on the screen, and it wasn’t right. It felt weird telling the AI “my shape looks like this, but I expected it to look like that when I change the range”. It quickly popped back with something like (as I remember it) “oh, you need to generate the radar frustum using proper rotation from the center az/el”. I tried the new code and it worked.
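To give a flavor of the geometry, here is my own minimal sketch of the “rotation from center az/el” idea, with assumed axis conventions; it is not the code the AI actually produced. Build the corner points around a boresight axis, then rotate the whole set by the center elevation and azimuth:

import numpy as np

def frustum_corners(az_center, el_center, az_width, el_width, range_min, range_max):
    # Sketch only: angles in degrees; x east, y north, z up is an assumption.
    half_az, half_el = np.radians(az_width) / 2.0, np.radians(el_width) / 2.0
    corners = []
    for r in (range_min, range_max):
        for s_az in (-1, 1):
            for s_el in (-1, 1):
                # Corner direction relative to a boresight pointing along +y.
                d = np.array([np.sin(s_az * half_az),
                              np.cos(s_az * half_az) * np.cos(s_el * half_el),
                              np.sin(s_el * half_el)])
                corners.append(r * d / np.linalg.norm(d))
    corners = np.array(corners)

    el, az = np.radians(el_center), np.radians(az_center)
    tilt = np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(el), -np.sin(el)],
                     [0.0, np.sin(el), np.cos(el)]])       # elevation about x
    swing = np.array([[np.cos(az), np.sin(az), 0.0],
                      [-np.sin(az), np.cos(az), 0.0],
                      [0.0, 0.0, 1.0]])                    # azimuth about z
    return corners @ tilt.T @ swing.T  # eight points, ready to draw as lines or a mesh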

The neat thing about this experience was that I went back and forth with the AI until I got something I could use (I didn’t even think you could use AI like that).

Problem 4 (Getting complex)

The next time I used AI, I wanted to quickly (key word here) make a PySide GUI that dynamically generated some controls. I had a folder of JSON files that listed the structures I needed, and I had a schema for another structure; we wanted everything to be built from a similar schema. I fed the AI the sample schema and asked it to generate code that would read the JSON files and produce similar schema files. It did that. I then said I wanted a PySide TreeView control that would read in these schemas and display them to the user. It did that. I then said I wanted these controls to load default values from the JSON files. The AI did that too. It was going so well, I even told the AI to abstract out the TreeView control into its own class.
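As a rough idea of the kind of control this produced, here is a minimal sketch of my own (not the AI’s code, and without the schema handling or delegates): a two-column Name/Value tree built from a JSON object with QStandardItemModel and QTreeView.

import json
from PySide2.QtGui import QStandardItemModel, QStandardItem
from PySide2.QtWidgets import QApplication, QTreeView

def build_tree_view(data):
    # Two columns only: Name and Value; nested dicts/lists become child rows.
    model = QStandardItemModel()
    model.setHorizontalHeaderLabels(["Name", "Value"])

    def add_rows(parent, obj):
        items = obj.items() if isinstance(obj, dict) else enumerate(obj)
        for key, value in items:
            key_item = QStandardItem(str(key))
            if isinstance(value, (dict, list)):
                parent.appendRow([key_item, QStandardItem("")])
                add_rows(key_item, value)
            else:
                parent.appendRow([key_item, QStandardItem(json.dumps(value))])

    add_rows(model.invisibleRootItem(), data)
    view = QTreeView()
    view.setModel(model)
    return view

if __name__ == "__main__":
    app = QApplication([])
    view = build_tree_view({"radar": {"az_center": 0.0, "ranges": [10, 20]}})
    view.show()
    app.exec_()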

After talking to the AI for quite some time to get a good result, I was hit with an epiphany: what if I wanted to repeat this work with the AI from scratch in the future? I asked the AI to describe the code it had written as a prompt, and then I saved that above the code as a comment. Example:

# - Provide an abstract base class `AbstractSchemaTreeView` for editing JSON values against a schema.
# - Two columns only: ["Name", "Value"].
# - Use QStandardItemModel and QTreeView.
# - Define custom delegates for int, float, bool, and string editing.
# - Add a `SmartValueDelegate` that inspects the schema node stored in `ROLE_SCHEMA` on the key item to choose the right editor (no visible type column).
# - Include methods to build a tree from JSON + schema, rebuild JSON back, and validate against schema types (object/array/primitive).
# - Implement a concrete subclass `JsonSchemaTreeView` that resolves types from the schema and provides default values.
# - Export constants: KEY_COL, VAL_COL, ROLE_FILEPATH, ROLE_DEFINITION, ROLE_SCHEMA.

For the JSON-to-schema script I got:

# Python script: read all .json files in current dir (except contracts_dds.json),
# infer JSON Schema Draft-04, strings always type:string, arrays infer from
# first element only, objects include required keys + additionalProperties:false,
# wrap all in top-level container schema with "definitions" = filename stems,
# save to contracts_dds.json.
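
For reference, a bare-bones version of that inference might look something like this. It is my own sketch of what such a script could be, not the generated code; only the contracts_dds.json name comes from the prompt above.

import json
from pathlib import Path

def infer_schema(value):
    # Minimal Draft-04-style inference matching the prompt above (sketch only).
    if isinstance(value, dict):
        return {
            "type": "object",
            "properties": {k: infer_schema(v) for k, v in value.items()},
            "required": sorted(value.keys()),
            "additionalProperties": False,
        }
    if isinstance(value, list):
        return {"type": "array", "items": infer_schema(value[0]) if value else {}}
    if isinstance(value, bool):       # bool before int: bool is a subclass of int
        return {"type": "boolean"}
    if isinstance(value, int):
        return {"type": "integer"}
    if isinstance(value, float):
        return {"type": "number"}
    return {"type": "string"}

definitions = {}
for path in sorted(Path(".").glob("*.json")):
    if path.name != "contracts_dds.json":
        definitions[path.stem] = infer_schema(json.loads(path.read_text()))

schema = {"$schema": "http://json-schema.org/draft-04/schema#", "definitions": definitions}
Path("contracts_dds.json").write_text(json.dumps(schema, indent=2))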

Conclusion

I’ve now accepted AI. I love it, actually. And as a software developer, I think it’s very important to get with the program and use the new tools in your day-to-day work. Use them to work faster and smarter. I think of AI as a hammer: other software developers might show up to the job and start hitting the nails in with their hands, and if you can use your AI tool (the hammer) to hit those nails in faster, go for it. Part of me wants to be a “purist” and reject AI, but another part of me realizes that as software developers, we’re really not re-inventing the wheel every day. If we can get AI to hand us the parts of the wheel that have already been figured out a hundred times over, then we can (possibly) build faster. We still (hopefully) need the human to plug this code into a bigger application and make it work.

Note: I ran this article through ChatGPT and it generated something much better, but I’ve decided to leave it as I wrote it for now.
