Getting Friendly With AI (More Amp)

Recap

I wrote about my initial run-in with AI, asking ChatGPT questions and then cutting and pasting code. I then moved on to feeding AI a folder diff and seeing if it could explain the changes. The magic really started to happen when AI formulated its own code in this post yesterday. My driving goal has been to learn to use AI by getting it to make its own Freeze Tag mod based loosely on my original code. Now for my final post in this series.

Setting Up the Playground

I was actually able to get the original Quake 2 running and rebuilt my original Freeze Tag code, so now I had an environment I could play in! (I was so happy about that because, after all, this is what made learning C fun all those years ago: change code to speed up a rocket and actually see the change.) I can now Code, Test (Play), Code. Or rather, Instruct AI, Test, Instruct AI.

Experiment 1: Start Weapon

I started with something simple by instructing Amp: “freeze_orig would give a newly spawned player a weapon according to start_weapon found in freeze.cfg. keep the freeze.cfg file and the start_weapon approach, but write your own implementation of this feature in freeze_new. look to freeze_orig for examples of how to implement”. It went to work and came up with this change here (notice it even updated the Visual Studio build project). I asked it for a good Git commit message to summarize the work and used that when committing. What was interesting to me was that it decided to add “start_armor” even though I never asked it to do that. I guess it assumed it was part of the same component?
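
For context, here's a minimal sketch of how a feature like this tends to look in the Quake 2 game DLL, using the stock helpers gi.cvar, FindItem, and ITEM_INDEX. This is my own illustration, not Amp's actual diff; the cvar and function names are assumptions.

static cvar_t *start_weapon;

void Freeze_InitCvars (void)
{
	// freeze.cfg can then override it, e.g.: set start_weapon "Super Shotgun"
	start_weapon = gi.cvar ("start_weapon", "Blaster", 0);
}

// Hypothetical spawn hook: give the newly spawned player the weapon
// named by the start_weapon cvar instead of the hard-coded Blaster.
void Freeze_GiveStartWeapon (edict_t *ent)
{
	gitem_t *item = FindItem (start_weapon->string);

	if (!item || !item->use)
		return;	// unknown item name: keep the default Blaster

	ent->client->pers.selected_item = ITEM_INDEX (item);
	ent->client->pers.inventory[ITEM_INDEX (item)] = 1;
	ent->client->pers.weapon = item;	// equipped on spawn
}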

Experiment 2: Grapple Hook

I then asked it to implement a grappling hook like Freeze Tag's. It came up with something, but it was treating the hook like a weapon, when it should have been an off-hand weapon that was always available to the player. I told it that it wasn't like that in freeze_orig, and it said “You’re absolutely right! (rainbow colors)” and went on to fix it. I then tried the hook in the game and it barely pulled the player. (I was actually flabbergasted that it worked the first time… a hook shot out.) I mentioned its slowness and it replied with “I see several issues with my implementation! The original uses a much more sophisticated pull system and applies force differently. Let me fix the key problems.” After that was done, I tried it out and the hook pulled too much. So much so that it splatted you against the cliff walls, ending in death. Still, it was good enough for now, so I asked for another Git commit message summarizing this change, which you can see here.
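
To show where the tuning went wrong, here's a rough sketch of the pull step itself. This is my own illustration of the general technique, not Amp's code; HOOK_PULL_SPEED is an assumed name and value, and it's the single knob that was first too weak and then wall-splatting strong.

#define HOOK_PULL_SPEED	650	// units/sec; assumed value, tune to taste

// Run each server frame while the hook is anchored: aim the owner's
// velocity straight at the hook's anchor point at a fixed speed.
void Hook_Pull (edict_t *hook)
{
	edict_t	*owner = hook->owner;
	vec3_t	dir;

	VectorSubtract (hook->s.origin, owner->s.origin, dir);
	VectorNormalize (dir);
	VectorScale (dir, HOOK_PULL_SPEED, owner->velocity);
}

Overwriting velocity every frame like this is exactly why too high a speed is deadly: nothing decelerates you before the wall does.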

Experiment 3: Menu & Message of the Day

This was actually the first system I tried to get Amp to implement, and I wanted to mention it here to show that stuff doesn't always work out. I asked for these Menu and MOTD features to be added, and Amp initially came up with a complex menu system. I had to explain to it that the Quake 2 engine only allowed a menu of limited characters; it looked through the code and said it would fix that. I then noticed it had no call to the function that displayed the menu it had made. I mentioned that the original Freeze code displayed it in PutClientInServer when the player was a spectator. It put the call in the wrong place in PutClientInServer. See if you can spot the issue:

	// spawn a spectator
	if (client->pers.spectator) {
		client->chase_target = NULL;
// ...
// Here is where I originally had Freeze Tag show the menu
		return;
	} else
		client->resp.spectator = false;

// ...

	// force the current weapon up
	client->newweapon = client->pers.weapon;
	ChangeWeapon (ent);
	
// Amp added the below part
	// show menu to spectators or new players
	if (client->pers.spectator)
	{
		UI_ShowMenu(ent, "Team Selection");
	}

I told Amp: “i’m not sure how to say this. you did it wrong. your UI_ShowMenu will never be executed because there’s an if (client->pers.spectator) way above it that will return w/o reaching your code. can you fix”. After some work it triumphantly declared “Now the menu will display properly when spectators are put in the server, just like the original freeze_orig system!” I built the code, tried it out, and nothing displayed. I'm not sure what was going on, so for now I stored this change in this branch.
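
For the record, the fix I had in mind was just moving the call up into the spectator branch, ahead of the early return (a sketch only, reusing Amp's UI_ShowMenu helper):

	// spawn a spectator
	if (client->pers.spectator) {
		client->chase_target = NULL;
// ...
		// show the menu here, before the early return,
		// the same place the original Freeze Tag code did
		UI_ShowMenu (ent, "Team Selection");
		return;
	} else
		client->resp.spectator = false;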

Reflections on Workflow

Working with Amp is rather exciting. You can see in this picture me talking to Amp on the left. I like that it's built into VS Code. I like that you can watch it work, and that it talks as it goes. I can even be in another window watching it generate code changes on my local hard drive. Once it's done a change that I've tested and like, I ask it for a Git commit message and push the changes.

Frustrated By AI

Don’t get me wrong. Today was very exciting for me. I feel like an AI Wrangler or AI Whisperer. Perhaps if I could just correctly tell the AI what to do, it would generate correct output. I plan to get better at talking to AI.

However, I do see a fundamental problem with AI currently: it doesn't ask questions back before it runs off and does something. For example, when I asked it to make a grappling hook, it would have been nice if it paused and said “it looks like the original hook was off-hand and not a main weapon, do you want it to be off-hand?” Right now the AI just assumes. I suppose it has to; if it didn't assume a great number of things, it would never move forward.

I actually stopped what I was doing and looked into this, and it appears there's a group of people researching making AI more “Socratic” (having it ask questions to arrive at an answer). Why does this matter? Working on this project and this blog post cost me about twenty bucks for the AI to come up with the code. If AI is running off to churn out an answer for me (and that time costs me money), I want it to make sure it understands the assignment first.

Conclusion

Maybe I have it all wrong. I am just starting out. Maybe I need to use words like “tell me what you plan to do” before I ask Amp to implement. This is actually pretty funny because it makes me think of the Nonviolent Communication concept: “when I heard you say x, I felt y”. I need to start asking AI “what do you think I'm asking you to do?”. Actually, I have been doing that with ChatGPT: I will ask it “what's a better way to ask you this?”

Regardless, this entire activity has been very educational and only cost me a few coffees! I can’t wait to work with Amp more.

Source at GitHub (Amp generated all the documentation)

Update: Now that I’ve gone through this exercise I’ve started reading what other people are doing with LLMs. Found this very interesting article.

Getting Friendly With AI (AmpCode)

Intro

The other day I was messing with ChatGPT, and it was able to analyze a Beyond Compare diff of two different source trees; it looked like it could explain in detail what my Freeze Tag changes to the original code were. Today I wanted to see if I could move beyond that and actually get some type of AI somewhere to write code for me. Today I found AmpCode.

Getting Set Up

First I installed Visual Studio Code and the Amp extension by Sourcegraph. I also started with a copy of the original Quake 2 source code and my Freeze Tag modified source code.

Step 1

I told Amp to “please compare (original Quake 2 source code dir) and (Freeze Tag modified source code dir)”. I then told it to take the (original Quake 2 source code dir) and turn it into Freeze Tag. It started to go to work. Things were happening. I looked at the output in another folder and started to realize: it was just moving the differences over! Why, I could do that myself. What was going on?

I told Amp “you’re not actually doing what i needed. you are just doing a one-to-one line by line copy of the code. what if i wanted you to implement a freeze-tag like mod with your own code, and not by copying my code. can you make your own implementation?”. It then said something that made my jaw drop. I’ll print the whole output here:

Ah, you're absolutely right! I was just doing a mechanical copy-paste instead of actually implementing freeze-tag mechanics. That's much less interesting and educational.

Let me implement freeze-tag from scratch based on understanding the gameplay:

Freeze-Tag Game Mechanics:

Team-based (red/blue/green/yellow teams)
When shot by enemy, player gets "frozen" instead of dying
Frozen players stand still in animation, can't move/shoot
Teammates can "thaw" frozen players by standing near them
Win condition: freeze entire enemy team
Additional features: grappling hook, team commands, ready system
My Implementation Plan:

Core freeze mechanics - freeze/thaw player states
Team system - assign players to teams, team damage rules
Freeze animations - visual feedback for frozen state
Thawing system - proximity-based teammate rescue
Win detection - check if entire team is frozen
Commands - team switching, ready system
Game flow - rounds, intermissions, scoring
Want me to implement this from scratch? I'll write clean, modern C code with my own logic and structure, just using the original as reference for the game mechanics concept.

This response made me freak out. Am I out of a job? Let's see what it can do! I said “yes” and away it went. It was kind of cool that I could watch it work; I almost felt like I was doing one of those “code alongside people” things people talk about. After it had churned away for a while, I looked at what it had produced and was impressed, but I noticed a lot of stuff had not been implemented. I told it “i feel like you missed a lot. for example the original freeze tag implemented a four team system but you only support two currently. can you fix this?” It said sure and implemented four teams. I then went more generic: “can you look over the original implementation again and see if you missed anything?” It replied with:

You're right, let me take a more thorough look at the original freeze tag implementation to see what I missed.

Wow, I can see I missed A LOT! The original has:

Grappling hook system (lines 11-26, 52-55)
Weapon configuration system (lines 32-39, 59, 123-163)
Ready system (line 63, 129-130)
Voting system (line 62)
Map rotation system (line 61)
Team change commands (lines 165-200)
Spectator support
Much more complex freeze state management

So I said “make it so!”, and it went to work and implemented each feature. Then I asked “you’ve learned a lot doing this work. how can i have you remember what you did for me in future sessions?” and it printed an AGENTS.md. Evidently if I reference this file in the future, it can pick up where we left off.
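
Amp's feature list above makes the core mechanic sound fancier than it is. Here's my own minimal sketch of proximity thawing, not Amp's code, assuming the mod adds frozen, team, and thaw_time fields to the client:

#define THAW_RADIUS	64	// units; assumed
#define THAW_TIME	3.0	// seconds a teammate must stay close; assumed

void Freeze_Thaw (edict_t *ent);	// assumed helper: unfreeze and restore the player

// Run each server frame for every frozen player. findradius and
// FRAMETIME are stock game-DLL helpers.
void Freeze_CheckThaw (edict_t *frozen)
{
	edict_t	*other = NULL;

	while ((other = findradius (other, frozen->s.origin, THAW_RADIUS)) != NULL)
	{
		if (!other->client || other == frozen)
			continue;
		if (other->client->resp.team != frozen->client->resp.team)
			continue;	// only teammates can thaw you
		if (other->client->frozen)
			continue;	// frozen teammates can't help

		frozen->client->thaw_time -= FRAMETIME;
		if (frozen->client->thaw_time <= 0)
			Freeze_Thaw (frozen);
		return;
	}

	frozen->client->thaw_time = THAW_TIME;	// nobody close: reset progress
}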

Future

I now see the workflow I need. The problem is I still don't have a working copy of Quake 2. Strike that, I have a working copy; I just can't get a copy to work with something I've built, so I can't test my code. Technical note: Quake 2 Remastered doesn't have a dedicated server, so I'm not going to use that. I couldn't for the life of me get a build to work with yquake2. I was then looking at Q2PRO but eventually decided to move over to Quake 3 for now. My workflow will be: start with original code, ask Amp to implement one feature at a time, test it, commit to GitHub, and keep moving forward. I can't wait for the future. Another technical note: while working with Amp, it looks like my free $10 credit was used up and I had to feed it another $5 to keep working. I felt like I was paying an intern to code for me.

First Impressions

I'm very impressed with Amp, so much so that I'm nervous for the future. Yes, it was coding based on what it saw my original code do, but the fact that it even seemed to understand what was going on still blows me away.

Getting Friendly With AI (Part 2)

Intro

I'd talked to my cousin about AI and wanting to learn more about it. He suggested I get AI to write the Freeze Tag mod I wrote for Quake 2 many years ago. I thought it funny that I wrote that mod to learn C, and now I'll redo it to learn AI? Why not.

First I had to install Quake 2 on my machine and get it running again. I spent a bit of time trying to get the older Quake 2 to run, using Yamagi Quake II. I dug up my old Quake 2 CD and copied over the pak0.pak file. Running the game, though, was difficult: there was no mouse look, and no one could be seen in multiplayer. That's when I found out they'd done a Quake 2 Remaster in 2023, so I decided to check that out. After a quick Steam download, I tried it out and it ran great. There were even people still playing online! (UPDATE: I've since learned the Remaster doesn't have dedicated servers, which is a really bad thing, so we won't be working with it.)

But now I need to get my Freeze Tag mod working with this version. I found out they'd rewritten the original mod source. According to the readme, this was to use a new API and networking code, as well as to add a thin “client game” module. They were inviting mod authors to rewrite their mods against the new source.

Asking About Diff

I quickly began formulating: we’re living in the magical world of AI, perhaps I could tell AI to analyze what I had changed in the original mod source to make Freeze Tag and then ask it to make those changes to this new codebase? I wasn’t sure how to do that so I started simply by uploading a Beyond Compare diff of the original source and my Freeze Tag code to ChatGPT:

"Summarize the key structural and functional differences in this Beyond Compare diff report" (Diff file attached)

It came back with lots of details as well as:

Overall Impact
Original Quake II: Deathmatch/CTF core mechanics with standard frag-based play.
Freeze Tag Mod:
	Players are frozen instead of dying outright.
	Victory is team-based (freezing all opponents).
	Strong focus on teamwork (thawing is as important as fragging).
	Adds custom HUD elements, sound effects, grappling hook, flashlight, and new scoring/win conditions.
In short: the freeze-tag codebase is a total conversion gameplay mode layered on top of Quake II, introducing new states (frozen/thawed), cooperative mechanics (rescuing teammates), new UI/menus, and extended team logic.

Looked good so far. Now I uploaded the diff between the original source and the new remaster source and asked ChatGPT about that. It returned more details, as well as:

Overall Functional Shift
The rerelease modularizes client vs. server/game logic.
Introduces modern C++ constructs (e.g., std::function, std::optional, std::array, constexpr).
Adds quality-of-life features for coop play, HUD flexibility, and player feedback (hit markers, respawn info).
Lays groundwork for cross-platform physics (PC vs. N64).

In short:
The rerelease restructures the Quake II game DLL into a more modular, modernized architecture. It introduces configurable physics, compressed stat systems for ammo/powerups, extended player/HUD tracking, and robust client-side rendering/notification systems—substantially enhancing coop, accessibility, and overall maintainability.

That all sounded about right as well, but now how do I move forward? Here's where I got stuck. I decided to show a friend a Beyond Compare diff of the original mod source against the new codebase. That's when it finally hit me how much they'd changed. They even went from C to C++.

Next Step

I believe what I need to do is remember what I did to make Freeze Tag in the first place, and break it down. I wrote these changes decades ago, though, so I think I'll get the AI to explain to me what particular changes were made. Then I'll see if I can ask AI to make each individual change to the new codebase. But that's for another day.

Notes For Getting Quake 2 Remaster Building

Grab Visual Studio 2022 Community Edition
You only need “Desktop development with C++” but must install/run “vcpkg integrate install”
When you first build, a vcpkg.json is created. Change this file to the following (a newer fmt causes the error “Unicode support requires compiling with /utf-8”, so pin the older version):

{
  "$schema": "https://raw.githubusercontent.com/microsoft/vcpkg-tool/main/docs/vcpkg.schema.json",
  "name": "q2-game-dll",
  "version": "2022",
  "builtin-baseline": "3426db05b996481ca31e95fff3734cf23e0f51bc",
  "dependencies": [
    "fmt",
    "jsoncpp"
  ],
  "overrides": [
    { "name": "fmt", "version": "10.2.1" }
  ]
}

Getting Friendly With AI

Intro

I had heard it was coming, this AI thing. I didn’t think it could possibly do what I do each day. What follows is my sincere (and naive) approach to understanding and accepting what we can do with AI.

Problem 1 (AI only as good as what’s given it?)

Since I'd grown up with Google, I was very good at finding stuff on the Internet, and when I had programming questions I could usually find exactly what I needed, or it didn't exist out there. I don't remember now what the problem was, but I had exhausted my Google searching, and a colleague at work said I should ask AI what the solution was. I did, and it came up with the exact same few answers I had found on the net. That reinforced my belief that AI was only as good as the work prior humans had done posting on the net.

Problem 2 (Does AI make mistakes?)

The next time I used AI, it was a more positive experience. I think I needed to knock out a quick Python application that could listen on the network for some text and save it to disk, and I needed it to do threading. I'd written stuff like this many times over, so I wanted to see what AI would come up with. Turns out it came up with something pretty close to what I'd done before. It did one thing I found funny (and knew was wrong): in the main loop it had:

while True:
	pass

As a long-time Python coder, I knew this busy loop would eat up CPU cycles, and I was surprised the AI was suggesting it. I ended up using most of the AI's code but fixed this one line with a time.sleep(1). The experience also taught me not to trust everything the AI tells me.

Problem 3 (Visual problems help with back and forth)

The next time I used AI, it was because I wasn't remembering my math. I had an OpenGL application that would display an air volume. I had dots for targets and wanted to display a projection of where a radar was pointing. I simply told the AI what libraries I was coding with (PySide2 and pyqtgraph) and told it I needed to draw a radar “thing” given only az_center, el_center, az_width, el_width, range_min, and range_max. It put out some code and called the shape a “frustum”. I quickly tried it and got results. It was amazing. I didn't have to go to the Systems guys to get the right math.

I tried changing range_min/range_max and the results didn't look like what I expected; I could actually see the shape on the screen, and it wasn't right. It felt weird telling the AI “my shape is looking like this but I expected it to look like this when I change the range”. It quickly popped back with something like (as I remember it) “oh, you need to generate the radar frustum using proper rotation from the center az/el”. I tried the new code and it worked.
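
The fix boils down to one idea: build each corner of the frustum by offsetting az/el from the beam center first, then converting that direction to Cartesian and scaling by range. Here's a sketch of the math (my own illustration; the real code was PySide2/pyqtgraph Python, and the axis convention, az from north, el from horizontal, angles in radians, is an assumption):

#include <math.h>

// One frustum corner: offset from the beam center in az/el, convert
// the spherical direction to Cartesian, scale by range.
static void frustum_corner (double az_c, double el_c, double daz, double del,
                            double range, double out[3])
{
	double az = az_c + daz;	// rotate from the center, not from the world axes
	double el = el_c + del;

	out[0] = range * cos (el) * sin (az);	// x: east
	out[1] = range * cos (el) * cos (az);	// y: north
	out[2] = range * sin (el);		// z: up
}

// The 8 corners come from daz = +/- az_width/2 and del = +/- el_width/2,
// with range = range_min for the near face and range_max for the far face.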

The neat thing about this experience was I went back and forth with the AI until I got something I could use (I didn’t even think you could use AI like that).

Problem 4 (Getting complex)

The next time I used AI, I wanted to quickly (key word here) make a PySide GUI that dynamically generated some controls. I had a folder of JSON files that listed the structures I needed. I also had a schema for another structure, and we wanted everything to be built from a similar schema. I fed the AI the sample schema and asked it to generate code that would read the JSON files and make similar schema files. It did that. I then said I wanted a PySide TreeView control that would read in these schemas and display them to the user. It did that. I then said I wanted these controls to load default values from the JSON files. The AI did that too. It was going so well that I even told the AI to abstract out the TreeView control and make it a class.

After talking to the AI for quite some time to get a good result, I was hit with an epiphany: what if I wanted to repeat this work from scratch with the AI in the future? I asked the AI to describe the code it had written as a prompt, and then I saved that above the code as a comment. Example:

# - Provide an abstract base class `AbstractSchemaTreeView` for editing JSON values against a schema.
# - Two columns only: ["Name", "Value"].
# - Use QStandardItemModel and QTreeView.
# - Define custom delegates for int, float, bool, and string editing.
# - Add a `SmartValueDelegate` that inspects the schema node stored in `ROLE_SCHEMA` on the key item to choose the right editor (no visible type column).
# - Include methods to build a tree from JSON + schema, rebuild JSON back, and validate against schema types (object/array/primitive).
# - Implement a concrete subclass `JsonSchemaTreeView` that resolves types from the schema and provides default values.
# - Export constants: KEY_COL, VAL_COL, ROLE_FILEPATH, ROLE_DEFINITION, ROLE_SCHEMA.

For the JSON-to-schema script I got:

# Python script: read all .json files in current dir (except contracts_dds.json),
# infer JSON Schema Draft-04, strings always type:string, arrays infer from
# first element only, objects include required keys + additionalProperties:false,
# wrap all in top-level container schema with "definitions" = filename stems,
# save to contracts_dds.json.

Conclusion

I've now accepted AI. I love it, actually. And as a software developer, I think it's very important to get with the program and use the new tools in your day-to-day work. Use them to work faster and smarter. I think of AI as a hammer: all the other software developers might show up to the job and start hitting the nails in with their hands, and if you can use your AI tool (hammer) to hit those nails in faster, go for it. Part of me wants to be a “purist” and reject AI, but another part of me realizes that as software developers, we're really not re-inventing the wheel every day. If we get AI to hand us the parts of the wheel that have already been figured out a hundred times over, then we can (possibly) build faster. We still, hopefully, need the human to plug this code into a bigger application and make it work.

Note: I stuck this article in ChatGPT and it generated something much better, but I've decided to leave it as I wrote it for now.