Getting Friendly With AI (More Amp)

Recap

I wrote about my initial run-in with AI: asking ChatGPT questions and then cutting and pasting code. From there I went on to feeding AI a folder diff and seeing if it could explain the changes. Then the magic really started to happen when AI formulated its own code in yesterday's post. My driving goal has been to learn to use AI by getting it to make its own Freeze Tag mod based loosely on my original code. Now for my final post in this series.

Setting Up the Playground

I was actually able to get the original Quake 2 running and rebuilt my original Freeze Tag code, so now I had an environment I could play in! (I was so happy about that because, after all, this is what made learning C all those years ago fun: change the code to speed up a rocket and actually see the change.) I can now Code, Test (Play), Code. Or rather: Instruct AI, Test, Instruct AI.

Experiment 1: Start Weapon

I started with something simple by instructing Amp: “freeze_orig would give a newly spawned player a weapon according to start_weapon found in freeze.cfg. keep the freeze.cfg file and the start_weapon approach, but write your own implementation of this feature in freeze_new. look to freeze_orig for examples of how to implement”. It went to work and came up with this change here (notice it even updated the Visual Studio build project). I asked it for a good Git commit message to summarize the work and used that when committing. What was interesting to me is that it decided to add “start_armor” even though I never asked it to do that. I guess it assumed it was part of the same component?

Experiment 2: Grapple Hook

I then asked it to implement a grappling hook like the one in Freeze Tag. It came up with something, but it was treating the hook like a weapon when it should have been an off-hand weapon that was always available to the player. I told it that it wasn’t like that in freeze_orig, and it said “You’re absolutely right! (rainbow colors)” and went on to fix it. I then tried the hook in the game and it barely pulled the player. (I was actually flabbergasted that it worked the first time… a hook shot out.) I mentioned its slowness and it replied with “I see several issues with my implementation! The original uses a much more sophisticated pull system and applies force differently. Let me fix the key problems.” After that was done, I tried it out and the hook pulled too much. So much so that it splatted you against the cliff walls, ending in death. Still, it was good enough for now, so I asked for another Git commit message summarizing this change, which you can see here.

Experiment 3: Menu & Message of the Day

This was actually the first system I tried to get Amp to implement, and I just wanted to mention it here to show that stuff doesn’t always work out. I asked for these Menu and MOTD features to be added, and Amp initially came up with a complex menu system. I had to explain to it that the Quake 2 engine only allowed for a menu with a limited number of characters; it looked through the code and said it would fix that. I then noticed there was no call to the function that displayed the menu it had made. I mentioned that the original Freeze code displayed it in PutClientInServer when the player was a spectator. Amp then added its call to PutClientInServer in the wrong place. See if you can spot the issue:

	// spawn a spectator
	if (client->pers.spectator) {
		client->chase_target = NULL;
// ...
// Here is where I originally had Freeze Tag show the menu
		return;
	} else
		client->resp.spectator = false;

// ...

	// force the current weapon up
	client->newweapon = client->pers.weapon;
	ChangeWeapon (ent);
	
// Amp added the below part
	// show menu to spectators or new players
	if (client->pers.spectator)
	{
		UI_ShowMenu(ent, "Team Selection");
	}

I told Amp: “i’m not sure how to say this. you did it wrong. your UI_ShowMenu will never be executed because there’s an if (client->pers.spectator) way above it that will return w/o reaching your code. can you fix”. After some work it triumphantly declared “Now the menu will display properly when spectators are put in the server, just like the original freeze_orig system!” I built the code and tried it out and nothing displayed. I’m not sure what was going on so for now I stored this change in this branch.

Reflections on Workflow

Working with Amp is rather exciting. In this picture you can see me talking to Amp on the left. I like that it’s built into VS Code. I like that you can watch it work and it talks as it goes. I can even be in another window watching it generate code changes on my local hard drive. Once it’s made a change that I’ve tested and like, I ask it for a Git commit message and push the changes.

Frustrated By AI

Don’t get me wrong. Today was very exciting for me. I feel like an AI Wrangler or AI Whisperer. Perhaps if I could just correctly tell the AI what to do, it would generate correct output. I plan to get better at talking to AI.

However, I do see a fundamental problem with AI currently. AI doesn’t ask questions back before it runs off and does something. For example, when I asked it to make a grappling hook, it would have been nice if it had paused and said “it looks like the original hook was off-hand and not a main weapon, do you want it to be off-hand?” Right now the AI just assumes. I suppose it has to. If it didn’t assume a great number of things, it would never move forward.

I actually stopped what I was doing and looked into this, and it appears there’s a group of people researching how to make AI more “Socratic” (having it ask questions to arrive at an answer). Why does this matter? Working on this project and this blog series cost me about twenty bucks for the AI to come up with the code. If AI is going to run off and churn out an answer for me (and that time costs me money), I want it to make sure it understands the assignment correctly.

Conclusion

Maybe I have it all wrong. I am just starting out. Maybe I need to use words like “tell me what you plan to do” before I ask Amp to implement. This is actually pretty funny because it makes me think of the Nonviolent Communication concept: “when I heard you say x, I felt y”. I need to start asking AI “what do you think I’m asking you to do?”. Actually I have been doing that with ChatGPT. I will ask it “what’s a better way to ask you this?”

Regardless, this entire activity has been very educational and only cost me a few coffees! I can’t wait to work with Amp more.

Source at GitHub (Amp generated all the documentation)

Update: Now that I’ve gone through this exercise, I’ve started reading about what other people are doing with LLMs. I found this very interesting article.

Getting Friendly With AI (AmpCode)

Intro

The other day I was messing with ChatGPT: it analyzed a Beyond Compare diff of two different source trees and seemed able to explain in detail what my Freeze Tag changes to the original code were. Today I wanted to see if I could move beyond that and actually get some type of AI somewhere to write code for me. Today I found AmpCode.

Getting Set Up

First I installed Visual Studio Code and the Amp Extension by Sourcegraph. I also started with a copy of the original Quake 2 source code and my Freeze Tag modified source code.

Step 1

I told Amp to “please compare (original Quake 2 source code dir) and (Freeze Tag modified source code dir)”. I then told it to take the (original Quake 2 source code dir) and turn it into Freeze Tag. It started to go to work. Things were happening. I looked at the output in another folder and started to realize it was just moving the differences over! Why, I could do that myself. What’s going on?

I told Amp “you’re not actually doing what i needed. you are just doing a one-to-one line by line copy of the code. what if i wanted you to implement a freeze-tag like mod with your own code, and not by copying my code. can you make your own implementation?”. It then said something that made my jaw drop. I’ll print the whole output here:

Ah, you're absolutely right! I was just doing a mechanical copy-paste instead of actually implementing freeze-tag mechanics. That's much less interesting and educational.

Let me implement freeze-tag from scratch based on understanding the gameplay:

Freeze-Tag Game Mechanics:

Team-based (red/blue/green/yellow teams)
When shot by enemy, player gets "frozen" instead of dying
Frozen players stand still in animation, can't move/shoot
Teammates can "thaw" frozen players by standing near them
Win condition: freeze entire enemy team
Additional features: grappling hook, team commands, ready system
My Implementation Plan:

Core freeze mechanics - freeze/thaw player states
Team system - assign players to teams, team damage rules
Freeze animations - visual feedback for frozen state
Thawing system - proximity-based teammate rescue
Win detection - check if entire team is frozen
Commands - team switching, ready system
Game flow - rounds, intermissions, scoring
Want me to implement this from scratch? I'll write clean, modern C code with my own logic and structure, just using the original as reference for the game mechanics concept.

This response made me freak out. Am I out of a job? Let’s see what it can do! I said “yes” and away it went. It was kind of cool that I could watch it work. I almost felt like I was doing one of those “code alongside people” things people talked about. After it had churned away for a while I looked at what it had produced and was impressed, but I noticed a lot of stuff had not been implemented. I told it “i feel like you missed a lot. for example the original freeze tag implemented a four team system but you only support two currently. can you fix this?” It then said sure and implemented four teams. I then went more generic: “can you look over the original implementation again and see if you missed anything?” It replied with:

You're right, let me take a more thorough look at the original freeze tag implementation to see what I missed.

Wow, I can see I missed A LOT! The original has:

Grappling hook system (lines 11-26, 52-55)
Weapon configuration system (lines 32-39, 59, 123-163)
Ready system (line 63, 129-130)
Voting system (line 62)
Map rotation system (line 61)
Team change commands (lines 165-200)
Spectator support
Much more complex freeze state management

So I said “make it so!”. It started working away. It went and implemented each feature. Then I asked “you’ve learned a lot doing this work. how can i have you remember what you did for me in future sessions?” and it printed an AGENTS.md. Evidently if I reference this in the future it can pick up where we left off.

Future

I now see the workflow I need to follow. The problem is I still don’t have a working copy of Quake 2. Strike that, I have a working copy, I just can’t get a copy to work with something I’ve built, so I can’t test my code. Technical note: Quake 2 Remastered doesn’t have a Dedicated Server, so I’m not going to use that. I couldn’t for the life of me get a build to work with yquake2. I was then looking at Q2PRO but eventually decided to move over to Quake 3 for now. My workflow will be: start with the original code, ask Amp to implement one feature at a time, test it, commit to GitHub, and keep moving forward. I can’t wait for the future. Another technical note: while working with Amp it looks like my free $10 credit was used up and I had to feed it another $5 to keep working. I felt like I was paying an intern to code for me.

First Impressions

I’m very impressed with Amp, so much so that I’m nervous for the future. Yes it was coding based on what it saw my original code do, but the fact that it even seemed to understand anything that was going on still blows me away.

Getting Friendly With AI (Part 2)

Intro

I’d talked to my cousin about AI and wanting to learn more about it. He suggested I get AI to write the Freeze Tag mod I wrote for Quake 2 many years ago. I thought it funny that I wrote that mod to learn C, and now I’ll redo it to learn AI? Why not.

First I had to install Quake 2 on my machine and get it running again. I spent a bit of time trying to get the older Quake 2 to run on my machine using Yamagi Quake II. I dug up my old Quake 2 CD and copied over the pak0.pak file. Running the game, though, was difficult: there was no mouse look and no one could be seen in multiplayer. That’s when I found out they’d done a Quake 2 Remaster in 2023, so I decided to check that out. After a quick Steam download, I tried it out and it ran great. There were even people still playing online! (UPDATE: I’ve since learned the Remaster doesn’t have Dedicated servers, which is a really bad thing, and so we won’t be working with it.)

But now I needed to get my Freeze Tag mod working with this version. I found out they’d re-written the original mod source. According to the readme, this was to use a new API and networking code as well as add a thin “client game” module. They were inviting mod authors to re-write their mods against the new source.

Asking About the Diff

I quickly began formulating a plan: we’re living in the magical world of AI, so perhaps I could tell AI to analyze what I had changed in the original mod source to make Freeze Tag and then ask it to make those changes to this new codebase? I wasn’t sure how to do that, so I started simply by uploading a Beyond Compare diff of the original source and my Freeze Tag code to ChatGPT:

"Summarize the key structural and functional differences in this Beyond Compare diff report" (Diff file attached)

It came back with lots of details as well as:

Overall Impact
Original Quake II: Deathmatch/CTF core mechanics with standard frag-based play.
Freeze Tag Mod:
	Players are frozen instead of dying outright.
	Victory is team-based (freezing all opponents).
	Strong focus on teamwork (thawing is as important as fragging).
	Adds custom HUD elements, sound effects, grappling hook, flashlight, and new scoring/win conditions.
In short: the freeze-tag codebase is a total conversion gameplay mode layered on top of Quake II, introducing new states (frozen/thawed), cooperative mechanics (rescuing teammates), new UI/menus, and extended team logic.

Looked good so far. Next I uploaded the diff between the original source and the new remaster source and asked ChatGPT about that. It came back with more details, as well as:

Overall Functional Shift
The rerelease modularizes client vs. server/game logic.
Introduces modern C++ constructs (e.g., std::function, std::optional, std::array, constexpr).
Adds quality-of-life features for coop play, HUD flexibility, and player feedback (hit markers, respawn info).
Lays groundwork for cross-platform physics (PC vs. N64).

In short:
The rerelease restructures the Quake II game DLL into a more modular, modernized architecture. It introduces configurable physics, compressed stat systems for ammo/powerups, extended player/HUD tracking, and robust client-side rendering/notification systems—substantially enhancing coop, accessibility, and overall maintainability.

That all sounded about right as well, but now how do I move forward? Here’s where I got stuck. I decided to show a friend a Beyond Compare diff of the original mod source against the new codebase. That’s when it finally hit me that they’d changed a lot. They even went from C to C++.

Next Step

I believe what I need to do is remember what I did to make Freeze Tag in the first place and break it down. I wrote those changes decades ago, though. I think I’ll get the AI to explain to me what particular changes were made. Then I’ll see if I can ask AI to make each individual change to the new codebase. But that’s for another day.

Notes For Getting Quake 2 Remaster Building

Grab Visual Studio 2022 Community Edition
You only need “Desktop development with C++” but must install/run “vcpkg integrate install”
When you first build, a vcpkg.json is created. Change this file to be as follows (a newer fmt causes the error “Unicode support requires compiling with /utf-8”, so use the older version):

{
  "$schema": "https://raw.githubusercontent.com/microsoft/vcpkg-tool/main/docs/vcpkg.schema.json",
  "name": "q2-game-dll",
  "version": "2022",
  "builtin-baseline": "3426db05b996481ca31e95fff3734cf23e0f51bc",
  "dependencies": [
    "fmt",
    "jsoncpp"
  ],
  "overrides": [
    { "name": "fmt", "version": "10.2.1" }
  ]
}

Getting Friendly With AI

Intro

I had heard it was coming, this AI thing. I didn’t think it could possibly do what I do each day. What follows is my sincere (and naive) approach to understanding and accepting what we can do with AI.

Problem 1 (AI only as good as what’s given it?)

Since I’d grown up with Google, I was very good at finding stuff on the Internet, and when I had programming questions, I could usually find exactly what I needed, or it didn’t exist out there. I don’t remember now what the problem was, but I had exhausted my Google searching and a colleague at work said I should ask AI what the solution was. I did, and it came up with the exact same few answers I had found on the net, which reinforced my belief that AI was only as good as the work prior humans had done posting on the net.

Problem 2 (Does AI make mistakes?)

The next time I used AI, it was a more positive experience. I think I needed to knock out a quick Python application that could listen on the network for some text and save it to disk, and I needed it to do threading. I’d written stuff like this many times over, so I wanted to see what AI would come up with. It turns out it came up with something pretty close to what I’d done before. It did one thing which I found funny (and knew was wrong): in the main loop it had this:

while True:
	pass

As a long-time Python coder, I knew this would eat up CPU cycles, and I was surprised the AI was suggesting it. I ended up using most of the AI’s code but fixed this one line with a time.sleep(1). The experience also taught me not to trust everything the AI tells me.
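
For the record, the fix really was that small. Here's a minimal sketch of the idea (not the actual application code, which did its listening and saving elsewhere in threads):

import time

while True:
    # the worker threads handle the network listening and writing to disk;
    # the main loop just has to stay alive without spinning the CPU
    time.sleep(1)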

Problem 3 (Visual problems help with back and forth)

The next time I used AI, it was because I wasn’t remembering my math. I had an OpenGL application that would display an air volume. I had dots for targets and wanted to display a projection of where a radar was pointing. I simply told the AI what libraries I was coding with (PySide2 and pyqtgraph) and told it I needed to draw a radar “thing” given only az_center, el_center, az_width, el_width, range_min, and range_max. It put out some code and called it a “frustum”. I quickly tried it and got results. It was amazing. I didn’t have to go to the Systems guys to get the right math.

I tried changing range_min/max and the results didn’t look like what I expected. I could actually see the shape on the screen and it was wrong. It felt weird telling the AI “my shape looks like this but I expected it to look like that when I change the range”. It quickly popped back with something like (as I remember it) “oh, you need to generate the radar frustum using proper rotation from the center az/el”. I tried the new code and it worked.

The neat thing about this experience was I went back and forth with the AI until I got something I could use (I didn’t even think you could use AI like that).
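
For the curious, here’s a rough sketch of the kind of geometry involved. This is my own reconstruction, not the code the AI produced: it assumes angles in degrees, y as the boresight direction at zero az/el, and z up, and it just returns the eight corner points (pyqtgraph can then draw a mesh or lines through them):

import numpy as np

def rot_z(a):  # rotation about the up axis (azimuth)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):  # rotation about the sideways axis (elevation)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def frustum_corners(az_center, el_center, az_width, el_width, range_min, range_max):
    """Return the 8 corner points (x, y, z) of a radar frustum."""
    az_c, el_c = np.radians(az_center), np.radians(el_center)
    half_az, half_el = np.radians(az_width) / 2.0, np.radians(el_width) / 2.0
    corners = []
    for r in (range_min, range_max):
        for s_az, s_el in [(-1, -1), (-1, 1), (1, 1), (1, -1)]:
            # build the corner direction in the beam's own frame...
            d = rot_z(s_az * half_az) @ rot_x(s_el * half_el) @ np.array([0.0, 1.0, 0.0])
            # ...then rotate the whole beam to its center az/el and scale by range
            corners.append(r * (rot_z(az_c) @ rot_x(el_c) @ d))
    return np.array(corners)

The point of the “proper rotation” fix is that the corner offsets are applied in the beam’s own frame and then the whole beam is rotated out to the center az/el, instead of just adding the half-widths to the center angles.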

Problem 4 (Getting complex)

The next time I used AI, I wanted to quickly (key word here) make a PySide GUI that dynamically generated some controls. I had a folder of JSON files that listed the structures I needed. I also had a schema of another structure and we wanted everything to be built from a similar schema. I fed the AI the sample schema and asked it to generate code that would read the JSON files and make similar schema files. It did that. I then said I wanted to create a PySide TreeView control that would read in these schemas and display them to the user. It did that. I then said I wanted these controls to load default values from the JSON files. The AI did that too. It was going so well, I even told the AI to abstract out the TreeView control and make it a class.

After talking to the AI for quite some time to get a good result, I was hit with an epiphany: what if I wanted to repeat this work with the AI from scratch in the future? I asked the AI to describe the code it had written as a prompt and then saved that above the code as a comment. Example:

# - Provide an abstract base class `AbstractSchemaTreeView` for editing JSON values against a schema.
# - Two columns only: ["Name", "Value"].
# - Use QStandardItemModel and QTreeView.
# - Define custom delegates for int, float, bool, and string editing.
# - Add a `SmartValueDelegate` that inspects the schema node stored in `ROLE_SCHEMA` on the key item to choose the right editor (no visible type column).
# - Include methods to build a tree from JSON + schema, rebuild JSON back, and validate against schema types (object/array/primitive).
# - Implement a concrete subclass `JsonSchemaTreeView` that resolves types from the schema and provides default values.
# - Export constants: KEY_COL, VAL_COL, ROLE_FILEPATH, ROLE_DEFINITION, ROLE_SCHEMA.

For the JSON-to-schema script I got:

# Python script: read all .json files in current dir (except contracts_dds.json),
# infer JSON Schema Draft-04, strings always type:string, arrays infer from
# first element only, objects include required keys + additionalProperties:false,
# wrap all in top-level container schema with "definitions" = filename stems,
# save to contracts_dds.json.
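
Out of curiosity, here’s roughly what a script matching that prompt could look like. This is my own sketch, not the code the AI generated; the only name taken from the prompt is contracts_dds.json, everything else is an assumption:

import json
from pathlib import Path

def infer_schema(value):
    # Very rough JSON Schema Draft-04 inference: strings stay strings,
    # arrays are typed from their first element only.
    if isinstance(value, dict):
        return {
            "type": "object",
            "properties": {k: infer_schema(v) for k, v in value.items()},
            "required": list(value.keys()),
            "additionalProperties": False,
        }
    if isinstance(value, list):
        return {"type": "array", "items": infer_schema(value[0]) if value else {}}
    if isinstance(value, bool):   # bool before int: True is an int in Python
        return {"type": "boolean"}
    if isinstance(value, int):
        return {"type": "integer"}
    if isinstance(value, float):
        return {"type": "number"}
    return {"type": "string"}

definitions = {
    p.stem: infer_schema(json.loads(p.read_text()))
    for p in Path(".").glob("*.json")
    if p.name != "contracts_dds.json"
}
Path("contracts_dds.json").write_text(json.dumps({"definitions": definitions}, indent=2))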

Conclusion

I’ve now accepted AI. I love it, actually. And as a software developer, I think it’s very important to get with the program and use the new tools in your day-to-day work. Use it to work faster and smarter. I think of AI like a hammer: all the other software developers might show up to the job and start hitting the nails in with their hands, and if you can use your AI tool (hammer) to hit those nails in faster, go for it. Part of me wants to be a “purist” and reject AI, but another part of me is realizing that as software developers, we’re really not re-inventing the wheel every day. If we get AI to hand us parts of the wheel that have already been figured out a hundred times over, then we can (possibly) build faster. We still (hopefully) need a human to plug this code into a bigger application and make it work.

Note I stuck this article in ChatGPT and it generated something much better, but I’ve decided to leave it as I wrote it for now.

Concept Art for LOTRO

I’ve been gathering some of the fantastic concept art for the Lord of the Rings Online game. I wish someone would put together a “making of” book for this game. I think the world builders for this game are amazing! It’s funny: the first time I played this game for free, I was walking around near the Inn of the Forsaken and there was a road with a broken bridge. If you looked below the bridge, you could see a worn-out path from people having to walk down through the dry riverbed (remember, this is a video game, so someone had to deliberately put a detail like that in). As soon as I saw that, I knew this would be a good game. (Alas, when they redid the area, they got rid of this footpath.)

Thank you to these amazing artists:
Chris J. Anderson, Wesley Burt, Devon Cady-lee, Jeff Murchie, Ted Nasmith, Tara Rueping, Lucas Slominski

Places
Creatures
Load Screens

Simple MLAPI Authoritative Server

This is my attempt at a simple authoritative server: the clients send their commands to the server, the server decides where everyone moves on the map, and it sends the locations of all clients back to everyone. There is no delta compression when serializing, no client-side prediction or server reconciliation, no lag compensation. In other words: very, very simple.

In the picture you can see red capsules (on the Host) that show where the server says everyone is. We do our physics movement on these objects.

The blue capsules are where the server is telling clients everyone is. They have no physics associated with them; they are just moved to the locations received in a message every 1/20 of a second. Notice that the client window has no red capsules: this is because it is not a server and is not calculating where everyone is.

There are six scripts in this project.

CustomTypes
Defines the PlayerCmd type which is what the client sends to the server (mouseButton0, horizontal, vertical, jumpButton)
Defines the PlayerState type which is sent from the server to the clients (List of clientId, position, rotation)
Defines PlayerObjectDictionary which is how a server keeps track of all non-networked red capsules and how clients keep track of non-networked blue capsules
There is code in here that explains to MLAPI how to serialize PlayerCmd and PlayerState over the network
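
To make the two message shapes concrete, here’s an illustrative Python sketch of what they carry. The real types are C# structs registered with MLAPI’s serializer, so these names and the PlayerEntry helper are just my own shorthand:

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PlayerCmd:
    # one sampled frame of client input (client -> server)
    mouseButton0: bool
    horizontal: float
    vertical: float
    jumpButton: bool

@dataclass
class PlayerEntry:
    # one client's authoritative pose as the server calculated it
    clientId: int
    position: Tuple[float, float, float]
    rotation: Tuple[float, float, float, float]

# PlayerState (server -> all clients) is essentially a list of these entries
PlayerState = List[PlayerEntry]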

HandlePlayerCmds
The client will save inputs to PlayerCmd every 0.01 sec. After it has saved 5, it sends out an array of PlayerCmds to the server (every 1/20 sec)
Server code is in this file as well. It shows the server saving a client PlayerCmd to a dictionary (uses clientId as a key)

ServerPlayerObjects
This is where the server has a bunch of fake red gameObjects representing clients that it moves around. It is using Unity’s CharacterController to figure out where the objects move. It uses input from each client’s PlayerCmds array.
After 1/20 sec has passed, the code sends out a PlayerState to all clients which is a List of the locations of all clients.

ClientPlayerObjects
This is where the client receives the PlayerState List from the server. It will take each PlayerState and create a fake blue gameObject that represents that client. It simply Lerps each transform to the location reported by the server.

ThirdPersonCamera
This is attached to the fake blue gameObject that ClientPlayerObjects creates that shares our clientId.

NetworkGUI
Displays a menu and bytes sent/received. Note the bytes sent/recv seems to be broken in MLAPI?

The Issue of Networking

The client records user input every 0.01 sec into a PlayerCmd. Every 0.05 sec it sends this size-5 array to the server. Perfect world: the server takes index 0 and moves the red client object between time 0.00 and 0.01 sec, between 0.01 and 0.02 it uses index 1, and so on. The server sends a record of where every red gameObject was calculated to be to the clients every 0.05 sec. We are using MLAPI with the Ruffles transport. We can turn on the Simulator and make it so the PlayerCmds don’t always get to the server, that messages are delayed, etc. So what we do on the server is (currently) use index 0 between time 0.00 and 0.02 sec (double!), then use index 1 from 0.02 to 0.04, and index 2 from 0.04 to 0.06. We still have 2 more PlayerCmds we can use. Hopefully by this time a new PlayerCmd batch has arrived from the client. If it hasn’t, we will continue to use the last index 4 to move our red client object. Networking is hard.
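
To illustrate the buffering idea (the real project is C# with MLAPI; this Python sketch and the CmdBuffer name are just mine, for explanation):

class CmdBuffer:
    def __init__(self):
        self.cmds = []        # latest batch of 5 PlayerCmds from one client
        self.index = 0
        self.last_cmd = None  # reused if the next batch never shows up

    def receive(self, batch):
        # a new batch of PlayerCmds arrived from the client
        self.cmds, self.index = list(batch), 0

    def next_cmd(self):
        # called every 0.01 s server step; advance one slot every *two* steps
        # so the batch is stretched to cover late or dropped packets
        if self.index // 2 < len(self.cmds):
            self.last_cmd = self.cmds[self.index // 2]
        self.index += 1
        return self.last_cmd  # falls back to the last known input when starved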

Editor Setup

Pretty much the same as the Simple MLAPI Test, with these differences:

Window > MLAPI > Transports > Ruffles > Install Latest
GameObject > NetworkingManager > Select transport... > RufflesTransport
NetworkingManager > Create Player Prefab > Off (We are not using any NetworkTransport)
Ruffles Transport > Log Level > Warning
Ruffles Transport > Simulator > Use Simulator (If you want)
Layer > Add Layer... > Added "Server", "Client", and "Local"
Edit > Project Settings... > Physics > Uncheck Server/Local, etc. (We don't want Host w/ client and server obj to interact)
Important: Attach a Player prefab to ClientPlayerObjects > playerPrefab

Improvements

There’s a lot wrong here. I really struggled with how Unity organizes things vs. a regular C# project with classes. The hardest part of this small project was just trying to organize each script in a logical way! I have a CustomTypes which is used by two other classes but does nothing itself, really. I have a ThirdPersonCamera that needs ClientPlayerObjects to set what it points at. I have HandlePlayerCmds that remembers (on server) what all client input was, and that’s used by ServerPlayerObjects. I have classes exposing “static public” variables for ease of use.

As for networking memory usage: we create a new List every time we send out or receive a PlayerState, and we create a new array of PlayerCmds when those come in. We also constantly send PlayerState even if the player is just standing there. This stuff really bothered me.

Source

Found at GitHub
Using MLAPI network library with Ruffles

Simple MLAPI Test

I wanted to try out MLAPI for Unity. Here’s a picture of the Scene I made running a Host (left) and Client (right).

I’ve posted the code to GitHub. Here’s how I made the Scene:

Download MLAPI
Assets > Import Package > Custom Package... > MLAPI-Installer.unitypackage > Import
Window > MLAPI > Install

GameObject > Create Empty > Rename NetworkingManager
Add Component > MLAPI > NetworkingManager > Select transport... > UnetTransport
Add Component > New script > NetworkGUI.cs

GameObject > 3D Object > Capsule > Rename Player
Add Component > Character Controller
Add Component > MLAPI > NetworkedObject
Add Component > MLAPI > NetworkedTransform
Add Component > New script > ThirdPersonController.cs
(Also add a Cube as a Visor to Capsule and set Box Collider off so it doesn't interfere with our camera)
Create > Material > Black > Add to Visor for cool factor

Create prefab of Player, drop in NetworkedPrefabs of NetworkingManager and set Default Player Prefab

GameObject > Create Empty > Rename PlayerStart

Note I couldn’t figure out how to tell how much data was being sent out on the network with MLAPI so I used their NetworkProfiler to estimate bytes per second sent/received.

Shenanigans in LOTRO

One of my favorite games has been Lord of the Rings Online, an incredible Massively Multiplayer Online Role-Playing Game. I’ve enjoyed it especially because I’ve been able to play through it with my brother. We played through most of the game and then they added a bunch of content, so we decided to play through it again with new characters. I’d like to present the adventures of Lothendor the Minstrel (my brother) and Bethandar the Lore-master (me):

Setting up SG-1100 Netgate with AT&T BGW210

Recording this so I remember in the future:

I connect to the internet through an AT&T BGW210. Behind that sits an SG-1100 Netgate (pfSense). Behind that is my PC (plugged into Netgate LAN port) and my Google Wifi (plugged into Netgate OPT port). Netgate WAN plugs into BGW210.

Google Wifi is set to Bridge mode, meaning it will not assign IPs but will let Netgate do that.

With my PC connected directly to AT&T BGW210:
To set up AT&T: http://192.168.1.254

I set Wi-Fi (2.4 and 5 GHz) off, because I will be going through Google Wifi instead.

Once that is done, I plug the PC into the Netgate LAN port (and unplug the Netgate WAN) and, from a Command Prompt, type ipconfig /renew to get my new Netgate-assigned IP.

I set the LAN IP to 172.16.1.1/24 (using the Netgate setup wizard, reached by first connecting to https://192.168.1.1) and then plug the WAN into the AT&T box.

Out of the box, Netgate LAN port is set up but the OPT port is turned off. Let’s turn it on.

To set up Netgate: https://172.16.1.1
To turn on the Netgate OPT port:

Interfaces > OPT
Enable: Checked
IPv4 Config Type: Static IPv4
IPv4 Address: 172.16.2.1/24

Services > DHCP Server > OPT
Enable: Checked
Range: 172.16.2.10 to 172.16.2.245

Firewall > Rules > OPT > Add
Action: Pass
Protocol: Any
Source: OPT net
Description: Default allow OPT to any rule

Now install ad block software:

System > Package Manager > Available Packages > pfBlockerNG
Install pfBlockerNG-devel

Firewall > pfBlockerNG
Make sure to select LAN and OPT for Outbound Firewall Interface

Firewall > pfBlockerNG > General
CRON Settings: Once a day

Firewall > pfBlockerNG > DNSBL > DNSBL Category (Optional)
Blacklist Category: Enable
Blacklists: Select Shallalist
Shallalist: Check Advertisements

Firewall > pfBlockerNG > Update > Run
It should download the new Shallalist.

Hopefully everything works. If you cannot talk to your Netgate, try connecting to it directly over USB. I used PuTTY on COM3 at speed 115200.

Addendum: I have a NAS connected to the BGW210. In order to see that from behind the Netgate, I added:

DNS Resolver > Host Override Options
Host: WDMyCloud
Domain: localdomain
IP Address: 192.168.1.65

On the BGW210 I went to Home Network > IP Allocation, and added 192.168.1.65 as a Fixed Allocation so the device would always be at that IP. Now I can use File Explorer to \\WDMyCloud.

Playing Squad as Squad Lead

I have recently been playing a lot of Squad. I really enjoy the Squad Leader role of the game. The role can really carry the map – a good SL can lead a team to victory. I also like that Squad is very voice-comms heavy. I find that I stutter a lot and forget words when trying to talk to other people in “quick” situations, and the game gives me a chance to get better at speaking and making decisions. Or even having to change plans as the battlefield changes!

There are four objectives I try to reach when playing as SL (listed in order of importance):

  • Reduce Walking – only the Squad Lead can place Rally Points and HABs, so make sure you are close to the battle. Don’t get too close (so hard to gauge this!), don’t be too far away. Make it so your guys can get in there and do what they need to do. You need to enable them.
  • Give Direction – “We’re going to defend this point for a while.” “Let’s move on this flag together.” “Let’s search for their HAB over here.”
  • Be Aware of the Big Picture – I try to watch how the map overall is going and let others know. “They just blew past us, we need to fall back to the last point.” “The next point is captured and safe, so let’s leave this point and move up.” “The enemy keeps coming from this direction so let’s push out and take out their HAB or Rally.”
  • Encouragement – “Hey that was a great shot.” “Thanks for building.” “Thanks for the supplies!” Actively marking the map when teammates report enemy. “Hey you did a good job dying over there, that distraction bought us enough time to sneak around here.”

These are some things I try not to do:

  • Tell people what role to play or demand someone pick medic.
  • Tell people to do a logi run. (I hate logi runs, so why should I make you?)
  • Take another squad’s supplies or logi without asking. This includes using their supplies to build a Hesco wall/repair station.
  • Put up so many defenses around the base that people can’t get out.

Here are some recordings I made so I could re-watch and see what I was doing wrong. I’ve noticed sometimes I miss important comms or I get too hyper.

P.S. The best way I know of to avoid being shot is to be where the enemy doesn’t expect you to be! E.g. flank!
P.P.S. I like to place people into fireteams at the start because then I can see at the end of the map how many people stayed through the entire thing. It also means if a FTL leaves, then another person becomes FTL without me having to think about it. I like to place people who are looking out for the enemy as FTL. Usually engineers, snipers, and LAT.

About Recording

I’m using a great, free program called OBS Studio. I’ve been messing with the settings and this is what I’m using for now:

Video Bitrate: 15000 Kbps (Based on YouTube Recommended Settings)
Recording Quality: Indistinguishable Quality, Large File Size
Recording Format: mp4
Base Resolution: 1920x1080
Output Resolution: 1280x720
Downscale Filter: Lanczos (Sharpened scaling, 32 samples)
FPS: 60

Sound was really difficult to get just right.

Enable Push-to-talk (Hotkeys: V, B, G)
Desktop Audio: -8.3 dB
Mic/Aux: 12.1 dB
Squad Effects Volume: 58%
Music Volume: 58%