Last weekend I sat down with augmentcode.com to do some serious AI-assisted coding. The agent, “Auggie,” promises to automate coding tasks to a level I haven’t tried before. Gotta say, it was pretty good. I got farther than I would have alone. But hoooo, it is a new kind of effort and frustration.
Overall impression: it’s like an appliance, in that I can set it doing something and then go away for a bit, and come back and my clothes are clean. Except it’s way harder to operate! The interesting part comes later, in how it changes the ways I think and work.
Here, have 3 things I love about augmented coding; 1 tradeoff; and 3 things I hated.
❤️ Augmented coding was so efficient that more changes were in scope.
My sample app is implemented in 3-4 languages, so adding a service was a lot of work—but not anymore. The AI is great at implementing the same thing in another language. Some of the existing services have duplicated hard-coded lists, because that was the fastest implementation. The new service has a shared sqlite database, because why not? I didn’t have to look up libraries in 3 languages.
Caveat: until one of them doesn’t work. sqlite3 in Node caused errors when I built the Docker container. I asked Claude, which told me to use better-sqlite3, and so I had Auggie switch it to that.
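The new shape, roughly. This is my sketch, not Auggie’s code; the file path and table name are invented:

```typescript
// Sketch only: the shape of the better-sqlite3 swap in the new Node service.
// The file path and table name are my guesses, not the generated code.
import Database from "better-sqlite3";

// Synchronous API, and it built cleanly in my Docker image where the
// sqlite3 package did not.
const db = new Database("/data/phrases.db");

// A prepared statement, run synchronously: no callbacks to get wrong.
const phrases = db.prepare("SELECT text FROM phrases").all();
console.log(phrases);
```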
💖 I can make great scaffolding to support the production code.
Where production code is important, it’s a minority: most project code is scaffolding. Tests, deployments, linting, verification, utilities for local development, handy admin interfaces: scaffolding is the code that helps us safely and efficiently change production code.
I can trust Auggie more if it can run a reproducible end-to-end test after every change. Writing that test is too much work for me, but not for the AI. I did not have to dig into browser automation libraries. I don’t even know which ones it used! For scaffolding, I don’t have to look at the code.
Caveat: It still took half an hour, with some critique and instructions. I did look at the code, and found it flailing, and cleaned it up.
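The shape of the thing is small, though. Here’s a sketch of that kind of check, assuming Playwright; I’m making up the library choice, the port, and the selector, because the real script uses whatever Auggie picked:

```typescript
// Sketch of an end-to-end smoke test, not the generated script.
// Assumes Playwright; the URL and selector are invented for illustration.
import { chromium } from "playwright";

async function main() {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  // Hit the app running in the local docker-compose stack.
  await page.goto("http://localhost:8080");

  // The whole point: did a phrase actually make it through all the services?
  const phrase = await page.locator("#phrase").textContent();
  if (!phrase || phrase.trim() === "") {
    throw new Error("No phrase rendered; something in the stack is broken.");
  }

  await browser.close();
  console.log("e2e check passed:", phrase.trim());
}

main().catch((err) => {
  console.error(err);
  process.exit(1); // nonzero exit so the agent (and I) can tell it failed
});
```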
💓 It lets me work in smaller conceptual steps.
While some things happen in bigger jumps (“implement the user service in python”), more interesting ones can be smaller. I’m moving hard-coded phrases in phrase-picker into a sqlite database. I know I’m going to want a tag attribute on the phrases, so that Java-specific ones only show up for Java people, etc. If I were doing this by hand, I’d think through the whole table structure before implementing it.
With AI assistance, I worry first about getting that db working at all, and then adding the field will be as easy as typing a few sentences. While it’s integrating the database, I can think about what to put in it.
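To make that concrete: step one is something like this (my guess at the schema, not the actual code), and the tag shows up later as its own tiny change.

```typescript
import Database from "better-sqlite3";
const db = new Database("/data/phrases.db");

// Step one: the simplest phrases table that can possibly work.
// (Invented column names; the point is the shape, not the specifics.)
db.exec(`CREATE TABLE IF NOT EXISTS phrases (
  id INTEGER PRIMARY KEY,
  text TEXT NOT NULL
)`);

// Step two, later, once the service works at all: adding the tag is its own
// tiny change, a sentence to Auggie that amounts to roughly this.
db.exec(`ALTER TABLE phrases ADD COLUMN tag TEXT`);
const javaPhrases = db
  .prepare("SELECT text FROM phrases WHERE tag IS NULL OR tag = ?")
  .all("java");
```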
There’s nothing more effective in software development (or any complex task) than taking many more much smaller steps.
🤨 Development is still unpredictable.
Coding is always rough: something that seems easy turns out to be a huge pain. Sometimes it’s a typo I can’t spot, and sometimes it’s that spawning a process in Node and getting its output is fiddly. The AI helps with those, but then it gets into its own knots.
The agent makes mistakes, tries to run the code, gets errors, and goes back and fixes its mistakes. It’s really quite a good simulation of an oddly-skilled developer. So fast at typing! So clueless about when to stop.
When it gets into a hopeless loop of trying things, I click ‘stop’. I look at what it did (with git diff), and when it’s too much, roll back all its changes. Try again at something smaller. Ask a different AI what would work, and get more specific. Or just literally try again; it’s nondeterministic.
When the bulk of my project work turns out to be fighting with Docker, the AI helps a little. While I’m carefully tweaking docker-compose files, the autocomplete and “next edit” features of Augment Code help most; they reduce typos and help me not miss a change.
😓 Going too fast can be a mistake.
Some of the scope that I added (“it can do this in ten minutes!”) turned out to be a mistake. It did it wrong in ten minutes, and then I spent a frustrated hour fighting it. The ten minutes were a worthwhile risk; everything after that was my failure. Give it up, Jess! Let it be wrong, roll it back, note this down and move on.
After every change, I have to check what it did. I’ve asked it to do a commit each time, so I look at the content of that commit. One limitation on how much I can ask it to do at once is the amount of code I’m willing to look at.
If it’s scaffolding code, I don’t have to look at it. But I’d better do some manual checks. For a good while, its e2e-test script ignored failures in the Docker build, so that further checks ran on the already-deployed working version, defeating the purpose. The agent happily screwed up the Dockerfiles and moved on.
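The fix I eventually asked for is simple in shape; roughly this (my sketch, not the actual script):

```typescript
// Sketch of what I wanted the e2e script to do: stop when the Docker build
// fails, instead of running checks against whatever was already deployed.
import { execSync } from "node:child_process";

try {
  // execSync throws when the command exits nonzero, so a broken Dockerfile
  // stops the script right here instead of being quietly ignored.
  execSync("docker compose up --build -d", { stdio: "inherit" });
} catch {
  console.error("Docker build failed; skipping the rest of the e2e checks.");
  process.exit(1);
}

// ...only now run the browser checks against the freshly built containers.
```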
Smaller steps, it’s always about smaller steps! My calibration for the size of step I can implement myself doesn’t apply here, and I have to learn what’s right for my new role as prompter and supervisor.
🥺 I have to learn from its mistakes.
My new job is less “making the code” and more “making the making.” When it frustrates me, I have to think about how to get different behavior next time.
Sometimes it’s easy: “Always make a commit, and tag it with -- auggie.” It puts that in its “Augment Memories” file, which I can see (and sometimes edit? unclear).
Sometimes it’s changing my expectations: it can call a database no problem, but it struggles with OpenTelemetry, so I have to be way more specific or hand-code some parts.
Sometimes it’s changing the structure of the work: first make an end-to-end test, then tell it to run that every time. Ask it to check Honeycomb for tracing output every time. Give it the constraints it needs.
When I’m doing the coding, I learn from everything I type and every error I see. The agent doesn’t have that big a context window. It keeps what it puts in “Augment Memories,” and the rest is gone in a few minutes. It will make the same mistakes again, unless I find the constraints it needs. This is hard.
😵💫 I ruined my weekend trying to keep it busy.
That bug of productivity caught me—”I’m getting code done while I make breakfast!” Right out of bed, I wanted to get it started on something before my shower. This is never as quick as I think it’s going to be. That’s how I got into some of the yaks I should not have shaved, just to get it moving. That led me down paths of frustration, always worse when dirty and unfed.
This is not an AI problem. This is a me problem. Shower time is shower time. After that, while it is coding and I am not, I can spend that time thinking hard about what to implement, how to break it down, and when to give up.
The real power of the tool is in how it changes me.
This is a different way of working, with different triumphs and pain. I can get more done, but I have to wrangle a nondeterministic machine instead of a satisfyingly predictable one.
Coding with AI takes (and enables) more discipline.
I need more tests, more checks and constraints. This takes more scaffolding, which is now fast to create.
More “how will I know this works?” and less “how will I implement this?”
I need to learn how to influence the AI, and channel frustration into guidance.
More high-level thinking about what the software should do, less puzzle-solving of making it happen.
Coding with AI assistance is a new skill. It makes us more powerful, and it asks even more of us.