GitHub Workflow 101: Keeping Your AI-Generated Code Clean and Versioned

The first time I asked an AI to refactor a working file, it broke everything. I had no version control, no backup, just a single folder on my desktop where I'd been building. The "before" version was gone, and I couldn't figure out how to undo what the AI had done. I lost two days of work.

That's the moment I started taking Git seriously.

For an AI-native builder, Git isn't optional. It's not a "nice to have for collaboration" feature. It's the safety net that lets you let the AI run wild without fear, because you can always rewind. If you're using Claude Code or any AI tool to generate code without committing every working state to a repository, you're walking a tightrope without a net.

What This Post Covers

The GitHub workflow I use across every project, why "commit early and often" matters more for AI-generated code than human-written code, the branching strategy that lets me try aggressive ideas without breaking working features, and how GitHub plus Cloudflare Pages turns deployment from a project into a non-event.

Every Commit Is a Save Point

I treat Git commits like save points in a video game. Before I let Claude Code do anything significant — a refactor, a new feature, a dependency upgrade — I commit whatever's working right now. Even if the working state is messy. Even if I'm not done. The commit is just a checkpoint.

The reason this matters more for AI-generated code: AI changes are bigger and faster than what a human types. A human refactoring a function might change five lines. An AI might rewrite the whole module, change three other files that import it, and update the test suite. If that change breaks something subtle — a regression you don't notice for an hour — you need to be able to get back to a known good state cleanly.

# The pattern I use, every time
git status    # What's currently changed?
git add .
git commit -m "Working: weather bot signals firing correctly"
# Now let Claude Code do the risky thing
# If it breaks: git reset --hard HEAD
# If it works: git commit again, move on

The commit message doesn't have to be elegant. "Working" plus a quick description of what's working is enough. The point isn't documentation; it's a recovery point. When something breaks two hours later, you want to be able to find the last commit where things were fine and reset to it.

Branches Are Where the Risk Lives

The main branch is sacred. It's what's deployed. If main is broken, the live site is broken, and that pressure makes you cautious in ways that slow down progress.

The fix is branches. When I'm trying something risky — new UI, experimental feature, big architectural change — I create a branch. The AI can go wild in that branch. Multiple commits. Aggressive refactors. Whatever.

If it works, I merge it into main and deploy. If it turns into a mess, I delete the branch and act like it never happened. The working version on main is untouched.

For SpeedTap's Phase 2, I had separate branches for each major feature: difficulty modes, Telegram Stars payments, 1v1 challenges. Each branch lived independently. When the iOS bug almost killed the multiplayer feature, I was working in the challenges branch — not main — so the live game kept running normally while I figured out the fix.

The workflow that scales:

Main branch = always deployable, never break.
Feature branches = where new ideas live and get tested.
Preview deployments through Cloudflare Pages = a live URL for every branch, so you can test in a real environment without affecting production.
Merge to main = only when the feature actually works.

This pattern works the same for a solo project as it does for a 50-person team.
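The branch-then-merge loop looks like this in practice. Below is a minimal sketch in a throwaway repo; the branch and file names are illustrative, not from any real project:

```shell
# The branch workflow, sketched in a throwaway repo.
# Branch and file names are illustrative.
set -e
cd "$(mktemp -d)"
git init -q -b main
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
g commit -q --allow-empty -m "Working: baseline"

# 1. Branch before the risky change
git checkout -q -b feature/difficulty-modes

# 2. Let the AI go wild here; commit as often as you like
echo "experimental code" > modes.js
git add modes.js
g commit -q -m "Add difficulty modes"

# 3a. It worked: merge into main, delete the branch
git checkout -q main
git merge -q --no-edit feature/difficulty-modes
git branch -d feature/difficulty-modes

# 3b. If it had turned into a mess instead:
#     git checkout main && git branch -D feature/difficulty-modes
```

The asymmetry is the point: merging a good branch and deleting a bad one are both one command, so the cost of trying something is near zero.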

The Auto-Deploy Pipeline That Removes Friction

Manual deployment is where most solo projects die. The friction of "commit, push, SSH in, pull changes, restart service, hope nothing broke" is enough that you stop deploying small fixes. Bugs accumulate. Features pile up in unfinished branches. Eventually you're afraid to deploy at all.

The fix is GitHub plus Cloudflare Pages. Push to main, deployment happens automatically. No SSH. No manual commands. The frontend is live globally about 90 seconds after the push.

This sounds like a small thing. It isn't. When the cost of deploying a change is zero effort, you deploy way more often. Small fixes go out the moment they're written. Iteration speed compounds. The Mini-App Builder shipped through 30+ deployments in its first two weeks because each one took zero overhead.

The setup is also embarrassingly easy. In the Cloudflare Pages dashboard, connect a GitHub repo, specify the build command, point at the output directory. That's it. From that moment on, every push triggers a build and deploy. There's no configuration file to maintain. No CI/CD pipeline to debug.

Rollback Is the Real Confidence Builder

Branches and commits are the prevention layer. Rollback is the recovery layer. Together they're what let me push code without sweating.

One time I deployed a SpeedTap update that accidentally hid the navigation bar on iOS. The site looked broken to half my users. In a previous era, I'd have panicked, scrambled to find the bug, possibly made it worse trying to fix it under pressure.

What I actually did: opened the Cloudflare Pages dashboard, found the previous successful deployment, clicked "Rollback." The live site reverted to the working version in about 10 seconds. Then I went back to my dev environment, fixed the bug calmly, and pushed the corrected version when it was actually ready.

Rollback isn't just a feature. It's a different psychology. When you know you can always undo a deployment, you stop fearing deployments. Deployment frequency goes up. Iteration speed goes up. Bug fix time goes down because you're not under stress while debugging.

The Commit History as a Learning Tool

Something I didn't expect: my git log has become one of the better ways to actually learn what I built.

I can scroll through commits and see how a project evolved. Which features I added in what order. Which bugs cost me the most time. Which architectural decisions I reversed later. The diffs themselves are tiny lessons — "oh, that's how Claude Code restructured this function," "okay, I see why that import had to move."

For a non-developer, this is more valuable than tutorials. Tutorials show you contrived examples. Your git history shows you the actual evolution of code you understand the context of, fixing problems you actually had. The learning sticks because it's anchored to real memory.

This only works if you commit often and write meaningful messages. If your history is one commit per week labeled "updates," you get nothing from it. If it's twenty commits per week with messages like "fixed Telegram Stars payment handler order" and "added 4-character challenge code system," you can reconstruct the actual reasoning months later.
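A few git log invocations make this kind of browsing concrete. The sketch below builds a two-commit throwaway repo so the commands have something to show; the file name and commit messages are made up:

```shell
# Browsing history as a learning record; demo in a throwaway repo.
# File name and commit messages are made up.
set -e
export GIT_PAGER=cat
cd "$(mktemp -d)"
git init -q -b main
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }
echo "v1" > bot.py; git add bot.py; g commit -q -m "Working: first signal fires"
echo "v2" > bot.py; git add bot.py; g commit -q -m "Fix: debounce duplicate signals"

git log --oneline        # one line per commit: the project's storyline
git log -p -- bot.py     # full diffs for one file: how it evolved
git show HEAD            # re-read a single change, message plus diff
```

`git log -p -- <file>` is the one I use most: it replays every change to a single file in order, which is exactly the "how did this end up like this" question.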

What Goes In, What Stays Out

Two files matter more than they look:

.gitignore — the list of files that should never be committed. API keys in .env files. Local database files. Compiled binaries. Editor configs. The whole point of .gitignore is to make sure sensitive or local-only stuff stays out of the public repo.

README.md — the file that loads when someone visits the GitHub repo. For solo projects this often gets neglected, but it's worth at least a few lines describing what the project is, what it depends on, and how to run it. You'll thank yourself when you come back to a project six months later and have no idea what's going on.

The .gitignore file is non-negotiable. The README.md is optional but recommended. Claude Code can generate both for you in seconds if you describe the project.
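A minimal .gitignore covering those categories might look like the following; the entries are illustrative, not a canonical list. The `git check-ignore` call at the end confirms git will actually skip the files:

```shell
# A minimal .gitignore for the categories above; entries are illustrative.
set -e
cd "$(mktemp -d)"
git init -q
cat > .gitignore <<'EOF'
# Secrets: API keys live in .env, never in the repo
.env
.env.*

# Local-only files
*.sqlite
node_modules/
dist/

# Editor configs
.vscode/
.idea/
EOF

# check-ignore confirms git will skip these paths
touch .env app.sqlite
git check-ignore .env app.sqlite
```

If `git check-ignore <path>` prints the path, the rule is working; if it prints nothing, the file would be committed.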

Where to Start

If you've never used Git, the friction at the start is real. The commands are unfamiliar. The mental model takes a week to click. But the entry path is shorter than it looks.

Pick a small project. Initialize it as a Git repo. Make commits as you build. Don't worry about advanced features — branching, merging, rebasing — until you've got the basics down. The four commands you actually need 90% of the time are git status, git add, git commit, and git push. Everything else is a tool you'll grow into.

Once those four commands feel routine, layer on the next thing: branches for risky changes. After branches feel routine, connect the repo to Cloudflare Pages and watch deployments happen automatically. Each layer adds capability without breaking what already works.
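Here is the four-command loop as a runnable sketch. It uses a local bare repo as a stand-in remote so `git push` works offline; the paths and file names are illustrative:

```shell
# The four-command loop, with a local bare repo standing in for GitHub
# so "git push" works offline. Paths and names are illustrative.
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"
git init -q -b main "$work/project"
cd "$work/project"
git remote add origin "$work/origin.git"

echo "console.log('hi')" > app.js
git status --short                          # 1. what changed?
git add app.js                              # 2. stage it
git -c user.name=demo -c user.email=demo@example.com \
  commit -q -m "Working: first version"     # 3. save point
git push -q -u origin main                  # 4. back it up (or trigger a deploy)
```

On a real project the only differences are that the remote is GitHub and the push is what kicks off the Cloudflare Pages build.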

What's Next

GitHub solves versioning and deployment. The next infrastructure question is cost — specifically API costs, which can quietly destroy a solo project's economics. The next post covers the strategies I use to keep AI API spend predictable: model selection, prompt design, caching patterns, and the day I almost shipped an infinite loop that would have cost hundreds.



More posts in this series will cover the actual stack — API cost optimization, deployment patterns, security, and the workflows that hold everything together. If you're working on shipping something with AI tools and have questions, drop them in the comments — the more we share, the faster we all move.

Disclaimer: This blog documents practical development workflows based on personal experience. Nothing here is financial, legal, or professional advice.
