Automating the Deployment Pipeline: From GitHub Push to Live in 90 Seconds
I remember when "deploying" meant uploading files via FTP and crossing my fingers. You'd open FileZilla, drag the changed files to the right folder, refresh the live site, and hope nothing broke. If something did break, you'd panic and try to remember which files you'd just overwritten.
That world is gone. Today, I push a commit to GitHub and 90 seconds later the change is live on a global edge network. No SSH. No manual file transfers. No fingers crossed. The whole pipeline runs itself.
This isn't a productivity perk. It's the difference between a project that ships daily and a project that quietly stops getting updated.
What This Post Covers
How automated deployment pipelines actually work for solo builders, the GitHub plus Cloudflare Pages setup that handles every frontend I run, why preview deployments and rollback changed my relationship to shipping, and the honest limit of this approach — what frontends get for free that backends don't.
The Friction Tax of Manual Deployment
Manual deployment kills solo projects. Not because it's hard, but because it's annoying enough to delay.
The pattern is predictable. You finish a small fix on a Tuesday afternoon. The deploy steps are known: commit the code, SSH into the server, pull the changes, restart the service, refresh and verify. Maybe ten minutes total. But ten minutes feels like a lot when the fix itself took three. So you decide to bundle it with the next change. The next change comes Wednesday. You bundle again. By Friday, you have six small fixes pending and the deploy feels riskier because more changed at once.
Eventually the deploy gets postponed indefinitely. Bugs accumulate. Features pile up in unfinished branches. The site stops getting updated. The project quietly dies, not from any single failure but from accumulated friction.
The actual cost of manual deployment isn't time. It's momentum. Every minute of friction between "I have a fix" and "the fix is live" reduces the number of fixes that ever happen.
The Setup That Took 5 Minutes
Connecting GitHub to Cloudflare Pages is the kind of setup that feels too easy to be real. Here's what it actually looks like.
Open the Cloudflare Pages dashboard. Click "Create a Project." Authorize Cloudflare to read your GitHub repos (one-time OAuth approval). Select the repo you want to deploy. Set two fields: build command and output directory. Click deploy.
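For a concrete picture of those two fields: assuming a typical npm-based project with a Vite-style build (your framework's values may differ), the configuration is just this:

```
Build command:          npm run build
Build output directory: dist
```

Everything else on the setup screen can stay at its defaults.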
That's the entire configuration. From this point on, every push to the main branch triggers a fresh build, runs through Cloudflare's infrastructure, and deploys to the global edge network. The first build takes two or three minutes (cold). Subsequent builds typically finish in 60-90 seconds because dependencies are cached.
SSL certificates, DNS routing, edge caching, and CDN distribution all happen automatically. Cloudflare handles the parts that used to require their own setup steps. Your job is to push working code; the platform handles the rest.
Environment variables go in the dashboard, not in the repo. Secrets, API keys, configuration that varies between dev and production — all of it lives in Cloudflare's environment variable settings, available to the build at deploy time but never committed to Git. The principle is simple: code in the repo, secrets in the platform.
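A minimal sketch of what that looks like at build time. `API_BASE_URL` is a hypothetical variable set in the Pages dashboard; the fallback here exists only so the script also runs locally:

```shell
#!/bin/sh
# Read deploy-time configuration from the environment, never from the repo.
# API_BASE_URL is set in the Cloudflare Pages dashboard for production;
# the default below is a local-development fallback.
API_BASE_URL="${API_BASE_URL:-https://api.example.com}"
echo "Building against ${API_BASE_URL}"
# npm run build   # the real build step would run here
```

The repo never sees the production value; the build does.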
Preview Deployments Changed How I Test
The feature I didn't know I needed: every Git branch automatically gets its own deployment URL.
When I create a branch called add-chaos-mode and push it to GitHub, Cloudflare Pages builds it and gives me a unique URL like add-chaos-mode.speedtap.pages.dev. The URL is real, public, and runs the actual production build. I can test it on my phone, share it with a friend for feedback, or run automated tests against it.
This eliminates the staging server problem entirely. There's no separate environment to maintain, no syncing between staging and production configs, no question of "is this the same as what'll go live." Each branch is its own deployment, isolated from main, identical in setup.
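The workflow itself is nothing more than ordinary Git branching. A sketch (the temp-directory setup is only to make the example self-contained; in practice you'd be inside your existing checkout):

```shell
# Create a feature branch; pushing it is what triggers the preview build.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -qm "init"
git checkout -qb add-chaos-mode
# After `git push -u origin add-chaos-mode`, Cloudflare Pages would build
# this branch at a URL like https://add-chaos-mode.<project>.pages.dev
echo "on branch: $(git branch --show-current)"
```

No extra tooling, no deploy step: the branch name is the preview environment.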
For SpeedTap's Phase 2 features, every major change — difficulty modes, Telegram Stars payments, 1v1 challenges — lived on its own branch with its own preview URL. I tested challenge codes on iOS using the preview URL before any of that code reached the live game. When the iOS startapp parameter bug appeared, it appeared in the preview deployment, not in production. The cost of finding it was a few hours of debugging instead of a broken live game.
Rollback Is the Real Confidence Builder
Branches and previews are the prevention layer. Rollback is the recovery layer. Together they're what make deployment feel safe instead of stressful.
Cloudflare Pages keeps every successful deployment indefinitely. If you've pushed to main 100 times this month, you have 100 deployable versions sitting in the dashboard. Each one has a "Rollback to this deployment" button. Clicking it makes that version live, globally, in about ten seconds.
This came up the day I shipped a SpeedTap update that accidentally hid the navigation bar. The site was technically working, but users couldn't navigate between screens. In a manual deployment world, this would mean opening the editor, finding the bug, deploying a fix, and praying the fix didn't introduce a new bug while users complained.
What I actually did: opened the Cloudflare Pages dashboard, found the previous deployment from earlier that day, clicked Rollback. The live site reverted to the working version in seconds. Then I went back to my dev environment, fixed the bug calmly, tested it on a preview URL, and pushed the corrected version when it was actually ready.
The psychological shift matters more than the technical mechanism. When you know you can always undo a deployment, you stop fearing deployments. You ship more often. You take smaller risks more frequently instead of bundling everything into one terrifying release. Iteration speed compounds.
Frontends Auto-Deploy. Backends Don't.
Here's the part most articles skip: this whole automatic pipeline only works for static sites and frontend code that runs in the browser. Backend services running on a VPS have a different story.
Cloudflare Pages is built for static assets and serverless functions. It doesn't run my FastAPI servers, my Telegram bots, or my background data collectors. Those all live on the Oracle Cloud VPS covered in Episode 3, and they need their own deployment mechanism.
The pattern I use for backends: GitHub Actions plus SSH. When I push backend code to GitHub, an Action runs that connects to my Oracle instance, pulls the latest code, and restarts the relevant systemd service. The whole thing takes maybe 30 seconds.
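A sketch of that workflow file, not my exact configuration. `DEPLOY_HOST`, `DEPLOY_USER`, and `DEPLOY_KEY` are assumed GitHub Secrets, and the repo path and service name (`myapp`) are placeholders:

```yaml
# .github/workflows/deploy-backend.yml
name: Deploy backend
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Pull and restart over SSH
        env:
          DEPLOY_KEY: ${{ secrets.DEPLOY_KEY }}
        run: |
          # Install the deploy key, then pull and restart on the VPS.
          mkdir -p ~/.ssh
          echo "$DEPLOY_KEY" > ~/.ssh/id_ed25519
          chmod 600 ~/.ssh/id_ed25519
          ssh -o StrictHostKeyChecking=accept-new \
              "${{ secrets.DEPLOY_USER }}@${{ secrets.DEPLOY_HOST }}" \
              "cd ~/myapp && git pull --ff-only && sudo systemctl restart myapp"
```

The design choice worth noting: the server only ever pulls code that's already on main, so the Git history stays the single source of truth for what's deployed.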
This is less elegant than Cloudflare Pages. It requires SSH key management, GitHub Secrets configuration, and writing the workflow file yourself. But the user experience after setup is the same: push to main, watch it deploy automatically.
The honest take: backend automation takes maybe an hour to set up the first time, and the result feels just as smooth as frontend auto-deploy after that. The split between frontend and backend deployment paths is real, but both can be automated to the point where deployment becomes a non-event.
What This Looks Like in Practice
Across PrintMoneyLab projects, the deployment patterns break down like this:
Cloudflare Pages auto-deploy: SpeedTap web frontend, Mini-App Builder dashboard, Flight Compensation Checker, LA28 tools, the blog itself. Every push to main, live in 90 seconds.
GitHub Actions plus SSH to Oracle: x402 Protocol API, Telegram bots, Weather Bot data collectors, ACP Agent backend processes. Push to main, automated SSH deploy, service restart.
Manual deployment: nothing. Anything I'd otherwise deploy manually gets either auto-deploy set up or moved to a host that supports it.
The result is that I deploy multiple times per day across various projects without thinking about it. Mini-App Builder shipped through more than 30 deployments in its first two weeks because each one cost essentially zero effort. Most of those were tiny: copy fixes, color tweaks, minor logic adjustments. Without automation, half of them wouldn't have happened.
Where to Start
If you've never set up auto-deploy before, the entry path is shorter than you'd expect.
Pick one frontend project. Push it to a GitHub repo if it's not there already. Open Cloudflare Pages, connect to GitHub, point at that repo. Specify the build command and output directory. Click deploy. Watch the build run.
Once that first build succeeds, you have automatic deployment for that project. Forever. Push to main, watch it go live. The setup you just completed is the same pipeline professional teams pay platforms hundreds of dollars a month to provide.
For your second project, the setup takes two minutes because you already know the pattern. By your fifth project, it's muscle memory, and every new project gets auto-deploy from day one.
Backend automation comes later. Don't try to set up GitHub Actions for SSH deployment on day one. Get comfortable with the frontend pipeline first. The Actions setup is a logical next step once you've felt how much friction auto-deploy removes.
What's Next
Automated deployment makes shipping easy. The next post in this series tackles the security side of that ease — specifically, what to do with the API keys and secrets that automated pipelines need to function. The wrong patterns here will leak credentials in ways that take real money to clean up. The right patterns are simple but easy to skip.
More posts in this series will cover the actual stack — secrets management, monitoring, scaling patterns, and the workflows that hold everything together. If you're working on shipping something with AI tools and have questions, drop them in the comments — the more we share, the faster we all move.
Disclaimer: This blog documents practical development workflows based on personal experience. Nothing here is financial, legal, or professional advice.