Weather Bot — Episode 10: Moving to Oracle Cloud and the Stop-Loss Surgery That Changed My ROI
My MacBook had become a problem. Every time I closed the lid — grabbing lunch, walking to a cafe, letting the battery die overnight — the bot died with it. No scans, no trades, no monitoring. For 15 cities across 11 time zones, "runs only when my laptop is open" wasn't going to work.
So I moved everything to a server that never sleeps. And while I was rebuilding the infrastructure, I pulled 97 trades worth of data and found out my stop-loss logic had been quietly destroying my returns.
What This Post Covers
Two things that happened at the same time: migrating the bot to a free cloud server for 24/7 operation, and a data-driven teardown of the stop-loss rules that weren't doing what I thought they were doing.
The VPS Setup
Oracle Cloud offers a free ARM-based VM — permanently, not a trial. The Always Free tier gives you 4 CPU cores, 24GB RAM, and 200GB storage. My bot uses about 50MB of RAM and barely touches the CPU. Comically overpowered for a weather trading script.
I grabbed an instance in the Tokyo region on my second attempt (ARM capacity runs out fast in popular regions — if you can't get one, try early morning in a different region). Ubuntu 24.04, SSH access, done.
The deployment stack is as unsophisticated as it gets: tmux for the bot process, scp to copy files from my Mac, cron for daily market history collection. No Docker, no CI/CD. Deploy means: edit on Mac, scp to server, Ctrl+C the old process, start the new one. Takes 30 seconds. Total hosting cost: $0/month, forever.
97 Trades of Truth
With the bot running 24/7 on the VPS, data accumulated fast. At 97 closed trades, I sat down and actually looked at the numbers instead of going by feel.
Overall: 24 wins, 73 losses. Win rate: 24.7%. At my average payout ratio, the breakeven win rate was around 25%. I was right on the edge — not clearly profitable, not clearly losing. The kind of result where the specific trades you cut could swing everything.
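For intuition, here's the breakeven arithmetic as a quick sketch. The ~$0.25 average entry price is my assumption, chosen to match the ~25% breakeven figure:

```python
# Back-of-envelope breakeven for positions that resolve to $1.00 or $0.00:
# the breakeven win rate equals the average entry price per share.
avg_entry = 0.25  # assumed average entry price
breakeven_win_rate = avg_entry / 1.00
actual_win_rate = 24 / 97

print(round(breakeven_win_rate, 3))  # 0.25
print(round(actual_win_rate, 3))     # 0.247 -- right on the edge
```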
So I started cutting.
The Data That Jumped Out
Two patterns were obvious once I split the trades into groups.
Two-bucket vs single-bucket. When the forecast pointed to 14°C, I'd been buying both the 14°C and 13°C buckets for the same city-date. Insurance, I thought. The data said otherwise: two-bucket trades had an ROI of -21.9%. Single-bucket trades: +46.2%. The "insurance" bucket almost never hit. I was doubling my risk for nearly zero extra reward.
Trades bought 54+ hours before peak. Twenty-four trades, three wins, ROI of -42.2%. The further out I bought, the worse the forecast accuracy, the worse the results. My 48-hour window was already generous — anything beyond that was just gambling with worse odds.
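The grouping behind both findings is simple. A sketch of how that breakdown might look, assuming trade records shaped like the bot's JSON logs (the field names here are hypothetical):

```python
from collections import defaultdict

def roi_by_group(trades, key_fn):
    """Group closed trades and compute ROI = net P&L / total cost per group."""
    cost = defaultdict(float)
    pnl = defaultdict(float)
    for t in trades:
        k = key_fn(t)
        cost[k] += t["cost"]
        pnl[k] += t["payout"] - t["cost"]
    return {k: round(pnl[k] / cost[k] * 100, 1) for k in cost}

# Toy data -- real analysis ran over all 97 closed trades.
trades = [
    {"cost": 10.0, "payout": 0.0, "buckets": 2, "hours_to_peak": 60},
    {"cost": 10.0, "payout": 25.0, "buckets": 1, "hours_to_peak": 30},
    {"cost": 10.0, "payout": 0.0, "buckets": 2, "hours_to_peak": 40},
]

print(roi_by_group(trades, lambda t: t["buckets"]))
# {2: -100.0, 1: 150.0}
```

The same `key_fn` trick covers the lead-time split: `lambda t: t["hours_to_peak"] >= 54`.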
The Three Cuts
The first thing I ripped out was the 54h+ trades. One line:
if hours_to_peak > CONFIG.get("max_buy_hours", 48):
    return False
That single condition eliminated the worst-performing segment in the entire portfolio. -42% ROI, gone.
Next was the profit-skip rule. My old stop-loss logic would cut any position where the market moved against us — even if the current price was above what I paid. I found cases where the bot sold a position at $0.14 (bought at $0.11) because the market's top bucket shifted, only for that position to resolve at $1.00. The bot was cutting winners.
if current_price > buy_price and buy_price > 0:
    return  # in profit — hold for resolution
After applying this: zero profitable positions stop-lossed. Exactly what I wanted.
The last cut was removing one-tick stop-losses. The old code would stop-loss when the market moved just one tick away from our bucket. But one tick is noise — prices wobble constantly. Chicago once got stop-lossed at one tick, then the winning bucket rebounded to pay $0.33. I was selling noise and missing signal. New rule: stop-loss only triggers at 2+ ticks distance.
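Put together, the three cuts amount to one entry gate and one stop-loss gate. A minimal sketch of the combined logic (the CONFIG keys, bucket_distance parameter, and function names are illustrative, not the bot's actual structure):

```python
CONFIG = {"max_buy_hours": 48, "min_stop_ticks": 2}

def should_buy(hours_to_peak: float) -> bool:
    # Cut 1: never enter a position more than 48h before the peak.
    return hours_to_peak <= CONFIG["max_buy_hours"]

def should_stop_loss(current_price: float, buy_price: float,
                     bucket_distance: int) -> bool:
    # Cut 2: never stop-loss a position that is currently in profit.
    if current_price > buy_price and buy_price > 0:
        return False
    # Cut 3: one tick of drift is noise; only act at 2+ ticks.
    return bucket_distance >= CONFIG["min_stop_ticks"]

print(should_buy(54))                    # False: too far out
print(should_stop_loss(0.14, 0.11, 2))   # False: in profit, hold
print(should_stop_loss(0.08, 0.11, 1))   # False: one tick is noise
print(should_stop_loss(0.08, 0.11, 2))   # True: underwater at 2+ ticks
```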
Before and After
The results were clearer than I expected:
| | Before (pre-3/28) | After (3/28~) |
|---|---|---|
| Win rate | 27% (20/75) | 18% (4/22) |
| ROI | -8.3% | +11.1% |
Lower win rate, higher ROI. That sounds backwards, but it makes sense: the changes didn't make me win more often. They made each loss smaller and let the winners run to resolution instead of getting cut early.
The stop-loss breakdown after the changes told the story:
| Type | Trades | P&L |
|---|---|---|
| Market stop-loss | 6 | -$4.95 |
| Temperature stop-loss | 4 | -$1.97 |
| Resolution holds | 12 | +$10.00 |
| Profitable positions cut | 0 | $0.00 |
Resolution holds were carrying the portfolio. The stop-losses were doing their job — limiting damage — without eating into wins anymore.
Building the Data Pipeline
While fixing the trading logic, I set up a proper forecast tracking system on the VPS. Snapshots every 6 hours starting 54 hours before each city's peak temperature time. All models — GFS, ECMWF, ICON, plus AIFS and regional models like HRRR for US cities.
The HRRR model needed a fix I spent two days tracking down. Open-Meteo changed the model identifier from hrrr_conus to ncep_hrrr_conus without any announcement. US city forecasts were silently missing their regional model, and I couldn't figure out why until I checked the API changelog.
All of this data — forecast snapshots, market history, price tracking — lives in JSON files on the VPS. About 77MB total. Not elegant, but it works, and it's what I'd need to actually measure which models are best for which cities instead of guessing.
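The timing logic behind those snapshots is simple enough to sketch. This is just the schedule math; the real collector on the VPS is cron-driven:

```python
from datetime import datetime, timedelta

def snapshot_times(peak: datetime, start_hours: int = 54, step_hours: int = 6):
    """Snapshot timestamps every 6h, from peak-54h up to the peak itself."""
    times = []
    t = peak - timedelta(hours=start_hours)
    while t <= peak:
        times.append(t)
        t += timedelta(hours=step_hours)
    return times

peak = datetime(2025, 3, 28, 15, 0)  # hypothetical peak-temperature time
times = snapshot_times(peak)
print(len(times))  # 10 snapshots: -54h, -48h, ..., -6h, 0h
```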
The Uncomfortable Lesson
The biggest thing I took from this wasn't technical. It was that stop-losses feel safe but cost money in prediction markets. Traditional stock trading wisdom — "cut losses quickly" — doesn't apply when your positions resolve to $0 or $1 within 48 hours. A stop-loss at $0.15 saves you $0.10 per share, but costs you the $0.85 upside when you're right. At 20% win rate with 5:1 payoff, every unnecessary stop-loss is pure waste.
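The expected-value arithmetic behind that claim, as a sketch (assuming binary $0/$1 resolution and the ~20% win rate from my own numbers):

```python
# EV per share of holding to resolution vs. taking a stop-loss exit.
win_rate = 0.20
resolution_payout = 1.00
stop_exit = 0.15

ev_hold = win_rate * resolution_payout  # 0.20 per share
ev_stop = stop_exit                     # 0.15 per share, guaranteed

print(round(ev_hold - ev_stop, 2))  # 0.05: holding beats the stop in EV
```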
I came into this project thinking stop-losses were responsible risk management. The data said they were the biggest drag on my returns.
Key Takeaways
- Oracle Cloud ARM free tier: 4 cores, 24GB RAM, $0/month. More than enough for a Python trading bot. tmux + scp for deployment.
- Two-bucket trades: -21.9% ROI. Single-bucket: +46.2%. The "insurance" bucket was pure drag.
- Trades bought 54h+ before peak: -42% ROI. The 48-hour buy limit eliminated the worst segment.
- Stop-losses in prediction markets aren't like stocks. Cutting positions early costs you more than it saves when the payout ratio is 5:1.
What's Next
The VPS was running, the stop-loss bleeding was fixed, and the data pipeline was collecting forecast snapshots for every city. In Episode 11, that data is going to rewrite my strategy from the ground up — which models are actually best for which cities, why ECMWF wasn't the answer I thought it was, and a forecast disaster in Los Angeles that forced me to rethink how the bot picks temperatures entirely.
More updates on the way. If you're working on something similar or found a smarter way to do it, drop it in the comments — the more we share, the faster we all move.
Disclaimer: This blog documents my personal learning journey. Nothing here is financial advice.