Disclaimer: This content reflects my personal opinions, not those of any organizations I am or have been affiliated with. Code samples are provided for illustration only; use with caution and test thoroughly before deployment.
A few weeks ago, I was vibe coding a personal CRM with an AI assistant. Nothing fancy — just something to log my communication with people at work: who I talked to, what we discussed, when to follow up. The AI cheerfully scaffolded the whole thing: a Python backend, a SQLite database, tables for contacts and interactions, a schema with migration scripts.
I stared at the output for a moment. A contacts table with foreign keys. For notes I could have kept in a text file.
There’s nothing wrong with what the AI generated — it’s following established software engineering instincts. But those instincts were built for a different kind of software. For SaaS apps with millions of users and thousands of concurrent writes, yes, you need a database. For a personal CRM on one machine used by one person? I’m not so sure.
My thesis: plaintext files are the right default storage format for AI-generated personal software, and we should be nudging AI assistants toward that default more deliberately.
I’ve been experimenting with the GitHub Copilot Cloud Agent (also called the Copilot coding agent) as part of my remote vibe coding setup, which I covered in a previous post. The idea is simple: assign a GitHub issue to Copilot, let it implement the code on cloud compute, and have it open a pull request — all without needing a machine running locally. If you haven’t read that post, the short version is that this kind of setup is great when you only have a few minutes at a time and want meaningful progress to happen in the background.
The catch is that in some environments you can’t just use GitHub-hosted runners. Maybe you need compute that stays inside a specific AWS VPC, or your setup already lives on AWS and you’d rather keep everything there. In October 2025, GitHub announced support for self-hosted runners for the Cloud Agent, which opens the door to running the whole agent pipeline on AWS CodeBuild. Getting it working end-to-end took some trial and error. This post walks through the setup and, more importantly, the pitfalls I hit along the way.
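To give a flavour of how the pieces connect before the walkthrough: CodeBuild can host GitHub Actions runners, and a workflow targets them through a `runs-on` label that encodes the CodeBuild project name. A sketch, with `copilot-agent-runner` as a placeholder project name and the job shape based on Copilot's setup-steps convention:

```yaml
# .github/workflows/copilot-setup-steps.yml — illustrative sketch only
name: Copilot Setup Steps
on: workflow_dispatch
jobs:
  copilot-setup-steps:
    # CodeBuild-hosted runners are selected by a label of the form
    # codebuild-<project-name>-${{ github.run_id }}-${{ github.run_attempt }}
    runs-on: codebuild-copilot-agent-runner-${{ github.run_id }}-${{ github.run_attempt }}
    steps:
      - uses: actions/checkout@v4
```

The details of wiring the CodeBuild project to the repository are exactly where the pitfalls live, which is what the rest of this post covers.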
Many AI platforms — OpenAI, Anthropic — offer “scheduled tasks” or “scheduled agents” that run automatically. Sounds great, until you realise the configuration lives in some web UI, completely outside your version control. You can’t review changes in a pull request, you can’t roll back, and you can’t easily share or reproduce your setup. The automation tooling conversation also seems to have been taken over by no-code platforms — n8n, Zapier, or just asking a chatbot — which work fine until you want something more structured and closer to how software is actually built.
I’m already paying for GitHub Copilot. I wanted scheduled AI automation that’s already included, version-controlled in Git, auditable, and running in a real development environment with full CLI access. Turns out GitHub Agentic Workflows (gh-aw) ticks all those boxes.
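For readers who haven't seen gh-aw: a workflow is a markdown file in `.github/workflows/` with YAML frontmatter for the trigger and permissions, and a natural-language prompt as the body. A sketch from memory (field names and the task itself are illustrative; check the gh-aw docs for the current frontmatter schema):

```markdown
---
on:
  schedule:
    - cron: "0 9 * * 1"   # every Monday morning
permissions:
  contents: read
  issues: write
engine: copilot
---

# Weekly dependency check

Scan the repository for outdated dependencies and open a GitHub issue
summarising anything that needs attention.
```

Because it's just a file in the repo, the scheduled automation gets the same review, rollback, and reproducibility story as the rest of the codebase.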
I was scrolling through YouTube late one evening when a video on plain text accounting caught my eye. It immediately clicked.
I’ve been managing my finances with a patchwork of Excel spreadsheets for years — one quarterly balance sheet, one income statement, and a few ad hoc sheets for tax calculations. They work, until they don’t. Formulas drift. Unbalanced accounts go unnoticed. And importing CSVs downloaded from my bank apps involves a tedious amount of manual cleanup every time.
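The core idea of plain text accounting is that the ledger itself is a text file of balanced transactions, which a tool then verifies and reports on. In Beancount syntax, for example, a transaction is a date, a payee, and a set of postings that must sum to zero (account names here are illustrative):

```text
2025-06-01 * "Grocery store" "Weekly shopping"
  Expenses:Food:Groceries     84.20 EUR
  Assets:Bank:Checking
```

The second posting's amount is inferred, and the tool refuses to accept a file where the postings don't balance — which directly addresses the unbalanced-accounts problem that spreadsheets let slide.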
In my previous post about ADR as Event Sourcing, I talked about capturing architecture decisions continuously as they happen, not months later when everyone’s forgotten the context. But writing ADRs is only half the battle. The other half is actually enforcing them—and that’s where most teams fail. ADRs sit in a separate wiki or documentation tool like Confluence or Notion, gradually becoming archaeology rather than law.
What if we could give our AI code reviewer the ADRs as instructions, so it flags violations automatically on every pull request?
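One way to wire this up, assuming the reviewer reads repository-level custom instructions the way GitHub Copilot code review reads `.github/copilot-instructions.md`. The ADR numbers and rules below are invented examples:

```markdown
<!-- .github/copilot-instructions.md — sketch; ADRs are hypothetical -->
When reviewing pull requests, enforce these architecture decisions:

- ADR-007: All service-to-service calls go through the API gateway.
  Flag any direct HTTP calls between internal services.
- ADR-012: Database access happens only via the repository layer.
  Flag raw SQL outside `src/repositories/`.
```

The instructions live next to the ADRs in the same repository, so a pull request that changes a decision can update the enforcement rule in the same diff.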