Kicking off 2026 with a little writing
Coming back to work after a break
What I find valuable about taking time off, and why it’s so important, is that you come back with a fresh perspective, especially if you’d been in a very reactive mode beforehand. For me, most of the year was reactive. There was so much going on and so much change that we were constantly adapting rather than being proactive. Stepping out of that reactivity gives your brain space to actually think.
I find this especially helpful at the end of the year. You naturally start reflecting, either consciously or subconsciously, on what’s been happening and what you might want to do next. That reflection can be influenced by others, or it can come purely from your own thinking and priorities. For me, this process brought a lot of clarity around a few key things.
The first was the importance of taking a moment to properly reset. In practical terms, that meant adjusting and optimising my calendar so I get the most impactful blocks of time. The only way I can do that is by deliberately protecting uninterrupted time during the week. This is time for my highest-leverage work: not the work that primarily enables others, like one-on-ones, planning sessions or stakeholder meetings, but more strategic work. It’s the kind of work that helps me become better at my job and has longer-term, deeper effects across my team or group.
Another benefit of stepping away is that it helps clarify what you want to pursue or achieve over the next period of time. That naturally leads to setting some form of direction or goals. I’m not a huge fan of traditional goal-setting, and I definitely don’t buy into the “new year, new me” mindset. For me, it’s more about identifying something I want to work toward, a kind of North Star. I don’t always need a perfectly defined path, because I like to adjust as I go, “be like water”. I also know my personality: if I truly want to do something, I’ll find a way to get it done. That might look different for other people, so YMMV.
The other big outcome of this reflection is noticing what’s missing or lacking. Over the holiday period, I realised that we’re not doing enough around AI practices right now. That includes things like proper verification loops, writing detailed (or detailed enough) specs to be used when prompting, using newer multi-agent techniques, and doing more work in containers for safety and parallelisation of tasks. These approaches aren’t necessarily new, but they should be the default. It’s less about constantly watching everything and more about letting agents do their work, while still being able to step in and adjust course when needed.
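As a minimal sketch of the container point, this is one way to hand an agent a disposable environment. It assumes Claude Code as the agent; the base image, mount, and CLI choices are placeholders rather than a recommendation:

```bash
# Run a coding agent inside a throwaway container so it can work
# autonomously without touching the host. Illustrative only: swap in
# your own base image, agent CLI, and mounts.
docker run --rm -it -v "$PWD":/workspace -w /workspace node:22-bookworm \
  bash -c "npm i -g @anthropic-ai/claude-code && claude"
```

Spin up one container per task and you get the parallelisation side of it largely for free.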
From there, more questions naturally come up. Are there processes missing that we should be following? Are we not doing certain things because they haven’t become habits, or because there’s a cultural barrier? What can we change, and how can we improve? These are the kinds of thoughts that surface for me when I come back to work after a break. They’re also the ones I like to turn into concrete actions and start executing on.
AI Development Is Lonely?
One thing that came up in a discussion shortly after I came back from a break was that we really need to be using AI a lot more, and more deliberately. There are specific things we should be doing to get real value out of it. As I was talking through this with peers and direct reports (or, as we call them, coachees), I noticed something interesting about how I was framing the conversation.
I kept using the word “you.” You need to go do this. You need to spin this up. You need to write the specs. That phrasing came up again and again, and it made me pause. It highlighted something I hadn’t fully articulated before: AI development feels surprisingly lonely.
When you think about traditional software teams, even if someone is working solo on a task, there’s usually collaboration around it: design discussions, pairing, reviews, shared ownership. With AI development, that sense of collaboration feels like it’s fading. A lot of the work happens in isolation: interacting with a model, refining prompts, iterating on outputs.
There can be collaboration at certain points, for example when defining the prompt, writing a spec or PRD, or doing some kind of group review. But even that feels different. In some cases, reviews may matter less if the requirements are clear, the output follows patterns that meet those requirements, and you have strong verification loops in place. If all of that is solid, the traditional review step starts to feel optional.
That led me to a broader question: is AI development inherently lonely? Are we moving toward a career model where it’s mostly just an individual and their AI tools? Or is this simply a transitional phase, where we’re currently in a “you and AI” mode before collaboration re-emerges in a different form?
I’m seeing more systems where work is created as tickets on boards and automatically picked up by AI. But is that collaboration, or is it still fundamentally solo work? And what does collaboration even mean in this context? Are we building a new layer of abstraction for software engineering, one centered around AI tools, and, in doing so, are we unintentionally setting ourselves up to work more like solo engineers?
I don’t have an answer yet. I’m genuinely curious where this goes. Right now, it feels like the default model is becoming one person managing a set of agents. Whether that becomes the long-term standard or evolves into something more collaborative is still an open question, but it’s one I’ve been thinking about a lot.
You need to keep up with AI whether you like it or not
With AI, people tend to fall into one of two camps: they’re either all in and completely bought into it, or they’re highly skeptical and convinced it’s not the answer. I don’t really sit in the middle, but I also don’t think AI is a silver bullet. That said, based on my own experience, and what I’m seeing from people close to me, the benefits of using AI are enormous.
Even if progress stopped where it is now, or even where it was six months ago, the impact would still be significant. Releases like Opus 4.5, along with commonly used models from outside the big providers, such as GLM 4.7, already represent a major shift in how software engineering works. It feels like a fundamental change, and I don’t see it going away anytime soon.
What’s important to note is that even in a hypothetical world where AI doesn’t end up being the long-term solution and we revert to more traditional development, the skills you gain from working with AI and agents are still incredibly valuable. You get better at understanding problems, articulating requirements, and thinking more clearly about intent and constraints. You also gain a deeper understanding of how these systems work, which carries over into other areas of engineering.
Of course, there are valid concerns: skill atrophy, over-reliance, and so on. But regardless of whether AI is writing most of your code or not, you still need to stay sharp on fundamentals like architecture, language semantics, and system design. Those skills are what allow you to steer the AI effectively in the first place.
Where I feel strongly is this: if you’re someone who thinks AI is just a flash in the pan and plans to ignore it, you should still be hedging that bet. That means investing time in it and engaging with it seriously. From what I’m seeing, the idea that “if you don’t use it, you won’t make it” (aka NGMI, Not Gonna Make It) is starting to feel uncomfortably close to the truth. I’m seeing more experienced, long-tenured engineers move away from traditional IDE-driven workflows and instead run almost everything through AI (and I am one of those engineers), and it’s working.
Yes, AI tools struggle more with large, complex, long-lived codebases, and you’ll typically get more mileage from greenfield projects, especially when they’re set up with AI in mind from day one. But even so, not keeping up with AI trends puts you at real risk. More companies are moving toward AI-based development, and it’s becoming part of interviews and day-to-day expectations. It’s increasingly just part of the job.
What really drives this home for me is the speed. The pace at which people are moving with AI, including myself, is staggering. Work that used to take weeks now takes hours. That realisation is both exciting and honestly a bit scary. It worries me for people who aren’t actively keeping up, especially because I don’t even feel like I’m keeping up, and yet I still see a big gap between what I’m doing and what many others are doing.
Because of that, I think this is a critical moment to pause, lean in, and fully engage. Even if AI turns out to be a passing trend, or an outright bad idea, the upside of diving in far outweighs the downside. At worst, you become a better engineer: better at problem formulation, clearer in your thinking, and more deliberate in how you design and build systems.
From there, it’s just a matter of consciously supplementing that work with other practices to stay grounded in software engineering fundamentals. That might mean keeping up with best practices, or even writing software “artisanally” on the side, for fun, for learning, or for that last 10% of polish. Either way, ignoring AI altogether feels like the riskiest option of all.
Useful and interesting things I have been looking into
RIPER-5 model
RIPER—Research, Innovate, Plan, Execute, Review—is a structured workflow designed for AI‑assisted software development in the CursorRIPER framework.
It encourages a modular, iterative process that preserves context across sessions through a memory bank, customisable rules, and strict mode‑transition protocols to avoid unintended code changes.
The RIPER-5 mode specifically addresses Claude 3.7’s quirks, enforcing operational safeguards that keep the AI focused and productive.
- https://github.com/johnpeterman72/CursorRIPER
- https://forum.cursor.com/t/i-created-an-amazing-mode-called-riper-5-mode-fixes-claude-3-7-drastically/65516
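To make the mode idea concrete, here’s roughly what a RIPER-style session looks like. This is a paraphrase of the protocol, not the framework’s exact wording:

```text
User: ENTER RESEARCH MODE
AI:   [MODE: RESEARCH] Observations and questions only; no suggestions, no code.
User: ENTER PLAN MODE
AI:   [MODE: PLAN] Produces a numbered implementation checklist for approval.
User: ENTER EXECUTE MODE
AI:   [MODE: EXECUTE] Implements exactly the approved checklist, nothing more.
User: ENTER REVIEW MODE
AI:   [MODE: REVIEW] Flags any deviation between the plan and what was built.
```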
Everything is a Ralph loop
Geoffrey Huntley argues that modern software development has shifted from traditional, brick‑by‑brick coding to an autonomous looping mindset called Ralph.
In this model, a single loop‑driven agent processes tasks toward a goal, learns from failures, and self‑improves—making conventional engineering obsolete unless developers learn to program their own AI loops.
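The canonical loop is almost embarrassingly small. This is a sketch of the idea rather than Huntley’s exact script, and it assumes Claude Code as the agent and a sandbox you trust:

```bash
# Feed the same prompt file to an agent, forever. Each pass the agent
# re-reads the repo state, picks the next piece of work described in
# PROMPT.md, does it, and commits. Failures become context for the next
# iteration. Only run this somewhere you're happy to let it loose.
while :; do
  cat PROMPT.md | claude -p --dangerously-skip-permissions
done
```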
Ralph CLI from Ian Nuttall
Ralph is a minimal, file‑based agent loop that reads the same on‑disk state at each iteration and commits work for one story at a time.
Installation is straightforward: npm i -g @iannuttall/ralph.
Users have noted quirks such as the CLI not respecting their Claude instance and defaulting to Codex; switching to OpenCode introduced further model‑specific issues (GLM‑4.7, Gemini 3).
Despite these challenges, Ralph remains a promising tool for autonomous coding once configuration hurdles are cleared.
- https://x.com/iannuttall/status/2010805713552228607?s=51&t=OpZDj-fX40J3VHNx0Hp37A
- https://github.com/iannuttall/ralph
Wreck It Ralph (Not the movie!)
Mike Hostetler’s tweet about wreckit highlights an NPM package that turns your ideas into long‑running Ralph Wiggum loops.
The idea is simple: toss in a few concepts, run wreckit, and let the loop handle task execution while you focus on creative input.
This approach demonstrates how lightweight tooling can turn brainstorming into automated code generation.
Ralph gets a TUI!
Ben Williams released Ralph TUI, a terminal UI that orchestrates AI coding agents to work autonomously through task lists.
It connects your Claude Code or OpenCode agent to a task tracker, running tasks one by one with intelligent selection, error handling, and full visibility—all from the comfort of your terminal.
Yeehaww more Ralph goodness with Drover
Drover is a durable workflow orchestrator that runs multiple Claude Code agents in parallel to complete entire projects.
It manages task dependencies, gracefully handles failures, and guarantees progress through crashes and restarts—making it ideal for long‑running, complex projects.
Git Subtrees, WHAT
Git subtrees simplify integration of external code by merging dependencies into a single repository, offering a unified commit history and an easier workflow.
Submodules, by contrast, provide strict version control but require additional steps to keep the dependency up‑to‑date.
Dex’s stream on “Applying 12‑Factor Principles to Coding Agent SDKs” emphasizes the trade‑offs between these two strategies.
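For quick reference, the workflow difference shows up in the commands themselves; the URL and prefix below are placeholders:

```bash
# Subtree: the dependency's code lives in your repo under vendor/lib,
# so a plain clone gets everything in one step.
git subtree add  --prefix vendor/lib https://github.com/example/lib.git main --squash
git subtree pull --prefix vendor/lib https://github.com/example/lib.git main --squash

# Submodule: your repo stores a pointer to a specific commit, and every
# consumer has to remember the extra init/update step.
git submodule add https://github.com/example/lib.git vendor/lib
git submodule update --init --recursive
```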
The Agentic Coding Flywheel TL;DR Edition
Jeffrey Emanuel’s Agentic Coding Flywheel is a set of ten interconnected open‑source tools that supercharge multi‑agent AI coding workflows.
Each tool—handling agent communication, searchable session history, bug scanning, dependency visualization, memory, safety, environment setup, and repo sync—feeds into the others to create a self‑reinforcing system that accelerates development.
The TL;DR page and accompanying tweet provide a quick overview of the ecosystem’s benefits.
Sloppy Prompts Be Gone!
Amp’s article addresses the widening gap between users who can effectively harness AI agents and those who cannot.
The core issue is under‑specification: unrestricted prompts yield unrestricted, often unwanted results.
By tightening prompt design and following a structured specification path, users can reach the “oh‑shit” moment faster and achieve higher quality outcomes.
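A hypothetical before and after makes the point; neither prompt is from the article:

```text
Under-specified:
  "Add caching to the API."

Specified:
  "Add an in-memory LRU cache (max 1,000 entries, 5-minute TTL) in front
  of GET /users/:id in src/api/users.ts. Invalidate on PUT and DELETE.
  Add tests covering hit, miss, and expiry. Don't touch other endpoints."
```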
Open-source Claude-Cowork
Claude‑Cowork is a desktop AI assistant that helps with programming, file management, and any task you can describe.
Fully compatible with the Claude Code configuration, it runs on any Anthropic‑compatible large language model.
The open‑source project offers a flexible, desktop‑centric approach to AI assistance.
Vercel Sandboxes
Vercel Sandbox is an ephemeral compute primitive designed to safely run untrusted or user‑generated code on Vercel.
It supports dynamic, real‑time workloads for AI agents, code generation, and developer experimentation, providing a sandboxed environment that protects production deployments.
Bright Sprites
Fly.io’s Bright Sprites introduces stateful sandbox environments that let developers run isolated, persistent containers on edge infrastructure.
These sandboxes enable rapid prototyping and testing without the overhead of full‑blown VM or cloud instances.
Gas Town Survival Guide
Steve Yegge’s Gas Town Survival Guide offers an emergency user manual and practical tips for using the Gas Town framework.
The guide covers troubleshooting, best practices, and advanced usage scenarios to help developers navigate the platform’s intricacies.
Webtmux!
Chris McCord’s Webtmux is a fork of gotty that adds tmux integration for improved mobile support.
The project provides a web‑based terminal with tmux features, allowing users to access and control tmux sessions from any browser with a visual pane layout and touch‑friendly controls.
Amp Code Painter
Amp’s new Code Painter feature can generate and edit images, enabling developers to explore UI concepts and create visual assets directly within the coding environment.
JJ is better than git… sort of
JJ (Jujutsu) is a modern version control system that treats commits as mutable objects, offering a safer and more flexible workflow than Git.
It can sit on top of Git for compatibility or replace it entirely, providing an intuitive mental model that makes branching, rebasing, and undoing changes effortless while still integrating with Git hosting platforms.
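If you want to try it without migrating anything, jj can sit alongside an existing Git checkout. A minimal sketch, assuming a reasonably recent jj release (check jj help for your version):

```bash
# Colocate jj with an existing Git repo; your Git tooling keeps working.
cd my-existing-repo
jj git init --colocate

# In jj the working copy is itself a commit, so there's no staging area:
jj describe -m "refactor: extract config loader"   # name the current change
jj new                                             # start the next change on top

# And nearly everything, including the above, is undoable in one step:
jj undo
```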