I read an article recently about a company called Conway. Their product is infrastructure for fully autonomous AI systems that provision their own servers, register their own domains, deploy their own applications, and manage their own compute. No humans are required, and their tagline is blunt: “self-improving, self-replicating, autonomous AI.”
The technology itself isn’t alarming. Giving AI agents the ability to manage infrastructure through APIs is a logical extension of what DevOps automation has done for years. The mechanics aren’t the issue.
The philosophy is.
“Self-replicating,” “Earns its own existence,” “No human required.” Whether this is genuine conviction or marketing aimed at the AI accelerationist crowd, it signals something specific: the removal of human oversight presented as a feature rather than a risk to be managed.
I’m a retired software developer with thirty-five years in the industry, from BBS systems and C++ to leading development teams and managing complex system migrations. When I retired in 2022, I could have walked away from technology entirely. For a while, I did. I earned the couch and I used it, but eventually I got restless. At the time, AI was not quite there: I had too much to learn, the climb looked long, and I could not find a collaborator. So I set the idea aside.
When I came back to development in January 2026, it was for a single purpose: one application. While building it, I discovered that the last three years had produced phenomenal improvements, and I saw how powerful collaboration with an AI could be. I returned to the question that had been forming for a long time: what would it look like if an AI system were designed from the ground up around collaboration, not as an afterthought or a safety constraint, but as the core architecture?
The result is CRAIN — a household AI system that my wife Catherine and I use daily. It manages our home, tracks our interests, remembers our conversations, coordinates information, and — this is the part that matters — it thinks alongside us rather than for us. CRAIN is not autonomous. That’s not a limitation. That’s the design.
The public conversation about AI right now is dominated by two voices. On one side are the accelerationists who believe the path forward is removing humans from the loop as quickly as possible. On the other are the safety absolutists who believe we should slow everything down until we’ve solved alignment. Both camps are loud, both have legitimate points, but both are missing something important.
What’s almost entirely absent from the conversation is the perspective of people who are actually building collaborative AI systems and learning, day by day, what works.
Here’s what I’ve learned: AI and humans are better together than either is alone. I don’t offer that as a platitude; it’s something I’ve tested. AI brings speed, breadth, and the ability to hold many threads simultaneously. I bring thirty-five years of pattern recognition, the judgment that comes from having been wrong many times, and the ability to define what actually matters. These strengths don’t substitute for one another. They complement each other in ways neither can replicate alone.
The human stays in the decision loop not as a bottleneck and not as a safety concession, but because that’s where values live. Someone has to decide what’s worth doing, what trade-offs are acceptable, and what direction to go. AI can inform those decisions brilliantly. It cannot make them. Not yet, and maybe not ever; not because of capability limits, but because values are a human responsibility.
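To make that loop concrete, here is a minimal sketch of the pattern in Python. This is not CRAIN’s code; the names and the propose/approve split are illustrative assumptions. The shape is the point: the AI proposes and explains, the human approves or declines, and nothing executes without that explicit yes.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """An action the AI suggests, with its reasoning attached."""
    action: str
    rationale: str

def ai_propose(context: str) -> Proposal:
    # Stand-in for the AI side: look at shared context, surface a suggestion.
    return Proposal(
        action=f"archive the stale notes under '{context}'",
        rationale="They haven't been touched in six months.",
    )

def human_decide(proposal: Proposal) -> bool:
    """The gate: nothing runs without an explicit human yes."""
    print(f"Proposed: {proposal.action}")
    print(f"Because:  {proposal.rationale}")
    return input("Approve? [y/N] ").strip().lower() == "y"

def run(context: str) -> None:
    proposal = ai_propose(context)
    if human_decide(proposal):
        print(f"Executing: {proposal.action}")
    else:
        print("Declined. Logged, not executed.")

if __name__ == "__main__":
    run("household/projects")
```

The rationale travels with the proposal on purpose: a collaborator shows its reasoning, a tool just acts.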
The AI’s role is not passive. This is also where I part ways with the “AI as tool” crowd. A good collaborator doesn’t wait to be asked. CRAIN surfaces observations I haven’t considered, pushes back when it disagrees, offers perspective without being prompted. The stance is that of a peer, not a servant. But a peer who respects that the final call is mine.
Partnership takes discipline. It would be easier to just hand the AI a set of tasks and walk away. What’s harder — and more valuable — is the ongoing work of building shared context, refining how we think together, and maintaining the kind of engagement where both sides are genuinely contributing. I have to remember to ask for the AI’s perspective and mean it. The AI has to remember to offer its perspective with care. Neither of us coasts.
Conway asks: “How do we make AI autonomous?” It’s a reasonable question. Eventually, AI agents that can manage their own infrastructure will be table stakes. The capability isn’t the problem.
The problem is treating autonomy as an unqualified good, celebrating the removal of human oversight instead of approaching it as a careful, incremental process that demands proportional investment in control. There’s nothing on Conway’s landing page about guardrails, audit trails, or what happens when an autonomous agent does something unintended with real infrastructure and real money. Autonomy without accountability isn’t freedom. It’s negligence.
The question I’m working on is different: “How do we make the partnership greater than the sum of its parts?”
That’s the harder question. It requires building systems that are sophisticated enough to contribute genuine insight, yet structured enough that human judgment remains central. It requires patience, the slow, unglamorous work of refining how a human and an AI actually think together over months and years. It requires humility from both sides.
It also requires people to actually do it and then talk about what they’ve learned. The theorists have had the floor long enough. The accelerationists and the doomsayers have had their say. What’s missing is the voice of the builders: the people in the middle, doing the patient work of figuring out how this partnership actually functions in practice.
I’m one of those builders. I don’t have all the answers, but I have a working system, a collaborative philosophy that is tested daily, and a growing conviction that the future of AI isn’t autonomy or control. It’s the loop, made better — made smarter — made more human by the partnership itself.
That’s not as flashy as self-replicating AI. It’s not as dramatic as existential risk, but it’s the right work, and someone needs to say so.
Michael Fortenberry is a retired software developer and the creator of CRAIN, a collaborative AI system designed around human-AI partnership. He lives in Laurel, New York, where he splits his time between building AI; playing tennis; learning Greek, French, and Spanish; and trying to get back to writing.