Back to blog
AI·

AI Feb 2026 — The State of Play

Voice clones, agents, and what it means if you can call your digital self on the phone.

What does it mean if you can call your digital self on the phone?

In this short essay I'll first discuss the events leading up to the project I'm currently working on, then recap the progress of AI voice models, the rise of computer use agents, and finally attempt to discuss what this means for humanity.


1. How'd We Get Here

Almost one year ago today I had a dream that I was driving around the Toronto suburbs in the summer, lost. I picked up the phone and made a call. On the other end I heard a familiar voice: "Hey, it's Liam." My digital self had answered the phone. My eyes began to widen and distort. I didn't reply, and "Liam" didn't have anything more to say.

Fast forward a year: I've trained a professional voice clone and attached it to a conversational AI agent that is becoming increasingly good at being me. This digital agent has my voice and understands how I would respond. And as of February 2026, the infrastructure to build something like this has become more accessible than most people realize.


2. Voice AI Progress (and the Lack of Bottlenecks)

ElevenLabs has put out remarkable models since 2023, the standout being a professional voice clone requiring between 30 minutes and 3 hours of input audio. Then in January 2026, Qwen released a voice clone model requiring only 10 seconds of input and producing comparable results. The barrier to cloning a voice went from hours to seconds in the span of a few years.

Running alongside this is a quieter shift toward local LLMs — models that run entirely on your own hardware, without sending data to the cloud. The privacy and cost appeal is real. The caveat is that local models still lag meaningfully behind the best cloud-based ones, and the most capable versions require computers with roughly 10 to 20 times the memory of a normal machine. It's not something most people can run at home today. But the gap is closing, and the direction of travel is clear.


3. Agents Go Loose: OpenClaw (February 5, 2026)

To understand why OpenClaw matters, it helps to understand the difference between an LLM and an agent.

A standard LLM interaction is simple: you write a message, the model processes it, and it sends a reply. An agent is different. You write a message, the model looks at the tools it has available — search the web, create a file, read a file, send an email, connect to an external service — and then takes a series of actions using those tools before responding to you. It's not just answering. It's doing.

A great example of an agent done well is Cursor, the coding tool that grew from $1M to $100M in revenue in 2024. Before Cursor, you'd generate code in ChatGPT and copy-paste it into your IDE. Cursor removed that step — it put the agent directly inside your coding environment so it could read and write your codebase. Focused, useful, relatively controlled.

OpenClaw, which went viral on February 5th 2026, took a different approach. Instead of sandboxing the agent to a specific folder or environment, someone let it loose at the root of your computer and gave it a full toolkit — open Notes and type something, open Mail and send a message, trigger actions on a schedule without you asking. That last part matters: OpenClaw supports CRON jobs, meaning the agent can act proactively on a timer, without any prompt from you.

On the surface this sounds like a small technical difference. It's actually a meaningful shift in how much autonomy we're willing to hand over.

A word of caution: do not install OpenClaw on your personal computer. Start it on a virtual machine like an EC2 instance on AWS, or on an old wiped machine with no sensitive data on it. These agents are unpredictable and vulnerable to prompt injection attacks — meaning a malicious actor could, through the right input, trick your agent into sending your API keys, credit card numbers, or email history somewhere it shouldn't go. OpenClaw has internet access enabled by default. Treat it accordingly.


4. What It Means

I don't know.

But what I do know is that we will very soon have near-perfect digital mirrors of ourselves existing in the world — ones with our voice, our personality, and increasingly, the ability to act on our behalf. Agents that can send emails, make calls, manage schedules, all while sounding and responding like us. The phishing attack that used to be a sketchy email from a Nigerian prince is becoming a phone call from your mom.

Beyond security, this will reshape how we think about identity and death. What does it mean to be you if your personality and likeness are accurately living on digitally? Do you really say goodbye to someone if they can still be reached on the phone after their physical self is gone?

Some might call that futurism. I'd call it paying attention. These aren't hypothetical questions for some distant future — they're questions for now.