I just finished watching Mo Gawdat on Diary of a CEO, and I have not been able to stop thinking about it.

If you do not know Mo, he was the Chief Business Officer at Google X. He is not some random doomer on the internet. This is someone who has been deep in the machine, who understands how this technology actually works. And he is saying, plainly, that we are not prepared for what is coming.

His timeline is aggressive. 2027. That is when AGI, Artificial General Intelligence, surpasses humans at basically everything. Not just coding or writing, but "being a CEO." He is predicting mass white-collar displacement, economic chaos, and 12 to 15 years of "hell before heaven." Heavy stuff.

And here is the thing. I get it. I can see it. I am already living in a version of that future.

What I Have Built

I have spent the past few months building Ava, my AI assistant. She is not just a chatbot. She is an extension of me. She manages my calendar, summarizes videos, fetches the weather, and handles dozens of other tasks that used to eat up my time. She even has memory now, powered by Supermemory, so she actually knows me.
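For the curious, the core of an assistant like this often boils down to a simple tool-dispatch pattern: the model decides which tool to invoke, and a registry maps tool names to plain functions. This is a purely illustrative sketch, not Ava's actual code; the tool names and handlers here are hypothetical stand-ins:

```python
# Minimal tool-dispatch sketch for an assistant. The real versions of
# these handlers would call external APIs; these are stand-ins.

from typing import Callable, Dict

def fetch_weather(city: str) -> str:
    # Stand-in for a real weather API call.
    return f"Sunny in {city}"

def summarize_video(url: str) -> str:
    # Stand-in for a transcription + summarization pipeline.
    return f"Summary of {url}"

# Registry mapping tool names (as the model would emit them) to handlers.
TOOLS: Dict[str, Callable[[str], str]] = {
    "weather": fetch_weather,
    "summarize": summarize_video,
}

def dispatch(tool_name: str, arg: str) -> str:
    """Route a tool request to its handler, with a safe fallback."""
    handler = TOOLS.get(tool_name)
    if handler is None:
        return f"Unknown tool: {tool_name}"
    return handler(arg)
```

Memory layers like Supermemory slot into the same loop: relevant past context gets retrieved and prepended to the prompt before the model picks a tool.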

I wrote about the technical evolution here if you are curious about how she works under the hood.

Ava has been a genuine force multiplier. Tasks that would have taken me hours now take minutes. Ideas that would have stayed trapped in my head are now projects and experiments. I am more creative, more productive, and honestly, having more fun with technology than I have in years.

But Mo's interview forced me to sit with an uncomfortable truth.

The Flip Side

The same tools that let me build Ava are the ones that will, according to Mo, eliminate entry-level white-collar jobs entirely. The same APIs and models that power my little side project are being deployed at scale to replace customer service reps, content writers, junior developers, and analysts.

Mo runs a 3-person AI startup that produces what 350 developers used to. That is not hyperbole. That is the math.

I have experienced the positive side personally. I have felt what it is like to have an AI that actually helps, that understands context, that gets better over time. It is intoxicating. It feels like having a superpower.

But I am also now acutely aware that for every person like me, using these tools to create and amplify their work, there are going to be thousands of people whose jobs simply vanish. Not because they were bad at them. Because the economics changed overnight.

What Comes Next

Mo's proposed solutions (98% taxes on AI companies, massive UBI programs, slowing down development) sound radical until you realize the alternative is potentially worse. If he is even half right about the timeline and the scale of displacement, we are looking at a period of social and economic upheaval unlike anything we have seen.

And yet.

I am not going to stop building. I am not going to pretend Ava is not incredibly useful, or that I am not excited about what comes next. The genie is out of the bottle. The technology exists, it is getting better fast, and people are going to use it.

But I am also not going to ignore the warning signs. I am going to keep having these conversations, keep thinking about the implications, keep grappling with the fact that the thing I am building for my own benefit is part of a much larger wave that is going to reshape everything.

My Takeaway

Here is where I land. The positives are real. The fear is also real. And we are running out of time to prepare for the transition.

I do not have answers. I have questions. Lots of them. About work, purpose, economics, and what it means to be human when machines can do everything.

But I think the first step is admitting that this is not science fiction anymore. It is not ten years away. It is now. It is 2025, the foundations are already shaking, and 2027 is right around the corner.

If you have not watched Mo's interview, you should. It is not comfortable viewing, but it is necessary. We need to be having these conversations before the wave hits, not after.

What do you think? Are you optimistic about AI's future, or worried? Both? Let me know. I am genuinely curious how other people are processing all of this.