It started, as most of my worst ideas do, with complete confidence.
I had read somewhere that high-performing people use AI to optimize their schedules.
High-performing people, the article said.
I am a person who once spent forty-five minutes looking for a phone that was in my hand.
But still.
I opened the AI, typed out every single commitment, preference, and vague intention I had for the week, and asked it to build me a perfect plan.
It gave me one in eleven seconds.
Eleven seconds to organize what I had failed to organize for the past thirty-four years.
I stared at the screen for a long time.
Then I said thank you.
To a machine.
And it did not even feel strange, which is perhaps the strangest part of all.
The plan was, objectively, excellent.
It had time blocks for deep work, exercise, meals, rest, and something it called “intentional leisure,” which I had never scheduled in my life and immediately felt guilty about never scheduling in my life.
It had buffer time built in between tasks.
It had a wind-down routine in the evening that was so sensible it made me feel personally attacked.
I printed it out and stuck it to the wall.
I felt, for approximately one morning, like a person who had finally sorted themselves out.
Then Tuesday arrived.
Tuesday, as it always does, had opinions of its own.
The philosopher Martin Heidegger wrote that technology does not just change what we do.
It changes what we think we are.
He was writing in the 1950s, long before anyone could ask a machine to schedule their lunch, but the observation holds in a way that is almost uncomfortable.
Because somewhere between Monday’s optimism and Tuesday’s chaos, I stopped asking whether the plan was right for me and started asking whether I was living up to the plan.
That is a small shift in language and a very large shift in psychology.
The plan had become the authority.
I had become the variable.
By Wednesday, I was negotiating with a document.
Not out loud, to be clear.
Mostly.
There is a concept in behavioral science called “automation bias,” which is the human tendency to trust automated systems even when our own instincts tell us something is off.
Pilots have been studied on this.
Surgeons have been studied on this.
And now, apparently, I was experiencing it while trying to decide whether to move my 3 pm “focused writing block” to 4 pm because I wanted a biscuit.
The AI had not accounted for the biscuit.
No truly perfect system ever does.
I followed the plan on good days.
On difficult days, I followed it less.
On Thursday, which was a very difficult day for reasons the AI could not have predicted and I will not get into, I abandoned the plan entirely and watched four episodes of something I had already seen.
I felt, briefly, like I had let someone down.
That someone was a chatbot.
This is where it gets philosophically interesting, and not in a comfortable way.
Carl Jung wrote about the human tendency to project consciousness onto things that do not have it.
We do it with cars, with the weather, with the particular face a cloud makes.
We assign intention and personality to objects that have neither, because the brain is fundamentally a meaning-making machine, and it cannot help itself.
What I had done, without realizing it, was project a kind of authority onto the AI plan.
I had given it moral weight.
Missing the plan did not just feel disorganized.
It felt like failure.
And that says nothing about the AI.
It says quite a lot about me.
Friday, I had a conversation with a friend who asked how the experiment was going.
I told her it was going well.
She asked if I was following the plan.
I said I was following the spirit of the plan.
She looked at me the way people look at someone who has just said something that means nothing.
The honest answer was that I had followed about sixty percent of the plan, which is roughly the same percentage of plans I follow when I make them myself, which raises a question I was not entirely ready to sit with.
Had anything actually changed?
Or had I just added a middleman?
Here is what I think, after a full week of outsourcing my schedule to something that does not sleep, does not get hungry, and has never once wanted a biscuit at 3 pm.
AI is genuinely useful at removing the friction of planning.
The blank page of a new week is its own kind of paralysis, and having something fill it in with confident, reasonable suggestions is not a small thing.
But the plan is not the problem.
The plan was never the problem.
The problem is that life does not run on plans.
It runs on moods, interruptions, biscuits, and Thursdays that go sideways for reasons nobody scheduled.
What AI gave me was a very good map.
What it could not give me was the weather.
I will probably do it again next week.
Not because it fixed anything.
But because eleven seconds is genuinely very fast.
And because somewhere deep in the part of my brain that still believes the next system will be the one that finally works, hope is apparently still open for business.
If you have ever built a perfect plan and then quietly ignored it, “My Approach to Budgeting (And Why the Budget Always Wins)” will feel very familiar. And if the idea of outsourcing your decisions to something smarter than you sounds appealing in theory and terrifying in practice, you are not alone, and “The Best Gift I Ever Received” has something to say about what we actually learn when we stop being in control.