What 2025 Taught Me About AI, Focus, and Self-Inflicted Chaos

Everyone keeps saying 2025 was the year AI finally clicked.

For me, it was the year I noticed how often I stopped thinking. I spent most of the year surrounded by tools, experiments, half-finished ideas, and the constant feeling that I should be moving faster. Machine learning here. A plugin update there. YouTube on the side.


On paper, it looked productive. In reality, it felt noisy.

The first thing that became obvious was energy. Not motivation. Not inspiration. Machine learning needs a ridiculous amount of it. Hardware, infrastructure, maintenance. Most AI showcases skip that part entirely. They show the aspirational output, not the cost.

The second realization was more uncomfortable. AI is very good at pretending. And I noticed how quickly I was willing to believe it, especially when I was tired and burned out from side hustles and unfulfilling work (at least from my perspective). That’s when I realized the real damage wasn’t coming from bad tools. It was coming from how easily I traded responsibility and creative tinkering for shallow outcomes. To AI. To ambition.

This article is what I learned and what that means for how I’ll work in 2026.

AI Looks Smarter Than It Is, and That’s Dangerous

What surprised me most wasn’t that AI failed. It was how rarely I questioned it when it sounded confident. Sometimes I was too tired and stressed to challenge AI responses. Sometimes I was too entangled in busy work. The output looked clean, structured, and good enough. That was usually enough to move on. That’s where the problems started.

Let’s be blunt. Most AI products today are still closer to vaporware than robust systems.

  • They shine in demos.
  • They struggle in messy real-world conditions.
  • They hide energy and maintenance costs.

AI works best in narrow lanes. Real work almost never does.

The problem isn’t that AI fails. Failure is natural, expected, even healthy. The real problem is that AI fails quietly and we humans are often too busy, too distracted, or too comfortable to question it. The output looks polished enough to trust, yet not solid enough to rely on. That subtle gap between appearance and substance is where real damage begins.

The moment we stop interrogating the output, because it sounds confident, because it saves time, because it flatters our desire for convenience, we hand over our judgment. I notice this most clearly in code generation, workshop planning, or any professional workflow where precision actually matters.

What frustrates me even more is how carelessly AI-generated content spreads today. People publish it without reflection, monetize it without adding value, and flood the world with noise that looks like insight but collapses under pressure. Some do it knowingly. Some simply don’t care. Either way, the result is the same: erosion of trust, erosion of craft, erosion of responsibility.

That’s the paradox of this moment. The AI revolution is exciting. But the way we handle it, the shortcuts, the complacency, the willingness to outsource thinking, is unsettling. The tool isn’t the problem. The problem is how easily we use it against ourselves. AI is holding up a mirror. What we see isn’t its failure, but our own as a modern society.

AI Is a Great Companion and a Terrible Replacement

AI is excellent at pretending to understand. It mirrors structure, tone, and patterns extremely well. That’s useful. It’s also risky.

In coding, this becomes dangerous fast. AI will happily introduce abstractions nobody asked for. It duplicates logic under the guise of cleanliness. Given enough freedom, it can turn a solid codebase into a fragile spaghetti monster that merely looks well-architected.

AI doesn’t understand your system. It imitates patterns convincingly. And here’s the uncomfortable part: our brains love that. We’re efficient. Sometimes lazy. Happy to hand over hard thinking if the output feels professional enough.

Short term, it feels faster. Long term, it creates more work, more debugging, and more wasted time.
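To make this concrete, here is a hypothetical before/after sketch of the pattern I mean. The function names and the abstraction are invented for illustration; the point is how an assistant can wrap three lines of working logic in ceremony that reads as "clean architecture":

```typescript
// The original, direct version: trivial to read, trivial to debug.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// The kind of refactor an AI assistant may volunteer unprompted:
// a generic interface and a class hierarchy nobody asked for,
// hiding the exact same logic behind an extra layer.
interface Formatter<T> {
  format(value: T): string;
}

class CurrencyFormatter implements Formatter<number> {
  constructor(private readonly symbol: string = "$") {}
  format(cents: number): string {
    return `${this.symbol}${(cents / 100).toFixed(2)}`;
  }
}

// Both produce identical output. Only one is easy to reason about
// six months later.
const direct = formatPrice(1999); // "$19.99"
const abstracted = new CurrencyFormatter().format(1999); // "$19.99"
```

Neither version is wrong. But the second one adds surface area without adding capability, and that is exactly the gap between looking well-architected and being well-architected.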

Perfection Is How Productivity Dies Quietly

While experimenting with new areas, I noticed a familiar pattern in how I work. The more I cared, the slower I moved.

Perfection sneaks in disguised as quality. Effort increases. Everything feels heavier. Overwhelm leads to depletion. Depletion leads to procrastination. That weight doesn’t motivate me. It drains me. And when that happens, nothing ships.

This hit hardest with my Figma plugins.

  • They help designers.
  • They save time.

Most users don’t pay, because free alternatives are good enough. That’s fine. I still support these plugins and use them daily. But financially, the effort-to-return ratio is terrible. Think ten cents an hour terrible.

And yet, I optimized them anyway. That’s part of my nature. If I publish something commercially, I want it to feel solid. Polished. Thought through. I’m willing to go the extra mile for people who trust my tools, even if that means nights fixing edge-case bugs or adding features most users will never notice.

You can see this in Notely, Contently, or Color Extractly. They’re built for everyday use. They stand out not because they replace creativity, but because they accelerate it.

The problem wasn’t the work. It was ignoring the cost of caring this much.

Craftsmanship matters to me. But without constraints, it quietly eats time, energy, and focus. Optimization feels responsible. Often, it isn’t.

When AI Made My Code Worse

I once let AI refactor a codebase I knew well. On the surface, it looked great. Clean structure. Logical naming. Everything felt right.

Two weeks later, the truth surfaced. The system was harder to reason about. Fixing issues took longer. Understanding intent became work.

Undoing that refactor took more time than writing the code manually would have taken in the first place. The lesson was simple. AI didn’t fail. I failed to stay accountable for thinking.

YouTube Helped Me Speak, Not Scale

YouTube pulled me out of my head. I started enjoying conversations more. Explaining ideas out loud instead of endlessly refining them internally.

That alone made the experiment worth it, regardless of views or growth.

Tactically, I failed my original goals. I buried the channel under ambition. Better production. More formats. More output. I tried to scale before finding rhythm.

As a result, I published nothing for around four months. That doesn’t mean I’m done. It means I need to refocus and return later.

More effort does not equal more progress.

Ambition Without Constraints

At one point, ideas were everywhere. New videos. New tools. Improvements. Experiments.

Spoiler: Nothing shipped.

The problem wasn’t motivation. It was direction. Too many priorities flatten everything into an unhealthy loop of perfectionism.

So What Does This Mean for 2026?

2026 is about focus, not expansion. This isn’t about lowering standards. It’s about narrowing where care applies, so it doesn’t consume everything else.

I want to invest more energy into partners who trust me with real responsibility.

Instead of spreading myself across platforms, I’ll focus on YouTube. Not for views. Not for success. But for the joy of creating, learning, and growing.

I also want to prioritize new products over endlessly perfecting existing ones. While writing this, I published two new Figma plugins, Masonry and Optimizely, built on a shared framework to maintain quality without constant micromanagement.

I want to improve products people genuinely find useful, not just the ones that satisfy my curiosity.

Less polish. More relevance.

A Simple Framework I’m Using Going Forward

  • Limit active projects to three
  • Add hard time constraints
  • Use AI for drafts, not decisions
  • Review outputs line by line
  • Measure effort against real value
  • Kill projects that only satisfy ego
  • Less digital overload, more me time

Credibility Bump

  • Built and shipped multiple Figma plugins used by thousands
  • Hands-on experimentation with ML and AI throughout 2025
  • Completed a full year of YouTube alongside client and partner work

Fixing unchecked AI-generated code took roughly two to four times longer than writing it myself. More than once, I tried to let AI fix issues while I handled other work, only to end up in the same place days later. When I finally sat down and fixed it manually, it took five minutes.

Takeaway and Comment Question

2025 taught me that AI, perfectionism, and ambition fail the same way when they replace thinking instead of supporting it. Use tools to elevate your work, but don’t hand over deep thinking.

You can only do so much. The higher your task stack grows, the less space remains for work that feels meaningful.

After my monologue, I’m curious: what are you carrying into 2026 that really needs to be left behind?
