Major paradigm shift?
Yes. (I'm in a mood to be blunt and direct.)
Are SaaS companies in trouble?
If they don't pivot hard and fast, yes. Most will start to fail soon.
What about other kinds of software companies?
Same answer.
I thought AI was going to make software companies lots of money?
There's tons of money to be made, but software itself is already a commodity for those who have figured out how to leverage AI agents for coding.
I thought AI code was slop?
It can be. But at this point there's no doubt at all about this:
The age of making millions because software is useful, really hard to make, but nearly infinitely scalable, is over.
If you've been on the inside of this shift over the last couple of years, actually coding real things, increasingly leveraging AI (even though you have the skill to do it yourself), you know what I mean. If you're not in that group--if your perception of AI utility is still shaped by the era of problematic, non-deterministic, hallucination-prone chats with an LLM--you will likely think I'm being a little hyperbolic. (Note: the pseudo em-dashes are mine. I'm a real human who can't be bothered to remember the key combo to spit one out right now.) Maybe I'm just reading trends and thinking too optimistically, "we'll get useful AI soon." No. That's not it.
This is not a future prediction anymore. There is still plenty that no one knows about how this will play out, but useful agentic coding for real work is here today.
Isn't AI an overblown marketing gimmick that nobody really wants?
I get it. I don't want AI junk features in my accounting software either. Most of what I see people tacking onto their SaaS offerings is junk. I can see why many are convinced that this AI trend is just a misguided economic bubble full of promises that have not materialized. I agree--to an extent. But here's the thing: those AI add-on use cases meant to keep your SaaS relevant are mostly a dead end. They're the temporary churn of mass missing-the-point.
LLMs are non-deterministic.
I'm an engineer. I have enough technical depth to have caught blatantly ridiculous errors from LLMs for years now. But my perspective on this goes further back than that. In 2000, I sat in a university lab, having learned the math of neural networks, and wrote a C program that trained one to recognize letters on blocks. Then I used it to program a robot arm to sort blocks, with previously unseen shapes and letter fonts, into the right piles.
Recent "AI" hype aside, neural networks and machine learning techniques proved their utility in real applications a long time ago.
But since LLMs are non-deterministic, that means they're unreliable, right?
Well, humans are non-deterministic too. Statistical probability might be non-deterministic, but it's not useless slop. For humans on an assembly line, we make jigs and fit-one-way components so our non-deterministic friends can get it right with six-sigma levels of confidence. How do you leverage AI agents for real business utility today? The jig is your free hint.
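To make the assembly-line analogy concrete, here's a minimal sketch (my own illustration, not any particular product or API): wrap a non-deterministic worker in a deterministic "jig"--a check that rejects bad output and retries. If each attempt independently succeeds 90% of the time, six checked attempts leave a failure probability of 0.1^6, about one in a million, which is roughly six-sigma territory.

```python
import random

def flaky_worker(rng):
    """Stand-in for any non-deterministic process (an LLM call, a human).
    Succeeds ~90% of the time; returns garbage otherwise."""
    return "VALID" if rng.random() < 0.9 else "garbage"

def jig(output):
    """Deterministic check: the 'fit-one-way' constraint. For a real
    coding agent this might be a schema check, a compile, or a test suite."""
    return output == "VALID"

def run_with_jig(rng, max_attempts=6):
    """Retry the worker until the jig accepts, up to max_attempts.
    Per-attempt success p=0.9 gives overall failure (1-p)**max_attempts."""
    for _ in range(max_attempts):
        out = flaky_worker(rng)
        if jig(out):
            return out
    return None  # one-in-a-million case at these numbers

rng = random.Random(42)
failures = sum(run_with_jig(rng) is None for _ in range(100_000))
print(failures)  # expect 0, or very close to it
```

The point isn't the toy numbers; it's that a deterministic constraint around a probabilistic process is what turns "unreliable" into "reliable enough to ship."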
Subscribe for more if you're interested.