Predictions for an AI Future, Now

In the last several weeks, many people working in this field, myself included, have felt something. Not the ground shifting, it was already doing that, but a sense of certainty that yes, this really will happen. The combination of the latest large language models with maturing agentic pipelines (often using MCP servers) means that human level performance (or better) is achievable quickly and easily on a surprising number of tasks. Speculation and awareness about this is hardly new; what is new is the certainty that a dramatic AI-induced change is unfolding starting… now.

The core economic idea here is cheap labor. Anything that can be done on a computer either already can be, or soon will be, done quickly and for pennies. We see this most strongly now in software development, where elaborate apps can be thrown together in minutes, but everything from spreadsheets to emails to presentations, the bread and butter of white collar jobs, can already be handled.

Doomsayers, with good reason, predict that something like 80% of white collar jobs could be lost (roughly half the US workforce). Many economists strike a more optimistic note, pointing out that most historic productivity boosts actually created more jobs in the long run (albeit different jobs; telephone switchboard operators don’t exist anymore, but countless higher paying IT jobs arrived instead). My feelings are somewhere in the middle.

I think this one really is different and will bring job losses, but not end-of-the-world level losses. Before we can really pin down a number, we first need to understand why.

Frightening Speed versus Speed Bumps

One thing I really need you to understand is that the massive acceleration of software development shortens the timeline for every other career. Right now, AI for engineering is in its early stages, but it is now so easy to start building CAD integrations and similar tools that it is only a matter of time until bridges are routinely designed by AI. This applies to many other jobs as well. Advanced tools for medicine and law, all quite feasible now, will become very high quality very quickly. Months ago I thought that electronic engineering (a hobby of mine) would be safe for quite a while, but already generic LLMs are starting to master pulling together circuit designs. This is true exponential growth, and that means freaky fast.

But before you grab your tin foil hat and start arming to fight Terminators, I also see a lot of reasons to believe that widespread economic changes will be much slower, even if many new tools arrive at light speed.

I give you, the speed bumps:

Shallow Digitization

One of the biggest things holding back AI is just that most processes are not nearly as digital as companies pretend they are. We have digitized our tools, but not our workflows. And digitizing existing workflows is tough.

For example: Person A emails an update to Person B, who puts the data into a spreadsheet and shares it in a meeting with Person C, who communicates a decision (after one or more long meetings with Persons E, F, and G) to Person D, who enters it into some software, leading to a sale or order that often involves a phone call. A large business has thousands of these processes, poorly documented and constantly handling strange exceptions. I can easily take each of those individual actions and put it into an agentic pipeline, but actually capturing the business process, with all its little random inputs, is quite hard (though possible). The AI is the easy part.
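To make that concrete, here is a minimal, hypothetical sketch of what "putting each action into an agentic pipeline" looks like: every human handoff becomes an explicit, auditable step, and any unmodeled exception falls back to a human. All the names (`Pipeline`, the step functions) are illustrative, not a real framework; the point is that the steps themselves are trivial, while discovering and encoding the real process (and its exceptions) is the hard part.

```python
from dataclasses import dataclass, field

@dataclass
class Pipeline:
    # Ordered list of (step_name, function) pairs.
    steps: list = field(default_factory=list)

    def step(self, name):
        """Decorator that registers a function as the next pipeline step."""
        def register(fn):
            self.steps.append((name, fn))
            return fn
        return register

    def run(self, data):
        for name, fn in self.steps:
            try:
                data = fn(data)
            except Exception as exc:
                # Unmodeled exceptions escalate to a human, mirroring the
                # "strange exceptions" real business processes constantly hit.
                return {"status": "needs_human", "failed_step": name, "error": str(exc)}
        return {"status": "done", "result": data}

pipeline = Pipeline()

@pipeline.step("collect_update")      # Person A's email
def collect_update(data):
    return {**data, "update": f"units sold: {data['units']}"}

@pipeline.step("record_in_sheet")     # Person B's spreadsheet
def record_in_sheet(data):
    if data["units"] < 0:
        raise ValueError("negative units: needs human review")
    return {**data, "row": [data["units"], data["region"]]}

@pipeline.step("decide")              # Persons C/E/F/G's meeting
def decide(data):
    return {**data, "decision": "reorder" if data["units"] > 100 else "hold"}
```

Running `pipeline.run({"units": 150, "region": "EU"})` flows cleanly to a decision, while `{"units": -5, ...}` stops at the spreadsheet step and hands off to a human.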

As an AI engineer, I keep seeing this. I don’t really worry much about hallucination; tools used carefully are no worse than humans at making things up entirely (I actually trust AI more than many humans on that now). But actually figuring out what goes where to drive business value, then linking it all together reliably, is hard. And the bigger the company, the harder it is. Large corporations feel more ‘black box’ than any neural network.

I expect some companies (particularly small companies with elite staff) will pull ahead here simply because their processes are already more digital-ready. Since big companies provide most of the jobs, economic changes will be slower, even at the big tech companies (which I can speak to, as I work inside one: it is going to take years for leadership to realize their dreams of a significantly smaller workforce). I am guessing medium sized companies will be hurt the most, lacking both the agility of small companies and the capital power and inertia of large ones.

Liability and Laws

Medicine is perhaps the best example of a domain that will be able to stave off AI for a while. Laws restrict what tools can be used. Liability scares investors and developers. But do not for a second believe that this domain is safe for particularly long.

Already, many people are getting treatment advice from LLMs, and this will expand greatly. The irony here is that the massively expensive US healthcare system may be the instrument of its own destruction, because the potential profits mean that cheap AI plus a team of lawyers and lobbyists will still be much cheaper (more profit margin) than the existing system. Yes, surgeons are probably safe for quite a while, as are many physical long term care jobs (changing bedpans). But anything that can be solved with a pill (or where people just need reassurance and basic advice) will likely go mostly human-free pretty soon.

So liability and laws do slow things down, but AI powered solutions being so much cheaper, and small startups being supercharged by AI, mean this swarm of mosquitoes will breach these defenses sooner rather than later.

Digital Risks

We are fast approaching a reality where a five person company can command billions in revenue, overseeing an army of digital agents managing all parts of the business.

The risk? A ransomware attack can destroy all those agents and all the company IP overnight. People are much safer (you can take a bunch of human employees hostage, but it is a lot harder to do, and a lot more likely to fail dramatically). And remember, supercharged software development also means very powerful, adaptive digital attacks. These risks can be overcome, but awareness of them is going to keep a lot of companies from diving in too quickly. Solutions will need to be proven, then the workforce gradually reduced. Even then, I rather expect many companies will prefer to stay a bit more overstaffed with humans than they strictly need (partly for reasons of leadership vanity).

Expect “pro-Human” terrorist groups and hackers to become noticeably active soon (pro-Human is one thing that can unite different religions and fringe groups for common action), actively targeting what they believe are the biggest movers in ending human jobs. Like the Luddites of old, this is more of a wild card than a game changer, and an increasingly political issue. Banning AI is both a bad idea and basically impossible (because capitalism), but restrictions and limitations are likely (I have no idea where restrictions will land; I mostly guess they will be performative limitations rather than serious hindrances to development).

Blue Collars, Safer but Big Changes

Many people foresee manual labor jobs like construction being safe havens from AI. This is generally true, but a lot more change is coming than is often predicted.

The first reason is faster, cheaper, and more accurate diagnosis. You will be able to point a camera at your broken water heater and get a diagnosis of what is likely wrong, simple checks to verify it, and fixes (if easy). DIY repairs will become easier, which will likely reduce blue collar demand somewhat. More importantly, existing workers will become more efficient, likely showing up knowing exactly what the problem is, with the appropriate part in hand. Longer term, we may even see appliance reliability improve as the feedback loop to manufacturers closes and AI agents diagnose potential problems earlier in designs.

Secondly, robots are finally going to become more common. The main challenge with robots is how expensive they are, in large part because programming them reliably is really hard. Enter the massive speedups in software development. Robots will become practical enough to see in use everywhere from McDonalds to hospitals. That said, robots will still be expensive and best suited for repetitive, expensive, or dangerous tasks (same as now, but with a lower threshold for where robots are used).

What I don’t see are hands-on jobs increasing in number enough to make up for job losses elsewhere. In fact, increased (human) labor competition is likely to drive wages down. I see an increased focus on unions as existing tradesmen fight to defend their salaries.

A Human Premium?

Some people predict that products “certified made by humans” will become important. I doubt it greatly. Yes, handmade, craftsman products will have a strong market, but not much differently than they already do.

Premium humans, however, will have a lot of value. Something we are seeing now is that senior engineers and good project managers (most PMs are not that good) are critical for running these AI projects. Someone has to manage the vision and sort out the prioritization while agents build the code. Right now, AI agents do not look able to fully replace those roles, and full AGI (which to me means AI good enough to take on dynamic senior roles) is still very much in the “maybe sometime” phase. This also just comes down to good systems design: systems are most reliable and predictable when built from smaller, modular units, and this is critical for AI systems. I generally expect AI to fill all those modular roles, while expert humans oversee the maintenance, testing, connection, and expansion of these systems, with some ‘human minions injected as lubricant into the AI system’ to help make sure the modular pieces run smoothly.

Some people have predicted that AI will lead to a massive underclass of “AI data labelers”. On the contrary, AI can already label its own data much of the time, with human input really only useful from, again, true experts. The same people predict that a massive underclass of “gig workers” will absorb the laid off white collar workers. On the contrary, self-driving taxis and food delivery bots are promising to take those gigs instead; indeed, most freelance jobs are among the most vulnerable to automation.

One critical element to add here is the phenomenon variously called the Jevons Paradox, Induced Demand, or the Rebound Effect. We have already seen this as software engineers: despite being massively more productive with AI, we still have a constant stream of work to do. We always had much more that we wanted or ought to do than we had time for, and now we are getting it done, faster. One simple way of putting this is that things are getting “fancier”: better software for the same price, rather than the same software at a cheaper price.

One thing we are already seeing is a struggle for people freshly out of college. These are the positions most easily replaced by AI, the interns and junior devs. This is only going to get worse. Yes, there will still need to be a pipeline of talent development to fill those senior roles, but the pipeline will be smaller. This will favor people with connections and a handful of geniuses, and it creates one of my biggest worries: a risk of reduced upward mobility. Top talent may have more opportunities (between senior roles and thriving startups), but average people will face fewer routes to high paying jobs.

I could see PhDs, currently overproduced, enjoying a small resurgence, possibly becoming equal to the MDs of today in selectivity and pay by being true experts, at least from elite universities.

Don’t plan to switch roles into AI. That might seem self-interested, seeing as it is my role, but from what I have seen, competition for entry level positions is insane. A better plan is to become an expert at something else (something you enjoy) and learn AI as supporting knowledge.

Timing is Everything

You may have noticed I have bounced between gloom and sunshine without really locking down the overall economic impact of AI. That is because a lot depends on timing, and all sorts of things could change the timing (a global war, for example). If unemployment spikes and wages fall first, a major economic collapse materializes. If costs fall first, then wages, we don’t see as much harm.

Why should costs fall? A lot of the clearest benefits of AI accrue to small businesses and startups, and a boost to market competition drives prices down. Secondly, consumers will be able to access much cheaper AI options for some medical, legal, and educational services, and so on. I generally expect AI to boost efficiency at businesses by reviewing decisions and surfacing action items quickly and clearly. I think it will also help guide consumer spending, with consumers able to get advice that helps them allocate capital to what they most appreciate and benefit from. And of course, reduced labor costs.

Generally I do expect effective deflation, but it might not show up clearly in the data (shadow deflation), because it won’t be so much the price of the same good going down as users substituting cheaper AI alternatives and being guided towards more cost effective choices. Indeed, since housing prices will likely continue their steady increase, standard price tracking indices may miss an overall decrease in the cost of living.

And what genuinely worries me above all else here? National debts. Gigantic national debts + deflation + a falling income tax base = disaster. It could force government austerity right when investment is needed, and that could put an economy on the back foot for a decade.

Long term I see a more clearly positive economic picture. AI should really favor efficient and effective infrastructure creation, which is always the basis of a strong economy. I also see AI helping clean up legal codes, creating minimal but impactful regulation (a legal refactor). Corruption will be easier for lone wolf podcasters to trace. Of course, this requires serious political engagement.

Geopolitics are unclear, but I expect the hardest immediate pain will be felt by countries (like India) with major outsourcing economies. The richer world (US, China, EU) will generally be better off, but it will be a risky time; at least one country will likely stumble here. Long term, I expect it could help revitalize rural areas, which will be able to easily access services they could not before, and small startups in Africa (or wherever) will be able to compete on the global stage like never before. Rather like how I said small startups and big corporations looked best positioned, I think the largest cities and rural areas will benefit more than the cities and countries caught in the middle.

My Summary Prediction

Headlines are going to keep coming fast, and changes on the job (what tools are available) are probably going to come pretty fast too. But, due to those speed bumps, I expect the underlying movement of the economy to be slower and steadier. Indeed, I expect many people will continue to feel AI is over-hyped as fast headlines contrast with slower fundamental changes.

Induced demand, some new job types, the inertia and risk-aversion of large corporations, and invigorated small businesses will help tone down the job losses that AI enables. The challenge is that this will hit many industries at once. The hardest hit will be those freshly out of college; the economy can better survive this, but the depression of young people may be the cost of broader economic continuity. We need to use the time they buy us well.

I wish to emphasize that I think much of this is going to be our choice. If business leaders choose to aggressively cut jobs before costs have come down significantly, a lot of pain will follow for everyone. I expect politicians will take action, but we need to make sure that action is actually useful. Both an unprecedented economic disaster and a thriving bright future are possible. Probably we will muddle our way to somewhere in between.

This flowchart is the only part of this post created by AI. It is a simplification, obviously, meant as a focal point for discussion. I am trying to show how business and government (and behind both, the people) together control our fate. Yes, government is important; this is a tragedy of the commons situation. I personally expect something like Outcome 2 (enough government and business action to keep the economy moving, but with some people losing ground).

Actionable Items

  • Filtering problems are going to become even more important. With the internet flooded with content, from AI generated videos to AI generated software, the “signal” will be increasingly buried in noise.
  • Verification problems: with things being easier to fake, services like “am I actually hiring a human” will likely be strong, but I don’t think this will be a very large field; probably a few reputable vendors will dominate (perhaps a bit like Visa and Mastercard dominating credit cards). I actually believe humans will generally adapt to the new trust dynamics, and this may even improve intelligence by forcing more people to practice critical thinking in their daily lives.
  • Energy and land will be two areas of steady investment. I think compute itself (Nvidia) will be more contested, with moats like CUDA easier to breach via AI coding, but these compute and tech companies still hold value as long as they remain competitive. Energy includes both the electric grid (solar, nuclear, etc) and energy storage and portable energy (batteries). No, compute will not become currency.
  • Luxury brands, fitness, sports, and leisure travel will remain strong markets (note that for investing these will still be strongly exposed to economic downturns, but generally that’s more status-quo than AI impacted). Some commentators call out entertainment more generally, but I think there will be a lot of disruption there too (especially music, movies, maybe the metaverse will actually happen).
  • Adaptability and flexibility are critical skills, even if you cannot be an expert. Being able and willing to dive in and deal with whatever random problems come up is a good mindset. Denying AI changes is not going to be a good career move.
  • Be an expert. Human experts in fields will remain valuable. Also those who are good at identifying and bridging gaps (truly good project managers).

Political Action Items

  • Get the spending deficit under control ASAP. I know that’s hard for all political parties (it hasn’t been done in the US since Bill Clinton, and I was so young I don’t even vaguely remember it), but it needs to be done.
  • Government spending on infrastructure with a stipulation of minimal robotics use (note AI in supply chain and design might still be wise for efficiency), or perhaps something more like the WPA or CCC.
  • A new version of the CCC (Civilian Conservation Corps) actually seems like a good idea, putting labor into protecting the environment and helping bridge a youth unemployment gap. But it would take some work to modernize this concept effectively.
  • Look at making university free, at least in some cases (say, for useful but low salary industries, education perhaps), or, less ambitiously, improve loan terms, or expand PhD program funding to assure a supply of human experts. This keeps young people from walking into high unemployment and high debt at the same time, which is a major economic burden.
  • Harsh tax penalties for companies with healthy balance sheets that reduce head count quickly. Businesses will probably sink this with lobbying, but it would be great to make sure costs drop before wages do.
  • We need to change the educational targets of secondary schools. “Asking the right questions” and “creatively driving value” and “systems thinking” are already a small part of the curriculum but will need to be most of the curriculum. This isn’t a new problem, I felt my neuroscience degree 10 years ago was mostly pointless memorization (I felt my Classics degree was better because it was aware that history memorization was not so important, and tried to teach the deeper lessons).
  • I don’t think we need Universal Basic Income (UBI) out of the gate; it is too contentious and not needed immediately. But we need to keep thinking about options like this, because in 20 or 50 years a new means of distributing resources may be needed. The book Extras by Scott Westerfeld has always stuck with me, and might be a good young adult friendly intro to this topic.
  • Trade unions are going to be important, but not the solution to problems. I see unions acting more as fortresses, bastions of steady salaries, but more mobile defenses are needed to preserve an economy for all.
  • Mandating human audits of AI has some potential, but risks being useless bureaucracy if not done carefully.
  • Mandating that AI remain in smaller, modular pieces. Monolithic, vertically integrated AI is where all the risks are (from Terminator Skynet risks to its own ransomware vulnerability). We as engineers can lead the way on this and often already do (microservices, Zero Trust, etc). A Capability Gated Architecture (CGA) reflects the idea of combining microservices and zero trust with a limitation on capabilities. Right now this would mostly be expressed as limiting and auditing MCP tool permissions: verifying that most agents have limited tool access, that sensitive tools are only usable where needed, and that no agent has end-to-end access over critical systems.
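The capability gating idea in that last item can be sketched very simply. This is a hypothetical illustration, not a real MCP API: every name here (`AGENT_CAPABILITIES`, `call_tool`, the agent and tool names) is invented for the example. The point is that each agent carries an explicit allowlist, every tool call passes through one gate that is audited, and no single agent holds end-to-end access over a critical path.

```python
# Hypothetical capability gate: per-agent tool allowlists, checked and
# audited on every call. Sensitive capabilities (e.g. creating invoices)
# are deliberately split across agents so no one agent has end-to-end reach.

AGENT_CAPABILITIES = {
    "report-writer": {"read_sales_db", "draft_email"},
    "invoice-agent": {"read_sales_db", "create_invoice"},
}

AUDIT_LOG = []  # (agent, tool, allowed) for every attempt, permitted or not

class CapabilityError(PermissionError):
    """Raised when an agent attempts a tool outside its allowlist."""

def call_tool(agent: str, tool: str, payload: dict):
    allowed = AGENT_CAPABILITIES.get(agent, set())
    AUDIT_LOG.append((agent, tool, tool in allowed))  # audit first, always
    if tool not in allowed:
        raise CapabilityError(f"{agent} may not call {tool}")
    # In a real system this would dispatch to the actual tool/MCP server;
    # here we just return a stub describing the permitted call.
    return {"agent": agent, "tool": tool, "payload": payload}
```

So `call_tool("report-writer", "read_sales_db", {...})` succeeds, while `call_tool("report-writer", "create_invoice", {...})` is refused and logged. The same pattern maps onto auditing MCP tool permissions: the gate is small, central, and reviewable, which is exactly what a human audit needs.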

Colin Catlin, February 2026



flowchart code

flowchart TD
  A["Agentic AI Economic Impacts<br/>Possible Outcome Paths"] --> B{"Decision 1:<br/>Do businesses slow job losses?"}

  %% Slow job losses branch
  B -->|Yes: job losses are slow| C{"Decision 2:<br/>Does government invest in reskilling + infrastructure?"}
  C -->|Yes| S1["Outcome 1:<br/>Broad-based thriving economy<br/><br/><i>Example:</i> GI Bill + Interstate Highway System, US post-WWII<br/>Large-scale skills investment + infrastructure helped drive long-run growth and broad middle-class expansion."]
  C -->|No| S2["Outcome 2:<br/>Downward mobility for displaced workers<br/>into lower-pay service/retail work<br/><br/><i>Example:</i> US manufacturing decline (late 1970s to 2000s) in parts of the Midwest<br/>Many displaced workers moved into lower-wage services amid limited retraining/industrial policy."]

  %% Fast job losses branch
  B -->|No: job losses are fast| D{"Decision 2:<br/>Can the government step in and create enough jobs?"}
  D -->|Yes| F1["Outcome 3:<br/>Direct job creation (or UBI) stabilizes society<br/>but wages/roles are lower and fiscal cost is very high<br/><br/><i>Example:</i> Dust Bowl to New Deal work programs, US 1930s<br/>WPA and CCC created large numbers of jobs, improving stability at significant public expense."]
  D -->|No| F2["Outcome 4:<br/>Worst case: long-term decline<br/>(persistent unemployment, weak demand, social strain)<br/><br/><i>Example:</i> Post-Soviet economic collapse, 1990s<br/>Rapid restructuring without adequate safety nets/jobs policy led to prolonged hardship and institutional strain."]

  classDef outcome fill:#f7f7f7,stroke:#333,stroke-width:1px;
  class S1,S2,F1,F2 outcome;
