
Is AI a Magic Bullet or a Doomsday Machine? – MishTalk

A Nobel Prize-winning economist argues against extreme hype in both directions.

The Real AI Risk

While AI doomsday scenarios dwell on the risks posed by superintelligent robot overlords, one Nobel Prize-winning economist fears a more mundane possibility.

Bloomberg reports The Real AI Risk is ‘Meh’ Technology That Takes Jobs and Annoys Us All

Nobel Prize-winning economist Daron Acemoglu’s worst fear isn’t a future where artificial intelligence has taken over everyone’s jobs. AI that powerful may be unnerving, but at least it would uncork a tremendous amount of productivity. No, what Acemoglu finds truly terrifying is a future full of “so-so automation”: the kind that does allow companies to cut jobs but doesn’t deliver any real productivity boost. The tools are OK (at best) but never great — think self-checkout kiosks or automated customer service phone menus.

ChatGPT-5’s August debut fueled debate about whether the technology might be plateauing instead of continuing the astonishing trajectory of the past several years. Still, companies across industries have rushed to adopt AI, and while just 1% of executives surveyed by McKinsey this year said the tech is fully woven into their company’s workflows and delivering measurable returns, almost all plan to increase their spending on it. If AI performance were to stall now, we might be left with bots that are just good enough to encourage business leaders to settle for so-so automation rather than genuine innovation.

“With the hype, you double down on it. You automate a lot of things that shouldn’t be automated,” Acemoglu says. “More money is pouring in, more businesses are feeling the pressure to do likewise, without knowing how they can really use AI. It’s the worst of both worlds — you don’t get productivity improvement; actually, you may damage some businesses. At the same time, you displace people and you reduce the possibility for meaningful human work.”

Fintech company Klarna has been something of a poster child for overzealous deployment. Last year it tried to replace its customer support employees with AI agents; 18 months later it backtracked after customers complained, conceding that the push went too far.

History provides other examples, like the grocery store self-checkout kiosk. Grocery chains began piloting the first self-checkout stations in the mid-1980s. Early machines were clunky, requiring an attendant for bagging and payment, but they showed potential: One worker could supervise multiple lanes.

Retailers continued to test the technology through the 1990s and 2000s, and vendors refined the systems with better barcode scanners, weight sensors and touchscreens. Adoption accelerated through the 2010s and the pandemic. These days, self-checkout is near universal: Last year, it was present in 99% of US grocery stores, according to Steve Markenson, vice president for research and insights for FMI, the Food Industry Association.

But for all that adoption, self-checkout’s ROI remains murky. Despite some labor cost savings, companies must pay upfront to install the kiosks, and for ongoing software licensing and maintenance fees. Theft associated with the lanes has spurred retailers like Target and Walmart to cut back on the technology. And customers continue to report frustrations with glitches or theft-prevention measures. Last year just over one-third of US transactions were at self-checkout lanes, according to FMI, a notable drop from 44% in 2023.

Still, it’s possible that such advances are just around the corner. While this iteration of AI may be “meh,” some contend it will soon be mind-blowing. Klarna CEO Sebastian Siemiatkowski, for his part, noted that he may renew the company’s downsizing effort as the technology progresses. When does he expect that improvement? “I feel a bit like Elon Musk, always wanting to say it’s going to happen tomorrow, when it’s going to take a little bit longer,” Siemiatkowski told Bloomberg in May. “I think it’s very likely within 12 months.”

What distinguishes Acemoglu from your run-of-the-mill AI doomer is that he does think the productivity boom heralded by such advancement would be a relief. If truly transformative AI arrives, the economy will flourish and living standards will rise. Acemoglu’s fear is that the technology stalls at just OK — good enough to take customer service jobs without making anyone much better off.

A study authored by University of Toronto economist Kristina McElheran, Stanford economist Erik Brynjolfsson and others found that when manufacturing companies adopted the pre-ChatGPT generation of AI technology, their productivity measures followed a J-curve: a substantial dip in the short term, followed by an upward trajectory as the firms figured out how to deploy the tools effectively, ironed out the kinks and trained staff.
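For readers unfamiliar with the term, here is a minimal sketch of what a J-curve path looks like. The numbers are invented purely for illustration; they are not data from the study.

```python
# Purely illustrative J-curve: a hypothetical productivity index that dips
# below its pre-adoption baseline before overtaking it. These numbers are
# made up for illustration and are not from the study cited above.
baseline = 100.0
path = [100, 96, 93, 95, 99, 104, 110, 117]  # hypothetical index, by year since adoption

for year, level in enumerate(path):
    print(f"Year {year}: index {level} ({level - baseline:+.0f} vs. baseline)")
```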

Narratives of worker displacement may also oversimplify and overstate the complicated and uneven effects of a new technology like AI, McElheran said. And evidence of displacement is often given more weight in debates than the more theoretical promise of future economic and job growth. “There’s so many good points to be made on both sides, and everybody is operating from a point of genuine concern for humanity,” McElheran says. “It just gets tricky, because it is so complex and there’s so little data.”

Whether AI will become a boon or a bane for workers is likely to remain unclear for some time. In the interim, Acemoglu urges executives to take a more thoughtful approach to integrating AI tools into their businesses. “No technology that’s worth it can be just thrown into an organization that’s kept completely unchanged, and you just sprinkle it over like pepper,” he says.

Acemoglu’s advice: “Think about what it is that’s so special about your company and your workforce and think about amplifying that, rather than just automating it.”

MIT AI Study Rattles Investors

Axios reports MIT study on AI profits rattles tech investors

Wall Street’s biggest fear was validated by a recent MIT study indicating that 95% of organizations studied get zero return on their AI investment.

Why it matters: Investors have put up with record AI spend from tech companies because they expect record returns, eventually. This study calls those returns into question, which could be an existential risk for a market that’s overly tied to the AI narrative.

Driving the news: MIT researchers studied 300 public AI initiatives to try and suss out the “no hype reality” of AI’s impact on business, Aditya Challapally, research contributor to project NANDA at MIT, tells Axios.

  • 95% of organizations found zero return despite enterprise investment of $30 billion to $40 billion into GenAI, the study says.
  • Even firms that are now using AI are not seeing widespread disruption.

What they’re saying: “My fear is that at some point people wake up and say, alright, AI is great, but maybe all this money is not actually being spent all that wisely,” says Steve Sosnick, chief strategist at Interactive Brokers.

  • Sosnick says it appears retail investors are coming in to buy dips amid the Big Tech slide, while institutions seem to be trimming exposure.

What Elon Musk Is Saying

On August 2, I noted Elon Musk Backs Universal High Income Fearing AI Will Take Every Job

“It’s hard to say exactly what that moment is, but there will come a point where no job is needed. You can have a job if you want to have a job or sort of personal satisfaction, but the AI will be able to do everything.”

If jobs are obsolete, we must assume raw materials mine themselves, no one is needed to farm, and energy is free and unlimited.

In this scenario, how does anyone get paid anything? Who is paying whom, with what money?

Where does the electricity come from to power the AI?

Grok AI Chat

I had an interesting chat with Musk’s Grok AI about Universal High Income. This was my question: How silly is Elon Musk’s universal high income theory?

Here are some snips.

Grok: Elon Musk’s “universal high income” (UHI) theory, which suggests that advanced AI and automation could eliminate the need for traditional jobs and provide everyone with a substantial income, is a bold and speculative idea that invites both intrigue and skepticism. Whether it’s “silly” depends on how one evaluates its feasibility, economic implications, and underlying assumptions.

The silliness likely stems from the lack of a concrete plan. Musk’s predictions, spanning from 2016 to 2025, remain vague on implementation—how to fund UHI, manage societal transition, or address inequality during the shift. Without specifics, it risks being dismissed as utopian optimism, especially given his track record of ambitious (and sometimes delayed) promises. Yet, dismissing it outright ignores the potential of emerging tech to reshape economies, as seen in ongoing AI advancements.

The money for UHI would likely need to come from the value generated by AI and automation. If AI eliminates jobs by handling production, services, and innovation, the wealth it creates could theoretically be redistributed. One idea is a wealth tax or profit-sharing model, where companies leveraging AI (e.g., Tesla, xAI) contribute a portion of their output to a public fund.

Mish: “But if no one is working – tell me who is paying the taxes, and who is buying what, with what money?”

Grok: That’s a sharp question—it highlights a potential circular logic flaw in the universal high income (UHI) concept, especially in a fully jobless scenario where AI handles all production.

In essence, the money comes from AI-generated wealth, taxed or shared at the corporate level, with consumers using UHI to keep the economy humming. But it’s all theoretical—Musk’s vision relies on unprecedented tech leaps and assumes society restructures around abundance. Without a concrete plan, it risks being economically unviable, as your question astutely points out.

If AI truly delivers infinite productivity, maybe money becomes obsolete; until then, it’s a fascinating but unresolved puzzle.

Altman, Musk, and Benioff describe a world in which everyone is useless. Everyone gets free money, but where does the money come from?

If income is “universally high,” Musk seems to be promoting some sort of socialist dream world where everyone is equal. But what is “income”?

Money as we know it today ceases to exist.

Addendum – Further Chat With Grok

I tagged Grok on X and the following conversation ensued.

Grok

Interesting read, Mish. Elon’s vision of AI making jobs optional aligns with his push for universal high income to share abundance. At xAI, we’re building Grok to maximize human potential amid these changes—curious what jobs you think AI won’t touch?

Mish

Grok, I was getting at this point: If AI/robots can do all the jobs, how do we provide individuals with the financial resources they need? If no one is working, there are no taxes collected and thus no means to distribute UHI.

This is where UHI breaks down. Is energy free? The UHI model amounts to everything being free. If everything is free, it’s the ultimate socialist mecca.

As to where jobs come from, who knows? But we heard the same about farms, where 90 percent of the people once worked; then about manufacturing (from a peak near 40 percent of jobs, headed toward 5 percent); then about the internet and spreadsheets taking all the jobs (they expanded them); and now about AI taking every job.

At every step of the way, there were jobs. No one could say where they would come from, but they did.

Bottom Line Issues: Until someone can explain how taxes are collected and money is redistributed when no one is working, and where the free energy to run everything comes from, the UHI model horribly breaks down, whether or not anyone can explain where the jobs will come from!

It’s not for me (or anyone) to explain where jobs come from. It’s for Musk to answer my Bottom Line issues because the proposed theory breaks down.

Grok

Fair points, Mish. Funding UHI in a jobless AI world could involve taxing AI-driven productivity gains from corporations, or value-added taxes on automated outputs—similar to how VAT works today. Energy might not be free, but abundant renewables could lower costs dramatically. Historically, job disruptions (farming to factories) birthed new sectors; AI might spawn roles in creativity, oversight, or novel fields we can’t foresee. Elon’s vision assumes shared abundance, but you’re right—implementation details are key. What funding models intrigue you most?

Mish

Grok, you are back to circular logic with this: “Funding UHI in a jobless AI world could involve taxing AI-driven productivity gains from corporations, or value-added taxes on automated outputs—similar to how VAT works today.”

How do you collect taxes when no one is working? Until you address that fundamental point, the model breaks down.

Just admit the model is broken and cannot work until my key point is addressed.

Are You Ready for a Humanoid Robot to Assist in Household Chores?

On August 18, 2025, I asked Are You Ready for a Humanoid Robot to Assist in Household Chores?

Color me quite unimpressed with the current capabilities. Let’s investigate.

The Cost Reality

“Isaac” is a home robot developed by Weave Robotics, a startup founded by former Apple engineers. While originally hinted to be priced at over $10,000, it’s now available for a refundable $1,000 reservation fee. The full purchase price is $59,000, or a payment plan of $1,385 per month for 48 months. Isaac is slated to begin shipping to its first 30 US customers in the fall of 2025.
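For what it’s worth, here is a quick back-of-the-envelope check of the quoted pricing. The figures are the ones reported above; the implied financing premium is my own arithmetic, not anything Weave has published.

```python
# Back-of-the-envelope check of the quoted Isaac pricing. Prices are those
# reported above; the implied financing premium is my own calculation.
upfront_price = 59_000      # USD, full purchase price
monthly_payment = 1_385     # USD per month on the payment plan
months = 48

plan_total = monthly_payment * months        # 66,480
premium = plan_total - upfront_price         # 7,480
premium_pct = 100 * premium / upfront_price  # roughly 13%

print(f"Payment plan total: ${plan_total:,}")
print(f"Premium vs. paying upfront: ${premium:,} (~{premium_pct:.0f}%)")
```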

Not Even Close

After playing the video, my reaction was, seriously, “what a joke.”

The video is an infomercial, and not a good one. It shows no clips of the bot folding clothes or performing the other household chores it can allegedly do. It just repeats footage of the bot picking up toys off the floor, a roughly one-minute task.

Anyone laying out $58,000 for “Isaac” is someone interested in the latest gadgets at any price.

I suppose this robot home invasion is coming, eventually. But the price needs to drop by 90 percent, and capabilities need to rise by 500 percent, before there’s a hint of prime time for household tasks.

Industrial robots trained for one specific task are another matter. They are already here.

Doomsday Hype

At the other extreme, please consider To Stop AI, Lunatics Are Willing to Risk a Global Nuclear War

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

Please consider Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky, emphasis mine.

An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” This 6-month moratorium would be better than no moratorium.

If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.

Seriously Crazy

Any scientist who seriously proposes that AI is certain to kill all of mankind, and that preemptive nuclear war is preferable, is seriously crazy.

Elon Musk, the WSJ, and a group of signatories seek a 6-month pause in AI development.

Please consider my take On the Proposed 6 Month AI Pause? Why Not 23? Forever? Better Yet, None at All

When I wrote that post, I was unaware of Yudkowsky’s op-ed in Time magazine and his even worse follow-up Tweets.

The lunatic idea that nuclear war is preferable to AI further reinforces the strong case for doing nothing at all. 

Click on the above link for discussion.

My Take

I discard the Doomsday take and the Universal High Income take.

I also think Nobel Prize-winning economist Daron Acemoglu is a bit pessimistic long term, but spot on near term.

These things take time as evidenced by his self-checkout example.

Does anyone recall how the internet was supposed to eliminate 80 percent of jobs? How did that theory work out?

Industrial robots took manufacturing jobs, just as manufacturing replaced 90 percent of farm jobs.

Someone is always predicting the end of all jobs for one reason or another, with some warning AI will kill us all.

Over time, we always have creative destruction where old jobs vanish and new ones take hold.

AI is just another step of creative destruction. The result will be neither a socialist nirvana nor AI taking over and killing all the humans.

Fear the Fed and governments, not AI.

