It’s hard to avoid the conclusion that the market for artificial intelligence and its associated industries is overinflated. In 2025, just five hyperscalers—Alphabet, Meta, Microsoft, Amazon, and Oracle—accounted for $399 billion in capital investment, a figure projected to rise to over $600 billion annually in coming years. For the first nine months of last year, the real GDP growth rate in the U.S. was 2.1%, but it would have been just 1.5% without the contribution of AI investment.
This dependence is dangerous. A recent note by Deutsche Bank questioned whether this boom might in fact be a bubble, noting the historically unprecedented concentration of the industry, which now accounts for around 35% of total U.S. market capitalization, with the top 10 U.S. companies making up more than 20% of global equity market value. For such an investment to yield no benefit would be a failure of unprecedented proportions.
In their book Power and Progress, Nobel Prize-winning economists Daron Acemoglu and Simon Johnson narrate the calamitous failure of the French Panama Canal project in the late 19th century. Thousands of investors, large and small, lost their fortunes, and 20,000 people who worked on the project died for no benefit. The problem, Acemoglu and Johnson write, was that the vision for progress did not include everyone—and the failure to incorporate feedback from others resulted in poor-quality decision making. As they observe, “what you do with technology depends on the direction of progress you are trying to chart and what you regard as an acceptable cost.”
Fast forward 150 years and a significant chunk of the U.S. economy is similarly dependent on a small coterie of grand visionaries, ambitious investors, and techno-optimists. Their capacity to ignore their critics and sideline those forced to bear the costs of their mission risks catastrophic consequences. Trustworthy AI systems cannot be conjured by marketing magic. We must ensure those building, deploying, and working with these systems can have a say in how we direct the progress of this technology.
Mistrust and a general lack of optimism
The data suggests an urgent need to chart a new course. Even a generous analysis of the market for generative AI products would likely struggle to show that a decent return on the gargantuan capital investment is realistic. A recent report from MIT found that notwithstanding $30 billion to $40 billion in enterprise investment in GenAI, 95% of organizations are getting zero return. It is difficult to imagine another industry raising so much capital while having so little to show for it. But this appears to be Sam Altman’s true superpower, as Brian Merchant has documented extensively.
This is coupled with significant mistrust and a general lack of optimism among everyday people about the potential of this technology. In the most comprehensive global survey to date, covering 48,000 people across 47 countries, KPMG found that 54% of respondents are wary of trusting AI. They also want more regulation: 70% of respondents said regulation is necessary, but only 43% believe current laws are adequate. The report concludes that the most promising pathway toward improving trust in AI is strengthening safeguards, regulation, and laws to promote safe AI use.
This, most obviously, sits in stark contrast with the position of the Trump administration, which has repeatedly framed regulation of the industry as an impediment to innovation. But the trust deficit cannot simply be hyped out of existence. It represents a significant structural barrier to the uptake and valuable deployment of emerging technologies.
One of the key conclusions of the MIT report is that the small subset of companies that actually saw productivity gains from generative AI products did so because “they build adaptive, embedded systems that learn from feedback.” Highly centralized procurement decisions were more likely to result in employees being required to use off-the-shelf products unsuited to the enterprise environment, generating outputs that employees mistrusted, especially for higher-stakes tasks, and leading to workarounds or dwindling rates of usage. The problem is that these tools fail to learn and adapt. In turn, there are too few opportunities for executives to receive that feedback or incorporate it meaningfully into model development and adaptation.
The narrative spun by politicians and media commentators that the AI industry is full of visionary leaders inadvertently points to a key reason these products are failing. Trust in AI systems can only be earned if feedback is both sought and acted on—a significant challenge for the hyperscalers, because their foundation models are less capable of adapting and responding to unique and varied contexts. Unless we decentralize the development and governance of this technology, the benefits may remain elusive.
The workers’ view
There are useful ideas lying around that could help navigate a different path of technological progress. The Human Technology Institute at the University of Technology Sydney has published research on how workers are treated as invisible bystanders in the rollout of AI systems. The researchers conducted deep, qualitative consultations with nurses, retail workers, and public servants to solicit feedback about automated systems and their impact on their work.
Rather than exhibiting backward or unhelpful attitudes toward AI, workers offered nuanced and constructive contributions on the impact of these systems on their workplaces. Retail workers, for example, talked about the difficulties of automated systems that disempowered them and curtailed their discretion: “unlike a production line, retail is an unpredictable environment. You have these things called customers that get in the way of a nice steady flow.”
A nurse noted how “the increasing roll-out of automated systems and alerts causes severe alarm fatigue among nurses. When an [AI system] alarm goes off, we tend to ignore or not take it seriously. Or immediately override to stop the alarm.”
One might think that increased investment in such systems would address the problem of alarm fatigue. But without worker input, it’s easy to miss the problem entirely. The upshot is that, as one public servant put it, in workplaces where channels for worker feedback are absent, a necessary quality in employees becomes “the gift of the workaround.”
Traditionally, this kind of consultation and engagement would happen through worker organizations. But with the rate of unionization in the U.S. slipping below 10%, this becomes a problem not just for workers but also for employers, who are left with few ways to meaningfully engage with their workforce at scale.
Some unions are nonetheless leading on this issue, and in the absence of political leadership, might be the best hope of making change. The AFL-CIO has developed a promising model law aimed at protecting workers from harmful AI systems. The proposal focuses on limiting the use of worker data to train models, as well as introducing friction into the automation of significant decisions, such as hiring and firing. It also emphasizes giving workers the right to refuse to follow directives from AI systems—essentially, building in feedback loops for when automation goes wrong. The right to refuse is an essential failsafe that can also cultivate a culture of critical engagement with technology, and serve as a foundation for trust.
Businesses are welcome to ignore workers’ views, but workers may end up making themselves heard in other ways. Recent surveys indicate that 31% of employees admit to actively sabotaging their company’s AI strategy, and for younger workers, the rates are even higher. Even companies that fail to seek feedback from workers may end up receiving it all the same.
Our current course of technological progress relies on narrow understandings of expertise and places too much faith in a small number of very large companies. We need to start listening to the people who are working with this technology on a daily basis to solve real-world problems. This decentralization of power is a necessary step if we want technology that is both trustworthy and effective.
Source: https://www.fastcompany.com/91472029/hyperscalers-ai-alphabet-meta-microsoft-amazon-oracle