A new survey by JUST Capital, conducted with The Harris Poll and the Robin Hood Foundation, finds a significant divide between where investors and the general public believe the returns from AI should go: investors overwhelmingly favor shareholder value, while the public expects broader distribution to employees, customers, and society at large. The gap begins with expectations themselves: according to the research, investors believe the gains from AI-driven productivity are coming and should mostly flow to shareholders, while only about half of the public believes AI will generate net positive productivity gains at all. Both groups, however, agree that companies should devote at least some of their AI spend to safety and guardrails amid fears of social disruption. The survey suggests that how companies deploy AI is not just a technological or operational decision; it has emerged as a measure of how “just” business is perceived to be.
Sources: JUST Capital, Yahoo Finance
Key Takeaways
– There is a clear gap between investor and public views on AI gains: investors lean toward shareholder returns, the public toward broad stakeholder distribution.
– Both groups agree that AI safety, guardrails, and social stability matter—even as they differ on allocation of profits.
– How companies manage and communicate AI deployment is increasingly shaping public trust in capitalism and in business leadership’s “fairness.”
In-Depth
In today’s rapidly evolving artificial intelligence landscape, a crucial question is emerging beyond algorithms and infrastructure: who exactly should reap the benefits of AI-driven gains? The study by JUST Capital, in collaboration with The Harris Poll and the Robin Hood Foundation, shines a light on a meaningful disconnect: investors and the American public are operating with different assumptions and expectations about how value will be distributed when AI is deployed at scale. According to the survey results, 96% of investors believe AI will yield a net positive impact on worker productivity, while only 47% of the public shares that optimism. Given that disparity, it is perhaps no surprise that opinions diverge sharply on how the resulting profits, or productivity gains, should be distributed. (Source: JUST Capital)
From the investor mindset, the logic goes like this: AI investments are capital-intensive and risky, and shareholders ultimately bear that risk; therefore, returns ought to flow primarily to them. Investors in the survey did acknowledge some broader allocation toward employees and customers, but their preference remained weighted toward shareholder benefit. The public, by contrast, perceives AI not simply as another automation tool but as a societal shift with broad implications for jobs, inequality, wage pressures, consumer pricing, and the potential for displacement. As a result, the public believes the gains should be shared more broadly: lower prices for customers, retraining for workers, safety investments, and more transparent governance.
Interestingly, while the two groups diverge on distribution, they align on one important point: both believe companies must allocate meaningful resources to AI safety and stability. According to the survey, majorities on both sides say companies should spend more than 5% of their total AI investment budget on safety. This shows that even investors are sensitive to the “social license” question: when technology deployment threatens systemic instability, whether mass displacement, lost consumer trust, or energy-price shocks, investors begin to see those as risks to returns as well. The most significant takeaway here is that AI is now not only a technology-rollout issue but a reputational and governance challenge. CEOs and boards must manage not only code and models but also public expectations and social outcomes.
For U.S. companies actively deploying AI, or preparing to, this survey suggests a need to calibrate three key areas. First, communication: firms would be well served to explain not just what AI enables (productivity, cost savings, innovation) but how the benefits will be shared with employees, customers, and communities, especially given public skepticism. Second, stakeholder strategy: an approach that simply optimizes for shareholder value risks alienating consumers and employees, undermining trust and possibly triggering backlash or regulation; an inclusive stakeholder strategy may cost more upfront but can protect social license and long-term value. Third, oversight and safety: companies must invest in governance, transparency, and guardrails, not only out of ethical obligation but because investor interests align with safety when instability threatens returns.
From a conservative business perspective, this is not about rejecting shareholder value; rather, it is about protecting it by acknowledging that the broader ecosystem matters. If companies ignore the public’s viewpoint, especially in a politically polarized era, they risk structural headwinds: regulation, reputational damage, workforce disruption, and consumer pushback, all of which erode margins and shareholder value. So while investors may still prefer that returns flow to them, the smart business sees that aligning those returns with stakeholder interests can safeguard them by reducing downside risk.
In short: deploying AI effectively means more than choosing the right model or scaling the infrastructure. It means crafting a narrative of value creation that includes both shareholders and the broader community, and building governance that anticipates societal concerns. If the gap between public expectation and investor mindset widens, companies may find themselves caught between innovation and backlash. Navigating that tension wisely may prove to be one of the defining leadership challenges of the AI era.

