
Scott Galloway: AI Wasn’t Built For You. The Rich Don’t Need You Anymore!

Scott Galloway joins Steven Bartlett to discuss AI's overhyped job-destruction narrative, the eroding brands of both the United States abroad and the major AI companies, and tech CEOs' moral detachment from societal consequences. The conversation spans AI investment bubbles, the US-Iran military conflict, wealth inequality, and personal reflections on resilience, purpose, and fatherhood.

Summary

Scott Galloway opens by identifying two brands that have suffered the greatest erosion in the past 18 months: the United States' global reputation and artificial intelligence. He argues that AI's public perception is directly correlated with wealth — only those earning over $200,000 view it positively, largely because they benefit from rising portfolios and are its heaviest users. Meanwhile, average citizens face rising electricity bills and no access to AI investments, while leaders like Sam Altman make tone-deaf statements about energy costs.

On AI job destruction, Galloway pushes back heavily against the catastrophizing narratives from tech CEOs like Elon Musk and Sam Altman. He argues this rhetoric is primarily a fundraising tool to justify astronomical valuations, and that the employment data simply does not support predictions of an apocalyptic labor collapse. He notes radiologist job listings are up, coder listings are up 11%, and unemployment among youth remains near historical averages. He acknowledges a 'V-shaped' dip is possible and that certain roles — like long-haul trucking, entry-level legal work, and customer service — will be significantly disrupted, but maintains that AI will ultimately create more jobs than it destroys.

Galloway and Bartlett discuss how the labor market is reshaping rather than collapsing. Bartlett shares his own experience of needing fewer analysts and executive assistants due to AI augmentation, while still hiring 220 people across his companies in 24 months. Galloway contends that AI fluency is the key differentiating skill — not that AI will replace workers, but that workers who understand AI will replace those who don't.

A substantial portion of the conversation addresses the moral character of tech CEOs. Galloway argues that society has mistakenly elevated tech founders to godlike status — a cultural void left by declining religious participation — and repeatedly falls into the trap of believing these individuals have the public's best interests at heart. He traces the pattern from Steve Jobs through Sheryl Sandberg, Mark Zuckerberg, and now Sam Altman, arguing they all follow the same arc from celebrated innovator to 'Darth Vader' as they prioritize shareholder value above societal harm. He says the solution is not to trust these individuals, but to elect officials capable of regulating them.

Galloway raises a provocative thesis that AI may not produce concentrated shareholder value the way social media or e-commerce did. Drawing comparisons to jet transportation and vaccines — transformative technologies that didn't mint a small number of trillion-dollar companies — he argues there is a one-in-three chance AI follows the same path, with open-weight Chinese models acting as a form of 'AI dumping' that could kneecap US AI valuations and trigger a broader market correction.

He also argues that GLP-1 drugs (like Ozempic) may be more transformative than AI in terms of real-world human impact and shareholder value creation, noting that those who use both and are asked which they'd give up overwhelmingly choose to keep GLP-1.

On geopolitics, Galloway analyzes the US military engagement with Iran as operationally competent but strategically disastrous. He argues Trump was convinced this could be his 'defining legacy moment' but failed to coordinate with allies, brief Congress, or think through the game theory implications — particularly around Iran's ability to threaten the Strait of Hormuz. He says the conflict has been a massive win for the IRGC in terms of both propaganda and geopolitical positioning, and that the US is now trapped in a quagmire where withdrawing signals weakness and continuing degrades its global standing.

In the latter portion of the conversation, Galloway offers personal reflections on wealth, resilience, and purpose. He advises young people to diversify their investments, avoid trying to time the market or pick individual stocks, and let compound interest work through low-cost index funds. He identifies storytelling and the ability to endure rejection as the most durable future skills. He warns that young men in particular are losing resilience as frictionless online relationships substitute for real-world social risk-taking.
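The compounding effect Galloway points to is easy to see with a standard future-value calculation. The sketch below is illustrative only: the 7% annual return and $500 monthly contribution are assumed figures for the example, not numbers from the conversation.

```python
# Illustrative compound-growth sketch for a fixed monthly contribution
# into a low-cost index fund. The return rate and contribution amounts
# below are assumptions chosen for the example.

def future_value(monthly_contribution: float, annual_rate: float, years: int) -> float:
    """Future value of a fixed monthly contribution, compounded monthly."""
    r = annual_rate / 12            # monthly rate
    n = years * 12                  # total number of contributions
    return monthly_contribution * ((1 + r) ** n - 1) / r

# $500/month at an assumed 7% annual return: compare 10 vs. 30 years.
# Total contributed triples, but the ending balance grows far more,
# which is the core of the compound-interest argument.
for years in (10, 30):
    balance = future_value(500, 0.07, years)
    contributed = 500 * 12 * years
    print(f"{years} years: contributed ${contributed:,.0f}, balance ${balance:,.0f}")
```

The gap between dollars contributed and the ending balance widens sharply with time, which is why the advice stresses starting early rather than picking winners.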

Galloway closes on deeply personal notes about fatherhood, the death of his mother, and the role of grief as evidence of love. He reflects that his greatest professional asset has been resilience in the face of repeated failure, and that the most important lesson of his life has been investing in relationships and finding purpose in something that offers no transactional return — for him, raising his sons.

Key Insights

  • Galloway argues that AI catastrophizing by tech CEOs is primarily a fundraising strategy — framing their technology as world-altering justifies raising capital at extreme valuations, regardless of whether employment data supports the apocalyptic claims.
  • Galloway contends that public perception of AI is directly correlated with wealth: only people earning over $200,000 view AI positively, because they benefit from rising portfolios and are the heaviest users, while average citizens bear costs like higher electricity bills without investment upside.
  • Galloway identifies a structural trap in AI valuations: either these companies need to generate a trillion dollars in new incremental revenue within five years, or they need to show massive labor cost destruction — and neither has materialized convincingly yet.
  • Galloway argues there is a one-in-three chance AI becomes like vaccines or jet transportation — transformative for humanity but incapable of generating concentrated shareholder value for a small number of companies, particularly as open-weight Chinese models commoditize the technology.
  • Galloway suggests China could deliberately engage in 'AI dumping' — flooding the US market with cheap, open-weight models — to undermine the valuations of OpenAI and Anthropic, potentially triggering a US market crash given that 40% of S&P value is now tied to AI bets.
  • Galloway claims that GLP-1 drugs are more important than AI in terms of real-world human impact and shareholder value creation, and that people who use both overwhelmingly say they would give up AI rather than GLP-1 if forced to choose.
  • Galloway argues that the US-Iran military engagement was operationally competent but strategically disastrous — failing to coordinate with allies, brief Congress, or account for Iran's ability to weaponize Strait of Hormuz access, which he says may be more powerful than nuclear capability.
  • Galloway claims that tech CEOs are not moral actors to be trusted — they are doing exactly what capitalism requires by maximizing shareholder value — and that the systemic failure lies with citizens electing officials who lack the domain expertise or will to regulate these companies.
  • Galloway argues that the ultra-wealthy in the US have fully dissociated from the societal infrastructure everyone else depends on — flying private, using concierge medicine, living in secured neighborhoods — and therefore have no personal stake in improving public systems.
  • Galloway identifies the ability to endure rejection as the most underrated and endangered skill among young men, arguing that frictionless online relationships and synthetic substitutes for social life are eroding the resilience required for real-world success.
  • Galloway reflects that his greatest professional asset has been resilience after failure — having started nine companies with more losses than wins, he argues that the ability to mourn a failure and immediately attempt to raise money again is the common thread among self-made entrepreneurs.
  • Galloway argues that true purpose is found in investments that offer no transactional return — using fatherhood as his example — and that his earlier capitalist framework of measuring every relationship for ROI was fundamentally incompatible with meaning and happiness.

Topics

  • AI job destruction and labor market reshaping
  • AI company valuations and investment bubble risk
  • Tech CEO moral detachment and regulatory failure
  • US brand erosion and the Iran military conflict
  • GLP-1 drugs vs AI as transformative technology
  • Wealth inequality and the 1%'s dissociation from society
  • Skills of the future: storytelling, resilience, rejection tolerance
  • Personal finance: compound interest, diversification, index funds
  • Fatherhood, purpose, and the role of grief

Full transcript available for MurmurCast members
