Can AI Make Humanity Sustainable?
What Is the Alternative? Three Torchbearers for Humanity in the AI Age.
Some moments in history feel like a cliff edge. We sense the drop ahead, but the crowd is still laughing, scrolling, trading, coding - eyes fixed on the bright lights, not the precipice.
Right now, in the headlong rush to build more powerful AI, we are in such a moment. The technology is advancing faster than our ability to govern it, faster than our cultural wisdom to absorb it. Before long, AI will be fully entangled with our economies, politics, and daily lives. When that happens, our choices will shrink dramatically.
At times like this, what we need most are not just more engineers, investors, or politicians. What we need are torchbearers: people willing to step into the storm, carry a light, and remind us of the path that still leads toward a humane future.
Today, I want to introduce you to three of them.
1. Tristan Harris · The Alarm Bell That Won’t Stop Ringing
If Silicon Valley has a conscience, it might look like Tristan Harris.
You probably know him from The Social Dilemma, where he revealed how social media algorithms hijack our attention and polarize our politics. But his work on AI goes deeper.
In his March 2023 presentation, The AI Dilemma, Harris delivered a message that should have stopped the world: we are building AI systems that could surpass human intelligence within years, and we have no plan for what happens next.
Harris brings a rare dual perspective. He understands both the technology and human psychology. Having worked inside Google, he knows how these systems are designed, and he sees clearly what they are doing to us. His central insight is simple but urgent:
When technology collides with the most vulnerable parts of human nature, we risk losing our autonomy entirely.
In 2018, he co-founded the Center for Humane Technology, a nonprofit dedicated to ensuring that the most consequential technologies serve humanity.
“Clarity creates agency,” Harris reminds us. Once we see the manipulation, we can resist it. But first, we must wake up.
Harris carries the torch of urgency - helping us recognize that the window for human choice is closing faster than we think.
2. Karen Hao · Lifting the Curtain on AI’s Hidden Costs
While Harris sounds the alarm, Karen Hao documents the wreckage already underway.
As a technology journalist and author of Empire of AI, she follows the money, the power, and the human cost of this gold rush.
Hao uncovers what tech companies would rather keep hidden:
• Workers in Kenya label thousands of violent and traumatic images a day for around $2 an hour.
• Communities in Chile watch their water supplies drained to cool massive data centers.
• Office workers can spend more time correcting AI's errors than the tool saves them.
Her reporting on OpenAI reads less like Silicon Valley hype and more like a colonial history — not conquering land, but extracting data, energy, and human labor to build systems that primarily reward shareholders.
Hao’s work builds on Harris’s warnings. She shows that the harms of the attention economy were only a preview. AI’s extraction now operates on a planetary scale, and the hidden costs are devastating communities most of us will never see.
Her journalism carries the torch of accountability, making sure we cannot claim ignorance about AI’s true price.
3. Daniel Kokotajlo · War Gaming Our Last Chance
While Harris warns and Hao documents, Daniel Kokotajlo forces us to look squarely at the future.
His scenario report, AI 2027, reads like a war game for civilization itself. With his team, he modeled two possible paths:
• The Race Scenario: nations and corporations sprint toward artificial general intelligence (AGI), concentrating unprecedented power while safety measures lag behind. This path points toward authoritarian control, mass unemployment, and even extinction.
• The Slowdown Scenario: humanity coordinates globally, builds robust safeguards, and ensures AI development serves human flourishing instead of raw capability.
What makes Kokotajlo’s work vital is its precision. This is not vague futurism but rigorous scenario planning, grounded in today’s AI capabilities, economic incentives, and geopolitical realities.
His message builds on Harris and Hao: Harris shows us why action cannot wait, Hao reveals the costs of staying on the current track, and Kokotajlo maps where each path leads.
Kokotajlo carries the torch of foresight, helping us rehearse the future before we are forced to live it.
Why These Three Voices Matter Now
Together, these torchbearers offer something desperately scarce: depth in an age of surfaces.
Harris provides moral clarity in a world clouded by technological confusion. Hao delivers unflinching truth when corporations prefer comforting myths. Kokotajlo offers strategic foresight when most planning barely stretches beyond the next quarter.
But their voices point to something even larger: the need for cultural shifts.
Technology will not slow down for us. Regulation will always come later than innovation. The only way societies can stay resilient is by reshaping our collective culture — how we think, where we place our attention, and how we choose to act.
Cultural shifts are not just about mindset. They ask us to re-evaluate what we value, how we prioritize, and what we resist. They call us to cultivate critical thinking when convenience tempts us to stop questioning, to embrace diverse voices when echo chambers feel safer, and to imagine long-term futures when short-term rewards dominate our systems.
In short, the cultural foundation we build now will determine whether AI becomes a tool for flourishing or a force that erodes our humanity.
As Richard Foster once wrote: “The desperate need today is not for a greater number of intelligent people or gifted people, but for deep people.”
What You Can Do Right Now
The future of AI is not inevitable. It is still ours to shape - but only if we engage in these cultural shifts while choice remains possible.
This Week
• Watch Tristan Harris's AI Dilemma presentation (clarity is the first step toward agency).
• Read Karen Hao’s reporting on AI labor exploitation (understand whose voices are missing, and why they matter).
• Review Daniel Kokotajlo’s AI 2027 scenarios (practice thinking long-term when the present feels overwhelming).
This Month
• Join conversations or organizations that challenge the dominant AI narratives.
• Support diverse communities pushing for AI safety and equity - from labor rights to climate impacts.
• Adopt small cultural practices: less passive scrolling, more questioning; less silence, more dialogue.
Ongoing
• Stay informed through independent voices, not only corporate announcements.
• Advocate for leaders who see AI governance as a cultural responsibility.
• Build community with those who refuse to sleepwalk into tomorrow — because cultural shifts only happen together.
The gap between AI capability and human wisdom widens every day. These three torchbearers shine light on what is at stake. But lasting change will not come from them alone - it depends on us.
The question is no longer whether AI will transform everything. The real question is whether we, as cultures and communities, can transform ourselves quickly enough to guide that power toward human flourishing.
💡 If this piece resonated with you, don’t let it stop here. Share it with someone you care about, repost if you believe others could be inspired, and add your perspective in the comments. And if you haven’t yet subscribed to Yingtelligence, I’d be honored to have you with us on this journey. Together, let’s keep the light alive.