AI’s Hidden Costs: A Human Conversation on the True Price of Progress

We hear so much these days about how AI is transforming everything—our productivity, our creativity, even the way we think. And it’s true: the technology is moving faster than most of us can keep up with. But with all the buzz and excitement, I keep wondering: what are we not talking about enough?

What’s the cost? Not just in dollars or data, but in energy, human dignity, emotional well-being, even our sense of meaning?

This isn’t meant to be an alarmist piece or a lecture. I’m not a scientist or policymaker. I’m just someone who cares deeply about the future we’re building, and what it means to be human in the age of AI. So think of this piece as a conversation starter, a beginning. I’ve pulled together some powerful insights from people I respect deeply, and added a few thoughts of my own. But this is by no means complete. I’m counting on you, my friends, cross-cultural collaborators, and fellows from around the world, to help expand the conversation.

What We Often Overlook When We Talk About AI

1. Let’s Start with the Planet

AI may seem weightless and virtual, but it’s built on very real, physical infrastructure. As Kate Crawford reminds us in Atlas of AI, everything from the servers to the chips to the cloud eats up energy, water, and minerals. And a lot of that comes from places with limited resources and vulnerable ecosystems.

· Training a single large model like GPT-3 is estimated to have emitted over 550 metric tons of CO2.

· Data centers are thirsty: Microsoft’s Phoenix facility was projected to consume over 56 million gallons of water per year.

· And here’s the kicker: even when models get more efficient, their overall energy use goes up—because they’re used more. It’s called the rebound effect, or Jevons paradox (thanks to Luccioni, Strubell, and Crawford).
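To make the rebound effect concrete, here’s a toy calculation. Every number below is invented purely for illustration, not drawn from any real deployment:

```python
# Toy illustration of the rebound effect (all numbers are made up).
energy_per_query_old = 3.0   # watt-hours per query, before the efficiency gain
queries_old = 1_000_000      # daily queries, before

energy_per_query_new = 1.0   # the model becomes 3x more efficient...
queries_new = 5_000_000      # ...but usage grows 5x as it gets cheaper and easier

total_old = energy_per_query_old * queries_old   # 3,000,000 Wh/day
total_new = energy_per_query_new * queries_new   # 5,000,000 Wh/day

# Efficiency tripled, yet total energy use still rose by two thirds.
print(total_new > total_old)  # True
```

The point is simply that per-query efficiency and total footprint can move in opposite directions once demand responds to falling costs.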

Karen Hao, in her beautifully reported Empire of AI, goes even further, calling this a kind of modern imperialism. Big companies gather data from under-regulated parts of the world, rely on local labor to train models and moderate content, and consume resources that those same communities may desperately need.

So what can we do?

· Cities like Singapore are experimenting with green data center designs using seawater cooling (Keppel DC) and renewable energy sourcing.

· Tech companies can adopt carbon labeling (like Hugging Face's emissions tracker for ML models) to make AI development more transparent.

· As citizens, we can push for climate accountability in tech, just like we have in food or fashion.
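The arithmetic behind carbon labeling is refreshingly simple: energy used times the carbon intensity of the grid that supplied it. Here’s a minimal sketch; the energy figure is one published estimate for GPT-3’s training run (Patterson et al.), and the grid intensity is an illustrative assumption, since real trackers measure both directly:

```python
# Minimal sketch of the arithmetic behind a carbon label for a training run.
def training_co2_tonnes(energy_kwh: float, grid_kg_co2_per_kwh: float) -> float:
    """Estimate emissions in metric tons of CO2 from energy consumed
    and the carbon intensity of the supplying grid."""
    return energy_kwh * grid_kg_co2_per_kwh / 1000.0

# ~1,287 MWh (a published estimate for GPT-3) on an assumed 0.429 kg CO2/kWh grid:
print(round(training_co2_tonnes(1_287_000, 0.429)))  # 552
```

That result lines up with the "over 550 metric tons" figure above. In practice, tools like Hugging Face’s emissions tracking measure energy and grid intensity automatically rather than taking them as inputs.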

2. Then There’s the Social Cost

We talk about jobs being replaced, but what about the people doing the invisible work behind AI? In Code Dependent, Madhumita Murgia describes content moderators in Kenya and the Philippines, reviewing traumatic materials to make platforms safe. It’s thankless work, often poorly paid, and mentally draining.

Emily Bender and Alex Hanna, in The AI Con, point out how generative AI is reshaping creative industries: writing, design, and even music. There’s a lot of excitement about democratization, but we have to ask: who really benefits? Who gets displaced?

And Tim Hwang’s research reminds us that access to computational power is increasingly held by just a few major players. If compute is the new oil, what happens to the rest of us?

So what can we do?

· In Indonesia, the non-profit Open Dialog Indonesia is training rural communities in basic AI literacy, empowering people to ask critical questions about surveillance and job loss.

· In California, community-led co-ops like AI Commons are developing shared datasets governed by local artists and educators.

· Policy-wise, the EU's AI Act includes explicit provisions for worker protections and transparency in algorithmic decisions.

3. The Human and Emotional Toll

I’ve been thinking a lot lately about what it means to be human when AI can write, draw, and even simulate empathy. It’s not just about automation; it’s about identity. What’s left for us when machines become the better performers?

Nick Bostrom, in Superintelligence, warns of the existential risks. But even before we get to the sci-fi stuff, there’s something quieter and more personal happening:

· AI “friends” and therapists are replacing human connection.

· Young people are growing up talking more to screens than to each other and getting addicted to it.

· And many of us feel an odd sense of inadequacy—like we’re falling behind or being rendered obsolete, total FOMO.

So what can we do?

· In South Korea, high schools are piloting “AI-free Fridays” dedicated to human-only creativity and reflection.

· In the Netherlands, mental health nonprofits are building hybrid therapy models where licensed therapists supervise AI check-ins, a balance of accessibility and care.

· For individuals: daily digital detox routines, AI-skeptic family dialogue, and even journaling your own thoughts before asking ChatGPT all help us reclaim agency.

So What Are Governments Doing?

Let’s zoom in on three countries that play a central role in this context: China, Singapore, and the U.S.

China: China’s approach is top-down and focused on control. Through laws like the Personal Information Protection Law and initiatives like “East Data, West Compute,” they’re trying to build infrastructure while regulating algorithms tightly. Ethical debates tend to stay within state-sanctioned boundaries.

Singapore: Singapore is aiming to lead in responsible innovation. The new Model AI Governance Framework for Generative AI is practical, detailed, and very forward-thinking. I admire how they’re also pushing for sustainable data centers, integrating AI into public services, but keeping humans in the loop.

United States: In the U.S., it feels like innovation has outpaced regulation. The tech giants are charging ahead, but the environmental cost is catching up: Google has reported its emissions rising by nearly 50% since 2019, driven largely by AI. While executive orders and the AI Safety Institute are promising, many worry it’s too little, too late. Still, public pushback is growing—just look at the Hollywood strikes and the rising ethical AI movement.

A Personal Note

This newsletter isn’t the full story. It’s just a start. I hope it sparks reflection, disagreement, conversation, anything that helps us see this topic more clearly.

AI’s costs are not just technological questions. They’re human questions.

And we need more voices from East and West, Global North and South: artists, parents, policy folks, engineers, students, retirees, teachers, and spiritual leaders, to shape where we go from here.

So let’s open the conversation:

· What have you noticed in your life, your industry, or your community?

· What angles haven’t we explored yet?

· What kinds of futures are still possible, and which ones should we avoid?

Let’s Build This Conversation Together

Thanks for reading. Truly. And thank you for caring.

If you found this issue thought-provoking, drop a comment or share it with someone who might bring a new perspective. I’d love to hear what you think.

Until then, Yingying

Stay connected—subscribe to my Substack: 👉 https://yingfluence.substack.com/

References & Sources

· Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

· Murgia, M. (2024). Code Dependent: Living in the Shadow of AI. Hodder & Stoughton.

· Hao, K. (2025). Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Penguin Press.

· Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

· Christian, B. (2020). The Alignment Problem: Machine Learning and Human Values. W. W. Norton.

· Bender, E., & Hanna, A. (2025). The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Penguin Random House.

· Hwang, T. (2018). "Computational Power and the Social Impact of Artificial Intelligence." arXiv:1803.08971.

· Luccioni, A., Strubell, E., & Crawford, K. (2025). "From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI’s Polarized Environmental Debate." arXiv:2501.16548.

· Wu, C.J., et al. (2021). "Sustainable AI: Environmental Implications, Challenges and Opportunities." arXiv:2111.00364.

· Singapore Infocomm Media Development Authority (IMDA). (2024). Model AI Governance Framework for Generative AI. Retrieved from https://aiverifyfoundation.sg
