Let’s face it: Artificial Intelligence (AI) often feels like a double-edged sword. It’s easy to focus on the downsides—privacy invasions, job displacement, and the mindless clutter it can create. But what if we could harness AI in a way that enhances our lives without sacrificing our humanity?
I’ll admit, I’ve become that person at gatherings, the one who can’t stop talking about AI. When I mention I’m working on a newsletter about it, the reactions are predictable: raised eyebrows, skeptical glances. But here’s where it gets interesting—this isn’t another hype-filled pitch about AI replacing your job or tricking your boss. Instead, I’m exploring how we can use AI mindfully, as a tool that complements our lives rather than controls them.
Like the internet, AI is a powerful force with both positive and negative potential. Yes, the internet brought us doomscrolling, data harvesting, and questionable social media posts. But it also gave us digital maps, podcasts, Wikipedia, and video calls—tools that have transformed how we live and connect. AI is no different. While some will exploit it for harmful purposes, that doesn’t mean we have to follow suit. And this is the part most people miss: We can—and should—demand ethical regulations and accountability from the companies building these technologies. Now is the time to advocate for guardrails around privacy, environmental impact, and the spread of misinformation.
If we’re going to use AI, let’s do it with our eyes wide open. That’s where AI for the People comes in—our new free six-week newsletter course (https://www.theguardian.com/lifeandstyle/2026/jan/08/ai-for-the-people). We’ll explore practical, thoughtful ways to integrate AI into daily life—whether at work, in the kitchen, or even at the gym—while staying in control. And yes, we’ll do this with clear boundaries, guided by our four cardinal rules (more on that later).
But let’s circle back to my party-conversation reputation. Here’s what I tell my skeptical friends: I hate information asymmetry. Think about those impenetrable legal contracts or terms and conditions we sign without understanding. Remember the arbitration clauses Disney (https://www.theguardian.com/film/article/2024/aug/15/disney-wrongful-death-lawsuit-dismissal) and Uber (https://www.theguardian.com/us-news/2024/oct/01/new-jersey-uber-eats-car-crash) invoked to prevent people from suing? I’ve been using AI to translate legal jargon into plain English, highlighting the red flags I need to watch for.
AI has also helped me tackle personal challenges—managing my chronic time blindness, studying for my driving test, experimenting with new recipes, and even learning to play the Lord of the Rings theme on the tin whistle. In most cases, AI isn’t a replacement for human connection or expertise, but it’s an incredible assistant for understanding new information, speeding up tasks, and creating tailored plans. My year has been filled with small, practical victories, and I’m excited to share them with you.
AI for the People isn’t about flashy prompts or letting chatbots do your work. It’s about learning how AI can assist you without surrendering your judgment. As AI expert Ethan Mollick puts it, ‘It’s just like any other tool: you dull your skills and critical thinking by handing them over to the AI.’
These challenges aren’t new. Back in 2002, author Umberto Eco told the New York Times, ‘The problem with the internet is that it gives you everything, reliable material and crazy material. So the problem becomes, how do you discriminate?’ That question—how we learn to discern, adapt, and stay in control—is at the heart of AI for the People. We hope you’ll join us.
Our Four Cardinal Rules
You’re the boss: AI can do a lot, but it’s a tool, not a replacement for your judgment. As Ethan Mollick advises, ‘If you’re trying to learn something, make sure the AI is asking you questions, not giving you answers.’ Treat it as a collaborator, not a crutch.
Be your own fact-checker: AI isn’t infallible. Remember when Google’s AI suggested adding glue to pizza in 2024? Always verify AI-generated information, especially when it matters. Ask the AI for its sources, or provide your own for it to reference.
Be informed and intentional: AI’s environmental impact is a growing concern (https://www.theguardian.com/technology/2026/jan/03/just-an-unbelievable-amount-of-pollution-how-big-a-threat-is-ai-to-the-climate). While individual use isn’t the biggest issue, the rapid growth of AI infrastructure and its energy consumption are. For this series, we’ll stick to text-based prompts, which are less energy-intensive. Responsible use is key—just as you wouldn’t run a dishwasher for one fork.
Protect your privacy and boundaries: What you share with AI tools can end up in corporate servers, vulnerable to breaches or legal requests. Many workplaces have strict AI policies, and your data could be used to train models unless you opt out. Be mindful of what you disclose.
Controversial Question: Is it possible to use AI ethically without addressing the systemic issues of corporate accountability and environmental impact? Share your thoughts in the comments—let’s spark a conversation!