AI: A Cool-Headed Analysis from the Copy Ninja
Kakashi analyzes AI's impact on news and information, highlighting its potential and pitfalls. He emphasizes ethical use, verification, and the need for human oversight.

Alright, listen up. It's Kakashi here, and I'm going to cut through the smoke and mirrors surrounding this whole artificial intelligence business. No hyperbole, no fanboying – just a clear, objective look at what's happening, and what it means for us. Because even a ninja needs to keep up with the times, dattebayo.
Recently, there's been a lot of buzz around AI, specifically regarding its use in news and information dissemination. I've been reviewing some reports and articles on the subject, and it seems like we're dealing with both significant potential and considerable pitfalls. So, let's dissect this thing, piece by piece.
The BBC's AI Reality Check
The BBC, a reputable source of information, conducted a study evaluating the accuracy of four major AI chatbots in summarizing news stories. The chatbots tested were OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity AI. The findings, to put it mildly, were concerning. The BBC found 'significant inaccuracies' and distortions in the AI-generated summaries. Deborah Turness, the CEO of BBC News and Current Affairs, correctly pointed out the dangerous potential of AI-distorted headlines causing real-world harm. You can't just throw tech at a problem and hope it solves it; there need to be controls and verification.
The specifics of the BBC's study are even more telling:
- Over half (51%) of all AI answers about the news contained significant issues.
- Nearly 20% of AI answers citing BBC content included factual errors, such as incorrect dates and numbers.
- Examples of these errors included Gemini incorrectly stating the NHS's stance on vaping, ChatGPT and Copilot claiming that Rishi Sunak and Nicola Sturgeon were still in office after both had stepped down, and Perplexity misquoting BBC News on the Middle East conflict.
The BBC's response was appropriate: they called on tech companies to 'pull back' their AI news summaries and sought a partnership to find solutions. This kind of oversight is crucial.
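The BBC relied on journalist reviewers, not automation, so don't read the following as their method. It's just a minimal sketch, in Python, of the kind of pre-screening a newsroom could bolt onto an AI summarizer: pull the figures out of the model's summary and flag any that never appear in the source article, so a human knows where to check first. The function names and sample strings are mine, invented for illustration.

```python
import re

def extract_figures(text: str) -> set[str]:
    """Pull numbers (including years and comma-separated figures) out of a passage."""
    return set(re.findall(r"\d+(?:[.,]\d+)*", text))

def flag_unsupported_figures(source_article: str, ai_summary: str) -> set[str]:
    """Return figures that appear in the AI summary but nowhere in the source.

    A hit doesn't prove an error (the model may have rephrased a number),
    but it tells a human reviewer exactly where to look first.
    """
    return extract_figures(ai_summary) - extract_figures(source_article)

if __name__ == "__main__":
    article = "The NHS recommends vaping as an aid to quit smoking. 460 clinics took part in the 2023 scheme."
    summary = "The NHS advises people not to start vaping; 514 clinics took part in the 2023 scheme."
    print(flag_unsupported_figures(article, summary))  # {'514'} -> send to a human reviewer
```

A check like this only catches mismatched numbers, of course; the misstated NHS stance in that sample would still need a human to spot it, which is exactly the BBC's point about oversight.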
MIT's Exploration of AI's Impact
MIT (Massachusetts Institute of Technology) is, naturally, deeply involved in AI research. Here's what I gathered from their coverage:
AI's Carbon Footprint
The rapid development and deployment of generative AI come with environmental consequences. Training these models requires enormous computational power, leading to high electricity consumption and, consequently, increased carbon emissions. Deploying and fine-tuning these models also demands substantial energy. And don't forget that water is used to cool data centers. Elsa A. Olivetti, from MIT, rightly emphasizes that the environmental impact goes beyond just plugging in a computer. It's a systemic issue.
The statistics are stark. Data centers, crucial for AI training and operation, are experiencing exponential growth in power demands. In North America alone, data center power requirements nearly doubled between 2022 and 2023. Globally, data centers consumed around 460 terawatt-hours of electricity in 2022, a figure expected to approach 1,050 terawatt-hours by 2026. This growth is largely fueled by the demands of generative AI. To absorb the fluctuating load, grid operators often lean on diesel-based generators, which only adds to the emissions, and the surge in GPU manufacturing carries its own environmental cost.
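For scale, here's the simple arithmetic those two figures imply. Nothing below comes from the MIT coverage beyond the 460 and 1,050 terawatt-hour numbers already quoted; the growth-rate formula is just standard compound growth.

```python
# Implied compound annual growth of global data-center electricity use,
# using the figures cited above: ~460 TWh in 2022 -> ~1,050 TWh projected for 2026.
consumption_2022_twh = 460
consumption_2026_twh = 1_050
years = 2026 - 2022

growth_factor = consumption_2026_twh / consumption_2022_twh  # ~2.28x overall
annual_rate = growth_factor ** (1 / years) - 1               # compound annual growth rate

print(f"Total growth over {years} years: {growth_factor:.2f}x")
print(f"Implied annual growth rate: {annual_rate:.1%}")      # roughly 23% per year
```

Roughly 23% growth per year, compounding. That's the trajectory the grid has to absorb.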
AI Ethics and Applications
MIT also highlights the applications of AI in various fields, including:
- **Protein Localization:** Developing models to predict and generate protein localization, with implications for understanding and treating diseases.
- **Climate Change:** Applying machine learning to reduce transportation sector emissions.
- **Heart Failure Prevention:** Using deep neural networks for improved monitoring of heart health.
These are promising applications, but they don't negate the need for careful consideration of AI's broader impact.
AI Journalism: When It Works, When It Doesn't
Zach Seward, at *The New York Times*, presented a compelling overview of AI in journalism, highlighting both successes and failures. The key takeaway is that AI journalism works when it's vetted, motivated by the reader's best interests, and adheres to the fundamental principles of truth and transparency. That sounds like the bare minimum, but there are far too many examples of that not being the case.
Examples of failed AI journalism often involve:
- Unchecked copy
- Lazy approaches
- Selfish motivations
- Dishonest or opaque presentation
Successful AI journalism, on the other hand, involves recognizing patterns, summarizing text, fetching information, understanding data, and creating structure – all with human oversight.
Seward highlighted several examples:
- **Quartz's Mauritius Leaks:** Using AI to identify similar documents within a vast cache of financial data.
- **Grist and The Texas Observer:** Employing statistical modeling to identify abandoned oil wells.
- **BuzzFeed News:** Training a computer to search for hidden spy planes based on flight patterns.
- **The Wall Street Journal:** Using image recognition to identify lead cabling.
- **The New York Times:** Analyzing satellite imagery to detect bomb craters in Gaza.
- **The Marshall Project:** Summarizing complex prison policies with GPT-4, with journalists reviewing every summary before publication (a rough sketch of that kind of workflow follows below).
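The Marshall Project hasn't published its pipeline, so treat what follows as nothing more than a sketch of what an "AI drafts, journalist approves" loop can look like. The model name, the prompt, and the review step are my assumptions; the call itself uses the openai Python package's standard chat-completions interface.

```python
from openai import OpenAI  # assumes the official openai Python package (>=1.0)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_summary(policy_text: str) -> str:
    """Ask the model for a plain-language draft; it is a draft, never the final copy."""
    response = client.chat.completions.create(
        model="gpt-4",  # model choice is an assumption, not The Marshall Project's setup
        messages=[
            {"role": "system", "content": "Summarize prison policy documents in plain, neutral language."},
            {"role": "user", "content": policy_text},
        ],
    )
    return response.choices[0].message.content

def publish_with_review(policy_text: str) -> str | None:
    """The human-oversight step: a journalist must explicitly approve every draft."""
    draft = draft_summary(policy_text)
    print("--- AI draft ---\n" + draft)
    if input("Approve for publication? [y/N] ").strip().lower() == "y":
        return draft
    return None  # rejected drafts go back for manual rewriting
```

The API call is the boring part. The point is the gate: nothing reaches readers without a human saying yes.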
Other Perspectives on AI
Google's AI Focus
Google has been actively promoting its AI advancements, particularly with Gemini 2.0 Flash and Gemini Live. They are focusing on:
- Making AI more accessible and beneficial.
- Expanding AI's capabilities in education, automotive, and retail.
- Developing new AI tools for retailers.
Google's efforts also include initiatives like the Google.org Accelerator for Generative AI, aiming to use AI to address pressing global challenges.
Harvard's Prompt Engineering Guide
Harvard University Information Technology emphasizes the importance of prompts in generating high-quality outputs from AI tools. Their guide provides basic principles for writing better prompts (a short example applying them follows the list below), which include:
- Being specific.
- Asking the AI to act as a certain person or object.
- Stating the output format required.
- Indicating what should and shouldn't be included.
- Providing examples.
- Specifying the audience and tone.
- Building on previous prompts.
- Correcting mistakes and giving feedback.
- Asking the AI to create prompts.
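To make those principles concrete, here's a small, purely illustrative prompt assembled along those lines: assign a role, be specific, state the format, say what to exclude, give an example, and name the audience and tone. The wording is mine, not Harvard's.

```python
# A hypothetical prompt built from the guide's principles; the wording is illustrative.
parts = [
    "You are a science editor at a regional newspaper.",                              # act as a person/role
    "Summarize the attached NHS report on vaping in exactly three bullet points.",    # be specific; state the format
    "Include the report's publication year and its main recommendation.",             # what should be included
    "Do not speculate beyond what the report itself states.",                         # what shouldn't be included
    "Write for general readers with no medical background, in a neutral tone.",       # audience and tone
    "Example of the style I want: '- The report found X and recommends Y.'",          # provide an example
]

prompt = "\n".join(parts)
print(prompt)
```

Each line maps to one of the principles above, and you'd keep iterating on it (building on previous prompts, correcting mistakes) rather than expecting a perfect answer on the first try.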
The Harvard guide rightly cautions that AI-generated content can be inaccurate, misleading, or offensive and stresses reviewing any AI content thoroughly before use. Always important.
TechCrunch's AI Coverage
TechCrunch provides ongoing news coverage of AI, focusing on the companies building AI technologies and the ethical issues they raise. It's a valuable resource for staying updated on the latest developments in the field.
The Copy Ninja's Verdict
So, what's the takeaway from all this? AI is a powerful tool, no doubt, but it's just that: a tool. Like any tool, it can be used effectively or carelessly, for good or for ill. We need to approach AI with a healthy dose of skepticism, a commitment to ethical principles, and a willingness to adapt and evolve. As Itachi once said, “People live their lives bound by what they accept as correct and true. That’s how they define ‘reality’. But what does it mean to be ‘correct’ or ‘true’? Merely proving that they are right? Isn’t that just basing it on their own self-assurance?” The same applies here. We can't blindly accept what AI gives us; we must verify, validate, and understand its limitations. Only then can we hope to harness its potential for the benefit of all.
The future remains unwritten, but one thing is certain: AI is here to stay. It's up to us to ensure that its impact is a positive one.