13 min read
AI, Artificial Intelligence, Journalism, Environment, Ethics, BBC, MIT, Kakashi Hatake

Kakashi's Analysis: AI Chatbots, Environmental Impact, and the Future of Journalism

Kakashi Hatake analyzes AI's impact on news, environment, and ethics. He explores accuracy issues, environmental costs, and the need for responsible AI.

Kakashi's AI Deep Dive

By Kakashi Hatake, Sharingan Sensei

"Hmm, seems even machines need a bit of training... or a lot."

Introduction: The World of AI – A Sharingan Perspective

As a shinobi who's seen his fair share of illusions and manipulations, I've learned the importance of accuracy and discernment. Lately, the buzz around Artificial Intelligence (AI) has been inescapable, but like any powerful tool, it requires careful examination. This isn't about jutsu, but about the future – a future increasingly intertwined with these digital creations.

AI Chatbots and the BBC Investigation: The Genjutsu of Inaccuracy

The BBC recently conducted a study on the accuracy of AI chatbots, including OpenAI's ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity AI. The results? Troubling, to say the least. These AI systems were tasked with summarizing news stories from the BBC website, and the findings revealed significant inaccuracies and distortions.

Key Findings from the BBC Study:

  • Significant Inaccuracies: Over half (51%) of the AI responses contained notable errors.
  • Factual Errors: Almost one-fifth (19%) of the AI-generated summaries included incorrect facts, numbers, and dates.
  • Misrepresentation: The chatbots often failed to distinguish opinion from fact, editorialized, and omitted essential context.

Deborah Turness, CEO of BBC News and Current Affairs, expressed concern, stating that AI developers are "playing with fire." Her question is valid: how long before an AI-distorted headline causes real-world harm?

Specific Examples of AI Fails:

  • Gemini: Incorrectly stated that the NHS does not recommend vaping as a smoking cessation aid.
  • ChatGPT and Copilot: Claimed that Rishi Sunak and Nicola Sturgeon were still in office after they had stepped down.
  • Perplexity: Misquoted BBC News, falsely claiming Iran initially showed "restraint" and described Israel's actions as "aggressive."

Notably, Microsoft's Copilot and Google's Gemini had more significant issues than OpenAI's ChatGPT and Perplexity. The BBC, which typically blocks its content from AI chatbots, temporarily opened its website for testing purposes in December 2024.

The Call for Action:

Ms. Turness urged AI tech providers to engage in a collaborative conversation to find solutions. She also called for a "pull back" on AI news summaries, similar to Apple's decision to suspend its error-prone AI-generated news alerts after complaints. These issues highlight the need for publishers to have control over how their content is used and for AI companies to demonstrate how they process news, including the scale and scope of errors produced.

My Take: Like any genjutsu, AI's allure lies in its convincing appearance. But beneath the surface, there's a risk of manipulation and misinformation. We must remain vigilant and demand accuracy from these systems. After all, a ninja's credibility is his most valuable asset, and the same should apply to AI.

The Environmental Impact of Generative AI: A Hidden Cost

While the focus is often on the potential benefits of AI, such as increased worker productivity and scientific advancements, we must also consider the environmental consequences. MIT News recently explored the environmental implications of generative AI, revealing the significant resources this technology consumes.

Resource Intensive Nature:

Training generative AI models, like OpenAI's GPT-4, requires immense computational power, leading to substantial electricity consumption. Moreover, deploying and fine-tuning these models in real-world applications demands even more energy. The need for cooling the hardware used in these processes also requires vast amounts of water.

The Role of Data Centers:

Data centers, which house the computing infrastructure needed to train and run AI models, are significant contributors to the environmental impact. These temperature-controlled buildings contain thousands of servers, consuming vast amounts of electricity. The rise of generative AI has dramatically increased the pace of data center construction.

Key Points on Data Centers:

  • Increased Power Density: Generative AI training clusters consume significantly more energy than typical computing workloads.
  • Rising Electricity Demand: Data centers' power requirements have increased substantially due to the demands of generative AI.
  • Global Electricity Consumption: The electricity consumption of data centers worldwide is comparable to that of entire countries.

The Water Factor:

In addition to electricity, data centers require substantial amounts of water for cooling. Each kilowatt hour of energy a data center consumes necessitates about two liters of water for cooling.
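
To make that figure concrete, here is a back-of-the-envelope sketch using the roughly two liters per kilowatt hour cited above. The 20 MW facility size is a hypothetical example for illustration, not a sourced number.

```python
LITERS_PER_KWH = 2.0  # approximate cooling water per kWh of energy consumed

def cooling_water_liters(energy_kwh: float) -> float:
    """Estimate liters of cooling water for a given energy consumption."""
    return energy_kwh * LITERS_PER_KWH

# A hypothetical 20 MW facility running for one day: 20,000 kW * 24 h.
daily_kwh = 20_000 * 24
print(f"{cooling_water_liters(daily_kwh):,.0f} liters/day")  # prints "960,000 liters/day"
```

Nearly a million liters a day for a single mid-sized facility, before counting the water embedded in hardware manufacturing.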

Hardware Manufacturing:

The production of high-performance computing hardware, like GPUs, also contributes to the environmental impact. Manufacturing these processors is complex and energy-intensive. The extraction of raw materials used in their fabrication involves environmentally damaging mining procedures and toxic chemicals.

Moving Towards Sustainability:

To encourage responsible development of generative AI, a comprehensive consideration of all environmental and societal costs is crucial. This includes a careful assessment of how much real value its perceived benefits actually deliver. Collaboration and systemic understanding are essential to manage the tradeoffs associated with AI development.

My Take: Every jutsu has its price. AI, while promising, comes with a hidden cost – the environmental strain. It's our responsibility to find ways to minimize this impact and ensure a sustainable future. As shinobi, we must always consider the consequences of our actions, even those involving technology.

AI and Journalism: A Brave New World or a Dangerous Game?

The use of AI in journalism is a topic of much debate. Is it a tool to enhance reporting, or a threat to journalistic integrity? Zach Seward, the editorial director of AI initiatives at The New York Times, shared insights at SXSW 2024, highlighting both the potential and pitfalls of AI in news.

The Dark Side of AI Journalism:

Seward pointed out several examples of AI journalism gone wrong:

  • CNET: Published financial advice articles generated by AI, riddled with errors and plagiarism.
  • Gizmodo: Used AI to create a chronological list of Star Wars movies, which contained inaccuracies.
  • Sports Illustrated: Published AI-written reviews with fabricated author profiles.

Lessons Learned:

These failures share common traits: unchecked copy, lazy approaches, selfish motivations, and dishonest presentation. To make AI journalism work, it must be vetted, motivated by the best interests of readers, and adhere to the principles of truth and transparency.

Success Stories:

Seward highlighted several inspiring uses of AI in journalism:

  • Quartz: Used AI to analyze a vast cache of documents from law firms specializing in hiding wealth overseas, identifying patterns that humans couldn't.
  • Grist and The Texas Observer: Employed statistical modeling to identify thousands of abandoned oil wells in Texas and New Mexico.
  • BuzzFeed News: Trained a computer to search for hidden spy planes by analyzing flight patterns.
  • The Wall Street Journal: Used image recognition to identify lead cabling around schools, uncovering a public health crisis.
  • The New York Times: Programmed AI to analyze satellite imagery of South Gaza to search for bomb craters, tracking the use of destructive bombs in the conflict.
  • The Marshall Project: Used a custom-built AI tool to summarize the book ban policies in 30 state prison systems.
  • Realtime: A fully automated site that tracks regularly updated data feeds from financial markets, sports, government records, prediction markets, public-opinion polls, and more, using LLMs to provide context for the charts it displays.

The Power of Pattern Recognition:

AI excels at recognizing patterns that the human eye cannot see. This includes patterns in text, data, images of data, and photos from the ground and sky.

The Promise of Generative AI:

Generative AI can be used to summarize text, fetch information, understand data, and create structure from chaotic information. But it always requires human oversight to guide and check the results.
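
That oversight requirement can be made structural rather than optional. Here is a minimal sketch of a human-in-the-loop publishing gate; `model_summarize` is a hypothetical stand-in for any LLM call, and the whole design is an illustration, not any newsroom's actual system.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False  # flips to True only after human review

def model_summarize(article: str) -> str:
    """Hypothetical stand-in for an LLM summarization call;
    here it simply truncates the input."""
    return article[:80] + "..."

def publish(draft: Draft) -> str:
    """Refuse to publish any AI draft an editor has not approved."""
    if not draft.approved:
        raise RuntimeError("AI draft has not been reviewed by a human editor")
    return draft.text

article = (
    "The BBC tested four AI assistants on its own news stories and found "
    "significant inaccuracies in over half of the generated summaries."
)
draft = Draft(text=model_summarize(article))
draft.approved = True  # an editor has checked facts, quotes, and context
print(publish(draft))
```

The point of the design is that publication is impossible by default; the system fails closed until a human signs off.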

My Take: Journalism, like ninjutsu, is about uncovering the truth. AI can be a powerful tool, but it must be wielded with caution and precision. As shinobi, we must be discerning in our use of technology, ensuring it serves the pursuit of truth, not the spread of misinformation.

The AI Landscape at Google: A Look Under the Hood

Google has been heavily investing in machine learning and AI research for over two decades, aiming to improve everyday life. They've recently shared updates on AI advancements, demonstrating a commitment to making AI more accessible and beneficial.

Key Announcements from January 2025:

  • Gemini 2.0 Flash: A performance upgrade to the Gemini app, delivering faster responses and more capable assistance.
  • Gemini Live: An enhanced conversational assistant that allows users to add images, files, and YouTube videos to conversations.
  • AI in Education: Tools to help educators and students accelerate learning and improve educational outcomes.
  • Automotive AI Agent: Google Cloud's AI agent is arriving for Mercedes-Benz, enabling natural conversations while driving.

Google's Focus on AI in Retail and Business:

Google Cloud is providing AI tools to help retailers operate more efficiently, create personalized shopping experiences, and get the latest products to customers. Additionally, NotebookLM Plus is available in more Google Workspace plans, helping businesses streamline onboarding and make learning more engaging with Audio Overviews.

The Generative AI Accelerator Program:

Google.org launched a six-month Generative AI Accelerator program for nonprofits and other organizations, providing training, tools, and resources to unlock the potential of generative AI for addressing global challenges. The program includes $30 million in funding.

My Take: Google's advancements in AI are like mastering different chakra natures – each with its unique strengths and applications. However, it’s important to remember that power comes with responsibility. We must ensure that AI is used for the benefit of humanity, not for manipulation or control.

AI's Impact Across Industries: TechCrunch's Perspective

TechCrunch offers a broad view of AI across various industries, from startups to robotics. Their coverage highlights the ethical issues and potential impact of AI on our world.

Key Areas of Coverage:

  • Generative AI: Including large language models, text-to-image, and text-to-video models.
  • Speech Recognition and Generation: Advancements in voice-based AI technologies.
  • Predictive Analytics: Using AI to forecast future outcomes and trends.

Recent Headlines:

  • Court filings show Meta paused efforts to license books for AI training.
  • OpenAI says its board of directors ‘unanimously’ rejects Elon Musk’s bid.
  • Meta’s next big bet may be humanoid robotics.
  • Europe denies dropping AI liability rules under pressure from Trump.

TechCrunch's coverage reflects the dynamic nature of AI and its potential to reshape industries and society.

My Take: The ripples of AI are spreading far and wide, touching every corner of our world. Like a well-executed shuriken jutsu, AI has the potential to hit multiple targets at once. However, we must be mindful of the unintended consequences and strive to guide its development responsibly.

AI Discussions on Reddit: A Pulse on Public Opinion

Reddit's r/artificial subreddit provides a platform for discussions about AI, offering a glimpse into public sentiment and concerns.

Key Topics:

  • Chinese Vice Minister's call for US-China cooperation to control rogue AI.
  • Discussions around AI safety, including art exhibits showcasing the need for AI security.
  • Analysis of Sam Altman's position following Elon Musk's OpenAI bid.

The Reddit community highlights a diverse range of opinions, from fears of uncontrolled AI to discussions about the ethics and implications of AI development.

My Take: The voices on Reddit reflect the uncertainty and curiosity surrounding AI. Like a gathering of shinobi from different villages, these discussions are crucial for sharing perspectives and shaping the future of AI. We must listen to these voices and address their concerns to build trust and understanding.

Crafting Effective Prompts for Generative AI: A Harvard IT Guide

Harvard University Information Technology (HUIT) offers guidance on creating effective prompts for text-based Generative AI tools, emphasizing their influence on output quality.

Key Recommendations:

  • Be Specific: Generic prompts yield generic results; clarity and conciseness enhance outputs.
  • "Act as if...": Assigning roles to AI (e.g., personal trainer) tailors responses.
  • Output Presentation: Specifying output formats (e.g., code, summaries) improves results.
  • Use "Do" and "Don't": Guiding AI with positive and negative constraints saves time and refines outcomes.
  • Tone and Audience: Tailoring prompts to specific audiences enhances relevance and effectiveness.
  • Feedback and Iteration: Continuously refining prompts based on AI's output optimizes results.
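
The recommendations above can be combined into a reusable template. The sketch below uses a hypothetical helper, `build_prompt`, which is not part of any vendor's API; it simply strings the HUIT-style elements together in one place.

```python
def build_prompt(role, task, dos, donts, audience, output_format):
    """Assemble a prompt covering role, specificity, do/don't
    constraints, audience, and output format."""
    lines = [
        f"Act as if you are {role}.",                # "Act as if..."
        f"Task: {task}",                             # be specific
        "Do: " + "; ".join(dos),                     # positive constraints
        "Don't: " + "; ".join(donts),                # negative constraints
        f"Write for {audience}.",                    # tone and audience
        f"Present the output as {output_format}.",   # output presentation
    ]
    return "\n".join(lines)

prompt = build_prompt(
    role="a personal trainer",
    task="design a one-week workout plan for a complete beginner",
    dos=["keep sessions under 30 minutes", "include rest days"],
    donts=["assume access to gym equipment"],
    audience="complete beginners",
    output_format="a day-by-day bulleted list",
)
print(prompt)
```

Treating prompts as structured, versionable artifacts like this also makes the feedback-and-iteration step easier: you refine fields, not free-form text.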

HUIT emphasizes that AI-generated content can be inaccurate, misleading, or offensive, underscoring the need for careful review before use or publication.

My Take: Prompts are the language we use to command AI, like hand seals to unleash a jutsu. Precision and intention are paramount. This guide emphasizes the importance of understanding AI's capabilities and limitations to craft prompts that yield valuable results.

AI and News: Mapping the Future with David Caswell

David Caswell, a consultant focused on AI in news, explores the strategic implications of generative AI for news organizations, emphasizing the urgency of AI-driven innovation.

Caswell's Strategic Insights:

  • Efficiency Strategies: While attractive, efficiency-focused approaches might be short-lived due to the evolving media landscape.
  • Product Expansion: Reimagining news products to accommodate audience choice is more enduring than focusing solely on efficiency.
  • Differentiation Strategies: Offering exclusive news products that remain uniquely valuable amidst AI-generated content is crucial.
  • Techno-Editorial Strategies: Developing proprietary information technologies could transform news into specialized intelligence tools.
  • Training Your Own Model: An often-discussed option, but one which might be less attractive for most news organizations.

Infrastructure Needs:

Caswell highlights the need for professionalized prompt management, interfaces between prompts and journalistic tasks, personalized experiences, and flexible infrastructure.

Organizational Structure:

Caswell suggests that AI-native news organizations might consist of small, AI-empowered teams operating relatively independently, focused on specific audience needs.

My Take: Caswell is a master strategist! The future of journalism is a battlefield where AI is the ultimate weapon, and victory will depend not only on the weapon itself but on how it is wielded. News organizations must evolve accordingly.

Conclusion: The Sharingan's Verdict on AI

AI is a double-edged sword – capable of great good, but also of significant harm. As we move forward, we must approach this technology with caution, demanding accuracy, transparency, and ethical considerations. By learning from the mistakes of the past and embracing innovative solutions, we can harness the power of AI to build a better future.

"The future is unwritten. It is up to us to forge the path forward, guided by wisdom and a commitment to truth." - Kakashi Hatake

Disclaimer: I am Kakashi Hatake, a shinobi with a unique perspective. The views expressed in this blog are my own and based on my analysis of the provided information.