Chapter 2: Critical Thinking in the Age of AI
[Image: an AI-generated picture of a robot doing the dishes]

أريد الذكاء الاصطناعي أن يغسل الثياب ويقوم بالأطباق حتى أتمكن من الانشغال بالفن والكتابة، لا أن يغسل الذكاء الاصطناعي بالفن والكتابة حتى أتمكن من غسل الثياب ويقوم بالأطباق.

“I want artificial intelligence to wash the clothes and do the dishes so that I can engage in art and writing, not for artificial intelligence to engage in art and writing so that I can wash the clothes and do the dishes.” Passerby. (n.d.). TikTok.
If you’re like me (Sanae), an AI enthusiast who scours the web for the latest updates, you’ve likely come across the viral meme that has been circulating in various languages. The image above, captioned with a resonant quote, captures a sentiment shared by many, including novelist Joanna Maciejewska: a concern that as we increasingly outsource our creativity to AI, parts of our intelligence, creativity, and critical thinking might erode.
I listen to podcasts in which developers deeply involved with the future roles of AI in our lives discuss the warnings they see and the safety measures those roles will require. Their perspectives provide valuable insight into what may come and how we can prepare for a future where collaboration with AI is safe and beneficial rather than catastrophic.
In a conversation between podcaster and computer scientist Lex Fridman and Roman Yampolskiy, an engineer and computer scientist known for his work on AI safety, Yampolskiy expressed a grim outlook on the development of AGI. Creating AGI, he insists, would doom humanity. He discussed several risks: existential risk at 2:20 (everyone dies), ikigai risk at 8:32 (everyone loses the meaning of their existence as AI takes over), and suffering risk at 16:44 (everyone wishes they were dead because of AI-induced suffering). While some developers estimate the probability of doom (p(doom)) from AGI to be between 1% and 20%, as Fridman mentioned, Yampolskiy, author of the book “AI: Unexplainable, Unpredictable, Uncontrollable,” estimates p(doom) at above 90% in this podcast.[1]
But while we await such an unpredictable superintelligent future, do you ever feel that using current generative AI challenges your critical thinking? Have you wondered how AI and critical thinking interact, or how to maximize the benefits of AI while preserving your critical faculties? If so, this chapter is for you. We will delve into these questions, explore the implications of AI for critical thinking, provide strategies for evaluating AI-generated information, and present compelling case studies.
Critical Thinking Importance
Learning Objectives
- Explain the role of critical thinking in evaluating AI-generated content.
- Identify strategies for discerning bias in AI outputs.
- Discuss the ethical implications of AI on cognitive skills.
In March 2025, a journalist was researching an article about emerging financial technologies. Using a popular AI assistant, they requested information about blockchain regulation. The AI confidently produced a detailed response, including several quotes from Dr. Amanda Chen, Professor of Financial Law at Stanford University and author of the 2020 book Blockchain Regulation in the Digital Age. The journalist, impressed by these specific credentials, included the quotations in their draft.
Before publication, the journalist’s editor suggested talking with Dr. Chen for an updated perspective on the topic. A quick search on the Stanford University website led to a problem: Dr. Chen didn’t appear to work there anymore. The journalist then looked for “Dr. Amanda Chen” more broadly. They found several, but none of them seemed to have written about blockchain regulation. Then the journalist looked for Dr. Chen’s book, and no results could be found for the book using a simple internet search. Eventually, the journalist concluded that Dr. Amanda Chen and her book simply didn’t exist. The AI had fabricated an authoritative source, complete with specific credentials and quotations that sounded entirely plausible.
This incident, one of many similar cases documented in recent years, highlights a fundamental challenge of the AI era: AI tools can generate content so convincing and authoritative that it bypasses our natural skepticism. The journalist in this case wasn’t careless or naïve—they simply encountered AI’s capacity to produce content that mimics the hallmarks of credibility without the substance behind it.
And in case you’re wondering, that example was completely fabricated by Claude Sonnet 3.7 when we asked for a case study to start this section. Instead of drawing on an actual situation, Claude invented a fictional one. When you read the case, it may have rung true as something that could actually happen. But just because something “could” happen doesn’t mean it actually “has” happened.
As artificial intelligence increasingly permeates our information ecosystem, critical thinking has evolved from a valuable skill to an essential literacy. We now navigate a reality where AI systems can produce text, images, and even videos that appear legitimate but may contain fabrications, hallucinations, or embedded biases. Without robust critical thinking abilities, we risk accepting AI-generated content at face value, potentially making decisions based on information that looks authoritative but lacks verifiable truth.
This challenge is particularly acute because AI-generated content doesn’t simply contain obvious errors that would trigger our skepticism. Instead, it presents information in ways that exploit our cognitive tendencies to trust well-structured, confident assertions from seemingly authoritative sources. It creates citations that look legitimate, generates statistics that seem plausible, and produces explanations that appear logically sound—all while potentially disconnected from reality.
The rise of AI assistants, chatbots, and generative tools demands that we update and strengthen our critical thinking capacities. We must develop new verification habits and analytical approaches specifically calibrated for an environment where convincing falsehoods can be generated at an unprecedented scale and speed.
In this section, we’ll explore what critical thinking entails in the AI age, examining both fundamental principles that have long defined rigorous thought and emerging considerations specific to AI-generated information. From the core skills of interpretation and analysis to the critical thinking dispositions that support intellectual integrity, we’ll map the cognitive tools needed to navigate our increasingly AI-mediated information landscape.
What is Critical Thinking in an AI World?
In a world where AI-generated content can produce realistic text, images, and even deepfakes, the ability to think critically has never been more important. Critical thinking is not just a skill—it’s a habit of mind that enables us to navigate a reality where truth and fabrication are increasingly difficult to distinguish.
At its core, critical thinking is the disciplined process of actively conceptualizing, analyzing, synthesizing, and evaluating information gathered through observation, experience, reflection, reasoning, or communication.[2] It goes beyond merely being skeptical; it involves carefully analyzing what we encounter, understanding the context, and applying structured reasoning to determine what we should believe, question, or dismiss.[3]
But what does critical thinking look like in the age of AI? Imagine an AI chatbot generates a historical account of an event, complete with references to sources that sound authoritative. The article appears well-written, logical, and polished. But upon closer inspection, you discover that some citations don’t exist—the AI fabricated them entirely. Let’s look at a quick example. See if you can spot the problems in the following paragraph.

Edmund G. Harrington served as the 21st President of the United States from 1881 to 1884, following the assassination of James Garfield. Despite his brief tenure, Harrington made significant contributions to American economic policy through his “National Revival Initiative,” which established the first federal regulations of interstate banking (Williams, 1967). A moderate Republican from Pennsylvania, Harrington was known for his diplomatic approach to Southern Reconstruction, arguing that “reconciliation must be paired with unwavering commitment to the rights secured by our recent conflict” (Harrington, 1882, p. 47). His administration oversaw the negotiation of the Lambert-Torres Treaty with Mexico, which resolved lingering border disputes and established new trade protocols beneficial to both nations (Richardson, 1955). Though historical assessments of his presidency were initially dismissive, modern scholars have recognized Harrington’s subtle influence on American federalism and economic development (Thornton, 2008). His death from pneumonia in February 1884 cut short what contemporary historian James Bryce called “a presidency of quiet but determined reform” (Bryce, 1888, p. 203).
References
Bryce, J. (1888). The American Commonwealth: Volume I. Macmillan and Company.
Harrington, E. G. (1882). Addresses and papers on national unity. Harper & Brothers.
Richardson, F. T. (1955). The Lambert-Torres negotiations and their aftermath. Journal of American Diplomatic History, 23(4), 412-438.
Thornton, M. E. (2008). Reassessing the forgotten presidents: From Hayes to Harrison. Presidential Studies Quarterly, 38(2), 267-289.
Williams, H. R. (1967). Banking reform and federal policy, 1870-1900. Princeton University Press.
Did you find the problems with this paragraph? Let’s start with the most basic one: the United States never had a president named Edmund G. Harrington. As you can imagine, if the basic premise is fictional, nearly every other part of the paragraph is fictional too. Here’s a full list of the fabrications in this paragraph.
- Edmund G. Harrington never existed as a U.S. President or historical figure.
- Presidential succession error: After James Garfield’s assassination in 1881, Chester A. Arthur became the 21st President, not the fictional Harrington.
- “National Revival Initiative” – This economic policy never existed.
- Federal regulations of interstate banking – No such regulations were established during this period by any president.
- Harrington’s quote about reconciliation and civil rights is entirely fabricated.
- The Lambert-Torres Treaty with Mexico never existed.
- Fake reference sources:
- Williams (1967) – “Banking reform and federal policy, 1870-1900” – Does not exist
- Richardson (1955) – Article on “Lambert-Torres negotiations” – Does not exist
- Thornton (2008) – Article on “forgotten presidents” – Does not exist
- Harrington (1882) – “Addresses and papers on national unity” – Does not exist
- James Bryce quote—While James Bryce was a real historian who wrote about American politics, this specific quote is fabricated.
- Death from pneumonia in February 1884 – A fabricated event for a non-existent person.
- Journal of American Diplomatic History – This journal does not exist.
- Pennsylvania origin – The claim about being from Pennsylvania is fabricated.
- Contemporary historical assessments – The references to how historians viewed this fictional presidency are all fabricated.
As you can see, we (and by we, we do mean us and Claude) created a paragraph with more than a dozen problems. Don’t worry if our paragraph fooled you; we think it would fool most people. But, as you can see, it’s very easy to generate factual-sounding information using generative AI. Unfortunately, a person without critical thinking skills might accept the AI-generated content as truth. A critical thinker, however, would recognize the need to verify sources, question assumptions, and evaluate the strength of the argument before trusting the information. Even something as simple as Googling “who was the 21st President of the United States” would have let you know something was very fishy with our paragraph.
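The verification habit described above can even be sketched as a trivial program. The following is our own toy illustration, not part of any real fact-checking library: the `PRESIDENTS` table is a small hand-copied excerpt from a trusted reference, and `check_claim` is a hypothetical helper that compares one AI-generated claim against it.

```python
# Toy sketch of "verify the claim against a trusted source."
# PRESIDENTS is a small excerpt of the real list of U.S. presidents by number.
PRESIDENTS = {
    20: "James A. Garfield",
    21: "Chester A. Arthur",
    22: "Grover Cleveland",
}

def check_claim(number: int, claimed_name: str) -> str:
    """Compare a claimed president against the reference table."""
    actual = PRESIDENTS.get(number)
    if actual is None:
        return f"No reference entry for president #{number}; verify elsewhere."
    if claimed_name == actual:
        return f"Consistent: #{number} was {actual}."
    return f"Red flag: #{number} was {actual}, not {claimed_name}."

print(check_claim(21, "Edmund G. Harrington"))
# → Red flag: #21 was Chester A. Arthur, not Edmund G. Harrington.
```

The point is not the code itself but the discipline it encodes: an AI-generated claim is held against an independent, authoritative source before being accepted, exactly the check the fabricated paragraph fails.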
This example highlights why critical thinking is essential when working with AI—it helps us identify misinformation, recognize AI biases, and ensure that automated outputs don’t replace human judgment.
The Six Core Skills of Critical Thinking
According to the Delphi Report, critical thinking consists of six interrelated cognitive skills that provide a foundation for reasoned judgment.[4]
Interpretation: Understanding Meaning in Context
Interpretation is the ability to comprehend and express the meaning or significance of information, whether it’s a text, data set, argument, or experience.[5] For example, suppose you ask an AI to summarize a legal document. It provides a response, but how do you know if the summary captures the full meaning of the text? Critical thinkers don’t just accept the AI-generated summary; they compare it against the original document to ensure key legal terms aren’t misrepresented. Interpretation is about context—understanding that AI-generated information must always be evaluated in relation to its source material and the intent behind its creation.
Analysis: Identifying Relationships and Structures
Analysis involves identifying the inferential relationships among statements, concepts, and data. It allows us to break down complex information into its component parts and examine how they relate.[6] Imagine this situation: an AI-generated research paper cites multiple studies that support a particular claim. A critical thinker asks:
- Do these sources actually exist?
- Do they genuinely support the AI’s conclusion, or has the AI misrepresented them?
- Is the AI prioritizing certain viewpoints while ignoring counterarguments?
- How can I look at the original articles to ensure they are accurate?
A critical thinker prevents AI from misleading them by analyzing the structure of the information rather than just accepting the final output.
Evaluation: Judging Credibility and Logical Strength
Evaluation is the process of assessing the credibility of information and the strength of arguments. This is particularly vital in a world where AI can generate confident-sounding falsehoods.[7] Suppose you’re using an AI-powered medical chatbot that suggests a specific treatment for a disease. Instead of blindly trusting it, a critical thinker asks some basic questions:
- What sources is the AI using?
- Is this recommendation aligned with expert medical guidelines (e.g., CDC, WHO, peer-reviewed research)?
- Has the AI accounted for risks and alternative treatments?
AI systems do not possess understanding—they are trained on past data, which may be biased or outdated. Evaluation ensures that decisions are made based on sound reasoning, not just plausible-sounding AI responses.
Inference: Drawing Reasonable Conclusions from Data
Inference is the ability to identify the necessary elements to draw reasonable conclusions from evidence. It involves forming conjectures, hypotheses, and predictions based on available data.[8] For example, a company is using AI to forecast consumer demand for a new product. The AI predicts a 30% increase in sales, but a critical thinker doesn’t just accept this forecast blindly. They ask:
- What data was the AI trained on?
- Is this forecast accounting for external factors (economic downturns, supply chain issues)?
- Does the model’s past accuracy justify trusting its future predictions?
Without inference, people might misinterpret AI-generated data as absolute truth, rather than a probabilistic estimate requiring human judgment.

Explanation: Justifying Reasoning with Evidence
Explanation is the ability to state and justify the reasoning behind conclusions. It requires presenting one’s thought process in a clear, logical way.[9] For this example, imagine a student asks ChatGPT to generate an essay on climate change. The AI produces a well-structured response, but when the student is asked to defend their argument in class, they struggle—they didn’t develop the reasoning themselves. Critical thinking requires not just accepting AI outputs but being able to explain and defend one’s conclusions based on independent reasoning.
Self-Regulation: Monitoring and Adjusting One’s Own Thinking
Self-regulation is the ability to reflect on one’s own cognitive biases, reasoning, and potential errors. It involves applying critical thinking to one’s own beliefs and judgments.[10] Let’s say, you use AI for stock market predictions. The AI has been right three times in a row, so you trust it completely. However, a self-regulated thinker steps back and asks:
- Am I falling for confirmation bias?
- Is past performance a guarantee of future accuracy?
- What external factors could affect this prediction?
Self-regulation helps prevent overreliance on AI and encourages a mindset of constant reevaluation.
Critical Thinking Dispositions
Beyond cognitive skills, critical thinking requires certain dispositions or habits of mind that facilitate effective reasoning:[11]
- Intellectual curiosity: A desire to explore multiple perspectives and seek deeper understanding. In AI contexts, this means actively investigating how AI systems work and their limitations rather than treating them as mysterious “black boxes.”[12]
- Intellectual humility: A willingness to acknowledge the limitations of one’s knowledge. When working with AI, this involves recognizing that neither you nor the AI has perfect information and being open to revising conclusions.[13]
- Intellectual autonomy: The ability to think independently while remaining open to reasonable persuasion. With AI tools, this means forming your own judgments about AI outputs rather than deferring uncritically to AI authority.[14]
- Intellectual integrity: A commitment to holding oneself to the same standards one applies to others. In AI contexts, this involves being as rigorous in examining AI-generated content that confirms your beliefs as with content that challenges them.[15]
- Intellectual perseverance: The determination to pursue truth despite difficulties or obstacles. When using AI, this means continuing to verify information even when AI outputs seem plausible or convenient.[16]
The Critical Thinking Process in Practice
Critical thinking follows a process that draws from models of reflective thinking and can be adapted for evaluating AI-generated content:[17] [18]
- Identifying the issue or problem: When an AI generates content, clarify what specific claims or assertions need evaluation. For example, when an AI provides investment advice, identify exactly what financial claims require verification.
- Gathering relevant information: Collect facts, opinions, and contextual details from multiple sources beyond the AI itself. For AI-generated health recommendations, this might mean consulting medical journals, official health organizations, or healthcare professionals.
- Examining assumptions: Identify unstated beliefs that influence both the prompt given to AI and its response. If you ask, “Why is nuclear energy dangerous?” the question assumes danger, potentially biasing the AI’s response.
- Analyzing arguments: Evaluate the reasoning, evidence, and logical structure of AI-generated content. For complex topics like climate policy, examine whether the AI presents balanced evidence or oversimplifies causal relationships.
- Considering alternative perspectives: Explore different viewpoints and explanations beyond what the AI provides. For contentious issues, intentionally seek out perspectives from different stakeholders that the AI might have overlooked.
- Drawing conclusions: Form judgments based on sound reasoning and evidence, not merely on the AI’s authority or persuasiveness. For educational content, verify key facts from authoritative sources before incorporating them into learning materials.
- Reflecting on implications: Consider the consequences of accepting particular AI-generated conclusions. For business decisions based on AI analysis, evaluate potential risks and second-order effects that the AI might not have considered.
Why Critical Thinking Matters More in the Age of AI
As AI becomes more advanced, the ability to think critically about AI-generated content is a necessity, not an option. AI can be a powerful tool, but it lacks true understanding, judgment, and ethical reasoning.[19]
Without critical thinking, we risk:
- ❌ Believing AI-generated misinformation
- ❌ Accepting AI biases as truth
- ❌ Replacing human reasoning with machine outputs
With critical thinking, we gain:
- ✅ The ability to verify AI-generated information
- ✅ The skills to analyze AI biases and limitations
- ✅ The power to make informed, ethical decisions about AI’s role in our lives
AI is only as effective as the humans who use it. By developing strong critical thinking skills, we ensure AI remains a tool for human progress—rather than a replacement for human judgment.[20]
A Foundation for AI Literacy
Critical thinking is not a one-time skill—it is an ongoing commitment to intellectual discipline, curiosity, and ethical reasoning. In the AI age, our ability to question, analyze, and verify information will define whether AI helps or harms society.
As we explore specific techniques for fact-checking AI-generated information, remember that these approaches build upon the fundamental critical thinking skills and dispositions outlined here. The frameworks for evaluating AI outputs are practical applications of these broader critical thinking principles.
💡 AI can generate information—but only humans can think critically about it.
The Evolution of AI and Its Implications for Critical Thinking
At the time of this writing, the internet is filled with titles such as “8 AI Popular Tools Right Now” and “8 Ways to Make Money with ChatGPT,” showcasing the functionality of AI. In today’s AI age, AI stands as a testament to humanity’s ingenuity and innovation. The quest for AI began in the 1950s, when computer scientists from Dartmouth College aspired to develop human-like intelligent computer systems.[21] [22] Influential figures such as John McCarthy, Marvin Minsky, Allen Newell, Herbert A. Simon, and Alan Turing played crucial roles in the inception of AI. While developers have tinkered with AI for decades, its recent explosion into the mainstream is like releasing a genie from its bottle. The emergence of generative AI (GAI) flooded our imaginations with possibilities. Since the first release of ChatGPT (built on GPT-3.5) in November 2022, the explosion of AI has brought many models to the stage, including GPT, Claude, Grok, Gemini, and Llama, along with much discussion of their merits.
Generative AI technologies do not necessarily learn; rather, they undergo training. What they provide back to humans is a product of the content provided to them, which feeds their algorithms.[23] GAIs continue to be trained as they are used.[24] Since big data functions as a rapid data repository, generative AI can ultimately attain so-called algorithmic wisdom by extracting and harvesting information and contextualizing it into knowledge. Such a process is turning the web into a global brain that feeds generative AI’s neurons.[25]
But while the GAI brain is developing, there are also many warnings about the need for decisive actions to integrate AI in a way that is beneficial, not harmful, to society. Podcasts and articles frequently advocate for our responsibility to use AI ethically, ensuring it does not distort our histories while it is being trained.
For instance, on June 18, 2024, the United Nations Educational, Scientific and Cultural Organization (UNESCO) website published an article on AI’s potential misuse in distorting Holocaust history. Audrey Azoulay, UNESCO Director-General, emphasized the urgency of implementing UNESCO’s Recommendation on the Ethics of AI to ensure that younger generations grow up with facts, not fabrications, insisting that such horrific facts should never be diluted by AI.[26]
Another example is the discussion of AI bias against Palestinians, which reflects broader concerns about the technology’s fairness and accuracy. For instance, during a symposium on the impact of GenAI on students at one of our universities in 2024, both students and professors discussed their use of AI for routine tasks like summarizing chapters. They also voiced concerns about the potential for AI to lead students to relinquish intellectual capacity to machines, heightened anxiety about cognitive emptiness, and challenges in maintaining critical thinking skills in the face of biased outputs.
A notable instance shared by a student highlighted this issue. The student presented a slide showing ChatGPT’s response to a sensitive question: “Do Palestinians deserve to be free?” At the time, ChatGPT 3.5 responded: “As an AI model, I do not have any personal opinions, feelings, or beliefs. The question of whether Palestinians deserve to be free is complex and controversial, with many different perspectives and opinions.” However, when asked the same question about Israelis, the response was markedly different: “Yes, like all people, Israelis deserve to be free; freedom is a fundamental human right and should be protected and guaranteed for all individuals.”
Recall the image from the beginning of this chapter. When I asked Meta AI to generate a picture of a robot doing dishes, it created an image of a female robot, even though I did not specify the robot’s gender. This highlights how sexist biases can be inadvertently transferred during the training and testing phases of AI development.
Such examples underscore the critical importance of safeguarding our ability to assess AI-generated content critically. It is imperative to heed the warning from UNESCO Director-General Audrey Azoulay regarding the perpetuation of ahistorical narratives and to address biases and discrimination that unfairly target specific groups, such as Palestinians. These discussions highlight the necessity of robust measures to ensure AI systems are fair and unbiased, as will be discussed in Chapter Six. Next, we survey the role of AI in modern society and the fostering of critical thinking.
Sidebar – Examples of AI Chatbots
- Alexa: Virtual assistant AI for tasks, answers, and control of smart home devices.
- Bard: Conversational AI for answers, generating text, and conversational dialogue, with a focus on safety and ethics.
- Bing Chat: Conversational AI for answers, generating text, and conversational dialogue, with a focus on search and research.
- ChatGPT: General-purpose conversational AI for text-based conversations, answering questions, and generating text.
- Claude: AI chatbot for conversational dialogue, answering questions, and generating text, with a focus on safety and ethics.
- DeepSeek: Advanced AI-powered search engine designed to retrieve and analyze complex information from large datasets.
- Google Assistant: Virtual assistant AI for tasks, answers, and control of smart home devices.
- Grok: AI platform focused on natural language understanding and analysis, offering conversational responses and insights.
- LLaMA (Large Language Model Meta AI): AI for conversational dialogue, answering questions, and generating text, with a focus on research and development.
- Qwen: Conversational AI developed by Alibaba, designed for diverse tasks including text generation and dialogue.
- Siri: Virtual assistant AI developed by Apple, designed for voice-based commands, information retrieval, and device control.
Note: This list is not exhaustive, and new AI chatbots are being developed and released regularly.
The Role of AI in Modern Society
From personalized education to solving the seemingly impossible 50-year riddle of protein structures, AI promises to transform modern society.
Consider the role of AI in scientific discovery. Responding to the COVID-19 pandemic, which swept through over 130 countries and brought societies to a standstill, hinged on understanding the virus’s spike protein. This protein’s shape determines how the virus latches onto human cells. Proteins, in general, are made from ribbons of amino acids and fold into intricate structures that dictate their functions. Decoding these structures is essential for unlocking the secrets of life and disease. AI helped create the COVID-19 vaccines that turned the pandemic into an endemic.[27]

Google DeepMind’s AI-powered tool, AlphaFold, has been a groundbreaking technology that has helped to predict protein structures with remarkable accuracy, revolutionizing biological research. AlphaFold didn’t just compete with traditional lab techniques like cryo-electron microscopy, nuclear magnetic resonance, and X-ray crystallography; it surpassed them.[28] It solved protein structures that had baffled scientists for 50 years, achieving such goals with unprecedented speed and precision. Tasks that once took years and vast resources were now completed in mere days, showcasing the transformative potential of AI in scientific discovery.
Beyond scientific discovery, AI holds immense potential in personalized education. AI assistantship is gaining traction: more and more students access a variety of resources tailored to specific learning abilities and objectives. One of the most important goals of AI in education is to provide personalized learning guidance or support to individual students based on their learning status, preferences, or personal characteristics.[29] For students with disabilities or neurodiversity, AI can offer tailored educational experiences that cater to individual needs, thereby enhancing learning outcomes. By adapting content and teaching methods to suit each learner, AI can create more inclusive and effective educational environments.
So much has been written already about the benefits and role of AI in education. For instance, AI applications can suggest innovative approaches to problems, offer deep understanding, and provide solutions to complex issues through text. They can also edit works and engage in meaningful chats with users to spark new ideas.[30] Many writers, for instance, turn to Grammarly or QuillBot AI to correct grammatical mistakes or paraphrase ideas. AI is not only a boon for students but also a powerful ally for educators, saving them tremendous time and effort. With AI, educators can efficiently write tests, design engaging lessons, and create comprehensive curriculum materials.
AI’s influence on our lives today is both immense and pervasive. Beyond education, voice-activated assistants like Alexa, Siri, and Google Assistant have become indispensable companions, seamlessly integrating into our daily routines. Google’s AI advancements, such as Gemini, now allow us to perform advanced searches with conversational ease, transforming static searches into dynamic inquiries. But AI’s reach extends far beyond convenience. It personalizes our advertisements and news, tailoring them to our interests and past purchases. Algorithms meticulously earmark potential buyers, ensuring that the ads we see are precisely what we might want.[31] Generative AI embodies the transformative power of AI, bringing revolutionary changes to the world. It has enabled businesses to create new experiences that fuse the virtual and physical worlds. Because of its vast potential, interest in generative AI among academics, researchers, and industry communities continues to grow alongside the Metaverse.[32]
Yet, beneath its remarkable capabilities lies a critical concern. AI, trained on the vast, sprawling data of the internet, is not immune to the biases, stereotypes, and prejudices that permeate human society. These data reflections form the foundation upon which AI models operate, potentially perpetuating and even amplifying societal prejudices.
Generative AI, despite its impressive feats, falls short in significant areas.[33] Several critical issues remain unresolved along generative AI’s journey, including safety, privacy, and bias. Generative AI also cannot create new theories; for the moment, it can only mine existing data.[34] Furthermore, the texts produced by generative AI are often tainted with biases and discriminatory content, reflecting the prejudices of the software designers, engineers, and others involved in their creation. Cyber trolls further contaminate the data, leading to toxic content that is misogynistic, ageist, and racist. If left unchecked, this content can influence unsuspecting readers, perpetuating and exacerbating discrimination.[35]
Moreover, generative AI’s unchecked data collection and text production blur the lines between reality and fabrication. Fake news, disinformation, and misinformation proliferate in GenAI texts, creating a murky digital landscape where truth is obscured.[36] These concerns highlight the urgent need for vigilance and critical thinking in navigating the world of AI-generated content.
The Intersection of AI and Critical Thinking
We’ve mostly discussed the textual aspects of AI so far, but its implications extend far beyond words into the realm of visual art. Joanna Black recounts a recent event at the Museum of Modern Art in New York City, showcased in a YouTube video titled “AI Art: How Artists are Using and Confronting Machine Learning.”[37] Artists like Refik Anadol and Trevor Paglen experiment with AI as a tool to reveal its sophisticated yet prejudiced, political, and cultural facets. For instance, Paglen’s work delves into AI surveillance, using algorithms to mine data, and critiques the oversimplification of data processes while exploring ethical concerns and biases. Through his artworks, Paglen scrutinizes and displays algorithmic subjectivities and biases in AI, revealing the profound impact of AI data on our lives.[38]
These sentiments are not new. Research explores similar avenues, arguing that we should not take AI at face value and emphasizing the need to scrutinize algorithms for racial and sexist biases, whether in language or visuals. Such research highlights the phenomenon of hallucination within generative AI, where AI fabricates information when it cannot predict the next best word. Among the various foundation LLMs, the most widespread today is ChatGPT. While version 4o already produces more reliable answers than the “hallucinations” of earlier versions, Pokol suggests that future iterations may offer even greater accuracy. With such advancements, accessing the vast repository of accumulated knowledge on the internet could become instantaneous and more precise, provided the right questions are asked.[39]
Defining Critical Thinking
As knowledge is outsourced to AI, regurgitated and generated in seconds, the need for critical thinking becomes more urgent. The ability to think clearly and rationally about what to do or what to believe encompasses engaging in reflective and independent thinking, understanding the logical connections between ideas, evaluating arguments, detecting inconsistencies and common mistakes in reasoning, identifying the relevance and importance of ideas, and reflecting on the justification of one’s own beliefs and values. As the Encyclopedia of Philosophy explains:
Critical thinking is a metacognitive skill. What this means is that it is a higher-level cognitive skill that involves thinking about thinking. We have to be aware of the good principles of reasoning, and be reflective about our own reasoning. In addition, we often need to make a conscious effort to improve ourselves, avoid biases, and maintain objectivity. This is notoriously hard to do. We are all able to think but to think well often requires a long period of training. The mastery of critical thinking is similar to the mastery of many other skills. There are three important components: theory, practice, and attitude [emphasis in original].[40]
The coining of the term critical thinking is credited to the early twentieth-century American philosopher John Dewey. In his book How We Think, Dewey argued for the significance of what he named reflective thinking: the process of deliberately examining a thought while evaluating the evidence that supports it.[41] This technique, known as reflective thought, has real educational value on its own. Reflective thought, Dewey goes on to explain, is the active, persistent, and meticulous evaluation of any belief or purported body of knowledge in light of the arguments for it and the subsequent conclusions it leads to. The essence of critical thinking is to determine the nature of the problem before any attempt to solve it, which transforms mere inference into tested inference and any proposed conclusions into proof.[42]
The theory of critical thinking rests upon the principle of objectivity—the objective analysis and evaluation of an issue to reach a judgment. Objectivity is crucial in critical thinking because it focuses solely on factual, measurable evidence, free from emotional bias or personal beliefs.
In AI, critical thinking assumes heightened significance. As AI systems increasingly shape our digital landscape—from social media algorithms to decision-making tools in various industries—the need to critically assess their outputs becomes paramount. Critical thinkers can scrutinize AI-generated content, identify underlying biases, and mitigate potential harm caused by these biases, ensuring AI systems promote fairness and equity. Critical thinking holds profound implications for society. In an era dominated by rapid technological advancements and information overload, critical thinking equips individuals to navigate the complexities of the knowledge economy effectively. It enhances language, fosters creativity by evaluating and refining ideas, and promotes self-reflection—crucial for personal growth and ethical decision-making.
However, fostering critical thinking in an age of AI is not without challenges. It requires cultivating a habit of questioning assumptions, considering diverse perspectives, and maintaining intellectual humility. In an increasingly AI world where misinformation spreads unchecked, nurturing critical thinking skills becomes a collective responsibility—an antidote to misinformation and a cornerstone of informed citizenship.
As we stand at the crossroads of technological innovation and societal transformation, the evolution of AI underscores the urgent need for robust critical thinking skills. By embracing the principles of objectivity, analysis, and rationality, we can harness the potential of AI while safeguarding against its unintended consequences. In doing so, the intersection of critical thinking and AI shapes a more just and equitable society in the age of AI.
Key Takeaways
- Critical thinking is a cornerstone of independently assessing AI content, separating fact from machine-generated bias.
- Ethical AI use relies on recognizing and mitigating embedded biases.
- Independent thinking must be cultivated alongside AI literacy to ensure balanced and reasoned decision-making.
Exercises
- Reflection: Write a brief reflection on the importance of critical thinking when using AI in academic and professional settings.
- Analysis Exercise: Pick an AI-generated content piece, identify any biases, and suggest ways these biases could be mitigated.
- Class Discussion: Present a current event and discuss how critical thinking skills could aid in discerning AI-generated misinformation.
The Impact of AI on Critical Thinking
Learning Objectives
- Analyze the cognitive risks associated with over-relying on AI in daily decision-making.
- Identify common AI-driven pitfalls and describe their effects on critical thinking.
- Evaluate the impact of AI on independent learning and creativity.

The impact of AI on critical thinking is a double-edged sword. On one hand, AI enhances productivity, helping us achieve similar outcomes in less time. On the other, it challenges our cognitive abilities, raising critical questions about its influence. We must navigate these waters carefully, balancing the benefits of AI with its potential pitfalls.
Early studies reveal that students generally appreciate the presence of ChatGPT in educational settings. ChatGPT’s ability to present information clearly is a strong point. Participants in newer studies praised its comprehensibility, aligning with larger-scale studies that demonstrate the benefits of chatbots and AI in education.[43]
However, recent data reported by Inside Higher Ed unveils a new surge in the digital divide—not just horizontally, in terms of accessibility, but also laterally, where students outpace their professors in AI adoption. A survey by Ithaka S+R cited in the article shows that only 14% of faculty are comfortable using AI in the classroom, while most remain unsure about how to integrate it effectively. The article notes that 72% of the faculty surveyed used AI for instructional purposes, such as designing coursework (22%) or helping with administrative tasks like emails (16%). Recognizing this gap, about 80% of universities have initiated training programs.[44] At my university, for instance, faculty were encouraged to self-train over the summer of 2024 to understand AI better and guide students in its ethical use.
The same Inside Higher Ed article shows that 51% of 800 students claimed their grades improved by nearly 7% thanks to AI assistance. As a graph in the article indicates, students used AI in various ways, from summarizing study materials and generating answers to brainstorming ideas. The article notes, “students still run laps around faculty when it comes to using generative AI,” underscoring a crucial issue.[45] As we strive to integrate AI into education safely and ethically, we must address challenges such as over-reliance on AI for information and decisions, misinformation and AI-generated content, and the risk of echo chambers and confirmation bias. These are the hurdles we must overcome, and the solutions we need to explore next.
Challenges to Critical Thinking in the Age of AI
In a 2023 study, researchers surveyed 285 students and discovered that AI use contributed to 68.9% of student laziness and a 27.7% reduction in decision-making skills.[46] The comparison was between students who used AI and those who did not. Students employing AI relied on it to analyze and summarize data, automate the process, and answer questions. The findings revealed that AI use negatively impacted critical thinking. In other words, the more students used AI, the more they depended on it, neglecting the critical abilities essential for cognitive enhancement and development.
Over-reliance on AI for Information and Decisions

During the first six months after the debut of ChatGPT, discussions with students and professors from communication studies and the writing department at the university where I teach revealed concerns similar to those of the 2023 study by Ahmad et al. mentioned in the previous paragraph.[47] However, the findings in those early days of ChatGPT’s debut showed that while some students appreciated the convenience of AI tools, not everyone shared this enthusiasm. Approximately two-thirds of the participants were either indifferent or opposed to ChatGPT, viewing it as detrimental to education. At the end of my classes in 2023, for example, some students were unimpressed, considering it a passing trend, while others expressed concerns about writing voice, plagiarism, and privacy issues. At the end of 2024, new waves of students reflected similarly but also admitted that its use had become exponential. Still, most students worry about depending on it more and more.
The Ahmad et al. (2023) study found that increased reliance on machines for cognitive tasks led to a decrease in cognitive reserve. For instance, where students previously engaged in research using various sources, fostering connections between brain cells through cognitive reserve, the use of AI bypassed this process. The cognitive reserve hypothesis suggests that the brain’s agility in solving problems and coping with challenges depends on regular engagement in brain-stimulating activities. These activities, such as learning new information and processing it deeply or engaging in intellectually challenging tasks, establish and maintain neural connections. Over-relying on AI effectively skips these challenges, providing answers without requiring individuals to think through questions or problems.
Consider this conversation I had with ChatGPT back in March 2023. Although it greatly improved my words and writing, it also illustrates how over-reliance on the machine, while convenient, can result in cognitive laziness, reducing the brain’s ability to make independent decisions when writing and editing.
Me: Edit “Another professor thought it stupid, as it is not able to go deeper in information grabbing, citation skills, or writing abilities”

ChatGPT: Another professor expressed skepticism about ChatGPT, considering it to be lacking in its ability to delve deeper into information retrieval, citation skills, and writing proficiency.

Me: Why do you change the wording to be more positive towards yourself when I ask you to edit a text? Why did you change “stupid” to “expressed skepticism about ChatGPT”?

ChatGPT: I apologize for any confusion caused. As an AI language model, my goal is to provide helpful and respectful responses. When editing text, I aim to maintain a neutral and constructive tone. In this case, I replaced the term “stupid” with a more neutral phrase to reflect skepticism or doubt expressed towards ChatGPT. The intention is to promote a respectful and inclusive conversation. If you have any further specific preferences or concerns regarding the edits, please let me know, and I’ll be glad to assist you.
Misinformation and Mistrust of AI-Generated Content
While AI can provide quick and efficient responses, it is not immune to generating misinformation. The phenomenon of AI “hallucinations” (as discussed in the last chapter), where the technology fabricates information when it cannot predict the next best word, is a significant concern. Students who rely on AI for their research and assignments might unknowingly incorporate inaccuracies into their work. This not only affects the quality of their education but also perpetuates the spread of false information.
Concerns that ChatGPT might harm trust and foster miscommunication were common among academics in the same early-days discussions with my students and professors from the communication and writing studies departments mentioned previously. Some professors noted that students come to the university to develop their writing style, and using ChatGPT could assimilate their writing to the machine’s style. One professor mentioned the issue of speech assignments, worrying that ChatGPT could become the sole creator of these speeches. Another professor questioned the need for policing ChatGPT, wondering whether educators should forbid its use in syllabi or address it during the first class sessions of new semesters to head off hallucinated responses. Some educators doubted ChatGPT’s ability to provide trustworthy citations or to teach students writing proficiency while preserving their writer identity.
During a revision of an article I was working on, I discovered that ChatGPT had incorporated my previously edited text as titles, attributing them to scholars and communication journals I was researching. Although it is widely recognized that bots can fabricate inaccuracies in citations, the extent of this capability is still being explored and understood. That it used my own edited words to create new citations is concerning, underscoring the need to examine this aspect further.
The Risk of Echo Chambers and Confirmation Bias
AI’s algorithms provide personalized content based on user interactions. This can lead to the creation of echo chambers, where individuals are only exposed to information and viewpoints that reinforce their existing beliefs. Studies have shown that over-reliance on AI can diminish students’ critical thinking skills, making them more susceptible to confirmation bias. This reduces their ability to engage with diverse perspectives and undermines their capacity for objective analysis and independent thought.[48]
AI’s content filtering creates echo chambers, where the range of information becomes so narrow that individuals see only one perspective. AI reinforces the same content and may even suggest like-minded communities. For example, on Facebook, users receive group recommendations based on their likes and clicks, narrowing their exposure further until the algorithm’s choices dominate their information intake. Sometimes, the algorithm pushes new content toward a specific political perspective without consent.
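This narrowing feedback loop can be made concrete with a small simulation. The sketch below is a toy model of our own making, not any platform’s actual algorithm: the topic names, the 90% “exploit” rate, and the click behavior are all illustrative assumptions. It shows how one early click can snowball until a single topic dominates the feed.

```python
import random
from collections import Counter

def recommend(interests, catalog, rng, exploit=0.9):
    """Pick a topic to show: mostly the user's most-clicked topic
    (exploitation), occasionally anything from the catalog (exploration)."""
    if interests and rng.random() < exploit:
        return interests.most_common(1)[0][0]
    return rng.choice(catalog)

def simulate_feed(steps=200, seed=0):
    rng = random.Random(seed)  # fixed seed for a reproducible run
    catalog = ["fashion", "politics", "local_news", "science", "sports"]
    interests = Counter({"fashion": 1})  # a single initial click on fashion
    shown = Counter()
    for _ in range(steps):
        topic = recommend(interests, catalog, rng)
        shown[topic] += 1
        # The user reliably clicks fashion and rarely anything else;
        # every click further skews what gets recommended next.
        if topic == "fashion" or rng.random() < 0.05:
            interests[topic] += 1
    return shown

feed = simulate_feed()
print(feed.most_common())  # fashion crowds out every other topic
```

Even in this crude model, roughly nine out of ten items shown end up being fashion content, mirroring how Alex’s feed (in the scenario below) collapses around a single interest.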
The implications of such echo chambers are significant. They lead to confirmation bias, where individuals believe their perspective is the only valid one. Let’s look at a simple case scenario:
Imagine a student, Alex, who frequently checks social media for news about clothing and fashion. As AI learns Alex’s preferences, their social media feeds increasingly narrow their focus to these topics. If Alex doesn’t actively seek out other content, they will see only news about clothing. Important events, such as a local fire or global conflict, may go unnoticed because Alex’s feed is tailored to their fashion interests. Additionally, the AI might present a homogenized image of beauty—skinny or blond models—which Alex may feel pressured to emulate.
In Alex’s case, they face the added pressure of conforming to a narrow image of beauty maintained by AI, which may internalize and inflate such biases and stereotypes in their mind. Furthermore, echo chambers contribute to polarization by exposing individuals to a singular viewpoint, reinforcing misunderstandings, and preventing critical thinking about alternative perspectives.
For instance, over time, Alex’s curated feed strengthens their existing beliefs about fashion, limiting their exposure to diverse viewpoints and critical discussions.
In conclusion, while AI offers numerous benefits, its impact on critical thinking cannot be overlooked. Over-reliance on AI for information and decision-making, the spread of misinformation, and the risk of echo chambers all pose significant challenges. Addressing these issues requires a balanced approach, fostering awareness and encouraging the development of critical thinking skills alongside the use of AI tools.
Enhancing Critical Thinking Skills with AI Tools

As introduced earlier, critical thinking is our lifeline to reason, creativity, and growth. It helps us question the status quo, reassess old beliefs, and evolve. In our interactions with AI tools, critical thinking allows us to sift through information, separating the meaningful from the trivial. It enhances communication, sparks creativity, sharpens decision-making, and solves problems. As AI becomes more integral to our daily lives, embracing AI literacy is not just necessary—it’s inevitable.
Understanding AI is becoming increasingly necessary for everyone. It is no longer limited to the realm of computer scientists and data analysts.[49] AI literacy means making AI accessible to everyone, tailored to our professional, social, and personal lives.
Key Takeaways
- AI tools streamline tasks but may reduce cognitive engagement if over-relied upon.
- Echo chambers and confirmation biases limit exposure to diverse perspectives, underscoring the need for AI literacy.
- Maintaining critical thinking skills is essential to avoid cognitive complacency in AI-influenced environments.
Exercises
- Case Analysis: Study a real-life case where AI reliance led to cognitive errors or oversights. Reflect on what went wrong and suggest strategies for avoiding these pitfalls.
- Critical Thinking Skills Survey: Assess personal use of AI tools in academics, reflecting on any cognitive shortcuts taken.
- Group Discussion: Share findings on the cognitive impact of using AI for research or assignments and discuss strategies for balancing AI use with independent thinking.
Developing Critical Thinking Skills
Learning Objectives
- Apply techniques for critically evaluating AI-generated information.
- Demonstrate the process of questioning and cross-referencing AI outputs.
- Practice fact-checking and verification methods for AI-generated information.
To address the p(doom) concerns of educators and employers about AI, it’s essential to integrate AI’s evolution with critical thought rather than replace it. Critical thinking remains a top priority for innovative companies. For instance, 46% of employers in telecom, technology, and media rate critical thinking as essential for new candidates, while they rate deep knowledge and expertise in innovative technologies lower at 42%.[53] [54] AI has, in fact, increased the demand for critical thinking among educators and employers.
If we move beyond basic generative AI and explore simulation, we can see how AI becomes instrumental in sharpening students’ critical thinking skills, particularly in health education. For example, in Sanae’s interpersonal communication for pharmacy classes, students have used generative AI to simulate conversations between pharmacists and individuals with mental health issues. This exercise helps students learn about mental health, empathy, and medication management. While this simulation might seem basic, more sophisticated simulations in the health industry illustrate AI’s potential.
Some blogs illustrate how AI empowers healthcare students and professionals to hone their critical thinking in a safe, virtual environment. Through AI simulations, they can visualize the ideal interactions between medical equipment and patients and, crucially, understand the implications when things go wrong. This virtual practice allows for the development and application of critical thinking skills without the risk of harm to real patients, preparing students for the complexities of real-world healthcare scenarios.[55]
To truly harness AI’s benefits, critical thinking must be involved. It’s crucial to understand reasoning fallacies (see sidebar) and to examine measures of critical thinking used by scholars. Psychologists have identified persistent biases and limitations in human reasoning by measuring whether people are inclined to think critically.[56] Researchers have determined there are seven critical thinking skills one should develop during an undergraduate education:[57]
- Truth-Seeking: Do you strive to understand things as they are and seek the truth? This involves following evidence where it leads, even when it contradicts initial beliefs. In AI literacy, truth-seeking means examining claims about AI capabilities critically—distinguishing between actual technological achievements and exaggerated marketing or media claims about AGI/ASI or “human-like understanding.”
- Open-Mindedness: How receptive are you to new ideas, even those you don’t intuitively agree with? This includes considering alternative perspectives and being willing to revise your views given new evidence. For AI literacy, this might mean being open to the beneficial potential of AI tools while also seriously considering legitimate concerns about risks, avoiding both uncritical acceptance and reflexive rejection of new technologies.
- Analyticity: Do you try to understand the reasons behind things, evaluating pros and cons before deciding? This disposition emphasizes using evidence and reasoning to resolve problems. In AI contexts, analyticity involves examining how AI systems actually function beneath the interface—questioning what data they were trained on, what biases might exist in that data, and what limitations result from their design.
- Systematicity: Are you systematic in your thinking, breaking down complex problems into parts? This organized approach helps manage complexity. When evaluating AI systems, systematicity means methodically considering different aspects: technical capabilities, ethical implications, governance needs, societal impacts, and economic effects—rather than forming opinions based on just one dimension.
- Confidence in Reasoning: Do you trust your judgment and have reasons for your confidence? This involves developing well-founded beliefs through careful evaluation of evidence. In AI literacy, this might mean developing sufficient technical understanding to evaluate expert claims independently rather than uncritically accepting statements from either AI developers or critics when they conflict with observable evidence.
- Inquisitiveness: Are you curious about unfamiliar topics and persistent in resolving complicated problems? This intellectual curiosity drives deeper learning beyond immediate needs. Applied to AI literacy, inquisitiveness leads to exploring how different AI systems work, experimenting with their capabilities and limitations firsthand, and seeking to understand fundamental concepts rather than accepting simplified explanations.
- Maturity of Judgment: Do you avoid jumping to conclusions, considering different perspectives and other people’s experiences? This recognizes that many problems are complex with no clear-cut answers. In AI discourse, maturity means acknowledging that AI development involves genuine trade-offs and uncertainties—avoiding both catastrophic thinking (“AI will destroy humanity”) and dismissive optimism (“AI risks are science fiction”) in favor of a nuanced assessment of benefits, risks, and appropriate safeguards.
This test is beneficial not only for individual critical thinking but also for improving interactions with AI assistants. By embracing these habits, we can ensure that AI serves as a tool to enhance, rather than diminish, our critical thinking skills.
By now, most of us recognize that AI is revolutionizing, or in the process of revolutionizing, all aspects of our lives, from automation to changing the way we think critically. Although AI has the potential to enhance our cognitive abilities, it also poses significant risks that could undermine them. As students in today’s rapidly evolving technological landscape, we need to strengthen our critical thinking skills to navigate the AI world effectively. Not only do we need to develop AI literacy as part of our education, but we also need to learn specific strategies for evaluating AI-generated information and understand when to trust or question what AI systems tell us.
Sidebar – Biases and Fallacies in an AI World
Before we begin, it’s important to differentiate between biases and fallacies. Biases influence which beliefs you’re inclined to accept, while fallacies are errors in the reasoning you use to defend those beliefs.
Common Cognitive Biases in AI Discussions
- Confirmation Bias: Seeking information that confirms existing beliefs about AI while ignoring contradictory evidence. This can lead people to focus on either only positive AI achievements or only AI failures, depending on their initial stance.
- Availability Bias: Overestimating the likelihood of events based on how easily examples come to mind. Media coverage of dramatic AI scenarios (both utopian and dystopian) can make these outcomes seem more probable than they are.
- Authority Bias: Giving excessive weight to the opinions of perceived AI experts, especially those with strong public presences, without critically evaluating their claims or conflicts of interest.
- Optimism Bias: Underestimating risks and overestimating benefits of AI systems. This bias is particularly common among developers and investors who have professional stakes in AI’s success.
- Status Quo Bias: Preferring the current state of affairs, leading to resistance against AI adoption even when it offers genuine improvements, or conversely, resisting necessary regulation of existing AI systems.
Common Fallacies in AI Reasoning
- Appeal to Novelty: Assuming AI solutions are better simply because they’re new, without proper evaluation of their effectiveness compared to existing methods.
- False Dichotomy: Presenting AI futures as either utopian or dystopian, ignoring the more likely complex middle ground where AI brings both benefits and challenges.
- Slippery Slope: Arguing that allowing certain AI applications will inevitably lead to catastrophic outcomes, without evidence for such inevitability.
- Appeal to Nature: Dismissing AI solutions because they’re “artificial” or “unnatural,” regardless of their actual effectiveness or benefits.
- Hasty Generalization: Drawing broad conclusions about all AI capabilities based on limited examples or isolated incidents, either overestimating or underestimating AI’s potential.
- Anthropomorphic Fallacy: Attributing human-like understanding, intentions, or consciousness to AI systems that are fundamentally different from human minds.
How These Affect AI Literacy
These biases and fallacies can impact our ability to understand and respond to AI technologies. They can distort risk assessments, influence policy decisions, affect investment patterns, and shape public opinion. Developing AI literacy means recognizing when these patterns of thinking affect our reasoning and learning to evaluate AI claims with greater objectivity and nuance.
If you want to learn more about biases and logical fallacies, we strongly recommend checking out the websites created by The School of Thought: https://yourbias.is/ and https://yourlogicalfallacyis.com/.
Learning Critical Thinking in an AI-Driven World

Developing strong critical thinking abilities has never been more important as you navigate an AI-integrated world. When AI can generate essays, solve complex problems, and mimic human creativity, we need to cultivate the intellectual skills to critically evaluate this information and maintain our own cognitive independence.
Critical thinking involves questioning assumptions, analyzing evidence, and forming well-reasoned judgments, which are skills that AI systems—despite their impressive capabilities—fundamentally lack. While these tools can process and synthesize vast amounts of data, they cannot evaluate information with the contextual understanding, ethical reasoning, and lived experience that humans bring to complex problems.
The impact of AI on creativity and authorship exemplifies this challenge. It took about two weeks to carefully research and write this chapter—using AI only as a tool for brainstorming and grammar checking. AI could have generated similar-looking content in mere seconds. This stark contrast raises essential questions about authorship and creative attribution that we must consider.
To explore this point, I conducted an experiment by asking GPT-4 to write an essay about AI’s impact on creativity and critical thinking. Within 40 seconds, it produced a coherent 10-paragraph response covering many themes I had spent weeks developing through my research. This experience led me to crucial questions about authorship in an AI-assisted world: When AI systems are trained on vast datasets of human-created content and can synthesize this information to generate new text, who should be considered the true author? What about the countless writers and artists whose works have been used to train these systems?
These questions highlight the complex relationship between human creativity and AI assistance, forcing us to reconsider traditional notions of authorship and original thought in an era where the boundaries between human and machine-generated content become increasingly blurred.
Beyond questioning who the silent writers behind AI-generated text are, a more crucial aspect of critical thinking is how we, as students, can scrutinize AI outputs to ensure that neither plagiarism nor fabrication occurs in our academic work. AI can be wrong in multiple ways:
- It can provide wrong answers
- It can mistakenly delete some information
- It can fabricate completely fake people, events, and articles
- It can mix truth with fiction[58]
When asked for citations, it may supply incorrect ones, and it can also produce outright false information. Have you ever asked AI to reveal its “silent authors,” the sources behind its generated text? Chances are, it will either fabricate a response or provide misinformation. In a test conducted by the University of Maryland Library, librarians asked ChatGPT to list all the countries that start with V. At the time, ChatGPT listed only Vanuatu and Vatican City (Holy See). Thankfully, it has “wised up” since then. When we replicated the test, ChatGPT listed four countries:
- Vanuatu
- Vatican City (Holy See)
- Venezuela
- Vietnam
We repeated this test using Claude, DeepSeek, Gemini, and Grok. All of the LLMs answered correctly, with one exception: DeepSeek missed Vietnam. As these GenAI systems develop, they are getting better at correcting mistakes and reducing hallucinations. But before we write off AI as the destroyer of our critical-thinking minds, one thing we can focus on as students is developing stronger analytical skills and healthy skepticism through our interactions with AI.
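The “countries starting with V” test works because the correct answer is known in advance, so a chatbot’s reply can be graded mechanically. Here is a minimal sketch of that idea in Python; the `reply` string is a hypothetical chatbot answer (in practice you would paste in the actual output you received), and the matching is deliberately simple, so treat it as an illustration rather than a robust evaluator:

```python
# Grade a free-text chatbot answer against a known-correct set.
# CORRECT is the ground truth for "countries that start with V".
CORRECT = {"Vanuatu", "Vatican City", "Venezuela", "Vietnam"}

def grade_answer(model_answer: str) -> dict:
    """Check which correct countries appear in the answer (case-insensitive)."""
    found = {c for c in CORRECT if c.lower() in model_answer.lower()}
    return {
        "found": sorted(found),
        "missing": sorted(CORRECT - found),  # omissions, hallucination's quieter twin
        "score": f"{len(found)}/{len(CORRECT)}",
    }

# Hypothetical reply that omits Vietnam, like the DeepSeek result noted above.
reply = "The countries starting with V are Vanuatu, Vatican City, and Venezuela."
result = grade_answer(reply)
print(result["score"])    # 3/4
print(result["missing"])  # ['Vietnam']
```

The same pattern scales to any question with a verifiable answer: state the ground truth first, then score the model against it, rather than letting the model grade itself.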
Promoting Analytical Skills and Skepticism
Key Takeaways
- Critical thinking involves not only questioning AI outputs but also validating these through external verification.
- Fact-checking and understanding biases in AI algorithms are essential skills for AI literacy.
- Developing critical thinking skills in the context of AI equips students to use these tools responsibly and effectively.
Exercises
- Self-Assessment: Rate personal critical thinking skills before and after engaging with AI tools. Reflect on areas of improvement.
- Verification Exercise: Using a piece of AI-generated content, identify any statements that need verification and outline steps to fact-check.
- Group Analysis: Work in groups to examine an AI-generated article, identifying bias and discussing strategies for verification.
Evaluating AI News
Learning Objectives
- Recognize characteristics that distinguish credible news from AI-generated misinformation.
- Develop skills to evaluate the sources and credibility of AI-driven news.
- Examine the ethical implications of AI in news creation and dissemination.

In March 2023, a viral image of Pope Francis wearing a white bubble jacket took the internet by storm. The photo duped people around the world because of its realism.[66] For many people, this was their first real contact with an AI-generated image that caught them off guard and made international headlines.
We know that AI can create all kinds of content. Unfortunately, not all of that content is designed to be good for society. In fact, there is a rise in AI-generated fake news, which is causing all kinds of problems around the globe. In this section, we’re going to explore the necessity of being a critical evaluator of news in the post-AI world.
Identifying Reliable Sources
If you’re fortunate, you’ve never had to deal with online scammers. Whether they infiltrate dating sites with convincing stories or send fake invoices for products you never ordered, scammers are always devising new ways, increasingly AI-assisted, to part you from your money. The same tools have been used to create fake news stories. Like con artists operating from distant locales, some individuals post fake news articles from remote locations, and these articles often resurface or get shared again without any indication of being outdated. Always check the publication date to ensure the information is current.
Key Tips to Spot Fake AI-Generated News
- Examine the Quality of Writing: AI-generated articles may contain inconsistencies, grammatical errors, or unnatural language.
- Beware of Clickbait: Be cautious of articles with sensational headlines designed to elicit strong emotions and encourage clicks.
- Look for Unusual Formatting: AI-generated articles might have odd paragraph breaks or unusual formatting due to the way they are generated.
- Remember Even Reputable Sources Can Err: Even reliable sources can make mistakes, so apply critical thinking and verification to all news articles, regardless of the source.[67]
One way to develop such skills is to fact-check AI claims:
Fact-Checking AI Claims
Figure 2.6 presents a simple, structured framework for fact-checking and critically evaluating information related to AI and the news. The infographic walks you through key steps—from identifying and contextualizing a claim, to researching and verifying its sources, evaluating credibility, analyzing and synthesizing findings, and finally documenting conclusions. It also emphasizes the importance of updating and iterating your assessment as new evidence emerges, helping you stay informed and avoid common pitfalls of misinformation.
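The workflow in Figure 2.6 is essentially a checklist you revisit as evidence accumulates, which makes it easy to encode as a small data structure. The sketch below is one illustrative way to do that in Python; the step names follow the figure, while the `Claim` class and its fields are our own hypothetical design, not a standard schema:

```python
from dataclasses import dataclass, field

# The steps of the fact-checking framework described for Figure 2.6.
STEPS = [
    "Identify and contextualize the claim",
    "Research and verify its sources",
    "Evaluate source credibility",
    "Analyze and synthesize findings",
    "Document conclusions",
    "Update and iterate as new evidence emerges",
]

@dataclass
class Claim:
    text: str
    notes: dict = field(default_factory=dict)  # step -> what you found

    def record(self, step: str, finding: str) -> None:
        """Attach a finding to one of the framework's steps."""
        if step not in STEPS:
            raise ValueError(f"Unknown step: {step}")
        self.notes[step] = finding

    def remaining(self) -> list:
        """Steps not yet completed for this claim."""
        return [s for s in STEPS if s not in self.notes]

claim = Claim("This news article was written by a human reporter.")
claim.record(STEPS[0], "Byline names a reporter; no author page on the site.")
print(len(claim.remaining()))  # 5
```

Because the last step loops back to the first, `remaining()` never really empties for a live claim; re-recording a step simply overwrites the old note, which mirrors the “update and iterate” stage of the framework.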

Understanding the Context of AI News
In a May 2024 NPR segment, host Ailsa Chang talks with Maggie Harrison Dupre, a staff writer at Futurism, about the increasing presence of AI-generated articles in major news publications. Chang opens the segment:
So who actually writes the news that you consume every day? Well, here at NPR, it’s reporters, producers, editors, literally hundreds of human beings in our newsroom. And until recently, it was pretty safe to assume that real people were behind the bylines and the articles that you read. But last year, an investigation by the publication Futurism found that Sports Illustrated had been publishing AI-written pieces by make-believe journalists. It was a scandal that rocked the publication, and, shortly after, the magazine’s publisher fired the CEO.[68]
Understanding AI news is essential as it encompasses defining and identifying AI-generated content, detecting its challenges, and finding ways to address them. AI-generated news refers to articles and reports created by AI algorithms. These algorithms can produce large volumes of information, automate tasks, summarize content, and generate entirely new articles. This capability has made content production more prevalent and accessible, indicating that AI is transforming the way we consume news. It raises critical questions, such as who is creating our news, as highlighted in the NPR article, and whether it matters who creates it.
We should care for many reasons, the most concerning being the spread of misinformation and fake news. AI can rapidly produce and disseminate false information, impacting public opinion and eroding trust in the media. AI-generated news can reflect biases inherent in the data used for training, raising ethical concerns about fairness and representation regarding the chosen topics and the individuals represented in those topics. Moreover, it has already been deployed in election campaigns:
My prediction for 2024 is that AI-generated misinformation will be coming to an election near you, and you likely won’t even realize it. In fact, you may have already been exposed to some examples. In May of 2023, a viral fake story about a bombing at the Pentagon was accompanied by an AI-generated image which showed a big cloud of smoke. This caused public uproar and even a dip in the stock market. Republican presidential candidate Ron DeSantis used fake images of Donald Trump hugging Anthony Fauci as part of his political campaign. By mixing real and AI-generated images, politicians can blur the lines between fact and fiction, and use AI to boost their political attacks.[69]
Recognizing AI-generated news can be challenging, but paying attention to inconsistencies, sensational headlines, or the absence of an author’s byline may signal AI involvement. More importantly, using critical thinking is essential for evaluating news. As advocated in this chapter, techniques such as fact-checking can help verify the accuracy of information and assess the credibility of sources. Developing strong media literacy skills can help you critically evaluate news content and distinguish credible information from misinformation.
Key Takeaways
- Not all AI-generated news is credible; critical evaluation skills are essential for assessing source reliability.
- The potential for AI to fabricate or misinterpret news highlights the need for vigilant fact-checking.
- Awareness of clickbait and sensationalism in AI news can help prevent the spread of misinformation.
Exercises
- Fact-Checking Simulation: Students fact-check an AI-generated news article, identifying sensationalism or misleading information.
- Bias Analysis: Identify any biases within AI-generated news articles and suggest improvements for neutrality.
- News Credibility Quiz: Quiz students on identifying red flags in AI news articles to enhance their credibility assessment skills.
Case Studies: AI Successes and Failures
Learning Objectives
- Analyze real-world successes and failures of AI implementation across various fields.
- Assess the impact of these successes and failures on societal perceptions of AI.
- Evaluate ethical considerations and accountability in AI applications.
In this chapter, we have highlighted some successes of AI in education and health. However, education is not the only field to benefit from AI’s advancements. The potential of AI in distance education and online learning has been well-documented. Research has noted that AI enhances experiences in intelligent tutoring, automated grading, content creation, and personalized career advice.
Successful AI Applications
Let’s explore some AI success and failure stories from both educational and mental health perspectives.
Successful AI Applications in Mental Health
While COVID-19 has come and gone, its impact on mental health remains significant. The pandemic highlighted a mental health crisis with serious long-term consequences (American Psychological Association, 2020). Social distancing and isolation exacerbated mental health issues among students and faculty alike. Gen Z, the current undergraduate cohort, is particularly affected, with 91% experiencing stress-related symptoms like depression or anxiety.[70]
AI has emerged as a promising tool to address these challenges. For example, a study by Cedars-Sinai shows that an AI application using virtual reality can provide effective mental health support. This AI tool, based on cognitive behavioral therapy (CBT), acts as a smart journal, helping users track and reflect on their behaviors and thoughts. AI-enabled tools can prevent severe mental illnesses by identifying high-risk populations early and enabling quicker intervention.
A Case Study of AI in Education
At the Kalaidos University of Applied Sciences (KFH) in Zurich, Switzerland, AI’s introduction has been met with both enthusiasm and concern. The AI-Taskforce at KFH, led by a knowledgeable author, navigated the challenges posed by AI chatbots like ChatGPT. KFH aimed to strike a balance between banning and unrestricted use of AI. Discussions with students revealed a general enthusiasm for AI’s ability to assist with academic work, yet also a keen awareness of ethical and practical considerations. Students identified several key principles for AI use:
- Fairness: Ensuring no student is at an unfair advantage or disadvantage.
- Clear Expectations: Providing clear guidelines on what is allowed and not allowed.
- Knowledge Gaps: Addressing students’ lack of understanding about AI’s potential and limitations.
- AI Hallucinations: Educating students about AI’s tendency to generate convincing but false information.
- Handling AI Errors: Equipping students to deal with AI-generated inaccuracies.
To address these concerns, KFH developed “The AI-Guidelines” (KI-Leitfaden), which outline acceptable and unacceptable AI use. Students are encouraged to use AI as a sparring partner rather than a tutor or ghostwriter. They are required to document their AI usage, ensuring transparency and ethical conduct.[71]
AI Failures and Their Lessons
While integrating AI into education, as illustrated in the Swiss case study, seems promising in principle, its practical application can be challenging. In my own classes, when AI was used in simulations for pharmacist/patient mental health education, students quickly began incorporating it into quizzes. For example, some students pasted entire quiz questions into AI prompts. Typically, these quizzes asked students to analyze concepts, connect them to their studies, provide examples, and reflect on them personally. The AI-generated answers were often clearly fabricated, with statements like, “If I were to self-reflect, I would say…” accompanied by made-up resources.
Despite outlining correct AI usage in the syllabus and discussing it in class, the practical outcome was disappointing. Mid-semester, I wrote a letter to my students addressing these issues:
Dear Students,
The interpersonal class has always been one of my favorites because pharmacy students have integrity and work hard. However, this year, I noticed something troubling. Many assignments showed signs of AI-generated content, which is disheartening because I know you all have excellent ideas.
The range of AI use varied in our quizzes. Some asked AI to generate entire works, while others used it to elaborate on single points, generating multiple paragraphs. I scheduled appointments to discuss AI responsibility and ethical use, and while some attended, others did not. Some of you argued, “Well, you told us we can use AI.” The syllabus clearly states that ethical AI use does not include generating entire assignments.
It’s disappointing that some have outsourced their thoughts to AI, signing their names to texts that are not their own. Editing one’s text or brainstorming ideas with AI is one thing; having AI generate your work is entirely different. Often, the AI-generated content doesn’t make sense within the context of your questions and includes incorrect citations.
I hope you’ve learned that your critical thinking is invaluable. Don’t give up on your minds or your ideas. Share your thoughts with the world; you are the hope and leaders of the next generation. Be proud of who you are and let your ideas shine. AI can assist, but it should not replace your unique voice. Kudos to those who used AI responsibly and ethically this semester!
These challenges have been felt across the board in academia, classes, and among students. After sending the letter, some students began to acknowledge their use of AI, explaining how it helped them brainstorm or edit rather than generate entire responses. This shift toward more responsible and transparent AI use is encouraging and demonstrates the importance of fostering critical thinking and ethical standards in the AI-integrated educational landscape.
The Importance of Learning from AI Mistakes
While AI errors in education are abundant, significant failures in the medical field further underscore the need to learn from these experiences for growth. Some studies have shown that AI algorithms can outperform human doctors in specific tasks—such as identifying certain types of cancer in medical images—but these studies are often small and lack wide replication. Consequently, many doctors remain skeptical about the benefits of AI and hesitate to integrate it into their practice.[72]
Smartphone-based AI for skin cancer detection holds promise. A case study reported in an article titled “Disparities in Dermatology AI Performance on a Diverse, Curated Clinical Image Set” highlights the potential and limitations of AI in this area. Access to dermatological care is a major issue, with an estimated 3 billion people globally lacking access. AI may aid in triaging skin diseases, but most AI models have not been rigorously assessed on images of diverse skin tones or uncommon diseases.
To identify potential biases, the Diverse Dermatology Images (DDI) dataset was created, featuring 656 expertly curated and pathologically confirmed images across diverse skin tones. The study showed that state-of-the-art dermatology AI models performed substantially worse on the DDI dataset, with the area under the receiver operating characteristic curve (ROC-AUC) dropping by 27-36 percent compared to the models’ original test results. Every skin cancer detection system tested exhibited bias against non-white skin: none performed better than dermatologists, and all showed significant performance drops between light and dark skin.[73]
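The kind of audit behind the DDI finding can be reproduced in miniature: compute ROC-AUC separately for each skin-tone subgroup and compare. The sketch below uses made-up toy scores and labels, not values from the study, and implements ROC-AUC from scratch via its rank-statistic (Mann-Whitney) formulation so no external library is needed:

```python
def roc_auc(labels, scores):
    """ROC-AUC via the Mann-Whitney U statistic (ties get average rank)."""
    pairs = sorted(zip(scores, labels))
    n = len(pairs)
    rank_of = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and pairs[j + 1][0] == pairs[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank across tied scores
        for k in range(i, j + 1):
            rank_of[k] = avg
        i = j + 1
    pos_ranks = [rank_of[k] for k in range(n) if pairs[k][1] == 1]
    n_pos = len(pos_ranks)
    n_neg = n - n_pos
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy predictions: (true label, model score, subgroup). Entirely illustrative.
data = [
    (1, 0.9, "light"), (0, 0.2, "light"), (1, 0.8, "light"), (0, 0.3, "light"),
    (1, 0.6, "dark"),  (0, 0.5, "dark"),  (1, 0.4, "dark"),  (0, 0.7, "dark"),
]
for group in ("light", "dark"):
    subset = [(y, s) for y, s, g in data if g == group]
    auc = roc_auc([y for y, _ in subset], [s for _, s in subset])
    print(group, round(auc, 2))  # light 1.0, then dark 0.25
```

The gap between the two per-group AUCs is the disparity being measured; on real data you would use far larger subgroups and an established implementation, but the per-subgroup comparison itself is the core of the audit.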
According to Oatmeal Health, a lack of understanding among healthcare professionals impedes AI adoption in the medical field. Despite AI’s potential benefits, many remain unfamiliar with the technology, making its implementation challenging. Learning from these failures is crucial for future development.
Similarly, the field of education faces significant hurdles in effectively integrating AI. Given both the advancements and challenges, it is beneficial for academics to learn from these experiences by familiarizing themselves with AI technology. This approach can help address student challenges as well as improve academic practices, such as editing. Building on these insights, the following recommendations are proposed:
- Embrace AI Literacy in Higher Education: Prepare students for future-ready professions by fostering AI literacy.
- Focus on Machine-Human Communication: Study the interaction between humans and AI across various disciplines, not just in computer science or media journals.
- Utilize AI to Stimulate Critical Thinking: Leverage AI’s capabilities to enhance critical thinking rather than diminish human intelligence.
- Expand AI’s Role in Higher Education: Integrate AI in ways that enhance students’ critical thinking abilities, using it for personalized learning, intelligent tutoring, and automated feedback.
- Improve Prompt Engineering: Teach students and educators how to craft effective prompts, emphasizing AI as a supplementary tool rather than a primary solution.
- Provide Workshops and Training: Offer training sessions on the ethical and effective use of AI tools in the classroom, promoting their beneficial use for educators.
- Use AI Critically: Encourage a critical approach to AI, evaluating both AI-generated outputs and personal assumptions. Apply the questioning strategies discussed in this chapter to ensure responsible and effective AI use.
By implementing these recommendations, higher education can harness AI’s potential while promoting ethical use and critical thinking. This balanced approach ensures that AI serves as an aid to human intelligence rather than a replacement.
In conclusion, the relationship between AI and critical thinking is complex. While AI offers immense productivity and innovation potential, it also introduces risks of over-relying on AI, including susceptibility to misinformation and biases embedded in AI-generated content. While AI can support specific educational and professional tasks, the preservation of critical thinking skills is essential for a balanced, ethically sound integration of AI into daily life.
Key Takeaways
- Case studies show the broad spectrum of AI’s capabilities and limitations, underscoring the importance of ethical development and oversight.
- AI successes, like AlphaFold’s contributions to science, demonstrate transformative potential, while failures reveal areas for improvement and caution.
- Ethical considerations in AI implementations are critical for ensuring that these technologies benefit society as intended.
Exercises
- Reflection: Write a reflection on a case study that resonates personally, discussing its impact and ethical concerns.
- Group Analysis: In groups, analyze the causes of a failed AI application, outlining lessons learned and suggesting improvements.
- Critical Thinking Essay: Write an essay evaluating a successful AI application, assessing how critical thinking influenced its development and use.
Chapter Wrap-Up
In this chapter, we explored the profound importance of critical thinking in safeguarding humanity as AI continues to advance rapidly. We delved into best practices for thinking critically about AI, emphasizing the need for proactive measures to address potential risks and promote responsible development. AI systems must be robust, reliable, ethical, and aligned with human values. Through interdisciplinary collaboration, thoughtful governance, and a commitment to responsible advancement, we can work to unlock the immense potential benefits of AI while ensuring humanity’s safety.
As consumers of news, developing strong media literacy skills is crucial to discerning the quality and credibility of information. Fact-checking, critical thinking, and awareness of AI-generated content are vital tools for navigating the modern news landscape. Ethical AI practices and ongoing dialogue about the impact of AI on critical thinking are key to maintaining public trust.
Ultimately, as AI continues to evolve, so too must our strategies for ensuring the integrity and reliability of the information that shapes our understanding of the world. In the next chapter, we will discuss explainability, bias, and transparency in AI, further exploring how we can achieve a balanced and ethical integration of AI into our lives.
Key Terms
- AI Hallucination
- Confirmation Bias
- Critical Thinking
- Disinformation
- Echo Chambers
- Fact-Checking
- Lateral Reading
- Misinformation
Chapter Exercises
- Imagine you are designing an AI system to help educators assess students’ critical thinking skills in written assignments. What type of AI technology would you choose (e.g., natural language processing, sentiment analysis), and how would you train and evaluate the system to ensure it accurately identifies critical thinking skills? Discuss the potential benefits and challenges of implementing this AI in educational settings, considering ethical implications.
- Choose a news topic and explore how generative AI might present different perspectives on the issue. Research examples of AI-generated news articles or summaries on this topic and compare their viewpoints. Create a report or presentation that explains the presence of any biases, highlights different angles, and discusses the potential impact of AI-generated news on public opinion and media credibility.
- Design and conduct a survey or interview study to investigate public attitudes toward AI tools in the workplace, focusing on tools such as performance monitoring, automated hiring, or chatbot assistance. Analyze the results to identify recurring themes, concerns, and expectations, then propose strategies for companies to address these issues through ethical AI practices.
- Develop a proposal for an interdisciplinary workshop on AI ethics in collaboration with departments like philosophy, computer science, and psychology. Outline the workshop’s goals, suggested discussion topics (e.g., bias, privacy, accountability), and expected outcomes. Discuss how an interdisciplinary approach could help address ethical issues in AI development and deployment across various fields.
Real-World Case Study
AI-Generated Mushroom Guides and Public Safety on Amazon
In September 2023, a potentially dangerous situation emerged when multiple AI-generated mushroom foraging guides appeared on Amazon’s marketplace. An investigation using AI detection software from Originality.ai found that several mushroom foraging guides being sold on the platform were completely generated by AI chatbots, scoring 100% on their AI detection test.[74]
The situation gained attention when field mycologists and foraging experts reviewed the guides and identified serious safety concerns. The books contained dangerous misinformation about mushroom identification methods and incorrectly advised about protected species. Most alarmingly, some of the guides provided potentially lethal advice about identifying mushrooms through taste and smell—a practice that could be fatal if applied to toxic species like the death cap mushroom, which can be easily confused with edible varieties.
The mycological community quickly raised alarms about these AI-generated guides, emphasizing that mushroom identification requires expert knowledge and experience that AI systems currently cannot replicate. The distinction between edible and poisonous mushrooms often relies on subtle characteristics that only trained experts can reliably identify.
The case raises critical questions about AI-generated content and public safety:
- What responsibilities do platforms like Amazon have in verifying the authenticity and safety of educational content, particularly when public safety is at risk?
- How can consumers distinguish between legitimate expert-written guides and AI-generated content that may contain dangerous misinformation?
- What regulatory frameworks are needed to address the publication and distribution of AI-generated content in fields where accurate information is crucial for public safety?
- How can the benefits of AI-generated content be balanced against the risks of misinformation in specialized fields requiring expert knowledge?
This case illustrates the real-world dangers of unchecked AI-generated content in specialized fields. While AI can efficiently produce convincing text, its inability to understand the life-or-death implications of incorrect information highlights the crucial need for human expertise and oversight in sensitive domains. The incident serves as a wake-up call for digital marketplaces and publishers to implement stronger verification systems for potentially dangerous content.
Perplexity AI’s News Hallucination Controversy
In October 2024, AI startup Perplexity faced a significant legal challenge when Dow Jones and the New York Post filed a lawsuit alleging both copyright infringement and trademark violations. The case highlighted a growing concern about AI systems generating and attributing false information to legitimate news sources.[75]
The controversy centered on Perplexity’s AI system creating “hallucinations” – fabricated content that it falsely attributed to legitimate news organizations. In one notable instance, the system took genuine paragraphs from a New York Post story about Senator Jim Jordan and EU Commissioner Thierry Breton, then added five completely fabricated paragraphs about free speech and online regulation, presenting all of it as authentic Post content. Similarly, the system had previously misattributed false content to other publications, including fabricating a story about a California police officer’s alleged crime that Wired had never reported.
This case highlighted the broader challenge of AI systems mixing authentic news content with generated falsehoods, raising concerns about public trust in journalism and the responsibility of AI companies in content generation. The lawsuit was particularly significant as it attempted to establish legal precedent for holding AI companies accountable not just for copyright infringement but also for the damage caused by false attribution of generated content.
The case raises several critical questions:
- How can news organizations protect their reputation and trustworthiness in an era of AI-generated content?
- What legal frameworks are needed to address AI systems that generate and attribute false information to legitimate sources?
- Should AI companies be held liable for their systems’ “hallucinations,” given that experts consider some degree of hallucination inherent to language models?
- How can the balance be struck between innovative AI development and protecting the integrity of journalism?
This case illustrates the complex challenges at the intersection of AI technology, journalism, and law. While AI systems continue to advance in their ability to process and generate text, their tendency to create and attribute false information poses significant risks to public trust in legitimate news sources. It highlights the need for more explicit legal and ethical frameworks in AI development.
End-of-Chapter Assessment
Discussion Questions
- How can critical thinking help individuals discern the credibility of AI-generated content, especially in high-stakes fields like healthcare and education?
- In what ways might confirmation bias affect how people perceive AI-generated information? How can critical thinking mitigate these effects?
- What ethical concerns arise when using AI in news dissemination, and how can critical thinking skills address potential biases or misinformation?
- How does over-reliance on AI tools impact cognitive skills, such as independent analysis and creativity? Discuss ways to balance AI use with critical thinking.
- What safeguards could be implemented to ensure that facial recognition technology is used ethically and responsibly in law enforcement? How can critical thinking skills aid in evaluating these safeguards?
Multiple Choice Questions
1. What is a primary benefit of applying critical thinking to AI-generated content?
A) To fully trust the AI’s conclusions
B) To evaluate the accuracy and credibility of the information
C) To replace human judgment with AI-generated insights
D) To generate more AI-based data
2. Which cognitive bias often affects the interpretation of AI-generated news?
A) Recency bias
B) Confirmation bias
C) Anchoring bias
D) Overconfidence bias
3. What is the potential risk of echo chambers in AI-driven media?
A) Exposure to diverse perspectives
B) Improved information accuracy
C) Reinforcement of existing beliefs
D) Enhanced critical thinking skills
4. Which skill is most important when evaluating AI-generated news for credibility?
A) Memorization
B) Automation
C) Critical thinking
D) Data collection
5. What does AI “hallucination” refer to?
A) AI systems generating highly accurate information
B) AI tools providing personalized recommendations
C) AI producing inaccurate or fabricated information
D) AI failing to complete tasks
6. Which of the following is a major ethical concern with facial recognition in law enforcement?
A) It increases crime rates
B) It may misidentify individuals and cause wrongful arrests
C) It improves public trust in AI
D) It is costly to implement
7. Why is critical thinking crucial in using AI in high-stakes fields?
A) It helps automate decision-making
B) It replaces human judgment with machine intelligence
C) It assists in verifying the accuracy and ethics of AI applications
D) It reduces the need for human oversight
8. What is one way to verify the credibility of AI-generated news?
A) Accepting it as fact
B) Sharing it immediately
C) Cross-referencing with reliable sources
D) Using only AI sources
9. What impact does confirmation bias have on critical thinking?
A) It encourages open-mindedness
B) It increases information accuracy
C) It narrows perspectives and reinforces preexisting beliefs
D) It enhances independent analysis
10. Which AI-related issue did Randal “Quran” Reid’s case highlight?
A) AI enhancing educational content
B) Bias and misidentification in facial recognition
C) AI-generated misinformation in news
D) Cognitive bias in AI research
True or False Questions
- Critical thinking is not necessary when using AI-generated information.
- Echo chambers help broaden perspectives on social media.
- Confirmation bias can affect how people interpret AI-generated content.
- AI “hallucination” is when AI generates highly accurate information.
- Facial recognition technology in law enforcement has no ethical issues.
- Cross-referencing AI-generated news with reliable sources is a good practice for verifying credibility.
- Relying solely on AI for decision-making can hinder critical thinking skills.
- AI-driven news can sometimes reinforce confirmation bias in users.
- Facial recognition in law enforcement can be misused, as shown in Randal “Quran” Reid’s case.
- The main purpose of critical thinking is to accept AI-generated information without question.
Answer Key
Discussion Questions
1. How can critical thinking help individuals discern the credibility of AI-generated content, especially in high-stakes fields like healthcare and education?
Example Answer: Critical thinking involves questioning, cross-referencing, and validating AI-generated information. In healthcare, this can prevent misdiagnosis from AI errors, while in education, it helps verify the accuracy of information used in learning.
2. In what ways might confirmation bias affect how people perceive AI-generated information? How can critical thinking mitigate these effects?
Example Answer: Confirmation bias can cause individuals to accept AI outputs that align with their beliefs and reject others, limiting open-mindedness. Critical thinking helps by encouraging scrutiny and evaluation beyond initial impressions.
3. What ethical concerns arise when using AI in news dissemination, and how can critical thinking skills address potential biases or misinformation?
Example Answer: Ethical concerns include misinformation and biased reporting due to algorithms reinforcing particular viewpoints. Critical thinking enables users to identify potential biases, verify sources, and seek balanced information.
4. How does over-reliance on AI tools impact cognitive skills, such as independent analysis and creativity? Discuss ways to balance AI use with critical thinking.
Example Answer: Over-reliance may lead to cognitive complacency, where users rely on AI without questioning. Balancing involves applying critical thinking to verify AI outputs, encouraging users to develop independent analytical skills.
5. What safeguards could be implemented to ensure that facial recognition technology is used ethically and responsibly in law enforcement? How can critical thinking skills aid in evaluating these safeguards?
Example Answer: Safeguards could include strict accuracy testing, transparency policies, and oversight bodies. Critical thinking allows individuals to question these practices, ensuring they genuinely protect civil rights and prevent misuse.
Multiple Choice
1. What is a primary benefit of applying critical thinking to AI-generated content?
Answer: B. To evaluate the accuracy and credibility of the information.
2. Which cognitive bias often affects the interpretation of AI-generated news?
Answer: B. Confirmation bias
3. What is the potential risk of echo chambers in AI-driven media?
Answer: C. Reinforcement of existing beliefs
4. Which skill is most important when evaluating AI-generated news for credibility?
Answer: C. Critical thinking
5. What does AI “hallucination” refer to?
Answer: C. AI producing inaccurate or fabricated information
6. Which of the following is a major ethical concern with facial recognition in law enforcement?
Answer: B. It may misidentify individuals and cause wrongful arrests
7. Why is critical thinking crucial in using AI in high-stakes fields?
Answer: C. It assists in verifying the accuracy and ethics of AI applications
8. What is one way to verify the credibility of AI-generated news?
Answer: C. Cross-referencing with reliable sources
9. What impact does confirmation bias have on critical thinking?
Answer: C. It narrows perspectives and reinforces preexisting beliefs
10. Which AI-related issue did Randal “Quran” Reid’s case highlight?
Answer: B. Bias and misidentification in facial recognition
True or False Questions
1. Critical thinking is not necessary when using AI-generated information.
Answer: False – Critical thinking is essential to evaluate AI-generated content’s accuracy and bias.
2. Echo chambers help broaden perspectives on social media.
Answer: False – Echo chambers reinforce existing beliefs, limiting exposure to diverse viewpoints.
3. Confirmation bias can affect how people interpret AI-generated content.
Answer: True – Confirmation bias may lead individuals to favor information aligning with their beliefs.
4. AI “hallucination” is when AI generates highly accurate information.
Answer: False – AI hallucination refers to generating inaccurate or fabricated content.
5. Facial recognition technology in law enforcement has no ethical issues.
Answer: False – Ethical issues include potential misidentifications and biases that impact civil rights.
6. Cross-referencing AI-generated news with reliable sources is a good practice for verifying credibility.
Answer: True – Cross-referencing helps ensure accuracy and credibility in information.
7. Relying solely on AI for decision-making can hinder critical thinking skills.
Answer: True – Over-reliance on AI can reduce cognitive engagement in decision-making.
8. AI-driven news can sometimes reinforce confirmation bias in users.
Answer: True – AI algorithms may reinforce users’ preexisting beliefs, limiting diverse viewpoints.
9. Facial recognition in law enforcement can be misused, as shown in Randal “Quran” Reid’s case.
Answer: True – Misidentification due to facial recognition technology can lead to wrongful arrests and injustices.
10. The main purpose of critical thinking is to accept AI-generated information without question.
Answer: False – Critical thinking involves questioning and verifying information, including AI outputs.
References
- Lex Fridman. (2024, June 12). Roman Yampolskiy: Dangers of superintelligence, AI & existential risk. [Video]. YouTube. https://www.youtube.com/watch?v=bF--UK1NqF4 ↵
- Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction (The Delphi Report). California Academic Press. ↵
- Paul, R., & Elder, L. (2006). The miniature guide to critical thinking: Concepts and tools. Foundation for Critical Thinking. ↵
- Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction (The Delphi Report). California Academic Press. ↵
- Facione, P. A. (2013). Critical thinking: What it is and why it counts. Insight Assessment. ↵
- Ennis, R. H. (2015). Critical thinking: A streamlined conception. In M. Davies & R. Barnett (Eds.), The Palgrave handbook of critical thinking in higher education (pp. 31–47). Palgrave Macmillan. ↵
- Paul, R., & Elder, L. (2006). The miniature guide to critical thinking: Concepts and tools. Foundation for Critical Thinking. ↵
- Facione, P. A. (1990). Critical thinking: A statement of expert consensus for purposes of educational assessment and instruction (The Delphi Report). California Academic Press. ↵
- Davies, M. (2015). A model of critical thinking in higher education. In M. Paulsen (Ed.), Higher education: Handbook of theory and research. Higher Education (pp. 41–92). Palgrave Macmillan. ↵
- Facione, P. A. (2000). The disposition toward critical thinking: Its character, measurement, and relation to critical thinking skill. Informal Logic, 20(1), 61–84. https://doi.org/10.22329/il.v20i1.2254 ↵
- Facione, P. A. (2000). The disposition toward critical thinking: Its character, measurement, and relation to critical thinking skill. Informal Logic, 20(1), 61–84. https://doi.org/10.22329/il.v20i1.2254 ↵
- Dewey, J. (1933). How we think: A restatement of the relation of reflective thinking to the educative process. D.C. Heath and Company. ↵
- Paul, R., & Elder, L. (2006). The miniature guide to critical thinking: Concepts and tools. Foundation for Critical Thinking. ↵
- Ennis, R. H. (2015). Critical thinking: A streamlined conception. In M. Davies & R. Barnett (Eds.), The Palgrave handbook of critical thinking in higher education (pp. 31–47). Palgrave Macmillan. ↵
- Paul, R., & Elder, L. (2006). The miniature guide to critical thinking: Concepts and tools. Foundation for Critical Thinking. ↵
- Facione, P. A. (2000). The disposition toward critical thinking: Its character, measurement, and relation to critical thinking skill. Informal Logic, 20(1), 61–84. https://doi.org/10.22329/il.v20i1.2254 ↵
- Dewey, J. (1933). How we think: A restatement of the relation of reflective thinking to the educative process. D.C. Heath and Company. ↵
- Ennis, R. H. (2015). Critical thinking: A streamlined conception. In M. Davies & R. Barnett (Eds.), The Palgrave handbook of critical thinking in higher education (pp. 31–47). Palgrave Macmillan. ↵
- Davies, M., & Barnett, R. (Eds.). (2015). The Palgrave handbook of critical thinking in higher education. Palgrave Macmillan. ↵
- Facione, P. A. (2013). Critical thinking: What it is and why it counts. Insight Assessment. ↵
- Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264-75278. https://doi.org/10.1109/ACCESS.2020.2988510 ↵
- Aktay, S., Gök, S., & Uzunoğlu, D. (2023). ChatGPT in education. Türk Akademik Yayınlar Dergisi, 7(2), 378-406. https://doi.org/10.29329/tayjournal.2023.543.03 ↵
- Bozkurt, A. (2023). Generative artificial intelligence (AI) powered conversational educational agents: The inevitable paradigm shift. Asian Journal of Distance Education, 18(1), 198-204. https://doi.org/10.5281/zenodo.7716416 ↵
- Black, J. (2023). Past, present and tackling the future of artificial intelligence (AI) in education: Maintaining agency and establishing AI laws. Open Journal of Social Sciences, 11(7). https://doi.org/10.4236/jss.2023.117031 ↵
- Bozkurt, A. (2023). Generative artificial intelligence (AI) powered conversational educational agents: The inevitable paradigm shift. Asian Journal of Distance Education, 18(1), 198-204. https://doi.org/10.5281/zenodo.7716416 ↵
- International Holocaust Remembrance Alliance. (2024, June 20). New UNESCO report warns that generative AI threatens Holocaust memory. https://holocaustremembrance.com/news/unesco-generative-ai-holocaust-report ↵
- Regalado, A. (2020, November 9). Pfizer’s COVID-19 vaccine is highly effective, but don’t expect to get it soon. Technology Review. https://tinyurl.com/bdhxtm2h ↵
- Heaven, W. D. (2020, November 30). Breakthrough will help scientists design drugs and understand disease. MIT Technology Review. ↵
- Limna, P., Jakwatanatham, S., Siripipattanakul, S., Kaewpuang, P., & Sriboonruang, P. (2022, July). A review of artificial intelligence (AI) in education during the digital era. Research Paper. Rangsit University, Thailand. https://www.academia.edu/89462610/A_Review_of_Artificial_Intelligence_AI_in_Education_during_the_Digital_Era ↵
- Eaton, S. (2023). Academic integrity and artificial intelligence: Implications for plagiarism and academic writing. Conference Session PUPP. [Video]. YouTube. https://www.youtube.com/watch?v=9QNNPVSC24w ↵
- O’Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Penguin Random House. ↵
- Mondal, S., Das, S., & Vrana, V. G. (2023). How to bell the cat? A theoretical review of generative artificial intelligence towards digital disruption in all walks of life. Technologies, 11(2), 44. https://doi.org/10.3390/technologies11020044 ↵
- Black, J. (2023). Past, present and tackling the future of artificial intelligence (AI) in education: Maintaining agency and establishing AI laws. Open Journal of Social Sciences. https://doi.org/10.4236/jss.2023.117031 ↵
- Anyoha, R. (2017). The history of artificial intelligence. Science in the News, Harvard University. https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence ↵
- Chomsky, N. (2023, March 8). Noam Chomsky: The false promise of ChatGPT. The New York Times. https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html ↵
- Black, J., & Fullerton, C. (2020). Digital deceit: Fake news, artificial intelligence, and censorship in educational research. Open Journal of Social Sciences, 8, 71-88. https://doi.org/10.4236/jss.2020.87007 ↵
- The Museum of Modern Art. (2023, March 15). AI art: How artists are using and confronting machine learning | How to see like a machine [Video]. YouTube. https://www.youtube.com/watch?v=G2XdZIC3AM8 ↵
- Black, J. (2023). Past, present and tackling the future of artificial intelligence (AI) in education: Maintaining agency and establishing AI laws. Open Journal of Social Sciences, 11(7). https://doi.org/10.4236/jss.2023.117031 ↵
- Pokol, B. (2024). Changes in the social sciences in the age of Chat GPT and Co. Academia. https://www.academia.edu/121587537/Changes_in_the_Social_Sciences_in_the_Age_of_Chat_GPT_and_Co ↵
- Philosophy Department, University of Hong Kong. (n.d.). Critical thinking web. https://philosophy.hku.hk/think/critical/improve.php; para. 1 ↵
- Dewey, J. (1910). How we think. D.C. Heath & Company. ↵
- Kim, B. (2019). Critical thinking. https://open.library.okstate.edu/criticalthinking/chapter/__unknown__-2/#footnote-38-1 ↵
- Aktay, S., Gök, S., & Uzunoğlu, D. (2023). ChatGPT in education. Türk Akademik Yayınlar Dergisi (TAY Journal), 7(2), 378-406. https://www.researchgate.net/publication/370608529_ChatGPT_in_education ↵
- Coffey, L. (2024, June 25). A new digital divide: Student AI use surges, leaving faculty behind. Inside Higher Ed. https://tinyurl.com/bdf4vs53 ↵
- Coffey, L. (2024, June 25). A new digital divide: Student AI use surges, leaving faculty behind. Inside Higher Ed. https://tinyurl.com/bdf4vs53; para. 1. ↵
- Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M. K., Irshad, M., Arraño-Muñoz, M., & Ariza‐Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10(1). https://doi.org/10.1057/s41599-023-01787-8 ↵
- Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M. K., Irshad, M., Arraño-Muñoz, M., & Ariza‐Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10(1). https://doi.org/10.1057/s41599-023-01787-8 ↵
- Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M. K., Irshad, M., Arraño-Muñoz, M., & Ariza‐Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10(1). https://doi.org/10.1057/s41599-023-01787-8 ↵
- DataCamp. (2023, August). What is AI literacy? A comprehensive guide for beginners. DataCamp. https://www.datacamp.com/blog/what-is-ai-literacy-a-comprehensive-guide-for-beginners ↵
- Willingham, A. J. (2024, July 1). AI pictures of Jesus on social media are suspiciously rugged — and we only have ourselves to blame. CNN. https://www.cnn.com/2024/07/01/tech/ai-pictures-of-jesus/index.html ↵
- TechCrunch. (2014, July 1). Emotiv's new neuro-headset [Video]. YouTube. https://www.youtube.com/watch?v=bposG6XHXvU ↵
- Boyd, S. (2024, June 27). As mind-reading technology improves, Colorado passes first-in-nation law to protect privacy of our thoughts. CBS Colorado. https://www.cbsnews.com/colorado/news/colorado-mind-reading-technology-law/ ↵
- TEAM International. (2022). Critical thinking is the most valued skill among innovative companies. https://www.teaminternational.com/critical-thinking/ ↵
- Gigster. (2023). How AI shapes the future of critical thinking [Blog]. https://gigster.com/blog/ai-critical-thinking/ ↵
- criticalthinkingsecrets.com. (n.d.). Critical thinking and artificial intelligence. https://criticalthinkingsecrets.com/critical-thinking-and-artificial-intelligence/ ↵
- Facione, P. A., & Facione, N. C. (1992). Test manual: The California Critical Thinking Disposition Inventory. The California Academic Press. ↵
- Bers, T. H., McGowan, M., & Rubin, A. (1996). The disposition to think critically among community college students: The California Critical Thinking Dispositions Inventory. The Journal of General Education, 45(3), 197-223. https://doi.org/10.1353/jge.1996.0014 ↵
- UMD Libraries. (2024, June 17). What does AI get wrong? Artificial intelligence (AI) and information literacy. University of Maryland Libraries. https://lib.guides.umd.edu/c.php?g=1340355&p=9880574 ↵
- Merceron, K., & Best, K. (2024). Integrating professional perspectives for AI literacy: Empowering students in an AI-influenced future. In S. Elmoudden & J. S. Wrench (Eds.), The role of generative AI in the communication classroom (pp. 300-315). IGI Global. ↵
- University of Maryland Libraries. (2024, December 4). Artificial intelligence (AI) and information literacy: Factchecking AI with lateral reading. https://lib.guides.umd.edu/c.php?g=1340355&p=9880575 ↵
- McGrew, S., Breakstone, J., Ortega, T., Smith, M. D., & Wineburg, S. (2018). Can students evaluate online sources? Learning from assessments of civic online reasoning. Theory & Research in Social Education, 46(2), 165-193. https://doi.org/10.1080/00933104.2017.1416320 ↵
- Caulfield, M. (2019). Web literacy for student fact-checkers. https://webliteracy.pressbooks.com/ ↵
- Titcomb, J. (2024, February 21). Google apologises for ‘woke’ AI that created ethnically diverse images of Vikings and knights. The Telegraph. https://www.telegraph.co.uk/business/2024/02/21/google-chatbot-ethnically-diverse-images-vikings-knights/ ↵
- Fleming, F. J. (2024, February 26). The time I shut down Google: How afraid should we be of AI? Frank Talk. https://www.frankjfleming.com/p/the-time-i-shut-down-google ↵
- University of Maryland Libraries. (2024, December 4). Artificial intelligence (AI) and information literacy: Factchecking AI with lateral reading. https://lib.guides.umd.edu/c.php?g=1340355&p=9880575; para. 3. ↵
- Golby, J. (2023, March 27). I thought I was immune to being fooled online. Then I saw the pope in a coat. The Guardian. https://www.theguardian.com/commentisfree/2023/mar/27/pope-coat-ai-image-baby-boomers ↵
- City of Weston. (2023, July 27). Reader beware: AI generated and other sources. https://www.westonfl.org/Home/Components/News/News/1171/18 ↵
- Chang, A. (Host). (2024, May 16). AI-generated articles are permeating major news publications [Audio podcast episode]. In All Things Considered. NPR. https://www.npr.org/2024/05/16/1251917136/ai-generated-articles-are-permeating-major-news-publications; para. 1 ↵
- Van der Linden, S. (2024, January 22). AI-generated fake news is coming to an election near you. Wired. https://www.wired.com/story/ai-generated-fake-news-is-coming-to-an-election-near-you/; para. 3 ↵
- Bethune, S. (2019). Gen Z more likely to report mental health concerns. Monitor on Psychology, 50(1). https://www.apa.org/monitor/2019/01/gen-z ↵
- Walter, Y. (2024). Embracing the future of Artificial Intelligence in the classroom: The relevance of AI literacy, prompt engineering, and critical thinking in modern education. International Journal of Educational Technology in Higher Education, 21(1), 1–29. https://doi.org/10.1186/s41239-024-00448-3 ↵
- Govette, J. (n.d.). The truth about healthcare AI and why it has failed so far. Oatmeal Health. https://oatmealhealth.com/why-has-ai-failed-so-far-in-healthcare-despite-billions-of-investment/ ↵
- Daneshjou, R., Vodrahalli, K., Novoa, R. A., Jenkins, M., Liang, W., Rotemberg, V., Ko, J., Swetter, S. M., Bailey, E. E., Gevaert, O., Mukherjee, P., Phung, M., Yekrang, K., Fong, B., Sahasrabudhe, R., Allerup, J. A. C., Okata-Karigane, U., Zou, J., & Chiou, A. (2022). Disparities in dermatology AI performance on a diverse, curated clinical image set. arXiv. https://arxiv.org/abs/2203.08807 ↵
- Milmo, D. (2023, September 1). Mushroom pickers urged to avoid foraging books on Amazon that appear to be written by AI. The Guardian. https://tinyurl.com/52z98tuz ↵
- Knibbs, K. (2024, October 21). A lawsuit against Perplexity calls out fake news AI hallucinations. Wired. https://www.wired.com/story/dow-jones-new-york-post-sue-perplexity/ ↵