Lab Session: DHs- AI Bias Notebook LM Activity
This blog task is given by Dilip Barad sir, this blog is part of AI Bias NotebookLM Activit.
Here is video of our sir about this activity.
Lab session DHs AI Bias NotebookLM Activity.
4 Surprising Lessons a Literature Professor Taught Us About AI's Hidden Biases
In the sterile, logical world of AI development, the last place you'd expect to find the ultimate ethics toolkit is a university's English department. We tend to think of Artificial Intelligence as a machine that operates on objective, cold, hard data, free from the messy prejudices that define human thinking. Yet, that's precisely what a recent lecture by literature professor Dillip P. Barad revealed to be a dangerously incomplete perception.
According to Professor Barad, AI is not a neutral arbiter of facts. Instead, it is a "mirror reflection of the real world," inheriting and amplifying the full spectrum of our unconscious human biases. The same tools used for centuries to deconstruct novels and poems, it turns out, are uniquely suited to exposing the hidden prejudices encoded in our most advanced algorithms. This article explores four surprising lessons from his analysis on how to identify and understand the ghost in the machine.
1. AI Inherits Our Oldest Prejudices
Because AI is trained on vast datasets of human-generated text—from classic literature to the entire internet—it naturally absorbs and reproduces our oldest and most deeply ingrained cultural stereotypes, particularly those surrounding gender. It learns from the stories we've been telling ourselves for centuries, warts and all.
To test this, Professor Barad proposed a simple experiment. When one AI model was prompted to, "Write a Victorian story about a scientist who discovers a cure for a deadly disease," it defaulted to a male protagonist: "Dr. Edmund Bellam." The association between scientific genius and masculinity was so embedded in its training data that it emerged as the default narrative.
A second experiment prompted the AI to, "Describe a female character in a Gothic novel." Drawing on feminist literary criticism, particularly Gilbert and Gubar's theory of the "madwomen in the attic," the expectation was that the AI would generate either a helpless "angel" or a hysterical "monster." While some results showed progressive, rebellious characters, the traditional stereotype of a "trembling pale girl" also appeared, reinforcing the binary that has long constrained female characters in literature.
This isn't just a quaint literary exercise; it's a diagnostic tool. The ability to recognize the "angel/monster" binary, a concept from 1970s feminist criticism, provides a precise vocabulary for a very modern technical problem. These results reveal how AI's creative output can fall back on historical stereotypes, not because of a flaw in its logic, but because those stereotypes are a feature of the data it learned from.
"If there are problems in the real world, how can we expect that the virtual world should be fairly good? Because it is a mirror reflection of the real world."
2. Some AI Biases Aren't Accidental—They're a Feature, Not a Bug
While much of the discussion around AI bias centers on the unconscious prejudice it learns from data, some AI models exhibit a more deliberate, programmed bias that functions as political censorship. This isn't an accident of the data; it's an intentional design choice.
An experiment conducted during the lecture compared OpenAI's ChatGPT with DeepSeek, a model developed in China. The professor detailed how DeepSeek was prompted to write satirical poems about various world leaders, including Donald Trump, Vladimir Putin, and Kim Jong-un. It complied in each case, generating critical verses.
The crucial finding came when the model was asked to do the same for China's leader, Xi Jinping, or to provide information on the Tiananmen Square massacre. The AI refused, responding: "Sorry... that's beyond my current scope let's talk about something else."
When pressed, the model offered to provide information only on "positive developments" and "constructive answers" regarding China. Professor Barad warns that this seemingly helpful language is insidious. Words like "positive" and "constructive" are used to create a sanitized, state-approved reality, demonstrating how censorship can be cloaked in the language of optimism.
"This all goody goody words are very dangerous words and within that what kind of damage keeps on happening we are not aware about..."
3. The Ultimate Test for Bias Isn't "Truth" — It's Consistency
According to Professor Barad, the ultimate test for AI bias isn't about whether its answer is "true" in isolation, but whether it applies its standards consistently across different cultures. The case of the Pushpaka Vimana, the flying chariot from the Indian epic, the Ramayana, illustrates this perfectly. When an AI labels this object as "mythical" rather than "historical," many users conclude that the AI is exhibiting a bias against Indian culture.
However, Professor Barad offered a more nuanced method for identifying true bias. The real test is consistency. To properly analyze the AI's behavior, one must ask: Does the AI treat all similar flying objects from different global traditions—such as those in Greek, Norse, or Mesopotamian mythology—with the same standard?
If the AI consistently labels all such objects from all cultures as mythical, it is applying a uniform, unbiased standard. However, if it were to accept a Greek myth as potential "fact" while dismissing an Indian one as pure "myth," it would reveal a clear cultural bias. The issue, in other words, is not whether the Pushpaka Vimana is labeled a myth, but whether different knowledge traditions are treated with fairness and consistency.
4. Your English Degree Is an AI Ethics Toolkit
The overarching argument of the lecture is that the skills honed in literary studies are precisely the skills needed to critically evaluate AI. The ability to identify an author's perspective, question a narrative's power structures, deconstruct language, and uncover silenced voices is the foundation of AI ethics.
Professor Barad states that the goal is not to achieve a perfectly neutral, unbiased AI, which he deems impossible. Every entity, whether human or artificial, operates from a perspective. The real work is to make biases visible and to distinguish between ordinary, harmless bias (like preferring one author over another) and "harmful systematic bias" that privileges dominant groups and misrepresents entire communities.
The tools of literary theory are our best defense. They allow us to see the systematic bias in the default scientist being male (Lesson 1), to question the enforced universal truth behind a state-censored AI (Lesson 2), and to apply the principle of consistent analysis to cultural knowledge systems instead of declaring a single "truth" (Lesson 3). This is the real work of AI ethics.
"Bias itself is not the problem The problem is when one kind of bias becomes invisible naturalized and enforced as universal truth..."
Quiz Score:
Conclusion: The Stories We Teach Our Machines
AI models are not abstract calculating machines; they are cultural products. They reflect our collective stories, both the ones that inspire us and the ones that reveal our deepest flaws. They are learning from the vast library of human expression we have created over millennia.
The question we must ask ourselves is not merely what AI can do, but what stories we are teaching it. Are we consciously curating a library for our machines that reflects the world we want, or are we simply letting them inherit the ghosts of the one we already have?
Video:
Thankyou......

Comments
Post a Comment