SR: Film Screening- Humans in the Loop
This blog task was assigned by Dilip Barad Sir as a Sunday Reading task. Teacher's Article
1- What do you understand by the term "Human in the Loop"?
Before watching Humans in the Loop, my initial understanding of the title was connected to the idea of the corporate world and the “rat race.” The phrase “Humans in the Loop” made me think of individuals constantly running within systems they do not fully control — much like employees working in corporate structures where competition, deadlines, and performance pressures dominate daily life.
The word “loop” suggests repetition — doing the same tasks again and again without significant change or escape. In the corporate rat race, people often feel trapped in cycles of productivity, targets, promotions, and survival. Therefore, I interpreted the title as referring to humans being stuck within technological or corporate systems, continuously contributing labour while the larger system benefits.
From this perspective, the title appeared metaphorical rather than technical. I assumed the film might explore how modern workplaces reduce individuals to functional parts of a system, similar to machines. The term also suggested that even in advanced technological environments, humans remain central — yet possibly exploited or overworked.
Thus, before viewing the film, I understood “Humans in the Loop” as symbolizing the repetitive, competitive, and sometimes dehumanizing cycle of corporate and digital labour — a rat race where individuals keep running but rarely gain true control over the system they sustain.
2- What are your expectations from a film dealing with AI and labour?
When a film combines themes of artificial intelligence and labour, I expect it to move beyond the glamorous and futuristic representation of technology. Instead of presenting AI as a self-thinking, independent system, such a film would likely expose the human effort that makes these systems function. AI does not emerge magically; it is trained, corrected, monitored, and refined by human workers. Therefore, I would expect the film to explore the hidden workforce behind technological infrastructures — the people who label data, moderate content, categorize images, and perform repetitive digital tasks that remain largely invisible to society.
Another expectation would be the exploration of digital labour and economic inequality. In the contemporary global economy, technological development often depends on outsourced labour from economically vulnerable communities. Workers from rural or marginalized backgrounds may contribute to global AI systems without receiving adequate recognition, fair wages, or long-term stability. A film addressing AI and labour might highlight this imbalance — showing how technological progress in one part of the world is sustained by underpaid labour in another. This raises important questions about capitalism, globalization, and digital exploitation.
I would also expect the film to address ethical concerns related to automation. As AI systems become more powerful, there is increasing fear that machines will replace human workers. However, there is also a paradox: while AI threatens certain jobs, it simultaneously creates new forms of precarious labour such as data annotation and algorithm training. A thoughtful film might examine whether automation truly eliminates labour or simply restructures it in less visible ways. It may question who benefits from automation and who bears its risks.
Another crucial theme I would anticipate is the tension between human knowledge and machine intelligence. AI operates through classification, patterns, and pre-programmed logic. Human knowledge, however, is emotional, contextual, cultural, and experiential. When machines attempt to replicate intelligence, they may fail to capture the complexity of lived human realities. Therefore, I would expect the film to explore moments where machine logic clashes with human judgment — revealing both the limitations of AI and the uniqueness of human cognition.
Finally, I would expect the film to challenge the myth that AI is entirely self-sufficient. Popular narratives often portray AI as autonomous and superior to human intelligence. However, in reality, AI systems rely heavily on human supervision. A critical film might reveal this dependency and demonstrate that behind every “smart” system lies continuous human intervention. It may also explore how marginalized communities contribute significantly to technological development while remaining invisible in mainstream narratives. In doing so, the film could humanize digital systems and remind viewers that technology is not neutral — it reflects the labour, biases, and inequalities embedded within society.
Overall, before watching the film, I would expect a layered exploration of technology, labour politics, economic disparity, and ethical responsibility — not just a story about machines, but a story about the humans who sustain them.
3- What social or ethical issues might arise in AI-based systems?
Before watching Humans in the Loop, several social and ethical concerns come to mind when thinking about AI-based systems. Artificial intelligence is often presented as neutral, objective, and data-driven. However, since AI systems are created and trained by humans, they inevitably reflect human values, assumptions, and biases. Therefore, one of the primary anticipated issues is algorithmic bias.
Algorithmic bias occurs when AI systems reproduce or amplify existing social prejudices. Because AI depends on datasets collected from society, if that data contains gender bias, racial discrimination, cultural stereotypes, or economic inequalities, the algorithm may replicate those patterns. For example, certain communities may be misrepresented or underrepresented in datasets, leading to inaccurate or unfair outcomes. This raises serious ethical questions about fairness, accountability, and responsibility in technological design.
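The mechanism described above can be illustrated with a deliberately simple toy sketch in Python. Everything here is hypothetical (the group names, the imbalance, and the naive model are illustrative assumptions, not anything from the film): a model trained on skewed data ends up reproducing the skew, and the underrepresented group disappears from its predictions entirely.

```python
# Toy illustration of algorithmic bias: a naive model trained on
# imbalanced data simply reproduces the imbalance.
# All names and numbers below are hypothetical, for illustration only.
from collections import Counter

# Hypothetical training set: group_B is badly underrepresented.
training_data = ["group_A"] * 95 + ["group_B"] * 5

# The most common label in the data.
majority = Counter(training_data).most_common(1)[0][0]

def naive_predict(_example):
    # A naive "model" that always predicts the majority class.
    return majority

# The model never predicts group_B, even though group_B exists in reality.
predictions = [naive_predict(x) for x in range(10)]
print(set(predictions))  # prints {'group_A'}
```

The point of the sketch is not that real classifiers are this crude, but that any system optimizing for accuracy on skewed data can rationally erase a minority group, which is exactly the fairness problem the paragraph above describes.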
Another expected issue is data exploitation, particularly concerning human labour. AI systems require massive amounts of data to function effectively. Behind this data are individuals who annotate images, categorize information, moderate content, and correct algorithmic errors. Often, this labour remains invisible and underappreciated. Workers may receive minimal wages while contributing significantly to highly profitable technological systems. This imbalance raises ethical concerns about recognition, fair compensation, and digital capitalism. If human labour is essential to AI development, then ignoring or undervaluing it becomes a moral issue.
A third concern is digital inequality. AI development is often concentrated in technologically advanced urban centers, while the labour supporting these systems may come from rural or marginalized communities. This creates an uneven distribution of power and profit. Some communities gain technological authority and financial benefits, while others provide labour without equal access to resources, education, or technological decision-making power. Such inequality reinforces existing social hierarchies instead of reducing them.
Additionally, there is the risk of cultural erasure or simplification. AI systems operate through categorization and standardization. However, cultural knowledge — especially indigenous or local knowledge — is often complex, contextual, and relational. When AI attempts to fit diverse cultural practices into rigid frameworks, it may oversimplify or misinterpret them. This process can marginalize alternative epistemologies and privilege dominant worldviews. The ethical question here is whether AI systems respect cultural diversity or unintentionally suppress it.
Ultimately, AI systems depend entirely on data, and data is never neutral. It is shaped by historical conditions, social structures, and human decision-making. Therefore, ethical AI must go beyond technical efficiency. It must consider fairness, inclusivity, cultural sensitivity, and labour justice. Before viewing the film, these anticipated issues suggest that AI is not just a technological subject but a deeply social and political one.
4- How might indigenous or local knowledge challenge AI systems?
AI functions through classification and predefined categories. However, indigenous knowledge systems are often relational, contextual, and experiential. They may not fit neatly into binary or rigid structures.
Before watching, one can assume that if the film involves an indigenous character working with AI, there may be tension between lived cultural knowledge and algorithmic logic. This tension may reveal limitations of technological universality.
5- Personal reflection before viewing
Before watching the film, AI may seem like an abstract, advanced, and almost futuristic system. However, thinking about the title suggests that the film will humanize technology. It may encourage viewers to rethink digital systems not as neutral innovations but as socially embedded processes.
Post-viewing tasks
1- How does the film represent AI and its dependence on human labour?
The film powerfully dismantles the myth of artificial intelligence as an autonomous, self-sufficient system. Instead, it reveals AI as fundamentally dependent on human intervention — hence the title Humans in the Loop. The protagonist Nehma works as a data annotator, categorizing images and information so that AI systems can “learn.” Through her repetitive yet highly interpretive work, the film exposes how machine intelligence is built upon invisible human cognition.
Rather than presenting AI as futuristic spectacle, the narrative situates it within everyday labour conditions — modest offices, strict deadlines, mechanical workflows. The term “loop” becomes metaphorical: humans continuously correct, train, and refine algorithms. Without this human presence, AI collapses into inaccuracy. The film therefore critiques the popular imagination of AI as superior to human intelligence and instead emphasizes its reliance on marginalized labour.
2- Discuss the theme of invisibility and digital labour in the film.
One of the most striking aspects of the film is its exploration of invisibility. The labour that sustains AI systems is hidden from public view, much like the communities performing it. Nehma’s work is critical for global technology infrastructures, yet she remains economically vulnerable and socially unseen.
Cinematically, invisibility is emphasized through framing. Close shots of computer screens and hands typing create a sense of mechanical repetition, while Nehma’s identity and emotional world remain subdued in professional settings. This visual language mirrors the structural invisibility of digital labourers worldwide.
The film suggests that digital modernity rests on the backs of workers whose contributions are erased from technological narratives. It asks viewers to reconsider who truly powers innovation.
3- How does the film address AI bias and indigenous knowledge systems?
The conflict between indigenous ecological knowledge and algorithmic classification lies at the heart of the narrative. AI systems function through predefined categories that often reflect dominant cultural assumptions. However, Nehma’s lived knowledge — rooted in tribal traditions and environmental relationships — does not always align with these rigid frameworks.
When the AI struggles to interpret culturally specific images or contexts, it reveals its epistemological limitation. The film argues that AI bias is not accidental; it is embedded in the data and worldviews used to construct it. Indigenous knowledge, which is relational and contextual, resists reduction into binary categories.
Thus, the film critiques the universalist claims of technology. It shows that intelligence is plural, and when AI fails to recognize certain forms of knowledge, it exposes systemic bias rather than indigenous inadequacy.
4- Analyze the representation of Adivasi identity in the film.
Unlike mainstream portrayals that exoticize or stereotype tribal communities, the film presents Nehma’s identity with dignity and realism. Her community is neither romanticized nor depicted as backward. Instead, it is shown as culturally rich, emotionally complex, and intellectually capable.
The film avoids reducing Adivasi life to suffering alone. Instead, it balances scenes of domestic warmth, communal interactions, and ecological harmony with the pressures of digital employment. This balanced representation challenges dominant cinematic narratives.
By placing an Adivasi woman at the centre of a technological narrative, the film disrupts assumptions about who belongs in digital spaces. It asserts that marginalized communities are not outside modernity — they are actively shaping it.
5- Examine the use of cinematic techniques (mise-en-scène, sound, framing).
The film’s mise-en-scène creates a clear contrast between natural and digital environments. Village scenes are often shot in open spaces with natural light, emphasizing organic rhythms and community connection. In contrast, AI workspaces are confined, artificial, and repetitive.
Sound design reinforces this division. Natural ambient sounds — birds, wind, village chatter — contrast with the clicking of keyboards and notification alerts. This sonic tension highlights the emotional divide between relational knowledge and mechanical processes.
Framing also plays a symbolic role. Tight shots during annotation scenes create a sense of confinement, while wide landscape shots suggest freedom and continuity. These stylistic choices subtly reinforce the film’s thematic concerns.
6- What ethical questions does the film raise about technology?
The film raises several urgent ethical questions:
Who benefits from AI systems?
Are data annotators fairly compensated for their cognitive labour?
What happens when marginalized knowledge is filtered through dominant technological frameworks?
Can AI ever be neutral?
Rather than providing definitive answers, the film invites reflection. It suggests that ethical AI must acknowledge human contribution, cultural diversity, and labour justice. Without these considerations, technological progress risks deepening inequality.
7- Reflect on the significance of the title “Humans in the Loop.”
The title functions both technically and philosophically. In machine learning, “human in the loop” refers to systems requiring human feedback for improvement. In the film, this phrase extends beyond technical terminology.
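The technical sense of the term can be sketched in a few lines of Python. This is a minimal, entirely hypothetical illustration (the model, the labels, and the confidence threshold are all invented for the example, not taken from the film): when the machine is unsure, the decision loops back to a human annotator, which is precisely the kind of work Nehma performs.

```python
# Minimal sketch of a human-in-the-loop labelling cycle.
# model_predict and human_annotate are hypothetical stand-ins.

def model_predict(image):
    """Stand-in for an AI classifier: returns (label, confidence)."""
    return ("cat", 0.62)

def human_annotate(image):
    """Stand-in for the human annotator's judgement."""
    return "dog"

def label_with_human_in_loop(images, threshold=0.9):
    labels = []
    for img in images:
        label, confidence = model_predict(img)
        if confidence < threshold:
            # Low confidence: the human corrects the machine,
            # and that correction can later retrain the model.
            label = human_annotate(img)
        labels.append(label)
    return labels

print(label_with_human_in_loop(["img_001.jpg"]))  # prints ['dog']
```

Even in this toy form, the structure makes the film's point visible: the "intelligent" output is only as good as the human standing inside the loop.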
It signifies emotional loops — Nehma’s personal struggles, her balancing of motherhood and work. It signifies social loops — marginalized communities feeding data into global systems without recognition. Most importantly, it asserts that humanity cannot be removed from technological discourse.
The title becomes a reminder: behind every intelligent system, there is human labour, human bias, and human responsibility.
8- Personal Reflection
Watching the film shifts one’s perception of AI from abstract innovation to embodied labour. It makes visible the hidden scaffolding behind digital convenience. The emotional depth of Nehma’s character ensures that technological critique does not remain theoretical; it becomes personal and immediate.
The film encourages viewers to think critically about the systems they participate in daily. It humanizes digital economies and foregrounds the ethical responsibility embedded within technological advancement.
Conclusion:
Through its nuanced storytelling and restrained cinematic style, Humans in the Loop transforms a technical concept into a deeply human narrative. It interrogates power structures within AI, highlights invisible labour, and dignifies indigenous knowledge systems. The film ultimately argues that technology is never neutral — it reflects the values, biases, and labour of the humans who create and sustain it.