Module 4: Overview

Essential Question

To what extent does bias influence the reliability and impact of facial recognition technology?

Module Overview

In this module, students continue deepening their understanding of FRT, with a particular focus on how the technology is being used in NYC and on the related risks, particularly those arising from various forms of bias (e.g., automation bias, confirmation bias, and algorithmic bias). Students closely read and evaluate a rich and complex text set and continue to develop claims, identify relevant evidence, and engage in deeper analysis. Students closely read the NYPD FRT Impact and Use Policy and learn about the role of federal and local oversight and how it might be leveraged to maximize the benefits and minimize the risks associated with FRT when it is used for policing. Students engage in another SPAR debate at the end of this module.

Anchor Text(s) for this Module

Supporting Text(s)/ Resources for this Module

NYS Next Generation ELA Standards

  • L4c: Consult general and specialized reference materials (e.g., dictionaries, glossaries, thesauruses) to find the pronunciation of a word or determine or clarify its precise meaning, its part-of-speech, or its etymology.

  • L6: Acquire and accurately use general academic and content-specific words and phrases, sufficient for reading, writing, speaking, and listening; demonstrate independence in applying vocabulary knowledge when considering a word or phrase important to comprehension or expression.

  • RH9: Compare and contrast treatments of the same topic in several primary and secondary sources.

  • RST1: Cite specific evidence to support analysis of scientific and technical texts, charts, diagrams, etc., attending to the precise details of the source. Understand and follow a detailed set of directions.

  • R1: Cite strong and thorough textual evidence to support analysis of what the text says explicitly/implicitly and make logical inferences; develop questions for deeper understanding and for further exploration.

NYS Computer Science & Digital Fluency Standards

  • 9-12.DL.1: Type proficiently on a keyboard.

  • 9-12.DL.2: Communicate and work collaboratively with others using digital tools to support individual learning and contribute to the learning of others.

  • 9-12.IC.1: Evaluate the impact of computing technologies on equity, access, and influence in a global society.

  • 9-12.IC.3: Debate issues of ethics related to real-world computing technologies.

  • 9-12.IC.5: Describe ways that complex computer systems can be designed for inclusivity and to mitigate unintended consequences.


Vocabulary

  • algorithmic bias: systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others; example: a facial recognition algorithm trained on a data set that does not reflect human diversity and therefore performs with different reliability depending on gender and race.

  • cognitive bias: a thought process caused by the tendency of the human brain to simplify information processing through a filter of personal preferences and opinions; examples: confirmation bias; automation bias; authority bias; gender bias; group attribution error.

  • confirmation bias: the conscious or subconscious tendency for people to seek out information that confirms their pre-existing viewpoints, and to ignore information that goes against them (regardless of whether that information is true or false); example: only paying attention to news stories that confirm your opinion and ignoring or dismissing information that challenges your position.

  • automation bias: the tendency to trust information provided by a machine or computer and to ignore information that contradicts it; automation bias is one type of cognitive bias. Example: trusting the output of FRT when it produces a “match” without verifying that match by other means.

  • authority bias: the tendency to trust or follow the influence of a leader or a person in a position of authority.

  • false positive: when the face recognition system matches a person’s face to an image in a database, but that match is actually incorrect.

  • false negative: when the face recognition system fails to match a person’s face to an image that is, in fact, contained in the database.
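The false positive / false negative distinction above can be illustrated with a short sketch. This is a hypothetical example with made-up records, not output from any real FRT system: each record pairs the system's verdict with the ground truth, and we count the two kinds of error.

```python
# Hypothetical records (for illustration only): each pairs the system's
# prediction ("match" / "no match") with what was actually true.
results = [
    {"predicted": "match",    "actual": "match"},     # true positive
    {"predicted": "match",    "actual": "no match"},  # false positive
    {"predicted": "no match", "actual": "match"},     # false negative
    {"predicted": "no match", "actual": "no match"},  # true negative
]

# False positive: the system reported a match, but the person was not in fact
# the one pictured in the database image.
false_positives = sum(
    1 for r in results
    if r["predicted"] == "match" and r["actual"] == "no match"
)

# False negative: the system reported no match, even though the person's
# image really was in the database.
false_negatives = sum(
    1 for r in results
    if r["predicted"] == "no match" and r["actual"] == "match"
)

print("false positives:", false_positives)
print("false negatives:", false_negatives)
```

With the four sample records above, the sketch counts one error of each kind; in practice, how often each error occurs can differ across demographic groups, which is one way algorithmic bias shows up.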
