More Than a Glitch

Confronting Race, Gender, and Ability Bias in Tech

ebook
1 of 1 copy available
When technology reinforces inequality, it’s not just a glitch—it’s a signal that we need to redesign our systems to create a more equitable world.
The word “glitch” implies an incidental error, as easy to patch up as it is to identify. But what if racism, sexism, and ableism aren’t just bugs in mostly functional machinery—what if they’re coded into the system itself? In the vein of heavy hitters such as Safiya Umoja Noble, Cathy O’Neil, and Ruha Benjamin, Meredith Broussard demonstrates in More Than a Glitch how neutrality in tech is a myth and why algorithms need to be held accountable.
Broussard, a data scientist and one of the few Black female researchers in artificial intelligence, masterfully synthesizes concepts from computer science and sociology. She explores a range of examples: from facial recognition technology trained only to recognize lighter skin tones, to mortgage-approval algorithms that encourage discriminatory lending, to the dangerous feedback loops that arise when medical diagnostic algorithms are trained on insufficiently diverse data. Even when such technologies are designed with good intentions, Broussard shows, fallible humans develop programs that can result in devastating consequences.
Broussard argues that the solution isn’t to make omnipresent tech more inclusive, but to root out the algorithms that target certain demographics as “other” to begin with. With sweeping implications for fields ranging from jurisprudence to medicine, the ground-breaking insights of More Than a Glitch are essential reading for anyone invested in building a more equitable future.
Reviews

    • Publisher's Weekly

      January 30, 2023
      “The biases embedded in technology are more than mere glitches; they’re baked in from the beginning,” argues Broussard (Artificial Intelligence), a data journalism professor at New York University, in this scathing polemic. Telling the stories of individuals from marginalized communities who have been wronged by technology, the author shows how design and conceptual failures produce unfair outcomes. She describes how a Black man was arrested by Detroit police because a facial recognition algorithm incorrectly flagged him as a match for a shoplifter, reflecting the tendency of such programs to produce false matches for people of color, who are underrepresented in the images used to train those programs. Other case studies are Kafkaesque, such as the Black Chicago man who was shot twice under suspicion of being a snitch because police cars frequently parked outside his house after predictive policing software identified him as at risk for gun violence. The author condemns “technochauvinism,” or the belief that “computational solutions [are] superior to all other solutions,” as exemplified by the story of a Deaf Apple Store employee who was denied an on-site interpreter because Apple preferred alternative, inadequate solutions that used its products. The stories enrage and drive home the cost of the failures and prejudices built into ostensibly cutting-edge programs. This sobering warning about the dangers of technology alarms and unsettles.

    • Kirkus

      February 1, 2023
      A sharp rebuke of technochauvinism. Broussard brings her perspective as a multiracial woman, data journalist, and computer scientist to an eye-opening critique of racism, sexism, and ableism in technology. She decries technochauvinism, which she defines as "a kind of bias that considers computational solutions to be superior to all other solutions." Examining the use of AI programs in areas such as facial recognition, learning assessment, and medical diagnosis, Broussard argues persuasively that algorithmic systems "often act in racist ways because they are built using training data that reflects racist actions or policies." Moreover, these systems have been developed by "able-bodied, white, cis-gender, American men" who test programs on a similar pool. Racial bias is blatant when facial recognition programs are instituted in policing, leading to harassment and false arrests. "Facial recognition is known to work better on people with light skin than dark skin," she writes, "better on men than on women, and it routinely misgenders trans, nonbinary, or gender nonconforming people." Broussard explains clearly how data sets limit the efficacy of AI in predictive policing--"a strategy that uses statistics to predict future crimes"--as well as in medical diagnostics: "The skin cancer AIs are likely to work only on light skin because that's what is in the training data." The author draws on her own experience with breast cancer to point out the inadequacy of an AI assessment that missed her disease. Fortunately, her experienced doctor did not even consult the AI results. Broussard highlights the work of the Algorithmic Justice League, the Surveillance Technology Oversight Project, and other groups involved in algorithmic auditing. "If we are building AI systems that intervene in people's lives," she warns, "we need to maintain and inspect and replace the systems the same way we maintain and inspect and replace bridges and roads." An informed analysis of one of the insidious elements of technology.

      Copyright © 2023 Kirkus Reviews. All rights reserved.

Formats

  • OverDrive Read
  • EPUB ebook

Languages

  • English
