Amazon just followed IBM's lead in reigniting the discussion around whether AI is biased, but more needs to be done to ensure the technology isn't racist
Artificial intelligence holds a lot of promise to revolutionize our lives. Still, there's a dark side to the technology that could not only hinder progress on critical issues but also make them worse.
As the country prepares for a deep examination of the drivers of systemic racism, the regulation of AI deserves to be part of that discussion.
On Tuesday, IBM reignited the discourse with its decision to no longer offer "general purpose" facial-recognition software.
There may now be an appetite for change, which could begin to assuage concerns and let us once again dream about the ways AI might change our lives.
There's a lot to be excited about when it comes to artificial intelligence. Robots, self-driving cars: it's the kind of technology I envisioned as a kid when I was asked what the future would look like. But for all the positive developments AI supports, it has a more nefarious side, one that could not only prevent progress on the most critical issues of the day but also exacerbate existing problems.

Now the historic protests underway around the world in the wake of the killings of George Floyd and other Black Americans are reigniting an argument over AI and the technology's potential bias.

On Tuesday, IBM sided with the critics, saying it would no longer offer "general purpose" facial recognition, citing concerns over mass surveillance and racial profiling. And on Wednesday, Amazon relented too, saying it would stop allowing law enforcement to use its facial-recognition software for one year. The technology has long been chastised as inherently biased as more police forces across the country have adopted it.

IBM's decision was a long time coming. There have been countless stories over the past year of AI models exhibiting racist behavior. One directed Black Americans away from higher-quality healthcare, while another labeled a thermometer in the hands of a Black person a gun. Google CEO Sundar Pichai even once famously demonstrated how easy it was to create a racist model.

It shouldn't be surprising, then, that the discussion of AI regulation is resurfacing now. In fact, Dario Gil, the director of IBM Research, told me last year that it might not be that AI itself is biased, but rather that the technology is a reflection of our own problematic viewpoints.

"We could blame it on the technology — which sometimes it does deserve blame — but very often it is a mirror back into ourselves. And when you look more deeply, it tells you something about society because AI is trained by example: What examples have we set in our past? What are the examples in our society?" he said at the time.

At the time, I failed to realize just how prescient his comments were.

Trust as the major inhibitor in AI adoption

There's good reason for the industry to take a hard look at whether additional regulation is needed. Trust is a key inhibitor to the adoption of AI, according to Gil, so without addressing the technology's shortcomings, we risk missing out on the significant benefits it could provide.

AI, for example, held a lot of promise for removing inherent bias in hiring and leading to more diverse workforces. Many algorithms, however, remain shrouded in mystery and could be unwittingly contributing to fewer people of color getting hired.

So lawmakers across the country now have a real opportunity to craft a regulatory framework for the burgeoning technology, one that begins to assuage the very real fears many Americans have about it. The momentum for change appears to extend beyond IBM: CEOs from Google, IBM, Tesla, and Salesforce have also called for more regulation. But the tech giants may have opposing views on what that should entail. Google, for example, has already pushed back against some of Europe's attempts to rein in AI.

Still, Americans are clamoring for change across corporate America, and there's no reason AI shouldn't be part of that. It's good for us to dream about all the ways the technology can change our lives. But it's up to us to make sure that vision doesn't come at the expense of so many of our fellow Americans.
The Boston City Council unanimously approved an ordinance that will ban the purchase and use of facial recognition technology by city officials, including the police department. Among the reasons for issuing the ban, the ordinance cited "racial bias in facial surveillance" and the potential for surveillance tools to damage public trust in government. Councilor Michelle Wu specifically referenced the false arrest of Robert Julian-Borchak Williams, a Black man living in Michigan. Detroit police arrested and interrogated Williams after facial recognition software incorrectly identified him in robbery surveillance footage, according to a New York Times investigation released this week. The incident is believed to be the first instance of facial recognition leading to a false arrest in the US.

Local bans on facial recognition, like this Boston ordinance, will undercut big tech's efforts to institute facial recognition reform at the federal level. In response to heightened public scrutiny, Amazon, IBM, and Microsoft suspended or terminated the sale of facial recognition services to law enforcement; the companies also advocated for federal regulation of the technology. But while Boston issued an outright ban on government use of facial recognition, the tech companies envision federal reforms that would instead set standards for how the technology can be used. Amazon, for instance, has advocated for regulation that would set accuracy thresholds for facial recognition software.

Given the current political climate, we believe the Boston ordinance will precipitate bans in other large US cities, undercutting big tech's efforts to make facial recognition more palatable through regulation. Increased scrutiny of government use of facial recognition could lead big tech players to discontinue sales of facial recognition to law enforcement.
While Amazon and Microsoft left the door open to resume selling facial recognition software to law enforcement, IBM, which reportedly wasn't making much money from facial recognition sales in the first place, decided to terminate its program altogether.

The public was already skeptical of facial recognition technology, and we expect it will become even more so following recent events: in 2019, 50% of US adults said they wouldn't trust tech companies to use facial recognition responsibly, and 27% said the same about law enforcement agencies, according to Pew Research. Because of these conditions, we expect at least one other big tech company will follow in IBM's footsteps.

That would leave players like Clearview AI, Cognitec, and Vigilant Solutions to provide facial recognition to law enforcement. Even though these players have lower public profiles, they actually held a larger share of the market than the big tech players, according to The New York Times.
Amazon, Microsoft, and IBM say they want federal rules around the technology. Critics of the proposal, sponsored by four Democrats, say it doesn't go far enough.