Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy is a book by Cathy O'Neil, a mathematician and data scientist who previously worked as a quantitative analyst at a hedge fund. It explores how certain big-data algorithms are being used to exacerbate existing inequality. O'Neil argues that many of these algorithms have been deployed to increase the power and profitability of those at the top of society, pointing to the trading strategies favored by hedge funds and high-net-worth individuals, as well as the massive data-mining practices of major corporations. The result, she says, is that people at the bottom are left with pennies on the dollar compared to those at the top.
In Weapons of Math Destruction, O'Neil offers practical advice on how to push back against the sophisticated algorithms that are widening existing income and wealth inequality. In particular, she examines the relationship between algorithms and credit scores, showing how mathematical formulas used to predict creditworthiness can lead to inaccurate or inefficient pricing decisions. She further contends that the indiscriminate use of such formulas in credit scoring can result in unfair discrimination against certain groups of people.
While the specific examples O'Neil gives in Weapons of Math Destruction are instances in which algorithms were used to discriminate against particular demographic or ethnic groups, she argues that the problem extends to other areas of online lending. For example, she points out that some payday lenders have targeted the regions in which they make their services available, favoring lower-income neighborhoods or areas where the unemployment rate was above the national average. In other words, people who live in areas with higher unemployment rates or lower average credit scores stand to suffer the worst consequences when a flawed algorithm computes their interest rates.
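To make the mechanism concrete, here is a minimal sketch of a pricing rule that keys a quoted interest rate on a regional proxy such as local unemployment. This is my own construction, not a formula from the book, and every name and number in it is hypothetical:

```python
# Hypothetical illustration (not from the book): a pricing formula that
# mixes a borrower's personal repayment signal with a regional proxy --
# the neighborhood's unemployment rate.

BASE_RATE = 0.10          # hypothetical base annual rate
REGION_PENALTY = 0.50     # hypothetical weight on the regional proxy

def quoted_rate(personal_on_time_ratio, regional_unemployment):
    """Quote a rate from a personal signal plus a regional proxy."""
    personal_discount = 0.05 * personal_on_time_ratio
    regional_surcharge = REGION_PENALTY * regional_unemployment
    return BASE_RATE - personal_discount + regional_surcharge

# Two borrowers with identical, perfect repayment histories:
low_unemployment_area = quoted_rate(1.0, 0.04)    # 4% local unemployment
high_unemployment_area = quoted_rate(1.0, 0.12)   # 12% local unemployment

print(f"{low_unemployment_area:.3f}")    # 0.070
print(f"{high_unemployment_area:.3f}")   # 0.110
```

Under these invented weights, two borrowers with identical perfect repayment histories are quoted 7% and 11% purely because of where they live: the regional proxy, not individual behavior, drives the difference.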
It's not just the poor who stand to lose from bad algorithms in credit decisions, though. Even those with good credit scores are exposed, since their scores can drop if the system contains errors or omissions. Worse still, people with higher scores may suffer even steeper penalties when a score falls, because the algorithm was never programmed to account for recent financial shocks like the subprime mortgage crisis. The point is that these algorithms aren't misbehaving randomly: they faithfully follow a highly automated design, and when that design is flawed, they fail in systematic ways that no living person would intuitively expect.
As the author makes clear in her book, it's unrealistic to expect that we'll soon have total control over artificially intelligent machines, and even if we eventually do, we don't yet know how to use them properly. In an age of increasing inequality and extreme technological disconnect, it's hard to see how the powers that be will remain in power long enough to ensure that our techno-utopia, whatever that might be, comes to fruition. If we want a world where everyone gets a fair shot at the American dream, then we need to stop relying blindly on software to predict human behavior.
Of course, many will argue that even if these algorithms are flawed, free-market competition among firms will still drive up the quality of goods and services; after all, the companies that sell artificially intelligent software can just as easily compete on better customer service and more personalized features. On this view, if the government wants more control over the Internet, it needs a better model than blunt regulation. But if we already live in the age of the gated community, where access is controlled by passwords and keys and social networks keep tabs on individual behavior even across digital proximity, then who is to say that a big-data analytics platform won't work the same way? Indeed, that possibility is one of the main arguments against making such tools available to the general public.
A number of technology enthusiasts have argued for big-data analytics tools as a way to fight back against the algorithms that govern our society, pointing to recidivism models as a case in point: those models, they claim, actually increase recidivism by feeding their own predictions back into policing and sentencing. On this view, if recidivism rates are rising inside a mathematical model that was never designed to handle that feedback, we should not stack another mathematical model on top of it. One could build a supposedly infallible machine, but it wouldn't be much of a machine if no one knew how to program it, or if it could reprogram itself. The same is true of artificial intelligence, particularly artificial intelligence that deals with recidivism.
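The feedback loop at issue can be shown with a toy simulation. This is my own illustration of the dynamic, not a model from the book, and every number in it is invented: a risk score determines patrol intensity, patrols determine how many offenses get recorded, and the records feed the next round's score.

```python
# Toy feedback-loop simulation (an invented illustration, not the book's
# model): score -> patrols -> recorded offenses -> updated score.

TRUE_OFFENSE_RATE = 0.05   # the same underlying rate in every area

def step(risk_score):
    """One cycle of the loop."""
    patrol_intensity = risk_score                 # higher score, more patrols
    recorded_offenses = TRUE_OFFENSE_RATE * patrol_intensity * 100
    return 0.5 * risk_score + 0.5 * recorded_offenses

def run(initial_score, cycles=10):
    score = initial_score
    for _ in range(cycles):
        score = step(score)
    return score

# Two areas with identical true offense rates but different initial scores:
lightly_watched = run(initial_score=1.0)
heavily_watched = run(initial_score=2.0)
```

Because the score drives the patrols and the patrols drive the score, the initial disparity never washes out: after ten cycles the heavily watched area still carries twice the risk score of the lightly watched one, even though the underlying offense rates are identical, and both scores keep growing.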
One of the biggest problems with artificial intelligence is that the scoring rules it produces may not reliably distinguish a good credit risk from a bad one. A model may appear to separate the two with complex mathematical equations and neural networks, yet still be unable to tell the difference in borderline cases without human intervention; in effect, the same algorithm can produce both good and bad credit scores from the same evidence. So while these systems can help us in some ways, such as filtering spam or flagging a risky online purchase, they are likely to do so at a real human cost, and that is something we should be worried about.
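A tiny example with invented data (not from the book) shows why a scoring threshold alone cannot cleanly separate the two groups once their scores overlap:

```python
# Hypothetical sketch: when good and bad risks have overlapping scores,
# no single cutoff classifies everyone correctly.

# (score, actually_repaid) -- invented, deliberately overlapping data
applicants = [
    (720, True), (700, True), (640, True), (610, True),
    (650, False), (600, False), (580, False), (560, False),
]

def classify(score, threshold=620):
    """Approve as a 'good risk' when the score clears the cutoff."""
    return score >= threshold

false_approvals = sum(1 for s, repaid in applicants
                      if classify(s) and not repaid)
false_denials = sum(1 for s, repaid in applicants
                    if not classify(s) and repaid)

print(false_approvals, false_denials)   # 1 1
```

With these scores, no threshold is error-free: any cutoff that denies the defaulter at 650 also denies the reliable borrower at 610, so some human judgment, or a richer model, has to absorb the residual errors.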