Will Algorithms Erode Our Decision-Making Skills?
Algorithms are embedded in our technological lives, accomplishing tasks like making sure your email reaches your aunt or matching you with someone on a dating website who likes the same bands you do.
Sure, such computer code aims to make our lives easier, but experts cited in a new report by Pew Research Center and Elon University's Imagining the Internet Center are worried that algorithms may also make us lose our ability to make decisions. After all, if the software can do it for us, why should we bother?
"Algorithms are the new arbiters of human decision-making in almost any area we can imagine, from watching a movie (Affectiva emotion recognition) to buying a house (Zillow.com) to self-driving cars (Google)," Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., says in the report.
Yet despite those advances, algorithms may erode human judgment as people grow reliant on software to think for them.
That's one of the conclusions of the report, which gathered responses from 1,300 technology experts, scholars, businesspeople and government leaders about what the coming decades hold for algorithms.
One of the themes that emerged was "humanity and human judgment are lost when data and predictive modeling become paramount." Many respondents worried that humans were considered "inputs" in the process and not real beings.
Additionally, they say that as algorithms take on human responsibilities, and essentially begin to create themselves, "humans may get left out of the loop."
And while some experts expressed concern, others argued that algorithms are a positive force and should play an expanded role in society.
Here's a sampling of opinions about the benefits and drawbacks of algorithms from the report:
Bart Knijnenburg, an assistant professor in human-centered computing at Clemson University: "My biggest fear is that, unless we tune our algorithms for self-actualization, it will be simply too convenient for people to follow the advice of an algorithm (or, too difficult to go beyond such advice), turning these algorithms into self-fulfilling prophecies and users into zombies who exclusively consume easy-to-consume items."
Rebecca MacKinnon, director of the Ranking Digital Rights project at New America: "Algorithms driven by machine learning quickly become opaque even to their creators who no longer understand the logic being followed to make certain decisions or produce certain results. The lack of accountability and complete opacity is frightening. On the other hand, algorithms have revolutionized humans' relationship with information in ways that have been life-saving and empowering and will continue to do so."
Jason Hong, an associate professor at Carnegie Mellon University: "The old adage of garbage in, garbage out still applies, but the sheer quantity of data and the speed of computers might give the false impression of correctness. As a trivial example, there are stories of people following GPS too closely and ending up driving into a river."
Amali De Silva-Mitchell, a futurist and consultant: "Predictive modeling will limit individual self-expression hence innovation and development. It will cultivate a spoon-fed population with those in the elite being the innovators. There will be a loss in complex decision-making skills of the masses."
Marina Gorbis, executive director at the Institute for the Future: "Imagine instead of typing search words and getting a list of articles, pushing a button and getting a narrative paper on a specific topic of interest. It's the equivalent of each one of us having many research and other assistants. ... Algorithms also have the potential to uncover current biases in hiring, job descriptions and other text information."
Ryan Hayes, owner of Fit to Tweet: "Technology is going to start helping us not just maximize our productivity but shift toward doing those things in ways that make us happier, healthier, less distracted, safer, more peaceful, etc., and that will be a very positive trend. Technology, in other words, will start helping us enjoy being human again rather than burdening us with more abstraction."
David Karger, a professor of computer science at MIT: "The question of algorithmic fairness and discrimination is an important one but it is already being considered. If we want algorithms that don't discriminate, we will be able to design algorithms that do not discriminate."
Daniel Berleant, author of The Human Race to the Future: "Algorithms are less subject to hidden agendas than human advisors and managers. Hence the output of these algorithms will be more socially and economically efficient, in the sense that they will be better aligned with their intended goals. Humans are a lot more suspect in their advice and decisions than computers are."
Via NPR