Many talented people are working to integrate LLMs and agentic AI systems into our daily lives. As these tools roll out, we will continue confronting the “algorithm aversion” phenomenon: the tendency to trust human judgment over algorithmic decisions. This bias could hinder the adoption of AI technologies despite their obvious potential to revolutionize industries and enhance productivity. The good news is that we know how to overcome the aversion.
The problem
Algorithm aversion is not a new concept, but its implications have evolved with the technology. First observed with simple computational tools, the skepticism has grown alongside algorithm complexity. In short, when given a choice between a human decision-maker and a machine, we generally side with the human, even when the machine is better. The effect has been found in many contexts and across different populations.
Studies by Dietvorst, Simmons, and Massey (2014 [1], 2015 [2]) and by Logg, Minson, and Moore (2019) [3] have illuminated the contours of algorithm aversion. This research reveals a paradox: people will acknowledge an algorithm's superior accuracy, yet default to human judgment anyway, especially after seeing the algorithm err. The bias persists despite evidence that algorithms outperform human forecasters in many domains.
The backlash against self-driving cars is an excellent example of this tendency. Despite evidence that they are substantially safer than human drivers, autonomous taxis are only now starting to gain traction in a few key markets, and widespread adoption still looks a decade away. Another concerning case is doctors’ prejudice against machines in a diagnostic setting. When asked to give a second opinion about patients’ heart conditions, doctors were biased against the advice of AI systems, even when those systems were as accurate as human advisers. [4]
A way out
However, there are contexts in which we will defer to machines. In their second paper, Dietvorst et al. found that participants who were allowed to modify an algorithm's forecasts trusted the algorithm about as much as a human. The craziest part of the study: giving participants influence over the algorithm's behavior worked even when that influence was largely illusory.
From the paper:

“Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants’ preference for modifiable algorithms was indicative of a desire for some control over the forecasting outcome, and not for a desire for greater control over the forecasting outcome, as participants’ preference for modifiable algorithms was relatively insensitive to the magnitude of the modifications they were able to make (Study 2).”
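The mechanism is easy to picture in code. Here is a minimal sketch, assuming a numeric-forecasting setting like the one in the study; the function name and the ±5 adjustment cap are my own illustration, not the authors' implementation:

```python
def blended_forecast(model_forecast: float,
                     user_adjustment: float,
                     max_adjustment: float = 5.0) -> float:
    """Return the model's forecast nudged by a user adjustment that is
    clamped to a small fixed range, echoing the constrained-modification
    setup in Dietvorst et al.: the user gets real (if limited) influence,
    while the result stays anchored to the algorithm's output.
    """
    clamped = max(-max_adjustment, min(max_adjustment, user_adjustment))
    return model_forecast + clamped


# The model predicts 72; the user wants +10, but influence is capped
# at +/-5, so the final forecast is 77.
print(blended_forecast(72.0, 10.0))  # 77.0
```

The striking part of the finding is that the size of `max_adjustment` barely mattered: even a tight cap was enough to keep people using the algorithm.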
Logg et al. found that laypeople actually preferred algorithmic advice, while “experts” placed too much weight on their own judgment (sound familiar?). From their paper:
“Paradoxically, experienced professionals, who make forecasts on a regular basis, relied less on algorithmic advice than lay people did, which hurt their accuracy. These results shed light on the important question of when people rely on algorithmic advice over advice from people and have implications for the use of ‘big data’ and the algorithmic advice it generates.”
Unfortunately, it is the experts who will decide whether to use AI agents in business and government; we, the users and citizens, won’t have much say in the matter. Giving those decision-makers some influence over how these AI systems operate may be the key to overcoming pushback as the systems roll out.
Solutions
With this knowledge, we have several concrete levers for overcoming algorithm aversion.
Educating the public about AI’s benefits and limitations can demystify the technology and reduce unfounded fears, fostering a more nuanced understanding of when and how to trust algorithms. I don’t think governments are best positioned to do this; so far, the only enthusiasm for AI that governments have shown is for regulating it. It’s presumably in the interest of the biggest AI companies to sponsor educational efforts. Just as efforts have been made to teach programming in K-12 education, we should teach students how to use AI systems productively. The explosion of ChatGPT use in high school and college essay writing suggests this shouldn’t be too difficult.
Offering users transparency about how AI systems make decisions, along with the ability to influence those decisions (whether by adjusting parameters or choosing between decision-making styles), can also help. Making a multibillion-parameter model understandable is difficult, but many interpretability teams are working on the problem. Making it easier to fine-tune smaller models for specific tasks will help too: smaller models can achieve better domain performance and are easier to reason about.
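To make the “choosing between decision-making styles” idea concrete, here is a minimal sketch of what user-facing knobs might look like. The settings names, the style-to-temperature mapping, and the `build_request` helper are all hypothetical, not any particular product's API:

```python
from dataclasses import dataclass


@dataclass
class DecisionSettings:
    """Hypothetical user-facing knobs for an AI assistant."""
    style: str = "balanced"   # "conservative" | "balanced" | "creative"
    explain: bool = True      # attach a plain-language rationale


# One plausible mapping from a user-facing style to a sampling temperature.
STYLE_TO_TEMPERATURE = {
    "conservative": 0.2,
    "balanced": 0.7,
    "creative": 1.0,
}


def build_request(prompt: str, settings: DecisionSettings) -> dict:
    """Translate the user's settings into model parameters."""
    return {
        "prompt": prompt,
        "temperature": STYLE_TO_TEMPERATURE[settings.style],
        "include_rationale": settings.explain,
    }


print(build_request("Summarize this contract.",
                    DecisionSettings(style="conservative")))
```

Even a coarse control like this gives users the sense of influence that the Dietvorst results suggest is what matters.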
Incorporating human feedback into AI decision-making loops lets users feel in control, potentially overcoming the aversion. Even if this control is somewhat illusory, it acknowledges the human need for autonomy and influence over technology that impacts our lives. Something like this already happens with RLHF, but the feedback typically comes from hired annotators rather than the end user. AI systems tailored to individual users or groups, adapting based on their feedback, could help bridge the trust gap.
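Here is a toy sketch of what a per-user feedback loop could look like. To be clear, this is a simple running average, not RLHF; the class, its learning rate, and its threshold are invented for illustration:

```python
class UserFeedbackLoop:
    """Toy per-user adaptation loop: track how often this user accepts
    the system's suggestions, and surface more explanation when
    acceptance drops."""

    def __init__(self, learning_rate: float = 0.1):
        self.acceptance = 0.5          # start neutral
        self.learning_rate = learning_rate

    def record(self, accepted: bool) -> None:
        """Update the running acceptance rate from one thumbs-up/down."""
        target = 1.0 if accepted else 0.0
        self.acceptance += self.learning_rate * (target - self.acceptance)

    def needs_more_explanation(self) -> bool:
        """Heuristic: offer rationale when the user distrusts outputs."""
        return self.acceptance < 0.4


loop = UserFeedbackLoop()
for accepted in [False, False, True, False]:
    loop.record(accepted)
print(round(loop.acceptance, 3), loop.needs_more_explanation())
```

The point is not the specific math but the visible loop: the user sees the system responding to their input, which is exactly the lever the research says reduces aversion.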
The future
LLMs and AI are no longer just theoretical or limited to niche applications. They drive decisions in finance, healthcare, and customer service, marking a shift towards more “agentic” systems that can operate autonomously. The integration of AI into everyday life brings algorithm aversion to the forefront. The impersonal nature of algorithms, contrasted with the “human touch,” often leads to mistrust despite AI’s efficiency and scalability benefits.
As AI systems become more agentic, performing tasks with minimal human oversight, understanding and addressing algorithm aversion becomes paramount. This trend suggests a future where our interactions with AI could be as frequent as those with human colleagues. Without intervention, algorithm aversion could stall AI adoption or lead to underutilization, hurting productivity and innovation. This is particularly relevant in critical decision-making areas such as healthcare, finance, and criminal justice, where trust is essential.
Algorithm aversion is a significant barrier to the full integration of AI in our lives, but it is not insurmountable. By designing AI systems that are not only transparent but also customizable, however limited this customization might be, we can bridge the gap between human and algorithmic decision-making. This approach will be crucial as we increasingly rely on AI systems to make decisions, big and small, in our daily lives.