
Understanding and mitigating user biases in online information searching

When searching for information online, the results can vary widely from person to person. Jiqun Liu, an assistant professor in the School of Library and Information Studies in the University of Oklahoma’s College of Arts and Sciences, wants to improve the quality of online search by building systems that account for users’ biases and return more balanced and useful results.

Liu said an example of system bias in information retrieval is when results give preference to popular or established works, people or content, rather than potentially more relevant results.

“When searching for music, algorithms are biased toward popular artists, artists who are already established in the field,” he said. “New artists need opportunities to increase the impacts of their latest works. How do we increase the fairness in the exposure and information presentation?”
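One common remedy in the recommender-systems literature, offered here as a general illustration rather than a description of Liu’s method, is to re-rank results so that estimated relevance is traded off against how much exposure an item has already received. The sketch below shows the idea in Python; the weighting scheme, field names, and scores are all hypothetical.

```python
# A minimal sketch of exposure-aware re-ranking to counter popularity
# bias. Candidates with less prior exposure get a boost, trading a
# little relevance for fairer exposure of newer artists. All names
# and weights here are hypothetical illustrations.

def rerank(candidates, alpha=0.7):
    """candidates: dicts with 'relevance' in [0, 1] and 'exposure'
    in [0, 1] (the item's share of past impressions)."""
    def score(item):
        novelty_boost = 1.0 - item["exposure"]
        return alpha * item["relevance"] + (1 - alpha) * novelty_boost
    return sorted(candidates, key=score, reverse=True)

tracks = [
    {"title": "established hit", "relevance": 0.90, "exposure": 0.95},
    {"title": "new release",     "relevance": 0.85, "exposure": 0.05},
]
for t in rerank(tracks):
    print(t["title"])  # the slightly-less-relevant new release ranks first
```

With `alpha` close to 1, ranking reduces to pure relevance; lowering it shifts more weight toward under-exposed items.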

“People often act intuitively and are subject to systematic biases when making decisions under uncertainty due to their inability to calculate all the possible consequences of their choices, a fundamental cognitive phenomenon called bounded rationality,” Liu said. “Without proactive information supports, these decisions could be driven by misleading information, cognitive biases and heuristics and may result in significant deviations from desired outcomes.”

Liu received a $175,000 grant from the National Science Foundation to study users’ systematic biases through behavioral signals such as previous search queries, to better understand the relationships between those biases and search interactions, and to build bias-aware prediction models of search interactions. Using the results, Liu will then develop a scalable and potentially transformative approach to modeling users and their decision-making processes in interactive information retrieval.

Liu said his study is different from other algorithmic bias studies in that he is focusing on user biases rather than system biases.

“There are many assumptions we make when we build user models on the algorithmic side,” he said. “We assume that users are rational, which means they always seek maximum utility; they always look for optimal results and have perfect computational capacity. They know all of the possible options and can rationally, mathematically, compare those options which, frankly, never happens.”

“Health information seekers may easily trust medical misinformation that confirms their existing expectations and beliefs,” Liu said. “Students often heavily rely on top-ranked results and stop at short satisficing answers, rather than exploring more credible and informative web pages. Online shoppers tend to quickly accept immediate mediocre recommendations after encountering several bad quality products (with low reference levels), without examining all available options. In this project, I’m trying to incorporate users’ bounded rationality into these predictions so that we can make more realistic predictions on users’ actions and can offer those user models with a more solid behavioral and psychological foundation,” he added.
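The shopping example hints at reference-dependent judgment: whether a result feels “good enough” depends on the reference level set by what the user has just seen. The sketch below is a hypothetical illustration of that dynamic, not Liu’s model; the update rule, threshold, and quality scores are assumptions.

```python
# A minimal sketch of a reference-dependent stopping rule. A user
# judges each new result as a gain or loss relative to a reference
# point set by recently seen results, and stops ("satisfices") on
# the first item whose gain clears a threshold. Values are hypothetical.

def satisficing_stop(qualities, threshold=0.1):
    """qualities: per-result quality scores in the order encountered.
    A streak of poor results drags the reference point down, so a
    merely mediocre result can look like a gain worth stopping on."""
    reference = qualities[0]
    for i, q in enumerate(qualities):
        if q - reference >= threshold:       # perceived gain vs. reference
            return i                         # stop here and accept
        reference = 0.5 * reference + 0.5 * q  # update the reference point
    return None                              # never satisfied; kept looking

# After several bad products, a mediocre 0.5-quality item is accepted...
print(satisficing_stop([0.2, 0.1, 0.2, 0.5, 0.9]))  # -> 3
# ...while a searcher whose reference started high holds out longer.
print(satisficing_stop([0.8, 0.5, 0.9]))             # -> 2
```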

By investigating users’ systematic biases, the project aims to break new ground for information retrieval research and address fundamental bottlenecks in the development of bias-aware intelligent search and recommendation systems.

Liu said the goal is not only to return fair and balanced information regardless of user bias, but also to make sure that information is accessible. The project builds on a previous study in which Liu developed machine learning algorithms to predict whether users had a clear search goal or were searching more generally for a given topic, a state he terms “exploration mode”; that earlier work serves as the foundation for this study.

“If a user is in a hurry, then you may want to present some quick answers,” he said. “If the user is in an exploration mode, you can present more open-ended information. We tried to build the algorithm to predict their implicit state from observable behaviors. We need to know their cognitive state, their existing biases and the limitations in their knowledge to present more personalized and potentially useful content to them in the information retrieval system.”
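As a rough illustration of what predicting an implicit state from observable behaviors might look like, the sketch below trains a classifier on per-session interaction features to separate exploration mode from goal-directed lookup. The features, synthetic data, and choice of logistic regression are assumptions for illustration, not Liu’s published algorithm.

```python
# A minimal sketch: classify search sessions as "exploration mode"
# vs. clear-goal lookup from observable interaction signals.
# Data and feature choices are hypothetical.

from sklearn.linear_model import LogisticRegression

# Hypothetical per-session features:
# [query length (words), reformulations, results clicked, mean dwell time (s)]
X = [
    [2, 0, 1,   8],   # short query, one quick click -> lookup
    [3, 1, 1,  12],
    [6, 4, 7,  95],   # long queries, many clicks, long dwells -> exploration
    [5, 3, 5,  80],
    [2, 0, 1,  10],
    [7, 5, 9, 120],
]
y = [0, 0, 1, 1, 0, 1]  # 0 = clear goal (lookup), 1 = exploration mode

model = LogisticRegression().fit(X, y)

# Score an unseen session; a system could then choose between
# presenting quick answers and more open-ended information.
session = [[4, 2, 4, 60]]
print(model.predict(session), model.predict_proba(session))
```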

Audrey Reinert, a researcher in OU’s Data Institute for Societal Challenges, said, “Identifying and correcting questions of systematic bias in information encoding and retrieval processes is necessary for the development of more fair and equitable information systems.”

Liu said the outcomes of this project can help people better leverage the power of information by incorporating knowledge of their biases into search algorithms, proactively capturing bias-related search problems across multiple modalities of search interaction, and promoting informed, unbiased decision-making.

###

This information is sourced from https://www.eurekalert.org/pub_releases/2021-06/uoo-uam061521.php