Doctoral thesis introduces a scale to measure human trust in technology


Headlines today regularly chronicle technology-based issues such as security hacks, inappropriate or illegal surveillance, misuse of personal data, the spread of misinformation, algorithmic bias, and a lack of transparency.

Siddharth Nakul Gulati finds these aspects of trust important: “Quite recently, elections in the US have sparked debate over whether or not the systems put in place for voting can be trusted. So, trust is an important concept binding together different facets of human interaction with technology and it is important to be able to both study and measure it.” To reach his result, he used a sequence of studies with different kinds of technology, applying an empirical technique called structural equation modelling. Each study built upon the results of the previous one.

Siddharth further explains the importance of trust in human-technology interactions.

Intelligent algorithms, made possible by advances in artificial intelligence (AI) and machine learning (ML), are driving most of our day-to-day interactions with digital technology. From a simple Google search, to product and movie recommendations on Amazon and Netflix, to more complex tasks such as managing an electric power grid or helping doctors make crucial decisions when diagnosing patients, intelligent algorithms are encroaching ever more on our day-to-day lives and have the capacity to reason and make decisions on behalf of their human counterparts.

Developments in AI and ML have also allowed concepts once thought of as science fiction, such as driverless cars, autonomous drone delivery systems, collaborative robots as teammates, and robotic concierges and hosts, to become reality and enter daily use across the globe. As these intelligent technologies become the norm and encroach further on our daily lives, they are also making many decisions on people’s behalf. This naturally raises questions about trust in these autonomous systems. Can the advice and recommendations offered to us by these systems be trusted? How can this trust be measured? These were some of the questions guiding the doctoral thesis.

Four studies were conducted to develop the scale for measuring trust. Before carrying them out, Siddharth specified an initial model consisting of seven factors assumed to affect trust. This model was tested in a first study of individuals’ trust perceptions of the Estonian e-voting service. Although this first study showed that some factors in the initial model do not predict trust, Siddharth ran further studies to identify, with a high degree of statistical certainty, which of the seven initial factors actually do affect trust.
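For readers unfamiliar with the technique, the sketch below illustrates in Python how a factor model of trust can be specified and tested with structural equation modelling, assuming the semopy package. The factor names and questionnaire item names are placeholders chosen for illustration, not the actual factors or items from the thesis.

```python
# Minimal sketch: testing a hypothetical factor model of trust with
# structural equation modelling (SEM), using the semopy package.
# The factor names (Competence, Benevolence, Risk) and item names are
# placeholders, NOT the actual factors or items from the thesis.
import pandas as pd
from semopy import Model, calc_stats

# Survey responses: one row per participant, one column per questionnaire item.
data = pd.read_csv("survey_responses.csv")  # hypothetical file

# Lavaan-style model description:
#   "=~" defines latent factors from observed items,
#   "~"  defines the structural (regression) paths to be tested.
desc = """
Competence  =~ comp1 + comp2 + comp3
Benevolence =~ ben1 + ben2 + ben3
Risk        =~ risk1 + risk2 + risk3
Trust       =~ trust1 + trust2 + trust3

Trust ~ Competence + Benevolence + Risk
"""

model = Model(desc)
model.fit(data)

print(model.inspect())    # factor loadings, path coefficients, p-values
print(calc_stats(model))  # fit indices such as CFI and RMSEA
```

Factors whose paths to the Trust latent variable are not statistically significant would, in this kind of analysis, be candidates for dropping from the model in a follow-up study.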

Siddharth then carried out a second study, this time examining trust in Siri, Apple’s intelligent personal assistant, and identified four of the initial seven factors that affect trust.

To support such claims with a high degree of statistical certainty, a third study was run using a novel technique called design fiction: instead of studying trust in actual technical artefacts, Siddharth used fictional scenarios to gauge users’ trust perceptions of technologies or devices that do not yet exist but lie on plausible future trajectories. Using two such scenarios, he identified three factors that predict trust in human-technology interactions with a high degree of statistical certainty. From these three factors, the final scale was developed as part of Siddharth’s PhD, consisting of 12 statements which can be used to measure trust in human-technology interactions.

The developed scale has several use cases. For example, researchers and practitioners can use it to calculate a trust score for an individual product or service, compare trust scores for two different products or services (or two versions of the same product or service), or compare trust scores across multiple products. “If there is a research project which involves understanding and measuring how much the individuals trust Covid-19 tracing applications, the scale developed during my thesis could be used. The results obtained can then help researchers and practitioners to better design these applications should user trust levels with them be low,” he adds.
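As an illustration of these use cases, here is a minimal Python sketch of how a trust score could be computed from a 12-item questionnaire and compared across two products. The 5-point Likert response format, the 0–100 rescaling, and the sample responses are assumptions made for the example, not the thesis’s actual items or scoring procedure.

```python
# Minimal sketch: computing and comparing trust scores from a 12-item scale.
# Assumes a 5-point Likert format (1 = strongly disagree, 5 = strongly agree)
# and a simple mean score rescaled to 0-100; the actual scale items and
# scoring rule used in the thesis may differ.
from statistics import mean
from typing import List

def trust_score(responses: List[int], n_items: int = 12, scale_max: int = 5) -> float:
    """Average the item responses and rescale to 0-100 for readability."""
    if len(responses) != n_items:
        raise ValueError(f"expected {n_items} responses, got {len(responses)}")
    if not all(1 <= r <= scale_max for r in responses):
        raise ValueError(f"responses must be between 1 and {scale_max}")
    return (mean(responses) - 1) / (scale_max - 1) * 100

# Compare two hypothetical products by their mean trust score across users.
product_a = [[4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 3, 4],
             [3, 4, 4, 5, 4, 4, 3, 4, 4, 5, 4, 4]]
product_b = [[2, 3, 2, 3, 3, 2, 3, 2, 3, 3, 2, 3],
             [3, 2, 3, 3, 2, 3, 2, 3, 3, 2, 3, 3]]

score_a = mean(trust_score(r) for r in product_a)
score_b = mean(trust_score(r) for r in product_b)
print(f"Product A trust score: {score_a:.1f}")
print(f"Product B trust score: {score_b:.1f}")
```

In a real evaluation the scores would be computed over a properly sampled group of users for each product or service, so that differences between the scores can be tested statistically rather than compared by eye.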

###

The doctoral thesis is available in the Tallinn University Digital Library ETERA: https://www.etera.ee/zoom/96774/view?page=1&p=separate&search=Developing%20a%20scale%20to%20measure%20human%20computer%20trust&tool=search

This information is sourced from https://www.eurekalert.org/pub_releases/2020-12/erc-dti121520.php
