5 takeaways from CU Boulder engineers’ study to help tech developers create AI tools people actually want to use
A study published in the journal AI and Ethics by University of Colorado Boulder professor Amir Behzadan of the Department of Civil, Environmental and Architectural Engineering and his Ph.D. student Armita Dabiri examined how artificial intelligence (AI) technology can earn public confidence.
The two created a framework for developing AI tools that people actually want to use because they are reliable, ethical and built with humans in mind, according to a press release from CU Boulder.
In the study, Behzadan and Dabiri drew on that framework to create a conceptual AI tool that incorporates the elements of trustworthiness.
“As a human, when you make yourself vulnerable to potential harm, assuming others have positive intentions, you’re trusting them,” Behzadan said. “And now you can bring that concept from human-human relationships to human-technology relationships.”

According to CU Boulder, Behzadan studied the building blocks of human trust in AI systems that are used in the built environment, from self-driving cars and smart home security systems to mobile public transportation apps and systems that help people collaborate on group projects.
Behzadan said trust has a critical impact on whether people will adopt and rely on AI.
Here are five takeaways from Behzadan and Dabiri's framework that capture the core elements of trustworthy AI technology:
1. It knows its users
Trust looks different for everyone. Our comfort level with AI technology is shaped by our experiences, values, cultural beliefs, and even brain wiring.
“Even if you have a very trustworthy system or person, our reaction to that system or person can be very different. You may trust them, and I may not,” Behzadan said.
2. It’s reliable, ethical and transparent
A trustworthy system should work well, protect user data, adapt to change, avoid harmful bias and not discriminate between users. If it does fail, it should not harm people, property or the environment.
3. It takes context into account
AI should incorporate contextual information. In their study, the researchers proposed a conceptual AI assistive tool called PreservAI designed to help engineers, urban planners, historic preservationists and government officials repair a historic building.
The tool would balance competing interests, incorporate stakeholder input, analyze different outcomes and collaborate helpfully with humans rather than replacing them.
4. It’s easy to use and asks users how it’s doing
AI tools should be intuitive and open to user input.
“Even if you have the most trustworthy system, if you don’t let people interact with it, they are not going to trust it,” Behzadan said.
Giving users the ability to test and question results helps refine the tool and build confidence.
5. It can rebuild trust when it’s lost
According to Behzadan, public trust in technology can change quickly.
“A self-driving car crash can erode trust,” he said. “Transparency and consistent performance can help rebuild trust over time.”
Behzadan also said when people trust AI systems enough to share their data and engage with them meaningfully, those systems can improve significantly, becoming more accurate, fair, and useful.
“Trust is not just a benefit to the technology; it is a pathway for people to gain more personalized and effective support from AI in return,” he said.
Read the full study from CU Boulder about AI technology here.