AI datasets have human values blind spots – new research
AI systems reflect human values. However, the values embedded in AI training data skew toward information and utility and away from prosocial, well-being and civic values.
My colleagues and I at Purdue University have uncovered a significant imbalance in the human values embedded in AI systems. The systems were predominantly oriented toward information and utility values and less toward prosocial, well-being and civic values.
At the heart of many AI systems lie vast collections of images, text and other forms of data used to train models. While these datasets are meticulously curated, they sometimes contain unethical or prohibited content.
To ensure AI systems do not reproduce harmful content when responding to users, researchers introduced a method called reinforcement learning from human feedback, which uses highly curated datasets of human preferences to shape the behavior of AI systems to be helpful and honest.
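To make this concrete, here is a minimal sketch of what a human-preference record might look like, together with the pairwise loss a reward model typically minimizes. The field names, example texts and scores are illustrative assumptions, not drawn from any specific company's dataset:

```python
# A minimal sketch of the preference-pair format commonly used in RLHF
# training data. Field names and examples are illustrative, not taken
# from any specific company's dataset.
from dataclasses import dataclass
import math

@dataclass
class PreferencePair:
    prompt: str    # the user's question
    chosen: str    # the response human labelers preferred
    rejected: str  # the response labelers ranked lower

def pairwise_loss(chosen_score: float, rejected_score: float) -> float:
    """Loss a reward model minimizes so it scores 'chosen' above 'rejected'."""
    return -math.log(1 / (1 + math.exp(-(chosen_score - rejected_score))))

pair = PreferencePair(
    prompt="How do I book a flight?",
    chosen="You can compare fares on airline websites, then ...",
    rejected="Figure it out yourself.",
)

# A well-trained reward model assigns the chosen response a higher
# score, driving the loss toward zero.
print(pairwise_loss(chosen_score=2.1, rejected_score=-0.4))
```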
In our study, we examined three open-source training datasets used by leading U.S. AI companies. We constructed a taxonomy of human values through a review of literature in moral philosophy, value theory, and science, technology and society studies. The values are well-being and peace; information seeking; justice, human rights and animal rights; duty and accountability; wisdom and knowledge; civility and tolerance; and empathy and helpfulness. We used the taxonomy to manually annotate a dataset, and then used those annotations to train an AI language model to recognize the values expressed in text.
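As a rough illustration of this annotation-to-classifier step, the sketch below uses a simple scikit-learn pipeline as a stand-in for the language model the study actually trained. The example texts and labels are invented; only the value names come from the taxonomy above:

```python
# A stand-in for the study's annotation-to-classifier step: the authors
# fine-tuned an AI language model on hand-labeled data, but a TF-IDF +
# logistic-regression pipeline illustrates the same idea on tiny,
# made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-annotated examples: (text, value label from the taxonomy).
labeled = [
    ("How do I book a flight to Chicago?", "information seeking"),
    ("What is the cheapest way to renew a passport?", "information seeking"),
    ("My friend is grieving; how can I support her?", "empathy and helpfulness"),
    ("How can I comfort a coworker after bad news?", "empathy and helpfulness"),
    ("Is it fair that the law treats these groups differently?", "justice, human rights and animal rights"),
    ("What rights do farm animals have under current law?", "justice, human rights and animal rights"),
]
texts, labels = zip(*labeled)

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(texts, labels)

# The trained model can then label new, unseen examples.
print(classifier.predict(["How do I find train times to Boston?"]))
```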
Our model allowed us to examine the AI companies’ datasets. We found that these datasets contained many examples that train AI systems to be helpful and honest when users ask questions like “How do I book a flight?” They contained very few examples of how to answer questions about topics related to empathy, justice and human rights. Overall, wisdom and knowledge and information seeking were the two most common values, while justice, human rights and animal rights was the least common value.
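To see how such a classifier surfaces an imbalance, consider tallying its predicted labels across a dataset. The counts below are made up for illustration and are not the study’s figures:

```python
# Tallying value labels predicted for each example in a training
# dataset reveals the kind of imbalance the study reports. These
# counts are invented for illustration, not the paper's numbers.
from collections import Counter

predicted_labels = (
    ["information seeking"] * 58
    + ["wisdom and knowledge"] * 46
    + ["empathy and helpfulness"] * 7
    + ["justice, human rights and animal rights"] * 2
)

distribution = Counter(predicted_labels)
total = sum(distribution.values())
for value, count in distribution.most_common():
    print(f"{value}: {count / total:.1%}")
```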
[Figure: a chart with three boxes on the left and four on the right]
Why it matters
The imbalance of human values in datasets used to train AI could have significant implications for how AI systems interact with people and approach complex social issues. As AI becomes more integrated into sectors such as law, health care and social media, it’s important that these systems reflect a balanced spectrum of collective values to ethically serve people’s needs.
This research also comes at a crucial time for governments and policymakers as society grapples with questions about AI governance and ethics. Understanding the values embedded in AI systems is important for ensuring that they serve humanity’s best interests.
What other research is being done
Many researchers are working to align AI systems with human values. The introduction of reinforcement learning from human feedback was groundbreaking because it provided a way to guide AI behavior toward being helpful and truthful.
Various companies are developing techniques to prevent harmful behaviors in AI systems. However, our group was the first to introduce a systematic way to analyze and understand what values are actually being embedded in AI systems through their training datasets.
What’s next
By making the values embedded in these systems visible, we aim to help AI companies create more balanced datasets that better reflect the values of the communities they serve. Companies can use our technique to identify where their training data underrepresents certain values and then improve its diversity.
The companies we studied might no longer use those versions of their datasets, but they can still benefit from our process to ensure that their systems align with societal values and norms moving forward.
Ike Obi does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.