Being honest about using AI at work makes people trust you less, research finds
They say honesty is the best policy, but when it comes to using AI on the job, research suggests that it can backfire.
Whether you’re using AI to write cover letters, grade papers or draft ad campaigns, you might want to think twice about telling others. That simple act of disclosure can make people trust you less, our new peer-reviewed article found.
As researchers who study trust, we see this as a paradox. After all, being honest and transparent usually makes people trust you more. But across 13 experiments involving more than 5,000 participants, we found a consistent pattern: Revealing that you relied on AI undermines how trustworthy you seem.
Participants in our study included students, legal analysts, hiring managers and investors, among others. Interestingly, we found that even evaluators who were tech-savvy were less trusting of people who said they used AI. While having a positive view of technology reduced the effect slightly, it didn’t erase it.
Why would being open and transparent about using AI make people trust you less? One reason is that people still expect human effort in writing, thinking and innovating. When AI steps into that role and you highlight it, your work looks less legitimate.
But there’s a caveat: If you’re using AI on the job, the cover-up may be worse than the crime. We found that quietly using AI can trigger the steepest decline in trust if others uncover it later. So being upfront may ultimately be a better policy.
Why it matters
A global survey of 13,000 people found that about half had used AI at work, often for tasks such as writing emails or analyzing data. People typically assume that being open about using these tools is the right choice.
Yet our research suggests doing so may backfire. This creates a dilemma for those who value honesty but also need to rely on trust to maintain strong relationships with clients and colleagues. In fields where credibility is essential – such as finance, health care and higher education – even a small loss of trust can damage a career or brand.
The consequences go beyond individual reputations. Trust is often called the social “glue” that holds society together. It drives collaboration, boosts morale and keeps customers loyal. When that trust is shaken, entire organizations can feel the effects through lower productivity, reduced motivation and weakened team cohesion.
If disclosing AI use sparks suspicion, users face a difficult choice: embrace transparency and risk a backlash, or stay silent and risk being exposed later – an outcome our findings suggest erodes trust even more.
That’s why understanding the AI transparency dilemma is so important. Whether you’re a manager rolling out new technology or an artist deciding whether to credit AI in your portfolio, the stakes are rising.
What still isn’t known
It’s unclear whether this transparency penalty will fade over time. As AI becomes more widespread – and potentially more reliable – disclosing its use may eventually seem less suspect.
There’s also no consensus on how organizations should handle AI disclosure. One option is to make transparency completely voluntary, which leaves the decision to disclose to the individual. Another is a mandatory disclosure policy across the board. Our research suggests that the threat of being exposed by a third party can motivate compliance if the policy is stringently enforced through tools such as AI detectors.
A third approach is cultural: building a workplace where AI use is seen as normal, accepted and legitimate. We think this kind of environment could soften the trust penalty and support both transparency and credibility.
The Research Brief is a short take on interesting academic work.
Oliver Schilke received funding from the National Science Foundation (Award #1943688).
Martin Reimann receives funding from the National Endowment for the Arts research grant (#1925643–38-24) and a National Security Systems (TRIF NSS) research grant.