Is using AI at work… cheating?

When more than half of workers say they’re using AI at work in secret, it’s time to ask:
Is using AI cheating — or just how we work now?

In this episode, we talk to Dr. Tanya Kant (University of Sussex) and Greg Bortzkiewicz (Magenta Associates) to unpack the ethics, emotions, and real-world practice behind AI use in PR, communications, and academia — drawing on Tanya and Greg’s research report, CheatGPT.

No one around the table thinks using AI is actually cheating. But they do think we need better questions.

People aren’t cheating with AI — they’re hiding it out of shame

Tanya shares that most people using AI secretly aren’t trying to cut corners — they’re embarrassed, or unsure if it’s allowed.

“Nobody said they were using it to get ahead. They were using it in secrecy out of embarrassment or shame.”

Comms pros are using AI, just not how you think

Despite the hype, most professionals aren’t using AI to generate full articles or press releases.
They’re using it for editing, rewording, structuring, and getting unstuck.

“Writers are not using generative text AI to generate press-ready content.” – Tanya

The ethical risk isn’t usage; it’s opacity

If teams can’t talk openly about the tools they’re using, they can’t share best practices or spot problems.

“We need cultures of transparency — not just rules.” – Tanya
“It’s not just ethical to be open — it’s practical.” – Greg

AI still needs human expertise to be any good

Greg compares AI writing tools to video tools: anyone can use them, but that doesn’t mean the output will be any good.

“You can ask ChatGPT to write a blog, but without expertise, it’s not going to be good.”

Universities are scrambling, but AI use is already here

Tanya shares how institutions are trying to balance critical thinking, plagiarism risk, and the reality that students will use these tools — just like their professors do.

“Universities should be leading the conversation, not just policing it.”

Text isn’t the same as knowledge

One of Tanya’s strongest warnings: just because a tool produces clean copy doesn’t mean it’s giving you truth.

“AI will always generate text — but it won’t always generate knowledge.
If you don’t know the difference, you’re going to get in trouble down the line.”

Critical AI literacy matters more than productivity tips

It’s not just about how to prompt AI — it’s about knowing when not to use it, and understanding the trade-offs.

“Ethical use isn’t just individual — it’s collective. It’s cultural.”

This one’s for comms professionals, educators, team leaders, and anyone secretly wondering: Am I using this right?

👉 Listen now wherever you get your podcasts.
