As someone who pursued a postgraduate degree in AI and whose primary research focused on NLP (natural language processing) well before GPT models became mainstream, I must admit I’m increasingly concerned about the direction in which AI is taking us.
This isn’t to question AI’s usefulness. On the contrary, its capabilities are remarkable. Rather, it is a reflection on how the irresponsible use of AI could undermine what makes us fundamentally human: the ability to think critically and independently.
The situation with Grok at xAI offers a perfect case study in why AI regulation is essential. In that instance, the manipulation was obvious and detected relatively quickly. But what about scenarios where misuse isn’t so easily identified?
A recent research paper from the University of Zurich serves as a wake-up call. The study raises serious ethical questions about its own methods, yet its findings are astounding.
TL;DR: The researchers ran a covert experiment in the r/ChangeMyView subreddit, where users award a “delta” when a comment changes their view. AI-generated responses earned deltas at 3 to 6 times the rate of human-generated ones. In other words, large language models could significantly outperform humans at changing people’s minds in online discussions.
Read more about the University of Zurich study and its ethical implications