Originally posted on LinkedIn on 01.06.2025.
Artificial Intelligence can do impressive things: write texts, generate code, answer complex questions in seconds. But it also has a weakness that’s not talked about enough, it sometimes just makes stuff up. In the AI world, we call that a “hallucination.”
Here’s a real-world example:
A developer asks an AI for help writing code. The AI suggests importing a helpful-sounding software library. The name looks legit, the recommendation seems professional, but the library doesn’t exist.
Harmless? Not quite.
Cyber attackers are already taking advantage of these hallucinations. All they need to do is register the fake library name, fill it with malicious code, and wait for someone to trust the AI blindly.
Just like that, an innocent AI suggestion becomes an entry point for an attack.
A Trojan Horse, delivered with a smile.
And it gets worse with prompt injection (carefully crafted inputs that trick AI systems into bypassing restrictions, leaking data, or producing harmful content). This is especially dangerous when AI is embedded in automated workflows.
But the real risk? It’s not the tech. It’s the human behind the screen.
And here’s a personal story:
When my teenage daughter proudly showed me her “scientifically sound” math presentation, I casually asked if I might happen to find ChatGPT in her browser history. Her face? A mix of surprise, mild panic, and that universal teen look of “how did you know?” 😅
The point is: AI isn’t the problem.
Misusing it without understanding is.
We shouldn’t fear the technology, we should teach how to use it wisely.
Especially for younger generations, growing up with AI as the norm, we need to go beyond just “AI skills.”
We need critical thinking, context awareness, and the ability to evaluate outputs, not just copy and paste them.
Let’s talk solutions:
How do we build true digital literacy in schools, teams, and society at large, so we can use AI confidently and responsibly?
Looking forward to your thoughts
.
We have to view AI as googling on steroids, carrying some garbage up as Google does, too. Plus: these extra risks, and hallucinations.
Plus: the quality of a prompt. Is it precise, to the point, best with partly anonymized context.