It’s the most common advice in the world of prompting: “Tell the AI to act like a professional.” Whether you want it to be a financial guru or an AI expert speaker, the goal is always the same: a better, more sophisticated answer. However, a new study from researchers at UC Davis (published on arXiv) suggests this “persona-driven” approach has a dark side.
The Persona Trap: Roleplay vs. Reality
The study tested 12 distinct personas across six major language models. The findings were clear: while a persona like an AI expert speaker makes the model sound more polished and professional, it frequently damages factual recall.
When the model is forced into a role, it shifts its internal priorities to “instruction-following” (how to sound like a specific character). This comes at the expense of “knowledge-retrieval” (recalling the actual facts). In short: the AI becomes a better actor, but a worse source of truth.

Fixing the Flaw: The PRISM Method
To solve this, researchers developed PRISM (Persona Routing via Intent-based Self-Modeling). This framework forces the AI to evaluate itself:
- Dual Drafts: It generates two answers, one from its “default” core and one from the requested persona.
- Smart Routing: It compares the two and delivers whichever is more accurate for the specific query.
For creative tasks, the persona wins. For raw data and facts, the “default” mode is king.
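The dual-draft idea above can be sketched in a few lines of Python. This is an illustrative outline only, not the paper’s implementation: `prism_route`, `toy_generate`, and `toy_score` are hypothetical names, and in a real system both the drafting and the self-scoring steps would be calls to the language model itself.

```python
def prism_route(query, generate, score):
    """Sketch of PRISM-style dual-draft routing (hypothetical interface).

    generate(query, persona) returns a draft answer;
    score(draft) returns an estimated factual-accuracy value.
    """
    drafts = {
        "default": generate(query, persona=None),          # plain "core" answer
        "persona": generate(query, persona="AI expert"),   # role-played answer
    }
    # Smart routing: keep whichever draft the self-check scores as more accurate.
    best = max(drafts, key=lambda k: score(drafts[k]))
    return drafts[best]

# Toy stand-ins so the sketch runs without a model.
def toy_generate(query, persona):
    return f"[{persona or 'default'}] answer to: {query}"

def toy_score(draft):
    # Mimics the study's finding: the default draft is more factually reliable.
    return 0.9 if draft.startswith("[default]") else 0.6

print(prism_route("What year was the transistor invented?", toy_generate, toy_score))
# -> "[default] answer to: What year was the transistor invented?"
```

With the toy scorer, factual queries always route to the default draft; swapping in a scorer that rewards style would route creative queries to the persona draft instead.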
The Nikki Verdict
While some models get lost in their “stage performance,” Gemini stays focused. By balancing professional tone with real-time data verification, Gemini acts less like a “pretend expert” and more like a reliable partner.
Smart Buying Advice: Drop the Act
If you’re using AI for smart shopping—calculating price drops on tech or analyzing car specs—don’t ask it to be a “guru.” Just ask for the data. The simpler the prompt, the more accurate the facts.
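And for a task like the price-drop example above, you may not need an AI persona at all; the math is two lines (a minimal sketch, with `price_drop_pct` as an illustrative helper name):

```python
def price_drop_pct(old_price, new_price):
    """Percentage drop from old_price to new_price, rounded to one decimal."""
    return round((old_price - new_price) / old_price * 100, 1)

print(price_drop_pct(999.0, 849.0))  # -> 15.0
```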



