Enablement · Workshops · AI Adoption

What Analysts Actually Want from AI: Lessons from the Field

5 min read

Across workshops with analysts and researchers in multiple domains, the same patterns keep emerging. Here is what actually moves the needle, and what misses the mark.

There is a version of AI enablement that looks like this: a vendor runs a half-day training, analysts learn what a large language model is, someone demos ChatGPT, and the session ends with a slide about responsible use. Attendance is good. Behavior change is minimal.

Having run my own workshops with analysts and researchers in a range of domains, I have a clearer picture of what actually changes how people work, and what does not.

What Does Not Move the Needle

Generic prompt engineering tutorials. Teaching analysts to write better prompts for a general-purpose chatbot is only marginally useful if their actual work is in a specific domain with specific tools and specific data. The skill does not transfer cleanly.

Demos that are too polished. When you show analysts a perfectly constructed demo that works exactly as expected on carefully selected inputs, you create a false impression. The first time they try it with their messy real-world data and it does not work, trust erodes quickly.

One-shot training with no follow-up. Skills require repetition and feedback. A single training session, even a good one, rarely changes sustained behavior. People leave motivated, return to their existing workflows, and the new approach never gets integrated.

AI tools framed as replacements. When enablement is framed as "this will automate your job," analysts become defensive rather than curious. Defensive analysts do not experiment. They do not ask questions. And they find reasons not to adopt.

What Actually Works

Start with their specific problems. The workshops that generate the most engagement are the ones that start by asking analysts what takes them the longest, what they find most tedious, what would free up the most time. Then the workshop shows, concretely, how AI tools address those specific bottlenecks.

This sounds obvious. It is rarely done.

Use their data, or close approximations. Analysts are much more engaged when the examples look like their actual work. If they spend their time writing SQL against a particular schema, the workshop demos should use that schema. If they analyze a specific type of document, use a similar document.

Be honest about failure modes. Showing analysts what the model gets wrong, and why, builds more trust than showing only what it gets right. It also gives them the calibration they need to use AI outputs appropriately rather than accepting them uncritically.

Give them time to try, not just watch. The sessions that stick are the ones with significant hands-on time. Watching a demo is passive. Running the same workflow yourself, with your own question, against data you understand, is active. Active learning is what actually gets integrated into daily work.

Build repeatable workflows, not one-off experiments. The goal of an enablement session should be to leave participants with a workflow they can repeat on Monday morning without assistance. Not a collection of interesting prompts. Not a general sense of AI's potential. A specific, documented process for a specific task they do regularly.
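To make "repeatable" concrete, here is a minimal sketch of the kind of script an analyst might leave a workshop with: a templated prompt, a placeholder where the team's actual model call would go, and an output file that records which prompt version produced it. The file paths, the call_llm stub, and the prompt itself are all hypothetical; the point is the shape of the workflow, not the specifics.

```python
"""A repeatable summarization workflow, sketched rather than finished.

Everything here is illustrative: replace call_llm with whatever model
interface your team actually uses, and rewrite the prompt template
against your own documents.
"""

from datetime import date
from pathlib import Path

PROMPT_VERSION = "v3"  # bump whenever the template changes

PROMPT_TEMPLATE = """You are helping an analyst review incident reports.
Summarize the report below in three bullet points, then list every
figure and date it contains so they can be checked against the source.

Report:
{report_text}
"""


def call_llm(prompt: str) -> str:
    # Placeholder: wire this to your team's model endpoint or internal
    # gateway. Returning a canned string keeps the sketch runnable.
    return "- (model output goes here)"


def summarize_report(report_path: Path, output_dir: Path) -> Path:
    report_text = report_path.read_text()
    summary = call_llm(PROMPT_TEMPLATE.format(report_text=report_text))

    # Write the result with the prompt version and source recorded, so a
    # reviewer can trace any output back to the process that produced it.
    output_dir.mkdir(exist_ok=True)
    out_path = output_dir / f"{report_path.stem}_summary_{date.today()}.md"
    out_path.write_text(
        f"prompt_version: {PROMPT_VERSION}\n"
        f"source: {report_path.name}\n\n"
        f"{summary}\n"
    )
    return out_path


if __name__ == "__main__":
    # Hypothetical paths; point these at a real report and output folder.
    print(summarize_report(Path("reports/example_report.txt"), Path("summaries")))
```

The details do not matter; what matters is that the analyst can run it again next Monday without help, and that the output records how it was produced.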

The Mindset Shift That Matters Most

The most important thing I try to convey in workshops is not a specific technique. It is a way of relating to AI outputs.

The default relationship most analysts have with software is: the software is either right or wrong, and if it is wrong, it is broken. That model does not work well for probabilistic systems. An LLM output is not right or wrong. It is a draft with a confidence level you have to estimate from context.

When analysts internalize that, everything changes. They stop treating AI as an oracle and start treating it as a fast, capable, occasionally unreliable collaborator. That shift produces better outcomes than any specific prompt technique, because it generalizes across every task they will ever try to use AI for.
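One way to make that shift concrete in a workshop is to build a small check into the workflow itself. The sketch below assumes a hypothetical extraction task: a model has pulled figures out of a report, and before anyone relies on them, each figure is checked for a verbatim match in the source text so anything the model may have introduced gets flagged for review. The report, the field names, and the helper are all illustrative.

```python
import re


def flag_unverified_figures(source_text: str, extracted: dict[str, str]) -> dict[str, bool]:
    """For each extracted figure, report whether it appears verbatim in the source.

    This does not prove a figure is right; it only catches the cheapest failure
    mode, a value the model introduced that the document never stated.
    """
    numbers_in_source = set(re.findall(r"\d[\d.,]*\d|\d", source_text))
    return {name: value in numbers_in_source for name, value in extracted.items()}


# Hypothetical example: figures a model claims to have pulled from a report.
report = "Q3 revenue was 4.2 million, up from 3.9 million in Q2. Headcount held at 38."
extracted_figures = {"q3_revenue": "4.2", "q2_revenue": "3.9", "headcount": "41"}

for name, verified in flag_unverified_figures(report, extracted_figures).items():
    status = "ok" if verified else "CHECK AGAINST SOURCE"
    print(f"{name}: {status}")
```

A check this crude proves nothing about correctness, but it changes how the output gets read: the flagged value gets looked up instead of trusted.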

A Practical Recommendation

If you are planning AI enablement for your team, design it like a product, not a training event.

- Start with user research: what problems does your team actually have?

- Build a curriculum around those specific problems, using their tools and data

- Include hands-on time with real workflows, not just demos

- Plan for multiple touchpoints, not a single session

- Measure adoption and output quality, not just attendance

The teams that get the most out of AI tools treat enablement as an ongoing practice. The teams that get the least run a workshop and call it done.

Working through a similar challenge?

A 20-minute call is enough to figure out whether I can help.
