Why 75% of users never touched the dashboards you paid for.
Static dashboards were built for a different era. When data warehouses were expensive and analysts were scarce, pre-building a chart for every conceivable question made sense. That model never scaled past the analysts themselves. The promise of self-service died in the gap between "the data exists" and "I can ask a useful question of it."
Natural language interfaces have been bolted onto BI tools for a decade. They mostly didn't work. The questions users asked were ambiguous, the joins were wrong, the metrics didn't match the definitions in the board deck. Trust evaporated. What's different now is the combination of large language models that can parse vague, business-flavoured questions and data layers mature enough to keep them honest. Most of the time.
Three platforms, one shared dependency: your metric definitions.
The landscape has consolidated around three entry points, each with a different set of trade-offs.
GA4 Analytics Advisor is the one most organisations already have. It's built into Google Analytics 4 at no additional cost, and it answers plain-language questions about your web and app data in seconds. The constraint is that "engagement," "conversion," and "session" mean what Google says they mean. Your organisation's more precise definition, the one that reconciles with the board deck, has nowhere to live. The grunt work of defining metrics has moved upstream and, in this case, out of reach.
Conversational Analytics in Looker, generally available since April 2026, takes the opposite approach. The chat interface is powered by Gemini, but the senior partner is the Looker semantic layer: your business's own definitions of what "active customer" or "monthly recurring revenue" actually mean. When the model is grounded in those definitions, it answers more reliably. A question like "what's our churn rate by acquisition channel for mobile users last month" can pull from product, marketing, and finance data in one go. The trade-off is that someone has to build and maintain those definitions. That work doesn't disappear. It moves upstream into the data foundation.
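To make that upstream work concrete, here is a hedged sketch of what a shared definition might look like in LookML, Looker's modelling language. The view name, table, field names, and the 30-day churn logic are illustrative assumptions for this article, not any particular organisation's definitions.

```lookml
# Hypothetical sketch: "active customer" and "churn rate" defined
# once, in one place, and reused by every dashboard and by the
# chat interface alike. All names and logic here are illustrative.
view: customers {
  sql_table_name: analytics.customers ;;

  dimension: acquisition_channel {
    type: string
    sql: ${TABLE}.acquisition_channel ;;
  }

  dimension: is_active {
    description: "Active = at least one billable event in the last 30 days (illustrative definition)."
    type: yesno
    sql: ${TABLE}.last_billable_event_at >= CURRENT_DATE - 30 ;;
  }

  measure: active_customers {
    type: count
    filters: [is_active: "yes"]
  }

  measure: churned_customers {
    type: count
    filters: [is_active: "no"]
  }

  measure: churn_rate {
    description: "Churned customers as a share of all customers."
    type: number
    sql: 1.0 * ${churned_customers}
         / NULLIF(${active_customers} + ${churned_customers}, 0) ;;
    value_format_name: percent_1
  }
}
```

When someone types "churn rate by acquisition channel", the model resolves those words against definitions like these rather than improvising its own SQL. That grounding is what separates a reliable answer from a fluent guess.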
Amplitude's AI Agents, launched in February 2026, went further still. Rather than waiting for a question, Amplitude's Global Agent monitors dashboards, investigates anomalies, and surfaces hypotheses about what's driving changes in your funnels. It operates inside Slack and connects to Notion, Figma, and GitHub, which means the analytics comes to wherever your team already works. One early customer put it simply: "I go into every Monday morning feeling like the smartest person in the room without any work." By Amplitude's own figures, AI agents now account for around a quarter of all queries on the platform, for a feature that didn't exist a few months ago.
Same question. Opposite reliability. The difference is who owns the definitions.
The marketing manager who couldn't get an answer from her dashboard without waiting three days can now type: "compare conversion rate for the spring campaign versus last year, broken down by Nordic country, for users who saw both the email and the display ad." She gets a chart. She follows up, digs in, and the conversation becomes the analysis.
In Looker, if her organisation has done the upstream work, "conversion rate" means exactly what her business agreed it means. In GA4 Analytics Advisor, it means what Google decided. Same question. Same fluent interface. Completely different reliability depending on whether you control the definitions. And that difference is invisible until the answer contradicts something from the finance team.
The winners did the upstream work before the tool arrived.
The companies seeing the most value from conversational analytics right now are not the ones with the newest tools. They are the ones that spent the last few years agreeing on what their metrics mean and making sure those definitions live somewhere the system can find them. Tools amplify whatever is underneath. Good data foundations get more useful. Bad ones produce confident-sounding answers that quietly contradict each other across the organisation. The lesson from a decade of self-service BI applies here with more force. The interface problem is solved. The foundation problem is not.
Start with a scoping exercise, not a tool decision.
The right first step is a short, living reference that names three things:
- Which questions your conversational analytics setup can answer reliably on its own
- Which questions need a dashboard cross-check before you act on the answer
- Which metrics and dimensions in your data are trusted, and which still need governance work before they can be trusted in a chat interface
It works regardless of whether you are starting from GA4, Looker, Amplitude, or any other analytics platform. It turns "spot the bad answer in real time" into "know what to ask in the first place."
At Dear Future, this scoping exercise is how we open every conversational analytics engagement. Across our work with retail, telco, and manufacturing clients, the same pattern shows up: the questions that look easiest to answer are usually the ones with the most fragile definitions underneath. For GA4 users, the document clarifies what the platform can do within Google's definitions and where your own data governance needs to start. For Looker buyers, it becomes the foundation of the semantic layer investment. The framework is short. The clarity it creates is not.
If you are starting to ask what conversational analytics could mean for your team, that is the conversation we would be happy to have.


