Why your Analytics Agent still sucks

⏰ Reading time: 9 minutes ⏰

“The future analyst won’t write SQL, they’ll configure the AI analyst.”

Let that sink in.

It’s a fundamental shift in how data work will be done.

Last week, I promised to go deeper into what I'm actually building, where things are breaking down, and why the real bottleneck isn't what you think it is.

So, let's get into it, shall we?

What’s the goal of this newsletter?

Simple: I want to take you inside the process of building a functional AI analytics agent, one that could change how you analyze data in your team or business.

And I’ll show you why a GPT wrapper isn't the real answer.

You’ll learn:

  • Why many LLM data agents fail in practice
  • How I’m designing a 3-step agent architecture
  • Where my approach is currently breaking
  • The critical role of semantic layers - and what I'm trying next

The Vision: A 3-Step Analytics Agent

In the last issue, I outlined the general direction of where this is going. Here's a quick recap:

Step 1 – Reactive Agent
Answers business questions on demand. Think: "What was revenue last week?" or "How did the email funnel perform this month?"

Step 2 – Proactive Agent
Constantly monitors business goals and initiatives. It tracks whether you’re hitting targets and surfaces risks before they turn into problems.

Step 3 – Autonomous Agent
Takes action. For example, it might pause a losing landing page test or reallocate spend between ad creatives, without you lifting a finger.
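As a rough sketch, the three levels above could be expressed as a class hierarchy. This is purely illustrative — the names and methods are mine, not an API from Wobby or any other tool:

```python
from dataclasses import dataclass

# Hypothetical sketch of the three agent maturity levels.
# All names are illustrative, not a real vendor API.

@dataclass
class Insight:
    question: str
    answer: str

class ReactiveAgent:
    """Step 1: answers business questions on demand."""
    def ask(self, question: str) -> Insight:
        # In a real system: translate the question into a vetted
        # query against the warehouse and return the result.
        return Insight(question, answer="(query result)")

class ProactiveAgent(ReactiveAgent):
    """Step 2: monitors goals and surfaces risks before they escalate."""
    def monitor(self, goals: list[str]) -> list[Insight]:
        return [self.ask(f"Are we on track for: {g}?") for g in goals]

class AutonomousAgent(ProactiveAgent):
    """Step 3: acts on findings, e.g. pausing a losing landing-page test."""
    def act(self, insight: Insight) -> str:
        return f"Action taken based on: {insight.question}"
```

Each step strictly builds on the previous one — an autonomous agent is useless if the reactive layer underneath it answers questions wrongly.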

Right now, I’m building Step 1: a reactive agent that works.

And trust me, even that’s not as plug-and-play as people claim.

Building the First Agent with Wobby.ai

Wobby claims to build AI analysts that deliver business-ready insights straight from your data warehouse, right in Slack or Teams: self-serve analytics, trusted results.

I connected my own Data Action Mentor BigQuery Data Warehouse and started building my first agents.


The goal:
No more ad hoc SQL.
Just ask questions about my data in Slack and get deterministic, high quality answers.

The Wobby team sees a future where the data analyst role will disappear and merge into the analytics engineering role. This role will be responsible for providing the agents with clean data and metadata and configuring the agents.

The idea is to build domain-specific agents. You configure them with rules and data access, and then ask questions in Slack or Teams.

Meet my sessions agent:

Its job: Analyze web funnel performance for sales of my Masterclass "From dashboard factory to strategic partner."

The Instructions window contains general context on how the AI agent should interpret and respond to analysis tasks.


Next, you can restrict the agent to specific datasets and tables. I decided to let it laser-focus on the sessions table in my DWH's datamarts layer.


You can also decide whether your agent is allowed to execute custom SQL queries. If this toggle is switched on, the agent can write its own queries based on your question and the metadata it can access. I switched this off, as 90% of the custom queries were completely off (and my DWH has very clean metadata 😉).


If the switch is "off", the agent can only perform queries based on what it knows from its Knowledge Base.


Here, you can provide two types of knowledge:

  1. Metric definitions as SQL queries, each marked as "Verified" or "Not Verified"

  2. Contextual descriptions of your business context and vocabulary. For example, I defined my different types of funnels.

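To make the first knowledge type concrete: a "verified" metric entry pairs a name with a fixed SQL definition the agent may run verbatim instead of generating its own. The table and column names below are my own hypothetical examples, not Wobby's format:

```python
# Hypothetical shape of a verified metric entry in a knowledge base:
# a fixed SQL snippet the agent runs as-is, never regenerates.
verified_metrics = {
    "weekly_sessions": {
        "status": "verified",
        "sql": (
            "SELECT DATE_TRUNC(session_date, WEEK) AS week, "
            "COUNT(*) AS sessions "
            "FROM datamart.sessions "
            "GROUP BY week"
        ),
    },
}

print(verified_metrics["weekly_sessions"]["status"])  # verified
```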

Sounds clean. But here’s where things started to fall apart.

The Problem: Context Is Not Enough

Despite all the prep, the tightly defined use case, and the restricted data access, my Wobby agent didn't behave deterministically.

Even when I blocked it from writing its own SQL, results were inconsistent. And when I tried expanding the context to help it out more, two big issues emerged:

  1. Context overload.
    Writing and maintaining all the required logic in Wobby's UI is exhausting. Every new funnel, segment, or metric needs handcrafted input. A metric is a combination of a measure, a dimension, and a filter, and those combinations quickly add up to thousands of definitions.
  2. Non-deterministic answers.
    Even with strict rules, the agent often misunderstood even simple questions or returned wrong results.
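To see why the context overload gets out of hand, count the combinations for even a modest warehouse. The numbers below are illustrative, not my actual schema:

```python
# Back-of-the-envelope: a metric is roughly measure x dimension x filter,
# so the definition space explodes quickly. Counts are illustrative.
measures = 10      # e.g. revenue, sessions, conversions, ...
dimensions = 15    # e.g. channel, country, device, funnel step, ...
filters = 20       # e.g. date ranges, segments, campaign types, ...

combinations = measures * dimensions * filters
print(combinations)  # 3000 handcrafted definitions to write and maintain
```

Nobody is going to type 3,000 definitions into a UI — and nobody is going to keep them in sync when the underlying tables change.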

Why?

Because Wobby doesn’t currently support a real semantic layer. All that business logic and interpretation is stored in isolated bits of context.

In short:
You can’t get deterministic answers if your agent is guessing what your data means every time.

The Insight: You Need a Semantic Layer in Code

I'm convinced:
AI Agents won’t work reliably without a semantic layer. One that’s written and maintained in code.

The current Wobby-agent approach breaks because:
→ The agent has no structured way to understand your data
→ All logic lives in disconnected text fields
→ Updating it at scale is a nightmare

If your agent doesn't know what "revenue," "active user," or "checkout conversion" mean in a precise, reusable way, you'll always be stuck babysitting it.

Which defeats the point.

The Solution: Toward a Code-Based Semantic Layer

That’s where I’m heading next.

Right now, I’m testing Connecty AI, a tool that helps build and maintain a semantic layer using AI. It promises to:

  • Ingest your data models
  • Help define key metrics and business entities
  • Build a knowledge graph with AI support
  • Give your agents a structured model to reason over

This matters, because defining and maintaining a semantic layer manually is:

  • Tedious
  • Error-prone
  • Hard to scale across teams and products

Connecty's goal is to build the world's first fully autonomous Day 0 semantic layer.

The Bottom Line

Many data teams playing with AI agents are missing the most important piece: a common, foundational understanding of what their data means.

Without a semantic layer, your agent is just a fancy autocomplete that makes pretty dashboards but doesn’t understand what it's showing.

Here’s what I’ve learned:

  • Building AI agents only works with strong guardrails.
  • Context blobs are not enough to create deterministic outputs.
  • The real unlock is a semantic layer that agents can reason over.
  • That semantic layer must live in code, not tool-specific UIs.
  • Maintaining this layer will be the key job of analytics engineers.

If you’re serious about building analytics agents that don’t just look cool but actually help your business, the next step is clear:

You need to start investing in a real semantic layer.

I’ll go deeper into how I’m testing Connecty and what that setup looks like in the next newsletter.

Until then:
Ask yourself: Are you ready to build AI analytics agents?
Or will you build master hallucinators?

See you next time!

Sebastian

P.S.: This is not a sponsored post. I'm sharing my neutral, unbiased observations.

Join 2,500+ readers

Subscribe for weekly tips on building impactful data teams in the AI era


Whenever you need me, here's how I can help you:

Data Action Mentor Masterclass: 🏭 From dashboard factory to strategic partner ♟️

A digital, self-paced masterclass for experienced data professionals who want to work on high-leverage projects (not just dashboards). 📈

Knowledge Base

Free content to help you on your journey to create massive business impact with your data team and become a trusted and strategic partner of your stakeholders and your CEO.

10X Data Team Collective 🦸

We build 10X, AI-first data teams. Together.

A curated community for ambitious data leaders who generate outsized business impact (and outsized career growth) by building the AI-powered 10X data team of the future. For the price of less than $1 per day.

You'll get expert content, hype-free conversations, and curated 1:1 matchmaking with forward-thinking professionals in the data & AI space.