The Hard Problems No One Wants to Solve

Margherita Zama
April 13, 2026
Building standalone agents is easy. Building reusable, composable capabilities is the hard part.
Most companies don’t have an AI problem. They have a governance and operating model problem.
We see it every week: teams don’t struggle to build agents. They struggle to make them work together, safely, and at scale.
Open LinkedIn right now. You'll see a wave of posts that all look the same: "I built an agent that does X in 30 seconds." "I automated my entire workflow with agentic AI." Someone vibecoded an entire product over a weekend.
The productivity gains are real. Building agent #1 is thrilling.
But nobody talks about what happens six weeks in. By then, marketing has three agents running. Finance has four. Usage no longer fits the seat-based pricing model. And the CEO has questions: How many agents are live across the company? Who built them? What data can they access? Was any of it approved? And what's the ROI?
That's agent #50 territory. And that's where things get real.
Agent #50 is when you discover that every agent is an island on someone's laptop, invisible to everyone else. When a departing employee takes their validated workflows with them. When two teams build duplicate agents because neither can see what the other built. When someone's agent quietly accesses customer data it shouldn't touch, and no one finds out because there's no audit trail.
There's a reason 80% of companies are still at the "chat with AI" stage. It's not the model. Moving from personal productivity to agents that touch company data, trigger workflows, and span multiple teams requires governance capabilities most tools weren't built to handle. The gap isn't in the technology itself, it's in the infrastructure needed to actually deploy it across a company.

The problems no one wants to solve first

1. AI Leadership and Governance

The scaling problem: Governance doesn’t matter at 1 or 10 agents. At 50, it’s existential. This isn’t a policy doc problem, it’s architectural.
What you need:
  • Executive ownership (not a quarterly AI committee)
  • An "AI Operator" role: team members who know the company and the job, and who redesign company processes around agents rather than layering AI on old workflows
  • Admin controls that let IT govern without blocking AI operators from building
The pitfall: Too much control → everything slows down. No control → shadow AI everywhere and non-compounding learnings. The balance is structured autonomy.
Where infrastructure matters: Architecture and permissions that let admins scope data access and agent capabilities at the organizational level while teams build freely within those boundaries. Otherwise, “what’s running?” has no answer.
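To make "structured autonomy" concrete, here is a minimal sketch (all names hypothetical, not a real product API) of the pattern: admins define data-access boundaries once per team, and every agent must register inside them, which also makes "what's running?" answerable from a single place.

```python
from dataclasses import dataclass

# Hypothetical sketch of org-level scoping: admins set boundaries,
# teams build freely inside them. Names and shapes are illustrative.

@dataclass(frozen=True)
class Scope:
    team: str
    data_sources: frozenset  # data this team's agents may read

@dataclass
class Agent:
    name: str
    owner: str
    scope: Scope
    requested_sources: frozenset

class Registry:
    """Single place that can answer 'what's running, who built it?'"""
    def __init__(self):
        self.agents = []

    def register(self, agent: Agent) -> None:
        # The admin-set boundary is enforced at registration time,
        # not in a policy doc nobody reads.
        out_of_scope = agent.requested_sources - agent.scope.data_sources
        if out_of_scope:
            raise PermissionError(
                f"{agent.name} requested {sorted(out_of_scope)}")
        self.agents.append(agent)

    def whats_running(self):
        return [(a.name, a.owner, a.scope.team) for a in self.agents]

sales = Scope(team="sales", data_sources=frozenset({"crm", "call_notes"}))
reg = Registry()
reg.register(Agent("pipeline-summarizer", "ana", sales, frozenset({"crm"})))
```

The point of the sketch: control lives in the registration path, so IT governs the boundaries while operators never wait on a ticket to build within them.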

2. AI Strategy and Business Model

The scaling problem: 95% of enterprise AI pilots fail to deliver lasting value. The reason is almost never the model. It's that companies treat AI adoption as a technology project instead of an operating model shift.
What you need:
  • A maturity path (chat → agents → workflows)
  • Clear unit economics
  • Clear roles: decision makers, AI operators, users
  • Multi-model strategy
The pitfall: Scaling before use cases are solid. Growth amplifies what's broken. A company that deploys 200 agents or Skills built on shaky prompts and disconnected data will create 200 sources of wrong answers at scale. The models are good; the way they're steered is not.
Where infrastructure matters: Model flexibility becomes critical for resilience, but also for cost and iteration speed.

3. People and the AI Operator Role

The scaling problem: AI doesn't replace people. It changes what people do, and it forces a rethink of what talent means. The companies seeing 80-90% daily active AI usage aren't just the ones with the best models. They're the ones who invested in a new kind of role.
What you need:
"AI Operators": domain experts who design agent workflows for their teams, rather than IT or consultants deploying tools to teams. AI Operators can be PMs, ops leads, rev ops, or support managers. They build a culture where improving a shared agent is as natural as editing a shared document.
The pitfall: Assuming AI adoption is an IT project. The highest-impact AI Operators at companies like Clay (58 hours saved per month), Vanta (400 hours saved per week), Wakam (196 agents deployed), Alan (100% HR team daily adoption), and PayFit (250+ agents) are surprisingly not engineers. They're the people closest to the work. Lock agent creation behind technical expertise, and you've bottlenecked innovation at IT's bandwidth.
Where infrastructure matters: Deploying AI tools is not AI transformation. The tool you choose defines the system you build, and how much your company can actually rewire itself around it. 
The goal isn’t to multiply agents. It’s to create shared systems that improve over time. In mature deployments, users don’t rely on a single agent; they interact with a portfolio of agents every week, built on shared, reusable capabilities. When one of those improves, the benefit propagates instantly across teams.
That’s how you move from individual productivity to organizational leverage. The companies reaching 80–90% daily active usage didn’t get there by just buying licenses. They got there by treating agents as shared infrastructure, creating internal flywheels where each improvement compounds across the organization.

4. Agent Architecture, Operations and Security

The scaling problem: Agent #1 to #5 is a solo project. Agent #50 is an architecture and maintenance problem. One customer configured 7,683 agents in a single year. At that scale, you no longer have an AI deployment. Without the right cockpit to manage and maintain your agents securely, you have a legacy codebase held together by collective memory and a lot of hope.
What you need:
  • Visibility (what exists, who owns it, how it’s used)
  • Reusability (shared building blocks instead of copy-paste agents)
  • Control (permissions, audit logs, and safe execution of actions)
  • Zero data retention with model providers, contractually guaranteed 
  • Compliance by design: SOC 2 Type II, GDPR, HIPAA, EU data residency
The pitfall: The blank-slate or “yolo” trap: copy-paste agents and Skills everywhere. You succeeded at autonomy and failed at scale, because IT and security don’t know which agents run with which permissions, and can’t stop an agent going rogue on a failing trigger or agentic loop.
Where infrastructure matters: You need structure (granular permissioning, a system for handling the stakes of each tool, shared capabilities) or you end up with sprawl you can’t manage. If compliance isn’t built in, you won’t retrofit it later. A risk assessment agent processed 2.09 million messages for a single fintech customer. A Snowflake-connected analysis agent handled 65,000+ queries. These are not demo numbers. They're production workloads on infrastructure that was built for agents from day one, not retrofitted from a chatbot.
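The "control" requirement above can be sketched as a gate in front of every agent action: a permission check plus an append-only audit record, so "which agent touched what" always has an answer, including for denied attempts. This is an illustrative pattern with hypothetical names, not any vendor's actual API.

```python
import datetime

class AuditedExecutor:
    """Hypothetical sketch: permission check plus append-only audit
    log in front of every agent action (tool call)."""

    def __init__(self, permissions):
        self.permissions = permissions  # agent name -> set of allowed tools
        self.log = []  # append-only audit trail; denials are logged too

    def execute(self, agent: str, tool: str, payload: dict):
        allowed = tool in self.permissions.get(agent, set())
        self.log.append({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": agent,
            "tool": tool,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{agent} may not call {tool}")
        return f"ran {tool}"  # stand-in for the real tool invocation

ex = AuditedExecutor({"risk-agent": {"read_tickets"}})
ex.execute("risk-agent", "read_tickets", {})
```

Because the log entry is written before the permission decision takes effect, a rogue or misconfigured agent leaves a trace even when it is blocked, which is exactly what an audit trail is for.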

5. Technology, Data Infrastructure and Compounding impact

The scaling problem: The ceiling for enterprise AI is not intelligence. It's the organizational layer around it. Models will keep getting better. The harder prediction is that most companies will fail to capture that value because they adopted tools designed for individuals and spent years retrofitting governance. If every team builds agents in isolation, you get duplication, inconsistency, and silos.
What you need:
  • A governed context layer (company data accessible, but controlled)
  • Extensibility (not locked into one ecosystem)
  • Shared systems across teams (not isolated agents)
  • Observability (what agents do, and how they perform)
The pitfall: Building everything in-house, or locking into one ecosystem. One customer described their internal AI build as "18 months and 20 engineers." The other trap: choosing a tool locked to a single ecosystem. If 79% of your workforce runs on Microsoft but your AI platform only works with Google, you've created an adoption wall.
Where infrastructure matters:
Compounding only happens if agents are not personal artifacts but shared infrastructure. In practice, this means visibility across what exists, reuse instead of duplication, and governance by default. Without that, every new agent adds entropy. With it, every improvement becomes a multiplier across the organization.
The real breakthrough happens when agents are no longer personal tools, but shared building blocks that multiple teams rely on, improve, and extend together.

6. Cost Management and Unit Economics

The scaling problem: Individual AI productivity tools look cheaper on a per-seat basis. They are cheaper, until you account for everything they don't do.
What you need:
  • Usage-based visibility
  • Cost per agent / team / use case
  • Model selection by task
The pitfall: A $20 seat with no guardrails is not a bargain. It's a deferred invoice. The total cost of absent governance is almost always higher than the cost of the platform that provides it natively. Agentic sessions burn 10 to 100 times more tokens than chat. A company that approved 200 seats at a flat rate discovers within weeks that real agent workloads blow through any spend cap.
Where infrastructure matters: Multi-model + consumption pricing = control. Without it, costs drift fast.
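A back-of-envelope calculation shows why the flat seat breaks. The 10-100x token multiplier comes from the text above; the prices and volumes are purely illustrative assumptions.

```python
# Illustrative unit economics: why flat per-seat pricing breaks under
# agentic load. Only the 10-100x multiplier is from the article;
# every price and volume below is an assumption.

SEAT_PRICE = 20.0          # assumed flat $/user/month
PRICE_PER_M_TOKENS = 5.0   # assumed blended $ per million tokens

def monthly_token_cost(sessions: int, tokens_per_session: int) -> float:
    """Cost of one user's monthly usage at consumption pricing."""
    return sessions * tokens_per_session / 1_000_000 * PRICE_PER_M_TOKENS

# A chat-style user: short sessions, modest token counts.
chat = monthly_token_cost(sessions=400, tokens_per_session=2_000)

# The same session count, but agentic: 50x tokens per session
# (midpoint of the 10-100x range cited above).
agentic = monthly_token_cost(sessions=400, tokens_per_session=2_000 * 50)
```

Under these assumptions the chat user costs a few dollars a month, well inside a $20 seat, while the agentic user costs two hundred: the seat either blows through its cap or the vendor throttles real workloads. Either way, per-agent and per-team cost visibility is what lets you see it coming.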

Why we built it this way

We started building Dust in 2023 with a simple thesis: the hard part of enterprise AI is not the model. It's the organizational layer around it.
Models will keep getting better. 
Most companies still won’t capture the value, because they build on tools designed for individuals, then try to retrofit governance, collaboration, and cost control later.
That rarely works.
Dust is built as infrastructure, not a wrapper. Agents are scoped, permissions are explicit, actions are traceable. Models are interchangeable. Knowledge is shared, not siloed.
So when one person improves something, it compounds across the org.
We’re also not pretending this is solved. Governance at scale is still messy: permissions, synchronization, observability. Anyone saying otherwise is selling something.
But there’s a difference between problems you can iterate on, and ceilings you can’t break.
We chose to build for the first. 
Because building agent #1 is easy. Running agent #50, however, is an operating system problem.
And most companies are still trying to solve it with tools designed for individual productivity. The next phase of AI will be about better systems: governance, collaboration, and infrastructure that let organizations actually operate AI. That work is a lot less sexy on LinkedIn, but it’s where the durable value is.