Sponsored by

What 2,000 SaaS Companies Reveal About Growth in 2026

Is your growth in line with your peers in B2B SaaS and AI?

Benchmark yourself against actual billings data from Maxio's 2,000+ global customers, alongside firsthand company perspectives, to understand how growth varied by company size, business model, and strategic focus.

Key takeaways from the report: 

  • Average growth across 2,000 companies

  • Growth by revenue band 

  • AI-led vs. AI-enhanced: which performed better?

GM to the Top 1% ☕

I sat in a room last week while an SVP announced his team was killing their AI initiative. Six months, $2.1M burned, no pipeline to show for it.

He pulled up a slide titled "AI Learnings." The only bullet that mattered was the third one: "We asked the agents to do too much."

Quiet room. Somebody coughed. He moved on. But that is the whole story.

💡 THE REAL FAILURE IS ASK WIDTH, NOT MODEL QUALITY

40% of agentic AI projects got canceled or paused in Q1 2026. 78% of enterprises are still in pilot. Fewer than 15% make it to production. Everybody is writing think-pieces blaming the model.

The model is fine. The ask is broken.

Every failed AI project I have audited asked the agent to do a job, not a task: "Generate pipeline." "Handle objections." "Qualify prospects." A middling human performer cannot deliver on those asks in one shot either. Why do we think an LLM can?

The teams that made it to production did the opposite. They scoped down until the ask fit on a Post-it. "Read this email and classify the next step." "Draft the thank-you note." "Label the risk level." Agents hit 95% accuracy on Post-it tasks. They hit 30% accuracy on essay tasks. That is the entire story of 2026 so far.
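To make the scoping concrete, here is a minimal sketch of what decomposing one "essay ask" into Post-it tasks can look like. The task wordings and the `call_agent` stub are illustrative assumptions, not from any specific framework:

```python
# One job-sized ask (expect drift) vs. several task-sized asks (checkable).
ESSAY_ASK = "Qualify prospects"  # a job, not a task

POSTIT_TASKS = [
    "Read this email and classify the next step as REPLY, SCHEDULE, or DROP.",
    "Label this CRM record's risk level: LOW, MEDIUM, or HIGH.",
    "Draft a two-sentence thank-you note for this meeting transcript.",
]

def call_agent(prompt: str, input_text: str) -> str:
    """Placeholder for whatever model client you use (assumption)."""
    raise NotImplementedError

# Each Post-it task names one input and one checkable output,
# so you can actually measure accuracy against a label set.
for task in POSTIT_TASKS:
    assert len(task) <= 140, "ask is too wide for a Post-it"
```

The point of the decomposition is not the exact wording; it is that every task produces an output you can grade right or wrong.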

🔧 THE POST-IT TEST (FOUR STEPS TO SAVE YOUR PILOT)

Before you deploy another agent, run it through this.

1. Write the ask on a Post-it: If you cannot fit the task on a 3x3-inch sticky, the agent will fail. No exceptions.

2. Describe the "done" state in one sentence: "A labeled row in a spreadsheet." "A 6-bullet summary." "A yes/no classification." Vague done state equals drift.

3. Define the input with zero ambiguity: One document, one email thread, one CRM record. If the agent has to decide what to read, it will read the wrong thing.

4. Set the kill switch: Define the exact signal that means "this agent is off the rails": accuracy below 85%, or more than two human edits per output. You cannot improve what you do not monitor.
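The four checks above can be sketched as a tiny pre-deployment gate. Names like `TaskSpec` and `passes_postit_test`, and the 140-character proxy for "fits on a Post-it," are illustrative assumptions:

```python
from dataclasses import dataclass

POSTIT_CHAR_LIMIT = 140  # rough proxy for "fits on a 3x3-inch sticky" (assumption)

@dataclass
class TaskSpec:
    ask: str              # the Post-it-sized instruction
    done_state: str       # one-sentence definition of done
    input_source: str     # exactly one document, thread, or record
    min_accuracy: float   # kill-switch threshold, e.g. 0.85
    max_human_edits: int  # kill-switch threshold, e.g. 2

def passes_postit_test(spec: TaskSpec) -> tuple[bool, list[str]]:
    """Run the four checks; return (ok, list of failure reasons)."""
    reasons = []
    if len(spec.ask) > POSTIT_CHAR_LIMIT:
        reasons.append("ask too wide: does not fit on a Post-it")
    if not spec.done_state.strip() or spec.done_state.count(".") > 1:
        reasons.append("done state is not one sentence")
    if "," in spec.input_source or " and " in spec.input_source:
        reasons.append("input is ambiguous: more than one source")
    if not (0 < spec.min_accuracy <= 1) or spec.max_human_edits < 0:
        reasons.append("kill switch not defined")
    return (not reasons, reasons)

spec = TaskSpec(
    ask="Read this email and classify the next step.",
    done_state="A yes/no classification in the CRM record.",
    input_source="one email thread",
    min_accuracy=0.85,
    max_human_edits=2,
)
ok, reasons = passes_postit_test(spec)
```

The gate is deliberately dumb: if a spec cannot even be expressed this tersely, the ask is a job, not a task.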

🎯 THIS WEEK'S HOMEWORK

Pull your current AI use cases into a list. Next to each one, write the one-sentence done state. If you cannot, you do not have a use case. You have a hope.

QUESTION OF THE DAY

What is one "agent ask" in your org right now that would fail the Post-it test?

Reply with it. I will suggest a rewrite.

See you tomorrow.

Edward

Founder, Morning Sales

P.S. I have 14 "Post-it tested" prompts inside the 500 AI-Powered Prompts PDF that are hitting 90%+ in enterprise pilots. Reps who grabbed it last week flagged prompt #203 (discovery risk scoring) and #318 (renewal language rewrite) as the standouts. Grab it here for $27.
