AI Agent · 2026-05-07 · 42 min read

AI Adoption: Where to Start — 5 Questions Before You Decide

Jake Hwang · Founder · 5years+

The decision before the decision

The CFO asked one question, and the room shifted. "If we don't do this now, are we behind?"

No one answered. Not because they didn't have opinions — they had several — but because the question itself was wrong, and on some level everyone in the room knew it. I was sitting in as an outside advisor, and I remember looking at the COO, who was staring at her laptop, then at the head of operations, who was staring at the wall. The silence wasn't awkward. It was honest.

This is how most AI adoption conversations in mid-sized companies actually begin. Not with a problem. With a fear of being late.

[Image: a leader pausing in front of a whiteboard before an AI decision]

Industry surveys in 2026 suggest that only about 12% of SMBs have anything resembling a real AI strategy, while roughly 58% of small businesses are already using generative AI in some capacity — more than double the rate in 2023. Enterprise AI tells a different story: about 72% of large organizations now have at least one AI workload in production, up from 20% six years ago. SMBs sit in the middle of that gap, adopting tools faster than their strategies can absorb them.

The leaders I work with usually arrive at the same realization a few months in. They've spent budget. They've signed a contract or two. And they cannot answer a simple question from their board: what changed? It is at this point — sometimes earlier, sometimes a quarter too late — that they call us.

What follows is the conversation I wish more leaders had before that point. Not a vendor checklist. Not a maturity model. Five questions. Each one looks small. Each one, answered honestly, will reshape how you think about AI implementation in your company. If you can't answer all five, the right move isn't to start faster. It's to wait a week, talk to your team, and come back.

1. What problem are we actually trying to solve?

The first time I sat in a workshop where leaders tried to brainstorm "where AI could help us," they listed eleven things in fifteen minutes. Customer support. Lead routing. Invoice processing. Translation. Onboarding. Inventory forecasting. The whiteboard filled up. Everyone nodded. Nothing happened.

The problem with "where can AI help" is that it lets you answer "everywhere," which is operationally identical to "nowhere." A better question — slower, more uncomfortable — is: where does work hurt today? Which task generates the most complaints from your own staff? Which one quietly eats Friday afternoons? Which one is the reason a person you can't afford to lose is thinking about leaving?

This distinction matters because pain-driven adoption and FOMO-driven adoption look identical in the budget but produce opposite outcomes. The first has a built-in success metric — the pain either lessens or it doesn't. The second has no metric at all, which is one reason 61% of SMBs report cost as their primary barrier to AI adoption. Cost isn't really the problem. Unjustified cost is. And cost without a defined pain point is, by definition, unjustified.

At first I thought the right starting question was "where could AI add the most value." I was wrong. That framing lets companies pick the most exciting use case rather than the most expensive pain. Pain is a better compass than potential.

2. What does success look like in measurable terms?

If you cannot name a number, you are not ready to start.

I mean that literally. Hours saved per week. Error rate. Response time. Tickets resolved without escalation. Revenue per employee. Pick one. Write it on a whiteboard. Make it the metric. Recent SMB data suggests the average worker saves about 5.6 hours per week using AI tools, with managers closer to 7.2 hours and individual contributors closer to 3.4. Those are useful benchmarks, but they aren't your number. Your number has to come from your own work.

I once watched a leadership team argue for forty minutes over whether their target should be "improving customer experience" or "reducing manual workload." Both sound reasonable in a meeting. Neither survives a quarter. By the end of the call we had narrowed it to a single metric: average first-response time on inbound support emails, measured weekly, with a six-week baseline. Suddenly the conversation about which tool to use became dramatically simpler, because most of the tools on the shortlist couldn't move that number.
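
To make "measured weekly, with a six-week baseline" concrete, here is a minimal sketch in TypeScript. The Ticket shape and field names are hypothetical; substitute whatever your help desk or inbox export actually provides.

```typescript
// Average first-response time per ISO week, in hours.
// Run it over six weeks of history before the pilot starts: that is the baseline.
interface Ticket {
  receivedAt: Date;    // when the inbound email arrived
  firstReplyAt: Date;  // when someone first responded
}

function weeklyFirstResponseHours(tickets: Ticket[]): Map<string, number> {
  const buckets = new Map<string, number[]>();
  for (const t of tickets) {
    const week = isoWeekKey(t.receivedAt);
    const hours = (t.firstReplyAt.getTime() - t.receivedAt.getTime()) / 3_600_000;
    if (!buckets.has(week)) buckets.set(week, []);
    buckets.get(week)!.push(hours);
  }
  const averages = new Map<string, number>();
  for (const [week, values] of buckets) {
    averages.set(week, values.reduce((a, b) => a + b, 0) / values.length);
  }
  return averages;
}

// ISO week label like "2026-W19", precise enough for a six-week baseline.
function isoWeekKey(d: Date): string {
  const date = new Date(Date.UTC(d.getFullYear(), d.getMonth(), d.getDate()));
  const day = date.getUTCDay() || 7;            // Monday = 1 ... Sunday = 7
  date.setUTCDate(date.getUTCDate() + 4 - day); // move to this week's Thursday
  const yearStart = new Date(Date.UTC(date.getUTCFullYear(), 0, 1));
  const week = Math.ceil(((date.getTime() - yearStart.getTime()) / 86_400_000 + 1) / 7);
  return `${date.getUTCFullYear()}-W${String(week).padStart(2, "0")}`;
}
```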

The target we put against that metric was thirty percent. Thirty percent sounds small until you realize that thirty percent of a team's email volume is twenty hours a week no one had time to read carefully. Numbers without human context get ignored. Numbers with human context get budgeted.

3. Who owns this, day to day, after launch?

This is the question most teams skip, and it is the one that quietly kills more projects than any technical issue. The answer cannot be "the consultant." It cannot be "the vendor." It cannot be "we'll figure it out after launch." Someone inside the company has to wake up on a Tuesday and care that the system is working.

Industry data backs this up indirectly. Roughly 78% of organizations that successfully completed AI implementation worked with external partners for at least part of the build. That sounds like a vote for outsourcing — until you read the next layer down, where education was the number-one talent-strategy adjustment companies made for AI, ahead of role redesign or new hires. The successful pattern isn't outsourcing. It's outsourcing the build and insourcing the ownership.

I've seen this go both ways. In one case, a logistics firm we'd worked with had a part-time operations analyst — originally hired for spreadsheet work — who quietly took over the system the week after handover. Six months later she was the most fluent person in the company about what the model did and where it failed. The project succeeded because of her, not because of the technology. In another, a similar-sized company had no internal owner. Everyone agreed it was important. No one's calendar reflected it. The system worked beautifully for about ninety days and then drifted into irrelevance, the way unmaintained tools do.

If you cannot name the person before launch, you are not ready to launch.

4. What data do we already have, and is it usable?

Most AI projects in SMBs are really data hygiene projects with an AI veneer. The model is the easy part. The data is the part that takes three months and one uncomfortable meeting with whoever has been maintaining the customer database in a personal spreadsheet for the last four years.

About 41% of SMBs cite data quality as a primary barrier to AI adoption. That number, in my experience, understates the issue, because most companies don't realize their data is bad until they try to use it for something that depends on it. Reports tolerate a certain amount of mess. Models do not.

A 200-person manufacturer we worked with last year had three separate customer databases — a CRM, a spreadsheet maintained by sales, and an export from their accounting system — and no one in the company could say with certainty which one was authoritative. They wanted to deploy a customer-facing chatbot. The chatbot was the last problem they needed to solve. The first was getting the three databases to agree on what a customer's email address was.
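
Before any chatbot work, that reconciliation step can be as unglamorous as the sketch below: normalize each source's records and list every customer the sources disagree about. The record shape is hypothetical, and it assumes a shared customer ID already exists (often it doesn't, and establishing one is the real first milestone).

```typescript
// One flattened record from any of the three sources.
interface CustomerRecord {
  source: "crm" | "sales_sheet" | "accounting";
  customerId: string;  // assumes a shared key; agreeing on one is step zero
  email: string;
}

// Lowercase and trim so "Jane@Acme.COM " and "jane@acme.com" count as equal.
const normalize = (email: string) => email.trim().toLowerCase();

// Customer IDs whose sources disagree on the email address.
function emailConflicts(records: CustomerRecord[]): Map<string, Set<string>> {
  const byCustomer = new Map<string, Set<string>>();
  for (const r of records) {
    const emails = byCustomer.get(r.customerId) ?? new Set<string>();
    emails.add(normalize(r.email));
    byCustomer.set(r.customerId, emails);
  }
  // Keep only the customers with more than one distinct email across sources.
  return new Map([...byCustomer].filter(([, emails]) => emails.size > 1));
}
```

The length of that conflict list, not the chatbot demo, is usually the first honest status report of the project.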

The honest answer to this question almost always changes the scope of the project. Sometimes it makes it smaller. Sometimes it makes it longer. Almost never does it leave it the same size.

5. What is the smallest version we can run in 4–6 weeks?

A real proof of concept. Not a slide deck with the word "PoC" in the corner. Something that runs, with real data, used by real people, that produces an answer to the success metric you defined in question two.

If the smallest version of your idea is bigger than six weeks, the idea is wrong. Not the timeline. The idea. Most failed AI implementations I've seen failed because the team tried to build the cathedral before the chapel. Recent figures suggest 79% of organizations face challenges in adopting AI despite their investment, and the most common reason — by a noticeable margin — is that the first version was scoped to do too much. A six-week scope forces clarity. It forces you to pick one user, one workflow, one number.
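
For a sense of how small "small" can be: on the stack mentioned at the end of this post (Claude API via the official TypeScript SDK), a first version of the support-email pilot is roughly the sketch below. The model name and system prompt are placeholders, and the agent only drafts a reply; a human still reviews and sends it, so first-response time stays the one number being measured.

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// Draft (never auto-send) a first reply to one inbound support email.
async function draftFirstReply(email: string): Promise<string> {
  const message = await client.messages.create({
    model: "claude-sonnet-4-5", // placeholder: whichever model you are evaluating
    max_tokens: 500,
    system:
      "You draft first replies to inbound support emails. Be brief, " +
      "answer only what was asked, and flag anything you are unsure about.",
    messages: [{ role: "user", content: email }],
  });
  const first = message.content[0];
  return first.type === "text" ? first.text : "";
}
```

Everything else (routing, tone rules, escalation, the n8n wiring around it) can wait until the baseline number moves.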

The other reason for the six-week ceiling is political. Six weeks is short enough that the project doesn't change leadership halfway through. Short enough that a champion can stay engaged. Short enough that, if it fails, you can still call it a learning exercise rather than a budget item. Enterprise AI projects can survive longer timelines because they have organizational slack. SMBs don't have that slack. The good news is that the constraint usually produces better work.

What these questions are really for

I'll be honest. These five questions don't tell you whether to adopt AI. They tell you whether you've earned the right to start. Most companies that ask them carefully end up doing two things: shrinking the scope of their first project, and naming an internal owner they hadn't thought of. That's it. The drama of choosing between vendors and frameworks turns out to be a distraction from the work that has to happen first.

The companies that skip the questions usually don't fail dramatically. They drift. The pilot ships, the metrics never get tracked, the owner moves on, the data quality issue resurfaces in a different form a year later, and the leadership team quietly stops mentioning AI in board updates. Only about 28% of organizations describe their AI adoption as mature. Most enterprise AI failures look dramatic. Most SMB failures look like drift.

In the next piece I'll walk through a five-area self-assessment that maps your current state — your data, your team, your processes, your budget, your appetite for change — to the type of AI that actually fits. The right model and the right tool depend almost entirely on which area is your weakest, and most companies don't realize that until they audit themselves.

For now, take the five questions to your next leadership meeting. Spend an hour on them. If the room goes quiet the way that CFO's room did, that's a good sign. It means you're asking something real. If you'd like a thinking partner for the conversation, the team at 5years+ has run this exercise with mid-market companies across Asia and is happy to compare notes — no pitch, just a working session.

WRITTEN BY
Jake Hwang · Founder · 5years+ · EST. 2022

Founder of 5years+. Helping Korean and Japanese companies escape the repetitive grind and focus on growth — through AI agents, workflow automation, and product engineering. 52+ projects shipped on a stack centered around Claude API, n8n, and Next.js.
