AI Agent · 2026-05-10 · 31 min read

A Realistic Guide to AI Cost and Timelines, by Company Scale

Jake Hwang · Founder · 5years+

The question that comes up in every first call

"How much is this going to cost, and how long will it take?"

That is usually the third question I get on a first call with a founder considering AI. The first two are some version of "is this even possible for a company my size" and "how do I know we are not just buying hype." By the time we reach budget and timeline, the room has relaxed a little. It feels like the part where I am supposed to give a number.

I never do. Not because I am being cagey, but because the honest answer is shaped less by what the AI does and more by who has to live with it after we ship.

[Image: empty conference room with scattered financial documents, representing AI project budget planning]

Why a single price tag is a warning sign

If a vendor responds to "how much does an AI project cost" with a clean three-figure range and no follow-up questions, walk. The variation between AI projects in the same category — same use case, same headcount, same industry — can easily be five-fold. Industry data backs this up: 85% of organizations miss their AI budget forecasts by more than 10%, and final costs frequently land 3 to 5 times above the original quote. That is not because vendors are dishonest. It is because the cost lives in places you cannot see during the sales conversation.

The places it lives, in rough order of size: data preparation, integration with the systems you forgot to mention, change management for the people whose work is about to shift, and the long tail of maintenance after launch. Skip any of those and you will see the symptom in your bank account three months later.

Three scales, three different conversations

Instead of giving you a number, let me give you the shape of the conversation by company size. The dollar figures vary too widely to print, but the pattern is consistent.

Small: a single workflow, a single team

This is the bracket where most SMBs should start. One repetitive workflow — invoice processing, customer email triage, a chatbot scoped to one website — owned by one team, plugged into one or two systems. The vendor's job here is restraint: do not sell the platform when the customer needs the tool. Off-the-shelf SaaS plus a thin layer of configuration is often the right call. When custom work is genuinely needed, scope it as a focused PoC and resist the urge to expand mid-flight.

Timeline: weeks, not months. If the vendor is talking quarters at this scale, something is over-engineered.

Mid: cross-functional, multi-system

Now the AI has to read from one system, write to another, and notify someone in a third. This is where the "invisible integration layer" — every consultant's favorite phrase, with reason — eats budget. Industry estimates put pilot-to-production transitions at 250 to 400 percent more investment than the pilot itself, and most of that delta is integration plumbing nobody scoped.

Data preparation also surfaces here. If your CRM has been used by six different sales teams over four years with no naming convention, that is not a bug to fix later. That is the project, in a hat.

Large: organization-wide, multi-model

At this scale you are not buying a tool. You are buying a small operating shift. Custom models, governance frameworks, an internal team that has to be trained or hired. Procurement, security review, and compliance pull the timeline out further than the build itself often does. Companies in regulated industries — finance, healthcare — should add a quarter to whatever the technical estimate says, just for the audit and review cycles.

Where the budget actually goes

One thing that surprises first-time AI buyers: the model itself is rarely the most expensive line item. The split, drawn from projects I have watched ship, looks roughly like this:

  • Data preparation and quality work — often 30 to 50 percent of the total. The least glamorous, the most decisive.
  • Integration and the invisible plumbing — 20 to 30 percent. Connecting the AI to the systems where the work already happens.
  • Model selection and prompt engineering — 15 to 25 percent. The part most people imagine when they picture "AI work."
  • Change management, training, documentation — the rest. Underbudget this and the project ships but does not get used.

Notice the model line is in the middle, not at the top. A team that pours 60 percent of the budget into model fine-tuning while the data underneath is still a mess will end up with a very expensive way to be wrong.
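To make that arithmetic concrete, here is a rough back-of-envelope sketch in TypeScript. The percentage bands are the ones from the list above; the example total, the band assumed for "the rest," and the function itself are illustrative assumptions, not a real estimating tool.

```typescript
// Back-of-envelope budget split. Bands mirror the list above; the change-management
// band is an assumption ("the rest"), and the example total is made up.
type Band = { label: string; low: number; high: number }; // fractions of the total

const bands: Band[] = [
  { label: "Data preparation & quality",       low: 0.30, high: 0.50 },
  { label: "Integration & invisible plumbing", low: 0.20, high: 0.30 },
  { label: "Model selection & prompt work",    low: 0.15, high: 0.25 },
  { label: "Change mgmt, training, docs",      low: 0.10, high: 0.20 }, // assumed "the rest"
];

function splitBudget(totalUsd: number): void {
  for (const b of bands) {
    const lo = Math.round(totalUsd * b.low).toLocaleString();
    const hi = Math.round(totalUsd * b.high).toLocaleString();
    console.log(`${b.label}: ~$${lo} to $${hi}`);
  }
}

splitBudget(150_000); // swap in your own total
```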

Timelines, in honest brackets

If you have already chosen a vendor and your data is reasonably clean, here is roughly what to expect. These are bands, not promises.

PoC — 4 to 8 weeks. Define the problem narrowly, train on a real subset of your data, ship something you can demo internally. The goal is not a polished product; it is a yes-or-no answer to "does this approach actually work for our specific case."

Pilot to MVP — 8 to 16 weeks after the PoC validates. Hardening, integration, starting to involve real users. This is where projects that looked great in demo get ugly. That is expected.

Production — another 6 to 12 weeks of monitoring, documentation, and the boring infrastructure work that turns a working prototype into a system someone other than the original developer can run.

Adoption rollout — 4 to 8 weeks if you are disciplined about training. Often longer if the people whose work is changing were not involved early.

Add it up and a first real production AI system at a mid-sized company is a 6-to-12-month commitment, not a quarter. Anyone telling you otherwise is either selling a heavily packaged off-the-shelf product (which is fine, just be clear about what you are buying) or has not done one before.
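As a sanity check on that total, the bands above can simply be summed. A tiny illustrative sketch; the week counts are the ones quoted, nothing else here is real:

```typescript
// Sum the phase bands quoted above into an overall range. Illustrative arithmetic only.
const phases: Array<[name: string, lowWeeks: number, highWeeks: number]> = [
  ["PoC", 4, 8],
  ["Pilot to MVP", 8, 16],
  ["Production", 6, 12],
  ["Adoption rollout", 4, 8],
];

const minWeeks = phases.reduce((sum, [, lo]) => sum + lo, 0);  // 22
const maxWeeks = phases.reduce((sum, [, , hi]) => sum + hi, 0); // 44

console.log(
  `${minWeeks} to ${maxWeeks} weeks, roughly ${Math.round(minWeeks / 4.3)} to ${Math.round(maxWeeks / 4.3)} months`
);
// ~5 to 10 months of build time, before procurement, security review, or re-scoping slack.
```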

What to budget for that nobody mentions

A short list, because these surprise first-timers more than they should.

Annual maintenance lands somewhere between 15 and 30 percent of the original build cost. A model is not a finished asset; it drifts. Data drifts. The world the model was trained on stops looking like the world it is running in. Budget for someone — internal or vendor — to watch that and intervene.
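What "watch that and intervene" can look like in practice: a minimal sketch that compares a recent output-acceptance rate against the launch baseline and flags a drop. The metric, the thresholds, and the alert channel are all placeholders; the point is that the check is scheduled and cheap, not that this is the right metric for your system.

```typescript
// Minimal drift check: compare a recent quality metric against the launch baseline.
// Baseline, tolerance, and the alert hook are placeholders for whatever you actually track.
const BASELINE_ACCEPTANCE = 0.92; // share of AI outputs accepted by humans at launch
const DRIFT_TOLERANCE = 0.05;     // alert once acceptance slips more than 5 points

function checkDrift(recentAcceptanceRate: number): void {
  const drop = BASELINE_ACCEPTANCE - recentAcceptanceRate;
  if (drop > DRIFT_TOLERANCE) {
    // In practice: post to Slack via an n8n webhook, open a ticket, page a human.
    console.warn(`Drift alert: acceptance is ${(drop * 100).toFixed(1)} points below baseline`);
  }
}

checkDrift(0.84); // example reading: an 8-point drop, worth a human look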

API usage costs scale with adoption. The success scenario, where everyone in the company starts using the tool, also looks like a 5x bill in month four. Set caps and alerts before launch, not after.
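A rough sketch of what "caps and alerts before launch" can mean in code: a monthly spend guard that warns at 80 percent of a cap and pauses non-critical calls at 100 percent. The cap value, where running spend is stored, and the notification path are assumptions to adapt, not a prescribed setup.

```typescript
// Monthly spend guard: warn at 80% of the cap, stop non-critical calls at 100%.
// The cap value and where running spend lives are placeholders.
const MONTHLY_CAP_USD = 2_000;
let spentThisMonthUsd = 0; // in production, read and persist this in your usage store

function recordSpend(costUsd: number): void {
  spentThisMonthUsd += costUsd;
  if (spentThisMonthUsd >= MONTHLY_CAP_USD) {
    throw new Error("Monthly AI budget cap reached; pausing non-critical calls");
  }
  if (spentThisMonthUsd >= MONTHLY_CAP_USD * 0.8) {
    // e.g. notify ops/finance through an n8n webhook before the bill becomes a surprise
    console.warn(`AI spend at ${Math.round((spentThisMonthUsd / MONTHLY_CAP_USD) * 100)}% of the monthly cap`);
  }
}

recordSpend(1_700); // would trigger the 80% warning
```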

Internal time is real. The estimates above are vendor cost. The hours your own team will spend reviewing data, sitting in workshops, validating outputs — that is another whole budget. In practice it is the line item most likely to be missing from the original spreadsheet.

The pattern under all of this

The companies that hit their budgets are not the ones that picked the cheapest vendor or moved the fastest. They are the ones who spent the first two weeks being honest about which scale they were actually operating at. Trying to run a small-scale workflow project with a large-scale governance committee is expensive theater. Trying to do an organization-wide rollout under a small-scale budget is the same mistake in the other direction.

If you have been following this series from the start, this connects directly to the four-stage roadmap covered in the previous post on the AI adoption roadmap from PoC to production. Each stage in that roadmap has its own budget signature. Skipping straight from PoC to "let us scale this company-wide" is the most common way I see these numbers blow up.

Next time, we move past the spending question and into the harder one: how do you know if any of this actually worked? ROI in AI projects has its own conventions, its own honest measures, and a lot of vanity metrics that look good in a board deck but tell you nothing useful. That is the focus of the next piece in this series.

Jake Hwang
Founder · 5years+ · EST. 2022

Founder of 5years+. Helping Korean and Japanese companies escape the repetitive grind and focus on growth — through AI agents, workflow automation, and product engineering. 52+ projects shipped on a stack centered around Claude API, n8n, and Next.js.
