What Does "Going AI" Actually Mean?
The Five-Tier AI Decision Taxonomy Every CXO Needs Before the Next Board Meeting
Machine Capital is a practitioner's letter on AI economics — written for the CIO, CFO, and CTO making real technology investment decisions, not the consultant writing the strategy document. Every issue comes with a financial model, a decision framework, or both. No vendor affiliations. No hype.
Your board wants an AI strategy. Your CEO wants to know what you're doing about AI. Three vendors pitched you this week. A competitor just announced an "AI-powered" product.
None of that tells you what to actually do.
The reason most AI strategies fail — or worse, never get implemented — is that organizations start with the wrong question. They ask "how do we do AI?" before they've answered "what kind of AI, for what purpose, at what cost, with what capability we actually have?"
"Going AI" means five completely different things. Each has a different cost structure, risk profile, time-to-value curve, and competitive implication. Conflating them is the most expensive mistake in enterprise technology today.
This brief gives you the taxonomy. Not a vendor framework. Not a maturity model with colored boxes. A financially grounded map of the five things "AI" can mean — and a decision logic for which one belongs in your organization right now.
The Five Tiers of AI Adoption
Work through these in order. Most organizations should be operating across multiple tiers simultaneously — the question is which ones, at what investment level, sequenced how.
Packaged AI — The AI Already In Your Software
This is AI embedded in software you already pay for. Microsoft Dynamics AI, Salesforce Einstein, ServiceNow's predictive workflows, Workday's anomaly detection. You may already be paying for it and not using it.
That last point matters. When every company using Salesforce has access to the same Einstein features, using Einstein well gets you to table stakes — it doesn't create differentiation. You're renting intelligence from your vendor. The moat belongs to them, not you.
Where Tier 1 is genuinely valuable: automating routine work (invoice processing, ticket routing, anomaly alerting), freeing your team to focus on higher-value problems, and buying credibility with your board while you build real AI capability.
The mistake: calling Tier 1 an AI strategy. It's a starting point.
Point AI Tools — Departmental Adoption at Speed
Jasper for marketing copy. Glean for enterprise search. Midjourney for creative. Moveworks for IT helpdesk. Otter for meeting transcription. These are fast, cheap, and immediately useful.
The organizational risk is what nobody talks about: AI sprawl. Eighteen months into a Tier 2 adoption pattern, most organizations discover they have 30–50 active AI subscriptions, most of them departmental, few of them integrated, none of them contributing to a coherent data strategy.
Shadow AI is the new shadow IT. Except the data exposure risks are significantly higher — employees are putting sensitive client information, financial projections, and strategic plans into consumer AI tools with no governance.
The mistake: confusing fast adoption with strategy. Tier 2 requires governance from day one.
Embedded ML — Where Real Competitive Advantage Begins
This is where the conversation changes. Tier 3 means building custom machine learning models into your core products or operations — running on your proprietary data. Recommendation engines. Fraud detection models. Churn prediction. Demand forecasting. Underwriting models. Clinical risk scoring.
PG&E runs Tier 3 ML. Their grid fault prediction models, wildfire risk scoring, and demand forecasting are not purchased capabilities — they're built on decades of proprietary sensor data, weather data, and operational history that no vendor can replicate. That's a genuine moat.
The critical insight most organizations miss: the model is not the asset. The data is the asset, and the operational discipline to keep it clean, labeled, and feeding a retraining pipeline is the capability. A well-trained model on proprietary data compounds in value over time. A poorly maintained one drifts into irrelevance within months.
The MLOps tax is real and consistently underestimated. Budget 3–5x the model development cost for the infrastructure to run it in production — data pipelines, monitoring, drift detection, retraining cycles, and the engineers to maintain all of it.
The mistake: treating Tier 3 as a data science project. It's a product engineering commitment.
AI-Augmented SDLC — The Fastest ROI Nobody Is Maximizing
Tier 4 is not building AI — it's using AI to build and operate better. GitHub Copilot in your development workflow. Azure Document Intelligence extracting structured data from unstructured documents. Automated testing with AI-generated test cases. AIOps for infrastructure anomaly detection. AI-assisted code review.
This is the most underutilized tier in enterprise technology. The ROI is measurable, the implementation risk is manageable, and the talent requirement is lower than Tier 3. A development team using Copilot effectively ships 20–35% faster — that's not a marginal improvement, that's a structural cost advantage that compounds quarterly.
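The arithmetic behind that claim is worth making explicit. A minimal sketch, with illustrative assumptions for team size, loaded cost, and seat price (none of these figures are vendor pricing):

```python
# Illustrative sketch: net value of a 20-35% delivery speedup.
# All inputs are assumptions for illustration, not vendor pricing.

def annual_velocity_value(engineers, loaded_cost, speedup, license_per_seat):
    """Value of extra throughput, net of license cost, treating loaded
    engineering cost as a rough proxy for the value of output."""
    extra_output = engineers * loaded_cost * speedup
    licenses = engineers * license_per_seat
    return extra_output - licenses

# Assumed: 50 engineers at $250K loaded cost, 25% faster, $500/seat/year
print(annual_velocity_value(50, 250_000, 0.25, 500))  # → 3100000.0
```

The point of the sketch: the license line is noise next to the throughput effect, which is why the binding constraint is adoption quality, not procurement.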
The caveat: productivity gains require culture change, not just tool adoption. A team that uses Copilot to generate boilerplate without reviewing it ships bugs faster. The tool amplifies both good and bad engineering practices. The investment is as much in process redesign as in software licensing.
The mistake: buying the licenses and declaring victory. Tier 4 requires deliberate adoption management.
ML Platform Capability — The Long Game
Tier 5 means building your own AI capability infrastructure — the ability to train, fine-tune, evaluate, and deploy your own models at scale. Internal GPU clusters. Model registries. Feature stores. Private LLM deployments.
The economics are brutal until they aren't. A single H100 server costs $200–300K, and training a serious model requires dozens of them running for weeks. The infrastructure team alone — ML engineers, MLOps, data engineers — costs $2–4M annually at Bay Area loaded rates. You are making a multi-year, multi-million-dollar capital commitment before you see a single inference in production.
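A back-of-envelope version of that commitment, using the ranges above as inputs. The server count, GPUs per server, depreciation term, and power price are illustrative assumptions, not a quote:

```python
# Back-of-envelope annual Tier 5 cost: depreciated hardware + team + power.
# Server count, 8-GPU configuration, and $0.15/kWh are assumed for illustration.

def tier5_annual_cost(servers, server_price, depreciation_years,
                      team_cost, power_kw_per_server, kwh_price):
    capex_per_year = servers * server_price / depreciation_years
    hours_per_year = 24 * 365
    power = servers * power_kw_per_server * hours_per_year * kwh_price
    return capex_per_year + team_cost + power

# Assumed: 24 servers at $250K each, 3-year depreciation, $3M team,
# ~5.6 kW per 8-GPU server (8 x 700W H100s), $0.15/kWh
cost = tier5_annual_cost(24, 250_000, 3, 3_000_000, 5.6, 0.15)
print(f"${cost / 1e6:.1f}M per year")  # → $5.2M per year
```

Even this toy model, before storage, networking, cooling, or model refresh, lands above $5M a year — which is why the data-flywheel test in the next paragraph has to come first.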
The organizations that should be in Tier 5: those with genuinely proprietary data that creates model differentiation no vendor can replicate, and the balance sheet to sustain the investment through the 18–36 month time-to-value curve.
The mistake: starting Tier 5 because it sounds impressive. Start here only if your data flywheel is real.
The Five Tiers at a Glance
- Tier 1 — Packaged AI: intelligence embedded in software you already license; table stakes, not differentiation.
- Tier 2 — Point AI Tools: fast departmental wins; sprawl and shadow-AI risk without governance.
- Tier 3 — Embedded ML: custom models on proprietary data; where real moats begin.
- Tier 4 — AI-Augmented SDLC: AI in how you build and operate; the fastest measurable ROI.
- Tier 5 — ML Platform: in-house training and deployment capability; the long, capital-heavy game.
Decision Logic — Where Should You Be?
The right answer is not "all five tiers." It's the right tiers, sequenced correctly, resourced honestly.
Most mid-market organizations ($500M–$5B revenue, 500–5,000 employees) should be active in Tiers 1, 2, and 4 immediately, building toward Tier 3 selectively in their highest-value use cases, and deferring Tier 5 until they have the data maturity and capital appetite to do it properly.
The Financial Reality Nobody Puts in the Pitch Deck
Vendors sell AI by tier. They don't tell you the total cost of ownership across the stack.
Tier 1 hidden costs: Licensing uplift (AI features cost 20–40% more than base modules), integration work, change management, and the productivity dip during adoption. A Dynamics AI rollout that looks like $200K/year in licensing typically costs $400–600K all-in the first year.
Tier 2 hidden costs: Subscription proliferation, data governance infrastructure, security review overhead, and the cost of the one data breach that occurs when someone puts client data into an unvetted tool. The governance investment to manage Tier 2 safely is often larger than the tool costs themselves.
Tier 3 hidden costs: The MLOps tax (3–5x model development cost), data pipeline engineering, feature store infrastructure, model monitoring, retraining cycles, and the ML engineering talent to maintain all of it. A $500K model development project often requires $1.5–2.5M in ongoing operational investment annually.
Tier 4 hidden costs: License costs are the smallest line item. The real investment is in training, process redesign, code review culture change, and the productivity dip before the curve turns upward. Plan for 3–6 months of adoption investment before productivity gains materialize.
Tier 5 hidden costs: GPU depreciation cycles of 2–3 years (not 5), power infrastructure (H100s draw 700W each), networking and storage at AI scale, model refresh costs as architectures evolve, and the talent premium for ML engineers in a market where demand still significantly exceeds supply.
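One way to internalize the pattern across all five tiers: treat the sticker price as an input and the all-in first-year cost as a multiple of it. The Tier 1 and Tier 3 multipliers below come from the figures cited above; the rest are illustrative assumptions:

```python
# Rough first-year TCO as sticker price times a hidden-cost multiplier.
# Tier 1 and Tier 3 multipliers follow the figures cited in the text;
# Tier 2, 4, and 5 multipliers are illustrative assumptions.

TCO_MULTIPLIER = {
    "Tier 1 (packaged)": 2.5,     # $200K license -> ~$400-600K all-in
    "Tier 2 (point tools)": 2.0,  # governance often exceeds tool spend (assumed)
    "Tier 3 (embedded ML)": 4.0,  # MLOps tax: 3-5x development cost
    "Tier 4 (AI SDLC)": 3.0,      # training + process redesign (assumed)
    "Tier 5 (platform)": 5.0,     # ops, power, refresh cycles (assumed)
}

def first_year_tco(sticker_price, tier):
    """All-in first-year estimate: sticker price times the tier multiplier."""
    return sticker_price * TCO_MULTIPLIER[tier]

print(first_year_tco(200_000, "Tier 1 (packaged)"))    # → 500000.0
print(first_year_tco(500_000, "Tier 3 (embedded ML)")) # → 2000000.0
```

The exact multipliers matter less than the shape: in every tier, the number on the vendor's proposal is the floor, not the budget.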
"The model is the cheapest part of any AI investment. The data infrastructure, the production engineering, and the organizational change that surrounds it are where the real costs live — and where the real value is either created or destroyed."
What To Do Monday Morning
Before your next AI budget conversation, answer these five questions honestly:
- Which tier are we actually in today — not which tier do we aspire to?
- Do we have the proprietary data that would make Tier 3 defensible, or are we building on the same data every competitor can access?
- Have we budgeted for the full stack — including the MLOps tax, the governance infrastructure, and the adoption management — or just the tool licenses?
- What decision are we making in the next 90 days that this taxonomy changes?
- Who in our organization has the mandate and the capability to own the answer — not just the conversation?
The organizations that get AI right in the next three years will not be the ones that moved fastest. They will be the ones that moved deliberately — with a clear-eyed view of which tier they were investing in, why, and what it would actually cost.
Get Issue 002 when it publishes
Buy vs. Build Economics — with an interactive TCO calculator for each tier. Free subscription, no vendor affiliation.
Subscribe to Machine Capital →

About the author. Nandeep Nagarkar is the founder of SVAG Labs LLC, a fractional CIO and AI strategy practice based in Sunnyvale, California. He has 25+ years of technology leadership across healthcare, financial services, retail, and energy, and is the author of Thinking With Machines and The Shared Intelligence. Free decision tools at svaglabs.com/tools.