
What to Think About Before Going All-In on AI

88% of companies use AI. Only 6% see real bottom-line value. The gap is not about the technology. It is about data access, system integration, and organizational readiness.

Strategy · Getting Started

88% of companies now use AI in at least one function. Only 6% extract meaningful bottom-line value. That gap is not a technology problem. It is a readiness problem.

Two questions before you implement anything
The data question

Is your data in good enough shape for an AI to actually use it?

If not: that is where you start, not with the AI.
The integration question

Are you willing to integrate AI into your real work environment and data?

If yes: the possible transformation is large.
If no: fine, but be honest that you will unlock a smaller slice of the value.

The enthusiasm gap

Every business leader we talk to wants to use AI. Most have tried. Many are disappointed.

The pattern is consistent. A company buys an AI tool, runs a pilot in a controlled environment, gets impressive demo results, and then nothing changes. IDC found that 88% of AI proofs of concept never make it to production. Only 4 out of every 33 pilots graduate to something real.

Why? The pilots work in isolation. They run on sample data, disconnected from the systems where real work happens. The moment you try to move from demo to production, you hit the actual hard problems: data access, integration, and organizational willingness to change how things work.

Data: AI is only as good as what it can reach

Gartner predicts that through 2026, organizations will abandon 60% of AI projects because of data that is not ready for AI. Not because the AI is bad. Because the data is scattered, inconsistent, or locked away.

This is the most common thing we see. A company wants AI to help with customer support, but the product information lives in one system, the order history in another, the previous support tickets in a third, and the customer context in someone's head. The AI cannot help because it cannot see the full picture.

AI-ready data does not mean perfect data. It means accessible data. AI needs to reach the information it needs to do its job, in a format it can work with, with enough context to make it meaningful. In 2024, only 19% of organizations cited data quality as a top challenge. By 2025, that number more than doubled to 44%. Companies are learning this lesson the hard way.

The practical question is simple: if you asked a new employee to do this task on their first day, could they find everything they needed? If not, AI cannot either.

System access: AI in a sandbox is AI that cannot help

Here is the core argument. If you do not give AI real access to your systems, you severely limit what it can do for you.

An AI tool that can read your emails but not your CRM is working with half the picture. An AI assistant that can draft documents but not access your internal knowledge base produces generic output. An AI agent that can analyze data but not update your systems creates more work, not less, because a human still has to do the actual doing.

McKinsey found that only 21% of organizations using generative AI have redesigned any workflows around it. Nearly 80% are layering AI on top of existing processes without changing how work flows. That is like hiring a highly capable employee and then only letting them answer the phone.

The companies seeing real returns give AI access to the systems where work happens. Email, CRM, project management, calendars, databases. Not unrestricted access. Thoughtful, governed access with clear boundaries. But real access, not a sandbox.

72% of enterprises rely on custom-built integrations to connect AI to their systems. It is not plug-and-play for most organizations. But the investment in integration is what separates a demo from a working system.
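"Thoughtful, governed access with clear boundaries" can be made concrete with a small allowlist layer between the AI and your systems. The sketch below is purely illustrative, not a real product API: every name in it (`AccessPolicy`, `run_tool`, the "crm" tool) is a hypothetical placeholder for whatever systems and integration framework your organization actually uses.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AccessPolicy:
    # Hypothetical policy: tool name -> set of allowed actions,
    # e.g. the agent may read the CRM but not write to it.
    allowed: dict[str, set[str]] = field(default_factory=dict)

    def permits(self, tool: str, action: str) -> bool:
        return action in self.allowed.get(tool, set())

def run_tool(policy: AccessPolicy, tool: str, action: str,
             fn: Callable[[], str]) -> str:
    """Execute a tool call only if the policy allows it."""
    if not policy.permits(tool, action):
        # Real access, but bounded: out-of-scope calls are refused.
        return f"DENIED: {tool}.{action} is outside the agent's scope"
    return fn()

# The agent can look up customers, but cannot modify records.
policy = AccessPolicy(allowed={"crm": {"read"}})
print(run_tool(policy, "crm", "read", lambda: "customer record #123"))
print(run_tool(policy, "crm", "write", lambda: "updated record"))
```

The point of the sketch is the shape, not the code: access is granted per system and per action, denials are explicit, and widening the AI's reach is a deliberate policy change rather than an accident of integration.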

The cultural piece: letting AI into real workflows

This is the part that surprises people. The hardest part of AI adoption is often not technical. It is organizational.

BCG found that 70% of the value from AI comes from the people component, not the technology. Only 25% of frontline employees receive sufficient guidance from leadership on how to use AI. When leadership approves AI budgets but does not actively champion adoption, employees treat AI as optional.

The pattern looks like this: leadership announces an AI initiative. A tool gets bought. A few people try it. Most keep doing things the old way. Nobody changes the process, nobody changes the expectations, nobody shows the team what good looks like. Six months later, the license gets canceled.

The organizations that succeed invest in their people before they invest in technology. Training, workflow redesign, clear expectations about what AI handles and what humans handle. BCG's data shows adoption jumps from 25% to 76% when employers provide actual AI training. The technology is the easy part.

What 'ready' actually looks like

AI readiness is not a checklist you complete once. It is a set of conditions that make AI useful rather than decorative.

You have made the integration decision. Has your organization actually decided to lean into AI, to integrate it into your workflows and data, and to let it in the door? Or have you decided not to? Both are valid answers, but the answer matters. This decision sits on top of things you have probably already thought through: IT security, privacy, data-sharing policies, legal and regulatory requirements. If you have not decided to integrate AI into real systems, you can still use it, but only for a fraction of its potential value. The transformation you can expect scales directly with how far you are willing to let it in.

Your data is accessible. Not perfect, not centralized in one warehouse. Accessible. The AI can reach what it needs through integrations, APIs, or direct access.

Your team is open to change. Not just tolerant. Actively willing to try new workflows, give feedback, and iterate. The first version of any AI integration will be rough. The value comes from tuning it.

You know what problem you are solving. Not 'we need AI' but 'our support team spends 3 hours a day on repetitive email, and that is work AI could handle.' Specific problems lead to specific solutions. Vague ambitions lead to abandoned pilots.

Leadership is involved, not just approving. The RAND Corporation found that the top cause of AI project failure is miscommunication about the project's purpose. When leadership sets the direction and stays engaged, projects succeed at dramatically higher rates.

You have a governance plan. Who decides what AI can access? How do you review its output? What happens when it gets something wrong? These questions are easier to answer before you start than after something goes sideways.

The starting point

None of this means you need to have everything figured out before you start. It means you should be honest about where you are and what needs to change.

The companies that get the most from AI did two things first. They got honest about their data. And they decided, as an organization, to let AI into real work. Everything else follows from those two choices.

Want to talk about this?

If you are weighing whether your organization is ready for AI, we can help you figure out what to do first and what to leave for later.

Get in touch