Enterprise AI is finally moving out of the lab. After two years of pilots, proofs of concept and internal demos, 2025 marked a visible shift toward production use, operational metrics and hard conversations about return on investment. What changed was not the models. It was the way organisations chose to deploy them.
At Six & Flow, this is the inflection point we see most clients wrestling with. The question is no longer “Can AI help?”. It is “Where does it sit in the operating model, who owns it, and how do we prove it earns its keep?”
The evidence is starting to catch up with that question.
Large surveys published through 2025 show that AI adoption is broad but shallow. According to McKinsey & Company, close to nine in ten organisations now use AI in at least one business function. The headline sounds impressive until you look closer. Only around a third have moved beyond experimentation into scaled, production deployment.
That gap matters. Organisations with multiple AI systems running inside live workflows behave very differently from those with isolated tools sitting on the side. Deloitte’s 2026 State of AI in the Enterprise report highlights this shift. The share of companies with more than 40 percent of their AI initiatives in production is rising rapidly, and employee access to AI tools grew by roughly half during 2025 alone.
The signal is clear. AI is becoming part of how work gets done, not a specialist capability.
For the first time, we have credible, independent evidence of productivity impact in live environments. Academic research into software engineering teams found that AI assistance reduced pull request review time by roughly a third and increased code throughput by close to 30 percent. These were not surveys or self-reported estimates. They were measured outcomes inside production teams.
Similar patterns are emerging elsewhere. Legal review, customer support triage, internal knowledge retrieval and RevOps analytics are showing consistent reductions in manual effort when AI is embedded into the flow of work rather than bolted on as a separate tool. In practical terms, this often translates into 30 to 80 percent time savings on specific tasks, depending on data quality and process maturity.
This is an important distinction. AI does not make people “more productive” in the abstract. It removes friction from defined steps in a process. Organisations that cannot describe those steps struggle to see consistent gains.
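To make that concrete, here is a minimal sketch of what “describing those steps” can look like in practice. The workflow, step names, owners and timings are hypothetical illustrations, not client data; the point is simply that each step has a named owner, a baseline and a measurable unit of work before any AI is attached to it.

```python
# Hypothetical example: describing a support-triage workflow as discrete,
# measurable steps before deciding where AI should sit.
from dataclasses import dataclass

@dataclass
class WorkflowStep:
    name: str                 # what happens at this step
    owner: str                # who is accountable for the outcome
    baseline_minutes: float   # median manual effort per item today
    ai_assisted: bool         # is AI embedded in this step yet?

support_triage = [
    WorkflowStep("Classify inbound ticket", "Support ops", 4.0, True),
    WorkflowStep("Retrieve account context", "Support ops", 6.0, True),
    WorkflowStep("Draft first response", "Support agent", 9.0, False),
    WorkflowStep("Approve and send", "Support agent", 2.0, False),
]

# If this list cannot be written down, it is hard to say where AI removed friction.
for step in support_triage:
    status = "AI-assisted" if step.ai_assisted else "manual"
    print(f"{step.name:<28} {step.owner:<15} {step.baseline_minutes:>4.1f} min  {status}")
```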
The most striking change in recent research is the widening gap between leaders and laggards. High-performing organisations that have operationalised AI across multiple functions are reporting returns close to three times their AI investment. Others are barely breaking even.
This is not because the technology behaves differently. It is because the operating conditions do. Frontier firms redesign workflows, invest in data foundations and assign clear accountability for outcomes. Laggards accumulate tools, pilots and subscriptions without changing how decisions are made.
CIO and CFO surveys heading into 2026 show optimism about future returns, particularly around agentic AI systems that can act across tools and datasets. But only a minority of organisations are actually running agent-based systems in production today. Expect that gap between expectation and reality to be a major source of board-level tension over the next 18 months.
Consultancies are aligned on one point: AI value is not unlocked by deployment alone. It requires workflow redesign, role clarity and training at a scale most organisations underestimate.
In practice, this means redefining how work moves from intake to decision to action. It means retraining teams not just on tools, but on judgement, escalation and exception handling. In several large-scale programmes reviewed in 2025, productivity gains of around 10 to 15 percent only materialised after formal job redesign and structured enablement, often requiring dozens of hours of training per employee.
This is where many programmes stall. The technology is ready, but the organisation is not.
Regulation is no longer a future concern. The EU AI Act is already influencing how global organisations design, document and govern AI systems. Transparency requirements, risk classification and human oversight obligations are adding real overhead to deployment, particularly for customer-facing and decision-influencing systems.
One unintended consequence is positive. Boards are asking better questions. Risk disclosures related to AI are rising sharply in corporate filings, forcing leadership teams to articulate where AI is used, what data it touches and how outcomes are monitored. For organisations that get this right early, governance becomes an enabler rather than a brake.
For revenue, customer success and operations leaders, the lesson is straightforward. AI works when it is tied to decisions, not outputs. It delivers returns when it is embedded into systems of record, not layered on top of them. And it compounds when learning loops are built into the operating rhythm.
At Six & Flow, this is why we push clients to move beyond pilots quickly. Not recklessly, but deliberately. Define the decision you are improving. Fix the data that feeds it. Embed AI where work already happens. Measure the delta, not the novelty.
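As a sketch of what “measure the delta” can mean in practice, the snippet below compares baseline and post-deployment cycle times for a single workflow step. The figures are illustrative assumptions, not measured results; the useful part is that the comparison is made against a recorded baseline rather than against a feeling of being faster.

```python
# Hypothetical example: measuring the delta for one embedded AI step.
# Values are minutes per item for the same step, same definition, illustrative only.
from statistics import median

baseline_minutes = [14, 18, 11, 16, 20, 13, 17]   # measured before AI was embedded
assisted_minutes = [9, 7, 11, 8, 10, 6, 9]        # measured after deployment

baseline = median(baseline_minutes)
assisted = median(assisted_minutes)
delta_pct = (baseline - assisted) / baseline * 100

print(f"Median baseline: {baseline} min")
print(f"Median with AI embedded: {assisted} min")
print(f"Delta: {delta_pct:.0f}% reduction on this step")
```

The same before-and-after comparison works for legal review, ticket triage or pipeline hygiene; what changes is the unit of work, not the discipline of recording a baseline first.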
Enterprise AI is no longer about possibility. It is about execution. The organisations that treat it as an operating discipline will widen the gap. The rest will still be talking about pilots this time next year.