From Philip Yeo's 'Intermediate Business Machine' to Generative AI: What Organisations Get Wrong About New Technology
In the late 1970s, Singapore’s government tightly controlled computing.
Computers were expensive, scarce, and centrally managed. The Ministry of Finance held a monopoly over government computing. Other ministries weren’t supposed to run their own environments, partly to control costs and partly because the systems needed specialist operators.
From a policy perspective, fair enough. Mainframes were strategic infrastructure, not everyday tools.
But technology was already moving.
Philip Yeo, then a senior official at Singapore’s Ministry of Defence (MINDEF), had a problem. Defence logistics required analysing supply chains, inventory levels, and operational planning scenarios. The work was slow and manual, and the centralised computing model made it hard to experiment with data locally.
Yeo decided a computer inside the ministry would fix this. The rules didn’t allow it. So he submitted procurement paperwork for an “intermediate business machine” (a deliberate play on IBM, International Business Machines). By avoiding the word “computer”, the purchase slipped past the restrictions.
Once the system arrived, it worked. Logistics planning improved. Modelling became easier. Staff could explore scenarios in hours instead of days.
People usually tell this story as an amusing anecdote about bureaucratic creativity. I think that reading misses the point.
Yeo was a senior leader, so his workaround wasn’t treated as a compliance failure. It was treated as a proof of concept. What he actually demonstrated was that the governance model for computing no longer matched the technology. Computing wasn’t something that belonged exclusively in centralised facilities any more. It was becoming a tool that could improve how individual teams worked.
Within a few years, Singapore’s government started expanding computing across the public sector. In 1980 they formed the Committee on National Computerisation. In 1981 they established the National Computer Board. Yeo became its founding chairman.
The result was the Civil Service Computerisation Programme, which gradually rolled computers into ministries across government. Administrative processes were automated, data analysis improved, and operations became more efficient.
That transformation didn’t happen because someone smuggled a computer into MINDEF. It happened because the experiment revealed that existing policies were built for an earlier generation of technology.
The same pattern is appearing with AI
Many organisations are experiencing a similar mismatch right now.
Generative AI tools can already draft documents, summarise information, analyse material, generate code, and perform research. The productivity benefits are often immediate and measurable.
But productivity is actually the less interesting part. What’s more compelling is what happens to quality and consistency. An employee with access to a well-configured AI assistant can produce work at a standard they couldn’t reliably hit on their own, whether that’s writing clearer documentation, catching edge cases in analysis, or structuring a brief in a way that follows organisational conventions. Scale that across a team or a department and the floor on output quality across the whole organisation goes up.
Organisations that treat AI purely as a cost optimisation exercise tend to miss this. The more productive question is whether existing staff can do better work, more consistently. In a growing number of cases, the answer is yes.
The risk of doing nothing is real too. Competitors and peers who deploy AI effectively will pull ahead on both quality and speed, and the longer an organisation waits, the harder that gap becomes to close.
Yet inside many organisations there’s no AI strategy, no governance model, and no internal platform for safe experimentation.
This puts governance functions (legal, compliance, risk) in a difficult position. They’re asked to evaluate potential risks without clear organisational guidance about acceptable use. Without that guidance, the safest response is to slow adoption or block it.
That’s understandable. AI does introduce real concerns around data confidentiality, intellectual property, regulatory obligations, and reliability of generated outputs.
But prohibition rarely eliminates experimentation. It pushes it out of sight.
When employees believe a tool provides clear value but the organisation offers no sanctioned way to use it, informal adoption (“Shadow AI”) is inevitable. The irony is that this creates exactly the risk environment governance teams were trying to avoid.
Governance alone isn’t enough
Defining policy and acceptable use guidelines is a good first step, but it doesn’t solve the operational problem by itself.
If employees are told AI can only be used under strict conditions, but the organisation doesn’t provide approved tools that meet those conditions, staff will use public services instead. A governance framework without a supported technical implementation just pushes usage outside the organisation’s visibility.
There’s also an interesting inversion here. In the 1980s, the solution to the technology bottleneck was decentralising computing, getting it out of the hands of a single ministry so departments could use it directly. Today, because AI is instantly accessible from any web browser, the solution is the opposite: centralising the governance layer so employees can safely access distributed intelligence.
The goal is a controlled pathway for using AI responsibly, not a barrier to it.
Building a controlled AI access layer
The solution many organisations are landing on is a centralised AI access layer: a controlled intermediary between employees and public AI services.
In this model, employees access approved AI capabilities through internally managed platforms integrated with existing security and identity systems. Instead of going directly to ChatGPT or Claude from their browser, requests pass through a controlled platform that applies policy, inspection, and logging.
This lets organisations combine productivity gains with the governance mechanisms they already use for other enterprise systems.
```mermaid
graph LR
A[Employees] --> B[Identity Provider / SSO]
B --> C[AI Access Layer]
C --> D["Data Inspection<br />e.g. Microsoft Purview"]
D --> E["Model Abstraction /<br />AI Gateway"]
E --> F["AI Provider<br />e.g. OpenAI"]
E --> G["AI Provider<br />e.g. Anthropic"]
E --> H["AI Provider<br />e.g. Google"]
C --> I[Audit & Logging]
style A fill:#f8f9fa,stroke:#6c757d,color:#212529
style B fill:#d1ecf1,stroke:#17a2b8,color:#212529
style C fill:#fff3cd,stroke:#ffc107,color:#212529
style D fill:#d4edda,stroke:#28a745,color:#212529
style E fill:#e2d9f3,stroke:#6f42c1,color:#212529
style F fill:#f8d7da,stroke:#dc3545,color:#212529
style G fill:#f8d7da,stroke:#dc3545,color:#212529
style H fill:#f8d7da,stroke:#dc3545,color:#212529
style I fill:#fde8d8,stroke:#fd7e14,color:#212529
```
A well-designed AI access layer typically needs a few things.
First, identity and access through SSO. AI services should be integrated with the organisation’s identity provider so usage is tied to authenticated users and governed by existing role-based access models.
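In practice this means the access layer checks the claims the identity provider has already verified before forwarding any request. A minimal sketch, assuming the SSO flow has completed and the gateway receives trusted role claims — the role names and capability labels here are illustrative, not a real schema:

```python
# Minimal role-based access check at the AI access layer. Assumes the
# identity provider (SSO) has already authenticated the user and the
# gateway receives verified claims, e.g. from a validated OIDC ID token.
# Roles and capabilities below are hypothetical examples.

ROLE_CAPABILITIES = {
    "analyst": {"chat", "summarise"},
    "engineer": {"chat", "summarise", "code-generation"},
    "contractor": {"chat"},
}

def is_allowed(claims: dict, capability: str) -> bool:
    """Return True if any of the user's roles grants the capability."""
    roles = claims.get("roles", [])
    return any(capability in ROLE_CAPABILITIES.get(r, set()) for r in roles)

# An engineer may use code generation; a contractor may not.
print(is_allowed({"sub": "alice", "roles": ["engineer"]}, "code-generation"))   # True
print(is_allowed({"sub": "bob", "roles": ["contractor"]}, "code-generation"))   # False
```

Because the check keys off existing roles, access to AI capabilities follows the same RBAC model the organisation already maintains, rather than a parallel permission system.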
Second, data inspection and policy enforcement. Platforms like Microsoft Purview can inspect prompts and responses for sensitive information types. Existing data classification labels and information protection policies can prevent confidential or regulated data from reaching external models.
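The inspection step sits between the user and the model: prompts are scanned before they leave the organisation. The sketch below is an illustrative stand-in only — a real deployment would rely on managed sensitive information types from a platform like Purview, not the two simplified regex patterns shown here:

```python
import re

# Illustrative stand-in for prompt inspection. A production platform would
# classify content against managed sensitive information types; these two
# patterns are deliberately simplified examples.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def inspect_prompt(prompt: str) -> list[str]:
    """Return the labels of sensitive information types found in the prompt."""
    return [label for label, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def enforce_policy(prompt: str) -> str:
    """Block the request before it reaches an external model."""
    findings = inspect_prompt(prompt)
    if findings:
        raise PermissionError(f"Prompt blocked: contains {', '.join(findings)}")
    return prompt  # safe to forward to the provider
```

The important design point is the placement, not the patterns: classification happens inside the organisation's boundary, so confidential data is stopped before any external API call is made.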
Third, a model abstraction layer. This sits between your applications and AI providers, routing requests through a single point where you can enforce policy, monitor usage, and apply rate limiting. It also prevents lock-in to any single AI vendor. Commercial options include Portkey, Helicone, and Kong AI Gateway, among others. Open-source alternatives like LiteLLM and Envoy AI Gateway suit organisations with the engineering capacity to run their own infrastructure. If you’re already invested in a major cloud provider, AWS Bedrock, Azure AI, and Google Vertex AI all offer managed gateway capabilities with native integration into their respective ecosystems. The right choice depends on your existing stack and how much you want to manage yourself.
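The core idea of the abstraction layer can be shown in a few lines: applications call one interface, and the gateway decides which provider handles the request while enforcing limits. This is a sketch of the pattern, not any particular product's API — the model names and handlers are stand-ins:

```python
import time
from collections import deque

# Sketch of a model-abstraction gateway: one entry point that routes
# requests to registered providers and applies a simple sliding-window
# rate limit. Model names and handlers are illustrative stand-ins.
class AIGateway:
    def __init__(self, max_requests: int = 5, window_seconds: float = 60.0):
        self.routes = {}                # model name -> provider handler
        self.max_requests = max_requests
        self.window = window_seconds
        self.timestamps = deque()       # times of recent requests

    def register(self, model: str, handler):
        self.routes[model] = handler

    def complete(self, model: str, prompt: str) -> str:
        now = time.monotonic()
        # Evict requests that have aged out of the rate-limit window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_requests:
            raise RuntimeError("Rate limit exceeded")
        self.timestamps.append(now)
        if model not in self.routes:
            raise KeyError(f"No provider registered for model {model!r}")
        return self.routes[model](prompt)

# Applications only ever talk to the gateway, so providers can be swapped
# without changing application code.
gateway = AIGateway(max_requests=2)
gateway.register("gpt-stub", lambda p: f"openai-stub:{p}")
gateway.register("claude-stub", lambda p: f"anthropic-stub:{p}")
print(gateway.complete("gpt-stub", "hello"))   # openai-stub:hello
```

Because every request funnels through `complete()`, this single point is also where policy checks, usage monitoring, and audit hooks naturally attach.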
Fourth, audit and logging. A managed platform provides visibility into how AI tools are being used, lets you investigate security incidents, and helps you understand adoption trends.
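What an audit record might contain is worth making concrete. The sketch below shows one possible structured entry; the field names are assumptions rather than a standard schema, and hashing the prompt instead of storing it verbatim is one illustrative design choice for balancing auditability against data retention risk:

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch of a structured audit record for each AI request. Field names are
# hypothetical; storing a hash of the prompt (rather than raw text) is one
# possible way to support incident investigation without retaining content.
def audit_record(user: str, model: str, prompt: str, allowed: bool) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "allowed": allowed,
    }
    return json.dumps(record)

print(audit_record("alice@example.com", "gpt-stub", "summarise Q3 report", True))
```

Emitting one such record per request gives security teams a queryable trail for incident response, and aggregating them over time shows which teams and capabilities are actually being adopted.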
Avoiding the wild west
Without this kind of architecture, organisations tend to drift into what you might call the wild west model of AI adoption.
Employees create their own subscriptions. Different teams experiment with different tools. Sensitive information may or may not be entered into prompts. Nobody has a consistent view of what’s happening.
From a governance perspective, this is a nightmare.
Providing a centralised, approved AI platform changes the dynamic. Staff don’t need to bypass policy to get access to useful tools. They get a sanctioned environment where experimentation is encouraged but properly controlled.
A small experiment that reshaped a government
The computer Philip Yeo brought into MINDEF didn’t digitise Singapore’s civil service by itself.
What it did was show that computing could improve how government work was done. That demonstration justified a broader programme that eventually put computers across ministries and changed how the public sector operated.
Organisations today are at a similar point with generative AI. It probably won’t replace existing systems overnight. But like those early departmental computers, it can change how everyday work gets done.
Writing policies is the easy part. The organisations that benefit most will build the governance and technical infrastructure that lets employees actually use AI safely and productively.
Singapore’s experience shows what’s possible when experimentation gets institutional backing. One machine proved the point; a coordinated programme did the rest.
That’s the real lesson. The transformation came from building the infrastructure that let people use the technology, not from the policy itself. The same applies here.