
Institutional AI Knowledge Grows in the Margins, Not the Org Chart

Most companies wait for a dedicated AI team. The ones building durable knowledge aren't waiting — they're doing it sideways.

A contracts lawyer at a mid-sized logistics firm spent three weeks last year quietly testing whether Claude could accelerate first-pass review of supplier agreements. She didn’t have a mandate. She didn’t file a project proposal. She just started doing it, kept notes in a shared document, and eventually walked her practice group through what she’d learned over a Tuesday lunch. Six months later, that document is the closest thing her legal department has to an AI policy.

This is not a heroic story about a visionary individual. It’s a structural observation: in most organisations without a dedicated AI function, institutional knowledge about AI is being built exactly this way — informally, locally, and surprisingly durably — or it isn’t being built at all.

The dedicated team fallacy

There’s a comfortable assumption that AI knowledge will arrive fully formed once the right team is hired. A Centre of Excellence. An AI lead. Someone whose job title contains the word “transformation.” Until then, the thinking goes, it’s too early to act and too risky to experiment.

This waiting posture has a real cost. It means every team starts from zero when they do eventually engage with AI tools. It means the organisation accumulates no institutional memory of what worked, what failed, and why. And it means that when the dedicated team does arrive, they spend their first six months rediscovering lessons that were already learned in the dark and never written down.

The organisations building durable AI capability right now are mostly doing it without a dedicated team, because they had to.

What “institutional knowledge” actually means here

It’s worth being precise. Institutional AI knowledge isn’t a library of prompts or a vendor comparison spreadsheet. It’s the accumulated, shared understanding of how AI tools behave in your specific context — your data, your workflows, your professional obligations, your failure modes.

An HR team that has tested three different approaches to drafting job descriptions knows something a generic guide cannot tell them: which outputs required the most revision, which prompts produced language that felt tone-deaf, where legal flagged a concern. A finance analyst who has spent two months using an AI assistant for variance analysis has learned something about its tendency to confabulate figures when source data is ambiguous — something that would take a new hire weeks to discover independently.

This knowledge is fragile. It lives in people’s heads, disappears when they leave, and rarely survives a team reorganisation. Making it institutional — shared, documented, retrievable — is the actual problem worth solving.

The mechanics of building it without dedicated resources

The approach that tends to work is deliberately unglamorous. It requires one person in each functional team who treats their AI experimentation as something worth writing down, not just something worth doing.

In operations, this might look like a running log attached to a process document: “Tried using AI to draft supplier escalation emails — works well for tone, consistently wrong on SLA specifics, needs human check before send.” In marketing, it might be a section in the campaign retrospective: “AI-generated headline variants required less editing than body copy. Audience segmentation prompts need more context than we initially gave them.” Small, specific, honest.

The critical ingredient is not sophistication — it’s consistency. A shared document that gets updated after each significant AI interaction, even briefly, compounds into something genuinely useful within a few months. The format matters less than the habit.

What tends to fail is the centralised knowledge base nobody maintains and the all-hands demo nobody follows up on. One-time efforts create the impression of progress without the substance.

Where professional context changes everything

Different functions encounter different failure modes, and most AI guidance does not adequately acknowledge this.

Legal and compliance teams need to know that AI tools can produce confident, plausible, wrong legal citations — and that the risk profile of that failure is categorically different from a marketing team getting a mediocre tagline. The knowledge that matters here is about verification workflows, not just prompt construction.

Finance teams working with AI on numerical analysis need hard-won understanding of where hallucination risk is highest — typically when the model is asked to synthesise across data sources it cannot directly access. The lesson is usually learned once, painfully, and then it needs to become team knowledge immediately.

HR faces a different problem: AI tools can inadvertently encode or amplify bias in screening and drafting contexts in ways that are legally and ethically significant and not always immediately visible. The institutional knowledge worth building is less about efficiency gains and more about where human review is non-negotiable and why.

None of this knowledge transfers cleanly from one function to another. Cross-functional AI working groups can be useful for sharing general experience, but they’re not a substitute for function-specific learning.

Making knowledge stick without a mandate

The honest answer is that knowledge sticks when someone with credibility in the team treats it as worth their attention. Not a junior employee running a side project, but a senior manager or partner who asks, in ordinary meetings, what the team has learned about a tool this week and treats the answer as professionally significant.

This is a culture question more than a technology question. Most organisations have not yet made AI learning a legitimate part of professional development — something you’re expected to do, not just permitted to do. Until they do, the knowledge will keep being built in the margins, which is better than nothing but worse than it could be.

Here is the reframe worth sitting with: the organisations that will have the strongest AI capability in three years are probably not the ones that hired the most AI specialists. They’re the ones that made it normal for people who do the actual work to learn systematically, document honestly, and share without waiting for permission. The knowledge was always going to be built by the people closest to the problem. The question is whether anyone thought it was worth keeping.