Self-Organising

The LLMs see.
The ontology thinks.

Neural networks recognise patterns. Symbolic systems enforce rules.
Coherence combines both into a conscious model grounded in organisational truth.
A neuro-symbolic architecture where AI is powerful AND trustworthy.

The Problem with AI Today

Your AI is guessing at your business.

Every AI tool your organisation uses (Copilot, custom LLMs, vendor AI layers) reads your data without understanding what it means.
It doesn't know the difference between "revenue" in your ERP and "revenue" in your CRM.
It doesn't know your "customer onboarding" is actually 11 interdependent microservices.
So it guesses.

RAG (Retrieval-Augmented Generation) gives LLMs raw text chunks. Ambiguity in → hallucination out.
The AI can't distinguish between two definitions of the same word across your organisation.

The Coherence Approach

OAG. Beyond RAG.

Ontology-Augmented Generation injects structured, typed, relationship-aware objects from the living knowledge graph into the LLM.
Not text chunks. Instead, graph objects rooted in ground truth. The AI's output is structurally constrained by the ontology.

| Dimension | RAG | OAG (Coherence) |
|---|---|---|
| Input to LLM | Raw text chunks | Typed graph objects with relationships |
| Hallucination risk | High; ambiguity propagates | Structurally constrained |
| Citations | Source document only | Specific graph node + confidence score |
| Cross-domain queries | Cannot relate concepts | Full multi-hop traversal |
| Governance | None | Per-concept DGF clearance on every node |
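
To make the input-format difference concrete, here is a minimal Python sketch of the two inputs. The GraphObject fields are illustrative assumptions, not Coherence's actual schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names are assumptions, not Coherence's schema.
@dataclass
class GraphObject:
    """A typed, relationship-aware node injected into the LLM context."""
    concept: str                    # human-locked concept name
    definition: str                 # human-written definition
    node_id: str                    # citable graph node
    confidence: float               # link confidence score
    relationships: dict[str, list[str]] = field(default_factory=dict)

# What RAG would inject: an ambiguous free-text chunk.
rag_chunk = "Revenue figures were updated in Q3 across both systems."

# What OAG injects: the same fact, typed and disambiguated.
oag_object = GraphObject(
    concept="Revenue (ERP)",
    definition="Recognised revenue as booked in the general ledger.",
    node_id="concept:revenue-erp",
    confidence=0.91,
    relationships={"distinct_from": ["concept:revenue-crm"]},
)
```

Because the LLM receives a citable node_id and a confidence score with every object, per-answer citations follow directly from the input format.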

The Deterministic Fallback

OAG is not “smarter RAG.” It is a fundamentally different input format. When the LLM receives typed graph objects instead of raw text, three things happen:

Confidence Scoring · Every link between a concept and a technical asset carries a cosine similarity score. ≥0.70 = auto-accept. 0.40–0.70 = human review. <0.40 = discard. The math decides, not the AI.
Deterministic Fallback · If the ontology doesn't contain a concept, the system says "I don't know." No generation beyond the graph boundary. This is the architectural guarantee against hallucination. (Both gates are sketched in code after this list.)
Mathematical Linking · HyDE linking uses cosine similarity. 750K pairwise comparisons in seconds. The bridge between business vocabulary and code is pure math, not prompt engineering.
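
A minimal Python sketch of the first two mechanisms. The thresholds come from the text above; the function names and data shapes are hypothetical.

```python
# Thresholds as stated above; everything else is illustrative.
AUTO_ACCEPT = 0.70
HUMAN_REVIEW = 0.40

def triage_link(similarity: float) -> str:
    """Route a concept-to-asset link by its cosine similarity score."""
    if similarity >= AUTO_ACCEPT:
        return "auto-accept"
    if similarity >= HUMAN_REVIEW:
        return "human-review"
    return "discard"

def answer(concept: str, ontology: dict[str, str]) -> str:
    """Deterministic fallback: never generate beyond the graph boundary."""
    if concept not in ontology:
        return "I don't know."      # no generation outside the ontology
    return ontology[concept]
```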

The Bridge

HyDE Linking. Mathematical, not manual.

Business concepts and technical code inhabit entirely separate vocabularies. "Customer Onboarding" has nothing lexically in common with process_kyc_check(). Standard keyword search fails.
HyDE (Hypothetical Document Embeddings) solves this.

Forward HyDE

Take a business concept → LLM generates hypothetical code that would implement it → compare synthetic code to real code via cosine similarity.
Code-to-code matching.
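
A sketch of the forward pass under stated assumptions: generate_hypothetical_code and embed are stand-ins for whatever LLM and embedding model the pipeline actually uses.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """cos(a, b) = (a . b) / (|a| * |b|)"""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def forward_hyde(concept: str,
                 real_code_vectors: dict[str, np.ndarray],
                 generate_hypothetical_code,
                 embed) -> list[tuple[str, float]]:
    """Concept -> synthetic code -> code-to-code similarity ranking."""
    synthetic = generate_hypothetical_code(concept)  # LLM drafts code that *would* implement it
    query_vec = embed(synthetic)                     # embed the synthetic code
    scores = [(asset, cosine_similarity(query_vec, vec))
              for asset, vec in real_code_vectors.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```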

Reverse HyDE

Take real code → LLM generates hypothetical business descriptions → compare to actual concept definitions.
Both directions run; the results combine.
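
The text does not specify the fusion rule, so this sketch assumes a simple average of the two directional scores.

```python
def combined_hyde_score(forward: float, reverse: float) -> float:
    """Fuse the two directional scores for one concept-asset pair.

    Averaging is an assumption; the actual fusion rule isn't stated.
    """
    return (forward + reverse) / 2

score = combined_hyde_score(0.82, 0.64)  # ~0.73: clears the 0.70 auto-accept gate
```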

~$0.50 per 15 concepts (one-time) · ~$5/month compute cost at 50K chunks · vs. Collibra: $100K+/year

The Trust Layer

Fixed Entity Architecture.

Every concept in Coherence is a Fixed Entity. Human-curated. Human-locked. Immutable until a human says otherwise.
This is the foundational constraint that makes everything else trustworthy.

What "Fixed" Means

A Fixed Entity has a human-written definition, a human-approved scope, and a human-controlled lifecycle. AI cannot create, modify, or delete concepts. AI can only enrich them, and every enrichment carries provenance.
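
One way to express that constraint in code (a sketch, not Coherence's actual model): the entity is frozen at lock time, and AI output can only accumulate as separate enrichment records.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FixedEntity:
    """Human-curated concept: immutable once locked."""
    name: str
    definition: str   # human-written
    scope: str        # human-approved

@dataclass
class Enrichment:
    """AI contribution: appended alongside the entity, never a mutation of it."""
    entity_name: str
    content: str
    model_version: str
    confidence: float

onboarding = FixedEntity(
    name="Client Onboarding",
    definition="The end-to-end intake of a new client, including KYC.",
    scope="Retail banking division",
)

# onboarding.definition = "..."   # would raise FrozenInstanceError: AI cannot modify
enrichments: list[Enrichment] = []  # AI output can only accumulate here
```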

Why It Matters

Every competitor lets AI generate the ontology. Coherence does the opposite. Humans define meaning. Math does the linking. AI handles enrichment. The result: enterprise-grade trust without enterprise-grade bureaucracy.

The FEA Lifecycle

01 · Define: A domain expert writes the concept definition. What does "Client Onboarding" mean in THIS organisation?
02 · Lock: The concept becomes a Fixed Entity. Immutable. Every downstream process references this definition.
03 · Link: HyDE generates bidirectional links to every technical asset. Mathematical, not manual.
04 · Enrich: AI adds context, cross-references, and quality scores. Every enrichment carries: model version, confidence, timestamp, content hash (sketched in code after this list).
05 · Evolve: A human reviews enrichments. Approves, rejects, or refines. The cycle repeats. The ontology lives.
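
A minimal sketch of the enrichment stamp from step 04. The fields are the ones named in the text; the key names and function signature are assumptions.

```python
import hashlib
from datetime import datetime, timezone

def provenance_record(content: str, model: str, pipeline: str,
                      confidence: float) -> dict:
    """Stamp one enrichment with the audit fields named in step 04."""
    return {
        "pipeline_version": pipeline,
        "model_version": model,
        "confidence": confidence,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "content_hash": hashlib.sha256(content.encode("utf-8")).hexdigest(),
    }
```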

FEA is what makes Coherence an ontology-bred product, not another AI wrapper. The concepts are the product. Everything else is infrastructure.

Trust Architecture

Four layers. One guarantee. 100% deterministic.

Hallucination protection is not a feature. It is the architecture. Four independent layers, each catching what the others miss.

Layer 1 · Fixed Entities

Human-curated concepts define the knowledge boundary. If it's not in the ontology, it doesn't exist. The AI cannot invent concepts.

Layer 2 · Mathematical Linking

HyDE linking uses cosine similarity, not generative AI. The bridge between vocabulary and code is pure math. 750K pairwise comparisons. Reproducible.
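
At that scale, the comparisons run as one matrix product over normalised embeddings rather than a loop. A numpy sketch, with the 750K figure decomposed as a hypothetical 1,500 concepts × 500 assets:

```python
import numpy as np

# Hypothetical scale: 1,500 concepts x 500 code assets = 750K pairs.
rng = np.random.default_rng(0)
concepts = rng.normal(size=(1500, 384))  # embedding dimension is illustrative
assets = rng.normal(size=(500, 384))

# Normalise rows so the dot product *is* the cosine similarity.
concepts /= np.linalg.norm(concepts, axis=1, keepdims=True)
assets /= np.linalg.norm(assets, axis=1, keepdims=True)

similarity = concepts @ assets.T         # (1500, 500): all 750K scores in one product
```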

Layer 3 · Confidence Gates

Every link is confidence-scored. ≥0.70 auto-accept. 0.40–0.70 human review. <0.40 discard. The threshold is deterministic.

Layer 4 · Provenance Chain

Every AI enrichment carries full provenance: pipeline version, model used, confidence score, content hash, timestamp. Auditable end-to-end.

Layers 1–2: deterministic, zero AI. Layer 3: mathematical threshold. Layer 4: full audit trail. Together: trustworthy AI without the asterisk.
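
Read together, the layers compose into a single gate on every answer. A condensed, hypothetical sketch:

```python
# Condensed sketch of one query passing through all four layers.
# All names and data shapes are illustrative assumptions.

def answer_query(concept: str,
                 ontology: dict[str, str],
                 links: dict[str, float],
                 provenance: dict[str, dict]) -> dict:
    # Layer 1: Fixed Entities define the boundary; outside it, fall back.
    if concept not in ontology:
        return {"answer": "I don't know."}
    # Layer 2 ran offline: `links` holds cosine-similarity scores, no generation.
    # Layer 3: only links above the auto-accept threshold back the answer.
    cited = {asset: score for asset, score in links.items() if score >= 0.70}
    # Layer 4: every cited asset ships with its audit trail.
    return {
        "answer": ontology[concept],
        "citations": cited,
        "provenance": {asset: provenance[asset] for asset in cited},
    }
```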

Self-Organising

Trustworthy AI is a paradigm, not a feature.

Every answer cited. Every link confidence-scored. Every concept human-approved. This isn't a guardrail bolted on. It's the architecture.
