Product Overview

The semantic runtime that makes AI remember

Yohanun sits between your application and any LLM — OpenAI, Claude, or local models. It provides the cognitive infrastructure that transforms stateless AI into persistent, rule-following systems.

Three components. One semantic runtime.

Yohanun provides the essential infrastructure that every intelligent system needs: memory that persists, rules that explain, and context that evolves.

Memory Layer

Hybrid vector + graph storage that survives restarts, learns from patterns, and builds semantic understanding over time.

Cross-session persistence
Semantic search & relationships
Contextual recall & learning
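To make the idea concrete, here is a minimal, self-contained sketch of what cross-session memory with semantic recall looks like in principle. This is not the Yohanun API — every name below is illustrative, and word overlap stands in for real vector search:

```python
import json
from pathlib import Path

class MemorySketch:
    """Toy illustration of persistent memory: entries survive restarts
    by living on disk, and recall ranks them by word overlap
    (a stand-in for embedding-based semantic search)."""

    def __init__(self, path="memory.json"):
        self.path = Path(path)
        # Cross-session persistence: reload whatever a previous session stored.
        self.entries = json.loads(self.path.read_text()) if self.path.exists() else []

    def remember(self, text):
        self.entries.append(text)
        self.path.write_text(json.dumps(self.entries))

    def recall(self, query, top_k=3):
        # Real systems compare embeddings; word overlap keeps the sketch runnable.
        q = set(query.lower().split())
        scored = [(len(q & set(e.lower().split())), e) for e in self.entries]
        return [e for score, e in sorted(scored, reverse=True) if score > 0][:top_k]
```

A second `MemorySketch` pointed at the same file behaves like a new session that still remembers the first one — the property the Memory Layer provides at production scale.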

Rules Engine

Declarative business logic that creates explainable, auditable decisions. Define rules once, trust them everywhere.

Permission-aware logic
Audit trails & compliance
Explainable judgment
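The pattern behind "define rules once, trust them everywhere" can be sketched in a few lines: rules are declared as data, every decision records which rule fired, and the log doubles as an audit trail. This is a conceptual sketch with hypothetical names, not Yohanun's actual rules syntax:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str        # shows up in the audit trail, making decisions explainable
    condition: object  # predicate over a request dict
    action: str      # "allow" or "deny"

@dataclass
class RulesSketch:
    rules: list
    audit_log: list = field(default_factory=list)

    def decide(self, request):
        # First matching rule wins; every outcome is logged with the rule
        # that produced it, so any decision can be explained after the fact.
        for rule in self.rules:
            if rule.condition(request):
                self.audit_log.append({"request": request, "rule": rule.name, "action": rule.action})
                return rule.action, rule.name
        self.audit_log.append({"request": request, "rule": None, "action": "deny"})
        return "deny", "default-deny"

engine = RulesSketch(rules=[
    Rule("admins-allowed", lambda r: r.get("role") == "admin", "allow"),
    Rule("no-deletes", lambda r: r.get("op") == "delete", "deny"),
])
```

Because each rule carries a name and every decision is logged against it, compliance questions ("why was this denied?") reduce to reading the audit log.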

Context Manager

Intelligent context weaving that understands what matters when. Right information, right time, every time.

Temporal understanding
Multi-modal context weaving
Contextual awareness
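"Right information, right time" combines two signals: how relevant a piece of context is, and how recent it is. A toy sketch (illustrative names only, not the Yohanun API) might weight keyword relevance by recency and keep the top results within a budget:

```python
def weave_context(candidates, query, now, budget=2):
    """Toy context weaving: rank (text, timestamp) snippets by keyword
    relevance weighted by recency, keep the top few within a budget."""
    q = set(query.lower().split())

    def score(item):
        text, timestamp = item
        relevance = len(q & set(text.lower().split()))
        recency = 1.0 / (1.0 + (now - timestamp))  # newer -> closer to 1
        return relevance * recency

    ranked = sorted(candidates, key=score, reverse=True)
    return [text for text, _ in ranked[:budget]]
```

A production context manager would use embeddings, token budgets, and multi-modal sources, but the shape is the same: relevance and time decide what reaches the model.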

Model Agnostic

Works with any LLM. No vendor lock-in.

Yohanun doesn't replace your LLM — it makes it smarter. Whether you're using OpenAI's latest models, Claude, or running local LLMs, Yohanun provides the same semantic infrastructure.

Switch models without rebuilding your memory. Change providers without losing context. Build once, run anywhere.
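The reason a model swap preserves memory is architectural: when memory belongs to the runtime rather than the provider, any callable that maps a prompt to a reply is interchangeable. A minimal sketch of that separation (stub models, illustrative names — not Yohanun internals):

```python
def make_runtime(model, memory):
    """Toy model-agnostic runtime: 'model' is any prompt -> reply callable;
    'memory' is owned by the runtime, so it outlives any one provider."""
    def chat(message):
        context = " | ".join(memory)      # memory lives outside the model
        reply = model(f"{context}\n{message}")
        memory.append(message)            # context survives a model swap
        return reply
    return chat

memory = []
chat_a = make_runtime(lambda p: f"model-A saw: {p!r}", memory)
chat_a("hello")

# Swap providers without losing context: same memory, different model.
chat_b = make_runtime(lambda p: f"model-B saw: {p!r}", memory)
```

The second runtime sees everything the first one stored, which is the property "switch models without rebuilding your memory" describes.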

See How It Works

Supported Models

OpenAI
GPT-4, GPT-3.5
Anthropic
Claude 3, Claude 2
Local Models
Llama, Mistral, etc.
And More
Any OpenAI-compatible API

Simple integration. Powerful results.

Add semantic intelligence to your existing AI workflows in minutes, not months.

your-app.py
# Before: Direct LLM calls
import openai

response = openai.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Help me plan..."}]
)

# After: Semantic runtime with memory & rules
from yohanun import SemanticRuntime

runtime = SemanticRuntime(
    memory=True,    # Persistent context
    rules=True,     # Business logic
    model="gpt-4"   # Any LLM
)

response = runtime.chat(
    user="user_123",
    message="Help me plan...",
    # Memory and rules automatically applied
)

Ready to build intelligent systems?

Start with Yohanun Cloud for instant deployment, or explore self-hosted options for complete control.