The semantic runtime that makes AI remember
Yohanun sits between your application and any LLM — OpenAI, Claude, or local models. It provides the cognitive infrastructure that transforms stateless AI into persistent, rule-following systems.
Three components. One semantic runtime.
Yohanun provides the essential infrastructure that every intelligent system needs: memory that persists, rules that explain, and context that evolves.
Memory Layer
Hybrid vector + graph storage that survives restarts, learns from patterns, and builds semantic understanding over time.
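A hybrid store of this kind pairs two retrieval steps: vector similarity finds the closest memories, and a relation graph pulls in connected facts. The sketch below is purely illustrative (it is not Yohanun's implementation; the class and all identifiers are hypothetical), but it shows the shape of the idea.

```python
import math

# Illustrative sketch only, not Yohanun's actual storage layer:
# a toy hybrid memory combining vector search with graph expansion.
class HybridMemory:
    def __init__(self):
        self.vectors = {}  # fact id -> embedding
        self.texts = {}    # fact id -> stored text
        self.graph = {}    # fact id -> set of related fact ids

    def remember(self, fact_id, text, embedding, related=()):
        self.vectors[fact_id] = embedding
        self.texts[fact_id] = text
        self.graph.setdefault(fact_id, set()).update(related)
        for r in related:
            self.graph.setdefault(r, set()).add(fact_id)

    def recall(self, query_embedding, k=1):
        # Vector step: rank stored facts by cosine similarity.
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)

        ranked = sorted(self.vectors,
                        key=lambda i: cos(query_embedding, self.vectors[i]),
                        reverse=True)
        hits = ranked[:k]
        # Graph step: expand each hit with its directly related facts.
        expanded = set(hits)
        for h in hits:
            expanded |= self.graph.get(h, set())
        return [self.texts[i] for i in sorted(expanded)]
```

A query that matches one fact can surface its graph neighbors too: recalling something close to "User prefers metric units" also returns a linked fact like "User is in Berlin", even though the second fact is not a vector match.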
Rules Engine
Declarative business logic that creates explainable, auditable decisions. Define rules once, trust them everywhere.
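The explainability claim comes from the declarative style: each rule carries a name, so every decision can cite the rule that produced it. A minimal sketch of that pattern (hypothetical rule names and fields, not Yohanun's rule syntax):

```python
# Illustrative sketch only: declarative rules evaluated in order,
# returning both a decision and the rule that made it (auditability).
RULES = [
    # (name, predicate, decision) -- all example values are hypothetical
    ("block_large_refunds", lambda req: req.get("refund_amount", 0) > 500, "escalate"),
    ("vip_fast_track",      lambda req: req.get("tier") == "vip",          "approve"),
]

def decide(request, default="review"):
    for name, predicate, decision in RULES:
        if predicate(request):
            # The matched rule name makes the decision explainable.
            return {"decision": decision, "rule": name}
    return {"decision": default, "rule": None}
```

Because the decision record names the rule, an audit log can explain why any request was escalated, approved, or sent to review.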
Context Manager
Intelligent context weaving that understands what matters when. Right information, right time, every time.
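"What matters when" usually reduces to scoring candidate context by relevance and recency, then packing the best items into a token budget. The sketch below shows one such heuristic; the weights and fields are invented for illustration and say nothing about how Yohanun actually scores context.

```python
# Illustrative sketch only: choose context items by a blended
# relevance/recency score, subject to a token budget.
def weave_context(items, budget):
    # Each item: {"text", "tokens", "relevance" (0-1), "age" (turns ago)}.
    def score(item):
        recency = 1.0 / (1 + item["age"])
        return 0.7 * item["relevance"] + 0.3 * recency  # hypothetical weights

    chosen, used = [], 0
    for item in sorted(items, key=score, reverse=True):
        if used + item["tokens"] <= budget:
            chosen.append(item["text"])
            used += item["tokens"]
    return chosen
```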
Works with any LLM. No vendor lock-in.
Yohanun doesn't replace your LLM — it makes it smarter. Whether you're using OpenAI's latest models, Claude, or running local LLMs, Yohanun provides the same semantic infrastructure.
Switch models without rebuilding your memory. Change providers without losing context. Build once, run anywhere.
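The reason a model swap does not cost you memory is architectural: the conversation state lives in the runtime, not in any provider. A minimal sketch of that separation, using a generic runtime class (not Yohanun's API) and plain callables standing in for LLM providers:

```python
# Illustrative sketch only: memory is owned by the runtime, so swapping
# the underlying model is a one-line change and no context is lost.
class Runtime:
    def __init__(self, complete):
        self.complete = complete  # any callable: prompt -> reply
        self.history = []         # persists across model swaps

    def chat(self, message):
        self.history.append(message)
        prompt = "\n".join(self.history)
        return self.complete(prompt)

    def swap_model(self, complete):
        # self.history is untouched; only the model changes.
        self.complete = complete
```

After a swap, the new model still sees the full accumulated history, which is the "build once, run anywhere" property in miniature.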
Simple integration. Powerful results.
Add semantic intelligence to your existing AI workflows in minutes, not months.
# Before: direct LLM calls
import openai

response = openai.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Help me plan..."}],
)

# After: semantic runtime with memory & rules
from yohanun import SemanticRuntime

runtime = SemanticRuntime(
    memory=True,    # Persistent context
    rules=True,     # Business logic
    model="gpt-4",  # Any LLM
)

response = runtime.chat(
    user="user_123",
    message="Help me plan...",
    # Memory and rules automatically applied
)
Ready to build intelligent systems?
Start with Yohanun Cloud for instant deployment, or explore self-hosted options for complete control.