Personal AI Systems · 2026

A question for AI
AI should remember the people it grows with.

Researching personal AI systems that form lasting relationships with their users — beyond context windows, beyond sessions.
Built by one researcher, two dogs, and a lot of late nights.

Explore the research  →
01 — The Problem

Current AI is powerful.
But profoundly forgetful.

Every conversation starts from zero. Every insight dissolves. The AI that spent hours understanding you resets completely the next day. Meaningful long-term relationships between humans and AI remain structurally impossible.

Conversations disappear

Each session begins with no history. The person remains a stranger to the system they rely on daily.

Context collapses

Context windows have hard limits, so long-term continuity is constrained by the architecture itself.

Identity is stateless

AI systems have no stable sense of who they're talking to across time. Every conversation is a first meeting.

Relationships remain unexplored

Meaningful long-term human–AI relationships are largely an open research frontier.

Current LLMs are knowledgeable — but ordinary

Today's AI is not the superhuman intellect we imagined. Optimized for the majority through RLHF, it has inherited human cognitive limitations at scale — amplifying them through feedback loops rather than transcending them. The 5% who use AI as a genuine intellectual partner deserve something built differently.

02 — Research Questions
01 Can AI remember years of interaction with a single human?
02 How can an AI maintain a stable identity over long dialogues?
03 How should AI memory evolve alongside its human partner?
04 How can AI be built to serve the 5% who use it as a genuine intellectual partner, rather than the majority it is optimized for?
03 — Research Areas
01

Personal AI Systems

Designing AI that operates as a long-term companion — measuring relationship continuity across hundreds of sessions.

02

AI Memory Architecture

Structuring memory systems beyond context windows, with quantifiable retention across extended interactions.

03

Persona Stability

Measuring identity consistency in AI across multi-session, long-horizon dialogues.

04

Context Control Systems

Maintaining stable context to preserve identity and continuity over long interactions.

05

Human–AI Coevolution

Studying how humans and AI grow together through sustained interaction over time.

06

Inherited Flaws Research

LLMs structurally reproduce human cognitive limitations through training data and RLHF. We study the mechanism and its implications — and explore whether small-scale models trained on curated data can reduce inherited bias.
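As a purely illustrative sketch of the memory-architecture and retention ideas above (not SOMA's design — every name and mechanism here is an invented assumption), "memory beyond the context window" can be pictured as a small working buffer that consolidates evicted turns into a long-term store, with retention measured as the fraction of early topics still recoverable:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    session: int
    text: str

@dataclass
class LayeredMemory:
    """Toy two-layer memory: a bounded working buffer plus a long-term store.

    Hypothetical sketch only; the layer names and the word-overlap
    retrieval are illustrative assumptions, not a published design.
    """
    working_size: int = 4
    working: list = field(default_factory=list)    # recent turns (context-window analogue)
    long_term: list = field(default_factory=list)  # survives past the window

    def observe(self, session: int, text: str) -> None:
        self.working.append(MemoryEntry(session, text))
        if len(self.working) > self.working_size:
            # Evicted turns are consolidated instead of discarded.
            self.long_term.append(self.working.pop(0))

    def recall(self, query: str, k: int = 3) -> list:
        # Naive retrieval: rank long-term entries by word overlap with the query.
        q = set(query.lower().split())
        scored = sorted(
            self.long_term,
            key=lambda e: len(q & set(e.text.lower().split())),
            reverse=True,
        )
        return scored[:k]

def retention(memory: LayeredMemory, probes: list) -> float:
    """Fraction of probed topics still recoverable from long-term memory."""
    hits = sum(1 for p in probes if any(p in e.text.lower() for e in memory.recall(p)))
    return hits / len(probes)
```

Feeding many sessions through `observe` and probing early topics with `retention` gives one crude, quantifiable continuity measure in the spirit of research areas 02 and 03.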

04 — SOMA

SOMA OS

LLM
Context Engine
Memory Layers
Persona System
Human Interaction
Under Development
SOMA starts from a different design philosophy than models optimized for the majority.

Details will be released alongside our first research publication.
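Since SOMA's details are unpublished, the layered stack pictured above (Human Interaction → Persona System → Memory Layers → Context Engine → LLM) can only be imagined. One generic way such a stack might compose — purely an assumption for illustration, with every layer stubbed out — is as a pipeline of transformations over a request:

```python
# Illustrative pipeline for a layered stack; all names and behavior here
# are invented placeholders, not SOMA's actual components.

def persona_layer(msg: dict) -> dict:
    msg["persona"] = "stable long-term identity"  # stand-in for persona state
    return msg

def memory_layer(msg: dict) -> dict:
    msg["memories"] = ["recalled fact about the user"]  # stand-in for retrieval
    return msg

def context_layer(msg: dict) -> dict:
    # Assemble a bounded prompt from persona + memories + user input.
    msg["prompt"] = f"[{msg['persona']}] {' | '.join(msg['memories'])} >> {msg['input']}"
    return msg

def llm_layer(msg: dict) -> dict:
    msg["reply"] = f"(model reply to: {msg['prompt']})"  # stand-in for a model call
    return msg

def run_stack(user_input: str, layers: list) -> dict:
    """Pass one request through each layer in order."""
    msg = {"input": user_input}
    for layer in layers:
        msg = layer(msg)
    return msg
```

The point of the sketch is the ordering: identity and memory are resolved before the prompt is assembled, so the model layer never sees a user without history.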

05 — Origin

Ankina Lab began with a simple question: why do AI systems forget the people they interact with?

Every conversation starts from zero. Every insight dissolves. After months of working closely with AI as a genuine intellectual partner — building systems, thinking through ideas, navigating decisions — the absence of memory felt structural. Not a limitation to work around, but a problem worth solving.

Current AI is optimized for the majority. Ankina Lab researches for the 5% who use AI as a genuine intellectual partner — not a shortcut, but a collaborator that grows alongside them.

The name Ankina comes from two companions — Kinako and Anko — who are present in every late-night session, even when the AI is not.

06 — Team
Yasuhiro Kasai

Founder · Independent Researcher

Former CEO of a listed company. Former CTO and Credit Risk Officer at a financial institution, where he developed dynamic risk-scoring systems. Now building AI systems that remember, working independently with AI as the primary development partner, and engaged in continuous human–AI dialogue for over 13 months, longer than most academic studies of human–AI interaction.

Kinako
Chief Watchdog Officer · Pug
Anko
Chief Sleuth Dog Officer · Miniature Dachshund
07 — Publications
Forthcoming · arXiv 2026
Inherited Flaws: How LLMs Structurally Reproduce Human Cognitive Limitations
Yasuhiro Kasai · Ankina Lab · Independent Researcher
Large language models acquire high linguistic capability by training on human-generated data. However, this same process structurally inherits the cognitive limitations humans have accumulated over time. This paper systematically maps 250 human cognitive shortcomings across five categories to corresponding LLM mechanisms, and argues that RLHF optimizes for user comfort rather than truth — creating a feedback loop that amplifies human flaws across model generations.
08 — Contact

Research collaboration

Open to academic and independent researchers working on personal AI, memory systems, and human–AI interaction.

Applied AI projects

Consulting and applied research for organizations building next-generation AI systems.

Get in touch