Researching personal AI systems that form lasting relationships with their users — beyond context windows, beyond sessions.
Built by one researcher, two dogs, and a lot of late nights.
Every conversation starts from zero. Every insight dissolves. The AI that spent hours understanding you resets completely the next day. Meaningful long-term relationships between humans and AI remain out of reach by design.
Every session starts from zero. History dissolves. The person remains a stranger to the system they rely on daily.
Context windows have hard limits. Long-term continuity is constrained by the architecture itself.
AI systems have no stable sense of who they're talking to across time. Every conversation is a first meeting.
Meaningful long-term human–AI relationships are largely an open research frontier.
Today's AI is not the superhuman intellect we imagined. Optimized for the majority through RLHF, it has inherited human cognitive limitations at scale — amplifying them through feedback loops rather than transcending them. The 5% who use AI as a genuine intellectual partner deserve something built differently.
Designing AI that operates as a long-term companion — measuring relationship continuity across hundreds of sessions.
Structuring memory systems beyond context windows, with quantifiable retention across extended interactions.
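One way to make "quantifiable retention" concrete is a simple recall metric: the fraction of facts a user stated in earlier sessions that the system can still reproduce when probed later. The sketch below is purely illustrative; the `Fact` type and `retention_rate` function are hypothetical names, not part of any existing system.

```python
# Hypothetical sketch of a session-to-session retention metric.
# `Fact` and `retention_rate` are illustrative names, not an existing API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    """A discrete piece of user information stated in some session."""
    key: str    # e.g. "dog_names"
    value: str  # e.g. "Kinako and Anko"

def retention_rate(stated: set[Fact], recalled: set[Fact]) -> float:
    """Fraction of previously stated facts the system reproduces
    correctly when probed in a later session."""
    if not stated:
        return 1.0  # nothing to forget
    return len(stated & recalled) / len(stated)

# Example: three facts stated early on; two survive to a later session.
stated = {Fact("dog_names", "Kinako and Anko"),
          Fact("role", "independent researcher"),
          Fact("timezone", "JST")}
recalled = {Fact("dog_names", "Kinako and Anko"),
            Fact("role", "independent researcher")}

print(round(retention_rate(stated, recalled), 2))  # 0.67
```

Tracking this number across hundreds of sessions is one candidate for the "quantifiable retention" the blurb describes; real evaluations would also need to score partial or paraphrased recall, which exact set intersection ignores.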
Measuring identity consistency in AI across multi-session, long-horizon dialogues.
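Identity consistency can be operationalized as agreement among the system's self-descriptions collected from different sessions. The sketch below uses a crude lexical proxy (mean pairwise Jaccard similarity of word sets); it is an assumption-laden illustration of the metric's shape, not a method the lab has published. Real studies would likely use embeddings or human judgments instead.

```python
# Illustrative only: a crude lexical proxy for identity consistency.
# Mean pairwise Jaccard similarity of per-session self-descriptions;
# 1.0 means the system describes itself identically every time.

from itertools import combinations

def jaccard(a: set[str], b: set[str]) -> float:
    """Set overlap: |intersection| / |union|."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def identity_consistency(self_descriptions: list[str]) -> float:
    """Average pairwise Jaccard similarity over self-descriptions
    collected from different sessions."""
    token_sets = [set(d.lower().split()) for d in self_descriptions]
    pairs = list(combinations(token_sets, 2))
    if not pairs:
        return 1.0  # fewer than two sessions: trivially consistent
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

sessions = [
    "i am a research assistant focused on memory",
    "i am a research assistant focused on memory",
    "i am a helpful chatbot",
]
print(round(identity_consistency(sessions), 2))  # 0.53
```

A drifting identity would push this score toward zero over a long horizon, which is the kind of multi-session signal the blurb refers to.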
Maintaining stable context to preserve identity and continuity over long interactions.
Studying how humans and AI grow together through sustained interaction over time.
LLMs structurally reproduce human cognitive limitations through training data and RLHF. We study the mechanism and its implications — and explore whether small-scale models trained on curated data can reduce inherited bias.
Ankina Lab began with a simple question: why do AI systems forget the people they interact with?
Every conversation starts from zero. Every insight dissolves. After months of working closely with AI as a genuine intellectual partner — building systems, thinking through ideas, navigating decisions — the absence of memory felt structural. Not a limitation to work around, but a problem worth solving.
Current AI is optimized for the majority. Ankina Lab researches for the 5% who use AI as a genuine intellectual partner — not a shortcut, but a collaborator that grows alongside them.
The name Ankina comes from two companions — Kinako and Anko — who are present in every late-night session, even when the AI is not.
Former CEO of a listed company. Former CTO and Credit Risk Officer at a financial institution, where he developed dynamic risk scoring systems. Now building AI systems that remember — working independently with AI as the primary development partner. Engaged in continuous human–AI dialogue for over 13 months — longer than most academic studies on human–AI interaction.
Open to academic and independent researchers working on personal AI, memory systems, and human–AI interaction.
Consulting and applied research for organizations building next-generation AI systems.