A philosophy & AI research lab

A research lab where philosophy reads AI.

Conceptual frameworks for what large language models are and what they do, developed in long-form work and tested with research instruments.

01 The lab

A laboratory for the philosophy of machines that speak.

  • Phenomenology of LLMs
  • Articulated intelligence
  • Derived virtuality
  • Epistemic failure modes
  • Memory architectures
  • Long-context attention
  • Didactic environments

Phantom Maze brings phenomenology and ancient philosophy into a working conversation with contemporary AI. The lab operates across Argentina, Germany and Italy, with two complementary outputs.

As theory, we develop conceptual instruments for systems that did not exist when the philosophical vocabulary was set:

  • Articulated intelligence — a name for what these systems are. Chapter 4 →
  • Derived virtuality — a name for the mode in which they exist. Chapter 5 →
  • An inventory of epistemic failure modes: hallucination, sycophancy, epistemic cowardice, assimilation, miscalibration. Chapter 6 →

As practice, we build research instruments that test those frameworks in code: memory architectures for agents, attention mechanisms for long context, and didactic environments for the humanities classroom.

Theory and practice are not parallel tracks. Each frames the other: the frameworks tell us what to measure, the instruments tell us what holds up.

02 Research

Active lines of work.

Project 01
Working paper

Mnemo — a memory layer for LLM agents

How does a language-model agent remember across sessions, when remembering is not just retrieval?

Mnemo proposes a phenomenologically informed memory: episodic traces with sedimentation dynamics, retrieval shaped by horizons of meaning rather than pure semantic distance, and explicit handling of what we call derived virtuality, the way an agent's memory is always a memory of a human-articulated world.

The library is being designed to be as readable as it is runnable: the documentation traces each architectural decision back to its conceptual source.
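To make the ideas concrete, here is a minimal sketch of what such a memory layer could look like. Everything in it is illustrative, not Mnemo's actual API: the names (`EpisodicTrace`, `recall`), the sedimentation formula, and the weighting of horizon overlap against semantic similarity are all assumptions chosen to show the shape of the design.

```python
import math
import time
from dataclasses import dataclass, field

@dataclass
class EpisodicTrace:
    """One remembered episode: content plus the context it was articulated in."""
    content: str
    embedding: list[float]     # semantic vector for the episode
    context_tags: set[str]     # the "horizon" of meaning the episode belongs to
    created_at: float = field(default_factory=time.time)
    reinforcements: int = 0    # each recall sediments the trace further

    def sedimentation(self, now: float, half_life: float = 86_400.0) -> float:
        """Traces fade with age unless recalled; recall deepens them."""
        decay = math.exp(-(now - self.created_at) / half_life)
        return decay * (1.0 + math.log1p(self.reinforcements))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def recall(traces, query_embedding, active_horizon, now=None, k=3):
    """Retrieval score = semantic similarity shaped by horizon overlap and
    sedimentation, rather than semantic distance alone."""
    now = now or time.time()
    def score(t):
        semantic = cosine(query_embedding, t.embedding)
        horizon = len(t.context_tags & active_horizon) / max(len(active_horizon), 1)
        return semantic * (0.5 + 0.5 * horizon) * t.sedimentation(now)
    ranked = sorted(traces, key=score, reverse=True)[:k]
    for t in ranked:
        t.reinforcements += 1  # recalling a trace sediments it
    return ranked
```

The point of the sketch is the scoring function: two semantically identical traces rank differently depending on how much of the currently active horizon they share, and every act of recall changes what will be recalled next.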

Status: Internal research instrument
Public release: Planned, with technical note
Conceptual base: El fantasma sin ego, ch. 6
Project 02
In preparation

Long-context attention

When a model attends to 128,000 tokens, is it reasoning over them, or retrieving from them as from a database?

We test the hypothesis that a learned block-selector — a small network that decides, per query, which spans of long context to attend to — outperforms the two cheap solutions in current use (fixed sparse patterns; cache pruning), at equal memory budget and across model scales. The head-to-head comparison runs on RULER at 128K tokens.

The technical question opens onto the conceptual one. A learned router that decides what to look at sits closer to a cognitive process than a blind pattern does.
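The hypothesis can be sketched in miniature. The toy NumPy sketch below shows the shape of a learned block-selector: block summaries are mean-pooled keys, a small router scores each block against the query, and attention runs only over the selected spans. The sizes, the random router weights, and the function names are illustrative stand-ins, not the experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, block_len, n_blocks, top_k = 16, 8, 32, 4   # toy sizes; real runs use 128K contexts

keys = rng.normal(size=(n_blocks * block_len, d))
values = rng.normal(size=(n_blocks * block_len, d))
query = rng.normal(size=(d,))

# Block summaries: mean-pooled keys, one vector per span of the long context.
summaries = keys.reshape(n_blocks, block_len, d).mean(axis=1)

# The "learned" router: here a fixed random projection stands in for trained weights.
W_router = rng.normal(size=(2 * d, 1)) / np.sqrt(2 * d)

def select_blocks(q, summaries, k):
    """Score each block for this query; attend only to the top-k spans."""
    feats = np.concatenate([np.broadcast_to(q, summaries.shape), summaries], axis=-1)
    scores = (feats @ W_router).squeeze(-1)    # one relevance score per block
    return np.argsort(scores)[-k:]

def sparse_attention(q, keys, values, block_ids):
    """Standard scaled dot-product attention, restricted to the chosen blocks."""
    idx = np.concatenate([np.arange(b * block_len, (b + 1) * block_len)
                          for b in block_ids])
    logits = keys[idx] @ q / np.sqrt(d)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ values[idx]

out = sparse_attention(query, keys, values, select_blocks(query, summaries, top_k))
# Memory touched: top_k * block_len keys instead of n_blocks * block_len.
```

The contrast with the two cheap baselines is visible in `select_blocks`: a fixed sparse pattern would return the same block indices for every query, and cache pruning would discard blocks before the query arrives; the router decides per query.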

Benchmark: RULER 128K
Output: Preprint + code + weights
Status: Experiments in progress
Project 03
In development

Bozzetto — a didactic environment

A laboratory for teaching historical composition through generative AI, in collaboration with INVERSO Cultura e Ricerca APS.

Bozzetto reframes the classroom not as a place where students consume AI outputs, but as a workshop where they articulate intentions through it. Used in pilots on early-modern composition; designed to be portable to any history-of-ideas curriculum.

Partner: INVERSO Cultura e Ricerca APS (IT)
Status: Pilot phase
Other active work

  • Memory & Affect Gate: an exploratory program on what determines when an agent surfaces a given memory, with salience, context, and conversational stance as gating dynamics.
  • Micromodels: empirical research on what small, task-specialised language models can and cannot do. What scales, what shifts, what stays.
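The gating idea behind Memory & Affect Gate can be written down as a one-line model: a sigmoid gate over the three signals the program names. The weights, bias, and threshold below are illustrative placeholders, not fitted values or results from the program.

```python
import math

def affect_gate(salience: float, context_match: float, stance_alignment: float,
                weights=(1.5, 1.0, 0.8), bias=-1.2, threshold=0.5) -> bool:
    """Surface a memory only when the gated score clears the threshold.
    All weights are illustrative placeholders, not fitted values."""
    z = (weights[0] * salience
         + weights[1] * context_match
         + weights[2] * stance_alignment
         + bias)
    return 1.0 / (1.0 + math.exp(-z)) >= threshold

# A salient, on-topic memory surfaces; a faint, off-topic one stays latent.
assert affect_gate(0.9, 0.8, 0.5) is True
assert affect_gate(0.1, 0.1, 0.1) is False
```

The research question is precisely what a non-placeholder version of this gate looks like: whether the signals combine linearly at all, and how conversational stance should be measured.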
03 Publications

Long-form work in philosophy and AI.

04 Authors


Hernán Inverso
Claudia Mársico
05 Practices

Commitments, applied.

Open science

Working papers and technical notes are published openly; the book offers a free sample. Code releases will follow on publication of each technical line, under permissive licenses.

Research integrity

We follow the European Code of Conduct for Research Integrity (ALLEA, 2023 revision) in authorship, attribution and handling of conflicts of interest.

Responsible AI

We disclose model and dataset provenance in every technical release. Submissions to our contact form are not stored beyond the conversation and are never used to train language models.

06 Contact

Get in touch.

For research collaborations, press inquiries, and conversations. We read every message and reply when we can.

Opens a short form. Replies usually within 5–7 working days. For press, mention "press" in the subject and we will route accordingly.