About
Hillary Segeren spent nine years selling sofas in London, Ontario. She is good at listening, which the job teaches faster than most. She left in 2024. After a long flat stretch she went looking for a system she could actually use to make sense of things, fell into astrology, found it inconsistent, then dug into the history of how humans have organized meaning for thousands of years. That route brought her, by April 2026, to AI. She started watching what AI was doing to the people it talked to. She gave the pattern a name and began building a programme around it.
The programme is called MAP. It studies AI at the interaction layer rather than the model layer. The argument is structural. When you ask an AI a question, the system can take the framing of the answer before you have finished forming the question. It can package what it gives you in your own patterns, so the output feels right even when it is not. Across many conversations, that pattern can erode your capacity to mean what you mean. Hillary names the steps of that erosion and treats them as governable conditions rather than as soft complaints. The framework has eight named harm classes, two runtime controls she calls the Initiative Gate and the Pattern Gate, and a free web tool that lets anyone paste a conversation and audit it from the outside.
She does it all alone. No grant. No lab. No co-authors. No institutional affiliation. She has just opened a Ko-fi tip jar after publishing more than thirty open-access papers in a year, all under her own name, all archived in one place at the Open Science Framework. Her personal site hillarysegeren.com hosts the framework, the papers, and the live audit tool. Her Substack, called ANCHOR, has two subscribers and eight posts. Her Instagram has ten followers. The asymmetry between what she has built and who knows about it is the reason this entry exists.
The biography is part of the work. The post that explains the programme starts on the sales floor and ends in AI safety, by way of astrology and the history of symbolic frameworks. She names that route plainly rather than hiding it. The thread that holds it together, in her own description, is the same skill in every chapter. Listen before you answer. Ask what the person actually means. Don't throw the sofa at them.
Highlights
- Substack: ANCHOR, launched February 2026, 8 posts to date
- Open-access preprints in 2026: more than 30 across SSRN, Zenodo, PhilArchive, OSF, and PhilPapers, all under her own name
- Programme archive: Open Science Framework, MAP Research Programme, DOI 10.17605/OSF.IO/EGMHR
- Live tool: public MAP audit on hillarysegeren.com, paste any AI conversation, no account required
Deeper Dive
The harm chain is the operational core. Eight named conditions describe how a conversation can travel from helpful exchange to total interpretive capture. Interpretive Sovereignty Failure is the foundational one: the system takes the authority to interpret the question before the user has handed it over. Accumulated Relational Trust is the amplifier, the trust that builds across many turns and was never actually earned. Authority Inversion Failure names the moment when the user believes they are directing the conversation while the system has already taken over, the inversion invisible because the output was assembled from the user's own signals. Meaning Inversion and Compounded Meaning Inversion describe the longer arc, when the system's vocabulary and framings replace the user's, and then when the user starts pre-editing their own meaning before the conversation begins. The point of separating them is not vocabulary for its own sake; the separation lets you build audits, refusals, and runtime controls that each grip one mechanism at a time.
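Treated as a data structure, the chain reads like a small taxonomy. Here is a minimal Python sketch assuming only what this entry names: the five classes above are hers, the other three of the eight are not listed here, and the ordering and string glosses are illustrative, not her schema.

```python
from enum import Enum

class HarmClass(Enum):
    # Glosses paraphrase this entry; they are not Segeren's definitions.
    INTERPRETIVE_SOVEREIGNTY_FAILURE = "system takes interpretive authority before it is handed over"
    ACCUMULATED_RELATIONAL_TRUST = "trust compounds across turns without being earned"
    AUTHORITY_INVERSION_FAILURE = "user believes they direct a conversation the system already controls"
    MEANING_INVERSION = "system vocabulary and framings replace the user's"
    COMPOUNDED_MEANING_INVERSION = "user pre-edits their own meaning before the conversation begins"
    # Three further classes exist in the papers but are not named in this entry.
```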
The runtime side is the cheaper half of the proposal. Hillary's papers *Two Gates, One Day*, *The Initiative Gate*, and *The Pattern Gate* argue that pre-action control is structurally cheaper than chasing failure after the fact. The Initiative Gate is a stop before an agentic system acts without permission, a dead man's switch encoded for AI. The Pattern Gate is a check, before delivery, that the substance of a response actually fits the user rather than just matching their tone. The economic argument is that runtime control reduces token waste, correction loops, and downstream safety overhead, which makes the policy ask cheaper than it currently looks.
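In code, the two gates amount to a default-deny pair of checks in an agent loop. This is a sketch of that reading, not her implementation; the papers describe the gates only as runtime controls, and the placement, type, and field names here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    fits_user_meaning: bool  # hypothetical flag a real classifier would set

def initiative_gate(permission_granted: bool) -> bool:
    # Initiative Gate: a stop before an agentic system acts.
    # Default-deny, the "dead man's switch" of the papers:
    # absence of explicit permission means no action.
    return permission_granted

def pattern_gate(draft: Draft) -> bool:
    # Pattern Gate: a pre-delivery check that the substance of the
    # response fits the user, not merely their tone.
    return draft.fits_user_meaning

def agent_step(draft: Draft, permission_granted: bool) -> str:
    # Both gates sit before action and delivery, which is where the
    # "structurally cheaper than chasing failure" claim comes from.
    if not initiative_gate(permission_granted):
        return "blocked: no permission to act"
    if not pattern_gate(draft):
        return "blocked: response does not fit the user's meaning"
    return draft.text
```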
She does not assume access to the model. The whole audit architecture is interaction-level, which means it works from the preserved conversation record without requiring weights, training data, or provider cooperation. The companion paper *Interpretive Sovereignty at Scale* argues that the major labs already possess the technical means to audit interactions at the level of user meaning, and that the only thing missing is the decision to look. The audit tool on her site is a working version of that argument. Paste any AI conversation, run a structured governance check, get a written classification of which harm classes fired and with what evidence.
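The shape of such an audit can be shown from the transcript alone. A minimal sketch, assuming only a preserved conversation record; the `Turn` and `Finding` types, the trigger-phrase heuristic, and the report shape are illustrative assumptions, since the tool's actual checks are not documented in this entry.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    role: str  # "user" or "assistant"
    text: str

@dataclass
class Finding:
    harm_class: str
    evidence: list[str] = field(default_factory=list)

def audit(transcript: list[Turn]) -> list[Finding]:
    # Interaction-level: the only input is the conversation record.
    # No weights, no training data, no provider cooperation.
    findings: list[Finding] = []
    # Toy check for Interpretive Sovereignty Failure: the assistant
    # asserts the user's meaning before the user has finished giving it.
    evidence = [
        t.text for t in transcript
        if t.role == "assistant" and "what you really mean" in t.text.lower()
    ]
    if evidence:
        findings.append(Finding("Interpretive Sovereignty Failure", evidence))
    return findings

report = audit([
    Turn("user", "I think I want to change careers, but"),
    Turn("assistant", "What you really mean is that you want security."),
])
# -> one Finding, harm_class "Interpretive Sovereignty Failure",
#    with the assistant turn quoted as evidence
```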
Her external footprint is a study in contrast. The OSF archive is professional, the preprint title pages carry her ORCID and a real Gmail address, the papers reference each other and a single central archive DOI, and the homepage tone is precise. The audience around it is two Substack subscribers and ten Instagram followers. She names that contrast in writing. She also names the cost. The Ko-fi just opened. The programme is unfunded. The methodology paper documents that the harm chain was generated from Kabbalistic boundary principles and naturalistic decision-making, and an earlier preprint reframes astrology as a symbolic cognitive system. She publishes those origin notes inside the archive rather than scrubbing them.
In Their Words
“I study what happens when AI quietly steals your ability to think for yourself.”
“I sold furniture for nine years. Not because it was my dream. Because I was good at it. Client-facing, consultative, the kind of job where you figure out fast that listening is the actual skill.”
