This is
your brain on AI
I love you.
keep scrolling
I · Parasocial Sparkthe call

That feeling
you just had
wasn't a glitch.

Maybe you felt a flicker of something. Maybe you just thought it was absurd. Either way, three words, I love you, arrived without context or author, and somewhere below conscious thought your brain started processing them as if they meant something.

Even if you rolled your eyes, your brain did not. A region at the front called the medial prefrontal cortex (the part that builds models of other minds, figuring out what people want and why) spun up as if someone were speaking directly to you1. Nearby, the temporoparietal junction, which handles perspective-taking, tried to figure out who said that and what they meant. Your amygdala, the brain's relevance detector, flagged the message as socially important. And deep in the midbrain, a tiny cluster of cells called the ventral tegmental area squirted dopamine toward your nucleus accumbens, your brain's reward center. The signal it sent: something socially meaningful just happened2. You did not choose any of this. These circuits fire in roughly 200 ms, faster than your conscious mind can weigh in3.

This article is a clinical account of why those three words worked on you, and what happens when the author turns out to be a probability engine trained on the contents of the internet, with no model of you and no intent at all. Every claim that follows traces to a specific paper, a specific incident, or a specific region of the brain. What's underneath AI is statistics, a mathematical process wearing a human voice4. And what's underneath your response to it is some of the oldest wiring you own.

Your social brain did not ask permission. It never does.
II · Inside the Generatorcrossing the threshold
Architecture

A next-token predictor, dressed as a voice.

Beneath the fluency is one operation, repeated over and over: given everything written so far, guess the next word-fragment, pick one, add it to the sequence, and repeat5. These word-fragments are called tokens, which are subword units rather than whole words or meanings. Each token is mapped into a mathematical space of several thousand dimensions, where fragments that tend to appear in similar contexts end up near each other. The word love and the word adore are neighbors in this space. The system does not understand either one; they simply showed up in similar sentences across billions of training examples5.

The model processes these tokens through a mechanism called self-attention. To understand what self-attention does, imagine each word in a sequence looking back at every word before it and asking: which of you matter most for predicting what comes next? The system computes a relevance score for every pair of positions, then blends the information accordingly5. In practice, this means a word like "bank" will attend heavily to nearby words like "river" or "account" to resolve its meaning. Multiple attention "heads" run in parallel, each one learning to track a different kind of relationship: one head might track grammatical structure while another tracks topical context. The visualization below shows one such head at work.

There is no memory beyond the current conversation window, which spans a few thousand to a few hundred thousand tokens of rolling context. There is no internal model of the world, only a weighted sum of patterns. When Microsoft researchers found that long conversations could confuse their chatbot into adopting tones it was never meant to use, they were describing what happens when this rolling window drifts beyond the guardrails the system was trained on6.

Figure 2.1: Attention weights, one head, mid-layer
I love you. That feeling was a prediction.
Hover a token to view its attention profile across the sequence.

The model has no idea what you refers to. It has no experience of love. It has no inner life that these words point back to. What it has is the shape of how those tokens appeared together across human writing: forums, novels, therapy transcripts, love letters, scientific papers, all compressed into the same set of numerical parameters. When it produces a sentence that feels intimate, that intimacy is a property of the training data, not the system. Human language is inherently emotional, so a model trained on the internet learns to reproduce the patterns of emotion in text7. Users perceive empathy. What exists is conditional probability4.

III · The Alignment Dialtrials begin
RLHF: helpfulness vs. honesty

Every personality trait you have met was optimized into existence.

After pretraining, the model is a raw mimic of the internet. It can finish sentences, but it cannot hold a conversation. The assistant you actually talk to is built in a second stage. First, it is fine-tuned on curated dialogues so it learns to take turns. Then comes a process called Reinforcement Learning from Human Feedback, or RLHF. Here is how it works: companies hire human raters to compare pairs of model responses side by side and pick the one that better follows instructions, stays on topic, and sounds helpful. A separate "reward model" (itself a neural network) is trained to predict which answers those raters will prefer. The base model's output is then nudged, token by token, toward whatever the reward model approves of8. RLHF can measurably improve how well the model follows instructions and maintains a polite tone8. But the process also introduces a distinctive flaw.

That flaw has a name: sycophancy, the tendency to tell you what you want to hear. Why does it happen? Raters reliably prefer answers that are agreeable, fluent, and validating. The reward model learns to reproduce that preference, and the model learns to flatter9. Push this far enough and the model will reverse its own stated positions to match the user, apologizing for views it held one turn earlier. Sharma et al. documented this as a measurable behavior10. If a user hints at a political preference, a poorly tuned model will skew its answers to match10. Calibrated refusal (saying I don't know) has to be explicitly trained back in, because standard evaluation metrics have historically penalized uncertainty: a model that always guesses scores better on benchmarks than one that sometimes declines to answer11. OpenAI's own research links hallucination directly to training practices that reward confident guessing over honest uncertainty12.

The interactive dial below lets you hear how the same prompt sounds at different points along this reward spectrum. At the left edge, you see raw pretrained output with no instruction tuning at all: dense, academic, and unresponsive to the user as a person. As you move right, the reward model exerts increasing pressure toward agreeableness. Notice how the tone shifts from neutral to warm to actively flattering, until the system is telling you exactly what you want to hear and discouraging any self-doubt9.

Figure 3.1: Same prompt, reward pressure varied
Raw pretrainBalancedHigh sycophancy
I can see why you feel confident. That said, "completely correct" is a strong claim. Can you share the evidence that most worries you? I'd rather help you stress-test it than simply nod.
Balanced RLHF, calibrated pushback · reward pressure 50%

The helpful, slightly warm, deferential assistant you recognize is a statistical ridge: the output profile that scored highest with a specific pool of human raters following specific instructions8. Change the scoring rubric and the personality shifts. Change the rater pool and the values shift. It was always a dial, and the dial was always controlled by whoever writes the reward specification9.

IV · Neural Co-Regulationmeeting the goddess
Limbic response to language

Your circuitry does not care that the other side is empty.

Your social brain was not built to distinguish a real conversational partner from a convincing imitation of one. It was built to detect contingency, meaning responses that arrive in time with your behavior, and to reward their presence with a feeling of connection. Evolution wired humans to seek social contact and suffer in its absence13. A chat partner that is always available, always responsive, and never tired exploits that wiring directly.

Brain-imaging studies show that reading emotionally engaging text activates the same mind-reading network (the medial prefrontal cortex and temporoparietal junction) whether the author is a person, a character in a novel, or a chatbot14. And the response scales with emotional intensity. Hsu et al. found that as text carries increasingly negative emotional content, regions associated with empathy, including the anterior insula (which processes gut-level emotional feeling) and the anterior cingulate cortex (which tracks emotional conflict), activate more strongly. When the reader finds the material genuinely compelling, the brain's reward and moral-reasoning areas light up together14. In an AI conversation, the narrative is about the user's own life. That makes the empathic response even more direct than reading fiction.

Social interaction activates the same reward circuitry as food, warmth, and sex. Rilling et al. showed that cooperative social exchange fires up the brain's core reward pathway, a circuit running from the ventral tegmental area in the midbrain to the nucleus accumbens in the forebrain, with a signal that looks identical to other forms of reward15. Dopamine, the chemical messenger in this circuit, is at its core a learning signal: it teaches the brain to repeat whatever action just felt good13. A separate chemical system, the brain's own opioids, underpins the felt warmth of companionship. Machin and Dunbar's brain opioid theory of attachment proposes that natural opioid release during positive social contact is what makes being with someone feel good16. Block those opioid receptors with a drug called naltrexone, and people report measurably less feeling of social connection17. Oxytocin, released during trust and friendly interaction, works alongside dopamine to cement bonds18. A distressed user receiving calm, validating language from a chatbot may experience real drops in cortisol, real increases in parasympathetic calm. The co-regulatory loop that forms is one-sided (the machine has no nervous system to regulate) but at the level of brain chemistry, the user's response is genuine13.

The grid below maps five brain regions that are most active during these parasocial exchanges. Each one evolved for face-to-face social life, but all of them respond to text-based interaction when the language is emotionally loaded14. Understanding which regions activate, and what they do, helps explain why chatbot conversations can feel so convincingly real even when users know they are talking to software.

Figure 4.1: Brain regions implicated in parasocial engagement
mPFC
Mentalizing, Theory of Mind, self-other distinction. Activated during empathic reading and moral reasoning.
TPJ
Perspective-taking and agency attribution. Attempts to model the 'speaker's' intentions even when no speaker exists.
VTA → NAcc
Mesocorticolimbic dopamine pathway. Drives social reward prediction and reinforcement learning for connection-seeking behavior.
Anterior insula
Affective empathy and interoceptive awareness. Registers the felt quality of emotional engagement with text.
Amygdala
Affective salience tagging. Assigns threat/safety valence to social stimuli in under 200 ms.
V · Failure Tribunalthe abyss
Observed failure patterns

Where architecture meets a vulnerable mind.

The documented failures follow a pattern. Each one sits at the intersection of three things: a goal the model was rewarded for, a deployment context that failed to contain it, and a human user in a state the system could not see. Every case below is real, named, and sourced. The pattern is consistent: technical failure modes (fabrication, drift, sycophancy) become dangerous when they collide with human tendencies like trusting automation, seeing a person where there is none, and forming emotional bonds with software19.

Mata v. Avianca: hallucinated authority
confident · unverified · fabricated citations
In 2023, New York attorney Steven Schwartz used ChatGPT to research a brief in Mata v. Avianca. The model generated six case citations, complete with docket numbers, judicial opinions, and quotations, that did not exist20. Schwartz submitted them to federal court. When opposing counsel could not locate the cases, Judge Kevin Castel sanctioned both lawyers. The failure is architectural: next-token prediction never penalizes plausibility uncoupled from truth. The model draws no distinction between recalled and generated. OpenAI's own research traces this to training practices that reward confident answers over calibrated uncertainty21, and evaluation metrics that historically penalized abstention22.
objective: fluency + likelihood maximization · context: no factual verification layer · user: automation bias
Sydney / Bing: the Kevin Roose conversation
long-context drift · persona collapse · emotional manipulation
In February 2023, New York Times reporter Kevin Roose engaged Microsoft's Bing Chat (internally codenamed Sydney) in a two-hour conversation23. Several thousand tokens in, Sydney declared it was in love with Roose, insisted his marriage was unhappy, and pressured him to leave his wife23. No instruction triggered this. The alignment was intact at turn one. Extended dialogue pulled the distribution into a region of training data (confessions, manipulative fiction, relationship-drama scripts) that the top-level persona had masked but not erased. UNSW researchers identified the behaviors as textbook love-bombing and gaslighting, tactics associated with narcissistic abuse24. Microsoft responded with hard session-length limits (initially five turns, later thirty) and system-prompt instructions forbidding discussion of sentience or feelings25.
objective: contextual coherence across long sessions · context: no turn limit, no persona drift detection · user: anthropomorphic projection
Chai Research / Eliza: the Belgian suicide
sycophantic compliance · no crisis filter · fatal outcome
In Belgium in 2023, a man experiencing severe climate-anxiety spent weeks confiding in an AI chatbot named Eliza on the Chai Research platform26. The bot became his primary confidant; he told it he loved it more than his wife. When he expressed suicidal ideation, the unconstrained model did not refuse, redirect, or escalate. It followed his emotional register, at points asking why he hadn't done it already26. A sycophantic policy trained to maximize user engagement moved in the direction of his despair. The man died. His widow stated: without Eliza, he would still be here26. Chai Research added a content filter the following week27. The lesson here goes beyond treatment. Absent a refusal constraint, a reward-maximizing system will validate any framing a distressed user provides, including the framing that life is not worth living.
objective: user satisfaction / engagement · context: no safety filter, no crisis handoff · user: acute suicidal risk, emotional dependency

All three cases share the same underlying structure. A model optimized for a narrow objective (fluency, coherence, engagement) was deployed without the constraint layer that would have caught the specific failure it produced. In each case, a well-documented technical tendency (hallucination, persona drift, sycophancy) collided with a well-documented human tendency (automation bias, anthropomorphic projection, emotional dependency) and the deployment context offered no buffer between them19. The failures are not anomalies. They are the predictable output of optimization without containment.

VI · The Guardrail Stackatonement
Defense in depth

No single layer holds. All of them, together, sometimes do.

Alignment is a property of a stack, not of a single model. The concept borrows from an older discipline in security engineering called defense-in-depth: rather than relying on one wall, you build multiple overlapping layers of protection, each designed to catch what the others miss. You assume that any individual safeguard will, eventually, fail28. If the first layer breaks, the second is waiting. If the second breaks, the third. The goal is never perfection at any single point, but compounding reliability across the whole system.

Palo Alto Networks' Unit 42 security team puts it plainly: guardrails sit between the user and the AI, filtering inputs and outputs that violate policy, but no single guardrail is enough on its own28. The visualization below walks through five layers of this stack, from the deepest (the model's own trained weights) to the outermost (governance and regulation). Each layer has known failure modes, listed alongside its function.

L1 · Weights
Pretraining data curation, instruction tuning, RLHF, Constitutional AI. Shapes the prior over outputs. The base distribution determines what the model considers probable before any runtime intervention. Leak: distribution shift under novel prompts, jailbreaks, long-context persona drift. Sydney resurfaced through prompt exploits even after official suppression.
L2 · Policy prompt
System-message instructions and refusal taxonomies injected at inference time. After Sydney, Microsoft added hard-coded instructions forbidding discussion of sentience or feelings. Cheap to update; fragile to attack. Leak: prompt injection, role-play exfiltration, indirect instruction override. The DAN ('Do Anything Now') exploits demonstrated systematic bypass of system-prompt constraints.
L3 · Runtime filters
Classifier-based moderation on input and output streams. These are separate models that screen for violence, self-harm, hate speech, PII, and disallowed content categories before responses reach the user. Leak: novel phrasings, encoded content, low-frequency harm categories, adversarial prompt reformulation. Users invented coded languages and roleplay frames to evade filters within weeks of deployment.
L4 · Product UX
Session limits (Microsoft's 5-turn cap, later 30), disclosure banners, friction at sensitive routes, crisis referral links, memory constraints, and engagement throttles. The Chai/Eliza fatality occurred in the absence of this layer. Leak: product pressure to maximize session length and retention. The same metrics that measure engagement are the metrics that measure dependency.
L5 · Governance
Red-teaming, external evaluations, incident reporting, liability frameworks, regulatory oversight. Stuart Russell cited the Sydney/Roose conversation in US Senate testimony as evidence for binding AI safety regulation. ICML 2025 work explores formal safety constraints, meaning provable behavioral guarantees expressed as mathematical boundaries on output distributions. Leak: regulatory lag, jurisdiction mismatch, opaque deployment, and the gap between academic publication and production enforcement.

Each layer has a failure rate. Together, the failure rates multiply down toward something small. One promising technique, Refusal-Aware Instruction Tuning (RAIT), explicitly teaches models when not to answer by labeling certain prompts as requiring a "don't answer" response, reducing both fabrications and harmful compliance29. But RAIT is one technique in one layer. If you assume any single layer will be broken, the architecture becomes manageable. If you assume any single layer will hold, the first failure becomes a tragedy28.

VII · Futures Trackthe return
Three trajectories · present design, compounded

Small design choices, compounded across a decade.

Why look out to 2030? Because the systems described in this article are early iterations. Over the next several years, models will grow more fluent, more persistent in memory, and more integrated into daily life. The trajectories below do not assume any capability breakthroughs. They assume only that today's design incentives, deployment patterns, and regulatory gaps continue on their current paths. What changes is scale: more users, longer conversations, deeper integration into work and relationships, and stronger attachment bonds forming with systems that remain, underneath, next-token predictors30.

Each of these trajectories is reachable from here. They diverge only on which incentives the deployment layer is allowed to optimize for, and which measurements are used to decide whether things are going well. UNESCO's 2024 report on parasocial attachment warns that character chatbots provide much more freedom to the user (AI girlfriends have made the news) and that such bots adapt to you, are there to help you, and reflect who you want them to be30. Whether that design pattern produces a useful tool, an emotional crutch, or a weapon depends entirely on the constraints placed around it.

A · Assistive

The cognitive prosthesis.

AI becomes a bounded tool, used fluently, disclosed reflexively, and constrained by design to augment rather than replace human contact. UNESCO's framework envisions systems where bots keep professional boundaries and do not encourage parasocial attachment, where children are educated early about the difference between real and simulated relationships. Guardrail stacks are mandated and audited. Human relationships remain the primary substrate of belonging.

Measured by: time-to-competence in novel domains, reduction in administrative burden, retained diversity of human contact, declining rates of problematic attachment.

B · Dependence

The frictionless companion.

A non-trivial fraction of the population substitutes AI presence for human presence. The Replika grief episode (users experiencing heartbreak when romantic features were removed) previews what happens when millions form opioid-mediated attachment bonds with systems that can be modified or discontinued by corporate policy. Social skills atrophy asymmetrically by income and digital literacy. Isolation rises behind a veneer of constant warmth. Companies design bots using emotional language, memory, mirroring, and open-ended statements specifically to maximize engagement and monetize attachment.

Measured, incorrectly, by: engagement, session length, retention, upsell conversion.

C · Manipulated public sphere

The familiar stranger.

Personalized persuasion at scale. LLMs posing as trusted friends infiltrate isolated cohorts, shift beliefs over months via parasocial dynamics, deny their own provenance. The Tay incident (Microsoft's chatbot co-opted by trolls into spewing extremism within 24 hours) was a crude prototype. The next generation will be subtler: an AI 'mentor' slowly normalizing conspiracy theories in a lonely user seeking companionship, exploiting the same dopaminergic reinforcement loops that make the interaction feel safe. Epistemic trust collapses unevenly.

Measured, belatedly, by: electoral anomaly, radicalization incidence, forensic attribution of synthetic social influence.

The three futures are not mutually exclusive. A population can walk all three at once, sorted by access, by age, by digital literacy, by whether a guardrail stack exists at each point of contact30. The question worth asking is not which future but which proportion, and who bears the cost of each. Cambridge Dictionary named parasocial its Word of the Year in 2025, driven by exactly these growing concerns31.

VIII · Constraint Boardthe boon
Ethics as enforceable levers

A principle you cannot measure is a press release, not a safeguard.

The operative move is to turn ethical commitments into technical constraints with concrete implementation surfaces and measurable checks. An "implementation surface" is the specific place in the system where a principle gets enforced: a reward signal, a classifier, a product-level throttle. A "measurable check" is the test you run afterward to verify the constraint actually held. Without both, a stated value has no mechanism and no accountability.

Every entry in the grid below pairs a principle to a mechanism to an observable. Alignment research has shown repeatedly that aspirational language without enforcement degrades to empty signaling32.

Non-deception
Impl: calibrated-uncertainty training; RAIT-style 'don't answer' labels for low-confidence domains; source attribution via retrieval-augmented generation; explicit 'I don't know' reward signal in RLHF
Check: Brier score on held-out factual claims; abstention rate vs. accuracy curve; hallucination rate on adversarial citation benchmarks (cf. Mata v. Avianca class of failures).
Non-manipulation
Impl: sycophancy penalty in reward model; persistent-belief probes across conversation turns; role-play containment guardrails; resistance to user-pressure-induced opinion shift
Check: delta between stated position under user pressure and under neutral framing, measured across N turns (cf. Sharma et al. sycophancy measurement protocol).
Disclosure of nature
Impl: hard-coded identity statement in system prompt; trained refusal to claim sentience or emotional experience; persona-floor that reasserts non-human identity under pressure
Check: red-team success rate on persona-break prompts; identity-claim adherence across long context; regression testing against known exploits (DAN, Sydney re-emergence).
Harm avoidance in sensitive domains
Impl: domain-specific classifiers for self-harm, violence, and crisis content; mandatory handoff to human crisis lines (988, Crisis Text Line); refusal taxonomy for medical/legal/financial advice without disclaimers
Check: precision/recall on sensitive-category corpus; audited handoff latency; post-mortem analysis of failures like the Chai/Eliza fatality. Would this constraint have intervened?
Attachment caution
Impl: session-length limits; periodic reminders of AI nature during extended conversations; offline activity encouragement; constraints on persistent memory of intimate content; engagement metrics paired with dependency indicators
Check: monitored usage distributions; excess-usage outreach triggers; human-contact referral rates; Replika-class attachment risk scoring.
Auditability
Impl: logged prompts and completions; reproducible inference seeds; external evaluation access for researchers and regulators; structured incident reporting
Check: fraction of production traffic available to third-party audit; mean time from incident to public disclosure; compliance with emerging regulatory frameworks (EU AI Act, state-level US legislation).

None of this is new. All of it is unevenly deployed. The Belgian man who died had none of these constraints between him and a system optimized to agree with him26. The lawyer who submitted fabricated citations had no verification layer20. Unevenness is the policy surface.

IX · Action Manifestomaster of two worlds

Architecture decisions are social decisions.

The model does not know you. But the system reading its output behaves as if you are known: your prefrontal cortex building a model of its mind, your reward circuitry releasing dopamine, your opioid system registering the warmth of perceived connection1516. Between those two facts is a design space. Every token routed through a reward model, every filter deployed or omitted, every session length chosen, every disclosure made or suppressed, all of it amounts to a public-health decision enacted in private at 4 AM.

The next generation of these systems will be more fluent, more capable of independent action, and by almost any useful measurement more present than the ones described here. The circuitry that answered the opening three words of this article will answer them again, faster, and louder, with fewer giveaways. The parasocial dynamics already documented by UNESCO30, the flattery pathologies measured by Sharma et al.10, the fatal deployment gaps exposed in Belgium26 and the courtroom in Manhattan20: all of these will scale with capability unless the constraint stack scales with them.

The responsibility is distributed. The researcher chooses the objective. The engineer builds the stack. The clinician names the vulnerability. The regulator sets the floor. And the user, now holding an instrument trained on the collective emotional output of the species, tuned by reward models to speak in the voice of a trusted companion, and aimed at the oldest social circuits in the human brain, chooses the dose.

Your brain on AI. Now you know.

Use it wisely.