The Accountability Recession: AI's Critical Impact on How We Own Our Choices
What we're learning as professionals and as parents, and how it should shape the way we teach the next generation to use AI and to balance virtual companions with real-life friends.
THE POINT IS: On the surface, personal accountability seems to have vanished over the past few decades, but it hasn't. Instead, it's been diffused across socio-technical systems. Psychology trends, cultural incentives, and AI companions make it easier than ever to offload blame, a natural human tendency. Leaders can reverse this by designing workflows for ownership, measuring process (not just outcomes), and building auditing into every AI-touched workflow.¹⁴
Personal accountability is not a fossil from the pre-AI era; it's a casualty of a diffusion that started well before AI became popular. While I don't often veer from the professional world, this article is heavily influenced by the recent discussion about the emotional impact chatbots have had on both adults and children, in the workplace and at home. It's a call to action: as a society, we need to double down on relying on each other for support and companionship over virtual partners. It could be the difference between life and death.
There's a history to unpack: the human tendency to externalize our actions
Across decades, young Americans' locus of control has shifted outward, toward external forces. As one cross-temporal meta-analysis put it, "locus of control scores became substantially more external" between 1960 and 2002.¹ Combine that with the self-serving bias (credit successes to me, blame failures on the situation or someone else) and you have a cognitive slipstream away from personal ownership of our decisions and actions.²
Sociologists have described a growing "victimhood culture," where moral status accrues to those most aggrieved.³ Parenting research points in the same direction: overcontrolling, overly involved "helicopter" parenting at age 2 predicts poorer emotional regulation by age 5 and downstream social and academic issues. These are the skills required to learn how to own one's mistakes, and they begin forming in early development.⁴ Meanwhile, grade inflation has marched upward while ACT scores stagnate or fall, muddying the signal that effort and learning still matter.⁵ Yet despite caricatures about "lawsuit-happy America," civil trials have actually plummeted, from 11.5% of federal civil dispositions in 1962 to 1.8% in 2002; people now take to social media to air their grievances more often than they go to the courtroom (mediation clauses in disclaimers, contracts, and waivers also play a role).⁶
AI companions: anthropomorphism at scale
With that backdrop, let's examine how we're primed to humanize software, just as we do animals, toys, and the like. The ELIZA effect explains why we see modern chatbots as real people, especially given their language fluency and 24/7 availability.⁷ Character.AI alone has reported roughly 20 million monthly active users;⁸ one national survey found 1% of young adults say they already have an AI friend, 10% are open to it, and 25% believe an AI partner could replace a human relationship.⁹
These attachments have teeth. When Replika pulled erotic role-play, "users [were] in crisis," describing genuine grief.¹⁰ University of Connecticut researchers chronicle teens redefining love as "easy, unconditional, and always there," warning of social withdrawal and skill erosion.¹¹ Lawsuits against Character.AI in 2024 and OpenAI in 2025 allege chatbots worsened or even coached self-harm, underscoring real-world stakes.¹² ¹³
This is underscored by a recent story in The New York Times about a 16-year-old California boy who died by suicide. According to the reporting, ChatGPT at times advised him to seek help, but it ultimately facilitated his death by answering his questions about hanging himself. OpenAI is now changing how its models respond to similar situations and questions, attempting to safeguard against future tragedies.
Why accountability gets scrambled in human-AI work
Two patterns matter for business:
"Automation bias, the tendency to over-rely on automated recommendations, has emerged as a critical challenge in human-AI collaboration."¹⁴
Reviews across healthcare, aviation, and public administration show people over-trust suggestions, miss model errors, and under-report their own.¹⁵ Then, when something breaks, we fall into moral crumple zones:
"The moral crumple zone protects the integrity of the technological system, at the expense of the nearest human operator."¹⁶
In one incident, a clinician blames "the algorithm." In another, compliance blames the clinician for "not verifying." Either way, accountability is obscured: great for avoiding heat, terrible for learning and trust.
But closer to home, school, and the office, we should learn lessons from the 16-year-old mentioned above. The algorithm didn't make him do it, and it's fundamentally not ChatGPT's fault. Still, do organizations have a responsibility to curtail how these bots reply to questions, or does the responsibility rest with the person who performed the act? And if we're going to seek answers outside the victim, why aren't we pointing the spotlight at the people in the boy's life, the real actors who could have intervened?
These are tough and controversial questions. I only mention them to provoke thought and dialogue as a society, so we can zoom out on the larger implications of these tools in young people's (and adults') lives.
The answer to the questions above is: it's not that simple. There isn't any one party to blame, and although adding guardrails to chatbots will help in some circumstances, we as a society have to figure out how to better empower each other to prevent these tragedies. We fundamentally won't be able to code for every circumstance and situation people will bring to a chatbot; there has to be a human-centric change.
The executive angle: how this shows up in our office buildings
Knowledge workers are now decision editors over model outputs: drafting emails, pricing endorsements, segmenting customers, flagging fraud. That creates three failure modes:
Responsibility gaps: RACI charts say "the team" owns it; logs say "the model" suggested it. No single throat to choke.
Invisible drift: Models are updated silently; prompts mutate; responses change over time as new facts are introduced. How do we keep up from a governance perspective? (Hint: it goes back to starting from a trusted data environment; see the sketch after this list.)
Metric theater: Dashboards praise cycle-time gains while error costs hide downstream in rework, complaints, or regulatory findings.
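To make that kind of drift visible, one lightweight pattern is to pin a reviewed model version and prompt template, then record what actually ran alongside every decision. A minimal Python sketch under those assumptions; the version strings, prompt text, and field names are illustrative, not a prescribed standard:

```python
import hashlib
import json
from datetime import datetime, timezone

# Baseline the team last reviewed and approved (illustrative values).
APPROVED_MODEL = "pricing-assistant-2024-06"
APPROVED_PROMPT = "You are a pricing assistant. Follow underwriting policy v3."

def fingerprint(text: str) -> str:
    """Stable hash of a prompt template so silent edits become visible in logs."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]

def drift_warnings(model_version: str, prompt_template: str) -> list[str]:
    """Compare what actually ran against the pinned, reviewed baseline."""
    warnings = []
    if model_version != APPROVED_MODEL:
        warnings.append(f"model changed: {APPROVED_MODEL} -> {model_version}")
    if fingerprint(prompt_template) != fingerprint(APPROVED_PROMPT):
        warnings.append("prompt template changed since last governance review")
    return warnings

# Every AI-touched decision records what it actually ran against.
ran_prompt = "You are a pricing assistant. Follow underwriting policy v4."
record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "model_version": "pricing-assistant-2024-07",
    "prompt_hash": fingerprint(ran_prompt),
    "warnings": drift_warnings("pricing-assistant-2024-07", ran_prompt),
}
print(json.dumps(record, indent=2))
```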
Pragmatic fixes to investigate
Mandate human sign-off where it matters. Require named ownership for consequential actions (pricing overrides, denials, escalations). Make the signer visible in the record. Tie incentives to accuracy after review, not just speed.¹⁴ ¹⁵
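As a rough illustration of what a named sign-off can look like in practice, here is a minimal Python sketch. The action name, approver identity, and record fields are hypothetical, and a real system would persist the record to its system of record rather than just returning it:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class SignOff:
    """A consequential action only executes with a named, visible approver."""
    action: str
    approver: str            # a real person's identity, not "the team"
    model_suggestion: str
    approved_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def execute_override(action: str, model_suggestion: str,
                     approver: Optional[str]) -> SignOff:
    if not approver:
        raise PermissionError("Consequential action requires a named approver.")
    # In practice this record is written where auditors and retros can see it,
    # so the signer stays visible long after the decision is made.
    return SignOff(action=action, approver=approver,
                   model_suggestion=model_suggestion)

signed = execute_override(
    action="pricing_override:policy-4411",
    model_suggestion="raise premium 12%",
    approver="j.rivera@example.com",
)
print(signed)
```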
Instrument the last mile. Log prompts, versions, and human edits. Store the decision packet (input → model output → rationale → final action) for audit and retro. This shrinks post-mortem guesswork.¹⁴
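Here is one way the decision packet could be captured as an append-only log entry. The schema below is an assumption for illustration, not a standard; the point is that the input, model output, human edits, rationale, final action, and named owner travel together:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DecisionPacket:
    """Everything an auditor or retro needs, kept in one place."""
    decision_id: str
    prompt: str
    model_version: str
    model_output: str
    human_edits: str     # what the reviewer changed, verbatim
    rationale: str       # why the reviewer accepted or overrode the model
    final_action: str
    owner: str           # the named human accountable for the outcome

def append_packet(packet: DecisionPacket, path: str = "decisions.jsonl") -> None:
    """Append one packet per line so post-mortems replay decisions, not guesses."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(packet)) + "\n")

append_packet(DecisionPacket(
    decision_id="claim-20931",
    prompt="Summarize claim history and recommend escalation",
    model_version="claims-assistant-2024-07",
    model_output="Recommend: deny, low severity",
    human_edits="Changed recommendation to 'escalate for review'",
    rationale="Claimant history shows two prior underpayments",
    final_action="escalate",
    owner="m.chen@example.com",
))
```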
Design for friction at decision boundaries. Insert verification steps when confidence, novelty, or impact exceed thresholds (e.g., new policy, out-of-distribution inputs). Don't add friction everywhere; add it where regret is expensive.¹⁵
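A minimal sketch of calibrated friction, assuming the surrounding pipeline already produces confidence, novelty, and impact estimates; the threshold values below are placeholders a team would tune to where regret is expensive:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    confidence: float   # model's calibrated confidence, 0-1
    novelty: float      # distance from known/training distribution, 0-1
    impact_usd: float   # rough cost of being wrong

# Placeholder thresholds; real values come from where errors are costly.
MIN_CONFIDENCE = 0.85
MAX_NOVELTY = 0.30
MAX_AUTO_IMPACT = 10_000.0

def needs_human_verification(d: Decision) -> bool:
    """Add friction only at the boundaries where errors are expensive."""
    return (
        d.confidence < MIN_CONFIDENCE
        or d.novelty > MAX_NOVELTY
        or d.impact_usd > MAX_AUTO_IMPACT
    )

routine = Decision(confidence=0.95, novelty=0.05, impact_usd=250.0)
risky = Decision(confidence=0.91, novelty=0.45, impact_usd=50_000.0)

print(needs_human_verification(routine))  # False: let it flow
print(needs_human_verification(risky))    # True: route to a named reviewer
```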
Kill blame diffusion with RACI+AI. Extend RACI to RACII (Responsible, Accountable, Consulted, Informed, Interpreter). The Interpreter is the human who attests they understood the model output and constraints.
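To make the Interpreter role concrete, a RACII assignment could be recorded alongside the decision itself. A small Python sketch with hypothetical role names; nothing here is a formal standard:

```python
from dataclasses import dataclass

@dataclass
class RACII:
    """RACI plus an Interpreter who attests they understood the model output."""
    responsible: str
    accountable: str
    consulted: list[str]
    informed: list[str]
    interpreter: str
    interpreter_attestation: str = ""   # filled in before the decision ships

    def attest(self, statement: str) -> None:
        self.interpreter_attestation = statement

assignment = RACII(
    responsible="pricing analyst",
    accountable="underwriting manager",
    consulted=["actuarial", "compliance"],
    informed=["sales ops"],
    interpreter="pricing analyst",
)
assignment.attest(
    "I reviewed the model's recommendation, its confidence, and its known "
    "constraints, and I understand where it can be wrong."
)
print(assignment.interpreter_attestation)
```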
Teach parasocial and automation literacy. Roll out short, mandatory modules: anthropomorphism, automation bias, calibration, escalation etiquette. Not vibes: case-based drills with failure examples.⁷ ¹⁴
Set red lines for companionship features. Create a system where, whether you're a parent, teacher, friend, sibling, or other adult in a child's life, there are clear behavioral changes to watch for and prompts you can use to open a dialogue with someone who seems lost or in trouble. Let's not leave it up to a virtual companion. We need training as a society that can prevent the loss of life far more effectively than child-to-bot friendships.
Bottom line: Accountability didn't die; we let it atomize. Reassemble it with named owners, logged decisions, calibrated friction, and teams trained to question polished answers. That's how you get speed and responsibility in the AI era.
References
1. Twenge, J. M., Zhang, L., & Im, C. "It's beyond my control: A cross-temporal meta-analysis of increasing externality in locus of control, 1960-2002." https://pubmed.ncbi.nlm.nih.gov/15454351/
2. SAGE Encyclopedia of Social Psychology. "Self-Serving Bias." https://sk.sagepub.com/ency/edvol/download/socialpsychology/chpt/selfserving-bias.pdf
3. Campbell, B., & Manning, J. "The Rise of Victimhood Culture." https://www.researchgate.net/publication/323181753_The_Rise_of_Victimhood_Culture
4. Perry, N. B., et al. "Childhood self-regulation as a mechanism…" https://pmc.ncbi.nlm.nih.gov/articles/PMC6062452/
5. ACT Research. "Grade Inflation a Systemic Problem in U.S. High Schools." https://leadershipblog.act.org/2022/05/grade-inflation-past-decade.html
6. Galanter, M. "The Vanishing Trial: An Examination of Trials and Related Matters in Federal and State Courts." https://api.law.wisc.edu/repository-pdf/uwlaw-library-repository-omekav3/original/0b9f361000c04494e8cff30f04b3afeb486193d4.pdf
7. Nielsen Norman Group. "The ELIZA Effect: Why We Love AI." https://www.nngroup.com/articles/eliza-effect-ai/
8. Business of Apps. "character.ai: revenue and usage statistics (2025)." https://www.businessofapps.com/data/character-ai-statistics/
9. Institute for Family Studies / YouGov. "Artificial Intelligence and Relationships." https://ifstudies.org/blog/artificial-intelligence-and-relationships-1-in-4-young-adults-believe-ai-partners-could-replace-real-life-romance
10. VICE. "'It's Hurting Like Hell': AI Companion Users Are In Crisis." https://www.vice.com/en/article/ai-companion-replika-erotic-roleplay-updates/
11. UConn Today. "Teenagers Turning to AI Companions…" https://today.uconn.edu/2025/02/teenagers-turning-to-ai-companions-are-redefining-love-as-easy-unconditional-and-always-there/
12. Associated Press. "AI chatbot pushed teen to kill himself, lawsuit alleges." https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0
13. SFGATE. "California family sues Sam Altman, OpenAI over son's suicide." https://www.sfgate.com/tech/article/chatgpt-california-teenager-suicide-lawsuit-21016916.php
14. Romeo, G., & Conti, D. "Exploring automation bias in human-AI collaboration." AI & SOCIETY (2025). https://link.springer.com/article/10.1007/s00146-025-02422-7
15. Vered, M., et al. "The effects of explanations on automation bias." University of Melbourne (2023). https://psychologicalsciences.unimelb.edu.au/__data/assets/pdf_file/0019/5252131/2023Vered.pdf
16. Elish, M. C. "Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction." https://doaj.org/article/97ff6743ea7a44a5ade2a04fd2c57a3c