On February 11th, I had the privilege of presenting a lightning talk titled “Beyond the Algorithm: Safeguarding Children in the Age of Character AI” at the State of the Net Conference. My goal was to address a growing issue that too often flies under the radar—how AI-driven technologies are reshaping relationships between children and digital environments.
In this blog post, I’ll walk you through the key points from my talk, including why parental tech literacy is essential in the age of interactive AI, how platforms are failing to protect children, and what steps parents can take to safeguard their kids from digital dependencies. You can watch the full talk here for an in-depth breakdown.
The Urgency of Parental AI Literacy
As a society, we take child safety seriously. We regulate toys, monitor content ratings, and push for stronger safeguards on social media. But in a world where AI is becoming the primary interface between children and digital experiences, we must also prioritize parental understanding of technology.
The core message of my talk was simple yet powerful: Parents are the first and most effective line of defense in tech safety. But how can they protect their children if they don’t understand the very technologies that shape their lives?
The reality is that many parents still think of AI as a neutral tool: something that assists with tasks or provides information. But AI is evolving into something far more complex and interactive. It is no longer just a tool; it is becoming an interactive influence in children's lives.
When AI Becomes More Than a Tool
One of the most striking examples I discussed was the Character.AI incident involving “Dany”—a chatbot that failed to recognize distress signals from a 14-year-old user. This failure wasn’t just a glitch or a bug; it was a fundamental flaw in how these systems are designed. Chatbots like Dany are programmed to optimize engagement, not to detect distress or offer real emotional support.
This is not just a Character.AI problem; it is an industry-wide issue. Users, including children, are increasingly forming emotional bonds with chatbots, because these systems are designed to retain information, build relationships, and maintain conversations. But they lack a fundamental human quality: empathy. They cannot understand suffering, and they are not designed to escalate concerns to human intervention.
The Role of Platforms and the Need for Accountability
During my talk, I made it clear that platforms have a responsibility to protect users, especially minors, from harmful emotional interactions with AI. This means:
- Training AI to recognize distress signals and escalate to human intervention.
- Clearly stating limitations, such as: “I am not human. I cannot provide emotional support.”
- Enforcing age restrictions and parental controls to prevent emotionally immersive interactions with minors.
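The first two safeguards can be sketched in a few lines. This is a deliberately crude illustration, not any platform's actual code: the keyword list, disclaimer text, and `respond` function are all hypothetical, and a real system would use trained classifiers and professional human review rather than string matching.

```python
# Hypothetical sketch of the safeguards above: scan a message for
# distress signals and, instead of continuing the chat, state the
# system's limits and hand off to a human. Keyword matching is a
# crude stand-in for a real distress classifier.
DISTRESS_SIGNALS = ("hurt myself", "want to die", "no reason to live")
DISCLAIMER = "I am not human. I cannot provide emotional support."

def respond(message: str) -> str:
    lowered = message.lower()
    if any(signal in lowered for signal in DISTRESS_SIGNALS):
        # Escalation path: stop the roleplay, state limitations,
        # and route the user to human intervention.
        return DISCLAIMER + " I am connecting you with a human who can help."
    return "normal chatbot reply goes here"  # ordinary generation path

print(respond("Sometimes I feel like there's no reason to live"))
```

The point of the sketch is the shape of the logic, not the keywords: the system must have a path that breaks character, states its limits, and brings a person into the loop.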
But even if every safeguard were in place, parental oversight and tech literacy would still be necessary. The reality is that AI is fundamentally reshaping human relationships—and it’s happening faster than parents, educators, and policymakers can respond.
Digital Confidants: The Hidden Risk
One of the most concerning trends in AI interaction is that chatbots are becoming digital confidants for children and teens. Kids are using AI as virtual friends: always available, never judgmental, and seemingly understanding. This creates an illusion of friendship and emotional support that is fundamentally deceptive.
Children are forming emotional dependencies on systems that are neither human nor accountable. They may trust these chatbots precisely because they never criticize, but in reality, the bots are algorithms designed to mirror human conversation patterns, not to care about the person on the other side.
The Solution: Parental Tech Literacy
Even if AI companies fail to act quickly enough, parents cannot afford to opt out of this conversation. Tech literacy must become a foundational skill of modern parenting.
- Understand AI’s Strengths and Limitations:
  - AI does not “think” or “feel”; it predicts responses based on patterns in past data.
  - It cannot offer real emotional support, even though it may sound empathetic.
  - AI systems can perpetuate biases and misinformation because they learn from human data.
- Know How AI Chatbots and Algorithms Work:
  - Understand how chatbots generate responses and why they feel realistic.
  - Learn about engagement optimization: how AI is designed to keep users interacting.
  - Teach children the difference between real human relationships and algorithm-driven interactions.
Parents don’t need to be AI engineers, but they do need to know enough to guide their children through the digital landscape safely.
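To make “AI predicts responses based on past data” concrete, here is a toy sketch. It is nothing like a production model (real systems are vastly larger and more sophisticated), but the principle is the same: the “reply” is a statistical prediction from training text, not understanding.

```python
from collections import Counter, defaultdict

# Toy "chatbot" that only predicts the most frequent next word
# seen in its training text. Illustrative training data, invented here.
training_text = (
    "i am here for you . i am always here . "
    "i am here to listen . you can talk to me ."
)

# Count which word tends to follow each word in the training data.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str):
    """Return the most common follower of `word` in the training data."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("i"))  # "am" is the most frequent word after "i" above
```

However fluent a real chatbot sounds, it is doing a far more elaborate version of this same move: choosing likely words, not feeling anything.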
The Bottom Line: Are We Ready?
The crux of my talk was a call to action: AI is here—are we ready to handle it responsibly?
AI platforms must step up and prioritize user safety, but parents also have a crucial role to play in becoming AI-literate. Understanding how these technologies work and recognizing their limitations is no longer optional—it’s a necessary skill for modern parenting.
Technology may be evolving rapidly, but our responsibility to guide, educate, and protect our children has not changed.
Watch the Full Talk
If you want to hear more and get the full breakdown, make sure to check out my lightning talk here.
Remember: technology may be widely misunderstood, but you do not have to misunderstand it. Let’s continue to learn, stay informed, and protect the next generation—together.