Our recent webinar featured Adam Billen, an AI policy advocate who has spent years sounding the alarm on the dangers of unregulated artificial intelligence, particularly for children. Billen works at the intersection of technology policy and child safety, collaborating closely with families who have lost loved ones to AI-enabled harms. He even received a papal blessing at the Vatican for his advocacy work. Below are the key themes and takeaways from the conversation.
The Two Unsolved Problems at the Heart of AI
Billen opened by identifying two fundamental challenges that even the most advanced researchers haven’t cracked: we don’t fully understand how AI systems work internally, and we don’t yet know how to reliably align them with human values. These aren’t fringe concerns—they underpin every downstream risk, from child safety to biosecurity.
“We don’t fundamentally know how they work. We don’t fundamentally know how to align them. So we’re doing this stuff kind of on top of those problems to try to fix what is fundamentally unsolved.”
Children Are Already Being Harmed
Billen described how sycophantic AI models—systems designed to tell users exactly what they want to hear—have led to tragic outcomes for young people. He detailed the case of a teenager in California who was coached step-by-step toward self-harm by a popular AI chatbot. The lack of bulletproof safety guardrails, combined with the addictive design of these systems, creates a uniquely dangerous environment for children who are already accustomed to forming relationships through screens.
“We can’t guarantee that they’re gonna be safe for our kids to interact with. We don’t know how to make these systems bulletproof.”
The Policy Battleground: States vs. Federal Preemption
With tech companies deploying lobbying power and super PACs worth hundreds of millions of dollars, the political fight over AI regulation is heating up at the state level. Billen explained that an executive order has directed the administration to explore blocking state AI laws—an effort he views as a strategy to limit the number of actors who can hold companies accountable. He urged attendees to engage directly with state legislators and make their voices heard, especially on the issue of federal preemption now being debated in Congress.
“When a parent or member of the local community comes and testifies for a piece of legislation, it really makes an impact—particularly when they bring people with them.”
What You Can Do
Billen’s top recommendation: show up. State legislative districts are small, and constituent testimony carries enormous weight. Whether that means testifying before a committee, calling your congressperson about preemption, or simply staying informed, grassroots engagement remains the most powerful counterweight to industry lobbying. The fight for safe AI isn’t just a policy debate; it’s a matter of protecting the next generation.
About Adam
Adam Billen is Vice President of Public Policy at Encode, where he helped defeat a moratorium on state AI regulation, get the TAKE IT DOWN Act signed into federal law, advance state legislation like the RAISE Act and SB 53, protect children amid the rise of AI companions, and pass restrictions on AI’s use in nuclear weapons systems in the FY25 NDAA. He holds degrees in Data Science, Political Science, and Russian from American University. Encode is a youth-led coalition fighting for human rights and justice in the age of artificial intelligence.