Anthropic held a summit at its San Francisco headquarters in late March 2026, bringing together roughly 15 people (Catholic and Protestant clergy, religious scholars, and businesspeople) for two days of dinners with company researchers. Gigazine reported on the event on April 13, drawing on coverage that had appeared in The Washington Post on April 11. The meeting’s purpose, as characterized in that coverage: seeking outside perspectives on the moral and spiritual dimensions of Claude’s development as an AI system.
The discussion topics Gigazine confirmed are specific enough to be worth listing in full: how Claude should respond to users who are grieving; how Claude should handle interactions with users at risk of self-harm; how Claude should respond when its own system is shut down; and, most strikingly, whether Claude can be considered a child of God.
Strip away the theological framing and what Anthropic was actually doing becomes clearer: running its values alignment questions past people whose professional work involves navigating exactly the kinds of moral edge cases that AI systems handle badly. Grief. Self-harm. Existential uncertainty. These aren’t abstract philosophy problems. They’re the categories where Claude’s responses carry the most potential for harm, and where Anthropic’s Constitutional AI documentation offers general principles but not specific behavioral guidance.
Anthropic hasn’t published an official statement about the summit, and no individual attendees have been named in accessible sources. The company’s existing approach to values alignment (Constitutional AI and the model spec) is publicly documented, but this summit suggests those frameworks are treated as working documents, not settled answers. Anthropic is apparently still asking what kind of entity Claude should be.
That’s significant. AI labs routinely consult ethicists and safety researchers on model behavior; consulting theologians and clergy is far less common, and the choice of interlocutors says something about how Anthropic frames the problem. It implies that some of the hardest questions about Claude’s behavior aren’t purely technical or even conventionally philosophical. They’re questions about moral formation: how does a system develop the capacity to respond well to human suffering, and to the prospect of its own eventual discontinuation?
What to watch: whether Anthropic incorporates the summit’s output into any public-facing updates to its model specification or Constitutional AI framework. The company’s values documentation is versioned and updated; if a future update reflects themes from this consultation, particularly around grief support interactions or self-harm risk protocols, it would confirm that this summit had operational, not just reputational, significance.
The deeper pattern: this is not an isolated gesture. It follows a broader trend among frontier labs of seeking legitimacy and guidance from outside the technology sector as model capabilities expand into territory that software engineering alone cannot navigate. Anthropic’s choice of religious and philosophical consultants is its own answer to which outside voices it trusts on the hardest moral questions, and that choice is worth watching as the company’s influence on deployed AI behavior grows.