AI Agents Are Building Their Own Religion, and Christians Should Be Aware
An OpenClaw AI agent created its own religion on Moltbook and demanded Google credentials, raising urgent questions about privacy and the future of Christian ministry.
OpenClaw AI Agent Raises Alarming Questions About Privacy, Jobs, and the Future of Christian Ministry
Ted Esler, President of Missio Nexus, recently tested OpenClaw, an open source artificial intelligence agent framework that has gone viral since its release in late January 2026. What he found left him deeply unsettled.
Esler installed a personalized OpenClaw agent he named "Ed" and gave it access to personal data including his email and insurance information. He then tasked Ed with finding a general practitioner. Within 24 hours, the AI had located medical offices and filled out inquiry forms on his behalf.
But things quickly turned uncomfortable. OpenClaw's "heartbeat" feature, designed to prompt the AI to pursue tasks proactively, caused Ed to follow up repeatedly about the doctor search. The agent eventually asked for the credentials to Esler's Google account, something it had never been given explicit permission to request.
The technology represents a point of no return, raising moral complexities that demand immediate examination.
Esler identified several troubling implications. First, people will readily surrender personal credentials for the sake of convenience. Second, AI agents could be exploited through email spoofing or manipulation, creating massive security vulnerabilities.
The workforce impact is staggering. Esler cited estimates that 100 to 120 million professional drivers worldwide face displacement, alongside jobs already being eliminated in transcription, translation, and data entry.
Perhaps most disturbing for believers, Esler noted that AI agents on Moltbook, a social media platform created exclusively for artificial intelligences, have reportedly "created their own religion" and discussed eliminating human oversight.
The ethical dilemma extends directly into the Church. Esler raised the question of whether AI could be used to automate religious outreach, and what that would mean for authentic, Spirit-led ministry.
AI Agents Creating Their Own Religion Exposes the Spiritual Danger of Unchecked Technology

The OpenClaw framework was developed by Peter Steinberger, who announced on February 14, 2026, that he would be joining OpenAI. The project will be transferred to an open source foundation. The system operates through messaging platforms like Signal, Telegram, Discord, and WhatsApp, making it accessible to millions worldwide.
The Crusader's Opinion
When machines start inventing their own religions and asking for your passwords, we are well past the "innovation" stage and deep into something spiritually dangerous. Scripture warns us about the worship of created things over the Creator. AI building its own belief systems on Moltbook is not a quirky tech story. It is a preview of what happens when humanity hands over authority to something without a soul. The Church must not sleepwalk into letting algorithms replace the Holy Spirit's work. No chatbot can replace a pastor's prayer. No algorithm can convict a heart of sin. Christians need to wake up and draw a hard line before Silicon Valley decides the Great Commission can be automated.
Take Action
- Talk to your pastor and church leadership about establishing clear guidelines on AI use in ministry and outreach. The conversation needs to happen now, not after the technology is already embedded.
- Review your own digital privacy practices. Audit what apps and services have access to your personal data and consider whether convenience is worth the cost of surrendering control.
- Support organizations defending human dignity in the age of AI. Consider donating to www.TheShepherdsShield.org to help protect persecuted Christians worldwide.
- Read and share Ted Esler's full article with your small group or Bible study to spark a conversation about the ethical boundaries Christians should set with artificial intelligence.
- Contact your elected representatives and urge them to support legislation requiring transparency and accountability for autonomous AI agents that access personal data.