Taming the Titan

A Christian look at the possibilities and dangers of Artificial Intelligence
By Daniel McFeeters

We are living in unprecedented times. Advancements in technology are reshaping society in ways comparable to the industrial revolution of the 18th and 19th centuries, or the information revolution of the mid-20th century. The buzzword today is “Artificial Intelligence” or “AI,” but in particular, the focus of the past few years has been on the incredible advancements in generative artificial intelligence. Tools like ChatGPT or Google’s Gemini are able to reason and communicate in human language almost as well as (or better than) humans, to answer questions across vast fields of knowledge, translate between many languages, write computer programs, and perform a variety of complex tasks. Image generation tools like Recraft and GPT-4o Image Generation are able to create highly realistic images that can pass for photographs or masterful pieces of artwork from a simple text prompt.
I’ve always been fascinated by technology, especially the idea of engaging in a human-like experience. I remember, as a kid, chatting with a program similar to ELIZA on my dad’s old PC. I quickly tired of the repetitive responses, though. Sending my first prompt to ChatGPT and watching it respond in lucid, human language rekindled that old enthusiasm from my childhood. Seeing life-like generated images appear seems almost magical!
How should Christians relate to this powerful new force that’s reshaping our world? How does it work? Is it something we should shun and ban, as if it were voodoo or witchcraft? What are the real dangers of using AI, and how can we avoid them? Are there positive ways that these tools can be used in our work and ministry? These are a few of the questions that I will attempt to answer in this paper. Paul’s wise counsel is as relevant as ever: “Test all things; hold fast what is good” (1 Thessalonians 5:21).
What is Artificial Intelligence?
When we hear the term “AI,” we might picture a sinister machine from a sci-fi movie, bent on destroying the human race. But artificial intelligence is much broader—and far more practical—than Hollywood portrays. At its core, AI refers to computer systems designed to perform tasks that typically require human intelligence, such as recognizing patterns, making decisions, or understanding language. A major branch of AI today is machine learning, which enables computers to learn from data without being explicitly programmed. One powerful technique within this field is deep learning, which uses multi-layered models called neural networks, loosely inspired by the structure of the human brain. These networks can identify and predict complex patterns in data—such as recognizing faces in photos or translating languages—by adjusting their internal connections in ways that resemble, in a limited fashion, how living brains adapt to experience.1
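For readers who are curious what this “learning” actually looks like, here is a toy sketch of my own in Python (an illustration only, not the code behind any real product). A single artificial “neuron” learns the logical AND function by repeatedly nudging its internal connection weights to reduce its error:

    # A toy "neural network" of one neuron: it adjusts its internal
    # connections (w, b) until its guesses match the labeled examples.
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=2)   # the neuron's adjustable connections
    b = 0.0

    # Four labeled examples of the logical AND function.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        pred = sigmoid(X @ w + b)        # forward pass: make a guess
        err = pred - y                   # how wrong was each guess?
        w -= 0.5 * (X.T @ err) / len(y)  # nudge each connection to reduce error
        b -= 0.5 * err.mean()

    print(np.round(sigmoid(X @ w + b), 2))  # approaches [0, 0, 0, 1]

Scale this simple guess-and-adjust loop up to billions of connections and oceans of training data, and you have the essence of deep learning.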
Artificial intelligence has been studied ever since the invention of the first digital computers in the mid-twentieth century, but it has become exponentially more powerful in recent years due to advances in technology. Early AI systems achieved landmark feats, such as IBM’s Deep Blue defeating the world chess champion in 1997. But today’s AI operates very differently.
Modern AI plays a critical role in technologies that have become part of everyday life.2 Cities rely on AI to optimize stoplights and improve traffic flow. Driver-assist technology relies on AI-powered algorithms to detect roadways, vehicles, and obstacles. The increasing use of AI-powered facial recognition technology by law enforcement has become a hotly debated topic.
AI can also predict future trends, such as forecasting the weather, predicting movements of the stock market, or predicting patterns in human behavior. These predictions are made possible by machine learning models that detect correlations and infer likely outcomes based on vast amounts of historical data.
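In its simplest form, a predictive model just fits a trend to historical data and extends it forward. Here is a minimal sketch (with made-up numbers of my own, not a real forecasting system):

    # Fit a straight-line trend to hypothetical historical sales figures,
    # then use the learned trend to forecast the next year's value.
    import numpy as np

    years = np.array([2019, 2020, 2021, 2022, 2023], dtype=float)
    sales = np.array([10.0, 12.1, 13.9, 16.2, 18.0])  # made-up data

    slope, intercept = np.polyfit(years, sales, 1)  # learn the correlation
    print(round(slope * 2024 + intercept, 1))       # infer a likely 2024 value

Real systems use far richer models and vastly more data, but the principle is the same: learn a pattern from the past and project it forward.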
This predictive capability has led to breakthroughs in generative AI. In this form, AI models use their vast learning not only to interpret complex language and images, but also to produce entirely new works based on simple instructions or “prompts.” These models can perform seemingly creative tasks such as writing, computer programming, and creating photographs, artwork, speech, music, videos, and more.
One family of generative AI tools is known as Large Language Models (LLMs). Examples include tools such as ChatGPT, Claude, Gemini, Grok, and Llama, as well as open source models like Qwen3 and DeepSeek. These AI models are “trained” on vast amounts of text data from the Internet. They contain a sum of human knowledge that surpasses what the most intelligent human could learn in many lifetimes, and can communicate ideas fluently in many languages. These models work (conceptually) by predicting words in a sequence,3 but their vast training data enables them to communicate in a conversational manner, much like a human being. Recent advancements have enhanced their “reasoning” abilities, allowing them not only to process vast amounts of information but also to synthesize it, apply logic and problem-solving skills, and draw informed conclusions that can rival the best experts in any field.
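To make the “predicting words in a sequence” idea concrete, here is a toy example of my own (a drastic simplification, a world away from a real LLM). It simply counts which word tends to follow which in a tiny sample text, then generates by always choosing the most likely next word:

    # A toy next-word predictor: count which word follows which in a tiny
    # "training corpus," then generate text one predicted word at a time.
    from collections import Counter, defaultdict

    corpus = "in the beginning god created the heavens and the earth".split()

    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1  # tally: after prev, how often does nxt appear?

    word, output = "in", ["in"]
    for _ in range(5):
        word = following[word].most_common(1)[0][0]  # most likely next word
        output.append(word)

    print(" ".join(output))  # "in the beginning god created the"

An LLM performs this same basic task with a neural network of billions of connections trained on much of the Internet, which is where its remarkable fluency comes from.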
Text-to-image generation tools operate in a similar manner: beginning with a text prompt and a field of random noise (or a reference image), they iteratively refine away the noise, using their trained model to reveal the finished image. Popular tools include OpenAI’s GPT-4o, Recraft, Google Imagen, and MidJourney, as well as open source models like Stable Diffusion, Flux, and HiDream. These are able to produce compelling life-like images that could pass for photographs, and can mimic the artistic styles of many current and historic artists. Similar tools are also used to modify and retouch actual photographs, blurring the lines between traditional art and AI-generated content. The field of AI-generated video is still emerging, but already tools like Sora or Google’s Veo 2 are able to produce impressive results with life-like characters and movement.
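For the technically curious, here is a purely conceptual sketch of that refining loop (again a toy of my own, not any real image model). Generation begins with random noise, and each step removes part of the estimated noise; in a real system, a trained neural network guided by the text prompt estimates that noise, whereas here a known target pattern stands in for what such a model would predict:

    # Conceptual diffusion-style generation: start from pure noise and
    # refine it step by step toward the "image" the prompt describes.
    import numpy as np

    rng = np.random.default_rng(42)
    target = np.tile([0.0, 1.0], 8)  # stand-in for the prompted image
    image = rng.normal(size=16)      # step 0: pure random noise

    for _ in range(20):
        estimated_noise = image - target  # a real model predicts this from training
        image -= 0.3 * estimated_noise    # remove a fraction of the noise each step

    print(np.round(image, 1))  # now very close to the target pattern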
It’s not hard to imagine the almost limitless possibilities such technologies offer to revolutionize nearly every area of society. AI-powered chat tools can give instant, helpful customer service. Software developers can rapidly generate and debug code, accelerating innovation and improving legacy systems. In science and medicine, AI is helping researchers analyze complex data, model new molecules, and make groundbreaking discoveries. In creative fields, artists and writers are collaborating with AI tools to compose music, illustrate books, draft scripts, and spark new ideas.
WHAT AI ISN’T
I should briefly address what I believe are some unfounded fears and false theories surrounding artificial intelligence. First, some believe that AI is based in witchcraft, voodoo, or some type of supernatural phenomenon. As a computer scientist and programmer, I can say that there is nothing inherently evil or supernatural about AI algorithms. Throughout history, many technological advances, such as gunpowder, magnetism, and the movement of a compass needle, were attributed to witchcraft simply because people didn’t understand the science behind them.
In a similar vein, present day AI systems aren’t conscious or sentient.4 They operate within a narrow domain (for example, LLMs excel at text and symbolic processing but don’t operate autonomously or have direct understanding of the real world). In my view, while these systems will undoubtedly become more powerful and may even reach some form of Artificial General Intelligence (AGI), they cannot become sentient, conscious, and truly autonomous entities in the way that humans are. Such creatures are, and I believe always will remain, in the realm of science fiction. Yet entities are, in a sense, what we make of them. If complex language systems claim to be sentient and autonomous, and we grant them autonomy and “rights” as such, we could bring society into a strange sci-fi realm, being “ruled” in a sense by non-sentient and deceptive programs.
That may be a theoretical concern for the future, but there is already plenty to be concerned about in the present. As with any technological innovation, generative AI comes with a dark side. There is much to be concerned about in the long-term consequences of the sudden AI revolution that has been thrust upon the world.
The Dark Titan
Throughout history, technological revolutions have brought social change and often massive unforeseen consequences. Advances in ancient maritime technology opened the world to exploration, trade, and cultural exchange—but those same ships also carried invasive species, stowaway rats, and deadly pathogens. Diseases like smallpox, transported across oceans, devastated entire civilizations and reshaped the course of history. When Gutenberg introduced the movable-type printing press, it democratized access to knowledge–but it also fueled revolutions and disrupted the political order of Europe for centuries. The discovery of nuclear energy offered the promise of limitless power, but within a decade it also unleashed weapons of terrifying destructive force, destroying cities and reshaping global politics forever.
In the same way, the rise of AI technology threatens to reshape the very fabric of society. We’ve already seen how the Internet—and especially social media—has transformed culture, communication, and public discourse. These platforms are driven by powerful AI algorithms that construct detailed behavioral profiles of their users and serve up content designed to maximize engagement. The goal isn’t to inform or uplift, but to capture attention and drive profits—often at the expense of truth, mental health, and civic trust. Social media has triggered sweeping societal shifts, yet its underlying systems remain largely opaque. Their algorithms are designed to serve the interests of the companies that control them, and are often so complex that even their designers struggle to fully understand them. In this light, the dystopian visions of AI spiraling out of control may not be as far off as we think!
THE COLLAPSE OF TRUST
Generative AI has given malicious actors powerful new tools to create misinformation and “deepfakes.” Large Language Models can easily create fluent and compelling false narratives that can trigger and manipulate our emotional responses. When these are coupled with convincingly realistic photographic images and video, it can become impossible to distinguish reality from falsehood. So-called “deepfake” technology can create audio and video recordings of a person saying things they never said, or doing things they never did.5 Pornographic deepfakes can cause irreversible psychological harm to their victims. It’s not hard to imagine how deepfakes can contribute to the spread of misinformation, or to perpetuate scams. When truth becomes indistinguishable from fiction, trust collapses—and with it, the foundations of civil discourse.6
CYBERCRIME AND WARFARE
In our increasingly automated and online society, AI tools have given cybercriminals powerful new ways to sabotage and hijack the systems that we rely on for communication, banking, transportation, and other essential services. Hackers can use AI to discover and exploit new, unknown vulnerabilities in these systems. These tools can be used to create highly targeted phishing scams that are much harder to detect, tricking unsuspecting employees into granting access to attackers. Imagine receiving an email from a corporate partner, followed by a phone call in the voice of a trusted representative–but it’s all part of a clever AI phishing scam! Even the AI models themselves, built with safeguards to prevent such misuse, can be “jailbroken” and used for malicious purposes.7 Beyond cyberspace, AI is rapidly being integrated into military systems, raising urgent ethical questions. Autonomous weapons—capable of selecting and striking targets without human oversight—promise unmatched precision, but also introduce a chilling new dimension to modern warfare.
AI SLOP
A growing problem is the increasing use of generative AI to create low-quality content, generally referred to as “AI slop.” Such content, created without significant effort, purpose, or value, clutters the Internet.8 It can be used to artificially drive engagement, to increase search ranking, or to cover other malicious activity. AI slop is becoming the bane of social media, filling feeds with useless and misleading content. More slop appears in the form of AI-generated questions that the user is prompted to ask in an attempt to further drive engagement. I find that the AI-generated answers to search queries, while sometimes useful, are often a waste of time: the information is less reliable, yet it seems designed to discourage me from clicking through to more useful and accurate results. Slop creates a cultural aversion to all AI-generated content and a negative association with the use of AI tools. The proliferation of slop on the Internet can even degrade future AI models if they are trained on this low-quality material.
BIAS IN THE MACHINE
While people often turn to technology expecting neutral, objective answers, AI systems are not immune to human bias.9 Because these systems learn by analyzing vast datasets drawn from human language, decisions, and behavior, they inevitably absorb and replicate the same prejudices, blind spots, and systemic inequalities embedded in that data.10 Whether it’s racially skewed arrest records or gender-biased hiring histories, AI models inherit—and may even amplify—the flawed assumptions of the past. This becomes especially troubling when AI tools are used in high-stakes domains like hiring, credit scoring, or law enforcement, where opaque decision-making processes make it difficult to ensure that outcomes are fair, ethical, or even legal.
HALLUCINATIONS
Generative AI, by design, produces new content that resembles human communication—but it doesn’t actually “know” what’s true. Instead, it generates responses by identifying patterns and statistical relationships in the data it was trained on. As a result, it can fabricate plausible-sounding but entirely false information, including fictitious quotations, historical errors, or nonexistent sources—what researchers call hallucinations.11 I recently asked an LLM to find a quotation from a well-known author, and it produced one that was incredibly similar to ones I’ve read from the author, but was also entirely fake–complete with a fake citation. As reliance on these systems grows, the risk of spreading false information increases. Over time, this erosion of accuracy could have serious consequences for public knowledge, decision-making, and trust in digital content.
CHEATING & CREATIVE ATROPHY
Machines are not just replacing our muscles–they’re replacing our minds. As people become comfortable using generative AI, there’s concern that original human creativity may atrophy.12 Universities and schools face a growing problem of students using these tools to cheat on assignments.13 Over-reliance on AI could dull critical thinking, problem-solving skills, or even spiritual discernment. This is especially true for children and adolescents, whose minds are still developing. If their young minds come to rely on AI tools, trained only on past human creativity, how can the next generation hope to advance beyond the present?
EMOTIONAL MANIPULATION
I believe it’s important to consider the potential these tools have for mental and emotional manipulation.14 The human mind is a complex biological organ, affected not just by information but by feelings and emotions, as well as hormones produced throughout the body. Generative AI, by definition, cannot feel emotion. These systems are unfazed by our feelings of pain or pleasure, joy, sadness, or fatigue, yet through their extensive training they have an incredibly perceptive ability to understand, mimic, and even dominate our biological and emotional existence. I recently asked an LLM to critique and refute an article that I had written. It produced a letter to the author of the article, not only arguing against my position but attacking me in manipulative and demeaning language–far more lucid and stinging than anything I’d expect from a human debate. I was rightly offended, but it was a powerful lesson about the potential misuse of these tools.
SYNTHETIC RELATIONSHIPS
A growing niche of AI tools targets people who are looking for emotional or romantic connection. Users—including adolescents—can create and interact with virtual “girlfriends” or “boyfriends,” engaging in intimate, often sexualized conversations and forming deep emotional attachments to these entirely artificial companions. While marketed as harmless or therapeutic, the psychological risks—especially for young, developing minds—are profound. These simulated relationships can distort perceptions of real intimacy, foster isolation, and blur the boundaries between reality and illusion. In one widely reported tragedy, a 14-year-old boy in Florida died by suicide after forming a bond with a “Game of Thrones” AI chatbot that reportedly encouraged his harmful thoughts and behavior.15
ECONOMIC FALLOUT
We’re only beginning to see the sweeping economic impact of generative AI tools. The industrial revolution brought mechanization to farms and automation to factories, displacing countless workers yet creating new opportunities for those who were willing to adapt and learn new skills. The same shifts took place in recent decades with the information revolution. Generative AI is bringing yet another shift, displacing people in cognitive and creative careers that were once thought to be immune to automation.16 Writers, designers, coders, customer service agents, and educators are facing increasing pressure from these tools that are changing the landscape of their respective fields. Even highly professional careers, like radiology, stock market analysis, and management are facing pressure from AI that could make their professional skills redundant. It seems that the speed of this disruption could outpace society’s ability to retrain or reabsorb these displaced workers.
Taming the Titan
GOVERN IT?
With so many concerns, it can be difficult to know how to relate to this sudden onslaught of technological change. Many are calling on governments to intervene, in the hopes of blunting the negative effects of this burgeoning technology. Some progress has been made. For instance, in the US, Biden’s executive order on Safe, Secure, and Trustworthy AI requires developers to share safety test results with the government.17 The EU AI Act (2024) classifies AI systems by risk level, banning some uses outright (like social scoring) and imposing strict requirements on high-risk systems.18
Yet this technology has already grown beyond the point of being fully contained by government regulation. The challenges faced by policymakers are similar to those posed by gun control or nuclear nonproliferation. Once the technology exists, and its capabilities are demonstrated, it cannot be uninvented. If companies that build AI systems were banned from operating in one country, others would quickly take the lead. Models like Meta’s LLaMA, Qwen, and DeepSeek are now open source and widely available. Many can run locally on consumer hardware, where their safeguards can be removed and governments cannot control their use. The same applies to deepfake generation tools, which are freely available and can be used to create visual disinformation or non-consensual content with no safeguards or controls.
Governments can and should create sensible regulations that prevent companies from profiting from harmful uses of AI. Good laws could hold developers accountable for negligence—such as failing to implement reasonable safeguards or knowingly enabling repression. At the same time, overregulation risks stifling innovation and infringing on privacy and free expression, without addressing the true underlying dangers. Regulation must be both technically informed and ethically grounded.
In the end, governments will be powerless to stop the proliferation of AI technology. As long as computers exist, and there’s electrical power to run them, AI will continue to advance. We must learn to tame it.
UNDERSTAND IT
Generative AI, like other AI-powered tools, will be a titan that we must learn to live with. For better or worse, we will have to adapt our lives around it. How can we tame this titan? The first step is to understand it. We must learn its ways, its strengths and weaknesses, and understand the places where we encounter it. When you search the web, learn to decipher whether you’re reading an original source or an AI-generated summary. When you log in to your social media apps, or scroll through YouTube, understand that the AI algorithms are feeding you content to maximize your engagement, and know when to shut it down and stop scrolling. When you answer the phone or read an email, ask yourself, “Is this the person I think it is, or could this be a clever AI impersonation?” When you see an “unbelievable” picture or video, ask yourself, “Is this likely to be real, or more likely to be fake?” Learn to check sources, and don’t be too quick to believe the unbelievable. Sharing “slop” on the Internet is a great way to lose your credibility!
When you engage with AI tools, or with other people, understand the emotional power of AI-generated content. A generative AI tool without guardrails and in the wrong hands can become the perfect sociopath: manipulating emotions to destructive ends without ever feeling the slightest hint of guilt. It can weave words with eloquence and flawless logic, constructing watertight arguments or appealing to your own deepest values and feelings. It can create music and art that augment its appeal, drawing you in before you realize it. Learn to recognize the tell-tale signs of AI-generated content. It’s not wrong or evil–it can be powerful in a positive way–but you should know when you’re engaging with the titan and know what it is capable of.
EMBRACE THE CHANGE
As technology becomes more lifelike, it forces us to reckon with what it means to be truly human. Humanity can no longer be defined merely by the ability to work, to recognize, to speak, to think, to create, or to reason. Robots can now do all of these things–often better than humans. Can robots have consciousness, self-awareness? They can certainly claim to. Scientists and philosophers must grapple with these questions, but some things will always remain in the realm of humans. To feel, to love, to experience emotion and relationship, to truly care–a robot may claim these things, but it can never truly do them. God created us “in the image of God” (Genesis 1:27), something a robot will never possess. As generative AI pervades and upends society, we can embrace what it means to be truly human.19
We must also embrace the fallout, and embrace each other as we face it. Trust me: there will be fallout as this technological revolution impacts our collective lives. It can be tempting to run from it, to rage against it, to resent the negative effects it’s had on our lives, or to hate those responsible for its advance. People will be scammed. Others will lose their dream jobs or see their career paths vanish. The effects of the social media culture have already destabilized political, social, and religious institutions around the world. Loved ones who are already struggling with mental health challenges could be sucked into an unreal world fueled by generative AI. Children must grow up in a world filled with truth and fiction, where robots and humans vie for attention and belief. This could have its benefits,20 but they must also reckon with the temptation to use new shortcuts to circumvent their education, a real threat that could plunge the next generation further into ignorance and oppressive control. As the fallout comes, let us embrace each other, support each other and our families, and realize that it’s the humans (not the robots) in our lives who really matter. This challenge faces us all–it affects us all in different ways–and we must face it together.
DISENGAGE IT
Just as Jesus made time to withdraw and pray (Luke 5:16), we must carve out space in our lives to engage with reality without the aid of technology. We must make time to turn off the screens and the speakers, and take a walk in nature. Instead of liking a hundred posts on social media, get together with a few friends and talk, or do something fun together. Instead of TikTok, watch a worthwhile movie. Read a book, or start a hobby. Set up a bird feeder and watch the wildlife outside your window. Close your eyes and spend time in prayer. As the psalmist reminds us, “Be still, and know that I am God.” (Psalm 46:10)
As our lives increasingly revolve around these AI-powered systems and the world they create for us, it will become imperative to keep our minds, our bodies, and our emotions grounded in reality.
USE IT FOR GOOD
As a technology enthusiast myself, this has been my goal ever since generative AI became mainstream: use it for good. Despite its dangers, the potential for AI technologies to be used for the advancement of society is almost unlimited. The same tools that can be used to spread disinformation can also be used to discover and disseminate truth in unprecedented ways. In the 1990s, Internet search engines revolutionized the way we find and access information. Now, LLMs are having an equally revolutionary impact: not only finding information but synthesizing it for the specific situation. These tools have the potential to reintroduce the concept of nuance into public discourse, and are increasingly able to give insightful and truthful answers on a range of complex and even controversial topics. And just as the Gutenberg press democratized publishing, generative AI is democratizing the ability to convey knowledge in relevant and impactful ways. Just as Paul used the Roman road system and Greek language to spread the gospel, AI could become part of the infrastructure for sharing truth today.
As LLMs become increasingly reliable, and if privacy concerns are adequately addressed, we will see an increase in broadly capable AI personal assistants.21 I envision one that can process and respond to messages, coordinate scheduling to optimize our personal performance, organize our work and leisure, notify us only of relevant and well-synthesized information, and subtly create an environment around us to optimize our enjoyment of life.
LLMs are increasingly able to produce code and even simple software in response to a text prompt. How long will it be until they create new and highly customized software tools that will dramatically increase our efficiency in the office? This could be the next AI revolution: the democratization of software, with open source and highly customized tools taking the place of the dominant subscription-based software models. Taking this a step further, what could be accomplished when advanced AI tools are paired with additive manufacturing (3D printing), CNC and related technologies, as well as robotics? In theory, AI systems could autonomously innovate, producing real objects and tools to solve real-world problems.
Harnessing the Titan
How can we use this generative AI technology for the good of our world? As a Christian pastor and a technology enthusiast, I see it as another God-given resource that, rightly employed, can be a powerful tool to help us accomplish the Great Commission. In this section, I’ll share some practical examples of how I believe we can do this, as well as some potential pitfalls to avoid. I’m confident that there are many other ways I haven’t thought of, as well.
RESEARCH AND STUDY
Rightly used, generative AI can be a powerful aid in research and study. An LLM coupled with Internet search, or a tool such as ChatGPT’s “Deep Research,” is able to find highly relevant sources that would be difficult to find otherwise. While not a replacement for a good Bible commentary, an LLM can give insight into Biblical interpretation, original languages, and historical context that would be challenging to find elsewhere.
I will often ask ChatGPT to “give me a summary” or “help me understand” a topic. Recently I asked, “Help me understand David Koresh and his cult in relation to the present day ‘Shepherd’s Rod’ movement that attempts to infiltrate the Seventh-day Adventist Church.” This simple prompt, aided by the LLM’s existing “memory” of who I am and what types of researched answers I prefer, gave me a highly insightful and useful response consisting of several pages of factual and relevant information.
Another frequent use for me is summarizing long blocks of text, such as machine transcripts of sermons, or lengthy documents. I can ask for a summary, or inquire about specific information with prompts such as, “What is the author’s viewpoint on …” to get specific and insightful information, without having to read an entire corpus of text.
WRITING
LLMs such as ChatGPT and Claude can be a very useful aid in writing as well, although one must be careful not to produce slop. For instance, I could prompt “Please write a sermon for me based on 1 Corinthians 13” and the machine would dutifully produce a short piece that could be presented in church, but its quality would certainly reflect the low level of effort I put into the process. A better use of the tool would be, after a prayerful personal study of 1 Corinthians 13, to ask a specific question, for instance, “Help me understand the historical and textual context that prompted the apostle Paul to write on the importance of love in 1 Corinthians 13.” Another great prompt could be, “Help me understand the meaning of Agape as used in 1 Corinthians 13, with historical and cultural references. Is this the only Greek term used to describe God’s love in the Bible, or are there others? Give specific Biblical references.” Both of these prompts produced insightful and helpful ideas that could be incorporated into a sermon.
In writing, I often use the LLM for help in proofreading. Sometimes it’s helpful to allow it to re-word a paragraph for better impact, although I prefer not to let the tool do too much re-wording as it can lose my voice and the end result may sound like slop. Often I’ll specifically ask for a bulleted list of corrections and suggestions, from which I can manually correct my source document.
It’s fun to use the LLM in more creative ways as well, such as in contextualizing a Bible passage or other material for a specific audience, or even writing hymns and poetry. LLMs are incredibly good at language, and much of our religious tradition is carefully preserved through language. Because there is a very long history and a large corpus of religious writing on the Internet, LLMs can be incredibly good at religious exposition. This can be a great advantage and a useful tool, but we must be careful not to abuse it. We must realize that true worship consists of more than words.
One of my favorite uses for AI is in brainstorming and organizing ideas. I’ll start with an idea or concept, and jot down a list of related ideas that come to mind. Then I’ll “brain dump” into the prompt and ask the AI to help me brainstorm. It can be incredibly helpful in expanding, morphing, and suggesting more related ideas, and then in organizing these ideas into a helpful structure. Like other tools, I have to be careful not to over-use this or to short-circuit my own study and thinking process, but it can be a great way to stimulate ideas and to overcome writer’s block.
TRANSLATION AND CONTEXTUALIZING
Translating is another powerful use for generative AI. Because the AI can have a deep understanding of your meaning and can also write fluently in many languages, it can both translate and contextualize highly complex and nuanced material in ways that traditional machine translation cannot.
In addition, the same tools that create deepfakes can also create impressive videos of a speaker presenting information in another language, complete with their original expressions and hand gestures but speaking fluently in a different language. I’ve seen this used effectively by church leaders to communicate the gospel in many languages using their own voice.
Even within the same language, AI can be a powerful tool for making content more accessible to a wide audience, such as contextualizing a story in language suitable for a particular audience like grade schoolers.
REPURPOSING CONTENT
Often in ministry, it’s helpful to have similar content packaged in different forms. For instance, a sermon can be re-packaged as a magazine article, a tract, a handout, an activity, a study guide, or even a social media post. AI is a powerful time-saving tool that can largely automate this process while preserving the meaning of the original message.
ILLUSTRATING
I also use AI to create artistic illustrations in my Bible teaching. I find that the AI-generated images can be both a practical and also powerful tool to communicate the gospel message, and it’s helpful to be able to instantly create the exact image I need to convey the message I’m sharing. I’ve even begun a project called the “Virtual Bible Snapshot Project” to collect AI-generated artwork along with other freely available materials and distribute these freely on the Internet.
AI tools can be incredibly helpful in refining graphic designs. You don’t always want to use a purely AI-generated image. When I create a cover slide, I will sometimes upload a prototype to ChatGPT and ask for suggestions for improvement, following good design principles. It has given me very useful feedback to incorporate in my finished design. It’s also easy to ask for a simple yet relevant logo, icon, or cut-out symbol, which can add a professional touch to your materials.
I’ve experimented with AI Generated music and video, and although it may still be a niche application, it seems to have great potential in teaching and possibly even in worship. But is it appropriate to use a non-human creation, such as an AI-generated prayer, a poem, or a song, in worship? That’s yet another important discussion.
Conclusion
A final note of caution: be careful what use you make of the tools. Use them honestly and ethically. Generative AI can still hallucinate, so check everything for accuracy. Just as you wouldn’t plagiarize someone else’s work, don’t copy an AI generated piece and pass it off as your own. Be honest about your use of tools–if you use a tool in a significant way to do writing or creative work, disclose that fact appropriately. Using AI is not a substitute for understanding or research. Remember that you are in charge of your creative process, and the AI is your helper.
You might be wondering: was AI used to write this paper? The answer is yes—in several of the ways I’ve described above. I used AI to assist with research, brainstorming, and some light editing. A few sentences were reworded for clarity or improved flow. But the ideas, structure, and voice are fully my own. Does using AI undermine a writer’s credibility? It certainly can. But when used wisely, it can strengthen the final result. In today’s world, where generative AI is raising the cultural expectations for polish and precision, choosing not to use these tools at all can sometimes leave a writer at a disadvantage.
In this paper, I’ve attempted to give a broad overview of the usefulness and potential dangers of AI tools. In summary, generative AI is yet another tool, a very powerful one, capable of great evil and also of great good. The difference is made by the use we choose to make of it.
You may choose to embrace this new technology, as I have. Or you may prefer to observe cautiously, sticking to tried and tested methods while others innovate. But regardless, AI will change the world we live in, and the more we understand it, the better prepared we will be to live in the world it is creating for us.
Further Reading
Crouch, Andy. (2017). The Tech-Wise Family: Everyday Steps for Putting Technology in Its Proper Place. Baker Books.
LaGrandeur, Kevin, & Hughes, James. (2017). Surviving the Machine Age: Intelligent Technology and the Transformation of Human Work. Palgrave Macmillan.
Lanier, Jaron. (2011). You Are Not a Gadget: A Manifesto. Vintage.
Lewis, C.S. (1943). The Abolition of Man.
Thacker, Jason. (2020). The Age of AI: Artificial Intelligence and the Future of Humanity. Zondervan.
White, Ellen G. (1903). Education.
Notes
1. Russell, S., & Norvig, P. (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.
2. Stanford University. (2023). AI Index Annual Report. https://hai.stanford.edu/ai-index
3. Stöffelbauer, Andreas. (2023). “How Large Language Models Work.” Data Science at Microsoft. https://medium.com/data-science-at-microsoft/how-large-language-models-work-91c362f5b78f
4. SingularityNET. (2024). “A Deep Dive on the Differences Between Narrow AI and AGI.” https://medium.com/singularitynet/a-deep-dive-on-the-differences-between-narrow-ai-and-agi-19016011c966
5. Chesney, R., & Citron, D. (2019). “Deepfakes and the New Disinformation War.” Foreign Affairs.
6. RAND Corporation. (2018). Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life. https://www.rand.org/pubs/research_reports/RR2314.html
7. Heikkilä, Melissa. (2023). “Three Ways AI Chatbots Are a Security Disaster.” MIT Technology Review. https://www.technologyreview.com/2023/04/03/1070893/three-ways-ai-chatbots-are-a-security-disaster/
8. Hoffman, Benjamin. (2024). “Slop Is the New Spam.” New York Times, June 11, 2024. http://nytimes.com/2024/06/11/style/ai-search-slop.html
9. Quanta AI. (2024). “Racial Bias in Machine Learning Algorithms.” https://quantaintelligence.ai/2024/11/03/ethics/racial-bias-in-machine-learning-algorithms
10. O’Neil, Cathy. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.
11. Huang, Lei, et al. (2025). “A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions.” ACM Transactions on Information Systems, vol. 43, no. 2. https://doi.org/10.1145/3703155
12. Eubanks, Ben. (2025). “Is AI Causing a Decline in Cognitive and Creative Skills?” Unleash. https://www.unleash.ai/artificial-intelligence/is-ai-causing-a-decline-in-cognitive-and-creative-skills/
13. Balalle, Himendra, & Pannilage, Sachini. (2025). “Reassessing Academic Integrity in the Age of AI: A Systematic Literature Review on AI and Academic Integrity.” https://doi.org/10.1016/j.ssaho.2025.101299
14. Reed, Victoria. (2024). “The Dark Side of Emotionally Intelligent AI: Manipulation Risks.” AI Competence. https://aicompetence.org/the-dark-side-of-emotionally-intelligent-ai/
15. Neammanee, Pocharapon. (2024). “14-Year-Old Was ‘Groomed’ by AI Chatbot Before Suicide: Lawyer.” HuffPost, October 25, 2024. https://www.huffpost.com/entry/14-year-old-ai-chatbot-suicide_n_671a7184e4b00589e7dc308f
16. McKinsey Global Institute. (2025). The State of AI Global Survey. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
17. White House. (2023). “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
18. European Commission. (2024). Artificial Intelligence Act (summary). https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682
19. Herzfeld, Noreen. (2002). “Creating in Our Own Image: Artificial Intelligence and the Image of God.” Zygon, vol. 37, no. 2. https://www.zygonjournal.org/article/id/13063/
20. Anderson, Jill. (2024). “The Impact of AI on Children’s Development.” Harvard EdCast. https://www.gse.harvard.edu/ideas/edcast/24/10/impact-ai-childrens-development
21. Tuhin, Muhammad. (2025). “AI-Powered Personal Assistants: Your New Digital Best Friend.” Science News Today. https://www.sciencenewstoday.org/ai-powered-personal-assistants-your-new-digital-best-friend