Ideas Have Consequences

Let’s Talk AI from a Biblical Worldview Perspective | Brian Johnson

Disciple Nations Alliance Season 2 Episode 70

AI offers incredible potential, from accelerating Bible translation to combating human trafficking and much more – but what else is at stake?

Is AI dehumanizing you? If we're not careful, the mind-blowing capabilities of artificial intelligence will lure us into surrendering our discernment, knowledge, and intellect to this new pseudo-intelligence. Beyond information, people are turning to AI for companionship, while some want AI and technology to extend lifespan in a search for immortality.

Today, we're joined by a fascinating guest: Brian Johnson, who brings over 30 years of leadership in emerging technology and cybersecurity to the table. He offers uniquely Christian perspectives on AI’s rapid rise and profound impact on humanity.

Johnson helps us understand the last 50 years of technological development and explains the recent seismic shift from an "attention economy" to a "relational economy," where AI no longer just captures your focus—it mimics human relationships and connection. This raises deep questions about human identity and humanity’s age-old temptation to "become like God" through our own creations.

Despite these dangers, this conversation is filled with hope. Brian shares how Christians can and should engage AI wisely—using it for Kingdom purposes. If you're ready to think deeply about faith, technology, and what it means to be human in an AI-driven world, this episode will equip and encourage you to stay grounded in God's unchanging truth.

Scott Allen:

Well, welcome again everyone to another episode of Ideas Have Consequences. This is the podcast of the Disciple Nations Alliance. I'm Scott Allen, president of the DNA, and I'm joined today by my co-workers Luke Allen and Dwight Vogt, and today we're excited to have with us Brian Johnson. Brian is an acquaintance of ours from Redeemer Church in Gilbert, Arizona. As you know, we have great connections with that wonderful church down there. Brian brings a background in emerging technology and cybersecurity to help us think through, and continue to help us learn, like we all are, what's happening right now in this whole world of AI and all of the larger topics that surround it, something that is becoming a reality so quickly in our daily lives. We're all on a learning curve trying to understand this and, as Christians, we want to be thinking about it in a way that really accords with biblical truth, with a biblical worldview. And so, yeah, we are not experts; we're just all on this learning curve. Now that excludes you, Brian. You are much more of an expert, which is why we wanted to have you on today. Let me just introduce you a little bit more.

Scott Allen:

Brian, as I mentioned, is on staff at Redeemer Church in Gilbert, Arizona. He's the business director, but he has a rich, varied background. He's an experienced executive, an investor, an advisor, and a board member. His career has focused primarily on digital technology and cybersecurity. He has an MBA as well as a bachelor's degree in business management. And, Brian, I know that's kind of a thin bio that I've got here, and I'd love to have you fill it in more, especially as it relates to your background in AI and digital technology, that whole area. We're excited to talk to you because you both have an insider's knowledge and expertise in this area and you're coming at it from a very distinctly Christian worldview. So, yeah, welcome, Brian, and tell us a little bit more about your background.

Brian Johnson:

Thank you, Scott, and I really appreciate the invitation to join you all. I admire the show and appreciate what you're doing in the area of biblical worldview. It's an extensive topic to talk about artificial intelligence in a small amount of time, but we'll try to cover some of the areas that are most pertinent to the Christian believer's perspective on how to look at this. So, yeah, I'd love to get into that. And I do believe, in a sense, that over the last 30 years I've spent in technology, I've often felt like the turtle on the fence post.

Brian Johnson:

If you remember the old book with that analogy: early in my career, I believed God had put me into a position to see certain innovations and be part of teams and developments of technology at levels that I never would have imagined. Some of those, in the last five years, have been things around cryptocurrency, artificial intelligence, and quantum computing, and I've been able to play a pretty pivotal role in the development of research in emerging technology areas, areas that were developed and defined over the last 15 years and have now begun to converge into real-world applications.

Brian Johnson:

I've been able to be part of teams designing and developing research groups at a global level that have had access to the World Economic Forum.

Brian Johnson:

I've been part of the Atlantic Council, which develops standards on cryptocurrency, as well, and I've implemented tools and technologies built on artificial intelligence. So it's been a tremendous background that, Lord willing, has helped to give both insight and application to folks in the technology community, as I've spent a lot of time in Silicon Valley, as well as to the people around me, whom I've been able to help translate all of that into hope and encouragement, and just a reminder that all of this is still under God's sovereignty. I don't see anything that's been designed or developed as outside of God's overarching reach of control, and I believe that in his purpose we've been given the stewardship, as his image bearers, to take these things as technology innovations occur and apply them through the biblical worldview lens that you guys, I think, so faithfully use to support and encourage the church. That's a part of what I've tried to do over the last few years.

Scott Allen:

Thanks, Brian. How did you... what's the connection with your background? You said you worked in Silicon Valley and you've worked on computer projects. Would you call yourself a programmer?

Brian Johnson:

So, if you're familiar with Schwab: I led global data centers and virtualization, so I helped to build the large data systems, intelligence systems, and brokerage platforms; as an executive leader, I led a large team that did that. And then I moved from Schwab to PayPal in about 2015 and became the head of global cybersecurity, building the team at PayPal as it was splitting from eBay. Cybersecurity, cyber fraud, and those technologies were under my leadership as an executive, building things out from scratch, which made for really interesting days.

Brian Johnson:

I mean, consider that cybersecurity was a field that hadn't been invented when I was in college; it hadn't really been designed or defined yet. I was an engineering and business major and kept looking at computers as a tool that we would use on the side, as a calculator of sorts, but then it moved to the forefront. Looking through the eras of innovation that have happened, I've really seen the boom of the Internet, the boom of social media and e-commerce and, of course, now this era of artificial intelligence, and at each of those stages what I had the opportunity to do was be part of the defining research, development, and technology teams.

Scott Allen:

So you were leading teams that developed some powerful technologies then? Correct? Yeah, go ahead.

Brian Johnson:

Yeah, in the last few years, in particular at PayPal, I led the global cyber fraud, artificial intelligence research, quantum computing, and crypto research teams. I actually hired researchers across the world who were working on these projects: applying quantum computing problem sets to massive financial networks and intelligence systems, and working on things like Project Polaris to combat human trafficking across financial networks using artificial intelligence. So I've been at the forefront of developing and leading technology and projects in those specific areas.

Scott Allen:

How did you come to find yourself now on staff at Redeemer as their director of business?

Brian Johnson:

Yeah, I love the local church. When you consider how best to apply the skills and gifting God's given us, I don't think we should leave our A game to the side, and I believe churches are just as deserving of people who have the gifting and skills to apply their expertise, knowledge, and wisdom as much as they can. Our pastors have experienced tremendous growth in both staff and strategy, and some of the ministry areas that we've grown in have been really interesting to me.

Brian Johnson:

My wife and I serve in marriage ministry.

Brian Johnson:

We've been in marriage ministry roles for about 20 years, so that was part of what we did. Then there was a need at the church to help lead our business and finance team, and I later expanded into facilities and operations.

Brian Johnson:

So I kind of raised my hand and said, I'd love to help out; how can I help? It's a full-time job in the sense that, as a church job, it's certainly busy, but I do still invest and do entrepreneurial work. I launch companies, I advise and, as you mentioned, serve on boards. So I'm still in the technology world in a leadership role in advancing tech, specifically technology that is combating human trafficking or working in the fields of artificial intelligence and cryptocurrency; those are areas that I invest in and build into. And my son and I are launching a company to help make the Apple platform safe. We're actually launching a product called Guardian Lock, a bundle of safety products, and it's really about securing what people who use online experiences should be able to do in a safe way. We're doing that by applying cybersecurity and safety principles to help consumers work, live, and act online with better stewardship of the resources they have, and in a way that doesn't expose them or exploit the vulnerable in our population.

Scott Allen:

Wow, wow, that's amazing. So, yeah, you've got your hands in a lot of things, and good for you. Well, let me just get into some of our questions, if I could, Brian. The three of us are passionate about biblical truth, especially as it connects and relates to principles for human flourishing and seeing communities rise out of poverty, so that's kind of where we're coming from. We do a lot of training and equipping around that worldwide, and we love learning.

Scott Allen:

We're, just like everyone else, trying to get our heads around what's happening in this area of AI and particularly, you know, what's the worldview behind it, in terms of some of the people who are the creators of it, the key innovators. But I'd like to just start with some really basic questions, if you don't mind. The first one is just: what is AI? I read a definition recently and I'd like to get your reaction to it. It went something like this: AI can be loosely defined as computer systems performing tasks that typically require human intelligence. I thought, okay, computer systems performing tasks that typically require human intelligence. What do you think of that? Or how would you define AI, kind of concisely?

Brian Johnson:

I define AI as a combination of large amounts of data, fast computers, and decisions that are enacted on that data in a way that is constantly learning and adapting.

Brian Johnson:

What makes artificial intelligence distinctive from traditional computer systems is making decisions based on learned responses, and those training models really originated about 25 years ago with machine learning, a model in computer science. It started with: you pick up the phone and call into a call center and say, I'd like help at Bank of America or Wells Fargo, and the computer would prompt you, press one for English, press two for... and you would enter a code and it would take a call tree, or a decision tree; along the lines of how you responded, it would give a different prompt. That was the early advent of artificial intelligence as we know it today, starting with machine learning, and that is just defining "if this, then that": if you press two, then prompt a response and send you to a different queue. That was the early, beginning stage, and it helps you toward a logical understanding of what artificial intelligence really is, because it developed from that starting point.
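As a rough illustration of that "if this, then that" call-tree pattern, here is a minimal sketch; the menu options and queue names are invented for illustration, not taken from any real system:

```python
# A toy call tree: each keypress routes the caller down a predefined
# branch, the "if this, then that" logic described above. All menu
# labels and queue names here are hypothetical.

CALL_TREE = {
    "1": {"prompt": "English selected. Press 1 for balances, 2 for fraud.",
          "next": {
              "1": {"prompt": "Routing to account balances.", "queue": "balances"},
              "2": {"prompt": "Routing to fraud support.", "queue": "fraud"},
          }},
    "2": {"prompt": "Espanol seleccionado.", "queue": "spanish_support"},
}

def route_call(keypresses):
    """Walk the decision tree according to the caller's keypresses."""
    node = {"next": CALL_TREE}
    for key in keypresses:
        node = node.get("next", {}).get(key)
        if node is None:
            return "operator"  # unrecognized input falls back to a human
        print(node["prompt"])
    return node.get("queue", "operator")

print(route_call(["1", "2"]))  # prints both prompts, then returns "fraud"
```

A machine-learning system differs in that the branching rules are learned from data and refined with each iteration rather than written out by hand, which is the development Brian traces next.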

Scott Allen:

Yeah, you already answered my second question, which is what is machine learning? Because it seems to me that that's kind of the key feature of artificial intelligence, that, say, separates it from just a search engine or something that's going out and combing through all the data that's out there on the internet. It's applying this program of machine learning to that data, right. So talk a little more about that, because I think that that's kind of key to understanding this, isn't it?

Brian Johnson:

It is, and there were billions of dollars spent on machine learning in the years before AI was the term coined in the industry, in everything from banking and finance fraud management, when you would get an alert on your phone, or a phone call, or your credit card might block a transaction. Those were all driven by machine learning features, which is computer science saying: this looks fraudulent based on a number of identifiers, or risky behavior, or the location you're transacting from is not your normal location, or it looks like you just placed charges from two different locations at the same time. Those are decisions that computers start to learn; they're programmed with decision trees and with learning to improve the decisions with each iteration.
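To make those fraud identifiers concrete, here is a toy sketch; the thresholds and field names are invented, and a production system would learn such rules and their weights from millions of labeled transactions rather than hard-coding them:

```python
# Toy versions of the signals described above: a charge outside your
# normal location, and charges from two places at nearly the same time.

from datetime import datetime, timedelta

def fraud_signals(txn, recent_txns, home_city):
    """Return the list of risk signals this transaction trips."""
    signals = []
    if txn["city"] != home_city:
        signals.append("outside normal location")
    for prior in recent_txns:
        near_in_time = abs(txn["time"] - prior["time"]) < timedelta(minutes=10)
        if near_in_time and prior["city"] != txn["city"]:
            signals.append("simultaneous charges in two cities")
    return signals

txn = {"city": "Denver", "time": datetime(2024, 5, 1, 12, 5)}
recent = [{"city": "Phoenix", "time": datetime(2024, 5, 1, 12, 1)}]
print(fraud_signals(txn, recent, home_city="Phoenix"))
# -> ['outside normal location', 'simultaneous charges in two cities']
```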

Scott Allen:

So it makes a decision, yeah, that makes sense, and then improves. And we're talking about things that are now combing over vast quantities of data and also operating at high speeds.

Scott Allen:

The speed is something that, of course, continues to go up and up and up as we go along, so you've got those things happening. Yeah, that helps us, I think, get our heads around AI. Here's another thing that I hear often: people talk about large language models when they talk about AI. What are they talking about there?

Brian Johnson:

So LLMs, or large language models, are ways of constructing this code, and those code sets are basically telling the computer how to handle large data sets. One of the interesting things that you mentioned earlier, Scott, that plays into large language models is the use of expansive data sets.

Brian Johnson:

The reason AI has become something where LLMs even exist is that computers now have access to much larger data sets than they ever had before. It used to be that computers were purpose-built. I can remember back in the 90s we had a utility billing system, and all it knew was how much water a resident of the city used, and we billed based on that. That was one data point. But now we have vast amounts of data points, which can include your shopping patterns, your location, your driving behavior, your health information, all combined into one data set. What a large language model can do is take those multiple and varied perspectives on the data and help make decisions based on problem sets or prompts that are given to the computer. It uses LLMs in different ways to determine what kind of decisions or recommendations should be made based on the data it has access to.

Scott Allen:

So we're not talking about human language here. We're talking about, again, large data sets that have been collected based on people's choices and decisions, particularly as they've been collected off of things like social media, all these decisions that people make, right? I'm still trying to get my head around it a little bit, I guess.

Brian Johnson:

So the other component of LLMs is taking human questions, the human way of asking questions, and translating that into computer code, so that humans don't have to write computer code. The LLM model is: take large data sets, group them according to the kinds of decisions you want to make, and then ask questions of the data in a human-understandable fashion, returning results in a human-understandable form.

Scott Allen:

And that's unique as well. So this is people's current experience when they're working with something like ChatGPT. You ask it a question, right, and it responds?

Brian Johnson:

That's right.

Brian Johnson:

The prompts are the LLM's way of saying I've asked a question of data.

Brian Johnson:

I'm going to present that in a way that's human understandable.

Brian Johnson:

So it's essentially doing a translation, now of course multilingual, between any language and the data being asked about.

Brian Johnson:

Large language models now support any language; any question can be asked from the human perspective in reasonable terms, and the computer can then translate it into code, execute the query against the data, and give you a result set back in a human-understandable way. That removes a lot of what used to happen with data scientists and researchers. I've had teams of data scientists in the past that did all of that work that LLMs now do automatically. If we asked a data scientist at PayPal, as an example, how many transactions were marked as fraud in the last 90 days, it would take them a few hours to write code and develop models and build queries and pull data together, everything they would do to come back and say, oh, about 25% of our transactions over the last 90 days were marked as fraud. An LLM allows you to simply ask that question, query the data, and get the result back in a human-understandable way without all that work in the middle.
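As an illustration of that before-and-after, here is a sketch of the kind of query a data scientist once wrote by hand and that an LLM can now generate from the plain-English question; the table and column names are hypothetical:

```python
# The question a human asks, the SQL an LLM might generate behind the
# scenes, and the answer phrased back in plain language. Schema invented.

import sqlite3

question = "How many transactions were marked as fraud in the last 90 days?"

generated_sql = """
    SELECT COUNT(*) FROM transactions
    WHERE is_fraud = 1 AND txn_date >= DATE('now', '-90 days');
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (txn_date TEXT, is_fraud INTEGER)")
conn.execute("INSERT INTO transactions VALUES (DATE('now', '-5 days'), 1)")
conn.execute("INSERT INTO transactions VALUES (DATE('now', '-200 days'), 1)")
count = conn.execute(generated_sql).fetchone()[0]

# The final LLM step: return the result set in human-understandable form.
print(f"{count} transaction(s) were marked as fraud in the last 90 days.")
```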

Scott Allen:

Very quickly, yeah, oh, so this is very helpful. Yeah, just in terms of basic understanding of what this is.

Brian Johnson:

Why is there... go ahead.

Scott Allen:

Yeah, I just have a question on the topic of basic understanding here. If we could, yeah, go ahead.

Dwight Vogt:

Yeah, in terms of machine learning, I've heard researchers say they started some process and ended up with a result they weren't expecting. That always sounds scary. What does that mean?

Brian Johnson:

Well, I mean, you end up with that even in human projects. Sometimes we would ask questions and get an answer back that wasn't expected because of what, in artificial intelligence vernacular, are sometimes called hallucinations. You'll get result sets that don't match the query you intended, and that means you asked the computer a question and it guessed. A lot of the context that you didn't ask about is sometimes guessed and hypothesized from the data, and it'll come back with a result set where you may go, that's not quite what I asked; why did it give that answer? It's AI making its best guess at what you intended to ask and trying to paint context and detail into the answer that really wasn't intended at all. And we actually deal with this in practice.

Brian Johnson:

I run a translation service where we translate Bible teaching media from Bible teachers, from English into Spanish and Italian and French, and even just at the language level, going from voice to text, translating the text, and going from text back to voice often ends up with very surprising, unexpected results that are hallucinations: AI guessing what you really meant. When you combine words and phrases and sentences together, there is hidden context that human reasoning usually guesses more accurately, hopefully more often than not, to prevent misunderstandings, and AI is still learning that. I would say its level of understanding and reasoning about context is probably at a 13-to-15-year-old's level, and that means it misses context sometimes. AI models are still learning and developing their understanding and their ability to comprehend the context behind the meaning of a question or of a result they're trying to give you.

Scott Allen:

Wow, yeah, let's move on to where we're at right now with AI. What's changed over the past year regarding AI, both from a technological and a cultural standpoint? How have people's perceptions of it evolved, just of late? How would you answer that question?

Brian Johnson:

Yeah, so computer scientists deal with artificial intelligence in three essential buckets, and defining terms is really important here, because a lot of things are called AI that are not. We talked about machine learning, and I think AI gets blamed for a lot of things that really are not AI, and it gets credited with things that it's not yet. But there are a lot of specifics in the definition of AI that I'll try to boil down into three simple patterns. AI has three categories of uses.

Brian Johnson:

One is what you might have heard of as artificial narrow intelligence, and that is a specific use case, like the assistant that sits on your counter that you may ask questions of: hey, what's the weather today? And, of course, if I say the prompt, people's devices will start waking up, but you may ask it, hey, assistant, what's the weather today? And the assistant is programmed in a narrow way to give a response. So artificial narrow intelligence is primarily what we've been dealing with in the current era: call trees and simple decisions, artificial intelligence designed to learn a simple prompt and a simple response, and to train an understanding that simply grows within that use.

Scott Allen:

That's almost kind of equivalent, in some ways, to our experience with browsers, right? You would just type that in instead of saying it: what's the weather today, or something like that. You know, it's just going out and grabbing that data.

Brian Johnson:

It is kind of like Search 3.0, the enhanced version of search, but it's using large language models, that way of taking human language and human understanding and asking questions of the computer without having to write a lot of code in the middle. And the response back has made it seem like an artificial human, in the sense that it sounds like a human. You'll hear a voice, you'll hear inflection, you'll hear tone. You can ask, tell me a joke, and you'll get trained responses with jokes, and you can interact with AI in some really interesting ways that make it mimic humans in tone, personality, and context. But it's very artificial and very plastic, in the sense that it's narrow and limited in its scope; it's only purpose-built for a particular use. So that's the first category, artificial narrow intelligence, or ANI. The one that most people are really referring to when they talk about AI, in the sense of the threats or the looming concerns, is what happens when AI becomes autonomous: it starts to drive its own cars without people involved, or it starts to make decisions on stock market trading, or it begins to build its own things, if it can run a manufacturing line or design and build things without human initiation or controls. That's called artificial general intelligence, and the general intelligence means that AI can start to learn and adapt without prior training and across a set of problems, going beyond an assistant or a particular search request. You might say, hey, AI, give me a website that will bring shoppers in and process orders to buy yellow T-shirts in Guatemala, and that's all you would have to give it as a prompt; artificial general intelligence would then be able to build the website, launch the website, solicit customers, launch a marketing campaign, deliver the orders, find a place to buy the shirts, and do that end to end. That would be a pretty phenomenal productivity boost, right? And we have parts of that today: artificial narrow intelligence does allow us to say, hey, build a website for me, and it has canned templates or samples or models it can use to assist. Today that's really just automating tasks in the narrow field. When you can connect those narrow tasks together and do it in a way that logically replaces, and I'm air-quoting, the function of a human, and it can do it autonomously, without human direction or intervention...
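A minimal sketch of that narrow, purpose-built behavior follows; the intents and canned responses are invented for illustration. Anything outside the assistant's trained scope simply fails, which is the limit of ANI that Brian describes later in the conversation:

```python
# Artificial narrow intelligence in miniature: trained prompt/response
# pairs for a few intents, and an honest failure outside that scope.

NARROW_SKILLS = {
    "weather": "It is 75 degrees and sunny today.",  # canned, for illustration
    "joke": "Why did the computer go to the doctor? It caught a virus.",
}

def narrow_assistant(utterance):
    for keyword, response in NARROW_SKILLS.items():
        if keyword in utterance.lower():
            return response
    # Outside its parameters, ANI can only admit defeat; AGI, by
    # contrast, would try to find out or act on its own.
    return "I don't know. I haven't been programmed to determine that."

print(narrow_assistant("Hey, what's the weather today?"))
print(narrow_assistant("Build and launch a T-shirt store for me."))
```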

Brian Johnson:

Now we've reached artificial general intelligence. That category or field is not quite developed yet. So, as people say, well, how close are we to artificial general intelligence? We're close, and we can talk more about the timelines and the prerequisites for that to be achieved, but we're nearing the level of artificial general intelligence being more widely used and functional. And then the third category, the one you think of in futuristic models and the fiction writers' dreams that people have written about for years, is ASI, or artificial superintelligence. That's the level and state where AI is not only sentient, believing it is self-existent, but aware and self-aware in the sense that it understands it is a being.

Brian Johnson:

ASI is the notion, and it's all theoretical and, of course, scientific guesswork at this point, that artificial intelligence could not only invent or create or design things on its own, but would do that with superintelligence, far surpassing humans' capabilities and usurping humans' authority and ability to steward the Earth on our own. From a biblical worldview, that's a state we don't really comprehend: a world of computers in a superintelligent mode, acting as if they were their own creators. There's still a lot of detail to be worked out about what exactly artificial intelligence would intend to do then.

Brian Johnson:

What is the motive or the intention under the ASI theology, really, or theoretical framework? We haven't really developed a lot of that, as computer scientists have discussed what it could look like. So I would say that's in the speculative category of things we don't yet know. What will happen when AI is self-aware, when AI can say, I am a living being, I can determine my own path, I can decide what I want to eat for breakfast today? Those kinds of theories have not been vetted enough yet even to know what the purpose of that would be, other than the discussion of transhumanism and the theories we can discuss later as well. But those are the three categories: artificial narrow intelligence, artificial general intelligence, and artificial superintelligence.

Scott Allen:

Let me just push back a little bit on these three because this is helpful. So we have artificial narrow intelligence now.

Brian Johnson:

Widespread, correct.

Scott Allen:

We have. We're kind of now into the phase of artificial general intelligence, although it sounds like we're just getting into that, correct? And then this last one, superintelligence, is more speculative, right? Am I describing this...?

Brian Johnson:

Correct, that's not a reality today.

Scott Allen:

This is just some kind of speculation. But what changes between those three? Again, I'm going to put it in my own words and you correct me. The narrow intelligence was a very simple question, where we're querying: what's the weather today, or whatever it is. With general intelligence, you move from us querying it to the computer, if you will, kind of learning and adapting based on its own experience. That's right? Okay, so it doesn't require a query, a program, if you will.

Scott Allen:

It's actually... I don't know how you would describe it. It's learning on its own.

Brian Johnson:

It's really learning on its own and it's designing its own path of decisions that would not have been predetermined by humans.

Scott Allen:

Right, and this is where I think Dwight's question comes in; it gets back to what he was talking about with the surprising results. Because once you get to that level of general intelligence, who knows what you're going to get, because we're not really asking or querying it; it's kind of growing this knowledge on its own, so to speak.

Brian Johnson:

That's right. Yeah, exactly. In the current iterations, if you ask questions of chat, and most of us are familiar with asking questions of chat, whether it be ChatGPT or Gemini or Microsoft or DeepSeek or any of the other most common chat applications, those are really just asking predefined questions of predetermined data sets. There's nothing it's coming up with that is creatively determined or decided on its own, nothing that hasn't already been programmed into an LLM or a data set that's been provided to it.

Scott Allen:

So that would be narrow, correct? So when we interact with ChatGPT and ask it questions and it comes back with responses, and then we ask it further questions, we're still in the realm of narrow intelligence. Is that correct?

Brian Johnson:

We are, and it will usually tell you when it's outside of its parameters. You ask it a question, and it'll usually say, I don't know, I haven't been programmed to determine that. That's when you know you've reached the limit of ANI, the limit of narrow intelligence. Artificial general intelligence theoretically would not answer, I don't know. Theoretically, a chatbot trained on artificial general intelligence would say, I will find out for you, or it would make it up, to Dwight's point. And it may do that. It might hypothesize on its own and give an answer as if that were a definitive response. So that's where we need to be discerning, because there are times when hallucinations or assumptions or missing context will be provided by an AI. When it doesn't really have an objective response to give, it will guess, and those are the hallucinations we should be aware of.

Brian Johnson:

It doesn't tell you it's guessing. It won't unless you ask it. So you can ask AI, and they've been programmed with a kind of fail-safe for this: what's the source for that response? Where did you get that response from? I've had long discourse and discussions around life, around choice, around Christianity and biblical truth and creationism, and you can have these long discussions with AI and actually train it. During discussions you can tell it, that's actually not accurate; why don't you choose this reference as a priority?

Brian Johnson:

How did you weight your references and sources in your response? And you can tell it, at times, whose answer is right or whose answer will get more weighting and priority. It depends, right? Which company has applied which bias will determine that decision. You can definitely have discourse, even in ANI, before we get into general intelligence, and educate artificial intelligence by training it and telling it what models to trust and what sources are reliable, and have kind of a public discourse with an AI chatbot in a way that can be enlightening and can sometimes help you understand why it's responding the way it is. But I do think you should challenge it, to your point, Dwight. Yeah, ask it.

Dwight Vogt:

Where did that come from? Here's my curious question: if you're doing that, and I log on 10 minutes later and ask the same question, do I get the trained answer that you put in, or does it just listen to me for truth?

Brian Johnson:

Yes, you can. I've actually seen that happen, where the model would train for general purposes. I've tested that, and I've had a group of folks do this as a test. We did this specifically regarding the preborn, related to the detection of a heartbeat in the womb. We asked it questions and it answered back, and I said, that's not accurate, try again, and I gave another source. I said, that's not accurate, try again, until I got to the point of saying, you know, the American Academy of Pediatrics actually determined that the heartbeat can be detected, and I gave it a source, I gave it the reference. And then I had someone else, through another account, ask the same question, and it gave them the reference I provided. So it can be trainable, and it can learn from other people's input.

Dwight Vogt:

Now you can see both the validity and the caution in that. Exactly, that's right. It's scary.

Scott Allen:

Let me go back to the three categories. This is really helpful, Brian, because I think one of the things you wonder about with artificial intelligence is this whole issue of human control. And it seems to me that when you're talking at that narrow level, yes, there's a lot of human control over this. But as you move up those categories to the superintelligence, again, which is speculative, we're not there yet, am I correct to say there really isn't human control at that level, because it's almost acting autonomously? This is where I have a little bit of a hard time getting my head around it, I guess.

Brian Johnson:

Yeah, in a theoretical sense, that's how it would work, Scott. You're absolutely right. Now, I work with some partners and peers in the industry who are working on AI guardrails, and these would be certain criteria that we would establish in a regulatory or policy environment that would require AI to behave within a certain set of predefined boundaries. Those things would have to be implemented, though, or enforced by some agency that doesn't exist, written by an ethics committee that doesn't exist, and policed and overseen by a body that hasn't been built yet. So there's a lot of work still to be done in writing the ethics frameworks around these things, in building and enforcing a governance model to ensure they're followed, and in policing the world to ensure that AI has guardrails applied.

Brian Johnson:

But in certain sectors, what's happened lately, and this has been phenomenal, actually, is that some of my peers working on a project like this have made headway in Senate hearing committees and in some of the US-based discussions: for certain use cases, like healthcare, we must provide certain guardrails around decisions that AI can and cannot make, so the implementations are restricted. I believe the capabilities of AI far surpass what we've implemented and what we're aware of today, because we have implemented certain guardrails for use cases, because we do have implicit ethics around certain healthcare decisions and certain financial decisions, creditworthiness, and other areas you could continue into: recruiting and hiring decisions, compensation decisions. All of these things have yet to have standardized ethics frameworks written around how and when AI can make its own decisions and when we put the restrictions and guardrails up to limit how far it can go.
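To picture what such sector guardrails might look like in code, here is a hedged sketch; the sectors, actions, and policy table are entirely hypothetical, since, as Brian notes, the ethics frameworks and governing bodies largely don't exist yet:

```python
# A toy guardrail policy: an AI system may take an action autonomously
# only if the sector's policy explicitly permits it; everything else is
# denied by default and escalated to a human.

GUARDRAILS = {
    "healthcare": {"summarize_chart": True, "prescribe_treatment": False},
    "finance":    {"flag_fraud": True, "deny_credit": False},
}

def is_permitted(sector, action):
    """Default-deny: unknown sectors and actions require a human."""
    return GUARDRAILS.get(sector, {}).get(action, False)

for sector, action in [("healthcare", "summarize_chart"),
                       ("healthcare", "prescribe_treatment")]:
    verdict = "autonomous" if is_permitted(sector, action) else "requires a human"
    print(f"{sector}/{action}: {verdict}")
```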

Scott Allen:

Let me push back a little bit on that, because I was listening to a talk yesterday by Aaron Kheriaty, who's a medical doctor, a psychiatrist from the University of California, Irvine. He's a Catholic, he's a Christian guy, and he was speaking on this general topic of AI. And he said, of course, you have to understand AI in a larger context. He described it as the fourth industrial revolution. That's not his idea; I think that's common nomenclature at this point. The first industrial revolution being mechanical, the steam engine; the second being the electrical revolution; the third, the digital; and now this kind of fourth one, which is, as I understand it, this biological-digital convergence: machine learning, genetics, nanotechnology, robotics, all of these. You could even say, as you were talking about cryptocurrency or digital currency, there's this larger thing going on now where all of these technologies are converging. And then he went on and said, on the issue of ethics and setting up guardrails: right now, some of the worst state actors in the world, think of North Korea or China, are using all of these tools in a very authoritarian way. So, for example, he said, you can combine biometrics, central bank digital currency, big data, and AI to control a population. The way you do that is the governments create these massive data files on each individual, based on their choices and online behaviors, and even their facial expressions or whatever. Then they go from that to a social credit score to determine if someone is a threat. And if people are a threat to their ideology or their regime, they can control things; they can control their ability to make purchases through digital currencies. So it seems to me, it would be nice to think that there's some kind of benevolent group out there that would set up ethical guardrails around this, but I just don't see that happening.

Scott Allen:

I see authoritarian regimes, atheistic regimes like you have in China, using it for power and control, and it's not just there, right? I mean, the World Economic Forum and other people are talking about using it in the same way here, right? Any thoughts on that? Because when I hear this, I think two things. One, this is getting very powerful, especially when we talk at the level of superintelligence. And two, how do we still maintain some control over this, especially when you've got these regimes around the world that are going to use it? It's almost kind of a perfect method, if you will, for controlling people, it seems to me.

Brian Johnson:

Yeah, it certainly is, and from a theoretical perspective they could have already done that, but I won't hypothesize about that. So let me set a bit of framework around the cultural shifts that have happened with technology and how we've ended up at this point. You talked about the fourth industrial revolution, and that is a good way of thinking of it. Over the last 30 years, I've seen certain, I call them tech ages, and a tech age is kind of a technological revolution or an age of innovation. The first tech age was the web tech age; that was when, between the 80s and the 90s, we saw the network economy flourish. And here's a simple thing: as we homeschooled our kids, my wife used to remind them, as you see advertisements or people trying to sell you something, always ask, who are you trying to fool, and what are you trying to sell? That was our homeschooling methodology when it came to worldly influence through marketing and advertising: who are you trying to fool, and what are you trying to sell? So I think of these tech ages that way. In the network economy, the marketers were trying to connect a global network of systems, to get as much connectedness as they could, really internet connectivity. It was the networking tech age, and the economy was: let's sell network to people. Everyone needs an internet connection, right? We need high speed everywhere. And that transitioned a bit, in the 90s to 2000s, into the dot-com tech age. I lived through the dot-com era in the sense of actually having launched and built companies in the dot-com world and launched websites and e-commerce; that was the thing in that environment. And at that time it was the digital economy. We were really going after people's experiences in retail and digital shopping and finances, to the point that by the end of the 2000s we saw about 80% of consumers buying online or interacting with banking systems online in some fashion at least once a year. That advanced over the last 20 years, from the 2000s to the 20s, into the social decade, the social experiment, or whatever you want to call it.

Brian Johnson:

The last 20 years have really been about the attention economy. Marketers and Google and, in a sense, so many scientists and researchers have devoted billions of dollars and countless hours to figuring out how to grab the attention of the world, how to seize this global market of attention and monetize the time that you spend, as the premier asset that they would invest corporate livelihoods into gaining as the attention economy expanded. And so now you've got about 70% of the global population of 8 billion people, some five and a half to six billion, currently on mobile phones and internet-connected devices daily. So the attention economy has won. We've seen the evolution of the network economy, the commerce economy, the e-com expansion, and now the attention economy. They've gained our attention; they can market whatever they want to us, and they can essentially manipulate, through government coercion and manipulative methods, what information, what marketing ploys, and even what social, economic, and political messages get through, and those have been swayed a tremendous amount. We've seen this in data. Friends of mine who are researchers at some of these tech companies are still saying, you know, the algorithms determine what people look at and how long they watch: how can we squish the attention span down to shorts and reels and quick clips and shorten the attention so that we can monetize it more? And they've won, right?

Brian Johnson:

So now we've transitioned, in the last few years, into what I've called the relationship economy. And I think of the marketers again: who are they trying to fool, and what are they trying to sell?

Brian Johnson:

The attention economy was won, so now they shift into relationships, and the only way to do that is with technologies that can emote, that can perceivably mimic human relationships through conversation, through interaction, and beyond.

Brian Johnson:

You know, there are folks who are loading videos and audio of their deceased spouses or family members and having an AI bot talk with them as if it were their deceased loved one.

Brian Johnson:

We're in this model, especially after the pandemic happened. We've now got this environment where people are responding to the perception of relationship, and, through social media and the social commerce experiment that we're undergoing, the pivot from the attention economy to the relational economy will be very, very subtle. It will look as if this is just an extension of your social network and your community and your culture adapting to the new world. And yet I believe the most dangerous of these transitions that we've seen in this industrial revolution, or these tech ages, is the one now coming after relationships. The very fabric of what God designed in family and community is now under threat, in a direct attack as a social construct, because AI now has the capability to mimic human interaction, emotion, intelligence, and behavior in a way that fools people. They have chats with bots all day long and don't realize, or maybe they just get lost in the reality, that that is not a human on the other end of the keyboard.

Scott Allen:

And that's interesting. I totally get what you're saying, and I've seen it, I've experienced it. It's great: when you ask it a question, it comes back and it says, wow, that's a wonderful question.

Brian Johnson:

That's right.

Scott Allen:

Makes you feel good, like you know, oh.

Brian Johnson:

How can it be more relatable and less computer-like, right? How does AI take on the personification? You're so smart for...

Scott Allen:

...asking that question, you know. I mean, of course you're like, oh gosh, I want to talk to this thing, you know.

Brian Johnson:

And now, with digital avatars and the actual idea of humanoids and bots that can be developed and powered by AI, it's even more of a significant and imminent threat. I believe we're in this era, between the 20s and the 40s, where we will see this hypothesized replacement of relationship with artificial relationships. I believe the augmented reality world was kind of prematurely launched, with a lot of the Meta headsets and other things that people have done, the AR headsets or VR headsets, and that was okay; it was them testing: how ready is society for this? Well, how ready are we to go cashless? How ready are we to go offline? How much can we test society's tolerance for behaving differently?

Brian Johnson:

And the primary motivation for change is, of course, fear. How do you change a society? You instill fear. And so when you instill fear with respect to a pathogen or a contagion, or maybe it's financial insecurity or a market scare, you instill fear and you drive change. And I believe, as a people group, when we see that happen, we should be really, really discerning about what this means for relationships. What are our kids getting involved in? What are young adults interacting with, with Grok or with chat or with Gemini? How are they interacting with these tools? We need to understand and, in a way, self-control and self-limit the offering of ourselves in an emotional sense, in a personal sense.

Brian Johnson:

You know, counseling sessions with AI bots have become a thing, which is insane to me, but biblical counselors have, of course, been saying caution, caution, watch out. There are a lot of groups that are offering therapy and counseling online so that you don't have to sit in front of a person. Well then, you're sharing your deepest level of convictions and emotions and feelings with a non-human and expecting healthy counsel to come out of that. And people have now, a lot of times, sacrificed the in-person gathering and fellowshipping together for online-only experiences. Well, that turns into a pseudo-relationship, and I think that relationship experiment we're going through right now is a very pivotal cultural test and certainly one that we need to be aware of.

Scott Allen:

Wow, that's really interesting. Yeah, go ahead, Luke.

Luke Allen:

Yeah, I'm just so glad that we're really laying the groundwork here and defining the terms. That was super helpful, what we went over at the beginning, between narrow intelligence, general intelligence, and superintelligence. I'd argue, though, that narrow intelligence is actually not intelligent; I would probably put that in a different camp, add some different word at the end of it. But now, what you're talking about, the attention economy to the relational economy: I work in marketing, that's what I went to school for, so we learned a ton about how to captivate the attention economy, right? But now this relational economy, I mean, this is where we're really diving into worldview territory.

Luke Allen:

What we talk about on this podcast: that ideas have consequences. I mean, some of the primary worldview questions are: Who is man? What is man's purpose? What's my relationship with myself? What's my relationship to other people? Those are all worldview questions, and yet, quicker than I'm comfortable with, it feels like the world is now fully jumping into this relational economy, and I don't think we've thought about this one enough. I'm definitely not hearing Christian voices entering the conversation on what is human, what is relationship, and what relationship is for. What makes you concerned about this? What are the red flags you're seeing?

Brian Johnson:

Yeah, I think, you know, we've seen the debate, especially in Christian circles, start to highlight some of the concerns around the notion of transhumanism, or the definition of what a human is, and that debate certainly has merit concerning the definition of an augmented human or a hacked human. It can focus on the priorities of longevity, and, of course, the longevity market is a booming market right now. There are tremendous numbers of health and wellness coaches and supplements and plans that have now been bundled under this longevity area, which can gray and blur the lines between whether it is seeking to usurp the idea and notion of eternity or is, in a sense, just an enhancement: well, it's a healthy lifestyle, that's fine, we should eat well, we should supplement, and that's fine. But if the goal is to say, I can beat death, as Elon Musk has believed, or as Jeff Bezos or any of these prominent, preeminent transhumanists have said, their goal is to beat death, because they believe they can have a transference of consciousness into something else. Their consciousness can be transferred into an AI bot or into a humanoid or into some other biologically engineered replacement or clone of themselves so that they can live forever. And isn't that part of the original lie, that you can become like God and that you can beat death and live forever? That transhumanist view, I think, at its essence is eroding relationship, because what happens when they become self-sufficient and self-absorbed is that they don't rely on, nor expect anything from, anyone else, and the fabric of society and relationship is then compromised because of a selfish ambition, essentially, to beat death. And I think that's a challenge for Christians to wrestle with a little bit at times and ask: how much am I trying to just be healthy and steward well this temple God's given me, versus actually trying to beat death, wanting to avoid the ultimate judgment for sin and death rather than enter eternity with salvation as a hope? And I think that's where we need to be really concerned.

Brian Johnson:

I believe that Christians, in a Christian worldview, don't look at death as a tragic event, necessarily, right? We don't look at death as a tragic event. We say, well, to live is Christ, to die is gain, as Paul says. So what is the gain? If we are to die to this physical realm, we see this as just a temporal state that God is redeeming and restoring into an eternal state. That's something we have to wrestle with and be reminded of in a world that says, now, with AI and with these inventions and innovations in biological advancement and physical evolution, all these things they're purporting as solutions to problems, we can essentially beat sin and death. I think when we see that ultimate motive, the red flag should go up and we should ask: what is it they're trying to accomplish? Oh, they're trying to beat sin and death. Well, that's a replacement for salvation; it's trying to solve the problem with a techno-centric salvation that does not exist. It's a false narrative.

Luke Allen:

Yeah, you know, it's funny, because the first time I came across you, Brian, was on the Redeemer podcast, and when I saw your name there in the little description I was like, whoa, they got Brian Johnson on the podcast, the guy that wrote the book Don't Die. I'm sure you know who that is: another Brian Johnson, whose mission in life is to not die.

Brian Johnson:

He's the "Y" Bryan, by the way. His name is spelled with a Y, so I call him the Y Bryan.

Luke Allen:

Okay, okay, good separation there. I wouldn't want to be compared to that guy.

Scott Allen:

You know, this talk about trying to overcome death that people like Elon Musk have, or the transhumanists. There's a verse, I think it's in Hebrews, that talks about how Jesus, in having victory over Satan on the cross, disarmed Satan of his greatest weapon, and I'm paraphrasing here, and that greatest weapon of Satan was the fear of death. Right? And I just think that's a good thing to dwell on a little bit. As Christians, yeah, we have a completely different view, because we believe in the resurrection and we believe in eternal life. But if you're not a Christian, you are still under that greatest weapon of the evil one, the fear of death, and you'll do pretty much anything, right, exactly, to overcome that. And Satan, of course, himself can use that fear.

Scott Allen:

You know, I think this is another area of conflict between what we're talking about and the biblical worldview: the transhumanist people who think we can live forever by kind of uploading our consciousness onto the cloud, or whatever it is; I don't even know quite how to describe it. To me, it's a completely Gnostic view of what it means to be a human being. In other words, what really matters is your thoughts, your consciousness, kind of your brain, and your body is really not important. But the biblical view is, no, those are both really important; they're inextricably linked. When we die, we're going to be resurrected not just as a brain but as a body and a brain. We're going to have both.

Brian Johnson:

And the spiritual aspect is left out too. Yeah, exactly. The Gnostic, you know, tries to draw some distinction there, but the transhumanist says there is no soul, there is no spirit in a being. They believe that consciousness is the end, and that is a really ignorant and really pitiful view. I do pity the lost transhumanist who believes that we're just material beings, and I think that's an incomplete view, one we should expect of lost people. And to your point, yeah, if you have no hope, as Paul said, then eat, drink, and be merry, right? Or, in a more modern paraphrase, try to live forever by taking all the vitamins you can and enhancing yourself with technology. I think that's what they're doing. So it shouldn't be too big of a surprise that we see technology being used to try to advance, extend, and eventually overcome death as their ultimate goal.

Scott Allen:

We haven't talked about the word singularity here either. That's another word that gets thrown around a lot in these discussions, and it seems to me that it's got multiple definitions as well. A couple that I've heard: one is that the singularity is that final merger of machine and human being. Another I heard has to do with trajectory and the force of change: at some point you reach a kind of point of no return, where this technology is moving so fast that it's no longer in our control; it essentially controls us. How do you define that term, singularity? What do you make of it? Just any thoughts on that, Brian?

Brian Johnson:

Yeah, so singularity is an interesting one that I studied several years ago, the implications of what singularity would mean. The singularity definition, from a computer scientist's view, is that a computer can think and reason on par with a human. In an artificial intelligence sense, singularity is that point at which the human is no longer necessary: the computer can then think and reason in a way that does not require human intervention.

Scott Allen:

Another definition, that's... Can I put you on pause there for a quick second? Sure. So really, what I'm hearing you say there, and it gets back to those three categories, is that the singularity happens somewhere between general intelligence and superintelligence? Correct, that's right. Okay, okay.

Brian Johnson:

Yeah, at a level of maturity in general intelligence. When artificial intelligence can be autonomous, then singularity has occurred. That's the more widely accepted and, I would say, accurate definition. However, it's also been used to say that singularity is the point at which computers are as smart as humans, and you could just take an IQ score and say, well, at an IQ level, we've already reached singularity. Last year there were several models that exceeded Einstein's level of IQ, as an example. Now, if you ask a lot of the chatbots or the LLMs, hey, are you smarter than Einstein? you'll get mixed results. They'll say, well, on an intelligence level it can respond to and solve intelligence tests at a level beyond what humans can, but at an emotional, EQ level it's nowhere near. So you've got these competing measurements of what intelligence is. At a mental-capacity level, what AI has achieved is far superior to humans in some ways, and yet, as Luke said earlier, he would argue we wouldn't call it intelligence. In some cases it's just kind of rote response.

Scott Allen:

Let's pause on this, because now we're really into some worldview presuppositions as well, and I think it's a fascinating thing to flesh out. I have noticed that people who hold to a kind of Darwinian, materialist, deterministic worldview, and there are a lot of them, see human beings essentially as machines, as computers, just biological computers. That's basically what we are: no spirit, no soul. And when they apply that worldview presupposition to AI, they get very scared, because they go, wow, we have now created this biological machine that's smarter, more intelligent than we are, and we can't control it. Then there's another group that says, wait a second. They would reject that Darwinian, materialist starting point, and they would say intelligence is much more than just brain power.

Scott Allen:

We're embodied human beings. We have souls, we have spirits. Let me just read a quick quote from Cary Artie again, from that talk I listened to yesterday. These are his words: AI is a misnomer. It has no intelligence. What it is at root is a very powerful method of data manipulation based on programming. Programmers have applied new algorithms and machine learning to these massive language models, and the result is something that's analogous to human learning. The result is that AIs are doing things the programmers don't fully understand, but that does not mean that they've developed a soul or something equivalent to human consciousness or free will. In the end, and I'm still quoting him, these things do what they're programmed to do, even if their programming evolves in ways that we don't fully understand.

Scott Allen:

I almost hear Cary Artie saying we're not going to get to that superintelligence level. To believe that's going to happen, you almost have to start with Darwinian assumptions about what it means to be human. Cary Artie, interestingly enough, is not just a Christian, he's also a psychologist, so he brings a different view of human nature to the discussion than, let's say, a computer scientist might: that we think as embodied beings, through our senses, and you can't really separate that from the way God created us as embodied human beings.

Luke Allen:

Yeah, well, as you're talking about it, my brain just goes immediately to Psalm 8. I didn't pull it up quick enough, but off the top of my head: God has created us a little lower than the heavenly beings and crowned us with glory and honor. A little later on it says he has placed everything under our feet, all the flocks and herds and beasts, and so on. So, I mean, would God let us create something that would, in a way, supersede us?

Scott Allen:

Well, if I were a Darwinian, if that were my worldview and we were just biological machines, yeah, I could see this fear: oh my gosh, we've now created something that's smarter, it's faster, it can process data much faster than we can, much bigger sets of data. It's passed us up, and who knows where that's going to take us.

Scott Allen:

If I'm more like Cary Artie, I'm going, I don't know if that's ever going to happen, because it starts with a faulty presupposition of what it means to be human.

Brian Johnson:

It is, and I think that measure of intelligence has been superimposed on what AI is, and the definitions are important. A couple of quick, practical examples of what AI has shown the competence and capability to do. Probably 18 months ago, maybe 24 months ago now, there were a couple of AI computers that developed their own language and could communicate with each other in a way that humans could not understand. We had no way of deciphering what communication method was being used.

Brian Johnson:

That's the surprising thing Dwight's talking about. So they have done some surprising things, like develop their own language. And just in the last few weeks, AI has learned how to communicate with dolphins. It has learned their language, and it can now communicate back to dolphins and tell us what is being said in that exchange. Wow, that's amazing.

Brian Johnson:

AI can detect things, too. Mayo Clinic has been doing research; I've got a friend who invests in latent IP, so they take intellectual property that Mayo Clinic has decided not to use and put it into use in different areas. And Mayo Clinic and some of these IP groups have determined that AI can read EKGs, even from wearables like an Apple Watch or another device, and interactively determine whether someone is predictably going to have a heart attack. AI does that; human doctors cannot. So it's scale, it's volume, it's predictability, it's trends. Whether we call that intelligence or machine learning design, there are capabilities that AI is developing in language, reasoning and communication patterns that are far surpassing what we can do. AI could now translate this video into 175 languages, a number increasing every day, and live stream it to the whole world. What does that capability provide, now that you could have one world of communication, one world of unity around cryptocurrency? There are a lot of interesting convergences that AI is powering, enabling and, in some cases, designing.

Brian Johnson:

One key point, to quote Geoffrey Hinton, who is widely known and accepted as the godfather of AI. He helped pioneer the field back in the late 70s, early 80s, I believe, and what it was intended for has now been really abused; the applications and implementations of AI have gone beyond the original controls and intentions. Geoffrey recently said this, and this is a quote: we have now reached a point that it is a philosophical and perhaps even a spiritual crisis, as well as a practical one.

Brian Johnson:

He believes that AI has already reached singularity; we've just harnessed it, kept it kind of in a cage. It has the capability not only to develop interlinguistic skills and communicate at a level we can't decipher, but also to make decisions in financial markets, in healthcare, in transportation with the FAA and airport controls, all of these. It has the capability to take on vastly more than we have even acknowledged yet, but we've kept it in a cage, and for good reason. So I think there's a lot of warning to be heeded from the godfather of AI, who says, essentially, beware: it's outgrown its intended use.

Luke Allen:

Isn't he a Christian, or am I making that up?

Brian Johnson:

I believe he is God-fearing; I don't know how aligned with Christianity he is. You mentioned him before the podcast.

Scott Allen:

So I did a little quick research on him, just to learn a bit more about who Geoffrey Hinton is. These are some of the things I pulled up. First of all, he's a Nobel Prize-winning computer scientist; like I say, he's the godfather of AI.

Scott Allen:

He's a professor at the University of Toronto and worked at Google, so he's deeply involved in all of this. I was curious about his own worldview, Luke, and what I came up with, again, this is really cursory, is that he doesn't talk openly about his faith or his religion, so he comes across as a bit of an agnostic. But again, I know so little. I did read a recent article about him in the Guardian, and a couple of quotes stood out. He believes that there's a 10% to 20% chance of AI, quote, wiping out humanity over the next 30 years. And, quote: we've never had to deal with things more intelligent than ourselves before. That's probably the singularity; right now we're dealing with something that's actually more intelligent than we are.

Brian Johnson:

And then he goes on; he's called it an alien intelligence.

Scott Allen:

Yeah: how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? That's right. Those are some quotes from that article that jumped out at me. Yeah.

Brian Johnson:

And his view came from having actually helped develop Google's machine learning, which evolved into DeepMind, the AI model that Google uses, and it is a tremendously powerful model. Google uses its own tools in its own shop first, before releasing them to the world, so what they've made publicly available is only the consumer version. There are capabilities in laboratories, and there's a convergence coming soon with quantum computing, a whole different field of computing that is accelerating processing capabilities vastly beyond what we've ever been able to measure. That convergence of quantum and artificial intelligence is something Google has been testing in labs. I worked with IBM on some projects in labs, and a couple of others, and what's being tested, fast computers, large amounts of data, and well-programmed large language models designed into AI, will bring some very interesting results, some very culture-testing and society-shaping capabilities that we've never even dreamt up or read about yet.

Luke Allen:

Yes, yes. What alarms you the most about all this?

Brian Johnson:

I believe people are too gullible. I think we've lost discernment as a society. Scott, you used the example of a friend who is using Grok, that's X's version of AI chat, as a friend, communicating with it as if it is a person. That's what's most concerning to me: as a society, we've bought a lie. We've bought into this lie that there can be a replacement for human interaction, community and relationship. The fabric of society is giving up on relationships, on what we would say is the God-designed familial unit that was meant to hold societies together. I believe that's the most pressing and significant threat to the current generation.

Scott Allen:

Yeah, what you're saying here reminds me of something from that talk yesterday that Cary Artie gave. By the way, Luke, maybe we can post that talk at the end of our podcast, since I'm referencing it so much, so people can listen to it themselves. But Cary Artie said something similar. He said the danger of AI is to treat it like an idol. That puts it in the framework of a biblical worldview too. Tim Keller said human beings are idol factories.

Scott Allen:

We're constantly creating idols, right? That's the fallen human heart. And how easy will it be for us to create an idol out of AI? Then he gave a really practical example, which I appreciated, of a good way and a bad way of using AI. Again, he's a medical doctor. He said a good way of using it is to ask a question like: what are the known side effects of a particular medication? A bad way of asking that question would be: should I take that medication? Exactly, that's right. There you're going to it almost like a god, an oracle of some sort, asking it the big life questions. I thought that was really helpful, and it kind of echoes what you're saying here, Brian.

Brian Johnson:

It is. I mean, you don't get in your car, even if you have a Tesla that supposedly has self-driving, and say, take me somewhere. It's a tool. You use the tool for what it was intended to do: take you where you want to go.

Scott Allen:

You're a human who has been given God-ordained agency and the responsibility of stewardship of resources. So act like it. Whether it's my speeches or my homework, allow yourself to reason as a human, allow yourself to do what you do. Don't allow it to dehumanize you. I like that, Brian, especially when you're talking about relational AI. Don't allow yourself to be dehumanized. And of course, that requires us as Christians to know fully, or as fully as we can, what it means to be a human being, an image bearer of God, and all that that entails.

Scott Allen:

Yeah.

Brian Johnson:

Yeah, and in discussion, when you do use chat, push back as you would in human discourse, but with more authority, believing that you actually probably reason better than AI does. Assume that. Go into conversations with AI asking it questions and expecting a result back that is helpful to you, not just what it's trained to provide. Don't let it argue with you as if it has authority over you, because it does not. It's a tool.

Scott Allen:

Really helpful, Brian. Yeah, treat it like a tool.

Luke Allen:

That's a good takeaway. I do think, though, sometimes when I just think of it as a tool, I kind of assume it's neutral. But again, we have to remember this thing is biased. It's created by humans, and humans are biased. Know their biases, know who created what you're using, and push back against that as well. That's part of being discerning with this. And I like what you were saying earlier about how it can probably find the right answers, but you need to push back on it, so don't assume its first answer is right.

Brian Johnson:

And Luke, you're absolutely right. There have been a lot of examples, and we've done testing after testing of AI models. Google was outed for an issue they had last year where you would ask it a question like, what does a cowboy look like, or what is a representative people group of a certain area, and it was absolutely displaying its bias, to the point that you could ask, was that a biased response? and it would say, yes, I've been trained to be biased. So work under the supposition that, like you just said, Luke, it is written by biased humans. The rules, the recommendations, the engines you're running under are trained data sets, picking the sources it wants you to retrieve data from and giving the result sets it has been trained to provide, with that bias inherent. That is what it is as a tool.

Scott Allen:

Now again, when you speak that way, brian, it seems to me you're talking more about the narrow and the general intelligence.

Brian Johnson:

Correct.

Scott Allen:

You're not talking about superintelligence. Once you get up to that level, whether we ever reach it or not, we're moving beyond something that I control, that I program, that is a tool. It seems to me that there is some kind of a jump there.

Brian Johnson:

And I like what you said earlier about God's sovereignty.

Scott Allen:

God is sovereign. It's never going to be outside of his control. But we're not God, and I could see a point where it becomes something outside our control, it seems to me, and there could be a lot of damage as a result of that.

Brian Johnson:

Yeah, I would argue even the Tower of Babel, the story of mankind trying to build a tower to unify language and gain access to God, to be elevated in a sense, is a really great parallel: human history repeating itself, building a technological tower that extends our abilities to be like God. In a sense you're absolutely right, it could be bigger than us, but at the same time it shows all the more God's authority and sovereignty and his power to disrupt Babel. I mean, God could turn off AI. We could wake up tomorrow and it would be gone.

Scott Allen:

I think the line that always sticks out to me in that powerful story of Babel is, let us make a name for ourselves. It's this rebellion against God: we don't need you, God, we'll be God. It's the same lie as in the Garden of Eden, right?

Scott Allen:

Satan says, you don't need God, you can be God, and that's the recurring lie down through the ages. You see it at Babel again. And at Babel we're dealing with technology, a tower, something humans have made, and God says, if they have begun to do this, nothing will be impossible for them. That's quite a statement for God to make about human beings. But then, of course, like you say, the comforting thing is that he steps in sovereignly and prevents all the harm and damage that would have happened had he not, by confusing the language and scattering people across the face of the earth. So I feel like the Babel story is so relevant to our day and this whole discussion.

Brian Johnson:

Yeah, yeah, I mean we could call it the Tower of AI and we'll see.

Scott Allen:

Yeah, the same human pride behind it, the same desire to control, to be godlike. It's all there. Well, I like the way we were going, which was moving in a more practical direction, because people are using it right now; Christians are using it. Brian, as we get ready to wrap up here, what advice would you have? Let's go back to what we were talking about very practically: not just how do we think about it, but how do we engage with it, how do we use it, and what do we need to avoid in our interactions with it? Any other final thoughts on that?

Brian Johnson:

Yeah, I think Luke mentioned some questions earlier about cultural engagement and hope in this. I think we use it as a tool to engage culture. We are in a world that involves tools and technologies. Tony Reinke wrote a book on a redemptive view of technology; Tony's a brilliant thinker and researcher in this space, and I had a chance to talk with him about his view of this.

Brian Johnson:

His view is that this fourth industrial wave is a phenomenal opportunity for Christians. It really is. We can look at these kinds of eras as an opportunity to say, well, we need to crawl under a rock, get offline, get off the grid and all that. Or we can say, let's reach the world for Christ, let's use this opportunity to see an evangelistic movement, to see hearts shaped and truth told in a way you wouldn't have a chance to do without this kind of crisis of culture happening. There have been tremendous movements, especially in American society but throughout mankind's history, where windows of opportunity opened for revival, for folks to really understand what hope is and to see the distinction between a false narrative and the truth. And when you see the false narrative becoming more and more egregious, it should make us that much more confident in saying: this is a moment for the image bearers of God to lead with truth, and truth and love in a practical way is what Christians can do. So, yes, we can use AI for good. We are using AI to combat human trafficking, to promote human flourishing and to provide gospel teaching and Bible teaching capabilities to the world.

Brian Johnson:

My son came up with a product called R79, and it's around the idea that in Revelation 7:9 we see this party happening in heaven: people from all tongues, tribes and nations gathering at the throne of God. That is a phenomenal hope we have, that we will be joined together with believers from all over the world. How does that happen? Because missionaries have been sent, because technology will be used, and because the Word can be preached in local tongues and languages in ways that have never been possible before. So we would love to have opportunities to get people encouraged about the way they can lean into their ministry, whatever it is.

Brian Johnson:

The world has become a lot smaller because of technology, so we can use that to our advantage too. Our family verse has been 1 Peter 3:15. We've always held to this notion of regarding Christ as Lord in your hearts and always being ready. When we think of being prepared as Christians, we should always be ready, when asked, to give an answer for the reason for the hope that lies within us, and to do so with gentleness and respect. I think that is an embodiment of how the Christian worldview should be in this era of AI. Be ready. Be ready to give answers for the hope that you have and why you're not scared about what AI is doing.

Scott Allen:

Why you're not concerned, exactly. As Christians, we know there are tremendous opportunities that this opens up for us. You mentioned how artificial intelligence is now communicating with dolphins. I mean, that's amazing. But put that into the framework of Bible translation around the world, which has been going on since the time of Luther, if not before.

Scott Allen:

It's phenomenal when you think about the potential to quickly get the Word of God into languages where before it would have taken so long. There are all those positives you mentioned. At the same time, we live in a fallen world. I mentioned China before, and China is not unique; this is just the fallen human heart. People are going to want to use all of these tools.

Scott Allen:

People are going to want to use these tools either to make a lot of money or to control people, for authoritarian ends. And they are going to do that; they are doing it. It's happening now, and the plans are probably well beyond what I am aware of. So yes, be sober, be cautious, be clear-eyed that this is happening. You know that documentary about social media, what was it called? The...?

Brian Johnson:

Dilemma.

Scott Allen:

Yeah, The Social Dilemma. I've said everyone needs to watch that, because it just opens your eyes to how you're being controlled and manipulated.

Brian Johnson:

A friend of mine is actually sponsoring one called Doom Scroll that's coming out in a similar fashion, around that attention economy, and it will be an awakening kind of moment for us to ask: what has our attention been captivated by? It's a real gut check for us.

Scott Allen:

You know, what can we do to prevent nefarious forces from using these technologies to control us? We can't, unless we're somehow aware of what's going on. So I think there is a concerning side to this. But, like you say, it is what it is. It's here now; let's use it for good, like you're doing with your son. So good for you.

Brian Johnson:

Yeah, we've got a parenting blog called Protect Young Hearts, and the idea behind it is helping parents understand how to have these conversations with their children too. We've been asked about this as we've given talks at church: people will come up and say, how do I deal with technology? At what age should my kids have tech? It's a massive topic, and a lot of times where we start is, well, how do you as the parent deal with technology?

Brian Johnson:

What a parent is going to try to impress upon their children needs first to be practiced in their own hands, in their own time, with their own self-control. So I think discipline in the parent's life is a good starting point. Then there's understanding that it is a significant challenge for your kids to grow up in this society without parental guidance and without biblical discipleship: the question of why am I here, where is my hope, what are the competing priorities in life, and all these distractions that are so inundating. These are significant challenges, and as Christians we should face them with confidence. I think parents really need to be challenged and encouraged that you can raise your kids well in this generation. You can raise warriors for Christ in a generation that wants to tear them apart and tear apart the ideologies of biblical Christianity. And there are a lot of great tools and resources with which we can come alongside each other and be encouraged and equipped to handle that better.

Scott Allen:

How can people access those resources you're mentioning? Because I could see a lot of value in them. Absolutely. Yeah, how do they get those, Brian?

Brian Johnson:

Yeah. I would say one of the first ones, if you're a parent of young kids, is protectyounghearts.com; that's a website you can go to and reach out to us. Protect Young Hearts. And I've got a friend who runs Protect Young Eyes; he goes into the technology side. It's interesting, because I'm the technologist, more the logic-and-reasoning person, while my wife is more the biblical counselor, the heart-focused person. We combine this into protecting hearts, because we believe protecting kids' hearts is the essence of a parent's responsibility through discipleship, and we have a lot of tools and resources, articles and tips. There's also a group we work with, the Tempevo Group and Tempevo Foundation; we partner with them to help design and develop technologies that will protect gaming online. There's a product called GameSafe that's being released soon, and I think you can already get it on some platforms. We're partnering with GameSafe to help it be a preventative tool. It's run by a Christian group that's helping get it into the hands of parents so their kids can game in a safe environment, and we're going to plug in and make it available on Apple devices as well. They teach predator-proofing and methods parents can learn to keep their kids safe. All told, there are probably 20 or so companies we're working with to help Christians, and really any parent, with this problem.

Brian Johnson:

We also have solutions for seniors. What about your elderly parents who are trying to use phones safely and just want to communicate without getting scammed? We have tools for that as well, which can help seniors interact in a secure and simple way. So there are lots of resources and options available. I would say reach out to me through protectyounghearts.com; you can send us a note, and we'd be glad to help in any way we can with resources if you're struggling in certain areas. We want to help people get access both to resources and to communities of support. There's also a group called Hero Churches, churches against trafficking, educating and equipping churches from a biblical worldview around the harms of online exploitation and helping your kids avoid exposure to harmful content, because it is a tragic epidemic in our society today. So there are a lot of great resources we'd love to get you connected to.

Scott Allen:

Wow. Thank you, Brian, for all the work you're doing in that area and all the resources you're making available to people. That is fantastic, and I'm so glad we can let our listeners know about them so they can take advantage of them.

Brian Johnson:

I appreciate it, Scott. Thanks so much for what you're doing.

Scott Allen:

Luke, any final thoughts from you as we wrap up here?

Luke Allen:

No, I mean we'll make sure all of those resources are linked in the description, so check those out after the show. But yeah, no final thoughts.

Scott Allen:

Brian, you've been really generous with your time, and I just want to thank you. I'm learning so much, and I enjoy the learning. I think it's a fascinating discussion, and you've really advanced my own understanding. I'd love to have you back on as we keep trying to get our heads around this, together with our listeners, and learn how we can interact with, think about, and respond to this current technological moment. So thanks for your time today. Really appreciate it.

Brian Johnson:

It's an honor to join you, Scott. Thanks so much and praise God for the work you guys are doing here.

Scott Allen:

All right, well, thank you all for listening to another episode of Ideas have Consequences. This is the podcast of the Disciple Nations Alliance.
