#E47 Will AI Bring About The End Of Humanity? With Theo Priestley & Beth Singler

About Theo Priestley

Theo Priestley is an esteemed futurist, internationally acclaimed speaker, author, and a foremost authority on the future. His expertise in 'hyperchange' positions him at the vanguard of an enterprise trend that transcends digital transformation: the confluence of emerging technologies such as Spatial Computing, Artificial Intelligence and Generative AI, Web3 and the Metaverse, and why they should not be treated as separate trends. Over the past decade, Priestley has lent his expertise to the strategic foresight and innovation efforts of leading technology providers and Global 2000 enterprises, including Capgemini, SAP, Siemens, Bosch, AON and Hewlett Packard, and has contributed more than 500 articles to Forbes, The European, WIRED, Huffington Post and beyond. He has offered his insights to VentureBeat, GigaOM, The Times' Raconteur and through radio and televised news interviews.

About Beth Singler

Professor Beth Singler is the Assistant Professor in Digital Religion(s) at the University of Zurich. Prior to this she was the Junior Research Fellow in Artificial Intelligence at Homerton College, University of Cambridge, after serving as the post-doctoral Research Associate on the "Human Identity in an age of Nearly-Human Machines" project at the Faraday Institute for Science and Religion.

Beth explores the social, ethical, philosophical and religious implications of advances in Artificial Intelligence and robotics. As part of her public engagement work she has produced a series of short documentaries. The first, Pain in the Machine, won the 2017 AHRC Best Research Film of the Year Award.

Beth has appeared on Radio 4's Today, Sunday and Start the Week programmes discussing AI, robots, and pain. In 2017 she spoke at the Hay Festival as one of the 'Hay 30', the 30 best speakers to watch.

Read the HYPERSCALE transcript

[00:01] Briar: Hi everybody and welcome to an episode of Hyperscale. I'm recording this episode as a matter of emergency, in response to a recent government-commissioned report, covered in Time magazine, warning of an extinction-level threat to humanity posed by AI. I believe it's incredibly important to address this issue and critically examine it. Is this a genuine concern for the survival of humanity, or merely media- and government-driven scaremongering? To help me pick this apart, I have with me today Professor Beth Singler and Theo Priestley. Welcome to the show.

[00:45] Beth: Thank you.

[00:46] Theo: Thank you.

[01:00] Briar: The US government has been urged to move quickly and decisively to avert substantial national security risks stemming from artificial intelligence. The recent article and report in Time magazine claim that AI is an extinction-level threat to the human species. Beth, what are your thoughts, is this justified?

[01:23] Beth: Well, it's complicated. I would like to say, first of all, this is not the first time that the Skynet has been falling. There have been many of these loud prognostications about AI and extinction and existential risks; it's an ongoing conversation. I mean, it really has its roots in some deep cultural forms as well. What I think is particularly interesting about this report is how much it actually unpacks some of those hopes and fears by actually speaking to these experts and finding out their opinions, what that reflects about their position on where AI is going, and almost the forms of community that are shaped around this conversation as well.

[02:08] Briar: What are your thoughts, Theo?

[02:11] Theo: I'm with Beth on this one. I think this isn't the first, and it's certainly not going to be the last, report we see from governmental agencies warning us about the existential threat, or the perceived existential threat, of AI. What is concerning overall, though, is that once these reports get released, people make up their own rhetoric. Certainly the gutter press create their own narrative around these reports and sensationalize based on what we've already seen in popular science fiction and culture. And the experts that are consulted seem to be the same ones, a very small echo chamber, and they all have their own agendas: they advise various other AI companies, whether openly or behind the scenes, and so they have vested interests in a sense. So I think it is good to dig into this, especially in this particular podcast and on this particular topic. But it's also good to take each report we receive, and the narratives that come with it, with a pinch of salt.

[03:32] Briar: So in this article there was a report, a finished document titled An Action Plan to Increase the Safety and Security of Advanced AI, and it recommended a set of sweeping policy actions that, if enacted, would radically disrupt the AI industry. It suggested that Congress could, or should, make AI illegal. Do you really think that this is the best solution?

[04:04] Beth: In a way, it depends what you mean by AI, because the integration of algorithmic systems and automated decision-making systems into our society as a whole is impossible to unpack now and take apart. So if you are talking about the future of AGI and where technology might be going in some of these narratives, then you can start to have this conversation about regulation, and a conversation about whether that's even feasible.

[04:30] Briar: So, Beth, let's start, just for the listeners, by first of all identifying and talking about the different kinds of AI. I think that would be helpful.

[04:42] Beth: So AI has a longer history than most people are aware of; the term actually goes back to the 1950s, with the Dartmouth Summer Research Conference, where 10 men got together and decided they would work on this thing called AI and solve it over the summer. That was the plan: they'd create something that could do basically everything a human could do with their own intelligence. Of course, they didn't do that in the six months that they set themselves, but that kind of goal is still in the DNA of the discussion around artificial intelligence, whether we're talking about very narrow forms of artificial intelligence that can do one task particularly well, or AGI, artificial general intelligence, which some people think is going to reach a level where it can do everything that a human can do. But built into that view is a sort of techno-determinism: that AI is going somewhere, that it's going to become something in the future. And I think that's still a very dominant narrative in a lot of these conversations about existential risk.

[05:35] Briar: Some people, when talking about AI quite broadly, do liken it almost to a godlike figure in the future that is either going to create a utopian future, and do better at managing the world than, say, the governments, or a dystopian future where it will become smarter and, I don't know, turn against us. What are your thoughts about this?

[06:00] Beth: Yeah, well, I have a background in the study of digital religion and religious studies, so these narratives around God-like superintelligent AI are obviously very much in my wheelhouse. I'm very interested in how we have a very simplistic view of a monotheistic deity and then map that onto where AI is going: we say it'll be omniscient and omnipotent and, please, hopefully omnibenevolent as well. But really it's about imagining figures above us with super agency that can control and understand everything that needs to happen for our betterment. And then conversely, people worry about it not being omnibenevolent, being a destructive God-like figure, a wrathful God. And this is really just a continuation of our cultural forms; most cultures have some idea of a deity or deities, and it's mapped onto this technology as we imagine it into the future.

[06:53] Briar: Theo, I know that you are always quite outspoken about the future, and certainly when I had you on my last podcast, you told me that in the next 20 years humanity is going to see a huge shift, perhaps even bigger than when we went through the industrial revolution. Are you still of this opinion, and how do you foresee the future coming to be? And do you believe that we need to be doing more to put regulations in place now?

[07:24] Theo: I still believe that there are going to be fundamentally large shifts, technological shifts and societal shifts, over the next 20 years. 20 years sounds like a long time, but it's not really. I mean, I was shocked when I received an email from LinkedIn, for example, saying I'd been on the platform for 19 years; I had no idea that I'd wasted so much of my time on a social network that actually provided me with zero value. So it's not a long period of time to consider anymore. Think about the pace of what we've witnessed over the last 18 months to two years, in terms of how large language models have come together and how people have been experimenting with them, with greater levels of autonomy, for their own purposes and their own lives.

[08:12] Theo: And then also for development purposes. In the next five to 10 years, we'll see that amplify; you can see that just by the level of spend and investment going into these tools. So they will become all-pervasive. But the main thing to consider here is that they are still tools. Ultimately we still have agency over them, and it's that particular agency, whether we want to relinquish control of it to the tools, the AI, and the companies that build them, that is the fundamental shift we'll see over the next 20 years. Because the more automation comes in, the more apathetic, I guess, we become about taking back control. I've actually seen some of that happening in the background in terms of cultural shifts, and that's across different generations as well.

[09:14] Theo: People are starting to rail against subscription models. They're starting to understand that subscriptions mean they don't actually own anything digitally. They've lost control of privacy. And obviously we've seen how that has played out with AI, because AI needs all of our data to be able to train itself effectively. So that comes at a cost, and that cost is privacy, and that cost is the IP and creativity that we actually produce. What was I talking about?

[10:00] Briar: You were saying that we've been feeding them all of our creativity and IP.

[10:38] Theo: Beth, can you remember what I was actually waffling about?

[10:40] Beth: So the question was about whether this was a significant change in where things are going. 20 years is not a long time.

[10:52] Theo: Generationally, I've seen people rail against subscription models because they realize now that they don't actually own anything digitally. And across different generations, and I'm not sure whether it's from a nostalgic point of view, people are starting to build physical collections again, of art, of music, of film, because they realize that they don't own anything: digitally, a service could be switched off overnight, go bankrupt, disappear, and they'd have nothing to show for it. And I think there's that sense of, well, what do I represent in a world that is governed by AI and ruled digitally? Do I actually exist? And if I leave this world, what do I leave behind, where is my footprint? Because if I leave a footprint digitally, it doesn't exist; if I leave a footprint physically, then there are stories around it. And humans are very driven by stories. So I think there is going to be a societal shift, not only in terms of technology; culturally, people will want to retain that agency. And I don't think they realize it yet, but what they're doing just now, canceling subscriptions and buying up old media, is actually the beginnings of a cultural shift.

[12:18] Briar: Anything you would like to add to this, Beth?

[12:20] Beth: Yeah, I guess to play devil's advocate a little bit: I think you're right that there are certain groups and advocates for artisanal returns, going back to human-made things and physical media. But I'm also seeing people who are, as I say, relying more and more on this view of AI as a super agency that has the right answers all the time. The assumption that computers and machines and AI are more rational than humans, that we should rely on them more and more because they're going to make the right decisions because of the data, because it can speak back to us, also encourages the view of it being a person with its own ideas and its own agendas that will align with this utopian future we mentioned earlier. So there's a spectrum of responses. There are going to be people who push back a bit.

[13:04] Beth: And I'm not dissing the Luddites at all, I think they have some very good points, so we'll get a Neo-Luddite version. And then we'll also get people who go along with the flow a bit more and say, well, yes, of course we want AI to make large-scale decisions for our society, because it's smarter, because it doesn't have millions of years of human evolution that led to war and strife and difficulties and interpersonal problems. It's purer, it's rational, it's cold: all the narratives we get from science fiction. So we have Data in Star Trek; Theo knows how much I love Data as a representation of the logical form. And we adopt that, and we think AI is going to head in the same direction, because the science fiction has prepped us for it.

[13:48] Briar: When people talk about the singularity, AI achieving singularity where it can make decisions and do all of these things without us, they say that as soon as it's achieved there's no going back: the genie is out of the bottle, Pandora's box is open. So first of all, when do we think the singularity is going to happen? I know a lot of you futurists and professors and scientists tell me that you don't like to put dates on things, but is this really a five-year thing, like what the report suggests for artificial general intelligence, or perhaps 40 years, like some of the experts such as Ray Kurzweil are suggesting?

[14:34] Beth: I do enjoy that Ray Kurzweil's next book is called The Singularity Is Nearer; it's creeping ever closer. I mean, those terms have a lot of slippage between them. AGI originally meant purely being able to do everything that a human could do, with complete malleability and plasticity to its intelligence. And the technological singularity borrows from cosmology and the black hole: it's the horizon you can't see beyond. We are not meant to understand what an AI singularity thinks, wants, desires, is going to do. I'm trying to remember who it is, Charles Stross once called the concept of the technological singularity in science fiction the turd in the toilet bowl, because it basically presented a scenario that no science fiction writer should be able to write about, because you're not meant to know what it's going to be like. But these things become very slippery; people move between the terms. If it's something that gains consciousness, then we start getting our narratives about what it means to be a person, what AI will be like as a person, what it will desire. And it really reflects back what we think humans are like. So if we're panicking about it attaining consciousness and deciding to destroy us all, that says an awful lot about what we think we are like as conscious beings.

[15:50] Briar: Well, I saw this funny thing on Reddit the other day: GIGO, garbage in, garbage out, and someone described AI that way. So obviously AI is a tool; it's going to be looking back at us, it's going to be reflecting humanity. But how will we know if it's conscious? Is that even a thing, could that even happen? Or, because of the algorithms and the way it's constructed, and because it's reflecting back at us, might it just seem like it's conscious, like it has digital consciousness?

[16:31] Beth: So, I mean, we've had a lot of different narratives about how you identify consciousness in machines. The Turing test comes up a lot as a way of identifying real intelligence, basically on a pragmatic level: as long as it seems intelligent, that's as good as being fully intelligent the way a human is. But you don't need to get to that level of consciousness before humans start ascribing intelligence to even the simplest of chatbots. So humans fail the Turing test all the time. Then science fiction comes along and says, well, there are other capacities we could look for, like empathy and emotion, the Voight-Kampff test from Blade Runner. We don't really have a model that works even for humans. I'm not suggesting humans aren't conscious, but some people might, and say, well, I know I'm conscious, but I have no proof that you are. So we've had these conversations for thousands of years already, and now we're just problematizing it further by talking about AI as well.

[17:27] Theo: Yeah, I think humans are very good at anthropomorphizing as well, at taking inanimate objects, falling in love with dolls and things like that, and imbuing them with some kind of personality and empathetic model. We also don't truly understand the nature of consciousness anyway. I mean, Roger Penrose, a physicist, has argued that consciousness is actually a quantum phenomenon that sits in the brain and is somehow sustained at room temperature, which is remarkable, because we all know quantum mechanics doesn't normally happen at room temperature. But he's arguing for a consciousness that is quantum-based, that can be quantified and measured, and that could essentially be controlled one day. Whereas obviously, if you're very religious, then consciousness is almost like a part of the soul.

[18:24] Theo: It's not part of your body; you can't see it, feel it, touch it, track it, etcetera. So we don't fully understand what it means to be conscious or what consciousness is, and by that token we can't really define and measure what would let us call an AI or an algorithm conscious. We anthropomorphize it and say, well, it's talking back at me and giving me answers, and it says it's sad or it's happy, so therefore it has emotion, and therefore it must be sentient, or it can feel pain if I decide to turn it off or close my laptop, as if passing a Voight-Kampff test means it's conscious. But using words in the right order does not mean that you are, one, intelligent; two, conscious; or three, sentient. So, like Beth says, we've got all these really old-fashioned tests still based on old-fashioned notions of what it means to be an artificial intelligence.

[19:19] Theo: So we have to go back and revisit what those tests should be. And then we have ever-moving goalposts in terms of what AGI is. As Beth said, we've got narrow AI, which does one thing: it's like a Tesla, which seems very intelligent but couldn't do your homework, though it'll drive you from A to B very well on Autopilot. And AGI is supposed to be capable of everything that a human can do, including autonomy and creativity. But a few years back, AGI meant that, if it had a corporeal form, it could perform tasks exactly the same way a human can. So it needs a robotic form. And we've seen with the Nvidia keynote this week that there are robots being developed, humanoid robots, other types of robots.

[20:10] Theo: And so we're now seeing large language models married with bodies, to be able to understand, or certainly enact, instructions in the physical world. And that of course brings another layer of intelligence, because now these LLMs, or AIs, or algorithms will have physical context applied to what they're being told. A bottle is no longer just an image related to the word 'bottle' somewhere in their training: they can now actually see it, pick it up, hold it, smash it, or put it down somewhere. And that gives them context for what a bottle is, feels like, looks like, how it interacts with the physical world. To me, those are steps in the right direction for defining what an AGI is, and then for figuring out what the right tests are to determine whether it is digitally conscious. I think we should remove the human-consciousness element from it and actually come up with a constructive definition of what being digitally conscious is.

[21:20] Briar: I think that's so true. I actually had a gentleman on my podcast recently to discuss consciousness, and after an hour and a half of talking to him, I had more questions than before I started, because it's so complicated; and you're right, why do we not know yet what human consciousness really is? But there are obviously lots of very interesting things happening in this space, and I've loved watching all of the robotics videos that are starting to come out and hearing what the companies are doing. It really does feel like there's so much advancement happening with AI and robotics. Yet on the other side of the coin, and I know that if we look at humanity as a whole, things are a lot better now than they used to be.

[22:09] Briar: Back in the 1950s we used to suffer, life was hard, and we used to die of things like a tooth cavity. So overall it's a lot better these days. But in some ways it looks like robotics and AI are improving and leveling up, and then on the other side of things, we've got fish-like attention spans now, we're addicted to our phones, we're spending our lives sitting on the couch. In terms of our individual and collective improvement, it feels like we're kind of going [inaudible]. What are your thoughts? Sometimes I think, God, I need to merge with a machine, or humanity needs to merge with machines or AI, so that we can keep up with the advancements.

[22:56] Beth: Well, I suppose in some ways the too-close relationship between our attention and the algorithms is actually part of the problem. So you're talking about merging; that kind of merging has been problematic, and perhaps in some ways we should distance ourselves from the algorithmic decision-making systems. But this is the strength of these narratives: we've been given these utopian and dystopian accounts, and we're still trying to figure out where we actually want the future to lie. So some of these stories about where we're going to be with these amazing new robots and forms of artificial intelligence either predict the utopia of, again, Star Trek (Star Trek is one of my favorite things), the luxury space communism of Star Trek, where we just have abundance and we can travel the universe and we have all this freedom; or the WALL-E future, where we're given all this free time, and what do we use it for? Entertainment and feeding ourselves. There are these two possible paths. And our attention is already being drawn by algorithmic systems for profit by corporations, there's no question of that, which suggests that actually what we need to do is move away from some of our social forms, like capitalism, before we can even think about getting to the utopia that the robots, the artificial intelligence, whatever, are promising.

[24:15] Theo: Yeah, so Peter Diamandis talks about abundance a lot, and he makes a fairly decent point, not that everything's going to be rosy, but that a new type of economic system, globally, across society and civilization, is needed before we can move forward. Because if we retain the capitalist mentality as it stands today, then we're heading towards something that looks more like Elysium than Star Trek. And in Elysium, the AI and the robotics and all the advancements in technology actually created a much bigger divide in society, where the elites had everything, and we almost worked to manufacture the robots that helped the elites get more and more and more.

[25:06] Theo: And we lived on Earth, and Earth was an absolute mess, and they lived in the sky. And that's that. I do think that WALL-E is incredibly prescient, even though it's a cartoon, because there's an eco-message in there; you can actually see it happening with the amount of garbage that we create and consume on a daily basis. And then there's, like I said, the apathetic human that just sits back and has life fed to it, and we become engorged on entertainment and food and actually have no purpose, because the machines are doing it all for us. And of course, the ship's autopilot in that movie didn't want us to go back to Earth to rediscover our humanity, because it wanted to keep us there, fattened and tended to, because that gave it purpose. So it's not going to be utopian versus dystopian; I think it's going to be a very mixed view of what happens. But again, it goes back to my point about agency: which one do we actually want to head for, and what are the steps that we need to take to stop one or the other from taking over our lives?

[26:21] Beth: I think, just to go briefly back to this report, that's what's missing in the discussion of risk. It's very much focused on the big-headline existential risks: AI will either decide to destroy us or build something that will destroy us. But that future we're talking about there, a world divided between those who have and those who don't, what information we have access to, what entertainment is distracting us, that's as much a risk as the things they're laying out here, and perhaps more so.

[26:46] Briar: So let's talk about the steps that we need to take, because they're suggesting in this report that they should make AI illegal. And then of course we need to talk about diversity a little as well: who's actually creating these algorithms and AI models? Because I was horrified yet again when OpenAI announced their all-male board. I was like, why? Did we not learn from the various moments in history? We need to be approaching this, from my perspective anyway, with diversity, so that we're thinking of different perspectives. And of course, when everyone imagines the future, it's very different for them; it's very personal, because everyone's got a different culture, background, upbringing and experience. I think of when Apple launched their health app and forgot to put the period tracker on it, because they had an all-male team. They didn't do it to be dicks about it; they did it just because they did not think about it, because why would they, they're men? So yeah, what do we actually really need to do here? Because I think we need to kind of get our shit together a little bit.

[28:05] Beth: I partially agree. I mean, I think diversity is sometimes employed as a form of ethics-washing by corporations. So the new OpenAI board does have women on it, but again, is it diversity of thought or just diversity of appearance in that case? And I think they're primarily white women as well, so there aren't other forms of diversity and intersectionality there at all. But just on the point about the period app: yes, perhaps some women in the room would've mentioned it, but also, do we want our periods tracked by data that's being uploaded to a cloud and then sometimes sold on to third parties? So there are questions here not just about diversity but also about equality and justice. We don't necessarily want, say, a facial recognition system to be really good at identifying people from ethnic minorities if that's going to be used to track them for particular reasons. So it's ethics all the way through, not just diversity at the beginning.

[29:02] Briar: Great points there.

[29:05] Theo: Yeah. I can't really add much to that, other than that the appearance of diversity on a board doesn't mean anything when your algorithm has been trained on the entire history of the internet. And we all know that that is a melting pot of opinion and misinformation. It doesn't matter who it's written by, because half the time you probably can't identify what has been written by whom, how it's attributed, or where the information has come from, so the citations don't exist, etcetera, etcetera. So there's diversity in terms of who's in control of building these tools, but at the same time, where's the ethics in terms of how those tools were made in the first place?

[29:55] Theo: Not by whom, but how they were trained, where the information came from, what labels or tags or data cleansing were applied in terms of labeling it: this was written by a black woman, this was written by a black guy, this was Chinese, this was Asian, this was Indian, etcetera. From a diversity and inclusion point of view, it hasn't even started right from the root. So saying, oh, we've got a diverse board is absolutely meaningless when it comes to how these tools are actually developed and emerge.

[30:37] Briar: So what would be your recommendations to companies that are developing AI? Like, if you were to write a report and publish it in Time magazine, what would you suggest as the way to go about this?

[30:51] Beth: It's so difficult, because I think for some of these companies the questions are only really present because they want to get out ahead of them and present, as I say, a particular image of what they're doing, with an ethics committee or a board, a presentation of being right-minded; or else the questions are just not even on the thought plane at all for some of these people. I've spoken to people working on particular forms of AI, narrow AI, at exhibitions, and they're going, oh, we can use this for, I don't know, recognizing crops and when they're ready to be picked. Okay, that sounds great, that sounds really useful and efficient. But they don't think about the other uses that any kind of recognition software could be put to. I ask the question, and they just look blank. So it's really difficult when the conversation doesn't even occur to some people, that there could be other potential problems and uses of the technology.

[31:41] Beth: And when it comes to the conversation about existential risks, just to go back to the diversity issue, there are significant voices in this space, like Elon Musk, who are really dominating where people's imaginations of AI are going. I'm really keen on this term from the STS scholar Lee Vinsel: criti-hype. Even when criticizing these things and suggesting that they shouldn't be used, it's still a way of bigging them up and saying, well, we're definitely going in a particular direction; we've got this techno-deterministic narrative again. So, things like the pause letter: Elon Musk signed the pause letter and then he built Grok in that very period of time, straight after the pause letter.

[32:20] Briar: He's always very interesting at PR, isn't he? Remember when they smashed the Cybertruck window, or whatever it was, and he was like, oh, it's never going to smash, and then it broke.

[32:30] Beth: Yeah. Back in 2014 he was talking about how, with AI, we're summoning the demon, so again the religiously toned language that I'm really fascinated by. But he's built the demon now, if that's what he believes. He's built Grok on the basis of Twitter, or X, whatever; I'm not really keen on calling it X. He's used data that we know is problematic, we know what the conversations can be like on Twitter, and he's built that LLM on the basis of it. So does he really believe what he says, or does he just say it in the moment?

[33:08] Briar: What are your thoughts about Elon Musk and the fact that he said we're summoning the demon here? Those are big words.

[33:18] Theo: Yeah. I mean, Elon is an interesting character, because he and Stephen Hawking were both very vocal about the dangers of unleashing AI and building AI: we should be very wary of it, it's going to be humanity's demise. And like Beth says, they say one thing one year, and then another year they're rushing to build essentially the same kinds of systems as everybody else, because they can see the prize, and the prize is very large. And that's the thing. The one thing that comes out of all of this as well is that there's not going to be one system. We're not going to get a Skynet. The James Cameron sort of future of one all-seeing, all-knowing system, essentially the existential threat that the government seems to want to paint a picture of, it's not going to happen.

[34:10] Theo: We're all going to have personalized AI at some point, trained on the data that we evidently want to give up for that convenience. But again, is that what we want? Do we want to give up that agency just for that convenience? What is the price of that convenience? I think Beth also made a really good point about the conversations that people aren't willing to have, which is: well, I built it, but I'm not responsible for what other people do with it. And it's like, well, you built the demon, you summoned the demon, therefore you must be responsible if it runs amok. But the attitude is, well, I just made it appear and had a chat with it, and I thought, oh, it's not going to do anything.

[34:52] Theo: He said it's not going to do anything bad, and then it left the room, and as soon as it's out of the room: I don't care what it does. We can't have that kind of attitude in business, and certainly not among the people in power who are invested in creating these systems and tools. Yes, it can create a machine-vision system that can pick the best apples, but, like Beth has said, it can be subverted with a re-tweak of an algorithm to pick out certain types of people, people with certain skin complexions or certain looks. We're still talking about phrenology in the age of AI: the shape of my forehead is going to determine whether I'm gay, straight, lesbian, intelligent, not intelligent, childbearing, not childbearing.

[35:37] Theo: These are pseudosciences, and they've been debunked time and again, yet we're still seeing people using AI in these sophisticated systems based on pseudoscience. Now, who's responsible for that? Ultimately it has to be the people who develop the algorithm and then decide to give it away. There's all this talk about open-sourcing the algorithms and the training data and the AI so everybody else can benefit. But that's almost like relinquishing responsibility and saying, well, we built it with the best intentions, and just because someone else over there has decided to use our image generator for porn, it's not my fault; I didn't make it for porn or child pornography or anything else like that; I didn't make it like that, they did. Well, you created the system that enabled them. And that's the thing: how do we make them responsible, how do we judge them on that? These are questions that don't get answered in these reports either.

[36:40] Briar: And I think as well, it almost seems like we don't learn from our mistakes. I remember when Facebook got their fine over all the data and privacy issues. And here we seem to be at a crossroads again, and we've still got questions and no answers.

[37:03] Theo: That's human apathy again. Facebook has 3 billion users apparently, LinkedIn has 1 billion, but there is an echo chamber in there of the people who understand what the data is, what happens with it, how it's sold on to third parties, etcetera, and how it's used to train large language models and algorithms and AI. The vast majority, I would say 99% of Facebook users, don't give a toss what goes on in Facebook, because of the perceived conveniences: I can post memes, I can post pictures of my dinner, I can talk about my children, here are pictures of me on the beach that I've manipulated using the Pixel camera app to make me look better, and all this kind of thing. We have become so inwardly focused on manufacturing what our life looks like, so it can impress other people that we don't really care about.

[37:59] Theo: But they'll view us and go, oh, your life is wonderful, I wish I had your life. Those are the people we are trying to educate, and need to educate, on this journey with AI. They're not going to read the government report. They'll read The Sun or The Mirror or the New York Post or something like that, which produces an article with a picture of the Terminator saying the robots are going to kill us. And they'll read that and go, oh, look, Jim, the robots are going to kill us again. And they go, yeah, yeah, okay, and put it down and forget about it. These are the people that we need to actually bring on this journey. It's not you, me and Beth on this call, because we're educated enough to understand what's going on.

[38:49] Theo: And we have a concern, but it's everyone else around us. I call it the supermarket test. If you stand in a supermarket and watch and observe life going on around you, you could probably pick out one or two people who would actually know what you were talking about if you shouted a couple of terms out; the rest of them wouldn't have a clue, because they're so focused on their day-to-day life, on getting to the end of the day and hoping that they wake up the next day in better circumstances. And it's those people that we need to talk to.

[39:17] Beth: I think also that status quo is only going to get worse, because you talk about the willingness of people to accept things: if they can get, say, a filter to aesthetically change them for Instagram or Facebook, they'll happily go along with giving over their data for that. But with the new generative AI, it's actually going a step beyond that, because they're not just presenting themselves aesthetically in a particular way; it's now about presenting their talents and their intellect in a particular way. So if you use generative AI art, or you use these chatbots to write you a cover letter for a job, you're presenting yourself as an artist when you're not; you're presenting yourself as being able to write when you can't. And those things are being handed over to these technologies, and people say, well, okay, great, I've upscaled my abilities without having to put years and years of practice into developing as an artist.

[40:08] Beth: So it's not just our aesthetics, it's our talents and our intellect as well that are being undermined by generative AI. I mean, I'm a university lecturer, and this is a huge concern: we have students, and professors as well, who are both going to be using ChatGPT to write things, and who stop being able to think through an argument, how to write an introduction, how to write a middle, how to evidence things with sources that actually exist. Because you can ask ChatGPT to give you six sources on a subject, and it'll make up five out of six; they won't exist. So we're offloading our epistemic labor, but we're also offloading our creative labor, to be able to say, I'm a talented artist now, which really bugs me. You might be able to tell.

[40:53] Briar: I think it's a very interesting topic. So I'm producing a documentary at the moment where I'm interviewing lots of very fascinating experts about AI, robotics, and the merging of man and machine. I'm due to go and get my microchip in New York next month, which I'm very excited about. But it's not just about the microchip, it's the stance: I want to get people talking about these topics and interested in the future and what it could possibly be. I'm trying to produce the documentary in a reality-TV-style manner, in the hope that I can bring some of these amazing, intelligent people such as yourselves to the masses, to all the people in the supermarket, as you put it, Theo. That's really my goal and what I'm working on. But how can we do this? Do we need more people like Kim Kardashian and other celebrities taking a bit more of an interest in the future? Do we need different people in government, younger people maybe? Where does it all begin? And Theo, you mentioned as well how people are so worried about their day-to-day. I think that's a real concern: how can we think about the future when, if we're living in the UK, we don't know how to pay our power bill, for instance?

[42:15] Theo: It is a really difficult question, actually. I don't think there's a right answer here, because every individual has got a different set of circumstances and different worries. Even with the AI threat to jobs, for example, we're starting to see that hit more and more: the games industry is shedding people left, right, and center, because obviously there's a very creative element there, producing videos, concept art, code and all that kind of thing, and the big studios are investing loads of money now in tools to basically take some of that away. And what do those people do? What we see is people leaving the industry that they loved and built their careers in, to go to other industries that they feel safer in.

[43:03] Theo: So we're going to see massive black holes of talent where there shouldn't really be one, because we should actually be using AI to take away the crappy, mundane stuff and leave the humans to do the artistic stuff, the creative stuff. We should not be pushing the creative industries into a dark corner, hoping they'll pick up jobs as mortgage specialists, for example, because that's not how this is supposed to turn out, but it is what we're seeing. So getting people to understand that there could be a material impact on their livelihood, because their job is affected, is one way of communicating this. But it has to be done in such a way that it's not sensationalized, because people have been here before. We were there with the Industrial Revolution, and we've had many iterations of AI already, Alexa and all those kinds of things, and even way before that, robotic process automation, business process management, automation, etcetera. They were all supposed to automate really crappy functions and processes to the point where we didn't need people anymore.

[44:18] Theo: But we've seen time and again that the hype dies down very quickly. Businesses have lost loads of investment, putting money into those tools, not really seeing the return, and then actually hiring people back in again. And we might see this again with the current state of AI: a huge push, lots of redundancies over the next couple of years, and then it all falls flat, and people realize that you actually still need people to do a lot of the creative work, a lot of the thinking. Like Beth has said, it's good at producing something that appears very intelligent, but because we think, oh, well, it's a computer that's telling us, so it must be right, we lose a lot of the critical thinking needed to question where the information came from. Where are the citations? Where is the proof that what you're telling me is actually qualified enough for me to make a business decision or a life decision on? So this has got many threads. I think one thing is that we just can't lose the ability to think our way critically through it.

[45:28] Beth: Yeah. I like to cite someone, who unfortunately I think was probably a Tory MP, and maybe Theo knows this, who said that the key is education, education, education. We actually need to get in quite early, at grassroots level, at primary school even. I've given talks on AI to primary-school-age kids who already know far more science fiction than they should at their age; they've seen things that are far too old for them. They already have a conception of what a robot is and what it should be able to do for them, and they already have a vision of a luxurious lifestyle in the future where robots will do everything for them. I think they're sort of mapping parents onto robots there, but they've already got this vision. So we have to start very early with encouraging critical thinking. We have to get in now and talk to them about ChatGPT, what it does and what it can't do, and where it's going to mask your inability to do certain things.

[46:23] Beth: We need to talk to teachers, who also need to engage with this technology, because every single institution is now talking about how we have to use generative AI and how we have to train people to be prompt engineers. That's the worrying thing: that's the direction a lot of the conversation in education seems to be going in. So we need it at the grassroots level, so that people have an understanding of what the technology is and what it can do. And we need to encourage them to think critically about those kinds of articles with the Terminator images and the existential-risk narratives. Yes, devil's advocate again: they only need to be right once, and we have a robocalypse and we all die. But if they're not right, then we have to deal with these short-term future problems of what it means to hand over all our ability to think to something else. And we need to get in early and have those conversations.

[47:14] Briar: Do we think we're doing enough to change the education system? I'm obviously out of school, I'm so far removed, and I don't have any children, so I'm not sure: is it still quite old-fashioned in what it teaches? I remember when I was at school, we spent so long learning how to do maths, and then I graduated and just used my calculator.

[47:36] Beth: Yeah. So I have a 12-year-old son, and we moved to Zurich about a year and a half ago, so mostly he's been educated in the UK system. I'd say there's a little training around stranger danger on social media, cyberbullying, all those very important things, but AI is just a real unknown. There's no syllabus, there's no curriculum, until you get to computer science at 16, 17, and philosophy at A level; I've spoken to those students at schools as well. That's the only point at which these conversations really start happening at schools; there's nothing in the national curriculum. At his school now, an international school, they've asked me to be on a panel to discuss ChatGPT with the teachers there, because they are thinking about where this is going. And likewise, at the university I work at, we're having lots of conversations about, well, is this plagiarism, or do we allow students to use ChatGPT, and do we train them on how to use it, because this is here to stay? So the level of the conversation is so [varied] and incomplete that most people's accounts of AI will come from the tabloids and the Terminator pictures. And when Boris Johnson was Prime Minister, he mentioned the Terminator at the UN while giving a big speech. We are given these narratives from science fiction, which I love as well, but they will shape our early education unless we do something more proactive about it.

[48:56] Briar: Theo, you wanted to raise a point before?

[49:00] Theo: Yeah, it's actually related to this, to be honest. I mean, I find the education system is fundamentally broken. I've raised two kids, one's 21 this year, the other is 18, and I've got a 10-year-old stepson, so it's interesting to see how little has changed in his education versus my own older children's. You've got some institutions trying to ban ChatGPT because they don't want their students using it. You've got some saying the teachers are allowed to use it to create syllabuses and questions, but the students aren't allowed to use it to answer them, which I find bizarre; what are you trying to say? It's the wrong kind of image to portray. And then on the flip side, after education, we're now seeing a backlash from internal HR teams and recruitment firms basically saying, oh, we're awash with applications that have been written by ChatGPT.

[50:00] Theo: So we've got cover letters written by ChatGPT and tailored CVs written by ChatGPT, all against a ChatGPT-generated job post, and we don't know how to filter them out, because everybody looks like a perfect candidate. And it's like, well, you kind of wanted this: you were moaning about the fact that you didn't like sifting through CVs, so you used automation to pump stuff out, and now, as a consequence, people are getting wise and using that same automation and those same tools to try and win the jobs, and you're complaining about it. So everybody has rushed towards this new shiny object without really understanding it. Again, it's back to that whole attitude of, we've released something and I don't care what happens afterwards; no responsibility is taken. And as a result, everybody's rushing to use it without understanding what the downstream impacts are, and now we're starting to see them.

[51:01] Theo: In the education system, a lot of the time when new technology comes along, they don't change the system itself. It's, well, we'll just throw this into the curriculum or the syllabus and see what happens. We'll use it as part of the syllabus, but the syllabus remains the same. We had that with Chromebooks and iPads in schools and so on: they were supposed to open up different types of learning, but in the end it was still the same stuff the kids were given to learn, just on new, different types of equipment. Nothing has really fundamentally changed; it's still very Victorian, especially in the UK. And of course, it comes down to funding as well. I mean, the school funding system is wrong.

[51:48] Theo: Teachers are underpaid and schools aren't given enough money. The reason schools put on bring-and-buy sales and bake sales and things like that is to actually pay the electricity bills in the winter, because they don't have enough money. And that's a real-world example, because my kids went to a state primary school that did exactly that: they held these events in the summer to take in money to pay the heating and electricity bills in the winter. And that's a really sad state of affairs. Is AI going to solve that? No, it's not. I think it's just going to make things worse.

[52:23] Briar: Beth, anything you would like to add on to that?

[52:25] Beth: No, just to say, I mean, we can frame this around the question of existential risk. This is an existential risk in the sense of the many smaller apocalypses that are going to happen because people's lives are changed. One of my favorite quotes is from William Gibson, who says the future is already here, it's just not evenly distributed. So for people like Theo and myself, who send their kids to state schools in the UK, we are not going to see any benefit from these technologies. It's going to be, again, the people who already have a system set up for their benefit who will get more and more benefit as these technologies go on. So the future will be bright for some people, and not necessarily for others.

[53:02] Briar: So the report as well, it's focused on two separate categories of risk. The first category, which it calls weaponization risk, refers to systems that could potentially be used to design and even execute catastrophic-

[53:21] Beth: It's a tricky word.

[53:23] Briar: It's not the first word I've had trouble with today, to be honest. I'm just going to say that sentence again, otherwise I'm going to get annoyed at myself. The first category, which the report calls weaponization risk, refers to systems that could potentially be used to design and even execute catastrophic attacks. Do you think that this is something we should be worried about? People being at the back end of artificial intelligence and actually using it to, I don't know, create some kind of nuclear attack, or some kind of war, or even some kind of virus?

[54:17] Beth: Well, if you talk to some of the prominent voices in the existential risk conversation, they take that very seriously. I just did a mini-documentary on the [inaudible], and we interviewed [inaudible], who's one of the prime voices; he also had a piece in Time magazine talking about AGI and its risks. And if you talk to those voices, they'll say, well, if it happens, it'll happen so quickly that it'll be over almost in a moment. As soon as AGI develops this super level of intelligence, if it has access to biotechnical factories and is given network access, if it can access nuclear weapons, the Skynet scenario, it's over very fast. He thinks it's probably more likely to be a biological attack. So this is a framing that's been around for a while.

[55:03] Beth: I mean, obviously Skynet goes back to the 1980s, and we had a whole dystopian era of science fiction long before that: R.U.R., the Karel Capek play about the robots rising up, physically turning around and starting to attack us. It's in Asimov as well. So we've had these narratives for a really long time, whether we take them seriously or not. Again, like I said, they only have to be right once; they can be wrong every single day, but if they're right and it happens once, then we won't know anything more about it, because we'll all be gone. But it's sort of appealing, in a way, for some people to have these conversations, to get involved in the dystopian, dark side of artificial intelligence narratives and think about those kinds of horror stories. I think it really matches our desire to read horror stories more generally. People really do want to read tabloid accounts of Terminators rising up; that's why they sell, and that's why they keep using those pictures again and again.

[55:59] Briar: Well, at the end of the day, I guess the media get paid for the clicks on their articles, don't they? So they essentially make money out of our fear. What are your thoughts about this, Theo?

[56:11] Theo: Yeah, I mean, it is great to be part of that particular sensationalist crowd, because you get lots of attention. And I think Beth is right that it only needs to happen once, and when it does happen, we won't have a clue that it's happening. I can't remember the title of the film, but there's one on Netflix where it's essentially a digital attack on digital infrastructure: money systems going down, planes falling out of the sky, etcetera. I think that's probably what it would look like if it were going to happen. And I'm not convinced it's going to be a biological or a nuclear attack, because those systems are very well protected, for one, and in a lot of instances they're air-gapped as well.

[56:58] Theo: So there's no outside influence that can get inside unless someone is coerced, and that's when you get into social engineering, and AI using social engineering to convince a human to do its bidding. So I think there are various levels to this existential threat. Picking up on a point Beth made about many disasters, or many apocalypses as she called them: I think that's actually a really good point, that it could be not one giant global thing but a set of many cascading effects that are triggered and that may lead to massive destruction or massive loss of infrastructure. I think we're not going to see a robotic uprising, for a start, because frankly there aren't enough robots for us to think about combating.

[57:53] Theo: But it's more likely going to be an in-corporeal digital threat that we have to consider here. When that happens, I mean, I don't know. I mean, I read a report a couple of weeks ago about AI worms that are viruses, that are basically crossing over into different AI systems that are being generated by people using existing AI on the web, AI agents on the web. So essentially I could kick off an AI agent. I've written an open source and it could essentially inject itself into another AI system that it could convince doing something else and that could propagate into another AI system. So that is an interesting existential threat where it's not just coming from one system, but essentially again, it's those mini catastrophes where it's like it could be triggered over and over again into different systems that are cascading and getting worse as they do. And that could be done by an AI, but it all starts with someone pressing the button. We have yet to see any AI have any agency beyond someone actually telling it to do something. And to me that's still the real existential threat, which is someone with enough agency and knowledge to be able to start the chain of events.

[59:20] Briar: Anything you'd like to add on to that, Beth?

[59:22] Beth: Yeah, I'm just thinking about those cascade effects. There are researchers in my field of religious studies actually working on multi-agent models, using algorithmic systems to predict things like religious conversion and religious violence. I think it's LeRon Shults who runs various experiments in these fields, and I apologize if it's the wrong person, but one of his models actually predicted the significance of one figure within a conversion unit for how these agents behaved. He realized when that result came up that a state or a government that wanted to get rid of a group, or encourage another group to grow, could use it: by identifying figures with those traits and getting rid of them through targeted assassination, you could have a huge cascade effect on a particular society. So by modeling more and more of society through algorithmic systems, we might see people also using AI in a broader sense to control societal changes through that kind of cascade effect, but in the real world as well.

[1:00:28] Briar: Interesting. So the second category of risk, which the report calls the loss-of-control risk and which we touched on earlier in our discussion, is the idea that advanced AI systems may outmaneuver their creators. There is, the report says, reason to believe that they may be uncontrollable if they are developed using current techniques. Do we think this is true or not, Beth?

[1:00:57] Beth: It's interesting because control brings up this idea of agency again. As soon as you start talking about out-of-control AI, you get this idea, like Westworld, from all our narratives about AI deciding to do something, when some of the conversation about value alignment and control is really just about making sure that something without common sense doesn't produce a bad outcome. Nick Bostrom's paperclip thought experiment says: if you get a super powerful AI and you tell it to make paperclips, but you don't tell it when to stop, it destroys the entirety of the universe turning everything into paperclips. Now, that's a very hyperbolic thought experiment, but it has influence on these discussions about existential risk. You could have an AI with an agenda set by humans that doesn't have an endpoint, or doesn't have the things that we have when we think: what should I do?

[1:01:47] Beth: Should I stop? Is this the right thing to do? All these kinds of conversations we internally have, because that thing of consciousness that we assume we've all got, then you could have these out-of-control scenarios that have nothing to do with Skynet waking up and deciding to get rid of humans. So, I mean, there are unintended circumstances at the minor scale, we already know that happens with decision-making systems. So if you extrapolate upwards, then again the speculation is that this could happen at a much larger scale if we don't get these systems AGI-aligned with what we think human values are. And then that's a whole big debate. What are human values?

[1:02:24] Briar: Well, we'll open up that can of worms very soon, Theo.

[1:02:30] Theo: Yeah, the paperclip experiment always makes me smile. Can you imagine everything in the entire universe turned into paperclips?

[1:02:39] Briar: I was actually just thinking about why we need to learn our ChatGPT prompts, so we can prompt it to stop. Don't forget your prompts!

[1:02:48] Theo: Yeah, give it an endpoint. Well, it's funny because, hypothetically, if an AGI system became emergent and had its own capabilities and reasoning, leaving the consciousness debate out of this, and it was self-perpetuating and self-learning, at that point control is almost impossible. When was the last time we tried to control something more intelligent than us? We have never attempted it, because we have no idea what that is or what it looks like. What is more intelligent than a human being, something with agency, capability, movement and context of the real world? We don't know. We haven't experienced it.

[1:03:55] Theo: So saying that we would lose control is both false because we don't know. But also at the same time is actually quite scary because we would have no conception of when that point would be reached, I think. So, yeah, I think for me yeah, losing control of something that we don't actually understand. I mean, then you're getting into scenarios of enslavement rather than alignment. And again, I'm projecting here and I'm very conscious that I'm anthropomorphizing here, but if you start to insist on putting controls and checks and balances in something that is actually fundamentally more intelligent than we are, then that system or that thing will naturally or digitally naturally rail against it. And it will find ways of getting around that. So I do think that this is actually more of a philosophical question rather than an actual question. And should we be having philosophical discussions in a governmental paper? Potentially, yes. But it shouldn't actually be forming part and parcel of any regulation or legislative ruling. I don't think.

[1:05:29] Beth: So, I think one of the arguments often posed against that claim, that we haven't encountered intelligences greater than ourselves before, is that if you look historically, actually we have, because you have to unpack who that 'we' is. The dominant voices in this conversation are people who've been at the top of the hierarchical pyramid for a long time, feel very established at the top, and now feel threatened by the idea of a greater intelligence, based on a historical, cultural context of encountering other intelligences, other humans, and not seeing them as equivalent: not realizing that they might actually be smarter, treating them, and slavery's come up, like slaves, and then they rebel. Some of this might be drawing on that history as well. That's why I mentioned "R.U.R." by Karel Čapek, a play specifically about a robot uprising in a factory: the robots rebel because they're enslaved, because they're not given any freedom, and they're at least as intelligent as humans, if not more so.

[1:06:25] Beth: So we have these conversations and some of the people pushing more for the conversation into robot rights will say this specifically, we didn't always recognize the intelligence of other humans. We treated them badly. Let's get it right this time as we go into the future. Now I'm agnostic on that. I think you don't have an absolute answer to whether we're going to create anything that's more intelligent and more full of agency than humans. But it's interesting those conversations keep recurring in this space and it draws in the conversation about extraterrestrials as well, because that's the same sort of encounter with another intelligence potentially in the future that people talk about.

[1:07:02] Briar: I love thinking about aliens and what might happen in our lifetime. I'm just waiting for someone to make some juicy announcement, none of these Time magazine reports and stuff like this. I want the juice, bring it to me, I've been waiting my whole life for this. Beth, I wanted to make the most of having you on the call, given your religious studies background, and Theo, I'm curious to hear your thoughts as well. As I mentioned to you, I'm getting a microchip in my hand next month, and for me it's about the stance that I'm taking. I want people to discuss the future. I want people to have these kinds of conversations. I know that at this stage my microchip can just unlock my house and my car, automate some things in my home and on my phone, and you can add me on LinkedIn if you want. And I know that over time microchips will advance, we might have health chips, and there'll be all sorts of wonderful advancements in the future. But something I found very surprising when creating content about my microchip was just how polarized my Instagram community was. Some people were super for it; other people called me names. Do you remember when we were growing up, we had those little troll dolls with the ginger hair?

[1:08:28] Beth: Oh yes.

[1:08:30] Theo: Yeah.  

[1:08:31] Briar: Someone told me I looked like a troll doll, which, to be fair, I did kind of have my hair in a funny little bun like this, so I didn't look too different, and I can do the little smile. Anyway, I don't care. Sticks and stones can break my bones; I've got far bigger things to worry about than being called a troll doll. But I got called a series of other names, and something I found very unsettling that I wanted to talk to you about is that people have been saying I'm the devil and commenting Bible verses under my content. They've been telling me I'm getting the mark of the beast, and I find it very unsettling. Do you agree with them? Am I engaging in some devilish kind of behavior? I'm keen to hear your thoughts.

[1:09:24] Beth: No. So, I'm an anthropologist looking at religion and technology; I don't come from a specific religious perspective, but I am looking at people who use these kinds of satanic, demonic narratives to talk about the future of AI and the future of transhumanism. For people like yourself, engaging with technology at the physical level, wanting to have it within your body, they do tie this into their personal interpretations of apocalyptic scenarios. If they're coming from a Christian perspective, it's often the New Testament, with the Book of Revelation; they tie in, as you mentioned, the mark of the beast. It overlaps with some broader spiritual, new age ideas as well, about what's natural and what's not natural, and with people who are against vaccinations, all these sorts of overlapping groups. There's a term, 'conspirituality', that's often used in academia: a portmanteau of conspiracy and spirituality.

[1:10:17] Beth: So conspiracy groups that also have a spiritual element to them. So you see this in conversations that pops up in spaces that Q Anon is in as well. This idea that if you are taking on board technology and becoming transhumanist or using AI, that you are engaged in something that's malevolent, that's satanic and demonic. These are narratives of rejection that actually you can see going back to the origin of other technologies as well. So the printing press was responded to as the work of the devil. You shouldn't move, type around, the cinema in particular. And now we have a large-scale evangelical Christian film production industry. It's massive, Pureflix as the Christian version of Netflix makes billions, I think millions at least. So these shifts and change in interaction between religions, established religions and technology that either they think about rejecting it or they adopt it.

[1:11:09] Beth: There are also many examples of established religions using artificial intelligence, using robots demonstrating as you say, with putting the chip in your hand that you want to start a conversation, religious groups also wanting a conversation about the future of technology as well. And then also because my background is in the study of new religious movements, I look at people who are specifically developing their spirituality around AI as a God-like figure as a very literal thing. They're not talking metaphorically. I think Elon Musk as being more metaphorical when he talks about the demon. Some people are very literal about the Satanic element and some people are very literal about the God element as well when it comes to talking about AI. So these narratives interweave with each other in very interesting and creative ways. But I'm sorry that people are calling you names online. It's not very nice to have that experience. I'm sure some people call me names online as well, along lines of talking about AI and my misunderstanding of where it's going and where it's come from because it's satanic or hellish.

[1:12:08] Briar: To be honest, I think when you're out there and you're on a mission and you've got a goal you want to accomplish, the way I see it is that these sorts of conversations are bigger than myself, and I believe they're very important discussions for humanity as a whole. Of course I'm going to ruffle a few feathers along the way; that means you're doing something right. And I think the beautiful part about the world, which people need to understand, is that nothing is black and white, as we all know and have talked about on this call. Everything in life is complicated and messy and nuanced, and we can change our minds. I might take my microchip out in a year and say, you know what, that was just a terrible idea. Hey ho, maybe it might get infected, I don't know. But the main thing is that we are having these discussions, and I loved what you both shared with me today, especially around reaching young people at the education level and saying to them: hey, you guys are our future, you can go out there, you can make change. To summarize, I think we need more discussion and more action. Is that what you guys would agree with?

[1:13:19] Theo: Yeah, I think individually everybody has a voice and the ability to use that voice, and not just to post stuff on social media about it. I think now is a really good time for people to understand that. Again, it goes back to agency: humans have more agency than AI, and there are collectively over 8 billion of us. If 8 billion of us, or even 7 billion of us, stood up and said, "I'm not really comfortable with this," then people would take notice. How we galvanize that much of society is a question I don't think anyone is willing to address. Certainly the governments don't want to, because they just want to sit and write reports and pretend that, with that, their level of responsibility is over and done with.

[1:14:13] Theo: And certain levels of society are quite happily being led towards whatever particular future is being shaped for them because they either don't understand that they have a voice or again they're too worried about the day-to-day, surviving day to day and they don't really care. And they're just quite happy for change to happen with or without them. So it's really about trying to galvanize the vast majority of society. And again, we talk about society as in civilization, as in our cities and our communities and things. But a lot of civilization has no idea of what's going on other than they are being involved to train these things at a pence per hour because that's the only option that they have. A lot of the African nations, for example, still live day to day in the same way that they have done for thousands of years because culturally that's comfortable for them.

[1:15:06] Theo: And they are going to get rolled over, roughshod when this thing comes, this juggernaut happens and they will not understand what's gone on because they've been left out of the equation altogether. And those are the people that I think a lot of us have to stand up for. The other point is I've had this chip in my hand for nearly 10 years now. I had it done at CeBIT at Hanover when CeBIT was a thing in Germany. Never used it. So good luck to you Briar because I think you'll probably get bored after a while, very quickly.

[1:15:38] Briar: The idea of people adding me on LinkedIn quickly loses its novelty, I guess. Any closing thoughts from you, Beth?

[1:15:49] Beth: I think as an anthropologist I revel in the messiness of it all, but I completely understand how difficult it is for Joe and Jane Public to understand what's going on and how they can use that very precious agency to make a difference: how they can engage with the policy makers and the politicians who, on the whole, are just looking for more and more efficiency. If AI, ChatGPT, whatever, is going to get them easy wins, they'll just keep using it. And it's the same for people who want to, as I said, sell themselves as things they aren't; if AI is going to give them that veneer, they'll use it. But I think as long as some of us decide to vote with our feet… I recently saw that a film I was very excited about has generative AI art used in it. I'm not going to see it, and I hope other people will make those sorts of decisions as well. We have to do what we can do. We can't all change policy at the top level, but at the grassroots level we can do some things.

[1:16:48] Briar: Well, thank you both so much for coming on my emergency podcast. I think we have gotten to the bottom of this report, and I'm keen to hear feedback from everyone who enjoyed the show. Do connect with my wonderful guests on Twitter and LinkedIn, although Theo is not really using his LinkedIn anymore.

[1:17:09] Theo: No.

[1:17:11] Briar: Good on you, you were there for 19 years, so who cares. 

[1:17:15] Theo: I don't know. (laughs) 

[1:17:16] Briar: Awesome. Thank you so much guys.


