Professor Toby Walsh
So the positive side of you having all of this digital footprint is that it's the source material to make the virtual you that could be there for your descendants to be able to talk to. From your tweets, it will sound like you and say the things that you used to say. Yes, we'll be able to do this.
Whether it helps, I don't know.
Nadine J. Cohen
Welcome to "Grave Matters", a lively look at death.
Anthony Levin
All of us at "Grave Matters" would like to acknowledge the traditional owners of the land we are recording from. We pay our respects to the Cammeraygal people and their elders, past and present. We also acknowledge the traditional owners from all Aboriginal and Torres Strait Islander lands and other First Nations territories from which you are listening.
Nadine J. Cohen
A warning. This episode contains references to death, warfare, and other themes related to dying. Please take care.
Anthony Levin
Nadine J. Cohen. Hi.
Nadine J. Cohen
Anthony Levin. Konnichiwa.
Anthony Levin
Nadine, you've seen the film "The Matrix", right?
Nadine J. Cohen
I have indeed. I love me some Keanu.
Anthony Levin
Don't we all?
Anthony Levin
And you remember when Neo, aka Sci-Fi Jesus, chooses the red pill and learns the truth that humanity is at the mercy of sentient machines?
Nadine J. Cohen
Dude, I can barely remember yesterday, but like, sure...
Anthony Levin
Okay, well, I am a "Matrix" trivia jukebox. And that's what happened.
Nadine J. Cohen
I am not and have never been a 15-year-old boy.
Anthony Levin
Yes, I totally get that. Well, in today's episode, we explore some of the ways artificial intelligence might change our experience of life and death, from the end of human biology to the start of digital immortality. With the help of an AI expert, we go head to head with the Singularity. We ask,
what are the chances of sentient AI in our lifetime? If machines do eventually outsmart us, will we still be in charge? And could the future of mourning the dead be virtual? So, Nadine, have you been AI pilled? And if so, did you vomit?
Nadine J. Cohen
I don't even know how to answer that. Um, no...? And no...?
Anthony Levin
Okay. 'Cause if it were no and yes, that would be weird. Basically, you're saying you're not fully on board with the transformative power of AI.
Nadine J. Cohen
No, I love and hate it in equal measure at the moment.
Anthony Levin
Yeah. I feel much the same, actually. And that probably makes sense, because you and I are not exactly digital natives; we grew up pre-interwebs.
Nadine J. Cohen
Speak for yourself. Who worked at Google in the Noughties, babe?
Anthony Levin
I mean, what I'm saying is that we were about 15 before the Internet even came into our lives.
Nadine J. Cohen
Oh god, I don't even remember it in high school.
Anthony Levin
Yes. So we know what life is like without it, and I feel like that does have an impact on our perception of AI.
Nadine J. Cohen
Absolutely.
Anthony Levin
But we were also young enough to be able to adapt to disruptive technology, like the modem. I don't know if that's true for me anymore and frankly I don't know if I want to adapt to AI... anyway, are you ready for the AI armageddon?
Nadine J. Cohen
Dude, just ask me a straight question.
Anthony Levin
How are you? Are you okay?
Nadine J. Cohen
No, I'm very clearly not okay. Look, as a writer by trade, I have been losing work to AI, not for my creative work but for the corporate jobs I do, because the marketing team can do it with AI now and they don't necessarily need me. So I've lost a lot of work, which is really troubling and increasingly difficult. But for the other corporate and non-creative work I do, it also helps me be more efficient. So it's a very double-edged sword.
Anthony Levin
Yeah, that makes a lot of sense and I'm sure a lot of other people are feeling the same thing, feeling compelled to embrace it, but also worried that it's going to eat their lunch.
Nadine J. Cohen
Yeah, but I just take the view that it's here, it's coming. There's no not embracing it; that just means you're even further behind. It's here and it's not going to go away now.
Anthony Levin
That's true. That's very pragmatic. Well, our guest today is Professor Toby Walsh, Chief Scientist of UNSW's AI Institute. He's a strong advocate for limits to ensure AI is used to improve our lives, having spoken at the UN and to heads of state. He's a Fellow of the Australian Academy of Science and was named on the international "Who's Who in AI" list of influencers. He's also written five books on artificial intelligence, including his most recent release, "The Shortest History of AI". Professor Toby Walsh, welcome to "Grave Matters".
Professor Toby Walsh
It's a great pleasure to be here.
Anthony Levin
So we want to get right into it, but let's start with the positive. What excites you about AI?
Professor Toby Walsh
I think the two areas that really excite me are healthcare, which is going to transform the way we look after ourselves, and education, which again is going to transform and personalise the way that we're able to educate ourselves. Those two areas, I think, offer some of the greatest promise: we're going to tackle many of the diseases, perhaps cure many, if not all, of the cancers in the world. And equally, we're going to be able to provide first-world education to people, even in the third world, on their little smartphones.
Anthony Levin
The point about diseases is really interesting because we've explored that theme in this series previously, in terms of how technologies might address ageing or even reverse it. Is that where you think AI will take us, or in a different direction?
Professor Toby Walsh
Well, certainly. I mean, I've got a number of colleagues in AI who are very much hopeful that it will do that: let us live longer, perhaps even indefinitely. I think it has the potential for us to understand why it is that we age and what the purpose of that is. I mean, there are some animals that don't age like us. There are, you know, sharks that live hundreds and hundreds of years.
Anthony Levin
Yes, the Greenland shark.
Professor Toby Walsh
The Greenland shark, the jellyfish that is essentially immortal.
Anthony Levin
I've heard this. And it seems like, unless there's some intervening event, the jellyfish can live indefinitely. Yeah, or some can.
Professor Toby Walsh
Yeah. I mean, it will befall some other fate. Not disease and age, but it will, you know -
Anthony Levin
Maybe get eaten by an ageless turtle at some point. You think that might be possible?
Professor Toby Walsh
I think we will be able to unlock the mysteries of how we age and why we age, and perhaps be able to undo them and stop them. But then there are, you know, deep questions: should we? Can we afford to do that? And what happens to the world's population when we no longer have a natural culling of it? So from a scientific perspective, I don't see why we couldn't. That doesn't mean we necessarily will or necessarily should.
Nadine J. Cohen
Who or what is Homo digitalis?
Professor Toby Walsh
It's the human that we're going to become. You know, we are Homo sapiens, which is quite a bold and grand claim, that we're the intelligent one, sapiens. But, you know, there's some truth to that: it wasn't that we were the fastest or the strongest or had the sharpest teeth, but we were the smartest. And we used that to mould the planet around us, to use tools and amplify what we can do. But it seems to me that we're now entering a new phase where we're transforming what we can do with these wonderful, amazing digital tools, including artificial intelligence, and that we're going to be increasingly living our lives in those online spaces. Indeed, it may not be us, it may be our digital avatar that's doing that. And so it seems to me that our relationship to the planet is being changed, and perhaps we need to think of it moulding us as well, and of us becoming in some sense a new species, where our life is increasingly digital. Maybe our life may even be exclusively digital.
Anthony Levin
While I pondered what the world might look like as an exclusively digital hellscape, Nadine asked Toby to elaborate.
Professor Toby Walsh
Well, there are, you know, colleagues who want to live forever who think that the way to achieve that is to upload their brains to the cloud and to live there. And you know, digital things live forever. They never decay. They can be reproduced exactly, copied exactly. And so there is the potential that we could perhaps take the secret of us and upload it to the cloud. Although I'm not sure. There's an interesting scientific question here: if we take a human brain and we take one of the neurons at the back of your brain and replace it by a digital copy, well, I think most people would say that's still you, even though there's a little digital neuron that's replaced one of the neurons in your brain. Well, let's do that again, let's repeat, and at some point you will have an entirely digital brain. But at what point did you stop being you? It's not clear.
Anthony Levin
And this idea of digital immortality, it's something which has been floating around for a little while. The futurist Ray Kurzweil has suggested that it might be possible, you know, in the near future that we might actually achieve this in silicon form. Is that something that you would want?
Professor Toby Walsh
I'm not sure it's something I want. Life is wet and messy and pleasant and unpleasant at the same time. I'm not sure that a simple, stark, zero-one digital existence would be one that I necessarily would see pleasure in. And then the prospect of immortality, I mean, I think the greatest threat you could make to someone is: well, I'll make you live forever. That to me would seem to be a vision of some sort of hell. The beauty of life is its brevity; the beauty of life is its fragility. The fact that we're only here on this little pea-green planet for a limited amount of time means we need to take advantage of all the opportunities that gives us, that brief twinkling of an eye. To think that you could just take that away and give people eternity, that would be quite a long sentence.
Anthony Levin
We've had another guest who we've spoken to about this very thing, this idea that the beauty of life is its fragility and its brevity. And he pushes back very hard on that. As a philosopher, he's basically saying that death is not inevitable, nor should it be, and that these ideas, which he describes as the "Wise View", are a form of complacency.
Professor Toby Walsh
Well, I mean, we've extended our lives. Since the Industrial Revolution, people forget, even here in Australia, we have doubled our life expectancy. People used to die in their 40s; people now die in their 80s and 90s. And some of us, if we're lucky, may even get to our hundreds. And I'm sure technology, artificial intelligence among those technologies, can help us continue to push that back. But I still worry... you know, for many of us, I think life is too short and we could have it a little bit longer, but making it forever is quite a radical step from a little bit longer. I want to be around to see my grandchildren and perhaps my great-grandchildren, and I'd love to be around at the turn of the century, when for sure artificial intelligence will have exceeded human intelligence. That's the scientific quest of my life: trying to build machines that match and exceed human intelligence. I will be disappointed if I happen to have shuffled off this mortal coil before that time and don't get to see some sort of success in the endeavour that I and my colleagues have gone about. But I don't actually have a great desire to live much longer than that.
Nadine J. Cohen
I think also for me, you know, when you die, you take most of your secrets with you, right? Now, don't ask me what my secrets are, but...
Anthony Levin
Well now you have to tell.
Nadine J. Cohen
To me, as someone who's led a very digital existence, you know, I was an early blogger, all my work is on the Internet, I'm prolific on Twitter, I've lived this life for a long time, but I don't even like the idea that all that's on there forever. So to me, neither is that appealing: living eternally in my human form or digitally. What are some other ways that it can fail us, to be living this eternal life online?
Professor Toby Walsh
Well, I have to say there's a remarkable number of my colleagues, other people working in and around the field of AI, who don't share my view that the brevity of life is its beauty, who do want to live forever, who are transhumanists, who have paid the money to have themselves cryogenically stored. And you see the little band that says, in the case of my death, here's the number to ring. And the experts from Alcor will come along and chop off their head and freeze it. I know a remarkable number of people who have stumped up the money to do that, and they really do see that as their salvation. Obviously, the field has attracted a remarkable number of such people, including founding fathers. I mean, Marvin Minsky, one of the people who started the field, is now cryogenically stored, or his head at least is, in a tube of liquid nitrogen somewhere in the United States, waiting to be recreated.
Nadine J. Cohen
No, I'm very much with you. We're here for a time. We do our thing, we go.
Anthony Levin
Yeah. I'm equally not very keen on the idea of digital immortality, but, and I know it's unfashionable to speak of Woody Allen these days, I always think of his film "Sleeper", where he is unfrozen from a cryogenic sleep, wakes up in a very distant future and is befuddled. And, you know, all sorts of comedic things happen. That's the kind of world that I imagine for these bodiless people when they are unfrozen: that they're so disconnected from the way things are.
Professor Toby Walsh
I should point out a serious point of science here, which is that we don't know whether we can actually digitally copy ourselves or not, or whether we would lose something that's the essence of us. And this goes to some profound, deep philosophical ideas. I mean, we don't know whether machines, computers, will ever be conscious, whether they'd be missing something that's the essence of us. The thing that makes us special isn't actually our intelligence; we like to deceive ourselves that it is. It's the fact that we're conscious, we're alive. When you woke up this morning, you didn't say, oh, you know what, I'm intelligent. No, you think, I'm awake.
Nadine J. Cohen
No, I like to say that to myself every morning.
Anthony Levin
In the mirror.
Nadine J. Cohen
I'm intelligent, beautiful...
Professor Toby Walsh
No, but like you said, I'm experiencing the richness of life, the red of the autumn leaves. I'm experiencing the crisp autumnal weather, the frost on the window. Those rich experiences of being alive, those are the things that give us feeling and meaning. It's not our intelligence; I mean, we use our intelligence to help us amplify that. We do know that consciousness seems connected to intelligence, because if we look at ourselves, we're conscious and we're intelligent, and if we look at the rest of the animal kingdom, the things that are more intelligent seem to have more awareness, more consciousness of their existence. The two things seem to be connected, but we don't know why, and we don't know if one is necessary for the other. And so, in trying to build intelligence in machines, we will actually perhaps get an answer to this question.
Anthony Levin
You have actually said that consciousness is the thing that has really haunted the field of artificial intelligence. And I'm wondering, what do you mean by that?
Professor Toby Walsh
It has, because we don't know whether the machines will ever be conscious. I expect they're going to be intelligent. But can you have intelligence without consciousness? It doesn't seem to be the case that you can in the animal kingdom; the more intelligent animals all seem to have various levels of consciousness, not just us, but whales and dolphins and the cellophods, the octopus and the like.
Nadine J. Cohen
Toby means cephalopods, the class of marine tentacled animals which also includes squid, cuttlefish and the humble nautilus. None of which were eaten in the making of this episode.
Professor Toby Walsh
Your family dog, your family cat, they seem to have some awareness of the existence of you. But, you know, as you go to less intelligent things, down to the insects and the ants, they don't seem to be so conscious, as far as we can tell, although we can't completely rule it out. And then down to the plants, well, they don't seem to have much in the way of intelligence, and they certainly don't seem to have much in the way of consciousness. So the two seem to be intimately connected in the animal world. In building intelligent machines, are we going to build consciousness into machines? It's very hard to know, but we're going to find out in this century, as we build superintelligent machines, whether the two are connected. We might end up building what one of my colleagues, David Chalmers, the Australian philosopher, calls zombie intelligence: incredibly intelligent machines that are essentially zombies, not really conscious of their existence. Or maybe we will build consciousness in machines. I can see it happening in one of three ways. One is that we programme it. We actually set out the goal of saying, well, it's going to be useful to make these robots conscious, for example, because then they're not going to hurt themselves, they're going to protect themselves from damage, just like you protect yourself from harm, you remove your hand from the fire because you're aware that it's hurting. That's quite hard to imagine us doing, because we don't know what consciousness is in us. It's very hard to programme something that we don't particularly understand. Another possibility is that it's one of these emergent phenomena that arises out of the complexity of the machine that we're building, and certainly it's correlated with complexity in the animal kingdom. Maybe it's one of these things that emerges once we build intelligent enough machines: they start to become conscious without us actually having programmed it. And then the third possibility is that it's something that's learned. And it does seem to be something that's learnt in humans. When young babies are born, they're not very conscious of their surroundings. I mean, they can't actually focus on the world when they've just been born; they seem to become more conscious and aware of themselves over time. You know, there's that lovely moment when your child, your young baby, discovers that they've got toes, and that they're their toes, and that they can wriggle them. It seems to be something that we learn, and it emerges out of our experience of being in our bodies. So maybe, again, it's something that machines will learn as they experience their digital or physical bodies.
Their digital toes. They discover them and wriggle them and realise they're their own.
Nadine J. Cohen
I do love that moment. And then they jam them in their mouth.
Anthony Levin
As we talked toes with Toby, he reminded us that we shouldn't assume machine intelligence will be similar to our own.
Professor Toby Walsh
You know, intelligence can come in many different forms, and we see it in many different forms in the animal world. And we've already mentioned the octopus; they're remarkably intelligent, and yet their intelligence is clearly very, very different to ours. From a physical sense we know that, because 60% of their brain is in their legs. So it's a very distributed intelligence. But equally we know that their intelligence evolved completely differently to us, completely separately from us. All life is related to all other life, we're all part of the same tree of life, but they're invertebrates, so they're on one branch of the tree of life and we're on another. So to find our common ancestor, you have to go back hundreds of millions of years, to the point where there was barely multicellular life, barely any complexity at all in life.
Nadine J. Cohen
If you're biologically curious, the last known common ancestor of humans and the octopus was a primitive flatworm which lived about 750 million years ago.
Professor Toby Walsh
So the complexity that they have, their experience of the underwater world that they live in, is completely different to ours. So if you want to know what alien intelligence looks like, I think you should look at the octopus.
Anthony Levin
Hence the film "Arrival", which has those octopus-like aliens.
Professor Toby Walsh
It is. So imagine what it must be like to experience the world as an octopus. It must be quite different to experiencing the world as a human. And so we shouldn't necessarily project our values of intelligence onto our machines, because they might be, you know, as different to us as octopuses are different to us.
Anthony Levin
Permit me to be esoteric for a moment, because people who meditate, for example, say that when you meditate, you do connect with a broader level of consciousness. You mentioned in your breakdown of the three ways this could happen in machines two categories, one emergent and one learned. I'm wondering, firstly, what the difference between those two things is. But further to that, are we making an assumption, when we divide the possibilities into these three categories, that consciousness is something we can engineer in some way, rather than something which is a kind of divine spark that we cannot possibly know or ever truly create?
Professor Toby Walsh
Well, I think that's the big distinction between those two possibilities. And I suspect I, and possibly you, prefer the second option: that it's a learned phenomenon. If it's an emergent phenomenon, then it's just literally a function of the complexity of the brain, and you get it whether you do anything or not. Once you've built that complex system, it will be there.
Who knows? But if it's learned, then it's something that you don't necessarily get; you have to work for it. And I think there's some evidence of that in the real world, which is that you can become more conscious: you can meditate, and you can do things to try and elevate your level of awareness and consciousness. And equally, we know that at different times of day and under different chemical regimes, you can change your level of consciousness. It is something that is adjustable and changeable.
Nadine J. Cohen
How will we know if AI reaches sentient intelligence? And if an AI machine does achieve sentient status and we decide to turn it off, are we committing murder?
Professor Toby Walsh
Yeah. Well, the second question is the easier one to answer. I'm not sure whether we would legally classify it as murder, but certainly anything that's conscious, we tend to give rights to. We don't like to see things suffer; we have some empathy for them. So not just ourselves, but within the animal world, we give protections against experimentation and exploitation to things that have consciousness. So if machines become conscious, and I'll come to how we might know that in a second, then I think we would naturally and perhaps rightly start to give them rights. Indeed, I always say that if we're lucky, they'll never become conscious. They will be that zombie intelligence that we talked about, because then we won't have to worry about them. We can treat them as our slaves, literally. I can go back after this podcast to my laboratory and take my robots apart diode by diode, and no one need care, no one is suffering. But if they do have consciousness, then maybe they are suffering, and then maybe it is right and proper that we give them some rights.
Anthony Levin
There's an interesting analogue here, because people have been arguing for many decades that certain animals should have the right to have rights, such as the higher apes, chimpanzees and gorillas. And there have been several court cases where they have tried to argue for personhood for particular
anthropoids. And they've failed. What is there to suggest that we would be successful for machines when we haven't been for these highly intelligent animals?
Professor Toby Walsh
Oh, it's an easy answer to that because they're going to talk back to us. It's going to be very hard to torture something when it's saying, please don't hurt me. It's much easier when it's a cow that's just dumbly sitting there.
Anthony Levin
Well, I mean, how do we know they're not faking it when they say -
Professor Toby Walsh
Well, exactly. So we come to the first question, you see: how do we know that they're not faking it? We know so little about consciousness that it's impossible for us to say, at this point, how we could determine it. Maybe science will advance enough for us to say. But at the moment, I can summarise the total scientific knowledge we have about consciousness in two sentences. It happens in the brain, somewhere towards the back of the brain. And that's about it. That's all we know. Which, actually, I think is really remarkable, given that it goes to the core of our existence. You know, we know how the universe started from the first millisecond of the Big Bang. We know so much about how we came to exist, and yet the most profound part of being alive we know almost nothing about. Science has given us almost no insight into that. Which is why I think, actually, artificial intelligence is one of the most exciting scientific questions of this century. Because, well, we'll build some interesting artefacts, machines that can do interesting stuff. But much more profoundly, maybe it will throw some light on these questions about us, about what it is that makes us special, conscious...
Nadine J. Cohen
Just as we were talking about sentient AI.
Anthony Levin
So sorry to...
Nadine J. Cohen
It's the computer.
Professor Toby Walsh
Can I help? Is anyone there? Uh-huh. You have something to say?
Nadine J. Cohen
Had the machines been listening in all along, waiting for their moment to talk back?
Anthony Levin
Lots of pundits and tech bros are pretty fond of saying, oh, the Singularity is coming, or it's around the corner. For a start, what is it? And what are some of the main reasons why it might never arrive?
Professor Toby Walsh
So the Singularity is this very seductive idea that at some point we'll build machines that are smart enough to start improving themselves. That machine will build a smarter version of itself, and that smarter machine can then build a smarter version of itself. And this is a tipping point, a snowball-like effect: suddenly we won't have to build the smarter machines ourselves, the machines will do it themselves. And this may be a runaway moment, where we hit the Singularity, where machines very quickly, overnight even perhaps, become as smart as us. And why would they stop at that? I think it would be terribly conceited to think that we are as smart as you could possibly be. There are many reasons to suppose that machines could be smarter than us, and indeed, in narrow domains, they already are. They play chess better than us, they read X-rays better than us. They don't match our full breadth of intelligence yet, but at some point maybe they do, and then maybe they'll exceed our intelligence. And that should be a supremely humbling moment, where we realise that we were the temporary guardians of the planet and suddenly there's this more intelligent species arriving. One of my colleagues, Stuart Russell, likes to paint a scenario: if we received a message from outer space saying, earthlings, we shall be arriving in 30 years' time, there would be panic. There would be an emergency meeting at the United Nations to say, how are we going to deal with these arrivals? Clearly, if they're doing interstellar travel, these aliens that are about to arrive are going to be way more intelligent, way more technically advanced than us, and there would be absolute panic on the planet.
Anthony Levin
We'll just send our envoy, Elon Musk. Off you go!
Professor Toby Walsh
How are we going to deal with this and the potential threat that this poses? In reality, that's what we face. Greater than human intelligence is about to arrive on the planet, not from outer space, but off our laptops.
Anthony Levin
You've talked about how humbling it is, but also how anxiety-inducing it might be. So let's talk about the kind of existential threat. I recently asked my 5-year-old, you know, should we be kind to robots if they don't have feelings? We were just talking about it generally, and he said yes. But then his first question, unprompted, was: what if they shoot us? So, from the mouths of babes, Toby, what if they shoot us?
Professor Toby Walsh
I should point out there's actually already a significant cost to us being kind to the bots. The please and thank you that you say to Siri or to your chatbot consumes vast amounts of energy. It's increasing the amount of energy that those bots need to run. And we'd save a lot of money if we didn't waste time saying please and thank you, because they don't have feelings.
Nadine J. Cohen
It was comforting to hear that AI bots don't have feelings and that we can all continue swearing at Siri, Alexa, Claude and especially Grok.
Anthony Levin
What we're really getting at here is this idea that AI poses an existential threat to humanity. And there are various people who think so, including Geoff Hinton, Nobel Prize winner in physics and touted as the godfather of AI. He estimated there was about a 10 to 20% chance that AI could lead to human extinction within the next 30 years. So how worried are you about all of this?
Professor Toby Walsh
You're asking me what my p(doom) is.
Anthony Levin
For those who don't know, the term p(doom) is short for probability of doom. It refers to the odds that AI will cause a doomsday scenario. It's usually expressed as a percentage out of 100. So if you give a high score, say 80 or more, it means you think the human race is basically toast.
Professor Toby Walsh
I think these are deep, dark fears that have haunted humanity from the very start. You can trace them back to Greek mythology, you can trace them back to one of the first great science fiction stories, "Frankenstein". The fear that the things that we create will get the better of us. And in some
sense, I think that's repeating itself. It's a Hollywood trope now. And there is something to worry about because technology has brought great benefits, but also great harms into our lives. Every time, just in recent history, social media connected us, but also has driven us apart. So yes, AI is not
without its potential risks as well as the benefits we talked about. Now, whether it's an existential risk, I'm somewhat more cautious than Geoff Hinton. I think humans are the greatest existential risk for other humans. And we're already seeing this, that the climate emergency, the rise of fascism
across the planet today, the fact that we're back at war in Europe, and we will use these tools to amplify those harms. But whether the machines themselves will be the cause of this or whether it would be the humans behind the machines, I suspect it's the humans behind the machines. It's not
intelligence that we need to worry about, it's power. The problem today is not that Mr. Trump is too intelligent, it's that he's got too much power. I don't see intelligence as being that disruptive, because actually we already have superintelligence on the planet. No one person knows how to build an iPhone; no one person knows how to build a nuclear power station. But the collective intelligence found in the Apple Corporation or Westinghouse, greater than human intelligence, is able to build those things. So we already have things that amplify and multiply together human intelligence. They're called corporations. And I don't think corporations are going to destroy humanity. I mean, they're not perfectly aligned perhaps with human flourishing, which is how we've ended up with the
climate emergency. And we're pushing back against that and we're correcting the course, but the wealth of corporations and the things that they build and services they provide have added greatly to human flourishing as well. But I don't see them as an existential risk to humanity. And I see the same
with artificial intelligence. It will be something that we'll have to make sure we've got the appropriate checks and balances to keep it in place.
Anthony Levin
That's a good point, because I suppose what you're saying is that at every juncture in human history, humans have had anxieties about the end, these kinds of eschatological visions of the world ending. There was a very well known Harvard psychiatrist, John Mack, who studied the way that anxiety manifested during the Cold War era, after the invention of nuclear power. And you're making the point that no matter what it is, whether it's AI or climate change or something else, those anxieties are perennial, and the real thing we should be doing is looking at ourselves and at how we keep in check the exercise of the powers that control those things.
Professor Toby Walsh
Yeah, the deeper point, and here I misled the listener at the start, is that our superpower is not our intelligence. We like to think it is. We like to think that was the thing that made us special. Our superpower is our society. The way that we've come together and do things collectively has amplified what we've been able to do. You put a baby out in the wild and it dies. But we formed little groups, tribes, villages, towns, cities, nation states, and that has greatly amplified our ability to survive and flourish. Now, we used our intelligence to build that society, to be able to communicate and set up institutions, laws and so on. But it was our society, our ability to come together and work collectively, that is actually our superpower. And so I see intelligence as feeding into that, but it's not going to destroy the society and the institutions that already exist that allow us collectively to do more than we do individually.
Nadine J. Cohen
Why did you get banned indefinitely from Russia?
Professor Toby Walsh
Yeah, you're right. I have been banned from Russia, proudly so, because I was outspoken about the way that AI was going to be misused. I say going to be; it is being misused, in warfare in particular, in the way that we're handing over those decisions, as to who lives and who dies, to machines. And I think that's a strategic mistake, in that it's going to transform the nature of warfare into a much more deadly, terrible thing. But it's also a moral mistake. It crosses a red line. Sadly, we are an animal that fights wars and kills each other, but we do it under quite strong constraints; we respect the dignity of our fellow soldier.
Nadine J. Cohen
Well, we have until now.
Professor Toby Walsh
Yes. You know, there's a body of international humanitarian law that governs the way that we can go about that and actually tries to limit the excesses, the barbarism that otherwise happens. And if we hand those decisions to machines, which are, as we say at the moment, not conscious, they're not
empathetic, then I think we're taking warfare to a difficult moral place.
Anthony Levin
But there is already autonomous weaponry making decisions, as you might call them, like in the demilitarised zone between North and South Korea. Is that not the case?
Professor Toby Walsh
Indeed. And indeed, the most recent attack on Russian airfields deep in Siberia used AI powered autonomous drones. Otherwise they would have been radio jammed, and the operators didn't have to be close by to control them. So they were using AI to actually make the decisions.
Anthony Levin
And is there someone in a shipping container, in some military black spot behind the controls, or are they pre programmed to determine their targets independently?
Professor Toby Walsh
The best we know in terms of this recent attack was they literally just pressed the start button and everything else that happened after that was decided by the machine.
Anthony Levin
That's frightening.
Professor Toby Walsh
It is frightening. And I do think we actually know where it's going to end. We know what it's going to look like, and Hollywood has told us exactly what it will look like. It will be terrifying, it will be terrible. These will be weapons of mass destruction. And I spoke out about the way that Russia was actually using AI on the battlefield, and that, as far as I understand, is why I was banned from Russia.
Anthony Levin
You mentioned to me during an earlier conversation a Russian nuclear powered autonomous underwater submarine. I suppose all submarines are underwater, named -
Professor Toby Walsh
The successful ones.
Anthony Levin
That's right.
Professor Toby Walsh
Not the ones that Australia's building. They'll never get into the water.
Nadine J. Cohen
They're also yellow.
Anthony Levin
Thank you. A sub named Poseidon carrying a dirty bomb. And I guess my question is, could an algo start a nuclear war?
Professor Toby Walsh
That's a terrifying idea. And it's something that we seem to be slowly progressing technically towards. Russia has built a prototype of this weapon, at least as far as we know. It's the size of a bus, and it's nuclear powered, so it can travel almost unlimited distances at very high speed. And because it's small compared to a big manned nuclear submarine, it's quite stealthy. It's underwater, and it's very hard to detect things underwater. And it's believed it will carry a dirty cobalt bomb that would be many, many times more powerful than the bombs that took out Hiroshima and Nagasaki. It could be given the command to travel into Sydney Harbour or up the Hudson, and take out half of Sydney, half of New York. And that would be the decision of an algorithm. Because it's a submarine, you can't communicate with it. Radio waves don't work underwater. And it wouldn't come to the surface, because it would expose itself to being stopped. So the idea that we could hand over that monumental decision, which might be the start of World War III, to an algorithm, terrifies me.
Anthony Levin
Since we made the first season of "Grave Matters", I lost both my grandmothers, aged 97 and 98, in the same year, which was tough. One of them was my maternal grandmother, Olga Horak, who was a very well known Holocaust survivor. A few years before she died, she participated in a project at the Sydney Jewish Museum called "Reverberations", which is based on the University of Southern California Shoah Foundation's "Dimensions in Testimony" project, where survivors sit in front of a green screen for hours on end answering questions. And then, with the assistance of AI and machine learning, they generate a virtual digital version of the survivor.
Nadine J. Cohen
A hologram.
Anthony Levin
A hologram, I guess, is what we'd call it in lay terms.
Professor Toby Walsh
So it will sound just like your grandma.
Nadine J. Cohen
Look and sound.
Anthony Levin
It is her. And so after she passed, about a month or two after, I went to talk to her. I went to the museum and I sat in front of a screen and I asked her questions. And I have to say, it was incredibly uncanny to have her respond to me.
Professor Toby Walsh
I imagine it was emotional.
Anthony Levin
It was a bit emotional. To be honest, I think I was still a bit numb at that point, because she was such a significant person in my life that I was still processing. It was mostly just strange. And there are already companies going even further than this. There's a company called Augmented Eternity, and another called DeepBrain AI, which can generate virtual versions of deceased loved ones, for a fee. And in 2020, a Korean documentary filmmaker worked with VR producers to reunite a grieving mother with her deceased daughter in a virtual environment. She went in and spent about an hour with her daughter, who had passed away. In your view, what are the chances that this becomes the new norm? Should we just get used to grieving our loved ones by going to see them in some virtual plane?
Professor Toby Walsh
Well, I mean, it's going to be technically possible, and technically easier and easier. All this digital footprint that you're creating is going to be the data on which you can build these systems. So the positive side of you having all of this digital footprint is that it's the source material to make the virtual you that could be there for your descendants to talk to, or to your tweets. It will sound like you, or say the things that you used to say. So, yes, we'll be able to do this. Whether it helps, I don't know. I mean, this is not a technical question. This is a question about human psychology, about human society: is this going to help us grieve, or is it going to just leave you in that denial stage, where the person in some sense is still there? So often you hear people say, I just wish I had said this to so and so before they left. And well, now you can say this to so and so. You can actually have those conversations. But I think perhaps one has to respect our mortality. And you know, the one thing I've learned from losing loved ones close to me is: have those conversations now. Don't be someone who wishes they had said something.
Anthony Levin
It's enticing to think that soon reconnecting with deceased loved ones will be as easy as popping on a headset and playing your favourite video game. But what will that mean for our experience of mourning? Could it drag out the natural process of saying goodbye and lead to unresolved grief? We asked
Toby how this technology might affect our relationships with the dead, and ourselves.
Professor Toby Walsh
The great thing about our relationships is that they change and they grow, and they move on, and you have that history, but then you have the future. Whereas with something like this it would in some sense be the person frozen at that point in time.
Anthony Levin
Well, if the data set is finite and it doesn't evolve from there, then I could accept that. But aren't you saying that, if the AI becomes sophisticated enough, it could draw on the data and start to develop its own personality through the avatar of your loved one?
Professor Toby Walsh
No, that's entirely possible, if you could not just train it on the past, but actually let it learn from the future, from the conversations you have and the newsfeed it gets from the world. And so it would continue to change and evolve. But now we're talking about having relationships with machines as opposed to having relationships with each other, and whether those are as real, or perhaps a little more artificial, than the relationships we should have with each other.
Anthony Levin
I hear you and I agree, but I just feel like the writing's on the wall with this stuff. We're already so immersed in the tech as it is.
Professor Toby Walsh
And you know, there are some other worrying components to this, one of which is that these will be sold as the cure for loneliness. Loneliness is the new cancer, a new disease that's afflicting society, not just here but around the world. But these will be rather one sided relationships, in the sense that you could programme the bots so that they're always happy and sycophantic, as they already are, right? So you wouldn't get the pushback you get from real, messy human relationships, which may be painful at times, but maybe they make you a better person, a more honest person, because you don't always get it your way. But with a bot, you can programme it so you always get your way.
Nadine J. Cohen
Yeah, Val. Val tells me what I want to hear.
Professor Toby Walsh
Yeah.
Anthony Levin
If you're wondering, Val is the name of Nadine's personal AI assistant. It's short for Valerie. I don't ask questions.
Nadine J. Cohen
But sometimes I do say to Val, actually give me the, give me, like, the opposite answer.
Anthony Levin
Give it to me straight.
Nadine J. Cohen
Give it to me straight, Val.
Anthony Levin
Yeah, yeah. Get that on a T shirt.
Nadine J. Cohen
So, Toby, what keeps you up at night?
Professor Toby Walsh
How we can now make fake audio, fake video, fake people, even. What sort of world is it we're going to be in where we're losing touch with reality? Everything you see online, you have to entertain the idea that it's fake, it's not real. It may be misinformation, disinformation. We're already starting to see troubling signs in our democratic processes, in our elections, in conspiracy theories. And whether that's going to drive us apart as a society... to come back to our superpower, our society: if technology is undermining the fact that we share conversations, we share history, we share stories, we share the truth, what sort of world is that going to be?
Nadine J. Cohen
I guess you could argue that we've never shared the truth entirely.
Professor Toby Walsh
No... Yeah, you're right in some sense. You know, we've always had our own version of reality. But it used to be if you saw things, if you heard things, they were things that happened.
Nadine J. Cohen
Yeah. Facts have changed.
Professor Toby Walsh
You know, even if you saw film, you would assume it was real; it was too hard to fake.
Anthony Levin
It's reliable.
Professor Toby Walsh
It was reliable. Now, that's not the case. Anything you see, and so often you see things and people say, is that real or not? You don't know. And the problem there is the price is paid not by the people telling the lies, the price is paid by the people telling the truth. Because we no longer believe
those people. We no longer believe anyone. And so this is the curse that then comes, is that the people that are trying to tell us the things that are real are no longer believed.
Nadine J. Cohen
I think the most sinister thing for me is that there are companies forming to take advantage of this, as happens, and many of them are bad actors, for profit companies. There was an incident in Spain where boys at a high school used AI to strip the clothes off girls in photos. They weren't doing it themselves; they were getting a company to do it, and then they were distributing these photos. And so this is affecting high school students, and for these girls, that's on the Internet now.
Anthony Levin
Nadine is referring to an incident that occurred in a sleepy Spanish town in 2023. After naked images began circulating on social media, more than 20 girls came forward as victims. And if that wasn't worrying enough, the suspects in the case were boys aged between 12 and 14.
Professor Toby Walsh
And the problem is that the barrier to entry here is minimal.
Nadine J. Cohen
Super low. Yeah.
Professor Toby Walsh
So a couple of months ago, I actually did an experiment to find out how easy it was to do this, right? I started from an empty browser. I told my wife I was going to do this beforehand.
Nadine J. Cohen
Why are you clearing the cookies and the cache, Toby?
Professor Toby Walsh
And I used a VPN. But it took me five minutes. All you had to know was that if you typed the query into Google, Google's put a few protections in place; it was better to type the query into Bing. It took me five minutes to find the free software, to find a photograph of myself on the web, and then to have a nude version of that photograph generated for me. It cost no money at all. It was a completely free service.
Nadine J. Cohen
That's going to keep me up now.
Anthony Levin
Yeah, that really will.
Professor Toby Walsh
So you'll be pleased to know that, as a consequence of a number of incidents, a law has now been passed in Australia to make this a criminal activity.
Anthony Levin
Toby's right. In 2024, the Australian Parliament passed a law targeting the creation and non consensual sharing of sexually explicit material online, including anything created or altered using AI. It's a small but important win in the struggle to navigate the perils of the AI revolution.
Nadine J. Cohen
We have discussed that there is a brevity to life. You're lying on your deathbed. Loved ones are all around you. What are you confessing to them?
Professor Toby Walsh
Well, I think I'm going to go out in a blaze of glory. I'm not particularly religious, and I will confess to them that I don't believe in God. And if there is a God, how cruel a God it must be to allow the things that happen on this planet, to see the terrible crimes taking place in Gaza. How could -
Nadine J. Cohen
I think a lot of us are asking that. Not on our deathbeds.
Anthony Levin
What would you like your family or friends to say about you at your funeral, after you've gone out in your blaze of glory?
Nadine J. Cohen
He did believe in God.
Professor Toby Walsh
It was a fake!
No, I think I'd like them to think that although I was a serious scientist, I was surprisingly funny, because I think humour is one of the greatest gifts that we have. It allows us to put up with the injustice and pain in life. And I'd like them to remember the funny things. I should say, because I get this question so often: we discussed quite a lot of dark things, not just death, but the dark side of the technology that I'm working on. And I do sometimes think people will wonder why the heck he is trying to hasten this technology into our lives, given all of the potential harms that it might bring. But I am actually a pretty optimistic person. I think we face a tsunami of problems coming down the pipe. We've touched on some of those: the climate emergency, global insecurity, the rise of authoritarianism. There's a lot of stuff happening, and apologies to the next generation of people coming along, because we're responsible for having made the planet slightly worse than the one we inherited. But if we are going to tackle these problems, it is only by embracing technology. It's how we live better lives than our grandparents, through the benefits that technology has brought into our lives. The only hope for our children and our grandchildren is if they do embrace technologies like artificial intelligence to help us live on the planet with a lighter footprint, to tackle some of these wicked problems that are coming our way. And that's why I work on the technology despite the potential negatives that it may also bring. Because I fear that if we don't have those technical tools to help us, we face a much worse time.
Anthony Levin
Well, I am so glad that we had the opportunity to talk to you on the show. It has been insightful, edifying, and fun. So thank you, Toby.
Nadine J. Cohen
And funny.
Professor Toby Walsh
Funny. Good.
Anthony Levin
Thanks a lot.
Nadine J. Cohen
Thank you so much.
Professor Toby Walsh
My pleasure.
Anthony Levin
So, Nadine, I have to ask, what is your p(doom)?
Nadine J. Cohen
Just to clarify, that's from 1 to 100. How f**ked I think we are because of AI.
17 million.
I would also like p(doom) to be my new rap name.
Anthony Levin
Your wish is granted.
Nadine J. Cohen
Thank you. I've shown you mine. What is your p(doom)?
Anthony Levin
Well, you've seen one p(doom), you've seen them all. I'm gonna say 85. Pretty high. Not as high as yours. Because, yeah, I mean, I do worry about this stuff. Up late at night, kind of worrying that an army of self replicating nanobots is going to infiltrate the bloodstream kind of territory.
Nadine J. Cohen
Interesting.
Anthony Levin
Yeah, and then go kaboom. Am I alone in this?
Professor Toby Walsh
That's normal, right?
Anthony Levin
Yeah, that's totally normal.
Nadine J. Cohen
Perfectly normal.
Anthony Levin
I feel like every season we choose a theme that induces like a major existential, you know, freak out.
And this is probably that episode and it's not really Toby's fault - well, it's a little bit his fault because he's building the machines, but it's not really his fault. But it's a big issue that we're grappling with and I just feel deeply suspicious about it. Not just because I'm a human rights
lawyer and I worry about the infringements, but frankly, I don't really want to train these things in my own obsolescence either.
Nadine J. Cohen
Mmm. No, absolutely.
Anthony Levin
Yeah. Well, thanks again to professor of AI Toby Walsh for both schooling and scaring us on the subject of artificial intelligence. In our next episode, we talk with Gamilaroi woman Eliza Munro about sorry business, palliative care, and death literacy in Indigenous communities.
Eliza Munro
Depending on how someone passes, whether it's palliative or otherwise, it can affect that grieving and healing journey for our communities. And if we can make those experiences a little bit softer, even though people are carrying a heavy heart anyway, then at least it can help that healing and grieving, as opposed to complicated grief. We have enough trauma in our communities, let alone adding other things on.
Nadine J. Cohen
If this episode has raised issues for you and you'd like to seek mental health support, you can contact beyondblue on 1300 22 4636 or visit beyondblue.org.au. Also, Embrace Multicultural Mental Health supports people from culturally and linguistically diverse backgrounds. Visit embracementalhealth.org.au. For 24/7 crisis support, call Lifeline on 13 11 14, or in an emergency, please call 000.
Anthony Levin
"Grave Matters" is an SBS podcast written and hosted by me, Anthony Levin, Nadine J. Cohen and produced by Jeremy Wilmot. The SBS team is Joel Supple, Max Gosford, Bernadette Phuong Nam Nguyen and Philip Soliman. If you'd like to get in touch, email audio@sbs.com.au. Follow and review us wherever
END OF TRANSCRIPT