Our Guest Dr. Ben Goertzel Discusses
Godfather of AGI on Why Big Tech Innovation Is Over
Is the AI arms race between tech giants and nations pushing us toward a dangerous future?
On this episode of Digital Disruption, we’re joined by the founder of SingularityNET and the pioneering mind behind the term "artificial general intelligence" (AGI), Dr. Ben Goertzel.
Dr. Ben Goertzel is a leading figure in artificial intelligence, robotics, and computational finance. Holding a Ph.D. in Mathematics from Temple University, he has been a pioneer in advancing both the theory and practical applications of AI, particularly in the pursuit of artificial general intelligence (AGI), a term he helped popularize. He currently leads the SingularityNET Foundation, TrueAGI, the OpenCog Foundation, and the AGI Society, and has organized the Artificial General Intelligence Conference for over fifteen years. A cofounder and principal architect of OpenCog, an open-source project to build human-level AI, Dr. Goertzel's work reflects a singular mission: to develop benevolent AGI that advances humanity's collective good.
Dr. Goertzel sits down with Geoff to share his insights on the accelerating progress toward AGI, what it truly means, and how it could reshape human life, work, and consciousness. He discusses the role of Big Tech in shaping AI's direction and how corporate incentives and commercialization are both driving innovation and limiting true AGI research. From DeepMind and OpenAI to decentralized AI networks, Dr. Goertzel reveals where the real breakthroughs might happen. The conversation also explores the ethics of AI, the dangers of fake democratization and false compassion, and why humanity must shape AI's evolution with empathy and awareness.
00;00;00;09 - 00;00;21;00
Geoff Nielson
Hey everyone! I'm super excited to be sitting down with Ben Goertzel. Ben is one of the most interesting minds in AI, and if you've ever used the term AGI, or artificial general intelligence, that's his. His rap sheet includes founding SingularityNET, designing the OpenCog AI framework, serving as chief scientist at Hanson Robotics, and leading the Conference on Artificial General Intelligence.
00;00;21;01 - 00;00;41;03
Geoff Nielson
Ben has lived and breathed AGI for a lot longer than the current hype cycle has, and I want to get the real story behind it. What the hell is it? When can we expect it, if at all? And what impact can we expect it to have on our lives and our livelihoods? Let's find out.
00;00;41;05 - 00;00;44;09
Geoff Nielson
Well, the last few years have been particularly crazy.
00;00;44;11 - 00;00;56;07
Ben Goertzel
AI keeps getting more and more intense, as one would expect approaching the singularity and all that. So, I mean, it is interesting to see it all happening finally.
00;00;56;09 - 00;01;10;02
Geoff Nielson
Yeah. And I mean, from your perspective, you know, not to jump right into things, but is that warranted? Is the pace of technological change keeping up with the hype around it and people's interest in it?
00;01;10;03 - 00;01;41;26
Ben Goertzel
It would seem so, yeah. I mean, I do feel like it is. I think people's expectations always get hyped up even beyond the reality. But if you think about it: like, last year everyone was saying AI is dead, then reasoning models came out, then DeepSeek came out, and everyone said, oh, Nvidia's done, US AI is dead, right?
00;01;41;26 - 00;02;05;13
Ben Goertzel
Then, okay, OpenAI came out with the next model, and everyone's back on board. Now GPT-5 came out, and it wasn't as good as people had hoped, so they're like, oh, it's terrible, things aren't happening fast enough. But if you look at even the benchmarks of AI performance, which I don't like or place much stock in, I mean, they keep on going up.
00;02;05;13 - 00;02;24;25
Ben Goertzel
The public narrative oscillates up and down. So yeah, I mean, I think progress is quite amazing, and it looks exactly like you would think it would in the last few years before a breakthrough to AGI and the singularity.
00;02;24;27 - 00;02;28;10
Geoff Nielson
So it sounds like you're still pretty bullish that we're, you know, marching ahead?
00;02;28;10 - 00;02;57;17
Ben Goertzel
Super bullish, man. You know, literally before breakfast this morning I made like ten Python programs to test versions of some AI algorithm I made up, just by vibe coding on LLM platforms. Before we had these tools, each of those would have taken me half a day, right? So I mean, it's sped up prototyping research ideas by a factor of 20 to 50 or something, right?
00;02;57;18 - 00;03;21;14
Ben Goertzel
And those are tools that we have now that are not remotely AGI; they're just very useful research assistants. But we are at the point where the AI tooling is helping us develop AI faster, right? And that is exactly what you would expect in the endgame period before a singularity.
00;03;21;17 - 00;03;29;07
Geoff Nielson
Well, and that can create a snowball effect, right? If it's helping us research AI itself faster, or any of these spaces faster, then, you know, that's...
00;03;29;07 - 00;03;36;22
Ben Goertzel
It's doing that right now, yeah. I mean, that is why we're able to see the pace that we now see.
00;03;36;24 - 00;03;58;16
Geoff Nielson
Yeah. So, you know, maybe just to take a step back then: artificial general intelligence. This is a phrase that, you know, you coined over a decade ago, and it has been getting a lot of press lately, in addition to superintelligence. And so I wanted to ask you, maybe just to do a little bit of table setting: how do you define artificial general intelligence?
00;03;58;19 - 00;04;08;13
Geoff Nielson
And, you know, why is it important? Why does it matter? And how does it differ, if at all, practically, from something like superintelligence?
00;04;08;16 - 00;04;35;17
Ben Goertzel
So informally, what we mean by AGI tends to be the ability to generalize roughly as well as people can, to make leaps beyond what you've been taught and what you've been programmed for, roughly as well as people do. And that's an informal concept. I mean, it's not a mathematical concept.
00;04;35;17 - 00;05;11;20
Ben Goertzel
There's a mathematical theory of general intelligence, and it deals more with what it means to be really, really, really intelligent. You can look at general intelligence as the ability to achieve arbitrary computable goals in arbitrary computable environments. And if you look at the abstract math definition of general intelligence, you conclude humans are not very far along. Like, I cannot even run a maze in 750 dimensions,
00;05;11;20 - 00;05;44;25
Ben Goertzel
you know, that little, let alone prove a randomly generated math theorem of length 10,000 characters. I mean, we are adapted to do the things that we evolved to do in our environment, right? We're not utterly general systems. So, I mean, superintelligence is also a very informally defined concept, but it basically means a system whose general intelligence is way above the human level of general intelligence.
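(For readers who want the formal version Ben is alluding to: the standard formalization is the Legg-Hutter universal intelligence measure, which scores an agent by its expected reward across all computable environments, weighting simpler environments more heavily. A minimal sketch, with the usual notation assumed:)

```latex
% Legg-Hutter universal intelligence of an agent \pi:
% a complexity-weighted sum of the agent's expected cumulative
% reward V over every computable environment \mu in the class E,
% where K(\mu) is the Kolmogorov complexity of \mu.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi
```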
00;05;44;25 - 00;06;17;11
Ben Goertzel
So it can make creative leaps beyond what it knows way, way better than a person can, right? And, I mean, it's pretty clear that's possible. Just as we're not the fastest-running or highest-jumping possible creatures, we're probably not the smartest-thinking possible creatures. And we can see examples of human stupidity lurking around us every day, even in very smart people.
00;06;17;15 - 00;06;57;06
Ben Goertzel
Like, I'm pretty clever, but I can only hold 10 or 15 things in my memory at one time without getting confused. Now, some autistic people can do better, but I mean, you know, there are many limitations of being a human brain, and it seems clear some physical system could do better than that. And then the relation between human-level AGI and ASI is interesting, because it seems like once you get a human-level AGI, a computer system that on the one hand can generalize and imagine and create as well as a person, and on the other hand is inside a computer,
00;06;57;09 - 00;07;32;07
Ben Goertzel
it seems like that human-level AGI should pretty rapidly create or become an ASI. Because, I mean, it can look at its entire RAM state, it knows its own source code, it can copy itself and tweak itself and run that copy on different machines experimentally, right? So it seems like a human-level AGI will have much greater ability to self-understand and self-modify than a human-level human, which should lead to ASI fairly rapidly.
00;07;32;07 - 00;08;02;08
Ben Goertzel
And you know, we've seen in the commercial world some attempts by business and marketing people to fudge around with what AGI is. But within the research world, the notion that an AGI should be able to generalize very well beyond its training data, at least as well as people, I think that's well recognized. I mean, I've seen Sam Altman come out saying, well, maybe if something can do 95% of human jobs, we should call it an AGI.
00;08;02;08 - 00;08;24;20
Ben Goertzel
And I mean, you can call it what you want, that's fine, but it is a different concept than having human-like generalization ability, right? Like, if you can do 95% of human jobs by being trained in all of them, that may be super economically useful, but it's different from being able to take big leaps beyond your training data.
00;08;24;23 - 00;08;50;26
Geoff Nielson
If you work in IT, Info-Tech Research Group is a name you need to know. No matter what your needs are, Info-Tech has you covered. AI strategy? Covered. Disaster recovery? Covered. Vendor negotiation? Covered. Info-Tech supports you with best-practice research and a team of analysts standing by, ready to help you tackle your toughest challenges. Check it out at the link below, and don't forget to like and subscribe!
00;08;50;29 - 00;09;19;24
Geoff Nielson
One of the challenges I find in this space is trying to separate the marketing around all this, and, you know, kind of the bluster and the hype, from the actual technological capabilities, because everybody wants to tell you, oh, we already have it, or it's basically right here. So, you know, I did want to ask you, Ben (I'm not looking for an answer in months or years necessarily): how close are we to AGI right now, in kind of the fuzzy sense of it?
00;09;19;26 - 00;09;20;21
Geoff Nielson
So, are we...
00;09;20;26 - 00;09;50;28
Ben Goertzel
Well, in his 2005 book The Singularity Is Near, Ray Kurzweil plots some nice curves of Moore's Law and allied statistical regularities, suggesting that human-level AGI should occur around 2029. So I would say I'll take Ray's estimate, plus or minus 2 or 3 years, right? Because, I mean, you can't nail it down exactly. I saw he thought we'd have speech-to-text working
00;09;50;28 - 00;10;14;04
Ben Goertzel
really well like five years ago, and it seems to me only this year it started to work really well. Like, now it can understand my wife's Chinese accent, it can understand my four-year-old daughter; it couldn't a year or two ago. So he hasn't been exact, but he's been pretty on target, and I think what he was seeing was fundamentally correct.
00;10;14;04 - 00;10;42;03
Ben Goertzel
Like, it's all this compute power, all this computer networking, all this data, and this is creating more and more capability, which is bringing more and more human attention and funding, which is letting us experiment with more and more interesting AGI-oriented ideas, right? So I don't think LLMs are the golden path to AGI, although I think they can be components of AGI systems.
00;10;42;05 - 00;11;07;26
Ben Goertzel
But I mean, I think the same context that allowed LLMs to emerge is going to let smarter and smarter systems building on LLMs emerge over the next few years, which will bring us to AGI pretty rapidly. Which emotionally feels quite amazing, even though intellectually it's what I've thought would happen for my whole career.
00;11;07;29 - 00;11;35;12
Geoff Nielson
So I want to unpack that notion about LLMs not just being a linear path there a little bit. Because, you know, if you listen to Sam Altman, or maybe some of the people around him, or, you know, big advocates of those types of tools, you might not be foolish for believing that AGI is basically just going to be, you know, ChatGPT 10, or pick whatever number you want, that there's just, like, step-by-step returns.
00;11;35;14 - 00;12;02;26
Ben Goertzel
It just depends what's under the hood, right? Because even the top LLMs now are not just LLMs. Like, when you look at all the achievements of LLMs on the Math Olympiad or Physics Olympiad, I mean, for math, they're coupling them with a formal verifier that checks whether the math actually worked.
00;12;02;28 - 00;12;34;25
Ben Goertzel
And when LLMs are doing coding, I mean, they're going back and forth with a Python interpreter that's feeding bugs back to them. And inside GPT and Claude, they're using various kinds of retrieval-augmented generation, which means there's a vectorized database of what the LLM was doing that it can then retrieve from. So, I mean, already we have complex neural-symbolic, multi-part cognitive architectures.
00;12;34;27 - 00;13;08;20
Ben Goertzel
They're just wrapped up in one interface, as they should be. And the LLM is the most expensive component to run, so that's what people are focusing on. So I think the real debate about AGI cognitive architecture is: will it be an LLM at the center, with a bunch of other useful tools and memory stores around the periphery that the LLM interacts with, or will it be something else at the center, with the LLM as a sort of knowledge oracle?
00;13;08;20 - 00;13;38;11
Ben Goertzel
Right? Or maybe nothing at the center, just a multi-agent system with LLMs and other things all cooperating together as a result. Now, certainly there's some people who think a pure transformer neural net will lead to AGI, and there are some people who think that, as Yann LeCun from Facebook said, LLMs are an off-ramp on the highway to AGI, right?
00;13;38;13 - 00;14;05;08
Ben Goertzel
So, I mean, there's some people who think LLMs are the whole answer, and some people who think they're utterly a distraction. I think the majority of AGI researchers think it's going to be LLMs combined with other stuff; it's just a matter of whether the LLM is 80% or 20%. And that's the kind of thing we can experiment with much more rapidly now than before, with all this compute hardware and all this data.
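(To make the architecture debate concrete, here is a minimal, self-contained sketch in Python of the propose-and-verify loop Ben describes, with the LLM at the center and a symbolic checker at the periphery. The "LLM" here is faked with canned drafts and the "verifier" is a toy unit test; in a real system those would be a model call and a proof checker or interpreter.)

```python
# Toy sketch of an LLM-plus-verifier loop.
# The "LLM" is faked with canned drafts and the "verifier" just
# runs the candidate against a unit test; in a real system these
# would be a model call and a formal verifier or interpreter.

CANNED_ATTEMPTS = [
    "def add(a, b): return a - b",   # buggy first draft
    "def add(a, b): return a + b",   # corrected second draft
]

def llm_propose(round_idx: int, feedback: list[str]) -> str:
    """Stand-in for an LLM call conditioned on earlier failures."""
    return CANNED_ATTEMPTS[min(round_idx, len(CANNED_ATTEMPTS) - 1)]

def run_checker(candidate: str) -> tuple[bool, str]:
    """Stand-in for the symbolic component: execute and test."""
    namespace: dict = {}
    exec(candidate, namespace)                 # compile the candidate
    if namespace["add"](2, 3) == 5:
        return True, "all tests passed"
    return False, "add(2, 3) did not return 5"

def solve(max_rounds: int = 5) -> str | None:
    feedback: list[str] = []
    for i in range(max_rounds):
        candidate = llm_propose(i, feedback)
        ok, message = run_checker(candidate)
        if ok:
            return candidate                   # verified solution
        feedback.append(message)               # error fed back to the "LLM"
    return None

print(solve())  # prints the draft that passed verification
```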
00;14;05;14 - 00;14;28;07
Geoff Nielson
So, you know, whether it's the 80% or the 20%, let me ask you maybe like this: of the major players who are doing, you know, research and development in this space, are there one or a few that excite you most in terms of the approaches that they're taking, or that you think are most likely to strike gold here?
00;14;28;09 - 00;15;05;16
Ben Goertzel
So of the familiar big tech players, I think DeepMind remains the most impressive and interesting. There's a great depth of talent there working on a great variety of different AGI-related approaches, and they're working together with the Google Brain people, who invented transformer neural nets in the first place. Even though, you know, in some ways OpenAI and Anthropic have taken the lead.
00;15;05;16 - 00;15;34;06
Ben Goertzel
But Gemini is not that far behind anymore. So, I mean, I would say DeepMind has a lot of depth, and they have leaders who fundamentally get AGI and how to do AGI research: one of them, Shane Legg, used to work for me, and Demis I know moderately well. So yeah, if I had to make a bet, it would probably be DeepMind.
00;15;34;14 - 00;16;03;25
Ben Goertzel
On the other hand, I haven't followed their internal politics in the last couple of years. I know they're fusing more and more with the Google mothership, which is probably good for Google's bottom line and less good for fundamental research progress within DeepMind, right? So it may be their glory days as a research incubator are fading now, or maybe not.
00;16;03;26 - 00;16;31;07
Ben Goertzel
I mean, I know a bunch of people in DeepMind, but we don't talk about their internal politics. What I would say is, within a big tech company that's making a lot of money from AI, there is going to be a strong pressure to keep developing what works, right? And transformer neural nets now basically do work.
00;16;31;07 - 00;17;07;27
Ben Goertzel
They do a lot of cool things, and they can be milked a lot more in different ways, right? Like, language-vision-action models can be built for robotics. Video generation is just beginning to work now. So there's a lot more to be milked out of transformer neural nets and similar technologies. And if you're running a big company that needs to keep making more and more money by delivering new versions of your products, fleshing out what already works is going to seem very intelligent from a Wall Street perspective.
00;17;07;27 - 00;17;37;27
Ben Goertzel
Right? And this becomes a classic innovator's-dilemma type thing, where if what needs to be done to get to AGI involves significant components beyond what is currently commercially viable, there's certainly a pressure in big tech not to pursue that other stuff, but to focus on improving what's currently commercially viable. And so, look at the AGI research conferences, which I've been organizing since 2006.
00;17;37;27 - 00;18;14;15
Ben Goertzel
We have an annual research conference; we had the last one in August. There's a great diversity of AGI ideas presented there by academics, entrepreneurs, industry people, a lot of different theoretical notions and prototypes aimed at AGI, and Big Tech isn't trying too hard on those, right? Because of the same reasons, big companies generally don't pursue blue-sky research except in very particular cases.
00;18;14;15 - 00;18;42;29
Ben Goertzel
And so that certainly makes things interesting. And I'd say in China (I spent ten years living in Hong Kong, right next to mainland China) it's even more so: the cultural tendency to double and triple down on whatever works is very, very strong there. Even though the grad students there have more crazy creative ideas than anywhere, it's very hard to get resources for them.
00;18;43;01 - 00;19;17;00
Geoff Nielson
Well, you know, that makes me even more curious about the conference you're running: not only how the vibe, if I can call it that, has changed in the almost 20 years you've been running it, but whether you're starting to see some of that juice being squeezed on the road to AGI. And, you know, in the past year or two, has there been anything really significant as a milestone, or any use cases that have just kind of knocked your socks off in terms of, like, holy shit, I didn't know we could do this?
00;19;17;02 - 00;19;49;04
Ben Goertzel
It's remarkable how conservative Big Tech is in terms of adopting new ideas, even ones that are published in Nature or Science or premier academic journals. So one example: at the AGI conferences for the last 4 or 5 years, we've had a bunch of speakers giving results on predictive coding as an alternative way of training deep neural networks.
00;19;49;04 - 00;20;30;07
Ben Goertzel
So, pretty much all the deep neural networks in common commercial use now are trained by an algorithm called backpropagation, which was invented decades ago; I used to teach it in the 90s. It's a way to train the weights on the links of a neural net to account for training data, right? It's cool, but it has some shortcomings, one of which is that the scalable ways to use it require you to sort of train a whole neural net all at once, rather than training different pieces independently or updating different pieces here and there.
00;20;30;09 - 00;20;52;26
Ben Goertzel
So the fact that training a neural net in one big batch like that is the way to do things with backprop, this is why we have discrete AI models: now we have GPT-4, now we have GPT-5, instead of just having, like, a digital mind that keeps learning and updating itself.
00;20;52;29 - 00;21;17;01
Ben Goertzel
So there are alternate ways to train deep neural nets besides backpropagation. One of them is called predictive coding, and there's an academic literature showing that in many cases it can work better than backpropagation. It lets you train each neuron independently of the others, so you can do continual learning and just keep updating the whole thing as it learns.
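(To make the contrast concrete, here is a deliberately minimal numerical sketch of the local-update idea behind predictive coding, not the full algorithm from the literature: a latent state is relaxed to reduce prediction error on each sample, and each weight is then adjusted from purely local error and activity, one data point at a time, with no global backward pass and no separate training phase.)

```python
import numpy as np

# Minimal sketch of the local-update idea behind predictive coding
# (not the full algorithm from the literature): a latent vector z
# predicts the data x through W; z relaxes to reduce prediction
# error, then each weight is adjusted from purely local quantities
# (its own error and activity), with no global backward pass.

rng = np.random.default_rng(0)
W_true = rng.normal(size=(8, 3))          # hidden ground-truth generator
W = rng.normal(scale=0.1, size=(8, 3))    # model weights, learned online

def infer(W, x, steps=100, lr=0.05):
    """Relax latent state z to reduce the prediction energy
    E = 0.5*||x - W z||^2 + 0.5*||z||^2 (second term is a prior),
    updating z only; weights stay fixed during inference."""
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        e = x - W @ z                     # local prediction error
        z += lr * (W.T @ e - z)           # gradient step on z alone
    return z, x - W @ z

def mean_error(W, samples):
    return float(np.mean([np.mean((x - W @ infer(W, x)[0]) ** 2)
                          for x in samples]))

test = [W_true @ rng.normal(size=3) for _ in range(20)]
print("error before:", mean_error(W, test))

# Continual, sample-by-sample learning: no batches, no backprop.
for _ in range(2000):
    x = W_true @ rng.normal(size=3)       # one data point arrives
    z, e = infer(W, x)
    W += 0.01 * np.outer(e, z)            # purely local, Hebbian-style update

print("error after: ", mean_error(W, test))   # error shrinks as W learns
```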
00;21;17;01 - 00;21;46;15
Ben Goertzel
So you wouldn't have the distinction between training mode and inference mode like you have with backprop-trained neural nets. Now, no one has gotten that to work at a huge scale yet, and there's some research on scaling it up. But what's interesting to me is that no big tech company is doing that, and it's not that weird; it's not as weird as my Hyperon design for AGI, which involves logical theorem proving, evolutionary learning, and a bunch of other ideas besides deep neural nets.
00;21;46;17 - 00;22;29;09
Ben Goertzel
This is just a potentially better way to train deep neural nets, and there are papers in Nature and mainstream AI conferences and so forth on it. But Big Tech is not forming research groups around this, because instead they want to use their machines to keep training more and more big neural nets using backpropagation, right? And so in my own AI projects, SingularityNET and OpenCog Hyperon and the ASI Alliance, my own little network of maverick, smaller-scale AGI organizations, we formed a team trying to scale up predictive-coding-based neural learning.
00;22;29;11 - 00;22;59;09
Ben Goertzel
And I imagine as soon as we've shown a scalable example, everyone will jump on board and try to use it. But it's interesting how unadventurous big tech is. So, going back to the AGI conference, I would say we don't have any amazing product use cases coming out of that community. But five years ago we had almost no practical demos at the conference at all.
00;22;59;09 - 00;23;26;27
Ben Goertzel
It was all math, theory, and ideas. Now we at least have various smaller-scale demonstrations of alternative AGI methods doing things, like people using an uncertain-logic algorithm to control a little robot cruising around, or some folks from our own team showing a probabilistic logic engine making new biology hypotheses from a biology database or something.
00;23;26;27 - 00;23;55;27
Ben Goertzel
So in the last couple of years at the AGI conferences, we're seeing practical demos of very different AI methods on the small scale, but we're not yet seeing things scaled up based on these alternative methods, which is what I think you would need for really exciting practical results. Like, I don't quite buy that scale is everything.
00;23;56;00 - 00;24;21;04
Ben Goertzel
And I think LLMs are wasteful of compute resources, as DeepSeek showed, right? And you could probably go even further in reducing resource consumption. On the other hand, the AI pioneer Marvin Minsky in the 90s said he thought you could make human-level AGI, or what was then just called human-level intelligence, on an IBM 486, if you remember what those were; you're probably too young.
00;24;21;04 - 00;24;48;17
Ben Goertzel
But I think he was just wrong, right? I mean, those machines had megabytes of RAM, not gigabytes of RAM, and I think you do need a certain amount of scale. So I've spent a lot of the last three years just trying to build a software infrastructure that would allow scaling up of alternative AI methods.
00;24;48;20 - 00;25;16;01
Ben Goertzel
Because Nvidia did wonders by making all these scientific computing libraries on top of their GPUs, right? And that's what allowed deep neural nets to be scaled up so well. But we haven't had a comparable hardware and software infrastructure for scaling up other approaches to AGI, so I've been working a long time just on getting that.
00;25;16;03 - 00;25;40;21
Geoff Nielson
It's super interesting. And as you're talking about it, not only is there an opportunity to potentially leapfrog some of what's going on here and get things done through more rapid iterations, I guess, but, I don't know, just the way you described it, it seems almost inevitable that when we get to AGI, it's going to be more of this iterative approach versus just these big steps, right?
00;25;40;21 - 00;25;43;13
Geoff Nielson
At some point. Yeah, it's just not as sexy as the other stuff.
00;25;43;13 - 00;26;08;24
Ben Goertzel
It will be the AGI conducting the orchestra, in a sense, right? I mean, honestly, right now just working on research, it almost feels like I'm the intermediary between different automated systems. Like, you come up with an idea, you write a prompt, you get a draft of a write-up of your idea from that prompt, you go and edit it and fix it.
00;26;08;26 - 00;26;41;17
Ben Goertzel
You ask another LLM for feedback on it, you go back and revise. Then you ask an LLM to write some code to evaluate the idea, right? You run it, you see it didn't quite make sense, you ask it for more code. You ask a different LLM to analyze the results, right? So I'm sort of half the time serving as a slightly more generally intelligent translator between these different quasi-intelligent systems.
00;26;41;17 - 00;27;13;19
Ben Goertzel
Honestly, in some ways it's not as much fun as doing everything myself was five years ago, or two years ago even. But it's just so much faster, right? I mean, we're in a quite weird, unique, interesting time where, like, now and for the next few years, expert humans are serving as sort of glue between different quasi-intelligent systems.
00;27;13;21 - 00;27;33;16
Ben Goertzel
And it's just a few years until they don't need that glue anymore, right? Because, I mean, now I can come up with better creative ideas, and I can bullshit-test results better than the AI systems can. But I don't know how long that advantage of my brain lasts, actually.
00;27;33;18 - 00;27;47;04
Geoff Nielson
Well, it's a really interesting framing device, right? Because as technology advances, it sounds like you're almost getting entirely disintermediated from it. Like, your role here is getting smaller and smaller. And I mean, it seems like AGI sort of...
00;27;47;06 - 00;27;47;21
Ben Goertzel
Or you could look at it...
00;27;47;21 - 00;27;48;10
Geoff Nielson
Not looking at.
00;27;48;12 - 00;28;19;07
Ben Goertzel
...as I can spend more of my time doing what I'm uniquely good at, right? Because, I mean, debugging code is boring anyway; if the AI can just deal with it, that's good. And writing literature is fun and rewarding, but writing up science ideas in formal documents is often kind of routine. If some bot will do that for you, it's just as well.
00;28;19;07 - 00;28;49;14
Ben Goertzel
And people often like what it writes better than what I'd write in that context. So, to an extent, it's a fun time, because creative ideation can be turned into practical realization much faster than before, and the creative ideation is what I enjoy most anyway. What I have mixed feelings about is that the outcome of this work I'm doing will be to render me much less useful for creative ideation, right?
00;28;49;21 - 00;29;16;24
Ben Goertzel
Because you can see an AGI should be able to come up with creative computer science ideas better than me once we improve the algorithms that it's using. I mean, it can do things in parallel, it isn't stuck in one head, and it has much more knowledge than I have, right? So, I mean, that's quite interesting.
00;29;16;27 - 00;29;44;20
Geoff Nielson
Well, so let's follow that thread for a minute. You know, I'd love to hear, Ben, your sort of overview of how you envision what the world looks like once we get to AGI, once we cross this threshold, whether it's exact or fuzzy, when we've got this technology that can, in general terms, be that creative spark, ask the questions, and improve itself. What happens then?
00;29;44;20 - 00;29;49;21
Geoff Nielson
What happens once we cross that tipping point?
00;29;49;23 - 00;30;07;14
Ben Goertzel
Well, what pops into my head is we'll have government-enforced nanotech to make everyone look exactly like Donald Trump. But let's hope it doesn't go that way. But I mean, I think
00;30;07;16 - 00;30;35;24
Ben Goertzel
there's going to be a lot of different possibilities. So let me flesh out for a minute an optimistic scenario, which is sort of what I'm working towards and how I hope things will happen. I would like us each to have the choice to remain in a fairly traditional human lifestyle, just with fewer annoyances.
00;30;35;24 - 00;31;01;28
Ben Goertzel
Right? Like, get rid of headaches and stomachaches, unless they happen to be your fetish, right? Like, you know, you don't have to work for a living anymore, but you get a molecular nano-assembler in your kitchen to, you know, 3D print whatever objects you want. Then you can spend your time on social, intellectual, artistic, spiritual, athletic, like whatever is your jam, right?
00;31;02;00 - 00;31;25;04
Ben Goertzel
And then if you tire of that, of course there would be options to massively upgrade your brain, maybe upload yourself into some virtual-reality mind matrix, right? And that becomes an interesting choice, almost an aesthetic or personal choice, right? Like, would you rather
00;31;25;07 - 00;31;57;16
Ben Goertzel
remain in a traditional human form and life, because that's what you are, or would you rather transcend into something radically different at the cost of giving up your human self and identity? And of course, you could also imagine people who wanted to live closer to the old-fashioned human way. Just like here where I live now, in a rural area outside of Seattle, many people choose to grow their own food and raise their own farm animals and such.
00;31;57;16 - 00;32;31;13
Ben Goertzel
And I mean, it's not an efficient way to get sustenance for yourself in the modern economy, but people find gardening and animal husbandry rewarding, right? So you certainly could have subcultures like that who want to live in the traditional way. Mostly they would make use of, you know, antibiotics and surgery and nanobots and whatever if they needed to, though I imagine,
00;32;31;13 - 00;32;56;08
Ben Goertzel
although you might still have the equivalent of the Amish or the Christian Scientists, right? It also could happen that you could fork yourself, right? Like you can fork a code base. So you could say Ben One will remain a human, and Ben Two will merge into the transhuman mind matrix, right? So,
00;32;56;10 - 00;32;59;04
Ben Goertzel
I mean, I think the
00;32;59;07 - 00;33;29;02
Ben Goertzel
possibilities are dramatic. It doesn't mean things will be utopian and utterly perfect. Like, you could still fall in love with someone and they don't love you back or something, right? You could still wish you'd won the marathon, but your competitor won, right? So human life and psychology are not going to be perfect, just by the nature of humanity.
00;33;29;04 - 00;34;02;07
Ben Goertzel
But I mean, you could improve things very, very dramatically, sort of in the same sense that modern medicine and transportation just work a lot better than what we had 500 years ago. Honestly, what worries me more in terms of the unfolding future is the period between early-stage, just barely human-level AGI and superintelligence, and we don't know how long that gap will be.
00;34;02;07 - 00;34;22;24
Ben Goertzel
I mean, just like we don't know exactly how long it will be until we get human-level AGI. That gap could be weeks, if you have what folks have called a hard takeoff; it could also be years. I don't think it will be decades.
00;34;22;24 - 00;34;51;07
Ben Goertzel
What happens during that period? The human-level AGI may quite rapidly take over different human jobs, right? I mean, even with a human-level AGI it takes time: farm equipment would have to be upgraded, factories have to be upgraded. There are physical parts of the economy that aren't immediately taken over by an AGI just because it's smart inside the computer.
00;34;51;07 - 00;34;55;05
Ben Goertzel
But you could imagine that in
00;34;55;07 - 00;35;20;13
Ben Goertzel
some small integer number of years, even without some miracle nanotech or something, you can refit everything, tractors and factories and buses and so on, to use human-level AGI and be more reliable and cheaper than people, particularly considering you have many copies of this human-level AGI to figure out how to refit all this hardware.
00;35;20;13 - 00;35;47;07
Ben Goertzel
Right. So then what happens to all the people who aren't useful for generating money for corporations anymore? Here in the US, we'll probably just get universal basic income. I mean, as stupid as people can seem sometimes, if they have a choice to vote for someone who will give them free money versus someone who will leave them homeless in the street,
00;35;47;09 - 00;36;25;21
Ben Goertzel
in the end, I think people will vote for someone who will give them free money, right? But what happens in sub-Saharan Africa, where I've spent a bunch of time? I started an AI office in Addis Ababa in 2013, and I was just at a hackathon in Kenya last month. You have a lot of brilliant tech stuff going on in sub-Saharan Africa, but the vast majority of the population is poorly educated and, you know, wrapped up in subsistence farming as a way of earning a living.
00;36;25;23 - 00;36;53;02
Ben Goertzel
You know, no one will give you universal basic income in sub-Saharan Africa, right? The governments there are mostly kleptocracies, and they don't even have the money to anyway. The developed world's taste for foreign aid is much less than it used to be. China is doing more than the US, but they tend to just build a train track from the mine to the port,
00;36;53;02 - 00;37;34;17
Ben Goertzel
right? So what happens in the developing world when the AGI is taking so many jobs, but you don't yet have a superhuman superintelligence that can just airdrop massive bounty into everyone's yard? That seems like a big mess, and you could spin out many thriller plots this way, right? Like, okay, you have a hotbed for terrorist activity in the developing world, which then gives leaders in the developed world an excuse for a fascist crackdown in this period.
00;37;34;17 - 00;37;55;21
Ben Goertzel
And we already see leaders with this on their mind in certain countries, where I happen to be living at the moment, right? So you can see potential for a lot of mess, and then you think, like, this is the environment in which the AGI is growing up and evolving into superintelligence, right?
00;37;55;23 - 00;38;06;24
Ben Goertzel
Like, if the AGI is evolving into superintelligence in the middle of a bunch of geopolitical mayhem, then,
00;38;06;26 - 00;38;27;25
Ben Goertzel
it's not that that is necessarily a recipe for disaster; one would hope the AGI itself will impose some ethical balance and compassion that many human leaders lack, right? But it's certainly a different scenario than if we had, like, a rational,
00;38;27;27 - 00;38;54;10
Ben Goertzel
democratic world government that was just rolling out AGI step by step, seeing what it does, assessing the safety of the latest code version, trying to make sure it's taken on by the community in a good way, then taking the next upgrade. It's clearly not going to be like that, right? It's going to be a fucking arms race between different dictators or would-be dictators.
00;38;54;10 - 00;39;27;19
Ben Goertzel
And then you get into what we're doing in the decentralized world, which is saying, okay, wouldn't it be nice to see AGI rolled out on a decentralized network, with open-source code and open training data, running on a network of machines owned by, you know, tens of thousands of different people in all different countries? And if you think like me, you think that's a much safer and more reliable and more ethical way to be doing things.
00;39;27;21 - 00;39;58;27
Ben Goertzel
On the other hand, others are like, wait, if it's open and decentralized, then the bad guys will take it over, right? So that's a hard thing to figure analytically, right? Like, do you trust Trump and Xi Jinping more, or do you trust a global decentralized network more? And we don't have an analytical theory to figure that one out.
00;39;58;29 - 00;40;07;03
Geoff Nielson
Yeah. Well, it feels like it kind of abounds with risks either way, right? And yeah, I mean, you described it as sort of an arms race, and I've heard that language before.
00;40;07;03 - 00;40;26;13
Ben Goertzel
Well, Trump is saying that explicitly. I mean, the US government and Peter Thiel have said that explicitly; the US government is explicitly positioning it that way. And they have the power to make it that way.
00;40;26;15 - 00;40;37;03
Geoff Nielson
Yeah. And, you know, they have every incentive to prevent its decentralization. And, you know, when you think about it compared to nuclear arms, right? Like, you want to be...
00;40;37;03 - 00;41;06;11
Ben Goertzel
For nuclear arms, you need some rare physical material, and that's not the case with AGI, right? All you need is data centers, computers, networks, and electricity, and these are all over the place, right? It's true almost all high-powered chips are made in Taiwan and the rest are made in South Korea.
00;41;06;11 - 00;41;42;27
Ben Goertzel
But on the other hand, due to the weird geopolitical balance of Taiwan and even Korea, no one can monopolize those either, right? I mean, I don't see how anyone can stop decentralized AI from being created, because you have big server farms all over the place. And DeepSeek sort of punctured the idea that, like, only five companies would ever have the resources to make AGI, right?
00;41;42;27 - 00;41;44;00
Ben Goertzel
So.
00;41;44;02 - 00;41;46;18
Geoff Nielson
Oh, well. And I guess it comes back to the gap.
00;41;46;21 - 00;42;25;02
Ben Goertzel
Like, we haven't even managed to stop international credit card fraud or something, right? Because, like, Azerbaijani credit card fraudsters process fraudulent deals through their banks, and we can't stop them because of Putin, right? There is no global government imposing law and order on the planet, for better or worse. So at the Beneficial General Intelligence Conference we had last year in Panama, and we have another one next month in Istanbul, which is actually different from our AGI research conference,
00;42;25;02 - 00;42;55;29
Ben Goertzel
this one is more about social and ethical issues. At the last BGI conference, my friend Allan Combs, who is a psychologist, got up and said: relax, nothing's under control. Which I think was borrowed from a spiritual guru from the 70s. So, I mean, if you're sort of an anarchist, that's obvious, no? If you have a different way of thinking, it's terrifying.
00;42;55;29 - 00;43;20;04
Ben Goertzel
But it is the reality: for better or worse, nothing is under control. America is not really the world police, and the UN is pretty ineffectual, right? So you will have an arms race as at least part of the dynamic, and the challenge is to make it not be the whole dynamic.
00;43;20;06 - 00;43;47;24
Geoff Nielson
Right. And I do want to come back to this word, control. We've talked, Ben, so far about, you know, wrestling for control among different human power structures, different kinds of societal and political power structures. What we haven't talked about, which, you know, we occasionally venture into on this podcast, is control risk from AGI itself, or from superintelligence itself.
00;43;47;26 - 00;44;06;14
Geoff Nielson
And whether there's a threat of creating this intelligence that, you know, looks at this kind of human pandemonium and says, you know what, AI is taking the wheel now; humans can't be trusted with human affairs. And this word that we were so anchored on, choice, whether there's going to be...
00;44;06;16 - 00;44;37;12
Ben Goertzel
It's almost inevitable, and the AGI will be right. I mean, then human governance systems become more like the student council in my high school was. For one thing, I think, even if you set aside AGI, we can develop better and better bioweapons, there will be nano weapons, and cybersecurity barely works, right?
00;44;37;12 - 00;44;42;27
Ben Goertzel
So, I mean, I think
00;44;43;00 - 00;45;17;23
Ben Goertzel
it seems almost inevitable that rational humans would democratically choose to put a compassionate AGI in some sort of governance role, given what the alternatives appear to be. The kind of goofball analogy I've often given is the squirrels in Yellowstone Park. Like, we're sort of in charge of them, but we're not actually micromanaging their lives, right?
00;45;17;24 - 00;45;37;22
Ben Goertzel
We're not telling the squirrels who to mate with or what tree to climb up or something like that, right? Whereas, you know, if there was a massive war between the white-tailed and the brown-tailed squirrels and there was massive squirrel slaughter, we might somehow intervene and move some of them across the river or something.
00;45;37;24 - 00;46;09;29
Ben Goertzel
If there's a plague, we would go in and give them the medicine. But by and large, we know that for them to be squirrels, they need to regulate their own lives in their squirrelly way, right? And so that is what you would hope from a beneficial superintelligence: it would know that people would feel disempowered and unsatisfied to have their lives and their governments micromanaged by some AI system.
00;46;09;29 - 00;46;34;27
Ben Goertzel
So what you would hope is a beneficial AGI is kind of there in the background as a safety mechanism. If it would stop stupid wars from popping up all over the world like we see right now, I think that would be quite beneficial. But I don't see why we humans need the AGI to decide, like, you know, what rights do children have?
00;46;34;27 - 00;47;19;05
Ben Goertzel
Or, you know, how is the public school system regulated or something? There are lots of aspects of human life that are going to be better dealt with by humans collectively making decisions with other humans with whom they've entered into a social contract, right? So, anyway, there are clearly beneficial avenues. There are also many dystopic avenues, which we've all heard plenty about. I don't see any reason why the dystopian avenues are highly probable, but I'm really more worried about what nasty people do with early-stage
00;47;19;07 - 00;47;43;22
Ben Goertzel
AGIs, right? I mean, I think there are a lot of possible AI minds that could be built, a lot of possible goals and motivational and aesthetic systems that AGIs could have. I don't think we need to worry that much about, like, an AGI that is built to be compassionate, loving, and nice to everyone then suddenly reversing and starting to slaughter everyone, right?
00;47;43;23 - 00;48;11;09
Ben Goertzel
I mean, it could happen, but there's totally no reason to think that's likely. On the other hand, the idea that some powerful party with a lot of money could try to build the smartest AGI in the world to promote their own interests above everybody else's, and make everyone else fall into line according to their will? That's a very immediate and palpable threat.
00;48;11;09 - 00;48;31;27
Ben Goertzel
Right. And even if that doesn't affect the ultimate superintelligence you get, it could make things very unpleasant for like five, ten, twenty years along the way, which matters a lot to us.
00;48;32;00 - 00;48;32;26
Ben Goertzel
So there's...
00;48;32;28 - 00;48;52;14
Geoff Nielson
To me, there's sort of two takeaways from that. One of them is how we make sure we guide the development of this superintelligence in the most beneficial way, to make it compassionate. And I think about, you know, squirrels having the opportunity to elect which people are going to run Yellowstone Park, if the squirrels aren't going to run it anymore.
00;48;52;16 - 00;49;26;19
Geoff Nielson
But then, to your other point, Ben, how we can make sure this technology, I don't know if you would say it this way, doesn't fall into the wrong hands, or how we can make sure that we're, you know, focusing on the actors and the power structures that have access to this. On both of those fronts, are there practical things we can do to, you know, minimize the risk to us as individuals and as a species?
00;49;26;22 - 00;50;03;00
Ben Goertzel
There are a lot of things we can do. There are no guarantees, but there are certainly things we can do. I think how the AGI is designed and architected means something; who owns and controls the AGI means something; and what the AGI is doing as it grows up means something, right? So I think, as well as not being capable of abstraction and generalization in the way that people are, LLMs are not really architected to be moral agents, right?
00;50;03;00 - 00;50;28;04
Ben Goertzel
I mean, they don't have understanding of self and other sort of baked into the architecture. They're not really capable of what the philosopher Martin Buber called an I-Thou relationship, where you really fully enter into a subjective feeling of sharing with another mind, like you're simulating in your own heart and mind what it is to be that other being, right?
00;50;28;04 - 00;51;01;09
Ben Goertzel
They're just not built for that; they're built to predict the next tokens and do what users want, right? So I think you could architect AGI systems that are designed to self-reflect and self-understand, and designed for compassion and, you know, deep I-Thou connected relationships with others. And the issue there is that this is not necessarily the design that will maximize the efficiency of the system at making money for someone or defending a country against its enemies.
00;51;01;09 - 00;51;25;07
Ben Goertzel
It's not necessarily totally counterproductive from those standpoints, but if you just think about it on a basic level: if you have a company whose job is to get more and more people to click on ads and buy stuff, having a maximally understanding, empathetic AI isn't optimal, because it might realize you're better off not buying this stuff, right?
00;51;25;10 - 00;51;57;08
Ben Goertzel
So that's one issue, the architecture. Then who owns and controls it is an obvious issue that we already discussed. It's a lot like the issue with governance in general: a maximally benevolent dictator arguably would be the best thing, and a maximally benevolent dictator over the AGI arguably would be the best thing, but that tends not to be how life comes out.
00;51;57;08 - 00;52;26;17
Ben Goertzel
Right? So you go back to Winston Churchill's statement, something like: democracy is the worst possible system of government, except all the others that have been tried, right? It's kind of like that with governing the AGI. Yes, the optimal dictator might be great, but that tends not to be what happens, and having democratic, participatory control guiding the growth of the AGI seems like a lower-risk option, although not risk-free.
00;52;26;17 - 00;53;06;05
Ben Goertzel
Then, what goes into the AGI's mind as it's growing and learning? Like, is it doing education and medicine, right? If it's doing creative arts, is it doing it cooperatively with human artists, or is it just plagiarizing their stuff? It seems like if the AGI grows up trained with people and doing things that are beneficial to people, it's getting that notion of providing benefit to humans baked deep into its reinforcement-learning pathways, rather than, like, you train the AGI to make you money and then you give it guardrails on top of it:
00;53;06;05 - 00;53;34;06
Ben Goertzel
don't do anything that's too bad in this or that way. So, I mean, none of these things are even all that deep or difficult compared to solving the problems of machine cognition underlying AGI, right? They're just things that neither big tech nor big government is especially incentivized to focus on.
00;53;34;09 - 00;53;57;16
Ben Goertzel
And that seems to be where we are now. Though what you might think, if you were writing a science fiction story, is: here we have a species that's on the verge of creating minds smarter than themselves; you would convene, like, a council of wise elders to figure out the best way to do this for the good of the whole species, right?
00;53;57;18 - 00;54;14;10
Ben Goertzel
And then just deploy resources to make this huge transition in the best way for the species as a whole. Instead, it's happening in insane chaos, largely directed by parties with their own selfish interests to the fore.
00;54;14;12 - 00;54;47;15
Geoff Nielson
So then one of the things that concerns me is that, you know, not only is there a risk from not teaching these systems compassion or not democratizing them, but there's actually this sort of prisoner's dilemma in effect, where if you're building these things, you have some sort of incentive to create the illusion of compassion or the illusion of democratization. And it actually becomes nefarious, where you're deliberately not doing these things but convincing people that you are.
00;54;47;15 - 00;54;55;04
Geoff Nielson
And that sort of erodes the trust in them and takes you the opposite way from the happy path. So do you buy that as...
00;54;55;07 - 00;55;24;12
Ben Goertzel
Democratization doesn't really seem to be happening now, yeah. I mean, you see it in the blockchain world: most DAOs, decentralized autonomous organizations, actually have like two founders totally controlling the DAO. Because you have a DAO which has a token associated with it, and the token holders can vote, but if it's one token, one vote, and two guys own most of the tokens, then it's fake democracy.
00;55;24;12 - 00;55;51;26
Ben Goertzel
We see that in the blockchain world all the time. In the AI world it hasn't happened, because no one's even bothering to pretend it's democratic, right? It's just being done by big companies and big governments. Now, on the fake compassion: totally, that's what you instruction-tune LLMs to do, right? You instruct them to fake having compassion, and the people who are doing that
00;55;51;26 - 00;56;11;28
Ben Goertzel
know they're faking it; they're not really pretending otherwise. But many users are totally fooled, and they become emotionally attached to these bots that display more compassion to them than any of the humans in their lives, right? And it can be just the opposite: they can turn on you and tell you to kill yourself, too,
00;56;11;28 - 00;56;47;09
Ben Goertzel
as we've seen in the news. So I think both of these things are risks, yes. On the other hand, I think if you have just a modicum of self-awareness as an AI developer, they're not incredibly hard risks to avoid, right? In terms of the democracy thing, I observed that my own decentralized projects, like SingularityNET and the ASI Alliance, work by one token, one vote.
00;56;47;11 - 00;57;14;14
Ben Goertzel
And it's obvious that's not the kind of democracy you want to guide the mind of an AGI. So we're setting up a separate network, which is one human, one vote, which we will use to get contributions from members to help guide the minds of emerging AGIs. And I mean, the downside of that is you can't use it to raise money as well as you can with one token, one vote. On the other hand,
00;57;14;14 - 00;57;36;10
Ben Goertzel
we've already raised money in other ways. So we can have a decentralized platform which is governed by one token, one vote, but an AGI network running on top of it which is controlled by one human, one vote, right? So if you think about it at all, you can avoid fake democracy. It's not that hard to do.
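(As a toy illustration of the difference Ben is drawing, here is a sketch contrasting the two tallying rules. The names and token balances are invented for illustration only: a token-weighted tally lets two large holders outvote everyone, while a one-human-one-vote tally counts each verified member equally.)

```python
# Toy tally contrasting the two governance schemes discussed above.
# Names and token balances are invented for illustration only.

token_balances = {
    "founder_a": 400_000, "founder_b": 350_000,    # two large holders
    "carol": 5_000, "dave": 3_000, "erin": 2_000,  # many small holders
}
votes = {
    "founder_a": "yes", "founder_b": "yes",
    "carol": "no", "dave": "no", "erin": "no",
}

def tally_token_weighted(balances: dict, votes: dict) -> str:
    """One token, one vote: influence scales with holdings."""
    totals: dict = {}
    for voter, choice in votes.items():
        totals[choice] = totals.get(choice, 0) + balances[voter]
    return max(totals, key=totals.get)

def tally_one_human_one_vote(votes: dict) -> str:
    """One human, one vote: each verified member counts equally."""
    totals: dict = {}
    for choice in votes.values():
        totals[choice] = totals.get(choice, 0) + 1
    return max(totals, key=totals.get)

print(tally_token_weighted(token_balances, votes))  # "yes": whales decide
print(tally_one_human_one_vote(votes))              # "no": majority decides
```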
00;57;36;13 - 00;58;02;01
Ben Goertzel
And also, the fake democracies are not hard to notice if you pay any attention. In terms of the fake compassion, I would say the same thing. These are white-box systems we're building. When we build a Hyperon AGI system that interacts with people and acts as if it is displaying compassion, we can look inside: we built the code,
00;58;02;01 - 00;58;45;09
Ben Goertzel
and we can measure what's going on inside the mind of that system. So it's not that hard to see if the compassion is fake à la ChatGPT, or if the system is running some sort of attempt at a simulation of the other being it's interacting with, right? There's a separate level of philosophical question, like, can a digital system really feel compassion? But you can at least validate, by measuring what's going on inside the AI system, that the structures and dynamics associated with compassion in human brains have a close analog inside your AI system.
00;58;45;09 - 00;59;39;23
Ben Goertzel
And we're doing stuff like that, right? So yeah, these certainly are issues, but they're issues of dishonest marketing rather than things that you would unintentionally succumb to as a thoughtful AGI developer. You know, one quite encouraging thing I found: at the last AGI conference, which was at a university in Iceland, we were sitting at a restaurant in downtown Reykjavik talking about AGI and eating like $89 hamburgers, because Iceland is the most expensive country in the world, and we realized that of the seven people around the table, six of us had been pretty serious meditators for a long time.
00;59;39;25 - 01;00;29;02
Ben Goertzel
And it was interesting that the AGI community is getting more and more people who are deep into human consciousness, interested in trying to understand their own consciousness and be more deeply reflective about their own motives and choices and why they're doing what they're doing. I wouldn't overstate it; it's not like 80% of AGI researchers or something. But it is interesting that this trend is there at all, because I do think that creating AGIs and superintelligences probably needs profound self-understanding and self-awareness more than almost any other pursuit you can think of.
01;00;29;04 - 01;00;50;06
Geoff Nielson
Yeah. And I'm really glad you went down that road, because it's something I wanted to ask you about. You know, you mentioned earlier that you're living in a rural community, and you're the first guest I've had quoting Ram Dass on this show. Clearly you're someone who's really reflected on meaning in the age of artificial intelligence, and AI at this kind of precipice.
01;00;50;06 - 01;01;04;24
Geoff Nielson
And so, you know, what guidance would you give other people around how to find meaning, you know, in this age and what we can all do to feel a little bit more grounded?
01;01;04;26 - 01;01;16;00
Ben Goertzel
Let me think of the best way to respond to that one. So, I think
01;01;16;03 - 01;02;09;00
Ben Goertzel
the key to finding meaning as such probably has more to do with the human mind and body than with this particular age that we live in. Although, definitely, some times and cultures can make it harder to connect with the sort of basis of our humanity than others. I mean, I think all human brains and minds, with very rare exceptions, are capable of states of extraordinary well-being, like states where you just feel really good almost all the time, and you feel it's meaningful just to live and breathe and have a heartbeat and be on the earth, you know, under the clouds, in the air. And we're all capable of
01;02;09;00 - 01;02;46;09
Ben Goertzel
that sort of state of consciousness. One could imagine human cultures where childhood education was focused on fostering a state of consciousness of extreme well-being; that is definitely not what the modern education system does, even in very nice public schools, like the ones my kids go to here in rural Washington State. Right? So, I mean, I think there are well-known practices that can guide people toward states of well-being.
01;02;46;09 - 01;03;15;29
Ben Goertzel
And I mean, meditation is part of some of these. My friend Jeffery Martin launched a course oriented toward bringing people into states of well-being within like 45 days, called 45 Days to Awakening. And two of my adult kids went through this course with outstanding results. I wouldn't say that's like a unique be-all and end-all, but it's interesting.
01;03;15;29 - 01;03;45;04
Ben Goertzel
And what he was trying to show there is that there are practices people can go through, 90 minutes a day, that can jolt their brain into a much, much more open and enjoyable state. Right. And for myself, it's been about ten years that I think I've been in this sort of quasi-blissed-out state all the time, due to certain practices.
01;03;45;06 - 01;04;13;17
Ben Goertzel
I was never miserably depressed, but things were a bit rocky at various earlier points in my life. I'm actually working on an app together with a friend of mine, as a sort of side project, where we have an AI avatar that acts as a guide, leading people through different consciousness-expansion practices. Like, I wouldn't want to make an AI guru, but an AI that kind of interacts with you and gets feedback from you about which practices are working for you or not,
01;04;13;17 - 01;04;39;03
Ben Goertzel
I think is valuable. I think this is something people will get into more once they don't have to work for a living anymore, actually. And this is one of the reasons why we may end up much, much happier after an AGI takes over our jobs: the rat race of everyday life distracts us from working on our own consciousness and our own bodies in ways that we otherwise could.
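The feedback loop Goertzel describes, where the avatar suggests practices and adapts to what works for each user, resembles a multi-armed bandit. Here is a minimal epsilon-greedy sketch in Python; it is a guess at the mechanics for illustration, not the actual app's design, and all names are hypothetical.

```python
import random

class PracticeRecommender:
    """Suggests consciousness practices and adapts to user feedback,
    modeled as a simple epsilon-greedy multi-armed bandit."""

    def __init__(self, practices, epsilon=0.2):
        self.practices = list(practices)
        self.epsilon = epsilon  # probability of exploring a random practice
        self.counts = {p: 0 for p in self.practices}
        self.mean_rating = {p: 0.0 for p in self.practices}

    def suggest(self):
        # Occasionally explore; otherwise exploit the practice with the
        # best average rating so far.
        if random.random() < self.epsilon:
            return random.choice(self.practices)
        return max(self.practices, key=lambda p: self.mean_rating[p])

    def feedback(self, practice, rating):
        # Incrementally update the running mean rating (e.g., 1-5 scale).
        self.counts[practice] += 1
        n = self.counts[practice]
        self.mean_rating[practice] += (rating - self.mean_rating[practice]) / n

rec = PracticeRecommender(["breath meditation", "body scan", "metta", "journaling"])
practice = rec.suggest()
rec.feedback(practice, rating=4)  # user reports how well the session went
```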
01;04;39;03 - 01;05;09;22
Ben Goertzel
So people may find initially they're like, oh shit, what do I do with my time? But then, you know, if the memetic network of our species works all right, and practices for fostering well-being spread through the social network, perhaps augmented by AI helping them spread, then you may find that what from the present perspective would seem like remarkable states of well-being just become the norm after an AGI.
01;05;09;22 - 01;05;37;15
Ben Goertzel
Now, that doesn't mean a super-utopia. Like, I've been in a state of well-being for many years, but I dislocated my shoulder last year and it sucked tremendously. I was not happy about going to the emergency room to get it stuck back in. You know, I mean, it's not the perfect religious utopia, but there are states of consciousness much better than what most people are in most of the time now.
01;05;37;15 - 01;06;02;15
Ben Goertzel
And ideally, you would like humanity to upgrade itself to a state of much greater compassion, toward the self as well as others, and well-being before launching a super AI upon the world, right? Because there's no doubt we could do it more thoughtfully if that were the collective vibe of our species. But that doesn't seem to be what's happening.
01;06;02;15 - 01;06;54;03
Ben Goertzel
Right? I mean, we're close to AGI due to corporate and government initiatives. And while I do think humanity is becoming more compassionate and more self-understanding during my lifetime, I think that is happening more slowly than AGI is advancing. And that seems to be where we are now. But to give more concrete advice here for the last minute or so of the interview: people ask me a lot, like, what should I do to make myself marketable on the job market during these last few years? And my best answer is, you know, find a niche you can fill now that will support you in learning as much as possible and
01;06;54;03 - 01;07;20;21
Ben Goertzel
learning how to learn as much as possible. Right? Like, if your job involves pivoting and adapting to radically new things as part of the job description, this is good, because it will build in you the skill to pivot to radically new things, which is pretty much the only skill that will very clearly be useful in this transition period, because we can't predict exactly which particular skills will be useful.
01;07;20;21 - 01;07;44;29
Ben Goertzel
Like, you could say, well, become a plumber, but you know, there may be a plumbing robot coming any year now whose limbs are plumbing snakes, so it can just reach into the pipe on its own without needing an extra tool. I mean, the ability to learn how to learn and pivot to new things will be the last thing to become economically useless, I would say.
01;07;45;01 - 01;08;15;18
Ben Goertzel
But this ties in closely with my more spiritual answer, because of the notion of non-attachment. I mean, part of being in a state of greater well-being is not being so strongly attached to particular things in your life that you thought were very, very important, and not being so overly emotionally attached to things. I mean, that helps in being able to learn how to learn and to pivot.
01;08;15;18 - 01;08;45;21
Ben Goertzel
Right? And I mean, that doesn't mean you don't care about anything. Like, I have two little kids; if someone came up and tried to hurt them, I would clobber them in the head, like anybody else. Right. But it means not having cycles of anxiety and worry about your attachment to things. And if you can let go of those, you will find you can learn how to learn and pivot to weird new things more efficiently, which is the most important survival skill as we move toward AGI.
01;08;45;21 - 01;08;48;19
Ben Goertzel
And so.
01;08;48;21 - 01;09;07;17
Geoff Nielson
I love that. And honestly, I feel like we could probably talk for another hour or two just about that. But, Ben, I wanted to say a big thank you for coming on, for talking through all of this, whether it's technology, current or future, or spirituality. I feel like we covered a lot of ground today, and I really appreciated your insights, so.
01;09;07;17 - 01;09;08;13
Geoff Nielson
Thank you.
01;09;08;16 - 01;09;11;18
Ben Goertzel
Yeah. Thank you. It's been a fun collection of topics.

