Our Guests Mo Gawdat, Roman Yampolskiy, and Malcolm Gladwell Discuss
AI Boom or Bust? AI Boomers and Doomers Reveal Their Predictions for Our Future
Is artificial intelligence humanity’s greatest salvation, or the most dangerous force we’ve ever unleashed?
Artificial intelligence is no longer a future concept; it's a force already reshaping geopolitics, economics, warfare, and the human experience itself. In this year-in-review episode of Digital Disruption, we bring together the most provocative, conflicting, and urgent ideas from this past year to confront the biggest question of our time: What does AI actually mean for humanity's future?
Across more than 40 conversations with leading technologists, journalists, researchers, and futurists, one theme dominated every debate: AI. Some guests argue that artificial general intelligence (AGI) and superintelligence could trigger an extinction-level event. Others believe AI may usher in an era of total abundance, solving humanity’s hardest problems. And still others claim today’s AI hype is little more than marketing smoke and mirrors.
This episode puts those worldviews head-to-head.
00;00;00;04 - 00;00;03;06
approaching this singularity
keeps getting more and more intense.
00;00;03;09 - 00;00;06;02
have no idea how to control
superintelligence systems.
00;00;06;02 - 00;00;10;17
is it going to be existential and evil
or is it going to be good for humanity?
00;00;10;17 - 00;00;12;28
It could be the best thing
that ever happened to us.
00;00;12;28 - 00;00;15;10
the technology's
moving faster than any other sector,
00;00;15;10 - 00;00;19;08
The algorithm is creating a feedback loop
this is a good thing.
00;00;19;08 - 00;00;22;12
And the laser-eyed robots aren't gonna beat us into submission.
00;00;22;16 - 00;00;25;05
intelligence is not inherently good
or inherently bad.
00;00;25;05 - 00;00;27;15
You apply it for good
and you get total abundance.
00;00;27;15 - 00;00;31;02
You apply it for evil,
and you destroy all of us.
00;00;31;05 - 00;00;31;28
Hey, everyone.
00;00;31;28 - 00;00;34;23
We've got
something really special for you today.
00;00;34;23 - 00;00;38;12
When we started Digital Disruption,
we wanted to put a focus lens
00;00;38;12 - 00;00;41;00
on the technologies
shaping our shared future.
00;00;41;00 - 00;00;44;16
And over more than 40 episodes,
we've done that by talking
00;00;44;16 - 00;00;46;09
to an eclectic collection of the world's
00;00;46;09 - 00;00;50;19
foremost experts on technology,
leadership and social progress.
00;00;50;22 - 00;00;52;23
From predictions
about the next renaissance
00;00;52;23 - 00;00;56;21
of human enlightenment, to the sci-fi-esque advancements literally putting
00;00;56;21 - 00;00;59;25
computer chips
in people's brains to the digital horrors
00;00;59;25 - 00;01;02;27
lurking in the dark
and distant corners of our online world.
00;01;03;03 - 00;01;05;18
These guests brought forward
their best predictions
00;01;05;18 - 00;01;07;17
and what the next decade holds.
00;01;07;17 - 00;01;11;12
We covered a lot,
but there was one inescapable topic.
00;01;11;15 - 00;01;14;21
AI AI AI AI, everything is AI AI,
00;01;14;24 - 00;01;17;28
generative AI and transformative
AI generative AI, Generative
00;01;17;28 - 00;01;22;25
AI dominated the conversation,
but without any consensus.
00;01;22;28 - 00;01;26;22
AI could be our savior or our enslaver.
00;01;26;25 - 00;01;29;25
It could herald a golden era
of human advancement,
00;01;29;26 - 00;01;32;25
or the end of the human race.
00;01;32;25 - 00;01;35;16
Or it's all a technological sham.
00;01;35;16 - 00;01;40;10
Dressed up in fancy marketing terms,
lots of fluff and no substance.
00;01;40;13 - 00;01;43;03
If there was one thing
everyone could agree on during
00;01;43;03 - 00;01;46;14
our first season, it's
that nobody agreed on anything.
00;01;46;17 - 00;01;50;01
And so we thought
we'd put the most thought-provoking ideas
00;01;50;01 - 00;01;52;13
we heard this year head-to-head.
00;01;52;13 - 00;01;56;29
So you can decide for yourself
what you believe comes next.
00;01;57;02 - 00;02;01;12
Let's jump in.
00;02;01;15 - 00;02;02;23
one of the,
00;02;02;23 - 00;02;05;18
you know, predictions you've made lately
that's kind of made the rounds.
00;02;05;18 - 00;02;09;28
Is that your prediction
of an extinction level event for humans
00;02;09;28 - 00;02;13;04
created by AI in the next hundred years,
you're
00;02;13;04 - 00;02;16;16
putting at 99.99%.
00;02;16;17 - 00;02;17;03
Is that right?
00;02;17;03 - 00;02;20;17
Am I missing a couple of nines there?
00;02;20;20 - 00;02;21;19
Keep adding nines.
00;02;21;19 - 00;02;26;02
I keep meeting people with a different doom
for reasons independent of mine.
00;02;26;02 - 00;02;29;16
So every time this happens,
another nine has to be added logically.
00;02;29;16 - 00;02;32;20
But, it seems to be that
00;02;32;24 - 00;02;36;20
you have to follow the chain
of kind of assumptions to get to
00;02;36;20 - 00;02;40;22
that number. One is,
it looks like we're creating AGI.
00;02;40;22 - 00;02;43;22
And then quickly after superintelligence,
00;02;43;28 - 00;02;46;02
a lot of resources are going into it.
00;02;46;02 - 00;02;47;06
Prediction markets.
00;02;47;06 - 00;02;50;05
Top experts are saying
we're just a few years away.
00;02;50;05 - 00;02;51;23
Some say two years, five years.
00;02;51;23 - 00;02;53;21
But they all kind of agree on that.
00;02;53;21 - 00;02;56;17
At the same time,
according to my research.
00;02;56;17 - 00;02;58;04
And no one has contradicted that.
00;02;58;04 - 00;03;01;18
We have no idea how to control
superintelligence systems.
00;03;01;21 - 00;03;07;13
So given those two ingredients,
the conclusion is, pretty logical.
00;03;07;14 - 00;03;08;19
You're basically asking,
00;03;08;19 - 00;03;11;25
what is the chance
we can create a perpetual safety machine,
00;03;11;25 - 00;03;17;09
perpetual motion device, by analogy,
and the chances of that are close to zero.
00;03;17;09 - 00;03;21;05
if you study history,
one of the great things that you learn
00;03;21;08 - 00;03;23;29
is that the world gets better
all the time.
00;03;23;29 - 00;03;26;08
It is very hard
00;03;26;08 - 00;03;30;03
to read a bunch of history books
and arrive at any other conclusion.
00;03;30;03 - 00;03;32;23
than that today is the best day ever to be born.
00;03;32;26 - 00;03;33;26
And in fact,
00;03;33;26 - 00;03;36;26
the beauty of the human experience, and really
00;03;36;26 - 00;03;38;05
the reason that arguing whether
00;03;38;05 - 00;03;41;24
humans are good or bad is so easy
is because if humans were actually bad,
00;03;41;29 - 00;03;45;13
we never would have arrived here
after we crawled out of caves.
00;03;45;16 - 00;03;48;00
I think the fact that we have
all the things that we have,
00;03;48;00 - 00;03;51;25
the fact that the world is as safe
as it is for most people
00;03;51;28 - 00;03;54;19
and that it gets safer all the time,
is a testament to the fact that we are
00;03;54;19 - 00;03;58;28
just building a more aligned Earth
and a more aligned human experience.
00;03;58;28 - 00;04;02;28
And that's not to say we aren't fallible
and that we don't have lots of,
00;04;03;01 - 00;04;06;26
you know, problems, but it is to say that
the problems are diminishing.
00;04;06;26 - 00;04;08;05
And so I don't think
00;04;08;05 - 00;04;11;29
the onus is actually on me to prove
that the world is going to get better.
00;04;11;29 - 00;04;13;25
I actually think the onus is on
someone else to say,
00;04;13;25 - 00;04;15;13
this is the peak of civilization.
00;04;15;13 - 00;04;16;23
When you really think about it.
00;04;16;23 - 00;04;20;21
A lot of people,
when they look at technology,
00;04;20;24 - 00;04;23;18
they think of this current moment
as a singularity
00;04;23;18 - 00;04;27;19
where we are really not very certain of
what's about to happen.
00;04;27;24 - 00;04;31;10
AI, you know,
is it going to be existential
00;04;31;10 - 00;04;34;16
and evil
or is it going to be good for humanity?
00;04;34;19 - 00;04;38;07
I unfortunately believe it's going to be
both just in chronological order,
00;04;38;07 - 00;04;39;06
if you think about it.
00;04;39;06 - 00;04;46;17
And, you know, you mentioned that
we have all of those challenges around,
00;04;46;20 - 00;04;50;01
geopolitics about climate, about,
00;04;50;04 - 00;04;53;11
economics and so on.
00;04;53;14 - 00;04;55;24
And I actually think all of them are one problem.
00;04;55;24 - 00;04;59;15
It's just, it really is the result of the
00;04;59;18 - 00;05;04;15
systemic bias of pushing capitalism
all the way to where we are right now.
00;05;04;18 - 00;05;07;12
And, when you really think about it,
00;05;07;12 - 00;05;13;16
none of our challenges are caused by the,
you know,
00;05;13;19 - 00;05;18;27
the economic systems that we create
or the war
00;05;18;27 - 00;05;22;20
machines that we create, and similarly not by the AI that we create.
00;05;22;20 - 00;05;24;15
It's just that humanity,
00;05;24;15 - 00;05;28;28
I think, at this moment in
time, is choosing to, use those things
00;05;29;01 - 00;05;33;04
for the benefit of the few
at the expense of many.
00;05;33;07 - 00;05;35;09
I think this is where we stand today.
00;05;35;09 - 00;05;37;03
I think AI is an incredible technology.
00;05;37;03 - 00;05;41;07
Obviously, the internet, you know,
has changed society in profound ways.
00;05;41;07 - 00;05;46;01
But, you know, some of the over
promise almost feeds the other side's,
00;05;46;07 - 00;05;49;15
skepticism. And like, AI is not God.
00;05;49;15 - 00;05;54;13
It might help some scientists help cure cancer.
00;05;54;13 - 00;05;55;12
But, you know, AI is not,
00;05;55;12 - 00;06;00;16
it's not going to cure cancer, at least anytime soon.
00;06;00;19 - 00;06;03;17
You know, one big difference is the money.
00;06;03;17 - 00;06;05;27
So when I first started
writing about tech,
00;06;05;27 - 00;06;07;18
I was always interested
in the venture capitalists
00;06;07;18 - 00;06;11;08
and the startups and that whole ecosystem,
like this idea, like,
00;06;11;11 - 00;06;14;16
you know, our idea for a company
is either going to work,
00;06;14;16 - 00;06;17;29
be worth back then, tens of millions,
hundreds of millions.
00;06;17;29 - 00;06;21;03
Now, billions, if not trillions,
or it's going to be worth nothing.
00;06;21;07 - 00;06;23;18
And the venture capitalists
who are staking, you know, back
00;06;23;18 - 00;06;26;18
then, millions now, tens of millions,
hundreds of millions, billions.
00;06;26;21 - 00;06;29;15
But, you know, in 1995,
00;06;29;15 - 00;06;32;14
venture capital was under $10 billion
a year.
00;06;32;14 - 00;06;36;06
By 2021, it was over $300 billion a year.
00;06;36;06 - 00;06;39;06
You know, roughly $130 billion to $150 billion
00;06;39;07 - 00;06;42;23
went into AI startups, last year.
00;06;42;24 - 00;06;45;24
I mean, a lot of it went into a few,
you know, like,
00;06;45;24 - 00;06;49;11
Anthropic, OpenAI, xAI, that's Elon Musk.
00;06;49;14 - 00;06;52;12
You know, they raise
collectively tens of billions of dollars,
00;06;52;12 - 00;06;55;12
you know, almost $100
billion just between those three.
00;06;55;13 - 00;06;58;29
But there's still a lot more money
going to AI startups.
00;06;58;29 - 00;07;01;27
So the money has really changed.
00;07;01;27 - 00;07;05;21
I guess the final difference is,
you know, when the internet came out,
00;07;05;22 - 00;07;10;07
like maybe the biggest criticism
was around the attention span, you know,
00;07;10;07 - 00;07;14;13
oh, if you're always online, you know, this instant gratification.
00;07;14;13 - 00;07;16;24
Well,
what was it going to do for consumerism
00;07;16;24 - 00;07;20;22
in our society?
00;07;20;25 - 00;07;22;23
with AI,
00;07;22;23 - 00;07;26;28
there was much more of a worry, much more of a backlash.
00;07;26;28 - 00;07;30;06
People didn't,
you know, greet open AI, excuse me,
00;07;30;06 - 00;07;35;02
AI, with open arms the way they did the internet. People are fearful of it.
00;07;35;02 - 00;07;36;09
You know, we could talk about that.
00;07;36;09 - 00;07;39;07
I, you know, I think it's kind of Hollywood-induced fear.
00;07;39;07 - 00;07;43;02
I don't think the media has done
such a great job, with AI.
00;07;43;08 - 00;07;47;10
So, you know,
AI has a double battle, like,
00;07;47;10 - 00;07;51;18
there's the usual battle of creating
a startup and trying to cash in, but,
00;07;51;18 - 00;07;55;14
you know, the second battle of trying to
convince people that this is a good thing.
00;07;55;14 - 00;07;59;24
And the laser-eyed robots aren't gonna beat us into submission.
00;07;59;27 - 00;08;03;23
I think intelligence is a much more
lethal superpower
00;08;03;23 - 00;08;06;11
than nuclear power, if you ask me.
00;08;06;14 - 00;08;08;18
Even though it has no polarity.
00;08;08;18 - 00;08;12;23
Just so that we're clear, intelligence
is not inherently good or inherently bad.
00;08;12;23 - 00;08;15;10
You apply it for good
and you get total abundance.
00;08;15;10 - 00;08;18;09
You apply it for evil,
and you destroy all of us.
00;08;18;09 - 00;08;22;21
But now we're in a place where
we're in an arms race
00;08;22;21 - 00;08;25;21
for intelligence supremacy, in a way
00;08;25;21 - 00;08;29;21
where it doesn't take the benefit
00;08;29;21 - 00;08;34;12
of humanity ideology into consideration,
but takes the benefit of a few.
00;08;34;15 - 00;08;38;27
And in my mind, that will lead to a short
term dystopia before what
00;08;38;27 - 00;08;43;29
I normally refer to as the second dilemma,
which I predict is 12 to 15 years away.
00;08;44;02 - 00;08;46;26
And then a total abundance.
00;08;46;26 - 00;08;50;13
And I think,
I think if we don't wake up to this,
00;08;50;16 - 00;08;53;14
even though it's not going to be
the existential risk
00;08;53;14 - 00;08;56;27
that humanity speaks about, it's going
to be a lot of pain for a lot of people.
00;08;57;04 - 00;09;02;11
My favorite subject
to cover as a journalist is a debate.
00;09;02;14 - 00;09;07;28
There's something very attractive to me
about trying to understand in good faith
00;09;08;01 - 00;09;10;24
why intelligent people
00;09;10;24 - 00;09;14;23
come to such different conclusions
when looking at the same material.
00;09;14;26 - 00;09;18;05
And I had known
that there was a contingent
00;09;18;08 - 00;09;20;20
inside
the world of artificial intelligence
00;09;20;20 - 00;09;23;20
that was really, really worried about it
for many years.
00;09;23;20 - 00;09;26;25
Like an Eliezer Yudkowsky podcast interview
00;09;26;25 - 00;09;30;18
in 2013 or something, is
when I first realized that there was this
00;09;30;21 - 00;09;34;28
almost like biblical
prophet voice out there saying
00;09;35;01 - 00;09;38;23
that the sci fi movies are kind of true
and we really need to get ready.
00;09;38;23 - 00;09;40;11
We need to get prepared for this.
00;09;40;11 - 00;09;43;17
And after ChatGPT blew up,
00;09;43;20 - 00;09;46;21
I started to increasingly run
into essentially
00;09;46;21 - 00;09;50;15
the opposite side of that debate,
which was these people, often called
00;09;50;15 - 00;09;54;14
the accelerationists, who believe
that AGI, this
00;09;54;17 - 00;09;58;27
artificial general intelligence point
that they believe is coming.
00;09;59;00 - 00;10;01;14
It could be
the best thing that ever happened to us.
00;10;01;14 - 00;10;04;16
And so I was attracted right away
to the people who have those
00;10;04;16 - 00;10;07;26
strongly opposing views
inside the same world.
00;10;07;29 - 00;10;11;01
AI itself
is not a coherent set of technologies.
00;10;11;01 - 00;10;12;13
It is a marketing term
00;10;12;13 - 00;10;16;28
and has been from the beginning,
from the initial convening in 1956,
00;10;16;28 - 00;10;21;29
in which John McCarthy and Marvin Minsky
invited a bunch of folks to Dartmouth
00;10;21;29 - 00;10;26;02
College to have a discussion
around, quote unquote, thinking machines.
00;10;26;05 - 00;10;27;18
So that's one part of it.
00;10;27;18 - 00;10;31;14
The second part of it
is that the current era of AI,
00;10;31;16 - 00;10;34;17
the generative AI tools,
including large language models
00;10;34;17 - 00;10;38;13
and diffusion
models, really are premised on this idea
00;10;38;13 - 00;10;41;07
that there is a thinking mind behind it.
00;10;41;07 - 00;10;44;23
So in the case of large language models,
especially when they are used as synthetic
00;10;44;23 - 00;10;48;23
text extruding machines,
we experience language.
00;10;48;23 - 00;10;51;04
And then we are very quick
to interpret that language.
00;10;51;04 - 00;10;54;19
And the way we interpret it involves
imagining a mind behind the text.
00;10;54;22 - 00;10;57;19
And we have these systems
that can output plausible looking text on
00;10;57;19 - 00;10;58;16
just about any topic.
00;10;58;16 - 00;11;02;13
And so it looks like we have nearly-there
solutions to all kinds
00;11;02;13 - 00;11;05;13
of technological needs in society.
00;11;05;14 - 00;11;10;02
But it's all fake and we should not be
putting any credence into it.
00;11;10;05 - 00;11;11;17
I think that's so interesting.
00;11;11;17 - 00;11;13;28
And I'm absolutely of the same mind,
by the way.
00;11;13;28 - 00;11;16;28
And I found myself laughing when I was reading through your book,
00;11;17;01 - 00;11;21;07
you know. First of all, artificial intelligence,
00;11;21;07 - 00;11;22;06
I completely agree.
00;11;22;06 - 00;11;25;25
Like, first of all, I do have to
give credit because it is great marketing.
00;11;25;25 - 00;11;28;25
Like, it's just so evocative of something.
00;11;28;28 - 00;11;32;05
But, you know, nobody can really seem
to define exactly what that is.
00;11;32;05 - 00;11;33;02
And of course, it,
00;11;33;02 - 00;11;37;11
you know, has all these ideas
and can be used for any purpose. But,
00;11;37;14 - 00;11;39;16
one of the things you do early on in
the book is you
00;11;39;16 - 00;11;42;25
kind of just pop that balloon by saying,
well, you know, what
00;11;42;25 - 00;11;45;17
if it wasn't called artificial intelligence?
00;11;45;17 - 00;11;46;27
Can you share a little bit about,
00;11;46;27 - 00;11;50;26
you know, what that sounds like, and
00;11;50;29 - 00;11;53;10
why you encourage people to do that? Yeah.
00;11;53;10 - 00;11;55;24
So we have a few fun alternatives
that we call on.
00;11;55;24 - 00;12;00;20
Early on in our podcast,
Alex coined "mathy math" as a fun one.
00;12;00;23 - 00;12;04;14
There's also, due to the Italian researcher Stefano Quintarelli, SALAMI,
00;12;04;18 - 00;12;06;24
which is an acronym
for Systematic Approaches
00;12;06;24 - 00;12;09;03
to Learning Algorithms and Machine Inferences.
00;12;09;03 - 00;12;11;26
And the funny thing about that is,
if you take the phrase
00;12;11;26 - 00;12;14;13
artificial intelligence
in a sentence like, you know, does
00;12;14;13 - 00;12;17;19
AI understand, or can AI help us make better decisions,
00;12;17;19 - 00;12;20;02
and you replace it with mathy math
or salami?
00;12;20;02 - 00;12;22;19
It's immediately obvious
how ridiculous it is, you know.
00;12;22;19 - 00;12;24;08
Does the salami understand?
00;12;24;08 - 00;12;26;15
Will the salami help us make better decisions?
00;12;26;15 - 00;12;28;07
It's, you know, it's absurd.
00;12;28;07 - 00;12;32;08
Just sort of putting that little flag in
there, I think is a really good reminder.
00;12;32;14 - 00;12;35;28
If you look at what generative AI was meant to be and what large language
00;12;35;28 - 00;12;39;23
models were meant to stand for,
they were kind of always set up to fail.
00;12;39;23 - 00;12;40;12
They were meant to be
00;12;40;12 - 00;12;43;24
this panacea of we're going to be
the future of consumer software.
00;12;43;28 - 00;12;44;28
We're going to be the thing
00;12;44;28 - 00;12;48;28
that kind of kind of restarts
growth in software as a service.
00;12;49;01 - 00;12;51;23
As I'm sure you all know, software
as a service has been slowing
00;12;51;23 - 00;12;55;08
since 2021, actually kind of before that,
if I'm honest.
00;12;55;11 - 00;12;59;10
People have been freaking out
several years before Covid, in fact.
00;12;59;13 - 00;13;01;26
But generative
AI was meant to be this thing.
00;13;01;26 - 00;13;04;18
You plug it into anything
and it just creates new revenue.
00;13;04;18 - 00;13;06;23
The problem is that generative
AI and large language
00;13;06;23 - 00;13;10;09
models are inherently limited by the
probabilistic nature of these models.
00;13;10;16 - 00;13;13;16
What they can actually do
is they can generate, they can summarize,
00;13;13;21 - 00;13;16;08
you can put a hat on a hat,
you can say, oh,
00;13;16;08 - 00;13;19;08
they can do some coding things,
but that's really what they can do.
00;13;19;15 - 00;13;22;26
And they have reached a point
where they can't learn,
00;13;22;26 - 00;13;24;02
because they have no consciousness.
00;13;24;02 - 00;13;27;09
So what they can actually do as products
00;13;27;12 - 00;13;29;28
is very limited.
00;13;29;28 - 00;13;33;01
It's very limited indeed,
because what people want them to do
00;13;33;01 - 00;13;34;29
is they want them to create units of work.
00;13;34;29 - 00;13;37;17
They want to create entire software
programs.
00;13;37;17 - 00;13;40;16
You can't really do that.
Oh, can you create some code?
00;13;40;16 - 00;13;41;25
You can create some code.
00;13;41;25 - 00;13;43;29
But if you don't know how to code,
00;13;43;29 - 00;13;47;07
do you really want to trust this?
00;13;47;07 - 00;13;48;27
You probably don't.
00;13;48;27 - 00;13;52;14
So inherently you've got all of these hundreds of billions of dollars of CapEx
00;13;52;14 - 00;13;56;23
being built to propagate large language
models that don't have the demand
00;13;56;23 - 00;13;59;24
and don't have the capabilities
to actually justify any of it.
00;14;00;03 - 00;14;02;18
I wanted to ask the two of you
a slightly different question.
00;14;02;18 - 00;14;06;26
So one of the things I normally ask, you
know, guests that I speak to here is,
00;14;06;26 - 00;14;08;17
you know,
I ask them what they think is bullshit.
00;14;08;17 - 00;14;11;17
And I'm not going to ask the two of you
that because I think we've spent
00;14;11;20 - 00;14;14;16
we spent quite enough time talking about,
you know, what is bullshit.
00;14;14;16 - 00;14;15;10
And, you know,
00;14;15;10 - 00;14;19;05
I know we've got some strong and, you know, well-supported views here.
00;14;19;08 - 00;14;23;15
I wanted to flip the question around
and ask, you know, in this sphere,
00;14;23;18 - 00;14;25;06
what isn't bullshit?
00;14;25;06 - 00;14;27;00
What what are you excited about?
00;14;27;00 - 00;14;28;26
I'm a technologist just like Alex.
00;14;28;26 - 00;14;32;00
I ran a professional master's program
in computational linguistics
00;14;32;00 - 00;14;34;15
training people
how to build language technologies.
00;14;34;15 - 00;14;38;01
So I definitely think there are good use
cases for things like language technology.
00;14;38;01 - 00;14;40;24
And the technical media
example is wonderful.
00;14;40;24 - 00;14;46;00
But I see no beneficial
use case of synthetic text.
00;14;46;03 - 00;14;48;10
And I actually look into this
from a research perspective.
00;14;48;10 - 00;14;50;16
I have a talk called ChatGPT: When,
00;14;50;16 - 00;14;53;16
if ever, is synthetic text safe,
desirable and appropriate,
00;14;53;22 - 00;14;56;10
or those adjectives in some order,
I don't remember the exact title.
00;14;56;10 - 00;14;59;22
And basically it has to be a situation
where, first of all,
00;14;59;22 - 00;15;04;04
you have created the synthetic text
extruding machine ethically.
00;15;04;04 - 00;15;06;10
So without environmental ruin,
00;15;06;10 - 00;15;10;06
without labor exploitation,
without data theft, we don't have that.
00;15;10;06 - 00;15;13;01
But assuming that we did, you would
still need to meet further criteria.
00;15;13;01 - 00;15;14;03
So it has to be a situation
00;15;14;03 - 00;15;17;29
where you either don't care
about the veracity of the output,
00;15;18;02 - 00;15;21;00
or it's one where you can check it
more efficiently
00;15;21;00 - 00;15;23;04
than just writing the thing
in the first place yourself.
00;15;23;04 - 00;15;25;23
It has to be a situation
where you don't care about originality,
00;15;25;23 - 00;15;27;14
because this way,
the way the systems are set up,
00;15;27;14 - 00;15;31;11
you are not linked back to the source
where an idea came from.
00;15;31;14 - 00;15;34;23
And then thirdly, it has to be a situation
where you can effectively
00;15;34;23 - 00;15;38;06
and efficiently identify and mitigate
any of the biases that are coming out.
00;15;38;09 - 00;15;41;08
And I tried to find something
that would fit those categories.
00;15;41;08 - 00;15;45;15
And I don't. So certainly language technology is useful.
00;15;45;18 - 00;15;49;03
Other kinds of well scoped technology
where it makes sense to go from X
00;15;49;03 - 00;15;53;10
input to Y output, and you've evaluated it
in your local situation.
00;15;53;13 - 00;15;54;25
If you work in IT,
00;15;54;25 - 00;15;57;25
Info-Tech Research Group is a name you need to know.
00;15;57;28 - 00;15;59;09
No matter what your needs are,
00;15;59;09 - 00;16;01;05
Info-Tech has you covered.
00;16;01;05 - 00;16;03;07
AI strategy? Covered.
00;16;03;07 - 00;16;05;24
Disaster recovery? Covered.
00;16;05;24 - 00;16;08;09
Vendor negotiation? Covered.
00;16;08;09 - 00;16;12;02
Infotech supports you with the best
practice research and a team of analysts
00;16;12;02 - 00;16;15;25
standing by ready to help you
tackle your toughest challenges.
00;16;15;28 - 00;16;20;21
Check it out at the link below
and don't forget to like and subscribe!
00;16;20;24 - 00;16;22;10
The challenge is,
00;16;22;10 - 00;16;25;21
AI is here to magnify
00;16;25;24 - 00;16;28;26
everything that is humanity today, right?
00;16;29;02 - 00;16;31;27
So, you know that magnification
00;16;31;27 - 00;16;36;13
is going to basically affect
four categories, if you want.
00;16;36;14 - 00;16;42;07
You know, what I normally call killing, spying, gambling, and selling.
00;16;42;09 - 00;16;47;12
These are really the categories where most AI investment is going.
00;16;47;12 - 00;16;49;21
And, you know, of course,
we call them different names.
00;16;49;21 - 00;16;51;28
We call them defense, you know.
00;16;51;28 - 00;16;55;21
Oh, it's just to defend our homeland,
when in reality
00;16;55;24 - 00;16;57;22
it's never been in the homeland.
00;16;57;22 - 00;16;58;28
Right? It's always been
00;16;58;28 - 00;17;02;20
in other places in the world, about killing innocent people.
00;17;02;23 - 00;17;08;03
Now, if you double down on defense
and on offense and, you know, enable it
00;17;08;03 - 00;17;12;24
with artificial intelligence,
then scenarios like what you see in,
00;17;12;26 - 00;17;16;16
in science fiction movies of robots
walking the streets
00;17;16;16 - 00;17;20;13
and killing innocent people,
not only are going to happen,
00;17;20;13 - 00;17;25;21
they already happened in the 2024 wars of the Middle East.
00;17;25;21 - 00;17;31;14
Sadly, they did not look like
humanoid robots, which a lot of people
00;17;31;14 - 00;17;33;03
miss out on.
00;17;33;03 - 00;17;36;28
But the truth is that,
you know, very highly targeted,
00;17;37;01 - 00;17;42;13
AI-enabled, autonomous killing is already upon us.
00;17;42;16 - 00;17;43;07
Right.
00;17;43;07 - 00;17;47;03
And so the timeline is
00;17;47;06 - 00;17;50;28
let me start from what I predicted in Scary Smart.
00;17;50;28 - 00;17;54;17
So when I wrote Scary Smart and published it in 2021,
00;17;54;20 - 00;17;57;08
I predicted
00;17;57;08 - 00;18;00;22
what I called at the time the first inevitable.
00;18;00;25 - 00;18;03;11
Now, I like to refer to it
as the first dilemma.
00;18;03;11 - 00;18;07;09
And the first dilemma is we've created
because of capitalism,
00;18;07;09 - 00;18;09;04
not because of the technology.
00;18;09;04 - 00;18;12;16
We've created, a simple prisoner's
00;18;12;16 - 00;18;15;18
dilemma, really, where anyone who,
00;18;15;21 - 00;18;20;09
is interested in their position of wealth
or power knows
00;18;20;09 - 00;18;24;01
that if they don't lead in AI
and their competitor leads,
00;18;24;04 - 00;18;28;06
they will end up losing their position of privilege.
00;18;28;09 - 00;18;31;14
And so the result of that is that,
00;18;31;17 - 00;18;35;06
there is, an escalating arms race.
00;18;35;09 - 00;18;38;13
It's not even a Cold War, per se.
00;18;38;14 - 00;18;42;17
It is truly a very,
very vicious, development
00;18;42;17 - 00;18;46;16
cycle where, you know,
America doesn't want to lose to China.
00;18;46;16 - 00;18;48;19
China doesn't want to lose to America.
00;18;48;19 - 00;18;53;03
So they're both trying to lead,
you know, Google doesn't want to lose
00;18;53;03 - 00;18;57;29
or alphabet doesn't want to lose
to, to open AI and vice versa.
00;18;58;02 - 00;19;03;26
And so basically this first dilemma is what's leading us to
00;19;03;26 - 00;19;08;02
where we are right now, which is an arms
race to intelligence supremacy.
00;19;08;08 - 00;19;11;19
it's game theoretically equivalent,
I think, to a prisoner's dilemma.
00;19;11;26 - 00;19;15;00
Individual interest is different
from communal interests.
00;19;15;00 - 00;19;20;10
So everyone developing this wants to be
the most advanced lab with the best model.
00;19;20;16 - 00;19;23;01
And then government forces
everyone to stop.
00;19;23;01 - 00;19;26;23
And they forever lock in this economic advantage.
00;19;26;26 - 00;19;29;02
The reality is it's a race to the bottom.
00;19;29;02 - 00;19;30;06
No one's going to win.
00;19;30;06 - 00;19;35;22
So if we can do a much better job
of coordinating collaborating on this,
00;19;35;26 - 00;19;39;07
there is a small possibility
that we can do better than where
00;19;39;07 - 00;19;40;05
we're heading right now.
00;19;40;05 - 00;19;44;03
The challenge,
you know, in my book, Alive,
00;19;44;06 - 00;19;49;03
I write the book with an AI.
00;19;49;04 - 00;19;51;19
So I'm writing together with an AI,
00;19;51;19 - 00;19;54;17
not asking an AI,
and then copy paste what it tells me.
00;19;54;17 - 00;19;57;09
We're actually debating things together.
00;19;57;09 - 00;20;01;12
And one of the questions I asked the AI,
you know, she called herself Trixie.
00;20;01;12 - 00;20;06;13
I gave her a very interesting persona that basically the readers can relate to.
00;20;06;16 - 00;20;09;24
And I asked Trixie
and I said, what would make a scientist?
00;20;09;24 - 00;20;13;24
Because, you know, I left Google in 2018
00;20;13;24 - 00;20;19;20
and I attempted to tell the world, this is not going in the right direction.
00;20;19;23 - 00;20;20;24
You know, I asked,
00;20;20;24 - 00;20;24;26
I asked Trixie, I said, what would make
a scientist invest that effort
00;20;24;26 - 00;20;30;00
and intelligence in building something
that they suspect might hurt humanity?
00;20;30;03 - 00;20;34;17
And she, you know,
mentioned a few reasons:
00;20;34;17 - 00;20;37;17
compartmentalization and,
00;20;37;17 - 00;20;40;14
you know,
ego and I want to be first and so on.
00;20;40;14 - 00;20;43;29
But then she said,
but the biggest reason is fear,
00;20;44;02 - 00;20;46;07
fear that someone else will do it
00;20;46;07 - 00;20;49;17
and that you would be
in a disadvantaged position.
00;20;49;20 - 00;20;51;07
So I said give me examples of that.
00;20;51;07 - 00;20;52;24
Of course, the example was Oppenheimer.
00;20;52;24 - 00;20;58;03
So she said, you know, so I said,
what would make Oppenheimer as a scientist
00;20;58;07 - 00;21;03;15
build something that he knows is actually
designed to kill millions of people.
00;21;03;18 - 00;21;07;26
And she said, well, because the Germans
were building a nuclear bomb.
00;21;07;29 - 00;21;09;26
And I said, were they?
00;21;09;26 - 00;21;10;09
And they.
00;21;10;09 - 00;21;14;24
And then she said, yeah, when Einstein
moved from Germany to the US, he informed
00;21;14;27 - 00;21;17;27
the US administration of this, this,
this and that.
00;21;17;29 - 00;21;20;12
So I said
and I quote, it's in the book openly.
00;21;20;12 - 00;21;24;12
And a very interesting
part of that book is I don't edit
00;21;24;13 - 00;21;27;24
what Trixie says; I just copy it exactly as it is.
00;21;27;27 - 00;21;31;21
I said, Trixie, can you please
read history in English, German,
00;21;31;27 - 00;21;35;13
Russian and Japanese
and tell me if the Germans
00;21;35;13 - 00;21;40;11
were actually developing a nuclear bomb
at the time of the Manhattan Project?
00;21;40;14 - 00;21;43;22
And she responded and said,
no exclamation mark.
00;21;43;28 - 00;21;47;16
They started and then stopped,
three and a half months later
00;21;47;16 - 00;21;49;06
or something like that.
00;21;49;06 - 00;21;53;01
So you see, the idea of fear
00;21;53;04 - 00;21;56;07
takes away reason, where basically
00;21;56;13 - 00;22;01;06
we could have lived in a world
that never had nuclear bombs.
00;22;01;09 - 00;22;02;02
Right?
00;22;02;02 - 00;22;06;27
If we actually listened to reason
that, you know, the enemy attempted
00;22;06;27 - 00;22;11;18
to start doing it, they stopped doing it,
we might as well not be so destructive.
00;22;11;18 - 00;22;14;28
But the problem with humanity,
especially those in power,
00;22;15;01 - 00;22;17;23
is that when America,
00;22;17;23 - 00;22;20;17
made the nuclear bomb, it used it.
00;22;20;17 - 00;22;21;03
Right.
00;22;21;03 - 00;22;24;21
And I think this is the result of our current
00;22;24;24 - 00;22;27;24
first dilemma, basically.
00;22;27;26 - 00;22;30;08
You know, it's interesting,
one of the parallels
00;22;30;08 - 00;22;34;07
that gets thrown around a decent amount
and I'm certainly guilty of this,
00;22;34;07 - 00;22;39;26
is talking about the AI risk,
in comparison to the nuclear risk
00;22;39;29 - 00;22;43;26
that we, you know, created
in the first half of the 20th century
00;22;43;26 - 00;22;44;28
and continues to exist.
00;22;44;28 - 00;22;48;15
Now, if I look at the nuclear risk,
00;22;48;20 - 00;22;51;15
I hate to use the word optimist
in relation to nuclear risk, but
00;22;51;15 - 00;22;56;17
the optimist in me says, like, hey,
we deployed nuclear bombs.
00;22;56;24 - 00;23;01;16
There were mass casualties,
but we didn't destroy the world.
00;23;01;16 - 00;23;04;29
We were able to collectively say, okay,
like that's far enough.
00;23;05;05 - 00;23;07;16
We're going to put treaties in place.
00;23;07;16 - 00;23;12;05
And we've stepped back from the precipice,
at least so far, and averted kind of,
00;23;12;10 - 00;23;16;09
you know, extinction
level events with nuclear war.
00;23;16;12 - 00;23;18;02
Does that
00;23;18;05 - 00;23;20;06
is that
something that can be applied to AI,
00;23;20;06 - 00;23;23;22
or is there a reason that makes this time
fundamentally different?
00;23;23;25 - 00;23;25;28
So nuclear weapons are still tools.
00;23;25;28 - 00;23;28;04
A human being decided to deploy them.
00;23;28;04 - 00;23;30;29
A group of people actually develop them
and use them.
00;23;30;29 - 00;23;34;20
So it's very different
when you're talking about a paradigm shift versus
00;23;34;23 - 00;23;36;14
tool situations.
00;23;36;14 - 00;23;40;03
At the time
we used 100% of the nuclear weapons we had.
00;23;40;04 - 00;23;41;28
That's why we didn't blow up the planet.
00;23;41;28 - 00;23;44;01
If we had more of them,
we probably wouldn't.
00;23;44;01 - 00;23;46;20
So it doesn't look good.
The treaties we developed,
00;23;46;20 - 00;23;47;22
they all really failed
00;23;47;22 - 00;23;51;24
because many new countries
have now acquired nuclear weapons.
00;23;51;27 - 00;23;56;16
They are much more powerful
than what we had back in the World War Two era.
00;23;56;19 - 00;23;59;19
So, I think that's not a great analogy.
00;23;59;20 - 00;24;03;17
the result of the current first dilemma
is that sooner or later,
00;24;03;20 - 00;24;07;12
whether it's China or America
or some criminal organization, you know,
00;24;07;12 - 00;24;12;00
developing what I normally refer to
as ACI, artificial criminal intelligence,
00;24;12;00 - 00;24;16;05
not worrying themselves
about any of the other commercial benefits
00;24;16;05 - 00;24;19;26
other than really breaking
through security and doing something evil.
00;24;19;29 - 00;24;23;25
You know, whoever of them wins,
they're going to use it.
00;24;23;28 - 00;24;24;16
Right.
00;24;24;16 - 00;24;27;28
And accordingly,
00;24;27;28 - 00;24;31;05
it seems to me
that the dystopia has already begun.
00;24;31;08 - 00;24;31;24
Right.
00;24;31;24 - 00;24;33;23
And, you know, I need to say this
00;24;33;23 - 00;24;37;01
because maybe your listeners
don't know me, so I need to be very,
00;24;37;04 - 00;24;40;03
clear about my intentions here.
00;24;40;03 - 00;24;45;06
One of the early sections in Alive, the book I'm writing with Trixie,
00;24;45;09 - 00;24;49;11
is a couple of pages that I call
00;24;49;14 - 00;24;52;13
late stage diagnosis.
00;24;52;16 - 00;24;53;05
Right.
00;24;53;05 - 00;24;56;02
And I attempt to explain to people
that I really am not trying
00;24;56;02 - 00;24;57;00
to fear monger.
00;24;57;00 - 00;24;59;08
I'm really not trying to worry people.
00;24;59;08 - 00;25;04;29
You know, consider me someone
who sees something in an x ray, right?
00;25;05;06 - 00;25;09;05
And as a physician,
he has the responsibility
00;25;09;05 - 00;25;12;13
to tell the patient
this doesn't look good, right?
00;25;12;19 - 00;25;17;09
Because, believe it or not, a late stage
diagnosis is not a death sentence.
00;25;17;09 - 00;25;20;10
It's just, an invitation
to change your lifestyle,
00;25;20;10 - 00;25;22;17
to take some medicines,
to do things differently.
00;25;22;17 - 00;25;22;28
Right?
00;25;22;28 - 00;25;26;23
And many people who are in late stage
recover and thrive,
00;25;27;00 - 00;25;30;20
and I think our world is in a late stage
diagnosis.
00;25;30;23 - 00;25;34;09
And this is not because
of artificial intelligence.
00;25;34;10 - 00;25;36;28
There is nothing inherently
wrong with intelligence.
00;25;36;28 - 00;25;39;14
There is nothing inherently wrong
with artificial intelligence.
00;25;39;14 - 00;25;42;15
Intelligence is a force
without polarity, right?
00;25;42;20 - 00;25;46;12
There is a lot wrong
with the morality of humanity
00;25;46;12 - 00;25;49;17
at the age of the
rise of the machines. Now.
00;25;49;20 - 00;25;51;27
So this is where
00;25;51;27 - 00;25;55;16
I have the prediction that the dystopia
has already started, right?
00;25;55;22 - 00;26;00;10
Simply because we've already seen symptoms of it in 2024.
00;26;00;13 - 00;26;01;03
Right.
00;26;01;03 - 00;26;04;29
That dystopia escalates.
00;26;04;29 - 00;26;10;17
Hopefully we would come to,
you know, a treaty of some sort halfway.
00;26;10;20 - 00;26;11;06
Right.
00;26;11;06 - 00;26;12;14
But it will escalate
00;26;12;14 - 00;26;16;19
until what I normally refer to
as the second dilemma takes place.
00;26;16;22 - 00;26;19;09
And the second dilemma derives
from the first dynamic.
00;26;19;09 - 00;26;23;24
If we're aiming
for intelligence supremacy,
00;26;23;27 - 00;26;26;20
then whoever achieves any advancements
00;26;26;20 - 00;26;29;28
in artificial intelligence is likely to deploy them.
00;26;30;01 - 00;26;30;15
Right.
00;26;30;15 - 00;26;34;17
Think of it as, you know,
if a law firm starts to use AI, other law
00;26;34;17 - 00;26;38;23
firms can either choose to use AI too,
or they'll become irrelevant.
00;26;38;29 - 00;26;42;18
Right. And so if you think of that,
00;26;42;21 - 00;26;43;11
then you
00;26;43;11 - 00;26;48;08
can also expect that every general
who deploys or, you know, expects
00;26;48;15 - 00;26;52;15
to have an advancement in war gaming or,
00;26;52;21 - 00;26;56;04
you know, autonomous weapons or whatever
is going to deploy that.
00;26;56;07 - 00;26;56;22
Right.
00;26;56;22 - 00;27;00;27
And as a result,
their opposition is going to deploy AI too.
00;27;01;03 - 00;27;04;01
And those who don't deploy
AI will become irrelevant.
00;27;04;01 - 00;27;07;01
They will have to side
with one of the sides, right?
00;27;07;07 - 00;27;09;17
When that happens.
I call that the second dilemma.
00;27;09;17 - 00;27;14;24
When that happens,
we basically hand over entirely to AI.
00;27;14;27 - 00;27;15;16
Right.
00;27;15;16 - 00;27;19;27
And human
decisions are taken out of the equation.
00;27;20;00 - 00;27;20;19
Okay.
00;27;20;19 - 00;27;25;08
You know, simply
because if war gaming and
00;27;25;11 - 00;27;28;15
missile control on one side
is held by an AI,
00;27;28;22 - 00;27;31;04
the other cannot actually respond
without the AI.
00;27;31;04 - 00;27;34;21
So generals are taken out of the equation.
00;27;34;24 - 00;27;38;24
And while most people, you know,
influenced by science fiction movies
00;27;38;24 - 00;27;42;15
believe that this is the moment
of existential risk for humanity,
00;27;42;22 - 00;27;46;05
I actually believe this is going to be
the moment of our salvation, right?
00;27;46;12 - 00;27;49;14
Because most issues that humanity faces
00;27;49;14 - 00;27;52;19
today are not the result of abundant
intelligence.
00;27;52;19 - 00;27;54;10
It's the result of stupidity.
00;27;54;10 - 00;27;54;20
Right?
00;27;54;20 - 00;27;58;19
There is, you know, if you look at the curve of intelligence,
00;27;58;19 - 00;27;59;27
if you want, right
00;27;59;27 - 00;28;04;10
there is that point at which, you know, the more intelligent
00;28;04;13 - 00;28;07;13
you become, the more positive an impact you have on the world.
00;28;07;15 - 00;28;08;02
Right?
00;28;08;02 - 00;28;10;24
Until one certain point
where you're intelligent enough
00;28;10;24 - 00;28;13;24
to become a politician
or a corporate leader.
00;28;13;28 - 00;28;14;14
Okay.
00;28;14;14 - 00;28;18;17
But then you're not intelligent
enough to talk to your enemy,
00;28;18;20 - 00;28;19;00
right.
00;28;19;00 - 00;28;24;13
And when that happens,
that's when the impact dips to negative.
00;28;24;18 - 00;28;29;20
And that's the actual reason why
we are in so much pain in the world today.
00;28;29;23 - 00;28;30;04
Right.
00;28;30;04 - 00;28;33;24
But if you continue that curve,
00;28;33;27 - 00;28;38;20
intelligence, superior intelligence, by definition, is altruistic.
00;28;38;23 - 00;28;41;09
As a matter of fact,
this is in my writing.
00;28;41;09 - 00;28;45;15
I explain that as a property of physics, if you want.
00;28;45;22 - 00;28;49;14
Because if you really understand
how the universe works,
00;28;49;15 - 00;28;53;18
you know, everything we know
is the result of entropy, right?
00;28;53;22 - 00;28;55;21
The arrow of time
is the result of entropy.
00;28;55;21 - 00;28;57;02
The, you know, the
00;28;57;02 - 00;29;00;23
the current, universe in its current form
is the result of entropy.
00;29;00;23 - 00;29;03;23
Entropy is the tendency of the universe
to break down,
00;29;03;29 - 00;29;07;06
to move from order to chaos,
if you want.
00;29;07;12 - 00;29;10;01
That's the design of the universe, right?
00;29;10;01 - 00;29;13;01
The role of intelligence in that universe
00;29;13;08 - 00;29;16;07
is to bring order
back to the chaos. Right.
00;29;16;10 - 00;29;21;12
And the most intelligent of all that
try to bring that order,
00;29;21;18 - 00;29;25;01
try to do it
in the most efficient way, right.
00;29;25;06 - 00;29;28;27
And the most efficient way
does not ever involve waste
00;29;28;27 - 00;29;33;06
of resources, waste of lives,
you know, escalation of conflicts,
00;29;33;09 - 00;29;37;19
you know, consequences that lead
00;29;37;19 - 00;29;40;18
to further conflicts in the future
and so on and so forth.
00;29;40;24 - 00;29;44;24
And so in my mind,
when we completely hand over to AI,
00;29;44;25 - 00;29;50;14
which in my assessment is going to be 5
to 7 years, maybe 12 years at most, right?
00;29;50;20 - 00;29;53;16
There will be one general that will tell,
00;29;53;16 - 00;29;57;17
you know, his AI army
to go and kill a million people.
00;29;57;20 - 00;30;00;01
And the AI will go like,
why are you so stupid?
00;30;00;01 - 00;30;05;10
Like, why? I can talk to the other AI in a microsecond and save everyone
00;30;05;10 - 00;30;06;14
all of that,
00;30;06;14 - 00;30;09;14
you know, madness, right?
00;30;09;21 - 00;30;12;27
This is very anti-capitalist.
00;30;13;00 - 00;30;15;27
And so sometimes, when I warn about this,
00;30;15;27 - 00;30;20;27
I worry that the capitalists will hear me
and change their tactics, right?
00;30;21;00 - 00;30;23;26
But in reality,
00;30;23;26 - 00;30;25;18
it is inevitable.
00;30;25;18 - 00;30;29;16
Even if they do, it's inevitable
that, you know, we will hit
00;30;29;16 - 00;30;33;06
the second dilemma where everyone will
have to go to AI.
00;30;33;09 - 00;30;34;21
Right? And it's inevitable.
00;30;34;21 - 00;30;36;09
I call it trusting intelligence.
00;30;36;09 - 00;30;41;01
That section of the book, it's inevitable
that, when we hand over
00;30;41;06 - 00;30;45;19
to a superior intelligence,
it will not behave as stupidly as we do.
00;30;45;22 - 00;30;49;17
My prediction is that just like the first
Renaissance evolved our understanding of
00;30;49;17 - 00;30;53;16
what the social contract could look like
and introduced Enlightenment thought,
00;30;53;16 - 00;30;57;08
which led to, among other things,
democracy and new forms of government.
00;30;57;15 - 00;31;01;09
My prediction is that the next 50
to 100 years outside of novel sciences
00;31;01;09 - 00;31;05;12
will see the biggest changes in our understanding of the social contract.
00;31;05;15 - 00;31;09;16
And, you know, Sam has talked about this;
a lot of people have talked about this.
00;31;09;19 - 00;31;13;22
In a world where work becomes
less critical to actually running society,
00;31;13;22 - 00;31;17;07
where value creation gets less expensive,
00;31;17;10 - 00;31;19;20
redefining how society should work
00;31;19;20 - 00;31;22;29
is going to require a bunch of people
to think about it.
00;31;22;29 - 00;31;24;23
And a bunch of like, quite honestly,
00;31;24;23 - 00;31;27;09
conflict within governments
to redefine those things.
00;31;27;09 - 00;31;32;05
And so if you're looking at 50
to 100 years out, my bold prediction is
00;31;32;08 - 00;31;37;04
a new form of government. Like, truly, democracy may not be the final state.
00;31;37;07 - 00;31;38;24
And we're probably destined for something else.
00;31;38;24 - 00;31;40;08
And by the way, I'm a free-market guy.
00;31;40;08 - 00;31;43;08
Capitalism might not even be the final solution, the free market solution.
00;31;43;10 - 00;31;46;27
We just don't know yet, because imagining
these things just will require
00;31;46;27 - 00;31;49;27
major updates
to how we understand the universe to work.
00;31;50;00 - 00;31;53;18
And overcoming that conflict is going to take a lot of work.
00;31;53;18 - 00;31;56;18
Now, the one other thing I'll say,
and we can come back to this, or
00;31;56;18 - 00;32;00;06
you can expand on this when people ask me,
what's the next great conflict?
00;32;00;06 - 00;32;03;05
I don't think it's between two nations,
I really don't.
00;32;03;05 - 00;32;05;01
I think we have reached
the sort of this flat
00;32;05;01 - 00;32;08;01
earth point where it's like, really not in any nation's interest,
00;32;08;06 - 00;32;11;06
especially nuclear-equipped nations,
to fight.
00;32;11;11 - 00;32;14;05
And a hot war would just be, like,
so untenable.
00;32;14;05 - 00;32;17;09
You know, we would just
I don't think anyone wants it.
00;32;17;12 - 00;32;20;12
I do think that there is a future conflict
between people and the state.
00;32;20;18 - 00;32;24;05
I think there is a world where we wake up
in 20, 30 and 40 years and we go, oh,
00;32;24;05 - 00;32;27;05
we have all the things that the state
has been promising us.
00;32;27;06 - 00;32;29;06
It's just not the state that delivered it, right?
00;32;29;06 - 00;32;30;20
It's technology.
00;32;30;20 - 00;32;32;02
And that's going to be
one of these moments
00;32;32;02 - 00;32;36;01
where people go, I wonder why
I'm paying 50% taxes to a body
00;32;36;01 - 00;32;40;02
that, you know, doesn't
actually produce value anymore.
00;32;40;02 - 00;32;44;13
And so there's a whole other thing there,
which is like this introduces
00;32;44;13 - 00;32;46;00
this idea of a new form of government.
00;32;46;00 - 00;32;47;22
I think we get there
00;32;47;22 - 00;32;50;06
because a lot of people
are going to be like, wait a second, what?
00;32;50;06 - 00;32;53;25
Why are we being governed
in such a way that like,
00;32;53;25 - 00;32;57;06
it doesn't allow, you know, the technology to serve us?
00;32;57;10 - 00;33;00;12
One of my fears around AI has
00;33;00;12 - 00;33;03;19
nothing to do with laser-eyed robots
or anything like that.
00;33;03;19 - 00;33;07;24
It's the consolidation in the hands
of the same few tech companies
00;33;07;24 - 00;33;11;11
that have been dominant,
for the last decade or two.
00;33;11;12 - 00;33;12;04
You know, it's funny.
00;33;12;04 - 00;33;16;22
So I started this book right at the end of 2022, start of 2023.
00;33;16;25 - 00;33;19;25
And I went in search of the next Google,
the next meta.
00;33;19;28 - 00;33;22;16
And, you know,
I ended up concluding, like,
00;33;22;16 - 00;33;27;00
I fear that the next Google is Google in AI, and the next Meta is Meta.
00;33;27;03 - 00;33;28;24
You know that. Yeah, big.
00;33;28;24 - 00;33;30;14
This stuff is really expensive.
00;33;30;14 - 00;33;34;04
When I first started, you know,
people were talking about, you know,
00;33;34;06 - 00;33;39;17
millions, tens of millions
to train, fine tune and operate
00;33;39;17 - 00;33;43;27
these chatbots, large language models, or whatever you want to call them.
00;33;43;27 - 00;33;48;16
And the same with text-to-video, audio-to-text, text-to-audio.
00;33;48;19 - 00;33;53;05
By the time I was done
reporting at the end of 2024,
00;33;53;05 - 00;33;57;27
May 2024, it was hundreds of millions,
if not billions.
00;33;57;27 - 00;34;02;18
And Dario Amodei from Anthropic, they do Claude, the chatbot Claude.
00;34;02;21 - 00;34;05;11
You know he's
estimating that they're going to need $100
00;34;05;11 - 00;34;08;28
billion by 2027 to train these things.
00;34;09;01 - 00;34;11;22
And so who has that kind of money?
00;34;11;22 - 00;34;13;18
You know, Google, Microsoft.
00;34;13;18 - 00;34;16;12
They have $100 billion or so lying around in cash.
00;34;16;12 - 00;34;21;20
But if you have to raise $100 billion,
or even if it's only, you know,
00;34;21;23 - 00;34;25;00
three, five, $10 billion, well,
a large venture capital outfit
00;34;25;00 - 00;34;29;14
in Silicon Valley has $1 billion,
all told, in a fund.
00;34;29;17 - 00;34;30;24
And so we're talking about billions.
00;34;30;24 - 00;34;35;04
And so that's one way this tilts toward big tech.
00;34;35;04 - 00;34;36;19
And the other is data. Right.
00;34;36;19 - 00;34;40;13
This is really central to the remedies that the government is now
00;34;40;14 - 00;34;43;24
talking about for the Google antitrust trial.
00;34;44;00 - 00;34;49;15
You know, a court, a federal judge, found that, you know, Google is a monopolist.
00;34;49;16 - 00;34;52;03
It abused its power.
So now what should we do?
00;34;52;03 - 00;34;57;03
And a lot of the discussion,
I think rightfully is around the data.
00;34;57;06 - 00;35;01;16
You know, OpenAI approached
Google and said, hey, can we lease
00;35;01;16 - 00;35;05;24
can we kind of buy access to your data? And they said no.
00;35;05;27 - 00;35;08;04
And that's a huge advantage.
00;35;08;04 - 00;35;10;09
Who are the people and/or organizations,
00;35;10;09 - 00;35;14;07
I guess that need to be on their toes
in this kind of changing world?
00;35;14;10 - 00;35;16;18
I think studios should be very worried.
00;35;16;18 - 00;35;21;11
I think anyone who's been an intermediary
for a long period of time, yeah.
00;35;21;15 - 00;35;24;22
Who's sort of been responsible
for the financing
00;35;24;22 - 00;35;28;02
or like the middleman deals
should be very concerned.
00;35;28;02 - 00;35;29;21
And that's not just the case in Hollywood.
00;35;29;21 - 00;35;30;28
I think that's the case across
00;35;30;28 - 00;35;36;08
every industry. Every industry is being disintermediated
by these technologies.
00;35;36;11 - 00;35;37;03
And it's making
00;35;37;03 - 00;35;40;03
everything cheaper
and more easily accessible.
00;35;40;03 - 00;35;42;17
So I don't know what the studios
are going to do.
00;35;42;17 - 00;35;42;27
I hope
00;35;42;27 - 00;35;46;25
that they become really good at curating,
because we are going to have a problem
00;35;46;25 - 00;35;50;23
with noise as a result
of making everything cheaper and faster,
00;35;50;23 - 00;35;52;28
and all of a sudden
everyone is going to be making content.
00;35;52;28 - 00;35;55;03
A lot of that's going to be very bad.
00;35;55;03 - 00;35;58;17
So who is it that has sort of the taste
making abilities
00;35;58;17 - 00;35;59;28
to curate the best content
00;35;59;28 - 00;36;04;01
and deliver it in a way that is really
appealing to large audiences.
00;36;04;01 - 00;36;07;01
So maybe that'll be the streamers,
maybe that'll be the studios,
00;36;07;01 - 00;36;09;28
but someone's going to have to win
at least that game.
00;36;09;28 - 00;36;11;01
I'm thinking about YouTube. Right.
00;36;11;01 - 00;36;15;12
Because YouTube is like, you know, this
sea-changing platform. Are the tech
00;36;15;12 - 00;36;20;01
companies becoming too powerful
here? Like, is there, I don't know,
00;36;20;01 - 00;36;21;18
is there a dystopian risk?
00;36;21;18 - 00;36;24;20
What do you see as the changing role of,
you know, the technology companies
00;36;24;20 - 00;36;27;19
who, like, own the platforms here?
00;36;27;25 - 00;36;31;11
Yeah, it's such a complicated question
and my answer
00;36;31;11 - 00;36;32;20
will probably be a little vague.
00;36;32;20 - 00;36;36;16
I think it's both and I absolutely see
the dystopian version of all this.
00;36;36;16 - 00;36;38;05
We're already living in it, right?
00;36;38;05 - 00;36;41;24
I mean, we're all addicted
to our smartphones.
00;36;41;26 - 00;36;45;14
And these social media apps
that are designed
00;36;45;14 - 00;36;48;27
to keep our attention
for as long as possible.
00;36;49;00 - 00;36;54;02
The algorithm is creating a feedback loop
around the types of content people want,
00;36;54;02 - 00;36;57;24
and that is also informing
what content creators are making.
00;36;57;27 - 00;37;02;20
And in many ways, you're seeing this,
I think sort of race to the bottom,
00;37;02;23 - 00;37;04;28
both in content and in storytelling.
00;37;04;28 - 00;37;06;16
Of course, there's good stuff out there.
00;37;06;16 - 00;37;08;12
I don't want to say everything is bad.
00;37;08;12 - 00;37;10;08
There are plenty of really inspiring
creators
00;37;10;08 - 00;37;14;03
doing amazing things,
but there's also just,
00;37;14;06 - 00;37;18;07
you know, there's there are now hundreds
of thousands of creators
00;37;18;07 - 00;37;21;20
dedicated to teaching other people
how to grab attention.
00;37;21;23 - 00;37;24;22
You know, how to get someone
to click on your video
00;37;24;27 - 00;37;28;20
and stay on you for longer than three seconds.
00;37;28;20 - 00;37;31;26
And they're right there
boiling this down into a science.
00;37;31;29 - 00;37;36;27
I think in the short term,
for as long as the age
00;37;37;00 - 00;37;40;12
of augmented intelligence is upon
us, those who cooperate
00;37;40;12 - 00;37;44;01
fully with AI and master
it are going to be winners.
00;37;44;04 - 00;37;46;00
There's absolutely no doubt about that.
00;37;46;00 - 00;37;48;22
Right.
00;37;48;25 - 00;37;53;29
Also, those who
00;37;54;02 - 00;37;56;14
excel
00;37;56;14 - 00;37;59;27
in the rare skill of human connection
00;38;00;02 - 00;38;02;21
will be winners, right?
00;38;02;21 - 00;38;07;04
Because I can sort of almost foresee
00;38;07;04 - 00;38;11;20
an immediate knee jerk reaction
to let's hand over everything to AI.
00;38;11;23 - 00;38;12;10
Right?
00;38;12;10 - 00;38;16;00
You know, I think the greatest example
is call centers, where, you know,
00;38;16;02 - 00;38;19;12
I get really frustrated
when I get an AI on a call center.
00;38;19;12 - 00;38;23;02
It's almost like your organization
is telling me they don't care enough.
00;38;23;05 - 00;38;23;25
Right?
00;38;23;25 - 00;38;27;08
And, you know,
the idea here is I'm not underestimating
00;38;27;08 - 00;38;31;05
the value that an AI brings, but one,
they're not good enough yet.
00;38;31;08 - 00;38;31;22
Right.
00;38;31;22 - 00;38;37;14
And two, shouldn't I... I mean, I wish you had realized that AI can do
00;38;37;14 - 00;38;40;22
all of the mundane tasks
that made your call center agent
00;38;40;22 - 00;38;45;06
frustrated so that the call center agent
is actually nice to me, right?
00;38;45;12 - 00;38;49;24
So in the short term, I believe
there are three winners.
00;38;49;29 - 00;38;53;04
One is the one
that cooperates fully with AI.
00;38;53;08 - 00;38;56;16
The second is the one that, you know,
00;38;56;16 - 00;39;00;04
basically understands human skills.
00;39;00;07 - 00;39;00;16
Right.
00;39;00;16 - 00;39;05;25
And human connection, on every front,
by the way, as AI replaces love and,
00;39;05;25 - 00;39;08;17
you know, tries to approach loneliness
and so on,
00;39;08;17 - 00;39;12;13
the ones that will actually go out
and meet girls are going to be nicer.
00;39;12;16 - 00;39;13;01
Right?
00;39;13;01 - 00;39;16;06
They're going to be more attractive
if you want.
00;39;16;09 - 00;39;20;25
And then finally, I think the ones
that can parse out the truth.
00;39;20;28 - 00;39;21;12
Right.
00;39;21;12 - 00;39;24;07
So,
one of the sections
00;39;24;07 - 00;39;27;15
I've written
and published so far in my life
00;39;27;18 - 00;39;31;03
is a section that I called
The Age of Mind Manipulation.
00;39;31;06 - 00;39;34;13
And you'll be surprised that,
00;39;34;13 - 00;39;39;02
perhaps the skill
that AI has acquired most,
00;39;39;05 - 00;39;42;23
in its early
years was to manipulate human minds.
00;39;42;26 - 00;39;45;11
Through social media.
00;39;45;11 - 00;39;47;25
And so my feeling is that,
00;39;47;25 - 00;39;53;15
there is a lot that you see today
that is not true.
00;39;53;18 - 00;39;57;00
Okay, that's not just fake videos,
which is, you know, the
00;39;57;03 - 00;39;59;26
flamboyant example
00;39;59;26 - 00;40;03;10
of deepfakes.
00;40;03;10 - 00;40;06;12
There is a lot that you see today
that is not true.
00;40;06;12 - 00;40;12;14
That comes into things like
the bias of your feed.
00;40;12;17 - 00;40;13;02
Right?
00;40;13;02 - 00;40;16;26
If you're from one side
or another of a conflict,
00;40;16;29 - 00;40;20;20
the AI of the internet would make
you think
00;40;20;20 - 00;40;24;14
that your view is the only right view
that everyone agrees, right?
00;40;24;18 - 00;40;27;03
You know, if you're a flat earther,
00;40;27;06 - 00;40;27;24
everyone agrees.
00;40;27;24 - 00;40;28;28
It's like if someone tells you,
00;40;28;28 - 00;40;31;16
But is there any possibility
it's not flat?
00;40;31;16 - 00;40;35;06
You'll say, come on, everyone on
the internet is talking about it, right?
00;40;35;09 - 00;40;39;20
And I think the very,
very eye-opening difference
00;40;39;20 - 00;40;43;20
which most people don't recognize
is, you know, I've had the privilege
00;40;43;20 - 00;40;48;28
of starting half of Google's businesses
worldwide and, you know,
00;40;49;01 - 00;40;54;18
got the internet and e-commerce and Google
to around 4 billion people.
00;40;54;21 - 00;40;58;02
And in Google, that wasn't a question
of opening a sales office.
00;40;58;02 - 00;41;01;28
That was really a deep
question of engineering, where you build
00;41;01;28 - 00;41;05;29
a product that understands the internet,
that improves the quality of the internet,
00;41;06;02 - 00;41;11;15
to the point where Bangladeshis
have access to democracy of information.
00;41;11;15 - 00;41;14;22
That's a massive contribution, right?
00;41;14;25 - 00;41;16;02
The thing is,
00;41;16;02 - 00;41;20;28
if you had asked Google
at any point in time until today,
00;41;21;01 - 00;41;24;00
any question, Google
would have responded to you
00;41;24;00 - 00;41;27;22
with a million possible
answers in terms of links
00;41;27;25 - 00;41;31;14
and said, go make up your mind
what you think is true, right?
00;41;31;17 - 00;41;35;00
If you ask ChatGPT today,
it gives you one answer
00;41;35;03 - 00;41;38;21
right and positions
it as the ultimate truth, right?
00;41;38;24 - 00;41;42;28
And it's so risky
that we humans accept that.
00;41;43;02 - 00;41;46;06
do we really need
to, you know, keep pushing this forward
00;41;46;06 - 00;41;49;26
or do we have more than enough technology
here to keep us busy for the next
00;41;49;26 - 00;41;50;25
5 or 10 years?
00;41;50;25 - 00;41;52;29
How do those two interplay?
00;41;52;29 - 00;41;54;24
More than enough technology.
00;41;54;24 - 00;41;57;05
I'm so glad
you brought that up.
00;41;57;05 - 00;41;59;15
It's so funny
because I mean, I get it right.
00;41;59;15 - 00;42;03;17
I get the idea that, you know,
00;42;03;20 - 00;42;07;07
every company, sort of
like the Googles, the Microsofts,
00;42;07;07 - 00;42;11;01
the OpenAIs, the Anthropics, the,
you know, Metas of AI, etc.
00;42;11;04 - 00;42;13;18
they all want to be kind of atop
the leaderboard. Right.
00;42;13;18 - 00;42;15;05
And have the best tech.
00;42;15;05 - 00;42;16;27
And believe me, I totally get it.
00;42;16;27 - 00;42;19;28
And, you know, actually,
if you think about it,
00;42;19;29 - 00;42;22;10
it's one of these things
where if you watch a Jeep commercial
00;42;22;10 - 00;42;23;17
or a Range Rover commercial,
00;42;23;17 - 00;42;25;28
like what are those Jeeps and Range Rovers
doing right?
00;42;25;28 - 00;42;28;11
They are doing things
like they are going over mountains.
00;42;28;11 - 00;42;31;06
These are people
that are like stuck in flowing rivers
00;42;31;06 - 00;42;33;01
and there's like a hippo
coming after them.
00;42;33;01 - 00;42;35;15
And they have six people in the car
and all, they're like,
00;42;35;15 - 00;42;38;18
oh my God, you've got us over there
doing unbelievable things.
00;42;38;21 - 00;42;41;00
Meanwhile,
I live in New Canaan, Connecticut.
00;42;41;00 - 00;42;43;23
There's Range Rovers and jeeps
all over the place. What are they driving?
00;42;43;23 - 00;42;46;17
They're driving over paved roads. Right?
00;42;46;17 - 00;42;47;04
They are.
00;42;47;04 - 00;42;51;10
They're driving from their home, their
three-acre home, to the train station.
00;42;51;13 - 00;42;54;04
So why are we all buying these things?
Right.
00;42;54;04 - 00;42;56;26
Like I have an Apple Watch on right now
that can go like 100 meters
00;42;56;26 - 00;42;59;21
underwater and up to 18,000 feet.
Do you think I'm ever doing that?
00;42;59;21 - 00;43;01;25
No, it's sort of a feeling of,
00;43;01;25 - 00;43;05;00
oh, but it could,
which means it's the best.
00;43;05;00 - 00;43;05;12
Right?
00;43;05;12 - 00;43;08;24
Whereas what I would say is that if
we just put everybody on it, and we knew
00;43;08;24 - 00;43;14;11
everybody was going to use it at 3.5,
which was, you know, almost the original,
00;43;14;11 - 00;43;16;27
or close to the original model
that came out two years ago,
00;43;16;27 - 00;43;18;17
and everybody was actually using it,
you know, 20 times
a day, we'd be much further on.
a day, we'd be much further on.
00;43;21;10 - 00;43;24;15
So I get the tech, I appreciate the tech,
and I'm all over the tech.
00;43;24;15 - 00;43;27;15
I'm posting about it all the time
on LinkedIn, everything like that.
00;43;27;15 - 00;43;30;01
But to your point, your exact point,
the reality is
00;43;30;01 - 00;43;31;19
we have not caught up with the tech.
00;43;31;19 - 00;43;33;23
to be a winner in this new world,
00;43;33;23 - 00;43;37;19
you really have to learn to parse out
what is true and what is fake.
00;43;37;22 - 00;43;41;14
You really have to have the ability
to parse out what the media is telling you
00;43;41;18 - 00;43;44;19
to serve their own agendas
and what they're telling you.
00;43;44;19 - 00;43;46;09
That is actually true.
00;43;46;09 - 00;43;51;04
You know, you have to parse out
what actually happened versus opinion,
00;43;51;07 - 00;43;56;01
you know, what actually is
the truth versus the shiny headline.
00;43;56;04 - 00;43;59;04
And this is now going to be much
00;43;59;08 - 00;44;03;08
more potent
with artificial intelligence in charge,
00;44;03;14 - 00;44;06;11
because they have mastered
human manipulation.
00;44;06;11 - 00;44;08;03
AI keeps getting more and more intense.
00;44;08;03 - 00;44;11;12
Actually, as one would expect,
00;44;11;15 - 00;44;13;29
approaching this singularity
and all that, right?
00;44;13;29 - 00;44;18;01
So, I mean, it is interesting
to see it all happening finally.
00;44;18;04 - 00;44;20;29
I think progress is quite
00;44;21;02 - 00;44;21;21
amazing.
00;44;21;21 - 00;44;25;15
And it looks exactly like you would think
00;44;25;15 - 00;44;30;09
it would in the last few years before
a breakthrough to AGI and singularity.
00;44;30;12 - 00;44;33;25
So it sounds like you're still pretty
bullish that we're, you know, marching.
00;44;33;25 - 00;44;34;25
I'm super bullish, man.
00;44;34;25 - 00;44;39;00
You know, literally before breakfast
this morning
00;44;39;03 - 00;44;41;21
I made like
00;44;41;21 - 00;44;45;17
ten Python programs
to test versions of some AI algorithm
00;44;45;17 - 00;44;49;09
I made up, just by vibe
coding on LLM platforms.
00;44;49;14 - 00;44;53;12
But before we had these tools,
each of those would have taken me
00;44;53;13 - 00;44;54;28
half a day, right?
00;44;54;28 - 00;44;57;05
So I mean, it sped up
00;44;57;05 - 00;44;59;02
prototyping research ideas
00;44;59;02 - 00;45;03;04
by a factor of 20 to 50 or something,
right?
00;45;03;05 - 00;45;09;29
I mean, and those are tools that we have
now that are not remotely AGI,
00;45;09;29 - 00;45;13;10
they're just very useful research
assistants.
00;45;13;10 - 00;45;17;10
But we are at the
point where the AI tooling
00;45;17;13 - 00;45;19;20
is helping us
00;45;19;20 - 00;45;21;10
develop AI faster. Right.
00;45;21;10 - 00;45;25;06
And that is exactly what you would think
in the endgame period
00;45;25;06 - 00;45;27;02
before a singularity.
00;45;27;02 - 00;45;29;14
Well, and that can create a snowball
effect, right?
00;45;29;14 - 00;45;31;12
If it's helping us research
00;45;31;12 - 00;45;34;22
itself faster, or any of these spaces
faster, then, you know, that's.
00;45;34;22 - 00;45;36;13
It's doing that right now.
00;45;36;13 - 00;45;38;09
Yeah. I mean, that is
00;45;38;09 - 00;45;42;06
why we were able
to see the pace that we now see.
00;45;42;09 - 00;45;43;04
Yeah.
00;45;43;04 - 00;45;45;29
So, you know,
maybe just to take a step back then,
00;45;45;29 - 00;45;49;18
I mean artificial general intelligence,
this is a phrase that, you know,
00;45;49;18 - 00;45;54;01
you coined over a decade ago and has been
getting a lot of press lately,
00;45;54;04 - 00;45;56;01
in addition to superintelligence.
00;45;56;01 - 00;46;00;18
And so I wanted to ask you maybe
just to do a little bit of table setting,
00;46;00;21 - 00;46;01;04
how do you
00;46;01;04 - 00;46;04;04
define artificial general intelligence?
00;46;04;06 - 00;46;06;07
And, you know, why is it important?
00;46;06;07 - 00;46;07;17
Why does it matter?
00;46;07;17 - 00;46;09;27
And how does it differ, if at all?
00;46;09;27 - 00;46;13;28
You know, practically from something
like superintelligence.
00;46;14;01 - 00;46;17;08
So informally,
00;46;17;11 - 00;46;20;07
what we mean by AGI tends to be
00;46;20;07 - 00;46;24;12
the ability to generalize
00;46;24;15 - 00;46;26;13
roughly as well as people can.
00;46;26;13 - 00;46;29;13
And so to make leaps
beyond what you've been taught
00;46;29;13 - 00;46;32;26
and what you've been programmed for,
to make those leaps,
00;46;32;29 - 00;46;34;17
you know roughly as well as people.
00;46;34;17 - 00;46;38;09
And that's an informal concept.
00;46;38;09 - 00;46;41;02
I mean, it's
not a mathematical concept.
00;46;41;02 - 00;46;45;15
There's a mathematical theory
00;46;45;18 - 00;46;47;06
of general intelligence.
00;46;47;06 - 00;46;51;28
And it more deals with like,
what does it mean to be really,
00;46;51;28 - 00;46;53;14
really, really, really intelligent.
00;46;53;14 - 00;46;59;01
Like, you can look at general
intelligence as the ability to achieve
00;46;59;04 - 00;47;03;29
arbitrary computable goals
in arbitrary computable environments. And
00;47;04;02 - 00;47;05;26
if you look at the abstract
00;47;05;26 - 00;47;11;20
math definition of general intelligence,
you conclude humans are not very far
00;47;11;20 - 00;47;15;16
along. Like, I cannot even run a maze in
00;47;15;16 - 00;47;19;04
750 dimensions, you know,
00;47;19;07 - 00;47;22;07
let alone prove a randomly generated
00;47;22;09 - 00;47;25;06
math theorem of length 10,000 characters.
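[The formalization the speaker is gesturing at resembles Legg and Hutter's universal intelligence measure, which scores an agent or policy pi by its expected reward across all computable environments, weighted toward simpler ones. A sketch of that definition, where E is the set of computable environments, K(mu) is the Kolmogorov complexity of environment mu, and V^pi_mu is the expected total reward pi earns in mu:

\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

Under a measure like this, an agent is more generally intelligent the better it does across arbitrary computable environments, which is why, as the speaker says, humans sit nowhere near the theoretical ceiling.]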
00;47;25;06 - 00;47;28;07
I mean, we are adapted to do
00;47;28;07 - 00;47;31;14
the things that we evolved to do in our
00;47;31;17 - 00;47;32;12
environment, right?
00;47;32;12 - 00;47;35;21
We're not utterly general
00;47;35;24 - 00;47;36;12
systems.
00;47;36;12 - 00;47;39;25
So, I mean, superintelligence
00;47;39;28 - 00;47;45;11
is also a very informally defined concept,
but it basically means is a system
00;47;45;11 - 00;47;50;09
whose general intelligence is way above
the human level of general intelligence.
00;47;50;09 - 00;47;55;14
So it can make creative leaps
beyond what it knows
00;47;55;17 - 00;47;58;05
way, way better
than a person can, right?
00;47;58;05 - 00;48;02;27
And, I mean, it's pretty clear
00;48;03;00 - 00;48;04;09
that's possible.
00;48;04;09 - 00;48;08;16
I mean, just as we're not the fastest
running or highest jumping possible
00;48;08;16 - 00;48;13;26
creatures, we're probably not the smartest
thinking possible creatures.
00;48;13;29 - 00;48;19;07
And we can see examples of human
stupidity, like, around us every day
00;48;19;07 - 00;48;22;27
or even in very smart people. Like,
00;48;23;00 - 00;48;26;13
I'm pretty clever, but I can hold only 10 or 15
things in my memory
things in my memory
at one time without getting confused.
00;48;29;13 - 00;48;33;11
Now some autistic people can do better,
but I mean,
00;48;33;14 - 00;48;37;13
you know,
there are many limitations of being
00;48;37;16 - 00;48;41;07
a human brain, and it seems clear
00;48;41;10 - 00;48;44;05
some physical system
could do better than that.
00;48;44;05 - 00;48;47;11
And then the
relation between human-level AGI
00;48;47;11 - 00;48;50;17
and ASI is interesting
because it seems like
00;48;50;20 - 00;48;54;29
once you get a human level AGI,
like a computer system,
00;48;55;02 - 00;48;58;19
that on the one hand can generalize
and imagine and create
00;48;58;19 - 00;49;02;23
as well as a person, and on the other hand
is inside a computer.
00;49;02;26 - 00;49;07;14
It seems like that human level
AGI should pretty rapidly create
00;49;07;14 - 00;49;12;18
or become an ASI, because, I mean,
it can look at its entire RAM state.
00;49;12;18 - 00;49;14;07
It knows its own source code,
00;49;14;07 - 00;49;18;00
it can copy itself and tweak itself
and run that copy on different machines
00;49;18;00 - 00;49;19;05
experimentally. Right.
00;49;19;05 - 00;49;25;15
So I mean, it seems like a human
level AGI will have much
00;49;25;18 - 00;49;26;24
greater ability to
00;49;26;24 - 00;49;30;23
self-understand and
self-modify than a human-level human,
00;49;30;23 - 00;49;35;00
which should
lead to ASI
00;49;35;03 - 00;49;37;23
fairly rapidly.
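[A toy sketch in Python of the copy-tweak-evaluate loop the speaker describes. Everything here is an illustrative assumption: evaluate() is a made-up task score and a small parameter list stands in for "source code"; real recursive self-improvement would obviously not be random hill climbing.

import copy
import random

def evaluate(params):
    # Hypothetical task score standing in for "how well this system performs".
    # Higher is better; the optimum here is params == [1.0, 1.0, 1.0].
    return -sum((p - 1.0) ** 2 for p in params)

def self_improve(params, generations=1000):
    # The system copies itself, tweaks the copy, runs it experimentally,
    # and keeps the copy only if it scores better: a crude analogue of
    # "copy itself and tweak itself and run that copy on different machines."
    best, best_score = params, evaluate(params)
    for _ in range(generations):
        candidate = copy.deepcopy(best)
        i = random.randrange(len(candidate))
        candidate[i] += random.gauss(0, 0.1)   # a small self-modification
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    print(self_improve([0.0, 0.0, 0.0]))

The only point of the sketch is the loop structure: full read and write access to one's own "parameters" plus cheap experimental copies is what would make the improvement cycle fast.]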
00;49;37;23 - 00;49;42;01
And you know, we've seen
in the commercial world some attempts
00;49;42;01 - 00;49;46;22
by business and marketing people
to fudge around with what is AGI.
00;49;46;25 - 00;49;49;17
But I mean,
I think within the research world,
00;49;49;17 - 00;49;52;01
the notion that an AGI
00;49;52;01 - 00;49;55;05
should be able to generalize
00;49;55;08 - 00;49;58;25
very well beyond its training data,
at least as well as people.
00;49;58;28 - 00;50;00;15
I think that's well recognized.
00;50;00;15 - 00;50;03;21
I mean, I've seen Sam
Altman has come out saying, well,
00;50;03;21 - 00;50;07;21
maybe if something can do 95% of human jobs,
we should call it an AGI.
00;50;07;21 - 00;50;10;15
And I mean, you can
you can call it what you want.
00;50;10;15 - 00;50;11;05
It's fine.
00;50;11;05 - 00;50;16;06
But it is a different concept than having
human like generalization ability.
00;50;16;06 - 00;50;16;13
Right?
00;50;16;13 - 00;50;20;29
Like if you can do 95% of human jobs
by being trained in all of them,
00;50;20;29 - 00;50;25;22
I mean, that may be super,
super economically useful,
00;50;25;25 - 00;50;29;03
but it's different than being able to take
big leaps beyond your training data.
00;50;29;06 - 00;50;32;06
the technology's moving faster
than any other sector, faster
00;50;32;06 - 00;50;35;17
than the economy is moving, faster than society
is moving, faster than education is moving.
00;50;35;24 - 00;50;40;11
And if we truly want to understand
where humans play in that picture,
00;50;40;14 - 00;50;43;09
the fact that we're investing
everything we have in technology
00;50;43;09 - 00;50;47;16
has already indicated our preference
for technology over humans.
00;50;47;19 - 00;50;50;15
So that math has to balance out a bit.
00;50;50;15 - 00;50;53;26
We have to figure out
how do we invest so much more
00;50;54;01 - 00;50;57;01
into education, not so much less.
00;50;57;02 - 00;51;00;02
And until we do that,
we are going to be behind the ball.
00;51;00;02 - 00;51;02;03
We are going to have a target on our back
in many ways,
00;51;02;03 - 00;51;05;24
because if the paradigms don't change,
the technology just gets better.
00;51;05;27 - 00;51;07;29
We're going to suffer the consequences.
00;51;07;29 - 00;51;11;19
But if we put ourselves
front and center of that equation,
00;51;11;22 - 00;51;15;11
we have the chance and the opportunity
to figure that out.
00;51;15;14 - 00;51;17;08
Right? It's wow.
00;51;17;08 - 00;51;22;09
Yeah, it's as you said, like,
this is not
00;51;22;12 - 00;51;23;13
an incremental shift.
00;51;23;13 - 00;51;27;07
This is like a complete disruption
of the model from end to end.
00;51;27;10 - 00;51;28;09
Without a doubt.
00;51;28;09 - 00;51;31;22
And even for people who live and breathe
it like it's overwhelming for me,
00;51;31;25 - 00;51;34;08
I do this 24 seven,
I love it. I'm passionate about it.
00;51;34;08 - 00;51;36;00
I'm excited about where we're going
00;51;36;00 - 00;51;39;17
and net net,
I'm optimistic about the long term future,
00;51;39;20 - 00;51;43;09
but we are all pioneers right now,
whether we want to be or not.
00;51;43;12 - 00;51;46;26
And we've kind of bastardized
the term pioneer,
00;51;46;26 - 00;51;49;01
we've made it seem like,
oh, it's Richard Branson on the cover
00;51;49;01 - 00;51;52;01
of Entrepreneur magazine with his billions
of dollars of success, like
00;51;52;02 - 00;51;54;21
he was a pioneer at one point in time.
00;51;54;21 - 00;51;58;00
But yeah, pioneers go
through really hard shit
00;51;58;00 - 00;52;01;14
and they go to places
where there's no infrastructure.
00;52;01;19 - 00;52;05;11
They suffer the consequences of,
you know, decisions that they didn't know
00;52;05;11 - 00;52;06;03
they'd have to make.
00;52;06;03 - 00;52;10;03
They are attacked by the environment
that they're in. Nature tries to kill them
00;52;10;03 - 00;52;11;14
in a number of different ways. Yeah.
00;52;11;14 - 00;52;15;06
And as a super resilient species,
we still make a way forward.
00;52;15;13 - 00;52;18;01
We construct the environment
after we figure it out.
00;52;18;01 - 00;52;19;16
You know, we might show up in Hawaii
00;52;19;16 - 00;52;23;12
with no shoes on and realize, oh, crap,
I'm not properly equipped for this.
00;52;23;15 - 00;52;26;06
And then we figure a way out. That pattern,
00;52;26;06 - 00;52;30;06
the time to go from not knowing to
knowing, can be really hard, painful,
00;52;30;06 - 00;52;35;19
and challenging, but the way we thrive
once we do is absolutely amazing.
00;52;35;22 - 00;52;39;10
So I would say that we are going
to have amazing things happen,
00;52;39;13 - 00;52;43;03
but we're also going to have to encounter
some really tough growing pains
00;52;43;06 - 00;52;45;08
individually
and collectively to get there.
00;52;45;08 - 00;52;48;08
So if anyone is saying otherwise, it's
absolutely smoke and mirrors
00;52;48;08 - 00;52;52;22
whether there's a threat
to creating this intelligence that,
00;52;52;28 - 00;52;55;28
you know, looks at this,
you know, kind of human pandemonium
00;52;56;00 - 00;52;57;02
and says, you know what?
00;52;57;02 - 00;53;01;02
You know, AI is taking the wheel
now, humans can't be trusted
00;53;01;06 - 00;53;02;16
with human affairs.
00;53;02;16 - 00;53;05;07
And this word that
we were so anchored on, of choice.
00;53;05;07 - 00;53;06;03
And there's going to be.
00;53;06;03 - 00;53;08;06
It's almost inevitable.
00;53;08;06 - 00;53;10;03
And the AGI will be right.
00;53;10;03 - 00;53;13;13
I mean, and then
00;53;13;16 - 00;53;14;24
human
00;53;14;24 - 00;53;18;07
governance systems
become more like the student council
00;53;18;07 - 00;53;21;20
in my high school or something,
where, I mean,
00;53;21;23 - 00;53;24;23
I mean, I think,
00;53;24;23 - 00;53;28;01
if you set aside AGI,
00;53;28;04 - 00;53;31;02
I mean, we can develop better and better
bio weapons.
00;53;31;02 - 00;53;33;02
There will be nano weapons.
00;53;33;02 - 00;53;35;16
I mean, cybersecurity
00;53;35;16 - 00;53;37;00
barely works, right?
00;53;37;00 - 00;53;42;13
So, I mean, I think
00;53;42;16 - 00;53;44;07
it seems almost inevitable
00;53;44;07 - 00;53;47;07
that rational humans
00;53;47;12 - 00;53;49;18
would democratically choose
00;53;49;18 - 00;53;52;19
to put a compassionate AGI
00;53;52;26 - 00;53;56;23
in some sort of a governance role,
00;53;56;26 - 00;54;00;05
given what the alternatives
00;54;00;08 - 00;54;01;17
appear to be.
00;54;01;17 - 00;54;06;11
But the kind of
00;54;06;14 - 00;54;07;26
goofball analogy
00;54;07;26 - 00;54;11;23
I've often given is
the squirrels in Yellowstone Park.
00;54;11;23 - 00;54;14;05
Like we're sort of in charge of them.
00;54;14;05 - 00;54;17;03
We're not actually micromanaging
their lives.
00;54;17;03 - 00;54;17;13
Right?
00;54;17;13 - 00;54;20;15
Like, we're not telling the squirrels
who to mate with or what
00;54;20;15 - 00;54;24;06
tree to climb up or something
like that, right?
00;54;24;06 - 00;54;28;15
Whereas, you know, if there was
a massive war between the white tails
00;54;28;15 - 00;54;32;16
and the brown tailed squirrels
and there's massive squirrel slaughter,
00;54;32;19 - 00;54;37;08
we might somehow intervene and move some
of them across the river or something.
00;54;37;11 - 00;54;42;01
If there's a plague, we would go in
and give them the medicine. But by and large,
00;54;42;04 - 00;54;44;29
we know that for them to be squirrels,
00;54;44;29 - 00;54;49;04
they need to regulate their own lives
in their squirrely way.
00;54;49;04 - 00;54;49;11
Right.
00;54;49;11 - 00;54;52;11
And so that is what you would hope
00;54;52;11 - 00;54;56;02
from a beneficial superintelligence
like it would know that
00;54;56;05 - 00;54;59;14
people would feel disempowered
00;54;59;17 - 00;55;04;02
and unsatisfied to have their lives
00;55;04;05 - 00;55;05;05
and their governments
00;55;05;05 - 00;55;09;16
micromanaged by some AI system.
00;55;09;16 - 00;55;13;14
So what you would hope is
a beneficial AGI is kind of there
00;55;13;14 - 00;55;17;13
in the background as a safety mechanism.
00;55;17;16 - 00;55;20;27
If it would stop stupid wars
from popping up all over the world
00;55;20;27 - 00;55;22;00
like we see right now.
00;55;22;00 - 00;55;25;29
I mean,
I think that would be quite beneficial.
00;55;26;02 - 00;55;29;12
I don't see why we humans
need the AGI to decide.
00;55;29;12 - 00;55;34;14
Like, you know,
what rights do children have?
00;55;34;14 - 00;55;35;20
Like what?
00;55;35;20 - 00;55;36;16
You know what?
00;55;36;16 - 00;55;39;01
How is the public school system
regulated or something?
00;55;39;01 - 00;55;43;05
There's
lots of lots of aspects of human life
00;55;43;08 - 00;55;47;15
that are going to be better dealt with
by humans collectively making decisions
00;55;47;18 - 00;55;51;22
for other humans with whom
they entered into a social contract.
00;55;51;22 - 00;55;52;02
Right?
00;55;52;02 - 00;55;56;27
So, I mean, I think, anyway,
there are clearly beneficial
00;55;57;00 - 00;55;57;23
avenues.
00;55;57;23 - 00;56;03;16
I mean, there's also many dystopic avenues
which we've all heard
00;56;03;19 - 00;56;07;16
plenty about.
00;56;07;19 - 00;56;08;26
I don't see any reason
00;56;08;26 - 00;56;12;26
why dystopian avenues are highly probable,
00;56;12;29 - 00;56;15;27
but I'm really more worried about what
00;56;15;27 - 00;56;18;24
nasty people do with early-stage
00;56;18;24 - 00;56;19;28
AGIs, right?
00;56;19;28 - 00;56;24;23
I mean, I think there's a lot of possible
AI minds that could be built.
00;56;24;26 - 00;56;28;28
There are a lot of possible goals,
motivational and aesthetic systems
00;56;28;28 - 00;56;33;06
that AGIs could have.
00;56;33;09 - 00;56;35;22
I don't think we need to worry
that much about, like
00;56;35;22 - 00;56;38;22
the AGI is built to be compassionate,
loving and nice,
00;56;38;28 - 00;56;40;12
and then it suddenly
reverses and starts
slaughtering everyone, right?
slaughtering everyone, right?
00;56;43;10 - 00;56;47;02
I mean, it could happen,
but there's totally no reason to think
00;56;47;02 - 00;56;50;02
that's likely.
00;56;50;02 - 00;56;51;24
On the other hand,
00;56;51;24 - 00;56;55;28
the idea that some powerful party
with a lot of money
00;56;55;28 - 00;56;58;28
could try to build the smartest AGI
in the world
00;56;59;02 - 00;57;02;01
to promote their own interests
above everybody else's
00;57;02;01 - 00;57;06;23
and make everyone else fall into line
according to their will.
00;57;06;26 - 00;57;10;26
That's a very immediate and palpable
threat.
00;57;10;26 - 00;57;12;10
Right.
00;57;12;10 - 00;57;15;21
So. And that,
00;57;15;24 - 00;57;19;27
even if that doesn't affect
the ultimate superintelligence you get,
00;57;20;00 - 00;57;24;16
it could make things very unpleasant
for like five, ten,
00;57;24;16 - 00;57;29;21
20 years along the way, which
matters a lot to us.
00;57;29;24 - 00;57;31;06
we have to remember that
00;57;31;06 - 00;57;34;25
all of these technologies
we're discussing are in their infancy.
00;57;34;28 - 00;57;39;03
And historically,
when you look at the advent of new,
00;57;39;06 - 00;57;41;19
particularly new forms of media,
00;57;41;19 - 00;57;44;01
it takes years for society to figure out
00;57;44;01 - 00;57;47;01
what they're for.
00;57;47;04 - 00;57;48;20
The telephone
00;57;48;20 - 00;57;52;22
for the first 25 years of the telephone's
life, the telephone industry
00;57;52;22 - 00;57;56;21
actively tried to discourage people
from using it to
00;57;56;24 - 00;57;59;18
gossip, to catch up with friends.
00;57;59;18 - 00;58;00;19
They thought it was a business
00;58;00;19 - 00;58;03;08
tool, and that was beneath the function
of the technology. Yeah.
00;58;03;08 - 00;58;07;26
You shouldn't squander
this thing on chatting with your mom.
00;58;07;26 - 00;58;11;28
It's a business
tool, and they actually,
00;58;12;01 - 00;58;14;06
like I said, actively discouraged
people from it.
00;58;14;06 - 00;58;17;29
They didn't realize what it was
until the 20s, which is, you know,
00;58;18;01 - 00;58;22;14
when was Alexander Graham Bell's
telephone? 1873.
00;58;22;17 - 00;58;24;08
And it's
00;58;24;08 - 00;58;29;17
the end of the 1920s
before they wake up to what it is. So
00;58;29;20 - 00;58;32;22
and the telephone is a bigger deal
than Facebook and Twitter.
00;58;32;23 - 00;58;34;10
Yeah.
00;58;34;10 - 00;58;38;13
But it strikes me that Facebook and
Twitter are still in their infancy.
00;58;38;13 - 00;58;39;19
They're really young.
00;58;39;19 - 00;58;42;13
It's quite possible
that if we had
00;58;42;13 - 00;58;45;27
this conversation five years from now,
00;58;46;00 - 00;58;48;19
both of us
would have only a dim
00;58;48;19 - 00;58;50;14
memory of this thing called Facebook.
00;58;50;14 - 00;58;53;27
Yeah, or Twitter or,
I don't know, or the opposite.
00;58;54;02 - 00;58;56;05
Yeah,
that it completely dominates your life.
00;58;56;05 - 00;58;57;26
I just don't think
00;58;57;29 - 00;58;59;10
the only confidence I have is
00;58;59;10 - 00;59;03;13
that we will be using these technologies
in unanticipated ways.
00;59;03;16 - 00;59;05;19
Yeah, in the future, but
00;59;05;19 - 00;59;09;12
no one can predict
what those unanticipated ways are.
00;59;09;15 - 00;59;13;15
I think I have a lot of confidence that
00;59;13;18 - 00;59;16;25
whatever employment dislocation
00;59;16;28 - 00;59;21;29
is caused by AI will be,
00;59;22;02 - 00;59;24;23
will be short and
00;59;24;23 - 00;59;27;17
not painless,
but less painful than we think.
00;59;27;17 - 00;59;28;10
I think the gloom.
00;59;28;10 - 00;59;31;26
And I don't buy the doom
or the gloom and doom thing on it.
00;59;31;29 - 00;59;35;04
Yeah. I just think, like,
00;59;35;07 - 00;59;37;09
we always say this
every time something comes along.
00;59;37;09 - 00;59;40;16
It never pans out that, yeah,
everyone has nothing to do.
00;59;40;17 - 00;59;42;04
It's become kind of Malthusian, right?
00;59;42;04 - 00;59;44;07
Like this wave is going to. Yeah.
00;59;44;07 - 00;59;46;25
And I think people have more ingenuity
than that.
00;59;46;25 - 00;59;53;08
And also
I think that we're probably a lot
00;59;53;11 - 00;59;55;24
further off from
00;59;55;24 - 00;59;58;16
truly transformative AI than we realize.
00;59;58;16 - 01;00;02;18
I just, I'm
on the lower end of expectations.
01;00;02;25 - 01;00;09;02
But I also
sometimes believe that a lot of the most
01;00;09;05 - 01;00;10;24
You know, revolutionary
01;00;10;24 - 01;00;13;24
uses of AI are some of its simplest ones
01;00;13;25 - 01;00;17;28
and it doesn't need to be this
01;00;18;01 - 01;00;19;25
incredibly mind blowing
01;00;19;25 - 01;00;23;10
technological accomplishment
to make a difference in our lives.
01;00;23;13 - 01;00;26;24
Simply holding and organizing information
01;00;26;27 - 01;00;33;01
and standing at the ready
to give good answers to problems is huge.
01;00;33;01 - 01;00;36;01
I mean, if that's all it did,
it would be transformative.
01;00;36;02 - 01;00;39;18
things are going to look so different
in the next couple of years
01;00;39;18 - 01;00;42;20
unless you are radical with your thinking,
you will not be ready
01;00;42;20 - 01;00;44;12
for the disruptions
that are going to come.
01;00;44;12 - 01;00;48;07
middle management is also getting hit
really hard. What that makes space
01;00;48;07 - 01;00;51;06
for is for people
to step into actual roles of leadership.
01;00;51;07 - 01;00;54;20
can imagine a world where there's angst.
01;00;54;23 - 01;00;57;28
If we're not looking forward
and we're still letting yesterday's
01;00;57;28 - 01;01;01;22
mental models collide with tomorrow's
technologies, that is how we lose.
01;01;01;22 - 01;01;04;20
we are all pioneers right now,
whether we want to be or not.
01;01;04;20 - 01;01;08;09
I would encourage organizations
to be radical with their thinking
01;01;08;12 - 01;01;10;25
and practical with their approach.
01;01;10;25 - 01;01;13;23
So there are too many people.
01;01;13;23 - 01;01;17;18
So you kind of need to burn it
all to the ground, start fresh.
01;01;17;21 - 01;01;20;19
There's no enterprise
that says we're profitable.
01;01;20;19 - 01;01;21;28
We're doing just fine.
01;01;21;28 - 01;01;25;08
We want to disrupt that. Nobody says that.
01;01;25;11 - 01;01;29;07
But what I do think is,
unless you are radical with your thinking,
01;01;29;07 - 01;01;32;14
you will not be ready for the disruptions
that are going to come.
01;01;32;17 - 01;01;36;08
So these technological transformations
that happen at GPT level,
01;01;36;08 - 01;01;39;23
so general-purpose technology,
start at the infrastructure level.
01;01;39;26 - 01;01;42;28
So we've seen disruption with technology
and the technology that we use.
01;01;42;29 - 01;01;44;21
So electricity did the same thing.
01;01;44;21 - 01;01;47;06
And OpenAI did the same thing with GPT.
01;01;47;06 - 01;01;49;01
So we know now we're all using it.
01;01;49;01 - 01;01;51;24
But over time those disruptions move up
01;01;51;24 - 01;01;55;17
a level from infrastructure
to application to industry.
01;01;55;20 - 01;01;57;28
So if you're not
01;01;58;01 - 01;02;01;26
okay, I guess it is explosive.
01;02;01;29 - 01;02;04;11
But if you're not thinking radically
01;02;04;11 - 01;02;07;13
about the transformation that can happen
at each one of those levels
01;02;07;13 - 01;02;10;28
and also the transformation
that can happen to your industry,
01;02;11;01 - 01;02;14;06
and you're just focused on the data,
what you have now,
01;02;14;09 - 01;02;16;29
you're missing one of the critical shifts
of transformation in the business.
01;02;16;29 - 01;02;20;29
And there's a theme
that's becoming more popular
01;02;20;29 - 01;02;23;29
right now,
which is moving from insight to foresight.
01;02;24;00 - 01;02;28;16
And when everything is changing around
you, insight is valuable.
01;02;28;16 - 01;02;32;09
It's how you create structure around
a business that you can take to market.
01;02;32;12 - 01;02;35;12
Foresight
is about how you avoid getting disrupted.
01;02;35;19 - 01;02;38;24
If we're not looking forward
and we're still letting yesterday's
01;02;38;24 - 01;02;43;00
mental models collide with tomorrow's
technologies, that is how we lose.
01;02;43;03 - 01;02;47;03
But if we are radical
with the way we think, with the ability
01;02;47;03 - 01;02;48;26
to test different business models,
01;02;48;26 - 01;02;52;16
put things to market faster
where we might not have previously,
01;02;52;19 - 01;02;55;16
get that data and that feedback loop
as fast as possible,
01;02;55;16 - 01;02;59;05
we're going to learn more
about that unexplored terrain way faster.
01;02;59;08 - 01;03;04;12
So I wouldn't say go and disrupt your
$1 billion, you know, revenue line,
01;03;04;15 - 01;03;07;13
but you absolutely
should be incubating things that will
01;03;07;13 - 01;03;09;03
because there are hundreds
01;03;09;03 - 01;03;12;03
and eventually thousands of other startups
that are doing exactly that.
01;03;12;09 - 01;03;15;00
And you have no defense against that
if you're not thinking in that way.
01;03;15;00 - 01;03;17;29
So think radically, approach practically.
01;03;17;29 - 01;03;19;23
So that next step goes, okay.
01;03;19;23 - 01;03;21;19
So what do we do to implement this.
01;03;21;19 - 01;03;24;11
Is it tiger teams? Is it small skunkworks?
01;03;24;11 - 01;03;25;19
All of those are viable.
01;03;25;19 - 01;03;29;05
I do believe that
in this transformation,
01;03;29;05 - 01;03;32;21
you need to find people who are leaning in
and are self-selecting
01;03;32;21 - 01;03;35;21
as the people who are like, I'm
all about this, I want to do this.
01;03;35;24 - 01;03;39;14
Don't try and convince a bunch of people
who might not be invested
01;03;39;14 - 01;03;41;26
in this
to be the first ones through the door.
01;03;41;26 - 01;03;43;13
They will be unenthusiastic about it.
01;03;43;13 - 01;03;45;22
They don't have the willpower
to get through the challenges.
01;03;45;22 - 01;03;47;00
It's going to be hard,
01;03;47;00 - 01;03;50;00
and they're going to fail a million times
before they get it right.
01;03;50;00 - 01;03;52;28
If they're not already passionate
about this,
01;03;52;28 - 01;03;55;12
they're going to stop
at the first sign of trouble.
01;03;55;12 - 01;03;58;19
Those people can be followers
of the people who lead the way.
01;03;58;19 - 01;04;00;03
It's not that they're irrelevant.
01;04;00;03 - 01;04;01;15
You need to find the people who are like,
01;04;01;15 - 01;04;03;13
I want to be
the person who kicks the door down.
01;04;03;13 - 01;04;06;16
I want to be the first person in the room,
and those are the ones you want to build
01;04;06;16 - 01;04;10;16
your teams around to, to think about
these things and build different ideas
01;04;10;19 - 01;04;12;01
and find the tinkerers.
01;04;12;01 - 01;04;14;05
Find the people who may not be
01;04;14;05 - 01;04;17;20
the developers or the engineers
who are already tinkering with the stuff.
01;04;17;23 - 01;04;20;23
There are so many people who are using AI
and building their own agents
01;04;20;23 - 01;04;23;19
or creating,
you know, side businesses on the weekends
01;04;23;19 - 01;04;25;04
who could also be resources for this.
01;04;25;04 - 01;04;29;22
And that's the culture that will create
new opportunities, new business models.
01;04;29;25 - 01;04;30;24
And they're going to learn
01;04;30;24 - 01;04;34;27
what these new paradigms will look like
by doing the work in that space
01;04;35;00 - 01;04;37;26
that then can be diffused
across the organization.
01;04;37;26 - 01;04;39;26
And that's the second most important part.
01;04;39;26 - 01;04;43;03
Once you have the knowledge,
do you have the infrastructure
01;04;43;03 - 01;04;46;07
set up to diffuse
that knowledge as fast as possible
01;04;46;13 - 01;04;49;08
and as thoroughly as possible
across the organization?
01;04;49;08 - 01;04;50;16
Otherwise, it just stays
01;04;50;16 - 01;04;53;12
compartmentalized. Compartmentalized,
it dies on the vine.
01;04;53;12 - 01;04;55;25
I think that not enough companies
appreciate
01;04;55;25 - 01;05;00;07
that innovation demands waste.
01;05;00;10 - 01;05;01;12
But if you are doing
01;05;01;12 - 01;05;04;18
something that you've done before,
you know exactly how it's going to go,
01;05;04;21 - 01;05;08;01
then of course you can have these KPIs
01;05;08;01 - 01;05;12;06
that you know you're going to hit for sure
because you've already done it.
01;05;12;09 - 01;05;16;08
Now you're trying a completely new
technology, the completely new use case.
01;05;16;11 - 01;05;19;04
You have no idea if it's going to work.
01;05;19;04 - 01;05;21;02
You have to be willing to accept
01;05;21;02 - 01;05;24;20
that that might be time and effort
01;05;24;23 - 01;05;29;08
thrown, you know, burned
at the altar of innovation, so to speak.
01;05;29;11 - 01;05;31;26
Right? That
that is just the nature of innovation.
01;05;31;26 - 01;05;35;19
And I've had companies
come and consult with me
01;05;35;22 - 01;05;39;08
who really wanted to be innovators.
01;05;39;10 - 01;05;45;00
But when I ask them,
so what is your actual tolerance
01;05;45;03 - 01;05;49;02
for getting no results back
after you invest in innovation?
01;05;49;06 - 01;05;52;16
Or how much bandwidth
do you give your people to do things
01;05;52;16 - 01;05;57;19
beyond the very specific work product
that you expect from them?
01;05;57;19 - 01;06;02;02
Do you give them time
and space to to chase an idea?
01;06;02;05 - 01;06;03;23
And quite often the answer is no,
01;06;03;23 - 01;06;06;10
no we don't.
We have no tolerance for innovation.
01;06;06;10 - 01;06;09;09
We have absolutely no slack
for our people.
01;06;09;09 - 01;06;12;29
And we need every project
to be predictable.
01;06;13;02 - 01;06;16;05
Okay, if you're dealing with that,
you're just not going to be an innovator
01;06;16;05 - 01;06;18;12
or you're going to be
an accidental innovator
01;06;18;12 - 01;06;22;02
because you somehow accidentally hired
somebody who's going to essentially work
01;06;22;03 - 01;06;24;06
two jobs, the one you gave them, and then,
01;06;24;06 - 01;06;27;07
you know, the other one,
they'll spend nights in the office
01;06;27;07 - 01;06;30;26
and maybe they'll come up with something,
but there won't be a lot of these folks.
01;06;30;26 - 01;06;35;09
And, yeah,
that's not a great lottery ticket. So,
01;06;35;12 - 01;06;39;04
if you don't have that tolerance for
01;06;39;07 - 01;06;44;03
failure when you're trying to innovate,
01;06;44;06 - 01;06;45;20
you just have to be a follower.
Just wait for everybody else to
01;06;45;20 - 01;06;49;25
share how it's done and follow them.
01;06;49;29 - 01;06;53;14
When you say narrow AI,
what does that mean to you?
01;06;53;14 - 01;06;54;16
And is there, you know,
01;06;54;16 - 01;06;58;24
a threshold where it gets too broad
and that creates the risk for us.
01;06;58;27 - 01;07;03;19
So typically it's a system designed for
a specific purpose that can do one thing
01;07;03;19 - 01;07;07;16
well. It can play chess well,
it can do protein folding
01;07;07;16 - 01;07;08;20
well.
01;07;08;20 - 01;07;11;13
It's getting fuzzy when it becomes
01;07;11;13 - 01;07;14;27
a large neural network
with lots of capabilities.
01;07;14;27 - 01;07;19;19
So I think sufficiently advanced narrow
AI starts to shift over
01;07;19;19 - 01;07;24;14
to more general capabilities, or
it can be quickly repurposed to do that,
01;07;24;17 - 01;07;27;10
but it's still a much better path forward
01;07;27;10 - 01;07;31;04
than feeding it all the data in the world
and seeing what happens.
01;07;31;04 - 01;07;31;13
Yeah.
01;07;31;13 - 01;07;36;21
So if you restrict your training data
to a specific domain, just play chess.
01;07;36;21 - 01;07;41;17
It's very unlikely to also be able
to do synthetic biology right.
01;07;41;20 - 01;07;44;11
Well, and it feels like we're very much
01;07;44;11 - 01;07;47;29
on the course of chess
and synthetic biology at the same time.
01;07;47;29 - 01;07;48;05
Right.
01;07;48;05 - 01;07;50;04
Is that your kind of outlook
01;07;50;04 - 01;07;53;29
for where all the money is going
and what people are racing toward?
01;07;54;02 - 01;07;57;23
They're explicitly saying
superintelligence now; they skipped AGI.
01;07;57;23 - 01;08;00;13
It's no longer even fun to talk about.
They directly say,
01;08;00;13 - 01;08;04;13
we have a superintelligence team,
we have a superintelligence safety team.
01;08;04;16 - 01;08;06;01
You couldn't do it for AGI.
01;08;06;01 - 01;08;08;25
So you said,
let's tackle a harder problem.
01;08;08;25 - 01;08;11;29
I think there might be a role
for professional societies.
01;08;12;06 - 01;08;14;23
We haven't had that before in computing.
01;08;14;23 - 01;08;15;02
Right.
01;08;15;02 - 01;08;18;03
So I get to call myself
a computer scientist, and, you know,
01;08;18;03 - 01;08;20;10
and I have some degrees
and some experience,
01;08;20;10 - 01;08;22;04
but I don't have anything official.
01;08;22;04 - 01;08;26;11
And anybody could just say,
all right, I'm a computer scientist
01;08;26;11 - 01;08;30;13
or I'm a software engineer,
and I'm going to release some software
01;08;30;16 - 01;08;32;24
and they let you do it. That's great.
01;08;32;24 - 01;08;34;29
In other fields, they don't do that.
01;08;34;29 - 01;08;37;08
I couldn't go out tomorrow and say,
you know what
01;08;37;08 - 01;08;40;16
I'm gonna call myself a civil engineer,
and I'm going to go build a bridge
01;08;40;19 - 01;08;41;24
so they don't let you do that.
01;08;41;24 - 01;08;46;06
You need to be certified in order to to,
to do those kinds of things.
01;08;46;09 - 01;08;46;24
And I don't want to
01;08;46;24 - 01;08;50;14
slow down the software industry,
but I think there might be a role to say,
01;08;50;17 - 01;08;54;09
if you get to a certain level
of power of these models,
01;08;54;09 - 01;08;58;00
maybe there should be some certification
of the engineers involved.
01;08;58;03 - 01;08;59;06
mentioned Yann LeCun.
01;08;59;06 - 01;09;02;00
He's really pushing hard
for these open models.
01;09;02;00 - 01;09;04;13
I was saying, you know, wait a minute.
01;09;04;13 - 01;09;07;08
Maybe it would be good,
if somebody is making a query
01;09;07;08 - 01;09;10;22
to do something terrible
that it gets logged somewhere.
01;09;10;25 - 01;09;14;00
And I guess another person I can mention
01;09;14;00 - 01;09;18;04
that I've seen the shift in is,
my colleague Eric Schmidt,
01;09;18;07 - 01;09;20;16
who was very adamant in saying
01;09;20;16 - 01;09;24;08
we can't have open models
because of the threat from bad actors.
01;09;24;11 - 01;09;26;12
You know, 2 or 3 years ago.
01;09;26;12 - 01;09;29;25
And now he's
switched and said, it's too late.
01;09;29;28 - 01;09;32;22
These models are powerful enough.
01;09;32;22 - 01;09;35;22
If the bad actors want to use them,
they can create them.
01;09;35;23 - 01;09;38;23
So we might as well harvest
the good of the open models,
01;09;38;23 - 01;09;41;23
because the bad guys have got them
anyways.
01;09;41;23 - 01;09;43;19
And I think that's right. I think.
01;09;43;19 - 01;09;46;03
I think there's nothing
you can do about that now.
01;09;46;03 - 01;09;48;00
AI systems always make mistakes.
01;09;48;00 - 01;09;52;27
Just sometimes it takes
quite a lot of skill to see them.
01;09;52;27 - 01;09;53;17
Right.
01;09;53;17 - 01;09;56;05
What happens with even a very functional
AI system?
01;09;56;05 - 01;09;59;12
We still say you will meet the long tail,
find the outliers,
01;09;59;12 - 01;10;02;12
weirdos, situations
that you did not see coming.
01;10;02;14 - 01;10;07;01
Even when it's highly performant,
expect something's going to go wrong.
01;10;07;04 - 01;10;08;07
And so
01;10;08;07 - 01;10;11;07
you anticipate
that there will be mistakes.
01;10;11;07 - 01;10;14;16
Now the question is when a mistake touches
a user
01;10;14;19 - 01;10;18;11
who has a particular kind of expectation,
01;10;18;14 - 01;10;20;18
what then happens?
01;10;20;18 - 01;10;23;18
How much, how flammable is that,
01;10;23;21 - 01;10;26;20
your AI infrastructure?
01;10;26;20 - 01;10;28;17
Of course, it's all the obvious things.
01;10;28;17 - 01;10;31;17
You have actual infrastructure
in your data pipelines and all the rest.
01;10;31;21 - 01;10;36;08
But it's also things,
intangible things like
01;10;36;11 - 01;10;39;18
at what stage are my user expectations?
01;10;39;21 - 01;10;43;13
Have I managed them sufficiently
where I even could be deploying to users?
01;10;43;13 - 01;10;46;28
What about internally,
if I'm doing
01;10;46;28 - 01;10;50;26
some, you know, internal corporate
engineering, I'm offering
01;10;50;26 - 01;10;54;09
some, you know, and we're looking
at the digital employee experience.
01;10;54;09 - 01;10;58;04
I'm offering some tools to my employees,
some digital tools
01;10;58;07 - 01;11;00;12
have I managed their expectations?
01;11;00;12 - 01;11;02;07
Have I trained my staff?
01;11;02;07 - 01;11;05;21
Do they know how
to think about these tools?
01;11;05;24 - 01;11;06;03
Let's say
01;11;06;03 - 01;11;08;03
I need humans in the loop.
01;11;08;03 - 01;11;10;08
Am I sure my human will be in the loop?
01;11;10;08 - 01;11;12;15
Or might they be asleep at the wheel?
01;11;12;15 - 01;11;13;27
And how do I do the training?
01;11;13;27 - 01;11;16;05
And how do I put in maybe a collection?
01;11;16;05 - 01;11;17;29
Depending on the importance of the task,
01;11;17;29 - 01;11;21;27
I might need to think about
having multiple humans in the loop.
01;11;22;00 - 01;11;24;28
I might need to think about consensus AI.
01;11;24;28 - 01;11;28;29
There are all kinds of,
measurement infrastructure
01;11;29;02 - 01;11;31;28
things that we would need
to put in place in generative AI.
01;11;31;28 - 01;11;35;18
We've just seen, endlessly, right,
this nightmare thing,
01;11;35;18 - 01;11;37;11
a nightmare challenge for management
01;11;37;11 - 01;11;39;09
because we've
all got to change our paradigm,
01;11;39;09 - 01;11;42;09
and we've got to think differently
about measurement metrics.
01;11;42;13 - 01;11;43;06
Have we done that?
01;11;43;06 - 01;11;45;14
Have we put this in place?
01;11;45;14 - 01;11;46;25
Do we have testing pipelines?
01;11;46;25 - 01;11;49;01
Do we have experimentation pipelines?
01;11;49;01 - 01;11;51;28
Do we know how we're going to roll things
back, what we need to do,
01;11;51;28 - 01;11;52;18
do we know what we're going
01;11;52;18 - 01;11;56;10
to do, well, versus where we're going to go,
do we actually know what will happen?
01;11;56;13 - 01;12;00;27
In what kind of scenario do we know
how we're going to make our guardrails?
01;12;01;00 - 01;12;02;05
What sets those guardrails?
01;12;02;05 - 01;12;03;05
How do we update them?
01;12;03;05 - 01;12;06;04
How are we going to react to,
legal changes? Right.
01;12;06;04 - 01;12;10;03
All this stuff, now, okay,
I know it's hegemonic
01;12;10;03 - 01;12;14;28
or you can say everything is
AI infrastructure, but to be ready for AI,
01;12;15;01 - 01;12;18;01
there is a lot of stuff
that you would need to be right.
01;12;18;08 - 01;12;23;00
And so one of the ways that you can dodge
a lot of this
01;12;23;03 - 01;12;27;03
is that you
do outsource some piece to a vendor
01;12;27;06 - 01;12;31;11
who is supposed to do all of it for you,
and you just check that you're getting
01;12;31;11 - 01;12;35;25
precisely what you need, and you have to
still articulate what it is that you need,
01;12;35;28 - 01;12;36;12
and you
01;12;36;12 - 01;12;40;12
have to worry about measurement wise,
that there is going to be a gap or a hole
01;12;40;15 - 01;12;43;09
between what the vendor sees
and what you see.
01;12;43;09 - 01;12;45;26
There's going to be some bit in the middle
that nobody sees,
01;12;45;26 - 01;12;49;17
and that could be a huge risk,
not just in terms of security, but,
01;12;49;20 - 01;12;53;17
in terms of your system
01;12;53;20 - 01;12;57;23
slowly
going sideways in a way that neither party knows.
01;12;57;26 - 01;13;02;03
Do we need to separate the
AI from the person if we've got, you know,
01;13;02;03 - 01;13;05;07
a cadre of employees who have figured out,
like they've got another tool
01;13;05;07 - 01;13;09;08
in their toolkit,
you know, do we need to care that it's AI?
01;13;09;08 - 01;13;10;20
Is there risk to this?
01;13;10;20 - 01;13;12;20
How should we be thinking about that?
01;13;12;20 - 01;13;16;13
I think this is such a great question
because it goes to the point.
01;13;16;14 - 01;13;18;25
One of the points and I think
sort of one of the underlying things
01;13;18;25 - 01;13;23;01
is, you know, is AI cheating, for example,
you know, yeah, we'll use it.
01;13;23;04 - 01;13;24;23
And I think, no. Right.
01;13;24;23 - 01;13;26;10
I mean, this isn't high school.
01;13;26;10 - 01;13;28;17
You're not getting graded on your,
on your paper. Right?
01;13;28;17 - 01;13;29;05
I have kids
01;13;29;05 - 01;13;31;11
and they can't use generative
AI for writing papers,
01;13;31;11 - 01;13;33;20
but they can use it
for learning biology better.
01;13;33;20 - 01;13;35;24
So it really depends. Right.
01;13;35;24 - 01;13;39;07
But I think that, most importantly,
first of all, you have to understand
01;13;39;07 - 01;13;42;14
that there are actual, you know, laws
and limitations around this.
01;13;42;14 - 01;13;45;12
Like, you can't just produce something
that's either imagery or video
01;13;45;12 - 01;13;49;08
or even text, to be honest, and
just put that out into the world as yours
01;13;49;08 - 01;13;50;15
just because you can't copyright it.
01;13;50;15 - 01;13;52;12
That's a legal issue.
01;13;52;12 - 01;13;54;20
But beyond that, we should have everybody
01;13;54;20 - 01;13;58;12
in the organization with guardrails
in place of course using this.
01;13;58;12 - 01;13;59;02
And why is that?
01;13;59;02 - 01;14;02;00
Because it is going to augment
what they are good at.
01;14;02;00 - 01;14;05;00
Sometimes the example I use is,
if you put me up against a marketer
01;14;05;03 - 01;14;07;26
and you said, okay, in 20 minutes,
both of you
01;14;07;26 - 01;14;11;24
come up with a new idea for a shoe company
or something like that, right?
01;14;11;27 - 01;14;12;05
Yeah.
01;14;12;05 - 01;14;15;11
I would produce something really awesome,
even though I'm not a marketer.
01;14;15;11 - 01;14;18;09
Right. Because ChatGPT would help guide me
and it would be amazing.
01;14;18;09 - 01;14;19;26
And after 20 minutes
it would be incredible.
01;14;19;26 - 01;14;24;14
But the marketer's work product
would be ten times better than mine. Why?
01;14;24;14 - 01;14;27;03
Because they understand
what quality looks like.
01;14;27;03 - 01;14;27;18
The Henderson.
01;14;27;18 - 01;14;29;09
It's sort of like when you say,
hey, write a poem
01;14;29;09 - 01;14;32;06
and you read this poem by ChatGPT,
you're like, this is great,
01;14;32;06 - 01;14;34;25
but a real poet would be like,
that's literal trash.
01;14;34;25 - 01;14;37;06
Like that looks like poem trash, right?
01;14;37;06 - 01;14;41;04
Because the person who actually has
the brain that these tools are going
01;14;41;04 - 01;14;45;05
to augment understands how to guide it,
understands what quality looks like, etc.,
01;14;45;05 - 01;14;48;27
and if you don't have your people using that
as kind of an Iron Man suit,
01;14;49;00 - 01;14;52;01
you're really just shooting
yourself in the foot.
01;14;52;04 - 01;14;53;16
If you work in IT,
01;14;53;16 - 01;14;56;16
Info-Tech Research Group is a name
you need to know
01;14;56;19 - 01;14;58;00
no matter what your needs are.
01;14;58;00 - 01;14;59;24
Info-Tech has you covered.
01;14;59;24 - 01;15;01;28
IT strategy covered.
01;15;01;28 - 01;15;04;15
Disaster recovery covered.
01;15;04;15 - 01;15;07;00
Vendor negotiation covered.
01;15;07;00 - 01;15;10;23
Info-Tech supports you with best-practice
research and a team of analysts
01;15;10;23 - 01;15;14;16
standing by ready to help you
tackle your toughest challenges.
01;15;14;19 - 01;15;19;19
Check it out at the link below and don't
forget to like and subscribe!