Our Guest Scott Klososky Discusses
Next-Gen Tech Expert: This is AI's ENDGAME
Are we ready for a future where human and machine intelligence are inseparable?
Today on Digital Disruption, we’re joined by bestselling author and founding partner of digital strategy firm Future Point of View (FPOV), Scott Klososky.
Scott’s career has been built at the intersection of technology and humanity; he is known for his visionary insights into how emerging technologies shape organizations and society. He has advised leaders across Fortune 500 companies, nonprofits, and professional associations, guiding them in integrating technology with strategic human effort. A sought-after speaker and author of four books – including "Did God Create the Internet?" – Scott continues to help executives around the world prepare for the digital future.
Scott sits down with Geoff to discuss the cutting edge of human-technology integration and the emergence of the "organizational mind." What happens when AI no longer merely supports organizations but becomes a synthetic layer of intelligence within them? He shares real-world examples of this transformation already taking place, lays out the ethical and existential risks AI poses, and offers practical advice for business and tech leaders navigating this new era. The conversation ranges from autonomous decision-making to AI regulation and digital governance, and Scott breaks down the real threats of digital reputational damage, AI misuse, and the growing surveillance culture we’re all a part of.
00;00;00;29 - 00;00;24;05
Geoff Nielson
Hey everyone! I'm super excited to be sitting down with Scott Klososky. He's a leading futurist, serial tech founder, and bestselling author. What's cool about Scott is that not only has he been in the game as a technology futurist for over 30 years, but throughout that time he's had his chips down with his tech investments, including his latest project developing enterprise-grade AI systems.
00;00;24;08 - 00;00;47;05
Geoff Nielson
Scott's big idea that caught my attention is that he thinks the end game of this current wave of AI capabilities is the creation of an organizational mind that's going to awaken and control our corporations. I want to know what the hell that means and what the implications are for the future of business and humanity. Let's find out.
00;00;47;07 - 00;01;05;23
Geoff Nielson
Scott, thanks so much for joining us today from deep in the matrix, it looks like. Really appreciate you being there, and I'm excited to get your insights. So maybe first off: one of the things I've heard you talk about in the past, in relation to AI and future tech, is this notion of what you call an organizational mind.
00;01;05;25 - 00;01;13;26
Geoff Nielson
Maybe for listeners or viewers who haven't heard it before, what is an organizational mind, and what does it mean for us as people and as workers?
00;01;14;01 - 00;01;38;23
Scott Klososky
Well, Geoff, you don't do any warm-up, do you? We're jumping right to the end. And I talk a lot about this with leaders: we need to have a better picture of what the end in mind is. So about a year and a half ago, I really started to study:
00;01;38;25 - 00;02;03;23
Scott Klososky
Well, what does AI look like when it's finished in an organization? Because I kept having organizations hire us to do AI strategy, and I would say, well, what are you trying to get to? And they would all say, well, we just want to use AI. So we basically architected how to build what we ended up calling an organizational mind.
00;02;03;26 - 00;02;40;22
Scott Klososky
What that really means is it's a synthetic layer of multiple AI tools that now play a role in the organization: holding knowledge, sharing knowledge, providing capabilities, providing automation, providing oversight. And it becomes an entity of its own that an organization owns, and the people in the organization collaborate with that entity. And when I said it a year and a half ago, a lot of people thought, well, this is science fiction.
00;02;40;24 - 00;03;06;16
Scott Klososky
You know, today we've got standards like MCP. We have more and more tools that are making this look less and less like science fiction. And I'll tell you, the cool thing is, here at our firm, we've actually built it. We built an organizational mind, and now we're just improving it.
00;03;06;18 - 00;03;24;29
Scott Klososky
And it's so interesting to actually see it, to start to work with it, and to be able to really see the role that that synthetic organizational mind is going to play. So that's the quick answer.
00;03;25;02 - 00;03;44;04
Geoff Nielson
Yeah. Well, it's incredibly fascinating to me, and I was just latching on to that last part where you said you're building it now. Because in my mind, when I was thinking about it, I immediately thought, okay, is this 2030, is this 2040? But it sounds like this is something that may be here now.
00;03;44;04 - 00;03;51;29
Geoff Nielson
It may be here in the next 12 months. What kind of timeline is realistic for when we're actually going to start seeing this out in the wild?
00;03;52;02 - 00;04;20;21
Scott Klososky
You know, it's a good question. First of all, you and I both grew up in an era where on Star Trek the ship had an organizational mind. We had HAL in 2001: A Space Odyssey. We had Her, the movie Her. We've seen a lot of movies where there is a synthetic persona or a synthetic mind.
00;04;20;23 - 00;04;41;23
Scott Klososky
And so I think part of the reason we think, oh, it's 2030 is because it's always felt like science fiction to us. A year and a half ago, when I started talking about it, it was still science fiction to me. It was just a philosophy. You know, six months ago we started actually building the architecture for it.
00;04;41;25 - 00;05;00;27
Scott Klososky
I think by this time next year, it will be a tool that most people in an organization can use, you know, should they choose to. It'll be three years before I think this becomes a little more common.
00;05;00;29 - 00;05;20;20
Geoff Nielson
Wow. So it's all happening right now. And given that your firm is, I don't want to say the canary, but kind of the test case for this: have you had any insights yourself, or has anything been intriguing to you about how this has worked, how you get from here to there,
00;05;20;23 - 00;05;27;25
Geoff Nielson
that you think may be of value for other people to hear about as they think about whether this could be right for them?
00;05;27;28 - 00;06;02;03
Scott Klososky
You know, my biggest insight is the pieces, right? The parts of an organizational mind are starting to be all around us. There was a great report from OpenAI that talked about the six things a lot of people are doing with AI tools across all departments. Just for example, three of them are creating documents, ideating, and doing research.
00;06;02;05 - 00;06;29;08
Scott Klososky
Right? Those are three of the six. So you're starting to have kind of general adoption of AI, regardless of what tool you use, across organizations, whether the organization approves it or not. People will go out on their own and do these six things. So you have those pieces. We now have the ability to build AI tools that hold knowledge.
00;06;29;10 - 00;07;01;28
Scott Klososky
Now some people will say, oh, a chatbot. Okay, you can call it many different things: AI tools that hold knowledge. And we are building the standards, like MCP, like Google's agent-to-agent, the technology standards that allow AI tools to talk to other applications or data sources. You know, the biggest thing for me is I am watching the pieces of the organizational mind rapidly come together.
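(For readers who want that last point made concrete: below is a minimal sketch of the idea behind standards like MCP. It is not the real MCP SDK; every name in it is hypothetical. The pattern is simply that an application advertises "tools" in a common schema so any AI client can discover and invoke them.)

```python
# Hypothetical sketch of the tool-exposure pattern behind standards like MCP.
# Not a real SDK: an app registers functions as "tools", an AI client
# discovers them, then invokes one by name with arguments.

import json
from typing import Callable

TOOL_REGISTRY: dict[str, dict] = {}

def tool(description: str) -> Callable:
    """Decorator that registers a plain function as a model-callable tool."""
    def wrap(fn: Callable) -> Callable:
        TOOL_REGISTRY[fn.__name__] = {"fn": fn, "description": description}
        return fn
    return wrap

@tool("Look up an institutional-knowledge document by topic.")
def get_policy(topic: str) -> str:
    # A real server would query SharePoint, a wiki, a database, etc.
    return f"Policy text for '{topic}' (stub)."

def list_tools() -> str:
    """What the server advertises to an AI client during discovery."""
    return json.dumps(
        {name: meta["description"] for name, meta in TOOL_REGISTRY.items()},
        indent=2,
    )

def call_tool(name: str, **kwargs) -> str:
    """What runs when the model decides to invoke a tool."""
    return TOOL_REGISTRY[name]["fn"](**kwargs)

if __name__ == "__main__":
    print(list_tools())                             # the model sees this menu
    print(call_tool("get_policy", topic="travel"))  # the model calls one tool
```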
00;07;02;01 - 00;07;46;29
Scott Klososky
And when I say the pieces, the intriguing thing for me is how much the organizational mind, the way it works and the way it is architected, is like the human mind. We hold knowledge, we make decisions, we can take actions. And that's exactly how this is designed; it does those same things. So that's what's striking to me: how fast the pieces are coming together, and how much you can use the human mind, or the collection of human minds in an organization, to design what your synthetic mind will be in an organization.
00;07;47;02 - 00;08;12;21
Geoff Nielson
So speaking of that kind of human-mind analog, one of the things I've heard you say before that's maybe equal parts interesting and terrifying, and I'm curious to hear your perspective on it, is you talk about a point in time where the organizational mind has an awakening, right? It wakes up, and it somehow goes beyond being a passive force to almost more of an active force.
00;08;12;23 - 00;08;16;23
Geoff Nielson
Walk me through what that looks like.
00;08;16;26 - 00;08;57;22
Scott Klososky
Yeah. Again, great question. You know, I speak so much about the philosophical side of AI, not just the technical, and that kind of question of what it means if an AI wakes up. Some people overcomplicate it; they over-humanize the concept. For me, an AI that wakes up means an AI that gains an understanding of its purpose, and then it has a will to deliver its purpose.
00;08;57;25 - 00;09;25;10
Scott Klososky
It has a curiosity to get better, which is called self-learning. So we can build AI tools or systems that get a very clear sense of "this is my role," and then "I want to learn more so that I can be better at my role." We can teach AI all of this today. This is not magic, right?
00;09;25;13 - 00;09;51;14
Scott Klososky
That to me is the synthetic version of waking up. Now, when people debate things with me philosophically, they're like, Scott, you know, well, you're saying the AI can be conscious. Okay, well, that's another conversation, right? You know, we can talk about consciousness. It's a separate conversation. I don't think an AI needs to be conscious to say that it has woken up.
00;09;51;17 - 00;10;15;07
Scott Klososky
Right. It has gone from a state of being unconscious, where it doesn't know itself, doesn't know its purpose, and isn't taking action, to a state of some level of awakeness: yeah, I know what I'm here for, and I seek to learn more to get better at this.
00;10;15;09 - 00;10;34;04
Geoff Nielson
Right. So the way I interpret that, and let me know if you agree, is it takes on a degree of being self-directed. It's not waiting for direction. It says, okay, you've told me enough about the mission, about the purpose, that it can exist in a state of persistence where, okay, I'm going to go out and learn.
00;10;34;04 - 00;10;41;12
Geoff Nielson
I'm going to be curious, I'm going to discover, I'm going to change and challenge and do as a way to accomplish that. Is that right?
00;10;41;14 - 00;10;44;05
Scott Klososky
Spot on. Spot on.
00;10;44;07 - 00;11;03;07
Geoff Nielson
Yeah, because it's interesting, and I encounter the same thing, where people ask me, you know, is AI about to become conscious? And of course, the next question I ask is, well, I think we're about to have a philosophical debate about what it means to be conscious. And that very quickly becomes a conversation about souls, the transphysical, and the spiritual.
00;11;03;10 - 00;11;49;15
Geoff Nielson
But I like your definition here, because I feel like it grounds it, and people can say, okay, I understand what we're talking about; it makes it easier to consume. But in your mind, does that make it less concerning? Because there's still, for a lot of people who know AI from the movies, and you mentioned HAL, for example, and I'm sure there are a hundred other examples, at the extreme end there's existential risk to us as humans, where the agency of AI becomes too strong and somehow, you know, they get confused and
00;11;49;15 - 00;11;59;12
Geoff Nielson
they end up at war with us. Is that a concern to you? Is that something we need to actively manage? Where does that fit into the picture here?
00;11;59;14 - 00;12;27;10
Scott Klososky
Well, two points. When you say you get dragged into the philosophical question of can an AI be conscious, I just think it's important for everyone listening to understand there are levels of consciousness, and it is naive or unwise to just say consciousness is one thing. There is human consciousness. There is plant consciousness.
00;12;27;10 - 00;12;52;23
Scott Klososky
There is animal consciousness. There is AI consciousness. These are different things, with different levels. But even within human consciousness, you and I both know there are people that are very awake, very aware, and then there are people just surviving that are, I would say, semi-conscious. They have no care or understanding of anybody around them.
00;12;52;25 - 00;13;11;26
Scott Klososky
They have no care or understanding of their impact on the world. I mean, I live in a world of people who throw their trash out their car window, right in our neighborhood, all the time, completely unconscious of what that means. I just think it's important to understand that. And then I'll go into the second part of this.
00;13;11;29 - 00;13;36;07
Scott Klososky
I have a lot of concerns about AI from an existential-risk standpoint. I do not have the concern that AI will decide that humans are not valuable and try to kill us. If you ask what concerns I do have, I'll give you three. One: I am concerned about an AI that becomes broken.
00;13;36;09 - 00;13;59;29
Scott Klososky
So an AI that we've given a lot of autonomy to picks up bad data, and because it picks up bad data, it makes bad decisions. Like an autonomous car that's working fantastically, but for whatever reason the autonomous car program picks up some bad data and then drives the car into a tree. It didn't mean to do that.
00;14;00;03 - 00;14;31;14
Scott Klososky
It didn't want to do that; it didn't know any better. That's going to happen, so that is a risk, and I'm concerned about it. I am not concerned about that risk killing the human race on a massive scale; I'm worried about isolated incidents. But to put it in context, I worry about that much less than I worry about the drivers who are unconscious, who drink and drive, who text and drive, who are way more dangerous than that AI getting bad data.
00;14;31;16 - 00;15;03;04
Scott Klososky
That's the first thing I worry about. The second thing is threat actors: criminals and terrorists using AI tools to hurt other people. I think that's a huge existential risk. And then the third one is AI manipulation. Big companies who use AI tools to manipulate humanity. We already have this issue with social media.
00;15;03;07 - 00;15;31;02
Scott Klososky
It's going to get much worse with pricing. It's going to get much worse with trying to condition human behavior. Those are the big existential risks to me: an AI that goes off the rails by accident, which won't happen at mass scale; bad actors, the human threat, using AI tools; and humans using AI tools to manipulate people at scale.
00;15;31;05 - 00;15;42;00
Scott Klososky
But two out of three of those are clearly human-driven, and one of them is accidents. Let's just say accidents.
00;15;42;03 - 00;15;59;04
Geoff Nielson
I love that list, Scott. And the reason I love it, the reason it's so interesting to me, is because, and I don't know if you agree with this word or not, but when I think about that list versus, you know, AI apocalypse, everything in that list, I hate to say it, feels almost inevitable, right?
00;15;59;04 - 00;16;20;15
Geoff Nielson
Like these are all things that will happen on some scale in the not-too-distant future. Not necessarily at an apocalyptic level, but they feel inevitable. And so I wanted to ask you: first of all, do you agree with that? And if you agree, what do we need to know as people and as leaders to best manage that?
00;16;20;22 - 00;16;25;04
Geoff Nielson
And if it's not inevitable, what do we need to do to avoid it?
00;16;25;06 - 00;16;47;13
Scott Klososky
I agree with you that these three are inevitable. It's part of the reason I said these are my top three, because I'm very confident that all three of these are going to happen, and all three will be a problem. To avoid them, let's take them in order. The accidental AI means we have to have a new job.
00;16;47;13 - 00;17;24;12
Scott Klososky
That's an AI auditor. And that AI auditor needs to spend their time constantly monitoring AI behavior data, you know, machine learning. I think there's going to have to be an AI audit job or capability where an organization looks at all of their AI tools, risk-rates them, looks at the ones that could potentially cause the most harm, and then has a plan for their AI auditor, internal or external, to constantly audit those tools.
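(To make that auditor role concrete, here is one hedged sketch, with entirely hypothetical thresholds and record fields, of the kind of continuous check such a role might automate: sampling an AI system's recent decisions and flagging the ones whose confidence drifts out of an expected range, the "bad data" failure Scott describes.)

```python
# Hypothetical sketch of one automated check an AI auditor might run:
# flag low-confidence decisions for human review and raise an alert when
# the flag rate spikes (a possible sign of bad data feeding the system).
# Thresholds and record fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DecisionRecord:
    decision_id: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0
    outcome: str

def audit(records: list[DecisionRecord],
          min_confidence: float = 0.7,
          max_flag_rate: float = 0.1) -> list[str]:
    """Return IDs needing human review; alert if too many are flagged."""
    flagged = [r.decision_id for r in records if r.confidence < min_confidence]
    if records and len(flagged) / len(records) > max_flag_rate:
        print(f"ALERT: {len(flagged)}/{len(records)} decisions flagged; "
              "inspect the system's inputs for bad data.")
    return flagged

# Example: one low-confidence decision gets routed to the human auditor.
log = [
    DecisionRecord("d1", 0.95, "approved"),
    DecisionRecord("d2", 0.40, "approved"),
]
print(audit(log))  # -> prints the alert, then ['d2']
```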
00;17;24;15 - 00;17;50;27
Scott Klososky
So that's my answer to that. My answer on threat actors: just like we've seen with cybersecurity, I think we are going to be a step behind the threat actors and terrorists that use AI. We will only learn how good they are when they have a success. But to the best of our ability, we have to outthink them.
00;17;50;29 - 00;18;28;16
Scott Klososky
We have to be proactive at trying to create defenses before they show us what something can do. You know, I read an article yesterday about drone swarms carrying nuclear weapons: small nuclear weapons on drone swarms that could attack an army, a ship, a building, and do devastating damage. Okay, well, if we know that's possible, then we've got to have a defense against a swarm of nuclear-armed drones.
00;18;28;19 - 00;19;13;17
Scott Klososky
And it's the same way I feel about terrorists and cybercriminals having AI power: man, we'd better react fast, but we'd better try to outthink them and build the defenses before they can use these things. The third one, the threat of manipulation: I think you're going to have to look at regulatory control. Again, instead of waiting until we're ten years into a serious manipulation problem, the government has got to be a bit quicker at identifying when this is an unhealthy use,
00;19;13;17 - 00;19;47;23
Scott Klososky
when it's crossed over from "this is good marketing" to "this is unhealthy." In Europe, they seem to be doing a pretty good job with GDPR and their AI regulatory controls; they're trying to get out in front of things. Maybe they're not striking the right balance. I think in the US, our legislators are going to have to be really careful about where that line is that we've crossed over, from
00;19;47;25 - 00;20;03;07
Scott Klososky
"we're using AI tools to just drive more revenue" to "now we're using AI tools in very unhealthy ways." I think we've struggled with that line with social media. We're going to have to get better at drawing that line.
00;20;03;09 - 00;20;37;02
Geoff Nielson
When I think about that line, I think about it in the context of what you said: manipulation and regulation. But I also think about it in the context of the first thing you said there, around auditing AI and the importance of that. How worried are you about a world where big tech, or even AI itself, is playing a cat-and-mouse game with auditors and with regulators, designing these minds and these systems to avoid being audited?
00;20;37;04 - 00;20;57;17
Geoff Nielson
When I say avoid being audited, I mean actually disguise the work that they're doing, or disguise their data or their calculations, in a way that misleads about what's going on, so that they can act in a way that's more conducive to them versus the auditors. Whether that's happening from the companies themselves, or whether the AI is getting clever enough to do that,
00;20;57;19 - 00;21;03;26
Geoff Nielson
is that a risk we should be thinking about? And if so, how do we solve for it?
00;21;03;28 - 00;21;37;28
Scott Klososky
Yes, it's a risk. I think about the fact that we've had technology for, you know, 50 years, and it is getting increasingly complex and increasingly more intelligent. As technology, as a tool or a weapon, becomes more complex and more sophisticated, it gets more and more difficult to regulate it, to control it, to defend against it.
00;21;38;01 - 00;21;59;02
Scott Klososky
I mean, I'm just stating a fact. Look at an analogy, Geoff. If I told you, okay, I'm going to try to hurt you physically, I have no weapon, I just have my bare hands: all right, well, that's one level of threat. If I showed up at your podcast studio with a knife, it's a little higher level of threat.
00;21;59;05 - 00;22;23;00
Scott Klososky
If I show up at your studio with a gun, it's even more of a threat, right? If I show up with a drone swarm, I'm not even there; I just send a drone swarm after you. It's even more sophisticated and deadly. If I just vaporize you, right, I just send a robot to walk in and vaporize you,
00;22;23;02 - 00;23;02;27
Scott Klososky
it's even more complex. I think it's the same thing with technology: as we make it more intelligent and complex, defending against any misuse, any evil use of it, becomes harder and harder. And so that absolutely concerns me. I am an optimist; I try to be an optimist in life. And so what I want to say is, humanity will struggle with these things, but we always find ways to overcome.
00;23;02;27 - 00;23;32;26
Scott Klososky
We learn our lessons and we overcome. But there's a difference between an army with a bunch of sabers on horses and a nuclear bomb. If you make a mistake or you can't defend yourself from a nuclear bomb, then hundreds of thousands of people are killed right away. That's a different level of risk than a cavalry with sabers.
00;23;32;28 - 00;24;08;00
Scott Klososky
And we're moving into a world where the level of potential risk is higher. I just wake up and try to be optimistic about our responses to this, and hopefully our proactive defenses will keep up and generally help us be in a safe place. But we will make mistakes, and we will learn our lessons, and those hard lessons may cause a lot of deaths.
00;24;08;00 - 00;24;13;02
Scott Klososky
I mean, those hard lessons can be serious until we learn them.
00;24;13;04 - 00;24;36;23
Geoff Nielson
When you talk about learning them, when you talk about sufficiently advanced technology here: do you see this as necessarily having to be a fight-fire-with-fire scenario? Like, is the best defense against an AI drone swarm an AI drone defense? Is that kind of where we're heading? You need AI as the answer to AI?
00;24;36;25 - 00;25;04;00
Scott Klososky
Honestly, I don't think it's a one-to-one like that. I think, for example, to defend against a robot army or a drone army, your solution is going to be an EMP bomb, or a microwave defense that disables the technology in total, just disables it.
00;25;04;02 - 00;25;36;04
Scott Klososky
So I think the defense systems are going to be possible. They won't be cheap, but it will be possible to completely disarm, to turn off, anything that's built with technology. And then the game might be that they're constantly trying to armor their technology against these things. But I think we will constantly have solutions where we have a shield, and we can disable every drone at the shield.
00;25;36;07 - 00;26;02;18
Scott Klososky
We can disable an entire robot army with one technology or device. So I think it'll be more that than you have 800,000 robot soldiers and I have 900,000 robot soldiers. I don't think that's the way the defense, the security, will be.
00;26;02;20 - 00;26;30;09
Geoff Nielson
Well, the reason I ask that, too, is I do think about that asymmetric weaponry, that type of shield. But it feels like a component of that is response speed as well, right? You have to be able to respond quickly enough. And does that become tied to this notion of an organizational mind, and being able to have kind of early warning and response, with or without human intervention?
00;26;30;09 - 00;26;35;04
Geoff Nielson
It takes us down a very interesting and, in some ways, terrifying path.
00;26;35;06 - 00;27;00;09
Scott Klososky
It does. And I would point out two things. One is, I love that the U.S. is building the Space Force. I love that we are looking at putting defense up into outer space as a way to help protect things. So I love that. I also have given a lot of thought to what the organizational mind looks like for the military.
00;27;00;12 - 00;27;28;21
Scott Klososky
One of our employees is a West Point grad who just got out of the Army six months ago, so we've talked quite a bit about, all right, where does this end for the military? Well, it ends with every branch having its own synthetic intelligence layer that is pulling in massive amounts of data from the entire theater and helping to make instant decisions, combining:
00;27;28;23 - 00;27;59;03
Scott Klososky
I need human soldiers to go do this; I'm going to use my autonomous weapons to go do that. It's orchestrating that while human generals are overseeing it. They're in the loop, but they're not having to make the commands, they're not having to take the time to go through some communication method. They are just consulting with, let's just say, the Army mind
00;27;59;06 - 00;28;06;14
Scott Klososky
that is constantly controlling the theater, in peacetime and in war.
00;28;06;16 - 00;28;38;02
Geoff Nielson
I want to get a little bit crazier here. If we play the clock forward from there, in your mind, Scott, is there a world where we get to the organizational mind of the United States? Do organizational minds start to exist at a nation-state level? And if they do, is there a world where the United States organizational mind is talking directly with the Chinese organizational mind, and they've decided the best path forward is to shift around some borders,
00;28;38;02 - 00;28;45;26
Geoff Nielson
and we get word from the organizational minds that they've now decided what's best? Or is that getting a bit fanciful?
00;28;45;29 - 00;29;12;26
Scott Klososky
Well, you had me at "hey, let's get crazier," right? You had me. I believe that every entity will have a form of an organizational mind. I think a two-person company will have one. And when we talk about this interesting Silicon Valley dream of the one-person billion-dollar company, it'll be one person with a very powerful organizational mind
00;29;12;29 - 00;29;47;29
Scott Klososky
that can control lots of agents. I also believe, at the other end of the spectrum, you're absolutely right: I think a country will build its organizational mind to do a lot of things, to help with governance, to help with financial transactions and taxes, to help with law enforcement. So yes, we can go up the scale and say a city will have an organizational mind, a county will have an organizational mind, a state will have an organizational mind, a country will have an organizational mind.
00;29;48;02 - 00;30;18;23
Scott Klososky
And that will be built over decades. Then, to answer your question about whether the Oklahoma organizational mind will talk to the Texas organizational mind: yes. We will make decisions about, hey, we border each other, and that means there's a lot of things we should collaborate on, law enforcement being one, environmental control being another.
00;30;18;25 - 00;30;59;01
Scott Klososky
So we will allow our organizational mind to have a limited autonomy to do some actions with Texas. And in your scenario, sure, the United States, depending on who is in power, will turn switches on and off about what the autonomous capability of the US organizational mind is with China versus England versus Australia. And our organizational mind will have the authority to do different levels of things with our partners in the world.
00;30;59;03 - 00;31;23;13
Scott Klososky
But every president will come in and hit those switches and change what can be done. And this is not that crazy; this is exactly what we do with human beings today. We have a State Department. What do we do with the State Department? We've got embassies around the world. What are we doing with these embassies?
00;31;23;16 - 00;31;52;26
Scott Klososky
It's just Washington talking to the embassy personnel, who then go talk to that country, or the trade partners. It's just slow; it's not as effective when it is all humans controlling the relationships between countries. So let's just say I believe we'll move to a point where 25% of all interaction with another country is run by the organizational mind.
00;31;52;28 - 00;32;19;17
Scott Klososky
75% of the interaction will still be run by humans. And then it's really just a question of human in the loop. Even at 25%, what do we decide the mind has authority on, versus where it needs an okay? You can set up a transaction, but the head diplomat's got to make the final approval, right?
00;32;19;22 - 00;32;38;12
Scott Klososky
But none of this, as I see it in my mind's eye, seems crazy to me. We are just automating, and adding a synthetic capability to what human beings are already doing today.
00;32;38;14 - 00;32;56;00
Geoff Nielson
Yeah. I really like that, and the more you talk about it, the more I'm coming along on the journey of, yeah, this is not that far removed from where we are. And, as we were discussing before, some version of this feels like it's reaching a point of inevitability.
00;32;56;04 - 00;33;19;20
Geoff Nielson
So let's move away from the crazy and swing the pendulum full scale in the other direction, to the practical, and, dare I say, maybe even the boring. And what I mean by that, Scott, is if we're talking about an organizational mind, we're talking about implementing it in practice. At its core, an organizational mind is an IT system, right?
00;33;19;23 - 00;33;42;03
Geoff Nielson
It's an information technology system. And as cool as it sounds, if you think about it compared to every other IT system in the world, IT systems and IT projects are just fraught with all sorts of human and capital issues. And so I wanted to ask you: what does this look like, in your mind, at this hyper-practical level?
00;33;42;04 - 00;34;08;12
Geoff Nielson
Where are we sourcing these minds from? Are we building them? Are there organizational mind vendors that are going to be selling them to us? Who owns it within an organization? And are we at risk that this just becomes another one of these big systems that has this massive promise, and people are struggling with it for the next 30 years to get what they think they should be getting out of it?
00;34;08;15 - 00;34;52;29
Scott Klososky
Yeah. I love talking to you; this is great. I believe that Google, Microsoft, Amazon will at some point aggregate a lot of their tools and frameworks, and they won't call it an organizational mind, they'll have some other name, but they will have a platform that anyone can go license. It will come with some intelligence, of course, because you have the base AI models, but it will not come with all your organization's intelligence, all your organization's processes and systems that you need to automate.
00;34;53;02 - 00;35;12;01
Scott Klososky
So yes, they'll have a framework. I don't think that happens for two or three years at least, and I think it'll happen in pieces. I have a feeling they will not just roll out and say, here's the whole thing. They will roll out a piece of it here, a piece of it there.
00;35;12;01 - 00;35;36;04
Scott Klososky
But oh my gosh, you could put all this together, right? And when we use the term AI orchestration, that's a critical concept right now: what is AI orchestration, what are we orchestrating, and what are we orchestrating to? So I think they will build pieces and we will orchestrate those pieces together. And then someday you can just buy a mind framework.
00;35;36;07 - 00;35;58;28
Scott Klososky
Today you can build it yourself, out of parts and pieces. You can at least build a generation one that will be really valuable. And let me try, because you want to be practical, to think about it this way. We have the ability today; when I told you we were already building this, we already have a prototype.
00;35;59;01 - 00;36;31;10
Scott Klososky
There is a piece of the mind that just holds knowledge. It holds institutional knowledge, and it delivers it back to anybody who wants it, or to new employees. Okay, so you have the knowledge. There is a part of the mind that has, let's call them agents, basically AI workers: agents that do the basic things everybody needs: build a document, create a podcast, do research, help me ideate.
00;36;31;15 - 00;37;02;20
Scott Klososky
There are six of them that OpenAI has identified that almost everybody's using in every department. So you have those six agents that are helpers for you, and they're tuned to your organization. Then you have a set of personas. You have a digital lawyer, a digital accountant, a digital marketing person, a digital safety person. So depending on your company, you have, let's just say, a dozen
00;37;02;23 - 00;37;28;23
Scott Klososky
Of these experts right there in the mind, then you have overseers. So you have in the accounting space, there's a there's a dynamic audit system. So you have an overseer that looks over all of your financial transactions. You have an overseer that is looking over all of your employees, looking over all of your operations. Okay, I'll stop there.
00;37;28;25 - 00;38;11;17
Scott Klososky
So you have a knowledge store, you have AI helpers, you have overseers. Those are three of the main parts of a mind, and all three of those we can build today. So it's just a matter of orchestrating them in, I shudder to say, an application, a software platform. It's just orchestrating them, then connecting those helpers to all your data sources: my Teams, my SharePoint, my databases, whatever.
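(As one way to picture those three parts fitting together, here is a hedged skeleton in Python. Every class and method name is hypothetical; nothing here is a real product API. It only illustrates the architecture Scott describes: a knowledge store, helper agents, and overseers behind one orchestration layer.)

```python
# Hypothetical skeleton of the three parts described above: a knowledge
# store, AI helper agents, and overseers, tied together by an orchestration
# layer. Stubs stand in for the LLM calls and data-source connectors.

class KnowledgeStore:
    """Holds institutional knowledge and hands it back on request."""
    def __init__(self):
        self.docs: dict[str, str] = {}

    def add(self, topic: str, text: str) -> None:
        self.docs[topic] = text

    def ask(self, topic: str) -> str:
        return self.docs.get(topic, "No institutional knowledge on that yet.")

class HelperAgent:
    """One general-purpose worker: drafting, research, ideation, and so on."""
    def __init__(self, task: str):
        self.task = task

    def run(self, request: str) -> str:
        # A real agent would call an LLM here; this is a stub.
        return f"[{self.task}] result for: {request}"

class Overseer:
    """Watches a stream of events (transactions, operations) for anomalies."""
    def __init__(self, domain: str, limit: float):
        self.domain, self.limit = domain, limit

    def review(self, amount: float) -> str:
        return ("OK" if amount <= self.limit
                else f"ESCALATE: {self.domain} event over limit ({amount})")

class OrganizationalMind:
    """Thin orchestration layer that owns all three parts."""
    def __init__(self):
        self.knowledge = KnowledgeStore()
        self.helpers = {t: HelperAgent(t)
                        for t in ("document", "research", "ideation")}
        self.overseers = {"finance": Overseer("finance", 10_000.0)}

mind = OrganizationalMind()
mind.knowledge.add("onboarding", "New hires shadow a mentor for two weeks.")
print(mind.knowledge.ask("onboarding"))
print(mind.helpers["research"].run("competitor pricing"))
print(mind.overseers["finance"].review(25_000.0))
```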
00;38;11;21 - 00;38;39;17
Scott Klososky
It's just connecting into your data sources. And if you're a technologist listening to me, you probably should be going, wow, we have all those parts today; we really could build that. It's just like, and you're too young to remember this, Geoff, you were too young back in the 90s: back in the 90s, we had e-commerce, okay?
00;38;39;20 - 00;38;55;14
Scott Klososky
When the web was built there was no e-commerce, but we would have clients call us and say, hey, we'd like to sell something over the internet. And you know what we did? The first thing we did, we just put up a page: hey, here's a product, if you want it, here's our 800 number. That was the e-commerce. Next stage:
00;38;55;16 - 00;39;16;13
Scott Klososky
hey, here's ten products, click here and fill out this form and send it to us. Next stage: fill out this form, or click on these products and put your credit card in. Give us your credit card every single time, by the way. And then, you know where I'm going.
00;39;16;16 - 00;39;42;16
Scott Klososky
But for the first two or three years of e-commerce, it was: here's a picture of our product, call us. No one had built an e-commerce platform yet, so for every client we worked with, we had to build it from the ground up, write all the code. That's where we are with the organizational mind. You can build it,
00;39;42;16 - 00;40;11;09
Scott Klososky
just like we built e-commerce in the late 90s. It's just harder now; it's a matter of pulling together a lot of pieces really well. It will get a lot easier. But there's a huge benefit, I think, for companies who build it now, because as the tools get better and better, your company is already interacting with that mind.
00;40;11;11 - 00;40;35;14
Scott Klososky
Your people are used to it. They love it, right? It's already bringing you tons of automation and efficiency. So just in the same analogy, the companies that got into e-commerce in the late 90s did it all by hand and then rebuilt their system three times, had a huge lead on the retailers who didn't jump in until 2005.
00;40;35;16 - 00;41;00;22
Geoff Nielson
I love the analogy, and thank you for taking us on that journey with e-commerce. I want to talk about two things. I do want to talk about the players and kind of the build-versus-buy there. But also, to some degree, Scott, and as you said, you've got a few years on me on this: with e-commerce, you talk about, oh, you've got to be fast, you've got to get ahead.
00;41;00;24 - 00;41;21;08
Geoff Nielson
But there's also the dot-com bubble that happens in there as well. And one of the things I saw happen, when I reflected on it from a safe distance, is that the people who came out ahead at the end of it were not necessarily all the first movers, right? The whole stage was very disruptive.
00;41;21;08 - 00;41;43;06
Geoff Nielson
And I see us, frankly, going through something very similar now. So is it always best to either build it yourself first or go with the vendor who says they can do it first? Or are there profiles of organizations who are better off saying, you know what, I don't want to be the first mover,
00;41;43;06 - 00;41;55;02
Geoff Nielson
I want to go with the herd, I want to wait until there's a quite safe solution with a known vendor? Do you buy that, or is that kind of a misrepresentation?
00;41;55;02 - 00;42;25;26
Scott Klososky
No, I think, I love to look at history, right? When browsers first came out, many times the first companies in were not the companies we know of today. When I talk to younger people about a browser and doing search, they can hardly think of anything but Google, and I'm like, I remember Veronica and Archie searches, right?
00;42;25;29 - 00;42;48;02
Scott Klososky
I mean, I remember many browsers and search tools way before Google ever existed, and they all went away or got acquired. Remember Netscape? Netscape owned the market for a while. So I think it's the same with AI and what's on top. If you say to me, hey, Scott, who's going to be the winners,
00;42;48;05 - 00;43;14;17
Scott Klososky
who's going to build that organizational mind? I'm going to honestly tell you, I can't predict that for you right now. My safe bet would be it's probably going to be a Microsoft, an Amazon, a Google, an Oracle; it's going to be these really big organizations that own a lot of the pieces that they can pull together.
00;43;14;19 - 00;43;38;26
Scott Klososky
Probably not going to be an Anthropic or an OpenAI, but it wouldn't surprise me at all if Anthropic and OpenAI build one, or build a lot of the parts and pieces. It's just, will theirs be the one everybody uses in the future? And just like everything else, there won't be one winner. We won't have one organizational mind supplier.
00;43;38;28 - 00;43;53;01
Scott Klososky
You know, we will have some that specialize in small companies, some that specialize in government minds. Right? I mean, I think you'll have organizations that specialize in a mind that is tuned to that space.
00;43;53;03 - 00;44;11;28
Geoff Nielson
I completely agree, by the way, and my thought there, I think, is similar to yours. The corollary is that if you're one of these big players, this is kind of a holy grail, right? The amount of money you can make from this, the amount of control, power, stickiness,
00;44;11;28 - 00;44;30;00
Geoff Nielson
if we want to be talking in SaaS language, with an organization, if you now own the platform that their mind sits on. I have to imagine we're going to have some sort of arms race to build exactly this and try to convince organizations that they need to be on our mind platform.
00;44;30;04 - 00;45;01;16
Geoff Nielson
And what does that mean for us as enterprise leaders, if suddenly a vendor is the platform that our organizational mind sits on? Yeah, big question. So let me frame all of that into an actual question for you, Scott, which is: what's your advice for business leaders and technology leaders who are navigating this space and starting to think about what their target state should be?
00;45;01;18 - 00;45;10;10
Geoff Nielson
Or how they win in this landscape?
00;45;10;13 - 00;45;38;21
Scott Klososky
It's a good kind of wrap-up question, because we're helping people with this every day, and I can give you a pretty practical answer. You've got to have a roadmap. You have got to have a written document, an AI roadmap, and that roadmap has to have a destination. If you don't agree with the organizational mind destination, then develop your own.
00;45;38;23 - 00;46;09;01
Scott Klososky
But you have got to have an 18-month or two-year roadmap that is dynamic, that every quarter you can update. You have to have a way that you are operationalizing AI at a faster rate. You have to have a dream and a vision that your employees, instead of being scared of, are excited about. And you have to be willing to make the investments; you need to make some financial investments.
00;46;09;03 - 00;46;37;09
Scott Klososky
You've got to put that recipe together. And what I'm seeing in the market is there's been a lot of experimentation, and there are some clients emerging that are moving into operationalizing. They're moving into: okay, this works, we're going to keep building on it; what's the end? Let's build toward that end. So I'm starting to see some move from experimentation to operationalizing.
00;46;37;11 - 00;47;12;24
Scott Klososky
But boy, there are a lot that are still just wallowing, doing a little bit of experimentation, waiting for somebody to grab their hand and help pull them, I guess. Or they're thinking of themselves as fast followers, and I'm looking at them as, no, you're just slow adopters. That's the best recipe I can give you: get a very good roadmap, have a vision for an end, sell that vision to your employees, make the right investments, and operationalize this thing.
00;47;12;27 - 00;47;34;27
Geoff Nielson
I love that. And just to play back part of that, or one of the implications I heard, Scott: it sounds like you're a big skeptic, and skeptic is maybe the polite word, of just this use-case model of, oh, let's take some incremental bets, let's try this, let's try that. You advocate for much bigger-picture thinking.
00;47;34;29 - 00;47;55;25
Scott Klososky
I'm very skeptical that you're going to get the value that you could get if you're just going to sit in an experimental mode, pick out a few use cases, build those, and check off a box that you're doing AI. You know, Geoff, I've used a lot of metaphors with you today for some reason.
00;47;55;28 - 00;48;21;21
Scott Klososky
That's like saying, yeah, I want to get healthy and I want to lose 20 pounds, so I'm going to cut out having coffee every day, because I drink too much coffee, or I get Starbucks coffee, thinking you're going to just do that one thing and that's going to take care of getting you healthy and helping you lose weight.
00;48;21;23 - 00;48;46;23
Scott Klososky
It's not, right? It's not a program. It's not going to get you where you want to go anytime soon. And that's what I see. That's why I'm very skeptical of people who think they're doing AI when they've bought a few Copilot licenses and they're doing a couple of proofs of concept, and that's it.
00;48;46;26 - 00;49;06;06
Geoff Nielson
Now that's great, and it backs into one of the questions I love to ask people on this show, which is: what's BS right now? What's the wrong way to do things? And we didn't talk specifically about bad use cases, but I love the idea that a couple of use cases at all, to you, is just not the way to go here.
00;49;06;06 - 00;49;07;14
Geoff Nielson
Think big.
00;49;07;16 - 00;49;42;01
Scott Klososky
Well, I love use cases, Geoff; I don't want people to think I don't like use cases. What I believe is you have to do a use-case harvesting process across every department. The departments have to understand the vision, and then you go harvest 50 use cases in every department. You aggregate those up, you score them, you rank them, you put them into your AI factory and you build them, and you really inspire everybody that we took their ideas, put them in a factory, and got them built.
00;49;42;04 - 00;50;02;08
Scott Klososky
What I don't like is two use cases, three use cases that are proofs of concept. That's what I don't like. But the use-case harvesting, and using that as your inventory of what to build? We do that every day here. I'm all about that.
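(As a small illustration of the harvesting step Scott describes, aggregating use cases from every department, scoring them, and ranking them into a build inventory, here is a hedged sketch. The scoring weights are hypothetical, not a prescribed methodology.)

```python
# Illustrative sketch of use-case harvesting: aggregate ideas from every
# department, score them with hypothetical weights, and rank the inventory
# that feeds the "AI factory."

def score(use_case: dict) -> float:
    """Simple weighted score: value and feasibility up, effort down."""
    return (2.0 * use_case["value"]
            + 1.0 * use_case["feasibility"]
            - 1.5 * use_case["effort"])

harvested = [
    {"dept": "HR", "idea": "Draft job postings", "value": 3, "feasibility": 5, "effort": 1},
    {"dept": "Finance", "idea": "Flag invoice anomalies", "value": 5, "feasibility": 3, "effort": 3},
    {"dept": "Sales", "idea": "Summarize call notes", "value": 4, "feasibility": 4, "effort": 2},
]

# Rank the aggregated inventory; the top items go to the build queue.
for uc in sorted(harvested, key=score, reverse=True):
    print(f"{score(uc):5.1f}  {uc['dept']:8} {uc['idea']}")
```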
00;50;02;14 - 00;50;11;20
Geoff Nielson
So we need to go bigger, we need to go faster. I know that's obviously incredibly simplistic, but does that kind of sum it up? Bigger,
00;50;11;22 - 00;50;32;27
Scott Klososky
Faster with a clear picture of the end in mind? We have to have confidence so that we can go bigger, faster. And we get that confidence from we have a very clear vision of where we're going to end. Then we can go bigger, faster and enjoy the process and feel confident about what we're doing and proud of ourselves.
00;50;33;00 - 00;50;42;00
Geoff Nielson
Scott, I wanted to, on that note, say a big thank you for joining us on the show today. This has been super interesting, super insightful, and I really appreciate the conversation and the insights.
00;50;42;03 - 00;50;46;13
Scott Klososky
Geoff, you could not have asked me a more fun set of questions.

