
Stephen Downes

Knowledge, Learning, Community
Why Do Digital Transformations Fail?


Unedited Google Pixel 7 Pro transcript

 

Recorded on my Pixel phone, because that saves me the effort of having to write a transcription of it later; the AI in the phone will do that for me. The topic of this talk is not AI or anything like that. I decided I didn't want to go in that direction.

 

So what direction did I go, you may ask? I went in the direction of this: why do digital transformations fail? I'm pretty sure you're seeing that. All right, it's sort of hard to tell, but if there's any problem somebody will tell me. So, what I want to say, first of all,

 

is that these slides are available on my website, www.downes.ca/presentation/571. And yes, you can do pretty much anything you want with them, except put them behind a paywall and charge money for them. Please don't do that, because they're intended to be shared. These slides were not authored by any AI, and I didn't use an AI in the production of them at any point.

 

Gotta get my... there we go, get my broadcast together. With one possible exception, and that's when I was creating the slides: I used the Microsoft baby shower slideshow theme. Now, why did I use that particular theme? Well, because I could, mostly. And it just goes to show, and it's funny:

 

Sarah talked about, you know, there being no limit to the range of human creativity. I've always thought of human creativity as being something along the lines of taking a technology and using it for a purpose for which it was not designed. And the internet is a classic example of that.

 

Here we have trillions, literally trillions, of dollars worth of investment, and we use it to send cat pictures back and forth to each other. And that's kind of hard to duplicate. So why did I choose this theme, though? Why did I choose "why do digital transformations fail"?

 

It's not really the typical CNIE topic; CNIE is mostly an academic conference. And certainly, if you say the words "digital transformation", you're thinking corporate IT and things like that. So there's a bit of a gap, at least, between the two worlds. But I was thinking about, well, what should I talk about?

 

And I thought, well, I want to talk about what I was working on the last year, and what I was working on the last year was a project with Defence Research and Development Canada, trying to understand why instructors don't use digital technologies when they're teaching people things like how to operate,

 

you know, land vehicle control systems and things like that. It was actually a much more interesting exercise than it sounds, and of course we don't have any results. Our work was strictly involved in considering what it would take to ask the question and know that we are getting something like a reasonable answer.

 

And I think that was actually a really useful sort of thing to be doing, especially in an ed tech environment, where so much is just done off the cuff. And I'm going to talk about that. So, this isn't a presentation of a new technology, this isn't a presentation of research results.

 

This isn't a presentation of the topic of the day. It's more a presentation of what it would take to talk usefully about some of these technologies that we're working on. Okay, I wrote this... did that not work? Okay. So, let us welcome (remember, the baby shower theme) new technology to the world.

 

What could go wrong? Okay. Let's look at the question first, and I'll confess (well, I may as well confess, I'm in a confessional mood today) that the bulk of this presentation is organized around the text of an email that I wrote in response to one of my colleagues sending me this question:

 

why do digital transformations fail? He sent it to me by email while I was working at home, although of course the big push has been to get us all back into the office, because that's where we have, you know, collaboration and culture and all of that sort of stuff.

 

But I don't want to go off on that tangent; just be aware it exists in the back of my mind. So, the big question, then: why do digital transformations fail? Part of the thing to think about is what a digital transformation is. Like I say, it comes from the corporate side of the house.

 

Sure, it would equally well apply, of course, in colleges and universities and even schools, etc., but we don't hear it expressed so much in those terms in those environments. Well, for Accenture, digital transformation is the process by which companies (it's always companies) embed technologies across their business to drive fundamental change.

 

So that's kind of interesting: the intent there is to drive change; the technology is the change agent. For IBM, digital transformation means adopting digital-first customer, business partner and employee experiences. So here the transformation is to create these digital experiences, and we don't know what the change agent is.

 

Wikipedia, because why not: digital transformation is the adoption of digital technology by an organization to digitize non-digital products, services or operations. Okay. Gartner: digital transformation can refer to anything from IT modernization (for example, cloud computing) to digital optimization to the invention of new digital business models. So, two big themes, right?

 

The one thing is digitization. The other thing is, well, change, right? New business models, new products, new experiences, etc. Now, we've been up to this enterprise of working in a digital world; well, I've been doing it since, when did I first start? 1981. And I've been doing online stuff since about

 

1990. So you'd think by now everything that could be digitized has been digitized. But of course that's not true, as we're seeing with artificial intelligence. There are some things that we thought would resist digitization that are now becoming digitized, but we're also experiencing this transition from one wave of digitization to another, to another: for example, going from the learning management system to the learning experience platform to whatever follows that.

 

It'll be something. We'll talk about that. Okay, so how did we get here? And by "here" now I mean: how did we get to talking about this sort of topic in the realm of learning and development and educational technology? Well, I've got three major lines of thought in my recent work, some of which some of you will have seen already. One of which was Web3, which I actually presented to CNIE

 

back in 2019. I called it E-Learning 3.0. Then people came along and called it all Web3, so then I started calling it Ed3. None of my names caught on, because they don't. So that's one thread. Now, that was a range of technologies, including cloud, including data,

 

including the idea of community as consensus, the idea of automated assessment (your next credential will be a job offer); things like that, a whole range of new technology. So it was a lot of fun to talk about those, and we're seeing all of those come on stream today in a big way.

 

Second was AI ethics. I created a MOOC called Ethics, Analytics and the Duty of Care. It got (I can't say dozens, because that would imply more than one dozen) a number of people participating. I presented it in a few talks; I did a few workshops on it.

 

I put the MOOC up online. It was a ton of work; I covered a lot of ground. For me it was an enormously useful exercise, and it laid clear to me the complexity that is the subject of ethics in artificial intelligence. And when somebody comes along and says, oh yeah, we should look at the ethics of ChatGPT, you know, either they're about to embark on...

 

Well, let's put it this way. I did all the presentations in that course, I did the transcripts of those, and the slides of those. The transcripts came out to be a thousand-page document, and there were a thousand pages of slides. So it's not as simple as, you know, let's just assume we've got the ethics figured out and all we have to do is implement them. It's not even remotely that

 

simple. The third thing was data literacy. And actually this came up in the earlier discussion, I think Sarah's discussion: the whole concept of, well, you know, talking about digital literacy, what do we need to learn? How do we learn what we need to learn? How do we organize our learning if we're in a new field? In the past I've talked about a thing called critical literacies,

 

which basically interprets learning in any discipline as an exercise in pattern recognition. And then in data literacy I took the basic categories of critical literacy, applied them to data literacy, and mapped them out. Long story short, anyhow, it's another MOOC and all the information is there for you to see. So, all of these things are lurking in the background for me.

 

Then there's the question of how we got here now, this year, at this moment. And this is three things. First of all, just wrapping up my work on the tech guide (how to put up online learning quickly, easily and inexpensively), which I did for the coronavirus, and which was very popular, a lot more popular than I thought it would be or that it deserved to be.

 

Also, the whole question of the fediverse. I started working on that; well, I've been on the fediverse since 2016, but I did a presentation on it a year and a half or so ago, before the fallout with Twitter and he-who-shall-not-be-named, etc. But the interesting question here is: how do we have a distributed learning, federated...

 

a distributed, federated learning network? It's interesting: we saw just a little while ago that in English adjectives have a specific order, but what is the order when you're saying "distributed federated learning network"? Is that the right order, or should it be "federated distributed"? Anyhow, I worry about things like that.

 

And then, of course, the barriers work that I did with DRDC. And actually this culminates a number of years' worth of work with that organization, looking at digital literacy, looking at training methodologies, etc. There's a whole side to my life at NRC that doesn't show up in my newsletter,

 

but it always has an influence on what I have to say. So.

 

The big thing, I think, for me, coming out of all of this work, is an understanding (I want to say a sophisticated understanding or something like that, but I don't want to make it seem like nobody else knows this, because I'm sure that's not true) of what all this new tech does.

 

There are three major technologies, and it's interesting for me: the E-Learning 3.0 work that I did specifically did not name any of these three technologies. My argument at the time was, and is, that even back in 2019 they already existed; they were already a thing. And that's the funniest thing about this whole artificial intelligence thing from this year: people are acting like, oh, this just happened.

 

I've been working at NRC for 22 years now, and for all of those 22 years there has been at least one, and often more than one, research team working on artificial intelligence, and those research teams go back a long ways. This is not a "things suddenly happened" phenomenon.

 

This is a case where there's been a lot of work done by a lot of people over a very long time, and now, despite all the skeptics (and there have been many skeptics), it's beginning to show that it actually might work. So, we could talk a lot about what ChatGPT does and doesn't do, whether it will

 

create robot teachers or anything like that, but I think that is far too narrow a scope. As I think Sarah said, you know, there's a lot more out there than just ChatGPT. There's the thing on my phone doing the transcript; there's the AI, Topaz AI,

 

that I use to get rid of all the speckle in my photographs (how I love that). And there's more; I did a blog post just recently listing all the different ways I've been using AI, without mentioning ChatGPT once. But what I've done here is to show how these three major technologies are all going to come together in a really interesting and important way.

 

These three technologies are the metaverse, blockchain, and artificial intelligence. And what I think is interesting is that there's kind of a brand name for each one of them. For the metaverse, people are thinking virtual reality, right? Oculus Rift, Meta the company, Mark Zuckerberg. For blockchain, everybody goes right away to Bitcoin,

 

or, if they're more sophisticated, Ethereum. And then, of course, artificial intelligence: ChatGPT, or if they're in the know, Bard. But underneath each of these technologies is a whole range of other technologies. The metaverse is interesting not because it creates immersive digital worlds; I mean, that technology, as people have pointed out, has been around for decades.

 

I remember way back in the early 2000s playing around with Active Worlds. No, what the metaverse does that's different and new is to create persistent objects: specifically, persistent digital objects, but also object persistence from the real world, because you have mixed reality, right? From the real world into a virtual world.

 

But that was not a trivial problem. I remember working in multi-user dungeons back in the 1990s. In fact, one of the very first things I did with the Canadian Association for Distance Education, with some of my colleagues, was to have a CADE conference in our multi-user dungeon. As far as I know, that was the world's first.

 

The transcripts are still on my website. And the big question of multi-user dungeons was: how do you have persistence? How do you migrate this persistence from one MUD to another MUD? Because everybody wanted inter-MUD, right? Because that would be cool: you could go from one MUD to another MUD and keep your sword and all your money. Except the other MUD was a space MUD,

 

so that didn't make a whole lot of sense. And how would we do the currency valuations, and so on? It was a horrible mess, but there was an inter-MUD communications relay created, which was pretty interesting. Blockchain, again: yeah, sure, the financial services jumped all over blockchain,

 

and it became overrun with scams and things; that tells me more about financial services than it does about the technology. What's really important about the technology is that it's a persistent, tamper-free database mechanism, a distributed database mechanism, based on things like content addressing, Merkle graphs and consensus. I won't talk about the technology behind those, but blockchain is what guarantees the authenticity of the persistent objects that are described in the metaverse.
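Just to give a flavor of what "content addressing" means here, a minimal sketch (my own illustration, not any particular blockchain's implementation): the address of a piece of content is a hash of the content itself, so any tampering changes the address and can be detected.

```python
import hashlib

def content_address(data: bytes) -> str:
    # The address is simply the SHA-256 digest of the content itself.
    return hashlib.sha256(data).hexdigest()

original = b"credential: Stephen completed the course"
addr = content_address(original)

# Any change to the content produces a different address,
# so a stored address lets anyone verify the object is untampered.
tampered = b"credential: Stephen did not complete the course"
assert content_address(original) == addr
assert content_address(tampered) != addr
```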

 

The metaverse is where we work with them on an everyday basis; the blockchain is the tech that makes sure what we think is real is real. And then this all ties into artificial intelligence. What we have now are large language models, and everybody's going gaga over them. What large language models have done

 

is learn how words are arranged in sequences, and that's about it, and that's all they were intended to do, because they didn't need to do anything else.
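If you want a feel for what "learning how words are arranged in sequences" means at the very crudest level, here is a toy sketch: a bigram counter. A real large language model learns these statistics with billions of parameters and attention over long contexts rather than a lookup table, so this is only an analogy, with made-up training text.

```python
from collections import defaultdict, Counter

def train_bigrams(text: str):
    # Count, for each word, which words tend to follow it.
    counts = defaultdict(Counter)
    words = text.lower().split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def most_likely_next(counts, word: str) -> str:
    # Predict the most frequent continuation seen in training.
    return counts[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
print(most_likely_next(model, "the"))  # 'cat': purely word-order statistics
print(most_likely_next(model, "cat"))  # 'sat': no facts or meanings stored
```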

 

And, as an aside, it's sort of funny for me to see newspapers saying "you're stealing our content", right? All these AI systems are doing, all they're recording, is the arrangement of words. They're not recording facts, they're not recording data, they're not recording information. They're recording

 

the order of words. And if that's the IP that all of these organizations are trying to say "we own", that's a really slim footing to be trying to hold. But what it is doing is learning language, and language is an incredibly valuable tool.

 

And if you want your language to actually mean things, it's going to have to refer back to objects in some way. And there's a whole lot of philosophical stuff I won't go into when we talk about what it means for an AI to refer to objects. For our purposes here,

 

what it means is that the AI, via the blockchain, refers to the persistent objects described in the metaverse. And I could bracket "the metaverse" and say, you know, that includes all of the semantic web; the metaverse is a way of doing the semantic web except with persistence, and so on. These things all work together, and that's what the new tech does.

 

So I sort of shake my head sometimes when I hear about, certainly when I see, many of the recent AI experts come along and, say, offer a college course on writing ChatGPT prompts. And I start to go: well, okay, there's one born every minute, I guess.

 

Anyhow. So how do we think about all of this? I mean, this is a huge transformation any way you look at it. It's a huge transformation from the point of view of everybody going from whatever it was they were doing before to whatever it is we're going to be doing after. It's a huge transformation in the sense of, you know, there's going to be a lot of change.

 

Here are a bunch of new things we never thought we could do that now we can do, like robot teachers, for example. And, you know, George Siemens wrote just the... no, not George Siemens, Doug Belshaw... just the other day about Bloom's two sigma problem: how do we get massive,

 

or at least cohort-sized, training to get to the same educational level as the one-tutor, one-student model? Well, you make one of those two entities a robot (and hint: it's not the student). Yeah, robot teachers, and then everybody can have the benefits of individualized tutoring.

 

But, you know, robot tutoring is a lot more than, say, programmed instruction, decision trees, content recommendations, learning analytics, or any of the stuff that passes for AI today. So how do we think about that? Well, that's what this work was about, that I was doing. The SAMR model: substitution, augmentation, modification, redefinition.

 

It's interesting; there's a history to that on the Taylor Institute page, where it's credited. But the SAMR model is actually an old model. It had its origination as a description of what happened as a new technology diffused through an environment or society. So it was observational at first; later on it became normative, which is really strange.

 

Because, you know, people simply observed: okay, here's what happens when a new technology comes along. First, people use it to just substitute for what they're already doing, then they make it a bit better, then they begin to make changes, and then they redefine everything that they're doing with this new technology. That's just what happens.

 

And it was turned around, and we were told: no, this is how you should introduce a new technology to an environment. And I look at that and I say, so, do what they were going to do anyways? Personally, I prefer to go straight to R, without all the agony of S, A and M.

 

But, you know, it's really hard to do that. It's not impossible, though, and we'll talk a bit about that as well. On the same page is something called the TPACK model: technological pedagogical content knowledge. And of course it's just a combination of the three things, right? Technological knowledge, pedagogical knowledge and content knowledge.

 

There's the presumption that we have all three of those knowledges. I would question that quite a bit, actually. But it's popular, and it's a way for people to represent the different sorts of things you need to know when you're thinking about how a new technology would take hold, especially in

 

a digital learning environment. Okay. Of course, if you have robot tutors, why do you need pedagogical knowledge anymore? Well, life's mysteries. We looked at three approaches that are more common in the business environment. Again, some of you, probably all of you, will be familiar with these, but we don't really see them raised a whole lot, if at all, at an education technology conference, unless it's a corporate education technology conference, especially if it's sponsored by

 

McKinsey or the World Bank or something like that, in which case that's the entire conference. But I wanted to look at these. We used these as a basis for evaluating the sorts of things that we would have to ask about in order to determine why, or why not,

 

instructional technology was adopted. So there's the whole technology acceptance field. Basically, this is a range of models that were developed over time. We begin with Rogers' innovation diffusion theory, and it's interesting: you go into the background of that and you read through it, and this stuff is actually fairly old.

 

It's fairly old. It's really old. I don't have the date here in my notes, sadly, but it comes from the idea that technology diffusion can be thought of as analogous to diffusion in general; in fact, that's why they called it diffusion theory, as in physics.

 

And in fact this is the point on which my career as a physicist foundered. It was precisely at this point: doing the mathematics of diffusion, of one liquid in another liquid, so, diffusion of two liquids in a solution. And apparently you can do the mathematics of that. Well, they can do the mathematics of that.

 

I could not do the mathematics of that, and they realized at that time that I was not going to be a physicist. I could probably go back and do it now; I know so much more than I did in 1980. But what's interesting is that it's a mathematical process.

 

You know, you put one liquid in another liquid and it just happens. And so they thought, well, you take a technology, you put it in an environment, it just happens. And, you know, it's funny: there's so much criticism in the educational world about, you know, tech inevitabilism, or even tech solutionism, and it's all pushback against that sort of thinking, that it's like pouring one liquid into another and it just happens naturally.

 

And I can see why they said that, because you can't not get a result if you dump technology into an environment. You try it, right? Virtually no matter what technology and what environment you mix, there's going to be something happening. And the question was: can we understand what's going to happen?

 

And the same thing with technology and learning, right? If you apply a technology, say blackboards, to an environment, say schools, you know, they might just sit there for a while, but eventually one of the teachers is going to go over, try it out, and say: oh yeah, I can use that.

 

So yeah, there is something to that, but it was all a bit too mechanical. And so, you know, it wasn't just diffusion. So along comes the theory of planned behavior. It's not quite behaviorism, because there's planning; this is Ajzen's theory. But the idea is that the person's intention to act is the immediate determinant of that action.

 

So in other words, the idea was, for technology diffusion, you explain the diffusion (or shall we say the use) of a technology in an environment in terms of the person's intention to use that technology. Which is also kind of useful, but also kind of not useful, because what creates that person's intention?

 

Well, there we get the technology acceptance model, which is based in attitudes. You notice how we're becoming less and less deterministic, less and less linear, less and less mathematically based; I find that really interesting in the history of this. Yeah, so consider attitudes, rather than behavioral intentions,

 

as the main predictor of behavior. What sorts of attitudes? Well, for example: do you fear technology? Do you have hope for the future? Things like that. There's also the concerns-based adoption model, in which, according to Straub, technology adoption is a complex, inherently social, developmental process. Sounds a lot like Vygotsky,

 

sounds a lot like social constructionism, or constructivism, and there is an awful lot of overlap. And then finally they came up with the unified theory of acceptance and use of technology, or UTAUT (it's not on the slide), and then of course, because you couldn't just leave it, they came out with a UTAUT 2.

 

And so these were basically based on performance expectancy, effort expectancy, social influence (you know, that was the death of Google Glass, right? The nerd factor), and then the intention and the facilitating conditions. So all of these things come into play when we're talking about whether a technology will be used in education.

 

Let's bracket that for a second, to remember that what we're talking about here is stuff like AI, blockchain, metaverse (I would add fediverse, Web3, etc.). Okay. The second approach is risk management. I won't spend as long on it, but basically risk management, which you cannot do a tech project in government without, and that tends to be true of corporate environments too, although in education it appears to be very optional.

 

The Fine-Kinney method was, you know, the origin of risk management theory, and what they did is consider three factors: the exposure to the risk (you know, how much it would impact you if it were to happen), the likelihood of the risk

 

(how likely is it to happen?), and then the consequence: if it happens, how many people will die? Okay, maybe not that, but you know what I mean, right? They combined the exposure and likelihood and created a risk matrix. And the risk matrix creates one of these little diagrams here that we see (pointing to my screen):

 

the little boxes with the green, which is minimal risk, and the red, which is high risk. And the idea is you go through a risk analysis, consider all the possible risks that might come up, run each through this calculation, and then that's your risk calculation.
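To make that concrete, here is a rough sketch of that kind of calculation (my own toy illustration; the factor values and band thresholds are invented, not anyone's official matrix): the Fine-Kinney score is the product of the three factors, and the color bands come from thresholds on that score.

```python
def fine_kinney_score(exposure: float, likelihood: float, consequence: float) -> float:
    # Fine-Kinney risk score: the product of the three factors.
    return exposure * likelihood * consequence

def risk_band(score: float) -> str:
    # Illustrative thresholds only; real matrices define their own bands.
    if score < 20:
        return "green (minimal risk)"
    elif score < 200:
        return "yellow (attention needed)"
    else:
        return "red (high risk, mitigate before proceeding)"

# Example: a glitchy new platform that instructors would have to use daily.
score = fine_kinney_score(exposure=6, likelihood=3, consequence=7)
print(score, risk_band(score))  # 126 -> yellow under these made-up thresholds
```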

 

And then, in an ideal world, you describe what you're going to do to mitigate all of those risks. And that's interesting. And you can take it a step further by producing what is called an analytical hierarchy process model, where we're not just looking at risk generically for an entire project or an entire technology.

 

What you do is you take all the possible risks, you break them down into a hierarchy of different topics or different aspects of the project, then analyze the risks for each one of those. And then you run them through a calculation which rolls them all up; there's a long mathematical formula with one of those sigmas at the beginning.

 

And that gives you your overall risk rating. So the analytical hierarchy process model fits pretty well with the concerns-based adoption model, in the sense that for a given technology we're looking at a range of different factors, you know, the social, the technological, the legal, etc. We'll come back to that.
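Only as a sketch of the idea (a plain weighted roll-up, which simplifies a full AHP where the weights would come from pairwise comparisons; the categories, weights and scores below are invented for illustration, borrowing the concern areas discussed a little later): each branch of the hierarchy gets a weight, each risk gets a score, and the roll-up is basically a sum of weight times branch score.

```python
# A toy hierarchical roll-up of risk scores on a 1-10 scale.
# Weights and scores are invented; a full AHP would derive the weights
# from pairwise comparison matrices rather than assigning them directly.
hierarchy = {
    "technology":     {"weight": 0.30, "risks": [7, 4, 6]},  # e.g. access, reliability, complexity
    "process":        {"weight": 0.25, "risks": [5, 8]},     # e.g. management support, prof. development
    "administration": {"weight": 0.15, "risks": [3]},
    "environment":    {"weight": 0.15, "risks": [6, 2]},
    "faculty":        {"weight": 0.15, "risks": [9, 5]},
}

def rolled_up_risk(tree: dict) -> float:
    # Overall risk = sum over branches of (weight * average risk in that branch).
    return sum(
        branch["weight"] * (sum(branch["risks"]) / len(branch["risks"]))
        for branch in tree.values()
    )

print(f"overall risk: {rolled_up_risk(hierarchy):.2f}")  # on this made-up 1-10 scale
```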

 

Finally, validation protocols, which I won't go into; you probably know them mostly. But what I want to say here is that all of these theories, unlike so much in education, are actually run through a process much like the process that we ran. So we created our own study instrument, based on technology acceptance and risk management, and made some modifications to that.

 

And then my work was to run it through these validation protocols, to assess whether the instrument that was being produced was a valid (that is to say, both valid and reliable) instrument to identify why instructors were using technology. So, like I say, no results, but that was the method, and you might be interested in that.
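For a flavor of what a reliability check can look like, here is a minimal sketch of one common statistic, Cronbach's alpha (an illustration only; I'm not claiming this is the specific protocol used in the DRDC work): it asks whether the items meant to measure the same thing vary together across respondents.

```python
# Minimal Cronbach's alpha: a common internal-consistency (reliability) check
# for survey instruments. Rows are respondents, columns are items on one scale.
def cronbach_alpha(responses: list[list[float]]) -> float:
    k = len(responses[0])  # number of items
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [variance([row[j] for row in responses]) for j in range(k)]
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy data: four respondents answering three Likert-style items.
data = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 2]]
print(round(cronbach_alpha(data), 2))  # values near 1.0 suggest the items hang together
```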

 

So, the big question here, and this is the turning point in the talk: what did I learn from all of this? Because that's what you're here for, right? I hope you're here for that. Okay. So, first of all, we can come up with kind of a risk registry, and the risk registry,

 

as I mentioned, is what we get when we combine the concerns-based model with the risk assessment model. This is based on a paper by Pat Reid, 2014, with the tree model redone in, yeah, the baby shower theme. So, basically, five major factors or areas of concern: technology, process, administration, environment and faculty.

 

I always found it interesting that students aren't in there, but we'll leave that aside for now. And then in each of them you have all sorts of factors that would impact whether technology is used: technology includes access, reliability and complexity; process would include management support and professional development; and so on for administration, environment and faculty.

 

You can read those for yourselves. And it is a nice model. I'm not sure how complete a model it is; it's certainly complete enough for a corporate environment. I think that in a more open environment we'd probably want to include some more factors, but it's a good enough model that we can actually draw a pretty good assessment of whether or not a digital transformation initiative will succeed or fail.

 

I mean, if we can answer positively in all those areas, probably it would succeed. But if we identify huge areas of concern in some of these (you know, they're red on the risk matrix), then there's a really good chance that it will fail. (Coffee.) Okay.

 

But wait: what do we mean by "fail"? Yeah, the question that they don't ask over on the corporate side, because they know what they mean. These are from Roan, 2022; if you download the slides, you'll find the reference in the notes. So: Hershey's fast-tracked a tech rollout,

 

didn't test it properly, and lost 100 million dollars worth of orders that they could have fulfilled. I think of all that wasted chocolate and my heart bleeds. Hewlett-Packard, 2003: the new ERP was not configured to sync with the old system, and so again orders went unfulfilled; cost them 160 million.

 

MillerCoors, same sort of thing: they had a hundred-million-dollar tech transformation project that was oversold and under-delivered, dragged on for three years, never did finish, and the company basically gave up and sued their implementation team. Revlon lost 64 million dollars in unshipped orders.

 

Etc. So that's what "fail" looks like in the corporate environment. Of course, that kind of fail is meaningless to us, right? Because, except in a very broad sense, we're not worrying about share price and unshipped orders, except for those of us who are, you know, in the corporate learning environment, in which case it matters a lot.

 

But the corporate learning environment will, I think, recognize that there's a broader sense of "fail" at work here. I thought I'd look up some other kinds of failure, just for fun. The Zune was a failure. Sony Betamax was a failure. The BlackBerry Storm, which was basically the end of BlackBerry.

 

Fair enough. Turns out that manipulating your results and pushing a non-working product to market results in failure. Merck's Vioxx increased the risk of cardiovascular events; that's failure. The Ford Pinto is a failure, an explosive failure. The McDonald's McDLT, which by all accounts was one of McDonald's better hamburgers, but it was packaged in two separate containers,

 

wasteful, so wasteful. And of course Google Glass, which I mentioned earlier, never did get past the nerdiness factor. So there's a broad scope of what we mean by succeed or fail, and this is the sort of thing that I encountered a lot when I was doing the ethics of artificial intelligence.

 

People are so worried about a few very specific ways that artificial intelligence can fail. And they don't, first of all, look at what it can do (you know, many of the skeptics don't, though by no means all of them). But more to the point, they don't consider the wide range of what could possibly count as a failure,

 

and even the question of whether what they count as a failure is a failure. And that's why I really liked Sarah's talk, because basically she took the idea of, you know, it allowing students to cheat (oh, that's a failure) and she turned that around and said, well, now it means we'll just get past this stage in our development as educators, which I think is exactly the right approach to take.

 

And that's the important thing. Failure doesn't just mean people are not using the technology, even though in a lot of boardrooms that seems to be what it is. Minimally, you know, success means more than simply "people are using the technology". Success minimally means people are using the technology and not getting killed,

 

or at least people are using the technology, not getting killed, and getting some benefit from it. You see what I mean? It's not just whether you use the technology or not that is success or failure. You know, even if the whole world is using ChatGPT, that's not the determinant of whether ChatGPT

 

is a success or a failure. The whole world is using Facebook. From this pure corporate perspective, yeah, Facebook is a huge success. But there are other factors involved as to whether we think of Facebook as a success or a failure. But anyhow, let's go back to the question of whether or not people use a technology.

 

Because, you know, ultimately what people decide does have some bearing on this (not as much as we'd think, but it does). When people don't use a new technology, there's often a very good reason for that. Although sometimes people use the technology when they really should not have, and yeah, there are lots of examples of that that we can think of. So, we'll do

 

a little rapid version here: here's why our new technology will fail, my version. So, first reason: nobody knows what the new tech does. Remember, a bunch of slides earlier I had a slide that said, here's what the new tech does. For the most part, nobody knows that. They think that ChatGPT gives them answers to questions, for example. They think that

 

blockchain is used for digital currencies, and that's where it begins and ends. They think the metaverse is Oculus Rift. But these technologies are a combination of a range of underlying technologies, and when these major technologies are brought together, we get something really interesting and complex, so much

 

so that it's not really possible to say what's going on inside it. And it's not just neural networks that are like that (although, let's be clear, neural networks are like that). We can't actually say what technology is doing in our society. I know there's an awful lot of, you know, technology experts out there

 

who give us these nice statements about "this tech does this, this tech does that". But that's a very simplistic understanding of what a technology does. Look at a car. What does a car do? A car transports people. If you think that's what a car does, and that's the beginning and the end of the story, you don't understand the car.

 

You don't understand it as a delivery vehicle, you don't understand it as a status symbol, you don't understand it as a rite of passage, etc., etc. And all of this applies to all of these technologies. So there isn't going to be a simple story, and nobody can give you that story. The best we can do

 

is, well, as Samuel Arbesman says, we can kind of interpret what's happening, a bit like TV meteorologists. That's a really good example, because the phenomenon is exactly the same. People ask you to explain what an AI is doing. Even now, with a large language model (you know, it's got a billion inputs),

 

you can't tell me what it's doing. Nobody can. The best you can do is some kind of interpretation. But we have to be careful not to mistake the interpretation for the fact. We've already done that with the brain: we interpret what a human brain is doing, and we come up with all kinds of concepts that may or may not apply.

 

You know, do people really have beliefs? Do people really have souls? We make up these ways of characterizing what's happening, but what's actually happening is many levels further down and a lot more complex than people realize. If you don't know what the new tech is doing, it's really hard to adopt it.

 

That's why we need people who are able to kind of describe, like a TV meteorologist, what a new tech is doing. That's what I'm trying to do here, a bit. Another reason why the tech fails: the new tech is harder to use than the old tech. And here we go back to the BlackBerry Storm again.

 

But I'm thinking even of, oh, what was it? I was thinking of passwords. Okay, passwords are a real pain, and we'd all like to get rid of them. But every single replacement for passwords that has come along is way harder to use than a password. You know, things like two-factor authentication, the special program on your computer that you use for your two-factor authentication,

 

the soft keys, the SSL keys, etc. It's really hard to use, and you're just trying to do the same thing that you were doing with something that already worked.

 

And that leads to the next point: the new tech is being used to do the exact same thing as the old tech, which the old tech did perfectly fine, but the new tech takes longer and adds extra steps. This is the classic: we have a new technology, let's create a classroom in it.

 

And here we have a VR version of a classroom, with avatars which, interestingly, have only heads and hands but no bodies (except the teacher, who has a body but no legs). And the teacher is writing (can you see that?) writing on a whiteboard. Why would we do that?

 

What's going to convince people that this kind of technology is better than a classroom for bringing people together? Yeah, no wonder people said, yeah, we need to get back to in-person learning again, because remote learning sometimes looked like this.

 

There may be other unexpected side effects of the new technology. These side effects might be directly related to the technology, as in the case of the nuclear bomb, or they may be indirectly related to the technology but represent their own unique worst-case scenario. For example: a heavily centralized social media network that is used to share news and information among journalists and activists around the world

 

being taken over by someone with zero tolerance for journalists or activists. No names. You know, you have to put that in your risk matrix, right? What if someone who is not you ends up in control of your technology? I forget his name; it's Sam Altman,

 

the guy who runs ChatGPT, or OpenAI. I'm sorry, I forgot his name; I have Sam Waterston on the brain, but that's Law & Order, which is also related, but I digress. He came out with an article that I ran in my newsletter the other day,

 

like a day or two ago, calling for AI to be regulated, where regulation meant the companies would get together, come up with a regulatory framework, that leading governments (not all, just leading governments around the world) would endorse. And that sounded great. I applied the "what if we got one of those" test to it

 

and realized that we probably don't want them regulating their own technology, and made that comment. But it was interesting: the very next day the European Union came out with a framework that they said should be applied to things like OpenAI. And Sam Altman (I hate having a dyslexic memory),

 

his first reaction was, well, following those regulations would not be feasible; if we had to follow those regulations, we'd just have to shut down. So yeah. Regulations.

 

This is a very common one. I'm working on a new project, which I'll talk about sometime in the future, and this is the biggest concern for me: the new tech doesn't address actual needs. And we're going to be clear here what I mean by that: actual needs.

 

All right, I don't mean the needs of the people who created the technology. I don't even mean the needs of the people who paid for the technology (that, by the way, characterizes more than 90 percent of all educational technology). No, I mean the needs of the people who are intended to use the technology.

 

This is really important, more important than you can imagine, because there are two things that fall out of this. First, if the technology doesn't actually address the users' needs, there's virtually nothing you can do to get them to use the technology, let alone use it safely, carefully, properly, as intended, etc.

 

Second, if the technology does address users' actual needs, there's virtually nothing you can do to stop them from using it. See, tech diffusion is a thing after all; it's just a different thing than people thought, right? If a technology is actually going to serve a need, it's like a better mousetrap: people will beat a path to your door.

 

It's funny: think about the smartphone. The smartphone became widely diffused throughout the Government of Canada workforce. Everyone has one; well, maybe not everyone, but everyone I know has one, and everyone they know has one. There are tens of thousands, more than 100 thousand, probably almost 170 thousand smartphones in the Canadian civil service.

 

All of them were actually bought by the people who use them; the government, for the most part, didn't pay for them at all. In fact, the government supported BlackBerry and fostered that program for a while. But the smartphone, unencumbered, was so useful that people would buy their own and actually carry two smartphones around

 

so that they could have a proper smartphone; that's how useful it was. And so the technology adoption plan in the government became: how do we make it so that these smartphones can actually work for people? And they're still struggling with that to this day, because of course, you know, the government has security needs,

 

it has safety needs, etc., and a lot of these guarantees aren't built into smartphone technology because, well, we go back to this problem.

 

Another problem: the new tech is unreliable and glitchy. That's what happened to the government's BlackBerry plan (part of what happened to it). And we've all seen these things, right? The blue screen of death. The Zip disk click of death, which probably predates many of you, so maybe look that one up; it's quite funny.

 

I had a Zip disk once, and yeah, it went click, click, click. But the big one, of course, is bandwidth and slow internet. My office is a Wi-Fi dead zone, so yeah, they're not able to get me to use Wi-Fi in my office, big surprise.

 

I have a whole range of experiences with unreliable and glitchy technology, and I've collected some of those experiences in a playlist called "Stephen Follows Instructions", where I try to use a new technology following the instructions that were provided with that technology, with hilarious results. Another failure vector: the new tech was a lot more expensive

 

and had to be rationed or partially deployed. That was the other thing with the government's mobile phone initiative: they only gave them to managers, but the people who really needed them were the people who were out in the field. The managers could talk to each other, but they didn't really have any way of communicating with their staff,

 

and so none of this was particularly useful. Although when they travelled to conferences they did have their phones and they could call their office, so I guess there's that. Today we're seeing it with VR headsets, things like the HoloLens and so on: horribly overpriced; it's not something that you could widely deploy. It was the same in the earlier days of computers in education.

 

And they still use this kind of picture to this day, where you have one computer, a person (usually a boy) working with the computer, and three or four people standing behind: the boy, the girl, another girl and the Black boy, and maybe a teacher in the back. And that was the model, right?

 

Terrible. And I always said, you know, a computer is first and foremost a personal device, and I maintain that to this day. Your education is first and foremost a personal thing. AI, when it really comes on stream, will be first and foremost a personal service. We'll have our own AIs; forget about

 

ChatGPT, wait till we get our own personal AIs. I have one; I use it. It's on Feedly, my RSS reader; it's called Leo. I have my own instance of Leo, I train it myself, and it works really well. That's the game changer. Here's one that you're all pretty familiar with:

 

the new tech was creepy and tracked everyone's every move. And this could be everything from video surveillance to red light cameras to keyloggers to your computer being monitored for "service considerations" or whatever. People don't want every move to be monitored, and if you are monitoring their every move, especially if it's going to affect their pay

 

or their standing in the company, they're probably not going to want to use that technology, because they can reliably predict the bad consequences that will follow. So, just about to wrap up, because I know it's kind of getting on time. I had another eight slides talking about causes; I combined them into one slide.

 

And, you know, these are the obvious things, right? These aren't the reasons why tech fails, but these are the things that create the reasons why tech fails. So: they didn't consult, they didn't pilot, they didn't market or introduce the technology, they did not conduct a risk assessment. I still think nobody's conducted a ChatGPT

 

risk assessment, even among the critics, not a proper one. Proper budgeting and investment was not done (think Twitter there, right?). Business processes never adapted to match the capabilities of the new tech; they kept doing the same old thing, they kept reproducing classrooms, they kept reproducing lessons,

 

they kept thinking that learning is about remembering stuff. And then finally, a culture of trust was never established; the people who used the technology never did trust the people who required the technology, and of course you're not going to get a good, successful transformation out of that. So, these are the reasons.

 

These are not the reasons why tech transformations fail, although I see these counted as reasons a lot, so much that I made a special slide for it. This is not why transformation fails: staff and student resistance to change, fear of losing their jobs, or the speed of change. No, you're blaming the user here.

 

And again, if the technology has any value to the user, they're going to use it. So it's not simply that, oh, they're afraid of change. I hate that kind of remark, because it's so not true. Look at all the changes we've had in society over the last decades.

 

We, staff, students, everyone, have accommodated so much change. It is ridiculous to say we're resistant to change; nobody in a rational mind could say that. Second thing: human resources, lack of training, not having the right talent, insufficient leadership. Again, none of these is necessary to make a technology succeed.

 

Look at, what was it that just came out? Zelda. It's something about Zelda; it was a big game. I don't play it (I play No Man's Sky), but The Legend of Zelda: nobody's being trained to play that. We don't need to have the right game-playing talent in order to make this a success.

 

The people buying it... like, actually, I went to the Rideau Mall, and they were lined up down the hall and around the corner the morning this came out. There weren't leaders telling them to line up; they didn't need leaders. So no, it's none of this stuff.

 

And then the horrible psychology theories that get trotted out so frequently in education: not having the right mindset, not having a growth mindset, failure to innovate, failure to understand innovation, failure to disrupt, etc., etc. None of that has anything to do with this. You know, maybe one or two outliers might just not want to, you know,

 

ever change anything ever again, but mostly, again, it has nothing to do with the psychology of the people using the tech. It really does have to do with the tech, and how useful it is to the people using the tech. By way of one final note, the example I gave earlier: look at what everybody has been doing with the mobile phone.

 

Huge tech transformation. We paid for them ourselves, mostly; we retrained ourselves. The phones just worked, mostly. And they made a lot of things (a ton of things, like, say, instantly creating a transcript of this talk) a lot easier to do. They were so useful we went out and spent our own money so that we could use them at work.

 

So that's the context in which we should be viewing all of this AI-slash-blockchain-slash-metaverse stuff. You know, we're so worried about what other people do: they might do the wrong thing, they might act unethically, oh no, people might lose their jobs, oh, copyright, copyright, plagiarism.

 

None of that stuff is relevant. All of this is, you know, well, we have a word for it in the open source community: fear, uncertainty, doubt. That's what's being played out right now with respect to these technologies, and I wish I had another hour to continue on, just analyzing that into its finest details.

 

But I don't, so I can't. This is the kind of analysis we should be doing, not the scaremongering, "let's get the ethics right" kind of analysis. Because we don't have ethics right anyways. We've been working on ethics for three thousand years; we still haven't figured it out.

 

We're not going to get it in the next two or three months. So let's think about these technologies as technologies, and let's understand whether or not they're successful or whether they fail by thinking of them and understanding them as technologies, and not the moral quandary of the millennium. Thank you.

 

A little flourish, because this is, after all, a baby shower template. Thank you very much.

 

I actually went down a rabbit hole yesterday, in fact, looking up "the future is already here, it's just not evenly distributed", because it was misquoted in a newsletter that I read; "the future is already here, it's just not widely distributed" was the misquote. But I wanted to check, and of course Gibson never actually wrote it, but he did say it many times.

 

And in videos: it's his quote, and it's a real quote. That was an hour well spent.

 

Yep.

 

Yeah.

 

Well, I didn't build this presentation around them; I mentioned them because I thought they were relevant and should be included, because they formed the background of thinking on this topic. So I would probably approach the topic with something like UTAUT 2, and/or, you know, I really did like the breakdown

 

that was done by Pat Reid, because that combines the two sides, the risk management and the technology acceptance variables; it combines them quite well. That's the approach I would take: something like Reid's model, and probably I'd be inclined to extend it

 

to take it outside of the corporate context.

 

Yeah.

 

Yeah, that's quite right. You know, people have, to my mind, undervalued the capability of technology in general, and AI-type technologies in particular, to achieve greater efficiencies and cost savings. And yeah, someone like me comes along and says "greater efficiencies and cost savings", and people go, ugh, capitalism. All right, yeah, okay, I get that. But when we're talking about providing a service to a group of people, especially, you know, a small demographic

 

that isn't going to be able, as a group, to pay for large expensive technologies because there just aren't enough of them, then these things really do come into play, obviously. That's why I was willing: I actually paid for my own phone. And yeah, it's a backup for corporate tech, and believe me, it has been.

 

I had to go back to the office starting in March; my computers still don't work properly. But this always has, even in the Wi-Fi dead zone, because I have, you know, a mobile phone. But that's why I bought a Pixel phone specifically. My first was a Pixel 4;

 

now I have this one, because it would transcribe my talks. And I had always wanted to transcribe my talks, because people who are unable to hear weren't able to get any benefit out of them, because they're audio, right? And I personally love audio; audio is my favorite medium. But if you can't hear, audio is a terrible medium. So I've been using this, and I've been posting the transcripts on my presentation page.

 

And it takes me... well, it took the small investment in this phone, which I was going to buy anyways, and it takes me approximately (I've done it a few times now) three or four minutes to get the text from Google Docs, which is where it ends up, to my specific presentation page.

 

I still do that manually because I'm stupid, but three or four minutes and now I've made my talk accessible to people who are hard of hearing. That's how we should be thinking, right? Otherwise, you know, otherwise the problem of providing accessible resources is intractable, and it has been for a long time, and it creates actual other barriers.

 

You know, you think of the ADA in the United States requiring one of the University of California institutions (I forget which one) to take down their open access learning materials because they weren't ADA compliant. Well, that's not a step forward. I totally get the concerns

 

that these materials should be accessible. I totally agree that these materials should be accessible, but the way to reach that isn't to require things that aren't accessible to be taken down; the way to reach that is to figure out better, cheaper and more efficient ways of making the resources that already exist accessible to everyone else.

 

We have the same problem in the federal government in Canada with translation (and I've been working with some European groups recently; well, we thought we had a problem with translation). And again, there's this, you know, this rule that anything that gets posted on a Government of Canada website has to be in English and French.

 

There are really good reasons for that, and I totally support that; I'm, you know, 100% behind the principle of bilingualism. Indeed, I would extend it further: I think it should include Indigenous languages, obviously, especially in the north, but really across the country. I think it should support languages of significant minority populations.

 

But the cost: it's really expensive inside the government to translate a document. Now, fortunately, it's always someone else's budget, but still, you're sitting there, it's taxpayers' money, and you're going: I can't imagine we're spending this amount of money to translate a document. And there's been a lot of resistance over the years to using automated translation in government.

 

Now, at first, for some very good reasons, right? It was awful. But automatic translation now is getting so good that there's less and less of a reason not to use it, and, you know, even if it produces just a workable first draft of a translation, that should be at least enough to get us off the ground.

 

That's the approach to take. And I know I went on about this, but I've encountered this so many times. And, you know, I've always, always supported the principle of accessibility first, you know, universal design for learning, all of that. I think all of those are great things:

 

tagging, image captioning. That's why I like Mastodon so much; it almost requires you to caption your image. But I'm sitting here thinking, I'm typing a description of an image that I'm looking at like a chump. Why am I doing this? It's taking me longer to type out what's in the image than it took me to create the image.

 

There is an AI that does that, but right now it's terrible. But when it is good (and that's not going to be long), I should just push a button, it'll produce the caption, I'll say, yeah, that's good, and continue from there.

 

Yeah. Yeah, not unreasonable.

 

Last question.

 

Yeah, yeah. That's wise.

 

Yeah.

 

Yeah.

 

Okay. You could take the word of ChatGPT over me, if you wanted, and that would be your right. I mean, you'd be wrong, in my view, but let's answer that in a more diplomatic way. We know how ChatGPT is basically constructed.

 

You know, it's a deep learning system. It's basically built out of neural networks (a large number, I forget how many, but it's a lot) that were basically pre-trained by being fed a ton of data of actual examples of language use. And it was pre-trained in some specific ways,

 

which are described in the name: generative pre-trained transformer. We know that. Now, if we look at that, we can go all the way down to the bottom level. And this is where all of this stuff started for me, because the bottom level was the level

 

I was looking at in the 1980s, when people like Geoffrey Hinton and, what are their names, Rumelhart and McClelland and others were doing their groundbreaking work in neural networks. That level involves signals going from one unit to another, the adjustment of the weights between units, the changing of the bias or sensitivity of a unit, maybe the creation of loops to create recurrent neural networks, and so on. We could talk about that.
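To picture what that unit-level description amounts to, here is a minimal sketch of a single artificial unit (a toy, nowhere near the scale or architecture of a real deep network): it sums weighted signals, adds a bias, squashes the result, and "learning" is just nudging those weights.

```python
import math

def unit_output(inputs, weights, bias):
    # A single unit: weighted sum of incoming signals plus a bias,
    # squashed through a sigmoid "sensitivity" function.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def nudge_weights(inputs, weights, bias, target, rate=0.1):
    # One tiny learning step: move the weights and bias slightly
    # in the direction that reduces the error on this example.
    out = unit_output(inputs, weights, bias)
    error = target - out
    new_weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + rate * error
    return new_weights, new_bias

weights, bias = [0.2, -0.4], 0.0
for _ in range(50):
    weights, bias = nudge_weights([1.0, 0.5], weights, bias, target=1.0)
print(round(unit_output([1.0, 0.5], weights, bias), 2))  # drifts toward 1.0 after training
```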

 

At that level, at that basic level, that's what it's doing. Same with the human brain, right? The human brain: a whole bunch of neurons; signals go from one neuron to another based on electrochemical potentials, spiking frequencies, the synaptic gap, etc. Not a useful level of description in either case.

 

So what happens is we move up a level, a layer of abstraction, and we've done that with both humans and with machines. And so we have, in humans, basic psychology. It might be, for example, folk theory, much like we saw with UTAUT, about attitudes and expectations and desires and goals. Or it might be a psychology

 

that's a little more out there. And I won't give examples; there's no need at this point, although I have my favorite whipping posts to complain about, which I do in my newsletter. Same thing with the neural networks: as you move up the layers,

 

we begin to describe what they're doing. And yeah, that's how we get language like "ChatGPT is hallucinating" or "ChatGPT is trying to give you an answer to a question". The thing is, the further you get away from that base layer, the more abstract your description has become, because you're using a single concept to represent

 

a non-defined set of possible neural states, in either the computer or the human. By "non-defined" I mean that there's no way of drawing the set of all and only those states that correspond to whatever it is you just said the thing is doing. So if you say "ChatGPT intends to do such and such", that statement does not reduce to any set of statements about the neural net underneath.

 

And that's important, because that means that as you get higher and higher up, your statements become more and more abstract and less and less likely to be true, and, worse, harder and harder to falsify. So when ChatGPT tells you, in all earnestness, "I'm trying to give you answers to questions that you have", it's not.

 

Because there's no way to get from "I'm trying to give you answers" to "here are some intermediate states" down to "here is what's happening at the neural level". There is no "there", right? And it's not that it's lying; because, again, strictly speaking it is lying, because it's doing the same thing that humans do when they lie,

 

but it's not that it's lying in the sense that it's giving its best shot at providing a higher-level description of the incredibly complicated thing that's going on underneath, and it's just something that is probably an inaccurate generalization, because it just doesn't have the data to work from.

 

So that's how I would approach your sort of comment. And it bothers me. This is now the takeaway from this, and why I spent the time on it. People say: I'm not going to say that ChatGPT is thinking, or hallucinating, or intending, or trying, or anything like that.

 

All I'm going to say is ChatGPT is generating, except it's not even generating, right? It's just stringing sequences of words together. That is, you know, a level of description maybe one or two levels up from that basic neural structure thing. That's what it's doing. But there's no reason to talk like that.

 

And the reason why people talk like that is because they say, well, those are the things that humans do that ChatGPT doesn't do. But humans don't do that either, technically. Right? Humans don't hallucinate, technically. They have these neural events and cascade phenomena and floods of signals, and at several layers up we call that hallucination, even though it's not really hallucination;

 

it's just this neural phenomenon. Well, that's what's happening in ChatGPT: it's this cascade of neural events that we call hallucinating because it resembles an awful lot what humans seem to be like when they hallucinate. So take that as you will, right? ChatGPT is trying; it doesn't have the experience to describe its own behavior properly.

 

But that's okay. Neither do humans.

 

I hope that was useful. I know it was long, but I hope it was useful.


Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Dec 22, 2024 01:26 a.m.

Creative Commons License.
