Downes.ca ~ Stephen's Web ~ Introduction to AI Policies, Guidelines and Frameworks

Stephen Downes

Knowledge, Learning, Community
Introduction to AI Policies, Guidelines and Frameworks


Unedited audio transcript from Google Recorder

All right, so hi everyone, I'm Stephen Downes, and welcome to the session. I appreciate that you've taken the time out of your busy schedules to join me, and to join us here at the micro-learning sessions. A participant has enabled closed captioning; that's awesome. Okay, let's see if the slide share works. This meeting is being live streamed, got it.

So, share screen. Not share screen one, which will look kind of messy; I'm going to share screen two, which will look less messy. There we go. Okay, it's my friend the downy woodpecker. I always like to have a few bird friends when I do a presentation.

Yeah, we should start from the beginning. So, I'm hoping that looks nice to you. Looks like it's working. And the meeting is being recorded, got it? So today, this is a follow-up to the Introduction to Artificial Intelligence that I did a few weeks ago. If you didn't see that, the recording is available.

This one follows up by talking about AI policies, guidelines and frameworks. So we're not talking so much about how AI works in this session; we're talking about the approaches that different governments, organizations, etc., have taken with regard to policies, guidelines and frameworks.

So the workshop, as the slide says, will look at recent policies, guidelines and frameworks. I can't do them all; there are more than a hundred of them, perhaps even more. But I do want to talk about some of the elements these policies and frameworks have in common, how people go about creating them,

and the sorts of things that they address. I also want to give all of you the opportunity to engage with the topic, by thinking about the important values and principles that govern your own thinking and your own approach to the ethics of artificial intelligence. So I have a fun activity planned.

But we'll get to that when we get to it, with luck. I do want to look at a question, just to frame our discussion. And the question, if you had to boil it down, is something like this: artificial intelligence is a new type of technology. It's fundamentally new in many ways.

It offers capacities that we haven't seen before in any of our technologies. But with that, it perhaps presents risks that are unlike any of those we've seen before. I mean, there was even a well-known statement recently about the possibility of artificial intelligence taking over the world and exterminating humanity.

I'm pretty sure that won't happen. In fact, I'd put money on it. But there are risks, there are real risks, and these range from bias and prejudice, to automated decision-making without appeal, to our loss of privacy, to AI bringing us to a culture of blandness and, you know, a retreat to the middle ground.

And more, so much more. So governments, industry, societies, organizations, pretty much everybody, have been grappling with this question: how do we govern ourselves with regard to the creation and use of artificial intelligence technology? It's not a simple question, and it doesn't have a simple answer. And there are different parts to the question,

especially when we look at the different codes and frameworks that have been developed. A lot of the time you'll see them presented, maybe in the media, as just this list of principles or this list of conditions. But these statements are generally organized into different parts, and this is one framework here that we see on this screen.

It's not the only framework that exists, and I use this one because it has a number of different components that are used by many, but not necessarily all, of the statements of principles and ethics. Off to the left-hand side, under the heading of knowledge, is what we know about AI, what we believe about it, what our experiences have been,

and what we expect to happen when we use AI. It calls this created reality. We might call this, perhaps, our worldview, or our perspective, or our understanding. And I think this is going to be different from person to person, and certainly from society to society. These inform what might be called underlying principles.

And we'll look at some of those, and even assess some of those, in this presentation. These are influenced or informed by fundamental values, and again, we'll look at some of those. The principles inform codes of conduct, which may relate to ethics, may relate to morals, may relate to laws, and ultimately the objective is to

give us a guide, and maybe a motivation, to behave in certain ways. Now, if you don't like this framework, that's fine. There's no law that says you have to follow this framework, and that's one of the interesting things about AI right there: there isn't agreement even on the fundamentals of how we talk about the principles of AI.

This isn't a session about ethics, and I don't want to delve into the theories and principles of ethics generally in any great detail. But I think it's useful at this juncture just to quickly characterize some major approaches to ethics. Again, we think of ethics a lot of the time as, you know, codes of conduct or rules for the governing of behavior.

But again, how we think about ethics, what our moral principles are, varies from person to person, because we come from different perspectives and backgrounds. So I've listed four major approaches to ethics. These characterize four major philosophical schools that developed over the last 2,500 years. Again, this list isn't definitive.

It's a guide only. One of the major schools of ethics is based on virtue and character. It has its origins in the Greek writings of Aristotle. It's characterized by the principle "be all that you can be". The idea is that ethical behavior results from being an ethical person. And an ethical person

is described as, well, now here we get into the details, right? Honest, virtuous, sharing, compassionate, curious, willing to learn. There's a whole list of traits that might characterize an ethical person. Or there might be one essential trait that is over and above all other traits; maybe honesty.

Right, an honest person is an ethical person, and all the other behaviors follow from that. That's one approach. Another approach, based in the philosophy of Immanuel Kant, is duty. And Kant says two things. He says, first of all, our ethical principles should be such that if I have a principle, then everybody else in the world

should be able to have that principle, and things would still be okay. So if one of my principles is "take whatever you want", obviously that wouldn't work if everybody in the world did that. So maybe something like "take what you need" might be better, or maybe something like "take your fair share" might be even better, because everybody could do that.

So that's one aspect of duty: this universality of the principle. The second is Kant's idea that people should be treated as ends, not means; you shouldn't use people. Rather, your ethics should recognize the inherent worth or value of an individual person. So from that we get a set of obligations or responsibilities or duties

that tell us how we ought to conduct ourselves in the world. Another major school is consequentialism. This is based in the writings of people like Jeremy Bentham and John Stuart Mill, and basically it's the principle that the ends justify the means. If you produce a good outcome, that's what your ethical theory should guide you toward.

What is a good outcome? Well, Mill says it's happiness: the presence of pleasure and the absence of pain. But what counts as pleasure? Is it just, you know, pure pleasure in the moment, like a hedonist might like, or is it one of the higher pleasures,

like the love of knowledge? Finally, we have the social contract, and I lump together a bunch of theories under this. The most famous is by John Rawls, in a book called A Theory of Justice, from the 1970s, although social contract theory has a history that goes back all the way to people like Thomas Hobbes and John Locke.

And the idea is that ethics are created by agreement: that we get together, either actually or theoretically, and decide for ourselves what's right and what's wrong. This social contract might be a religious contract, where we all agree to adhere to a particular set of values as stated in our religion. Or it can be a cultural contract, or a governmental contract, like a bill of rights or a constitution.

There are various ways of framing that. So as you can see, there are different approaches to ethics, different ways we could characterize our approach to the ethics of artificial intelligence. Okay, let's have some fun. What are your values when it comes to artificial intelligence? So if you scan the QR code on your phone or, better, follow this link on your computer

(the link will show up better), that'll take you to a screen. And in fact, I'll click on this link so we can see it here. It'll take you to a ballot. There are actually three elections; just do the first one for now. And you see, on the left-hand side,

are fairly common ethical principles that relate to artificial intelligence: respect, common cause, fairness, etc. They're randomized, so the order will be different for you. And then across the top, you can rank them: first choice, second choice, etc. So, like this. All right.

So you just fill in your rankings, and when you're done that, it'll show you what your rankings are. You can do as many or as few as you like, and then vote, and go to the next election; just click on this button here. So it'll take a little time for you to do that.

So I'm not going to look at the answers right away, but I do hope you participate in that, otherwise this won't work, right? I do hope you participate, and we'll come back to the answers that you've provided later on. I predict, and I think I'm pretty safe in predicting,

that different people will put in different answers. And I say that because there's this presumption out there in the world, and we'll look at one article, one statement, that actually says this explicitly, that everybody agrees on the same set of principles and ethics. I don't think that's true.

And I think there's good evidence to show that that's not true. This quiz is an example of that sort of evidence, or will be, if you all fill it out. If you don't fill it out, well, okay.
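As an aside for the curious: a ranked ballot like this one can be tallied in more than one way, and the choice of method changes what "the winner" means. The sketch below is purely illustrative; the principle names and rankings are made up, and the runoff logic is a generic instant-runoff elimination, not necessarily what the voting tool used in the session does.

```python
from collections import Counter

def first_place_tally(ballots):
    """Count only each voter's first choice."""
    return Counter(b[0] for b in ballots)

def instant_runoff(ballots):
    """Eliminate the weakest option each round until one holds a majority."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(tally) == 1:
            return leader
        # Drop the option with the fewest first-place votes, then recount.
        weakest = min(tally, key=tally.get)
        ballots = [[c for c in b if c != weakest] for b in ballots]

# Invented rankings over three principles from the ballot.
votes = [
    ["fairness", "privacy", "non-maleficence"],
    ["privacy", "fairness", "non-maleficence"],
    ["non-maleficence", "fairness", "privacy"],
    ["non-maleficence", "privacy", "fairness"],
]
print(first_place_tally(votes))  # non-maleficence leads on first choices
print(instant_runoff(votes))
```

The two methods can disagree on real data, which is one reason different people filling in different rankings matters.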

So, we'll get rid of that.

So, I said this talk, this seminar, would be about ethical codes and ethical principles. It's based on work that I've done surveying ethical codes. I've got an ethical codes reader available online at the link you see here. And I will say, these slides, you can find these online,

where it says the URL at the bottom here, presentation 575, or if you scan this QR code on your phone, you can find the slides there as well. They're not up yet, but this will take you to the presentation page, where they will be available after the session.

And the reason why I'm doing it that way is that I'll be adding to these slides the results of your participation in our survey, so be sure to fill it out. So anyhow, I surveyed more than seventy, and there are even more now, because these statements keep coming out, different ethical codes of conduct

for the different professions. So I'm just going to fix the title below there. There we go, that's better. Different professions dealing with knowledge and technology. So I didn't just look at artificial intelligence codes of conduct, but also things like journalism, medical codes of conduct, social welfare codes of conduct,

public service codes of conduct, librarian codes of conduct and more. The ethical codes reader provides a brief summary of each code, links to the code web page, and extracts text from the code. I'll be talking about some of these codes, not all of them, in more detail as we go along.

Here's one that came out in 2017: the United Kingdom House of Lords Select Committee on Artificial Intelligence. This one's important because it was one of the first major statements to come out of a governmental body. And when you look at it, you see that it takes a consequentialist approach.

It says it's better to focus on the consequences of artificial intelligence, rather than on the way it works, in such a way that people are able to exercise their rights. And of course the principles that they state, and you can read them in the document, emphasize consequentialist results.

You know: promotion of the good, or benefits, beneficence; not causing harm; not violating the rights of individuals; things like that. That's a common theme that I've seen in a lot of these statements of ethics. Governments especially, but also other organizations, are very risk-averse,

meaning that their approach to the governance of AI is based on minimizing risk, and that is a consequentialist approach, right? They would rather not use AI than expose ourselves to greater risk. And it's also based on, if you will, a sense of balance between the benefits on the one hand and the risks on the other. And this balance approach

is very common in these ethical statements. I personally am not in favor of a balance approach. I prefer, in my own ethics, to unambiguously pursue the good. But balance, when we're thinking about a social context, is often very important, because, as I said earlier, we have different perspectives and different points of view.

It could be argued that in some cases you can't achieve a balance: that, you know, some things, in the case of ethics, and especially in the case of artificial intelligence, are black and white and not shades of gray. Like AI decision-making: you either allow that the AI can make

public policy decisions, or you do not. There isn't some sort of middle ground for that, it could be argued.

Another major statement came out from the White House a few years ago, well, three years ago now, which was basically a Blueprint for an AI Bill of Rights. It's based on five principles: safe and effective systems; protection from algorithmic discrimination. So what that means is that

your algorithm should not treat different groups differently, and that the data used to train an algorithm, and we talked about that in the Introduction to AI session, should fairly and completely represent all parts of the population. It's a criticism of the approach taken by so many studies in psychology,

where a professor will write a paper based on a survey of 42 students in a psychology class at a midwestern U.S. college. Of course that's not a representative sample at all, and so the results are very suspect. Well, AI can do the same thing. For example, if we used as a training set for AI all the data from Twitter, well, this data will probably contain elements of prejudice and discrimination, at least if my experience of Twitter is any guide.
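One simple way to make the representativeness worry concrete is to compare each group's share of a training sample against its share of a reference population. This is only a minimal sketch: the group labels, counts, population shares and the tolerance are all invented for illustration, and real audits use far more careful statistics.

```python
def representation_gaps(sample_counts, population_shares, tolerance=0.05):
    """Flag groups whose share of the sample differs from their share of
    the population by more than the tolerance."""
    total = sum(sample_counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = sample_counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Hypothetical training sample vs. hypothetical population shares.
sample = {"group_a": 800, "group_b": 150, "group_c": 50}
population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
print(representation_gaps(sample, population))  # all three groups are flagged
```

A check like this won't catch subtler problems (biased labels, missing groups you never thought to count), but it does catch the "42 psychology students" failure mode.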

And that's something that we want to prevent in the development of AI. That's a very common statement; I see it expressed a lot. Data privacy is another one of the major elements of the blueprint. Then notice and explanation; it's interesting that these are combined. Notice, and we see this a lot as well, is the principle of transparency:

letting people know that an AI was used. And indeed, in my newsletter I just ran a link to an article that contained a warning at the top: some parts of this article were written by an AI, and then edited by the author. I can sort of see that, but I asked in my post, you know, how long will it be before we stop putting these warnings in, because it just becomes so commonplace

that we're always using AI to help us write, or help us make decisions, etc. How common will this become? How necessary will these notices be? Explanation is interesting because, first of all, it's notoriously difficult. An AI will take into account tens of thousands, hundreds of thousands, of different factors.

And there's no single set of factors that you could point to and say, oh, this is the reason why it made that decision. But a lot of these sets of principles ask for some kind of explanation. So how do you provide that? You can't say it follows a rule, because it did not follow a rule.

A common approach is to try to do it counterfactually. That is to say, you run the same scenario with the artificial intelligence, but you change some factors and see if you get a different result. And that allows you to say: if you had done this, you didn't, but if you had, then this would have been the result. And that's important.

If the AI, for example, decides that you do not qualify for a loan, an explanation might say: if you owned a house, then you would qualify. But these explanations might not be very useful, because it's not a question of whether or not you own the house. That wasn't the deciding factor.

A lot of people who don't own a house still qualify for a loan. So what was it in your particular case? And that's a real challenge. Finally, the White House statement talks about human alternatives, consideration and fallback. They call that human in the loop, right? To make sure that there's human oversight,

to make sure that the AI is not doing something wrong.
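The counterfactual probing described above can be sketched in a few lines. Everything here is a toy: the scoring function stands in for an opaque model, and the features, weights, threshold and candidate changes are all invented for illustration. The point is the method, not the model: change one input at a time and see whether the decision flips.

```python
def approve_loan(applicant):
    # Stand-in for an opaque model: a weighted score with a cutoff.
    score = (applicant["income"] / 1000
             + 20 * applicant["owns_home"]
             + 0.5 * applicant["credit_score"] / 10)
    return score >= 80

def counterfactuals(applicant, variations):
    """Report which single-feature changes would flip the decision."""
    baseline = approve_loan(applicant)
    flips = []
    for feature, value in variations:
        changed = dict(applicant, **{feature: value})
        if approve_loan(changed) != baseline:
            flips.append((feature, value))
    return baseline, flips

applicant = {"income": 40000, "owns_home": 0, "credit_score": 600}
baseline, flips = counterfactuals(applicant, [
    ("owns_home", 1),
    ("income", 70000),
    ("credit_score", 750),
])
print(baseline, flips)  # denied; owning a home or a higher income would flip it
```

Note how this illustrates the limitation in the talk: the probe reports that owning a home would flip the decision, but that tells you nothing about which factor actually drove the score, and people who don't own homes can still clear the threshold on other features.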

Now we get to the study that I talked about, the one that argues there's a consensus around artificial intelligence. It's authored by Jessica Fjeld and her colleagues at the Harvard Berkman Klein Center. And their assertion is that if you look at a bunch of different sets of principles of AI,

like I did, you get these things in common: privacy; accountability; safety and security; transparency and explainability; fairness and non-discrimination; human control of technology; professional responsibility; and promotion of human values. You get them in common, but as we see from the diagram, and I know it's not a very good diagram, but there's a nice big version in the report,

you don't get them in the same quantity. Different statements emphasize different principles differently. And you don't get consensus in the detail. What, for example, is fairness? Is fairness everybody gets the same thing? Or is fairness everybody gets what they need? Or is fairness taking into account historical discrimination and making that right?

Well, different people have different perspectives on fairness, and I would say that people at the Harvard Berkman Klein Center have one idea of fairness, and people living in rural Canada have a very different view of fairness. That would be my contention. So let's look at your results. Let's see, now I have to get the right screen up here.

All right.

So, here's what we got. I don't know, I'm not sure how many of you participated. Looks like one.

But,

Let me bring up the table. All right, so here's what was put at the top of our ballot: non-maleficence, in other words the principle of no harm; democracy; and justice. Now, all of you sitting there: I'm actually going to run this survey on my newsletter as well, so we may get different results

from the wider group of people who respond to this. But all of you, looking at this list, can ask yourselves: are these the values that I would want to see represented in a statement of principles, or guidance, regarding artificial intelligence? What about consent? What about integrity? What about confidentiality?

So anyhow, we have a tie for the top, and there's trust in there. I wonder what the numbers were, with the threshold. Anyhow, okay, this didn't go nearly as well as I would have liked, but okay: you take your chances, you get your results.

Have a look at the chat; I'm just going to see if anybody has commented on anything yet.

We have: "Celeste's OtterPilot to everyone: I'm an AI assistant helping Celeste Bird take notes for this meeting." Hey, that's great, that's really cool. And you notice that follows the principle of transparency, right? She's telling everybody: I'm using an AI for this. Similarly: "I'm an AI assistant helping Alex Kron take notes." And that's mostly what we've got for the chat.

So that's really interesting, and it's kind of funny. We can sort of project to a future where somebody like me does a presentation like this, except of course I'm very busy, and so I don't have time to do it in person. So I use an AI to look at all of my notes on the subject, compose a presentation,

and then generate an animated image of me giving the talk. And then all of you, who are also very busy people and don't have time to actually attend this presentation, all use your AI assistants to summarize the presentation that my AI just gave. We can have a completely automated presentation,

with no humans at all, either giving or listening to the presentation. It's kind of a funny concept. All right.

Well, Treasury Board Canada, that is, the Treasury Board Secretariat of Canada, recently authored a Directive on Automated Decision-Making. This didn't really directly affect me, although I work for the public service; I work with the National Research Council of Canada, but they don't let me make decisions. But I did want to show you this directive, and it was kind of precautionary.

I'm pretty sure there's not a lot of people.

Okay, Cindy says: I saw some of my answers, but not all; I think it said it was showing the first-place vote. Oh, okay. Let's see if there's another way of showing that, then.

I don't know. Well, I'll share this link with you after, but now I'm puzzled by whether I can

show the standings table. Yeah, so that's the first-place votes.

Well. This is interesting.

And slider. Okay.

Well, I don't know. It looks like these are only the first-place votes; sorry about that. Well, I got four responses, and those were the four first-place votes. I know I got four responses because it said that later on, and each of the four people voted for something different. I don't know why it's not showing the second-place votes.

Hmm. Let's see, let's try this.

"Your first-place votes" aren't appearing. Okay.

Well, how disappointing?

I thought this would be a lot better than this.

Maybe we'll get better results after we do the other elections on the ballot. See, there's the chart legend.

Okay, I think what it's doing, I think it's trying to do it like one of those political conventions, where you have a round, then you drop the bottom option off the ballot, and then you have another round, and you drop the bottom one again. That's not what I wanted. I just want to see the first place,

second place and third place, etc., and I thought I would see that; that's what it looked like in the demo. Anyhow, I'll still keep collecting this data, and I'll try to put better versions of it on the slide. Anyhow, back to the Treasury Board Secretariat. So, they have requirements, and this is very process-based, which is typical for a government.

So it's almost like virtue-based, but for an organization, which is kind of interesting. We don't really think about whether an organization can have ethics, but of course it can, right? If you have institutional policies, you as an organization are probably more ethical than an organization that does not have institutional policies. Maybe.

Anyhow, so this is for the organization, and it recommends doing an impact assessment, which is very consequentialist, right? Transparency, which we've talked about. Quality assurance, which we haven't really talked about; but what counts as quality? Well, they list it: testing; making sure we have quality data; making sure we have proper data governance processes,

so that, you know, we eliminate duplicates, make sure our data isn't too old, and make sure our data coverage is complete. Peer review. Gender-based analysis plus; I believe that "plus", and this is TBS jargon here, means it's not just analysis based on male or female, but also non-binary and other genders. Employee training.

IT and business continuity, security, legal, and ensuring human intervention. But what I don't see, maybe it's contained under data quality, is representation from different cultural groups. I don't see that in there. And obviously, you know, if we're doing gender-based analysis, why aren't we doing culture-based analysis, or nation-based analysis?

I think that's important. And then finally, recourse and reporting. So this directive is in force now, and it does govern public sector workers like me, although I don't make decisions, with or without AI. Amnesty International, by contrast, takes an approach based on equality and non-discrimination. If we had to trace that back, it probably goes back to responsibility and duty,

and the philosophy of Immanuel Kant. And they approach this in a few different ways. One is using the framework of international human rights law, and so that establishes basic rights. Again, that's based on the idea that all humans have intrinsic or inherent value, right? How do you express that value? Through a statement of rights.

So each human being, no matter who they are, no matter where they come from, has certain rights. And here, as an example, the rights to equality, or equity, depending on how you want to phrase this, and non-discrimination. And then actively preventing discrimination. And then, on the more positive side, promoting diversity and inclusion.

And it also explicitly assigns duties to nations: duties to promote human rights, first of all by identifying risks to human rights; ensuring transparency and accountability, so that we can know whether or not human rights violations occurred; and then finally enforcing oversight, making sure that there is an agency that watches and ensures that government departments respect human rights.

So you can see how this would apply to AI. Basically, this guideline would say that there should be an AI governing body of some sort in a nation, and this body should ensure that proper processes are followed, similar to the Treasury Board Secretariat guidelines, but also should be monitoring for consequences, and in particular consequences related to the violation of human rights.

So, this is backwards.

Okay, so, our next quiz: who are we responsible to? We've talked about duties and obligations, but to whom do we have these duties and obligations? That's the second election. And even though our results didn't come out great, I'd still ask you to put in your response. There are a lot of different groups that we can be responsible to.

To the less fortunate, perhaps. To our clients, if we're a professional like a lawyer, or a doctor, or a social worker. To children; do they have special rights? Arguably, they do. Others are "rating"; I spelled it wrong, sorry about that. Responsible to the law. How about responsibility to ourselves? You know, put your own oxygen mask on first, sort of thing.

Shareholders, funders, unions, professions, students: all of these would be typical in an educational context. Students, society, environment, country: a wider perspective. Research subjects, if we're a researcher; we may well have responsibilities specifically to whomever we're researching. Parents or guardians; this applies especially to children: responsible not only to the children, but to the people who are responsible for the children. And maybe to our employer, and to our colleagues.

Now, I'm ranking these because often these obligations conflict. Your responsibility to a child might conflict with your responsibility to a funder. A funder might want to collect children's data to market fast food to them. But your responsibility to the child might come into conflict with that, based on the idea that the child has, you know, the right to privacy and confidentiality,

and to not be advertised to by fast food companies. So now fill in your selections here, and we'll see what the results look like in just a little bit.

So, I've looked at Amnesty, and I've looked at,

I've looked at TBS. Let's come back to Canada. The Government of Canada recently, this year, issued a guide on the use of generative artificial intelligence. As we talked about earlier, generative AI is the new type of AI that writes text, creates images, etc. And the government principles follow what's called the FASTER principle, which is an interesting acronym,

saying that these uses should be fair, and they describe what they believe fair is; accountable; secure; transparent; educated, and many, many principles don't include a responsibility of the users to be educated about the strengths, limitations and responsible use of AI tools; and then finally relevant: using AI only where it's appropriate to use AI.

The OECD also has principles, and these are intended to apply worldwide, especially, in their view, to economically developed countries, but also, because it's the OECD, to all countries. It's more outcomes-centered, and they have a specific set of outcomes that they favor: things like inclusive growth, sustainable development and well-being.

And sustainable development is interesting, because there are some sets of principles that explicitly link the use of AI to the United Nations Sustainable Development Goals, or SDGs, which are some 17 different goals: for example, protecting the environment, promoting the rights of women, promoting the rights of children, ensuring everybody has an adequate education, things like that.

And these are specifically linked to the development of AI. Creating a profit for shareholders is not one of the sustainable development goals, which is interesting. So growth, as the OECD defines it, is not necessarily sustainable development, as the UN defines it. The OECD also favors human-centered values and fairness; it's interesting that it links those together.

Transparency and explainability, which we talked about. Robustness, whatever that means. Security and safety; so they are risk-averse, similar to many governments. And then finally, accountability, and that goes back to the question of duty. So, let's look at the results of that second survey.

And, oh, interesting. The top choices here were children, funder, society and environment. I don't think we could get four more different priorities if we tried. And it really raises the question, and this is why I wanted to see in the display the second-place, third-place, fourth-place votes: you know, if we had to choose between them,

how do we choose? I already gave the example of children versus funder, which seems pretty clear-cut. But what if promoting the interests of children is, in a certain way, bad for the environment? Now we have a tougher question to ask, and really, we have to ask about the details of it.

Interesting.

The University of Helsinki has provided an AI Ethics MOOC; I think it would be useful for people to look at. They basically provide five principles of AI ethics, answering five different questions; we've focused on just one. But here they sort of break down into the different ethical theories.

Should we use AI for good, and not for causing harm? Well, I think probably, right? Who should be blamed when AI causes harm? And by "blamed", we talk about being held accountable, but even more to the point: if there's a cost, or retribution necessary, who should pay? Or a restoration necessary?

Should we understand what AI does, and why it does it? I think that would be a huge challenge. I mean, it's hard even for AI experts to say why an AI does something it does. Very hard; not a simple question. Should AI be fair, or, I would say, non-discriminatory? The principle of fairness.

Again, we're probably going to say yes, but it's not the most important principle; the OECD might say no, productivity in their minds might be more important. Should AI respect and promote human rights? Those are two different things, obviously. We would hope it respects human rights, but does AI have a responsibility to promote human rights?

A different question.

Finally, what matters? Well, let's go to the third question. So again, enter your votes for the third question on the ballot, and go to the next selection.

Foundational principles: on what are our ethics of artificial intelligence based? And these again are principles that came up in the different documents that we've looked at, both the ones I've talked about in this session and the ones I reviewed in my study. Everything from professional requirements to facts to universality.

That's the Kantian principle again. Defensibility, or can we rationalize it after the fact? Fundamental rights. Balance, or trade-offs, finding the middle ground. Trust, which we haven't talked about yet. Should it be based on what we know? Should it be based on social good, or maybe social order, or should it be based on fairness? And again,

whichever of these we choose may conflict with others of these foundational values in a particular circumstance. So I'll have you put those in; we're almost done our session.

So this is the University of Helsinki AI ethics MOOC. There's another resource that just came out recently. Bryan Alexander, who's a friend and colleague of mine, asked the question on his blog: what readings would be best for people in a postgraduate class? So it's fairly high-level

reading. And he got a lot of good answers. Here are the things that he listed in this course. But then in the comments, this is where you get really good responses. And, you know, I've read quite a number of these, and some of these are well worth reading.

You want to have a look at this conversation; it's a YouTube episode with Marc Andreessen, who's a venture capitalist, on the future of AI, as it says here. This might be useful as an example of venture capital hubris. And there are many, many more.

Now, let me check the chat. Have to find the chat.

A chat. Oh, can't find the chat.

Oh, here we go. Any last minute items before the meeting ends?

Um, oh, this is their

Oh, this is interesting. I want to read these later; these are the actual comments. All right, so let's have a last look at the results from our survey. Here we go. So, let's move on to the next one. Oh, this is better. Okay, so this is showing me all my results.

Now, this is what I thought I would get the first time. So here are some of the other second and third place votes for the first set.

Okay, I wonder if I get different results for... okay. Oh, this is for the third one. Okay, so social good, fundamental rights, facts and trust were the top four chosen, with second and third place votes going to fairness, balance of risks, what we know, etc. Interesting: professional requirements and defensibility

didn't get a vote. So let's just reload this screen and see if we get our... no, it still doesn't show a second or third on those. I wonder why it showed them only on the one. That's odd; I mean, I expected this for all of them. I don't know why I didn't get it.

Well, it's just going to show those. Oh well, anyhow. I think that's really interesting. I think we've seen an example in this session of how just four people can have four very different outcomes, you know, for all three of the questions that I posed. So, anyhow, that's basically the session.

Thank you for your time and for your attention. You can find me at this website. And, oh, hang on to see if there's any questions or comments. But if you need to go do something else right now, by all means, please feel free to do that.

I also really like,

And i think, Good.

Yeah.

We've been talking about the value of agency, but there are a whole set of other principles that people acknowledge, and it's hard to make any sense of that. So, yeah.

Why?

Fantastic exercise.

Yeah, so that's about four comments and a question, but thank you; I appreciate that. Efficiency is a really interesting one. In my own work I separated out what I called the benefits of artificial intelligence, or the applications of AI. And there's a whole list of ways AI is used, especially in education, but elsewhere as well, especially generative AI these days,

in order to produce, you know, faster, high-value and, as you said here, efficient outcomes. For example, producing open educational resources. There's a big discussion about whether we should use generative AI to produce open educational resources, or how about custom resources for each individual learner, based on what they previously know?

And of course, right now there's the risk that the AI might include factually incorrect stuff in the resources, but as time goes by, I think that risk will be diminished, and then the question really comes up in earnest. And I think next time I'll program my own tool rather than depending on something I found on the internet, and I'll get a more nuanced measure of these values.

Anyone else?

Yeah.

Yeah, that's a great question. And, you know, there are variations on the question that don't involve AI. What if you asked your friend to draw a table for you? Or what if you asked your friend for information on how to draw the table? Or what if you read about how to draw the table from a book, or what if you copied the table from the book?

You see these variations, right? They are all the same sort of act, and the source really doesn't matter: whether it's an AI, whether it's a book, whether it's a friend. To me, that doesn't really matter. It's the action that matters, at least to my mind. I wouldn't differentiate between the sources.

That principle comes up a lot in my own thinking. And if I had to say, you know, how would you explain to students what they should and should not be doing? I'd say it's basically the principle of plagiarism: if you use a source for something, cite your source. If you used AI to generate a table for you, then put down underneath the table:

"This table was generated using AI." And if you wanted to be really complete, you could include in a footnote or an appendix the prompt that you used to ask the AI to produce the table. And if you wanted to be really thorough about it, if you're a student, you'd say something about why

you're confident in the accuracy of this table. But the key thing here is transparency, and providing credit when something or someone other than you did something. If you copied the table from a book, you'd be expected to name the book and the page number, and that's a very common practice in academic writing, right?
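As a rough sketch, a disclosure along those lines might look something like this; the wording, the tool name and the prompt here are purely illustrative, not a prescribed format:

```text
Table 1: Comparison of AI policy frameworks.

This table was generated using AI (ChatGPT).
Prompt used: "Summarize the main principles of the OECD and UNESCO
AI guidelines in a two-column table."
Note: I checked the entries against the original OECD and UNESCO
documents and am confident they accurately reflect those sources.
```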

If your friend drew it for you, then you'd give your friend credit. So, you know, a lot of people say, well, students should not use AI, and I think, first of all, you can't enforce that. And secondly, it's a question of what sort of values you are trying to instill in your students.

If it's just "you must follow the rules for the sake of the rules", then, yeah, tell them: don't use AI. But if it's, you know, honesty and transparency, then I think the giving-credit approach is probably better. And then there's a middle ground, where we lessen the risk in this case, and that is to say: well, you could just get it from the AI,

but you will learn more if you generate it yourself. We can't force you to generate it yourself; we know this, because we know how the technology works. But you would probably benefit, and then it's up to them to decide whether they want that benefit. I think that's how I would approach it. Again, though,

as we've seen, different people would approach it differently, but I think I could make a pretty strong case for that approach.

Yeah.

It's not a bad source. You know, asking for trends is pretty value-neutral, and you're not really asking it to make a decision. It might not be a complete listing, but it's probably going to be pretty complete. It used to be the case that ChatGPT's input data cut off in 2021, but that's no longer true,

as I learned after my last session. The data being provided to ChatGPT now is more current, and if you're using ChatGPT 4, then you're getting different types of data, a wider range of data: you're getting images, you're getting videos. All of this is feeding into it as well.

So you're getting a pretty broad selection. If I asked ChatGPT what the latest trends are in some subject, I'd feel pretty confident in the result. But trust, but verify, right? I'd take those results and Google them and see what comes up.

Yeah, that's right. Yeah, that'll be the next one. That'll be in December. Yeah.

Yes.

Yeah. Yeah.

Yeah.

Yeah.

Yeah.

Anxiety. But

I find that that's the case for every one of these. You know, every single value, every person that you're accountable to, every principle: when you ask, what do we mean by that? You know, even if we agree on these major categories, right?

Even if we agree that we ought to be responsible to the environment, as you say, when we ask what we mean by that, then questions like the ones that you just posed, which are very good questions, come up, right? Who speaks for the environment? As Carl

Sagan said, who speaks for Earth? And there's no clear answer to that question. But we want to be responsible to the environment; at least I do. Yeah.

Yeah. Yeah.

That's interesting. Yeah.

Yeah.

Yep.

Body.

Well sure.

Yeah.

Yeah.

Right.

Right.

Right.

Yeah, well, that's it exactly. We have numerous indigenous nations across Canada, and they're, you know, different from each other. I've experienced that. And they have different perspectives and different points of view, as they should.

Yeah.

Right.

There's a distinction we can draw, and I've drawn this distinction in the past, between custom and customization. Customization is what you described, where you take the original OER and then you adjust it for the particular individual. And as you suggest, and I've seen this myself, it's often inappropriate; the original OER just isn't able to be adapted to the specific cultural background or framework that a person

is in. Custom, on the other hand, is where you build a new thing from scratch for each individual person. You know, if you have a custom-built motorcycle, somebody built a motorcycle from scratch; they didn't just take a Honda off the production line and change the paint colour. They built a motorcycle for you.

And it's the same with open educational resources. This is why we like teachers so much: because when a teacher is in a classroom, that teacher is basically producing custom learning for the people in that class, right? The teacher is responding to the needs and interests of the people right in front of them,

and creating the lesson in real time. Now, the problem is that's expensive, and it's difficult. It costs a lot to get teachers to everybody in the world. You know, you talked about being in India; I'm sure it's difficult to get teachers for every child in India.

I certainly know it would be expensive. To my mind, one of the reasons why I've always been interested in educational technology is that, if we do it right, and if we're able to do it, and these are still open questions, there's the possibility that we can use artificial intelligence to generate custom resources for people. Not customized.

Not off-the-shelf and then changed, but a brand new resource from scratch for that particular person, one that would respect their particular culture and background and experiences and values and needs. That's hard. That's a hard project. I'm not even going to pretend to say it's easy, and I'm not going to pretend to say we can do it now, because we can't. But I think it's worth thinking about whether it's possible,

and to ask, if it were possible, would it be worth doing? And I think, you know, again we talk about children around the world who have really no access to an education. We are very lucky in many parts of Canada, where we do; not all parts, but many parts. But for children who have no access or very limited access to education, this could be, you know, a life changer,

if it can be done. I don't know if it can be done. It would require allocating resources; not as many resources as an individual human teacher for each one of them, or each class of them, but still a lot of resources. But that's, you know, basically been the source of my interest in online learning generally, and computing technology specifically: the possibility of providing an education to each person

that actually responds to the needs and values and backgrounds and culture and interests and language and all the rest of it of each person. If we can do that, that would be great. Probably. Yeah. So, for "customized", it's like you take an existing resource, like an existing open

educational resource, and then you change it, you adapt it in some way. Maybe you translate it to a different language, maybe you change out the pictures of one type of person and put in a different type of person, maybe change some of the examples. But you're always working from this base prototype model

and then changing it, rather than building something new. That's what customization is.

And that's, that's

Yep.

Yep.

Yep.

Yeah. Yeah. There's a yeah, because it's

Yeah.

Sure. Videos. Yeah.

Yeah.

Yeah.

Yeah.

Yeah.

Yeah.

Yeah.

Vegetable.

Yeah.

Well, sure, that's true of the world. You know, I thought of something, came up with something the other day, that sort of stuck with me, and I might follow this up a bit. When we talk about trust, you're asking: who should we trust? And that's a common question.

Who should we believe, and on what grounds should we believe them? And we think that trust is a property of, you know, whether a person is trustworthy, for example. But as I think about it more, I think trust is a skill. It's not an attribute somebody else has; it's a skill that I have,

or that you have, right? You talked about your experience in the community, and that gives you a certain skill with respect to that community, about what people you can trust and what people you can't. And it's a very specific skill. I mean, in your case and in my case, we can point to different individuals that we know,

and we could say: I trust them implicitly, or I trust them most of the time, or I trust them with my truck but not to install my electrical wiring, you know what I mean? And our ability to make these kinds of statements and these kinds of judgments is itself a skill.

And like any skill, you can't break it down into a simple set of principles. You can't say, well, you trust this kind of person but not that kind of person. In fact, the more you make generalizations, the more dangerous it is. We would never say, right, "you should trust somebody just because they're a professor."

I wouldn't do that. I've known many professors who are very trustworthy, and I've known other professors who I wouldn't trust for a minute. You know, "you should trust somebody because they're a lawyer"? Really? So there's no principled way. Trust is something that has to be learned, and it's only through the experience of trusting,

and through people who've learned who to trust and how to trust sharing their experiences with others. And again, even here, right, there's no single thing that they say; they just need to sort of model, or, you know, make clear, how they trust somebody, what they trust,

etc. I try to do that in my newsletter, and when I talk, right, because I have to depend on sources as well. I don't know everything; surprising, but true. And so I always try to indicate, you know, to what degree I trust something, why I would trust this organization and not that one.

Yeah, I talked about the OECD in this presentation, and I pointed to some of the values that the OECD has regarding productivity, and that would influence how I regard a publication from the OECD. And it's all these little things, these fifty thousand factors, that a human takes into account when they trust somebody or they don't, when they believe something or they don't. And like I say, it's a skill.

It's like critical thinking. It's like literacy. But it's trust. And that's the only answer I can give to a question like that: it's just learning through experience, and with the guidance of other people who have more experience, just how to do it, almost on a case-by-case basis. Does that help?

Yeah, exactly. Exactly. Anyhow, I have to go; I have another appointment. Thank you so much for the opportunity, I really appreciate it, and thanks to your organization.


Stephen Downes Stephen Downes, Casselman, Canada
stephen@downes.ca

Copyright 2024
Last Updated: Nov 21, 2024 06:50 a.m.

Creative Commons License.
