Jan 22, 2010
Blog summary of an NRC-IIT seminar on research ethics held in Fredericton on Tuesday. Speakers were given an opportunity to review the text before it was posted, but not to exercise editorial control. Posted on Half an Hour.
Will van den Hoonaard
Professor Emeritus, University of New Brunswick
Interagency Advisory Panel on Research Ethics, Government of Canada
‘Vertical ethics' is the idea that ethics reviews are being required not only at the project level, but also at the institute level and even by journals and publications. Research Ethics Boards (REBs) may vary greatly in their approach.
The Tri-Council Board of Research Ethics (TCB) is mostly concerned with medical ethics. It is concerned with an ethics "protocol", which specifies a procedure that is not subject to interpretation. But as social researchers we interpret all the time. Also, protocols are unchanging, but we change our plans all the time. We also don't know all the benefits and the risks – again, the idea of benefits and risks comes from a medical perspective. And again, what is consent? Can you be asked just once, or must you be asked for each event? Confidentiality, again, is interpreted very differently in social research.
Take Mitch Duneier's book, Sidewalk, for example. A researcher saw a homeless person selling a copy of his book on the sidewalk, and then worked with homeless people for seven years to find out how they live. All the names and the photos are in the book – except for the name of the police officer who takes the books away. This is good – ethical – research.
Or consider Tim Diamond, who in Making Grey Gold took notes while working at a nursing home. Covert research. He almost got caught – an executive was about to find out, but was turned back by the smells. The book brought change in the way nursing homes were run. And he treated his subjects with dignity. So here you have two examples that are very different but are still good research.
Deception and covert research – these are very different, but are often brought together. And finally, consider anonymity. This is often hard to practice in social research. The data doesn't come anonymously – we interview people, we live with people, we record identities in field notes. In a small community of 500 people, if you interview someone, everybody in the community will know. Anonymity is not possible; you have to call on other principles.
The new TCPS has 160 pages (the old was 84 pages). There are differences between the two:
- Qualitative research is covered by only 4 paragraphs in the old draft (three of which are warnings). In the new TCPS, 60 pages deal with it.
- The old TCPS has 8 basic principles. The new has three: respect for persons, concern for welfare, and justice.
- The old TCPS talks about standards and procedures. The new talks about a "compass" for doing research – knowing about ethics, ‘this is what is best'.
- The new TCPS talks about ‘relative autonomy' because no person is completely autonomous. So if you are called upon to respect a person's autonomy, that person has to consider the effects on family members, the university, etc.
Ethics is a relational thing – it's about relationships. Dignity is intrinsic to persons. Every person has dignity, whether we acknowledge it or not.
This also involves not inventing the motives of others. You would be amazed how often people rush in to explain the motives of others. Consider a letter carrier. What's the best route to take, do you think? Shortest, easiest? To know, we should ask the letter carrier. We have so little knowledge, so our natural habit should be to resist inventing motives.
Finally, there are exceptions to the TCPS, and we can talk about that.
Q. Different people have different moral compasses, how do we deal with it in practical life?
A. We cannot have them all operating at the same time, as there may be conflicting principles. We have to determine which principle is more important. There are things to help you – the statements in the TCPS, the literature in your own field, examples of other researchers in your own field.
Q. There are different frameworks for defining what is ethical, what is justice, etc. What is the context for defining ethics in the TCPS?
A. The drafters were trying to find the touchstones of ethics and capture them in eight principles. Those principles were brought down to three principles. The interplay of these principles will depend very much on what you are doing. For example, power is a very important element in research relationships. In social research, there's more power in the research participant – he or she can refuse to answer questions. But it varies in other disciplines. You have to go back to the principles; these principles were well thought out, and have a basis in the philosophical literature.
(den Hoonaard comments later: A point added while editing this talk: Ethics involves relationships, and the "new" TCPS makes a point about how the social context and the discipline itself will give the core principles varying emphases.)
Q. But there is the case where you could justify it for the general good of the people (i.e., utilitarian)?
A. Yes. That's why you have to argue your case, and bring to bear your own knowledge of the particular topic, and what other people have done.
Q. Where should the power lie in ethical decisions, with the researcher, or with the boards?
A. Ideally, the researcher and the board would have a basis for consulting about the issue. But that is very variable – some boards are very stringent, others are more open. So you need to create a climate where you discuss the ethics. There are two aspects – education, and ethics review. The focus is always on review, but by far the best approach is to create a learning approach. You will find it always boils down to some core principles.
(den Hoonaard comments later: In a later discussion with Stephen, we agreed that researchers should own ethics. It is not something that can be delegated to a checklist or to a body. Ultimately, it is the researcher who must take the moral responsibility for conducting ethical research.)
Q. You mention that you often have to look to your own field for answers and practices. But what if a method is ethically wrong, but is often used in the field? For example, in computer science, we often see ‘Wizard of Oz' tests, where people are deceived into thinking something works when it doesn't.
A. So how do you think the basic three principles would apply?
(den Hoonaard comments later: My later reflection and discussion with this researcher made me realize that much of social research resembles the "Wizard of Oz" tests. You see, a number of social researchers, including me, start researching a topic believing that we have selected it in advance (and we explain that to the research participants), but in the course of the research we might actually change the focus of our research, sometimes catching us by surprise. Can one call this a "Wizard of Oz" test in reverse?)
Q. It depends on what you think of as justice.
A. In some sense it sometimes seems like the question is taken away from you. And that goes back to the issue of power. You have to decide. You can't just fall back on principles. Take them into account – but you should not be alienated from your own research.
Q. One of the problems with ethics in research is that partial disclosure is often confused with deception. But the problem is, if you disclosed the whole thing, it would invalidate the research. For example, you might not tell people what you are measuring for. In the consent process, you don't tell them what you want to find out. Partial disclosure should be completely fine so long as it doesn't impact their assessment of the risk of their participation.
Now in ‘Wizard of Oz' experiments, you are actually saying something that is not true. You are saying the computer is doing something, but in reality it's a person typing into a keypad. So, first, do you need to use deception? The answer is, we don't know. Do you need to tell the people it's the computer? Nass and Reeves argue it shouldn't make any difference, because people treat computers and TVs anthropomorphically. And second, how does this impact the subjects' risks and benefits? Not a whole lot. But you are lying to them; it's a bit of an affront to their dignity.
Second speaker
Francis Rolleston
Former Director, Ethics, Canadian Institutes of Health Research
Chair, National Research Council Ethics Board
My real question is whether the NRC is being optimally served. The NRC's policies are on the website. "NRC affirms that excellence in research cannot be achieved without excellence in ethics." The REB is responsible for oversight, and reports to the Secretary General.
We are a moderately active REB, with 153 ongoing files. Reviews can take anywhere from 20 days (for a subcommittee) to 77 days (for a full board review).
(Rolleston comments later: The times stated as taken for review are generally the top end of the ranges that I identified. The median time (24 days for full Board review, 13 for sub-committee and 2 for Chair review) would be more accurate. This applies to the second note on my talk and also the long answer to the first question. The long time of 77 days was an aberration, and involved an initial application that was badly prepared and reviewed over the Christmas period. If an applicant prepares the application carefully and well (i.e., professionally), review times are at or below the median. If the application is badly thought through then review times get high.)
The REB is intended to provide independent research support – we are part of the research. But we sometimes hear that people think we are a nuisance and get in the way.
Background: what does the REB do? A project will meet Canadian standards of ethics if it is carried out as described in the documents reviewed and approved.
You get conflicts in ethics when you get values that conflict with each other. How do you determine the results? There are three levels of consent:
- you, the researcher, NRC, the funders, etc.
- society, through the Research Ethics Board (REB)
- research subjects, through individual consent
Who is then responsible for questions of research ethics? Ultimately, it's the researcher. The REB is there to support you, but is not responsible for research ethics.
Does NRC's REB support research? It scales the review process in relation to ethics issues. There can be delegated review, generic and template applications, a process of consultation and, where applicable, exemption from REB review.
Does the REB support research? We try to provide a rapid turnaround and respond quickly to questions.
One question has to do with science review – what right does the REB have with respect to the science, when we are not qualified? Well – we have 6 people with PhDs, 5 people with extensive experience in ethics, 3 in law, 5 from outside NRC, and 6 bilingual members. So it's a fairly broadly based REB – but it's not free from idiosyncrasies.
What does the REB review? First, protocol. Second, informed consent, which is basically required for all research. If you are asking for an exception here, the REB will need to understand why; otherwise it will not be easy to approve the way you are doing things. Another concern is the question of personal information – whether data is tied to a person, whether data is private. Unless there is consent for data to be identifiable, it must be free of identifying information.
The important questions are: first, does the REB serve the needs of NRC, and second, is research not being done because of the REB? Also, if there is research that you are doing and you have not involved the REB, why not?
Q. (Stephen described how he doesn't use REB because of three points: REB questions about the science, the turnaround, the results never coming back.)
A. We think we're pretty fast. For example, 15-77 days seems fast to us. Three weeks seems to be a pretty quick turnaround. It's not long compared to other REBs, it's not long compared to what it used to be. (My comment – it used to be 90 days, the REB only met once a month. Now, maybe, around 30 days is more likely). Surveys on the web – we now have a generic protocol for them. (Comment: not that I'm aware of). We have a standard approach for these. For example, anonymous surveys. Should be turnkey, review by the chair, then it comes back. (Comment: I have an online survey, it wasn't turnkey). Well if we make it a generic review, then we can have a review by the chair, turnaround time 8 days.
Q. I think a lot of people aren't really clear about when something becomes human subject research. E.g., I'm developing some software, and it's going to be used by some people at IAR. They're using something else, and we say we're going to give them something better. So we do a requirements assessment. Is that human research?
A. Well, if we're asking people to do something they would normally do in the performance of their job, it's not human subject research.
Q. Well, what if I go to Mitel and observe software engineers keyword searching?
A. That gets closer to human subject research, it involves questions of anonymity and privacy, but it's the sort of thing that could be part of a generic application.
Q. So basically when we talk to users the question comes up. So what do I do?
A. Come ask us. The risk is a privacy risk (comment: and a coercion risk). (Will van den Hoonaard: the nature of the institution or corporation would also be a factor – the more you go outside of your organization, the more likely the REB is going to be involved. There is privacy, risk, and whether some corporations want people to be interviewed by outsiders.)
Q. What is REB? What does it stand for? Also, is there a template for user interface testing? Because getting permission for testing for each element of user interface gets a bit much.
A. REB stands for ‘Research Ethics Board'; it has been in operation for 20 years, and it must give permission before NRC allows the research to take place.
If there is a standard approach, this is a great case for a generic application; it will take you 20 minutes to fill out, it comes to me, and I turn it around right away.