In November 2018, the international scientific community was shocked by the announcement by He Jiankui, a CRISPR scientist, that he and his team at the Southern University of Science and Technology in Shenzhen had created the first “gene-edited babies”. The genetic material of these babies had been edited to disable the CCR5 gene, with the stated aim of making them resistant to HIV.
This event triggered a flurry of reactions and controversies, and exposed deep ambiguities surrounding the risks and implications of research on genome editing, which could fundamentally change how humans are “fabricated”. Beyond the purely scientific aspect, the question of how to balance potential benefits against potential negative consequences must consider the acceptability of the risks involved.
Is it acceptable to edit the human genome, and in particular the human germline?
This question was discussed in a report from the US National Academies of Sciences, Engineering and Medicine in 2017 that provided principles and recommendations to guide the governance of human genome editing. The report committee concluded that, "in light of the technical and social concerns involved […] heritable genome-editing research trials might be permitted, but only following much more research aimed at meeting existing risk/benefit standards for authorizing clinical trials, and even then only for compelling reasons and under strict oversight". It is understandable that this conclusion is subject to different interpretations. In scientific and medical terms, HIV/Aids is not a condition that justifies heritable gene editing. There are, indeed, many conditions, such as cancer or rare incapacitating diseases, that should be prioritized for research and testing. But some people, like He Jiankui and the parents of the two babies involved in his project, may view things differently.
Are the risks of unwanted and undetected errors acceptable?
Nobody knows the exact consequences of modifying a genome. Scientists generally agree that we should not go beyond research in the laboratory until we have a proper understanding of the consequences. But for parents who are discriminated against at work and in their access to medical care (as is often the case in China for individuals with HIV/Aids), the perspective is different: they may want to try anything so that their child is not affected by the condition and discriminated against.
How do we make the judgment over whether the risk is acceptable or not?
If we define risk as the negative consequence that arises from uncertainty within a given activity or objective, risk results from the threat of something bad happening, where “bad” is related to what people value (e.g. health, non-discrimination). Speaking in terms of risk implies (a) defining the risk of something to someone, and (b) understanding the context in which the risk is evaluated and a judgment is made. The risk related to authorizing, prohibiting or using genome-editing techniques thus depends on the values, norms and even social or individual preferences attached to the technique and its deployment.
Who makes the judgment over whether the risk is acceptable or not?
Who has the legitimacy to do so? Educated individuals who “know”? Policymakers and regulators? Or those who are affected? Safety risks in the medical domain are typically framed in the context of “unmet medical needs”. However, as long as there are people who suffer and others who claim the ability to relieve their pain, free-riding will happen. When the stakes are high, so are the incentives to cheat or to make others take undue risks.
Why did this happen?
Why was someone able to do it? There are various reasons: ambiguity on the part of scientists, who say such work is irresponsible yet at the same time continue research on embryos; permissive and loose regulation; but also ambition and the lure of fame. Whatever happens, He Jiankui will be remembered as the first one to do it.
Do criticisms of He and his team reflect public opinion at large?
Well, that is not clear. Surveys in various countries indicate that the balancing of risks and expected benefits may be in favour of fabricating CRISPR babies. This seems to be the case in an opinion survey conducted in China by Sun Yat-sen University, which concluded: “more than 80% of the sample would like to use gene editing to modify the genes of their child if he was likely to develop fatal genetic disease”. Opinion surveys in the US also show that Americans are becoming more open to human genome editing, though concerns remain. According to a 2018 Pew survey, “a majority (72%) of US adults say changing a baby’s genes to treat a serious congenital disease is appropriate”. While interpreting opinion surveys is a delicate exercise, there is certainly something to learn from them.
Does this event show that we are on a slippery slope?
Like many new techniques in the life sciences, genome editing raises concerns about “dual use”: a technique developed for good could also be used in ways that are not so desirable, controllable or stoppable — a slippery slope towards human enhancement, discrimination and eugenics. Germline engineering is potentially a path towards a dystopia of super-people and designer babies for those who can afford it, or who can impose it on others. Policymakers, decision-makers and others should focus their attention on ensuring that every individual concerned is able to make an evidence-based decision and give his or her informed consent.
Would strengthening regulation be an effective solution to avoid rogue behaviour?
It’s unclear. Some have called for a moratorium on research, but others (including the US NAS in 2017) said no. Several teams confirmed in December that they would continue research on embryos. Top-down regulations by informed and legitimate governments to set fundamental principles are necessary, but must be accompanied by dialogue and education on the consequences.
Would voluntary private agreements work?
They must be encouraged. For example, we heard from David Baltimore at the International Summit on Human Genome Editing, in November 2018, that “we have to come together to agree on what we want to do and not do, on what we consider to be right and wrong, on a voluntary basis”. This kind of bottom-up regulation is critically important because its purpose is to embed responsible behaviour in the practice of every scientist.
How should we engage with the public to inform them about the pros and cons?
Communication and dialogue should be organised among scientists, the practitioners who apply a given technique, patients and other affected individuals. Communication with the public should be tested before it is deployed: too often, risk communication misses its target because how people perceive risk is not well understood.
The questions discussed above are just some of those that a proper "risk governance” approach should consider, with a view to evaluating risks, making decisions about them, selecting and implementing management options and communicating about risk and benefits. Such an approach is much needed for governing risks involved in gene-editing in general, and CRISPR or gene-drive technologies in particular.