To date there has been only a surprisingly small handful of cases where serious collective reflection about the hazards of a new technology--and attempts to regulate it--has actually preceded the development of that technology. Nuclear technology is one (debatable) example. Recombinant DNA technology is another (though, of course, rDNA is a broad term covering not just one but a number of different technologies). One of the storied events in the history of biotechnology is an example of this sort of reflection-in-advance: the 1975 Asilomar Conference on Recombinant DNA. At the time, rDNA research was brand new. The first use of restriction enzymes to cut specific DNA sites had been demonstrated barely two years before. Stanley Cohen of Stanford and Herbert Boyer of UC San Francisco (who would later go on to found Genentech) had just applied for a patent on basic techniques of rDNA technology. The Asilomar conference was intended to draw up guidelines for the safe containment of rDNA experiments, to curtail certain lines of potentially harmful research, and to raise public awareness about the risks and benefits of genetic biotechnology. While it did not give rise to any changes in law (at least not directly), it was the direct impetus behind the NIH "Guidelines for Research Involving Recombinant DNA Molecules" that, with occasional amendments, have effectively regulated academic biotechnology research in the U.S. ever since. And, certainly, it succeeded in raising public awareness.
Recently the Association for the Advancement of Artificial Intelligence has, it seems, tried to follow in the footsteps of the 1975 conference by staging another conference at Asilomar, this one devoted to the dangers of autonomous technological systems powered by AI (such as predator drones, medical robots that interact directly with patients, etc.). Chief among those dangers is the possibility that, precisely because they are designed to act autonomously, such systems may escape human control, and so any damage that they might inflict might become extremely difficult to stop. Some of the specific examples discussed in the NYT piece--such as the use of AI in data mining and the use of speech synthesis to impersonate and defraud--really do seem to me to be quite significant. Especially so since the relevant technologies are already pretty highly developed and widely diffused.
We'll see what becomes of "Asilomar II" when the report from the conference is released later this year. Perhaps, like the original, it will motivate research funding agencies to adopt new guidelines. (Though, frankly, I wouldn't bet on it.)
[BTW, for people who are into this sort of thing, AI Topics hosts a really pretty useful collection of articles on AI ethics. Very much worth a look.]