March 17, 2008
In London the other day I had dinner with an engaging transhumanist and a couple of scholars who are somewhat skeptical of the transhumanist project (since it was London, perhaps I should say sceptical). Not all transies are equally engaging (one once listed me alongside the Unabomber in his catalogue of “bioconservatives”), though there is something inherently intellectually refreshing about smart people who ask fundamental questions and seem happy to follow unconventional arguments pretty much wherever they lead (no use using reductio ad absurdum arguments here, as they tend to smile and plead guilty as charged). Peter Singer is another engaging intellectual bookend who illumines all from his “extreme” posture, on a related though distinct plane of conversation.
The point I have begun to press in these interlocutions with transies is twofold. First, those among them who are basically into “radical life extension” should drop the moniker. RLE is controversial in many aspects, but the idea of people living longer is about as transhuman as the fact that we are getting taller (I read on the web that the Dutch are now the tallest nation). The transie project, even before it resolves into the post- prefix, is trans-. RLE enthusiasts should trim their sails and rebrand as humanists who want more of the same. Of course, they would need to ditch the merchants of immortality who plan to have Scottie beam them up into hard drives, and focus on the amazing stuff that worms and mice are teaching us.
But more generally, it seems to me that the transies proper are their own worst enemies. If they really want to persuade us of the merits of aspiring to cyborgdom with its vastly superior opportunities for, well, everything except old-time human biology – they should stop going on about it. In particular, they should cease to suggest that incremental advances in human function (such as prostheses of various kinds and certain drug uses) are the beginnings of the transhuman effort, and press the argument that these are all as human as they can be. Of many of these current efforts this is plainly true, as they are generally unambiguously therapeutic in their use. Of some, it may not be. The transies’ theme should be “don’t worry,” not “get excited, Homo sap. is about to come to an end.” Their interests lie in boiling the human lobster, not confronting it with a species-wide firing squad.
Not that they will listen to me. But we do need to find ways to defuse the extremes if we are going to build in the middle ground. Those of us who are pro-tech and pro-human need to shape the future – and ground its discussion in something other than the alternative of cyborgiana or a return to the caves.
March 10, 2008
A thoughtful piece in today’s New York Times has usefully brought the “enhancement” question into general discussion. It is a curious thing how little attention this clutch of questions has received, and how when attention has been evident it has tended to be to some particular (mainly in the sports context) rather than the general principle.
It is widely agreed that there is no easy way to draw the “enhancement” line, though some otherwise smart people have been much too quick to conclude that there is therefore no issue here to be discussed. One of the most useful documents to emanate from the President’s Council on Bioethics was the report Beyond Therapy, which asserted both that this issue has huge importance and that it resists sharp definition. Yet the examples of steroid use, human growth hormone, and the sprinter with prostheses are beginning to illustrate to the public that definitional difficulty is no bar to the need to make decisions, or to our ability to make them when we are presented with a particular application.
The key need is to mainstream this discussion, and get it out of the hands of transhumanists on the one hand and Luddites, if there really are any, on the other. Our embrace of the technologies of the 21st century depends vitally on our understanding of their implications and our ability to take responsibility for their development.
The significance of the sports doping (and prosthetics) stories, and the particular issue that sparked the Times article (discussion among educators about academics’ use of performance-enhancing pills like Adderall), is that they offer us the opportunity to reflect on the limits of such interventions in human capacities with an eye on current technologies and their uses – well before tomorrow’s technologies, with their sci-fi promise of brain-boosting implants and the steady cyborgization of at least some of the world’s “haves,” have become immediate problems. Conversely, how the current debates go (and they may well be going one way in the sports world and another in the laissez-faire world outside) is likely to weight the dice for every future engagement with such technology applications. (ChoosingTomorrow)
November 19, 2007
Time to Re-focus: the National Nanotechnology Initiative Re-authorization
The 2003 21st Century Nanotechnology Research and Development Act, which set out a legislative framework for the National Nanotechnology Initiative, is up for re-authorization. This process offers a unique opportunity to those who have raised concerns about the implementation (and to some extent the language) of the original Act.
While the 2003 Act set out concerns about safety issues and broader societal and ethical implications, a key problem with the development of the NNI can be traced back to the Act’s failure to ensure that they would be followed through with appropriate urgency and resources. Since many parties are involved in the NNI (including a couple of dozen agencies as well as the National Academies, who were required by the Act to report on the operation of the NNI), it is hard to assign responsibility. But there seems little doubt about the failure – on both scores, EHS (environmental health and safety) and NELSI (nano ethical, legal and societal issues). While good work has been funded, both of these elements have received short shrift. Parsimony on the EHS front has led business and environmental leaders to combine in pressing for a major increase in safety research. Delay and limited action on the NELSI front has left the US well behind our European competitors in seeking to show how serious we are about following through on the implications of this transformative technology – and enabled some key individuals, under the auspices of the NSF, to frame nanoscale technological convergence along transhumanist lines (in the infamous conference document on “improving human performance” and its successor volumes).
Matters were not helped by the National Academies’ report (required by the statute), which dismissed concerns about the implications of artificial intelligence and the enhancement of human intelligence (specified in the statute as two out of six areas of special concern) as “science fiction” not worthy of consideration. Or the recent congressional Joint Economic Committee report’s proposal that the “singularity” (when, as some suggest, artificial intelligence will surpass human, and take over) may be expected as soon as 2020.
Congress has already shown concern at neglect on the NELSI front. In the FY2006 appropriations legislation, the conference report included these words: “The conferees are aware of concerns that insufficient attention and study has been directed toward the ethical dimensions of nanotechnology research. . . . The conferees expect OSTP [White House Office of Science and Technology Policy] to follow the pattern established for the human genome project, allocating three percent of funding to ethical, legal and social issues research.”
This is the context for the Second Annual Conference on Nanopolicy being convened on November 30 at the National Press Club by the Center on Nanotechnology and Society, with a focus on risk – both safety risk and ethical risk. Speakers include representatives from the American Chemistry Council and the AFL-CIO. (See nano-and-society.org for details.) It is also a major theme of our recently-published book, Nanoscale: Issues and Perspectives for the Nano Century.
It is to be hoped that the re-authorization process will result in legislation that ensures that both NELSI and EHS are pulled into the mainstream of the NNI, as we seek to develop the extraordinary promise of emerging technologies on the nanoscale.
October 29, 2007
Marketing to our Brains
The implications of our growing knowledge of the processes of the brain continue to unfold.
It should be no surprise that marketers have been following along as neuroscientists have moved in on ever clearer understanding of just what happens on the neuro level when we make our choices. After all, they are in business to influence those choices.
So listen to this:
Neuromarketing uses state-of-the-art technologies such as functional magnetic resonance imaging (fMRI), magneto-encephalography, and more conventional electroencephalograms (EEGs) to observe which areas of the brain “light up” when test subjects view, hear, or even smell products or promos. The activity of regions such as the nucleus accumbens, insula, and mesial prefrontal cortex gives researchers insight into how consumers respond to specific stimuli.
“Emotions cannot necessarily be accurately described,” says Gemma Calvert, head of the Multisensory Research Group at Britain’s University of Bath and director of neuromarketing consultancy Neurosense in Oxford, England. Using brain scans, she says, “We can see the discrepancy between what you say and what your brain says, and reduce the margin of error.”
That’s what attracted Viacom Brand Solutions to experiment with neuromarketing. The London-based Viacom (VIA) subsidiary, which sells ads on the entertainment giant’s channels including MTV, VH1, Nickelodeon, Paramount Comedy, and E! Channel in Great Britain and Ireland, engaged Neurosense to measure the response of 18- to 30-year-old viewers to ads interspersed into episodes of cartoon comedy South Park. The two dozen subjects each spent an hour inside an fMRI scanner watching four programs while their brain activity was measured.
The result? Advertisements for popular “alcopop” vodka beverage WKD from Torquay, England-based Beverage Brands elicited vigorous brain responses, while ads for the Red Cross and reliable old Tetley tea produced much less reaction. The takeaway, says Calvert, is that ads “congruent” with their environment outperform those that are not.
Are we troubled? The knack in the next generation is going to lie in knowing how to be troubled without simply emulating Ned Ludd. We need a much more vigorous and varied dialogue on how emerging technologies are shaping our future, one that does not resolve each time into a choice between “shut it down” and “it’s wonderful” (with a possible third, fatalistic, choice becoming more evident: “we can’t stop it, so what’s the point in this conversation?”). We need neither Luddism nor, worse still, fatalism, but an engagement with these technologies and their application that comports with the values of our society and its democratic accountability. What does that mean for neuromarketing? A “health warning” on the ads so we know they have been neuro-honed? Proscription of the use of these techniques in political ads? And what about that perennially dire topic of ads directed at kids?
Posted by Nigel Cameron
Posted in Neuroethics
August 1, 2007
A Global View: Emerging Technologies and Climate Change
The central theme of this blog has been the need for us to make future-minded choices – and to see the future as an arena for responsibility, as if it were another geographical area of our planet. That is of course how risk management has always framed the future, and how futurists, when they have been at their most useful, have helped set up scenarios to feed today’s decision-making process. But most people just don’t see it that way, and one can see why. They reason that either we have definite future knowledge (of the Old Mother Shipton and Nostradamus variety), or what will happen is so unclear that we are wasting our time talking about it. Yet those have never been the alternatives.
As the pace of change has picked up, and governments – especially, and ironically, within the democracies – remain wedded to short-term thinking, the problem is getting worse by leaps and bounds. It is not simply a matter of technologies and their impact, though that factor grows exponentially in import and integrates with all others. The growing asymmetries that are helping make the world a more dangerous and unpredictable place have not junked the need for forward thinking; they have just made it more necessary and, to be sure, more interesting. Of course, governments plan. Their often maligned civil servants, drawn at the higher levels from the brightest and best of their generation, do this all the time. I had the privilege last year of participating in just such a project, at the behest of the US Department of State and other agencies. But on the political level – among elected officials and the citizens who elect them – the situation can be dire. And unless there is buy-in from them, the efforts of the brightest and best will stay forever in their files.
It is hard enough to run a corporation accountable to the market through quarterly reporting. Think how hard it must be to run a country accountable to the daily news cycle. And, thanks to the new media, there barely is a news cycle any more. We are into the politics of 24/7. The notion of hundreds of leaders traipsing across Europe to spend months and years working on the Peace of Westphalia or the Treaty of Versailles reads today like a bizarre kind of science fiction, the history of another planet. (How will this nation state idea play on CNN?) Flux, unpredictability and asymmetry require studied future-mindedness, and yet are proportionately less likely to get it. Which fact itself becomes a looming element in the emerging asymmetric equation.
And this, of course, is the environment into which the climate change dynamic has been released. It has begun to evince the characteristics of the political matrix that has two settings, each alike antithetical to considered long-term policy development: panic and ignore. But there is a third setting: panic AND ignore. That is, panic and make speeches. Like the economic consequences of an ageing workforce and the specific issue (that we keep highlighting here) of the need to develop long-term technology policy, the inter-generational thinking (and trade-offs) involved are going to test our mettle as few things have. The easy answers that will play well on the 24-hour media – all the way from “nanotechnology will solve the problem so we don’t need to make hard choices” to “blame America because it is easy and makes us feel good” to “it’s all up to China and India so let’s sit on our hands” – are alike ways to slough off responsibility for our own futures.
Not that the answers are easy; answers rarely are. But once we get the questions right at least we have a chance.
Posted by Nigel Cameron
Posted in EmergingTech
July 17, 2007
New Initiative: The Center for Policy on Emerging Technologies (C-PET)
Developments in such fields as information technology and biotechnology have already had profound effects on our lives as individuals and communities. There is general agreement that the compounding effects of “emerging technologies” (ET) will radically re-shape the future of human society. While there is uncertainty and disagreement as to the likely pace and direction of such change, its far-ranging social and personal impact is indisputable. Business, NGOs, the scientific community and government have a common interest in public engagement on issues of technology policy, and critical evaluation of particular applications. Yet popular understanding and civil society debate on the implications of emerging technologies remain very limited.
As ETs increase their impact on all aspects of society, including all industrial sectors, it will be hard to overstate the importance of mature, informed discussion within civil society as the context for policy development by governments and multilaterals. This has been illustrated by the insurgent ET debates over genetically-modified (GMO) foods (mainly in Europe), and over embryonic stem-cell research/cloning. If, as observers from various perspectives claim, nanoscale technological “convergence” is set to offer the disruptive and transformative technology of our generation, such current controversies should be seen as samples of the kind of political and social upheavals that loom ahead. One lesson from the GMO controversy, in particular, is the importance of informed dialogue on ET issues early in a technology’s development, as a key component in the search for policy solutions – and in order to mitigate risk.
There is at present no standalone think tank in Washington, DC, with broad interests in science and technology policy issues.
C-PET key issues
The focus will be on issues of strategic significance that have not yet entered the policy mainstream, including the following:
* Artificial intelligence (AI) and enhanced human intelligence
* ET military applications
* Surveillance/privacy, including RFIDs
* Distributive justice (e.g., “nano-divide”) issues
* Neuroscience, neurotechnology, and behavioral control
* Risk and public policy
* Human augmentation (“enhancement”)
* Nanotechnology and “converging technologies”
* Synthetic biology
* Developments in genomics
The Steering Committee represents the nonpartisan character of C-PET by drawing on thought leaders from across the spectrum of political, cultural and religious/secular opinion:
* Daniel Caprio, former Chief Privacy Officer and Deputy Assistant Secretary for Technology Policy, Department of Commerce
* Patricia Smith Churchland, Presidential Chair in Philosophy, University of California, San Diego
* Andrew Kimbrell, Director, International Center for Technology Assessment
* Carl Mitcham, Professor of Liberal Arts and International Studies, Colorado School of Mines; editor, Encyclopedia of Science, Technology and Ethics
* C. Ben Mitchell, Professor of Bioethics and Contemporary Culture, Trinity Evangelical Divinity School; editor, Ethics and Medicine
* Jonathan Moreno, Professor of Medical Ethics and of History and Sociology of Science, University of Pennsylvania; Senior Fellow, Center for American Progress
* Charles Rubin, Associate Professor of Political Science, Duquesne University; contributing editor, The New Atlantis
* Daniel Sarewitz, Professor of Science and Society; Director, Consortium for Science, Policy and Outcomes, Arizona State University
* Cynthia P. Schneider, Distinguished Professor of the Practice of Diplomacy; Executive Director, Perspectives of the Future of Science and Technology, Georgetown University; former U.S. Ambassador to the Netherlands
* Gregory Stock, Director, Program on Medicine, Technology, and Society, School of Public Health, UCLA
Current C-PET initiatives
Early projects include:
* Establishing the Atlantic Dialogue on Emerging Technologies (ADET), to bring together corporate, NGO and government perspectives from Europe and the United States
* Building partnerships with collaborating organizations, including the Illinois Institute of Technology’s Center on Nanotechnology and Society, the Consortium for Science and Public Outcomes at Arizona State University, and the Converging Technologies Bar Association
* Developing a web-based global clearing-house on technology policy and its societal implications.
C-PET is being organized as a nonpartisan, not-for-profit, 501(c)(3) corporation.
Nigel M. de S. Cameron
Research Professor of Bioethics and Associate Dean, Illinois Institute of Technology
President, Center for Policy on Emerging Technologies
June 14, 2007
US Government funds virtual reality research
News that the NSF is spending half a million dollars to improve our ability to create a virtual reality avatar is bizarre enough. Add the info that the avatar is intended to represent a senior NSF official and we start checking the calendar on the assumption that this is April 1. Or the Onion. Or the Twilight Zone. Since it’s June already and the World Future Society does not publish the Onion, it looks like the Twilight Zone after all. But read it for yourself. From the story:
For centuries, humans have been trying to beat mortality through technology, employing such fanciful (if chilling) methods as cryonics, or the freezing of cadavers in the hope that science might one day stumble upon a cure for all ills. Now, the National Science Foundation has awarded a half-million-dollar grant to the universities of Central Florida at Orlando and Illinois at Chicago to explore how researchers might use artificial intelligence, archiving, and computer imaging to create convincing, digital versions of real people, a possible first step toward virtual immortality.
“The goal is to combine artificial intelligence with the latest advanced graphics and video game-type technology to enable us to create historical archives of people beyond what can be achieved using traditional technologies such as text, audio, and video footage,” says Jason Leigh of the University of Illinois at Chicago’s Electronic Visualization Laboratory.
Leigh’s lab will attempt to store and then digitize the appearance, mannerisms, voice, and (some of) the knowledge of a senior program manager from the National Science Foundation who is known for his institutional savvy. The researchers hope to then assemble the data into a “virtual person” or avatar that will be able to respond to questions and behave in a manner representative of the test subject.
Not that we should refuse to research on AI, nanotechnology, and avatars; that is not my point. But we need to talk about it first. Where is the great public debate about the implications of these technologies? We are shooting first and asking questions later. And for watchers of the National Science Foundation, this may be no surprise. The NSF’s flagrant disregard of Congressional concern that we look long and hard at the ethical and societal dimensions of such research – especially artificial intelligence in the context of the National Nanotechnology Initiative – is slowly becoming a scandal.
But we can always ask the NSF avatar for the answer.
Posted by Nigel Cameron
Posted in EmergingTech
June 11, 2007
The Conquest of the Neuron
With potential implications that could dwarf those of every other technology, the slow courtship of brain and machine continues . . . . Here is the latest: scientists in Israel have used live neurons to store information. According to the New Scientist, this is a first . . . .
Now Itay Baruchi and Eshel Ben-Jacob of Tel Aviv University in Israel have taught new firing patterns to a network of neurons by targeting specific points of the network with a chemical called picrotoxin. The new patterns lasted for up to two days without harming the pre-existing firing patterns (Physical Review E, DOI: 10.1103/PhysRevE.75.050901). “You can think of it like a Christmas tree with lights that flicker,” says Ben-Jacob. “We imprinted another pattern of lights on top of the original.”
Many believe that complex patterns of neuronal firing are templates for memory, which the brain uses when storing information. Imprinting such “memories” on artificial neural networks provides a potential way to develop cyborg chips, says Ben-Jacob. These would be useful for monitoring biological systems like the brain and blood since, being human, they would respond to the same chemicals.
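The idea of “imprinting” patterns on an artificial neural network has a classical textbook analogue that is easy to sketch: a Hopfield network, which stores firing patterns as stable states and recalls them from corrupted input. The tiny network and patterns below are invented purely for illustration; they are not a model of the Tel Aviv experiment.

```python
import numpy as np

# Two 8-unit binary firing patterns to "imprint," loosely analogous
# to the overlaid light patterns in Ben-Jacob's Christmas-tree metaphor.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, -1, -1, 1, 1, -1, -1],
])
n = patterns.shape[1]

# Hebbian learning: each stored pattern adds its outer product to the
# weight matrix; self-connections are removed.
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)

def recall(state, sweeps=5):
    """Update units one at a time until the network settles on a stored pattern."""
    s = state.copy()
    for _ in range(sweeps):
        for i in range(n):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt the first pattern in two places; recall repairs it.
noisy = patterns[0].copy()
noisy[:2] *= -1
print(recall(noisy))   # settles back on the first stored pattern
```

The point of the sketch is the one Ben-Jacob is making: a second pattern can be stored on top of the first without erasing it, and either can be retrieved intact.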
April 27, 2007
No doubt about it: whatever the speed of change that gets us there – and we may well believe it will be slower than Kurzweilian singularity-speak suggests – the significance of artificial intelligence will grow and grow. And meanwhile, the social conversation that will shape how and – in some areas – whether it grows has hardly begun to happen. From the story:
Once people have followed a recipe and become acquainted with robots, they can build on their experience, said Emily Hamner, a senior research associate in the CREATE Lab. Not only can they customize the recipes to their liking, they can also design new robot types using the Qwerk controller.
Qwerk itself is a full-fledged computer with a Linux operating system that can use any computer language. It features a field programmable gate array (FPGA) to control motors, servos, cameras, amplifiers and other devices. It also accepts USB peripheral devices, such as Web cameras and GPS receivers. “We leveraged several low-cost, yet high-performance components that were originally developed for the consumer electronics industry when we designed Qwerk,” said Rich LeGrand, president of Charmed Labs. “The result is a cost-effective robot controller with impressive capabilities.”
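For the curious, here is roughly what driving a servo on a networked controller board of this kind might look like from the programmer’s side. The story does not describe Qwerk’s actual API, so the JSON wire format, port number, and function names below are all invented for illustration.

```python
import json
import socket

def servo_command(servo_id, angle):
    """Encode a 'move servo to angle' request as one JSON line (hypothetical format)."""
    if not 0 <= angle <= 180:
        raise ValueError("angle must be between 0 and 180 degrees")
    msg = {"cmd": "servo", "id": servo_id, "angle": angle}
    return json.dumps(msg).encode() + b"\n"

def send_command(host, payload, port=9999):
    """Ship a command to the (hypothetical) controller over TCP."""
    with socket.create_connection((host, port), timeout=2) as conn:
        conn.sendall(payload)

# Example usage (would require a controller listening on the network):
# send_command("192.168.1.50", servo_command(0, 90))
```

The separation matters in practice: keeping message encoding apart from transport makes the encoding testable without any hardware on the bench.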
Posted by Nigel Cameron
Posted in EmergingTech
April 2, 2007
Congress and Credulity
The report just issued by the Joint Economic Committee of Congress (a combined House/Senate group) makes bizarre reading.
Like some of the other surreal documents that have resulted from the efforts of the National Nanotechnology Initiative, it is hard to read it as a statement concerning the real world – or integrated with other aspects of that world. Which is not in any way to doubt the truly extraordinary potential of nanoscale research and development to transform large areas of the economy. But is to doubt the sanity of much of what has passed for policy discussion around that potential. The multi-volume book series on “converging technologies” that has been in various ways sponsored by the NSF’s nano leaders continues to run its zany course. And here we have our congressional leaders not simply buying the highly optimistic gloss that some of those leaders have put on the development and prospects of the technology, but adding as a coup de grâce Ray Kurzweil’s “singularity” as phase 5 of the nano roll-out – to be expected in 2020.
Is this report issued on the same planet as anguished discussion of “the deficit,” hand-wringing over the costs of war, and occasional bursts of bipartisanship on the need to do “something” about our burgeoning social security responsibilities? Because, of course, if the “singularity” is postulated in just 13 years’ time, many years sooner than assorted economic projections that are routinely made, all bets are surely off.
Which is to say: we have to find some rational way to develop science and technology policy, so it is not dependent upon the unbridled enthusiasms of scientific civil servants in the agencies, or the uncritical adoption of their projections by lawmakers.
March 28, 2007
News of the new online risk-focused journal launched by Rice University’s NSF-funded Center on Biological and Environmental Nanotechnology and associated International Council on Nanotechnology (CBEN and ICON to their friends) is welcome, though I am puzzled why it should be called a “virtual journal.” Isn’t it an actual journal? We are so used to online publications that this seems curious; the kind of decision that must have been made by a committee, perhaps with an old-timer on board who believes real journals need paper and print . . . . Unless, of course, it will be published in Second Life.
The real significance of this news lies in the fact that it is news. The effort to play down the significance of risk – risk of all kinds – in the nanoworld has (with delightful irony; delightful, that is, to those of us who enjoy irony) added more to its risk than anything so far published. The failure of the various relevant U.S. federal government agencies to take serious responsibility for nano risk has left the foes of nano publicly angry but, at least in some cases, privately pleased – for this very reason. It has left the more naive friends of nano pleased too; who wants to have their boat rocked?
Those who would aspire to be non-naive nano friends remain perplexed, since they have a firmer grasp of the deep necessity of violent boat-rocking upstream in the development of the technology if it is going to be found both safe and socially credible. If this is really the ultimately transformative technology, as book after book from the NSF has been telling us with federal authority, one would have thought that a commensurate effort would be put out to ensure not simply that we know all there is to be known about nanotoxicity (and know it fast) but that wider risk issues – arising from the NELSI questions (nano ethical, legal and societal issues, which weighed heavily with Congress when it passed the 2003 nano act) would be funded to the hilt.
There are other initiatives in the risk pipeline, including one from Environmental Defense and DuPont. But back of these particulars, the ultimate risk remains that emerging nanotechnologies will be employed to demean humankind by advancing the “transhumanist” agenda – and/or that the prospect of this future will lead to a neo-Luddism that repudiates benefit and disbenefit alike.
The nanotechnology coalition that launched the first online database of scientific findings related to the benefits and risks of nanomaterials has taken the concept one step further with the launch today of The Virtual Journal of Nanotechnology Environment, Health & Safety (VJ-Nano EHS). The journal may be accessed at . . .
A monthly online journal that contains citations and links to articles on the environment and health impacts of nanotechnology, VJ-Nano EHS is a product of The International Council on Nanotechnology (ICON) and Rice University’s Center for Biological and Environmental Nanotechnology (CBEN), which launched the first EHS database in August . . .
March 23, 2007
Economics of Pandemic Flu
The low level of public attention being focused on the prospect of pandemic flu continues to surprise, though it meshes with the ambivalence with which various government entities have been seeking to catch our interest.
The latest report on likely economic impacts is devastating: a 5.5% drop in US GDP. The model used was based on the experience of the 1918 pandemic, and assumed that this time around 2.2 million Americans would die, out of 90 million infected. With numbers like that in prospect (higher than some other somewhat rosier predictions) one can see why politicians are gun-shy and aspiring presidential candidates have not majored on a “Katrina in every state” scenario – yet.
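The arithmetic behind those figures is worth a moment. Taking the US population as roughly 300 million (an approximation for the period), the model’s 90 million infections and 2.2 million deaths imply the following rates:

```python
# Back-of-envelope check on the pandemic model's assumptions.
population = 300_000_000   # approximate US population (assumption for illustration)
infected = 90_000_000      # infections assumed by the model
deaths = 2_200_000         # deaths assumed by the model

attack_rate = infected / population   # share of the population infected
case_fatality = deaths / infected     # share of the infected who die

print(f"Implied attack rate: {attack_rate:.0%}")           # 30%
print(f"Implied case-fatality rate: {case_fatality:.1%}")  # 2.4%
```

A 30% attack rate is in line with severe-pandemic planning scenarios, while a case-fatality rate around 2.4% echoes the 1918 experience on which the model is based, far above ordinary seasonal flu.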
Posted by Nigel Cameron
Posted in Healthcare
March 7, 2007
Matrix, here we come
We’ve had wires used to connect the brain to the PC, most famously Kevin Warwick’s. We’ve recently seen a report of a wire being made out of neurons. Now we have an EEG skullcap – to be used for video games. And while all three are, variously, primitive and experimental, the message is plain as a pikestaff to anyone with the eyes to see: the brain-machine divide is morphing into an interface, and an interface unmediated by us and our senses.
To use Emotiv’s system, a person puts on the EEG cap and adjusts it to her head, making sure that most of the sensors touch the scalp. The system automatically picks up blinks and emotional states. However, in order to move virtual objects, such as a box on a computer screen, a person must go through a series of training sessions in which she concentrates for about 10 seconds on mentally moving the box. Tan Le, one of Emotiv’s cofounders, says that there is a large amount of machine learning built into the software, so the more a person concentrates on a specific task, the more precisely the system follows the mental instructions.
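The training sessions Tan Le describes can be sketched in miniature. A common simple approach (and only an illustrative stand-in for whatever Emotiv actually does) is to reduce each EEG trial to a feature vector, such as band power per sensor, and classify new trials by their nearest state “centroid.” All the numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-trial "band power" features for two mental states; the
# feature dimensions, means, and trial counts are invented for the sketch.
def simulate_trials(mean, n_trials=50):
    return rng.normal(mean, 0.5, size=(n_trials, len(mean)))

rest = simulate_trials([1.0, 2.0, 1.5, 1.0])
focus = simulate_trials([2.0, 1.0, 1.5, 2.5])   # "concentrate on the box"

# "Training" stores the centroid of each state's feature vectors; more
# training trials give steadier centroids, one simple reason practice helps.
centroids = {"rest": rest.mean(axis=0), "focus": focus.mean(axis=0)}

def classify(sample):
    """Label a new trial by its nearest state centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(sample - centroids[k]))

print(classify(np.array([2.1, 0.9, 1.4, 2.6])))   # a focus-like trial
```

Even this toy version shows why the ten-second concentration drills matter: without enough labeled trials per mental state, the centroids are too noisy to separate “move the box” from everything else the brain is doing.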
March 3, 2007
Reports keep coming on the efforts of some of our smartest people to reverse engineer the brain – the so-called Human Cognome Project. This one in Wired describes a Silicon Valley entrepreneur’s project to rebuild it from the bottom up. There is little doubt that the results of these ventures will present us humans with our biggest challenges ever – if only we can begin to frame the questions we need to start answering. It doesn’t help that there is seemingly no-one at the highest level in public life who is taking them seriously. If the human brain is going to save itself from being copied and “improved” by machine intelligence it is going to have to get much more focused on the policy dimensions of cognitive science.
February 28, 2007
The Price(lessness) of Privacy
No-one who has an eye on the spate of reports that every day demonstrate the range and vigor of emerging technologies can doubt that privacy is as good as over – or, to put it another way, that the passive privacy we have taken so much for granted (no-one knows what you are doing as no-one can see you) is soon to be replaced by what could prove the most costly of all commodities: privacy at a price. Think of all those movies about the NSA and its capacity to spy on your every move, and double it, triple it, and keep moving up the geometric progression of surveillance 2.0.
This report looks at the new generation of video surveillance; others have updated us on RFIDs. One way and another, those who want liberty and see privacy as essential to its flourishing will have to find a way to carve out a zone around us where intrusions driven by fear of crime, defense against terror, and relentless marketing are held at bay.
Surveillance cameras are common in many cities, monitoring tough street corners to deter crime, watching over sensitive government buildings and even catching speeders. Cameras are on public buses and in train stations, building lobbies, schools and stores. Most feed video to central control rooms, where they are monitored by security staff.
The innovations could mean fewer people would be needed to watch what they record, and make it easier to install more in public places and private homes.
“Law enforcement people in this country are realizing they can use video surveillance to be in a lot of places at one time,” said Roy Bordes, who runs an Orlando, Fla.-based security consulting company. He also is a council vice president with ASIS International, a Washington-based organization for security officials. The advancements have already been put to work. For example, cameras in Chicago and Washington can detect gunshots and alert police. Baltimore installed cameras that can play a recorded message and snap pictures of graffiti sprayers or illegal dumpers.
In the commercial market, the gaming industry uses camera systems that can detect facial features, according to Bordes. Casinos use their vast banks of security cameras to hunt cheating gamblers who have been flagged before.
February 27, 2007
A “Triumph” for Science, or merely for Scientists?
There is news from the UK, home of the ethical slippery-slope, that a campaign by scientists to undermine a highly unusual effort on the part of the government to say no to the production of hybrid embryos has paid off. The government has pulled back. Hybrid embryos, the latest sine qua non for cures via embryonic stem cell research and cloning, are now on the cards. Once more the UK is way out of step with most of the western democracies (which are liable to send you to jail for doing things like that, not give you funding), but this time the policymakers do not seem to have gone down without a fight – one driven by public opinion.
One of the great unknowns of biopolicy development, especially in Europe, lies in the question whether at some stage the vast impetus of opposition to genetically-modified foods will move to crush outlandish developments in human biology too; and whether provocative moves like this one will finally serve to strengthen biocritics.
At the same time, the credibility of scientists is on the line. Do we want them to serve as lobbyists? Is that good for democracy, and, finally, is it good for science? Should generals lobby for a war? Whatever their motives (and we assume they go beyond their love of grant funding), the spectacle is unedifying and could ultimately prove counter-productive. Since the early 20th century, scientists have retained the status of demi-gods in our technophile and disease-fearing culture. There are many reasons why their image is slipping. Scientist-as-entrepreneur has already clouded the picture. The emergence of scientist-as-lobbyist could add another cause of doubt.
The Times (London)
February 27, 2007, Tuesday
SECTION: HOME NEWS; Pg. 7
LENGTH: 611 words
HEADLINE: Scientists triumph in battle over ban on hybrid embryos
BYLINE: Mark Henderson, Science Editor
* Proposal for legal curbs to be dropped
* Fears for British science led to move
Plans to outlaw the creation of human-animal hybrid embryos for potentially life-saving stem cell research are to be dropped after a revolt by scientists.
The proposed government ban on fusing human DNA with animal eggs, which promises insights into incurable conditions such as Alzheimer’s and motor neuron disease, will be abandoned because of concerns among senior ministers that it will damage British science.
While ministers will not endorse the research in full yet, they are no longer seeking legislation to prohibit it, The Times has learnt. The Government will instead provide the fertility watchdog with funds for a public debate on the subject before new laws are drafted.
Posted by Nigel Cameron
Posted in EmergingTech
February 26, 2007
More from Second Life
Savvy observers of the interface of humans and technology have been casting an eye on Second Life’s particular instance: that between humans and technologically-delivered virtual reality. In this LA Times story, the growing pains of the virtual community are under scrutiny, including its experience of terrorism and the appearance of mainstream (non-virtual) brands.
One is reminded of the early days of the internet itself, when “netiquette” reigned and early adopters early adopted a utopianism that now seems silly. Whether Amazon and Ebay have ruined the web or shown that even technology can sometimes be useful has ceased to keep us awake at night.
Virtual loses its virtues
By Alana Semuels, Times Staff Writer
February 22, 2007
LIKE any pioneer, Marshal Cahill arrived in a new world curious and eager to sample its diversions. Over time, though, he saw an elite few grabbing more than their share.
They bought up all the plum real estate. They awarded building contracts to friends. They stifled free speech.
Cahill saw a bleak future, but he felt powerless to stop them. So he detonated an atomic bomb outside an American Apparel outlet. Then another outside a Reebok store.
As political officer for the Second Life Liberation Army, Cahill is passionately committed to righting what he considers the wrongs of a world that exists only on the computer servers of Linden Lab in San Francisco.
February 22, 2007
Full-Blown Human Cloning is Inevitable, says Nature
This month is the 10-year anniversary of the momentous announcement that Dolly the sheep had been cloned. Her fatuous name (a foray into locker-room humor) and ever-placid face were soon familiar to billions of humans. And her human creators were quick to point out that they would never dream of doing this to a member of their own species.
Well, one of the few certainties of the modern world is the Law of Ethical Entropy. Find the strongest declaration you can that something should and will never happen, add 10 years, and you know what you get. And despite Nature’s deployment of one of the most tendentious and morally bankrupt statements I have ever read (“There is a consensus that dignity is not undermined if a human offspring is valued in its own right and not merely as a means to an end”) it is still widely agreed by people left, right and center that we do not ever want human babies to be cloned. Nature now tells us that this is inevitable, and perhaps it is – like the next flu pandemic. But that does not mean we either welcome it or use it as an opportunity to write supine editorials (whose is the “consensus” of which Nature speaks?).
Meanwhile, the main debate still focuses on making cloned embryos for research. The year 1997 was just three years after the Washington Post declared that it would be “unconscionable” to create embryos for research; and in 1997 itself the European Convention on Human Rights and Biomedicine was opened for signature, which turns the Post’s repugnance into international law by prohibiting signatories from creating embryos for research.
In contrast, what has been universally deemed as unacceptable is the pursuit of human reproductive cloning – or the production of what some have called a delayed identical twin. Here, the two issues that have dominated the discussion have been dignity and safety. There is a consensus that dignity is not undermined if a human offspring is valued in its own right and not merely as a means to an end. But there is no consensus that we will eventually know enough about cloning for the risks of creating human clones to be so small as to be ethically acceptable.
The debate may seem to have been pre-empted by prompt prohibition. But as the science of epigenetics and of development inevitably progresses, those for whom cloning is the only means to bypass sterility or genetic disease, say, will increasingly demand its use. Unless there is some unknown fundamental biological obstacle, and given wholly positive ethical motivations, human reproductive cloning is an eventual certainty.
February 17, 2007
Mind-reading is Today
The movie Minority Report keeps tugging at today from tomorrow. The latest hook lies in a report on research that shows the possibility of predicting human action in advance by scanning the brain.
February 14, 2007
Tapping Brains for Future Crimes
By Jennifer Granick
A team of neuroscientists announced a scientific breakthrough last week in the use of brain scans to discover what’s on someone’s mind.
Researchers from the Max Planck Institute for Human Cognitive and Brain Sciences, along with scientists from London and Tokyo, asked subjects to secretly decide in advance whether to add or subtract two numbers they would later be shown. Using computer algorithms and functional magnetic resonance imaging, or fMRI, the scientists were able to determine with 70 percent accuracy what the participants’ intentions were, even before they were shown the numbers.
The study used “multivariate pattern recognition” to identify oxygen flow in the brain that occurs in association with specific thoughts. The researchers trained a computer to recognize these flow patterns and to extrapolate from what it had learned to accurately read intentions.
The finding raises issues about the application of such tools for screening suspected terrorists — as well as for predicting future dangerousness more generally. Are we closer than ever to the crime-prediction technology of Minority Report?
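The “multivariate pattern recognition” Granick mentions amounts, in essence, to training a classifier on many voxels’ activation levels at once rather than inspecting any one of them. A minimal sketch with wholly synthetic data (not the study’s fMRI recordings, and a bare perceptron standing in for whatever algorithms the Max Planck team actually used):

```python
import random

random.seed(1)
VOXELS = 5
# Illustrative per-class mean activation patterns (invented for the sketch).
ADD_PATTERN = [0.8, 0.2, 0.9, 0.1, 0.5]
SUB_PATTERN = [0.2, 0.7, 0.3, 0.9, 0.5]

def sample(pattern):
    """One noisy 'scan': the class pattern plus Gaussian measurement noise."""
    return [x + random.gauss(0, 0.25) for x in pattern]

def train(n_trials=200, epochs=10, lr=0.1):
    """Fit a perceptron to separate 'add' (+1) from 'subtract' (-1) intentions."""
    data = [(sample(ADD_PATTERN), 1) for _ in range(n_trials)] + \
           [(sample(SUB_PATTERN), -1) for _ in range(n_trials)]
    w, b = [0.0] * VOXELS, 0.0
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

w, b = train()
held_out = [(sample(ADD_PATTERN), 1) for _ in range(100)] + \
           [(sample(SUB_PATTERN), -1) for _ in range(100)]
accuracy = sum(predict(w, b, x) == y for x, y in held_out) / len(held_out)
print(f"held-out accuracy: {accuracy:.0%}")
```

The real study’s 70 percent figure reflects how much noisier and higher-dimensional genuine fMRI data is than this tidy toy; the principle, though – learn a pattern over the whole set of measurements, then extrapolate to new scans – is the same.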
Posted by Nigel Cameron
Posted in Neuroethics
February 11, 2007
Hard Questions on the Public Funding of Science
We should never be surprised when libertarians weigh in with radical questioning of the status quo. It is, as it were, their function. And the stronger the status quo, the more valuable the questioning.
So the op-ed in today’s NY Times (regional opinions) on public funding of science and technology, by Sigrid Fry-Revere of the Cato Institute, is to be welcomed. She points out, among other things, that most medical research in the US is not funded by the NIH, and lauds the market as the place where funding will be found for worthy ideas. Her special focus is the question of public funding for stem cell research, and she is critical of California’s multi-billion dollar effort and supportive of states that merely ensure it is legal so private funders can come up with the cash.
This makes a certain amount of sense, although the example she picks is a poor one. While some private institutions have been funding embryonic stem cell research, there is so little funding from for-profit corporations that I could state in the San Francisco Chronicle (a week or two before the vote on Prop. 71 in 2004) that the market had valued this research at close to $0 – without fear of my being proved wrong.
To that extent, Ms. Fry-Revere has opened a somewhat different debate about the role of hype in the getting of public funding (aka, in some cases, corporate welfare) as a means of transferring risk to the public purse – without, of course, the transfer of corresponding benefit (thanks to Bayh-Dole et al.). This is a debate we need to have. The post-War vision of Vannevar Bush that set out the model in which, as we might say, the federal government operates as the venture capitalist of last resort in funding long-term research projects, is in need of review.
Even where political consensus can be reached, government-financed research is enormously wasteful and bureaucratic. In California, once bonds are issued to raise the promised money, residents will end up paying an additional $3 billion in taxes to cover interest and related costs over three decades, when private research could have proceeded with no debt at all. Both the proposals in New York and New Jersey would also entail bond issues with huge interest burdens.
At least one state has gotten it right. During the last election, Missouri voters passed a constitutional amendment protecting the right to pursue all forms of stem cell research allowed under federal law, creating a haven for advanced laboratories. There was no state financing included. Within days, the Stowers Institute for Medical Research, which had raised $2 billion, put to work an international team of stem cell researchers it had assembled in anticipation of the amendment’s passage.
Medical research and development do not grind to a halt when government declines to support them. As a former Health and Human Services secretary, Tommy Thompson, said at a meeting of biotech industry leaders in December, 80 percent of all of the world’s medical research is done in the United States, in spite of more generous government subsidies in some countries.
According to The Journal of the American Medical Association, the amount of money for biomedical research in the United States increased to $94.3 billion in 2003 from $37.1 billion in 1994, and only 28 percent of that money came from the National Institutes of Health. Furthermore, data reported by the journal and by the National Science Foundation indicate that in areas where public financing remained the same or declined, the private sector stepped in and increased its share of research spending.
That’s exactly what happened 30 years ago, when the federal government – also for ethical reasons – refused to financially support in vitro fertilization research. The research went forward privately, and today, reproductive technologies represent a $6-billion-a-year industry in the United States alone.
Whether or not someone supports embryonic stem cell research from an ethical perspective, he should oppose subsidizing research that could potentially bring billions in profits to the biotech industry. There is no reason that private companies can’t invest in their own research and development; they do it all the time, though one can understand their desire to have taxpayers foot the bill.
If the federal government must be involved, it shouldn’t insulate companies from the financial risks of developing new therapies. It should do what Missouri did: promise to stay out of the way and let research proceed regardless of political whims.
Government financing, after all, comes and goes with the politics of those in power. Private money, by contrast, comes and goes depending on the progress of the research and the likelihood of success.
Scientists should spend more time seeking private assistance and less time lobbying for government support. If there is a need, and if there is a way, the private sector will do it and do it much more efficiently than government would.
Stem cell research holds more medical promise than any scientific breakthrough since Francis Crick and James Watson discovered the structure of DNA, and it should be explored with all the enthusiasm and ingenuity the scientific community can muster.
February 9, 2007
Fantastic Voyage News Ahead
It would seem that the fantastic voyage has arrived, or is at least about to depart. Not, initially, on the nanoscale, but the microscale – two hairs wide – and quite small enough to tour the circulatory system. For those who have been the beneficiaries of rather larger intrusions into their orifices and plumbing, this news will come as a potentially rather personal cause for exhilaration. It has always seemed curious that as our chips get smaller and uber-powerful we have been unable to let loose microscopic explorers into the tubes that run through our bodies.
According to this report, the key lies in the development of a piezoelectric motor, other efforts to design sufficiently powerful outboards (or inboards) for the cruising craft having failed.
Of course, two whole hairs is huge in nano terms. But it’s one small step for man . . .
An international team of scientists is developing what they say will be the world’s first microrobot — as wide as two human hairs — that can swim through the arteries and digestive system.
The scientists are designing the 250-micron device to transmit images and deliver microscopic payloads to parts of the body outside the reach of existing catheter technology.
It will also perform minimally invasive microsurgeries, said James Friend of the Micro/Nanophysics Research Laboratory at Australia’s Monash University, who leads the team. The researchers hope the device will reduce the risks normally associated with delicate surgical procedures.
While others have tried and failed to create microrobots for arterial travel, Friend believes his team will succeed because they are the first to exploit piezoelectric materials — crystals that create an electric charge when mechanically stressed — in their micromotor design.
“People have tried various techniques, including electromagnetic motors,” Friend said. “But at this scale, electromagnetic motors become impractical because the magnetic fields become so weak. No one has taken the trouble to build piezoelectric motors at the same scales, for this kind of application.”
Funded by the Australian Research Council, Friend’s team is tweaking larger versions of the device, and expects to have a working prototype later this year and a completed version by 2009.
The scientists say stroke, embolism and vascular-disease patients should be the first to benefit from the new technology.
The tiny robot, small enough to pass through the heart and other organs, will be inserted using a syringe. Guided by remote control, it will swim to a site within the body to perform a series of tasks, then return to the point of entry where it can be extracted, again by syringe.
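Why, as Friend says, do electromagnetic motors become impractical while piezoelectric ones do not? A standard back-of-the-envelope scaling argument (mine, not the article’s) makes the point: at constant current density, the magnetic force between conductors shrinks roughly as the fourth power of the device’s characteristic length, while piezoelectric force (a material stress times an area) shrinks only as the square – and mass shrinks as the cube in both cases.

```python
def force_to_weight_ratio(L, mode):
    """Relative force-to-weight at characteristic length L (arbitrary units).

    Textbook microactuator scaling: electromagnetic force ~ L^4 (constant
    current density), piezoelectric force ~ L^2 (stress x area), mass ~ L^3.
    """
    if mode == "electromagnetic":
        return L**4 / L**3  # ~ L: vanishes as the device shrinks
    elif mode == "piezoelectric":
        return L**2 / L**3  # ~ 1/L: improves as the device shrinks
    raise ValueError(mode)

for L in (1.0, 1e-2, 1e-4):  # shrink by 100x at each step
    em = force_to_weight_ratio(L, "electromagnetic")
    pz = force_to_weight_ratio(L, "piezoelectric")
    print(f"L = {L:g}: electromagnetic ~ {em:g}, piezoelectric ~ {pz:g}")
```

On this crude accounting, every hundredfold reduction in size costs an electromagnetic motor a factor of a hundred in force-to-weight while handing the same factor to a piezoelectric one – which is why, at two hairs wide, the choice of outboard matters.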