World Health Organization Vaccine Recommendations: Scientific Flaws, or Criminal Misconduct?

Editor’s Note (Ralph Turchiano): The following is from Marc Girard, M.D., M.Sc., an expert witness in the alleged vaccine conspiracy and criminal inquiry against the World Health Organization (WHO). At the time of this printing, most of the documentation from those court proceedings was secret and sealed by court order. PDF of the Original

Journal of American Physicians and Surgeons Volume 11 Number 1 Spring 2006

Marc Girard, M.D., M.Sc


While much information concerning World Health Organization (WHO) recommendations on vaccines, particularly against hepatitis B, remains secret, there is sufficient evidence in the open literature to suggest scientific incompetence, misconduct, or even criminal malfeasance. The benefits are overstated and toxicity greatly understated. Influenza vaccine recommendations falsely imply that the available vaccines could help prevent avian influenza.


‘Sugar papers’ reveal industry role in 1970s dental program

Public Release: 10-Mar-2015

“They noted that the sugar industry’s current position remains that public health should focus on fluoride toothpaste, dental sealants and other ways to reduce the harm of sugar, rather than reducing consumption.”

University of California – San Francisco

A newly discovered cache of industry documents reveals that the sugar industry worked closely with the National Institutes of Health in the 1960s and ’70s to develop a federal research program focused on approaches other than sugar reduction to prevent tooth decay in American children.

An analysis of those papers by researchers at UC San Francisco appears March 10, 2015 in the open-access scientific journal PLOS Medicine.

The archive of 319 industry documents, which were uncovered in a public collection at the University of Illinois, revealed that a sugar industry trade organization representing 30 international members had accepted the fact that sugar caused tooth decay as early as 1950, and adopted a strategy aimed at identifying alternative approaches to reducing tooth decay.

Meanwhile, the National Institutes of Health had come to the conclusion in 1969 that focusing on reducing consumption of sucrose, “while theoretically possible,” was not practical as a public health measure.


Full AR5 draft leaked here, contains game-changing admission of enhanced solar forcing ((Please review, in case it gets censored))

* EEV: I saved the whole Adobe file; if any links become broken, please inform me. I will then gladly publish the whole file online as a backup.

Posted by Alec Rawls, 12/13/12

I participated in “expert review” of the Second Order Draft of AR5 (the next IPCC report), Working Group 1 (“The Scientific Basis”), and am now making the full draft available to the public. I believe that leaking this draft is entirely legal: the taxpayer-funded report is properly in the public domain under the Freedom of Information Act, and making it available to the public is in any case protected by established legal and ethical standards. Web hosting companies, however, are not in the business of making such determinations, so interested readers are encouraged to download copies of the report for further dissemination in case this content is removed as a possible terms-of-service violation. My reasons for leaking the report are explained below. Here are the chapters:

Summary for Policymakers
Chapter 1: Introduction
Chapter 2: Observations: Atmosphere and Surface
Chapter 3: Observations: Ocean
Chapter 4: Observations: Cryosphere
Chapter 5: Information from Paleoclimate Archives
Chapter 6: Carbon and Other Biogeochemical Cycles
Chapter 7: Clouds and Aerosols
Chapter 8: Anthropogenic and Natural Radiative Forcing
Chapter 8 Supplement
Chapter 9: Evaluation of Climate Models
Chapter 10: Detection and Attribution of Climate Change: from Global to Regional
Chapter 11: Near-term Climate Change: Projections and Predictability
Chapter 12: Long-term Climate Change: Projections, Commitments and Irreversibility
Chapter 13: Sea Level Change
Chapter 14: Climate Phenomena and their Relevance for Future Regional Climate Change
Chapter 14 Supplement
Technical Summary

Why leak the draft report?

By Alec Rawls (email)

General principles

The ethics of leaking taxpayer-funded documents requires weighing the “public’s right to know” against any harm to the public interest that may result. The press often leaks even in the face of extreme harm, as when the New York Times published details of how the Bush administration was tracking terrorist financing with the help of the private-sector Society for Worldwide Interbank Financial Telecommunication (SWIFT), causing this very successful anti-terror program to collapse immediately.

That was a bad leak, doing great harm to expose something that nobody needed to know about. With the UN’s IPCC reports the calculus is reversed. UN “climate chief” Christiana Figueres explains what is at stake for the public:

… we are inspiring government, private sector, and civil society to [make] the biggest transformation that they have ever undertaken. The Industrial Revolution was also a transformation, but it wasn’t a guided transformation from a centralized policy perspective. This is a centralized transformation that is taking place because governments have decided that they need to listen to science.

So may we please see this “science” on the basis of which our existing energy infrastructure is to be ripped out in favor of non-existent “green” energy? The only reason for secrecy in the first place is to enhance the UN’s political control over a scientific story line that is aimed explicitly at policy makers. Thus the drafts ought to fall within the reach of the Freedom of Information Act.

The Obama administration implicitly acknowledged this when it tried to evade FOIA by setting up private “backdoor channels” for communications with the IPCC. If NCAR’s Gerald Meehl (a lead author of AR5’s chapter on near-term climate change) has working copies of the draft report (and he is only one of dozens of U.S. government researchers who would), then by law the draft report (now finished) should be available to the public.

The IPCC’s official reason for wanting secrecy (as they explained it to Steve McIntyre in January 2012) is so that criticisms of the drafts are not spread out across the internet but get funneled through the UN’s comment process. If there is any merit to that rationale, it is now moot. The comment period ended November 30, so the comment process can no longer be affected by publication.

As for my personal confidentiality agreement with the IPCC, I regard that as vitiated by the systematic dishonesty of the report (“omitted variable fraud,” as I called it in my First Order Draft comments). This is a general principle of journalistic confidentiality: bad faith on one side breaks the agreement on the other. They can’t ask reviewers to become complicit in their dishonesty by remaining silent about it.

Then there is the specific content of the Second Order Draft, where the addition of one single sentence demands the release of the whole. That sentence is an astounding bit of honesty, a killing admission that completely undercuts the main premise and the main conclusion of the full report, revealing the fundamental dishonesty of the whole.

Lead story from the Second Order Draft: strong evidence for solar forcing beyond TSI now acknowledged by IPCC

Compared to the First Order Draft, the SOD adds the second sentence of the following passage (page 7-43, lines 1-5):

Many empirical relationships have been reported between GCR or cosmogenic isotope archives and some aspects of the climate system (e.g., Bond et al., 2001; Dengel et al., 2009; Ram and Stolz, 1999). The forcing from changes in total solar irradiance alone does not seem to account for these observations, implying the existence of an amplifying mechanism such as the hypothesized GCR-cloud link. We focus here on observed relationships between GCR and aerosol and cloud properties.

The Chapter 7 authors are admitting strong evidence (“many empirical relationships”) for enhanced solar forcing (forcing beyond total solar irradiance, or TSI), even if they don’t know what the mechanism is. This directly undercuts the main premise of the report, as stated in Chapter 8 (page 8-4, lines 54-57):

There is very high confidence that natural forcing is a small fraction of the anthropogenic forcing. In particular, over the past three decades (since 1980), robust evidence from satellite observations of the TSI and volcanic aerosols demonstrate a near-zero (–0.04 W m⁻²) change in the natural forcing compared to the anthropogenic AF increase of ~1.0 ± 0.3 W m⁻².

The Chapter 8 authors (a different group than the Chapter 7 authors) are explicit here that their claim about natural forcing being small compared to anthropogenic forcing is based on an analysis in which the only solar forcing taken into account is TSI. This can be verified from the radiative forcing table on page 8-39, where the only solar variable included in the IPCC’s computer models is seen to be “solar irradiance.”

This analysis, in which post-1980 warming gets attributed to the human release of CO2 on the grounds that it cannot be attributed to solar irradiance, cannot stand in the face of the Chapter 7 admission of substantial evidence for solar forcing beyond solar irradiance. Once the evidence for enhanced solar forcing is taken into account, we can have no confidence that natural forcing is small compared to anthropogenic forcing.

The Chapter 8 premise that natural forcing is relatively small leads directly to the main conclusion of the entire report, stated in the first sentence of the Executive Summary (the very first sentence of the entire report): that advances since AR4 “further strengthen the basis for human activities being the primary driver in climate change” (p.1-2, lines 3-5). This headline conclusion is a direct descendant of the assumption that the only solar forcing is TSI, a claim that their own report no longer accepts.

The report still barely hints at the mountain of evidence for enhanced solar forcing, or the magnitude of the evidenced effect. Dozens of studies (section two here) have found correlations between 0.4 and 0.7 between solar activity and various climate indices, suggesting that solar activity “explains,” in the statistical sense, something like half of all past temperature change, very little of which could be explained by the very slight variation in TSI. At least the Chapter 7 team is now being explicit about what this evidence means: that some mechanism of enhanced solar forcing must be at work.

My full submitted comments (which I will post later) elaborate several important points. For instance, note that the Chapter 8 premise (page 8-4, lines 54-57) assumes that it is the change in the level of forcing since 1980, not the level of forcing itself, that would cause warming. Solar activity was at historically high levels at least through the end of solar cycle 22 (1996), yet the IPCC assumes that because this high level of solar forcing was roughly constant from 1950 until it fell off during solar cycle 23, it could not have caused post-1980 warming. In effect they are claiming that you can’t heat a pot of water by turning the burner to maximum and leaving it there, that you have to keep turning the flame up to get continued warming, an unscientific absurdity that I have been writing about for several years (most recently in my post about Isaac Held’s bogus 2-box model of ocean equilibration).
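The burner analogy can be made concrete with a toy one-box energy-balance model (a sketch using assumed, illustrative parameter values, not figures from the report or from any particular study): under a forcing that is simply held constant, temperature keeps rising for decades as the system relaxes toward its new equilibrium.

```python
# Toy one-box energy balance: dT/dt = (F - lam*T) / C
# A constant forcing F still produces continued warming until
# T approaches the equilibrium value F/lam.
# All parameter values below are illustrative assumptions.

C = 8.0     # effective heat capacity (W yr m^-2 K^-1), assumed
lam = 1.2   # climate feedback parameter (W m^-2 K^-1), assumed
F = 1.0     # forcing step applied at t = 0 and held constant (W m^-2)
dt = 0.1    # time step (years)

T = 0.0
temps = []
for _ in range(int(100 / dt)):   # integrate 100 years, forward Euler
    T += dt * (F - lam * T) / C
    temps.append(T)

# Warming continues long after the "burner" was last turned up:
print(round(temps[int(10 / dt) - 1], 2))  # temperature after 10 years
print(round(temps[-1], 2))                # after 100 years, near F/lam
```

The sketch only illustrates the qualitative point at issue: with a relaxation time of several years, a forcing that stepped up and then stayed flat keeps producing warming for decades afterward.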

The admission of strong evidence for enhanced solar forcing changes everything. The climate alarmists can’t continue to claim that warming was almost entirely due to human activity over a period when solar warming effects, now acknowledged to be important, were at a maximum. The final draft of AR5 WG1 is not scheduled to be released for another year, but the public needs to know now how the main premises and conclusions of the IPCC story line have been undercut by the IPCC itself.

President Obama is already pushing a carbon tax premised on the fear that CO2 is causing dangerous global warming. Last week his people were at the UN’s climate meeting in Doha pretending that Hurricane Sandy was caused by human increments to CO2, while UN insiders assured the public that the next IPCC report will “scare the wits out of everyone” with its ramped-up predictions of human-caused global warming to come. But this is not where the evidence points, not if climate change is in any substantial measure driven by the sun, which has now gone quiet and is exerting what influence it has in the cooling direction.

The acknowledgement of strong evidence for enhanced solar forcing should upend the IPCC’s entire agenda. The easiest way for the UN to handle this disruptive admission would be to remove it from their final draft, which is another reason to make the draft report public now. The devastating admission needs to be known so that the IPCC can’t quietly take it back.

Will some press organization please host the leaked report?

Most of us have to worry about staying within cautiously written and cautiously applied terms-of-service agreements. That’s why I created this new website. If it gets taken down, nothing else gets taken with it. Media companies don’t have this problem. They have their own servers, and publishing things like the draft IPCC report is supposed to be their bailiwick.

If the press has First Amendment protection for the publication of leaked materials even when substantial national security interests are at stake (the Supreme Court precedent set in the Pentagon Papers case), then it can certainly republish a leaked draft of a climate science report where there is no public interest in secrecy. The leaker could be at risk (the case against Pentagon leaker Daniel Ellsberg was thrown out for government misconduct, not because his activity was found to be protected), but the press is safe, and their services would be appreciated.

United States taxpayers have funded climate science to the tune of well over 80 billion dollars, all channeled through the funding bureaucracy established by Vice President Albert “the end is nigh” Gore when he served as President Clinton’s “climate czar.” That Gore-built bureaucracy is still to this day striving to ensure that not a penny of all those taxpayer billions ever goes to any researcher who is not committed to the premature conclusion that human contributions to atmospheric CO2 are causing dangerous global warming (despite the lack of any statistically significant warming for more than 15 years).

Acolytes of this bought “consensus” want to see what new propaganda their tax dollars have wrought, and so do the skeptics. It’s unanimous, and an already twice-vetted draft is sitting now in thousands of government offices around the world. Time to fork it over to the people.

Landmark climate change report leaked online

Draft of IPCC’s fifth assessment, due to be published in September 2013, leaked online by climate sceptic Alec Rawls

The BoA coal-burning power plant, which went into operation in August 2012 near Grevenbroich, Germany. Photograph: Juergen Schwarz/Getty Images

The draft of a major global warming report by the UN’s climate science panel has been leaked online.

The fifth assessment report (AR5) by the Intergovernmental Panel on Climate Change, which is not due to be published in full until September 2013, was uploaded onto a website called Stop Green Suicide on Thursday and has since been mirrored elsewhere on the internet.

The IPCC, which confirmed the draft is genuine, said in a statement: “The IPCC regrets this unauthorized posting which interferes with the process of assessment and review. We will continue not to comment on the contents of draft reports, as they are works in progress.”

A little-known US-based climate sceptic called Alec Rawls, who had been accepted by the IPCC to be one of the report’s 800 expert reviewers, admitted to leaking the document. In a statement posted online, he sought to justify the leak: “The addition of one single sentence [discussing the influence of cosmic rays on the earth’s climate] demands the release of the whole. That sentence is an astounding bit of honesty, a killing admission that completely undercuts the main premise and the main conclusion of the full report, revealing the fundamental dishonesty of the whole.”

Climate sceptics have heralded the sentence – which they interpret as meaning that cosmic rays could have a greater warming influence on the planet than mankind’s emissions – as “game-changing”.

The isolation by climate sceptics of one sentence in the 14-chapter draft report was described as “completely ridiculous” by one of the report’s lead authors. Prof Steve Sherwood, a director of the Climate Change Research Centre at the University of New South Wales, told ABC Radio in Australia: “You could go and read those paragraphs yourself and the summary of it and see that we conclude exactly the opposite, that this cosmic ray effect that the paragraph is discussing appears to be negligible … It’s a pretty severe case of [cherry-picking], because even the sentence doesn’t say what [climate sceptics] say and certainly if you look at the context, we’re really saying the opposite.”

The leaked draft “summary for policymakers” contains a statement that appears to contradict the climate sceptics’ interpretation.

It says: “There is consistent evidence from observations of a net energy uptake of the earth system due to an imbalance in the energy budget. It is virtually certain that this is caused by human activities, primarily by the increase in CO2 concentrations. There is very high confidence that natural forcing contributes only a small fraction to this imbalance.”

By “virtually certain”, the scientists say they mean they are now 99% sure that man’s emissions are responsible. By comparison, in the IPCC’s last report, published in 2007, the scientists said they had a “very high confidence” – 90% sure – humans were principally responsible for causing the planet to warm.

Richard Betts, a climate scientist at the Met Office Hadley Centre and an AR5 lead author, tweeted that the report is still a draft and could well change: “Worth pointing out that the wording in the leaked IPCC WG1 [working group 1, which examines the “physical science basis” of climate change] draft chapters may still change in the final versions, following review comments.”

Bob Ward, policy and communications director at the Grantham Research Institute on Climate Change and the Environment at London School of Economics and Political Science, said that Rawls appeared to have broken the confidentiality agreement signed by reviewers: “As a registered reviewer of the IPCC report, I condemn the decision by a climate change sceptic to violate the confidentiality of the review process. The review of the IPCC report is being carried out in line with the principles of peer review which operate throughout academic science, including an expectation of high standards of ethical behaviour by reviewers. It is disappointing, if not surprising, that climate change sceptics have been unable to meet these high standards of ethical behaviour.”

The IPCC, which publishes a detailed synthesis of the latest climate science every seven years to help guide policy makers, has experienced leaks before. In 2000, the third assessment report was leaked to the New York Times, while the fourth assessment report was published in 2006 by the US government a year ahead of its official publication.

Prof Bill McGuire, Professor of Geophysical & Climate Hazards at University College London and contributing author on the recent IPCC report on climate change and extreme events, said that sceptics’ reading of the draft was incorrect: “Alec Rawls’ interpretation of what IPCC5 says is quite simply wrong. In fact, while temperatures have been ramping up in recent decades, solar activity has been pretty subdued, so any interaction with cosmic rays is clearly having minimal – if any – effects. IPCC AR5 reiterates what we can be absolutely certain of: that contemporary climate change is not a natural process, but the consequence of human activities.”

Prof Piers Forster, Professor of Climate Change at the University of Leeds, said: “Although this may seem like a ‘leak’, the draft IPCC reports are not kept secret and the review process is open. The rationale in not disseminating the findings until the final version is complete, is to try and iron out all the errors and inconsistencies which might be inadvertently included. Personally, I would be happy if the whole IPCC process were even more open and public, and I think we as scientists need to explore how we can best match the development of measured critical arguments with those of the Twitter generation.”


Mistakes found in all radiation projections


The Nuclear Regulation Authority said Thursday a thorough review of its mistake-plagued projections for the spread of radiation turned up errors in the data for every atomic power plant in Japan.

The regulatory body examined the data in detail to ensure there would be no more mistakes in the projections. Local governments are expected to use the information to craft plans to prepare for nuclear disasters.

The NRA said there were significant changes in diagrams for how radiation could spread in the event of crises at Kyushu Electric Power Co.’s Genkai and Sendai power plants and Hokkaido Electric Power Co.’s Tomari nuclear complex, compared with the previously revised projections released Oct. 29.

The three projections had to be revised either because the plant operators supplied erroneous weather information or because the data were incorrectly processed by the Japan Nuclear Energy Safety Organization, which was tasked with creating the projections.

The process of calculating the projections for the remaining 14 plants across the country, including disaster-hit Fukushima No. 1 operated by Tokyo Electric Power Co., also contained errors or was mishandled, although this did not result in drastic changes in the projections, according to the NRA’s secretariat.

The simulation showed the distances at which doses could reach 100 millisieverts a week after a severe crisis like last year’s three meltdowns at Fukushima No. 1. At that dose level, evacuation is recommended by the International Atomic Energy Agency.

The latest projections show the most distant point where such severe radiation could spread is 40.1 km east of Tepco’s Kashiwazaki-Kariwa plant in Niigata Prefecture. That point is in the city of Nagaoka.

In the earlier projections, the NRA said the most distant point would still be in Nagaoka, but 40.2 km from Tepco’s facility, the largest nuclear plant in the world.


Personhood: Casualty of Modern Medicine

Professor Emeritus of Family Medicine at the University of Washington School of Medicine

Posted: 12/06/2012 1:38 pm

Even as we marvel at the latest advances in medical technology in this country, a dire and unacceptable consequence of these changes is already in plain sight — the loss of the patient as person in the process of our fragmented and dysfunctional health care “system.”

There are so many ways in which patients as persons get lost in the increasingly chaotic landscape of today’s health care environment. These are some of the trends that perpetuate and escalate this problem:

  • A multi-payer financing system with some 1,300 private insurance companies that creates perverse incentives for physicians and other providers to deliver an increased volume of inappropriate and unnecessary services to grow their revenues.
  • Lack of price controls throughout the system.
  • A largely for-profit private health insurance industry that profits by covering less care.
  • A system driven by a business ethic more than a service ethic, with health care just another commodity for sale on an open market.
  • A medical-industrial complex that has become solidly entrenched over the last 40 years, with tremendous economic and political power.
  • Reimbursement policies that favor procedures and specialized services over the more time-intensive services typical of primary care, geriatrics and psychiatry.
  • A physician workforce dominated by non-primary care specialists with little knowledge of patients as persons, as members of families and communities.
  • A disconnect between ambulatory care and hospital care, with primary care physicians following their patients into the hospital now a rarity.
  • Hospitalists trying to coordinate hospital care of patients with little knowledge of their prior care and often insufficient communication among multiple specialists.
  • A growing use of electronic medical records, a necessary and welcome change but marred by competing systems that don’t speak to each other and largely omit any information about patients as persons.


As a result of these changes, the present state of health care in this country increasingly involves strangers caring for strangers, with patients’ narratives and life stories no longer a key element guiding decisions about their own health care. This is a serious problem, not yet part of our national conversation, that has led to a growing gap between the care that patients need and deserve and what they receive.

Can the person be restored as the central object of health care in today’s profit-driven system? The challenges are daunting and will require a long-term social, economic, political and cultural shift. In a broad view, we need a paradigm shift similar to that of the Copernican revolution, when the Renaissance astronomer conceived a heliocentric cosmology that displaced the Earth from the center of the universe. Such a shift in U.S. health care would put patients and families at the center of care.

As has already occurred in most advanced countries around the world, we need to turn around how we think about health care in this country in these kinds of ways:

  • Shifting from a system based on ability to pay to one based on medical need.
  • Moving from health care as a commodity to health care as a basic human right and need.
  • Moving from a dysfunctional, fragmented and exploitative private health insurance industry to a single-payer improved Medicare for All coupled with a private delivery system.
  • Moving from political- and lobbyist-driven coverage policies toward those based on scientific evidence of efficacy and cost-effectiveness.
  • Replacing today’s unaccountable system with one that stewards limited health care resources for the benefit of all Americans in a single risk pool (“Everybody in, nobody out”).


Many lines of reform would help to move present dynamics in health care toward a more patient-centered process. Financing and payment reforms would go a long way in that direction. Improved Medicare for All (H.R. 676) would establish universal access to health care for all Americans. It would also facilitate other enabling steps, such as achieving price controls on the supply side and encouraging growth of new approaches to primary care that are more person-centered.

The missing element in today’s depersonalized health care is time — listening and talking time between patients, their physicians and other health care professionals — during which patients can relate their narratives that can then be integrated into plans for their care. We already know that trust between physicians and patients built over years improves medical outcomes and enhances healing.

As system problems of U.S. health care impact more ordinary Americans with diminished access to affordable care, and as growing millions forgo essential care, we are now seeing a renewal of literature in the medical humanities as a counter-trend to depersonalized care that too often does not meet patients’ needs. Health Affairs, as the premier health policy journal, has had a regular feature for years dealing with patient narratives. The American Medical Student Association (AMSA) has established its Humanities Institute, now offering workshops exploring the art of medicine, including sessions on narrative medicine and writing for social justice. A number of books are charting new territory in this direction, including Norman Cousins’ Anatomy of an Illness, Arthur Kleinman’s The Illness Narratives, Howard Brody’s Stories of Sickness, and Rita Charon’s Narrative Medicine: Honoring the Stories of Illness. Copernicus Healthcare is adding to this promising trend with its forthcoming release of The Art of Medicine in Metaphors: A Collection of Poems and Narratives, edited by James Borton. This kind of writing builds on a rich earlier tradition of medical writing in this genre, including the poetry of Dr. William Carlos Williams in the last century.

As Dr. Charon notes in her 2006 book:

I hope that the frame of narrative medicine can gather new combinations of us — from the humanities, from all the health professions, from the lay world, the business world, the political world — and make new relations among us, so as to look with refreshed eyes at what it means to be sick and to help others get well. (1)

This hope gives us a vision toward better, more humane health care. Admittedly, it is a Herculean task to reverse well-entrenched trends in our dysfunctional market-based health care system, and paradigm shifts take a long time. But our present system is not sustainable, reforms are not impossible, and in order to progress, we first need this kind of vision.

(1) Charon, R. Narrative Medicine: Honoring the Stories of Illness, New York. Oxford University Press, 2006, p. xiii.


Illness leads Serb ambassador to NATO to suicide

Fri, 7 Dec 2012 15:05 GMT


BELGRADE, Dec 7 (Reuters) – Serbia’s ambassador to NATO who killed himself earlier this week had recently been diagnosed with a life-threatening illness, a government official said on Friday.

Branislav Milinkovic, 52, jumped to his death from a multi-storey car park at Brussels airport on Tuesday evening. Belgian authorities said they would not investigate the incident as all findings indicated a suicide.

On Friday, a Serbian government official who asked not to be named told Reuters that Milinkovic “was apparently distressed by bad news about his health”.

Milinkovic’s wife Sanja also told a Belgrade newspaper that he had been diagnosed “with a sudden and grave illness” only days before his death.

“Doctors … told him about prolonged treatment and an uncertain outcome,” she was quoted as saying by the tabloid daily Kurir. “Branislav could not bear the fact of living, as he put it, the rest of his life without human dignity.”

He committed suicide during a conference of NATO foreign ministers, shortly after meeting a Serb delegation that had arrived for the event.

NATO Secretary-General Anders Fogh Rasmussen said he was deeply saddened by Milinkovic’s death. The Serbian Foreign Ministry also praised Milinkovic as a distinguished diplomat and announced a memorial service.

Milinkovic was appointed ambassador to NATO in 2009 but had already been based in Brussels since 2004 as an envoy from the now-defunct state union of Serbia and Montenegro. (Reporting by Aleksandar Vasovic; Editing by Stephen Powell)

Could there be a REAL ‘Manchurian Candidate’? TV show to test whether innocent people can be turned into brainwashed assassins

  • Bobby Kennedy's assassin claimed he was hypnotised to carry out killing
  • CIA investigated mind control between Fifties and Seventies as part of covert project known as MKUltra

By Damien Gayle

PUBLISHED: 08:36 EST, 26 October 2012 | UPDATED: 08:59 EST, 26 October 2012

A television programme is out to test whether innocent people can be brainwashed into becoming unwitting assassins, as in the plot of the political thriller The Manchurian Candidate.

In the 1959 novel, a man is brainwashed into becoming an unwitting sleeper assassin as part of a Communist conspiracy to overthrow the U.S. government.

The novel and its film adaptations have intrigued many, and conspiracy theorists have long held that the U.S. government and others have tried to develop techniques to control the minds of individuals.

Programmed to kill: Angela Lansbury and Laurence Harvey in a scene from the 1962 cinema adaptation of The Manchurian Candidate, in which a man is brainwashed into becoming an assassin as part of a communist plot

In one famous case, the assassin who killed presidential candidate Bobby Kennedy, the Christian Palestinian Sirhan Sirhan, later claimed he was hypnotised into carrying out the killing.

Likewise, Patty Hearst, the newspaper heiress kidnapped by the left-wing revolutionary group the Symbionese Liberation Army, claimed the gang brainwashed her into taking part in a bank robbery.

There have been government-sponsored studies into the possibility of mind control.

Between the Fifties and Seventies, the CIA conducted controversial experiments in behavioural engineering that many believed were aimed at brainwashing subjects.


MKUltra was the code name for a covert human research program run by the CIA Office of Scientific Intelligence.

The program began in the early Fifties and continued at least through the late Sixties, using mainly U.S. and Canadian citizens as its test subjects.

The published evidence indicates that Project MKUltra involved many methodologies for manipulating individual mental states and altering brain function, including the surreptitious administration of drugs and other chemicals, sensory deprivation, isolation, and verbal and sexual abuse.

It was first brought to wide public attention in 1975 by Congress, through investigations by the Church Committee, and by a presidential commission known as the Rockefeller Commission.

However, investigative efforts were hampered by the fact that CIA Director Richard Helms ordered all MKUltra files destroyed in 1973.

The Church Committee and Rockefeller Commission investigations therefore relied on the sworn testimony of direct participants and on the relatively small number of documents that survived Helms' destruction order.

Much of the surviving information regarding MKUltra has since been declassified. It was first made available through a 1977 FOIA request that uncovered a cache of some 20,000 documents relating to the project and led to Senate hearings.

In July 2001 further surviving information regarding MKUltra was officially declassified.

Investigative efforts were hampered by the fact that Richard Helms, director of the CIA, ordered many files related to the programme – known as MKUltra – destroyed in 1973, years before any investigation into it began.

It was nevertheless found that the remit of the project was to develop mind-controlling drugs and techniques, with the CIA especially interested in being able to manipulate foreign leaders.

Now a Discovery Channel documentary aims to see if it is indeed possible to persuade unwitting, law-abiding subjects to become cold-blooded killers who will shoot someone to order, Today's Clicker blog reports.

Experimental psychopathologist Cynthia Meyerburg, who oversaw the study, says in a trailer for the documentary: 'Science has only begun to understand how the brain works, and one of the things we don't yet understand is whether it's possible to control someone else's mind.'

The programme will show work with a group of test subjects who gave their consent to take part in a hypnosis study for television.

They were not told that the ultimate objective of the study, as described in its trailer, is 'to take a mentally healthy, law-abiding individual and program them to shoot and kill a complete stranger'.

Not all experts agree that the goal is achievable. Oxford University neuroscientist Matt Stokes, who also contributes to the film, said: 'What we're trying to do here is strip away someone's sense of free will and see if they can carry out extreme acts.

'Can it be done? Well, I'm not so sure that it can.'

Curiosity: Brainwashed will be broadcast on Sunday night at 9pm on the Discovery Channel.


Hacking the President’s DNA: Personalized Bioweapons

The U.S. government is surreptitiously collecting the DNA of world leaders, and is reportedly protecting that of Barack Obama. Decoded, these genetic blueprints could provide compromising information. In the not-too-distant future, they may provide something more as well—the basis for the creation of personalized bioweapons that could take down a president and leave no trace.

By Andrew Hessel, Marc Goodman and Steven Kotler


This is how the future arrived. It began innocuously, in the early 2000s, when businesses started to realize that highly skilled jobs formerly performed in-house, by a single employee, could more efficiently be crowd-sourced to a larger group of people via the Internet. Initially, we crowd-sourced the design of T‑shirts and the writing of encyclopedias, but before long the trend started making inroads into the harder sciences. Pretty soon, the hunt for extraterrestrial life, the development of self-driving cars, and the folding of enzymes into novel proteins were being done this way. With the fundamental tools of genetic manipulation—tools that had cost millions of dollars not 10 years earlier—dropping precipitously in price, the crowd-sourced design of biological agents was just the next logical step.

In 2008, casual DNA-design competitions with small prizes arose; then in 2011, with the launch of GE’s $100 million breast-cancer challenge, the field moved on to serious contests. By early 2015, as personalized gene therapies for end-stage cancer became medicine’s cutting edge, virus-design Web sites began appearing, where people could upload information about their disease and virologists could post designs for a customized cure. Medically speaking, it all made perfect sense: Nature had done eons of excellent design work on viruses. With some retooling, they were ideal vehicles for gene delivery.

Soon enough, these sites were flooded with requests that went far beyond cancer. Diagnostic agents, vaccines, antimicrobials, even designer psychoactive drugs—all appeared on the menu. What people did with these bio-designs was anybody’s guess. No international body had yet been created to watch over them.

So, in November of 2016, when a first-time visitor with the handle Cap’n Capsid posted a challenge on the viral-design site 99Virions, no alarms sounded; his was just one of the 100 or so design requests submitted that day. Cap’n Capsid might have been some consultant to the pharmaceutical industry, and his challenge just another attempt to understand the radically shifting R&D landscape—really, he could have been anyone—but the problem was interesting nonetheless. Plus, Capsid was offering $500 for the winning design, not a bad sum for a few hours’ work.

Later, 99Virions’ log files would show that Cap’n Capsid’s IP address originated in Panama, although this was likely a fake. The design specification itself raised no red flags. Written in SBOL, an open-source language popular with the synthetic-biology crowd, it seemed like a standard vaccine request. So people just got to work, as did the automated computer programs that had been written to “auto-evolve” new designs. These algorithms were getting quite good, now winning nearly a third of the challenges.

Within 12 hours, 243 designs were submitted, most by these computerized expert systems. But this time the winner, GeneGenie27, was actually human—a 20-year-old Columbia University undergrad with a knack for virology. His design was quickly forwarded to a thriving Shanghai-based online bio-marketplace. Less than a minute later, an Icelandic synthesis start‑up won the contract to turn the 5,984-base-pair blueprint into actual genetic material. Three days after that, a package of 10‑milligram, fast-dissolving microtablets was dropped in a FedEx envelope and handed to a courier.

Two days later, Samantha, a sophomore majoring in government at Harvard University, received the package. Thinking it contained a new synthetic psychedelic she had ordered online, she slipped a tablet into her left nostril that evening, then walked over to her closet. By the time Samantha finished dressing, the tab had started to dissolve, and a few strands of foreign genetic material had entered the cells of her nasal mucosa.

Some party drug—all she got, it seemed, was the flu. Later that night, Samantha had a slight fever and was shedding billions of virus particles. These particles would spread around campus in an exponentially growing chain reaction that was—other than the mild fever and some sneezing—absolutely harmless. This would change when the virus crossed paths with cells containing a very specific DNA sequence, a sequence that would act as a molecular key to unlock secondary functions that were not so benign. This secondary sequence would trigger a fast-acting neuro-destructive disease that produced memory loss and, eventually, death. The only person in the world with this DNA sequence was the president of the United States, who was scheduled to speak at Harvard’s Kennedy School of Government later that week. Sure, thousands of people on campus would be sniffling, but the Secret Service probably wouldn’t think anything was amiss.

It was December, after all—cold-and-flu season.

The scenario we’ve just sketched may sound like nothing but science fiction—and, indeed, it does contain a few futuristic leaps. Many members of the scientific community would say our time line is too fast. But consider that since the beginning of this century, rapidly accelerating technology has shown a distinct tendency to turn the impossible into the everyday in no time at all. Last year, IBM’s Watson, an artificial intelligence, understood natural language well enough to whip the human champion Ken Jennings on Jeopardy. As we write this, soldiers with bionic limbs are returning to active duty, and autonomous cars are driving down our streets. Yet most of these advances are small in comparison with the great leap forward currently under way in the biosciences—a leap with consequences we’ve only begun to imagine.


More to the point, consider that the DNA of world leaders is already a subject of intrigue. According to Ronald Kessler, the author of the 2009 book In the President’s Secret Service, Navy stewards gather bedsheets, drinking glasses, and other objects the president has touched—they are later sanitized or destroyed—in an effort to keep would‑be malefactors from obtaining his genetic material. (The Secret Service would neither confirm nor deny this practice, nor would it comment on any other aspect of this article.) And according to a 2010 release of secret cables by WikiLeaks, Secretary of State Hillary Clinton directed our embassies to surreptitiously collect DNA samples from foreign heads of state and senior United Nations officials. Clearly, the U.S. sees strategic advantage in knowing the specific biology of world leaders; it would be surprising if other nations didn’t feel the same.

While no use of an advanced, genetically targeted bio-weapon has been reported, the authors of this piece—including an expert in genetics and microbiology (Andrew Hessel) and one in global security and law enforcement (Marc Goodman)—are convinced we are drawing close to this possibility. Most of the enabling technologies are in place, already serving the needs of academic R&D groups and commercial biotech organizations. And these technologies are becoming exponentially more powerful, particularly those that allow for the easy manipulation of DNA.

The evolution of cancer treatment provides one window into what’s happening. Most cancer drugs kill cells. Today’s chemotherapies are offshoots of chemical-warfare agents: we’ve turned weapons into cancer medicines, albeit crude ones—and as with carpet bombing, collateral damage is a given. But now, thanks to advances in genetics, we know that each cancer is unique, and research is shifting to the development of personalized medicines—designer therapies that can exterminate specific cancerous cells in a specific way, in a specific person; therapies focused like lasers.

To be sure, around the turn of the millennium, significant fanfare surrounded personalized medicine, especially in the field of genetics. A lot of that is now gone. The prevailing wisdom is that the tech has not lived up to the talk, but this isn’t surprising. Gartner, an information-technology research-and-advisory firm, has coined the term hype cycle to describe exactly this sort of phenomenon: a new technology is introduced with enthusiasm, only to be followed by an emotional low when it fails to immediately deliver on its promise. But Gartner also discovered that the cycle doesn’t typically end in what the firm calls “the trough of disillusionment.” Rising from those ashes is a “slope of enlightenment”—meaning that when viewed from a longer-term historical perspective, the majority of these much-hyped groundbreaking developments do, eventually, break plenty of new ground.

As George Church, a geneticist at Harvard, explains, this is what is now happening in personalized medicine. “The fields of gene therapies, viral delivery, and other personalized therapies are progressing rapidly,” Church says, “with several clinical trials succeeding into Phase 2 and 3,” when the therapies are tried on progressively larger numbers of test subjects. “Many of these treatments target cells that differ in only one—rare—genetic variation relative to surrounding cells or individuals.” The Finnish start-up Oncos Therapeutics has already treated close to 300 cancer patients using a scaled-down form of this kind of targeted technology.

These developments are, for the most part, positive—promising better treatment, new cures, and, eventually, longer life. But it wouldn’t take much to subvert such therapies and come full circle, turning personalized medicines into personalized bioweapons. “Right now,” says Jimmy Lin, a genomics researcher at Washington University in St. Louis and the founder of Rare Genomics, a nonprofit organization that designs treatments for rare childhood diseases based on individual genetic analysis, “we have drugs that target specific cancer mutations. Examples include Gleevec, Zelboraf, and Xalkori. Vertex,” a pharmaceutical company based in Massachusetts, “has famously made a drug for cystic-fibrosis patients with a particular mutation. The genetic targeting of individuals is a little farther out. But a state-sponsored program of the Stuxnet variety might be able to accomplish this in a few years. Of course, this work isn’t very well known, so if you tell most people about this, they say that the time frame sounds like science fiction. But when you’re familiar with the research, it’s really feasible that a well-funded group could pull this off.” We would do well to begin planning for that possibility sooner rather than later.

If you really want to understand what’s happening in the biosciences, then you need to understand the rate at which information technology is accelerating. In 1965, Gordon Moore famously realized that the number of integrated-circuit components on a computer chip had been doubling roughly every year since the invention of the integrated circuit in the late 1950s. Moore, who would go on to co-found Intel, predicted that the trend would continue “for at least 10 years.” He was right. The trend did continue for 10 years, and 10 more after that. All told, his observation has remained accurate for five decades, becoming so durable that it’s now known as “Moore’s Law” and used by the semiconductor industry as a guide for future planning.

Moore’s Law originally stated that every 12 months (it is now 24 months), the number of transistors on an integrated circuit will double—an example of a pattern known as “exponential growth.” While linear growth is a slow, sequential proposition (1 becomes 2 becomes 3 becomes 4, etc.), exponential growth is an explosive doubling (1 becomes 2 becomes 4 becomes 8, etc.) with a transformational effect. In the 1970s, the most powerful supercomputer in the world was a Cray. It required a small room to hold it and cost roughly $8 million. Today, the iPhone in your pocket is more than 100 times faster and more than 12,000 times cheaper than a Cray. This is exponential growth at work.
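The gap between the two growth regimes described above can be made concrete with a toy calculation (the 24-month doubling period comes from the text; the code itself is ours, purely illustrative):

```python
# Linear growth adds a fixed amount per step; exponential growth
# multiplies by a fixed factor (here, doubling) per step.
linear = [1 + n for n in range(10)]        # 1, 2, 3, ..., 10
exponential = [2 ** n for n in range(10)]  # 1, 2, 4, ..., 512

def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    """Total growth under Moore's-Law-style doubling every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# After ten steps, linear growth reaches 10; exponential growth reaches 512.
print(linear[-1], exponential[-1])

# At a 24-month doubling period, four decades of progress multiply
# capacity by 2**20 -- roughly a million-fold.
print(round(growth_factor(40)))  # 1048576
```

Compounding is the whole story here: the same doubling rule that seems modest over two years yields a million-fold gain over forty.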

In the years since Moore’s observation, scientists have discovered that the pattern of exponential growth occurs in many other industries and technologies. The amount of Internet data traffic in a year, the number of bytes of computer data storage available per dollar, the number of digital-camera pixels per dollar, and the amount of data transferable over optical fiber are among the dozens of measures of technological progress that follow this pattern. In fact, so prevalent is exponential growth that researchers now suspect it is found in all information-based technology—that is, any technology used to input, store, process, retrieve, or transmit digital information.

Over the past few decades, scientists have also come to see that the four letters of the genetic alphabet—A (adenine), C (cytosine), G (guanine), and T (thymine)—can be transformed into the ones and zeroes of binary code, allowing for the easy, electronic manipulation of genetic information. With this development, biology has turned a corner, morphing into an information-based science and advancing exponentially. As a result, the fundamental tools of genetic engineering, tools designed for the manipulation of life—tools that could easily be co-opted for destructive purposes—are now radically falling in cost and rising in power. Today, anyone with a knack for science, a decent Internet connection, and enough cash to buy a used car has what it takes to try his hand at bio-hacking.
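That letter-to-bits correspondence is easy to see in code: with a four-letter alphabet, each base fits in exactly two bits. A minimal sketch (the particular bit assignment is an arbitrary convention we chose for illustration):

```python
# Map each DNA base to two bits; four letters fit exactly in 2 bits each.
ENCODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
DECODE = {v: k for k, v in ENCODE.items()}

def dna_to_bits(seq: str) -> str:
    """Render a DNA sequence as a binary string, 2 bits per base."""
    return "".join(format(ENCODE[base], "02b") for base in seq.upper())

def bits_to_dna(bits: str) -> str:
    """Invert dna_to_bits: read the bit string two bits at a time."""
    return "".join(DECODE[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

bits = dna_to_bits("GATTACA")
print(bits)  # 10001111000100
assert bits_to_dna(bits) == "GATTACA"
```

Once sequences are just bit strings, the standard machinery of information technology — storage, search, copy-paste editing, network transfer — applies to genetic data directly, which is the turn the paragraph above describes.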

These developments greatly increase several dangers. The most nightmarish involve bad actors creating weapons of mass destruction, or careless scientists unleashing accidental plagues—very real concerns that urgently need more attention. Personalized bioweapons, the focus of this story, are a subtler and less catastrophic threat, and perhaps for that reason, society has barely begun to consider them. Yet once available, they will, we believe, be put into use much more readily than bioweapons of mass destruction. For starters, while most criminals might think twice about mass slaughter, murder is downright commonplace. In the future, politicians, celebrities, leaders of industry—just about anyone, really—could be vulnerable to attack-by-disease. Even if fatal, many such attacks could go undetected, mistaken for death by natural causes; many others would be difficult to pin on a suspect, especially given the passage of time between exposure and the appearance of symptoms.

Moreover—as we’ll explore in greater detail—these same scientific developments will pave the way, eventually, for an entirely new kind of personal warfare. Imagine inducing extreme paranoia in the CEO of a large corporation so as to gain a business advantage, for example; or—further out in the future—infecting shoppers with the urge to impulse-buy.

We have chosen to focus this investigation mostly on the president’s bio-security, because the president’s personal welfare is paramount to national security—and because a discussion of the challenges faced by those charged with his protection will illuminate just how difficult (and different) “security” will be, as biotechnology continues to advance.

A direct assault against the president’s genome requires first being able to decode genomes. Until recently, this was no simple matter. In 1990, when the U.S. Department of Energy and the National Institutes of Health announced their intention to sequence the 3 billion base pairs of the human genome over the next 15 years, it was considered the most ambitious life-sciences project ever undertaken. Despite a budget of $3 billion, progress did not come quickly. Even after years of hard work, many experts doubted that the time and money budgeted would be enough to complete the job.

This started to change in 1998, when the entrepreneurial biologist J. Craig Venter and his company, Celera, got into the race. Taking advantage of the exponential growth in biotechnology, Venter relied on a new generation of gene sequencers and a novel, computer-intensive approach called shotgun sequencing to deliver a draft human genome (his own) in less than two years, for $300 million.

Venter’s achievement was stunning; it was also just the beginning. By 2007, just seven years later, a human genome could be sequenced for less than $1 million. In 2008, some labs would do it for $60,000, and in 2009, $5,000. This year, the $1,000 barrier looks likely to fall. At the current rate of decline, within five years, the cost will be less than $100. In the history of the world, perhaps no other technology has dropped in price and increased in performance so dramatically.
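A quick back-of-the-envelope check of that price curve shows just how steep the decline is; assuming smooth exponential decline between the figures quoted above (the interpolation method is ours, not the authors'):

```python
import math

# Sequencing-cost figures quoted in the text: year -> approximate dollars.
costs = {2000: 300e6, 2007: 1e6, 2008: 60e3, 2009: 5e3, 2012: 1e3}

def halving_time_years(y0: int, y1: int) -> float:
    """Years per cost halving, assuming exponential decline between two data points."""
    n_halvings = math.log2(costs[y0] / costs[y1])
    return (y1 - y0) / n_halvings

# From $300 million in 2000 to ~$1,000 in 2012, the cost halved
# roughly every eight months.
rate = halving_time_years(2000, 2012)
print(round(rate * 12, 1))  # ~7.9 months

# Extrapolating five more years at that rate lands far below $100.
print(round(1e3 * 0.5 ** (5 / rate), 2))
```

For comparison, a halving time of eight months is roughly three times faster than the 24-month doubling period of Moore's Law, which is the sense in which no other technology has fallen in price so dramatically.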

Still, it would take more than just a gene sequencer to build a personally targeted bioweapon. To begin with, prospective attackers would have to collect and grow live cells from the target (more on this later), so cell-culturing tools would be a necessity. Next, a molecular profile of the cells would need to be generated, involving gene sequencers, micro-array scanners, mass spectrometers, and more. Once a detailed genetic blueprint had been built, the attacker could begin to design, build, and test a pathogen, which starts with genetic databases and software and ends with virus and cell-culture work. Gathering the equipment required to do all of this isn’t trivial, and yet, as researchers have upgraded to new tools, as large companies have merged and consolidated operations, and as smaller shops have run out of money and failed, plenty of used lab equipment has been dumped onto the resale market. New, the requisite gear would cost well over $1 million. On eBay, it can be had for as little as $10,000. Strip out the analysis equipment—since those processes can now be outsourced—and a basic cell-culture rig can be cobbled together for less than $1,000. Chemicals and lab supplies have never been easier to buy; hundreds of Web resellers take credit cards and ship almost anywhere.

Biological knowledge, too, is becoming increasingly democratized. Web sites like JoVE (Journal of Visualized Experiments) provide thousands of how-to videos on the techniques of bioscience. MIT offers online courses. Many journals are going open-access, making the latest research, complete with detailed sections on materials and methods, freely available. If you wanted a more hands-on approach to learning, you could just immerse yourself in any of the dozens of do-it-yourself-biology organizations, such as Genspace and BioCurious, that have lately sprung up to make genetic engineering into something of a hobbyist’s pursuit. Bill Gates, in a recent interview, told a reporter that if he were a kid today, forget about hacking computers: he’d be hacking biology. And for those with neither the lab nor the learning, dozens of Contract Research and Manufacturing Services (known as CRAMS) are willing to do much of the serious science for a fee.

From the invention of genetic engineering in 1972 until very recently, the high cost of equipment, and the high cost of education to use that equipment effectively, kept most people with ill intentions away from these technologies. Those barriers to entry are now almost gone. “Unfortunately,” Secretary Clinton said in a December 7, 2011, speech to the Biological and Toxin Weapons Convention Review Conference, “the ability of terrorists and other non-state actors to develop and use these weapons is growing. And therefore, this must be a renewed focus of our efforts … because there are warning signs, and they are too serious to ignore.”

The radical expansion of biology’s frontier raises an uncomfortable question: How do you guard against threats that don’t yet exist? Genetic engineering sits at the edge of a new era. The old era belonged to DNA sequencing, which is simply the act of reading genetic code—identifying and extracting meaning from the ordering of the four chemicals that make up DNA. But now we’re learning how to write DNA, and this creates possibilities both grand and terrifying.

Again, Craig Venter helped to usher in this shift. In the mid‑1990s, just before he began his work to read the human genome, he began wondering what it would take to write one. He wanted to know what the minimal genome required for life looked like. It was a good question. Back then, DNA-synthesis technology was too crude and expensive for anyone to consider writing a minimal genome for life or, more to our point, constructing a sophisticated bioweapon. And gene-splicing techniques, which involve the tricky work of using enzymes to cut up existing DNA from one or more organisms and stitch it back together, were too unwieldy for the task.

Exponential advances in biotechnology have greatly diminished these problems. The latest technology—known as synthetic biology, or “synbio”—moves the work from the molecular to the digital. Genetic code is manipulated using the equivalent of a word processor. With the press of a button, code representing DNA can be cut and pasted, effortlessly imported from one species into another. It can be reused and repurposed. DNA bases can be swapped in and out with precision. And once the code looks right? Simply hit Send. A dozen different DNA print shops can now turn these bits into biology.

In May 2010, with the help of these new tools, Venter answered his own question by creating the world’s first synthetic self-replicating chromosome. To pull this off, he used a computer to design a novel bacterial genome (of more than 1 million base pairs in total). Once the design was complete, the code was e‑mailed to Blue Heron Biotechnology, a Seattle-area company that specializes in synthesizing DNA from digital blueprints. Blue Heron took Venter’s A’s, T’s, C’s, and G’s and returned multiple vials filled with frozen plasmid DNA. Just as one might load an operating system into a computer, Venter then inserted the synthetic DNA into a host bacterial cell that had been emptied of its own DNA. The cell soon began generating proteins, or, to use the computer term popular with today’s biologists, it “booted up”: it started to metabolize, grow, and, most important, divide, based entirely on the code of the injected DNA. One cell became two, two became four, four became eight. And each new cell carried only Venter’s synthetic instructions. For all practical purposes, it was an altogether new life form, created virtually from scratch. Venter called it “the first self-replicating species that we’ve had on the planet whose parent is a computer.”

But Venter merely grazed the surface. Plummeting costs and increasing technical simplicity are allowing synthetic biologists to tinker with life in ways never before feasible. In 2006, for example, Jay D. Keasling, a biochemical engineer at the University of California at Berkeley, stitched together 10 synthetic genes made from the genetic blueprints of three different organisms to create a novel yeast that can manufacture the precursor to the antimalarial drug artemisinin, artemisinic acid, natural supplies of which fluctuate greatly. Meanwhile, Venter’s company Synthetic Genomics is working in partnership with ExxonMobil on a designer algae that consumes carbon dioxide and excretes biofuel; his spin-off company Synthetic Genomics Vaccines is trying to develop flu-fighting vaccines that can be made in hours or days instead of the six-plus months now required. Solazyme, a synbio company based in San Francisco, is making biodiesel with engineered micro-algae. Material scientists are also getting in on the action: DuPont and Tate & Lyle, for instance, have jointly designed a highly efficient and environmentally friendly organism that ingests corn sugar and excretes propanediol, a substance used in a wide range of consumer goods, from cosmetics to cleaning products.


Other synthetic biologists are playing with more-fundamental cellular mechanisms. The Florida-based Foundation for Applied Molecular Evolution has added two bases (Z and P) to DNA’s traditional four, augmenting the old genetic alphabet. At Harvard, George Church has supercharged evolution with his Multiplex Automated Genome Engineering process, which randomly swaps multiple genes at once. Instead of creating novel genomes one at a time, MAGE creates billions of variants in a matter of days.

Finally, because synbio makes DNA design, synthesis, and assembly easier, we’re already moving from the tweaking of existing genetic designs to the construction of new organisms—species that have never before been seen on Earth, species birthed entirely by our imagination. Since we can control the environments these organisms will live in—adjusting things like temperature, pressure, and food sources while eliminating competitors and other stresses—we could soon be generating creatures capable of feats impossible in the “natural” world. Imagine organisms that can thrive on the surface of Mars, or enzymes able to change simple carbon into diamonds or nanotubes. The ultimate limits to synthetic biology are hard to discern.

All of this means that our interactions with biology, already complicated, are about to get a lot more troublesome. Mixing together code from multiple species or creating novel organisms could have unintended consequences. And even in labs with high safety standards, accidents happen. If those accidents involve a containment breach, what is today a harmless laboratory bacterium could tomorrow become an ecological catastrophe. A 2010 synbio report by the Presidential Commission for the Study of Bioethical Issues said as much: “Unmanaged release could, in theory, lead to undesired cross-breeding with other organisms, uncontrolled proliferation, crowding out of existing species, and threats to biodiversity.”

Just as worrisome as bio-error is the threat of bioterror. Although the bacterium Venter created is essentially harmless to humans, the same techniques could be used to construct a known pathogenic virus or bacterium or, worse, to engineer a much deadlier version of one. Viruses are particularly easy to synthetically engineer, a fact made apparent in 2002, when Eckard Wimmer, a Stony Brook University virologist, chemically synthesized the polio genome using mail-order DNA. At the time, the 7,500-nucleotide synthesis cost about $300,000 and took several years to complete. Today, a similar synthesis would take just weeks and cost a few thousand dollars. By 2020, if trends continue, it will take a few minutes and cost roughly $3. Governments the world over have spent billions trying to eradicate polio; imagine the damage terrorists could do with a $3 pathogen.

During the 1990s, the Japanese cult Aum Shinrikyo, infamous for its deadly 1995 sarin-gas attack on the Tokyo subway system, maintained an active and extremely well-funded bioweapons program, which included anthrax in its arsenal. When police officers eventually raided its facilities, they found proof of a years-long research effort costing an estimated $30 million—demonstrating, among other things, that terrorists clearly see value in pursuing bioweaponry. Although Aum did manage to cause considerable harm, it failed in its attempts to unleash a bioweapon of mass destruction. In a 2001 article for Studies in Conflict & Terrorism, William Rosenau, a terrorism expert then at the Rand Corporation, explained:

Aum’s failure suggests that it may, in fact, be far more difficult to carry out a deadly bioterrorism attack than has sometimes been portrayed by government officials and the press. Despite its significant financial resources, dedicated personnel, motivation, and freedom from the scrutiny of the Japanese authorities, Aum was unable to achieve its objectives.

That was then; this is now. Today, two trends are changing the game. The first began in 2004, when the International Genetically Engineered Machine (iGEM) competition was launched at MIT. In this competition, teams of high-school and college students build simple biological systems from standardized, interchangeable parts. These standardized parts, now known as BioBricks, are chunks of DNA code, with clearly defined structures and functions, that can be easily linked together in new combinations, a little like a set of genetic Lego bricks. iGEM collects these designs in the Registry of Standard Biological Parts, an open-source database of downloadable BioBricks accessible to anyone.



Over the years, iGEM teams have pushed not only technical barriers but creative ones as well. By 2008, students were designing organisms with real-world applications; the contest that year was won by a team from Slovenia for its designer vaccine against Helicobacter pylori, the bacterium responsible for most ulcers. The 2011 grand-prize winner, a team from the University of Washington, completed three separate projects, each one rivaling the outputs of world-class academics and the biopharmaceutical industry. Teams have turned bacterial cells into everything from photographic film to hemoglobin-producing blood substitutes to miniature hard drives, complete with data encryption.

As the sophistication of iGEM research has risen, so has the level of participation. In 2004, five teams submitted 50 potential BioBricks to the registry. Two years later, 32 teams submitted 724 parts. By 2010, iGEM had mushroomed to 130 teams submitting 1,863 parts—and the registry database was more than 5,000 components strong. As The New York Times pointed out:

iGEM has been grooming an entire generation of the world’s brightest scientific minds to embrace synthetic biology’s vision—without anyone really noticing, before the public debates and regulations that typically place checks on such risky and ethically controversial new technologies have even started.

(iGEM itself does require students to be mindful of any ethical or safety issues, and encourages public discourse on these questions.)
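The participation figures quoted above compound quickly. A minimal sketch of the implied growth rates, using only the numbers given in the text:

```python
# iGEM participation figures as quoted in the text: year -> (teams, parts).
submissions = {2004: (5, 50), 2006: (32, 724), 2010: (130, 1863)}

def cagr(v0, v1, years):
    """Compound annual growth rate between two values."""
    return (v1 / v0) ** (1 / years) - 1

teams_growth = cagr(5, 130, 6)    # teams, 2004 -> 2010
parts_growth = cagr(50, 1863, 6)  # submitted parts, 2004 -> 2010

print(f"Teams grew ~{teams_growth:.0%} per year; parts ~{parts_growth:.0%} per year")
```

Growth north of 70 percent per year for six years running is the kind of curve that outpaces any regulatory process, which is precisely the Times's point.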

The second trend to consider is the progress that terrorist and criminal organizations have made with just about every other information technology. Since the birth of the digital revolution, some early adopters have turned out to be rogue actors. Phone phreakers like John Draper (a.k.a. “Captain Crunch”) discovered back in the 1970s that AT&T’s telephone network could be fooled into allowing free calls with the help of a plastic whistle given away in cereal boxes (thus Draper’s moniker). In the 1980s, early desktop computers were subverted by a sophisticated array of computer viruses for malicious fun—then, in the 1990s, for information theft and financial gain. The 2000s saw purportedly uncrackable credit-card cryptographic algorithms reverse-engineered and smartphones repeatedly infected with malware. On a larger scale, denial-of-service attacks have grown increasingly destructive, crippling everything from individual Web sites to massive financial networks. In 2000, “Mafiaboy,” a Canadian high-school student acting alone, managed to freeze or slow down the Web sites of Yahoo, eBay, CNN, Amazon, and Dell.

In 2007, Russian hackers swamped Estonian Web sites, disrupting financial institutions, broadcasting networks, government ministries, and the Estonian parliament. A year later, just before the Russian invasion, the nation of Georgia saw a massive cyberattack paralyze its banking system and disrupt cellphone networks. Iraqi insurgents subsequently repurposed SkyGrabber—cheap Russian software frequently used to steal satellite television—to intercept the video feeds of U.S. Predator drones in order to monitor and evade American military operations.

Lately, organized crime has taken up crowd-sourcing parts of its illegal operations—printing up fake credit cards, money laundering—to people or groups with specialized skills. (In Japan, the yakuza has even begun to outsource murder, to Chinese gangs.) Given the anonymous nature of the online crowd, it is all but impossible for law enforcement to track these efforts.

The historical trend is clear: Whenever novel technologies enter the market, illegitimate uses quickly follow legitimate ones. A black market soon appears. Thus, just as criminals and terrorists have exploited many other forms of technology, they will surely soon turn to synthetic biology, the latest digital frontier.

In 2005, as part of its preparation for this threat, the FBI hired Edward You, a cancer researcher at Amgen and formerly a gene therapist at the University of Southern California’s Keck School of Medicine. You, now a supervisory special agent in the Weapons of Mass Destruction Directorate within the FBI’s Biological Countermeasures Unit, knew that biotechnology had been expanding too quickly for the bureau to keep pace, so he decided the only way to stay ahead of the curve was to develop partnerships with those at the leading edge. “When I got involved,” You says, “it was pretty clear the FBI wasn’t about to start playing Big Brother to the life sciences. It’s not our mandate, and it’s not possible. All the expertise lies in the scientific community. Our job has to be outreach education. We need to create a culture of security in the synbio community, of responsible science, so the researchers themselves understand that they are the guardians of the future.”

Toward that end, the FBI started hosting free bio-security conferences, stationed WMD outreach coordinators in 56 field offices to network with the synbio community (among other responsibilities), and became an iGEM partner. In 2006, after reporters at The Guardian successfully mail-ordered a crippled fragment of the genome for the smallpox virus, suppliers of genetic materials decided to develop self-policing guidelines. According to You, the FBI sees the organic emergence of these guidelines as proof that its community-based policing approach is working. However, we are not so sure these new rules do much besides guarantee that a pathogen isn’t sent to a P.O. box.

In any case, much more is necessary. An October 2011 report by the WMD Center, a nonprofit organization led by former Senators Bob Graham (a Democrat) and Jim Talent (a Republican), said a terrorist-sponsored WMD strike somewhere in the world was probable by the end of 2013—and that the weapon would most likely be biological. The report specifically highlighted the dangers of synthetic biology:

As DNA synthesis technology continues to advance at a rapid pace, it will soon become feasible to synthesize nearly any virus whose DNA sequence has been decoded … as well as artificial microbes that do not exist in nature. This growing ability to engineer life at the molecular level carries with it the risk of facilitating the development of new and more deadly biological weapons.

Malevolent non-state actors are not the only danger to consider. Forty nations now host synbio research, China among them. The Beijing Genomics Institute, founded in 1999, is the largest genomic-research organization in the world, sequencing the equivalent of roughly 700,000 human genomes a year. (In a recent Science article, BGI claimed to have more sequencing capacity than all U.S. labs combined.) Last year, during a German E. coli outbreak, when concerns were raised that the disease was a new, particularly deadly strain, BGI sequenced the culprit in just three days. To put that in perspective, SARS—the deadly pneumonia variant that panicked the world in 2003—was sequenced in 31 days. And BGI appears poised to move beyond DNA sequencing and become one of the foremost DNA synthesizers as well.

BGI hires thousands of bright young researchers each year. The training is great, but the wages are reportedly low. This means that many of its talented synthetic biologists may well be searching for better pay and greener pastures each year, too. Some of those jobs will undoubtedly appear in countries not yet on the synbio radar. Iran, North Korea, and Pakistan will almost certainly be hiring.

In the run-up to Barack Obama’s inauguration, threats against the incoming president rose markedly. Each of those threats had to be thoroughly investigated. In his book on the Secret Service, Ronald Kessler writes that in January 2009, for example, when intelligence emerged that the Somalia-based Islamist group al‑Shabaab might try to disrupt Obama’s inauguration, the Secret Service’s mandate for that day became even harder. In total, Kessler reports, the Service coordinated some 40,000 agents and officers from 94 police, military, and security agencies. Bomb-sniffing dogs were deployed throughout the area, and counter-sniper teams were stationed along the parade route. This is a considerable response capability, but in the future, it won’t be enough. A complete defense against the weapons that synbio could make possible has yet to be invented.

The range of threats that the Secret Service has to guard against already extends far beyond firearms and explosive devices. Both chemical and radiological attacks have been launched against government officials in recent years. In 2004, the poisoning of the Ukrainian presidential candidate Viktor Yushchenko involved TCDD, an extremely toxic dioxin compound. Yushchenko survived, but was severely scarred by chemically induced lesions. In 2006, Alexander Litvinenko, a former officer of the Russian security service, was poisoned to death with the radioisotope polonium 210. And the use of bioweapons themselves is hardly unknown; the 2001 anthrax attacks in the United States nearly reached members of the Senate.

The Kremlin, of course, has been suspected of poisoning its enemies for decades, and anthrax has been around for a while. But genetic technologies open the door for a new threat, in which a head of state’s own DNA could be used against him or her. This is particularly difficult to defend against. No amount of Secret Service vigilance can ever fully secure the president’s DNA, because an entire genetic blueprint can now be produced from the information within just a single cell. Each of us sheds millions and millions of cells every day. These can be collected from any number of sources—a used tissue, a drinking glass, a toothbrush. Every time President Obama shakes hands with a constituent, Cabinet member, or foreign leader, he’s leaving an exploitable genetic trail. Whenever he gives away a pen at a bill-signing ceremony, he gives away a few cells too. These cells are dead, but the DNA is intact, allowing for the revelation of potentially compromising details of the president’s biology.

To build a bioweapon, living cells would be the true target (although dead cells may suffice as soon as a decade from now). These are more difficult to recover. A strand of hair, for example, is dead, but if that hair contains a follicle, it also contains living cells. A sample gathered from fresh blood or saliva, or even a sneeze, caught in a discarded tissue, could suffice. Once recovered, these living cells can be cultured, providing a continuous supply of research material.

Even if Secret Service agents were able to sweep up all the shed cells from the president’s current environs, they couldn’t stop the recovery of DNA from the president’s past. DNA is a very stable molecule, and can last for millennia. Genetic material remains present on old clothes, high-school papers—any of the myriad objects handled and discarded long before the announcement of a presidential candidacy. How much attention was dedicated to protecting Barack Obama’s DNA when he was a senator? A community organizer in Chicago? A student at Harvard Law? A kindergartner? And even if presidential DNA were somehow fully locked down, a good approximation of the code could be made from cells of the president’s children, parents, or siblings, living or not.

Presidential DNA could be used in a variety of politically sensitive ways, perhaps to fabricate evidence of an affair, fuel speculation about birthplace and heritage, or identify genetic markers for diseases that could cast doubt on leadership ability and mental acuity. How much would it take to unseat a president? The first signs of Ronald Reagan’s Alzheimer’s may have emerged during his second term. Some doctors today feel the disease was then either latent or too mild to affect his ability to govern. But if information about his condition had been genetically confirmed and made public, would the American people have demanded his resignation? Could Congress have been forced to impeach him?

For the Secret Service, these new vulnerabilities conjure attack scenarios worthy of a Hollywood thriller. Advances in stem-cell research make any living cell transformable into many other cell types, including neurons or heart cells or even in vitro–derived (IVD) “sperm.” Any live cells recovered from a dirty glass or a crumpled napkin could, in theory, be used to manufacture synthetic sperm cells. And so, out of the blue, a president could be confronted by a “former lover” coming forward with DNA evidence of a sexual encounter, like a semen stain on a dress. Sophisticated testing could distinguish an IVD fake sperm from the real thing—they would not be identical—but the results might never be convincing to the lay public. IVD sperm may also someday prove capable of fertilizing eggs, allowing for “love children” to be born using standard in vitro fertilization.


As mentioned, even modern cancer therapies could be harnessed for malicious ends. Personalized therapies designed to attack a specific patient’s cancer cells are already moving into clinical trials. Synthetic biology is poised to expand and accelerate this process by making individualized viral therapies inexpensive. Such “magic bullets” can target cancer cells with precision. But what if these bullets were trained to attack healthy cells instead? Trained against retinal cells, they would produce blindness. Against the hippocampus, a memory wipe may result. And the liver? Death would follow in months.

The delivery of this sort of biological agent would be very difficult to detect. Viruses are tasteless and odorless and easily aerosolized. They could be hidden in a perfume bottle; a quick dab on the attacker’s wrist in the general proximity of the target is all an assassination attempt would require. If the pathogen were designed to zero in specifically on the president’s DNA, then nobody else would even fall ill. No one would suspect an attack until long after the infection.

Pernicious agents could be crafted to do their damage months or even years after exposure, depending on the goals of the designer. Several viruses are already known to spark cancers. New ones could eventually be designed to infect the brain with, for instance, synthetic schizophrenia, bipolar disorder, or Alzheimer’s. Stranger possibilities exist as well. A disease engineered to amplify the production of cortisol and dopamine could induce extreme paranoia, turning, say, a peace-seeking dove into a warmongering hawk. Or a virus that boosts the production of oxytocin, the chemical likely responsible for feelings of trust, could play hell with a leader’s negotiating abilities. Some of these ideas aren’t new. As far back as 1994, the U.S. Air Force’s Wright Laboratory theorized about chemical-based pheromone bombs.

Of course, heads of state would not be the only ones vulnerable to synbio threats. Al‑Qaeda flew planes into buildings to cripple Wall Street, but imagine the damage an attack targeting the CEOs of a number of Fortune 500 companies could do to the world economy. Forget kidnapping rich foreign nationals for ransom; kidnapping their DNA might one day be enough. Celebrities will face a new kind of stalker. As home-brew biology matures, these technologies could end up being used to “settle” all sorts of disputes, even those of the domestic variety. Without question, we are near the dawn of a brave new world.

How might we protect the president in the years ahead, as biotech continues to advance? Despite the acceleration of readily exploitable biotechnology, the Secret Service is not powerless. Steps can be taken to limit risks. The agency would not reveal what defenses are already in place, but establishing a crack scientific task force within the agency to monitor, forecast, and evaluate new biotechnological risks would be an obvious place to start. Deploying sensing technologies is another possibility. Already, bio-detectors have been built that can sense known pathogens in less than three minutes. These can get better—a lot better—but even so, they might be limited in their effectiveness. Because synbio opens the door to new, finely targeted pathogens, we’d need to detect that which we’ve never seen before. In this, however, the Secret Service has a big advantage over the Centers for Disease Control and Prevention or the World Health Organization: its principal responsibility is the protection of one specific person. Bio-sensing technologies could be developed around the president’s actual genome. We could use his living cells to build an early-warning system with molecular accuracy.

Cultures of live cells taken from the president could also be kept at the ready—the biological equivalent to data backups. The Secret Service reportedly already carries several pints of blood of the president’s type in his motorcade, in case an emergency transfusion becomes necessary. These biological backup systems could be expanded to include “clean DNA”—essentially, verified stem-cell libraries that would allow bone-marrow transplantation or the enhancement of antiviral or antimicrobial capabilities. As so-called tissue-printing technologies improve, the president’s cells could even be turned, one day, into ready-made standby replacement organs.

Yet even if the Secret Service were to implement some or all of these measures, there is no guarantee that the presidential genome could be completely protected. Anyone truly determined to get the president’s DNA would probably succeed, no matter the defenses. And the Secret Service might have to accept that it can’t fully counter all bio-threats, any more than it can guarantee that the president will never catch a cold.

In the hope of mounting the best defense against an attack, one possible solution—not without its drawbacks—is radical transparency: release the president’s DNA and other relevant biological data, either to a select group of security-cleared bioscience researchers or (the far more controversial step) to the public at large. These ideas may seem counterintuitive, but we have come to believe that open-sourcing this problem—and actively engaging the American public in the challenge of protecting its leader—might turn out to be the best defense.

One practical reason is cost. Any in-house protection effort would be exceptionally pricey. Certainly, considering what’s at stake, the country would bear the expense, but is that the best solution? After all, over the past five years, DIY Drones, a nonprofit online community of autonomous aircraft hobbyists (working for free, in their spare time), produced a $300 unmanned aerial vehicle with 90 percent of the functionality of the military’s $35,000 Raven. This kind of price reduction is typical of open-sourced projects.

Moreover, conducting bio-security in-house means attracting and retaining a very high level of talent. This puts the Secret Service in competition with industry—a fiscally untenable position—and with academia, which offers researchers the freedom to tackle a wider range of interesting problems. But by tapping the collective intelligence of the life-sciences community, the agency would enlist the help of the group best prepared to address this problem, at no cost.

Open-sourcing the president’s genetic information to a select group of security-cleared researchers would bring other benefits as well. It would allow the life sciences to follow in the footsteps of the computer sciences, where “red-team exercises,” or “penetration testing,” are extremely common practices. In these exercises, the red team—usually a group of faux-black-hat hackers—attempts to find weaknesses in an organization’s defenses (the blue team). A similar testing environment could be developed for biological war games.

One of the reasons this kind of practice has been so widely instituted in the computer world is that the speed of development far exceeds the ability of any individual security expert, working alone, to keep pace. Because the life sciences are now advancing faster than computing, little short of an internal Manhattan Project–style effort could put the Secret Service ahead of this curve. The FBI has far greater resources at its disposal than the Secret Service; almost 36,000 people work there, for instance, compared with fewer than 7,000 at the Secret Service. Yet Edward You and the FBI reviewed this same problem and concluded that the only way the bureau could keep up with biological threats was by involving the whole of the life-sciences community.

So why go further? Why take the radical step of releasing the president’s genome to the world instead of just to researchers with security clearances? For one thing, as the U.S. State Department’s DNA-gathering mandate makes clear, the surreptitious collection of world leaders’ genetic material has already begun. It would not be surprising if the president’s DNA has already been collected and analyzed by America’s adversaries. Nor is it unthinkable, given our increasingly nasty party politics, that the president’s domestic political opponents are in possession of his DNA. In the November 2008 issue of The New England Journal of Medicine, Robert C. Green and George J. Annas warned of this possibility, writing that by the 2012 election, “advances in genomics will make it more likely that DNA will be collected and analyzed to assess genetic risk information that could be used for or, more likely, against presidential candidates.” It’s also not hard to imagine the rise of a biological analog to the computer-hacking group Anonymous, intent on providing a transparent picture of world leaders’ genomes and medical histories. Sooner or later, even without open-sourcing, a president’s genome will end up in the public eye.

So the question becomes: Is it more dangerous to play defense and hope for the best, or to go on offense and prepare for the worst? Neither choice is terrific, but even beyond the important issues of cost and talent attraction, open-sourcing—as Claire Fraser, the director of the Institute for Genome Sciences at the University of Maryland School of Medicine, points out—“would level the playing field, removing the need for intelligence agencies to plan for every possible worst-case scenario.”

It would also let the White House preempt the media storm that would occur if someone else leaked the president’s genome. In addition, constant scrutiny of the president’s genome would allow us to establish a baseline and track genetic changes over time, producing an exceptional level of early detection of cancers and other metabolic diseases. And if such diseases were found, an open-sourced genome could likewise accelerate the development of personalized therapies.

The largest factor to consider is time. In 2008, some 14,000 people were working in U.S. labs with access to seriously pathogenic materials; we don’t know how many tens of thousands more are doing the same overseas. Outside those labs, the tools and techniques of genetic engineering are accessible to many other people. Back in 2003, a panel of life-sciences experts, convened by the National Academy of Sciences for the CIA’s Strategic Assessments Group, noted that because the processes and techniques needed for the development of advanced bio agents can be used for good or for ill, distinguishing legitimate research from research for the production of bioweapons will soon be extremely difficult. As a result, “most panelists argued that a qualitatively different relationship between the government and life sciences communities might be needed to most effectively grapple with the future BW threat.”

In our view, it’s no longer a question of “might be.” Advances in biotechnology are radically changing the scientific landscape. We are entering a world where imagination is the only brake on biology, where dedicated individuals can create new life from scratch. Today, when a difficult problem is mentioned, a commonly heard refrain is There’s an app for that. Sooner than you might believe, an app will be replaced by an organism when we think about the solutions to many problems. In light of this coming synbio revolution, a wider-ranging relationship between scientists and security organizations—one defined by open exchange, continual collaboration, and crowd-sourced defenses—may prove the only way to protect the president. And, in the process, the rest of us.


Andrew Hessel is a faculty member and a former co-chair of bioinformatics and biotechnology at Singularity University, and a fellow at the Institute for Science, Society, and Policy at the University of Ottawa. Marc Goodman investigates the impact of advancing technologies on global security, advising Interpol and the U.S. government. He is the founder of the Future Crimes Institute and Chair for Policy, Law & Ethics at Silicon Valley’s Singularity University. Steven Kotler is a New York Times–best-selling author and an award-winning journalist.

For Kodak, nuclear reactor and weapons-grade uranium proved useful

Reposted for filing

Kodak had a nuclear reactor: An Eastman Kodak facility had a small nuclear reactor and 3½ pounds of weapons-grade uranium for more than 30 years. (Associated Press / May 14, 2012)
By Matt Pearce | May 14, 2012, 3:01 p.m.

Kodak has the bomb.

… OK, not really. But according to a report from the Rochester, N.Y., Democrat and Chronicle, an Eastman Kodak facility had a small nuclear reactor and 3 ½ pounds of weapons-grade uranium for more than 30 years.

Kodak. The company that makes cameras and printers.

“It’s such an odd situation because private companies just don’t have this material,” Miles Pomper, a senior research associate at the Center for Nonproliferation Studies in Washington, D.C., told the Democrat and Chronicle.

No kidding. A spokesman for the Nuclear Regulatory Commission told the Los Angeles Times that the company held 1,582 grams of uranium enriched up to 93.4% uranium-235, a level considered weapons-grade. Good thing Kodak isn’t in Iran; that’s the kind of thing Israel’s been threatening to go to war over.

The company was using the reactor to check its chemicals and perform radiography tests, the commission said, and had upgraded to its in-house system after using one at Cornell University, according to the Democrat and Chronicle. It was reportedly guarded and monitored carefully.

Kodak, not known as one of the world’s nuclear powers, filed for bankruptcy protection in January and has been shedding some of its holdings.

Lest this story conjure up memories of the anxiety over “loose nukes” after the collapse of the Soviet Union in 1991, Kodak ditched the uranium in 2007 with the coordination of the U.S. government, according to the Nuclear Regulatory Commission.

Neil Sheehan, a commission spokesman, told The Times that he doesn’t know how many private companies have weapons-grade uranium but that Kodak’s situation was rare. “This was a unique type of device they were using at Kodak,” he said.

Only one other like it existed, and it belonged to the U.S. Department of Energy and was decommissioned in the 1990s.

According to government data, the Nuclear Regulatory Commission oversees 31 research reactors in the U.S. Most of them are at universities, but Aerotest in San Ramon, Calif., Dow Chemical Co. in Midland, Mich., and GE-Hitachi in Sunol, Calif., have each had operating licenses to run research reactors for more than 40 years.


Nobel laureate challenges psychologists to clean up their act

Social-priming research needs “daisy chain” of replication.

  • Ed Yong 03 October 2012

Nobel prize-winner Daniel Kahneman has issued a strongly worded call to one group of psychologists to restore the credibility of their field by creating a replication ring to check each other’s results.

Kahneman, a psychologist at Princeton University in New Jersey, addressed his open e-mail to researchers who work on social priming, the study of how subtle cues can unconsciously influence our thoughts or behaviour. For example, volunteers might walk more slowly down a corridor after seeing words related to old age [1], or fare better in general-knowledge tests after writing down the attributes of a typical professor [2].

Such tests are widely used in psychology, and Kahneman counts himself as a “general believer” in priming effects. But in his e-mail, seen by Nature, he writes that there is a “train wreck looming” for the field, due to a “storm of doubt” about the robustness of priming results.

Under fire

This scepticism has been fed by failed attempts to replicate classic priming studies, increasing concerns about replicability in psychology more broadly (see ‘Bad Copy‘), and the exposure of fraudulent social psychologists such as Diederik Stapel, Dirk Smeesters and Lawrence Sanna, who used priming techniques in their work.

“For all these reasons, right or wrong, your field is now the poster child for doubts about the integrity of psychological research,” Kahneman writes. “I believe that you should collectively do something about this mess.”

Kahneman’s chief concern is that graduate students who have conducted priming research may find it difficult to get jobs after being associated with a field that is being visibly questioned.

“Kahneman is a hard man to ignore. I suspect that everybody who got a message from him read it immediately,” says Brian Nosek, a social psychologist at the University of Virginia in Charlottesville.

David Funder, at the University of California, Riverside, and president-elect of the Society for Personality and Social Psychology, worries that the debate about priming has descended into angry defensiveness rather than a scientific discussion about data. “I think the e-mail hits exactly the right tone,” he says. “If this doesn’t work, I don’t know what will.”

Hal Pashler, a cognitive psychologist at the University of California, San Diego, says that several groups, including his own, have already tried to replicate well-known social-priming findings, but have not been able to reproduce any of the effects. “These are quite simple experiments and the replication attempts are well powered, so it is all very puzzling. The field needs to get to the bottom of this, and the quicker the better.”

Chain of replication

To address this problem, Kahneman recommends that established social psychologists set up a “daisy chain” of replications. Each lab would try to repeat a priming effect demonstrated by its neighbour, supervised by someone from the replicated lab. Both parties would record every detail of the methods, commit beforehand to publish the results, and make all data openly available.

Kahneman thinks that such collaborations are necessary because priming effects are subtle, and could be undermined by small experimental changes.

Norbert Schwarz, a social psychologist at the University of Michigan in Ann Arbor who received the e-mail, says that priming studies attract sceptical attention because their results are often surprising, not necessarily because they are scientifically flawed. “There is no empirical evidence that work in this area is more or less replicable than work in other areas,” he says, although the “iconic status” of individual findings has distracted from a larger body of supportive evidence.

“You can think of this as psychology’s version of the climate-change debate,” says Schwarz. “The consensus of the vast majority of psychologists closely familiar with work in this area gets drowned out by claims of a few persistent priming sceptics.”

Still, Schwarz broadly supports Kahneman’s suggestion. “I will participate in such a daisy-chain if the field decides that it is something that should be implemented,” says Schwarz, but not if it is “merely directed at one single area of research”.

“I hope that this becomes part of a broader movement in psychology to be more self-critical, and to see if there are gaps in the way we do everyday science,” says Nosek. “I suspect those who are really committed to doing the best science possible will say that this or some alternative is a good idea.”

Journal name: Nature. DOI: 10.1038/nature.2012.11535


Bargh, J. A., Chen, M. & Burrows, L. J. Pers. Soc. Psych. 71, 230–244 (1996).

Dijksterhuis, A. & van Knippenberg, A. J. Pers. Soc. Psych. 74, 865–877 (1998).

Black Hat hacker details lethal wireless attack on insulin pumps

Engineering Evil: A while ago we posted that pacemakers can be hacked. Unfortunately, we must stress that there is an urgent need to better secure these medical devices as soon as possible. Our nightmare scenario is that wireless signals can be broadcast over many miles.

Re-Posted for filing….

Wireless insulin pump attack

If you thought that unlocking cars via SMS was the definition of nefarious, think again: at the Black Hat security conference, security researcher Jerome Radcliffe has detailed how our use of SCADA insulin pumps, pacemakers, and implanted defibrillators could lead to untraceable, lethal attacks from half a mile away.

Radcliffe, who is a diabetic with a wireless, always-attached insulin pump, was slightly worried that someone might hack his pump, meddle with its settings, and kill him — and so, in true hacker fashion, he has spent the last two years trying to hack it himself. Unfortunately, he was very successful. He managed to intercept the wireless control signals, reverse them, inject some fake data, and then send it back to the pump. He could increase the amount of insulin injected by the pump, or reduce it. In both cases the pump showed no signs of being tampered with, and it did not generate a warning that he was probably about to die. “I can get full remote control,” Radcliffe said. “If I were an evil hacker, I could issue commands to give insulin, without anyone else’s authority. This is scary. And I can manipulate the data so it happens in a stealth way.”

The problem with these wireless devices is that, rather insanely, they are not designed with security in mind. As with early computer networks, no one believed that someone would even try to hack a wireless insulin pump or pacemaker, and so they are left relatively unsecured. Some SCADA systems do use encryption, like the wireless control systems used by government facilities, airports, and power plants, but encryption adds complexity, power usage, and cost. The manufacturer of Radcliffe’s insulin pump evidently had to choose between being cheap and quick to market and being secure. Needless to say, now that Radcliffe has shown that it’s rather easy to kill a user of this insulin pump, the manufacturer will likely move rather quickly to secure it before it loses billions of dollars in a lawsuit.

Unfortunately, the weakness of “non-vital” SCADA systems is endemic. Three years ago, a similar vulnerability [PDF] was found in wireless pacemakers — and according to Brad Smith, a security researcher and also a registered nurse, these same wireless control systems can be found in other medical devices, too. The only saving grace is that no hacker has yet gone public with the exact process required to hack a modern, actively-used medical device — and indeed, the process will vary from device to device — but it does make you feel a little queasy that someone could park outside a hospital or care home and kill with wireless, untraceable impunity.

The only solution, as with wired and wireless computer networks, is to step up security. Proprietary hardware would be a good start, and encryption could also be used — but in the case of implanted devices that must go for months or years without a change of batteries, the increased power draw of complex circuitry is highly undesirable. Ultimately, these wireless control devices must simply be built with the assumption that hackers will eventually break in. In the case of the insulin pump, it should contain hardware-level sanity checking. It could contain a piece of read-only memory that contains the minimum and maximum amounts of insulin that should ever be injected into the patient.
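The hardware-level sanity check described above is straightforward to sketch. Real pump firmware would of course be written in C against a specific device, but the logic fits in a few lines in any language; all the names and limit values below are invented for illustration, not taken from any actual pump.

```python
# Hypothetical sketch of the hardware-level sanity check described above.
# The MIN_/MAX_ bounds would live in read-only memory, beyond the reach of
# any wirelessly received command; names and values here are invented.
MIN_BOLUS_UNITS = 0.0
MAX_BOLUS_UNITS = 10.0   # absolute ceiling for any single dose
MAX_DAILY_UNITS = 50.0   # absolute ceiling over a 24-hour window

def validate_dose(requested_units: float, delivered_today: float) -> float:
    """Refuse any dosing command outside the hard limits, no matter what
    the (possibly spoofed) wireless packet claims to authorize."""
    if not (MIN_BOLUS_UNITS <= requested_units <= MAX_BOLUS_UNITS):
        raise ValueError("single-dose limit exceeded; command rejected")
    if delivered_today + requested_units > MAX_DAILY_UNITS:
        raise ValueError("daily limit exceeded; command rejected")
    return requested_units
```

The design point is defense in depth: even if an attacker gains full control of the radio link, a check like this caps the worst-case harm at whatever the read-only limits allow.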

Read more at Scott Hanselman’s blog and VentureBeat

Obama waives sanctions on countries that use child soldiers

Posted By Josh Rogin Monday, October 1, 2012 – 1:48 PM

U.S. President Barack Obama issued a new executive order last week to fight human trafficking, touting his administration’s handling of the issue.

“When a little boy is kidnapped, turned into a child soldier, forced to kill or be killed — that’s slavery,” Obama said in a speech at the Clinton Global Initiative. “It is barbaric, and it is evil, and it has no place in a civilized world. Now, as a nation, we’ve long rejected such cruelty.”

But for the third year in a row, Obama has waived almost all U.S. sanctions that would punish certain countries that use child soldiers, upsetting many in the human rights community.

Late Friday afternoon, Obama issued a presidential memorandum waiving penalties under the Child Soldiers Protection Act of 2008 for Libya, South Sudan, and Yemen, penalties that Congress put in place to prevent U.S. arms sales to countries determined by the State Department to be the worst abusers of child soldiers in their militaries. The president also partially waived sanctions against the Democratic Republic of the Congo to allow some military training and arms sales to that country.

Human rights advocates saw the waivers as harmful to the goal of using U.S. influence to urge countries that receive military assistance to move away from using child soldiers and contradictory to the rhetoric Obama used in his speech.

“After such a strong statement against the exploitation of children, it seems bizarre that Obama would give a pass to countries using children in their armed forces and using U.S. tax money to do that,” said Jesse Eaves, the senior policy advisor for child protection at World Vision.

The Obama administration doesn’t want to upset its relationships with countries that it needs for security cooperation, but the blanket use of waivers is allowing the administration to avoid the law’s intent, which was to force the U.S. government to put a greater priority on human rights and child protection when doling out military aid, he said.

“The intent in this law was to use this waiver authority only in extreme circumstances, yet this has become an annual thing and this has become the default of this administration,” Eaves said.

The Romney campaign has made Obama’s record on human rights a feature of its foreign-policy critique, with top advisors accusing the president of deprioritizing the issue, often in sweeping terms.

“Barack Obama has broken with a tradition that goes back to Woodrow Wilson about human rights and values animating our foreign policy. This administration has not been an effective voice for human rights,” Rich Williamson, the Romney campaign’s senior advisor for foreign policy, who also served as George W. Bush‘s special envoy to Sudan, told The Cable in July.

Bush signed the child-soldiers law in 2008. It prohibits U.S. military education and training, foreign military financing, and other defense-related assistance to countries that actively recruit troops under the age of 18. Countries are designated as violators if the State Department’s annual Trafficking in Persons report identifies them as recruiting child soldiers. The original bill was sponsored by Sen. Dick Durbin (D-IL).

Obama first waived the sanctions in 2010, the first year they were to go into effect. At that time, the White House failed to inform Congress or the NGO community of its decision in advance, setting off a fierce backlash. A justification memo obtained by The Cable at the time made several security-related arguments for the waivers. Sudan was going through a fragile transition, for example. Yemen was crucial to counterterrorism cooperation, the administration argued.

But NSC Senior Director for Multilateral Affairs Samantha Power told NGO leaders at the time that the waivers would not become a recurring event.

“Our judgment was: Brand them, name them, shame them, and then try to leverage assistance in a fashion to make this work,” Power said, saying the administration wanted to give the violator countries one more year to show progress. “Our judgment is we’ll work from inside the tent.”

But the next year, in 2011, Obama waived almost all the sanctions once again, using largely the same justifications, except that the administration argued that the law didn’t apply to South Sudan because it wasn’t a country until July 2011. Rep. Jeff Fortenberry (R-NE) tried to pass new legislation to force Obama to notify Congress before issuing the waivers.

Fortenberry called the decision an “assault on human dignity,” and said, “Good citizens of this country who do not want to be complicit in this grave human rights abuse must challenge this administration.”

This year, the State Department held a briefing for NGO leaders and human rights activists to answer questions about the waivers and try to allay their concerns.

“They are addressing the concerns of the legislation in a more pragmatic and useful way than in the past, but they still have a ways to go and this was a clear missed opportunity,” Rachel Stohl, a senior associate at the Stimson Center who attended the briefing, told The Cable. “You want the waivers to be used very sparingly but some of these countries get the waiver every year.”

Stohl rejects the administration’s argument that countries like Libya and South Sudan are so fragile that they can’t be leaned on to do better on human rights.

“I would argue that this is exactly the right time to make clear to Libya what the parameters are,” she said.

Jo Becker, advocacy director for the children’s rights division at Human Rights Watch, told The Cable that where the United States has used some pressure, such as in the DRC, where there was a partial cutoff of military aid last year, there was a positive effect.

“After years of foot-dragging, Congo is close to signing a U.N. action plan to end its use of child soldiers,” she said. “But in other countries with child soldiers, including South Sudan, Libya, and Yemen, the U.S. continues to squander its leverage by giving military aid with no conditions.”

NSC Spokesman Tommy Vietor did not respond to multiple requests for comment.


Fake TV News: Widespread and Undisclosed

2008 report posted for filing

Although the number of media formats and outlets has exploded in recent years, television remains the dominant news source in the United States. More than three-quarters of U.S. adults rely on local TV news, and more than 70 percent turn to network TV or cable news on a daily or near-daily basis, according to a January 2006 Harris Poll. The quality and integrity of television reporting thus significantly impacts the public’s ability to evaluate everything from consumer products to medical services to government policies.

To reach this audience—and to add a veneer of credibility to clients’ messages—the public relations industry uses video news releases (VNRs). VNRs are pre-packaged “news” segments and additional footage created by broadcast PR firms, or by publicists within corporations or government agencies. VNRs are designed to be seamlessly integrated into newscasts, and are freely provided to TV stations. Although the accompanying information sent to TV stations identifies the clients behind the VNRs, nothing in the material for broadcast does. Without strong disclosure requirements and the attention and action of TV station personnel, viewers cannot know when the news segment they’re watching was bought and paid for by the very subjects of that “report.”

From an ad for the broadcast PR firm D S Simon Productions

In recent years, the U.S. Congress, the Federal Communications Commission, journalism professors, reporters and members of the general public have expressed concern about VNRs. In response, public relations executives and broadcaster groups have vigorously defended the status quo, claiming there is no problem with current practices. In June 2005, the president of the Radio-Television News Directors Association (RTNDA), Barbara Cochran, told a reporter that VNRs were “kind of like the Loch Ness Monster. Everyone talks about it, but not many people have actually seen it.”

To inform this debate, the Center for Media and Democracy (CMD) conducted a ten-month study of selected VNRs and their use by television stations, tracking 36 VNRs issued by three broadcast PR firms. Key findings include:

VNR use is widespread. CMD found 69 TV stations that aired at least one VNR from June 2005 to March 2006—a significant number, given that CMD was only able to track a small percentage of the VNRs streaming into newsrooms during that time. Collectively, these 69 stations broadcast to 52.7 percent of the U.S. population, according to Nielsen Media figures. Syndicated and network-distributed segments sometimes included VNRs, further broadening their reach.

VNRs are aired in TV markets of all sizes. TV stations often use VNRs to limit the costs associated with producing, filming and editing their own reports. However, VNR usage is not limited to small-town stations with shoestring budgets. Nearly two-thirds of the VNRs that CMD tracked were aired by stations in a Top 50 Nielsen market area, such as Detroit, Pittsburgh or Cincinnati. Thirteen VNRs were broadcast in the ten largest markets, including New York, Los Angeles, Chicago, Philadelphia and Boston.

TV stations don’t disclose VNRs to viewers. Of the 87 VNR broadcasts that CMD documented, not once did the TV station disclose the client(s) behind the VNR to the news audience. Only one station, WHSV-3 in Harrisonburg, VA, provided partial disclosure, identifying the broadcast PR firm that created the VNR, but not the client, DaimlerChrysler. WHSV-3 aired soundbites from a Chrysler representative and directed viewers to websites associated with Chrysler, without disclosing the company’s role in the “report.”

TV stations disguise VNRs as their own reporting. In every VNR broadcast that CMD documented, the TV station altered the VNR’s appearance. Newsrooms added station-branded graphics and overlays, to make VNRs indistinguishable from reports that genuinely originated from their station. A station reporter or anchor re-voiced the VNR in more than 60 percent of the VNR broadcasts, sometimes repeating the publicist’s original narration word-for-word.

TV stations don’t supplement VNR footage or verify VNR claims. While TV stations often edit VNRs for length, in only seven of the 87 VNR broadcasts documented by CMD did stations add any independently-gathered footage or information to the segment. In all other cases, the entire aired “report” was derived from a VNR and its accompanying script. In 31 of the 87 VNR broadcasts, the entire aired “report” was the entire pre-packaged VNR. Three stations (WCPO-9 in Cincinnati, OH; WSYR-9 in Syracuse, NY; and WYTV-33 in Youngstown, OH) removed safety warnings from a VNR touting a newly-approved prescription skin cream. WSYR-9 also aired a VNR heralding a “major health breakthrough” for arthritis sufferers—a supplement that a widely-reported government study had found to be little better than a placebo.

The vast majority of VNRs are produced for corporate clients. Of the hundreds of VNRs that CMD reviewed for potential tracking, only a few came from government agencies or non-profit organizations. Corporations have consistently been the dominant purveyors of VNRs, though the increased scrutiny of government-funded VNRs in recent years may have decreased their use by TV newsrooms. Of the VNRs that CMD tracked, 47 of the 49 clients behind them were corporations that stood to benefit financially from the favorable “news” coverage.

Satellite media tours may accompany VNRs. Broadcast PR firms sometimes produce both VNRs and satellite media tours (SMTs) for clients. SMTs are actual interviews with TV stations, but their focus and scope are determined by the clients. In effect, SMTs are live recitations of VNR scripts. CMD identified 10 different TV stations that aired SMTs for 17 different clients with related VNRs. In only one instance was there partial disclosure to viewers. An anchor at WLTX-19 in Columbia, SC, said after the segment, “This interview … was provided by vendors at the consumer trade show,” but did not name the four corporate clients behind the SMT.

In sum, television newscasts—the most popular news source in the United States—frequently air VNRs without disclosure to viewers, without conducting their own reporting, and even without fact checking the claims made in the VNRs. VNRs are overwhelmingly produced for corporations, as part of larger public relations campaigns to sell products, burnish their image, or promote policies or actions beneficial to the corporation.

Mass. chemist’s colleagues had questioned her work: 60,000 drug samples submitted in the cases of about 34,000 defendants are now in question

AP foreign, Wednesday September 26 2012


BOSTON (AP) — A chemist at the center of a drug lab testing scandal admitted she faked results for two to three years, forged signatures and skipped proper procedures, a police report shows.

Some of Annie Dookhan’s colleagues also had concerns for years about the high number of drug samples she tested and inconsistencies in her work, according to other police reports The Associated Press obtained Wednesday.

Lab employees’ interviews with investigators show they convinced themselves their concerns were invalid or reported them to supervisors who didn’t intervene to stop Dookhan.

Dookhan’s mishandling of drug samples at the now-closed state lab in Boston has thrown thousands of criminal cases into question, authorities say. A handful of defendants already are free or have had their criminal sentences suspended.

Concerns from Dookhan’s colleagues prompted two supervisors to audit her work in 2010, but they just looked at paperwork and didn’t retest drug samples.

It wasn’t until the spring of 2011, when police say Dookhan admitted forging a colleague’s initials on paperwork after taking 90 drug samples from evidence, that things started to unravel for her. Another colleague told police it was “almost like Dookhan wanted to get caught.”

Anne Goldbach, forensic services director for the Committee for Public Counsel Services, which oversees the provision of legal representation for indigent people, said the new documents show the problems at the Hinton State Laboratory are more troubling than originally believed. She said it appears there was unsupervised access to the lab’s evidence office and evidence safe.

While Goldbach said she didn’t see evidence of intentional wrongdoing by other chemists, she said that because Dookhan was in charge of quality control equipment other chemists could have gotten false test results without knowing it.

“The fact that she failed to conduct quality control steps … it calls into question all the testing done by the lab,” Goldbach said.

Attorney John T. Martin, who represents several defendants whose samples Dookhan handled, said he believes she changed drug weights to meet statutory standards for stricter sentencing.

Martin said in the cases of four of his clients, Dookhan determined that the weight of the drug sample was just 1 gram above the amount needed for a more serious penalty even though police reports made the seizure seem smaller.

Police say Dookhan told them several times in an August interview that she knew she had done wrong.

“I screwed up big time,” she said while becoming teary-eyed, according to the report by investigators for Attorney General Martha Coakley’s office. “I messed up bad. It’s my fault. I don’t want the lab to get in trouble.”

Authorities haven’t filed charges against Dookhan or commented on her possible motives as their probe continues. Dookhan hasn’t responded to repeated requests for comment.

In the Aug. 28 interview with two investigators at her dining room table, Dookhan first denied doing anything wrong when she analyzed drug samples. She changed her story after they confronted her with a Boston Police Department retest of a suspected cocaine sample that came back negative after Dookhan identified it as the narcotic. Police also told her the number of samples she reported analyzing was too high and she couldn’t have done all the tests.

The report shows Dookhan then admitted identifying drug samples by looking at them instead of testing them, a practice known as dry labbing.

She said she tested about five out of 25 samples she got from evidence, after routinely getting a large number of samples from different cases out of the evidence room, police say. She also told police she contaminated samples a few times to get more work finished but no one asked her to do anything improper, they say.

“I intentionally turned a negative sample into a positive a few times,” Dookhan said in a signed statement she gave police.

Dookhan also told investigators she routinely skirted proper procedures by looking up data for assistant district attorneys who called her directly rather than going through the evidence department.

State police say Dookhan tested more than 60,000 drug samples submitted in the cases of about 34,000 defendants during her nine years at the lab. She resigned in March amid an internal investigation by the Department of Public Health.

After state police took over the lab in July as part of a state budget directive, they said they discovered her violations were much more extensive than previously believed and went beyond sloppiness into malfeasance and deliberate mishandling of drug samples.

In the August police interview, Dookhan said that in June 2011 she improperly took 90 samples that weren’t assigned to her from evidence and forged another person’s initials on a log book after a supervisor questioned her about it. While Dookhan’s lab duties were suspended after that, she said she disobeyed orders and continued to give law enforcement officials information on their cases.

Two days after her police interview, Gov. Deval Patrick ordered state police to close the lab.

That day, a police lieutenant spoke with Dookhan to tell her she should get an attorney because she could face criminal charges.

Dookhan cried on the phone, saying she didn’t know any lawyers, didn’t have money and was in a long divorce with her husband and didn’t want to involve family.

The drugs don’t work: a modern medical scandal

The doctors prescribing the drugs don’t know they don’t do what they’re meant to. Nor do their patients. The manufacturers know full well, but they’re not telling.

    Ben Goldacre, The Guardian, Friday 21 September 2012 18.00 EDT


    Drugs are tested by their manufacturers, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques that exaggerate the benefits. Photograph: Getty Images. Digital manipulation: Phil Partridge for GNL Imaging

    Reboxetine is a drug I have prescribed. Other drugs had done nothing for my patient, so we wanted to try something new. I’d read the trial data before I wrote the prescription, and found only well-designed, fair tests, with overwhelmingly positive results. Reboxetine was better than a placebo, and as good as any other antidepressant in head-to-head comparisons. It’s approved for use by the Medicines and Healthcare products Regulatory Agency (the MHRA), which governs all drugs in the UK. Millions of doses are prescribed every year, around the world. Reboxetine was clearly a safe and effective treatment. The patient and I discussed the evidence briefly, and agreed it was the right treatment to try next. I signed a prescription.

    But we had both been misled. In October 2010, a group of researchers was finally able to bring together all the data that had ever been collected on reboxetine, both from trials that were published and from those that had never appeared in academic papers. When all this trial data was put together, it produced a shocking picture. Seven trials had been conducted comparing reboxetine against a placebo. Only one, conducted in 254 patients, had a neat, positive result, and that one was published in an academic journal, for doctors and researchers to read. But six more trials were conducted, in almost 10 times as many patients. All of them showed that reboxetine was no better than a dummy sugar pill. None of these trials was published. I had no idea they existed.

    It got worse. The trials comparing reboxetine against other drugs showed exactly the same picture: three small studies, 507 patients in total, showed that reboxetine was just as good as any other drug. They were all published. But 1,657 patients’ worth of data was left unpublished, and this unpublished data showed that patients on reboxetine did worse than those on other drugs. If all this wasn’t bad enough, there was also the side-effects data. The drug looked fine in the trials that appeared in the academic literature; but when we saw the unpublished studies, it turned out that patients were more likely to have side-effects, more likely to drop out of taking the drug and more likely to withdraw from the trial because of side-effects, if they were taking reboxetine rather than one of its competitors.

    I did everything a doctor is supposed to do. I read all the papers, I critically appraised them, I understood them, I discussed them with the patient and we made a decision together, based on the evidence. In the published data, reboxetine was a safe and effective drug. In reality, it was no better than a sugar pill and, worse, it did more harm than good. As a doctor, I did something that, on the balance of all the evidence, harmed my patient, simply because unflattering data was left unpublished.

    Nobody broke any law in that situation, reboxetine is still on the market and the system that allowed all this to happen is still in play, for all drugs, in all countries in the world. Negative data goes missing, for all treatments, in all areas of science. The regulators and professional bodies we would reasonably expect to stamp out such practices have failed us. These problems have been protected from public scrutiny because they’re too complex to capture in a soundbite. This is why they’ve gone unfixed by politicians, at least to some extent; but it’s also why it takes detail to explain. The people you should have been able to trust to fix these problems have failed you, and because you have to understand a problem properly in order to fix it, there are some things you need to know.

    Drugs are tested by the people who manufacture them, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques that are flawed by design, in such a way that they exaggerate the benefits of treatments. Unsurprisingly, these trials tend to produce results that favour the manufacturer. When trials throw up results that companies don’t like, they are perfectly entitled to hide them from doctors and patients, so we only ever see a distorted picture of any drug’s true effects. Regulators see most of the trial data, but only from early on in a drug’s life, and even then they don’t give this data to doctors or patients, or even to other parts of government. This distorted evidence is then communicated and applied in a distorted fashion.

    In their 40 years of practice after leaving medical school, doctors hear about what works ad hoc, from sales reps, colleagues and journals. But those colleagues can be in the pay of drug companies – often undisclosed – and the journals are, too. And so are the patient groups. And finally, academic papers, which everyone thinks of as objective, are often covertly planned and written by people who work directly for the companies, without disclosure. Sometimes whole academic journals are owned outright by one drug company. Aside from all this, for several of the most important and enduring problems in medicine, we have no idea what the best treatment is, because it’s not in anyone’s financial interest to conduct any trials at all.

    Now, on to the details.

    In 2010, researchers from Harvard and Toronto found all the trials looking at five major classes of drug – antidepressants, ulcer drugs and so on – then measured two key features: were they positive, and were they funded by industry? They found more than 500 trials in total: 85% of the industry-funded studies were positive, but only 50% of the government-funded trials were. In 2007, researchers looked at every published trial that set out to explore the benefits of a statin. These cholesterol-lowering drugs reduce your risk of having a heart attack and are prescribed in very large quantities. This study found 192 trials in total, either comparing one statin against another, or comparing a statin against a different kind of treatment. They found that industry-funded trials were 20 times more likely to give results favouring the test drug.

    These are frightening results, but they come from individual studies. So let’s consider systematic reviews into this area. In 2003, two were published. They took all the studies ever published that looked at whether industry funding is associated with pro-industry results, and both found that industry-funded trials were, overall, about four times more likely to report positive results. A further review in 2007 looked at the new studies in the intervening four years: it found 20 more pieces of work, and all but two showed that industry-sponsored trials were more likely to report flattering results.

    It turns out that this pattern persists even when you move away from published academic papers and look instead at trial reports from academic conferences. James Fries and Eswar Krishnan, at the Stanford University School of Medicine in California, studied all the research abstracts presented at the 2001 American College of Rheumatology meetings which reported any kind of trial and acknowledged industry sponsorship, in order to find out what proportion had results that favoured the sponsor’s drug.

    In general, the results section of an academic paper is extensive: the raw numbers are given for each outcome, and for each possible causal factor, but not just as raw figures. The “ranges” are given, subgroups are explored, statistical tests conducted, and each detail is described in table form, and in shorter narrative form in the text. This lengthy process is usually spread over several pages. In Fries and Krishnan (2004), this level of detail was unnecessary. The results section is a single, simple and – I like to imagine – fairly passive-aggressive sentence:

    “The results from every randomised controlled trial (45 out of 45) favoured the drug of the sponsor.”

    How does this happen? How do industry-sponsored trials almost always manage to get a positive result? Sometimes trials are flawed by design. You can compare your new drug with something you know to be rubbish – an existing drug at an inadequate dose, perhaps, or a placebo sugar pill that does almost nothing. You can choose your patients very carefully, so they are more likely to get better on your treatment. You can peek at the results halfway through, and stop your trial early if they look good. But after all these methodological quirks comes one very simple insult to the integrity of the data. Sometimes, drug companies conduct lots of trials, and when they see that the results are unflattering, they simply fail to publish them.

    Because researchers are free to bury any result they please, patients are exposed to harm on a staggering scale throughout the whole of medicine. Doctors can have no idea about the true effects of the treatments they give. Does this drug really work best, or have I simply been deprived of half the data? No one can tell. Is this expensive drug worth the money, or has the data simply been massaged? No one can tell. Will this drug kill patients? Is there any evidence that it’s dangerous? No one can tell. This is a bizarre situation to arise in medicine, a discipline in which everything is supposed to be based on evidence.

    And this data is withheld from everyone in medicine, from top to bottom. Nice, for example, is the National Institute for Health and Clinical Excellence, created by the British government to conduct careful, unbiased summaries of all the evidence on new treatments. It is unable either to identify or to access data on a drug’s effectiveness that’s been withheld by researchers or companies: Nice has no more legal right to that data than you or I do, even though it is making decisions about effectiveness, and cost-effectiveness, on behalf of the NHS, for millions of people.

    In any sensible world, when researchers are conducting trials on a new tablet for a drug company, for example, we’d expect universal contracts, making it clear that all researchers are obliged to publish their results, and that industry sponsors – which have a huge interest in positive results – must have no control over the data. But, despite everything we know about industry-funded research being systematically biased, this does not happen. In fact, the opposite is true: it is entirely normal for researchers and academics conducting industry-funded trials to sign contracts subjecting them to gagging clauses that forbid them to publish, discuss or analyse data from their trials without the permission of the funder.

    This is such a secretive and shameful situation that even trying to document it in public can be a fraught business. In 2006, a paper was published in the Journal of the American Medical Association (Jama), one of the biggest medical journals in the world, describing how common it was for researchers doing industry-funded trials to have these kinds of constraints placed on their right to publish the results. The study was conducted by the Nordic Cochrane Centre and it looked at all the trials given approval to go ahead in Copenhagen and Frederiksberg. (If you’re wondering why these two cities were chosen, it was simply a matter of practicality: the researchers applied elsewhere without success, and were specifically refused access to data in the UK.) These trials were overwhelmingly sponsored by the pharmaceutical industry (98%) and the rules governing the management of the results tell a story that walks the now familiar line between frightening and absurd.

    For 16 of the 44 trials, the sponsoring company got to see the data as it accumulated, and in a further 16 it had the right to stop the trial at any time, for any reason. This means that a company can see if a trial is going against it, and can interfere as it progresses, distorting the results. Even if the study was allowed to finish, the data could still be suppressed: there were constraints on publication rights in 40 of the 44 trials, and in half of them the contracts specifically stated that the sponsor either owned the data outright (what about the patients, you might say?), or needed to approve the final publication, or both. None of these restrictions was mentioned in any of the published papers.

    When the paper describing this situation was published in Jama, Lif, the Danish pharmaceutical industry association, responded by announcing, in the Journal of the Danish Medical Association, that it was “both shaken and enraged about the criticism, that could not be recognised”. It demanded an investigation of the scientists, though it failed to say by whom or of what. Lif then wrote to the Danish Committee on Scientific Dishonesty, accusing the Cochrane researchers of scientific misconduct. We can’t see the letter, but the researchers say the allegations were extremely serious – they were accused of deliberately distorting the data – but vague, and without documents or evidence to back them up.

    Nonetheless, the investigation went on for a year. Peter Gøtzsche, director of the Cochrane Centre, told the British Medical Journal that only Lif’s third letter, 10 months into this process, made specific allegations that could be investigated by the committee. Two months after that, the charges were dismissed. The Cochrane researchers had done nothing wrong. But before they were cleared, Lif copied the letters alleging scientific dishonesty to the hospital where four of them worked, and to the management organisation running that hospital, and sent similar letters to the Danish medical association, the ministry of health, the ministry of science and so on. Gøtzsche and his colleagues felt “intimidated and harassed” by Lif’s behaviour. Lif continued to insist that the researchers were guilty of misconduct even after the investigation was completed.

    Paroxetine is a commonly used antidepressant, from the class of drugs known as selective serotonin reuptake inhibitors or SSRIs. It’s also a good example of how companies have exploited our long-standing permissiveness about missing trials, and found loopholes in our inadequate regulations on trial disclosure.

    To understand why, we first need to go through a quirk of the licensing process. Drugs do not simply come on to the market for use in all medical conditions: for any specific use of any drug, in any specific disease, you need a separate marketing authorisation. So a drug might be licensed to treat ovarian cancer, for example, but not breast cancer. That doesn’t mean the drug doesn’t work in breast cancer. There might well be some evidence that it’s great for treating that disease, too, but maybe the company hasn’t gone to the trouble and expense of getting a formal marketing authorisation for that specific use. Doctors can still go ahead and prescribe it for breast cancer, if they want, because the drug is available for prescription, it probably works, and there are boxes of it sitting in pharmacies waiting to go out. In this situation, the doctor will be prescribing the drug legally, but “off-label”.

    Now, it turns out that the use of a drug in children is treated as a separate marketing authorisation from its use in adults. This makes sense in many cases, because children can respond to drugs in very different ways and so research needs to be done in children separately. But getting a licence for a specific use is an arduous business, requiring lots of paperwork and some specific studies. Often, this will be so expensive that companies will not bother to get a licence specifically to market a drug for use in children, because that market is usually much smaller.

    So it is not unusual for a drug to be licensed for use in adults but then prescribed for children. Regulators have recognised that this is a problem, so recently they have started to offer incentives for companies to conduct more research and formally seek these licences.

    When GlaxoSmithKline applied for a marketing authorisation in children for paroxetine, an extraordinary situation came to light, triggering the longest investigation in the history of UK drugs regulation. Between 1994 and 2002, GSK conducted nine trials of paroxetine in children. The first two failed to show any benefit, but the company made no attempt to inform anyone of this by changing the “drug label” that is sent to all doctors and patients. In fact, after these trials were completed, an internal company management document stated: “It would be commercially unacceptable to include a statement that efficacy had not been demonstrated, as this would undermine the profile of paroxetine.” In the year after this secret internal memo, 32,000 prescriptions were issued to children for paroxetine in the UK alone: so, while the company knew the drug didn’t work in children, it was in no hurry to tell doctors that, despite knowing that large numbers of children were taking it. More trials were conducted over the coming years – nine in total – and none showed that the drug was effective at treating depression in children.

    It gets much worse than that. These children weren’t simply receiving a drug that the company knew to be ineffective for them; they were also being exposed to side-effects. This should be self-evident, since any effective treatment will have some side-effects, and doctors factor this in, alongside the benefits (which in this case were nonexistent). But nobody knew how bad these side-effects were, because the company didn’t tell doctors, or patients, or even the regulator about the worrying safety data from its trials. This was because of a loophole: you have to tell the regulator only about side-effects reported in studies looking at the specific uses for which the drug has a marketing authorisation. Because the use of paroxetine in children was “off-label”, GSK had no legal obligation to tell anyone about what it had found.

    People had worried for a long time that paroxetine might increase the risk of suicide, though that is quite a difficult side-effect to detect in an antidepressant. In February 2003, GSK spontaneously sent the MHRA a package of information on the risk of suicide on paroxetine, containing some analyses done in 2002 from adverse-event data in trials the company had held, going back a decade. This analysis showed that there was no increased risk of suicide. But it was misleading: although it was unclear at the time, data from trials in children had been mixed in with data from trials in adults, which had vastly greater numbers of participants. As a result, any sign of increased suicide risk among children on paroxetine had been completely diluted away.
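    The dilution effect described here is easy to reproduce with toy numbers. The sketch below uses entirely invented counts, not GSK's actual data, to show how a doubled risk in a small paediatric sample can all but disappear when naively pooled with a much larger adult sample:

```python
# Hypothetical illustration (all counts invented): diluting a safety
# signal by mixing a small high-risk stratum into a large neutral one.

def risk_ratio(events_drug, n_drug, events_placebo, n_placebo):
    """Ratio of event rates between drug and placebo arms."""
    return (events_drug / n_drug) / (events_placebo / n_placebo)

# Children alone: a doubled risk of the adverse event on the drug.
children = risk_ratio(10, 1000, 5, 1000)            # 2.0

# Adults alone: no excess risk.
adults = risk_ratio(100, 20000, 100, 20000)         # 1.0

# Pooled naively, the paediatric signal almost vanishes.
pooled = risk_ratio(10 + 100, 1000 + 20000, 5 + 100, 1000 + 20000)
print(children, adults, round(pooled, 2))           # 2.0 1.0 1.05
```

    The pooled risk ratio of about 1.05 looks reassuringly close to 1, even though one stratum carries a clear doubling of risk, which is exactly why mixing children's data into a far larger adult dataset was misleading.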

    Later in 2003, GSK had a meeting with the MHRA to discuss another issue involving paroxetine. At the end of this meeting, the GSK representatives gave out a briefing document, explaining that the company was planning to apply later that year for a specific marketing authorisation to use paroxetine in children. They mentioned, while handing out the document, that the MHRA might wish to bear in mind a safety concern the company had noted: an increased risk of suicide among children with depression who received paroxetine, compared with those on dummy placebo pills.

    This was vitally important side-effect data, being presented, after an astonishing delay, casually, through an entirely inappropriate and unofficial channel. Although the data was given to completely the wrong team, the MHRA staff present at this meeting had the wit to spot that this was an important new problem. A flurry of activity followed: analyses were done, and within one month a letter was sent to all doctors advising them not to prescribe paroxetine to patients under the age of 18.

    How is it possible that our systems for getting data from companies are so poor, they can simply withhold vitally important information showing that a drug is not only ineffective, but actively dangerous? Because the regulations contain ridiculous loopholes, and it’s dismal to see how GSK cheerfully exploited them: when the investigation was published in 2008, it concluded that what the company had done – withholding important data about safety and effectiveness that doctors and patients clearly needed to see – was plainly unethical, and put children around the world at risk; but our laws are so weak that GSK could not be charged with any crime.

    After this episode, the MHRA and EU changed some of their regulations, though not adequately. They created an obligation for companies to hand over safety data for uses of a drug outside its marketing authorisation; but ridiculously, for example, trials conducted outside the EU were still exempt. Some of the trials GSK conducted were published in part, but that is obviously not enough: we already know that if we see only a biased sample of the data, we are misled. But we also need all the data for the more simple reason that we need lots of data: safety signals are often weak, subtle and difficult to detect. In the case of paroxetine, the dangers became apparent only when the adverse events from all of the trials were pooled and analysed together.
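    Pooling done properly is, conversely, what makes weak signals visible. A short sketch with invented counts (nothing here is real trial data) shows the basic mechanics: in any single trial the adverse events are too few to stand out, but summed across trials the excess on the drug arm is hard to miss:

```python
# Hypothetical illustration: individual trials each have only a handful
# of adverse events, but pooling the counts reveals a consistent excess.

trials = [
    # (events_drug, n_drug, events_placebo, n_placebo) -- invented
    (3, 200, 1, 200),
    (2, 150, 1, 150),
    (4, 300, 2, 300),
    (3, 250, 1, 250),
]

events_drug = sum(t[0] for t in trials)
n_drug = sum(t[1] for t in trials)
events_placebo = sum(t[2] for t in trials)
n_placebo = sum(t[3] for t in trials)

pooled_rr = (events_drug / n_drug) / (events_placebo / n_placebo)
print(events_drug, events_placebo, round(pooled_rr, 2))  # 12 5 2.4
```

    Twelve events against five, a pooled risk ratio of 2.4, from trials that each looked unremarkable on their own. (Real meta-analyses weight and stratify by trial rather than simply summing counts, but the point about needing all the data stands either way.)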

    That leads us to the second obvious flaw in the current system: the results of these trials are given in secret to the regulator, which then sits and quietly makes a decision. This is the opposite of science, which is reliable only because everyone shows their working, explains how they know that something is effective or safe, shares their methods and results, and allows others to decide if they agree with the way in which the data was processed and analysed. Yet for the safety and efficacy of drugs, we allow it to happen behind closed doors, because drug companies have decided that they want to share their trial results discreetly with the regulators. So the most important job in evidence-based medicine is carried out alone and in secret. And regulators are not infallible, as we shall see.

    Rosiglitazone was first marketed in 1999. In that first year, Dr John Buse from the University of North Carolina discussed an increased risk of heart problems at a pair of academic meetings. The drug’s manufacturer, GSK, made direct contact in an attempt to silence him, then moved on to his head of department. Buse felt pressured to sign various legal documents. To cut a long story short, after wading through documents for several months, in 2007 the US Senate committee on finance released a report describing the treatment of Buse as “intimidation”.

    But we are more concerned with the safety and efficacy data. In 2003 the Uppsala drug monitoring group of the World Health Organisation contacted GSK about an unusually large number of spontaneous reports associating rosiglitazone with heart problems. GSK conducted two internal meta-analyses of its own data on this, in 2005 and 2006. These showed that the risk was real, but although both GSK and the FDA had these results, neither made any public statement about them, and they were not published until 2008.

    During this delay, vast numbers of patients were exposed to the drug, but doctors and patients learned about this serious problem only in 2007, when cardiologist Professor Steve Nissen and colleagues published a landmark meta-analysis. This showed a 43% increase in the risk of heart problems in patients on rosiglitazone. Since people with diabetes are already at increased risk of heart problems, and the whole point of treating diabetes is to reduce this risk, that finding was big potatoes. Nissen’s findings were confirmed in later work, and in 2010 the drug was either taken off the market or restricted, all around the world.

    Now, my argument is not that this drug should have been banned sooner because, as perverse as it sounds, doctors do often need inferior drugs for use as a last resort. For example, a patient may develop idiosyncratic side-effects on the most effective pills and be unable to take them any longer. Once this has happened, it may be worth trying a less effective drug if it is at least better than nothing.

    The concern is that these discussions happened with the data locked behind closed doors, visible only to regulators. In fact, Nissen’s analysis could only be done at all because of a very unusual court judgment. In 2004, when GSK was caught out withholding data showing evidence of serious side-effects from paroxetine in children, their bad behaviour resulted in a US court case over allegations of fraud, the settlement of which, alongside a significant payout, required GSK to commit to posting clinical trial results on a public website.

    Nissen used the rosiglitazone data, when it became available, and found worrying signs of harm, which he then published to doctors – something the regulators had never done, despite having the information years earlier. If this information had all been freely available from the start, regulators might have felt a little more anxious about their decisions but, crucially, doctors and patients could have disagreed with them and made informed choices. This is why we need wider access to all trial reports, for all medicines.

    Missing data poisons the well for everybody. If proper trials are never done, if trials with negative results are withheld, then we simply cannot know the true effects of the treatments we use. Evidence in medicine is not an abstract academic preoccupation. When we are fed bad data, we make the wrong decisions, inflicting unnecessary pain and suffering, and death, on people just like us.

    • This is an edited extract from Bad Pharma, by Ben Goldacre, published next week by Fourth Estate at £13.99. To order a copy for £11.19, including UK mainland p&p, call 0330 333 6846, or go to

    Author defends Monsanto GM study as EU orders review; Claims “people are responsible and guilty of authorizing this GMO after only three months”

    Author defends Monsanto GM study as EU orders review

    Posted 2012/09/20 at 11:56 am EDT

    BRUSSELS, Sep. 20, 2012 (Reuters) — The French author of a study linking a type of genetically modified corn to higher health risks in rats dismissed criticism of his research methods on Thursday, describing the work as the most detailed study to date on the subject.

    Gilles-Eric Seralini of the University of Caen and colleagues said on Wednesday that rats fed on Monsanto’s genetically modified corn or exposed to its top-selling weed killer suffered tumors and multiple organ damage and premature death.

    But experts not involved with the study were skeptical, describing the French team’s statistical methods as unconventional and accusing them of going “on a statistical fishing trip”.

    Speaking at a news conference in Brussels on Thursday, Seralini defended the peer-reviewed study, which was published in the journal Food and Chemical Toxicology.

    “This study has been evaluated by the world’s best food toxicology magazine, which took much more time than people who reacted within 24 hours without reading the study,” he told Reuters Television.

    “I’m waiting for criticism from scientists who have already published material in journals… on the effects of GMOs and pesticides on health, in order to debate fairly with peers who are real scientists, and not lobbyists.”

    Earlier, the European Commission said it had asked the EU’s food safety authority, EFSA, to verify the results of the French study and report their findings.

    “EFSA’s mandate is to verify what this group of scientists has presented, to look at their research conditions, look at how the animals were treated,” Commission health spokesman Frederic Vincent told a regular news briefing.

    “We hope that by the end of the year we will have an EFSA opinion on this piece of scientific research.”

    In 2003, EFSA published a safety assessment of the GM corn variety known as NK603, which is tolerant to Monsanto’s Roundup weed killer. The assessment concluded that NK603 was as safe as non-GM corn, after which the European Union granted approval for its use in food and feed.

    Seralini said EFSA’s assessments were less rigorous than his team’s study.

    “GMOs have been evaluated in an extremely poor and lax way with much less analysis than we have done. It’s the world’s most detailed and longest study. Therefore, some people are responsible and guilty of authorizing this GMO after only three months,” he said.

    (Reporting by Clement Rossignol; Writing by Charlie Dunmore; Editing by Hugh Lawson)

    In reference to the following article:

    Farmers accuse Madagascar mining company of killing bees

    By Agence France-Presse Tuesday, September 18, 2012 15:07 EDT


    A swath of farmland around a giant nickel and cobalt mine in Madagascar has been contaminated by pesticides that have wiped out local bee populations, a group of farmers claimed Tuesday.

    The Ambatovy mine, located about 80 kilometres (50 miles) east of the capital Antananarivo, is Madagascar’s largest foreign investment, built at a cost of about $5.5 billion (7.2 billion euros).

    Jean-Louis Berard, the secretary of a local farming and beekeeping association, said a 30-kilometre (20-mile) strip of farmland around the mine has been devastated by Ambatovy’s spraying of pesticides to reduce mosquito populations that pester workers.

    “According to our estimates, 1,000 tonnes of rice and 40 tonnes of honey are lost annually,” Berard said.

    Ambatovy did not immediately respond to a request for comment but has previously defended its environmental stance and claims to be in compliance with the highest standards for protecting the area’s broad biodiversity.

    The mine says it will create 15,000 direct and indirect jobs.

    Berard’s association says two pesticides have caused die-offs at hundreds of beehives and caused other serious environmental damages around the mine.

    According to Ambatovy’s website, construction at the mine was due to finish earlier this year. Ambatovy’s main shareholder is Canadian mining giant Sherritt International, but it also benefits from Japanese and South Korean investors.

    On Friday, it secured a six-month, renewable operating licence, but was asked to pay a $50-million deposit to cover potential environmental damage.

    Tuesday’s claims are the latest in a string of complaints about the mine’s environmental impact.

    Former Justice Souter: ‘Pervasive civic ignorance’ in U.S. could bring dictatorship

    By Eric W. Dolan Monday, September 17, 2012 18:20 EDT


    Retired U.S. Supreme Court Justice David H. Souter thinks the decline of civic education is putting the United States in danger.

    During a question and answer session last week at University of New Hampshire School of Law, Souter described “pervasive civic ignorance” as one of the biggest problems in the United States. He warned that Americans’ ignorance about their own government could lead to a dictatorship.

    “I don’t worry about our losing a republican government in the United States because I’m afraid of a foreign invasion,” he said. “I don’t worry about it because of a coup by the military, as has happened in some other places. What I worry about is that when problems are not addressed people will not know who is responsible, and when the problems get bad enough — as they might do for example with another serious terrorist attack, as they might do with another financial meltdown — some one person will come forward and say ‘Give me total power and I will solve this problem.’”

    “That is how the Roman republic fell,” Souter continued. “Augustus became emperor not because he arrested the Roman senate. He became emperor because he promised that he would solve problems that were not being solved.”

    “If we know who is responsible, I have enough faith in the American people to demand performance from those responsible. If we don’t know, we will stay away from the polls, we will not demand it and the day will come when somebody will come forward and we and the government will in effect say, ‘Take the ball and run with it, do what you have to do.’ That is the way democracy dies.”


    Scores at risk as new breed of mosquito foils malaria prevention methods: There is NO KNOWN DNA match

    Published: 16 September, 2012, 21:14

    Annual deaths could jump by the hundreds of thousands because of a new species of mosquito, which bites people in the early evening rather than at night, making bed nets useless in the battle against malaria.

    The new strain of mosquito, which was discovered in the highlands of western Kenya by scientists from the London School of Hygiene and Tropical Medicine, feeds while people are outside in the early evening, according to a Sunday report by the Independent.

    Malaria is already one of the world’s top killers, with nearly one million people a year dying from the disease.

    And if not for mosquito nets that number would be much higher, as nets prevent the insects from biting at night, when the female anopheles mosquito sucks blood as part of its egg-production cycle. As many as one million people are thought to have dodged death by sleeping under mosquito nets covered with insecticide over the last 12 years.

    Even more distressing is that scientists have as yet been unable to match the DNA of the new species to that of any existing variety.

    Jennifer Stevenson, a scientist in the London School research group, told the Independent, “We observed that many mosquitos we caught – including those infected with malaria – did not physically resemble other known malaria mosquitoes.”

    Stevenson, whose team set up outdoor and indoor traps to catch the species, added, “the main difference that came through from this study is that we caught 70 per cent of these species A – which is what we named them because we don’t know exactly what they are – outdoors before 10:30pm, which is the time when people in the village usually go indoors.”

    Jo Lines, a colleague of Stevenson and a former co-coordinator for the World Health Organization’s global malaria program, also said, “we do not yet know what these unidentified specimens are, or whether they are acting as vectors [transmitters] on a wider scale, but in the study area they are clearly playing a major and previously unsuspected role.”

    Scientists are now calling for wider controls to deal with the outdoor transmission of the disease.

    Andrew Griffiths, from the children’s charity World Vision, said the findings are a setback in the fight against the disease. “It’s concerning because bed nets are one of the important tools in combating malaria and we’ve seen deaths go down dramatically. It would mean that one of the important parts in the response to malaria would be taken away. We have to be talking about protecting yourself at different times of the day and put even more focus on the community and other systems,” he said.

    In a separate development, scientists in the UK and the US are developing genetically modified mosquitos, which could prove effective in the battle against mosquito-borne diseases like malaria.

    NSA whistleblower: Illegal data collection a ‘violation of everybody’s Constitutional rights’: The Story of “ThinThread”

    By Paul Harris, The Guardian Saturday, September 15, 2012 14:53 EDT


    Former National Security Agency official Bill Binney says US is illegally collecting huge amounts of data on his fellow citizens

    Bill Binney believes he helped create a monster.

    Sitting in the innocuous surroundings of an Olive Garden in the Baltimore suburbs, the former senior National Security Agency (NSA) official even believes he owes the whole American people an apology.

    Binney, a tall, professorial man in his late 60s, led the development of a secret software code he now believes is illegally collecting huge amounts of information on his fellow citizens. For the staunch Republican, who worked for 32 years at the NSA, it is a civil liberties nightmare come true.

    So Binney has started speaking out as an NSA whistleblower – an act that has earned him an armed FBI raid on his home. “What’s happening is a violation of the constitutional rights of everybody in the country. That’s pretty straightforward. I could not be associated with it,” he told the Guardian.

    Binney, a career NSA employee who first volunteered for the army in the mid-1960s, has now become a high-profile thorn in the side of NSA chiefs who deny the programme’s existence.

    At a hacking conference this summer in Las Vegas, NSA director General Keith Alexander said the NSA “absolutely” did not keep files on Americans.

    “Anyone who would tell you that we’re keeping files or dossiers on the American people knows that’s not true,” the NSA chief told an audience of computer and security experts. But Binney himself was at the same conference and publicly accused Alexander of playing a “word game”.

    “Once the software takes in data, it will build profiles on everyone in that data,” he told a convention panel there.

    Binney’s outspokenness has earned him media appearances on shows across America’s political spectrum ranging from ultra-conservative Glenn Beck’s TV show to the liberal radio icon of Democracy Now.

    “This is not a political issue. People on both sides are concerned,” Binney said.

    The story Binney tells is one of extreme over-reaction by America’s national security establishment post-9/11. He recounts developing a small software system, called ThinThread, in the late 1990s at the NSA where he was the technical director of the organisation’s 6,000-strong World Geopolitical and Military Analysis Reporting Group.

    ThinThread correlated data from emails, phone calls, credit card payments and Internet searches and stored and mapped it in ways that could be analysed.

    Binney wanted to use ThinThread to track foreign threats but it worked too well and kept catching data on Americans too.

    So Binney’s team built in safeguards that encrypted that data. But, by 2000, the NSA decided to go with developing a larger scale programme called Trailblazer to be built by outside contractors (that eventually failed to make it past the design stage) and ThinThread was effectively mothballed.

    Then September 11 happened. Within a few weeks, Binney says, he realised parts of ThinThread were now being used by the NSA in a massive and secret surveillance operation.

    But his safeguards had been removed allowing for far more targeted surveillance of American citizens. “I knew the dangers so I built in protections. And you could still find the bad guys with the protections in it. But that wasn’t what they wanted so they took those things out,” Binney said.

    Binney quickly left the agency and kept his silence. But that was not the end of the story. In late 2005, the New York Times broke the story that the NSA was engaged in large-scale warrantless electronic surveillance.

    The scandal eventually led to the passing of amendments to the Foreign Intelligence Surveillance Act in 2008 which, many critics say, simply gave legal protection to the agency’s data-mining operations.

    The programme has thus effectively continued under the Obama administration, which has launched a ruthless crackdown on national security whistleblowers, especially those leaking NSA secrets.

    Binney gradually began to protest behind the scenes. Yet that earned him an FBI raid by armed agents as he showered at his home. “Here’s a guy coming into my shower and pointing a gun at me. I’d been co-operating with these people. Why are they doing this?” he said.

    Over the past year Binney has gone fully public, detailing what he believes is a massive effort under the Obama administration to collect virtually all electronic data in the country, from Facebook posts to Google searches to emails.

    It is a deeply secret programme, Binney says, that is called Stellar Wind. He points to the NSA’s creation of a giant data centre at Bluffdale in Utah as part of the system.

    The gigantic building is set to cost $2bn and be up and running by 2013.

    It is being designed to store huge amounts of accessible web information – such as social media updates – but also information in the “deep web” behind passwords and other firewalls that keep it away from the public.

    As an example of Stellar Wind’s power, Binney believes it is hoovering up virtually every email sent by every American and perhaps a good deal of the people of the rest of the world, too.

    “I didn’t expect it from my government. I thought we were the good guys. We wear white hats, right?” he said.

    For Binney, Bluffdale is a symbol that the national security policy conducted by Obama has been little different than that of Bush.

    Obama has renewed the Patriot Act, tried to broaden the powers of detention of American citizens for national security reasons, and deployed the anti-spy Espionage Act more times than all other presidents combined.

    “They are still continuing the same programmes – actually, Obama is doing more in some areas,” Binney said. Nor is Binney optimistic of rolling back the surveillance.

    Last week the House of Representatives voted for a five-year extension to the controversial 2008 FISA amendments.

    Yet Binney believes there has been too much of a sacrifice of civil liberties in order to fight terrorism. “People should feel the ability to go out there and do anything that they want to without being looked at all the time. Monitored. Watched,” he said.

    “The terrorists win, OK? We’ve lost because we have destroyed our society just to combat them and there was really no reason to do that.”

    Binney is also determined to keep on speaking out. “I don’t see any other recourse. Everybody needs to wake up to what we are doing here and whether we want it or not. There is a big hole at the end of this tunnel and it drops off to nowhere,” he said.


    Malware being installed on computers in factories, warns Microsoft: “found forged versions of Windows on all the machines”

    Researchers find malware pre-installed on brand new computers bought in China

    Associated Press, Friday 14 September 2012 07.41 EDT

    Microsoft investigator David Anselmi shows how malware can wind up on consumer computers. Photograph: Elaine Thompson/AP

    Criminals are installing malware on PCs before they leave the factory, according to Microsoft.

    Microsoft researchers in China investigating the sale of counterfeit software found malware pre-installed on four of 20 brand new desktop and laptop PCs they bought for testing. They found forged versions of Windows on all the machines.

    The worst piece of malicious software they found is called Nitol, an aggressive virus found on computers in China, the US, Russia, Australia and Germany. Microsoft has even identified servers in the Cayman Islands controlling Nitol-infected machines.

    All these affected computers become part of a botnet – a collection of compromised computers – one of the most invasive and persistent forms of cybercrime.

    The findings were revealed in court documents unsealed on Thursday in a federal court in Virginia. The records describe a new front in a legal campaign against cybercrime being waged by the maker of the Windows operating system, which is the biggest target for viruses.

    The documents are part of a computer fraud lawsuit filed by Microsoft against a web domain registered to a Chinese businessman named Peng Yong.

    The company says it is a major hub for illicit Internet activity. The domain is home base for Nitol and more than 500 other types of malware, making it the largest single repository of infected software that Microsoft officials have ever encountered.

    Peng, the owner of an internet services firm, said he was not aware of the Microsoft lawsuit but he denied the allegations and said his company did not tolerate improper conduct on the domain.

    Three other unidentified individuals accused by Microsoft of establishing and operating the Nitol network are also named in the suit.

    What emerges most vividly from the court records and interviews with Microsoft officials is a disturbing picture of how vulnerable web users have become, in part because of weaknesses in computer supply chains.

    To increase their profit margins, less reputable computer manufacturers and retailers may use counterfeit copies of popular software products to build machines more cheaply.

    Plugging the holes is nearly impossible, especially in less regulated markets such as China, and that leaves openings for cybercriminals.

    Paul Davis, director of Europe at security company FireEye, said hackers had upped their game and taken cybercrime to the next level.

    “According to Microsoft, some of the malware was capable of remotely turning on an infected computer’s microphone and video camera, posing a serious cyber espionage issue for consumers and businesses alike,” he said.

    “If the exploitation of supply chain vulnerabilities should become an emerging trend, it should be taken very seriously indeed, as the impact could be far-reaching, costly and destructive.

    “When people buy a new PC, they often expect that machine to be secure out of the box. The fact that malware is being inserted at such an early stage in the product lifecycle turns this on its head and unfortunately means that no matter how discerning a user is online, their caution becomes irrelevant if that PC is already tainted.”

    Mark James, technical team leader at UK security company ESET, said that apart from installing the operating system yourself, as well as good antivirus software, there was not a lot users could do to protect against this type of abuse.

    “If the machine is already infected and talking to the outside world, the end user may be unaware and accept any strange occurrences as normal for a new machine,” he said.

    “Often the end user notices when a new machine becomes infected and slower, but in this scenario may not notice until a specific problem arises. I would hope a business environment would have a procedure in place to test new machines for any kind of infection before they are added to the domain or work environment, using a good antivirus program.”

    Nitol, meanwhile, appears poised to strike. Infection rates have peaked, according to Patrick Stratton, a senior manager in Microsoft’s digital crimes unit who filed a document in the court case explaining Nitol and its connection to the domain.

    For Microsoft, pursuing cybercriminals is a smart business – the Windows operating system runs most of the computers connected to the internet.

    Victims of malware are likely to believe their problems stem from Windows instead of a virus they are unaware of, and that damages the company’s brand and reputation.

    The investigation by Microsoft’s digital crimes unit began in August 2011 as a study into the sale and distribution of counterfeit versions of Windows.

    Microsoft employees in China bought 20 new computers from retailers and took them back to a home with a web connection.

    They found forged versions of Windows on all the machines and malware pre-installed on four. The one with Nitol, however, was the most alarming because the malware was active.

    “As soon as we powered on this particular computer, of its own accord without any instruction from us, it began reaching out across the internet, attempting to contact a computer unfamiliar to us,” Stratton said in the document filed with the court.

    Stratton and his colleagues also found Nitol to be highly contagious. They inserted a thumb drive into the computer and the virus immediately copied itself onto it. When the drive was inserted into a separate machine, Nitol quickly copied itself on to it.

    Microsoft examined thousands of samples of Nitol, which has several variants, and all of them connected to command-and-control servers associated with the domain, according to the court records.

    “In short, [the domain] is a major hub of illegal Internet activity, used by criminals every minute of every day to pump malware and instructions to the computers of innocent people worldwide,” Microsoft said in its lawsuit.

    Peng, the domain’s registered owner, said he had zero tolerance for the misuse of domain names and works with Chinese law enforcement whenever there are complaints. Still, he said, his huge customer base made policing difficult.

    “Our policy unequivocally opposes the use of any of our domain names for malicious purposes,” Peng said in a private chat via Sina Weibo. “We currently have 2.85m domain names and cannot exclude that individual users might be using domain names for malicious purposes.”

    The domain accounted for more than 17% of the world’s malicious web transactions in 2009, according to Zscaler, a computer security firm in San Jose. In 2008, Russian security company Kaspersky Lab reported that 40% of all malware programs, at one point or another, connected to the domain.

    US district judge Gerald Bruce Lee, who is presiding in the case, granted a request from Microsoft to begin steering web traffic from the domain that has been infected by Nitol and other malware to a special site called a sinkhole.

    From there, Microsoft can alert affected computer users to update their anti-virus protection and remove Nitol from their machines.
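    The sinkhole technique described here amounts to answering lookups for seized malicious domains with an address the defender controls, so infected machines reveal themselves the moment they try to phone home. Below is a minimal hypothetical sketch of that idea; the domain names and addresses are invented (using reserved documentation ranges), not Microsoft’s actual infrastructure.

```python
SINKHOLE_ADDR = "192.0.2.1"  # documentation-range IP standing in for the sinkhole

# Domains a court order allows the defender to redirect (hypothetical examples)
blocked_domains = {"malware-cnc.example", "nitol-c2.example"}

sinkhole_log = []  # record of infected machines that tried to phone home

def resolve(domain: str, client_ip: str) -> str:
    """Return the address a client should connect to.

    If the domain is on the blocked list, answer with the sinkhole
    address and log the client so it can later be notified to clean up.
    """
    if domain in blocked_domains:
        sinkhole_log.append((client_ip, domain))
        return SINKHOLE_ADDR
    return "203.0.113.7"  # placeholder for a normal answer

addr = resolve("nitol-c2.example", "198.51.100.42")
print(addr)  # 192.0.2.1 -- the bot is now talking to the defender, not its master
```

    This is why the article can speak of “37m malware connections” being counted: each check-in by an infected machine lands in the sinkhole’s log instead of reaching the botnet’s controllers.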

    Since Lee issued the order, more than 37m malware connections have been blocked from the domain, according to Microsoft.

    Harvard pediatrics professor arrested after police found ‘up to 100 DVDs and 500 images of child porn at his home’

    • Dr Richard Keller was medical director at Phillips Academy for 19 years
    • Spent almost $3,000 on child porn over two years
    • Some pornographic content was delivered to his office at the boarding school
    • Faces 20 years in jail

    By Rachel Quigley

    PUBLISHED: 15:14 EST, 13 September 2012 | UPDATED: 15:14 EST, 13 September 2012

    Dr Richard Keller, 56, of Andover, ‘knowingly received films depicting minors engaged in sexually explicit conduct’

    A pediatrician at Boston Children’s Hospital was arrested today after police found almost 100 child pornography DVDs and more than 500 similar images at his home.

    Dr Richard Keller, 56, of Andover, Massachusetts, ‘knowingly received films depicting minors engaged in sexually explicit conduct’, according to the U.S. attorney’s office.

    Keller was the medical director at Phillips Academy for almost 20 years before leaving in 2011.

    Phillips Academy is a highly prestigious boarding school for students in grades nine to 12 and the oldest incorporated academy in the United States, established in 1778 by Samuel Phillips, Jr.

    Andover has educated two American presidents, George H. W. Bush and George W. Bush.

    Keller is also a pediatrics instructor at Harvard Medical School.

    In a statement released this afternoon, federal prosecutors said Keller ‘purchased and ordered over 50 DVDs of child pornography online.

    ‘At this time, more than 500 photographs and between 60 and 100 DVDs have been recovered during an ongoing search of Dr Keller’s home today.’

    The case began with an investigation of an overseas movie production company that offered films featuring naked young boys having food fights, wrestling, showering together and playing Twister.

    When investigators accessed the company website and reviewed film previews and summaries, one of the videos was described as: ‘We bring you action-packed discs of ooey-gooey slippery goodness. This two set disc features (name) and his buddies going commando in a very unique way.

    ‘They’re sweet enough but that didn’t stop them from breaking out sugary cupcakes and giving you a whole new perspective on nudist food fighting.’

    An investigation into the pediatric endocrinologist’s account showed that between January 2009 and July 2011, he ordered titles on 19 separate occasions, spending $2,695.

    On five of the 19 occasions, the pornographic material was sent to Phillips Academy’s student health center, according to the arrest warrant.

    Concerns: Keller was placed on immediate leave from Boston Children’s Hospital pending an investigation
    Famous boarding school: Keller worked at prestigious Phillips Academy in Andover, Massachusetts, for almost 20 years before he left last year

    It also states that 60 to 100 DVDs and 500 high-gloss images were seized from his bedroom.

    If Keller is convicted of all charges, he faces up to 20 years in prison and will be placed on the sex offenders’ register for life.

    Children’s Hospital spokesman Rob Graham released this statement: ‘Providing safe and appropriate care in a safe and protective environment is the absolute paramount priority for Boston Children’s Hospital.

    ‘When the hospital learned of the allegations against Dr Richard Keller earlier today, he was immediately put on administrative leave pending results of the investigation by the US Attorney’s Office. We will cooperate fully with the US Attorney’s Office and all other involved regulatory and legal authorities.

    ‘No complaints or concerns have been expressed by any patients or family members about the care Dr Keller provided while he was at Children’s.’

    Because Keller would have been in contact with countless minors through the nature of his work, the U.S. Attorney’s Office said: ‘Members of the public who have questions, concerns or information regarding this case should call 617-748-3274, and messages will be promptly returned.’


    How China and US ‘secretly tested genetically modified golden rice on children’

    By Daily Mail Reporter

    PUBLISHED: 07:13 EST, 11 September 2012 | UPDATED: 07:22 EST, 11 September 2012

    Genetically manipulated Golden rice has been proposed as a solution to vitamin A deficiency

    China’s health authorities are investigating allegations that genetically modified rice has been tested on Chinese children as part of a research project.

    A recent scientific publication suggested that researchers, backed by the US Department of Agriculture, fed experimental genetically engineered golden rice to 24 children in China aged between six and eight years old.

    The environmental group Greenpeace is demanding a stop to field trials of the genetically enriched rice, which has been proposed as a solution to vitamin A deficiency, as it says the rice carries environmental and health risks.

    China is the world’s largest grower of genetically modified (GMO) cotton and the top importer of GMO soybeans but, while Beijing has already approved home-grown strains of GMO rice, it remains cautious about introducing the technology on a commercial basis amid widespread public concern about food safety.

    The Chinese Centre for Disease Control and Prevention investigation came after a report last month by environmental group Greenpeace that claimed a U.S. Department of Agriculture-backed study used 24 Chinese children aged between six and eight to test genetically modified ‘golden rice’.

    The International Rice Research Institute is working with leading nutrition and agricultural research organisations to develop golden rice as a potential method to reduce vitamin A deficiency in the Philippines and Bangladesh.

    The research by Tufts University and Chinese scientists was published in the American Journal of Clinical Nutrition in August. It aimed to demonstrate that the rice could provide a good source of vitamin A for children in countries where deficiency in the vitamin is common.

    Andrea Grossman, assistant director of public relations at Tufts University, told state news agency Xinhua that the university was deeply concerned about the allegations and is reviewing protocols used in the 2008 research.

    ‘We have always placed the highest importance on human health, and we take all necessary steps to ensure the safety of human research subjects,’ Grossman said.

    ‘We have always been and remain committed to the highest ethical standards in research.’

    The Greenpeace report sparked a wave of criticism on Weibo, China’s version of Twitter, with the researchers accused of a breach of ethics for testing poor, rural children whose families may not have been informed properly.

    One of the Chinese authors, Shi-an Yin, has been suspended from work pending further investigation after his responses proved to be inconsistent.

    Yin was cited by the official People’s Daily newspaper as saying he helped collect data for the study but was unaware that it involved GM rice.

    The second of the two Chinese researchers, Hu Yuming, denied his involvement in the research, the People’s Daily said.

    China, the world’s top rice producer and consumer, approved the safety of one locally developed strain of genetically modified rice, known as Bt rice, in 2009, but commercial production has been delayed.

    Apart from genetically modified products, China’s vast food sector is still struggling to come to grips with food safety four years after a major scandal in which tainted milk powder was blamed for the deaths of at least six children.


    IOM states “roughly $750 billion — was wasted on unnecessary services, excessive administrative costs, fraud, and other problems” in 2009

    Date:  Sept. 6, 2012


    Transformation of Health System Needed to Improve Care and Reduce Costs

    WASHINGTON — America’s health care system has become too complex and costly to continue business as usual, says a new report from the Institute of Medicine.  Inefficiencies, an overwhelming amount of data, and other economic and quality barriers hinder progress in improving health and threaten the nation’s economic stability and global competitiveness, the report says.  However, the knowledge and tools exist to put the health system on the right course to achieve continuous improvement and better quality care at lower cost, added the committee that wrote the report.

    The costs of the system’s current inefficiency underscore the urgent need for a systemwide transformation.  The committee calculated that about 30 percent of health spending in 2009 — roughly $750 billion — was wasted on unnecessary services, excessive administrative costs, fraud, and other problems.  Moreover, inefficiencies cause needless suffering.  By one estimate, roughly 75,000 deaths might have been averted in 2005 if every state had delivered care at the quality level of the best performing state.
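    The committee’s $750 billion figure follows directly from its 30 percent estimate, assuming total US health spending in 2009 of roughly $2.5 trillion (the commonly cited national health expenditure figure; an assumption here, since the report excerpt does not state the total):

```python
# Back-of-the-envelope check of the committee's waste figure.
# The $2.5 trillion total is an assumption, not stated in the excerpt.
total_spending = 2.5e12   # dollars, US health spending, 2009 (assumed)
waste_fraction = 0.30     # "about 30 percent", per the report

waste = total_spending * waste_fraction
print(round(waste / 1e9))  # 750 -- billions of dollars, matching the report
```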

    Incremental upgrades and changes by individual hospitals or providers will not suffice, the committee said.  Achieving higher quality care at lower cost will require an across-the-board commitment to transform the U.S. health system into a “learning” system that continuously improves by systematically capturing and broadly disseminating lessons from every care experience and new research discovery.  It will necessitate embracing new technologies to collect and tap clinical data at the point of care, engaging patients and their families as partners, and establishing greater teamwork and transparency within health care organizations.  Also, incentives and payment systems should emphasize the value and outcomes of care.

    “The threats to Americans’ health and economic security are clear and compelling, and it’s time to get all hands on deck,” said committee chair Mark D. Smith, president and CEO, California HealthCare Foundation, Oakland.  “Our health care system lags in its ability to adapt, affordably meet patients’ needs, and consistently achieve better outcomes.  But we have the know-how and technology to make substantial improvement on costs and quality.  Our report offers the vision and road map to create a learning health care system that will provide higher quality and greater value.”

    The ways that health care providers currently train, practice, and learn new information cannot keep pace with the flood of research discoveries and technological advances, the report says.  How health care organizations approach care delivery and how providers are paid for their services also often lead to inefficiencies and lower effectiveness and may hinder improvement.

    Better use of data is a critical element of a continuously improving health system, the report says.  About 75 million Americans have more than one chronic condition, requiring coordination among multiple specialists and therapies, which can increase the potential for miscommunication, misdiagnosis, potentially conflicting interventions, and dangerous drug interactions.  Health professionals and patients frequently lack relevant and useful information at the point of care where decisions are made.  And it can take years for new breakthroughs to gain widespread adoption; for example, it took 13 years for the use of beta blockers to become standard practice after they were shown to improve survival rates for heart attack victims.

    Mobile technologies and electronic health records offer significant potential to capture and share health data better.  The National Coordinator for Health Information Technology, IT developers, and standard-setting organizations should ensure that these systems are robust and interoperable, the report says.  Clinicians and care organizations should fully adopt these technologies, and patients should be encouraged to use tools, such as personal health information portals, to actively engage in their care.

    Health care costs have increased at a greater rate than the economy as a whole for 31 of the past 40 years.  Most payment systems emphasize volume over quality and value by reimbursing providers for individual procedures and tests rather than paying a flat rate or reimbursing based on patients’ outcomes, the report notes.  It calls on health economists, researchers, professional societies, and insurance providers to work together on ways to measure quality performance and design new payment models and incentives that reward high-value care.

    Although engaging patients and their families in care decisions and management of their conditions leads to better outcomes and can reduce costs, such participation remains limited, the committee found.  To facilitate these interactions, health care organizations should embrace new tools to gather and assess patients’ perspectives and use the information to improve delivery of care.  Health care product developers should create tools that assist people in managing their health and communicating with their providers.

    Increased transparency about the costs and outcomes of care also boosts opportunities to learn and improve and should be a hallmark of institutions’ organizational cultures, the committee said.  Linking providers’ performance to patient outcomes and measuring performance against internal and external benchmarks allows organizations to enhance their quality and become better stewards of limited resources, the report says.  In addition, managers should ensure that their institutions foster teamwork, staff empowerment, and open communication.

    The report was sponsored by the Blue Shield of California Foundation, Charina Endowment Fund, and Robert Wood Johnson Foundation.  Established in 1970 under the charter of the National Academy of Sciences, the Institute of Medicine provides objective, evidence-based advice to policymakers, health professionals, the private sector, and the public.  The Institute of Medicine, National Academy of Sciences, National Academy of Engineering, and National Research Council together make up the independent, nonprofit National Academies.  For more information, visit the National Academies website.  A committee roster follows.


    Christine Stencel, Senior Media Relations Officer

    Luwam Yeibio, Media Relations Assistant

    Office of News and Public Information

    202-334-2138


    Pre-publication copies of Best Care at Lower Cost: The Path to Continuously Learning Health Care in America are available from the National Academies Press; tel. 202-334-3313 or 1-800-624-6242, or on the Internet.  Reporters may obtain a copy from the Office of News and Public Information (contacts listed above).

    Harvard psychology professor ‘faked data and fudged results in monkey experiments’

    • Marc Hauser, 52, researched evolutionary roots of human abilities
    • Probe by Office of Research Integrity found Hauser responsible for six cases of scientific misconduct
    • Allegedly fabricated data in a paper on monkeys’ ability to learn syllables
    • Currently works with at-risk youth at Cape Cod Collaborative

    By Daily Mail Reporter

    PUBLISHED: 17:26 EST, 5 September 2012 | UPDATED: 17:48 EST, 5 September 2012


    Fall from grace: A federal probe found former Harvard psychology professor Marc Hauser to be responsible for six cases of research misconduct

    Federal investigators have found that a Harvard University psychology professor who resigned after being accused of scientific transgression fabricated data and manipulated results in experiments.

    The findings detailing Marc Hauser’s transgressions were contained in a report by the Department of Health and Human Services Office of Research Integrity (ORI) released Wednesday online.

    Hauser, 52, left Harvard last summer, ten months after a faculty investigation found him ‘solely responsible’ for eight instances of scientific misconduct at the prestigious Ivy League school, the Boston Globe reported.

    The federal document found six cases in which Hauser engaged in research misconduct in work supported by four National Institutes of Health grants. One paper was retracted and two were corrected. Other problems were found in unpublished work.

    Hauser released a statement Wednesday, saying that although he has fundamental differences with the findings, he acknowledges that he made mistakes.

    ‘I let important details get away from my control, and as head of the lab, I take responsibility for all errors made within the lab, whether or not I was directly involved,’ he stated.

    ‘I am saddened that this investigation has caused some to question all of my work, rather than the few papers and unpublished studies in question.

    ‘I remain proud of the many important papers generated by myself, my collaborators and my students over the years. I am also deeply gratified to see my students carve out significant areas of research at major universities around the world,’ Hauser said.

    Internal probe: A three-year investigation at Harvard found in 2010 that Hauser had committed eight instances of scientific misconduct

    In one instance, investigators determined that Hauser fabricated half the data in a bar graph in a research paper on cotton-top tamarin monkeys’ ability to learn syllables that was published in 2002 in the journal Cognition.

    According to the findings of the probe, the 52-year-old professor ‘falsified the coding’ of some monkeys’ responses to sound stimuli in two unpublished papers to ensure that a particular finding did not appear random, and ‘falsely reported the results and methodology’ for one of seven experiments in a paper published in Proceedings of the Royal Society B in 2007.

    A paper examining monkeys’ abilities to learn grammatical patterns included false descriptions of how the monkeys’ behavior was coded, ‘leading to a false proportion or number of animals showing a favorable response,’ the report stated.

    The document, which will be published in the Federal Register Thursday, comes on the heels of a three-year internal investigation at Harvard that found in 2010 that the popular professor had committed eight instances of scientific misconduct.

    According to the report, Hauser ‘neither admits nor denies committing research misconduct but accepts ORI has found evidence of research misconduct.’ As part of a voluntary settlement, the former professor has accepted several professional restrictions for the next three years.

    New job: Hauser currently works with at-risk youth at the Alternative Education Program at Cape Cod Collaborative

    Hauser has agreed to have any research supported by the U.S. Public Health Service supervised, to exclude himself from serving as an adviser to the PHS, and to have his employer guarantee the accuracy of his data and methodology before applying for federal funding.

    According to his LinkedIn profile, Hauser currently works with at-risk youth at the Alternative Education Program at Cape Cod Collaborative.

    Hauser, a published author of popular books favored by the media, gained wide recognition for his research into the evolutionary roots of human abilities, including language, and whether morality was inborn or learned.

    Less than a week ago, Hauser’s former Ivy League employer made national headlines after it was revealed that 125 students at the prestigious institution are being investigated for allegedly cheating on a take-home exam.
