Agnotology: study of disinformation propagation

This is a cracker!

Clive Thompson on How More Info Leads to Less Knowledge

Is global warming caused by humans? Is Barack Obama a Christian? Is evolution a well-supported theory?

You might think these questions have been incontrovertibly answered in the affirmative, proven by settled facts. But for a lot of Americans, they haven’t.

…What’s going on? Normally, we expect society to progress, amassing deeper scientific understanding and basic facts every year. Knowledge only increases, right?

Robert Proctor doesn’t think so. A historian of science at Stanford, Proctor points out that when it comes to many contentious subjects, our usual relationship to information is reversed: Ignorance increases.

He has developed a word inspired by this trend: agnotology. Derived from the Greek root agnosis, it is “the study of culturally constructed ignorance.”

As Proctor argues, when society doesn’t know something, it’s often because special interests work hard to create confusion…when the dust settles, society knows less than it did before.

“People always assume that if someone doesn’t know something, it’s because they haven’t paid attention or haven’t yet figured it out,” Proctor says. “But ignorance also comes from people literally suppressing truth—or drowning it out—or trying to make it so confusing that people stop caring about what’s true and what’s not.”

After years of celebrating the information revolution, we need to focus on the countervailing force: The disinformation revolution. The ur-example of what Proctor calls an agnotological campaign is the funding of bogus studies by cigarette companies trying to link lung cancer to baldness, viruses—anything but their product.

…Maybe the Internet itself has inherently agnotological side effects. People graze all day on information tailored to their existing worldview. And when bloggers or talking heads actually engage in debate, it often consists of pelting one another with mutually contradictory studies they’ve Googled: “Greenland’s ice shield is melting 10 years ahead of schedule!” vs. “The sun is cooling down and Earth is getting colder!”

As Farhad Manjoo notes in True Enough: Learning to Live in a Post-Fact Society, if we argue about what a fact means, we’re having a debate. If we argue about what the facts are, it’s agnotological Armageddon, where reality dies screaming.

Can we fight off these attempts to foster ignorance? Despite his fears about the Internet’s combative culture, Proctor is optimistic. During last year’s election, campaign-trail lies were quickly exposed via YouTube and transcripts. The Web makes secrets harder to keep.

We need to fashion information tools that are designed to combat agnotological rot. Like Wikipedia: It encourages users to build real knowledge through consensus, and the result manages to (mostly) satisfy even people who hate each other’s guts. Because the most important thing these days might just be knowing what we know.

Nassim Nicholas Taleb is damn right when he advises us to avoid the media.

“As Steve Pinker aptly said, our mind is made for fitness, not for truth — but fitness for a different probabilistic structure. Which tricks work? Here is one: avoid the media. We are not rational enough to be exposed to the press.” – “Learning to Expect the Unexpected”, Edge.org

The signal to noise ratio is massively out of kilter in favour of noise. In the marketplace of ideas the truth – so often counter-intuitive, hard to explain or requiring education – loses out to sound bites and propaganda. Is this what informational entropy looks like? Memetic poison and toxic disinformation leaking out of echo chambers generating confusion and Flat Earth News?

See also:

Daily Me
Echo Chamber
Flat Earth News

Analysis, Second Order Effects and Black Swans

Kevin Kelly has a super interesting section of his upcoming book “The Technium” devoted to what he calls “The Pro-Actionary Principle”:

The current default algorithm for testing new technologies is the Precautionary Principle. There are several formulas of the Precautionary Principle but all variations of this heuristic hold this in common: a technology must be shown to do no harm before it is embraced. It must be proven to be safe before it is disseminated. If it cannot be proven safe, it should be prohibited, curtailed, modified, junked, or ignored. In other words, the first response to a new idea should be inaction until its safety is established. When an innovation appears, we should pause. The second step is to test it offline, in a model, or in any non-critical, safe, lowest-risk manner. Only after it has been deemed okay should we try to live with it.

Unfortunately the Precautionary Principle doesn’t work as a reliable safeguard. Because of the inherent uncertainties in any model, laboratory, simulation, or test, the only reliable way to assess a new technology is to let it run in place. It has to be exercised sufficiently that it can begin to express secondary effects. When a technology is first cautiously tested soon after its birth only its primary effects are being examined. But it is the unintended second-order effects of technologies that are usually the root of most problems. Second order effects often require a certain density, a semi-ubiquity, to reveal themselves. The main concern of the first automobiles was for the occupants — that the gas engines didn’t blow up, or that the brakes didn’t fail. But the real threat of autos was to society en masse — the accumulated exposure to their minute pollutants and ability to kill others at high speeds, not to mention the disruptions of suburbs, and long commutes – all second order effects.

Second order effects – the ones that usually overtake society – are rarely captured by forecasts, lab experiments, or white papers.

…The absence of second-order effects in small precise experiments, and our collective impulse to adapt technology as we use it, make reliable models of advanced technological innovations impossible. An emerging technology must be tested in action, and evaluated in real time. In other words the risks of a particular technology have to be determined by trial and error in real life. We can think of this vetting-by-action algorithm as the Proactionary Principle.

[The Pro-Actionary Principle by Kevin Kelly]

Nassim Nicholas Taleb, author of The Black Swan, agrees:

Taleb believes in tinkering – it was to be the title of his next book. Trial and error will save us from ourselves because they capture benign black swans. Look at the three big inventions of our time: lasers, computers and the internet. They were all produced by tinkering and none of them ended up doing what their inventors intended them to do. All were black swans. The big hope for the world is that, as we tinker, we have a capacity for choosing the best outcomes.

“We have the ability to identify our mistakes eventually better than average; that’s what saves us.” We choose the iPod over the Walkman. Medicine improved exponentially when the tinkering barber surgeons took over from the high theorists. They just went with what worked, irrespective of why it worked. Our sense of the good tinker is not infallible, but it might be just enough to turn away from the apocalypse that now threatens Extremistan.

[Times Online, 1st June 2008]

There seems to be some sort of backlash against the deluge of “Analysis”, especially “Risk Analysis”. Right now the signal to noise ratio in public discussion – especially about futurity, policy and risks – is heavily dominated by noise. As Charlie Edwards from Global Dashboard put it recently:

Do we need to call ‘time out’ on global risk analysis?  The NIC report on global trends 2025 is one of a plethora of recent publications on global risks and security challenges from think tanks, Government departments, the defence community, NGOs, business, academia, and the media. Do we really need any more?

3 questions spring to mind:

1. Are we suffocating under the weight of all this analysis?
2. Should we consider having a period of consolidation and reflection?
3. Do we need a transformational shift from analysis to action?

[The Seduction of Analysis, Global Dashboard, 25th November 2008]

This is a theme explored by sociologist and skeptic Frank Furedi writing in the Times Higher Education:

As someone devoted to academic research, I feel increasingly embarrassed when I encounter the words “research shows” in a newspaper article. The status of research is not only exploited to prove the obvious, but also to validate the researcher’s political beliefs, lifestyle and prejudice.

…advocacy research has now acquired an unprecedented significance in Western culture. One important driver of its expansion is the growing significance that people attach to their lifestyles. The very subjects that advocacy research addresses suggest that lifestyle issues such as emotional orientation, parenting styles and the management of relations have become increasingly politicised.

In a world where lifestyle has unprecedented significance, people seek to endow it with moral worth. So it matters when a study concludes that children of gay parents “do just fine” or that single mothers’ sons can succeed at school, or that marriage protects elderly adults from mental illness.

Naturally, academics also take their lifestyles very seriously. But it is important that we resist the temptation to discover the moral worth of our lifestyle through our research. And maybe we should take the lead in informing the public that when they see the words “research shows”, they should assume the role of a sceptic.

[The Times Higher Education, 20th November 2008]

I see some themes developing here: advocacy research, journalism of attachment, flat earth news and cognitive biases all mutating and amplifying in recursive reinforcing feedback loops. It is some sort of incestuous emergence that generates confusion and entropy. These confusions and false choices are paralysing us, all of us, at precisely the time when urgent action is required in multiple domains.

Kevin Kelly again:

Technologies must be evaluated in action, by action. We test them in labs, we try them out in prototypes, we use them in pilot programs, we adapt our expectations, we monitor their alterations, we redefine their aims as they are modified, we retest them given actual behavior, we re-direct them to new jobs when we are not happy with their outcomes.

Of course we should forecast, anticipate and minimize known problems from the start.

All technologies will generate problems. None are problem free. All have social costs. And all technologies will cause disruptions to other technologies around them and may diminish technological benefits elsewhere. The problems of a new technology have to be weighed, balanced, and minimized but they cannot be fully eliminated.

Furthermore the costs of inaction (the default response called for by the Precautionary Principle) have to be weighed together with the costs of action. Inaction will also generate problems and unintended effects. In a very fast-changing environment the status quo has substantial hidden penalties that might only become visible over time. These costs of inaction need to be added into the equations of evaluation.

Kelly then goes on to list the 5 Pro-actions that form the basis of the Pro-Actionary Principle (which in turn is a revision of Max More’s original):

1. Anticipation

All tools of anticipation are valid. The more techniques we use the better because different techniques fit different technologies. Scenarios, forecasts and outright science fiction can give partial pictures. Objective scientific measurement of models, simulations, and controlled experiments should carry greater weight, but these too are only partial. The process should try to imagine as many horrors as glories, and if possible to anticipate ubiquity; what happens if everyone has this for free? Anticipation should not be a judgment. Rather the purpose of anticipation is to prepare a base for the next four steps. It is a way to rehearse future actions.

2. Continuous assessment

We have increasing means to quantifiably test everything we use all the time. By means of embedded technology we can turn daily use of technologies into large scale experiments. No matter how much a new technology is tested at first, it should be constantly retested in real time. We also have more precise means of niche-testing, so we can focus on susceptible neighborhoods, subcultures, gene pools, use patterns, etc. Testing should also be continuous, 24/7 rather than the traditional batch mode. Further, new technology allows citizen-driven concerns to surface into verifiable science by means of self-organized assessments. Testing is active and not passive. Constant vigilance is baked into the system.

3. Prioritize risks, including natural ones

Risks are real, but endless. Not all risks are equal. They must be weighted and prioritized. Known and proven threats to human and environmental health are given precedence over hypothetical risks.

Furthermore the risks of inaction and the risks of natural systems must be treated symmetrically. In More’s words: “Treat technological risks on the same basis as natural risks; avoid underweighting natural risks and overweighting human-technological risks.”

4. Rapid restitution of harm

When things go wrong – and they always will – harm should be compensated quickly in proportion to actual damages. Penalizing for hypothetical harm or even potential harm demeans justice and weakens the system, reducing honesty and penalizing those who act in good faith. Mechanisms for actively fixing harms of current technologies indirectly aid future technologies, because they permit errors to be corrected more quickly. The expectation that any given technology will create harms of some sort (not unlike bugs) that must be remedied should be part of technology creation.

5. Redirection rather than prohibition

Prohibition does not work with technology. Absolute prohibition produces absolute outlaws. In a review of past attempts to ban technology, I discovered that most technologies could only be temporarily displaced. Either they moved to somewhere else on the planet, or they moved into a different niche. The contemporary ban on nuclear weapons has not eliminated them from the planet at all. Bans of genetically modified foods have only displaced these crops to other continents. Bans on hand guns may succeed for citizens but not soldiers or cops. From technology’s point of view, bans only change their address, not their identity. In fact what we want to do with technologies that produce more harm than good is not to ban them but to find them new jobs. We want to move DDT from an insecticide aerial-sprayed on crops to a household malaria remedy. Society becomes a parent for our technological children, constantly hunting for the right mix of beneficial technological friends which cultivates the best side of each new invention. Oftentimes the first job we assign to a technology is not at all ideal, and we may take many tries, many jobs, before we find a great role for a given technology.

People sometimes ask what possible role humans might play in a world of extremely smart autonomous technology. I think the answer is that we’ll play parents: redirecting active technologies into healthy jobs and good friends, and instilling positive values.

If so, we should be looking for highly evolved tools that assist our pro-actions. On our list should be better tools for anticipation, better tools for ceaseless monitoring and testing, better tools for determining and ranking risks, better tools for remediation of harm done, and better tools and techniques for redirecting technologies as they grow.

[The Pro-Actionary Principle by Kevin Kelly]
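
Kelly’s third pro-action (“Prioritize risks”) is essentially a ranking exercise: score every risk, natural or technological, proven or merely feared, on the same scale and deal with the largest expected harms first. Here is a minimal sketch of that idea, using entirely hypothetical risks and invented probability and severity figures (the names and numbers are illustrative assumptions of mine, not anything from Kelly or More):

```python
# Illustrative sketch only: hypothetical risks with invented probability and
# severity figures, ranked by expected harm (probability x severity).
# Natural, technological and "cost of inaction" risks are scored with the same
# formula, echoing More's requirement that they be treated symmetrically.

risks = [
    {"name": "proven technological harm",       "kind": "technological", "probability": 0.30, "severity": 8},
    {"name": "hypothetical technological harm", "kind": "technological", "probability": 0.02, "severity": 9},
    {"name": "known natural hazard",            "kind": "natural",       "probability": 0.20, "severity": 7},
    {"name": "cost of inaction",                "kind": "inaction",      "probability": 0.40, "severity": 5},
]

def expected_harm(risk):
    # Same weighting for every kind of risk: no discount for being "natural",
    # no premium for being man-made.
    return risk["probability"] * risk["severity"]

# Highest expected harm first: proven threats naturally outrank hypothetical
# ones because their probabilities are better established.
for risk in sorted(risks, key=expected_harm, reverse=True):
    print(f"{risk['name']:35} ({risk['kind']:13}) expected harm: {expected_harm(risk):.2f}")
```

The point is not the numbers, which are made up, but the symmetry: inaction and natural hazards get scored by the same rule as the new technology, so hypothetical fears cannot automatically trump proven threats.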

The great Evolutionary Psychologist Dr. David Buss is interviewed…

Courage, Not Denial: An Interview with Dr. David Buss

BC: Why does evolutionary psychology evoke such strong reactions in people? I’ve noted that when I discuss basic principles with those who have never heard of it before I am met with either enthusiasm or anger. There seems to be little in between. Why might this be so? You are the perfect person to ask.

DDB: I think the strength of reactions is caused by several factors. One is religious, since evolutionary psychology threatens beliefs about divine creation. A second comes from political ideologies–people have agendas for making the world a better place, and evolutionary psychology is erroneously believed to be at odds with social change.

People think “if things like violence or infidelity are rooted in evolved adaptations, then we are doomed to have violence and infidelity because they are an unalterable part of human nature. On the other hand, if violence and infidelity are caused by the ills of society, by media, by bad parenting, then we can fix these things and make a better world.”

It’s what I call the “romantic fallacy”: I don’t want people to be like that, therefore they are not like that [interviewer’s emphasis]. The thinking is wrong-headed, of course. Knowledge of our evolved psychological mechanisms gives us more power to change, if change is desired, not less power.