When your darkest secrets leak

“Just a hint of things to come. When your darkest secrets leak – it won’t be from Google. A low-tier service that has your behavioural profile gets hacked or purchased by some seedy site owners, and all your activities are cross-referenced.” – Trevor_Goodchild

[UPDATE 22 Nov 2019: 1.2 billion records leaked from exactly the sort of sketchy data aggregation merchants this quote addressed]

This is a Reddit comment on an article about a porn producer suing a TV executive for pirating gay porn.

The commenter is right. The biggest risk of your darkest secrets being exposed does not come from Google or the other big Internet companies, which tend to obey privacy laws and have both an interest in and the ability to protect your data. Nor does it come from Western intelligence agencies, which, again, work within the law.

The risk comes from sketchy profilers who cross-reference your IP address and other digital fingerprints, linking all your online activities and personas. It also comes from lawsuits – capricious or not – whose discovery processes may force you to hand over deeply sensitive emails and other records. I remember the newspapers mocking Max Clifford about the size of his penis after that intimate detail was exposed in one of his trials.
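To make the cross-referencing point concrete, here is a purely illustrative Python sketch (all names and records are hypothetical) of how trivially two unrelated leaks can be joined once they share a single identifier – an email address here, but an IP address or browser fingerprint works the same way:

```python
# Purely illustrative sketch with hypothetical data: joining two unrelated
# leaks on a shared identifier links an "anonymous" persona to a real one.

# Records a low-tier "sensitive" site might hold about a user.
sensitive_site_leak = [
    {"email": "alice@example.com", "ip": "203.0.113.7", "activity": "niche forum posts"},
]

# Records from a mundane service tying the same identifier to a real name.
mainstream_site_leak = [
    {"email": "alice@example.com", "name": "Alice Smith", "employer": "Acme Corp"},
]

# Index one leak by the shared identifier, then join the other against it.
by_email = {row["email"]: row for row in mainstream_site_leak}

linked_profiles = [
    {**identity, **row}
    for row in sensitive_site_leak
    if (identity := by_email.get(row["email"])) is not None
]

print(linked_profiles)
# The "anonymous" activity is now attached to a real name and employer.
```

One breach on its own may reveal little; it is the join across breaches that does the damage.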

No record is safe. Your diary can be subpoenaed. Your medical history can become a court record. Your emails can easily become public property entirely through legal means. This is especially true of work emails. Just remember that when you gossip about a colleague or share some other private information, you could be one anti-trust lawsuit away from having it end up in the internet archive.

So watch what you commit to the written record. Assume that nothing done on work equipment or at work is ever private. Learn to protect your privacy online. The Electronic Frontier Foundation’s Surveillance Self-Defense guide is a great starting point.

Warnings: Return of The Long Emergency


James Kunstler’s 2005 book “The Long Emergency” made a huge impression on me when I read it in 2006. In fact, it was one of the reasons I found myself pursuing a career in cloud computing in 2007. Partly thanks to this book and a former boss from British Telecom, my business partner and I were convinced that peak oil and climate change would create huge demand for energy-efficient, carbon-neutral compute resources, and that cloud computing was the future.

The Long Emergency was primarily concerned with America’s oil addiction and ill-preparedness for what looked at the time to be the coming energy (oil) shock, but it also examined other threats to civilization:

  • Climate Change
  • Infectious diseases (microbial resistance)
  • Water scarcity
  • Habitat destruction
  • Economic instability
  • Political extremism
  • War

Every one of those is still an enormous threat.

A new book by national security veterans Richard Clarke and R.P. Eddy, “Warnings: Finding Cassandras to Stop Catastrophes”, updates The Long Emergency with some new features of the threat landscape.

The book starts by asking how we can reliably spot Cassandras – people who correctly predicted disasters but were not heeded – so that we can prevent future disasters.

They examine recent disasters – 9/11, the Challenger space shuttle disaster and Hurricane Katrina – then study the people who predicted those events, looking for patterns. They come up with a set of stable characteristics that allow us to score people on their Cassandra Quotient.

The second part of the book looks at current threats, and their doomsayers, to see if any have a high Cassandra Quotient and thus should be heeded.

The threats are:

  • Artificial Intelligence
  • Pandemic Disease
  • Sea-Level Rise
  • Nuclear Ice Age
  • The Internet of Everything
  • Meteor Strike
  • Gene Editing (CRISPR)

The bad news is that they all have high Cassandra Quotients and the scenarios in the book are plausible, science-backed and terrifying.

Artificial Intelligence as a threat has been on my radar for a year or so, thanks to Elon Musk, Bill Gates, Stephen Hawking and Sam Harris warning of the risks of intelligent machines that can design and build ever more intelligent machines.

Pandemic Disease has worried me since reading The Long Emergency, but I thought global awareness had improved, especially after the world took the 2011 flu scare, Ebola and Zika seriously. Unfortunately, we are – as a planet – woefully ill-prepared for a global pandemic. A high-fatality airborne flu could kill billions.

Sea-Level Rise genuinely surprised me, especially since the Cassandra in question – James Hansen – predicted the current melting and ice-shelf break-offs we see in the Arctic today…30 years ago. I even googled how high my home is above sea level after being convinced we could see a 7 m rise within my lifetime.

As a child of the 70s and 80s, I have nuclear horror deeply embedded in my psyche. But I thought a Nuclear Ice Age was a pretty low risk. It turns out you do not need a large-scale nuclear exchange between the US and Russia to cause global climate chaos: a limited exchange between India and Pakistan could be enough to kill billions through global starvation. I was also surprised to learn that Pakistan moves its nuclear arsenal around to thwart attacks by Indian commandos in the event of a war. This raises the risk of terrorists intercepting one of these weapons in transit and using it for nuclear terrorism.

The book does a good job of examining the incredible fragility of our interconnected IT systems in the chapter on The Internet of Everything. As an IT professional, I know how fragile these systems really are, and we are right to be scared of the dire consequences of a serious cyber war.

I do not really think about Meteor Strikes, as there is little we can do about them and they are now part of popular culture.

The final worry in the book is Gene Editing, especially CRISPR. CRISPR has absolutely marvelous potential, but it also has many people worried. Daniel Suarez even has a new book on the topic called “Change Agent“. CRISPR could be the mother of all second-order effects. Take “off-target events”, for example:

Another serious concern arises from what are known as off-target events. After its discovery, researchers found that the CRISPR/Cas9 complex sometimes bonds to and cuts the target DNA at unintended locations. Particularly when dealing with human cells, they found that sometimes as many as five nucleotides were mismatched between the guide and target DNA. What might the consequences be if a DNA segment is improperly cut and put back together? What sorts of effects could this cause, both immediately and further down the road for heritable traits? Experimenting with plants or mouse bacteria in a controlled laboratory environment is one thing, but what is the acceptable level of error if and when researchers begin experimenting with a tool that cuts up a person’s DNA? If an error is in fact made, is there any potential way to fix the mistake?

So we have planet-scale problems in need of ingenious solutions. Instead of giving in to paralysis or resignation, we should accept Peter Thiel’s challenge to find the big breakthroughs – 0 to 1, intensive progress:

Progress comes in two flavors: horizontal/extensive and vertical/intensive. Horizontal or extensive progress basically means copying things that work. In one word, it means simply “globalization.” Consider what China will be like in 50 years. The safe bet is it will be a lot like the United States is now. Cities will be copied, cars will be copied, and rail systems will be copied. Maybe some steps will be skipped. But it’s copying all the same.

Vertical or intensive progress, by contrast, means doing new things. The single word for this is “technology.” Intensive progress involves going from 0 to 1 (not simply the 1 to n of globalization). We see much of our vertical progress come from places like California, and specifically Silicon Valley. But there is every reason to question whether we have enough of it. Indeed, most people seem to focus almost entirely on globalization instead of technology; speaking of “developed” versus “developing nations” is implicitly bearish about technology because it implies some convergence to the “developed” status quo. As a society, we seem to believe in a sort of technological end of history, almost by default.

It’s worth noting that globalization and technology do have some interplay; we shouldn’t falsely dichotomize them. Consider resource constraints as a 1 to n subproblem. Maybe not everyone can have a car because that would be environmentally catastrophic. If 1 to n is so blocked, only 0 to 1 solutions can help. Technological development is thus crucially important, even if all we really care about is globalization.

…Maybe we focus so much on going from 1 to n because that’s easier to do. There’s little doubt that going from 0 to 1 is qualitatively different, and almost always harder, than copying something n times. And even trying to achieve vertical, 0 to 1 progress presents the challenge of exceptionalism; any founder or inventor doing something new must wonder: am I sane? Or am I crazy?

From Blake Masters’ notes