Dan Geer / Cybersecurity as Realpolitik

a guest
Aug 11th, 2014
1,071
0
Never
Not a member of Pastebin yet? Sign Up, it unlocks many cool features!
Latex 66.19 KB | None | 0 0
\documentclass[12pt]{article}

\usepackage{chicago}
\usepackage{parskip}
\usepackage{enumerate}
\usepackage{pict2e}
\usepackage{url}

% --------------------
% Formatting
% --------------------
% \setlength{\parindent}{0pt}   % Superseded by parskip
% \setlength{\parindent}{15pt}  % If you want the paragraph indent back.
% --------------------

% --------------------
% Command Definitions
% --------------------
\newcommand{\booktitle}[1]{\textit{#1}}
\newcommand{\articletitle}[1]{``{#1}''}
\newcommand{\journaltitle}[1]{\textit{#1}}
\newcommand{\movietitle}[1]{\textit{#1}}
% --------------------

\begin{document}

\title{Cybersecurity as Realpolitik\footnote{
   nominal delivery draft, 6 August 2014}
}

\author{Dan Geer}

\date{August 6, 2014}

\maketitle

\begin{abstract}
   Power exists to be used.  Some wish for cyber safety, which they
   will not get.  Others wish for cyber order, which they will not
   get.  Some have the eye to discern cyber policies that are ``the
   least worst thing;'' may they fill the vacuum of wishful thinking.
\end{abstract}

Good morning and thank you for the invitation to speak with you
today.  The plaintext of this talk has been made available to the
organizers.  While I will not be taking questions today, you are
welcome to contact me later and I will do what I can to reply.  For
simple clarity, let me repeat the abstract for this talk:

\begin{quote}
   Power exists to be used.  Some wish for cyber safety, which they
   will not get.  Others wish for cyber order, which they will not
   get.  Some have the eye to discern cyber policies that are ``the
   least worst thing;'' may they fill the vacuum of wishful thinking.
\end{quote}

There are three professions that beat their practitioners into a
state of humility: farming, weather forecasting, and cyber security.
I practice two of those, and, as such, let me assure you that the
recommendations which follow are presented in all humility.  Humility
does not mean timidity.  Rather, it means that when a strongly held
belief is proven wrong, the humble person changes their mind.
I expect that my proposals will result in considerable push-back,
and changing my mind may well follow.  Though I will say it again
later, this speech is me talking for myself.

As if it needed saying, cyber security is now a riveting concern,
a top issue in many venues more important than this one.  This is
not to insult Black Hat; rather it is to note that every speaker,
every writer, every practitioner in the field of cyber security who
has wished that its topic, and us with it, were taken seriously has
gotten their wish.  Cyber security \emph{is} being taken seriously,
which, as you well know, is not the same as being taken usefully,
coherently, or lastingly.  Whether we are talking about laws like
the Digital Millennium Copyright Act or the Computer Fraud and Abuse
Act, or the non-lawmaking but perhaps even more significant actions
that the Executive agencies are undertaking, ``we'' and the cyber
security issue have never been more at the forefront of policy.
And you ain't seen nothing yet.

I wish that I could tell you that it is still possible for one
person to hold the big picture firmly in their mind's eye, to track
everything important that is going on in our field, to commit few
if any sins of omission.  It is not possible; that phase passed
sometime in the last six years.  I have certainly tried to keep up
but I would be less than candid if I were not to say that I know
that I am not keeping up, not even keeping up with what is going
on in my own country much less all countries.  Not only has
cybersecurity reached the highest levels of attention, it has spread
into nearly every corner.  If area is the product of height and
width, then the footprint of cybersecurity has surpassed the grasp
of any one of us.

The rate of technological change is certainly a part of it.  When
younger people ask my advice on what they should do or study to
make a career in cyber security, I can only advise specialization.
Those of us who were in the game early enough and who have managed
to retain an over-arching generalist knowledge can't be replaced
very easily because while absorbing most new information most of
the time may have been possible when we began practice, no person
starting from scratch can do that now.  Serial specialization is
now all that can be done in any practical way.  Just looking at the
Black Hat program will confirm that being really good at any one
of the many topics presented here all but requires shutting out the
demands of being good at any others.

Why does that matter?  Speaking for myself, I am not interested in
the advantages or disadvantages of some bit of technology unless I
can grasp how it is that that technology works.  Whenever I see
marketing material that tells me all the good things that adopting
this or that technology makes possible, I remember what George
Santayana said, that ``Scepticism is the chastity of the intellect;
it is shameful to give it up too soon, or to the first comer.'' I
suspect that a majority of you have similar skepticism --- ``It's
magic!'' is not the answer a security person will ever accept.  By
and large, I can tell \emph{what} something is good for once I know
\emph{how} it works.  Tell me how it works and then, but only then,
tell me why you have chosen to use those particular mechanisms for
the things you have chosen to use them for.

Part of my feeling stems from a long-held and well-substantiated
belief that all cyber security technology is dual use.  Perhaps
dual use is a truism for any and all tools from the scalpel to the
hammer to the gas can --- they can be used for good or ill --- but I
know that dual use is inherent in cyber security tools.  If your
definition of ``tool'' is wide enough, I suggest that the cyber
security tool-set favors offense these days.  Chris Inglis, recently
retired NSA Deputy Director, remarked that if we were to score cyber
the way we score soccer, the tally would be 462-456 twenty minutes
into the game,\footnote{
Chris Inglis, confirmed by personal communication
} i.e., all offense.  I will take his comment as
confirming at the highest level not only the dual use nature of
cybersecurity but also confirming that offense is where the innovations
that only States can afford are going on.

Nevertheless, this essay is an outgrowth from, an extension of,
that increasing importance of cybersecurity.  With the humility of
which I spoke, I do not claim that I have the last word.  What I
do claim is that when we speak about cybersecurity policy we are
no longer engaging in some sort of parlor game.  I claim that policy
matters are now the most important matters, that once a topic area,
like cybersecurity, becomes interlaced with nearly every aspect of
life for nearly everybody, the outcome differential between good
policies and bad policies broadens, and the ease of finding answers
falls.  As H.L. Mencken so trenchantly put it, ``For every complex
problem there is a solution that is clear, simple, and wrong.''

The four verities of government are these:

\begin{itemize}
   \item Most important ideas are unappealing
   \item Most appealing ideas are unimportant
   \item Not every problem has a good solution
   \item Every solution has side effects
\end{itemize}

This quartet of verities certainly applies to the interplay between
cybersecurity and the affairs of daily living.  Over my lifetime
the public expectation of what government can and should do has
spectacularly broadened from guaranteeing that you may engage in
the ``pursuit of happiness'' to guaranteeing happiness in and of
itself.  The central dynamic internal to government is, and always
has been, that the only way for either the Executive or the Legislature
to control the many sub-units of government is by way of how much
money they can hand out.  Guaranteeing happiness has the same dynamic
--- that the only tool government really has to achieve the outcome
of everyone happy or everyone healthy or everyone safe at all times
from things that go bump in the night is through the dispensing of
money.  This is true in foreign policy; one can reasonably argue
that the United States' 2007 troop ``surge'' in Iraq did provide an
improvement in safety.  One can also argue that the work of those
troops, some of whom gave what Abraham Lincoln called ``the last
full measure of devotion,'' was materially aided by the less publicized
arrival of C-130s full of \$100 bills with which to buy off potential
combatants.  Why should cybersecurity be any different?

Suppose, however, that surveillance becomes too cheap to meter,
that is to say too cheap to limit through budgetary processes.  Does
that lessen the power of the Legislature more, or the power of the
Executive more?  I think that ever-cheaper surveillance substantially
changes the balance of power in favor of the Executive and away
from the Legislature.  While President Obama was referring to
something else when he said ``I've Got A Pen And I've Got A Phone,''
he was speaking to exactly this idea --- things that need no
appropriations are outside the system of checks and balances.  Is
the ever-wider deployment of sensors in the name of cybersecurity
actually contributing to our safety?  Or is it destroying our safety
in order to save it?

To be entirely clear by way of repetition, this essay is written by
someone as his own opinion and not on behalf of anyone else.  It
is written without the supposed benefits of insider information; I
hold no Clearance but am instead informed solely by way of open
source intelligence.  This path may be poised to grow easier; if
the chief benefit of having a Clearance is to be able to see into
the future a little further than those without one, then it must
follow that as the pace of change accelerates the difference between
how far you can see with a Clearance versus how far you can see
without one will shrink.

There are, in other words, parallels between cybersecurity and the
intelligence functions insofar as predicting the future has a strong
role to play in preparing your defenses for probable attacks.  As
Dave Aitel has repeatedly pointed out, the hardest part of crafting
good attack tools is testing them before deployment.  Knowing what
your tool will find, and how to cope with that, is surely harder
than finding an exploitable flaw in and of itself.  This, too, may
grow in importance if the rigor of testing causes attackers to use
some portion of the Internet at large as their test platform rather
than whatever rig they can afford to set up in their own shop.  If
that is the case, then full scale traffic logs become an indispensable
intelligence tool insofar as when an attack appears to be de novo
those with full scale traffic logs may be in a position to answer
the question ``How long has this been going on?''  The company
NetWitness, now part of EMC, is one player who comes to mind in this
regard, and there are others.  This idea of looking backward for
evidence that you didn't previously know enough to look for does
certainly have intelligence value both for the Nation State and for
the enterprise.

And there is a lot of traffic that we don't have a handle on.  John
Quarterman of Internet Perils makes a round number guess that 10\%
of Internet backbone traffic is unidentifiable as to
protocol.\footnote{
   John Quarterman, personal communication
}
Whether he is off by a factor of two in either direction, that is
still a lot of traffic.  Arbor Networks estimates that perhaps 2\%
of all \emph{identifiable} backbone traffic is, to use their term, ``raw
sewage.''\footnote{
   ``2\% of Internet Traffic Raw Sewage''
   \url{http://www.arbornetworks.com/asert/2008/03/2-of-internet-traffic-raw-sewage}
}
There are plenty of other estimates of this sort, of
course.  To my way of thinking, all such estimates continue to
remind us that the end-to-end design of the Internet\footnote{
   ``End-to-End Arguments in System Design''
   \url{http://web.mit.edu/Saltzer/www/publications/endtoend/endtoend.pdf}
}
was not some failure of design intellect but a brilliant avoidance of
having to pick between the pitiful toy a completely safe Internet would
have to be versus an Internet that was the ultimate tool of State
control.  In nothing else is it more apt to say that our choices are
Freedom, Security, Convenience --- Choose Two.
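
Quarterman's and Arbor's figures above combine into a quick
back-of-envelope bound.  A minimal sketch (the function names are
ours; the 10\% and 2\% point estimates come from the text, and the
factor-of-two error band is the speaker's own hedge):

```python
def unidentifiable_bounds(point_estimate=0.10, error_factor=2.0):
    """Range implied by 'off by a factor of two in either direction'."""
    return point_estimate / error_factor, point_estimate * error_factor

def raw_sewage_volume(total_traffic, unidentifiable=0.10, sewage_share=0.02):
    """Absolute 'raw sewage' volume: Arbor's 2% applies only to the
    identifiable portion, i.e. what remains after Quarterman's 10%."""
    identifiable = total_traffic * (1.0 - unidentifiable)
    return identifiable * sewage_share

# For 100 units of backbone traffic: 5-20 units unidentifiable,
# and roughly 1.8 units of raw sewage among the identifiable rest.
low, high = unidentifiable_bounds()
sewage = raw_sewage_volume(100.0)
```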

Let me now turn to some policy proposals on a suite of pressing
current topics.  None of these proposals are fully formed, but as
you know, those who don't play the game don't make the rules.  These
proposals are not in priority order, though some are more at odds
with current practice than others and might, therefore, be said to
be more pressing.  There are more where these came from, but this
talk has a time limit, and there is a meta-analysis at the end.

\section{Mandatory reporting --- YES/Tiered}

The United States Centers for Disease Control are respected the
world around.  When you really get down to it, three capabilities
describe the CDC and why they are as effective as they are: (1)
mandatory reporting of communicable diseases, (2) stored data and
the data analytic skill to distinguish a statistical anomaly from
an outbreak, and (3) away teams to take charge of, say, the appearance
of Ebola in Miami.  Everything else is details.  The most fundamental
of these is the mandatory reporting of communicable diseases.

At the same time, we have well established rules about medical
privacy.  Those rules are helpful; when you check into the hospital
there is a licensure-enforced, accountability-based, need-to-know
regime that governs the handling of your data.\footnote{
   Protected Health Information, abbreviated PHI, as defined by
   Section 1171 of Part C of Subtitle F of Public Law 104-191, ``The
   Health Insurance Portability and Accountability Act of 1996,'' also
   known as HIPAA.
}
Most days, that is, but if you check in with Bubonic Plague or Typhus or
Anthrax, you will have zero privacy as those are the ``mandatory
reporting of communicable disease conditions'' as variously mandated not
just by the CDC but by public health law in all fifty States.

So let me ask you, would it make sense, in a public health of the
Internet way, to have a mandatory reporting regime for cybersecurity
failures?  Do you favor having to report cyber penetrations of your
firm or of your household to some branch of government or some
non-government entity?  Should you face criminal charges if you
fail to make such a report?  Forty-eight States vigorously penalize
failure to report sexual molestation of children.\footnote{
   ``Penalties for failure to report and false reporting of child
   abuse and neglect,'' US Dept of Health and Human Services, Children's
   Bureau, Child Welfare Information Gateway.
}
The (US) Computer Fraud and Abuse Act\footnote{
   U.S. Code, Title 18, Part I, Chapter 47, Section 1030
   \url{http://www.law.cornell.edu/uscode/text/18/1030}
}
defines a number of felonies related to computer penetrations, and the
U.S. Code says that it is a crime to fail to report a felony of which
you have knowledge.\footnote{
   U.S. Code, Title 18, Part I, Chapter 1, Section 4
   \url{http://www.law.cornell.edu/uscode/text/18/4}
}
Is cybersecurity event data the kind of data around which you want to
enforce mandatory reporting?  Forty-six States require mandatory
reporting of one class of cyber failures in the form of their data
breach laws,\footnote{
   Security Breach Information Act
   \url{http://www.leginfo.ca.gov/pub/01-02/bill/sen/sb\_1351-1400/sb\_1386\_bill\_20020926\_chaptered.pdf}
}
while the Verizon Data Breach Investigations
Report\footnote{
   Verizon Data Breach Investigations Report
   \url{http://www.verizonenterprise.com/DBIR}
}
found, and the Index of Cyber Security\footnote{
   Index of Cyber Security
   \url{http://www.cybersecurityindex.org}
}
confirmed, that
70-80\% of data breaches are discovered by unrelated third parties, not
by the victim, meaning that the victim might never know if those who do
the discovering were to keep quiet.  If you discover a cyber attack, do
you have an ethical obligation to report it?  Should the law mandate
that you fulfill such an obligation?

My answer to this set of questions is to mirror the CDC, that is
for the force of law to require reporting of cybersecurity failures
that are above some severity threshold that we have yet to negotiate.
Below that threshold, I endorse the suggestion made in a piece two
weeks ago, ``Surviving on a Diet of Poisoned Fruit,'' by Richard
Danzig, where he made this policy proposal:\footnote{
   ``Surviving on a Diet of Poisoned Fruit: Reducing the National
   Security Risks of America's Cyber Dependencies''
   \url{http://www.cnas.org/surviving-diet-poisoned-fruit}
}

\begin{quote}
   Fund a data collection consortium that will illuminate the
   character and magnitude of cyber attacks against the U.S. private
   sector, using the model of voluntary reporting of near-miss
   incidents in aviation.  Use this enterprise as well to help
   develop common terminology and metrics about cybersecurity.
   While regulatory requirements for aviation accident reporting
   are firmly established through the National Transportation Safety
   Board, there are no requirements for reporting the vastly more
   numerous and often no less informative near misses.  Efforts to
   establish such requirements inevitably generate resistance:
   Airlines would not welcome more regulation and fear the reputational
   and perhaps legal consequences of data visibility; moreover,
   near accidents are intrinsically more ambiguous than accidents.
   An alternative path was forged in 2007 when MITRE, a government
   contractor, established an Aviation Safety Information Analysis
   and Sharing (ASIAS) system receiving near-miss data and providing
   anonymized safety, benchmarking and proposed improvement reports
   to a small number of initially participating airlines and the
   Federal Aviation Administration (FAA).
\end{quote}

Today, 44 airlines participate in that program voluntarily.  The
combination of a mandatory CDC model for above-threshold cyber
events and a voluntary ASIAS model for below-threshold events is
what I recommend.  This leaves a great deal of thinking still to
be done; diseases are treated by professionals, but malware infections
are treated by amateurs.  Diseases spread within jurisdictions
before they become global, but malware is global from the get-go.
Diseases have predictable behaviors, but malware comes from sentient
opponents.  Don't think this proposal is an easy one or one without
side effects.
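
The two-tier regime recommended above reduces to a one-line decision
rule.  A sketch only: the 0--10 severity scale and the threshold
value are placeholder assumptions of ours, since the talk explicitly
leaves the threshold ``yet to negotiate'':

```python
MANDATORY = "mandatory (CDC model)"
VOLUNTARY = "voluntary (ASIAS model)"

def reporting_tier(severity: float, threshold: float = 7.0) -> str:
    """Map an incident severity score to a reporting tier.

    Assumes a 0-10 severity scale; both the scale and the default
    threshold are illustrative, not settled policy.
    """
    if not 0.0 <= severity <= 10.0:
        raise ValueError("severity must be on the 0-10 scale assumed here")
    # At or above the negotiated threshold: report under force of law.
    # Below it: voluntary, anonymized near-miss sharing.
    return MANDATORY if severity >= threshold else VOLUNTARY
```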

\section{Net neutrality --- CHOICE}

There is considerable irony in the Federal Communications Commission
classifying the Internet as an information service and not as a
communications service insofar as while that may have been a gambit
to relieve ISPs of telephone-era regulation, the value of the
Internet is ever more the bits it carries, not the carriage of those
bits.  The FCC decisions are both several and now old: the FCC
classified cable as an information service in 2002, classified DSL
as an information service in 2005, classified wireless broadband
as an information service in 2007, and classified broadband over
power lines as an information service in 2008.  A decision by the
D.C. Circuit Court of Appeals on this very point appeared earlier
this year,\footnote{
   Verizon v. FCC, 740 F.3d 623 (D.C. Cir. 2014)
   \url{http://www.cadc.uscourts.gov/internet/opinions.nsf/3AF8B4D938CDEEA685257C6000532062/\$file/11-1355-1474943.pdf}
}
but settled little.  The question remains: is the Internet a
telecommunications service or an information service?

I've nothing new to say to you about the facts, the near-facts, nor
the lying distortions inherent in the debate regarding network
neutrality so far or still to come.  What I can say is that network
neutrality is no panacea nor is it anathema; people's tastes vary
and so do corporations'.  What I can say is that the varied tastes
need to be reflected in constrained choice rather than the idea
that the FTC or some other agency can assure happiness if and only
if it, rather than corporations or individuals, does the choosing.
Channeling for Doctor Seuss, if I ran the zoo I'd call up the ISPs
and say this:

\begin{quote}
   Hello, Uncle Sam here.

   You can charge whatever you like based on the contents of what
   you are carrying, but you are responsible for that content if it
   is hurtful; inspecting brings with it a responsibility for what
   you learn.

   -or-

   You can enjoy common carrier protections at all times, but you
   can neither inspect nor act on the contents of what you are
   carrying and can only charge for carriage itself.  Bits are bits.

   Choose wisely.  No refunds or exchanges at this window.
\end{quote}

In other words, ISPs get the one or the other; they do not get both.
The FCC gets some heartache but also a natural experiment in whether
those who choose common carrier status turn out differently than
those who choose multi-tiered service grades with liability exposure.
We already have a lot of precedent and law in this space.  The
United States Postal Service's term of art, ``sealed against inspection,''
is reserved for items on which the highest postage rates are charged;
is that also worth stirring into the mix?

As a side comment, I might add that it was in Seuss' book \booktitle{If I Ran
the Zoo} that the word ``nerd'' first appeared in English.  If Black
Hat doesn't yet have an official book, I'd suggest this one.

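The either/or offered to the ISPs above is strictly exclusive, which
a toy model makes explicit.  A sketch only: the enum and function
names are ours and reflect the talk's proposal, not any actual FCC
rule.

```python
from enum import Enum

class Regime(Enum):
    """The two mutually exclusive regimes an ISP may choose."""
    CONTENT_PRICING = "charge by content, liable for that content"
    COMMON_CARRIER = "charge for carriage only, immune from content"

def may_inspect(regime: Regime) -> bool:
    """Only the content-pricing regime may look inside the packets."""
    return regime is Regime.CONTENT_PRICING

def liable_for_content(regime: Regime) -> bool:
    # Inspection and liability travel together; immunity and
    # blindness likewise.  There is no third option.
    return may_inspect(regime)
```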
\section{Source code liability --- CHOICE}

Nat Howard said that ``Security will always be exactly as bad as it
can possibly be while allowing everything to still
function,''\footnote{
   Nat Howard at USENIX 2000, per Marcus Ranum
}
but with each passing day, that ``and still function'' clause requires
a higher standard.  As Ken Thompson told us in his Turing Award
lecture, there is no technical escape;\footnote{
   Ken Thompson, ``Reflections on Trusting Trust,'' 1984
}
in strict mathematical terms you neither trust a program nor a house
unless you created it 100\% yourself, but in reality most of us will
trust a house built by a suitably skilled professional; usually we will
trust it more than one we had built ourselves, and this even if we have
never met the builder, or even if he is long since dead.

The reason for this trust is that shoddy building work has had that
crucial ``or else ...'' clause for more than 3700 years:

\begin{quote}
   If a builder builds a house for someone, and does not construct
   it properly, and the house which he built falls in and kills
   its owner, then the builder shall be put to death.

   --- Code of Hammurabi, approx. 1750 B.C.
\end{quote}

Today the relevant legal concept is ``product liability'' and the
fundamental formula is ``If you make money selling something, then
you better do it well, or you will be held responsible for the
trouble it causes.''  For better or poorer, the only two products
not covered by product liability today are religion and software,
and software should not escape for much longer.  Poul-Henning Kamp
and I have a strawman proposal for how software liability regulation
could be structured.

\begin{enumerate}
   \setcounter{enumi}{-1}  % text below refers to Clauses 0, 1, and 2
   \item \textbf{Consult criminal code to see if damage caused was due
   to intent or willfulness.}

   We are only trying to assign liability for unintentionally caused
   damage, whether that's sloppy coding, insufficient testing, cost
   cutting, incomplete documentation, or just plain incompetence.
   Clause zero moves any kind of intentionally inflicted damage out of
   scope.  That is for your criminal code to deal with, and most
   already do.

   \item \textbf{If you deliver your software with complete and
   buildable source code and a license that allows disabling any
   functionality or code the licensee decides, your liability is
   limited to a refund.}

   Clause one is how to avoid liability: make it possible for your
   users to inspect and chop out any and all bits of your software they
   do not trust or want to run.  That includes a bill of materials
   (``Library ABC comes from XYZ'') so that trust has some basis,
   paralleling why there are ingredient lists on processed foods.

   The word ``disabling'' is chosen very carefully: you do not need to
   give permission to change or modify how the program works, only to
   disable the parts of it that the licensee does not want or trust.
   Liability is limited even if the licensee never actually looks at
   the source code; as long as he has received it, you (as maker) are
   off the hook.  All your other copyrights are still yours to control,
   and your license can contain any language and restriction you care
   for, leaving the situation unchanged with respect to
   hardware-locking, confidentiality, secrets, software piracy, magic
   numbers, etc.

   Free and Open Source Software (FOSS) is obviously covered by this
   clause, which leaves its situation unchanged.

   \item \textbf{In any other case, you are liable for whatever damage
   your software causes when it is used normally.}

   If you do not want to accept the information sharing in Clause 1,
   you fall under Clause 2, and must live with normal product liability,
   just like manufacturers of cars, blenders, chain-saws and hot coffee.

\end{enumerate}
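
The three clauses above amount to a small decision procedure.  A
sketch with names of our own choosing; the returned strings
paraphrase the clauses rather than quote them:

```python
def liability(intentional: bool, source_provided: bool,
              may_disable: bool) -> str:
    """Classify a damage claim under the three-clause strawman."""
    if intentional:
        # Clause 0: intentional damage goes to the criminal code.
        return "clause 0: criminal code, out of scope"
    if source_provided and may_disable:
        # Clause 1: complete, buildable source plus the right to
        # disable anything limits liability to a refund.
        return "clause 1: liability limited to a refund"
    # Clause 2: everything else bears normal product liability.
    return "clause 2: normal product liability"
```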

How dire the consequences, and what constitutes ``used normally,'' is
for your legislature and courts to decide, but let us put up a
strawman example:

\begin{quote}
   A sales-person from one of your long-time vendors visits and
   delivers new product documentation on a USB key; you plug the
   USB key into your computer and copy the files onto the computer.
\end{quote}

This is ``used normally'' and it should never cause your computer to
become part of a botnet, transmit your credit card number to Elbonia,
or copy all your design documents to the vendor.  If it does, your
computer's operating system is defective.

The majority of today's commercial software would fall under Clause
2, and software houses need a reasonable chance to clean up their act
or to move under Clause 1, so a sunrise period is required.  But no
longer than five years --- we are trying to solve a dire computer
security problem here.

And that is it, really: either software houses deliver quality and
back it up with product liability, or they will have to let their
users protect themselves.  The current situation --- users can't see
whether they need to protect themselves and have no recourse to
being unprotected --- cannot go on.  We prefer self-protection (and
fast recovery), but others' mileage may differ.

Would it work?  In the long run, absolutely yes.  In the short run,
it is pretty certain that there will be some nasty surprises as
badly constructed source code gets a wider airing.  The FOSS
community will, in parallel, have to be clear about the level of
care they have taken, and their build environments as well as their
source code will have to be kept available indefinitely.

The software houses will yell bloody murder the minute legislation
like this is introduced, and any pundit and lobbyist they can afford
will spew their dire predictions that ``This law will mean the end
of computing as we know it!''

To which our considered answer will be:

\begin{quote}
   Yes, please!  That was exactly the idea.
\end{quote}

\section{Strike back --- LIMITED YES}

I suspect that a fair number of you have, in fact, struck back at
some attacker somewhere or, at least, done targeting research even
if you didn't pull the trigger.  I'd trust many of you to identify
targets carefully enough to minimize collateral damage, but what
we are talking about here is the cyber equivalent of the smart bomb.
As I implied earlier, cyber smart bombs are what the national
laboratories of several countries are furiously working on.  In
that sense, you do know what is happening behind the curtain, and
you know how hard that targeting really is because you know how
hard attribution --- real attribution --- really is.

The issue is shared infrastructure, and that issue is not going
away.  There are some entities that can operate globally and strike
back effectively, Microsoft and the FBI teaming up on the GameOver
Zeus trojan for example,\footnote{
   ``Microsoft and FBI team up to take down GameOver Zeus botnet''
   \url{http://www.techradar.com/us/news/internet/web/microsoft-and-fbi-team-up-to-take-down-gameover-zeus-botnet-1251609}
}
but that's an expensive therapy in
limited supply that can only be applied to the most damaging malware.
Nevertheless, that is the therapy we have.  Smaller entities cannot
act globally nor can they act in certain ways without pairing with
national agencies.  That can, and must, go on, but I don't see how
the individual or the smaller entity can shoot back.  All I see is
for the individual or the smaller entity to put all their effort
into having fast recovery.

\section{Fallbacks and resiliency --- TOO COMPLICATED FOR ONE POLICY}
  574.  
  575. There has always been a lot of talk about what to do when failure
  576. is unacceptable and yet failure is inevitable.  Heretofore, almost
  577. anything that has come to be seen as essential to the public gets
  578. some sort of performance standard imposed upon it, electricity and
  579. water, say.  But let's talk about software.
  580.  
  581. For one example, a commonly voiced desire for cryptographic protocols
  582. is ``algorithm agility,'' the ability to swap from one cryptographic
  583. algorithm to another if and when the first one becomes unsafe.  The
  584. security benefit of such a swap is not what you turn on but what
  585. you turn off.  For that to be possible, a second algorithm has to
  586. already be in place, but that means that the second algorithm had
  587. to be designed in at the outset and at both ends, with a way to
  588. choose between them such that either end of the proposed connection
  589. can force a change-over to the alternate algorithm.  One might argue
  590. that implementing algorithm agility actually means a single, more
  591. complex algorithm.  Or maybe what you want is two algorithms where
  592. you always use both, such as when you encrypt with one algorithm
  593. and super-encrypt with another so that the failure of one has no
  594. practical effect on security and nothing has to change.
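To make the super-encryption variant concrete: the sketch below is a toy illustration only, not a vetted cipher construction; the hash-based keystreams, key names, and pairing of algorithms are my own assumptions.  It encrypts with one keyed layer and super-encrypts with a second, independently keyed layer, so that the failure of either algorithm alone does not expose the plaintext and nothing about the wire format has to change.

```python
import hashlib

def keystream(key: bytes, n: int, algo: str) -> bytes:
    # Derive n bytes of keystream by hashing key || counter.
    # Illustration only: a stand-in for a real stream cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.new(algo, key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

def super_encrypt(plaintext: bytes, key1: bytes, key2: bytes) -> bytes:
    # Layer 1 (algorithm A), then layer 2 (an independent algorithm B).
    inner = xor_bytes(plaintext, keystream(key1, len(plaintext), "sha256"))
    return xor_bytes(inner, keystream(key2, len(plaintext), "blake2b"))

def super_decrypt(ciphertext: bytes, key1: bytes, key2: bytes) -> bytes:
    # Strip the layers in reverse order; breaking one layer alone
    # still leaves the other standing.
    inner = xor_bytes(ciphertext, keystream(key2, len(ciphertext), "blake2b"))
    return xor_bytes(inner, keystream(key1, len(ciphertext), "sha256"))
```

The point of the design is the independence of the two layers: both ends run both algorithms always, so no negotiation, and no change-over, is ever needed.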
  595.  
  596. I say all that just to demonstrate that it is not always simple to
  597. have a pre-deployed fallback should something break, that design
  598. willpower alone is not enough.  So perhaps mandating pre-deployed
  599. fallbacks is a bad idea entirely.  Perhaps what is needed is a way
  600. to reach out and upgrade the endpoints when the time of necessity
  601. comes.  But today, or real soon now, most of the places needing a
  602. remote management interface through which you can remotely upgrade
the endpoints are embedded hardware.  So let me ask a question:
should an embedded system be required to have a remote management
interface or not?  If it does not, then a late-discovered flaw
  606. cannot be fixed without visiting all the embedded systems --- which
  607. is likely to be infeasible because some you will be unable to find,
  608. some will be where you cannot again go, and there will be too many
  609. of them in any case.  If it does have a remote management interface,
  610. the opponent of skill will focus on that and, once a break is
  611. achieved, will use those self-same management functions to ensure
  612. that not only does he retain control over the long interval but,
  613. as well, you will be unlikely to know that he is there.
  614.  
  615. Perhaps what is needed is for embedded systems to be more like
  616. humans, and I most assuredly do not mean artificially intelligent.
  617. By ``more like humans'' I mean this: Embedded systems, if having no
  618. remote management interface and thus out of reach, are a life form
  619. and as the purpose of life is to end, an embedded system without a
  620. remote management interface must be so designed as to be certain
  621. to die no later than some fixed time.  Conversely, an embedded
  622. system with a remote management interface must be sufficiently
  623. self-protecting that it is capable of refusing a command.  Inevitable
  624. death and purposive resistance are two aspects of the human condition
  625. we need to replicate, not somehow imagine that to overcome them is
  626. to improve the future.
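Those two rules can be stated in a few lines of code.  This is a minimal sketch under assumptions of my own (the build timestamp, the lifetime, and the per-device key are all hypothetical constants); real firmware would enforce this far lower in the stack.

```python
import hashlib
import hmac
import time

FIRMWARE_BUILT = 1407283200         # hypothetical build timestamp (epoch seconds)
MAX_LIFETIME = 5 * 365 * 24 * 3600  # die no later than a fixed time after build
DEVICE_KEY = b"hypothetical-per-device-secret"

def alive(now=None) -> bool:
    # Inevitable death: with no remote management interface, the device
    # must refuse to operate past its fixed end-of-life.
    now = time.time() if now is None else now
    return now < FIRMWARE_BUILT + MAX_LIFETIME

def accept_command(command: bytes, tag: bytes) -> bool:
    # Purposive resistance: with a remote management interface, the
    # device must be capable of refusing a command; here it refuses any
    # command whose authenticator fails against the per-device key.
    expected = hmac.new(DEVICE_KEY, command, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

An unmanaged device gets only the first function; a managed one gets only the second.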
  627.  
  628. Lest some of you think this is all so much picayune, tendentious,
  629. academic perfectionist posturing, let me inform some of you and
  630. remind the others that it is entirely possible to deny the Internet
  631. to a large fraction of its users.  Home routers have drivers and
  632. operating systems that are binary blobs amounting to snapshots of
  633. the state of Linux plus the lowest end commodity chips that were
  634. extant at the time of the router's design.  Linux has moved on.
  635. Device drivers have moved on.  Samba has moved on.  Chipsets have
  636. moved on.  But what is sold at Best Buy or the like is remarkably
  637. cheap and remarkably old.  With certainty born of long engineering
  638. experience, I assert that those manufacturers can no longer build
  639. their deployed software blobs from source.  If, as my colleague Jim
  640. Gettys has laboriously measured, the average age of the code base
  641. on those ubiquitous low-end routers is 4-5 years,\footnote{
  642.    Gettys J, former VP Software, One Laptop Per Child, personal
  643.    communication
  644. }
  645. then you can be assured that the CVE catalog lists numerous methods of
  646. attacking those operating systems and device drivers
  647. remotely.\footnote{
   Common Vulnerabilities and Exposures, \url{cve.mitre.org/cve}
  649. }
  650. If I can commandeer them remotely, then I can build a botnet that is on
  651. the \emph{outside} of the home network.  It need not ever put a single
packet through the firewall, it need never be detectable by any means
  653. whatsoever from the interior of the network it serves, but it is most
  654. assuredly a latent weapon, one that can be staged to whatever level of
  655. prevalence I desire before I ask it to do more.  All I need is to
  656. include in my exploit a way to signal that device to do three things:
  657. stop processing anything it henceforth receives, start flooding the
  658. network with a broadcast signal that causes other peers to do the same,
  659. and zero the on-board firmware thus preventing reboot for all time.  Now
  660. the only way to recover is to unplug all the devices, throw them in the
  661. dumpster, and install new ones --- but aren't the new ones likely to have
  662. the same kind of vulnerability spectrum in CVE that made this possible
  663. in the first place?  Of course they do, so this is not a quick trip to
  664. the big box store but rather flushing the entire design space and
  665. pipeline inventory of every maker of home routers.  There appears to be
  666. an event at DefCon around this very issue.\footnote{
  667.    SOHOpelessly Broken, \url{http://www.sohopelesslybroken.com}
  668. }
  669.  
  670. Resiliency is an area where no one policy can be sufficient, so
  671. I've suggested a trio of baby steps: embedded systems cannot be
  672. immortal if they have no remote management interface, embedded
  673. systems must have a remote management interface if they are to be
  674. immortal, and swap-over is preferable to swap-out when it comes to
  675. data protection.
  676.  
  677.  
  678.  
  679. \section{Vulnerability finding --- HEGEMONY}
  680.  
  681. Vulnerability finding is a job.  It has been a job for something
  682. like eight years now, give or take.  For a good long while, you
  683. could do vulnerability finding as a hobby and get paid in bragging
  684. rights, but finding vulnerabilities got to be too hard to do as a
  685. hobby in your spare time --- you needed to work it like a job and
  686. get paid like a job.  This was the result of hard work on the part
  687. of the software suppliers including the suppliers of operating
  688. systems, but as the last of the four verities of government says,
  689. every solution has side effects.  In this case, the side effect is
  690. that once vulnerability finding became a job and stopped being a
  691. bragging-rights hobby, those finding the vulnerabilities stopped
  692. sharing.  If you are finding vulns for fun and fame, then the minute
  693. you find a good one you'll let everybody know just to prevent someone
  694. else finding it and beating you to the punch.  If you are doing it
  695. for profit, then you don't share.  That's where the side effect is
--- once coin-operated vuln finders won't share, the percentage of
  697. all attacks that are zero-day attacks must rise, and it has.
  698.  
  699. In a May article in The Atlantic,\footnote{
  700.    ``Should U.S. Hackers Fix Cybersecurity Holes or Exploit Them?''
  701.    \url{http://www.theatlantic.com/technology/archive/2014/05/should-hackers-fix-cybersecurity-holes-or-exploit-them/371197}
  702. }
  703. Bruce Schneier asked a cogent
  704. first-principles question: Are vulnerabilities in software dense
  705. or sparse?  If they are sparse, then every one you find and fix
  706. meaningfully lowers the number of avenues of attack that are extant.
  707. If they are dense, then finding and fixing one more is essentially
  708. irrelevant to security and a waste of the resources spent finding
it.  Six-take-away-one is nearly a 17\% improvement.
Six-thousand-take-away-one has no detectable value.
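That arithmetic can be made explicit; this one-function sketch (my own illustration) computes the marginal value of fixing one vulnerability under each assumption:

```python
def marginal_improvement(total_vulns: int, fixed: int = 1) -> float:
    # Fraction of the extant attack surface removed by fixing `fixed` vulns.
    return fixed / total_vulns

sparse = marginal_improvement(6)     # 1/6, about 16.7%: a real improvement
dense = marginal_improvement(6000)   # about 0.017%: no detectable value
```

Under density, even heroic find-and-fix effort moves the denominator, not the security.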
  711.  
  712. If a couple of Texas brothers could corner the world silver
  713. market,\footnote{
  714.    ``Hunt Brothers Corner Silver Market''
  715.    \url{http://web.archive.org/web/20060118031501/http://www.wallstraits.com/main/viewarticle.php?id=1298}
  716. }
  717. there is no doubt that the U.S. Government could openly corner the
world vulnerability market; that is, we buy them all and we make
  719. them all public.  Simply announce ``Show us a competing bid, and
  720. we'll give you 10x.''  Sure, there are some who will say ``I hate
  721. Americans; I sell only to Ukrainians,'' but because vulnerability
  722. finding is increasingly automation-assisted, the seller who won't
  723. sell to the Americans knows that his vulns can be rediscovered in
  724. due course by someone who \emph{will} sell to the Americans who will
  725. tell everybody, thus his need to sell his product before it outdates
  726. is irresistible.
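A back-of-the-envelope sketch of why the hold-out seller's position collapses: all prices and probabilities below are hypothetical numbers of my own, chosen only to show the shape of the incentive.

```python
def holdout_value(black_price: float,
                  rediscovery_per_year: float,
                  years_held: float) -> float:
    # Expected value of refusing to sell: the vuln is worth its black-market
    # price only while it stays private, and automation-assisted rediscovery
    # (followed by public disclosure) zeroes it with compounding probability.
    p_still_private = (1 - rediscovery_per_year) ** years_held
    return black_price * p_still_private

black_price = 100_000            # hypothetical black-market bid
usg_offer = 10 * black_price     # "show us a competing bid, we'll give you 10x"
holdout = holdout_value(black_price, rediscovery_per_year=0.5, years_held=2)
# holdout_value here is 25_000.0, far below the 1_000_000 overbid
```

The higher the rediscovery rate climbs, the faster the hold-out value decays toward zero, which is the sense in which the need to sell before the product outdates becomes irresistible.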
  727.  
  728. This strategy's usefulness comes from two side effects: (1) that
  729. by overpaying we enlarge the talent pool of vulnerability finders
  730. and (2) that by making public every single vuln the USG buys we
  731. devalue them.  Put differently, by overpaying we increase the rate
  732. of vuln finding, while by showing everyone what it is that we bought
  733. we zero out whatever stockpile of cyber weapons our adversaries
  734. have.  We don't need intelligence on what weapons our adversaries
  735. have if we have something close to a complete inventory of the
  736. world's vulns and have shared that with all the affected software
suppliers.  But this returns us to Schneier's question: Are vulnerabilities
  738. sparse or dense?  If they are sparse or even merely numerous, then
  739. cornering the market wins in due course.  If they are dense, then
  740. all we would end up doing is increasing costs both to software
  741. suppliers now obligated to repair all the vulns a growing army of
  742. vuln researchers can find and to taxpayers.  I believe that vulns
are sparse enough for this to work and, therefore, I believe that
  744. cornering the market is the cheapest win we will ever get.
  745.  
  746. Let me note, however, that my colleagues in static analysis report
  747. that they regularly see web applications greater than 2GB in size
  748. and with 20,000 variables.  Such web apps can only have been written
  749. by machine and, therefore, the vulns found in them were also written
  750. by machine.  Machine-powered vuln creation might change my analysis
  751. though I can't yet say in what direction.
  752.  
  753.  
  754.  
  755. \section{Right to be forgotten --- YES}
  756.  
  757. I've spoken elsewhere about how we are all intelligence agents now,
  758. collecting on each other on behalf of various overlords.\footnote{
  759.    ``We Are All Intelligence Agents Now''
  760.    \url{http://geer.tinho.net/geer.rsa.28ii14.txt}
  761. }
  762. There are so many technologies now that power observation and
  763. identification of the individual at a distance.  They may not yet be in
  764. your pocket or on your dashboard or embedded in all your smoke
  765. detectors, but that is only a matter of time.  Your digital exhaust is
  766. unique hence it identifies.  Pooling everyone's digital exhaust also
  767. characterizes how you differ from normal.  Privacy used to be
  768. proportional to that which it is impossible to observe or that which can
  769. be observed but not identified.  No more --- what is today observable and
  770. identifiable kills both privacy as impossible-to-observe and privacy as
  771. impossible-to-identify, so what might be an alternative?  If you are an
  772. optimist or an apparatchik, then your answer will tend toward rules of
  773. data procedure administered by a government you trust or control.  If
  774. you are a pessimist or a hacker/maker, then your answer will tend
  775. towards the operational, and your definition of a state of privacy will
  776. be my definition: the effective capacity to misrepresent yourself.
  777.  
  778. Misrepresentation is using disinformation to frustrate data fusion
  779. on the part of whomever it is that is watching you.  Some of it can
  780. be low-tech, such as misrepresentation by paying your therapist in
  781. cash under an assumed name.  Misrepresentation means arming yourself
  782. not at Walmart but in living rooms.  Misrepresentation means swapping
  783. affinity cards at random with like-minded folks.  Misrepresentation
  784. means keeping an inventory of misconfigured webservers to proxy
  785. through.  Misrepresentation means putting a motor-generator between
  786. you and the Smart Grid.  Misrepresentation means using Tor for no
  787. reason at all.  Misrepresentation means hiding in plain sight when
  788. there is nowhere else to hide.  Misrepresentation means having not
  789. one digital identity that you cherish, burnish, and protect, but
  790. having as many as you can.  Your fused identity is not a question
  791. unless you work to make it be.  Lest you think that this is a problem
  792. statement for the random paranoid individual alone, let me tell you
  793. that in the big-I Intelligence trade, crafting good cover is getting
  794. harder and harder and for the exact same reasons: misrepresentation
is getting harder and harder.  If I were running field operations,
  796. I would not try to fabricate a complete digital identity, I'd
  797. ``borrow'' the identity of someone who had the characteristics that
  798. I needed for the case at hand.
  799.  
  800. The Obama administration's issuance of a National Strategy for
  801. Trusted Identities in Cyberspace\footnote{
  802.    National Strategy for Trusted Identities in Cyberspace,
  803.    \url{http://www.nist.gov/nstic}
  804. }
  805. is case-in-point; it ``calls for the development of interoperable
  806. technology standards and policies --- an `Identity Ecosystem' --- where
  807. individuals, organizations, and underlying infrastructure --- such as
  808. routers and servers --- can be authoritatively authenticated.''  If you
  809. can trust a digital identity, that is because it can't be faked.  Why
  810. does the government care about this?  It cares because it wants to
  811. digitally deliver government services and it wants attribution.  Is
  812. having a non-fake-able digital identity for government services worth
  813. the registration of your remaining secrets with that government?  Is
  814. there any real difference between a system that permits easy, secure,
  815. identity-based services and a surveillance system?  Do you trust those
  816. who hold surveillance data on you over the long haul by which I mean the
  817. indefinite retention of transactional data between government services
  818. and you, the individual required to proffer a non-fake-able identity to
  819. engage in those transactions?  Assuming this spreads well beyond the
  820. public sector, which is its designers' intent, do you want this
  821. everywhere?  If you are building authentication systems today, then you
  822. are already playing ball in this league.  If you are using
  823. authentication systems today, then you are subject to the pending design
  824. decisions of people who are themselves playing ball in this league.
  825.  
  826. After a good amount of waffling, I conclude that a unitary, unfakeable
  827. digital identity is no bargain and that I don't want one.  I want
  828. to choose whether to misrepresent myself.  I may rarely use that,
  829. but it is my right to do so.  If that right vanishes into the
  830. panopticon, I have lost something and, in my view, gained next to
  831. nothing.  In that regard, and acknowledging that it is a baby step,
  832. I conclude that the EU's ``Right to be Forgotten'' is both appropriate
  833. and advantageous though it does not go far enough.  Being forgotten
  834. is consistent with moving to a new town to start over, to changing
  835. your name, to a definition of privacy that turns on whether you do
  836. or do not retain the effective capacity to misrepresent yourself,
  837. a right which I will remind you is routinely granted but to those
  838. who have especially helped governmental causes (witness protection,
  839. e.g.).  A right to be forgotten is the only check on the tidal wave
  840. of observability that a ubiquitous sensor fabric is birthing now,
  841. observability that changes the very quality of what ``in public''
  842. means.  Entities that block deep-linking to their web resources are
  843. neutralizing indexability.  Governments of all stripes, irretrievably
  844. balkanizing the Internet through the self-same vehicle of indexing
  845. controls, are claiming that a right to do so is inherently theirs.
  846. The only democratizing brake on this runaway train is for individuals
  847. to be able, in their own small way, to do the same as do other
  848. entities.  I find it notably ironic that The Guardian newspaper's
  849. championing of Edward Snowden's revelations about privacy loss is
  850. paired with the same paper's editorializing that ``No one has a right
  851. to be forgotten.''\footnote{
  852.    ``The Right to Be Forgotten Will Turn the Internet into a Work of
  853.    Fiction,''
  854.    \url{http://www.theguardian.com/commentisfree/2014/jul/06/right-to-be-forgotten-internet-work-of-fiction-david-mitchell-eu-google}
  855. }
Au contraire, mesdames et messieurs, they most assuredly do.
  857.  
  858.  
  859.  
  860. \section{Internet voting --- NO}
  861.  
Motivated, expert opponents are very nearly impossible to defend against.
  863. People like us here know that, which is why it is natural for people
  864. like us here to oppose voting over the Internet.  The National
  865. Center for Policy Analysis thinks online voting is a bad idea.  NIST
  866. thinks online voting is a bad idea.  With Pamela Smith, Bruce
  867. McConnell editorialized in the pages of the Wall Street
  868. Journal\footnote{
  869.    ``Hack the Vote: The Perils of the Online Ballot Box''
  870.    \url{http://online.wsj.com/articles/pamela-smith-and-bruce-mcconnell-hack-the-vote-the-perils-of-the-online-ballot-box-1401317230}
  871. }
  872. that online voting is a bad idea.  The fact that we here have near
  873. universal disdain for the idea has not seemed to change much policy.
  874.  
  875. Now it is always true that a thorough security analysis will get
  876. much less attention than a juicy conspiracy theory even if both
  877. lead to the same conclusion.  How do we explain this?  If I knew
  878. that, then I would commence to explaining, but we may not need to
  879. explain it if the integrity of some election is put at question by
  880. events.  I'd like to think that we don't need carnage to motivate
  881. a re-think, but perhaps we do.  If we do need carnage, then may its
  882. coming be sooner rather than later.
  883.  
  884.  
  885.  
  886. \section{Abandonment --- CERTAINTY OF CONSEQUENCES}
  887.  
  888. If I abandon a car on the street, then eventually someone will be
  889. able to claim title.  If I abandon a bank account, then the State
  890. will eventually seize it.  If I abandon real estate by failing to
  891. remedy a trespass, then in the fullness of time adverse possession
  892. takes over.  If I don't use my trademark, then my rights go over
  893. to those who use what was and could have remained mine.  If I abandon
  894. my spouse and/or children, then everyone is taxed to remedy my
  895. actions.  If I abandon a patent application, then after a date
  896. certain the teaching that it proposes passes over to the rest of
  897. you.  If I abandon my hold on the confidentiality of data such as
  898. by publishing it, then that data passes over to the commonweal not
  899. to return.  If I abandon my storage locker, then it will be lost
  900. to me and may end up on reality TV.  The list goes on.
  901.  
Apple computers running OS X 10.5 or earlier get no updates (comprising a
  903. significant fraction of the installed base).  Any Microsoft computer
  904. running XP gets no updates (likewise comprising a significant
  905. fraction of the installed base).  The end of security updates follows
  906. abandonment.  It is certainly ironic that freshly pirated copies
  907. of Windows get security updates when older versions bought legitimately
  908. do not.
  909.  
  910. Stating what to me is the obvious policy stance, if Company X
  911. abandons a code base, then that code base must be open sourced.
  912. Irrespective of security issues, many is the time that a bit of
  913. software I use has gone missing because its maker killed it.  But
with respect to security, some constellation of \{I, we, you, they\} is
  915. willing and able to provide security patches or workarounds as time
  916. and evil require.
  917.  
  918. Would the public interest not be served by a conversion to open
  919. source for abandoned code bases?  I believe it would.  But wait,
  920. you say, isn't purchased software on a general purpose computer a
  921. thing of the past?  Isn't the future all about auto-updated smartphone
  922. clients transacting over armored private (carrier) networks to
  923. auto-updated cloud services?  Maybe; maybe not.  If the two major
  924. desktop suppliers update only half of today's desktops, then what
  925. percentage will they update tomorrow?
  926.  
If you say ``Make them try harder!'' then the legalistic, regulatory
  928. position is your position, and the ACLU is already trying that
  929. route.  If smartphone auto-update becomes a condition of merchantability
  930. and your smartphone holds the keying material that undeniably says
  931. that its user is you, then how long before a FISA court orders a
  932. special auto-update to \emph{your} phone for evidence gathering?
  933.  
  934. If you say ``But we already know what they're going to do, don't
we?'' then the question is what to do about the abandoned code bases.
  936. Open-sourcing abandoned code bases is the worst option, except for
  937. all the others.  But if seizing an abandoned code base is too big
  938. a stretch for you before breakfast, then start with a Public Key
  939. Infrastructure Certifying Authority that goes bankrupt and ask ``Who
  940. gets the keys?''
  941.  
  942.  
  943.  
\section{Convergence --- DEFAULT DENY}
  945.  
  946. Let me ask you a question: Are the physical and digital worlds one
  947. world or two?  Are cyberspace and meatspace converging or diverging
  948. over time?  I conclude that they are converging, but if they are
  949. converging, then is cyberspace looking more and more like meatspace
  950. or is meatspace looking more and more like cyberspace?  That is not
  951. so clear.
  952.  
  953. Possibility \#1 is that cyberspace becomes more and more like
  954. meatspace, ergo the re-creation of borders and jurisdictional
  955. boundaries is what happens next.  Possibility \#2 is that meatspace
  956. becomes more and more like cyberspace, ergo jurisdictional boundaries
  957. grow increasingly irrelevant and something akin to one-world
  958. technocratic government more or less follows.  The former is
  959. heterogeneous, the latter is the monoculture of a single nation-state.
  960. As we all know, resiliency and freedom obtain solely from heterogeneity,
  961. so converging meatspace to cyberspace is the unfavorable outcome,
  962. but what can be done about it?
  963.  
  964. At the end of last year, the Pew Research Center invited 12,000
  965. ``experts'' to answer a single Yes/No question:
  966.  
  967. \begin{quote}
  968.    By 2025 will there be significant changes for the worse and
  969.    hindrances to the ways in which people get and share content
  970.    online compared with the way globally networked people can operate
  971.    online today?\footnote{
  972.     \url{http://www.pewinternet.org/2014/07/03/net-threat}
  973.    }
  974. \end{quote}
  975.  
  976. Of the 12,000 invited, some 1,400 did answer.  Putting aside whatever
  977. selection bias may be reflected in who chose to answer and who did
  978. not, Pew found four themes dominated respondent comments:
  979.  
  980. \begin{enumerate}
  981.    \item Actions by nation-states to maintain security and political
  982.     control will lead to more blocking, filtering, segmentation, and
  983.     balkanization of the Internet.
  984.  
  985.    \item Trust will evaporate in the wake of revelations about government
  986.     and corporate surveillance and likely greater surveillance in the
  987.     future.
  988.  
  989.    \item Commercial pressures affecting everything from Internet
  990.     architecture to the flow of information will endanger the open
  991.     structure of online life.
  992.  
  993.    \item Efforts to fix the ``too much information'' problem might
  994.     over-compensate and actually thwart content sharing.
  995. \end{enumerate}
  996.  
  997. My colleague Rob Lemos mapped Pew's themes to the two alternative
  998. futures I mentioned above,\footnote{
  999.    Rob Lemos, personal communication
  1000. }
  1001. saying that ``If cyberspace converges to our physical reality, then we
  1002. will have balkanization and commercial efforts to artificially create
  1003. information monopolies, while if the physical world goes toward digital
  1004. space, then we have greater surveillance, the erosion of trust, much
  1005. information leakage, and the reaction to that leakage.''  More
  1006. crucially, Lemos also observed that the growth of technology has greatly
  1007. increased personal power:
  1008.  
  1009. \begin{quote}
  1010.    The impact that a single person can have on society has significantly
  1011.    increased over time to where a single individual can have a
  1012.    devastating effect.  The natural reaction for government is to
   become more invasive [possibility \#2 above] to better defend its
   monoculture, or more separate [possibility \#1 above] to firewall
  1015.    threats from one another.  Because threats and kinetic impacts
  1016.    can increasingly travel through the digital realm, they necessitate
  1017.    that the policy and legal frameworks of the digital and physical
  1018.    world converge.
  1019. \end{quote}
  1020.  
  1021. In other words, Lemos argues that convergence is an inevitable
  1022. consequence of the very power of cyberspace in and of itself.  I
  1023. don't argue with Lemos' idea that increasingly powerful, location
  1024. independent technology in the hands of the many will tend to force
  1025. changes in the distribution of power.  In fact, that is the central
  1026. theme of this essay --- that the power that is growing in the net,
  1027. per se, will soon surpass the ability of our existing institutions
  1028. to modify it in any meaningful way, so either the net must be broken
  1029. up into governable chunks or the net becomes government.
  1030.  
  1031. It seems to me that the leverage here favors cyberspace whenever
  1032. and wherever we give cyberspace a monopoly position, which we are
doing blindly and often.  In the last couple of years, I've
  1034. found that institutions that I more or less must use --- my 401(k)
custodian, the Government Accountability Office's accounts payable
  1036. department, the payroll service my employer outsources to, etc. ---
  1037. no longer accept paper letter instructions, they each only accept
  1038. digital delivery of such instructions.  This means that each of
  1039. them has created a critical dependence on an Internet swarming with
men in the middle and, what is more, they have doubtless given
  1041. up their own ability to fall back to what worked for a century
  1042. before.
  1043.  
  1044. It is that giving up of alternative means that really defines what
  1045. convergence is and does.  It is said that all civil wars are about
  1046. on whose terms re-unification will occur.  I would argue that we
  1047. are in, to coin a phrase, a Cold Civil War to determine on whose
  1048. terms convergence occurs.  Everything in meatspace we give over to
  1049. cyberspace replaces dependencies that are local and manageable with
  1050. dependencies that are certainly not local and I would argue much
  1051. less manageable because they are much less secure.  I say that
  1052. because the root cause of risk is dependence, and most especially
  1053. dependence on expectations of system state.  I say ``much less secure''
  1054. because one is secure, that is to say that one is in a state of
  1055. security, if and only if there can be no unmitigatable surprises.
The more we put on the Internet, the broader and less mitigatable any
  1057. surprises become.
  1058.  
  1059. This line of thought is beginning to sink in.  Let me quote from a
  1060. Bloomberg article a month ago:\footnote{
  1061.    ``Banks Dreading Computer Hacks Call for Cyber War Council''
  1062.    \url{http://www.bloomberg.com/news/print/2014-07-08/banks-dreading-computer-hacks-call-for-cyber-war-council.html}
  1063. }
  1064.  
\begin{quote}
   Wall Street's biggest trade group has proposed a government-industry
   cyber war council to stave off terrorist attacks that could trigger
   financial panic by temporarily wiping out account balances,
   according to an internal document.

   The proposal by the Securities Industry and Financial Markets
   Association calls for a committee of executives and deputy-level
   representatives from at least eight U.S. agencies including the
   Treasury Department, the National Security Agency and the Department
   of Homeland Security, all led by a senior White House official.

   The document sketches an unusually frank and pessimistic view by
   the industry of its readiness for attacks wielded by nation-states
   or terrorist groups that aim to ``destroy data and machines.''  It
   says the concerns are ``compounded by the dependence of financial
   institutions on the electric grid,'' which is also vulnerable to
   physical and cyber attack.
\end{quote}

So here you have the biggest financial firms saying that their
dependencies are no longer manageable, and that the State's monopoly
on the use of force must be brought to bear.  What they are talking
about is that they have no way to mitigate the risk of common mode
failure.

To repeat, risk is a consequence of dependence.  Because of shared
dependence, aggregate societal dependence on the Internet is not
estimable.  If dependencies are not estimable, they will be
underestimated.  If they are underestimated, they will not be made
secure over the long run, only over the short.  As the risks become
increasingly unlikely to appear, the interval between events will
grow longer.  As the latency between events grows, the assumption
that safety has been achieved will also grow, thus fueling increased
dependence in what is now a positive feedback loop.  Accommodating
old methods and Internet rejectionists preserves alternate, less
complex, more durable means and therefore bounds dependence.  Bounding
dependence is \emph{the} core of rational risk management.

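That feedback loop can be made concrete with a toy model of my own devising (hypothetical, not from the talk): let perceived safety grow with the quiet interval since the last failure, and let dependence compound with perceived safety.

```python
# Hypothetical toy model of the positive feedback loop described above:
# longer quiet intervals breed trust, and trust fuels new dependence,
# so exposure compounds even though nothing has actually become safer.

def step(dependence, quiet_years):
    perceived_safety = quiet_years          # longer quiet => more trust
    growth = 1 + 0.1 * perceived_safety     # trust fuels new dependence
    return dependence * growth

dependence = 1.0
for quiet_years in range(1, 6):             # each event rarer than the last
    dependence = step(dependence, quiet_years)

# The accumulated dependence is the exposure waiting at the eventual
# common-mode event.
print(round(dependence, 2))
```

The numbers are arbitrary; the point is only that unbounded dependence grows multiplicatively while the evidence against it arrives ever more slowly.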
If we don't bound dependence, we invite common mode failure.  In
the language of statistics, common mode failure comes exactly from
under-appreciated mutual dependence.  Quoting:\footnote{
   High Integrity Software System Assurance, section 4.2,
   \url{http://hissa.nist.gov/chissa/SEI\_Framework/framework\_16.html},
   but you'll have to look in the Internet Archive for it
}

\begin{quote}
   [R]edundancy is the provision of functional capabilities that
   would be unnecessary in a fault-free environment.  Redundancy
   is necessary, but not sufficient for fault tolerance... System
   failures occur when faults propagate to the outer boundary of
   the system.  The goal of fault tolerance is to intercept the
   propagation of faults so that failure does not occur, usually
   by substituting redundant functions for functions affected by a
   particular fault.  Occasionally, a fault may affect enough
   redundant functions that it is not possible to reliably select
   a non-faulty result, and the system will sustain a common-mode
   failure.  A common-mode failure results from a single fault (or
   fault set).  Computer systems are vulnerable to common-mode
   resource failures if they rely on a single source of power,
   cooling, or I/O.  A more insidious source of common-mode failures
   is a design fault that causes redundant copies of the same
   software process to fail under identical conditions.
\end{quote}

That last part --- that ``A more insidious source of common-mode
failures is a design fault that causes redundant copies of the same
software process to fail under identical conditions'' --- is exactly
that which can be masked by complexity precisely because complexity
ensures under-appreciated mutual dependence.

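The design-fault case the quotation warns about can be sketched in a few lines (a hypothetical illustration, not from the talk): three ``redundant'' replicas vote on a result, but because all three run the same flawed code, a single input defeats all of them at once and the vote cannot save the system.

```python
# Hypothetical sketch of a common-mode failure from a shared design
# fault: majority voting protects against independent faults, but
# identical copies of flawed code fail identically.

def flawed_parse(s):
    # The design fault: every copy alike chokes on non-numeric input.
    return int(s)

def vote(replicas, inp):
    """Return the majority result across replicas, None on total failure."""
    results = []
    for run in replicas:
        try:
            results.append(run(inp))
        except ValueError:
            results.append(None)
    # Majority vote over whatever the replicas produced.
    return max(set(results), key=results.count)

replicas = [flawed_parse, flawed_parse, flawed_parse]

print(vote(replicas, "42"))    # prints 42: a fault in one copy would be outvoted
print(vote(replicas, "oops"))  # prints None: all copies fail identically
```

Genuinely independent implementations would have let the vote mask the fault; redundancy bought nothing because the fault set was shared.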
In sum, as a matter of policy everything that is officially categorized
as a critical infrastructure must conclusively show how it can
operate in the absence of the Internet.  The 2008 financial crisis
proved that we can build systems more complex than we can operate,
the best policy counter to which has been the system of ``stress
tests'' thereafter administered to the banks.  We need other kinds
of stress tests even more.


\section{Conclusion}

I titled this talk ``Cybersecurity as Realpolitik.''  Realpolitik
means, in the words of British historian E. H. Carr, that what is
successful is right and what is unsuccessful is wrong, that there
is no moral dimension in how the world is, and that attempting to
govern based on principles cannot succeed.  Realpolitik is at once
atheistic and anti-utopian.

I find that distasteful and, it seems, that in governing my own
life I daily give up power advantage for principle.  At the same
time, having principles such as ``Might does not make right'' may
well be a failing on my part and, by extension, a failing on the
part of those who govern according to principle.  Cybersecurity as
we describe it in our mailing lists, on our blogs, at our cons, and
so forth is rich in principles and utopian desiderata, all the while
we have opponents at all levels, and probably always will, for whom
principle matters little but power matters a lot.  As Thomas Ray
said, ``Every successful system accumulates parasites,'' and the
Internet plus every widely popular application on it has parasites.
For some observers, parasites and worse are just a cost of doing
business.  For other observers, design which encourages bad outcomes
is an affront that must be fixed.  It is realism and realism alone
that remains when all else fails.

Political realism of the sort I am talking about is based on four
premises:

\begin{itemize}
   \item The international system is anarchic
   \item States are the most important actors
   \item All states within the system are unitary, rational actors
   \item The primary concern of all states is survival
\end{itemize}

This is likewise the realism of the cybersecurity situation in a
global Internet.  It is anarchic, and states have become the most
important actors.  States' investment in offensive cyber is entirely
about survival in such a world.  States are driven to this by the
dual, simultaneous expansion of what is possible and what their
citizens choose to depend on.

The late Peter Bernstein, perhaps the world's foremost thinker on
the topic, defined ``risk'' as ``more things can happen than
will.''\footnote{
   \booktitle{Against the Gods} and this 13:22 video at
   \url{http://www.mckinsey.com/insights/risk\_management/peter\_l\_bernstein\_on\_risk}
}
With technologic advance accelerating, ``more things can happen than
will'' takes on a particularly ominous quality if your job is to
ensure your citizens' survival in an anarchy where, daily, ever
more things can happen than will.  Realpolitik would say that under
such circumstances, defense becomes irrelevant.  What is relevant
is either (1) offense or (2) getting out of the line of fire
altogether.  States that are investing in offense are being entirely
rational and are likely to survive.  Those of us who are backing
out our remaining dependencies on digital goods and services are
being entirely rational and are likely to survive.  The masses who
quickly depend on every new thing are effectively risk seeking, and
even if they do not themselves know it, the States which own them
know, which explains why every State now does to its own citizens
what States once did only to officials in competing regimes.

You have politely listened to a series of ``get off the dime'' policy
proposals around mandatory reporting, net neutrality, source code
liability, strike back, fall backs, resiliency, vulnerability
finding, the right to be forgotten, Internet voting, abandonment,
and convergence, all by one guy that no one ever elected.  I thank
you, friends and countrymen, for lending me your ears.  But I shall
be happier still if some one or several of you find the articulateness
that overcomes the dynamic which we now inhabit, namely that if
what is successful is right and what is unsuccessful is wrong, then the
observable allocation of success and of failure is utterly disconnected
from the technical facts of cybersecurity as we know them here.  In
the end, reality always wins, and the reality of technical facts
has more staying power than the reality of market share or utopian
enthusiasm.

Nevertheless, cybersecurity is all about power and only power.
Realpolitik says that what cybersecurity works is right and what
cybersecurity does not work is wrong and Realpolitik thus resonates
with Howard's ``Security will always be exactly as bad as it can
possibly be while allowing everything to still function.''  Realpolitik
says that offense routinely beating defense is right, and imagining
otherwise is wrong, that those whose offense wins are right while
those whose defense loses are wrong.  Realpolitik says that offense's
superiority means that it is a utopian fantasy to believe that information
can be protected from leakage, and so the counter-offense of
disinformation is what we must deploy in return.  Realpolitik says
that sentient opponents have always been a fact of life, but never
before have they been location independent and never before have
they been able to recruit mercenaries who will work for free.
Realpolitik says that attribution is impossible unless we deploy a
unitary surveillance state.

I have long preferred to hire security people who are, more than
anything else, sadder but wiser.  They, and only they, know that
most of what commercially succeeds succeeds only so long as attackers
do not give it their attention while what commercially fails fails
not because it didn't work but because it wasn't cheap or easy or
sexy enough to try.  Their glasses are not rose-colored; they are
spattered with Realpolitik.  Sadder but wiser hires, however, come
only from people who have experienced private tragedies, not global
ones.  There are no people sadder but wiser about the scale and
scope of the attack surface you get when you connect everything to
everything and give up your prior ability to do without.  Until
such people are available, I will busy myself with reducing my
dependence on, and thus my risk exposure to, the digital world even
though that will be mistaken for curmudgeonly nostalgia.  Call that
misrepresentation, if you like.

There is never enough time.  Thank you for yours.


To the reader, see also: ``algorithmic regulation''

\rule{3in}{0.02in}



This and other material on file at
\url{http://geer.tinho.net/pubs}

\end{document}