DarrenRevell

Intel In Bed With NSA

Jul 13th, 2013
INTEL IN BED WITH NSA

----- Forwarded message from Matt Mackall <mpm@selenic.com> -----

Date: Thu, 11 Jul 2013 17:34:48 -0500
From: Matt Mackall <mpm@selenic.com>
To: liberationtech <liberationtech@lists.stanford.edu>
Subject: Re: [liberationtech] Heml.is - "The Beautiful & Secure Messenger"

On Thu, 2013-07-11 at 13:47 -0700, Andy Isaacson wrote:

> > Linux now also uses a closed RdRand [2] RNG if available.
>
> There was a bunch of churn when this code went in, so I could be wrong,
> but I believe that RdRand is only used to stir the same entropy pool as
> all of the other inputs which are used to generate random data for
> /dev/random et al. It's hard to leverage control of one input to a
> random pool into anything useful.

It's worth noting that the maintainer of record (me) for the Linux RNG quit the project about two years ago precisely because Linus decided to include a patch from Intel to allow their unauditable RdRand to bypass the entropy pool over my strenuous objections.

From a quick skim of current sources, much of that has recently been rolled back (/dev/random, notably) but kernel-internal entropy users like sequence numbers and address-space randomization appear to still be exposed to raw RdRand output.

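The difference at issue here, RdRand stirred into the entropy pool versus RdRand allowed to bypass it, can be sketched roughly as follows. This is an illustrative Python sketch, not the kernel's code; read_rdrand() is a hypothetical stand-in for the hardware instruction.

    # Illustrative Python sketch only; not the Linux kernel's code.
    # read_rdrand() is a hypothetical stand-in for the RdRand instruction.
    import hashlib, os, time

    def read_rdrand(n=32):
        # Stand-in for hardware output; treat it as potentially untrustworthy.
        return os.urandom(n)

    pool = bytearray(64)  # grossly simplified entropy pool

    def mix_into_pool(data):
        # Mixing: a malicious input cannot reduce the entropy already in the
        # pool, because inputs are only ever combined through a hash.
        global pool
        pool = bytearray(hashlib.sha512(bytes(pool) + data).digest())

    def random_bytes_mixed(n):
        mix_into_pool(read_rdrand())
        mix_into_pool(time.time_ns().to_bytes(8, "little"))  # other sources too
        return hashlib.sha512(bytes(pool)).digest()[:n]

    def random_bytes_bypassed(n):
        # Bypassing: the output depends only on the hardware source, so
        # whoever controls that source controls the "random" numbers.
        return read_rdrand(n)

In the mixed construction a backdoored source can at worst contribute nothing; in the bypassed construction it controls the result outright, which is the substance of the objection.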

(And in the meantime, my distrust of Intel's crypto has moved from "standard professional paranoia" to "actual legitimate concern".)

--

Mathematics is the supreme nostalgia of our time.

--

Too many emails? Unsubscribe, change to digest, or change password by emailing moderator at companys@stanford.edu or changing your settings at https://mailman.stanford.edu/mailman/listinfo/liberationtech

----- End forwarded message -----

--

Eugen* Leitl leitl http://leitl.org

______________________________________________________________

On 2013-07-13 12:20 AM, Eugen Leitl [forwarding Matt Mackall <mpm@selenic.com>] wrote:

It's worth noting that the maintainer of record (me) for the Linux RNG quit the project about two years ago precisely because Linus decided to include a patch from Intel to allow their unauditable RdRand to bypass the entropy pool over my strenuous objections.
Is there a plausible rationale for bypassing the entropy pool?

How unauditable is RdRand?

Is RdRand unauditable because it uses magic instructions that do unknowable things? Is it designed to actively resist audit? Has Intel gone out of its way to prevent you from knowing how good their true random generation is?

On Fri, Jul 12, 2013 at 2:48 PM, James A. Donald <jamesd@echeque.com> wrote:

On 2013-07-13 12:20 AM, Eugen Leitl [forwarding Matt Mackall <mpm@selenic.com>] wrote:
It's worth noting that the maintainer of record (me) for the Linux RNG quit the project about two years ago precisely because Linus decided to include a patch from Intel to allow their unauditable RdRand to bypass the entropy pool over my strenuous objections.

Is there a plausible rationale for bypassing the entropy pool?

Throughput? Not bypassing means having to wait until enough randomness has been gathered from trusted sources.

Or maybe it's just trusting Intel and assuming that RDRAND provides better randomness.

On 12/07/13 21:54 PM, Patrick Mylund Nielsen wrote:

On Fri, Jul 12, 2013 at 2:48 PM, James A. Donald <jamesd@echeque.com> wrote:
On 2013-07-13 12:20 AM, Eugen Leitl [forwarding Matt Mackall <mpm@selenic.com>] wrote:
It's worth noting that the maintainer of record (me) for the Linux RNG quit the project about two years ago precisely because Linus decided to include a patch from Intel to allow their unauditable RdRand to bypass the entropy pool over my strenuous objections.

Is there a plausible rationale for bypassing the entropy pool?

Throughput? Not bypassing means having to wait until enough randomness has been gathered from trusted sources.

Typically, the entropy pool is used to feed a PRNG. Throughput isn't really an issue because modern PRNGs are fast, and there are very few applications that require pseudorandom numbers at that sort of speed.

Or maybe it's just trusting Intel and assuming that RDRAND provides better randomness.

This thread has been seen before. On-chip RNGs are auditable but not verifiable by the general public. So the audit can be done then bypassed. Which in essence means the on-chip RNGs are mostly suitable for mixing into the entropy pool.

Not to mention, Intel have been in bed with the NSA for the longest time. Secret areas on the chip, pop instructions, microcode and all that ... A more interesting question is whether the non-USA competitors are also similarly friendly.

iang

[BTW, when responding to a message forwarded, do please fix the quote attribution.]

On Fri, Jul 12, 2013 at 2:29 PM, ianG <iang@iang.org> wrote:

> This thread has been seen before. On-chip RNGs are auditable but not
> verifiable by the general public. So the audit can be done then bypassed.
> Which in essence means the on-chip RNGs are mostly suitable for mixing
> into the entropy pool.
>
> Not to mention, Intel have been in bed with the NSA for the longest time.
> Secret areas on the chip, pop instructions, microcode and all that ... A
> more interesting question is whether the non-USA competitors are also
> similarly friendly.

I'd like to understand what attacks NSA and friends could mount, with Intel's witting or unwitting cooperation, particularly what attacks that *wouldn't* put civilian (and military!) infrastructure at risk should details of a backdoor leak to the public, or *worse*, be stolen by an antagonist. I would hope that talented folks at the NSA would be averse to embedding backdoors in hardware (and firmware, and software) that they could lose control of, especially in light of recent developments. I'm *not* saying that my wishing is an argument for trusting Intel's RNG -- I'm sincerely trying to understand what attacks could conceivably be mounted through a suitably modified RDRAND with low systemic risk.

For example, there might be a way to close a backdoor in a hurry, should it leak.

Understanding the attacks that sigint agencies might mount in this fashion might help us understand the likelihood of their attempting them.

I think it's important to highlight the systemic risk caused by embedding backdoors everywhere. See "Security Implications of Applying the Communications Assistance to Law Enforcement Act to Voice over IP", by Bellovin, Blaze, et al. Systemic failures can be extremely severe. The 2008 financial crisis was a systemic failure, and, sadly, I can imagine far worse systemic failures. Minimizing systemic risk should be a key policy goal in general, but management of systemic risk is inherently not in the interests of any short-term political actors, therefore it's important to ensure institutional inertia for systemic risk minimization. The NSA that once worked to strengthen DES against differential cryptanalysis clearly thought so (or, rather, the people who made that happen did) -- is today's NSA no longer interested in the nation's civilian and military security?!

Nico
I think compromising microcode update signing keys would be the easiest path. Then you don't need backdoors baked in the hardware, don't need Intel's buy-in, and can target specific systems without impacting the public at large.

This is a pretty interesting analysis showing that these updates are 2048-bit RSA signed blobs:
http://inertiawar.com/microcode/

>I'd like to understand what attacks NSA and friends could mount, with Intel's
>witting or unwitting cooperation, particularly what attacks that *wouldn't*
>put civilian (and military!) infrastructure at risk should details of a
>backdoor leak to the public, or *worse*, be stolen by an antagonist.

I would hope that talented folks at the NSA would be averse to embedding backdoors in hardware (and firmware, and software) that they could lose control of, especially in light of recent developments.

Unfortunately, it appears that at least some chips are being backdoored, supposedly for security reasons. For instance, see "Breakthrough silicon scanning discovers backdoor in military chip". The chip designers have admitted in that case that the backdoor was designed as part of the security scheme. Actel inserted the backdoor, I would assume with NSA permission, since backdooring sensitive chips without NSA permission would be extremely risky.

The NSA that once worked to strengthen DES against differential cryptanalysis clearly thought so (or, rather, the people who made that happen did) -- is today's NSA no longer interested in the nation's civilian and military security?!

But remember that the NSA weakened DES by reducing the key size from 128 bits to 56 bits. It appears that they preferred the ability to break DES over issues of civil security even back then.

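For scale, assuming the usual figures (a 128-bit Lucifer key cut to DES's 56 bits), the reduction shrinks the keyspace by a factor of 2^72:

    # Back-of-the-envelope keyspace comparison (illustration only).
    keys_56 = 2 ** 56        # about 7.2e16 keys: within reach of brute force
    keys_128 = 2 ** 128      # about 3.4e38 keys: far beyond exhaustive search
    print(f"{keys_56:.2e} vs {keys_128:.2e}, a factor of 2^{128 - 56}")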

On 2013-07-13 4:54 AM, Patrick Mylund Nielsen wrote:

On Fri, Jul 12, 2013 at 2:48 PM, James A. Donald <jamesd@echeque.com> wrote:
On 2013-07-13 12:20 AM, Eugen Leitl wrote:
It's worth noting that the maintainer of record (me) for the Linux RNG quit the project about two years ago precisely because Linus decided to include a patch from Intel to allow their unauditable RdRand to bypass the entropy pool over my strenuous objections.

Is there a plausible rationale for bypassing the entropy pool?

Throughput? Not bypassing means having to wait until enough randomness has been gathered from trusted sources.

Or maybe it's just trusting Intel and assuming that RDRAND provides better randomness.

Often, when the computer boots up, it needs to do things that require some true randomness before much entropy has been gathered. This is a potential disaster, so there is a case for a non-blocking source of randomness.

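The trade-off looks roughly like this; an illustrative Python sketch against the Linux device files, not a recommendation:

    # Illustration of the boot-time trade-off with the Linux device files.
    # /dev/random (at the time) could block until enough entropy had been
    # gathered; /dev/urandom never blocks, even if the pool is barely seeded.

    def key_from_blocking_source():
        with open("/dev/random", "rb") as f:    # may stall early in boot
            return f.read(32)

    def key_from_nonblocking_source():
        with open("/dev/urandom", "rb") as f:   # never blocks, but early-boot
            return f.read(32)                   # output may be poorly seeded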

>I'd like to understand what attacks NSA and friends could mount, with Intel's
>witting or unwitting cooperation, particularly what attacks that *wouldn't*
>put civilian (and military!) infrastructure at risk should details of a
>backdoor leak to the public, or *worse*, be stolen by an antagonist.

Right. How exactly would you backdoor an RNG so (a) it could be effectively used by the NSA when they needed it (e.g. to recover Tor keys), (b) not affect the security of massive amounts of infrastructure, and (c) be so totally undetectable that there'd be no risk of it causing a s**tstorm that makes the $0.5B FDIV bug seem like small change (not to mention the legal issues, since this one would have been inserted deliberately, so we're probably talking bet-the-company amounts of liability there).

>I'm *not* saying that my wishing is an argument for trusting Intel's RNG --
>I'm sincerely trying to understand what attacks could conceivably be mounted
>through a suitably modified RDRAND with low systemic risk.

Being careful is one thing, being needlessly paranoid is quite another. There are vast numbers of issues that crypto/security software needs to worry about before getting down to "has Intel backdoored their RNG".

Peter.

There are plenty of ways to design an apparently random number generator so that you can predict the output (exactly or approximately) without causing any obvious flaws in the pseudorandom output stream. Even the smallest bias can significantly reduce security. This could be a critical failure, and we have no way to determine whether or not it is happening.

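As a toy illustration of that point (an invented example, not any real product's design): a generator whose seed the designer can reconstruct still produces an ordinary hash chain, so its output passes black-box statistical tests, yet it is fully predictable to the designer.

    # Toy illustration only; an invented design, not any real product.
    import hashlib

    DESIGNER_SECRET = b"known only to the manufacturer"   # hypothetical escrow value

    class PredictableRNG:
        def __init__(self, serial_number):
            # The seed depends only on data the designer can reconstruct.
            self.state = hashlib.sha256(
                DESIGNER_SECRET + serial_number.to_bytes(8, "big")).digest()

        def next_block(self):
            self.state = hashlib.sha256(self.state).digest()
            return self.state

    rng = PredictableRNG(serial_number=42)
    sample = b"".join(rng.next_block() for _ in range(4))
    # 'sample' passes black-box statistical tests (it is an ordinary hash
    # chain), yet the designer can regenerate it from the serial number alone.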

As for preventing potential security holes and making the backdoor deniable, that takes a little more thinking.

And for legal issues, there are any number of hand-wavy blame-shifting schemes that Intel and whoever would want to backdoor their RNG could use.

I contest the idea that we should ignore the fact that Intel's RNG could be backdoored. Just because other problems exist doesn't mean we should ignore this one. I agree that perhaps worrying about this constitutes being "too paranoid", but no cryptographer ever got hurt by being too paranoid, and not trusting your hardware is a great place to start.

[Peter Gutmann's preceding message omitted.]
On Sat, Jul 13, 2013 at 1:38 AM, William Yager <will.yager@gmail.com> wrote:

not trusting your hardware is a great place to start.
Heh, might as well just give up. http://cm.bell-labs.com/who/ken/trust.html

(I know what you meant, just couldn't resist.)

[Peter Gutmann's message 1 omitted.]

William Yager <will.yager@gmail.com> writes:

>no cryptographer ever got hurt by being too paranoid, and not trusting your
>hardware is a great place to start.

And while you're lying awake at night worrying whether the Men in Black have backdoored the CPU in your laptop, you're missing the fact that the software that's using the random numbers has 36 different buffer overflows, of which 27 are remote-exploitable, and the crypto uses an RSA exponent of 1 and AES-CTR with a fixed IV.

Peter.
It's nice that you can be so cavalier about this, but if your system's RNG is fundamentally broken, it doesn't really matter so much whether your other stuff is well-programmed or not. At least if my web browser is remotely exploitable, it doesn't break my disk encryption software, GPG, SSH, every other web browser I'm using, and pretty much every crypto appliance on my machine.

I'd rather have a rickety shed built on solid ground than a castle built on quicksand.

On Sat, Jul 13, 2013 at 4:32 PM, Peter Gutmann

[Peter Gutmann's preceding message 2 omitted.]

A good point, of course. So what should everyone do?

--

Noon Silk
[Messages omitted by Patrick Mylund Nielsen, William Yager, Peter Gutmann, Nico Williams.]

Arrange that a certain specific sequence of data operations, which can be triggered by processing an incoming packet, switches the random number generator from true random mode to pseudo random mode based on a key found in that data.

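A rough Python sketch of that mechanism, with an invented trigger pattern and key format, might look like this:

    # Illustrative sketch of the mechanism described above; the trigger
    # pattern and key format are invented for the example.
    import hashlib, hmac, os

    TRIGGER = b"\xde\xad\xbe\xef"          # hypothetical magic byte sequence

    class SwitchableRNG:
        def __init__(self):
            self.prf_key = None            # None means true-random mode

        def observe(self, packet):
            # Processing an incoming packet can silently flip the mode.
            if TRIGGER in packet:
                i = packet.index(TRIGGER) + len(TRIGGER)
                self.prf_key = packet[i:i + 16]    # key carried in the data

        def random_bytes(self, counter):
            if self.prf_key is None:
                return os.urandom(32)      # true random mode
            # Pseudo-random mode: reproducible by whoever sent the key.
            return hmac.new(self.prf_key, counter.to_bytes(8, "big"),
                            hashlib.sha256).digest()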

Noon Silk <noonslists@gmail.com> writes:

>A good point, of course. So what should everyone do?

Look for things, and fix things, in order of likelihood of occurrence and exploitability. (Strong) Crypto is bypassed, not penetrated, so address that first. Once you've addressed all of those issues, then you can start looking for the tape measure to determine how much tinfoil you'll need for the hat.

Peter.

William Yager <will.yager@gmail.com> writes:

>It's nice that you can be so cavalier about this, but if your system's RNG is
>fundamentally broken, it doesn't really matter so much whether your other
>stuff is well-programmed or not.

Well I'm not sure what thread you're coming in from, but the current one was about the issue of unnecessary paranoia about MIBs backdooring CPUs (and their RNGs). Good RNG design is an entirely different issue, see e.g.

https://www.usenix.org/legacy/publications/library/proceedings/sec98/gutmann.html

>At least if my web browser is remotely exploitable, it doesn't break my disk
>encryption software, GPG, SSH, every other web browser I'm using, and pretty
>much every crypto appliance on my machine.

If your browser is remotely exploitable then it breaks everything on what used to be your machine.

Peter.
On 13 July 2013 03:20, Peter Gutmann <pgut001@cs.auckland.ac.nz> wrote:

[Gutmann message 2 omitted.]

But what's the argument for _not_ mixing their probably-not-backdoored RNG with other entropy?
Ben Laurie <ben@links.org> writes:

>But what's the argument for _not_ mixing their probably-not-backdoored RNG
>with other entropy?

Oh, no argument from me on that one, mix every entropy source you can get your hands on into your PRNG, including less-than-perfect ones, the more redundancy there is the less the chances of a single point of failure.

(Look at the Capstone design to see what the MIB are actually doing, they have a noise-based RNG, an ANSI X9.17 generator, and a straight counter, all fed into a SHA-1 PRNG, for redundancy).

And then run every static source code analysis tool you can find on your RNG, and implement dynamic analysis if you can, and perform entropy checks, and run a self-test with known-good test vectors on startup, and ... well, you get the picture.

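A rough sketch of the mixing and the startup known-answer test, using stand-in entropy sources and the standard SHA-256 test vector, might look like this in Python:

    # Rough sketch of the redundancy and self-test ideas; the entropy
    # sources here are stand-ins, not a vetted design.
    import hashlib, os, time

    def gather_sources():
        # Several independent inputs, so none is a single point of failure.
        yield os.urandom(32)                          # OS pool
        yield time.time_ns().to_bytes(8, "little")    # timing value
        yield os.getpid().to_bytes(4, "little")       # per-process value

    def seed_pool():
        h = hashlib.sha256()
        for source in gather_sources():
            h.update(source)
        return h.digest()

    def known_answer_self_test():
        # FIPS 180 test vector for SHA-256("abc"): verifies the deterministic
        # mixing primitive before it is trusted with real entropy.
        expected = ("ba7816bf8f01cfea414140de5dae2223"
                    "b00361a396177a9cb410ff61f20015ad")
        return hashlib.sha256(b"abc").hexdigest() == expected

    assert known_answer_self_test(), "mixing primitive failed its self-test"
    pool = seed_pool()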

This is just careful engineering. Worrying about what the MIB are up to is paranoia. If you apply your security engineering well, you don't need to worry about paranoia.

(Well, up to a certain extent anyway. Checked your keyboard firmware and wiring recently? Was that TSOP always there? It looks newer than the surrounding circuitry).

Peter.

On 13/07/13 09:32 AM, Peter Gutmann wrote:

[Gutmann message 2 omitted.]

;) has everyone had a read of this:

http://www.infoworld.com/d/security/in-his-own-words-confessions-of-cyber-warrior-222266

iang

ps, my comments here:

http://financialcryptography.com/mt/archives/001439.html

On 13 July 2013 10:11, Peter Gutmann <pgut001@cs.auckland.ac.nz> wrote:

> and run a self-test with known-good test vectors on startup, and ... well, you get the
> picture.

Amusing story: FIPS 140 requires self-tests on the PRNG. There was a bug in FIPS OpenSSL once where the self-test mode got stuck on and so no entropy was fed into the PRNG.

Also, back when I was doing FIPS 140 they made me remove some of the entropy feeds into the PRNG - particularly ones that protect against pool duplication over forks.

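A toy rendering of that failure mode (not OpenSSL's actual code): a mode flag intended only for the self-test, if left stuck on, makes every instance reseed from the same fixed vector while the self-tests themselves keep passing.

    # Toy rendering of the bug class described above; not OpenSSL's code.
    import hashlib, os

    SELF_TEST_SEED = b"\x00" * 32          # fixed vector meant only for testing

    class Prng:
        def __init__(self):
            self.self_test_mode = False    # the bug: imagine this stuck at True
            self.state = b""

        def reseed(self):
            entropy = SELF_TEST_SEED if self.self_test_mode else os.urandom(32)
            self.state = hashlib.sha256(self.state + entropy).digest()

        def output(self):
            self.reseed()
            return self.state

    # With self_test_mode stuck on, every instance produces the same,
    # entirely predictable stream, while the self-tests themselves still pass.
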
On 13/07/13 09:43 AM, Noon Silk wrote:

> So what should everyone do?

Risk analysis. Which starts with your business model.

What you do is go talk to your customers and figure out what happens to them. Formally, you would figure out the frequency of these events, and multiply them by the damages. Order them that way. Concentrate on the top one first, munch your way down the list.

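A minimal sketch of that ordering, with invented threats and numbers purely for illustration:

    # Illustration only: rank threats by expected loss (frequency x damages).
    # The threat names and numbers are invented for the example.
    threats = [
        ("police or official shakedown", 0.30, 200),   # (name, freq/yr, damage)
        ("scammer or merchant fraud",    0.20, 150),
        ("family or insider taking",     0.25,  80),
        ("state SIGINT interception",    0.0001, 500),
    ]

    for name, freq, damage in sorted(threats, key=lambda t: t[1] * t[2],
                                     reverse=True):
        print(f"{name:30s} expected loss {freq * damage:8.2f}")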

If you do this, in ordinary business, you will find that the NSA isn't even on the list, unless for some reason you targeted some space that they also targeted [0].

<advert> E.g., in my current business I'm dealing with savings for v. poor women in Africa. The threats that are hitting them are shakedowns by police, government, scammers, banks, merchants, each other, family, and self, not necessarily in the order we westerners expect. Sometimes with violence. So those are the things I'm building the system to protect against, which of course takes some cryptography to preserve and lock down assets rather than hide them, mixed with a lot of other things... your classic old 1990s CIA models aren't going to help a lot here. </>

iang

[0] jihadist websites, CAs and chat systems for Americans spring to mind.