Lightning Network Onion Routing: Sphinx+HORNET
Dec 14th, 2015

Hi y'all!!

So I've been drafting up some schemes for the design of onion routing within the
Lightning Network myself. Rather than create my own mixing format and onion
routing protocol, I've instead turned to some of the existing academic literature
on the subject. I found two schemes, the second building upon the first, which
seem to fit elegantly into our problem domain, directly solving many problems
we've encountered along the way.

The two schemes of interest are first Sphinx[1]: a compact mixing format, and
HORNET[2][3], which leverages Sphinx to build a low-latency, high-bandwidth
onion routing protocol.

Alright, let's jump into an overview of both schemes!

(heads up, this post is a bit long as it includes a brain dump of ideas I've
been floating around in my head for the past two months or so)

First, I'll give a brief overview of Sphinx. I won't delve into the exact
details of the protocol; instead I'll highlight the key insight that allows for
*extremely* small mix-headers. If we assume a maximum diameter of 20 hops, and a
16-byte MAC, then a full onion blob weighs in at only 705 bytes! This is
considerably smaller (more than a 5x size reduction) than the 3840 bytes per
onion-blob in the current proposal. I recommend consulting the paper for the
proper specifics. Additionally, the paper includes a formal proof of security in
the Universal Composability model[4], achieving each of the 4 properties outlined
for a secure onion routing protocol in [5].

(btw, don't dismiss figures 1 and 2 in the Sphinx paper. They may look strange,
but finally grokking them resulted in a breakthrough allowing me to finish my
implementation)

So, at this point, y'all may be wondering how Sphinx achieves such compact
mix-headers. Well, rather than including a group element to perform DH with
for *each* hop in the route, mix-headers in Sphinx only contain a single group
element. This single group element is then deterministically re-randomized by
each hop in the route, preparing the group element for the hop directly
following it.

Here's a brief sketch of the mechanism (covering only the DH aspects):

Say Alice wants to route a payment to Bob (Y_b) over Lightning. Alice consults her
routing/service table, and finds a route via Carol and Dave. Next, to create the
mix-header, Alice grabs the public keys for Carol (Y_c) and Dave (Y_d). She then
generates an ephemeral DH key-pair (x, a_0 = g^x), and proceeds to derive a shared
secret for each hop in the route. Using multiplicative notation, the DH derivation
looks something like this (s = shared secret, b = blinding factor):
* a_0 = g^x,       s_0 = (Y_c)^x,           b_0 = hash(a_0, s_0)
* a_1 = (a_0)^b_0, s_1 = (Y_d)^(x*b_0),     b_1 = hash(a_1, s_1)
* a_2 = (a_1)^b_1, s_2 = (Y_b)^(x*b_0*b_1), b_2 = hash(a_2, s_2)
......... continued if more hops were in the route ..........

Alice computes all the blinding factors and shared secrets up front, allowing
her to create the mix-header for each hop, using the derived hop shared secret
to derive subsequent decryption and MAC keys. Each node receives a_{i}, and
prepares a_{i+1} for the next hop by re-randomizing the group element using the
blinding factor.
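
To make the chain concrete, here's a minimal, runnable Python sketch of just
this derivation, over a toy multiplicative group (a real implementation would
use an elliptic curve such as secp256k1; the modulus, generator, and keys here
are illustrative only, NOT secure):

import hashlib

P = 2**127 - 1   # toy Mersenne prime modulus (illustrative only)
G = 3            # toy generator

def blind(a, s):
    # b_i = hash(a_i, s_i), reduced into the exponent group
    h = hashlib.sha256(a.to_bytes(16, 'big') + s.to_bytes(16, 'big'))
    return int.from_bytes(h.digest(), 'big') % (P - 1)

def derive_hop_secrets(x, hop_pubkeys):
    """Given Alice's ephemeral key x and each hop's pubkey Y_i = g^(y_i),
    return the (a_i, s_i) pairs from the derivation above."""
    a = pow(G, x, P)          # a_0 = g^x
    exp = x                   # running exponent: x * b_0 * b_1 * ...
    out = []
    for Y in hop_pubkeys:
        s = pow(Y, exp, P)    # s_i = Y_i^(x * b_0 * ... * b_{i-1})
        out.append((a, s))
        b = blind(a, s)
        a = pow(a, b, P)      # a_{i+1} = (a_i)^(b_i): re-randomize
        exp = (exp * b) % (P - 1)
    return out

# Sanity check: each hop, knowing only its private key y_i and the a_i it
# receives, derives the same s_i via (a_i)^(y_i).
ys = [1234567, 7654321, 1111111]   # Carol, Dave, Bob (toy keys)
pairs = derive_hop_secrets(987654321, [pow(G, y, P) for y in ys])
assert all(pow(a, y, P) == s for (a, s), y in zip(pairs, ys))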

The formula for computing the size of a mix-header for a given hop count and
security parameter is (ignoring the final message size):
* p + (2r + 2)s
* p = pub key size in bytes (the single group element used for DH)
* r = max number of hops
* s = symmetric key size (in bytes)

So, for 20 hops the size is computed as (with a symmetric key size of 16 bytes):
* 33 + (2*20 + 2) * 16 = 705 bytes!

The current proposal's 3840 bytes is 445% larger than this.

The original version of Sphinx was designed with anonymous mailing in mind.
So the final mix-header to the destination node includes a final destination
email-address (k bits), and an identifier (k bits) used to generate reply
messages back to the sender. If we remove these from the mix-header, we save
2k bits, arriving at a new formula for the mix-header size:
* p + 2rs

So with 20 hops the size is reduced to:
* 33 + (2*20*16) = 673 bytes

With this size reduction, the base64 encoding of a mix-header for two hops
can fit entirely into a tweet!
* 33 + (2*2*16) = 97 bytes

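Both variants are easy to sanity check; here's a tiny Python helper (the
function name and flag are mine, purely for illustration):

def mix_header_size(r, s=16, p=33, with_dest_fields=True):
    """p + (2r + 2)s with the destination/identifier fields, p + 2rs without."""
    return p + ((2 * r + 2) * s if with_dest_fields else 2 * r * s)

assert mix_header_size(20) == 705
assert mix_header_size(20, with_dest_fields=False) == 673
assert mix_header_size(2, with_dest_fields=False) == 97  # base64: 132 chars
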
However, it occurred to me that the final destination address may be useful in
the context of a more hub-and-spoke-y network. In this scheme the payment would
be routed to a "shared node", who would then use the destination address to
demultiplex the payment to one of its connected links in a "host:port" fashion.

Finally, Sphinx has built-in protection against replay attacks, meaning an
adversary is unable to force a node to process a previously processed mix-header.
It achieves this protection by having each node remember every derived shared
secret it has ever come across. However, the paper leaves unspecified how far
into the past a node must recall, leading to potentially large storage overhead.
It's worth noting that the paper recommends frequent key rotation, during which
this table may be safely cleared. Additionally, nodes may also employ a decaying
bloom-filter/hash-set [6] in between key rotations to create an upper bound on
the storage overhead.

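As a minimal sketch of that bookkeeping (assuming nodes tag packets by a hash
of the derived shared secret; a production node might swap the plain set for
the decaying bloom filter mentioned above):

import hashlib

class ReplayCache:
    """Tracks shared secrets seen since the last key rotation."""

    def __init__(self):
        self.seen = set()

    def check_and_store(self, shared_secret: bytes) -> bool:
        """Return False if this mix-header was already processed (a replay)."""
        tag = hashlib.sha256(b"replay-tag" + shared_secret).digest()
        if tag in self.seen:
            return False
        self.seen.add(tag)
        return True

    def on_key_rotation(self):
        # Old secrets can't be re-derived after a rotation, so the table
        # may be safely cleared, bounding the storage overhead.
        self.seen.clear()
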
With that said, Sphinx is much more compact than the current proposal, carries
a formal proof of security, and specifies replay protection.

It was recently brought up in [7] that we could allow nodes to use onion routing
within Lightning as an end-to-end messaging interface. This is where HORNET comes
in. One could use Sphinx's Single-Use-Reply-Blocks to allow the recipient to
communicate with the sender after a dummy onion circuit has been set up.
However, this introduces some performance considerations, since Sphinx requires
each hop to perform 2 asymmetric crypto operations (shared secret derivation +
group element blinding) before forwarding. HORNET gets around this performance
issue by first using Sphinx to set up a duplex onion circuit between sender and
receiver, which, once established, only requires nodes to perform purely
symmetric crypto operations before forwarding.

Continuing, both Sphinx and the current proposal only provide sender anonymity,
meaning that only the source maintains unlinkability; the source still learns the
destination's location. The addition of HORNET allows both sides to retain full
unlinkability. HORNET includes an optional, built-in rendezvous protocol, which
is somewhat similar to the way Hidden Services in Tor work.

With that said, what follows is a brief high-level overview of the HORNET
onion-routing protocol (focusing on the initial set up, instead of data
transmission).

Unlike Sphinx, which is essentially a "fire-and-forget" onion routing protocol
from the view of the sender, HORNET requires a cumulative circuit round-trip
as a set up phase before messaging can begin. Once completed, this set up phase
creates a duplex onion circuit between the sender and destination, allowing a
level of throughput approaching 93 Gb/s, as reported in the paper's
experimentation section.

The primary addition to Sphinx's mix-header format is something they've dubbed
the Anonymous Header (AHDR). Instead of requiring each node in the mix-net to
store per-session state (which would prohibit scaling to large networks), all
session state is instead stored within the onion packet itself. Therefore, each
node only needs to maintain a local secret. A full AHDR contains a series of
Forwarding Segments (FSes), one per hop, each allowing a node to decrypt its
per-hop onion, revealing the next hop in the path. Each FS contains: the next
hop in the route, a TTL value, and a derived shared secret. A full AHDR has the
following format:
* (FS, MAC, Onion wrapped FSes)

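In code, the layout described above might look like this (field names are my
own shorthand, not the paper's wire format; on the wire each FS is encrypted
under the processing node's local secret SV_i):

from dataclasses import dataclass

@dataclass
class ForwardingSegment:
    next_hop: bytes       # ID of the next node in the route
    expiry: int           # TTL, after which the packet is dropped
    shared_secret: bytes  # session key this hop shares with the source

@dataclass
class AHDR:
    fs: bytes     # the current hop's FS, encrypted under its local secret
    mac: bytes    # per-hop MAC over the header
    onion: bytes  # onion-wrapped FSes for the remaining hops
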
First, let's go over how a duplex onion circuit is achieved, maintaining sender
unlinkability (once again, using multiplicative notation):
* Sender Set Up:
  * The sender obtains the current keys for each hop in the path, constructing
    a forward path, and a backwards path (the reverse of the forward path).
  * The sender then generates a per-session ephemeral DH key-pair: (x, g^x)
  * Using the Sphinx protocol described above, the sender then generates two
    Sphinx mix-headers: one for the forward path, and another for the backwards
    path. At this point, the source has derived shared secrets for each node
    along both paths.
  * The message payload (SP) of the forward Sphinx mix-header (encrypted to the
    destination) is the backwards Sphinx mix-header, which allows the
    destination to reply to the sender.
  * The Sphinx mix-header (SHDR) is then extended with two additions:
    * An empty list of FSes (P_fsf), which each intermediate node can populate
      without learning the total path length, nor its position within the
      route.
    * The Sphinx per-hop MAC is also extended over the HORNET Common Header
      (CHDR): (packet type, hops, TTL)
  * The sender then sends this set up packet to the first node in the route:
    (CHDR, SHDR, SP, P_fsf)
* Intermediate Node Set Up:
  * Upon receipt of the Sphinx packet, each node processes it as normal
    (according to Sphinx): deriving the shared secret, recovering the next
    hop's ID, and re-blinding the group element.
  * Next, the node verifies the next-hop is correct, and that the packet
    hasn't yet expired according to its TTL.
  * In order to provide forward secrecy for the messaging phase, the node then
    derives a new DH key-pair (y, g^y), and then derives a new shared secret to
    be included within the FS: (a_i)^y
  * Using its local secret (SV_i), the node creates an FS that only it
    can decrypt, adding this to the FS payload. This payload entry additionally
    includes the new ephemeral public key (g^y), allowing the source to derive
    the shared secret which will be used for the FS's MAC (and also when
    transmitting data).
  * Finally, the node re-assembles the packet (CHDR', SHDR', SP, P_fsf'), and
    forwards it to the next hop.
* Destination Set Up:
  * The destination initially processes the packet identically to the
    intermediate nodes described above.
  * It then decrypts the Sphinx payload, uncovering the backwards SHDR. The
    node then creates a new payload, containing each of the completed FSes,
    destined for the source.
  * At this point, the destination has no knowledge of any of the derived
    shared secrets, other than the one it has derived with the source.
  * Next, it creates a new FS list P_fsb, to collect FSes along the backwards
    path.
  * Finally, it creates the backwards set up packet (CHDR, SHDR, SP_b, P_fsb),
    and sends it to the first node on the backwards path.
  * Each intermediate node then processes the backwards packet just as before.
* Data Transfer (brief overview, consult the paper for proper details; a sketch
  of the per-hop processing follows this list):
  * The source node recovers the Sphinx payload SP_b, recovering the
    forwarding segments for the forward (P_fsf) and backwards (P_fsb) paths.
  * At this point, set up has been completed!
  * The source node then constructs two initial AHDRs: one so it can send
    messages to the destination via the circuit, and another which will
    allow the destination to reply to the source.
  * After the first message from S -> D, D obtains the backwards AHDR,
    allowing it to reply without knowing the actual route.

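Here's a minimal sketch of that per-hop data-phase processing, to show that it
is purely symmetric. The primitives (HMAC-SHA256 as both MAC and keystream PRF)
and the FS layout (16-byte next hop, 8-byte TTL, 32-byte shared secret) are my
own illustrative assumptions, not HORNET's actual wire format, and the
re-padding that keeps the header length invariant is omitted for brevity:

import hashlib
import hmac
import struct
import time

def keystream(key: bytes, label: bytes, n: int) -> bytes:
    out = b""
    ctr = 0
    while len(out) < n:
        out += hmac.new(key, label + struct.pack(">I", ctr),
                        hashlib.sha256).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

FS_LEN, MAC_LEN = 56, 16  # 16B next hop + 8B TTL + 32B shared secret

def process_hop(sv: bytes, fs_enc: bytes, mac: bytes, onion: bytes):
    """One hop of AHDR processing: decrypt our FS with the local secret
    SV, check TTL and MAC, then peel one onion layer."""
    fs = xor(fs_enc, keystream(sv, b"fs", FS_LEN))
    next_hop = fs[:16]
    ttl, = struct.unpack(">Q", fs[16:24])
    s = fs[24:56]                 # shared secret placed during set up
    if ttl < time.time():
        raise ValueError("packet expired (TTL)")
    expected = hmac.new(s, onion, hashlib.sha256).digest()[:MAC_LEN]
    if not hmac.compare_digest(mac, expected):
        raise ValueError("invalid per-hop MAC")
    # Peel one layer: the next hop's (FS, MAC) sit at the front of the onion.
    inner = xor(onion, keystream(s, b"onion", len(onion)))
    fs_next = inner[:FS_LEN]
    mac_next = inner[FS_LEN:FS_LEN + MAC_LEN]
    onion_next = inner[FS_LEN + MAC_LEN:]
    return next_hop, (fs_next, mac_next, onion_next)
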
The following modifications are necessary to achieve sender-receiver
unlinkability as described earlier. The modification is relatively
straightforward: it involves nested AHDRs. Let's say Alice wants to send a
message (or a payment in our case) to Bob while maintaining full unlinkability.
Bob will first need to select a rendezvous point (RP), let's call him Rodriguez.
Bob then executes the set up protocol detailed above between himself and
Rodriguez. As a result, Bob obtains an AHDR allowing Rodriguez to route messages
(via some specified path) to Bob (R -> B). Bob then publishes this AHDR_rb to
some "public bulletin board", or, in the case that Alice has a hidden service
set up, directly to Alice. Alice then obtains the AHDR_rb, routing as normal to
Rodriguez, but includes AHDR_rb nested within the regular AHDR_ar
(Alice -> Rodriguez). Once this mix-header reaches Rodriguez, he recovers the
nested AHDR, allowing him to complete the route to Bob. Additionally, it's
possible for Bob to specify several rendezvous points, allowing Alice to select
the closest/cheapest route to her. As a side effect, the usage of this nested
AHDR doubles the bandwidth required for each node to forward along the path.

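A toy, runnable illustration of just the nesting (an XOR keystream and JSON
blobs stand in for real AHDR construction; in actual HORNET the nested header
is itself onion-encrypted, so Rodriguez learns only his next hop):

import hashlib
import hmac
import json

def xor_seal(key: bytes, data: bytes) -> bytes:
    # Symmetric XOR keystream: sealing and opening are the same operation.
    ks = b""
    while len(ks) < len(data):
        ks += hmac.new(key, len(ks).to_bytes(4, "big"), hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(data, ks))

k_rodriguez = hashlib.sha256(b"toy key shared with Rodriguez").digest()

# Published by Bob: an opaque header routing R -> B.
ahdr_rb = json.dumps({"route": ["rodriguez", "erin", "bob"]}).encode()

# Built by Alice: her header to Rodriguez, with AHDR_rb nested as payload.
ahdr_ar = xor_seal(k_rodriguez, json.dumps(
    {"route": ["carol", "dave", "rodriguez"], "nested": ahdr_rb.hex()}).encode())

# Rodriguez peels his layer, recovering the nested header to forward along.
inner = json.loads(xor_seal(k_rodriguez, ahdr_ar))
assert bytes.fromhex(inner["nested"]) == ahdr_rb
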
With this duplex onion route set up (with or without the RP), Alice can send a
message to Bob, asking for R (along with other payment details), then propagate
HTLCs along the full path in order to pay Bob. In order to mitigate the
potential DoS attacks introduced by adding a message layer for control signals,
we can either introduce a payment system (as suggested in [8]), or a required
Hashcash challenge (avoiding SHA-256, using either SHA-3 or BLAKE2). However,
seeing as we'd like to maintain the responsiveness of the network, I'm
personally leaning towards payments (if we do in fact deem in-network messaging
attractive).

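For reference, the Hashcash option is only a few lines. Here's a sketch using
BLAKE2b (one of the two functions suggested above); the difficulty and message
framing are placeholder choices of mine:

import hashlib
import os

DIFFICULTY_BITS = 20  # placeholder; would be tuned to network conditions

def pow_valid(msg: bytes, nonce: bytes) -> bool:
    # Valid iff the top DIFFICULTY_BITS of the 512-bit digest are zero.
    digest = hashlib.blake2b(msg + nonce).digest()
    return int.from_bytes(digest, "big") >> (512 - DIFFICULTY_BITS) == 0

def pow_solve(msg: bytes) -> bytes:
    # Expected ~2^20 attempts at this difficulty.
    while True:
        nonce = os.urandom(8)
        if pow_valid(msg, nonce):
            return nonce
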
Okay, so that concludes my first post on the mailing list! At this point,
y'all may be wondering where the implementation I mentioned above is.
Well, so far I have a working implementation of Sphinx by itself, not yet
integrating the aspects of HORNET, but it's not yet publicly released.

We (me+Tadge+Joseph) are planning on publicly releasing our in-progress
implementation of Lightning sometime near the end of this month (December) :).

1. http://www.cypherpunks.ca/~iang/pubs/Sphinx_Oakland09.pdf
2. http://arxiv.org/pdf/1507.05724v1.pdf (HORNET w/ forward secrecy)
3. http://www.scion-architecture.net/pdf/2015-HORNET.pdf
4. https://eprint.iacr.org/2000/067.pdf
5. http://cs.brown.edu/~anna/papers/cl05-full.pdf
6. https://moderncrypto.org/mail-archive/messaging/2015/001906.html
7. http://lists.linuxfoundation.org/pipermail/lightning-dev/2015-December/000369.html
8. http://lists.linuxfoundation.org/pipermail/lightning-dev/2015-December/000371.html