Third Satoshi Email

Mar 15th, 2018 (edited)
  1. From: Mike Hearn <mike@plan99.net>
  2. Date: Mon, Dec 27, 2010 at 8:21 PM
  3. To: Satoshi Nakamoto <satoshin@gmx.com>
  4.  
  5.  
  6. Happy Christmas Satoshi, assuming you celebrate it wherever you are in
  7. the world :-)
  8.  
  9. I have been working on a Java implementation of the simplified payment
  10. verification, with an eye to building a client that runs on Android
  11. phones. So I've been thinking a lot about storage requirements and the
  12. scalability of BitCoin, which led to some questions that the paper did
  13. not answer (maybe there could be a new version of the paper at some
  14. point, as I think aspects of it are now out of date).
  15.  
  16. Specifically, BitCoin has a variety of magic numbers and neither the
  17. code nor the paper explain where they came from. For example, the fact
  18. that inflation ceases when 21 million coins have been issued. This
  19. number must have been arrived at somehow, but I can't see how.
  20.  
  21. Another is the 10 minute block target. I understand this was chosen to
  22. allow transactions to propagate through the network. However existing
  23. large P2P networks like BGP can propagate new data worldwide in <1
  24. minute.
  25.  
  26. The final number I'm interested in is the 500kb limit on block sizes.
  27. According to Wikipedia, Visa alone processed 62 billion transactions
  28. in 2009. Dividing through we get an average of 2000 transactions per
  29. second, so peak rate is probably around double that at 4000
  30. transactions/sec. With a ten minute block target, at peak a block
  31. might need to contain 2.4 million transactions, which just won't fit
  32. into 500kb. Is this 500kb a temporary limitation that will be slowly
  33. removed over time from the official client or something more
  34. fundamental?
  35.  
  36. ----------
  37. From: Satoshi Nakamoto <satoshin@gmx.com>
  38. Date: Wed, Dec 29, 2010 at 10:42 PM
  39. To: Mike Hearn <mike@plan99.net>
  40.  
  41.  
  42. I have been working on a Java implementation of the simplified payment
  43. verification, with an eye to building a client that runs on Android
  44. phones. So I've been thinking a lot about storage requirements and the
  45. scalability of BitCoin, which led to some questions that the paper did
  46. not answer (maybe there could be a new version of the paper at some
  47. point, as I think aspects of it are now out of date).
  48.  
  49.  
  50. The simplified payment verification in the paper imagined you would receive transactions directly, as with sending to an IP address, which nobody uses, or a node would index all transactions by public key and you could download them like downloading mail from a mail server.
  51.  
  52. Instead, I think client-only nodes should receive full blocks so they can scan them for their own transactions. They don't need to store them or index them. For the initial download, they only need to download headers, since there couldn't be any payments before the first time the program was run (a header download command was added in 0.3.18). From then on, they download full blocks (but only store the headers).
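
To make the header-only idea above concrete, here is a minimal editorial sketch (not part of the correspondence or the patch below): the client keeps one entry per block holding just the 80-byte header fields plus its hash, and accepts a new header only if it links to the current tip. The names HeaderEntry and AcceptHeader are hypothetical, and the hash is assumed to be computed elsewhere (double-SHA256 of the serialized header).

    #include <array>
    #include <cstdint>
    #include <vector>

    using uint256 = std::array<uint8_t, 32>;

    struct HeaderEntry
    {
        int32_t  nVersion;
        uint256  hashPrevBlock;
        uint256  hashMerkleRoot;
        uint32_t nTime;
        uint32_t nBits;
        uint32_t nNonce;
        uint256  hash;        // double-SHA256 of the 80 serialized header bytes, filled in by the caller
    };

    // Append a header if it extends the current best header chain.
    bool AcceptHeader(std::vector<HeaderEntry>& chain, const HeaderEntry& entry)
    {
        if (!chain.empty() && entry.hashPrevBlock != chain.back().hash)
            return false;     // does not connect to our tip
        // A real client would also verify the proof of work against nBits here.
        chain.push_back(entry);
        return true;
    }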
  53.  
  54. Code for client-only mode is mostly implemented. There's a feature branch on github with it, also I'm attaching the patch to this message.
  55.  
  56. Here's some more about it:
  57.  
  58. "Here's my client-mode implementation so far. Client-only mode only records block headers and doesn't use the tx index. It can't generate, but it can still send and receive transactions. It's not fully finished for use by end-users, but it doesn't matter because it's a complete no-op if fClient is not enabled. At this point it's mainly documentation showing the cut-lines for client-only re-implementers.
  59.  
  60. With fClient=true, I've only tested the header-only initial download.
  61.  
  62. A little background. CBlockIndex contains all the information of the block header, so to operate with headers only, I just maintain the CBlockIndex structure as usual. The nFile/nBlockPos are null, since the full block is not recorded on disk.
  63.  
  64. The code to gracefully switch between client-mode on/off without deleting blk*.dat in between is not implemented yet. It would mostly be a matter of having non-client LoadBlockIndex ignore block index entries with null block pos. That would make it re-download those as full blocks. Switching back to client-mode is no problem, it doesn't mind if the full blocks are there.
  65.  
  66. If the initial block download becomes too long, we'll want client mode as an option so new users can get running quickly. With graceful switch-off of client mode, they can later turn off client mode and have it download the full blocks if they want to start generating. They should rather just use a getwork miner to join a pool instead.
  67.  
  68. Client-only re-implementations would not need to implement EvalScript at all, or at most just implement the five ops used by the standard transaction templates."
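
As a sketch of the template matching this implies (editorial illustration, not code from the client; MatchPayToPubKeyHash is a hypothetical name), a client-only node can recognise the standard pay-to-pubkey-hash scriptPubKey by its fixed byte pattern and extract the 20-byte key hash without ever running a script interpreter:

    #include <algorithm>
    #include <array>
    #include <cstdint>
    #include <optional>
    #include <vector>

    // OP_DUP OP_HASH160 <20-byte key hash> OP_EQUALVERIFY OP_CHECKSIG
    std::optional<std::array<uint8_t, 20>> MatchPayToPubKeyHash(const std::vector<uint8_t>& script)
    {
        if (script.size() != 25)
            return std::nullopt;
        if (script[0] != 0x76 || script[1] != 0xa9 || script[2] != 0x14 ||
            script[23] != 0x88 || script[24] != 0xac)
            return std::nullopt;
        std::array<uint8_t, 20> hash160;
        std::copy(script.begin() + 3, script.begin() + 23, hash160.begin());
        return hash160;       // compare against the hashes of our own public keys
    }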
  69.  
  70.  
  71. Specifically, BitCoin has a variety of magic numbers and neither the
  72. code nor the paper explain where they came from. For example, the fact
  73. that inflation ceases when 21 million coins have been issued. This
  74. number must have been arrived at somehow, but I can't see how.
  75.  
  76.  
  77. Educated guess, and the maths work out to round numbers. I wanted something that would be not too low if it was very popular and not too high if it wasn't.
  78.  
  79.  
  80. Another is the 10 minute block target. I understand this was chosen to
  81. allow transactions to propagate through the network. However existing
  82. large P2P networks like BGP can propagate new data worldwide in <1
  83. minute.
  84.  
  85.  
  86. If propagation is 1 minute, then 10 minutes was a good guess. Then nodes are only losing 10% of their work (1 minute/10 minutes). If the CPU time wasted by latency was a more significant share, there may be weaknesses I haven't thought of. An attacker would not be affected by latency, since he's chaining his own blocks, so he would have an advantage. The chain would temporarily fork more often due to latency.
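
A quick editorial check of the 10% figure (a back-of-the-envelope sketch, not from the email): the fraction of honest work lost is roughly the propagation time divided by the block interval.

    #include <cstdio>

    int main()
    {
        const double propagation_minutes    = 1.0;    // assumed worldwide propagation time
        const double block_interval_minutes = 10.0;   // the block target
        const double wasted = propagation_minutes / block_interval_minutes;
        std::printf("fraction of honest work lost to latency: %.0f%%\n", wasted * 100.0);
        return 0;   // prints 10%
    }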
  87.  
  88.  
  89. The final number I'm interested in is the 500kb limit on block sizes.
  90. According to Wikipedia, Visa alone processed 62 billion transactions
  91. in 2009. Dividing through we get an average of 2000 transactions per
  92. second, so peak rate is probably around double that at 4000
  93. transactions/sec. With a ten minute block target, at peak a block
  94. might need to contain 2.4 million transactions, which just won't fit
  95. into 500kb. Is this 500kb a temporary limitation that will be slowly
  96. removed over time from the official client or something more
  97. fundamental?
  98.  
  99.  
  100. A higher limit can be phased in once we have actual use closer to the limit and make sure it's working OK.
  101.  
  102. Eventually when we have client-only implementations, the block chain size won't matter much. Until then, while all users still have to download the entire block chain to start, it's nice if we can keep it down to a reasonable size.
  103.  
  104. With very high transaction volume, network nodes would consolidate and there would be more pooled mining and GPU farms, and users would run client-only. With dev work on optimising and parallelising, it can keep scaling up.
  105.  
  106. Whatever the current capacity of the software is, it automatically grows at the rate of Moore's Law, about 60% per year.
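
A back-of-the-envelope sketch (editorial, separate from the patch below) of how a 60%-per-year growth rate compounds: under that assumption, a fixed limit set today corresponds to roughly two orders of magnitude more capacity within a decade.

    #include <cmath>
    #include <cstdio>

    int main()
    {
        const double annual_growth = 1.60;   // "about 60% per year"
        for (int years = 2; years <= 10; years += 2)
            std::printf("after %2d years: ~%.0fx capacity\n", years, std::pow(annual_growth, years));
        return 0;   // ends around 110x after 10 years
    }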
  107.  
  108.  
  109. diff -u old\db.cpp new\db.cpp
  110. --- old\db.cpp Sat Dec 18 18:35:59 2010
  111. +++ new\db.cpp Sun Dec 19 20:53:59 2010
  112. @@ -464,29 +464,32 @@
  113. ReadBestInvalidWork(bnBestInvalidWork);
  114.  
  115. // Verify blocks in the best chain
  116. - CBlockIndex* pindexFork = NULL;
  117. - for (CBlockIndex* pindex = pindexBest; pindex && pindex->pprev; pindex = pindex->pprev)
  118. + if (!fClient)
  119. {
  120. - if (pindex->nHeight < nBestHeight-2500 && !mapArgs.count("-checkblocks"))
  121. - break;
  122. - CBlock block;
  123. - if (!block.ReadFromDisk(pindex))
  124. - return error("LoadBlockIndex() : block.ReadFromDisk failed");
  125. - if (!block.CheckBlock())
  126. + CBlockIndex* pindexFork = NULL;
  127. + for (CBlockIndex* pindex = pindexBest; pindex && pindex->pprev; pindex = pindex->pprev)
  128. {
  129. - printf("LoadBlockIndex() : *** found bad block at %d, hash=%s\n", pindex->nHeight, pindex->GetBlockHash().ToString().c_str());
  130. - pindexFork = pindex->pprev;
  131. + if (pindex->nHeight < nBestHeight-2500 && !mapArgs.count("-checkblocks"))
  132. + break;
  133. + CBlock block;
  134. + if (!block.ReadFromDisk(pindex))
  135. + return error("LoadBlockIndex() : block.ReadFromDisk failed");
  136. + if (!block.CheckBlock())
  137. + {
  138. + printf("LoadBlockIndex() : *** found bad block at %d, hash=%s\n", pindex->nHeight, pindex->GetBlockHash().ToString().c_str());
  139. + pindexFork = pindex->pprev;
  140. + }
  141. + }
  142. + if (pindexFork)
  143. + {
  144. + // Reorg back to the fork
  145. + printf("LoadBlockIndex() : *** moving best chain pointer back to block %d\n", pindexFork->nHeight);
  146. + CBlock block;
  147. + if (!block.ReadFromDisk(pindexFork))
  148. + return error("LoadBlockIndex() : block.ReadFromDisk failed");
  149. + CTxDB txdb;
  150. + block.SetBestChain(txdb, pindexFork);
  151. }
  152. - }
  153. - if (pindexFork)
  154. - {
  155. - // Reorg back to the fork
  156. - printf("LoadBlockIndex() : *** moving best chain pointer back to block %d\n", pindexFork->nHeight);
  157. - CBlock block;
  158. - if (!block.ReadFromDisk(pindexFork))
  159. - return error("LoadBlockIndex() : block.ReadFromDisk failed");
  160. - CTxDB txdb;
  161. - block.SetBestChain(txdb, pindexFork);
  162. }
  163.  
  164. return true;
  165. diff -u old\main.cpp new\main.cpp
  166. --- old\main.cpp Sat Dec 18 18:35:59 2010
  167. +++ new\main.cpp Sun Dec 19 20:53:59 2010
  168. @@ -637,6 +637,9 @@
  169. if (!IsStandard())
  170. return error("AcceptToMemoryPool() : nonstandard transaction type");
  171.  
  172. + if (fClient)
  173. + return true;
  174. +
  175. // Do we already have it?
  176. uint256 hash = GetHash();
  177. CRITICAL_BLOCK(cs_mapTransactions)
  178. @@ -1308,23 +1311,26 @@
  179. if (!CheckBlock())
  180. return false;
  181.  
  182. - //// issue here: it doesn't know the version
  183. - unsigned int nTxPos = pindex->nBlockPos + ::GetSerializeSize(CBlock(), SER_DISK) - 1 + GetSizeOfCompactSize(vtx.size());
  184. -
  185. - map<uint256, CTxIndex> mapUnused;
  186. - int64 nFees = 0;
  187. - foreach(CTransaction& tx, vtx)
  188. + if (!fClient)
  189. {
  190. - CDiskTxPos posThisTx(pindex->nFile, pindex->nBlockPos, nTxPos);
  191. - nTxPos += ::GetSerializeSize(tx, SER_DISK);
  192. + //// issue here: it doesn't know the version
  193. + unsigned int nTxPos = pindex->nBlockPos + ::GetSerializeSize(CBlock(), SER_DISK) - 1 + GetSizeOfCompactSize(vtx.size());
  194. +
  195. + map<uint256, CTxIndex> mapUnused;
  196. + int64 nFees = 0;
  197. + foreach(CTransaction& tx, vtx)
  198. + {
  199. + CDiskTxPos posThisTx(pindex->nFile, pindex->nBlockPos, nTxPos);
  200. + nTxPos += ::GetSerializeSize(tx, SER_DISK);
  201.  
  202. - if (!tx.ConnectInputs(txdb, mapUnused, posThisTx, pindex, nFees, true, false))
  203. + if (!tx.ConnectInputs(txdb, mapUnused, posThisTx, pindex, nFees, true, false))
  204. + return false;
  205. + }
  206. +
  207. + if (vtx[0].GetValueOut() > GetBlockValue(pindex->nHeight, nFees))
  208. return false;
  209. }
  210.  
  211. - if (vtx[0].GetValueOut() > GetBlockValue(pindex->nHeight, nFees))
  212. - return false;
  213. -
  214. // Update block index on disk without changing it in memory.
  215. // The memory index structure will be changed after the db commits.
  216. if (pindex->pprev)
  217. @@ -1378,7 +1384,7 @@
  218. foreach(CBlockIndex* pindex, vDisconnect)
  219. {
  220. CBlock block;
  221. - if (!block.ReadFromDisk(pindex))
  222. + if (!block.ReadFromDisk(pindex, !fClient))
  223. return error("Reorganize() : ReadFromDisk for disconnect failed");
  224. if (!block.DisconnectBlock(txdb, pindex))
  225. return error("Reorganize() : DisconnectBlock failed");
  226. @@ -1395,7 +1401,7 @@
  227. {
  228. CBlockIndex* pindex = vConnect[i];
  229. CBlock block;
  230. - if (!block.ReadFromDisk(pindex))
  231. + if (!block.ReadFromDisk(pindex, !fClient))
  232. return error("Reorganize() : ReadFromDisk for connect failed");
  233. if (!block.ConnectBlock(txdb, pindex))
  234. {
  235. @@ -1526,7 +1532,7 @@
  236.  
  237. txdb.Close();
  238.  
  239. - if (pindexNew == pindexBest)
  240. + if (!fClient && pindexNew == pindexBest)
  241. {
  242. // Notify UI to display prev block's coinbase if it was ours
  243. static uint256 hashPrevBestCoinBase;
  244. @@ -1547,10 +1553,6 @@
  245. // These are checks that are independent of context
  246. // that can be verified before saving an orphan block.
  247.  
  248. - // Size limits
  249. - if (vtx.empty() || vtx.size() > MAX_BLOCK_SIZE || ::GetSerializeSize(*this, SER_NETWORK) > MAX_BLOCK_SIZE)
  250. - return error("CheckBlock() : size limits failed");
  251. -
  252. // Check proof of work matches claimed amount
  253. if (!CheckProofOfWork(GetHash(), nBits))
  254. return error("CheckBlock() : proof of work failed");
  255. @@ -1559,6 +1561,13 @@
  256. if (GetBlockTime() > GetAdjustedTime() + 2 * 60 * 60)
  257. return error("CheckBlock() : block timestamp too far in the future");
  258.  
  259. + if (fClient && vtx.empty())
  260. + return true;
  261. +
  262. + // Size limits
  263. + if (vtx.empty() || vtx.size() > MAX_BLOCK_SIZE || ::GetSerializeSize(*this, SER_NETWORK) > MAX_BLOCK_SIZE)
  264. + return error("CheckBlock() : size limits failed");
  265. +
  266. // First transaction must be coinbase, the rest must not be
  267. if (vtx.empty() || !vtx[0].IsCoinBase())
  268. return error("CheckBlock() : first tx is not coinbase");
  269. @@ -1623,13 +1632,14 @@
  270. return error("AcceptBlock() : out of disk space");
  271. unsigned int nFile = -1;
  272. unsigned int nBlockPos = 0;
  273. - if (!WriteToDisk(nFile, nBlockPos))
  274. - return error("AcceptBlock() : WriteToDisk failed");
  275. + if (!fClient)
  276. + if (!WriteToDisk(nFile, nBlockPos))
  277. + return error("AcceptBlock() : WriteToDisk failed");
  278. if (!AddToBlockIndex(nFile, nBlockPos))
  279. return error("AcceptBlock() : AddToBlockIndex failed");
  280.  
  281. // Relay inventory, but don't relay old inventory during initial block download
  282. - if (hashBestChain == hash)
  283. + if (!fClient && hashBestChain == hash)
  284. CRITICAL_BLOCK(cs_vNodes)
  285. foreach(CNode* pnode, vNodes)
  286. if (nBestHeight > (pnode->nStartingHeight != -1 ? pnode->nStartingHeight - 2000 : 55000))
  287. @@ -2405,6 +2415,8 @@
  288. {
  289. if (fShutdown)
  290. return true;
  291. + if (fClient && inv.type == MSG_TX)
  292. + continue;
  293. pfrom->AddInventoryKnown(inv);
  294.  
  295. bool fAlreadyHave = AlreadyHave(txdb, inv);
  296. @@ -2441,6 +2453,9 @@
  297.  
  298. if (inv.type == MSG_BLOCK)
  299. {
  300. + if (fClient)
  301. + return true;
  302. +
  303. // Send block from disk
  304. map<uint256, CBlockIndex*>::iterator mi = mapBlockIndex.find(inv.hash);
  305. if (mi != mapBlockIndex.end())
  306. @@ -2486,6 +2501,8 @@
  307.  
  308. else if (strCommand == "getblocks")
  309. {
  310. + if (fClient)
  311. + return true;
  312. CBlockLocator locator;
  313. uint256 hashStop;
  314. vRecv >> locator >> hashStop;
  315. @@ -2556,6 +2573,8 @@
  316.  
  317. else if (strCommand == "tx")
  318. {
  319. + if (fClient)
  320. + return true;
  321. vector<uint256> vWorkQueue;
  322. CDataStream vMsg(vRecv);
  323. CTransaction tx;
  324. @@ -2620,6 +2639,33 @@
  325.  
  326. if (ProcessBlock(pfrom, &block))
  327. mapAlreadyAskedFor.erase(inv);
  328. + }
  329. +
  330. +
  331. + else if (strCommand == "headers")
  332. + {
  333. + if (!fClient)
  334. + return true;
  335. + vector<CBlock> vHeaders;
  336. + vRecv >> vHeaders;
  337. +
  338. + uint256 hashBestBefore = hashBestChain;
  339. + foreach(CBlock& block, vHeaders)
  340. + {
  341. + block.vtx.clear();
  342. +
  343. + printf("received header %s\n", block.GetHash().ToString().substr(0,20).c_str());
  344. +
  345. + CInv inv(MSG_BLOCK, block.GetHash());
  346. + pfrom->AddInventoryKnown(inv);
  347. +
  348. + if (ProcessBlock(pfrom, &block))
  349. + mapAlreadyAskedFor.erase(inv);
  350. + }
  351. +
  352. + // Request next batch
  353. + if (hashBestChain != hashBestBefore)
  354. + pfrom->PushGetBlocks(pindexBest, uint256(0));
  355. }
  356.  
  357.  
  358. diff -u old\main.h new\main.h
  359. --- old\main.h Sat Dec 18 18:35:59 2010
  360. +++ new\main.h Sun Dec 19 20:53:59 2010
  361. @@ -619,6 +619,8 @@
  362.  
  363. bool ReadFromDisk(CDiskTxPos pos, FILE** pfileRet=NULL)
  364. {
  365. + assert(!fClient);
  366. +
  367. CAutoFile filein = OpenBlockFile(pos.nFile, 0, pfileRet ? "rb+" : "rb");
  368. if (!filein)
  369. return error("CTransaction::ReadFromDisk() : OpenBlockFile failed");
  370. @@ -1174,6 +1176,7 @@
  371.  
  372. bool ReadFromDisk(unsigned int nFile, unsigned int nBlockPos, bool fReadTransactions=true)
  373. {
  374. + assert(!fClient);
  375. SetNull();
  376.  
  377. // Open history file to read
  378. @@ -1231,7 +1234,7 @@
  379.  
  380.  
  381. //
  382. -// The block chain is a tree shaped structure starting with the
  383. +// The block index is a tree shaped structure starting with the
  384. // genesis block at the root, with each block potentially having multiple
  385. // candidates to be the next block. pprev and pnext link a path through the
  386. // main/longest chain. A blockindex may have multiple pprev pointing back
  387. diff -u old\net.cpp new\net.cpp
  388. --- old\net.cpp Wed Dec 15 22:33:09 2010
  389. +++ new\net.cpp Sun Dec 19 21:51:27 2010
  390. @@ -51,7 +51,15 @@
  391. pindexLastGetBlocksBegin = pindexBegin;
  392. hashLastGetBlocksEnd = hashEnd;
  393.  
  394. - PushMessage("getblocks", CBlockLocator(pindexBegin), hashEnd);
  395. + /// Client todo: After the initial block header download, start using getblocks
  396. + /// here instead of getheaders. For blocks generated after the first time the
  397. + /// program was run, we need to download full blocks to watch for received
  398. + /// transactions in them. We're able to download headers only for blocks
  399. + /// generated before we ever ran because they can't contain txes for us.
  400. + if (::fClient)
  401. + PushMessage("getheaders", CBlockLocator(pindexBegin), hashEnd);
  402. + else
  403. + PushMessage("getblocks", CBlockLocator(pindexBegin), hashEnd);
  404. }
  405.  
  406.  
  407.  
  408.  
  409. ----------
  410. From: Mike Hearn <mike@plan99.net>
  411. Date: Thu, Dec 30, 2010 at 12:27 AM
  412. To: Satoshi Nakamoto <satoshin@gmx.com>
  413.  
  414.  
  415. Thanks for the info.
  416.  
  417. I reached the same conclusions about client only nodes and this is
  418. what I've been implementing. I'm nearly there ..... I have block chain
  419. download, parsing and verification of the blocks/transactions done,
  420. with creation of spend transactions almost done.
  421.  
  422. v1 will basically do as you propose, with the possible optimization of
  423. storing only the blocks needed to form the block locator (with the
  424. exponential thinning). As Android provides local storage that is
  425. private to the app, you don't need to store the entire block chain to
  426. be able to accept new blocks ... just enough to ensure you can always
  427. stay on the longest chain.
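
For readers unfamiliar with the "exponential thinning" mentioned here, a minimal sketch (hypothetical helper, not Mike's code): keep roughly the last ten block heights densely, then double the step back toward the genesis block, so only O(log n) blocks need to be retained to build a locator.

    #include <vector>

    std::vector<int> LocatorHeights(int nTipHeight)
    {
        std::vector<int> heights;
        int nStep = 1;
        for (int h = nTipHeight; h > 0; h -= nStep)
        {
            heights.push_back(h);
            if (heights.size() >= 10)
                nStep *= 2;          // thin out exponentially once far from the tip
        }
        heights.push_back(0);        // always include the genesis block
        return heights;
    }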
  428.  
  429. By the way, your code is easy to read and has been an invaluable
  430. reference. So thanks for that.
  431.  
  432. In v2 I'm thinking of showing transactions before they are integrated
  433. into the block chain by running secure/locked down relay nodes that
  434. send messages to the phones when a transaction is accepted into the
  435. memory pool. Android provides a secure, low power back channel to
  436. every phone. Messages are stored server side if the device is offline
  437. and apps are automatically started on the phone to handle incoming
  438. messages.
  439.  
  440. So as long as the relay nodes are unhacked, this system should give
  441. enough trust that low value transactions can be shown in the UI
  442. immediately. It introduces some centralization/single points of
  443. failure, but if the relay mechanism dies or is hacked, the damage only
  444. lasts for 10 minutes until the new blocks are downloaded.
  445.  
  446. > Client-only re-implementations would not need to implement EvalScript at
  447. > all, or at most just implement the five ops used by the standard transaction
  448. > templates."
  449.  
  450. Indeed, there's no point in client-only implementations implementing
  451. EvalScript because they can't verify transactions aren't being double
  452. spent without storing and indexing the entire block chain. My code
  453. parses the scripts and then relies on them having a standard
  454. structure, but doesn't actually run them.
  455.  
  456. > Educated guess, and the maths work out to round numbers. I wanted something
  457. > that would be not too low if it was very popular and not too high if it
  458. > wasn't.
  459.  
  460. It'd be interesting to see the working for this. In some sense the
  461. number of coins is arbitrary as the nanocoin representation means the
  462. issuance is so huge it's practically infinite.
  463.  
  464. > A higher limit can be phased in once we have actual use closer to the limit
  465. > and make sure it's working OK.
  466.  
  467. It'd be worth implementing some kind of more robust auto-update
  468. mechanism, or a schedule for the phase-in of this, if only because
  469. when people evaluate "is BitCoin worth my time and effort" a solid
  470. plan for scaling up is good to have written down.
  471.  
  472. I'm not worried about the physical capabilities of the hardware, but
  473. more about protocol ossification as the app is reimplemented and nodes which
  474. don't auto-update themselves increase in number. Client only
  475. reimplementations pose no problems of course, but other systems like
  476. SMTP have proven impossible to globally upgrade despite having
  477. extension mechanisms built in .... just too many implementations and
  478. too many installations.
  479.  
  480. ----------
  481. From: Satoshi Nakamoto <satoshin@gmx.com>
  482. Date: Fri, Jan 7, 2011 at 1:00 PM
  483. To: Mike Hearn <mike@plan99.net>
  484.  
  485.  
  486. I reached the same conclusions about client only nodes and this is
  487. what I've been implementing. I'm nearly there ..... I have block chain
  488. download, parsing and verification of the blocks/transactions done,
  489. with creation of spend transactions almost done.
  490.  
  491.  
  492. That's great! The first client-only implementation will really start to move things to the next step. Is it going to be open source, or Google proprietary?
  493.  
  494. ----------
  495. From: Mike Hearn <mike@plan99.net>
  496. Date: Fri, Jan 7, 2011 at 1:24 PM
  497. To: Satoshi Nakamoto <satoshin@gmx.com>
  498.  
  499.  
  500. > That's great! The first client-only implementation will really start to
  501. > move things to the next step. Is it going to be open source, or Google
  502. > proprietary?
  503.  
  504. Open source. It has to be - I am developing it as a personal project
  505. in my spare time and Google's policy is that this is only allowed if
  506. you open source the results. But I would have done that anyway.
  507.  
  508. I managed to spend my first coins on the testnet with my app a few
  509. days ago, hopefully will get another chance to make progress this
  510. weekend. Probably will have something to show publicly sometime in
  511. Feb, touch wood.
  512.  
  513. ----------
  514. From: Satoshi Nakamoto <satoshin@gmx.com>
  515. Date: Mon, Jan 10, 2011 at 4:34 PM
  516. To: Mike Hearn <mike@plan99.net>
  517.  
  518.  
  519. Open source.
  520.  
  521.  
  522. Perfect. Once your code shows how to simplify it down, other authors can follow your lead. Client is a less daunting challenge than full implementation. If it's within reach of more developers, they'll come up with more polished UI and other things I didn't think of. I expect the original software will become the industrial old thing used by GPU farms and pool servers.
  523.  
  524. BTW, later a good feature for a client version is to keep your private keys encrypted and you give your password each time you send.
  525.  
  526.  
  527. I managed to spend my first coins on the testnet with my app a few
  528. days ago, hopefully will get another chance to make progress this
  529. weekend. Probably will have something to show publicly sometime in
  530. Feb, touch wood.
  531.  
  532.  
  533. Great, keep me updated.
  534.  
  535.  
  536. I wanted something
  537. that would be not too low if it was very popular and not too high if it
  538. wasn't.
  539.  
  540. It'd be interesting to see the working for this. In some sense the
  541. number of coins is arbitrary as the nanocoin representation means the
  542. issuance is so huge it's practically infinite.
  543.  
  544.  
  545. It works out to an even 10 minutes per block:
  546. 21000000 / (50 BTC * 24hrs * 365days * 4years * 2) = 5.99 blocks/hour
  547.  
  548. I fudged it to 364.58333 days/year. The halving of 50 BTC to 25 BTC is after 210000 blocks or around 3.9954 years, which is approximate anyway based on the retargeting mechanism's best effort.
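
Checking the arithmetic as an editorial sketch: 50 BTC per block for 210,000 blocks, with the subsidy halving each period, is a geometric series (50 + 25 + 12.5 + ... = 100 per block of the first period) that sums to 21 million, and the same numbers give the 5.99 blocks/hour figure above.

    #include <cstdio>

    int main()
    {
        const double blocks_per_hour   = 21000000.0 / (50 * 24 * 365 * 4 * 2);  // = 5.99...
        const double minutes_per_block = 60.0 / blocks_per_hour;                // ~10 minutes
        // 210,000 blocks per period, halvings summing to twice the first period's 50 BTC
        const double total_coins = 210000.0 * 50 * 2;
        std::printf("%.2f blocks/hour, %.1f minutes/block, %.0f total coins\n",
                    blocks_per_hour, minutes_per_block, total_coins);
        return 0;
    }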
  549.  
  550. I thought about 100 BTC and 42 million, but 42 million seemed high.
  551.  
  552. I wanted typical amounts to be in a familiar range. If you're tossing around 100000 units, it doesn't feel scarce. The brain is better able to work with numbers from 0.01 to 1000.
  553.  
  554. If it gets really big, the decimal can move two places and cents become the new coins.
  555.  
  556.  
  557. ----------
  558. From: Mike Hearn <mike@plan99.net>
  559. Date: Mon, Jan 10, 2011 at 4:48 PM
  560. To: Satoshi Nakamoto <satoshin@gmx.com>
  561.  
  562.  
  563. Ah, of course, that makes sense.
  564.  
  565. By the way, if you didn't see it already, there's a discussion on the security of secp256k1 on the forum:
  566.  
  567. http://www.bitcoin.org/smf/index.php?topic=2699.0
  568.  
  569. Hal (I presume this is Hal Finney) seems to think the curve is at higher risk of attack than random curves. I guess you chose secp256k1 for the mentioned performance improvement?
  570.  
  571. ----------
  572. From: Satoshi Nakamoto <satoshin@gmx.com>
  573. Date: Mon, Jan 10, 2011 at 8:47 PM
  574. To: Mike Hearn <mike@plan99.net>
  575.  
  576.  
  577. By the way, if you didn't see it already, there's a discussion on the security of secp256k1 on the forum:
  578.  
  579. http://www.bitcoin.org/smf/index.php?topic=2699.0
  580.  
  581. Hal (I presume this is Hal Finney)
  582.  
  583.  
  584. Yes, it's him. He was supportive on the Cryptography list and ran one of the first nodes.
  585.  
  586. seems to think the curve is at higher risk of attack than random curves. I guess you chose secp256k1 for the mentioned performance improvement?
  587.  
  588.  
  589. I must admit, this project was 2 years of development before release, and I could only spend so much time on each of the many issues. I found guidance on the recommended size for SHA and RSA, but nothing on ECDSA, which was relatively new. I took the recommended key size for RSA and converted to the equivalent key size for ECDSA, but then increased it so the whole app could be said to have 256-bit security. I didn't find anything to recommend a curve type so I just... picked one. Hopefully there is enough key size to make up for any deficiency.
  590.  
  591. At the time, I was concerned whether the bandwidth and storage sizes would be practical even with ECDSA. RSA's huge keys were out of the question. Storage and bandwidth seemed tighter back then. I felt the size was either only just becoming practical, or would be soon. When I presented it, I was surprised nobody else was concerned about size, though I was also surprised how many issues they argued, and more surprised that every single one was something I had thought of and solved.
  592.  
  593. As it turns out, ECDSA verification time may be the greater bottleneck. (In my tests, OpenSSL was taking 3.5 ms per ECDSA verify, or about 285 verifies per second.) Client versions bypass the problem.
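
A rough editorial sketch of what that measurement implies (assuming the quoted single-core figure): at 3.5 ms per verification, one core checks about 285 signatures per second, or on the order of 170,000 per 10-minute block interval.

    #include <cstdio>

    int main()
    {
        const double ms_per_verify      = 3.5;                        // figure quoted above
        const double verifies_per_sec   = 1000.0 / ms_per_verify;     // ~285
        const double per_block_interval = verifies_per_sec * 600.0;   // signatures per 10 minutes
        std::printf("%.0f verifies/sec, ~%.0f per 10-minute block\n",
                    verifies_per_sec, per_block_interval);
        return 0;
    }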
  594.  
  595. As things have evolved, the number of people who need to run full nodes is less than I originally imagined. The network would be fine with a small number of nodes if processing load becomes heavy.