Security of Software, Distribution Models: It's More Than Open vs Closed!
(originally 2014 on Schneier's blog; revised 2018)

I've noticed in recent debates a false dichotomy: you can have "open source" or proprietary, but not the benefits of both. Developers who think there are only two possibilities might miss opportunities. This is especially true for users or buyers who are concerned about source copying, the ability to repair things, or backdoors. The good news is that there are many forms of source distribution available. The trustworthiness of the review process also varies considerably. I'm going to briefly run through some of them here.

Source Availability Levels and Security Implications

Let's briefly look at levels of sharing for a given application's source.

1. Totally closed source application.

This is a black box. It might have defects, backdoors, etc. One must trust the provider or use various analysis/isolation mechanisms to reduce the trust placed in them. This is the most dangerous type of software if you don't trust the provider.

2. Application with some source available.

This is a black box except for certain components. This is common with cryptographic algorithms, protocols, etc. It allows you to review a subset of the implementation for unintentional flaws or backdoors *in that subset*. The same binary code should be in the application you receive. Backdooring and unintentional vulnerabilities are still possible in other parts of the application. These may bypass the vetted component(s) entirely.

3. Full source *and documentation + development tools* of the application are available.

This is a white box. The application's functionality can be inspected for accidental or deliberate flaws. The documentation helps readers understand the code, increasing review effectiveness. Having the tools (or compatible ones) available ensures a binary can be produced from the source code. If the tools & runtime are similarly open, then the whole thing can be vetted. Otherwise, a person is putting their trust in whatever layers are closed. They will have to vet those that are open or *trust 3rd parties* to do it for them. Note the similarity between 1-2 and 3 in trusting third parties.

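To make the "white box" property concrete, here's a minimal sketch (my illustration, not part of the original argument) of a check that only becomes possible at this level: rebuild the application from source and compare the result to the binary you were shipped. The build command and file paths are hypothetical placeholders, and a bit-for-bit match assumes the build is reproducible, which often takes extra effort (pinned toolchain, fixed timestamps, and so on).

    # Hypothetical sketch: rebuild from source, then compare against the shipped binary.
    import hashlib
    import subprocess

    def sha256_file(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Placeholder build step; substitute the project's real build procedure.
    subprocess.run(["make", "release"], check=True)

    built = sha256_file("build/app.bin")      # binary we just produced from source
    shipped = sha256_file("vendor/app.bin")   # binary the vendor distributed

    print("built:  ", built)
    print("shipped:", shipped)
    print("MATCH" if built == shipped else "MISMATCH: shipped binary was not built from this source")

If the digests match, the source you reviewed is what you're actually running; if they don't, you're back to trusting whatever produced the shipped binary.
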
Various source sharing arrangements and security implications

We can see that the full source must be vetted. Sharing it can happen in many ways. It doesn't have to be "give it to everybody" vs "give it to nobody." That's the false dichotomy I referred to. There are in fact plenty of options that go way back before "Open Source" became a widely known phrase. Surprisingly, there were even companies doing it for proprietary software. Here's a list of possibilities. It's also assumed that docs, compilers, etc. are shared with anyone who gets the source code.

1. No source sharing of privileged code.

The code is kept secret. Only binaries are released. The user must fully trust the developer of the code. Backdooring is much easier. The code might or might not get reviewed internally for general vulnerability reduction.

2. The source code is given to a security evaluator, who publishes a signed hash of each evaluated deliverable.

This was the first security evaluation model, especially in the higher Orange Book levels. An evaluator with full access to the code would determine a level of assurance to place in the code, enforce a minimum set of security standards, look for backdoors, and more. If you trust the evaluator, using the version they signed can increase confidence in the software. If you don't trust the evaluator, then the trust level is the same as in No. 1. The main drawbacks here are the extra cost of the product due to evaluation and the likelihood that the customer will always be at least one version behind.

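For the mechanics of "publishes a signed hash," here's a minimal sketch of what the customer-side check might look like. It's my illustration under assumptions: the evaluator publishes a SHA-256 digest plus an Ed25519 signature over it, you already hold their authentic public key out of band, and the 'cryptography' Python package is installed. Function and variable names are hypothetical.

    # Hypothetical sketch: verify an evaluator's signed SHA-256 digest of a deliverable.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def sha256_file(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.digest()

    def verify_deliverable(path, published_digest, signature, evaluator_pubkey_bytes):
        # 1. The file you received must hash to the digest the evaluator claims to have reviewed.
        if sha256_file(path) != published_digest:
            return False
        # 2. The digest must genuinely carry the evaluator's signature.
        pubkey = Ed25519PublicKey.from_public_bytes(evaluator_pubkey_bytes)
        try:
            pubkey.verify(signature, published_digest)
            return True
        except InvalidSignature:
            return False

Passing this check only tells you that you hold the exact bits the evaluator examined; how much that is worth still depends entirely on the evaluator, which is the trust question running through this piece.
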
3. The source code is given to several security evaluators, who each publish a signed hash of the evaluated deliverables.

This model is similar to modern practice for higher assurance systems under Common Criteria. As higher assurance certification isn't automatically accepted in every country, the government of each country wanting to use the product might ask for a copy of the source code & documentation for their own evaluation. There's typically one evaluator that still does most of the work. The others are a check against it. This increases trust in the system so long as you trust that the evaluators and developer won't work together. One can reduce that risk by using mutually suspicious reviewers. That might even be the default in these schemes, as such suspicion is a driver for multi-party evaluation in the first place. You're still a version behind.

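Building on the previous sketch, the multi-evaluator case mainly adds an agreement check: every evaluator must have signed the *same* digest, so a single honest, competent evaluator who withholds a signature is enough to fail a swapped deliverable. Again this is illustrative only; sha256_file() and the Ed25519 imports come from the sketch above, and the data layout is made up.

    # Hypothetical sketch: require that every evaluator independently signed the same digest.
    # evaluator_keys_and_sigs is a list of (public_key_bytes, signature) pairs, one per evaluator.
    def verify_multi_evaluator(path, published_digest, evaluator_keys_and_sigs):
        if sha256_file(path) != published_digest:
            return False
        for pubkey_bytes, signature in evaluator_keys_and_sigs:
            pubkey = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
            try:
                pubkey.verify(signature, published_digest)
            except InvalidSignature:
                return False   # one missing or dissenting evaluator fails the whole check
        return True

Requiring all of the signatures rather than any one of them is what turns "trust the evaluator" into "the evaluators and the developer would all have to collude."
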
4. The source code is released to a limited audience, such as paying users or security evaluators.

An early example from the 1960s was the Burroughs B5000's OS (MCP), which was given in source form to customers for review and/or modification. This allows more validation than 2 or 3 because there is quite simply more labor ("more eyes") on it. There might also be 3rd parties one trusts among them. The risk of backdooring is lower and the *potential* to reduce defects is higher than with the previous options. The actual security improvement from this process depends on the number of reviewers, their qualifications, and what part of the application each focuses on. Having only a few qualified people of unknown identity reviewing a limited portion of the code is essentially equal to No. 2 or 3 in trustworthiness. If they're less qualified, then No. 2-3 might have resulted in more trustworthy software by using paid, professional reviewers.

5. The source code is available to all for review.

This allows the highest potential for security evaluation. Free open-source software (FOSS) makes up the vast majority of this. There are some commercial offerings whose source code is available for review or non-commercial use. There are also hybrid models like Open Core that mix proprietary and FOSS components. As with No. 4, the *actual* trustworthiness still depends on the review process and the trustworthiness of the reviewers, just as with No.'s 2 and 3. Protection against overt backdoors is easier than with No.'s 2 or 3, though.

Reviewers: The critical part in both closed- and open-source evaluations

So, my side says open- vs closed-source doesn't matter much against TLA's if they can find vulnerabilities in the software at will. Assuming invulnerable systems can even be produced, what does it take to claim one exists or comes close? A ridiculous amount of thorough analysis of every aspect of the system lifecycle by people with expertise in many aspects of hardware, software, and security engineering. So, it's obvious that regardless of the source sharing model, the real trust is derived from the developers and reviewers. Here are a few ways of looking at them.

1. Skill. Are they good at whatever they're doing? How much knowledge or experience do they have? Can they prevent and/or catch really subtle issues, corner cases, and esoteric attacks?

2. Experience. How many developments or reviews have they done? Were the resulting systems more secure than others? How many problems did they miss over time?

3. Time. How much time do they have? These analyses take a long time. The higher the security goal, the more analysis and the more time to put in. How much time was spent doing the development or assessment of the system?

4. Tools. Does the reviewer have the tools needed to find the problems in the system that lead to vulnerabilities? There are tools that can help catch all sorts of subtle bugs such as concurrency errors, pointer problems, and covert channels. Qualifications being equal, a reviewer lacking tools to support or automate aspects of their work might get less done. (A small example of the kind of subtle bug at stake follows this list.)

5. Commitment. How much do they really care? Will they let certain things slide? Will they gripe about every violation? How much hand-waving will reviewers tolerate from the developers?

6. Coverage. How much of the project will be reviewed, how thoroughly, and how often? It's common in lower-end evaluations (and FOSS) to test only part of the product. Higher quality or security requires more coverage of program artifacts and behavior in assessment.

7. Trustworthiness. Will they mislead you to weaken your security? This should be number one, but it's often the last one on people's lists in practice. So, I saved it for last. :)

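To illustrate the "subtle issues" that skill and tooling are meant to catch (this is my example, not from the original), here's an unsynchronized read-modify-write race in a few lines of Python. It looks fine on a quick read, which is exactly why a casual "found no problems" review is worth so little; the time.sleep(0) call just stands in for other work so the interleaving is easy to observe.

    # Hypothetical sketch: lost updates from an unsynchronized read-modify-write.
    import threading
    import time

    counter = 0

    def unsafe_increment(times):
        global counter
        for _ in range(times):
            tmp = counter          # read shared state
            time.sleep(0)          # another thread can run here
            counter = tmp + 1      # write back: increments made in between are lost
            # Fix: guard the read-modify-write with a threading.Lock.

    threads = [threading.Thread(target=unsafe_increment, args=(1000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("expected:", 4 * 1000)
    print("actual:  ", counter)    # typically far less: updates were silently lost
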
So, looking at these, the worst kind of reviewer will be a saboteur who lies about the effort put in, fixes low-hanging fruit for reputation, and blesses code with subtle, critical flaws. The worst non-hostile review will be a casual one by a non-expert looking at a few pieces of code and merely saying he or she "found no problems when inspecting the app's source code." That can give a false sense of security. High quality and security necessitate a number of mutually distrusting reviewers with good ratings in every area of No.'s 1-7. They check the product individually. They keep each other honest by doing a certain amount of redundant analysis. Anything in the Trusted Computing Base, the components that overall security depends on, should be checked by all of them.

Putting it together

I hope people now see the situation isn't black and white. Proprietary systems' source can be shared to varying degrees. The trustworthiness of closed, shared source, and open source depends on attributes of both the development process and the reviewers. There are many combinations of source sharing and review processes to consider when looking for the right tradeoff between what's practical and the trustworthiness of the system. The trustworthiness will mostly be limited by the number of trustworthy developers and reviewers you have, either volunteering or being paid. This combination of talent plus the resources to keep them on the project long enough is usually the determining factor in how trustworthy the resulting system will be.

Nick P.
Security Engineer/Researcher
(High-assurance focus)