Open Source vs Proprietary Software: A False Dichotomy in Software Security

I've noticed a false dichotomy in recent debates: you can have "open source" or proprietary, but not both. A person who thinks there are only two possibilities might miss opportunities, especially in a business that's afraid of FOSS, backdoors, or both. The good news is that there are many possible levels of source code availability and review. I'm going to run through some of them briefly here.

Source Availability Levels and Security Implications

Let's briefly look at the levels of sharing for a given application's source.

1. Totally closed source application.

This is a black box. It might have defects, backdoors, etc. One must trust the provider or use various analysis/isolation mechanisms to reduce the trust placed in them. This is the most dangerous type of software if you don't trust the provider.

2. Application with some source available.

This is a black box except for certain components. This is common with cryptographic algorithms, protocols, etc. It allows you to review a subset of the implementation for unintentional flaws or backdoors *in that subset*. Backdoors and unintentional vulnerabilities are still possible in the other parts of the application.

3. Full source *and documentation + development tools* of the application are available.

This is a white box. The application's functionality can be inspected for accidental or deliberate flaws. The documentation helps readers understand the code, increasing review effectiveness. Having the tools (or compatible ones) available ensures a binary can be produced from the source code, as sketched below. If the tools and runtime are similarly open, then the whole thing can be vetted. Otherwise, a person is putting their trust in whatever layers are closed, while trusting others to vet what is open.
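To make that last point concrete, here is a minimal sketch of checking that a binary you built yourself from the released source matches what the vendor ships. The artifact path and published digest are placeholders, and it assumes the build is reproducible; if the toolchain isn't deterministic, you're comparing behavior, not bytes.

    import hashlib
    import sys

    def sha256_of(path, chunk_size=1 << 20):
        """Hash a file in chunks so large binaries don't have to fit in memory."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical values: the binary produced by your own build of the released
    # source, and the digest published for the binary the vendor actually ships.
    LOCAL_BUILD = "build/app.bin"
    PUBLISHED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

    if sha256_of(LOCAL_BUILD) == PUBLISHED_SHA256:
        print("Local build matches the published digest.")
    else:
        print("MISMATCH: the shipped binary is not what this source produces.")
        sys.exit(1)

Without the tools and documentation this level provides, that comparison isn't even possible, which is the practical difference from level 2.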

Various source sharing arrangements and security implications

We can see that the full source must be vetted. Sharing it can happen in many ways. It doesn't have to be "give it to everybody" vs "give it to nobody." That's the false dichotomy I referred to. There are in fact plenty of options that go back well before "Open Source" became a widely known phrase. Surprisingly, there were proprietary companies doing this. Here's a somewhat unstructured list of possibilities. It's also assumed that docs, compilers, etc. are shared with anyone who gets the source code.

1. No source sharing of privileged code.

The code is kept secret. Only binaries are released. The user must fully trust the developer of the code. Backdooring is much easier. The code might or might not get reviewed internally for general vulnerability reduction.

2. The source code is given to a security evaluator, who publishes a signed hash of each evaluated deliverable.

This was the first security evaluation model, especially at the higher Orange Book levels. An evaluator with full access to the code would determine a level of assurance to place in it, enforce a minimum set of security standards, look for backdoors, and more. If you trust the evaluator, using the version they signed can increase confidence in the software. If you don't trust the evaluator, then the trust level is the same as in 1. The main drawbacks are the extra cost of the product due to evaluation and the likelihood that the customer will always be at least one version behind.
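As a rough illustration of what "publishes a signed hash of each evaluated deliverable" buys the customer, here is a minimal sketch using the Python cryptography package with Ed25519 keys. The keys, digest scheme, and placeholder deliverable are all illustrative; real evaluation schemes predate these algorithms and distribute keys and hashes through their own channels.

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Evaluator side: hash the reviewed deliverable and sign the digest.
    evaluator_key = Ed25519PrivateKey.generate()          # evaluator's long-term key (illustrative)
    evaluated = b"...bytes of the evaluated release..."   # placeholder for the real tarball/binary
    signature = evaluator_key.sign(hashlib.sha256(evaluated).digest())

    # Customer side: recompute the digest of what was received and verify the signature.
    public_key = evaluator_key.public_key()               # obtained out of band in practice
    received = b"...bytes of the evaluated release..."    # what the customer actually downloaded
    try:
        public_key.verify(signature, hashlib.sha256(received).digest())
        print("These are the exact bits the evaluator reviewed.")
    except InvalidSignature:
        print("This is not the evaluated deliverable; don't trust it.")

The signature only ties the bits to the evaluation; how much that is worth still comes down to the evaluator, which is the point of the reviewer discussion later.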

3. The source code is given to several security evaluators, who each publish a signed hash of the evaluated deliverables.

This model is similar to modern practice for higher assurance systems under the Common Criteria. As higher assurance certification isn't automatically accepted in every country, the government of each country wanting to use the product might ask for a copy of the source code and documentation for its own evaluation. There's typically one evaluator that still does most of the work; the others are a check against it. This increases trust in the system so long as you trust that the evaluators and the developer won't work together. One can reduce that risk by using mutually suspicious reviewers. That might even be the default in these schemes, as such suspicion is a driver for multi-party evaluation in the first place. You're still a version behind.
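Building on the previous sketch, the multi-evaluator arrangement just means the customer checks several independent signatures over the same digest before trusting a release. A minimal sketch, with the evaluator names and the unanimity policy purely illustrative (in reality each key pair lives with a separate organization and only the public halves reach the customer):

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    deliverable = b"...bytes of the evaluated release..."  # placeholder
    digest = hashlib.sha256(deliverable).digest()

    # Stand-ins for independent evaluators; generated here only so the sketch runs.
    evaluators = {name: Ed25519PrivateKey.generate() for name in ("lab_a", "lab_b", "gov_c")}
    signatures = {name: key.sign(digest) for name, key in evaluators.items()}
    public_keys = {name: key.public_key() for name, key in evaluators.items()}

    def count_valid(digest, signatures, public_keys):
        """Count how many evaluators' signatures verify over this digest."""
        valid = 0
        for name, sig in signatures.items():
            try:
                public_keys[name].verify(sig, digest)
                valid += 1
            except InvalidSignature:
                pass
        return valid

    # Demand unanimity here; a laxer policy might accept a quorum instead.
    if count_valid(digest, signatures, public_keys) == len(public_keys):
        print("Every evaluator signed this exact deliverable.")
    else:
        print("At least one evaluator did not sign off on this deliverable.")

The check is only as strong as the assumption in the text: that the evaluators aren't colluding with the developer or with each other.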

4. The source code is given to select people, such as users or evaluators, under a non-disclosure agreement.

An early example here was the Burroughs B5000's OS (MCP), given in source form to customers for review and/or modification. This allows more validation than 2 or 3, as there is quite simply more labor ("more eyes") and you can pick whom to trust. The risk of backdooring is low, and the potential to reduce defects is greater than with the previous options. The actual security improvement from this process depends on the number of reviewers, their qualifications, and which part of the application each focuses on. Having only a few qualified people of unknown identity reviewing a limited portion of the code is essentially equal to 2 or 3 in trustworthiness.

5. The source code is available to all for review.

This allows the highest potential for security evaluation. Free and open-source software (FOSS) makes up the vast majority of this. There are some commercial offerings whose source code is fully available but only usable in the commercial sector for a price. There are also hybrid models, described in articles mainly focusing on business models. As with 4, the *actual* trustworthiness of this still depends on the review processes and the trustworthiness of the reviewers, just as with 2 and 3. Protection against overt backdoors is easier than with 2 or 3, though.

Reviewers: The critical part in both closed- and open-source evaluations

So, my side says open- vs closed-source doesn't matter much against TLAs if they can find vulnerabilities in it at will. Assuming invulnerable systems can even be produced, what does it take to claim one exists or comes close? A ridiculous amount of thorough analysis of every aspect of the security lifecycle by people with expertise in many aspects of hardware, software, and security engineering. So, it's obvious that regardless of the source sharing model, the real trust is derived from the reviewers. Here are a few ways of looking at them.

1. Skill. Are they good at whatever they're reviewing? How good? Can they catch really subtle issues, corner cases, and esoteric attacks? Do they have the knowledge and experience?

2. Experience. How many reviews have they done? Were they successful? What is their track record at finding problems? How many problems were they seen to miss over time?

3. Time. How much time do they have? These analyses take a long time. The more security you want, the more analysis and the more time must be put in. How much time was spent on a specific aspect of the review?

4. Tools. Does the reviewer have the tools needed to find the problems in the system that lead to vulnerabilities? There are tools that can help catch all sorts of subtle bugs, such as concurrency errors, pointer problems, and covert channels. Qualifications being equal, a reviewer lacking tools to automate aspects of their work might get less done.

5. Commitment. How much do they really care? Will they let certain things slide? Will they gripe about every violation? How much will they tolerate from the developers?

6. Coverage. How much of the project will be reviewed, how thoroughly, and how often? It's common in lower-end evaluations (and FOSS) for only so much to be tested. Higher-quality efforts put more effort into reviews and testing.

7. Trustworthiness. Will they mislead you to weaken your security? This should be number one, but it's often the last one on people's lists in practice. So, I saved it for last. :)

So, looking at these, the worst kind of reviewer is a saboteur who lies about the effort put in, fixes low-hanging fruit for reputation, and blesses code with subtle, critical flaws. The worst non-hostile review is a casual one by a non-expert looking at a few pieces of code and merely saying he or she "inspected the app source code and found no problems." High quality and security necessitate a number of mutually distrusting reviewers with good attributes in about every area of this list. They check the product and keep each other honest by doing a certain amount of redundant analysis. Any truly core feature (e.g. the TCB) is analyzed by all.

Putting it together

I hope people now see that the situation isn't black and white. Proprietary systems' source can be opened to varying degrees. The trustworthiness of closed, open (non-free), and open (free) source depends on both the development process and the reviewers. There are many combinations of source sharing and review processes, leading to a variety of tradeoffs between trustworthiness and the practical aspects of software development. You can have as much of each as you have the resources to achieve, regardless of free or non-free, widespread distribution of source or a select few, etc. The resources, from the reviewers involved to the time and money available to the design choices, are always the determining factor in the trustworthiness achievable in a given application or system.

Nick P
Security Engineer/Researcher
(High assurance focus)
05/15/2014