Security of Software, Distribution Models (2014)
- Security of Software, Distribution Models: It's More Than Open vs Closed!
- (originally 2014 on Schneier's blog; revised 2018)
- I've noticed a false dichotomy in recent debates: you can have "open source" or proprietary software, but not the benefits of both. Developers who think there are only two possibilities may miss opportunities. This is especially true for users or buyers concerned about source copying, the ability to repair things, or backdoors. The good news is that there are many forms of source distribution available, and the trustworthiness of the review process also varies considerably. I'll briefly run through some of them here.
- Source Availability Levels and Security Implications
- Let's briefly look at levels of sharing for a given application's source.
- 1. Totally closed source application.
- This is a black box. It might have defects, backdoors, etc. One must trust the provider or use various analysis/isolation mechanisms to reduce the trust placed in them. This is the most dangerous type of software if you don't trust the provider.
- 2. Application with some source available.
- This is a black box except for certain components. This is common with cryptographic algorithms, protocols, etc. It allows you to review a subset of the implementation for unintentional flaws or backdoors *in that subset*. The same binary code should be in the application you receive. Backdoors and unintentional vulnerabilities are still possible in other parts of the application, and these may bypass the vetted component(s) entirely.
- 3. Full source *and documentation + development tools* of the application are available.
- This is a white box. The application's functionality can be inspected for accidental or deliberate flaws. The documentation helps readers understand the code, increasing review effectiveness. The availability of the tools (or compatible ones) ensures a binary can be produced from the source code. If the tools & runtime are similarly open, then the whole thing can be vetted. Otherwise, a person is putting their trust in whatever layers are closed. They will have to vet the open layers or *trust 3rd parties* to do it for them. Note the similarity between 1-2 and 3 in trusting third parties.
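- The rebuild-and-compare check this implies can be sketched in a few lines. This is an illustrative sketch, not a standard tool: it assumes the build is reproducible (bit-identical output), and the function and path names are made up for illustration.

```python
import hashlib

def sha256_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def binaries_match(vendor_binary: str, rebuilt_binary: str) -> bool:
    """True if the vendor-shipped binary is bit-identical to the one
    we rebuilt ourselves from the published source and tools."""
    return sha256_file(vendor_binary) == sha256_file(rebuilt_binary)
```

- In practice, compilers often embed timestamps or build paths, so a reproducible-build setup is needed before a mismatch actually signals tampering rather than build noise.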
- Various source sharing arrangements and security implications
- We can see that the full source must be vetted. Sharing it can happen in many ways. It doesn't have to be "give it to everybody" vs "give it to nobody." That's the false dichotomy I referred to. There are in fact plenty of options that go back well before "Open Source" became a widely known phrase. Surprisingly, there were even companies doing it for proprietary software. Here's a list of possibilities. It's also assumed that docs, compilers, etc. are shared with anyone who gets the source code.
- 1. No source sharing of privileged code.
- The code is kept secret. Only binaries are released. The user must fully trust the developer of the code. Backdooring is much easier. The code might or might not get reviewed internally for general vulnerability reduction.
- 2. The source code is given to a security evaluator, who publishes a signed hash of each evaluated deliverable.
- This was the first security evaluation model, especially at the higher Orange Book levels. An evaluator with full access to the code would determine a level of assurance to place in it, enforce a minimum set of security standards, look for backdoors, and more. If you trust the evaluator, using the version they signed can increase confidence in the software. If you don't trust the evaluator, then the trust level is the same as in 1. The main drawbacks here are the extra cost of the product due to evaluation and the likelihood that customers will always be at least one version behind.
- 3. The source code is given to several security evaluators, who publish signed hashes of the evaluated deliverables.
- This model is similar to modern practice for higher-assurance systems under Common Criteria. As higher-assurance certification isn't automatically accepted in every country, the government of each country wanting to use the product might ask for a copy of the source code & documentation for its own evaluation. There's typically one evaluator that still does most of the work; the others are a check against it. This increases trust in the system so long as you trust that the evaluators and developer won't work together. One can reduce that risk by using mutually suspicious reviewers. That might even be the default in these schemes, as such suspicion is a driver for multi-party evaluation in the first place. You're still a version behind.
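- The cross-check that several evaluators provide can be sketched as a quorum rule, with hypothetical names: compute the artifact's digest yourself and require that some minimum number of independently published digests agree with it. (Real schemes publish *signed* hashes; signature verification is omitted from this sketch.)

```python
import hashlib

def evaluators_agree(artifact: bytes,
                     published_digests: dict,
                     quorum: int) -> bool:
    """True if at least `quorum` evaluators published the same SHA-256
    digest that we compute locally for the delivered artifact.
    `published_digests` maps evaluator name -> hex digest they published."""
    ours = hashlib.sha256(artifact).hexdigest()
    agreeing = sum(1 for d in published_digests.values() if d == ours)
    return agreeing >= quorum
```

- Requiring a quorum rather than unanimity tolerates one lagging or faulty evaluator while still forcing would-be colluders to compromise several independent labs.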
- 4. The source code is released to a limited audience, such as paying users or security evaluators.
- An early example from the 1960s was the Burroughs B5000's OS (MCP), given in source form to customers for review and/or modification. This allows more validation than 2 or 3, as there is quite simply more labor ("more eyes") on it. There might also be 3rd parties one trusts among them. The risk of backdooring is lower and the *potential* to reduce defects is higher than in the previous options. The actual security improvement depends on the number of reviewers, their qualifications, and which parts of the application each focuses on. Having only a few qualified people of unknown identity reviewing a limited portion of the code is essentially equal to No. 2 or 3 in trustworthiness. If they're less qualified, then No. 2-3 might have resulted in more trustworthy software by using paid, professional reviewers.
- 5. The source code is available to all for review.
- This allows the highest potential for security evaluation. Free and open-source software (FOSS) makes up the vast majority of this. There are some commercial offerings whose source code is available for review or non-commercial use. There are also hybrid models like Open Core that mix proprietary and FOSS components. As with No. 4, the *actual* trustworthiness still depends on the review processes and the trustworthiness of the reviewers, just as with No.'s 2 and 3. Protecting against overt backdoors is easier than in No.'s 2 or 3, though.
- Reviewers: The critical part in both closed- and open-source evaluations
- So, my side says open- vs closed-source doesn't matter much against TLA's if they can find vulnerabilities at will. Assuming invulnerable systems can even be produced, what does it take to claim one exists or comes close? A ridiculous amount of thorough analysis of every aspect of the system lifecycle by people with expertise in many aspects of hardware, software, and security engineering. So, it's obvious that regardless of the source sharing model, the real trust is derived from the developers and reviewers. Here are a few ways of looking at them.
- 1. Skill. Are they good at whatever they're doing? How much knowledge or experience? Can they prevent and/or catch really subtle issues, corner cases, and esoteric attacks?
- 2. Experience. How many developments or reviews have they done? Were the resulting systems more secure than others? How many problems did they miss over time?
- 3. Time. How much time do they have? These analyses take a long time. The higher the security goal, the more analysis and the more time to put in. How much time was spent on the development or assessment of the system?
- 4. Tools. Does the reviewer have the tools needed to find the problems in the system leading to vulnerabilities? There are tools that can help catch all sorts of subtle bugs such as concurrency errors, pointer problems, and covert channels. Qualifications being equal, a reviewer lacking tools to support or automate aspects of their work might get less done.
- 5. Commitment. How much do they really care? Will they let certain things slide? Will they gripe about every violation? How much hand-waving will reviewers tolerate from the developers?
- 6. Coverage. How much of the project will be reviewed, how thoroughly, and how often? It's common in lower end evaluations (and FOSS) to test only part of the product. Higher quality or security requires more coverage of program artifacts and behavior in assessment.
- 7. Trustworthiness. Will they mislead you to weaken your security? This should be number one, but it's often the last one on people's lists in practice. So, I saved it for last. :)
- So, looking at these, the worst kind of reviewer will be a saboteur who lies about the effort put in, fixes low-hanging fruit for reputation, and blesses code with subtle, critical flaws. The worst non-hostile review will be a casual one by a non-expert looking at a few pieces of code and merely saying he or she "found no problems when inspecting the app's source code." That can give a false sense of security assessment. High quality and security necessitate a number of mutually distrusting reviewers with good ratings in every one of areas 1-7. They check the product individually. They keep each other honest by doing a certain amount of redundant analysis. Anything in the Trusted Computing Base, the components overall security depends on, should be checked by all of them.
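- The "every reviewer covers the whole TCB" rule is simple enough to state as a check. This is an illustrative sketch with made-up reviewer and component names, not part of any real evaluation tooling:

```python
def tcb_fully_reviewed(tcb_components: set, reviews: dict) -> bool:
    """True only if every reviewer examined every TCB component.
    `reviews` maps reviewer name -> set of components they examined;
    non-TCB components in a reviewer's set are allowed and ignored."""
    return all(tcb_components <= covered for covered in reviews.values())
```

- A single reviewer skipping one TCB component fails the check, which is the point: redundant coverage of the TCB is what keeps a lone saboteur or sloppy reviewer from blessing a flaw alone.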
- Putting it together
- I hope people now see the situation isn't black and white. Proprietary systems' source can be shared to varying degrees. The trustworthiness of closed, shared-source, and open-source software depends on attributes of both the development process and the reviewers. There are many combinations of source sharing and review processes to consider when looking for the right tradeoff between practicality and trustworthiness. Trustworthiness will mostly be limited by the number of trustworthy developers and reviewers you have, either volunteering or paid. This combination of talent plus the resources to keep them on the project long enough is usually the determining factor in how trustworthy the resulting system will be.
- Nick P.
- Security Engineer/Researcher
- (High-assurance focus)