
Why OpenBSD is Insecure v1 draft

a guest
Nov 29th, 2016
"OpenBSD is proactively secure with only 2 remote holes in default install in 20+ years."
I've debated this with them before. They have plenty of CVEs listed and fix bugs on a regular basis. They just don't usually call the bugs vulnerabilities. In Linux circles, it often happens that someone attempts to weaponize a bug to determine whether it's a vulnerability; then it's counted as such. The OpenBSD team just counts it as a bug without assessing exploitability. That's Enron accounting applied to keeping the official vulnerability count low.
The other issue is that they only count the "default install." Windows, other UNIX-based OSes, OpenVMS, OS/400, etc. come with what you need out of the box: the stuff you will actually use in production. Vulnerabilities get reported against the OS plus the software people need to use with it. The OpenBSD approach is to not count vulnerabilities in software you'll need to use since it's not "default." This artificially reduces the number below what happens in practice, as few people run OpenBSD without Internet services on the Internet.
"OpenBSD pioneered and is still leading in code audit."
This is the easiest myth to debunk, given the inventors of software engineering were applying code audit from the get-go. First, Bob Barton in the late 1950s came up with the concept of design and verification teams for software, using an HLL for OSes, interface checks, a CPU that protected things like stacks and arrays, compilation from source, and so on. He applied it to the B5000 (1961), which was proprietary with source shared with customers. It was also secure against most of the code injections hitting other systems, until maybe the invention of return-oriented programming. It's my baseline for other hardware/software systems.
Second, Dijkstra applied it in his THE multiprogramming system. Design and code-level correctness were the highest priority. The entire thing was decomposed into modules, with strict, hierarchical layering for easier verification of control flow. They also used some kind of formal specification against the code itself. It was among the most reliable systems built at the time.
Third, Margaret Hamilton's team on Apollo did verification of key modules. Her writings indicate they figured out many forms of software flaw through trial and error, then looked for them systematically in all the code. They especially looked for interface errors at function calls. The stack of specs and code was taller than she was. It ran flawlessly, even saving a mission at one point.
Next, in the 1970s, Fagan at IBM made this formal with the Software Inspection Process (SIP). He noted the same kinds of defects kept showing up. His process took a list of them, went through the code finding as many as possible, prioritized what to fix, and applied the fixes. Doing this instead of just basic testing worked wonders for QA. He kept doing it there (esp. on mainframe stuff) and at other places. Later, Cleanroom was invented at the same place, adding human verification of code and usage-based testing, achieving some of the lowest defect rates in the industry at little extra cost. Its structuring and formality are a bit similar to Dijkstra's THE. Some of those principles, combined with a capability architecture, went into the AS/400, basically one of the most reliable systems ever built. I've never seen one go down.
Also in the 1970s, a group of engineers formed a company designing systems and OSes that engineers would want. The culture was quite similar to OpenBSD's, interestingly enough, despite how much one hates the other. The OpenVMS operating system was carefully designed by architects with attention to quality, then coded in a careful way. They built features for a week straight, ran tests all weekend, got a report of failures on Monday, spent a week fixing as many as possible, and so on. Combined with well-designed reliability tech, individual servers ran for so many years that admins occasionally forgot how to reboot them. The record for cluster uptime was 17 years at a railway. It had a low number of vulnerabilities back when it was supported, since they disabled stuff by default, plus good code quality and a way better privilege architecture than UNIX. Back then, at least.
Around the same time, the requirements for high-assurance security were solidifying, with the evaluation of the SCOMP demonstrator against them by 1985. The high-assurance kernels... STOP OS, GEMSOS, Boeing SNS, and LOCK... were done with every state of the kernel mapped out in abstract machines to be analyzed against the requirements, a high-level spec, and the security policy. They also hunted for leaks via covert channels. The code was evaluated against all that with much testing and then pentesting. The systems did phenomenally well during independent pentesting versus MULTICS, the UNIXes, and other systems of the time. One big reason was that they used architectures specifically designed for security, where others tried to bolt security on (a no-no). On the capability side, System/38 and KeyKOS with KeySAFE were doing amazingly well despite not being designed for such evaluations. Just good architecture.
What about securing UNIX code? Was OpenBSD first with that? No.
People wanted UNIX's power and apps instead of the minimalist systems high-assurance security was providing. The first attempt at securing UNIX that I recall was UCLA Secure UNIX (1979): a rewrite of UNIX with a security kernel at the bottom enforcing policy, with the rest structured on top. Unfortunately, UNIX was too huge, at a *few dozen* syscalls, to mathematically verify for correctness with the primitive tools of the time. Even ignoring that, the APIs were inherently insecure, allowing all kinds of violations (esp. leaks). They had to be modified in ways that broke app compatibility. That made standard UNIX potentially impossible to secure.
Most of the research then went to Mach derivatives or high-assurance security. However, Trusted Information Systems did attempt to produce a commercial product, Trusted Xenix (1990?), by rewriting Microsoft Xenix's kernel with what methods from high-assurance they could apply: a lower TCB, trusted path, mandatory controls; they even made it immune to setuid-root vulnerabilities while staying compatible with setuid programs. That last one was easy, too, despite nobody in UNIX security doing it at the time. They did at least four versions, with the last in 1994.
Nonetheless, the result supported what the academics predicted, getting a medium-assurance ranking due to the complexity of its TCB and fewer protection mechanisms than clean-slate kernels. To maintain market share with some security, the UNIX security vendors resorted to making low-security knockoffs of such an OS, called Compartmented Mode Workstations, that had the security *features* but not the *assurance activities*. Trusted Solaris 8 was the dominant one, with Argus PitBull being the modern one that's most full-featured. Qubes is like the virtualization equivalent of these, even sharing the color scheme for windows.
So, OpenBSD neither pioneered code auditing nor led in UNIX security. Code auditing came from the Burroughs corporation, some CompSci people, and an Apollo team. Formal application started at Burroughs, then Apollo, then IBM's OSes, then OpenVMS, and then the high-assurance projects, with the failed attempts at UNIX last. As with code auditing, academics and proprietary vendors led the way on securing UNIX. They wisely stopped trying once they found it was impossible without major breaks in compatibility and performance. The consensus was to run UNIX in protected or user mode as a VM on top of a security or separation kernel, with security-critical stuff, esp. GUI or crypto, in isolated partitions directly on the kernel. This was done initially by LOCK with LOCK/ix, attempted with Mach, commercially deployed by separation-kernel vendors (e.g. Green Hills in 2005 or LynxSecure later), and in FOSS by Genode, which is in early stages now.
(Note: See p. 7 on LOCK/ix for everything they had to change in UNIX to preserve security while merely virtualizing it. Baking that into UNIX itself, with compatibility and the high confidence of LOCK's TCB, would be a lot harder. Or impossible.)
What OpenBSD did do was focus on eliminating vulnerabilities in their code, plus add mitigations that they hope will knock out the rest. That's way weaker than mechanisms and languages that provably eliminate whole classes of vulnerability; a recent example is Muen, done in SPARK. Another benefit of OpenBSD was making sure the security stuff was on by default and integrated well with the rest of the system. That's exceptional among the FOSS types. Also, releasing it as FOSS meant their security-focused UNIX got more adoption than anything on my list.
A few quick extras I noted from the All That Is Wrong essay.
I'll quickly say that security mitigations in general can do the following for a vulnerability, preferably in this order: prevent it entirely by design; block it during the attack; limit what it achieves when the attack succeeds; recover to a clean state. OpenBSD focuses on achieving perfection at coding, plus some library changes, for the first, with most of their mitigations focusing on the second. In the comments there, OpenBSD supporters didn't seem to believe in a need for the third, where one contains the damage from an exploit that succeeds. This was despite the author and commenters posting evidence that SELinux contained all kinds of vulnerabilities in the real world. They still insisted there was no benefit to MAC.
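To make that ordering concrete, here is a toy Python sketch of the four layers applied to one request handler. All names (ALLOWED, Sandbox, handle) are hypothetical, invented for illustration; nothing here is OpenBSD code.

```python
# Toy sketch of the four mitigation layers (hypothetical names):
# 1. prevent by design, 2. block during the attack,
# 3. limit what a successful exploit achieves, 4. recover to a clean state.

ALLOWED = {"read", "list"}  # layer 1: dangerous operations simply don't exist

class Sandbox:
    """Layer 3: handlers only ever see this narrow capability, not the whole system."""
    def __init__(self, files):
        self._files = dict(files)
    def read(self, name):
        return self._files[name]
    def list(self):
        return sorted(self._files)

def handle(request, sandbox):
    try:
        cmd, _, arg = request.partition(" ")
        if cmd not in ALLOWED:           # layer 2: reject hostile input outright
            raise ValueError("blocked: " + cmd)
        if cmd == "read":
            return sandbox.read(arg)     # worst case, an exploit reads the sandbox
        return ",".join(sandbox.list())
    except Exception:
        return "error"                   # layer 4: fall back to a known-good state

box = Sandbox({"motd": "hello"})
```

With this layering, `handle("exec /bin/sh", box)` returns "error", and even a bug inside `handle` can only touch what `box` exposes, which is the containment OpenBSD supporters were dismissing.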
The next counter they write is that SELinux was complicated and could introduce bugs. This is particularly ridiculous because an OpenBSD implementation of MAC could be as simple as they chose, with their great reputation for not introducing 0-days applied when they implement it. The funny thing is they claim they can't implement a MAC add-on without security holes but will try to do a whole kernel that way. Such contradictions are usually a sign of bullshit motivated by social, not technical, factors.
Another point was that you couldn't limit access as well in such a system. The old "trusted" operating systems let one separate the administration of security from that of applications. This comes in handy if you're contracting out work, or trusting a possibly buggy program with the administration of common stuff that's not security-critical. The DAC model has flaws that make that difficult. MAC and capability models can handle it.
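To show how small a mandatory check can be, here is a minimal Bell-LaPadula-style sketch in Python (the labels and levels are hypothetical, not any real policy). The key difference from DAC is that neither the subject nor the file's owner can waive these rules:

```python
# Minimal Bell-LaPadula-style MAC check (hypothetical labels and levels).
# Unlike DAC, subjects cannot grant exceptions: the rules are mandatory.
LEVELS = {"public": 0, "secret": 1, "topsecret": 2}

def mac_allows(subject_label, object_label, op):
    s, o = LEVELS[subject_label], LEVELS[object_label]
    if op == "read":      # "no read up": can't read above your clearance
        return s >= o
    if op == "write":     # "no write down": can't leak data to lower levels
        return s <= o
    return False          # default deny for unknown operations

assert mac_allows("secret", "public", "read")         # read down: allowed
assert not mac_allows("public", "secret", "read")     # read up: denied
assert not mac_allows("topsecret", "public", "write") # write down: denied
```

A real kernel hook needs labeling of processes and files plus an enforcement point on each syscall, but the decision logic itself can stay this small, which is the point about complexity being a choice.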
And don't use it for mutually untrusting apps or parties, because it definitely didn't have a covert channel analysis to prevent leaks the way the proprietary attempts at secure UNIX did. Their VMM probably doesn't either, much like the cloud VMs didn't, per all the "side channel" papers. The high-assurance VAX VMM for OpenVMS did do one, among the other work required for highly secure systems. It can and should be done if the OS or VMM aims at being secure. That precludes apps leaking secrets.
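As a tiny illustration of the side-channel point, a Python sketch (the secret is hypothetical): a naive early-exit comparison leaks how many leading bytes matched through its running time, while a constant-time compare examines every byte regardless and closes that channel.

```python
import hmac

SECRET = b"hunter2"  # hypothetical secret, for illustration only

def naive_check(guess):
    # Early-exit comparison: runtime depends on how many leading bytes
    # match, letting an attacker with a timer recover the secret byte by byte.
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
    return True

def safe_check(guess):
    # Constant-time comparison from the stdlib: same work on every input,
    # removing the timing channel.
    return hmac.compare_digest(guess, SECRET)
```

Both return the same booleans; only the timing behavior differs, which is exactly the kind of leak a covert/side-channel analysis is meant to catch.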
I'm not even going to get into their consistent use of a language that makes their job harder over ones inherently memory-safe, with or without GC, or ones that let you prove modules free of the vulnerabilities they're worrying about. That's for another essay. The main points above, on vulnerabilities in the default install, pioneering secure UNIX, pioneering code audits, and MAC being unnecessary or impossible to code, are the worst of the OpenBSD propaganda. People should stop believing these claims, as the evidence is strongly against them.
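For a one-line taste of the difference, a sketch only (the fuller argument is that other essay): in a memory-safe language, an out-of-bounds write is a defined, catchable error instead of silent corruption or undefined behavior.

```python
buf = [0] * 8                 # fixed-size buffer, as in C

def store(i, v):
    buf[i] = v                # every access is bounds-checked by the runtime

store(3, 42)                  # in-bounds write works as expected
try:
    store(9, 99)              # the same off-by-one class that corrupts memory in C
    overflow_caught = False
except IndexError:
    overflow_caught = True    # defined error raised; no memory corruption possible
```

The language rules out the entire class by construction, rather than relying on the programmer never making the mistake.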
  49. Nick P.