The essence of security is that unauthorized behavior won't occur. That implies the system will always be in one of a known set of acceptable states. Security issues arise at every level, from the hardware up to the app itself. One must also factor in risks such as malicious developers, repository control, distribution, initialization, configuration, and maintenance of secure state. You can be the best coder in the world. However, if you get any of the rest wrong, your code quality will simply be something the attacker enjoys observing as he toys with his new machine. ;)
Here's an old start on a process for trustworthy software that I wrote on Schneier's blog, mainly focused on preventing subversion. I've left out a few steps to simplify this post.
"1. Requirements for a deliverable must be unambiguous and formal.
2. Every high-level design element must correspond to one or more requirements and this should be shown in documentation.
3. The security policy must be unambiguous, compatible with requirements, and be embedded in the design/implementation. Correspondence must be shown.
4. The low-level implementation modules must correspond to the high-level design elements, at least one each.
5. The source code must implement the low level design with provably low defect process and avoid risky constructs/libraries.
6. The object code must be shown to correspond to the source code and no security-critical functionality lost during optimizations.
(DO-178B Level A requires this & there are tools to help.)
7. At least one trustworthy, independent evaluator must evaluate the claims and sign the source code to detect later modifications by the developers or repository compromise. This should also be done for updates."
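The correspondence checks in steps 2 and 4 can, in principle, be mechanized as a traceability audit. Here's a minimal sketch of the idea; every requirement, design element, and module name below is invented for illustration, not taken from any real process.

```python
# Hypothetical traceability data: every design element must map to >=1 known
# requirement (step 2), and every module to >=1 design element (step 4).
requirements = {"R1-authenticate", "R2-audit-log"}

design_to_reqs = {
    "D-login-flow": {"R1-authenticate"},
    "D-event-logger": {"R2-audit-log"},
}

module_to_design = {
    "mod_session.c": {"D-login-flow"},
    "mod_logger.c": {"D-event-logger"},
}

def check_traceability(requirements, design_to_reqs, module_to_design):
    """Return a list of traceability gaps; an empty list means full correspondence."""
    gaps = []
    # Step 2: each design element traces to at least one known requirement.
    for design, reqs in design_to_reqs.items():
        if not reqs or not reqs <= requirements:
            gaps.append(f"design {design} lacks a valid requirement")
    # Step 4: each module traces to at least one known design element.
    for module, designs in module_to_design.items():
        if not designs or not designs <= design_to_reqs.keys():
            gaps.append(f"module {module} lacks a valid design element")
    # Also flag requirements that no design element covers.
    covered = set().union(*design_to_reqs.values())
    gaps.extend(f"requirement {r} is uncovered" for r in requirements - covered)
    return gaps

print(check_traceability(requirements, design_to_reqs, module_to_design))  # []
```

A real evaluation does far more than this, but even a toy check like it makes "correspondence must be shown" auditable instead of a hand-wave.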
This was just the software. The TCB it runs on and all libraries it trusts must be secure. If they aren't, they must be isolated from the main application in a way that contains failures. That's hard. The app must be securely configured, sanitize all input, use easily parsed protocols/storage, exist only in predefined states, preferably be written in a safe language, and have a fail-safe crash strategy, with logging, for unforeseen errors.
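The "predefined states + fail-safe crash" requirement can be sketched concretely. The state names and transition table below are made up for illustration; the point is that anything outside the whitelist fails closed and gets logged.

```python
import logging

# Hypothetical whitelist of states and allowed transitions for some app.
TRANSITIONS = {
    "INIT": {"AUTHENTICATED"},
    "AUTHENTICATED": {"PROCESSING", "SHUTDOWN"},
    "PROCESSING": {"AUTHENTICATED", "SHUTDOWN"},
    "SHUTDOWN": set(),
}

class FailSafeApp:
    def __init__(self):
        self.state = "INIT"

    def transition(self, new_state):
        # Any request outside the predefined transition table fails closed.
        if new_state not in TRANSITIONS.get(self.state, set()):
            logging.error("illegal transition %s -> %s; failing safe",
                          self.state, new_state)
            self.state = "SHUTDOWN"  # fall back to the safest known state
            return False
        self.state = new_state
        return True

app = FailSafeApp()
app.transition("AUTHENTICATED")  # allowed by the table
ok = app.transition("INIT")      # not allowed: logs and drives app to SHUTDOWN
print(ok, app.state)             # False SHUTDOWN
```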
The high assurance security evaluations often required more. They wanted mathematical specification of requirements, security claims, and design. The highest assurance systems wanted mathematical proof of security, correctness, and/or a general assurance argument. They required loop-free layering (rare today), modularity (in style), a strong focus on interfaces (in style), easily analysed implementation constructs (uncommon), extensive testing (in style), pen testing by pro attackers (uncommon to rare today), covert storage/timing channel mitigation (almost nonexistent, although attacks exist), repo software (in style for good devs), physical security of repo+artifacts (uncommon), and independent evaluation of all of this by a trusted, qualified 3rd party (almost nonexistent).
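One small, concrete instance of the timing-channel problem mentioned above: a naive byte-by-byte secret comparison returns at the first mismatch, so response time leaks how many leading bytes of a guess were correct. Python's standard-library `hmac.compare_digest` is a real constant-time alternative; the token value below is made up.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Leaky compare: bails at the first mismatch, so the runtime reveals
    how many leading bytes of the attacker's guess were right."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

SECRET = b"hypothetical-api-token"  # invented value for the example

def check_token(guess: bytes) -> bool:
    # Constant-time compare: runtime doesn't depend on where the bytes differ.
    return hmac.compare_digest(SECRET, guess)

print(check_token(b"hypothetical-api-token"))  # True
print(check_token(b"hypothetical-api-tokem"))  # False
```

This only closes one narrow timing channel in one spot; the older evaluations demanded systematic analysis of storage and timing channels across the whole system.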
Note that this was security in the Orange Book days. This was what it took to call a software+system combination secure on what were basically time-sharing machines, dumb terminals, and simplified desktops/networks. Things have gotten more complicated and risky since then, although issues are similar. A few follow...
1. Hardware
a. Attacks on Intel SMM
b. Attacks on TXT
c. Malware in wild using processor errata (per Kaspersky)
d. DNS subverted because software ignored cosmic ray bit flips
e. DMA hardware (firewire attack)
f. Overprivileged or hard to control hardware (e.g. USB HID)
g. Peripherals' firmware is programmable and easier to attack
2. Mainstream Operating Systems
a. Huge amounts of kernel code. Kernel bugs followed.
b. Huge amounts of trusted code that modifies OS state.
c. So bloated that ports to embedded devices are bragworthy.
d. Quality increased for many over time, but still tons of vulnerabilities.
e. Still plenty of issues with interfaces and legacy libraries.
f. They almost totally ignore covert channels.
3. Middleware
a. Middleware quality varies considerably.
b. Documentation and actual behavior are often inconsistent.
c. Much famous middleware is unnecessarily complicated.
d. Combining secure code with insecure middleware often = an insecure app.
4. Protocols
a. Technically a form of middleware, but I give them special treatment.
b. Most common protocols designed pre-WWW and have inherent problems.
c. Many companies ignore superior alternatives to preserve legacy.
d. Hardcoded protocols eventually have issues and can't be replaced.
e. Complex protocols are hard to implement, yet used anyway.
f. Many attempts at secure protocols are subject to fall-back attacks.
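Item (f)'s fall-back problem can be shown with a toy version negotiation: if the responder silently accepts whatever is offered, an attacker who strips the strong offers forces the weak one. The version numbers and the policy floor here are invented; real protocols need authenticated negotiation, not just a floor.

```python
# Toy protocol versions; higher is stronger. MIN_ACCEPTABLE is local policy.
MIN_ACCEPTABLE = 3
SUPPORTED = frozenset({1, 2, 3, 4})

def negotiate(offered_versions):
    """Pick the highest mutually supported version, refusing to fall back
    below the configured floor instead of silently downgrading."""
    common = set(offered_versions) & SUPPORTED
    if not common:
        raise ValueError("no common protocol version")
    best = max(common)
    if best < MIN_ACCEPTABLE:
        # The strong offers may have been stripped in transit; abort.
        raise ValueError(f"refusing downgrade to v{best}")
    return best

print(negotiate([2, 3, 4]))  # 4
# negotiate([1, 2]) raises rather than falling back to a weak version.
```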
5. Subversion of Development and Distribution
a. Malicious developer allows compromise via a clever, small change
(See Myers' work on NFS subversion; the obfuscated C contest; easter eggs)
b. App compromised during build process.
c. App compromised between user and developers.
c1. Binary modified after build.
c2. Modified during transmission.
c3. Search results lead to backdoored versions.
d. App compromised during installation by misconfig or malice.
e. TCB compromised, then malware subverts app.
f. Interactions of various software used to compromise one of them.
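Items (c1)-(c3) are partly addressed by verifying a published digest before trusting a binary. This sketch uses the standard-library `hashlib`; the artifact and manifest are hypothetical, and in practice the manifest itself must be signed (step 7 above), or the attacker who modifies the binary simply modifies the manifest too.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical release manifest: filename -> expected SHA-256 digest.
# Unsigned manifests only detect accidental or in-transit corruption;
# a signed manifest is needed against a deliberate attacker.
artifact = b"pretend this is the released binary"
manifest = {"app-1.0.bin": sha256_of(artifact)}

def verify(name: str, data: bytes) -> bool:
    """Accept the artifact only if its digest matches the manifest entry."""
    expected = manifest.get(name)
    return expected is not None and sha256_of(data) == expected

print(verify("app-1.0.bin", artifact))                # True
print(verify("app-1.0.bin", artifact + b"backdoor"))  # False
```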
This is not a 100% comprehensive post on the issues. I've left out the most esoteric stuff, like EMSEC. However, making a secure application involves eliminating vulnerabilities across the entire lifecycle. It's not as easy as a few code reviews. The more effective assurance arguments require a great deal of specialised expertise, time, and money. There are also many tradeoffs one must make. The quote below gets to the bottom of why.
"If you look for a one-word synopsis of computer design philosophy, it was and is SHARING. In the security realm, the one word synopsis is SEPARATION: keeping the bad guys away from the good guys' stuff. [important part] So today, making a computer secure requires imposing a "separation paradigm" on top of an architecture built to share. That is tough! Even when partly successful, the residual problem is going to be covert channels."
- "We Need Assurance!", Brian Snow, NSA Technical Director
The old guard tried for decades to build secure systems/software. Certain simple, somewhat specialized systems went unbroken and seem secure. Others were just very hard to attack, limited damage, and recovered well. The old principles strained under issues like DMA and the inherent difficulty of securing networked/web environments. The end result, which I promote, is that we can use proven methods to increase the quality and assurance of software. We can't claim it's bulletproof, but we can have more confidence in it.
Increasing assurance across the board eliminates the low-hanging fruit current malware authors enjoy, stops the majority of attackers, reduces overall losses, and helps gradually reduce everyone's overall risk over time. Certain companies are taking the lead and producing very robust software: Praxis's Correct by Construction with SPARK; Green Hills' INTEGRITY-178B; Genode Labs' GenodeOS architecture; Dresden's Nizza Architecture & Mikro-SINA VPN; INRIA's CompCert compiler & OCaml efforts; Microsoft's Verve OS; Secure64's SourceT OS; Sentinel's HYDRA firewall; Chlipala's Ur/Web; Cornell's SWIFT for partitioning web apps. I hope more follow with strong, proven methods to increase the assurance of systems. Because the other approaches aren't working, have never worked, and will never work.