- Full post in reply to: https://www.reddit.com/r/cryptography/comments/1gzzuez/why_does_everyone_use_the_same_hash_functions/
- by u/cryptoam1
- Because reddit loves interpreting characters as formatting and has no way to shut off markdown for the entire post. FFS
- The problem is that making small changes to a currently standardized hash either creates a new hash that will likely be vulnerable to the same or a slightly modified attack (or be left with a very small and unsafe margin)*, or you essentially create an entirely new hash which requires its own dedicated long term cryptanalytic effort to ensure its security. Suffice to say that trying to analyze all the new diversified hashes would be a PITA that's not worth the effort. In order to build confidence in a given hash, one needs to subject it to an intense public attack effort for long enough, and then it needs a stable history against attacks that suggests security. Only then will a hash function get accepted as secure and usable for deployment. Making a bunch of diversified hashes spreads out the attack effort, leaving a stronger likelihood that one of the diversified hashes has a weakness that was not discovered due to the split effort. You don't want to have to suddenly withdraw an active hash from use a year after it's been approved and accepted.
Besides, it's not like we are in a scenario where everything is stuck on a single hash function. At minimum, we have SHA-2, SHA-3, and BLAKE (ie BLAKE, BLAKE2, and BLAKE3), all of which are different. SHA-2 is a secure hash function based on the Merkle-Damgard framework**. It's old, relatively speaking, but has a strong security history. SHA-3 is a sponge construction built on a public permutation*** and has a strong security history itself. SHA-3 was in fact selected as a fallback hash in case SHA-2 breaks, and it was chosen precisely because it is permutation based and therefore built on different principles from SHA-2. BLAKE is built on the HAIFA framework****. HAIFA is an MD based framework that integrates multiple design choices that prevent certain attacks from working effectively against it. BLAKE is built around a variation of the ChaCha/Salsa permutation, modified to work for hashing instead of as a stream cipher, which means the BLAKE hashes inherit much of the security history of the ChaCha/Salsa stream ciphers.
All three hash "families" have been found to be secure after extensive and lengthy analysis periods, and their security histories are stable. They are built on solid principles, engineered against every known attack vector, and designed to resist attack in general. This means we need some novel new attack to successfully break any of these hashes. With good likelihood, such a new attack would also compromise the security of the other hashes (reducing their security margins greatly, possibly enough to render them broken as well). This means combining them likely will not provide the security hoped for in such a scenario; we would instead need new clean sheet designs that take the new attack method into account.
In the unlikely (in my opinion, compared to the above scenario) case where the attack only works against some of the hashes and not all, then yes, using a robust hash combiner would provide security. However, trying to combine hashes is more difficult (naive methods either do not provide the expected security, or lose some security properties while preserving others) and potentially error prone*****. It's much easier to either already have protocol support for multiple hashes or to migrate to the next version instead. Far fewer things to keep track of or worry about implementing incorrectly. In the end, trying to make diversified versions of the same hash function is just too much work for too little gain.
At best, it buys a tiny amount of security. At worst, it leaves people using hashes that are unknowingly broken for critical systems. If a designer believes the risk of one hash breaking is so severe or likely that they need to secure against it, there are complex robust combiners that can be used to counteract that risk. Meanwhile, most designers do not believe such a risk is critical enough to dedicate design effort against a broken hash function, and instead focus on more pressing things like key management and authentication, which are actual ways protocols have broken and are much more common failures than bad hash functions, barring idiots using MD4/5 or SHA-1 well after the warning bells became actual forest fires.
- * In fact, theoretically speaking, trying to modify a hash's internal parameters may actually make it more vulnerable, or even enable hiding a backdoor for attackers.
- ** The MD framework takes a one way compression function and turns it into a hash function that can support arbitrary size inputs by using the function iteratively over chunks of input to update the internal state.
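To make that iteration concrete, here is a minimal MD-style sketch in Python. The block size, the all-zero IV, and the use of SHA-256 as a stand-in compression function are arbitrary choices for this sketch, not part of any real standard:

import hashlib

BLOCK_SIZE = 64  # bytes per message block; an arbitrary choice for this sketch

def compress(state: bytes, block: bytes) -> bytes:
    # Toy stand-in for a one way compression function. In a real MD hash
    # (eg SHA-2) this is a dedicated primitive, not a call to another hash.
    return hashlib.sha256(state + block).digest()

def md_hash(message: bytes) -> bytes:
    # MD strengthening: pad with a 1 bit, zeros, then the message length.
    length = len(message).to_bytes(8, "big")
    padded = message + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % BLOCK_SIZE) + length

    state = b"\x00" * 32  # fixed public IV
    for i in range(0, len(padded), BLOCK_SIZE):
        state = compress(state, padded[i:i + BLOCK_SIZE])
    return state  # the final internal state is the digest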
- *** The sponge construction uses a secure public pseudorandom permutation to create many kinds of primitives. It allows the creation of a hash function by mixing chunks of input into a portion of the much larger state, then using the permutation to mix the entire state. Once the input is completely processed, a small portion of the state is emitted as part of the output and the permutation is used to mix the state again. This is repeated until enough output has been produced.
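A minimal sponge sketch in Python, with made-up rate/state sizes and SHA-512 standing in for the public permutation (SHA-512 is not actually a permutation; Keccak-f is the real thing in SHA-3, this just keeps the sketch runnable):

import hashlib

RATE = 16        # bytes of state exposed to input/output (the "rate")
STATE_SIZE = 64  # total state size; capacity = STATE_SIZE - RATE

def permute(state: bytes) -> bytes:
    # Stand-in for a real public permutation such as Keccak-f.
    return hashlib.sha512(state).digest()

def sponge_hash(message: bytes, out_len: int = 32) -> bytes:
    # Simple pad: append 0x80, then zeros up to a full rate-sized block.
    padded = message + b"\x80"
    padded += b"\x00" * (-len(padded) % RATE)

    state = b"\x00" * STATE_SIZE
    # Absorb: XOR each block into the rate portion, then permute everything.
    for i in range(0, len(padded), RATE):
        block = padded[i:i + RATE]
        mixed = bytes(s ^ b for s, b in zip(state[:RATE], block))
        state = permute(mixed + state[RATE:])

    # Squeeze: emit rate-sized chunks, permuting between chunks.
    out = b""
    while len(out) < out_len:
        out += state[:RATE]
        state = permute(state)
    return out[:out_len]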
- **** HAIFA extends the idea behind MD (ie using a one way compression function) by using a larger compression function and feeding specific extra inputs into it (such as the number of bits hashed so far and a salt) so that the hash blocks certain kinds of attacks that MD based hashes are potentially vulnerable to, like length extension or multicollision attacks. These attacks do not necessarily break the security properties of a secure MD based hash, but they can surprise protocol designers (ie length extension) or make a less secure MD hash more dangerous (multicollision attacks allow converting a single collision into potentially many more collisions for cheap after the first successful one).
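To show the difference from plain MD concretely, here is a toy HAIFA-style sketch with the same caveats as the MD sketch above; the real HAIFA spec and BLAKE's compression function differ in the details:

import hashlib

BLOCK_SIZE = 64

def haifa_compress(state: bytes, block: bytes, counter: int, salt: bytes) -> bytes:
    # Unlike plain MD, HAIFA feeds the number of bits hashed so far and a
    # salt into every compression call, which breaks length extension style
    # tricks that rely on resuming the hash from a bare final state.
    return hashlib.sha256(state + block + counter.to_bytes(8, "big") + salt).digest()

def haifa_hash(message: bytes, salt: bytes = b"\x00" * 16) -> bytes:
    length = len(message).to_bytes(8, "big")
    padded = message + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % BLOCK_SIZE) + length

    state = b"\x00" * 32
    bits_hashed = 0
    for i in range(0, len(padded), BLOCK_SIZE):
        bits_hashed += BLOCK_SIZE * 8
        state = haifa_compress(state, padded[i:i + BLOCK_SIZE], bits_hashed, salt)
    return state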
- ***** See IACR 2002-135 for information on both preserving and robust constructions in folklore (ie the simple attempts). Preserving combiners preserve a given security property provided that all the component functions have the needed property. A robust combiner guarantees the targeted security property if at least one of the components has said property. You would want a robust combiner. Note that a robust combiner for hashes does not necessarily protect all the properties that are needed (ie it can protect collision resistance but not pseudorandomness). It would be ideal if all relevant security properties were robustly conserved. See the second linked paper for more information on those combiners, and note how they are much more complex, require more memory, and have larger outputs. A toy folklore combiner is sketched after the links below.
- IACR-2002-135 link: https://eprint.iacr.org/2002/135
- Paper link: https://eprint.iacr.org/2002/135.pdf
- Robust multiproperty hash combiner link: https://link.springer.com/chapter/10.1007/978-3-540-78524-8_21
- Paper link: https://iacr.org/archive/tcc2008/49480370/49480370.pdf
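For a concrete picture, here is the classic folklore concatenation combiner, which is robust for collision resistance (a collision for the combined output requires a collision in every component); SHA-256 and SHA3-256 are just example components:

import hashlib

def concat_combiner(message: bytes) -> bytes:
    # Robust for collisions as long as at least one component stays
    # collision resistant, but the output is twice as long and other
    # properties (eg pseudorandomness) are not automatically carried over.
    return hashlib.sha256(message).digest() + hashlib.sha3_256(message).digest()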