ShiningDrake

Untitled

Feb 20th, 2019
The hidden bit is an ingenious way to get computers to hold more data than there seems to be room for. To understand how it works, you must understand scientific notation. In scientific notation, the number 3245.209 becomes 3.245209 x 10^3. To keep the same level of precision, we must store every single digit. But computers store data in binary, and in binary scientific notation the first digit is always a 1.
  4.  
A number like 110.11 becomes 1.1011 x 2^2. A number like .011011 becomes 1.1011 x 2^-2.
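That normalization step can be sketched in Python. This is just an illustration, not how the hardware does it: `math.frexp` from the standard library splits a float into a mantissa in [0.5, 1) and an exponent, and doubling the mantissa while decrementing the exponent gives the 1.xxxx form used above (110.11 in binary is 6.75 in decimal, and .011011 is 0.421875):

```python
import math

def normalize(x):
    # math.frexp returns (m, e) with x == m * 2**e and 0.5 <= m < 1;
    # doubling m and decrementing e shifts to the 1.xxxx form.
    m, e = math.frexp(x)
    return m * 2, e - 1

print(normalize(6.75))      # (1.6875, 2)   i.e. 1.1011 x 2^2
print(normalize(0.421875))  # (1.6875, -2)  i.e. 1.1011 x 2^-2
```

Note that 1.6875 decimal is exactly 1.1011 in binary (1 + 1/2 + 1/16 + 1/32), matching the examples above.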
  6.  
So here is the point: if we always move the binary point over so it rests right after the first 1, then we KNOW the first digit will always be a 1. This is just as obvious as knowing that if you see lightning there will be thunder, or that if you never speed but happen to be speeding just for a second or two, there will be a speed trap right there! So why should we make the computer store the leading 1 when we know it is always going to be there?
  8.  
That would be stupid and waste storage space.
  10.  
So we don't store the 1. That unstored 1 is the "hidden bit."
  12.  
So if we were storing 110.11 we would actually store 1011 and the exponent 2. And if we were storing .011011 we would actually store 1011 and the exponent -2.
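You can see the hidden bit in a real IEEE 754 double. A minimal sketch, assuming a 64-bit double (1 sign bit, 11 exponent bits with a bias of 1023, 52 stored mantissa bits): `struct` reinterprets the float's bytes as an integer so we can pull the fields apart, and the stored mantissa for 6.75 (binary 110.11) begins 1011... with no leading 1 in sight:

```python
import struct

def double_fields(x):
    # Reinterpret the 64-bit IEEE 754 double as an unsigned integer.
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63
    exponent = ((bits >> 52) & 0x7FF) - 1023   # subtract the bias
    mantissa = bits & ((1 << 52) - 1)          # 52 stored bits, no leading 1
    return sign, exponent, mantissa

sign, exp, mant = double_fields(6.75)          # 6.75 = 1.1011 x 2^2
print(exp)                        # 2
print(format(mant, '052b')[:4])   # '1011' -- the leading 1 is hidden
```

Adding the hidden 1 back, `(1 + mant / 2**52) * 2**exp` reconstructs 6.75 exactly.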
  14.  
So my question for you is: if the computer is storing 010101.01, what would it store?