16-bit Float Conversions

This CGI simply allows you to convert between 16-bit floats and their integer representations. In EE480, we have adopted a mutant float format that is essentially just the top 16 bits of what the IEEE 754 standard calls binary32. Nothing wrong with that; for example, ATI GPUs originally dropped the bottom 8 bits of binary32 because doing so simplified computations while still allowing values to be copied in/out as 32-bit floats with the bottom 8 bits simply set to 0. The same trick works for us with the bottom 16 bits.
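As a minimal sketch of that trick in C (the helper names are hypothetical, not taken from the CGI source, and simple truncation of the low 16 bits is assumed on the way in):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Convert between a binary32 float and the 16-bit format by keeping or
   re-inserting the top 16 bits of the IEEE 754 bit pattern; the bottom
   16 bits are dropped going in and set to 0 coming out. */
static uint16_t f32_to_ee480(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof(bits));   /* reinterpret float as its 32-bit pattern */
    return (uint16_t)(bits >> 16);     /* sign, 8-bit exponent, top 7 mantissa bits */
}

static float ee480_to_f32(uint16_t h)
{
    uint32_t bits = ((uint32_t)h) << 16; /* bottom 16 bits become 0 */
    float f;
    memcpy(&f, &bits, sizeof(f));
    return f;
}

int main(void)
{
    uint16_t h = f32_to_ee480(1.0f);
    printf("1 -> 0x%04x -> %g\n", (unsigned)h, ee480_to_f32(h)); /* 1 -> 0x3f80 -> 1 */
    return 0;
}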


Enter/edit any of the following:

The decimal floating-point value:
1 becomes 0x3f80, i.e., SIGN=0, EXP=127, HIDDEN=1, MANT=0

The inverse, 1/1, is:
1, or 0x3f80, i.e., SIGN=0, EXP=127, HIDDEN=1, MANT=0

The internal hexadecimal representation:
0x3f80, i.e., SIGN=0, EXP=127, HIDDEN=1, MANT=0, becomes the floating-point value 1

Given two decimal floating-point values:
1 + 1 = 2, or 0x4000, i.e., SIGN=0, EXP=128, HIDDEN=1, MANT=0
1 * 1 = 1, or 0x3f80, i.e., SIGN=0, EXP=127, HIDDEN=1, MANT=0
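The SIGN/EXP/HIDDEN/MANT breakdown shown above follows directly from the bit layout of binary32's top half: bit 15 is the sign, bits 14..7 are the 8-bit biased exponent (bias 127), bits 6..0 are the 7 explicit mantissa bits, and HIDDEN is the implicit leading 1 that is present unless the exponent field is 0. A sketch of how those fields might be extracted (the function name is an assumption, not from the original program):

#include <stdio.h>
#include <stdint.h>

/* Split a 16-bit value into the fields shown above. */
static void show_fields(uint16_t h)
{
    unsigned sign   = (h >> 15) & 1;   /* bit 15 */
    unsigned exp    = (h >> 7) & 0xff; /* bits 14..7, biased by 127 */
    unsigned mant   = h & 0x7f;        /* bits 6..0 */
    unsigned hidden = (exp != 0);      /* implicit leading 1 unless zero/denormal */
    printf("0x%04x: SIGN=%u, EXP=%u, HIDDEN=%u, MANT=%u\n",
           (unsigned)h, sign, exp, hidden, mant);
}

int main(void)
{
    show_fields(0x3f80); /* 1: SIGN=0, EXP=127, HIDDEN=1, MANT=0 */
    show_fields(0x4000); /* 2: SIGN=0, EXP=128, HIDDEN=1, MANT=0 */
    return 0;
}

Arithmetic like the 1 + 1 and 1 * 1 examples can be handled the same way the copy-in/out trick suggests: widen both operands to binary32 with the bottom 16 bits set to 0, operate, and keep the top 16 bits of the result.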


The C program that generated this page was written by Hank Dietz using the CGIC library to implement the CGI interface.

