This CGI simply lets you convert between 16-bit floats and their integer (bit-pattern) representations. In EE480, we have adopted a mutant float format that is essentially just the top 16 bits of what the IEEE 754 standard calls binary32. There is nothing wrong with that; for example, ATI GPUs originally dropped the bottom 8 bits of binary32 because that simplified computations while still allowing 32-bit floats to be copied in/out with the bottom 8 bits simply set to 0. The same trick works for us with the bottom 16 bits.
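The conversion is easy to sketch in C; this bit layout is the same as what is now commonly called bfloat16. The function names below are illustrative assumptions, not the routines used by the actual CGI program:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical helper: keep only the top 16 bits of the
       binary32 encoding (truncation toward zero in the mantissa). */
    uint16_t f32_to_f16(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof(bits)); /* safe type pun */
        return (uint16_t)(bits >> 16);   /* drop the bottom 16 bits */
    }

    /* Hypothetical helper: rebuild a binary32 value by placing the
       16 bits in the top half and setting the bottom 16 bits to 0. */
    float f16_to_f32(uint16_t h)
    {
        uint32_t bits = ((uint32_t)h) << 16;
        float f;
        memcpy(&f, &bits, sizeof(f));
        return f;
    }

    int main(void)
    {
        float x = 3.14159f;
        uint16_t h = f32_to_f16(x);
        printf("%f -> 0x%04x -> %f\n", x, (unsigned)h, f16_to_f32(h));
        return 0;
    }

Because converting back just zero-fills the low half, every 16-bit pattern maps to a valid binary32 value, and the forward conversion simply truncates the mantissa (3.14159 comes back as 3.140625, for instance).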
The C program that generated this page was written by Hank Dietz using the CGIC library to implement the CGI interface.