Managing the 8- to 32-bit processor migration
Kevin King, Renesas Electronics America
EDN (August 28, 2012)
Back when I started in electronics, working on discrete, 4-bit processors, I couldn’t have known I would one day have to worry about how big an integer was or discuss processors in a Gulliver’s Travels context. As geometries shrank and prices dwindled, however, there was a great migration of applications from 8- to 16- and then to 32-bit processors. Along the way, tools evolved to bring code generation and application development to new levels of efficiency—generating more headaches in the process.
The problem had its genesis with the engineers working on the first microcontrollers, who assumed that 16 bits for an integer would be “good enough.” Indeed, early mainframe and minicomputer architectures differed in word length as well as in bit and byte ordering; the number of bits in an integer was tied to the architecture’s word length and varied from machine to machine.
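To see how that assumption bites in practice, consider a minimal sketch (my illustration, not from the original article) that simply reports how wide int is on whatever target it is compiled for; a typical 8- or 16-bit toolchain reports 16 bits, while a 32-bit toolchain reports 32.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* sizeof and CHAR_BIT reveal what this toolchain actually uses for int;
       code that silently hard-codes "16 bits" breaks when this number changes. */
    printf("int is %zu bits on this target\n", sizeof(int) * CHAR_BIT);
    return 0;
}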
With apologies to Jonathan Swift, engineers have revised the Lilliputians’ argument to debate which end of a number—the largest (big-endian) or smallest (little-endian)—should come first in memory. There are valid arguments on both sides of the “endianness,” or byte-order, debate, but this article focuses on the ramifications for developing applications using C code.
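As a quick illustration of what the debate is about, the sketch below (again my example, not the author’s code) prints the bytes of a 32-bit value in the order they sit in memory; a little-endian target prints the least significant byte first, a big-endian target prints the most significant byte first.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t value = 0x12345678u;   /* a known bit pattern */
    const unsigned char *p = (const unsigned char *)&value;

    /* Walk the value byte by byte in memory order: a little-endian machine
       prints 78 56 34 12, a big-endian machine prints 12 34 56 78. */
    for (size_t i = 0; i < sizeof value; i++)
        printf("%02X ", p[i]);
    printf("\n");

    /* The first byte in memory is enough to tell the two apart. */
    printf("%s-endian\n", (p[0] == 0x78) ? "little" : "big");
    return 0;
}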