Managing the 8- to 32-bit processor migration

Kevin King, Renesas Electronics America

Back when I started in electronics, working on discrete, 4-bit processors, I couldn't have known I would one day have to worry about how big an integer was or discuss processors in a Gulliver's Travels context. As geometries shrank and prices dwindled, however, there was a great migration of applications from 8- to 16- and then to 32-bit processors. Along the way, tools evolved to bring code generation and application development to new levels of efficiency, generating more headaches in the process.

The problem had its genesis with the engineers working on the first microcontrollers, who assumed that 16 bits for an integer would be "good enough." Indeed, the early mainframe and minicomputer architectures differed in word length as well as in bit and byte ordering; the number of bits in an integer was tied to the architecture's word length and varied from machine to machine.

With apologies to Jonathan Swift, engineers have revised the Lilliputians' argument to debate which end of a number, the largest (big-endian) or the smallest (little-endian), should come first in memory. There are valid arguments on both sides of the "endianness," or byte-order, debate, but this article focuses on the ramifications for developing applications in C.
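To make the byte-order and integer-size questions concrete, here is a minimal C sketch (not from the original article) that inspects how a 32-bit value is laid out in memory on the target it runs on. Seeing 0x11 in the lowest address indicates big-endian storage, 0x44 indicates little-endian; the fixed-width types from <stdint.h> sidestep the "how big is an int?" question entirely.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Fixed-width types avoid relying on the size of "int", which
       differs between 8-, 16-, and 32-bit toolchains. */
    uint32_t value = 0x11223344u;

    /* Reading the object through an unsigned char pointer exposes its
       in-memory byte order. */
    const unsigned char *bytes = (const unsigned char *)&value;

    if (bytes[0] == 0x11)
        printf("big-endian: most significant byte stored first\n");
    else if (bytes[0] == 0x44)
        printf("little-endian: least significant byte stored first\n");
    else
        printf("unexpected byte order\n");

    printf("sizeof(int) on this target: %u bytes\n", (unsigned)sizeof(int));
    return 0;
}

The same check run on, say, a little-endian RX or ARM build and a big-endian configuration of the same part will print different results even though the C source is identical, which is exactly the kind of portability hazard the rest of the article examines.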