Assertive debugging: correcting software as if we meant it
By Mark Halpern, Courtesy of Embedded Systems Programming
May 11, 2005
URL: http://www.embedded.com/showArticle.jhtml?articleID=163101116

Assertive debugging is a new way to make embedded systems ensure their own health by having your code monitor itself.

Debugging is an art that needs much further study.... The most effective debugging techniques seem to be those which are designed and built into the program itself—many of today's best programmers will devote nearly half of their programs to facilitating the debugging process on the other half; the first half ... will eventually be thrown away, but the net result is a surprising gain in productivity.
—Donald Knuth, The Art of Computer Programming1

As Don Knuth implies, debugging is a much-neglected subject, and we're paying a terrible price for that neglect. We've made little progress in debugging methods in half a century, with the result that projects everywhere are bogged down by buggy software. The price in lost time and wasted resources, when the projects are commercial, must run into the billions; the price when the projects are military is paid not only in dollars but in lives. This situation is intolerable; new ideas and approaches must be found. This article offers one such new approach.

I propose that a new system for debugging software, called the Assertive Debugging System (ADS), can transform debugging from a minor art form into a modern industrial process. ADS exploits an old idea—assertions were first suggested by John von Neumann in 1947.2 ADS, however, does something with assertions that neither he nor anyone else, to my knowledge, has proposed, much less done: it uses them systematically and exhaustively rather than as ad hoc tools that are employed only when the programmer remembers them and feels like using them. In doing so, ADS transforms assertions from an idea that's been floating around for half a century without achieving much into a technology that could effect a revolution in program development. And unlike the methods Knuth had in mind, it doesn't throw away the part of the program devoted to debugging, but preserves it as valuable documentation of the state of the subject program and for later reuse when that program is modified.

Bugs: the major bottleneck
In such critical applications, being able to take, and to prove you've taken, serious debugging measures will become much more important, even legally required. For applications on which so much depends, today's half-hearted gestures at debugging will no longer be acceptable. ADS represents an approach to program debugging that directly addresses these issues: it enables developers to shorten the debugging process, and it supports the systematic and documentable debugging of software objects that I contend will soon be required just to stay in business—perhaps required just to stay out of jail.

Debugging today
This is how software was debugged in the mid '50s, and how it's debugged today. It's a process that will always, if time and customer patience permit, eventually find the bug that's troubling you—but usually only that particular occurrence of it, only after a debugging effort of unpredictable length, and without leaving anyone the wiser about the program being debugged or about how to find other such bugs.

What is a bug?

Really dangerous bugs
What is needed, then, to deal with the debugging problem is some way to make bugs manifest themselves quickly, so as to give us the earliest possible warning of their existence, and let us take action before continued program execution can obliterate their traces.
Ideally, we would like bugs to become so blatant that their presence can be detected even before they have acted; we want to catch them when they are just about to do their dirty work. That is what ADS is designed to do.

How ADS works
The rigorous and systematic testing of such assertions throughout execution amounts to erecting walls on both sides of the narrow path that a program must take if its results are to be correct, so that the slightest deviation from that path causes an almost immediate collision between the running program and some assertion. Consequently, something valuable is learned from every execution-time failure: a bug is found (or at least its hiding area is narrowed down significantly) or a programmer's misconception is uncovered.

Using ADS
At each compilation of a subject program, the activated assertions generate into the object program code that can be used to check the variables to which they apply, at every change of value, for violations of any of the constraints so imposed ("can be used" because not every test needs to be executed every time). When the monitoring code detects that any variable has violated (or in some cases, is about to violate) an assertion, it halts execution of the program and takes the exception action specified by the programmer.

At this point the programmer using ADS is in a very different position from that of today's programmer whose program has stopped at a breakpoint. When ADS stops an execution, it's not because the program has just come to some point where the programmer hoped that an examination of some of his variables would reveal something; it's at a point—which may be far earlier or far later than the point at which that programmer would have inserted a breakpoint—where an anomaly has definitely been detected, and almost certainly detected very close to its origin. Nor is the programmer limited, when ADS reports such an event, to the kind of hit-or-miss search that his counterpart today typically performs at a conventional breakpoint; if the bug is not immediately apparent, the ADS user's next step is to rerun his program with a greater degree of monitoring enabled for all code dynamically preceding the point at which the anomaly was detected, so as to catch it at an even earlier moment.

There is unfortunately no practical way to demonstrate the validity of these claims for ADS short of building and using the system, but a thought experiment may be helpful, if not conclusive. Draw on your experience for a real bug that you've recently been involved with, or create an imaginary bug based on experience. Recreate on paper the state of the program variables just before the faulty instruction caused the first deviation from correct behavior, but with assertion checking, as just described, enabled. Consider, that is, that every variable—every predictably varying construct—in the program was being monitored at every change of value for violation of any of the conditions you would have specified if you'd been using ADS. See how close to the point where the bug first manifests itself ADS would have stopped and raised a red flag—and imagine how much easier it would be to pinpoint that bug with such help than without it. In almost all cases, I think you will find that the difference between conventional debugging and ADS debugging is so great as to amount to a difference in kind.
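Since no ADS-capable compiler exists (a point the author returns to in his responses to readers), any concrete code can only approximate the idea. As a rough illustration in ordinary C, with every name invented for the example (monitored_long, monitored_set, MON_SET), the sketch below shows the flavor of the mechanism: the programmer states a variable's permitted range and behavior once, every change of value is funneled through a checked setter, and a violation halts execution at the offending assignment rather than at some later, mysterious crash.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical constraint record for one monitored variable. */
typedef struct {
    const char *name;
    long        min;        /* asserted lower bound            */
    long        max;        /* asserted upper bound            */
    int         monotonic;  /* nonzero: value may never shrink */
    long        value;      /* current value                   */
} monitored_long;

/* Every change of value goes through this checked setter, so a violation
 * is reported at the assignment that caused it, not at some later crash. */
static void monitored_set(monitored_long *v, long new_value,
                          const char *file, int line)
{
    if (new_value < v->min || new_value > v->max ||
        (v->monotonic && new_value < v->value)) {
        fprintf(stderr,
                "ASSERTION VIOLATED: %s set to %ld at %s:%d "
                "(allowed %ld..%ld%s, previous value %ld)\n",
                v->name, new_value, file, line, v->min, v->max,
                v->monotonic ? ", monotonic" : "", v->value);
        abort();            /* the "exception action": halt at the origin */
    }
    v->value = new_value;
}

/* Convenience macro so each call site records its own location. */
#define MON_SET(v, x) monitored_set(&(v), (x), __FILE__, __LINE__)

int main(void)
{
    /* Declared intent: buf_index stays in 0..63 and never decreases. */
    monitored_long buf_index = { "buf_index", 0, 63, 1, 0 };

    for (int i = 0; i < 100; i++)    /* bug: the loop runs past the buffer */
        MON_SET(buf_index, i);       /* halts at i == 64, naming the line  */

    return 0;
}

With conventional debugging, the same bug would typically surface only when some later computation stumbled over the trampled memory; here the report names the variable, the violated constraint, and the exact source line.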
Cost of assertive debugging
To arrive at a true estimate of the cost of the ADS approach, the main requirement is to make sure that it's being compared against a thoroughly realistic estimate of the cost of the alternative, the present method of debugging. Present-day debugging runs often yield little or no knowledge about the bug being sought, and the cost of those runs, in the full sense of cost, must be counted in. With ADS, on the other hand, every execution results in useful, documented knowledge: either it finds a violation of an assertion, or it runs to completion, reporting that, with respect at least to the assertions activated, the program is bug free. Even if ADS reports a violation but it turns out that the program code is correct and it's the programmer's assertion that was mistaken, something of value is learned. In fact, the information gained in that case may be the most valuable of all; it's not an individual bug that's been found, but a misconception in the programmer's mind about her program—something even more important to correct.

Note, too, that the cost of ADS is almost entirely in machine cycles; what it saves is project schedule slippage, software-engineer time, and time-to-market. In short, it preserves assets that are growing ever more expensive, and it does so by using assets that are growing ever cheaper, and are already so cheap as to be, in many cases, not even worth metering.

The formulation of assertions is done when the programmer declares the variables and data structures; that is, at the time when his intentions for them are clearer in his mind than they will ever be again. Present-day implementation systems already require him to give full static definitions of his variables and structures; ADS requires him to add explicit statements about how they're allowed or forbidden to change at run time. In effect, the user does a lot of his debugging at the ideal time: when he's not under pressure to get a specific bug fixed to make a specific delivery date, and when his mind is clear and his knowledge fresh.

Philosophies compared
ADS, by contrast, says, "Way back in program design and development days, you told me what you meant by an anomaly for each of many of the variables and other constructs in this program; more recently, you told me which of these anomalies I was to keep looking for, and what I was to do when I found one. I have now found one and am reporting as instructed. The details are as follows ...." With ADS, the software engineer does the planning and the debugging system does the heavy lifting. The difference between the two could spell the difference between watching every project slip because of buggy software, and turning programming into a respectable and reliable industrial discipline.

Mark Halpern is a programmer and software designer whose experience goes back to the days when Fortran was the latest thing in computing systems. You can find an account of his career and links to other articles he's written at www.rules-of-the-game.com. He welcomes comments; send them to markhalpern@iname.com.

End notes

Reader responses
OK, a couple of comments here.

1. Diagnostics, at least "good" diagnostics, need to follow the philosophy of being deeply embedded within the project. The most successful systems usually employ the notion of diagnostics/debugging as an integral part of the architecture.

2. Diagnostics/debugging can take a couple of forms, the most common being reactive and proactive.
Of course, proactive diagnostics will usually consume code resources, and reactive diagnostics will in general be activated only after a problem is detected.

3. It is generally better to implement and architect the system so that bugs are found, as much as possible, at compile/build time. This will in general save a lot of grief and headache later.

4. Wherever possible, always try to implement unhandled-exception and post-mortem diagnostics. Doing so allows developers to find those nasty pointer-dereferencing and overrun errors after the fact. There are many ways to do this with modern processors and compilers today.

- Ken Wada

I have been using ADS in my embedded applications for a number of years, and it has paid off significantly. In order to address the performance concerns (CPU cycles) on critical applications, I produce two builds of the final object code. The first build has all the debugging assertions included and is used during development and in-house product and stress testing. The second build is used for the actual final product, and has the assertions effectively "compiled out." This allows the final product to have optimum performance, but still allows the source code to contain assertions that can be used for future changes and debugging. Even if the assertions are not completely removed from the final product, usually the action taken when an assertion fires is tamer than the action taken in the development build.

- Bob Weinberger

Your article in Embedded Systems Programming was interesting to me; however, it left me hanging a bit. In the "Using ADS" section, you list types of constraints that assertions can be defined for, and in the next paragraph you state, "they may be grouped in various ways, so that the programmer can activate or deactivate sets of related assertions with one command." I was hoping to see some examples of this later in the article, such as #define macros, but there were none. Do you have any references to articles or header files that illustrate the types of assertions you covered and categories for making them conditional?

One feature I've never seen in assert() libraries, even for embedded systems, is code that, instead of quitting the program, logs the occurrence in a buffer or sends it out over a comm link, then continues. Or that tells users where to set a software breakpoint in the assertion code, with an explanation of how to use a source-level debugger to backtrack to find the cause of the fired-off assertion. I am in the business of hardware-based debuggers (e.g., JTAG-controlled) that include processor trace. Another technique, if the system is real-time, is to cause the trace to trigger without halting the CPU (like a logic analyzer) and look back in the trace for the program error.

- Bruce Ableidinger

Author response: First, I have not implemented an ADS-enabled compiler, nor do I know of any implemented by anyone else, so I cannot give actual examples of the usage of assertions in the way I propose. That's why I had to describe a thought experiment in my article.

Second, I don't regard the technical details of such a compiler as being of any great interest or difficulty; for the most part, they're just extensions of features that are currently available in several modern programming systems. Practically all systems today enable the user to give a name to a set of commands and to include or exclude any named set in a given compilation. The kinds of assertions that can be made, beyond those I listed, will be learned by experience.
Third, what I see as the great difficulty is changing the programming culture. I wrote my article as a step toward making programmers understand (1) that traditional debugging methods are becoming, or already are, completely unacceptable; (2) that the thorough, systematic use of assertions goes a long way toward solving the debugging problem; and (3) that the cost of using ADS is small compared with the cost of sending out buggy software. If these lessons can be driven home, the rest is easy; the creation of the necessary software could be done within six months by a team of two or three good system programmers.

I'm puzzled by your reference to a feature that would tell programmers where to set a breakpoint in the assertion code--as I envision ADS usage, the programmer does not set breakpoints; the assertions themselves implicitly create potential breakpoints.

- Mark Halpern

I just read your article in Embedded Systems Programming about assertive debugging. I thought it was an interesting idea, but I am not sure I understand enough about it to evaluate it. Specifically, I understand that you are suggesting a systematic approach to building a set of invariants for the variables within a program. What I don't understand is:

a) when constraint checking is done against invariants,
b) what constraint checking is done when invoked, and
c) who controls when and how much constraint checking is done, and by what mechanism.

If you happen to have a ready example of the use of your method, I would probably be able to find all my answers there. Thanks for the article, and I look forward to understanding your idea better!

- Jody Glider
STSM, Storage Systems and Servers Research

Author response: Thanks for your note on my recent ADS article.

(a) constraint checking, or assertion testing, is done at execution time;

(b) in the default case, all assertions that have been compiled into the program are executed; if the programmer has exercised his power to disable any particular set(s) of such assertions for this particular execution, those assertions are not tested during this execution of the program;

(c) the programmer who causes the execution of the program controls just how much assertion checking is done, and does so by means of compiler directives such as "Skip assertion-sets B, D-G, and M."

Since no ADS-capable programming system yet exists, I cannot give you a realistic example of its use, but my proposal does not include anything highly novel in technology; the gist of my proposal is that assertions, a feature that has been known since the days of von Neumann, should be used thoroughly and systematically. The features that have to be added to programming systems to make them ADS-capable are simple, and mostly extensions of well-known existing features.

- Mark Halpern

Mark, you appear scarred by trying to debug with a less-than-capable debugger such as the GNU debugger, gdb. You should try Borland's or Watcom's debuggers; they provide immensely greater debugging productivity. I won't use gdb; it's not as good as adding printf() statements. Borland and Watcom show ALL the local variables at the same time, allow the user to ask for global variables, and let him single-step with a single keystroke, stepping into functions or over them depending on which key is struck. Watcom's allows setting breakpoints AND logical tests for those breakpoints, so one can delay the break until a numeric value is in or out of some range, or whatever else one can set in that logical selection. My programs tend to have many nested loops, and these capabilities of not hitting a breakpoint until some value is in range sure save me millions of keystrokes and much frustration. Such sophistication is not in gdb and probably won't ever be.
I don't know if Watcom's logical tests will compare strings; when I need that, I put a string compare in the source code and set my break inside it. I do much with weather, and sometimes it's 100,000 lines of input data before trash affects my programs. With gdb I could not debug. With Watcom I do it regularly. Open Watcom is coming to Linux as well, in addition to supporting OS/2, DOS, and windoze.

- Gerald N. Johnson

Author response: I'm glad to learn that there are some better debuggers out there than there were back when Fortran was the hottest programming language available, but the facilities you describe don't achieve anything like what I tried to describe in my article. What I want, and think we can achieve with advanced assertions, is a state in which we can say to our programming system, "I've told you how I expect my program to work; now I want you to monitor it, and notify me immediately if it tries to do anything else. And I want you to document everything--what assertions were in effect during this execution, what exceptions you detected, what the state of all key variables was when the exceptions were detected, and so on."

The programmer using such a system would not set breakpoints, not try to guess where the program might have bugs, not try to "save time" (his own or the computer's) by cutting corners in making assertions. He would assume that his program will contain bugs as originally written, and, by means of systematic and thorough use of assertions, transfer almost all the burden of finding those bugs to the programming system. I think we can get to that position anytime we're prepared to make the necessary effort, and that it's a damned shame it wasn't done years and years ago. I hope it won't take some disasters caused by buggy software to make us take the necessary steps.

- Mark Halpern

I enjoyed very much reading your article on ADS at Embedded.com. As a former lead programmer, high-speed hardware debugging tools designer, CPU architect, and someone who has been around the block several times since Fortran was hot, your perspective is welcome--it is based on reality instead of hype. Allow me to share a couple of alternatives/complements that might make ADS less objectionable to the average programmer and, in turn, more valuable to all of us.

First, I am a believer in full trace, as deep as you can afford. Not the assembly-language reconstruction type, but full reconstruction up to the source-language level. Combine that with ADS and you would unravel the type 3b problems much more quickly than the iterative approach that will inevitably occur even with ADS's power.

Second, I have at times constructed "observers" that run in parallel with the mainline program. In the observer, one codifies the information that you propose be included directly in the mainline program via ADS. By moving the overhead of assuring the expected behavior of the program onto a separate processor ("hyper-threaded" CPUs make this almost free at run time), we remove the objections about CPU overhead (mostly). We get the benefits of ADS without the impact on run-time performance (in most cases).

Third, the ability to specifically log changes to variables in real time is sorely lacking in most systems. While it is not as good as full trace, the ability to stop when a variable's value goes out of range, BEFORE it hits an assertion in many cases, is a valuable debugging aid. It moves the assertion work away from specific places in the code and into a debugging aid that runs in real time in parallel.

The key in all of these methods is that they can be intertwined with the ADS approach to produce remarkably effective debugging environments.
There are other, more sophisticated methods of further refining problem detection and debugging tasks, but the ones listed above have been tried and shown to be faster, in my experience with many "customer" programmers, than the more sophisticated methods. Thanks for the chance to consider your insights--I found them well constructed. Perhaps you will find my brief comments of interest as well. Regards,

- Mark Cummins

Author response: Thanks for your kind and thoughtful letter on my ADS article. To respond to your suggestions intelligently, I need to know a little more about them--I'm not quite sure exactly what you're proposing by way of augmenting ADS. "Trace," for example--does that mean specifying to the compiler or other programming system one or more variables whose changes of value are to be recorded? And if so, what further information, if any, would be recorded along with each new value? What's the benefit of recording every change of value if they are within the permitted range?

About "observers": if you have the hardware resources to check assertions without cost to the application program, great! I proposed embedding the assertions in the application program itself simply because I saw no alternative for most people, but if we have the resource, by all means let's use it! Of course, programs containing ADS would be conditionally compiled to contain only those checks that the programmer thought necessary, and might well contain directives that further specified just what run-time checks the programmer wanted performed at this execution.

On real-time logging of changes in variable values: this is just what assertions do, if I understand you correctly. When a constraint on the range of values a variable can take on is declared by means of an assertion, that variable is monitored throughout the execution, and just as you suggest, execution is stopped when the assertion-checking mechanism detects that a violation is about to occur. (This can happen only, of course, if the assertion is fully informed about the variable's run-time permitted values, but that would seem to be true in any case.)

- Mark Halpern

I've seen your article about assertive debugging, where you rightfully point out the importance of using assertions. I guess you may be interested in Bertrand Meyer's "Design by Contract" documentation and debugging methodology. This is a highly developed and proven, yet simple, way to use assertions systematically. In our company we have used it successfully for over a decade now (without his Eiffel language, by the way--and it can be applied to non-object-oriented software as well). http://archive.eiffel.com/doc/manuals/technology/contract/

With best regards,
- Cuno Pfister
Oberon microsystems AG

Author response: Thanks for your note on my article, with the references to "Design by Contract"--I hadn't known of this system. I'm glad to know that I'm not alone in my enthusiasm for assertions, and that your experience with them has been good.

- Mark Halpern

I found that your article explains this concept nicely. However, it would be very good (and useful for putting it into practice) if you could share a piece of code where you have put ADS in place (and the same piece without ADS).

- Vikas

Nice article. As one who constantly preaches on the virtues of the venerable ASSERT() macro, I consumed your article with interest. But, like Jay's potato chips, it left me hungry for more. To quote Clara Peller from the old Wendy's commercials, "Where's the beef!?" Is there going to be a "Part II" to your article showing practical implementation of your ideas for embedded programmers, hopefully in the C language?

- David Meekhof

That was a very interesting article on assertive debugging on Embedded.com. I was disappointed when I got to the end without seeing examples. Could you do a follow-up article with examples?
This way I can validate my thoughts on its implementation against the examples, to be sure I properly understood it. Thanks,

Great article. I remember doing something related to this in a progressive R&D department, writing in C. All functions had to return a status byte (Go/NoGo), and that status depended on checking variables within. We didn't use conditional compiling, though that is a good idea. We also didn't document the variable boundary conditions up front, also a good idea.

The issue was what to do with a NoGo. If the decision is to terminate the flow of the program, we would call an endless-loop function. This assumes we would use an emulator to break at that function and trace back a couple of steps to find the function that flagged the problem. I agree this is fast. Another possibility is that you want to identify the errant function without an emulator, using for example RS232. You set the status byte to a different value from each function and poll the value of the status byte over RS232. You may not be able to terminate the flow of the program, in order to keep RS232 functioning.

The above scenarios, although effective in the lab, have the disadvantage of taking the system down if a bug occurs in the field. Another possibility is self-correction, in which you decide not only that the variable is bad, but how you can report or recover from such an error without taking the system down. The overhead is immense and you cannot conditionally compile it out, so it's to be used only on critical programs.

This leaves the RS232 scenario as the most likely one to ship with the product. You don't halt the system with every little bug; the system behaves pretty much the same as if there were no ADS at all (most of the benefit of ADS being realized at development time), but you do have a status byte that you, your field-service guy, or even your customer can poll. The last alternative, to conditionally compile the ADS out entirely, should be used only when memory or speed is an issue, not to increase reliability or determinism: you have debugged the code with the ADS; if you compile it out, you have an unknown and have to test it all over again.

- Warren Thompson

I enjoyed your recent Embedded Systems Programming article "Assertive Debugging." You make excellent points. However, your historical perspective on using assertions in programming is not quite complete. Bertrand Meyer's "Design by Contract" (DbC) is a very mature, powerful software-construction methodology, which takes assertions to the next level. No article about assertions is complete without mentioning DbC. Niall Murphy already wrote two excellent articles about assertions for Embedded Systems Programming ("Assertiveness Training for Programmers," April 2001, and "Assert Yourself," May 2001). I've also written about assertions for CUJ ("An Exception or a Bug?", CUJ, August 2003), available online from http://www.quantum-leaps.com/writings/samek0308.pdf. Countless other articles about assertions and the implementation of DbC have been written for ESP, DDJ, CUJ, and other magazines.

I believe, however, that relatively little information exists on two pragmatic questions:

1. Should we ship the final product with or without assertions?
2. If we ship with assertions enabled, what should the system do when an assertion fires?

I'd be very interested in your opinion and advice about these issues.

Best regards,
- Miro Samek

The bulk of what you describe has been available in Ada for more than 10 years.
There is a significant processor burden and some up-front cost in declaring types and subtypes with limited ranges; however, there is a great payback in finding and fixing bugs before fielding the system. Unfortunately there aren't many Ada compilers, as the cost of entry into the market is high. I've used some C++ classes to monitor ranges of variables, useful in converting floating-point algorithms to fixed point (integers), mostly for digital signal processing. There is a significant run-time performance hit here as well. For mainstream debugging (read: C), the tool that I've used to capture the hard-to-find bugs is an array-bounds and pointer-checking patch to gcc that Herman Ten Brugge maintains. Tools like Valgrind also catch memory-corruption problems.

- Bill Priest

Author response: I know now, thanks to my readers, that several papers on assertion-based debugging have been published in recent years (though none of them as early as my first published remarks on the subject, in 1965), and I'm prepared to accept that much of what I call for may be available in Ada or DbC or other systems; what I cannot understand is why these facilities are not universally used, and why the debugging problem remains so acute despite their availability.

I don't see why there should be a significant run-time penalty in programs that were debugged via an ADS system; as I mentioned in my article, once a program is apparently debugged and ready to be shipped, it should be compiled without all the assertion-checking code. If further bugs appear in the production version, the assertion-checking code can easily be compiled back into it. And if we're talking about real-time code, all that matters is that it obey real-time constraints--if it's safely within those limits, it doesn't matter how many cycles it consumes.

Maybe the technical work of creating really serious debugging tools has already been done, and all that remains is to get people to use them. But buggy software remains the main problem in almost every engineering and scientific project today, so there's some big bottleneck somewhere that has to be overcome. If my article gets more programmers aware of ADS-type tools, and ready to try them, I'll feel fully rewarded for writing it.

- Mark Halpern

Reader reply: It doesn't matter how many cycles it consumes as long as it meets the timing; unfortunately, unless the processor is running at 10% capacity, you are not likely to be able to turn on a lot of checking (assuming that the checking is at most a 10x hit; I've never seen checking that fast, YMMV). The biggest obstacle I've seen to this in the real-time embedded domain is bigotry and a lack of quality tools for Ada; anything except C (and, increasingly, C++) is seen as lunacy. These languages don't support range checking without writing code for it (most developers will resist this or claim that it will take too long). Thanks for a well-written article on a problem that doesn't seem to be addressed by mainstream tools. Regards,

Liked your article on assertive debugging. The company I work for has been using a similar system for the last 15 years. One big advantage of assertive debugging that you did not mention is that it leaves a paper trail of what bugs have been checked for. Very useful in case of product-liability lawsuits. At Robo Vac Systems Inc. we make robotic floor cleaners, and the saying around here about software bugs is, "A half-ton scrub bot running amuck gives new meaning to the term fatal crash."
- William

Mark,
I just read your article and do not agree that ADS is the "Next Big Thing"; in fact, I feel that it promotes the very causes of buggy software: poor design, poor architecture, poor testing. This is not to say that error checking of internal variables is not needed, but this is simply good programming style, not a revolution in software debugging methodology. In particular I have a problem with the three following items:

1] Oversimplifying the costs of ADS: in many cases the additional cost in CPU cycles and memory (RAM and ROM) needed to implement ADS is prohibitive, especially in cost-sensitive, high-volume, deeply embedded products.

2] You neglect to mention that ADS is software itself, which requires a certain amount of debugging.

3] You base your whole premise of ADS on the assumption that present debugging techniques are akin to waiting for the system to blow up and then trying to find the errors. In my experience, well-designed unit testing of well-architected software (object-oriented, encapsulated, etc.) that has a well-defined interface between modules (classes) can be extremely efficient and effective. By unit testing only small modules, with a well-defined test plan, the cause of any errors is almost always readily apparent.

Best regards,
- David Katz
Embedded Development Group Leader, Engineering (STNA)
Bosch Security Systems

I used a method similar to ADS for a number of real-time systems. The major advantage I found is that the various assertions act as "health" indicators of the system. From my experience, any "unhealthy" sign will eventually present itself as a bug later on in the product life cycle. By investigating the cause of these unhealthy signs, I was able to find bugs that would otherwise have been left undetected for some time.

- Chi Ho Ng

Excellent article! Ounces or even pounds of prevention are worth tons of cure. What's your take on formal methods such as Z or B (Z's evolute)? B essentially comprises assertions (preconditions, invariants). Regards,

- George Hacken

I would like to see details on how this differs from the typical use of assertions, which has been written about in previous embedded-systems magazines.

- Gerry Rigdon

Another valuable area for comprehensive employment of assertions is support of software inspections, i.e., code reviews. Assertions explicitly identify the developer's expectations and assumptions, and aid inspectors' efforts in two ways: first by making explicit the developer's understanding of the software's intended functionality, and second by serving as a guide to the software's functionality, making it simpler to verify that the implementation conforms to those constraints and expectations.

- Marc A. Criley

Bugs that occur when software is first developed are handled by good design and good programming practices. Bugs more commonly arise as a program evolves to meet changing and/or new requirements. In this case, as the code evolves, assertions would also evolve. At this stage we would need a way to check for conflicting assertions and for missed assertions. Therefore we need an assertion checker that would work on the assertions that have been given by the programmer and find discrepancies among them.

- Arunkumar

Never mind the strawman practices of Garbage-In-Garbage-Out and Defensive Programming; Halpern's ADS appears to be more ad hoc and lacking the advantages of the mature, proven discipline of Meyer's Design by Contract. From an historical perspective, we have:

1st generation (1950s): GIGO
2nd generation (1970s): DP
3rd generation (1980s): DBC
4th generation (future): DBC-based static proofs

Let's keep our eyes on the prize and move forward, not backward.

- Todd Plessel

As an Ada programmer for the last 20 years, I could not agree more with the idea of catching errors by specifying limits on values and types when the code is being written. The idea is certainly valuable, but the style of programming encouraged by C and its derivatives is directly contrary. As I read (again) about buffer overflows that are impossible in properly written Ada, I wonder if this important concept will ever penetrate the mind of the average programmer or program manager.

- Kermit E. Terrell

Copyright 2005 © CMP Media LLC
It's nearly impossible to find a scientific or engineering project these days that doesn't depend on computing, and almost as hard to find one that's not slipping its schedule because of buggy software. The debugging problem is a critical one for nearly all our projects. The penalties we pay for buggy software are already high: lost business when our customers are dissatisfied and lost sales when our products are tardy coming to market; these penalties will get much higher as we increasingly use computers for critical applications—mission-critical and even life-critical.
The most remarkable thing about debugging today is how little it differs from debugging at the dawn of the modern computing age, half a century ago. We still do it by letting a faulty program run up to what we conjecture is a critical point, then stop execution and look at the state of what we think are the key variables. If one of these variables differs in value from what we expected, we try to understand how it could have assumed that value. If we can't understand where it went wrong, we repeat the process, stopping at some earlier point. After an unpredictable number of iterations of this process, we stop the program close enough to the location of the bug, and the standard revelation occurs: we find that we have forgotten to reset some counter, flush some buffer, allow for the overflow of some data structure, or have committed one of the other half-dozen classic programming errors.
To make clear which bugs are the really troublesome bugs—the ones that ADS is meant to deal with—I offer here a rough taxonomy of software problems in general, with estimates of their relative gravity. You'll find nothing original in this taxonomy; all it does is gather and organize some common truths and put them in a form convenient for understanding ADS. Only programmer errors are considered here—problems caused by hardware failure, operator error, or other conditions not under the programmer's control are not nearly so difficult to deal with, nor so serious a problem. Programmer errors fall into the types described below.
Type 1 errors have nothing to do with computing; they're just plain old ignorance, carelessness, or stupidity, for which no general remedy is known. Type 2 errors are computer-related, but aren't particularly troublesome; they're so gross that they're usually found early in the program's design stage and are relatively uncommon. Type 3a are already reasonably well handled—most modern program-development systems detect all the common syntactic errors and closely pinpoint them. Sometimes they can even fix them, as the program used to compose this article silently changes hte to the.
Type 3b errors are the real villains: easy to introduce, hard to notice, and patient in waiting for the worst possible moment to manifest themselves. The reason they're so great a problem is that they're so trivial, so inconspicuous, so hard to focus on. Type 3b bugs (henceforth just "bugs") are dangerous precisely because they're seldom immediately troublesome. A program infected with them is often asymptomatic until it crashes disastrously or yields obviously faulty output. Generally these bugs let programs run with no sign of trouble long after they have in fact corrupted the results. By the time it's evident that something is wrong, much has happened to delete or corrupt the evidence needed to determine just where the problem originated; hence the long and painful period of backtracking that the debugging process almost always begins with.
The way to catch bugs while they're fresh and out in the open is by monitoring the behavior of a great variety of variables at run time, looking for violations of assertions made by the programmer when he defined them. "Variable" means here not just those quantities a mathematician would think of and label as such, but any program construct any of whose properties change in a predictable way, either absolutely or relative to some other program construct. Among these would be the numeric variables that specify how often a loop is to be traversed, how many characters a buffer can hold before it's to be written out, how many states a switch can assume, and so on; they define, collectively, the route the program is meant to follow. It's the major premise of ADS that no bug can take effect without soon causing some variable to violate a constraint, and that if such violations are systematically detected, virtually every bug will cause an alarm to sound while it's still "fresh," easily found, and understood.
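As a rough illustration of a constraint that holds "relative to some other program construct," consider a ring buffer whose write and read counters must never drift more than the buffer's capacity apart. The sketch below is ordinary C using the standard assert(); the names (ring_t, RING_SIZE, RING_CHECK) are invented for the example and are not part of any ADS notation. The point is only that the invariant is stated once and then checked at every change of value, so an over-eager producer is caught at the exact put that would trample unread data.

#include <assert.h>
#include <stdint.h>

#define RING_SIZE 64u              /* illustrative capacity */

typedef struct {
    uint32_t wr;                   /* total elements ever written */
    uint32_t rd;                   /* total elements ever read    */
    uint8_t  data[RING_SIZE];
} ring_t;

/* The declared relationship between the two counters: the reader never
 * passes the writer, and at most RING_SIZE elements are ever unread.   */
#define RING_CHECK(r)  assert((r)->wr >= (r)->rd && \
                              (r)->wr - (r)->rd <= RING_SIZE)

void ring_put(ring_t *r, uint8_t byte)
{
    assert(r->wr - r->rd < RING_SIZE);  /* about to overwrite unread data? */
    r->data[r->wr % RING_SIZE] = byte;
    r->wr++;                            /* change of value ...             */
    RING_CHECK(r);                      /* ... checked immediately         */
}

uint8_t ring_get(ring_t *r)
{
    assert(r->wr > r->rd);              /* about to read an empty buffer?  */
    uint8_t byte = r->data[r->rd % RING_SIZE];
    r->rd++;                            /* change of value ...             */
    RING_CHECK(r);                      /* ... checked immediately         */
    return byte;
}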
For each of his program constructs, the programmer asserts at definition time all the constraints on its behavior he can think of. The possible constraints include limits on the values a construct may take on and rules about how and when those values are permitted to change; others will doubtless suggest themselves as experience in the use of ADS grows.
These assertions are expressed in a notation that's a natural extension of the source language the programmer uses, and they may be grouped in various ways, so that the programmer can activate or deactivate sets of related assertions with one command.
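No such notation exists in today's C toolchains, but the grouping and one-command activation described here can be crudely approximated with the preprocessor, which is essentially the #define-based scheme one reader asks about in the responses. The group names below (ADS_CHECK_BUFFERS, ADS_CHECK_STATE, ADS_CHECK_TIMING) are invented for the example.

/* ads_groups.h -- a hypothetical header sketching named assertion sets.
 * A whole set is switched on or off with one build command, e.g.:
 *     cc -DADS_CHECK_BUFFERS -DADS_CHECK_STATE ...
 */
#ifndef ADS_GROUPS_H
#define ADS_GROUPS_H

#include <assert.h>

#ifdef ADS_CHECK_BUFFERS
#define ADS_ASSERT_BUFFERS(cond)  assert(cond)
#else
#define ADS_ASSERT_BUFFERS(cond)  ((void)0)
#endif

#ifdef ADS_CHECK_STATE
#define ADS_ASSERT_STATE(cond)    assert(cond)
#else
#define ADS_ASSERT_STATE(cond)    ((void)0)
#endif

#ifdef ADS_CHECK_TIMING
#define ADS_ASSERT_TIMING(cond)   assert(cond)
#else
#define ADS_ASSERT_TIMING(cond)   ((void)0)
#endif

#endif /* ADS_GROUPS_H */

A call site might read ADS_ASSERT_STATE(mode == MODE_IDLE || mode == MODE_RUN); building with -DADS_CHECK_STATE activates that whole set of checks, and omitting the flag compiles the set away to nothing.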
Most programmers exposed to the idea seem to agree that ADS would enable them to find bugs much more quickly, but many protest that the cost would be prohibitive; program execution, burdened with all that run-time checking, could cost hundreds of times more cycles than ordinary execution. Many also quail at the thought of supplying all the assertions that would enable ADS to rigorously monitor the execution of the program. These are not unreasonable concerns, but they're more imposing at first glance than after a hard look.
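One way to blunt the cost objection, and one that several readers describe from their own practice in the responses, is to make the exception action itself configurable: a full halt-and-report during development and stress testing, a cheap log-and-continue in the field, or nothing at all in the shipped build. A minimal C sketch follows; ADS_FULL, ADS_LOG, and ads_log_violation() are hypothetical names, and the logging routine (a RAM buffer, a serial port, whatever the product can afford) is left to the application.

#include <stdio.h>
#include <stdlib.h>

#if defined(ADS_FULL)                /* development / stress-test build    */
#define ADS_ASSERT(cond)                                                     \
    do { if (!(cond)) {                                                      \
        fprintf(stderr, "ADS halt: %s at %s:%d\n",                           \
                #cond, __FILE__, __LINE__);                                  \
        abort();                                                             \
    } } while (0)

#elif defined(ADS_LOG)               /* field build: record and continue   */
void ads_log_violation(const char *expr, const char *file, int line);
#define ADS_ASSERT(cond)                                                     \
    do { if (!(cond))                                                        \
        ads_log_violation(#cond, __FILE__, __LINE__);                        \
    } while (0)

#else                                /* shipped build: checks compile out  */
#define ADS_ASSERT(cond) ((void)0)
#endif

The development build pays the full price in cycles; the shipped build pays little or nothing, while the assertions stay in the source as documentation and can be compiled back in if a field problem appears.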
The great difference between debugging with ADS and with conventional tools is that ADS, once primed, takes the initiative and, within the limits set by its user, does a completely systematic job. In conventional debugging, the system in effect says to the user, "I know of no reason to stop execution at this point, but you have ordered a halt here by setting a breakpoint, so here's a window through which you can look at whatever variables you, in your present state of knowledge, think might be relevant to your bug hunt. If an anomaly does exist in the current state of your program, you're responsible for recognizing it; I wouldn't know an anomaly if I tripped over one."
Sr Embedded Systems Consultant
Aurium Technologies Inc.
Emerson Process Management
Sr Embedded Software Engineer
Software Engineer
Gentex Corporation
- Matt Minnis
Varian, Inc.
Author response: Your note on my recent ADS paper is one of many that tell me that not only is the idea not new, but that a facility of the kind I call for is already available in one or another programming system. You're the first to tell me that it's in Ada; almost everyone else tells me about Design by Contract (from Bertrand Meyer). (Thanks for pointing me to Ten Brugge's site; I'd never heard of him before.)
Reader reply: In the Ada system I used, the run-time penalty was roughly 10-20 to 1 for including the checking in every single file (you could only specify that the checking be used on a per-file basis). This was for a real-time radar-processing platform; we enabled the checking for all unit tests and on selected higher-level files (most algorithm code couldn't afford the checking). This worked fairly well as long as the unit tests had good coverage (we also had a test-coverage analyzer). Unfortunately, as is common with a lot of systems, the "real" bugs were found at the interfaces between functions/classes (different developers), where functions were used incorrectly or had side effects; we integrated these functions (initially) only on a running system, and since it had to run in real time, the checking wasn't running and the system bugs appeared. What we did to help alleviate this was to create a "simulation" of the system running on the target platform, supplying the data in a much-slower-than-real-time fashion; with this setup we could turn all the checking on and find the bugs. It took a significant amount of effort to create a mechanism to capture "real" data, and a commitment to built-in test to develop an interface to inject the "real" data into the system (this required hardware modifications and some software tweaks). This was for a military system where people's lives were on the line, so we could justify the cost; on most commercial systems I've worked on, there is a lot of resistance to adding these types of features.
- Bill
Firmware Engineer
LMI Technology
Chief Scientist
Altman Research Corp
Consultant
Quadrus Corporation
Senior Consultant
Turing Softwares
Software Engineer
Boeing