If we wish, we can also use the interface to specify constraints, such as the maximum number of logic gates or the maximum number of clock cycles we wish to use (alternatively, we can simply leave these nitty-gritty details up to Cascade). At this point, we need to get a little bit ahead of ourselves. What Cascade is eventually going to do for us is to take the embedded software functions we've selected and generate two main outputs: the RTL to implement a custom co-processor, and the microcode to run on this co-processor (Figure 1).
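To make this a bit more concrete, here is a purely hypothetical sketch of my own (the function name, sizes, and scaling are illustrative and not taken from Cascade's documentation) showing the sort of cycle-hungry routine we might select for off-loading: a small fixed-point FIR filter whose multiply-accumulate loop gives a general-purpose processor a serious workout.

/* Hypothetical example: a 16-tap fixed-point FIR filter, the kind of
 * compute-intensive inner loop a profiler tends to flag and that makes
 * a natural candidate for off-loading to a custom co-processor. */
#include <stdint.h>

#define NUM_TAPS 16

int32_t fir_filter(const int16_t *samples, const int16_t *coeffs)
{
    int32_t acc = 0;
    for (int i = 0; i < NUM_TAPS; i++) {
        acc += (int32_t)samples[i] * (int32_t)coeffs[i];  /* multiply-accumulate */
    }
    return acc >> 8;  /* scale the accumulated result back down */
}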
But before we get to that point, Cascade presents us with a selection of candidate architectures. We can pick an architecture that looks interesting and analyze it in more detail, such as seeing exactly how many clock cycles will be required for each of the off-loaded functions. If we wish, we can play with the constraints, off-load additional functions, examine different candidate architectures, and so forth.
Figure 1 — Cascade generates the RTL for the co-processor and the microcode to run on the co-processor (plus testbenches and synthesis scripts).
Once we've made our decision, we press the "Go" button and out pops the RTL for our custom co-processor and the microcode to run on that processor (all of the interfacing between the two processors is automatically handled by Cascade). But wait, there's more. Let's suppose that we actually fabricate our SoC, and — just as we get the first devices in our hands — someone comes running in saying "Oh no! I just heard that the software algorithms have been changed!"
And of course, we just know that one of the "Tom", "Dick", and "Harry" functions we off-loaded is going to be among those affected. Well, fear not my braves, because we can instruct Cascade to keep using an existing co-processor implementation and simply generate new microcode to run on that co-processor (I told you it was cool!).
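To picture why regenerating only the microcode can save the day, suppose (again, this is my own illustrative example, not one from Cascade's documentation) that the algorithm change is purely arithmetic: the revised function keeps the same interface and data-access pattern as my earlier FIR sketch, so the co-processor hardware we already fabricated remains perfectly serviceable and only new microcode is required.

/* Hypothetical illustration only: the revised algorithm keeps the same
 * interface and data-access pattern as the earlier sketch, so the
 * existing co-processor hardware stays put and only the microcode
 * needs to be regenerated. */
#include <stdint.h>

#define NUM_TAPS 16

int32_t fir_filter(const int16_t *samples, const int16_t *coeffs)
{
    int32_t acc = 0;
    for (int i = 0; i < NUM_TAPS; i++) {
        /* New in this revision: saturate each product to 24 bits
         * before accumulating, instead of accumulating raw products. */
        int32_t product = (int32_t)samples[i] * (int32_t)coeffs[i];
        if (product >  0x7FFFFF) product =  0x7FFFFF;
        if (product < -0x800000) product = -0x800000;
        acc += product;
    }
    return acc >> 8;  /* scale the result back down, as before */
}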
A few more points to ponder
In fact, the more you think about this stuff the cooler it gets. Let's consider the world from the perspective of the embedded software developers. Their standard development environment tells them, in glorious Technicolor, everything about how their code performs, including which functions are going to cause problems.
Using conventional design flows, the software folks have no way to tell if the hardware guys and gals can actually build these functions in such a way that they will run fast enough. Now, with Cascade, even if the software folks don't know anything about the hardware, they can use the tool to see whether suitable hardware can actually be constructed. They can also play "what-if" games by moving different functions back and forth between the software and hardware domains.
Meanwhile, the hardware folks now have a really easy way to create hardware implementations of software functions without requiring any deep understanding of what these functions actually do. Better still, the hardware folks can use Cascade to evaluate different design tradeoffs, such as playing with cache sizes and experimenting with different candidate architectures.
I don't know about you, but I'm impressed. This is one of those ideas that, when you see it, makes you think "Well, that's obvious, isn't it?" while also kicking yourself for not thinking of it first. The software content of embedded systems is increasing at a ferocious rate, so it seems to me that this technology is in absolutely the right place at the right time, and I am delighted to present it with an official "Cool Beans" award. Until next time, have a good one!
Clive (Max) Maxfield is president of Techbites Interactive, a marketing consultancy firm specializing in high-tech. Author of Bebop to the Boolean Boogie (An Unconventional Guide to Electronics) and co-author of EDA: Where Electronics Begins, Max was once referred to as a "semiconductor design expert" by someone famous who wasn't prompted, coerced, or remunerated in any way.