Category Archives: Testing

FPGAs and floating point

Altera’s newest DSP block is quite clearly designed to deal with double-precision floating-point numbers from the start. They are betting (quite rightly IMHO) that the future is going to move away from pure fixed-point implementations.

Now, floating-point is not a magic wand that will cure all your arithmetic problems. But it will make the implementation of a large variety of signal processing algorithms much more straightforward. Which means products can get to market faster. Which means more iterations. And faster iterations are good – you get to learn more about your product.

Xilinx don’t appear to have an answer for this, which typifies the difference between the two companies. Altera have always been much more of the “push-button flow” kind of people. Xilinx expect you to get in a bit deeper and hack a bit more at the bit level (only figuratively, but still lower level than Altera). Yes, they’ve stuck pretty GUIs on top of their tools, but underneath it all you can tell it’s a real hacker’s environment. And that suits the fixed-point types well – they can demonstrate a bit-accurate line from their simulations right through to the device. And they’ll spend days and weeks tweaking the coefficients of their filters for optimum size and speed.

My feeling is that over the next couple of years, those that ignore the potential of floating-point may get left behind.

It’s happened in a large part of the embedded world – when I started “real work”, fresh from university, my colleagues couldn’t envisage a day when floating point would be sensible for volume-production items. The silicon area was too enormous. But along came IEEE 754 (which, for all its flaws, made it worthwhile for companies to develop silicon IP to a standard), and Moore’s “law” pushes relentlessly on. A high-end embedded processor that can be had for under $5 has many kB of RAM and 1 or 2 MB of flash memory, which is about 80% of the die – the processor core and a bag of peripherals make up the rest. The FPU hardly figures in the die size!

Altera have it right – they’re going to open up a bunch of currently non-existent markets to the option of using FPGAs. Us “fixies” are going to have to keep up…

Executable comments

Comments in code are very useful. But not as good as executable comments…

I write image processing code at work. One of my FPGAs has a piece of code which generates a signal whose low time is a hard-coded number of clock cycles. This is fine in the application – it never needs to change, it just has to match what the camera does, and the software programs the camera up the same way every time.

So, in the (detailed) comments for this module, I made a note that this was the case. However, I recently needed to change the value that the software sends to the camera, to give a bit more time for the processing. So I changed my VHDL tests so that they represented the new value the camera would be using, and ran my regression suite. No problem, all works fine.

We pushed the code into the FPGA and tried it on the bench. All works fine, except that this particular signal doesn’t match the camera any more. And my testbenches don’t model the chip at the other end of the link in that level of detail. What I should have done, as well as writing the comment, was add some code to check that it was being obeyed.

If I assert that something must be true in the comments (i.e. this signal should match that other signal’s timing), then I should add some code to tell me off if I make it untrue! The word assert is key – use a VHDL assertion to make sure that the two signals match:

process (clk, camera_sig, mysig) is
  variable camera_low, mysig_low : natural := 0;
begin  -- process
  -- Check, at the start of each low pulse, that the counts from the
  -- previous low pulses matched
  if falling_edge(mysig) then
    assert camera_low = mysig_low
      report "Camera sig timing doesn't match mysig timing"
      severity error;
  end if;
  -- Restart each count as its signal goes low (after the assert
  -- above has seen the previous counts)
  if falling_edge(camera_sig) then
    camera_low := 0;
  end if;
  if falling_edge(mysig) then
    mysig_low := 0;
  end if;
  -- Count clock ticks while each signal is low
  if rising_edge(clk) then
    if camera_sig = '0' then
      camera_low := camera_low + 1;
    end if;
    if mysig = '0' then
      mysig_low := mysig_low + 1;
    end if;
  end if;
end process;

The key assert is at the top of that process. The rest simply counts clock ticks while the relevant signal is low. You could also do it without the clock, capturing the time at the start and end of each pulse and comparing the widths…
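That clock-free variant might look something like this – a simulation-only sketch (the signal names match the process above; it assumes camera_sig’s low pulse ends no later than mysig’s):

```vhdl
-- Capture the time of each falling edge and compare measured
-- low-pulse widths instead of counting clock ticks.
process (camera_sig, mysig) is
  variable camera_fell  : time := 0 ns;  -- start of camera low pulse
  variable mysig_fell   : time := 0 ns;  -- start of mysig low pulse
  variable camera_width : time := 0 ns;  -- last measured camera low time
begin
  if falling_edge(camera_sig) then
    camera_fell := now;
  end if;
  if rising_edge(camera_sig) then
    camera_width := now - camera_fell;
  end if;
  if falling_edge(mysig) then
    mysig_fell := now;
  end if;
  if rising_edge(mysig) then
    assert (now - mysig_fell) = camera_width
      report "mysig low time doesn't match camera_sig low time"
      severity error;
  end if;
end process;
```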

And that’s an executable comment!

How much hierarchy do I need?

When building a large FPGA design, there comes a point when you have to decide where to put the dividing lines between modules. Deciding where to draw the boundaries is a bit of an art.

You don’t want too much hierarchy, as then each module does only a tiny task and it’s hard to get a big-picture view of the functionality. But if you put everything into one big blob, you get lost amongst it all, there’s no opportunity to reuse code, and it’s a nightmare to test and debug.

Another consideration is the number of signals you have interconnecting modules. If those signals also dive deep into the hierarchy, then each time you add a signal to one of the module interfaces, you have to replicate it and wire it up in each container module on the way down.

Pushing signals around

In terms of routing signals around, records can help with that: if you want to push an extra signal down the hierarchy, you just add it to the appropriate record and recompile. However, make sure you package things together sensibly – by function is often good. Not just “all the signals to this entity go in one record” as then you can end up with all the signals in the design in one big record!
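As a sketch of the idea (the package, type and signal names here are invented for illustration): group the signals for one function – say, the camera interface – into a record in a package, and pass that single record down through the hierarchy.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

package camera_if_pkg is
  -- All the camera-facing signals, grouped by function.  To push a
  -- new signal down the hierarchy, add a field here and recompile.
  type camera_if_t is record
    pixel       : std_logic_vector(9 downto 0);
    line_valid  : std_logic;
    frame_valid : std_logic;
  end record camera_if_t;
end package camera_if_pkg;
```

Each container entity on the way down then needs only one port for the lot:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use work.camera_if_pkg.all;

entity preprocess is
  port (
    clk    : in std_logic;
    camera : in camera_if_t  -- the whole camera interface in one port
  );
end entity preprocess;
```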

A more modern approach is presented by Sigasi, who have developed the Eclipse IDE into a VHDL development environment. This (as I understand it) allows you to drill signals through the hierarchy semi-automatically. I say “as I understand it” as, when I tried the tool in its beta phase, it couldn’t cope with some of the apparently unconventional (but legal) HDL I write!

Test boundaries

Testing is a huge part of the effort of any design, and it presents quite a useful way to decide where to split things up. “Things that are easy to test” is a good boundary in my experience.

For example – say you want an image processing system which does an “edge detecting” function, then a “peak detecting” function on the results of that, and then an “assign (x,y) coordinates to each peak we detected” function.

You could stick all of that into one big block of code, but when you get the wrong x coordinates out of the far end, you have no idea which sub-function went wrong.

Instead you design three blocks, one for each of the functions, and test them individually. You can get each part right in isolation.
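A hypothetical top level for that pipeline might then be nothing but wiring between the three separately tested blocks (entity, port and signal names here are invented for illustration):

```vhdl
-- Three separately tested blocks chained together: when the
-- integrated system misbehaves, only this wiring is new.
edges : entity work.edge_detect
  port map (clk => clk, pixel_in => pixel_in, edges_out => edge_pixel);

peaks : entity work.peak_detect
  port map (clk => clk, edges_in => edge_pixel, peak_out => peak_flag);

coords : entity work.peak_coords
  port map (clk => clk, peak_in => peak_flag, x_out => peak_x, y_out => peak_y);
```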

Then when you wire them all together and test it, and it doesn’t work first time (!), you only have to look at the higher-level integration to see what’s wrong.

It may sound like more work this way (writing testbenches doesn’t feel like productive work, especially when you’re starting out). But once you get proficient (which means practicing, just like playing your scales on the piano :), you can do the testing very quickly because each individual test is fairly simple. And you can push your design to all the corner cases, which is much more difficult when testing the whole thing all together.
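A directed test for one block on its own can be very small indeed. As a sketch (the peak_detect entity and its ports are invented for illustration; the conditional clock assignment is VHDL-2008): drive a ramp up and then down, so exactly one peak should be flagged, and count the flags.

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity peak_detect_tb is
end entity peak_detect_tb;

architecture test of peak_detect_tb is
  signal clk    : std_logic := '0';
  signal sample : natural   := 0;
  signal peak   : std_logic;
  signal done   : boolean   := false;
begin
  clk <= not clk after 5 ns when not done else '0';

  dut : entity work.peak_detect
    port map (clk => clk, sample_in => sample, peak_out => peak);

  stimulus : process is
    variable peak_count : natural := 0;
  begin
    for i in 0 to 10 loop            -- ramp up...
      sample <= i;
      wait until rising_edge(clk);
      if peak = '1' then peak_count := peak_count + 1; end if;
    end loop;
    for i in 10 downto 0 loop        -- ...and back down again
      sample <= i;
      wait until rising_edge(clk);
      if peak = '1' then peak_count := peak_count + 1; end if;
    end loop;
    assert peak_count = 1
      report "expected exactly one peak from a single ramp"
      severity error;
    done <= true;
    wait;
  end process stimulus;
end architecture test;
```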