Author Archives: Martin

What is an embedded system?

I just wrote a post for an Infosec site, discussing some definitions of embedded systems. I survey some existing definitions, describe why I don’t feel they represent the current state of embedded systems, and offer another:

“It’s an embedded system if the end-user doesn’t control the code that it runs”.

Once an end user takes control of the code, it stops being an appliance and becomes a general-purpose computer.

Read the whole post here:

Why use an FPGA?

“Please help me do this on an FPGA”

The question you shouldn’t ask!

A common refrain on many of the internet’s finest help forums and newsgroups is “I’m trying to do x using an FPGA, help!” And very often “x” is a task which would be better (by many different measures!) performed in another way. But there is a common assumption that if a task is “intensive” then FPGAs are the answer. One recent example was someone asking how to implement face-detection on an FPGA. It quickly became apparent that the poster didn’t actually know how to perform face-detection at all, so adding FPGAs to the equation was not a great help!

For a quick answer to the question “why use an FPGA?”, I’ll reproduce this list that I used in a lecture to a class of undergraduates:

Use an FPGA if one or more of these apply:

  • you have hard real-time deadlines (measured in μs or ns)
  • you need more than one DSP (lots of arithmetic and parallelisable)
  • a suitable processor (or DSP) costs too much (money, weight, size, power)

And for students, there’s one more:

  • Because your assignment tells you to :) Although ideally it’ll be something that is at least representative of a reasonable FPGA task (not a traffic light controller or vending machine!)

What to do instead?

Software’s easy

Writing software, I’d hazard a guess that even amongst embedded software engineers (those that work at the really low level, not writing code for PCs), many don’t really know what their target processor looks like under the hood. They just click compile, wait maybe 10 seconds, and test. And that’s great – it makes for a very productive development environment. When you are sufficiently abstracted from the architecture, there’s enough performance from the tools and chips that you don’t (often) have to think hard about how to implement things; you can just get on with the interesting bit: creating your application.

FPGAs hurt

In comparison, FPGAs are painful to use. Don’t get me wrong – the software tools and the silicon architectures have improved massively over the last few years – but compared to writing software, it’s a completely different realm. You have to be much more aware of the architecture of your device, know much more about how the tools operate, wait ages for them to run, and think fundamentally differently about algorithms and implementations. FPGA code takes tens of minutes to compile, it’s much easier to push up against the performance limits, and then you have to mess around with your code to make it more recognisable to the tools you are using.

Choosing an implementation

My advice is always “Avoid using an FPGA unless you have to”. And I say this as a great advocate of FPGAs!

If you can do it in Octave or Matlab on a PC, do so. In fact, even if you end up somewhere else, start from there so you can understand the problem properly.
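As a concrete example of that kind of prototyping (a sketch in Python rather than Octave, and the filter here is just an illustrative stand-in for whatever your actual problem is), a few lines are enough to explore a simple FIR filter long before any hardware decisions get made:

```python
# Prototype a direct-form FIR filter in plain Python - understand
# the algorithm on a PC before worrying about DSPs or FPGAs.

def fir_filter(samples, taps):
    """Each output is the dot product of the most recent
    len(taps) input samples with the coefficient list."""
    out = []
    history = [0.0] * len(taps)
    for s in samples:
        history = [s] + history[:-1]   # shift in the new sample
        out.append(sum(h * t for h, t in zip(history, taps)))
    return out

# A 4-tap moving average smooths a step input over four samples
taps = [0.25, 0.25, 0.25, 0.25]
print(fir_filter([0, 0, 4, 4, 4, 4], taps))
# -> [0.0, 0.0, 1.0, 2.0, 3.0, 4.0]
```

Once the behaviour is right at this level, you know exactly what you are asking any eventual processor, DSP or FPGA to do.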

If you don’t have enough processing power, make use of a GPU.

If that solution costs too much (in money, power, size, weight terms) then you’ll have to get cleverer. Start thinking about microcontrollers. They’re well-tooled up and very powerful. You can have an 80MHz 32-bit ARM for a few pounds (or Euros. Or Dollars) these days, you can do an awful lot with that.

If you’re still struggling for processing power, think about a DSP. But be careful – analyse what you are trying to do very carefully. Figure out which bits will suit a DSP (lots of multiplying and adding in parallel with memory moving) – suddenly you have to know your architecture, just to decide if it’s feasible. Be careful about memory bandwidth, caches are not magic and if your code requires data reads or writes that are randomly scattered about, expect to lose some performance.
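To illustrate the access-pattern point, here’s a hedged Python sketch – the interpreter won’t show you the cache cost, but the analysis is the one you’d do on paper: the same arithmetic can demand very different memory bandwidth depending on the order in which you touch the data.

```python
# Same arithmetic, two access patterns. On a cached processor the
# sequential walk streams through memory and keeps the prefetcher
# happy; the scattered walk defeats the cache. Python hides the
# cost, but this is the pattern to analyse before choosing a DSP.
import random

data = list(range(100_000))

sequential = list(range(len(data)))   # indices 0, 1, 2, ...
scattered = sequential[:]
random.shuffle(scattered)             # same indices, random order

seq_sum = sum(data[i] for i in sequential)
scat_sum = sum(data[i] for i in scattered)

assert seq_sum == scat_sum  # identical answer...
# ...but very different memory-bandwidth demands on real hardware
```

If your inner loops look like the scattered case, budget for lost performance no matter how fast the datasheet says the core is.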

The next stage on might be multiple DSPs… and once you start considering multiple DSPs, it might finally be time to think about an FPGA. The downside is that you are responsible for so much more of the architecture. Floating-point maths is becoming a sensible option, but you’ll still want to weigh the development-time savings of floating point against the device-size, cost and power savings that come from a fixed-point implementation. You can take advantage of your knowledge of data access patterns to tune the memory controller – in fact you’ll probably have to – yet more grovelling around in the details. Add to this the fact that good FPGA people are harder to hire than DSP people (and they are harder to find than microcontroller people), and help on the internet can be harder to come by. Your development time will lengthen as you build simulation models of the hardware you are talking to and have to debug them. And the hour-long build times will try your patience.
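To make the floating- vs fixed-point trade-off concrete, here’s a minimal sketch in plain Python, assuming a Q1.15 format (1 sign bit, 15 fractional bits – a common fixed-point choice, not anything vendor-specific), of what quantisation costs you in accuracy:

```python
# Quantise a value in [-1, 1) to Q1.15 fixed point - the sort of
# format a fixed-point FPGA datapath might use instead of floats.

SCALE = 1 << 15  # 32768 steps across the fractional range

def to_q15(x):
    """Round to the nearest representable Q1.15 value, clamping
    to the valid integer range [-32768, 32767]."""
    return max(-SCALE, min(SCALE - 1, round(x * SCALE)))

def from_q15(q):
    return q / SCALE

x = 0.123456
q = to_q15(x)
err = abs(from_q15(q) - x)
# Worst-case rounding error is half an LSB: 1/65536
assert err <= 1 / (2 * SCALE)
```

Half an LSB is often plenty for signal-processing work, and a 16-bit multiplier is far cheaper in silicon than a floating-point unit – which is exactly why the trade-off is worth the analysis time.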

But if you have good reason to, go for it!

FPGAs are really well suited to

  • many image and radar processing tasks especially when cost, power and space constrained. (Disclosure: I wrote the first article)
  • financial analysis (when time constrained)
  • seismic analysis (lots of money at stake, the faster you process, the more processing you can do, and the less risk to your drilling)
  • hard-real-time, low-latency deadlines – single-digit microsecond response times to stimulus. See the second page of this flyer – I’ve worked on this project too.

Flash memory through the ages

I was reading bunnie’s recent post on the manufacturing techniques used in USB flash-drives… bare die manipulated by hand with a stick!

Today I found an old (128MB!) SD card from my Palm Tungsten-T. Circa 2005 if I remember rightly. Very different technology, actual chips soldered down on the board. And it’s clear that the SD card form factor was very much defined by the physical size of the NAND flash chips available at the time!

The innards of an old SD card


ARMs and FPGAs – an update

A while ago I compared Altera and Xilinx’s ARM-based FPGA combos. More information is now available publicly, so let’s see what we know now…

One thing that’s hard to miss is that Altera are making a big thing of their features to support applications with more taxing reliability and safety requirements.

Altera’s external DRAM interface supports error-checking and correction (ECC) on 32-bit wide memory, whereas Zynq can only do this on 16-bit wide memory, allowing Altera to keep a higher-performance system with ECC. The Altera SoCs also claim ECC on the large blocks of RAM within the processor subsystem (ie the L2 cache and peripheral memory buffers). It appears that Zynq only has parity (ie error checking, but not correction) on the cache and on-chip memory. In Xilinx’s favour, they have performed lots of failure testing (they always have – to a heroic degree!) and the entire processor subsystem has a silent data corruption rate of about 15 FIT. Not seen any FIT data for Altera yet.
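As a reminder of why the parity/ECC distinction matters, here’s a minimal sketch – a toy Hamming(7,4) code in Python; the real memory controllers use wider SECDED codes, but the principle is the same. Parity can only report that some bit flipped, while a Hamming code identifies which bit, so the error can be corrected in place:

```python
# Parity vs ECC on a 4-bit nibble: parity says *something* flipped;
# a Hamming(7,4) code says *which* bit, so it can be corrected.

def hamming74_encode(d):
    """d: 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4        # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4        # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4        # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Locate and fix a single flipped bit; return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1                            # corrupt one bit in flight
assert hamming74_correct(code) == word  # ECC recovers the data
```

A parity-only scheme, like the one on Zynq’s caches, would have flagged that corruption but could only discard or re-fetch the data, not repair it.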

Both vendors have memory protection within the microprocessor section to stop errant software processes stomping on each other’s data, but Altera appear to have additional protection within the DDR controller too, which presumably protects against accesses from the FPGA fabric going where they shouldn’t. Again, Zynq does not (as far as I can see) provide this feature.

Looking “mechanically”, Altera have devices which are pinout-compatible with and without their many-gigabit-transceiver blocks. For one of my applications, that would provide a useful development interface which could be dropped for production without a board respin.

Finally, Altera also have a single-core option. Of course, that only matters if it makes the silicon cheap enough to win in applications which can get away with a single core. Xilinx have clearly decided not… we’ll have to see!

Now running WordPress

This site is now running WordPress, rather than Drupal. Ultimately, I got fed up with the very tedious processes involved in managing a Drupal installation. Added to that, I somehow broke the Image plugin such that I couldn’t upload any more images, and despite much Googling, couldn’t fix it. And this was the second time that had happened – the first time I fixed it by restoring from a backup… but that’s not really a proper solution!

So, here we are in WordPress land – updates involve me clicking a button and waiting… much simpler. Bear with me while I find all the little bits of formatting which are no doubt broken, especially as the markdown had to be hacked on by hand, and the code formatting is not (so far) as well configured as my Drupal setup.

Server upgraded

The server is now running Squeeze (or Debian 6.0 as it’s more formally known).

I’m not a full-time admin, so I greatly appreciated the Debian upgrade guide – it reminds you of all the stuff you have known in the past, but have “swapped-out”, and what tasks to do in what order. In particular, the kernel changes from Debian 5 to 6 were significant enough that a potentially unbootable system may have resulted.

MySQL broke during the upgrade – the dist-upgrade process appeared to install mysql-client rather than mysql-server, which meant there was no server after the reboot – so if you noticed an error page for a short while, apologies. (Not that I expect anyone noticed, as the background traffic is pretty low :)

And I allowed the upgrade to change more of my apache2 config than I should have – but that was a quick fix.

Stopping Laserjet 5 jams

Thanks to a kit of new rollers from Daytona plc and detailed service manual from HP for my Laserjet 5M, it’s now printing without jams again!

Should last another 100k pages now, I hope.

(Note to self for next time – replace the upper feed roller before the lower one as the sprockets will be easier to engage with the drive belt that way around.)

Aerial (or antenna!) wiring

We’ve been having a bunch of building work done, and today we finally moved the TV back downstairs to the new room! Built the new TV bench, hauled all the kit downstairs, plugged in… “No signal” said the TV. On fighting through the ivy to where the downlead comes down the house, I found that the electrician hadn’t connected it to the new aerial wiring. Apparently even these new-fangled digital signals don’t travel well from one piece of coax to another through several feet of air. My fix involved some bicycle toe-clip mounting brackets, an old mints tin (thanks Ben!) and (the only piece of actual electrical material) a piece of choc-block. You can see the results in the picture – quality bodge or what :)

Splicing coax in a mint tin


FPGAs and ARMs – a summary

Today, I compared the new combined ARM and FPGA devices from Xilinx and Altera.

This post summarises that rather long post!


Well, there are two interesting new series of devices. Both chip families look awesome (that’s not a word I habitually use, unlike in some parts of the internet… consider it high praise :). I foresee all sorts of unforeseen applications (if you’ll forgive the Sir Humphrey-ism) enabled by the tight coupling of processor and logic.

Can you choose between them? Well, Xilinx’s Zynq has more memory tightly coupled with the processors, maybe a little less on the FPGA side. Zynq also has the XADC, which shouldn’t be overlooked – a single-chip radar processor is feasible with the combination of XADC and a large scratchpad. Altera have a more flexible FPGA-to-processor-memory interface, but Xilinx’s looks eminently good enough. Xilinx have far fewer details published as yet, so there’s no doubt more good stuff to learn from them, and Altera clearly still have things up their sleeves. I’ll update here as more information becomes available.