Tuesday, October 25, 2011

Study Group: Physical security and FPGAs - Part I

In this study group Marcin and Philip started the discussion on Physical security of FPGAs which will be followed by a second part in two weeks. They based the study group mostly on two papers by Saar Drimer, "Volatile FPGA design security - a survey" [pdf], "Security for volatile FPGAs (Technical Report)" [pdf] and on the paper "Protecting multiple cores in a single FPGA design" [pdf] by Drimer, Güneysu, Kuhn and Paar.

As we have both theoretical and practical cryptographers in our group, Marcin and Phil started by describing the basic technology of a Field-Programmable Gate Array: Unlike ASICs (Application-Specific Integrated Circuits), the function of an FPGA is not determined when it is manufactured. Instead, FPGAs have to be configured; based on their configuration mechanism, they are either
  • volatile, i.e. the configuration is lost at power-down and can be changed, or
  • non-volatile, where the configuration is "burnt" into the device, e.g. using anti-fuses.
The key manufacturers of FPGAs are Xilinx, Altera and Actel, which was bought by Microsemi.

The study group focused on volatile FPGAs, which essentially consist of an array of SRAM lookup tables (LUTs), each mapping a fixed number of input bits (e.g. 4) to a smaller number of output bits (e.g. 1), plus configurable interconnect. The LUTs basically emulate the truth tables of the logic gates used in ASICs, while the configurable interconnect ensures that essentially any LUTs can be "wired together" as required. Being SRAM, the LUTs have to be reconfigured at boot time, which is what makes the FPGA volatile. A comparison of FPGAs to CPUs and ASICs results (roughly) in the following table:

                       CPU           FPGA     ASIC
  Speed                low           medium   high
  Power consumption    high          medium   low
  Implementation cost  low           low      high (tapeout, masks)
  Cost per unit        medium        high     low
  Updateability        good          doable   none
  Implementation       C, Java, ...  HDL      HDL
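The LUT mechanism described above can be sketched in a few lines: the "configuration" of a k-input LUT is just the 2^k-entry truth table loaded into its SRAM cells, and reconfiguring the same hardware means loading a different table. The class and table names below are illustrative, not taken from any vendor tool.

```python
# Minimal sketch of an SRAM lookup table (LUT).
class Lut:
    def __init__(self, truth_table):
        # truth_table[i] is the output bit for input pattern i
        self.table = truth_table

    def eval(self, *bits):
        # Pack the input bits into an index, as the FPGA's
        # address lines into the SRAM would.
        index = 0
        for bit in bits:
            index = (index << 1) | bit
        return self.table[index]

# Configure a 4-input LUT as a 4-way AND gate: only pattern 1111 -> 1.
and4 = Lut([0] * 15 + [1])
assert and4.eval(1, 1, 1, 1) == 1
assert and4.eval(1, 0, 1, 1) == 0

# "Reconfiguring" the same hardware as a 4-way XOR just means loading
# a different truth table -- no silicon changes, hence the volatility.
xor4 = Lut([bin(i).count("1") & 1 for i in range(16)])
assert xor4.eval(1, 1, 0, 0) == 0
assert xor4.eval(1, 0, 0, 0) == 1
```

The configurable interconnect then routes one LUT's output to other LUTs' inputs, which is how arbitrary logic networks are built from this single primitive.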

Of course FPGAs come with a couple of security-related issues, including
  • leakage (side channel attacks)
  • semi-invasive attacks (fault injection)
  • invasive attacks
  • trustworthy (remote) configurations (hw trojans, reliability)
  • protection of Intellectual Property in configuration bitstreams (IP cores).
An FPGA is configured using a "bitstream", which consists of the data to be written to the LUTs and of the configuration bits for the interconnect. Since updateability is one of the key features of an FPGA, one would of course want to do it remotely and allow only trustworthy, i.e. authenticated, bitstreams to be loaded into an FPGA. At the same time, the bitstreams contain valuable IP, and whoever can read a bitstream can either clone the device (by loading the bitstream into another FPGA) or reverse engineer the IP from it. Furthermore, it is common that a System Designer doesn't implement all the IP in a bitstream himself - instead he licenses IP cores from several IP core vendors. (Commonly, an embedded system on an FPGA consists of a microprocessor core, some interface cores such as UART, USB or Ethernet, some accelerator cores e.g. for graphics, crypto or signal processing, and a bus core to connect all of the other cores. These may all be supplied by different IP core vendors.) So the IP core vendors have to ensure that their cores are only used in devices for which someone bought a license from them.

The first protocol presented by Marcin and Phil is a simple remote update protocol for which the FPGA needs to have
  • a fixed update logic UL alongside the configurable logic, which can compute a keyed MAC,
  • a unique and public fingerprint F,
  • a version number V of its current bitstream (to guarantee freshness)
  • and of course a key which will be used as input for the keyed MAC.
Then, the FPGA has to identify itself and its current bitstream version to the update server so that both sides know F and V. Additionally, both sides need to generate nonces in order to avoid replay attacks; for the FPGA it is sufficient to use a counter, while the update server should create a truly random nonce. The bitstream is then sent to the FPGA together with its MAC value, which serves both as authentication and as an error protection mechanism. It is important for the remote update protocol to provide a fallback for the case that something goes wrong during the update - otherwise the FPGA would be unusable. One option is to have a default bitstream stored in some non-volatile ROM without using too much (expensive) memory. Note also that the remote update protocol does not need to decrypt the bitstream - commonly the encrypted bitstream is stored in the non-volatile memory as is, and is only decrypted when it is loaded into the configurable logic at boot time, using a key and the decryption routine embedded in the boot logic.
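The handshake above can be sketched as follows, using HMAC-SHA256 as the keyed MAC. The message layout and field names (F, V, the nonces) are illustrative choices for the sketch, not the paper's exact encoding.

```python
# Hedged sketch of the remote-update handshake: the server's MAC binds
# the new bitstream to the device fingerprint, version and both nonces.
import hmac, hashlib, os

KEY = b"shared-update-key"   # pre-installed in the FPGA's update logic

def mac(*parts):
    return hmac.new(KEY, b"|".join(parts), hashlib.sha256).digest()

# 1. FPGA identifies itself: fingerprint F, version V, counter nonce.
F, V, fpga_nonce = b"F-1234", b"7", b"42"

# 2. Update server replies with a truly random nonce.
server_nonce = os.urandom(16)

# 3. Server sends the (still encrypted) bitstream plus a MAC over it,
#    F, V and both nonces, providing freshness and replay protection.
bitstream = b"...new encrypted bitstream..."
tag = mac(F, V, fpga_nonce, server_nonce, bitstream)

# 4. The update logic recomputes the MAC before committing the update;
#    a mismatch means tampering or a transmission error, and the FPGA
#    can fall back to the default bitstream in ROM.
ok = hmac.compare_digest(tag, mac(F, V, fpga_nonce, server_nonce, bitstream))
assert ok
```

Note that, as in the text, the update logic never decrypts the bitstream here; it only authenticates and stores it.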

The second protocol presented by Marcin and Phil addresses the bitstream decryption and IP core protection problem. Basically, if the bitstream is completely owned by the System Designer (SD), the SD could just encrypt it using a symmetric key he has loaded into the FPGA. But IP core vendors want to protect their IP even though it will always be used by SDs as part of a larger system. So a more complex system is needed, and Diffie-Hellman style key exchange can provide it, as the presented 4-stage protocol with three parties (FV - FPGA Vendor, CV - Core Vendor, SD) shows. The needed prerequisites of the FPGA are
  • a unique key KFPGA,
  • unique identifier FID (both stored in non-volatile memory inside the FPGA),
  • some hardwired additional registers to store a bit stream encryption key for each CV and
  • a hardwired decryption module which has access to all of the above keys.
Obviously, none of the keys should be accessible to the outside world. The 4 stages are:
  1. Setup: FV generates a secret x, computes the public value g^x, embeds the secret into the personalisation bitstream PB, encrypts PB using KFPGA and sends [FID, g^x, ENC_KFPGA(PB)] to the SD.
  2. Licensing: For each CV and each FPGA device, the SD sends (FID, g^x) to the CV, who chooses a secret y, computes the public value g^y and uses a key derivation function KDF (specified by the FV) to compute a key KCV=KDF(y,g^x,FID), which he uses to encrypt his IP core; he then sends [ENC_KCV(core), g^y] to the SD.
  3. Personalisation: Having assembled his own bitstream containing all the encrypted cores of the CVs, the SD first loads the encrypted PB onto the FPGA, which the FPGA is able to decrypt using KFPGA. The PB contains an implementation of the KDF and the vendor secret x. Furthermore, the SD provides all the g^y to the FPGA, which in turn uses the KDF to compute all KCV=KDF(x,g^y,FID) and stores them in the registers of the decryption logic. (The computation works thanks to Diffie-Hellman.) Now the personalisation logic (within the configurable part of the FPGA) isn't needed anymore, as the (static) decryption logic is fully prepared to decrypt the bitstreams.
  4. Configuration: The SD sends all encrypted bitstreams to the FPGA, which decrypts them using the respective keys KCV that were stored in the decryption registers during personalisation.
Note that the SD learns neither the secret x nor any of the secrets y and therefore cannot compute any KCV to decrypt any of the IP cores he licensed. Additionally, the keys KCV are tied to the individual FPGA chip via FID, so cloning a device is not possible.
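The Diffie-Hellman property that makes stage 3 work is that the CV computes KCV from (y, g^x) while the FPGA computes it from (x, g^y), and both reduce to a KDF applied to g^(xy) and FID. The toy group parameters and hash-based KDF below are illustrative only and far too small for real use.

```python
# Sketch of the key agreement underlying the IP-core protocol:
# both sides derive the same KCV without the SD ever seeing x or y.
import hashlib, secrets

P = 0xFFFFFFFFFFFFFFC5   # toy 64-bit modulus (illustrative only)
G = 5

def kdf(shared, fid):
    # Stand-in KDF: hash the shared DH value together with the chip ID,
    # which ties the derived key to this specific FPGA.
    return hashlib.sha256(shared.to_bytes(8, "big") + fid).digest()

FID = b"FPGA-0001"

x = secrets.randbelow(P - 2) + 1   # FPGA vendor's secret (embedded in PB)
y = secrets.randbelow(P - 2) + 1   # core vendor's secret
gx = pow(G, x, P)                  # public, shipped with the FPGA
gy = pow(G, y, P)                  # public, shipped with the encrypted core

# Core vendor side: KCV from its secret y and the public g^x.
kcv_vendor = kdf(pow(gx, y, P), FID)

# FPGA side: the same key from the embedded secret x and the public g^y.
kcv_fpga = kdf(pow(gy, x, P), FID)

assert kcv_vendor == kcv_fpga      # both equal KDF(g^(xy), FID)
```

Because FID enters the KDF, the same encrypted core sent to a different chip (with a different FID) yields a different key, which is exactly the anti-cloning property noted above.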
