7. Advanced Topics

7.1. Security

The RAUC bundle format consists of the images and a manifest, contained in a SquashFS image. The SquashFS is followed by a public key signature over the full image. The signature is stored (together with the signer’s certificate) in the CMS (Cryptographic Message Syntax, see RFC 5652) format. Before installation, the signer certificate is verified against the keyring(s) already stored on the system and the signer’s public key is then used to verify the bundle signature.

[Figure: _images/rauc-bundle.svg]

We selected the CMS to avoid designing and implementing our own custom security mechanism (which often results in vulnerabilities). CMS is well proven in S/MIME and has widely available implementations, while supporting simple as well as complex PKI use-cases (certificate expiry, intermediate CAs, revocation, algorithm selection, hardware security modules…) without additional complexity in RAUC itself.

RAUC uses OpenSSL as a library for signing and verification of bundles. A PKI with intermediate CAs for the unit tests is generated by the test/openssl-ca.sh shell script available from GitHub, which may also be useful as an example for creating your own PKI.

In the following sections, general CA configuration, some use-cases and corresponding PKI setups are described.

7.1.1. CA Configuration

OpenSSL uses an openssl.cnf file to define paths to use for signing, default parameters for certificates and additional parameters to be stored during signing. Configuring a CA correctly (and securely) is a complex topic and obviously exceeds the scope of this documentation. As a starting point, the OpenSSL manual pages (especially ca, req, x509, cms, verify and config) and Stefan H. Holek’s pki-tutorial are useful.

7.1.1.1. Certificate Key Usage Attributes

By default (for backwards compatibility reasons), RAUC does not check the certificate’s key usage attributes. When not using a stand-alone PKI for RAUC, it can be useful to enable checking via the check-purpose configuration option to allow only specific certificates for bundle installation.

When using OpenSSL to create your certificates, the key usage attributes can be configured in the X.509 V3 extension sections in your OpenSSL configuration file. The extension configuration section to be used by openssl ca is selected via the -extensions argument. For example, RAUC uses a certificate created with the following extensions to test the handling of the codeSigning extended key usage attribute:

[ v3_leaf_codesign ]
subjectKeyIdentifier=hash
authorityKeyIdentifier=keyid:always,issuer:always
basicConstraints = CA:FALSE
extendedKeyUsage=critical,codeSigning
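The extension section can then be selected when issuing the leaf certificate, for example (a sketch only; the configuration file, CSR and output file names are placeholders):

openssl ca -config openssl.cnf -extensions v3_leaf_codesign \
  -in autobuilder-1.csr -out autobuilder-1.cert.pem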

As OpenSSL does not (yet) provide a purpose check for code signing, RAUC contains its own implementation, which can be enabled with the check-purpose=codesign configuration option. For the leaf (signer) certificate, the extendedKeyUsage attribute must exist and contain (at least) the codeSigning value. Also, if it has the keyUsage attribute, it must contain at least digitalSignature. For all other (issuer) certificates in the chain, the extendedKeyUsage attribute is optional, but if it is present, it must contain at least the codeSigning value.

This means that only signatures using certificates explicitly issued for code signing are accepted for the codesign purpose. Also, you can optionally use extendedKeyUsage attributes on intermediate CA certificates to limit which ones are allowed to issue code signing certificates.
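On the target, enabling the corresponding check could then look like this in the system.conf (a sketch; placing check-purpose in the [keyring] section is an assumption here, so consult the system configuration reference for the exact option location):

[keyring]
path=/etc/rauc/keyring.pem
check-purpose=codesign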

7.1.2. Single Key

You can use openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -days 365 -nodes to create a key and a self-signed certificate. While you can use RAUC with these, you can’t:

  • replace expired certificates without updating the keyring
  • distinguish between development versions and releases
  • revoke a compromised key
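As a minimal sketch, such a single key pair could still be used for signing and verification like this (key.pem and cert.pem are the files created above; the bundle paths are placeholders):

# sign a bundle with the self-signed certificate
rauc bundle --cert=cert.pem --key=key.pem <input-dir> update.raucb
# verify the bundle using the very same certificate as the keyring
rauc info --keyring=cert.pem update.raucb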

7.1.3. Simple CA

By using the (self-signed) root CA only for signing other keys, which are used for bundle signing, you can:

  • create one key per developer, with limited validity periods
  • revoke keys and ship the CRL (Certificate Revocation List) with an update

With this setup, you can reduce the impact of a compromised developer key.
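A minimal sketch of such a setup with plain OpenSSL commands could look like this (all subject names, file names and validity periods are examples; a production CA should use a proper openssl ca configuration as described above):

# create a self-signed root CA
openssl req -x509 -newkey rsa:4096 -keyout root-ca.key.pem -out root-ca.cert.pem \
  -days 3650 -nodes -subj "/O=Example Org/CN=Example Root CA"
# create a key and certificate signing request for one developer
openssl req -newkey rsa:4096 -keyout dev-1.key.pem -out dev-1.csr \
  -nodes -subj "/O=Example Org/CN=Developer 1"
# issue the developer certificate with a limited validity period
openssl x509 -req -in dev-1.csr -CA root-ca.cert.pem -CAkey root-ca.key.pem \
  -CAcreateserial -days 365 -out dev-1.cert.pem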

7.1.4. Separate Development and Release CAs

By creating a completely separate CA and bundle signing keys, you can give only specific persons (or roles) the keys necessary to sign final releases. Each device only has one of the two CAs in its keyring, allowing installation of only the corresponding updates.

While signing bundles during development may seem unnecessary, the additional testing of the whole update system (RAUC, bootloader, migration code, …) allows finding problems much earlier.

7.1.5. Intermediate Certificates

RAUC allows you to include intermediate certificates in the bundle signature that can be used to close the trust chain during bundle signature verification.

To do this, specify the --intermediate argument during bundle creation:

rauc bundle --intermediate=/path/to/intermediate.ca.pem [...]

Note that you can specify the --intermediate argument multiple times to include multiple intermediate certificates in your bundle signature.
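For example, a signature including two intermediate certificates could be created like this (the certificate file names are placeholders):

rauc bundle \
  --cert=release-1.cert.pem \
  --key=release-1.key.pem \
  --intermediate=release-ca.cert.pem \
  --intermediate=provisioning-ca.cert.pem \
  <input-dir> <output-file>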

7.1.6. Resigning Bundles

RAUC allows you to replace the signature of a bundle. A typical use case is a bundle that was generated by an autobuilder, signed with a development certificate and tested successfully on your target, and which should now become a release bundle. For this, it needs to be resigned with the release key without modifying the content of the bundle itself.

This is what the resign command of RAUC is for:

rauc resign --cert=<certfile> --key=<keyfile> --keyring=<keyring> <input-bundle> <output-bundle>

It verifies the bundle against the given keyring, strips the old signature and attaches a new one based on the key and cert files provided. If the old signature is no longer valid, you can use the --no-verify argument to disable verification.

7.1.6.1. Switching the Keyring – SPKI hashes

When switching from a development to a release signature, it is typically required to also equip the rootfs with a different keyring file.

While the development system should accept both development and release certificates, the release system should accept only release certificates.

One option to perform this exchange without having to build a new rootfs would be to include both a keyring for the development case and a keyring for the release case.

Doing this would be possible in a slot’s post-install hook, for example. Depending on whether the bundle to install was signed with a development or a release certificate, either the development or the release keyring will be copied to the location where RAUC expects it to be.

To allow comparing hashes, RAUC generates SPKI hashes (i.e. hashes over the entire public key information of a certificate) for each certificate contained in the bundle’s trust chain. The SPKI hashes are invariant over changes in certificate metadata (such as the validity dates) while still allowing the certificate ownership to be compared securely.

A simple call of rauc info will list the SPKI hashes for each certificate contained in the validated trust chain:

Certificate Chain:
 0 Subject: /O=Test Org/CN=Test Org Release-1
   Issuer: /O=Test Org/CN=Test Org Provisioning CA Release
   SPKI sha256: 94:67:AB:31:08:04:3D:2D:62:D5:EE:58:D6:2F:86:7A:F2:77:94:29:9B:46:11:00:EC:D4:7B:1B:1D:42:8E:5A
 1 Subject: /O=Test Org/CN=Test Org Provisioning CA Release
   Issuer: /O=Test Org/CN=Test Org Provisioning CA Root
   SPKI sha256: 47:D4:9D:73:9B:11:FB:FD:AB:79:2A:07:36:B7:EF:89:3F:34:5F:D4:9B:F3:55:0F:C1:04:E7:CC:2F:32:DB:11
 2 Subject: /O=Test Org/CN=Test Org Provisioning CA Root
   Issuer: /O=Test Org/CN=Test Org Provisioning CA Root
   SPKI sha256: 00:34:F8:FE:5A:DC:3B:0D:FE:64:24:07:27:5D:14:4D:E2:39:8C:68:CC:9A:86:DD:67:03:D7:15:11:16:B4:4E

A post-install hook can instead access the SPKI hashes via the environment variable RAUC_BUNDLE_SPKI_HASHES, which will be set by RAUC when invoking the hook script. This variable will contain a space-separated list of the hashes in the same order as they are listed by rauc info. This list can be used to define a condition in the hook for installing either one or the other keyring file on the target.

Example hook shell script code for the above trust chain:

case "$1" in

      [...]

      slot-post-install)

              [...]

              # iterate over trust chain SPKI hashes (from leaf to root)
              for i in $RAUC_BUNDLE_SPKI_HASHES; do
                      # Test for development intermediate certificate
                      if [ "$i" == "46:9E:16:E2:DC:1E:09:F8:5B:7F:71:D5:DF:D0:A4:91:7F:FE:AD:24:7B:47:E4:37:BF:76:21:3A:38:49:89:5B" ]; then
                              echo "Activating development key chain"
                              mv "$RAUC_SLOT_MOUNT_POINT/etc/rauc/devel-keyring.pem" "$RAUC_SLOT_MOUNT_POINT/etc/rauc/keyring.pem"
                              break
                      fi
                      # Test for release intermediate certificate
                      if [ "$i" == "47:D4:9D:73:9B:11:FB:FD:AB:79:2A:07:36:B7:EF:89:3F:34:5F:D4:9B:F3:55:0F:C1:04:E7:CC:2F:32:DB:11" ]; then
                              echo "Activating release key chain"
                              mv "$RAUC_SLOT_MOUNT_POINT/etc/rauc/release-keyring.pem" "$RAUC_SLOT_MOUNT_POINT/etc/rauc/keyring.pem"
                              break
                      fi
              done
              ;;

      [...]
esac

7.1.7. PKCS#11 Support

RAUC can use certificates and keys which are stored on a PKCS#11-supporting smart card, USB token (such as a YubiKey) or Hardware Security Module (HSM). For all commands which need to create a signature (bundle, convert and resign), PKCS#11 URLs can be used instead of filenames for the --cert and --key arguments.

For example, a bundle can be signed with a certificate and key available as pkcs11:token=rauc;object=autobuilder-1:

rauc bundle \
  --cert='pkcs11:token=rauc;object=autobuilder-1' \
  --key='pkcs11:token=rauc;object=autobuilder-1' \
  <input-dir> <output-file>

Note

Most PKCS#11 implementations require a PIN for signing operations. You can either enter the PIN interactively as requested by RAUC or use the RAUC_PKCS11_PIN environment variable to specify the PIN to use.
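For non-interactive use, e.g. on an autobuilder, the PIN could be passed via the environment like this (the PIN value is just an example):

RAUC_PKCS11_PIN=1234 rauc bundle \
  --cert='pkcs11:token=rauc;object=autobuilder-1' \
  --key='pkcs11:token=rauc;object=autobuilder-1' \
  <input-dir> <output-file>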

When working with PKCS#11, some tools are useful to configure and show your tokens:

p11-kit

p11-kit is an abstraction layer which provides access to multiple PKCS#11 modules.

It contains p11tool, which is useful to see available tokens and objects (keys and certificates) and their URLs:

$ p11tool --list-tokens
…
Token 5:
        URL: pkcs11:model=SoftHSM%20v2;manufacturer=SoftHSM%20project;serial=9f03d1aaed92ef58;token=rauc
        Label: rauc
        Type: Generic token
        Manufacturer: SoftHSM project
        Model: SoftHSM v2
        Serial: 9f03d1aaed92ef58
        Module: /usr/lib/softhsm/libsofthsm2.so
$ p11tool --login --list-all pkcs11:token=rauc
Token 'rauc' with URL 'pkcs11:model=SoftHSM%20v2;manufacturer=SoftHSM%20project;serial=9f03d1aaed92ef58;token=rauc' requires user PIN
Enter PIN: ****
Object 0:
        URL: pkcs11:model=SoftHSM%20v2;manufacturer=SoftHSM%20project;serial=9f03d1aaed92ef58;token=rauc;id=%01;object=autobuilder-1;type=public
        Type: Public key
        Label: autobuilder-1
        Flags: CKA_WRAP/UNWRAP;
        ID: 01

Object 1:
        URL: pkcs11:model=SoftHSM%20v2;manufacturer=SoftHSM%20project;serial=9f03d1aaed92ef58;token=rauc;id=%01;object=autobuilder-1;type=private
        Type: Private key
        Label: autobuilder-1
        Flags: CKA_WRAP/UNWRAP; CKA_PRIVATE; CKA_SENSITIVE;
        ID: 01

Object 2:
        URL: pkcs11:model=SoftHSM%20v2;manufacturer=SoftHSM%20project;serial=9f03d1aaed92ef58;token=rauc;id=%01;object=autobuilder-1;type=cert
        Type: X.509 Certificate
        Label: autobuilder-1
        ID: 01
OpenSC

OpenSC is the standard open source framework for smart card access.

It provides pkcs11-tool, which is useful to prepare a token for usage with RAUC. It can list, read/write objects, generate keypairs and more.
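As a sketch, importing an existing key pair and certificate onto an already initialized token could look like this (the module path, PIN, labels and DER-encoded input files are examples):

# write the private key and the certificate to the token
pkcs11-tool --module /usr/lib/softhsm/libsofthsm2.so --login --pin 1234 \
  --write-object autobuilder-1.key.der --type privkey --label autobuilder-1 --id 01
pkcs11-tool --module /usr/lib/softhsm/libsofthsm2.so --login --pin 1234 \
  --write-object autobuilder-1.cert.der --type cert --label autobuilder-1 --id 01
# list the objects stored on the token
pkcs11-tool --module /usr/lib/softhsm/libsofthsm2.so --login --pin 1234 --list-objects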

libp11

libp11 is an engine plugin for OpenSSL, which allows using keys on PKCS#11 tokens with OpenSSL.

It will automatically use p11-kit (if available) to access all configured PKCS#11 modules.

Note

If you cannot use p11-kit, you can also use the RAUC_PKCS11_MODULE environment variable to select the PKCS#11 module.
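For example (the module path shown is just an example):

RAUC_PKCS11_MODULE=/usr/lib/softhsm/libsofthsm2.so rauc bundle \
  --cert='pkcs11:token=rauc;object=autobuilder-1' \
  --key='pkcs11:token=rauc;object=autobuilder-1' \
  <input-dir> <output-file>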

SoftHSM2

SoftHSM2 is a software implementation of an HSM with a PKCS#11 interface.

It is used in the RAUC test suite to emulate a real HSM and can also be used to try out the PKCS#11 functionality in RAUC without any hardware. The prepare_softhsm2 shell function in test/rauc.t can be used as an example of how to initialize a SoftHSM2 token.
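For experiments without real hardware, a SoftHSM2 token could be initialized roughly like this (the label and PINs are examples; see prepare_softhsm2 in test/rauc.t for the setup actually used by the test suite):

# initialize a new SoftHSM2 token in the first free slot
softhsm2-util --init-token --free --label rauc --so-pin 0000 --pin 1234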

7.1.8. Protection Against Concurrent Bundle Modification

As the plain bundle format consists of a squashfs image with an appended CMS signature, RAUC must check the signature before accessing the squashfs. If an unprivileged process can manipulate the squashfs part of the bundle after the signature has been checked, it could use this to elevate its privileges.

The verity format is not affected by this problem, as the kernel checks the squashfs data as it is read.

To mitigate this problem when using the plain format, RAUC will check the bundle file for possible issues before accessing the squashfs:

  • ownership or permissions that would allow other users to open it for writing
  • storage on unsafe filesystems such as FUSE or NFS, where the data is supplied by an untrusted source (the rootfs is explicitly trusted, though)
  • storage on a filesystem mounted from a block device with a non-root owner
  • existing open file descriptors (via F_SETLEASE)

If the check fails, RAUC will attempt to take ownership of the bundle file and remove write permissions. This protects against processes trying to open writable file descriptors from this point on. Then, the checks above are repeated before setting up the loopback device and mounting the squashfs. If this second check fails, RAUC will abort the installation.

If RAUC had to take ownership of the bundle, this change is not reverted after the installation is completed. Note that, if the original user has write access to the containing directory, they can still delete the file.

7.2. Data Storage and Migration

Most systems require a location for storing configuration data such as passwords, ssh keys or application data. When performing an update, you have to ensure that the updated system takes over or can access the data of the old system.

7.2.1. Storing Data in The Root File System

In case of a writable root file system, it often contains additional data, for example cryptographic material specific to the machine, or configuration files modified by the user. When performing the update, you have to ensure that the files you need to preserve are copied to the target slot after having written the system data to it.

RAUC provides support for executing hooks from different slot installation stages. For migrating data from your old rootfs to your updated rootfs, simply specify a slot post-install hook. Read the Hooks chapter on how to create one.
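A minimal sketch of such a migration could consist of a manifest fragment like the following and a matching hook script (the file names and the migrated paths are examples):

[hooks]
filename=hook.sh

[image.rootfs]
filename=rootfs.ext4
hooks=post-install

The hook script then copies the data from the running system into the freshly written slot:

case "$1" in
        slot-post-install)
                # preserve machine-specific SSH host keys (example paths)
                cp -a /etc/ssh/ssh_host_* "$RAUC_SLOT_MOUNT_POINT/etc/ssh/"
                ;;
esac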

7.2.2. Using Data Partitions

Often, there are a couple of reasons why you don’t want to or cannot store your data inside the root file system:

  • You want to keep your rootfs read-only to reduce probability of corrupting it.
  • You have a non-writable rootfs such as SquashFS.
  • You want to keep your data separated from the rootfs to ease setup, reset or recovery.

In this case you need a separate storage location for your data on a different partition, volume or device.

If the update concept uses fully redundant root file systems, there are also good reasons to use redundant data storage. Read below about the possible impact on data migration.

To let your system access the separate storage location, it has to be mounted into your rootfs. Note that if you intend to store configurable system information on your data partition, you have to map the default Linux paths (such as /etc/passwd) to your data storage. You can do this by using:

  • symbolic links
  • bind mounts
  • an overlay file system

It depends on the amount and type of data you want to handle which option you should choose.
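As a small sketch, a bind-mount based mapping could look like this in /etc/fstab (the device, filesystem and paths are examples):

/dev/mmcblk0p4   /data      ext4   defaults   0  2
/data/etc/ssh    /etc/ssh   none   bind       0  0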

7.2.3. Application Data Migration

[Figure: _images/data_migration.svg]

Both a single and a redundant data storage have their advantages and disadvantages. Note that when storing data inside your rootfs, you will have a redundant setup by design and cannot choose.

The decision about how to set up a configuration storage and how to handle it depends on several aspects:

  • Can configuration formats change between application versions?
  • Can a new application read (and convert) old data?
  • Does your infrastructure allow working on possibly obsolete data?
  • Is there enough storage to store data redundantly?

The basic advantages and disadvantages of a single and a redundant setup are listed below:

             Single Data                 Redundant Data
Setup        easy                        assure using correct one
Migration    no backup by default        copy on update, migrate
Fallback     tricky (reconvert data?)    easy (old data!)

7.3. RAUC casync Support

Warning

casync support is still experimental and lacks some unit tests.

When evaluating, make sure to compile a recent casync version from git for testing.

Using the Content-Addressable Data Synchronization tool casync for updating embedded / IoT devices provides a couple of benefits. By splitting and chunking the update artifacts into reusable pieces, casync allows you to

  • stream remote bundles to the target without occupying storage / NAND
  • minimize transferred data for an update by downloading only the delta to the running system
  • reduce data storage on server side by eliminating redundancy
  • ease handling by CDNs due to similar chunk sizes

For a full description of the way casync works and what you can do with it, refer to the blog post by its author Lennart Poettering or visit the GitHub site.

RAUC supports using casync index files instead of complete images in its bundles. This way, the real size of the bundle comes down to the size of the index files required for referring to the individual chunks. The real image data contained in the individual chunks can be stored in one single repository, for a whole system with multiple images as well as for multiple systems in different versions, etc. This makes the approach quite flexible.

[Figure: _images/casync-basics.svg]

7.3.1. Creating casync Bundles

Creating RAUC bundles with casync index files is a bit different from creating ‘conventional’ bundles. While the bundle format remains the same and you could also mix conventional and casync-based bundles, creating these bundles is not straightforward when using common embedded build systems such as Yocto, PTXdist or buildroot.

Because of this, we decided to use a two-step process for creating casync RAUC bundles:

  1. Create a ‘conventional’ RAUC bundle
  2. Convert it to a casync-based RAUC bundle

RAUC provides a command for creating casync-based bundles from ‘conventional’ bundles. Simply call:

rauc convert --cert=<certfile> --key=<keyfile> --keyring=<keyring> conventional-bundle.raucb casync-bundle.raucb

The conversion process will create two new artifacts:

  1. The converted bundle casync-bundle.raucb with casync index files instead of image files
  2. A casync chunk store casync-bundle.castr/ for all bundle images. This is a directory with chunks grouped by subfolders of the first 4 digits of their chunk ID.

7.3.2. Installing casync Bundles

The main difference between installing conventional bundles and bundles that contain casync index files is that RAUC requires access to the remote casync chunk store during installation of the bundle.

Due to the built-in network support of both casync and RAUC, it is possible to directly give a network URL as the source of the bundle:

rauc install https://server.example.com/deploy/bundle-20180112.raucb

By default, RAUC will assume the corresponding casync chunk store is located at the same location as the bundle (with the .castr extension instead of .raucb), in this example at https://server.example.com/deploy/bundle-20180112.castr. The default location can also be configured in the system config to point to a generic location that is valid for all installations.
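Such a generic location could be configured roughly like this in the system.conf (the [casync] section and the storepath key shown here are assumptions; consult the system configuration reference for the exact option names):

[casync]
storepath=https://server.example.com/cas-store.castr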

When installing a bundle, the casync implementation will automatically handle the chunk download via an unprivileged helper binary.

[Figure: _images/casync-extract.svg]

7.3.2.1. Reducing Download Size – Seeding

Reducing the amount of data to be transferred over slow connections is one of the main goals of using casync for updating. Casync splits up the images or directory trees it handles into reusable chunks of similar size. Doing this both on the source as well as on the destination side allows comparing the hashes of the resulting chunks to know which parts are different.

When we update a system, we usually do not change its entire file tree, but only update a few libraries, the kernel, the application, etc. Thus, most of the data can be retrieved from the currently active system and does not need to be fetched via the network.

For each casync image that RAUC extracts to the target slot, it determines an appropriate seed. This is normally a redundant slot of the same class as the target slot but from the currently booted slot group.

[Figure: _images/casync-rauc.svg]

Note

Depending on your target’s processing and storage speed, updating slots with casync can be a bit slower than conventional updates, because casync first has to process the entire seed slot to calculate the seed chunks. After this is done, it will start writing the data and fetch missing chunks via the network.

7.4. Handling Board Variants With a Single Bundle

If you have hardware variants that require installing different images (e.g. for the kernel or for an FPGA bitstream), but have other slots that are common (such as the rootfs) between all hardware variants, RAUC allows you to put multiple different variants of these images in the same bundle. RAUC calls this feature ‘image variants’.

[Figure: _images/rauc-image-variants.svg]

If you want to make use of image variants, you first need to specify which variant your specific board is. You can do this in your system.conf by setting exactly one of the keys variant-dtb, variant-file or variant-name.

[system]
...
variant-dtb=true

The variant-dtb key is a boolean that allows (on device-tree based boards) using the system’s compatible string as the board variant.

[system]
...
variant-file=/path/to/file

A more generic alternative is the variant-file key. It allows you to specify a file that will be read to obtain the variant name. Note that the content of the file should be a simple string without any line breaks. A typical use case would be to generate this file (in /run) during system startup from a value obtained from your bootloader. Another use case is to have a RAUC post-install hook that copies this file from the old system to the newly updated one.

[system]
...
variant-name=myvariant-name

A third option for specifying the system’s variant is to give it directly in your system.conf. This method is primarily meant for testing, as it prevents having a generic rootfs image for all variants!

In your manifest, you can specify variants of an image (e.g. the kernel here) as follows:

[image.kernel.variant-1]
filename=variant1.img
...

[image.kernel.variant-2]
filename=variant2.img
...

It is allowed to have both a specific variant as well as a default image in the same bundle. If a specific variant of the image is available, it will be used on that system. On all other systems, the default image will be used instead.
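A manifest combining a default image with a board-specific variant could look like this (the file names are placeholders):

[image.kernel]
filename=default.img

[image.kernel.variant-2]
filename=variant2.img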

If you have a specific image variant for one of your systems, it is mandatory to also have a default or specific variant for the same slot class for any other system you intend to update. RAUC will report an error if for example a bootloader image is only present for variant A when you try to install on variant B. This should prevent bricking your device by unintentional partial updates.

7.5. Manually Writing Images to Slots

In order to write an image to a slot without using update mechanics like hooks, slot status, etc., use:

rauc write-slot <slotname> <image>

This uses the correct handler to write the image to the slot. It is useful for development scenarios as well as initial provisioning of embedded boards.
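For example, writing a root file system image to the first rootfs slot might look like this (the slot name and image file are examples):

rauc write-slot rootfs.0 rootfs.ext4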

7.6. Updating the Bootloader

Updating the bootloader is a special case, as it is a single point of failure on most systems: The selection of which redundant system images should be booted cannot itself be implemented in a redundant component (otherwise there would need to be an even earlier selection component).

Some SoCs contain a fixed firmware or ROM code which already supports redundant bootloaders, possibly integrated with a HW watchdog or boot counter. On these platforms, it is possible to have the selection point before the bootloader, allowing it to be stored redundantly and updated as any other component.

If redundant bootloaders with fallback are not possible (or too inflexible) on your platform, you may instead be able to ensure that the bootloader update is atomic. This doesn’t support recovering from a buggy bootloader, but will prevent a non-bootable system caused by an error or power loss during the update.

Whether atomic bootloader updates can be implemented depends on your SoC/firmware and storage medium.

7.6.1. Update eMMC Boot Partitions

RAUC supports updating eMMC boot partitions (see the JEDEC standard JESD84-B51 for details), one of which gets enabled atomically via configuration registers in the eMMC (ext_csd registers).

[Figure: _images/emmc-bootloader-update.svg]

The required slot type is boot-emmc. The device to be specified is expected to be the root device. The boot partitions are derived automatically. A system.conf could look like this:

[slot.bootloader.0]
device=/dev/mmcblk1
type=boot-emmc

Important

A kernel bug may prevent consistent toggling of the eMMC ext_csd boot partition register. Be sure your kernel is >= 4.16-rc7 (resp. >= 4.15.14, >= 4.14.31) or contains this patch: https://www.spinics.net/lists/linux-mmc/msg48271.html

7.6.2. Update Boot Partition in MBR

Some SoCs (like the Xilinx ZynqMP) contain a fixed ROM code which boots from the first partition in the MBR partition table of a storage medium. In order to atomically update the bootloader of such systems, RAUC supports changing the MBR partition table and thus switching between two partitions of the same size: one active boot partition (i.e. the partition defined in the MBR partition table) and one inactive partition (i.e. with no entry in the MBR partition table) which is used to update the bootloader.

[Figure: _images/rauc-mbr-switch.svg]

A memory region where the two partitions are stored has to be defined in the configuration (see below), and initially a boot partition has to exist at either the start of the region or at start + size / 2.

Consider the following example layout of a storage medium with a boot partition size of 33 Mbytes:

Start        Size             Content
0x00000000   512 bytes        MBR
0x00000200   160 KBytes       Space for state, barebox-environment, …
0x00028200   66 MBytes        MBR switch region containing:
  0x00028200   33 MBytes        - active boot partition (entry in MBR)
  0x02128200   33 MBytes        - inactive boot partition (no entry in MBR)
0x04228200   remaining size   Other partitions (partition table entries 2, 3, 4)

RAUC uses the start address and size defined in the first entry of the MBR partition table to distinguish between the active and the inactive boot partition, and updates the hidden, inactive partition. After the update, the bootloader is switched by changing the first partition entry and writing the whole 512-byte MBR atomically.

The required slot type is boot-mbr-switch. The device to be specified is expected to be the underlying block device. The boot partitions are derived from the values region-start and region-size. Both values are given as decimal integers in bytes and can be suffixed with K/M/G/T.

A system.conf section could look like this:

[slot.bootloader.0]
device=/dev/mmcblk1
type=boot-mbr-switch
region-start=164352
region-size=66M

7.6.3. Update Boot Partition in GPT

Systems booting via UEFI have a special partition, called the EFI system partition (ESP), which contains the bootloader to be started by the UEFI firmware. Also, some newer ARM SoCs support loading the bootloader directly from a GPT partition.

To allow atomic updates of these partitions, RAUC supports changing the GPT to switch the first GPT partition between the lower and upper halves of a region configured for that purpose. This works similarly to the handling of a MBR boot partition as described in the previous section. It requires RAUC to be compiled with GPT support (./configure --enable-gpt) and adds a dependency on libfdisk.

The required slot type is boot-gpt-switch. The device to be specified is expected to be the underlying block device. The boot partitions are derived from the values region-start and region-size. Both values are given as decimal integers in bytes and can be suffixed with K/M/G/T.

To ensure that the resulting GPT entries are well aligned, the region start must be a multiple of the grain value (as used by sfdisk), which is 1MB by default. Accordingly, the region size must be aligned to twice the grain value (to ensure that the start of the upper half is aligned).

Note that RAUC expects that the partition table always points exactly to one of the halves.

A system.conf section could look like this:

[slot.esp.0]
device=/dev/sda
type=boot-gpt-switch
region-start=1M
region-size=32M

7.6.4. Bootloader Update Ideas

The NXP i.MX6 supports up to four bootloader copies when booting from NAND flash. The ROM code will try each copy in turn until it finds one which is readable without uncorrectable ECC errors and has a correct header. By using the trait of NAND flash that interrupted writes cause ECC errors and writing the first page (containing the header) last, the bootloader images can be replaced one after the other, while ensuring that the system will boot even in case of a crash or power failure.

The slot type could be called “boot-imx6-nand” analogous to eMMC.

7.6.5. Considerations When Updating the Bootloader

Booting an old system with a new bootloader is usually not tested during development, increasing the risk of problems appearing only in the field. If you want to address this issue, do not add the bootloader to your bundle, but rather use an approach like this:

  • Store a copy of the bootloader in the rootfs.
  • Use RAUC only to update the rootfs. The combinations to test can be reduced by limiting which old versions are supported by an update.
  • Reboot into the new system.
  • On boot, before starting the application, check that the current slot is ‘sane’. Then check whether the installed bootloader is older than the version shipped in the (new) rootfs (see the sketch after this list). In that case:
    • Disable the old rootfs slot and update the bootloader.
    • Reboot
  • Start the application.
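A boot-time check implementing these steps could be sketched as follows (the version files and the flash-bootloader helper are purely illustrative and not part of RAUC):

#!/bin/sh
# compare the bootloader version shipped in the rootfs with the installed one
SHIPPED=$(cat /usr/lib/bootloader/version)
INSTALLED=$(cat /boot/bootloader-version 2>/dev/null)
if [ "$SHIPPED" != "$INSTALLED" ]; then
        # disable fallback to the old rootfs before touching the bootloader
        rauc status mark-bad other
        # write the new bootloader atomically (hypothetical helper)
        flash-bootloader /usr/lib/bootloader/bootloader.img
        echo "$SHIPPED" > /boot/bootloader-version
        reboot
fi
# bootloader is up to date, continue with the application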

This way you still have fallback support for the rootfs upgrade and need to test only:

  • The sanity check functionality and the bootloader installation when started from the old bootloader and new rootfs
  • Normal operation when started from the new bootloader and new rootfs

The case of a new bootloader with an old rootfs can never happen, because you disable the old rootfs from the new one before installing a new bootloader.

If you need to ensure that you can fall back to the secondary slot even after performing the bootloader update, you should check that the “other” slot contains the same bootloader version as the currently running one during the sanity check. This means that you need to update both slots in turn before the bootloader is updated.

7.7. Updating Sub-Devices

Besides the internal storage, some systems have external components or sub-devices which can be updated. For example:

  • Firmware for micro-controllers on modular boards
  • Firmware for a system management controller
  • FPGA bitstreams (stored in a separate flash)
  • Other Linux-based systems in the same enclosure
  • Software for third-party hardware components

In many cases, these components have some custom interface to query the currently installed version and to upload an update. They may or may not have internal redundancy or recovery mechanisms as well.

Although it is possible to configure RAUC slots for these and let it call a script to perform the installation, there are some disadvantages to this approach:

  • After a fallback to an older version in an A/B scenario, the sub-devices may be running an incompatible (newer) version.
  • A modular sub-device may be replaced and still have an old firmware version installed.
  • The number of sub-devices may not be fixed, so each device would need a different slot configuration.

Instead, a more robust approach is to store the sub-device firmware in the rootfs and (if needed) update them to the current versions during boot. This ensures that the sub-devices are always running the correct set of versions corresponding to the version of the main application.

If the bootloader falls back to the previous version on the main system, the same mechanism will downgrade the sub-devices as needed. During a downgrade, sub-devices which are running Linux with RAUC in an A/B scenario will detect that the image to be installed already matches the one in the other slot and avoid unnecessary installations.

7.8. Migrating to an Updated Bundle Version

As RAUC undergoes constant development, new features and enhancements will make their way into RAUC over time. Thus, the sections and options contained in the bundle manifest may also be extended.

To assure a well-defined and controlled update procedure, RAUC is rather strict in parsing the manifest and will reject bundles containing unknown configuration options.

However, this does not prevent you from being able to use those new RAUC features on your current system. All you have to do is perform an intermediate update:

  • Create a bundle containing a rootfs with the recent RAUC version, but not containing the new RAUC features in its manifest.
  • Update your system and reboot
  • Now you have a system with a recent RAUC version which is able to interpret and appropriately handle a bundle with the latest options

7.9. Software Deployment

When designing your update infrastructure, you must think about how to deploy the updates to your device(s). In general, you have two major options: Deployment via storage media such as USB sticks or network-based deployment.

As RAUC uses signed bundles instead of e.g. trusted connections to enable update author verification, RAUC fully supports both methods with the same technique and you may also use both of them in parallel.

Some factors influencing which method to use can be:

  • Do you have network access on the device?
  • How many devices have to be updated?
  • Who will perform the update?

7.9.1. Deployment via Storage Media

[Figure: _images/usb-updates.svg]

This method is mainly used for decentralized updates of devices without network access (either due to missing infrastructure or because of security concerns).

To handle deployment via storage media, you need a component that detects the plugged-in storage media and calls RAUC to trigger the actual installation.

When using systemd, you could use automount units for detecting plugged-in media and trigger an installation.
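A rough sketch of a systemd service bound to the mount unit of the USB media could look like this (the unit names, mount point and bundle path are examples):

[Unit]
Description=Install RAUC bundle from USB media
Requires=media-usb.mount
After=media-usb.mount

[Service]
Type=oneshot
ExecStart=/usr/bin/rauc install /media/usb/update.raucb

[Install]
WantedBy=media-usb.mount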

7.9.2. Deployment via Deployment Server

[Figure: _images/ota-updates.svg]

Deployment over a network is especially useful when having a larger set of devices to update or direct access to these devices is tricky.

As RAUC focuses on update handling on the target side, it does not provide a deployment server out of the box. But if you do not already have a deployment infrastructure, there are a few open source deployment server implementations available in the wild.

One worth mentioning is hawkBit from the Eclipse IoT project, which also provides some strategies for rollout management for larger-scale device farms.

7.9.2.1. RAUC hawkBit updater (C)

The rauc-hawkbit-updater is a separate application project developed under the rauc organization umbrella. It aims to provide a ready-to-use bridge between the hawkBit REST DDI API on one side and the RAUC D-Bus API on the other.

For more information visit it on GitHub:

https://github.com/rauc/rauc-hawkbit-updater

7.9.2.2. The RAUC hawkBit client (python)

As a separate project, the RAUC development team provides a Python-based example application that acts as a hawkBit client via its REST DDI-API while controlling RAUC via D-Bus.

For more information visit it on GitHub:

https://github.com/rauc/rauc-hawkbit

It is also available via pypi:

https://pypi.python.org/pypi/rauc-hawkbit/