In the previous post we prepared a setup with a FriendlyElec NanoPC-T4 single-board computer (SBC) and connected it through PCIe to a host computer. This time we will set up a Pine64 RockPro64 board, which comes with a more convenient PCIe x4 female edge connector instead of an M.2 slot. Both boards are based on the same hexa-core Rockchip RK3399 chip.

In this post we will explore how to build a custom Linux kernel and load a PCIe endpoint function driver so that the board acts as an NVMe disk.

Patches to the Linux Kernel

During the work that led to this post I found out that the Linux kernel driver for the RK3399 PCIe endpoint controller was not working properly. To make it work I had to modify it quite substantially. This was possible thanks to the RK3399 Technical Reference Manual (TRM), which documents the PCIe controller of the RK3399. The details go beyond the scope of this post; if they are of interest to you, you can find the patch series and discussion on the Linux Kernel Mailing List (LKML).

For the RK3399 PCIe endpoint controller to work, the patches (the latest patch series) must be applied to the Linux kernel. Hopefully the changes I proposed will get merged into the mainline Linux kernel some day, so that we no longer have to apply them by hand. (Note: there are other boards with PCIe endpoint controllers and functional drivers; these patches only apply to the RK3399 SoC.)
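Applying a series by hand follows the usual kernel workflow: download the patches and feed them to git am inside the kernel tree. The snippet below demonstrates that workflow on a throwaway stand-in repository so it can be tried anywhere; the file name and commit messages are made up for the demo, and for the real thing you would run the final git am command inside your Linux tree on the patches downloaded from the LKML thread linked above.

```shell
# Self-contained demo of the "git am" workflow on a throwaway repository.
# For the real series, run "git am" inside your kernel tree on the patch
# files downloaded from the LKML thread linked above.
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "base"

# Create a stand-in patch (this replaces downloading the real series)
echo "fix" > pcie-rockchip-ep.c
git add pcie-rockchip-ep.c
git commit -q -m "PCI: rockchip: example fix"
git format-patch -1 -o ../series >/dev/null

git reset -q --hard HEAD~1   # pretend we never had the fix
git am ../series/*.patch     # apply the series, one commit per patch
```

Each patch in the series becomes one commit, so a partial failure can be resolved and resumed with git am --continue.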

A Linux based NVMe drive

With the PCIe endpoint controller working on the RK3399 SoC we can now play with Linux PCIe endpoint function drivers. For the moment the mainline kernel only contains a few of them: a test function and a non-transparent bridge (NTB). However, other drivers have been proposed on the Linux kernel mailing list, for example an NVMe endpoint function. The test function driver allows testing the functionality of the PCIe base address registers (BARs), interrupts (legacy and MSI/MSI-X), and memory transactions over PCIe from and to the host, with and without DMA. This test driver was used to validate the functionality of the PCIe endpoint controller driver I patched. The NTB function driver requires two PCIe endpoint controllers, which the RK3399 doesn’t have, so I could not test it. The most interesting endpoint function is therefore probably the NVMe function.
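On the host side, the test function is driven with the pcitest tool from the kernel sources (tools/pci), with the pci_endpoint_test driver bound to the board. The sketch below shows typical invocations; the BAR number, IRQ type, and transfer size are arbitrary examples, and the available flags may differ between kernel versions, so check the pci-test-howto document in your kernel tree.

```shell
# Sketch: exercising the endpoint test function from the host with pcitest.
# Requires the pci_endpoint_test driver bound to the board; flags may vary
# between kernel versions (see Documentation/PCI/endpoint/pci-test-howto.rst).
if command -v pcitest >/dev/null 2>&1; then
    pcitest -b 0           # test BAR 0
    pcitest -i 0           # select legacy interrupts
    pcitest -l             # trigger a legacy interrupt
    pcitest -r -s 1024     # read 1024 bytes from the host
    pcitest -w -s 1024 -d  # write 1024 bytes to the host using DMA
else
    echo "pcitest not found: build it from tools/pci in the kernel sources"
fi
```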

A modified version of this driver, combined with my patches for the RK3399 endpoint controller, was prepared by Damien Le Moal in his repository. I decided to try this driver as well and have a fork of his repository here (to apply some further patches).

This work allows setting up a Linux based NVMe drive. As can be seen in the picture below, the RockPro64 board is recognised as an NVMe drive.

A RockPro64 board connected on PCIe (x1) recognised as an NVMe drive

The board uses a 64 GB USB3 stick as storage and presents itself to the host as an NVMe drive. The following screenshot shows what the host sees.

The Linux based NVMe drive as seen by the host

We can see both the internal 1 TB disk and the RockPro64 based NVMe disk of size 62 GB (backed by the 64 GB USB3 key). The disk is functional and allows reading and writing as if it were a standard drive. Here the backing store is a USB key, but it could be another device, e.g., a SATA SSD or a hard drive.

Why?

This can be seen as a very convoluted way to write to a USB key, so why would we want to do this?

Having a Linux based NVMe drive allows experimenting with NVMe firmware development without the need for an NVMe development kit or an FPGA; all it requires is a single-board computer and some cables.

With this, any developer can jump into NVMe development, e.g., by implementing new NVMe standards such as the “Key-Value” command set specifications as soon as they come out.

How?

For anyone who would like to start tinkering with NVMe on a Linux based SBC, here are the instructions to reproduce the setup above.

Requirements (Hardware)

  • RockPro64 board ($79.99)
  • MicroSD card (min. 4 GB)
  • USB stick
  • PCIe male-to-male connector (Tx-Rx swap), which can be bought here or here; the cheapest option, however, is to use PCIe riser cables, see the previous post.
  • (Optional) Serial cable (FTDI) to communicate with the board; the board can also be accessed over SSH.

Build the Linux kernel with NVMe driver

The Linux kernel and rootfs can be built with Buildroot. A public GitHub repository by Damien Le Moal already provides everything needed (a patched kernel with the NVMe endpoint function driver). Instructions for building are given here.

# Clone Damien Le Moal's Buildroot repository (linked above)
git clone
cd buildroot
# Check out the dev branch (a newer version may be available by now)
git checkout rockpro64_ep_v21
# Prepare Buildroot for the board
make rockpro64_ep_defconfig
# Build (takes some time)
make

Note that if for some reason Buildroot gives the following error

Could not fetch special ref 'master'; assuming it is not special. Commit 'master' does not exist in this repository.

followed by some “404 Not Found” errors, then modify the file buildroot/support/download/git by applying the following patch

diff --git a/support/download/git b/support/download/git
index 1a1c315f73..d193dd9172 100755
--- a/support/download/git
+++ b/support/download/git
@@ -138,7 +138,7 @@ _git fetch origin -t
 # below, if there is an issue anyway. Since most of the cset we're gonna
 # have to clone are not such special refs, consign the output to oblivion
 # so as not to alarm unsuspecting users, but still trace it as a warning.
-if ! _git fetch origin "'${cset}:${cset}'" >/dev/null 2>&1; then
+if ! _git fetch -u origin "'${cset}:${cset}'" >/dev/null 2>&1; then
     printf "Could not fetch special ref '%s'; assuming it is not special.\n" "${cset}"

(add “-u” to the _git fetch origin line); more info here.

Buildroot patches the Linux kernel with the patches found here. These apply my patch set for the RK3399 controller plus some extra patches, and add the NVMe PCIe endpoint function driver. Finally, a script is provided to launch the NVMe PCIe endpoint function (this script uses configfs to set up the PCIe endpoint function).

Specifying a custom Linux kernel

For development we would prefer to use a custom Linux kernel rather than rely on patches applied through Buildroot. For this we can create a “local.mk” file in the Buildroot directory and add

LINUX_OVERRIDE_SRCDIR = /path/to/linux

Then we can rebuild the kernel and regenerate the rootfs with the following Buildroot command

make linux-rebuild all

This allows us to modify the Linux kernel directly (do some hacking) and recompile it quickly. The procedure is documented in the Buildroot documentation here.
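Concretely, the override can be set up from the Buildroot top directory as below. The kernel path is an example; adjust it to wherever your checkout lives.

```shell
# Point Buildroot at a local kernel tree instead of its downloaded tarball.
# /home/user/src/linux is an example path; adjust it to your checkout.
cat > local.mk <<'EOF'
LINUX_OVERRIDE_SRCDIR = /home/user/src/linux
EOF

# Then rebuild the kernel and regenerate the images (inside the Buildroot tree):
#   make linux-rebuild all
```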

To start tinkering, a Linux kernel with the NVMe endpoint function is available here or here. (Be aware that if you clone one of these, you need to check out the correct branch as linked.) The driver itself is drivers/pci/endpoint/functions/pci-epf-nvme.c.

Preparing the SD card

Once the Buildroot build has finished (build instructions), the image is available under “output/images/sdcard.img”. Copy it to an SD card as follows. Be careful: if you specify the wrong disk, it will be overwritten!

# Here /dev/sdX should match your SD card (it could also be /dev/mmcblkX); use lsblk to find out which it is
sudo dd if=output/images/sdcard.img of=/dev/sdX status=progress bs=1M && sudo sync

sudo fdisk /dev/sdX
# press 'w' then press 'Enter'

For me the SD card appears under /dev/sdf. Be aware that if you specify the wrong disk it will be overwritten, so be careful! (E.g., here /dev/sda is a SATA SSD that I don’t want to overwrite.)

In my case the SD card is /dev/sdf
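To catch silent write errors it is worth reading the card back and comparing it against the image. The snippet below demonstrates the idea on regular files so it can be tried anywhere; on real hardware you would use IMG=output/images/sdcard.img and DEV=/dev/sdX (from lsblk), with sudo, and the file names here are made up for the demo.

```shell
# Demonstration of write-then-verify using regular files as stand-ins.
# On real hardware: IMG=output/images/sdcard.img, DEV=/dev/sdX (check lsblk!)
IMG=demo-sdcard.img
DEV=demo-sdcard.dev

# Create a 4 MiB stand-in "image" (this replaces the Buildroot output)
dd if=/dev/urandom of="$IMG" bs=1M count=4 2>/dev/null

# Write the image, flushing at the end so everything reaches the "device"
dd if="$IMG" of="$DEV" bs=1M conv=fsync 2>/dev/null

# Read back and compare: cmp is silent when the copy is identical
cmp "$IMG" "$DEV" && echo "write verified"
```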

Once written, the SD card can be plugged into the RockPro64 board, which is then ready to power up. (Note that the host must be off; we will turn it on once the RockPro64 is ready to present itself as an NVMe drive.)

RockPro64 with SD card, Serial cable, PCIe cable, and USB3 stick plugged in

Booting the RockPro64 and setting up the NVMe function

A USB serial adapter is used to communicate with the RockPro64, with the picocom program as the serial terminal (feel free to use your favourite terminal emulator here: minicom, screen, PuTTY, etc.)

sudo picocom /dev/ttyUSB0 -b 1500000

On my machine the adapter appears as “/dev/ttyUSB0”; this might be different for you. The baud rate is 1,500,000. Once the terminal program is launched, the RockPro64 can be turned on (if it is already on, power cycle it).

RockPro64 boot

Once the board has booted you are prompted with a login screen; the default Buildroot credentials are “root” with “buildroot” as the password. The provided script loads the NVMe endpoint function; launch it from the console after logging in.
Note that the script expects a device under “/dev/sda” (the USB3 key) to act as the backend (where the data will be written); if necessary the script can be modified. It loads the driver, then sets up the PCIe endpoint through configfs.
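The configfs sequence such a script performs can be sketched as follows. This is a mock run against a local directory so it can be tried anywhere; on the board the root would be /sys/kernel/config/pci_ep, and the function name, controller name, and IDs below are assumptions, so check the actual script and the kernel's pci-endpoint-cfs documentation.

```shell
# Mock of the configfs steps used to bring up a PCIe endpoint function.
# On the board, EP would be /sys/kernel/config/pci_ep (with configfs mounted);
# here a local directory stands in so the sequence can be tried anywhere.
# The function name, controller name, and ID values are assumptions.
EP=./demo-pci-ep
FUNC=$EP/functions/pci_epf_nvme/nvme0
CTRL=$EP/controllers/fd000000.pcie-ep

mkdir -p "$FUNC" "$CTRL"   # on the board, creating "$FUNC" alone suffices

echo 0x1b96 > "$FUNC/vendorid"       # example vendor ID
echo 0x0001 > "$FUNC/deviceid"       # example device ID
echo 32     > "$FUNC/msi_interrupts" # number of MSI vectors to advertise

# Bind the function to the controller, then start the endpoint
ln -sf "$(realpath "$FUNC")" "$CTRL/"
echo 1 > "$CTRL/start"
```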

Booting the host and interacting with the disk

The host can now be booted and the disk should appear. You should be able to inspect it with the nvme-cli tool, list it with “lspci”, see it under “/dev/nvmeX”, format it, write files to it, etc.

You could even benchmark it with fio or other tools. Don’t expect state-of-the-art performance: the controller is PCIe gen 2, the link is PCIe x1 or x4 depending on the cable used, and the speed of the backing USB drive as well as the extra software layers in between are also limiting. This setup will not come close to commercial NVMe drives; however, we now have an open source platform for NVMe experimentation and development!

(Note that there may still be bugs in the provided code; the RockPro64 console will show crashes if any happen. Also, a host reboot is not supported for the moment: the SoC needs to be power cycled as well, and the endpoint function set up again.)


The Linux kernel is an amazing piece of software! With the PCIe endpoint function framework we can now develop our own PCIe cards! And for less than $100 we have a development kit to build all sorts of things!

In this post we presented a Linux based NVMe drive, but we could create some crazy projects: for example, a graphics card based on the Mali GPU inside the SoC and the HDMI output on the board, a network card, an ARM based co-processor, or an emulator for a future PCIe card. The possibilities are endless!

At REDS we will use this to work on custom NVMe firmware and explore “computational storage”, where storage and computation (acceleration) happen on the same device.

Happy Hacking!