Introduction
This is a step-by-step guide to enable hardware (PL) – software (PS) Co-Simulation with QEMU and QuestaSim for a Vivado Zynq project running a Linux operating system and applications.
Being able to simulate the interactions between the software running on the ARM processing system (PS) and the FPGA Programmable Logic (PL) allows for full-system simulation and can help development of drivers as well as embedded software that relies on the PL.
Prerequisites
- Linux-based operating system
- Vivado (this guide uses Vivado 2019.2, but the approach works for 2017 and 2018 versions as well)
- Petalinux 2019.2
- QuestaSim 2020
Foreword
When developing a project for a Zynq platform, the traditional workflow can be represented by the following diagram:
The hardware part of the project would be developed in Vivado and parts of the design would be simulated either with Xsim, ModelSim, or QuestaSim. A hardware description file is passed to the PetaLinux tool in order to generate a BSP, the boot loaders, the Linux kernel, etc.
During development the software would then be run either on a real board or on the QEMU emulator. The problem with QEMU is that it has no knowledge of the PL (FPGA) part of the design, and thus it cannot directly be used to test this part. Development and testing of drivers is therefore often done with a real board or a software emulation model. This approach is not very practical, especially when the hardware in the PL is subject to changes during the project.
Xilinx does provide some Co-Simulation capabilities. This is presented in [UG1169], the Xilinx QEMU user guide, in the chapter “Co-Simulating With QEMU”. The image below presents their environment.
Xilinx provides a patched version of QEMU (Xilinx’s QEMU) which can communicate with the Xilinx LibSystemCTLM-SoC library.
This library offers a Transaction-Level Modeling (TLM) model of the Zynq as well as bridges from transactions to actual ports (such as AXI). This allows for hardware / software Co-Simulation. Examples are given here: https://github.com/Xilinx/systemctlm-cosim-demo
The downside of this model is that it relies on SystemC as the simulation environment, and this has some issues.
First, RTL-level simulation is only possible for Verilog, through Verilator. Moreover, only the SystemC signal-tracing capabilities are available, so there is no live waveform capture as would be possible with Xsim or ModelSim/QuestaSim.
Second, the Co-Simulation part, which can be seen in the following diagram, is not tightly coupled to the hardware project; therefore, any changes in the hardware project (such as a change in the memory map or a new component) would not be reflected in the Co-Simulation model, which would have to be adapted by hand.
Finally, the simulation model can only take Verilog or SystemC files, so all other entities in the hardware project must be rewritten as SystemC or Verilog models.
Goal
The goal of this guide is to explain how to change this current approach to get a more tightly coupled Co-Simulation. Xilinx also has this kind of Co-Simulation for their SDSoC workflows (now Vitis) for HLS accelerators, but I could not find a complete Co-Simulation workflow for running Linux through QEMU while simulating the whole PL.
The idea is to create a model where any changes in the hardware project would be directly reflected in the co-simulation model. Therefore the co-simulation must be able to simulate everything in the hardware project except the PS (Processing System) part that is emulated by QEMU.
In order to generate this simulation environment we will use the Vivado hardware project. The Vivado project can automatically generate a simulation environment where the Zynq is replaced by the Zynq7 Verification IP (VIP). We will use this simulation environment as a basis for our co-simulation but replace the Zynq7 VIP by a Zynq7 model that can communicate with QEMU through the Remote Port (RP) protocol. This will allow for a full-system co-simulation.
This makes it possible to generate complex simulation environments in Vivado where even board-level components can be simulated side by side with all the RTL components and block design IPs. A possible hierarchy can be seen below.
This entire hierarchy can be generated from Vivado and can already be simulated with the Zynq7 PS block being the Zynq VIP. The idea is to take the same simulation setup, but with the Zynq7 PS replaced by a model that can communicate with QEMU. This is indeed possible and is what we will show in this guide.
A simple Vivado project for the ZedBoard
This project will serve as the running example for co-simulation, but the approach should work for any other Zynq-based RTL project.
The guide is illustrated with an example project in Vivado 2019.2. A similar project can be created with an older version of Vivado and should also work; the guide was also tested with Vivado 2017.4 and worked fine.
Step-by-step instructions
First create a new Vivado project.
Choose a project name and location and select RTL project.
Choose ZedBoard as the target board for the project.
Finish creating the project. Once the project is created, start by creating a new block design. We will create a simple project that can read the DIP switches and set the LEDs.
Then add the ZYNQ7 Processing System with the Add IP (+) button.
Double click the IP in order to configure it. To save time select the default ZedBoard presets.
In Peripheral I/O Pins disable TTC0, since it is not needed here.
Then use Run Block Automation to connect the Zynq interfaces.
Add two GPIO IPs and an AXI interconnect.
Configure the GPIOs so that one is set as input and the other as output, both 8 bits.
Once the GPIOs are configured, click DIP switches in the Board menu and choose the GPIO configured as input; this will connect the DIP switches to the GPIO.
Do the same for the LEDs but select the output GPIO block; this will connect the LEDs.
Run Connection Automation to connect everything.
The system is now fully connected.
Now check the memory map in the address editor.
The switches can be read at 0x4120_0000 and the LEDs can be written at 0x4121_0000.
Finally, create an HDL wrapper.
Let Vivado manage the wrapper.
Save and generate the bitstream by clicking Generate Bitstream. When prompted to launch synthesis and implementation, choose ‘yes’.
When the bitstream is generated choose ‘cancel’ on the popup window.
Now choose “Export Hardware”.
In the window, the name can be changed if needed; by default it takes the top entity name (here the HDL wrapper for the block design). Choose the export path; here we chose to export outside of the Vivado project (which is in /opt/pkg/projects/zedboard/vivado_project/zedboard) and to include the bitstream. (Including the bitstream is not necessary for Co-Simulation but allows us to run on an actual board.)
This will generate an XSA file (previously HDF file) that can be used with Petalinux in order to generate a kernel for this hardware platform.
Creating the Petalinux project
This process is documented in detail in [UG1144].
A step-by-step guide is provided below.
Step-by-step instructions
Open a terminal window and source the settings from the Petalinux install (this will make the Petalinux commands visible to the terminal by setting the $PATH environment variable).
If you do not have Petalinux installed refer to [UG1144] chapter 2 for installation and setup.
$ source /opt/pkg/petalinux/2019.2/settings.sh
You can safely ignore the warning if you do not use or did not set up the tftp server. Then use the following command to create a project.
$ petalinux-create --type project --template zynq --name zedboard_petalinux_project
and move to the project folder
$ cd zedboard_petalinux_project
Configure the project based on the exported hardware with the following command (using the path where we exported the hardware in the last section).
$ petalinux-config --get-hw-description=/opt/pkg/projects/zedboard
If the terminal window is too small the command will fail; if needed, resize your terminal and rerun it. The command will open a menuconfig menu.
The only setting we need to change for the moment is under DTG Settings (Device Tree Generator): select DTG Settings and update MACHINE_NAME from “template” to “zedboard”.
Save and exit. This will end the configuration and the project will be set up. This may take some time (a few minutes).
Now we can build the project, which will create the bootloaders (FSBL, U-boot), the Linux Kernel, the root file system, the QEMU emulator, etc. For more information on Petalinux and customizing any of the components refer to [UG1144].
Build the project with the following command
$ petalinux-build
Now it is possible to test if QEMU can start the generated kernel with
$ petalinux-boot --qemu --kernel
This will run the Linux kernel in QEMU, but for the moment there is no support for the PL (FPGA) side of things. You can quit QEMU by pressing ctrl-A then X.
In order to communicate with the part that simulates the PL (FPGA), we need to tell QEMU to communicate with the simulation. The Xilinx version of QEMU (https://github.com/Xilinx/qemu) has a built-in mechanism to communicate with simulators called Remote Port (RP). When a Zynq or Zynq UltraScale+ machine is run in this QEMU version, it is possible to pass a hardware device tree blob with information to enable Co-Simulation ports.
The device tree available here can be passed as an argument to QEMU. The instructions on how to build it are available here; however, these instructions do not seem up to date.
Therefore, I have created a script that will fetch the required device tree include file and use the Petalinux project to build a custom device tree based on the Linux device tree and this include. The device tree will then be copied to a qemu_cosim directory in the Petalinux project and the Linux device tree will be cleaned and rebuilt in order to not include the Co-Simulation entries (which are only needed by QEMU).
The script generate_qemu_device_tree_zynq7.sh can be used to generate the device tree blob for QEMU. The script takes the path to the Petalinux project as an argument, e.g.,
$ ./generate_qemu_device_tree_zynq7.sh /opt/pkg/projects/zedboard/zedboard_petalinux_project/
This will generate a qemu_cosim directory in the Petalinux project directory and the generated device tree blob will be copied to this qemu_cosim directory.
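Conceptually, the script automates steps along the following lines. This is only a sketch: the paths, file locations, and clean steps below are illustrative, and the authoritative sequence is in the script itself.
$ # Fetch the Co-Simulation include from the Xilinx qemu-devicetrees repository
$ DT_DIR=components/plnx_workspace/device-tree/device-tree
$ wget -P $DT_DIR https://raw.githubusercontent.com/Xilinx/qemu-devicetrees/master/zynq-pl-remoteport.dtsi
$ echo '/include/ "zynq-pl-remoteport.dtsi"' >> $DT_DIR/system-top.dts
$ # Build a device tree containing the Co-Simulation entries and keep it for QEMU
$ petalinux-build -c device-tree
$ mkdir -p qemu_cosim && cp images/linux/system.dtb qemu_cosim/qemu_hw_system.dtb
$ # Clean and rebuild the normal Linux device tree without the Co-Simulation entries
$ petalinux-build -c device-tree -x clean
$ petalinux-build -c device-tree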
QEMU can now be launched to use the Remote Port (RP) with the following command, which will suspend QEMU waiting for a connection on the Remote Port.
$ petalinux-boot --qemu --kernel --qemu-args "-redir tcp:1534::1534 -hw-dtb ./qemu_cosim/qemu_hw_system.dtb -machine-path ./qemu_cosim -icount 1 -sync-quantum 10000"
If QEMU is launched with RP it will wait for a connection before it continues, and therefore it is impossible to use ctrl-A X to exit QEMU. You can either kill QEMU by closing the terminal or connect to it through the opened socket (a UNIX file socket). The socket is created in the qemu_cosim directory with the name qemu-rport-_cosim@0, and we can connect to this UNIX socket with, for instance, socat:
$ socat - UNIX-CONNECT:qemu_cosim/qemu-rport-_cosim@0
When you disconnect socat (ctrl-C), QEMU will close.
During Co-Simulation this UNIX file socket will be used to communicate between QEMU and the simulator. QEMU is now set up; we can prepare the PL (FPGA) simulation environment and QuestaSim.
Further information
Further information can be found in [UG1169].
Generating the simulation environment
We are now ready to prepare the hardware simulation environment; once this is set up we will be able to start the co-simulation.
First, be sure to have the Xilinx simulation libraries ready for QuestaSim; if not, check the Generate Xilinx libraries for QuestaSim section below.
Step-by-step instructions
In this section we will use the Vivado hardware project to set up a simulation environment. In the project settings, set the target simulator to Questa Advanced Simulator and check that the Compiled library location points to the correct location.
You may also need to set the QuestaSim installation directory.
We will now generate a simple simulation top entity for this project (not mandatory, but it shows where board-level behavior, i.e., outside the FPGA, can be simulated).
Choose Add Sources (+) in the Sources window.
Add or create simulation sources.
Create a simulation file, here a VHDL file sim_top.vhd (but you can also use SystemVerilog or Verilog). Click OK and Finish. Keep the default values for the next dialog box and click OK.
Here we create a simple testbench to instantiate the block_design_wrapper.
The code for this file is the following (note the required library clauses):
library ieee;
use ieee.std_logic_1164.all;

entity sim_top is
end sim_top;

architecture Behavioral of sim_top is
    signal leds_obs : std_logic_vector(7 downto 0);
    signal sw_sti   : std_logic_vector(7 downto 0) := "10100110";
begin
    -- Device under test: the HDL wrapper generated by Vivado for the block design
    dut : entity work.block_design_wrapper
        port map (
            sws_8bits_tri_i  => sw_sti,
            leds_8bits_tri_o => leds_obs
        );
end Behavioral;
Here we just give a default value to the switches and assign a signal to the LEDs. In this testbench you can add behavioral models for the board components (in VHDL, SystemVerilog, SystemC, or another modeling language), for instance as sketched below. For this example we will not add anything else, and we do not need to connect the RAM ports since they are handled by QEMU and will not matter here.
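As an illustration only (not used in this example), board-level stimulus could be added as a process in the architecture body, e.g., to toggle a switch after some simulated time:
-- Illustrative board-level stimulus: toggle switch 0 after 1 ms
stim : process
begin
    wait for 1 ms;
    sw_sti(0) <= not sw_sti(0);
    wait;
end process;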
Once this top is created, click Simulation – Run Simulation – Run Behavioral Simulation.
If everything goes as expected this should open QuestaSim and start a simulation with the Zynq VIP (Verification IP) as the Zynq7 processing system.
This simulation does not do much: the Zynq7 VIP will check transactions on AXI buses, etc., but will not run any software. The documentation can be found here. Nevertheless, this simulation will serve as a basis for our Co-Simulation; we will replace this Zynq7 VIP by a Co-Simulation model that communicates with QEMU. The current simulation also shows us that everything compiles as expected and that we are ready to continue further.
Vivado created a directory in the Vivado project under zedboard.sim/sim_1/behav/questa, as can be seen below.
This directory holds all the scripts that were used to launch the simulation above. We will build on this by adding scripts of our own as well as the Co-Simulation files.
The Co-Simulation files are available through https://github.com/rick-heig/zynq7-cosim
Clone this repository in the simulation directory (or elsewhere). Once cloned, run the setup.sh script in the cloned directory. This script will clone https://github.com/Xilinx/libsystemctlm-soc and apply a patch to make it compatible with QuestaSim.
Now, from the simulation directory we will create symbolic links to the cloned files. This allows cloning only once and reusing the files for multiple projects. We need to link to src_sc, src_vhdl, and libsystemctlm-soc from the zynq7-cosim repo, for example as follows.
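A minimal sketch, assuming the repository was cloned directly into the simulation directory (adapt the relative paths otherwise):
$ cd zedboard.sim/sim_1/behav/questa
$ ln -s zynq7-cosim/src_sc src_sc
$ ln -s zynq7-cosim/src_vhdl src_vhdl
$ ln -s zynq7-cosim/libsystemctlm-soc libsystemctlm-soc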
Now we need to create the scripts to compile everything required for the Co-Simulation. When we use these scripts instead of those generated by Vivado, the Zynq7 VIP will be replaced by a Co-Simulation-enabled Zynq.
In order to generate the scripts and files required for Co-Simulation, a Python script is given in the zynq7-cosim repository in the scripts directory. The script takes two arguments as input:
1) The VHDL stub for the processing system, which is in the Vivado project, e.g.,
project_name.srcs/sources_1/bd/block_design_name/ip/block_design_name_processing_system7_0_0/block_design_name_processing_system7_0_0_stub.vhdl
2) The simulation directory, this is also where the files will be generated e.g.,
project_name.sim/sim_1/behav/questa
$ python3 generate_sim_files.py <path to processing system VHDL stub> <path to simulation directory>
As can be seen below.
This will generate an all.do script, which calls the Vivado-generated compile script, then a custom compile script, then the Vivado-generated elaboration script, and finally starts the simulation.
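The structure of all.do is roughly the following (a sketch based on the description above; only sim_top_compile.do and zynq7_compile_cosim.do are named later in this guide, the elaboration and simulation script names are assumptions and may differ):
# all.do structure (sketch)
do sim_top_compile.do      ;# Vivado-generated compile script
do zynq7_compile_cosim.do  ;# custom Co-Simulation compile script
do sim_top_elaborate.do    ;# Vivado-generated elaboration script (name assumed)
do sim_top_simulate.do     ;# start the simulation (name assumed)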
The program also generated the script zynq7_compile_cosim.do, as well as a VHDL file, here called block_design_processing_system7_0_0.vhd (the name may differ depending on your block design name).
This VHDL file is what will replace the Zynq7 VIP in the simulation. If we look into sim_top_compile.do, which was generated by Vivado, we can see that normally “../../../../zedboard.ip_user_files/bd/block_design/ip/block_design_processing_system7_0_0/sim/block_design_processing_system7_0_0.v” would be used; in our simulation this file is replaced by the VHDL file, which will hold all the other Co-Simulation related entities.
The zynq7_compile_cosim.do file compiles everything; let’s have a look at this script:
# Script to compile the CoSimulation files (auto-generated)
vlib questa_lib/work
vlib questa_lib/msim
vlib questa_lib/msim/xil_defaultlib
vmap xil_defaultlib questa_lib/msim/xil_defaultlib
# Zynq System Wrapper
sccom -work xil_defaultlib --std=c++11 -g -I./libsystemctlm-soc/libremote-port/ -I./libsystemctlm-soc/ ./libsystemctlm-soc/soc/xilinx/zynq/xilinx-zynq.cc
# Compile C files (not SystemC) for libremote-port
sccom -work xil_defaultlib -x c -fPIC -g ./libsystemctlm-soc/libremote-port/safeio.c
# The following file was patched to solve issues (maybe a flag would have fixed them too) TODO : Check this out
sccom -work xil_defaultlib -x c -fPIC -g ./libsystemctlm-soc/libremote-port/remote-port-proto.c
# The following file was patched to solve issues
sccom -work xil_defaultlib -x c -fPIC -g ./libsystemctlm-soc/libremote-port/remote-port-sk.c
# Lib Remote Port (RP) SystemC files
sccom -work xil_defaultlib -g -I./libsystemctlm-soc/libremote-port/ -I./libsystemctlm-soc/ ./libsystemctlm-soc/libremote-port/remote-port-tlm.cc
sccom -work xil_defaultlib -g -I./libsystemctlm-soc/libremote-port/ -I./libsystemctlm-soc/ ./libsystemctlm-soc/libremote-port/remote-port-tlm-memory-master.cc
sccom -work xil_defaultlib -g -I./libsystemctlm-soc/libremote-port/ -I./libsystemctlm-soc/ ./libsystemctlm-soc/libremote-port/remote-port-tlm-memory-slave.cc
sccom -work xil_defaultlib -g -I./libsystemctlm-soc/libremote-port/ -I./libsystemctlm-soc/ ./libsystemctlm-soc/libremote-port/remote-port-tlm-wires.cc
# The main Zynq SystemC-TLM CoSimulation entity
sccom -work xil_defaultlib -D__M_AXI_GP0_AXLEN_WIDTH__=4 -D__M_AXI_GP0_ENABLE__=1 -D__M_AXI_GP0_DATA_WIDTH__=32 -D__M_AXI_GP0_ID_WIDTH__=12 -D__M_AXI_GP0_AXLOCK_WIDTH__=2 -D__M_AXI_GP0_ADDR_WIDTH__=32 -g -I./libsystemctlm-soc/libremote-port/ -I./libsystemctlm-soc/ -I./libsystemctlm-soc/soc/xilinx/zynq/ -Isrc_sc -I. -I./libsystemctlm-soc/tlm-bridges/ src_sc/zynq7_ps.cc
# Generation of the VHDL template (as reference only)
# Xilinx uses .veo and .vho for verilog and vhdl templates respectively (component description)
#vgencomp zynq7_ps > src_vhdl/zynq7_ps.vho
# Compilation of the VHDL wrapper around the SystemC entity (should be consistent with the template above)
vcom -work xil_defaultlib src_vhdl/zynq7_ps_wrapper.vhd
# Link (systemc.so)
sccom -link -work xil_defaultlib
# Compile VHDL BD wrapper (auto-generated)
vcom -work xil_defaultlib -2008 block_design_processing_system7_0_0.vhd
# Simulate (requires QEMU to be launched, twice, once for the optimization and then again for the simulation)
#
# petalinux-boot --qemu --kernel --qemu-args "-hw-dtb ./system.dtb -machine-path ./qemu-tmp -icount 1 -sync-quantum 10000"
# The hardware device tree is the linux device tree with extra include
# https://github.com/Xilinx/qemu-devicetrees
# https://github.com/Xilinx/qemu-devicetrees/blob/master/zynq-pl-remoteport.dtsi
#
# Time quantum and machine path should be the same between simulation and QEMU emulation
# (generic parameters of zynq_top component, can be changed in testbench)
We can see all the files that are compiled. The project-specific parts of the script are:
1) the AXI-related parameters that are passed to the preprocessor so that the defines are set correctly in the SystemC Zynq Co-Simulation file (zynq7-cosim/src_sc/zynq7_ps.h):
# The main Zynq SystemC-TLM CoSimulation entity
sccom -work xil_defaultlib -D__M_AXI_GP0_AXLEN_WIDTH__=4 -D__M_AXI_GP0_ENABLE__=1 -D__M_AXI_GP0_DATA_WIDTH__=32 -D__M_AXI_GP0_ID_WIDTH__=12 -D__M_AXI_GP0_AXLOCK_WIDTH__=2 -D__M_AXI_GP0_ADDR_WIDTH__=32 -g -I./libsystemctlm-soc/libremote-port/ -I./libsystemctlm-soc/ -I./libsystemctlm-soc/soc/xilinx/zynq/ -Isrc_sc -I. -I./libsystemctlm-soc/tlm-bridges/ src_sc/zynq7_ps.cc
2) the name of the generated VHDL file:
# Compile VHDL BD wrapper (auto-generated)
vcom -work xil_defaultlib -2008 block_design_processing_system7_0_0.vhd
If we have a look at the VHDL file we can see that all generic constants have been set.
This file is based on the zynq7-cosim/src_vhdl/block_design_bd_processing_system7_0_0_template.vho template.
The python script will look at the VHDL stub file passed as the first argument, here /opt/pkg/projects/zedboard/vivado_project/zedboard/zedboard.srcs/sources_1/bd/block_design/ip/block_design_processing_system7_0_0/block_design_processing_system7_0_0_stub.vhdl in order to find the correct values for the parameters.
The script will infer which ports are enabled on the Zynq and extract the port widths, etc. These values could also be filled in by hand from the template.
The script will also copy the port list from the Zynq and connect any interface used in the port map in the architecture. In the architecture a zynq7_ps_wrapper is instantiated (this is the src_vhdl/zynq7_ps_wrapper.vhd file); this wrapper serves to set default values on signals that are not used (e.g., disabled ports) and instantiates the SystemC model (src_sc/zynq7_ps.h, src_sc/zynq7_ps.cc), as in the figure below.
As shown above, we now have a fully parameterized Zynq7 PS with the correct ports for this simulation (auto-generated by the script or manually derived from the template), which wraps the zynq7_ps_wrapper (unchanged), which in turn wraps the SystemC component.
The compilation script can also be written manually, based on the script shown above. One detail is that the VHDL generic parameters for the AXI widths must match the preprocessor values passed to the SystemC model (they are passed as preprocessor values because templates cannot be instantiated by generics).
We have now almost achieved a full Co-Simulation environment, as shown below.
One last bit must be configured before we can start the simulation: the Remote Port (RP) connection. Open the VHDL file that was generated in the simulation directory, in our case block_design_processing_system7_0_0.vhd, and change the path that links to the QEMU socket from the default value:
to the actual value. This is where we created the hardware device tree for QEMU, so in our case:
/opt/pkg/projects/zedboard/zedboard_petalinux_project/qemu_cosim/qemu-rport-_cosim@0
The path must be preceded by the “unix:” prefix in order to specify that it is a UNIX file socket.
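For example (the generic name below is illustrative; use whatever name appears in your generated file):
-- In block_design_processing_system7_0_0.vhd (generic name illustrative)
QEMU_PATH_TO_SOCKET_G => "unix:/opt/pkg/projects/zedboard/zedboard_petalinux_project/qemu_cosim/qemu-rport-_cosim@0"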
If you are not sure about the path, you can go into the Petalinux project directory and check it out:
If QEMU was never launched in the Co-Simulation configuration, the file may not yet exist. In that case, launch QEMU from the Petalinux project directory with the following command:
$ petalinux-boot --qemu --kernel --qemu-args "-redir tcp:1534::1534 -hw-dtb ./qemu_cosim/qemu_hw_system.dtb -machine-path ./qemu_cosim -icount 1 -sync-quantum 10000"
Once the VHDL file is set with the correct path, we can start the simulation.
Co-Simulating with QEMU and QuestaSim
First, start QEMU from the Petalinux project directory; it should wait for a connection on its Remote Port (RP).
$ petalinux-boot --qemu --kernel --qemu-args "-redir tcp:1534::1534 -hw-dtb ./qemu_cosim/qemu_hw_system.dtb -machine-path ./qemu_cosim -icount 1 -sync-quantum 10000"
Now, from the simulation directory run the all.do script.
This will start QuestaSim, which will compile everything, elaborate the design, optimize it, and finally start the simulation.
QuestaSim will stop when loading zynq7_ps, which is a SystemC entity. If we look at QEMU we can see that it stopped:
Rerun the exact same QEMU command:
$ petalinux-boot --qemu --kernel --qemu-args "-redir tcp:1534::1534 -hw-dtb ./qemu_cosim/qemu_hw_system.dtb -machine-path ./qemu_cosim -icount 1 -sync-quantum 10000"
Now QEMU will connect with QuestaSim (QuestaSim requires a connection once during optimization and then again during simulation, which is why we need to launch QEMU twice). After that, QEMU is ready but has not booted Linux yet, because it waits for the simulation to start. If we now type in QuestaSim
run -all
We can actually see QEMU starting and loading the Linux kernel.
It may take a minute or two for the kernel to fully boot. We are currently simulating the PL in QuestaSim and running the PS in QEMU.
You can now use QuestaSim and QEMU as you normally would. When you pause or stop the simulation in QuestaSim, QEMU will also pause or stop: here QuestaSim is the master and QEMU is the slave (QEMU waits for synchronization packets from QuestaSim).
After a while you will be greeted by the Linux login prompt on QEMU:
Here you can log in if you want. Since we also used the -redir tcp:1534::1534 option for QEMU, you can also use the traditional SDK (Vitis) workflow and connect to QEMU through the Eclipse TCF protocol; this allows for easy application testing and debugging through the SDK.
Example 1
For this example we will just check out the AXI transactions in QuestaSim based on some devmem accesses to the LED and switch GPIOs.
In QuestaSim add the waves you want to check out, here we chose to add the waves for the two GPIO IPs.
We also chose to add the LED and switch signals from the simulation top.
Now, let’s log into the Linux system with the default login and password (root:root). Once logged in we can start by reading the switches (which we arbitrarily set to 0b10100110 in the simulation top). The switches were mapped to 0x4120_0000; if unsure, check the block diagram address editor in Vivado. So in QEMU, once logged in, we can type:
$ devmem 0x41200000 8
The 8 in the command is used to read 8 bits.
We can see that we got the value 0xA6 = 0b10100110 that we set in the simulation top.
In QuestaSim we can now check out the transaction in the waveforms:
We can see the AXI transaction to the GPIO0 module.
Now let’s drive some LEDs; in Linux we can write the LED GPIO like this:
$ devmem 0x41210000 8 0x55
We can read back the value with:
$ devmem 0x41210000 8
Now in QuestaSim we should see a write transaction to GPIO1 followed by a read transaction to GPIO1; the value of the LEDs in the simulation top should also have been updated:
We can see the two transactions; they are separated by quite a margin because many cycles elapsed between typing the two commands into QEMU.
We also see at the bottom that the LEDs have been updated in the simulation top to 0x55.
This concludes our first example of Co-Simulation. Any kind of architecture can be simulated here, such as:
where we not only have a block design, but also RTL sources and external simulated components. This allows us to do “full-system” testing / simulation.
More complex designs
This first example is simple, but the Co-Simulation also works with more complex designs, such as this example with a DMA:
which shows multiple AXI ports being used (e.g., M_AXI_GP0, S_AXI_GP0, S_AXI_HP0) as well as interrupts from PL to PS as shown above.
Example 2
This second example shows how to use the co-simulation with SDK / Vitis.
In order to debug with Xilinx SDK or Vitis, create a Linux application as you normally would and configure the debug configuration as follows (shown in Vitis 2019.2, but this is similar in XSDK, e.g., 2017.4):
Create a new “Single Application Debug” and choose a new target:
Give the target a name that reminds you it accesses QEMU. The port is 1534, the port we redirected in the QEMU command (in order to allow the TCF protocol to go through). Here you can test the connection while the Co-Simulation is running.
The other parameters can be left at their defaults. We can now run and debug a basic example (hello world):
Here we may wonder whether this really ran on the Linux kernel that runs in the Co-Simulation. In QEMU we can actually see the application that was loaded by running:
$ ls /mnt
If we look at the debug configuration options:
We can see that TCF does load the application to /mnt.
Since printing “Hello World” may not be the most interesting example, we can memory-map the physical address 0x4121_0000 and read the value (the current value of the LEDs):
#include <stdio.h>
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/mman.h>

/* Map a physical memory region into the process address space through
 * /dev/mem and return a virtual pointer to mem_address. */
void *get_pointer_to_memory(const uint32_t mem_address, const uint32_t mem_size)
{
    int mem_dev = open("/dev/mem", O_RDWR | O_SYNC);
    if (mem_dev == -1) {
        printf("[ERROR] : could not open /dev/mem\n");
        return NULL;
    }

    uint32_t alloc_mem_size, page_mask, page_size;
    void *mem_pointer, *virt_addr;

    page_size = sysconf(_SC_PAGESIZE);
    alloc_mem_size = (((mem_size / page_size) + 1) * page_size);
    page_mask = (page_size - 1);

    /* mmap() requires a page-aligned offset */
    mem_pointer = mmap(NULL,
                       alloc_mem_size,
                       PROT_READ | PROT_WRITE,
                       MAP_SHARED,
                       mem_dev,
                       (mem_address & ~page_mask));
    if (mem_pointer == MAP_FAILED) {
        printf("[ERROR] : mmap() failed\n");
        return NULL;
    }

    /* Add the in-page offset back to the mapped base address */
    virt_addr = (mem_pointer + (mem_address & page_mask));
    return virt_addr;
}

int main()
{
    /* 0x4121_0000 is the physical address of the LED GPIO */
    uint32_t *p = (uint32_t *)get_pointer_to_memory(0x41210000, 0x100);
    if (p == NULL) {
        return 1;
    }
    printf("LED value is : 0x%02X\n", *p);
    return 0;
}
Let’s run the example in debug mode with the configuration we created above; you can go step by step through the code or just let it continue. The example will show the value of the LED GPIO register:
If we check QuestaSim we can see the transaction in the waveforms:
This concludes the second example that shows SDK / Vitis with co-simulation.
Notes
When you quit QuestaSim (Master) it will also quit QEMU (Slave) which will end the Co-Simulation session.
Notes for when the Vivado project is updated
Updates in the Vivado project must be incorporated into the Co-Simulation environment. It is always possible to redo the steps above, but small changes, such as edits to existing files, do not require redoing anything (the files simply get recompiled). Otherwise:
- If the Vivado design gets updated, it may be required to update the PetaLinux project. This is mostly the case if you add IPs that rely on Linux driver support and need to update the device tree or kernel for this support (check out [UG1144] on how to do this).
- In order to regenerate the simulation files (e.g., if you add files to the Vivado project, add IPs to the block diagram, or change the simulation top), click Simulation – Run Simulation – Run Behavioral Simulation; this will regenerate all the Vivado scripts without modifying the scripts and files we added to the simulation for Co-Simulation.
- If the Zynq interfaces changed (e.g., added clocks or AXI interfaces, or changed data widths), the Python script that generates the wrapper and simulation scripts must be rerun, or the wrapper and compilation script must be edited manually to reflect the changes (more error-prone).
Clocks in the co-simulation are set through the generated VHDL wrapper, by default at 100 kHz (10’000 ns period) in order not to slow down the simulation too much. All clocks have this default frequency. If your design uses, e.g.,
- FCLK_CLK0 @ 50 MHz
- FCLK_CLK1 @ 100 MHz
- FCLK_CLK2 @ 200 MHz
set the periods to:
- 20’000 ns
- 10’000 ns
- 5’000 ns
This keeps the correct ratios between the clocks. Depending on your application you can scale the clock periods more or less.
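A sketch of what this could look like, assuming the wrapper exposes one period generic per clock (the generic names below are illustrative; check the generated wrapper or testbench for the actual names):
-- Illustrative generic map entries in the generated wrapper / testbench
G_FCLK_CLK0_PERIOD_NS => 20000,  -- 50 MHz, scaled
G_FCLK_CLK1_PERIOD_NS => 10000,  -- 100 MHz, scaled
G_FCLK_CLK2_PERIOD_NS => 5000    -- 200 MHz, scaled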
Supported Zynq7 Interfaces for Co-Simulation
Currently the following interfaces are supported:
- M_AXI_GP 0 and 1
- S_AXI_GP 0 and 1
- S_AXI_HP 0,1,2, and 3
- FCLK_CLK 0,1,2, and 3 and associated resets
- IRQ_F2P (all)
Other interfaces, such as outputs from hard cores in the SoC (SPI, UART, USB, etc.), are not supported in the PL but can be emulated by QEMU (or added to LibSystemCTLM-SoC and Remote Port and then to the simulation; this is all open source…).
Generate Xilinx libraries for QuestaSim
The Xilinx libraries must be compiled for QuestaSim in order to simulate the Xilinx IPs. This can be done through Vivado (here shown in version 2017.4):
We should make sure that the compilation was successful. Look at the resulting output on the Tcl command line; there will probably be multiple errors due to the compilation commands using the “-novopt” option, which is deprecated in QuestaSim (since 10.6, IIRC) and will throw an error.
There are two solutions to this: either change the global QuestaSim Settings to suppress the error, or change the settings used by Vivado to generate the commands.
- Edit the QuestaSim settings to suppress the “-novopt” error globally. To do so, edit the modelsim.ini in /path/to/questasim/install/questasim/modelsim.ini and add: suppress = 12110 (typically under the [msg_system] section).
- Edit the file config_compile_simlib.acd in /path/to/xilinx/install/Xilinx/Vivado/20XX.X/data/parts/xilinx/compxlib/config_compile_simlib.acd and remove the “-novopt” option in the QuestaSim-related lines, e.g.:
From:
questasim.verilog.simprim:-source -novopt +define+XIL_TIMING:string:library compile option
To:
questasim.verilog.simprim:-source +define+XIL_TIMING:string:library compile option
Compile the libraries again and check the reports. If a library still has an error, find out why, fix the problem, and recompile the library. The commands used to compile each library, and the logs, are available as hidden files in the directory where the library was compiled (e.g., .cxl.verilog.secureip.secureip.lin64.cmd, .cxl.verilog.secureip.secureip.lin64.log). The logs allow us to understand what went wrong, and the commands allow us to replicate the compilation (sometimes with edited commands to fix a problem), for example as sketched below.
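A minimal sketch of inspecting and replaying a failed library compilation, assuming the libraries were compiled into a compiled_libs directory (file names taken from the text above; whether the .cmd file can be replayed directly with sh depends on its content):
$ cd /path/to/compiled_libs/secureip
$ ls -a                                          # the hidden .cxl.* command and log files
$ less .cxl.verilog.secureip.secureip.lin64.log  # inspect what went wrong
$ sh .cxl.verilog.secureip.secureip.lin64.cmd    # replay the compilation command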
For Vivado 2019.2 there are some extra steps required, such as:
creating symbolic links in questasim/gcc-7.4.0-linux_x86_64/lib/gcc/x86_64-pc-linux-gnu/7.4.0/ (see the sketch after this list) to:
/usr/lib/x86_64-linux-gnu/crt*
/usr/lib/x86_64-linux-gnu/libc.*
/usr/lib/x86_64-linux-gnu/libm.*
This is because with Vivado 2019.2 the IP compilation builds some C/C++ files and requires these files, which do not come with the default QuestaSim install.
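A sketch of these symbolic links, assuming a Debian/Ubuntu-style library layout as in the paths above:
$ cd /path/to/questasim/gcc-7.4.0-linux_x86_64/lib/gcc/x86_64-pc-linux-gnu/7.4.0/
$ ln -s /usr/lib/x86_64-linux-gnu/crt* .
$ ln -s /usr/lib/x86_64-linux-gnu/libc.* .
$ ln -s /usr/lib/x86_64-linux-gnu/libm.* .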
The C/C++ part of secureip will probably not compile correctly (change GCC 5.3.0 to 7.4.0 and rerun the commands for secureip; the commands are in the .cxl.xxxxxx.cmd files).
If QuestaSim complains about ld not being found, add a symbolic link to the system ld in the QuestaSim directory, e.g., in the questasim/gcc-7.4.0-linux_x86_64/lib/gcc/x86_64-pc-linux-gnu/7.4.0/ directory.
Conclusion
This step-by-step guide has shown how to accomplish full-system co-simulation, relying on QEMU to emulate the PS and QuestaSim to simulate the PL, providing a complete development and debug solution for Zynq-based projects.
This is a very powerful tool for the development of Linux drivers for PL components and simulation of the whole FPGA design.
This co-simulation setup could also be adapted to run bare-metal applications on QEMU, as well as FreeRTOS.
References
Below are links to the documentation and repositories used for this tutorial.
Xilinx documentation
- UG1144: PetaLinux Tools Reference Guide
https://www.xilinx.com/support/documentation/sw_manuals/xilinx2019_2/ug1144-petalinux-tools-reference-guide.pdf
- UG1157: PetaLinux Tools Documentation – Command Line Reference Guide
https://www.xilinx.com/support/documentation/sw_manuals/xilinx2019_1/ug1157-petalinux-tools-command-line-guide.pdf
- UG1169: Xilinx QEMU User Guide
https://www.xilinx.com/support/documentation/sw_manuals/xilinx2018_2/ug1169-xilinx-qemu.pdf
- UG1165: Zynq-7000 SoC: Embedded Design Tutorial
https://www.xilinx.com/support/documentation/sw_manuals/xilinx2019_2/ug1165-zynq-embedded-design-tutorial.pdf
Xilinx repositories
- Co-Simulation library – LibSystemCTLM-SoC
https://github.com/Xilinx/libsystemctlm-soc
- Xilinx Co-Simulation demos
https://github.com/Xilinx/systemctlm-cosim-demo
- Xilinx Linux fork
https://github.com/Xilinx/linux-xlnx
- Xilinx U-Boot fork
https://github.com/Xilinx/u-boot-xlnx
- Xilinx QEMU fork
https://github.com/Xilinx/qemu
Related projects
Tightly-Coupled Co-simulation Framework for RISC-V Based Systems
Link to publication
Code : https://gitlab.com/reds-public/tccf
Comments
Wow what a great write up!
Do you need to do all the scripting work if you use Xsim?
Hello.
Thank you for your interest. I agree it would be interesting to have this work with XSim. I don’t know if XSim supports SystemC (UG900 2019.2 explains how to compile C code and link it to SystemVerilog through the DPI but does not mention SystemC…).
The global approach would be the same except for the compilation script (the generated zynq7_compile_cosim.do). This script compiles all the SystemC files and VHDL wrappers.
The QuestaSim “sccom” would have to be replaced by the XSim “xsc” and then it should be linked correctly. What I am not sure about is whether you can instantiate a SystemC module in VHDL or (System)Verilog in XSim.
From what I found on the forums: https://forums.xilinx.com/t5/Simulation-and-Verification/How-can-I-run-a-systemC-simulation-in-the-Vivado-simulator/td-p/1022635 it does not seem to be directly possible.
Therefore I don’t think the approach is compatible with XSim for the moment. The possibility I see for converting this to XSim would be to write a SystemVerilog module and use the DPI to call C or SystemC functions.
The entire Zynq7 could be written in SystemVerilog and communicate with the Remote Port (RP) through a C socket with the DPI. This would also require writing an AXI-Full BFM (one slave and one master), but this only needs to be done once and can be adapted from the code here: https://github.com/Xilinx/libsystemctlm-soc/blob/master/tlm-bridges/axi2tlm-bridge.h https://github.com/Xilinx/libsystemctlm-soc/blob/master/tlm-bridges/tlm2axi-bridge.h (the TLM layer can be replaced by SystemVerilog queues).
Anyway, if you find a way to instantiate a SystemC module in VHDL or Verilog in an XSim simulation, I would gladly help adapt the scripts for XSim.
Do you know if this will also work with modelsim?
Hello.
I think that if you have support for SystemC 2.3.2 and mixed-language simulation it should work.
Some versions of ModelSim may not allow mixed-language simulation (e.g., ModelSim PE Student Edition), so it really depends on the version.
For example, with QuestaSim I did not manage to make this work with older versions, e.g., 10.7c, due to the SystemC version.
So if you can simulate VHDL and (System)Verilog together and instantiate a SystemC module in VHDL/(System)Verilog, it should work.
The commands for ModelSim/QuestaSim should be the same, so I don’t think the script requires any modification.
Let us know about your experience; if the scripts or files require some modifications, you can report an issue or open a pull request on GitHub (https://github.com/rick-heig/zynq7-cosim) and I will add them to the repository so that other users can use them.
Regards, Rick.
Hi, can I do it with Vivado WebPack Edition?
Hello,
This will work with the WebPack Edition, provided you use QuestaSim as the simulator.
Will it work for the ACP and HPC 0/1 ports?
The high-performance ports HP 0 and 1 work; the DMA example above uses HP0 to access the DDR.
ACP is not implemented.
Hello, I can’t use Vivado 2019.2; if I use Vivado 2020.2, will it work?
This is probably fine; however, there may be issues with the version of Petalinux used.
QEMU and the LibSystemCTLM-SoC library have evolved since I wrote this post. The GitHub code I provided here as an example was not updated (no time to do this), so if the protocol (libremote-port, which handles communication between QEMU and the sim) changed in newer versions of QEMU provided with Petalinux, there may be incompatibilities between QEMU and the simulation. However, everything is open source, so with some coding it is possible to update everything.
https://github.com/Xilinx/libsystemctlm-soc
https://github.com/Xilinx/qemu
Regards,
Rick
Hi Rick,
First of all, I would like to thank you for providing such an amazing write up and detailed steps. I can imagine that doing it in a custom way wouldn’t have been easy 🙂
I am using Vivado 2019.2 and Questa 2019.2; I tried the same steps and am getting an error during elaboration.
Could you help if you have seen or are aware of this issue? Any hint would be of really great help!
Here is my elaboration log.
# -- Loading entity block_design_xbar_0
# -- Loading architecture STRUCTURE of m00_couplers_imp_1X21ZCV
# -- Loading architecture STRUCTURE of m01_couplers_imp_1UJ7QBJ
# -- Loading architecture STRUCTURE of s00_couplers_imp_1RQO0KS
# -- Loading entity block_design_auto_pc_0
# -- Loading module block_design_auto_pc_0
# -- Loading module axi_protocol_converter_v2_1_20.axi_protocol_converter_v2_1_20_axi_protocol_converter
# -- Loading module block_design_xbar_0
# -- Loading module axi_crossbar_v2_1_21.axi_crossbar_v2_1_21_axi_crossbar
# -- Loading architecture struct of block_design_processing_system7_0_0
# -- Loading entity zynq7_ps_wrapper
# -- Loading architecture struct of zynq7_ps_wrapper
# -- Loading entity zynq7_ps
# -- Loading shared library ~/Desktop/vivado_project/zedboard/zedboard.sim/sim_1/behav/questa/questa_lib/msim/xil_defaultlib/_sc/linux_x86_64_gcc-5.3.0/systemc.so
# -- Loading systemc module ~/Desktop/vivado_project/zedboard/zedboard.sim/sim_1/behav/questa/questa_lib/msim/xil_defaultlib.zynq7_ps
# : No such file or directory
Thanks in Advance.
From the log, it seems that either the library “xil_defaultlib” is missing or the module zynq7_ps is missing from the library. But it seems to me that the library as a whole is missing: check in the path …/sim_1/behav/questa/questa_lib/msim/ whether you have it, and check for prior errors in the log(s) to see why it failed to be generated.
Regards,
Rick
Rick,
Thanks for sharing this. I am trying to replicate this on a CentOS 7 system.
What should I do if Linux is not booting after executing the “run -all” command in Questa?
I can see that QEMU gets disconnected when I close Questa. What can be the root cause of this?
Thanks
Vinay
I suspect that for some reason the communication between the simulation and QEMU is stuck; the libremote-port library handles the communication between them. You can try to debug the sim (SystemC and C side) to see what it is doing, because QEMU will wait for packets to be sent from the simulation side.
The simulation and emulation advance in lockstep (with a time quantum defined by the “sync-quantum” option), and QEMU will wait for Questa.
At the time I wrote this post, the LibSystemCTLM-SoC library (which includes libremote-port) was in its early stages, so it is possible something changed and is locking up the simulation. I cannot say with certainty, but if you debug the C and SystemC side in Questa you’ll probably find where it is stuck. The lib is open source, so you can try to figure out what is happening: https://github.com/Xilinx/libsystemctlm-soc (check the libremote-port source). More specifically, in https://github.com/Xilinx/libsystemctlm-soc/blob/master/libremote-port/remote-port-tlm.cc check if sync packets are sent and, if not, try to find out why. If the sync packets are not sent to QEMU, the emulation will wait.
Source on the QEMU side is https://github.com/Xilinx/qemu/blob/master/hw/core/remote-port.c and related files in the same folder.
The source has probably evolved on both sides since I wrote this post, so you may need to do some slight hacking here and there. The QEMU version provided with Petalinux is updated, but I did not update my git fork provided here as an example; this could result in an incompatibility between the two and is probably the issue here.
I am sorry I cannot give more details; I don’t have the time to update my GitHub repo each time Xilinx updates their library (libremote-port or LibSystemCTLM-SoC). This is more of a proof of concept showing that full-system cosimulation is possible. If you want to implement this with the newest version, you’ll have to patch some things. Anyway, Xilinx QEMU, libremote-port, and LibSystemCTLM-SoC are all open source, so with a bit of work you can replicate the setup in this blog. (I hope someday Xilinx will provide a full setup with support for this, because they have all the tools and it should not be that much trouble; maybe it will come one day.)
Best Regards,
Rick
Hello Rick,
I was trying to implement your tutorial. First, I would like to thank you for this wonderful tutorial.
My problem is that whenever I try to run this:
petalinux-boot --qemu --kernel --qemu-args "-redir tcp:1534::1534 -hw-dtb ./qemu_cosim/qemu_hw_system.dtb -machine-path ./qemu_cosim -icount 1 -sync-quantum 10000"
it says -redir is an invalid option. Do you have any solution?
Hello.
The documentation can be found here: https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/862912682/Networking+in+QEMU (the -redir option is documented there). When I wrote my post, the -redir option was already deprecated (see screenshot); the issue is that the version of QEMU you use is newer and the option (-redir) is no longer accepted (it was removed).
Screenshot : https://blog.reds.ch/wp-content/uploads/2020/05/38_plnx.png
As shown in the screenshot, it says that -redir is deprecated and advises to use “-netdev user,hostfwd=…” instead. So replacing the “-redir” option by something like “-netdev user,hostfwd=tcp::1534-:1534” should fix the issue (this forwards TCP traffic between the VM and the host on port 1534).
For further info check the references below.
References:
QEMU doc : https://wiki.qemu.org/Documentation/Networking
Xilinx QEMU doc : https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/862912682/Networking+in+QEMU
Dear Rick,
Do you think it will work with the Intel Questa Starter edition?
Is the directory structure the same as the Siemens QuestaSim?
I found that the Intel Questa edition is free and supports mixed simulation. It’s actually the Siemens version.
But can I hook it up with Vivado?