
Platform Evaluation Script: Open FPGA Stack for Intel Agilex FPGA

Term Abbreviation Description
Advanced Error Reporting AER The PCIe AER driver is the extended PCI Express error reporting capability providing more robust error reporting.
Accelerator Functional Unit AFU Hardware Accelerator implemented in FPGA logic which offloads a computational operation for an application from the CPU to improve performance. Note: An AFU region is the part of the design where an AFU may reside. This AFU may or may not be a partial reconfiguration region.
Basic Building Block BBB Features within an AFU or part of an FPGA interface that can be reused across designs. These building blocks do not have the stringent interface requirements that the FIM's AFU and host interfaces require. All BBBs must have a globally unique identifier (GUID).
Best Known Configuration BKC The software and hardware configuration Intel uses to verify the solution.
Board Management Controller BMC Supports features such as board power management, flash management, configuration management, and board telemetry monitoring and protection. The majority of the BMC logic is in a separate component, such as an Intel Max10 or Intel Cyclone10 device; a small portion of the BMC, known as the PMCI, resides in the main Agilex FPGA.
Configuration and Status Register CSR The generic name for a register space which is accessed in order to interface with the module it resides in (e.g. AFU, BMC, various sub-systems and modules).
Data Parallel C++ DPC++ DPC++ is Intel’s implementation of the SYCL standard. It supports additional attributes and language extensions which ensure DPC++ (SYCL) is efficiently implemented on Intel hardware.
Device Feature List DFL The DFL, which is implemented in RTL, consists of a self-describing data structure in PCI BAR space that allows the DFL driver to automatically load the drivers required for a given FPGA configuration. This concept is the foundation for the OFS software framework.
FPGA Interface Manager FIM Provides platform management, functionality, clocks, resets and standard interfaces to host and AFUs. The FIM resides in the static region of the FPGA and contains the FPGA Management Engine (FME) and I/O ring.
FPGA Management Engine FME Performs reconfiguration and other FPGA management functions. Each FPGA device only has one FME which is accessed through PF0.
Host Exerciser Module HEM Host exercisers are used to exercise and characterize the various host-FPGA interactions, including Memory Mapped Input/Output (MMIO), data transfer from host to FPGA, PR, host to FPGA memory, etc.
Input/Output Control IOCTL System calls used to manipulate underlying device parameters of special files.
Intel Virtualization Technology for Directed I/O Intel VT-d Extension of the VT-x and VT-I processor virtualization technologies which adds new support for I/O device virtualization.
Joint Test Action Group JTAG Refers to the IEEE 1149.1 JTAG standard; also used as an FPGA configuration methodology.
Memory Mapped Input/Output MMIO The memory space that users may map in order to access both control registers and system memory buffers shared with accelerators.
oneAPI Accelerator Support Package oneAPI-asp A collection of hardware and software components that enable a oneAPI kernel to communicate with the oneAPI runtime and OFS shell components. The oneAPI ASP hardware components and the oneAPI kernel form the AFU region of a oneAPI system in OFS.
Open FPGA Stack OFS OFS is a software and hardware infrastructure providing an efficient approach to develop a custom FPGA-based platform or workload using an Intel, 3rd party, or custom board.
Open Programmable Acceleration Engine Software Development Kit OPAE-SDK The OPAE-SDK is a software framework for managing and accessing programmable accelerators (FPGAs). It consists of a collection of libraries and tools to facilitate the development of software applications and accelerators. The OPAE SDK resides exclusively in user-space.
Platform Interface Manager PIM An interface manager that comprises two components: a configurable platform specific interface for board developers and a collection of shims that AFU developers can use to handle clock crossing, response sorting, buffering and different protocols.
Platform Management Controller Interface PMCI The portion of the BMC that resides in the Agilex FPGA and allows the FPGA to communicate with the primary BMC component on the board.
Partial Reconfiguration PR The ability to dynamically reconfigure a portion of an FPGA while the remaining FPGA design continues to function. For OFS designs, the PR region is referred to as the pr_slot.
Port N/A When used in the context of the fpgainfo port command it represents the interfaces between the static FPGA fabric and the PR region containing the AFU.
Remote System Update RSU The process by which the host can remotely update images stored in flash through PCIe. This is done with the OPAE software command "fpgasupdate".
Secure Device Manager SDM The SDM is the point of entry to the FPGA for JTAG commands and interfaces, as well as for device configuration data (from flash, SD card, or through PCI Express* hard IP).
Static Region SR The portion of the FPGA design that cannot be dynamically reconfigured during run-time.
Single-Root Input-Output Virtualization SR-IOV Allows the isolation of PCI Express resources for manageability and performance.
SYCL SYCL SYCL (pronounced "sickle") is a royalty-free, cross-platform abstraction layer that enables code for heterogeneous and offload processors to be written using modern ISO C++ (at least C++ 17). It provides several features that make it well-suited for programming heterogeneous systems, allowing the same code to be used for CPUs, GPUs, FPGAs or any other hardware accelerator. SYCL was developed by the Khronos Group, a non-profit organization that develops open standards (including OpenCL) for graphics, compute, vision, and multimedia. SYCL is being used by a growing number of developers in a variety of industries, including automotive, aerospace, and consumer electronics.
Test Bench TB A testbench, or verification environment, is used to check the functional correctness of the Design Under Test (DUT) by generating and driving a predefined input sequence to the design, capturing the design output and comparing it against the expected output.
Universal Verification Methodology UVM A modular, reusable, and scalable testbench structure provided via an API framework. In the context of OFS, the UVM environment provides a system-level simulation environment for your design.
Virtual Function Input/Output VFIO An Input-Output Memory Management Unit (IOMMU)/device agnostic framework for exposing direct device access to userspace.

1 Overview

1.1 About this Document

This document serves as a set-up and user guide for the checkout and evaluation of an Intel® FPGA SmartNIC N6001-PL development platform using Open FPGA Stack (OFS). After reviewing the document, you will be able to:

  • Set up and modify the script for your environment
  • Compile and simulate an OFS reference design
  • Run hardware and software tests to evaluate the complete OFS flow

Table 1-2: Software Version Summary

Component Version Description
FPGA Platform Intel® FPGA SmartNIC N6001-PL Intel platform you can use for your custom board development
OFS FIM Source Code Branch: https://github.com/OFS/ofs-n6001, Tag: ofs-2023.1-1 OFS Shell RTL for Intel Agilex FPGA (targeting Intel® FPGA SmartNIC N6001-PL)
OFS FIM Common Branch: release/ofs-2023.1, Tag: https://github.com/OFS/ofs-fim-common/releases/tag/ofs-2023.1-1 Common RTL across all OFS-based platforms
AFU Examples Branch: examples-afu, Tag: https://github.com/OFS/examples-afu/releases/tag/ofs-2023.1-1 Tutorials and simple examples for the Accelerator Functional Unit region (workload region)
OPAE SDK Branch: 2.5.0-3, Tag: 2.5.0-3 Open Programmable Acceleration Engine Software Development Kit
Kernel Drivers Branch: ofs-2023.1-6.1-1, Tag: ofs-2023.1-6.1-1 OFS specific kernel drivers
OPAE Simulation Branch: opae-sim, Tag: 2.5.0-3 Accelerator Simulation Environment for hardware/software co-simulation of your AFU (workload)
Intel Quartus Prime Pro Edition Design Software 23.1 [Intel® Quartus® Prime Pro Edition Linux] Software tool for Intel FPGA Development
Operating System RHEL 8.6 Operating system on which this script has been tested

A download page containing the release and already-compiled FIM binary artifacts that you can use for immediate evaluation on the Intel® FPGA SmartNIC N6001-PL can be found on the OFS ofs-2023.1-1 official release drop on GitHub.


2 Introduction to OFS Evaluation Script

By following the setup steps and using the OFS evaluation script, you can quickly evaluate many features of the OFS framework and also leverage the script for your own development.

2.1 Pre-Requisites

This script uses the following software tools, which should be installed using the directory structure below. Tool versions can vary.

  • Intel Quartus® Prime Pro Software
  • Synopsys® VCS Simulator
  • Siemens® Questa® Simulator

Figure 2-1 Folder Hierarchy for Software Tools

  1. You must create a directory named "ofs-X.X.X", where X.X.X represents the current release number, for example ofs-2023.1-1.

  2. You must clone the required OFS repositories as per Figure 2-2. Please refer to the BKC table for locations. When cloning the FIM repository, please follow the instructions in sections 4.1 and 4.2 of the Intel® FPGA Interface Manager Developer Guide: OFS for Intel® Agilex® PCIe Attach FPGAs. Additionally, please refer to the [OFS Getting Started User Guide] for instructions on the BKC software installation.

  3. Once the repositories are cloned, copy the evaluation script (ofs_n6001_eval.sh), located in [eval_scripts], into the $IOFS_BUILD_ROOT directory as shown in the example below:

Figure 2-2 Directory Structure for OFS Project

## ofs-2023.1-1
##  -> examples-afu
##  -> linux-dfl
##  -> ofs-n6001
##  -> oneapi-asp
##  -> oneAPI-samples
##  -> opae-sdk
##  -> opae-sim
##  -> ofs_n6001_eval.sh
  4. Open the README file (README_ofs_n6001_eval.txt), also located in [eval_scripts], which describes the sections of the script to modify prior to building the FIM and running hardware, software and simulation tests.
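The directory creation and cloning steps above can be sketched in shell. The repository URLs below are assumptions inferred from the repository names in Figure 2-2 (note that oneAPI-samples is hosted under the oneapi-src organisation rather than OFS), so confirm each one against the BKC table before cloning:

```shell
#!/usr/bin/env bash
# Sketch only: create the top-level directory from Figure 2-2 and print the
# clone commands to run. Repository URLs are assumptions -- confirm them
# against the BKC table before cloning.
RELEASE=ofs-2023.1-1
mkdir -p "$RELEASE"

for repo in examples-afu linux-dfl ofs-n6001 oneapi-asp opae-sdk opae-sim; do
  echo "git clone https://github.com/OFS/${repo}.git ${RELEASE}/${repo}"
done
# oneAPI-samples lives under the oneapi-src organisation
echo "git clone https://github.com/oneapi-src/oneAPI-samples.git ${RELEASE}/oneAPI-samples"
```

Printing the commands instead of running them keeps the sketch safe to try before the proxy variables from section 2.2 are in place; paste the ones you need once network access is configured.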

2.2 n6001 Evaluation Script modification

To adapt this script to your environment, please follow the instructions below, which explain which line numbers to change in the ofs_n6001_eval.sh script.

User Directory Creation

The user must create the top-level source directory and then clone the OFS repositories

mkdir ofs-2023.1-1

In the example above we have used ofs-2023.1-1 as the directory name

Set-Up Proxy Server (lines 65-67)

Please enter the location of your proxy server to allow access to the external internet for building software packages.

Note: Failing to add a proxy server will prevent the repositories from being cloned, and the user will be unable to build the OFS framework.

export http_proxy=<user_proxy>
export https_proxy=<user_proxy>
export no_proxy=<user_proxy>
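Before starting a build, it can be worth failing fast if any of these variables is missing. A minimal sketch, assuming a placeholder proxy URL (proxy.example.com:912 is not a real server):

```shell
#!/usr/bin/env bash
# Placeholder proxy values -- substitute your own proxy server
export http_proxy="http://proxy.example.com:912"
export https_proxy="http://proxy.example.com:912"
export no_proxy="localhost,127.0.0.1"

# Fail fast if any variable the build relies on is unset or empty
for v in http_proxy https_proxy no_proxy; do
  if [ -z "${!v}" ]; then
    echo "ERROR: $v is not set" >&2
    exit 1
  fi
  echo "$v=${!v}"
done
```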

License Files (lines 70-72)

Please enter the license file locations for the following tool variables

export LM_LICENSE_FILE=<user_license>
export DW_LICENSE_FILE=<user_license>
export SNPSLMD_LICENSE_FILE=<user_license>

Tools Location (lines 85-88)

Set Location of Quartus, Synopsys, Questasim and oneAPI Tools

export QUARTUS_TOOLS_LOCATION=/home
export SYNOPSYS_TOOLS_LOCATION=/home
export QUESTASIM_TOOLS_LOCATION=/home
export ONEAPI_TOOLS_LOCATION=/opt

Quartus Tools Version (line 93)

Set version of Quartus

export QUARTUS_VERSION=23.1

In the example above "23.1" is used as the Quartus tools version

OPAE Tools (line 106)

Change the OPAE SDK version

export OPAE_SDK_VERSION=2.5.0-3

In the example above "2.5.0-3" is used as the OPAE SDK tools version

PCIe (Bus Number) (lines 231 and 238)

The bus number must be entered by the user after installing the hardware in the chosen server. In the example below, "b1" is the bus number for a single card, as defined in the evaluation script.

export ADP_CARD0_BUS_NUMBER=b1

The evaluation script uses the bus number as an identifier to interrogate the card. The command below will identify the accelerator card plugged into a server.

lspci | grep acc

b1:00.0 Processing accelerators: Intel Corporation Device bcce (rev 01)
b1:00.1 Processing accelerators: Intel Corporation Device bcce
b1:00.2 Processing accelerators: Intel Corporation Device bcce
b1:00.3 Processing accelerators: Red Hat, Inc. Virtio network device
b1:00.4 Processing accelerators: Intel Corporation Device bcce

The result identifies the card as being assigned "b1" as the bus number, so the entry in the script changes to

export ADP_CARD0_BUS_NUMBER=b1

The user can also run the following command to automatically replace the default bus number (b1) throughout the ofs_n6001_eval.sh script, substituting the detected bus number for <new_bus>.

grep -rli 'b1' * | xargs -i@ sed -i 's/b1/<new_bus>/g' @

If the bus number is 85, for example

85:00.0 Processing accelerators: Intel Corporation Device bcce (rev 01)
85:00.1 Processing accelerators: Intel Corporation Device bcce
85:00.2 Processing accelerators: Intel Corporation Device bcce
85:00.3 Processing accelerators: Red Hat, Inc. Virtio network device
85:00.4 Processing accelerators: Intel Corporation Device bcce

the command to change the bus number to 85 in the evaluation script would be

grep -rli 'b1' * | xargs -i@ sed -i 's/b1/85/g' @
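Before running the substitution on the real script, you can rehearse it on a stand-in file. In this sketch, eval_demo.sh and the bus number 85 are purely illustrative:

```shell
#!/usr/bin/env bash
# Rehearse the bus-number substitution on a stand-in file.
# eval_demo.sh is hypothetical; run the real command against ofs_n6001_eval.sh.
NEW_BUS=85   # taken from your lspci output, e.g. "85:00.0 Processing accelerators"

printf 'export ADP_CARD0_BUS_NUMBER=b1\n' > eval_demo.sh
sed -i "s/b1/${NEW_BUS}/g" eval_demo.sh   # GNU sed in-place edit (Linux)
cat eval_demo.sh                          # prints: export ADP_CARD0_BUS_NUMBER=85
```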

The ofs_n6001_eval.sh script has now been modified for the server set-up, and the user can proceed to build, compile and simulate the OFS stack.


3 n6001 Evaluation Script

3.1 Overview

The figure below shows a snapshot of the full evaluation script menu, with all 62 options organised into 11 sub-menus, each focusing on a different area of evaluation. Each menu option is described in the next section.

Figure 3-1 ofs_n6001_eval.sh Evaluation Menu

3.1.1 ADP TOOLS MENU

By selecting "List of Documentation for ADP n6001 Project," a list of links to the latest OFS documentation appears. Note that these links will take you to documentation for the most recent release, which may not correspond to the release version you are evaluating. To find the documentation specific to your release, ensure you clone the intel-ofs-docs tag that corresponds to your OFS version.

By selecting "Check Versions of Operating System and Quartus Premier Design Suite", the tool verifies correct Operating System, Quartus version, kernel parameters, license files and paths to installed software tools.

Menu Option Example Output
1 - List of Documentation for ADP n6001 Project Open FPGA Stack Overview
Guides you through the setup and build steps to evaluate the OFS solution
https://ofs.github.io
2 - Check versions of Operating System and Quartus Premier Design Suite (QPDS) Checking Linux release
Linux version 5.15.52-dfl (guest@hw-rae-svr4-l) (gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-4), GNU ld version 2.30-79.el8) #1 SMP Fri Sep 23 17:19:37 BST 2022

Checking RedHat release
CentOS Linux release 8.3.2011

Checking Ubuntu release
cat: /etc/lsb-release: No such file or directory

Checking Kernel parameters
BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.15.52-dfl root=/dev/mapper/cl-root ro crashkernel=auto resume=/dev/mapper/cl-swap rd.lvm.lv=cl/root rd.lvm.lv=cl/swap rhgb quiet intel_iommu=on pcie=realloc hugepagesz=2M hugepages=200

Checking Licenses
LM_LICENSE_FILE is set to port@socket number:port@socket number
DW_LICENSE_FILE is set to port@socket number:port@socket number
SNPSLMD_LICENSE_FILE is set to port@socket number:port@socket number

Checking Tool versions
QUARTUS_HOME is set to /home/intelFPGA_pro/23.1/quartus
QUARTUS_ROOTDIR is set to /home/intelFPGA_pro/23.1/quartus
IMPORT_IP_ROOTDIR is set to /home/intelFPGA_pro/23.1/quartus/../ip
QSYS_ROOTDIR is set to /home/intelFPGA_pro/23.1/quartus/../qsys/bin

Checking QPDS Patches
Quartus Prime Shell
Version 23.1 Build XXX XX/XX/XXXX Patches X.XX SC Pro Edition
Copyright (C) XXXX Intel Corporation. All rights reserved.

3.1.2 ADP HARDWARE MENU

Identifies card by PCIe number, checks power, temperature and current firmware configuration.

Menu Option Example Output
3 - Identify Acceleration Development Platform (ADP) n6001 Hardware via PCIe PCIe card detected as
b1:00.0 Processing accelerators: Intel Corporation Device bcce (rev 01)
b1:00.1 Processing accelerators: Intel Corporation Device bcce
b1:00.2 Processing accelerators: Intel Corporation Device bcce
b1:00.4 Processing accelerators: Intel Corporation Device bcce
Host Server is connected to SINGLE card configuration

4 - Identify the Board Management Controller (BMC) Version and check BMC sensors Intel Acceleration Development Platform N6001
Board Management Controller NIOS FW version: 3.2.0
Board Management Controller Build version: 3.2.0
//****** BMC SENSORS ******//
Object Id : 0xEE00000
PCIe s:b:d.f : 0000:B1:00.0
Vendor Id : 0x8086
Device Id : 0xBCCE
SubVendor Id : 0x8086
SubDevice Id : 0x1771
Socket Id : 0x00
Ports Num : 01
Bitstream Id : 0x50102027135A894
Bitstream Version : 5.0.1
Pr Interface Id : 7dbb989d-5eb9-54f4-8a74-40ddff52e0e2


5 - Identify the FPGA Management Engine (FME) Version Intel Acceleration Development Platform N6001
Board Management Controller NIOS FW version: 3.2.0
Management Controller Build version: 3.2.0
//****** FME ******//
Object Id : 0xEE00000
PCIe s:b:d.f : 0000:B1:00.0
Vendor Id : 0x8086
Device Id : 0xBCCE
SubVendor Id : 0x8086
SubDevice Id : 0x1771
Socket Id : 0x00
Ports Num : 01
Bitstream Id : 0x50102027135A894
Bitstream Version : 5.0.1
Pr Interface Id : 7dbb989d-5eb9-54f4-8a74-40ddff52e0e2
Boot Page : user1
Factory Image Info : a7c6879683182ce61084c420e51f50b6
User1 Image Info : 8a7440ddff52e0e27dbb989d5eb954f4
User2 Image Info : a7c6879683182ce61084c420e51f50b6

6 - Check Board Power and Temperature Intel Acceleration Development Platform N6001
Board Management Controller NIOS FW version: 3.2.0
Board Management Controller Build version: 3.2.0
//****** POWER ******//
Object Id : 0xEE00000
PCIe s:b:d.f : 0000:B1:00.0
Vendor Id : 0x8086
Device Id : 0xBCCE
SubVendor Id : 0x8086
SubDevice Id : 0x1771
Socket Id : 0x00
Ports Num : 01
Bitstream Id : 0x50102027135A894
Bitstream Version : 5.0.1
Pr Interface Id : 7dbb989d-5eb9-54f4-8a74-40ddff52e0e2
( 1) VCCRT_GXER_0V9 Voltage : 0.91 Volts
etc ......................

Intel Acceleration Development Platform N6001
Board Management Controller NIOS FW version: 3.2.0
Board Management Controller Build version: 3.2.0
//****** TEMP ******//
Object Id : 0xEE00000
PCIe s:b:d.f : 0000:B1:00.0
Vendor Id : 0x8086
Device Id : 0xBCCE
SubVendor Id : 0x8086
SubDevice Id : 0x1771
Socket Id : 0x00
Ports Num : 01
Bitstream Id : 0x50102027135A894
Bitstream Version : 5.0.1
Pr Interface Id : 7dbb989d-5eb9-54f4-8a74-40ddff52e0e2
( 1) FPGA E-Tile Temperature [Remote] : 33.50 Celsius
etc ......................

7 - Check Accelerator Port status //****** PORT ******//
Object Id : 0xED00000
PCIe s:b:d.f : 0000:B1:00.0
Vendor Id : 0x8086
Device Id : 0xBCCE
SubVendor Id : 0x8086
SubDevice Id : 0x1771
Socket Id : 0x00


8 - Check MAC and PHY status Intel Acceleration Development Platform N6001
Board Management Controller NIOS FW version: 3.2.0
Board Management Controller Build version: 3.2.0
//****** MAC ******//
Object Id : 0xEE00000
PCIe s:b:d.f : 0000:B1:00.0
Vendor Id : 0x8086
Device Id : 0xBCCE
SubVendor Id : 0x8086
SubDevice Id : 0x1771
Socket Id : 0x00
Ports Num : 01
Bitstream Id : 0x50102027135A894
Bitstream Version : 5.0.1
Pr Interface Id : 7dbb989d-5eb9-54f4-8a74-40ddff52e0e2
Number of MACs : 255
mac info is not supported

Intel Acceleration Development Platform N6001
Board Management Controller NIOS FW version: 3.2.0
Board Management Controller Build version: 3.2.0
//****** PHY ******//
Object Id : 0xEE00000
PCIe s:b:d.f : 0000:B1:00.0
Vendor Id : 0x8086
Device Id : 0xBCCE
SubVendor Id : 0x8086
SubDevice Id : 0x1771
Socket Id : 0x00
Ports Num : 01
Bitstream Id : 0x50102027135A894
Bitstream Version : 5.0.1
Pr Interface Id : 7dbb989d-5eb9-54f4-8a74-40ddff52e0e2

//****** HSSI information ******//
HSSI version : 1.0
Number of ports : 8
Port0 :25GbE DOWN
Port1 :25GbE DOWN
Port2 :25GbE DOWN
Port3 :25GbE DOWN
Port4 :25GbE DOWN
Port5 :25GbE DOWN
Port6 :25GbE DOWN
Port7 :25GbE DOWN

3.1.3 ADP PF/VF MUX MENU

This menu reports the number of PF/VF functions in the reference example and also allows you to reduce the number to 1 PF and 1 VF to reduce resource utilisation and create a larger area for your workload development. This selection is optional; if you want to implement the default number of PFs and VFs, do not use options 9, 10 and 11. Additionally, the code used to make the PF/VF modification can be leveraged to increase or modify the number of PF/VFs in the existing design, within the limits the PCIe Subsystem supports (8 PFs/2K VFs).

Menu Option Description
9 - Check PF/VF Mux Configuration This menu selection displays the current configuration of the pcie_host.ofss file which is located at the following directory $OFS_ROOTDIR/tools/pfvf_config_tool

[ProjectSettings]
platform = n6001
family = Agilex
fim = base_x16
Part = AGFB014R24A2E2V
IpDeployFile = pcie_ss.sh
IpFile = pcie_ss.ip
OutputName = pcie_ss
ComponentName = pcie_ss
is_host = True

[pf0]
num_vfs = 3
pg_enable = True

[pf1]

[pf2]

[pf3]

[pf4]


10 - Modify PF/VF Mux Configuration As an example, this menu selection modifies the pcie_host.ofss file, located in the directory $OFS_ROOTDIR/tools/pfvf_config_tool, down to 1 PF
This option also displays the modified pcie_host.ofss file
11 - Build PF/VF Mux Configuration If option 10 is not used, the default number of PFs and VFs is used to build the FIM; if option 10 is selected, only 1 PF and 1 VF are built to reduce logic utilisation
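For illustration, after running option 10 the pcie_host.ofss file would be reduced to a single PF carrying a single VF, with the empty [pf1]-[pf4] sections removed. The fragment below is an assumption about the resulting file, not verbatim tool output:

```ini
; Hypothetical pcie_host.ofss fragment after option 10 (1 PF, 1 VF)
[pf0]
num_vfs = 1
```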

3.1.4 ADP FIM/PR BUILD MENU

Builds FIM, Partial Reconfiguration Region and Remote Signal Tap

Menu Option Description
12 - Check ADP software versions for ADP n6001 Project OFS_ROOTDIR is set to /home/user_area/ofs-X.X.X/ofs-n6001
OPAE_SDK_REPO_BRANCH is set to release/X.X.X
OPAE_SDK_ROOT is set to /home/user_area/ofs-X.X.X/ofs-n6001/../opae-sdk
LD_LIBRARY_PATH is set to /home/user_area/ofs-X.X.X/ofs-n6001/../opae-sdk/lib64:


13 - Build FIM for n6001 Hardware This option builds the FIM based on the settings of the $ADP_PLATFORM and $FIM_SHELL environment variables. Check these variables in the following file: ofs_n6001_eval.sh

14 - Check FIM Identification of FIM for n6001 Hardware The FIM is identified by the following file fme-ifc-id.txt located at $OFS_ROOTDIR/$FIM_WORKDIR/syn/syn_top/

15 - Build Partial Reconfiguration Tree for n6001 Hardware This option builds the Partial Reconfiguration Tree which is needed for AFU testing/development and also for the oneAPI build flow

16 - Build Base FIM Identification (ID) into PR Build Tree template This option copies the contents of fme-ifc-id.txt into the Partial Reconfiguration Tree so that the FIM and Partial Reconfiguration Tree match, allowing subsequent insertion of AFU and oneAPI workloads

17 - Build Partial Reconfiguration Tree for n6001 Hardware with Remote Signal Tap This option builds the Partial Reconfiguration Tree which is needed for AFU testing/development and also for the oneAPI build flow for the Remote Signal Tap flow

18 - Build Base FIM Identification (ID) into PR Build Tree template with Remote Signal Tap This option copies the contents of fme-ifc-id.txt into the Partial Reconfiguration Tree for Remote Signal Tap so that the FIM and Partial Reconfiguration Tree match, allowing subsequent insertion of AFU and oneAPI workloads

3.1.5 ADP HARDWARE PROGRAMMING/DIAGNOSTIC MENU

The following submenu allows you to:

  • Program and check flash
  • Perform a remote system update (RSU) of the FPGA image into the FPGA
  • Bind virtual functions to the vfio-pci driver
  • Run host exerciser (HE) commands, such as loopback, to test interfaces and the VFIO PCI driver binding
  • Read the control and status registers (CSRs) for bound modules that are part of the OFS reference design

Menu Option Description
19 - Program BMC Image into n6001 Hardware The user must place a new BMC flash file in the following directory $OFS_ROOTDIR/bmc_flash_files. Once the user executes this option a new BMC image will be programmed. A remote system upgrade command is initiated to store the new BMC image

20 - Check Boot Area Flash Image from n6001 Hardware This option checks which area in flash the image will boot from; the default is user1

Boot Page : user1

21 - Program FIM Image into user1 area for n6001 Hardware This option programs the FIM image "ofs_top_page1_unsigned_user1.bin" into user1 area in flash

22 - Initiate Remote System Upgrade (RSU) from user1 Flash Image into n6001 Hardware This option initiates a Remote System Upgrade and soft reboots the server and re-scans the PCIe bus for the new image to be loaded

2022-11-10 11:26:24,307 - [[pci_address(0000:b1:00.0), pci_id(0x8086, 0xbcce)]] performing RSU operation
2022-11-10 11:26:24,310 - [[pci_address(0000:b0:02.0), pci_id(0x8086, 0x347a)]] removing device from PCIe bus
2022-11-10 11:26:24,357 - waiting 10 seconds for boot
2022-11-10 11:26:34,368 - rescanning PCIe bus: /sys/devices/pci0000:b0/pci_bus/0000:b0
2022-11-10 11:26:35,965 - RSU operation complete

23 - Check PF/VF Mapping Table, vfio-pci driver binding and accelerator port status This option checks the current vfio-pci driver binding for the PFs and VFs

24 - Unbind vfio-pci driver This option unbinds the vfio-pci driver from the PFs and VFs

25 - Create Virtual Functions (VF) and bind driver to vfio-pci n6001 Hardware This option creates the vfio-pci driver binding for the PFs and VFs
Once the VFs have been bound to the driver, select menu option 23 to check that the new drivers are bound

26 - Verify FME Interrupts with hello_events The hello_events utility is used to verify FME interrupts. This tool injects FME errors and waits for error interrupts, then clears the errors

27 - Run HE-LB Test This option runs 5 tests

1) Check and generate traffic with the intention of exercising the path from the AFU to the Host at full bandwidth
2) Run a loopback throughput test using one cacheline per request
3) Run a loopback read test using four cachelines per request
4) Run a loopback write test using four cachelines per request
5) Run a loopback throughput test using four cachelines per request

28 - Run HE-MEM Test This option runs 2 tests

1) Check and generate traffic with the intention of exercising the path to the FPGA-connected DDR; data read from the host is written to DDR, and the same data is read back from DDR before being sent to the host
2) Run a loopback throughput test using one cacheline per request

29 - Run HE-HSSI Test This option runs 1 test

HE-HSSI is responsible for handling client-side ethernet traffic. It wraps the 10G and 100G HSSI AFUs, and includes a traffic generator and checker. The user-space tool hssi exports a control interface to the HE-HSSI AFU's packet generator logic

1) Send traffic through the 10G AFU
30 - Run Traffic Generator AFU Test This option runs 3 tests

TG AFU has an OPAE application to access & exercise traffic, targeting a specific bank

1) Run the preconfigured write/read traffic test on channel 0
2) Target channel 1 with a 1MB single-word write only test for 1000 iterations
3) Target channel 2 with 4MB write/read test of max burst length for 10 iterations
31 - Read from CSR (Command and Status Registers) for n6001 Hardware This option reads from the following CSRs:
HE-LB Command and Status Register Default Definitions
HE-MEM Command and Status Register Default Definitions
HE-HSSI Command and Status Register Default Definitions

3.1.6 ADP HARDWARE AFU TESTING MENU

This submenu tests partial reconfiguration by building and loading a memory-mapped I/O example AFU/workload, executing software from the host, and testing remote Signal Tap.

Menu Option Description
32 - Build and Compile host_chan_mmio example This option builds the host_chan_mmio example from the following repo $OFS_PLATFORM_AFU_BBB/plat_if_tests/$AFU_TEST_NAME, where AFU_TEST_NAME=host_chan_mmio. This produces a GBS (Green Bit Stream) ready for hardware programming

33 - Execute host_chan_mmio example This option builds the host code for host_chan_mmio example and programs the GBS file and then executes the test

34 - Modify host_chan_mmio example to insert Remote Signal Tap This option inserts a pre-defined host_chan_mmio.stp Signal Tap file into the OFS code to allow a user to debug the host_chan_mmio AFU example

35 - Build and Compile host_chan_mmio example with Remote Signal Tap This option builds the host_chan_mmio example from the following repo $OFS_PLATFORM_AFU_BBB/plat_if_tests/$AFU_TEST_NAME, where AFU_TEST_NAME=host_chan_mmio. This produces a GBS (Green Bit Stream) ready for hardware programming with Remote Signal Tap enabled

36 - Execute host_chan_mmio example with Remote Signal Tap This option builds the host code for host_chan_mmio example and programs the GBS file and then executes the test. The user must open the Signal Tap window when running the host code to see the transactions in the Signal Tap window


3.1.7 ADP HARDWARE AFU BBB TESTING MENU

This submenu tests partial reconfiguration using a hello_world example AFU/workload and executes software from the host

Menu Option Description
37 - Build and Compile hello_world example This option builds the hello_world example from the following repo $FPGA_BBB_CCI_SRC/samples/tutorial/afu_types/01_pim_ifc/$AFU_BBB_TEST_NAME, where AFU_BBB_TEST_NAME=hello_world. This produces a GBS (Green Bit Stream) ready for hardware programming

38 - Execute hello_world example This option builds the host code for hello_world example and programs the GBS file and then executes the test

3.1.8 ADP ONEAPI PROJECT MENU

Builds the oneAPI kernel, executes software from the host and runs diagnostic tests

Menu Option Result
39 - Check oneAPI software versions for n6001 Project This option checks the setup of the oneAPI software and adds the relevant oneAPI environment variables to the terminal. This option also informs the user to match the oneAPI software version to the oneAPI-samples version

40 - Build and clone shim libraries required by oneAPI host This option builds the oneAPI directory structure

41 - Install oneAPI Host Driver This option installs the oneAPI Host driver at the following location /opt/Intel/OpenCLFPGA/oneAPI/Boards/, and requires sudo permissions

42 - Uninstall oneAPI Host Driver This option uninstalls the oneAPI Host driver, and requires sudo permissions

43 - Diagnose oneAPI Hardware This option checks the ICD (Intel Client Driver) and FCD (FPGA Client Driver), checks the oneAPI library locations, and detects whether the oneAPI BSP is loaded into the FPGA

44 - Build oneAPI BSP ofs_n6001 Default Kernel (hello_world) This option builds the oneAPI BSP using the hello_world kernel

45 - Build oneAPI MakeFile Environment This option builds the oneAPI environment using a Makefile for kernel insertion

46 - Compile oneAPI Sample Application (board_test) for Emulation This option compiles the board_test kernel for Emulation

47 - Run oneAPI Sample Application (board_test) for Emulation This option executes the board_test kernel for Emulation

48 - Generate oneAPI Optimization report for (board_test) This option generates an optimization report for the board_test kernel

49 - Check PF/VF Mapping Table, vfio-pci driver binding and accelerator port status This option checks the current vfio-pci driver binding for the PFs and VFs

50 - Unbind vfio-pci driver This option unbinds the vfio-pci driver from the PFs and VFs

51 - Create Virtual Function (VF) and bind driver to vfio-pci n6001 Hardware This option creates the vfio-pci driver binding for the PFs and VFs
Once the VFs have been bound to the driver, select menu option 49 to check that the new drivers are bound


52 - Program OpenCL BSP ofs_n6001 Default Kernel (hello_world) This option programs the FPGA with an aocf file based on the hello_world kernel

53 - Compile oneAPI Sample Application (board_test) for Hardware This option compiles the board_test kernel for Hardware

54 - Run oneAPI Sample Application (board_test) for Hardware This option builds the host code for the board_test kernel and executes the program, running through kernel and host bandwidth tests
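Options 49 to 51 manage driver binding through the kernel's sysfs interface. As a rough sketch of what "checking the binding" means under the hood (the helper function and the example BDF below are illustrative, not part of the evaluation script):

```shell
# Sketch: report which driver a PCIe function (PF or VF) is bound to by
# reading sysfs. bound_driver and the example BDF are illustrative only.
bound_driver() {
  local sysfs_devices=$1 bdf=$2
  if [ -L "$sysfs_devices/$bdf/driver" ]; then
    # the driver symlink points at e.g. .../drivers/vfio-pci
    basename "$(readlink "$sysfs_devices/$bdf/driver")"
  else
    echo "no driver bound"
  fi
}

# On real hardware you would call, for example:
#   bound_driver /sys/bus/pci/devices 0000:b1:00.0
```

The evaluation script performs the equivalent checks for you; this sketch only shows where the information lives.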

3.1.9 ADP UNIT TEST PROJECT MENU

Builds, compiles and runs standalone simulation block tests. More unit test examples can be found in the following location: ofs_n6001/sim/unit_test

Menu Option Result
55 - Generate Simulation files for Unit Test This option builds the simulation file set for running a unit test simulation

56 - Simulate Unit Test dfh_walker and log waveform This option runs the dfh_walker based on the environment variable "UNIT_TEST_NAME=dfh_walker" in the evaluation script. A user can change the test being run by modifying this variable
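The test selected by option 56 is controlled by the UNIT_TEST_NAME variable inside the evaluation script. One way to switch tests is to rewrite that line before re-running the option; the helper below and the script/test names it uses are assumptions for illustration only:

```shell
# Sketch: switch the unit test the evaluation script runs.
# set_unit_test is an illustrative helper, not part of the script itself.
set_unit_test() {
  local script=$1 new_test=$2
  # rewrites e.g. UNIT_TEST_NAME=dfh_walker -> UNIT_TEST_NAME=$new_test
  sed -i "s/UNIT_TEST_NAME=[[:alnum:]_]*/UNIT_TEST_NAME=${new_test}/" "$script"
}

# e.g. (hypothetical filename and test name; the test must exist under
# ofs_n6001/sim/unit_test):
#   set_unit_test ofs_n6001_eval.sh he_lb_test
```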

3.1.10 ADP UVM PROJECT MENU

Builds, compiles and runs full chip simulation tests. The user should execute the options sequentially, i.e. 57, 58, 59 and 60

Menu Option Description
57 - Check UVM software versions for n6001 Project This option checks the UVM software versions; typical environment settings are shown below
DESIGNWARE_HOME is set to /home/synopsys/vip_common/vip_Q-2020.03A
UVM_HOME is set to /home/synopsys/vcsmx/S-2021.09-SP1/linux64/rhel/etc/uvm
VCS_HOME is set to /home/synopsys/vcsmx/S-2021.09-SP1/linux64/rhel
VERDIR is set to /home/user_area/ofs-X.X.X/ofs-n6001/verification
VIPDIR is set to /home/user_area/ofs-X.X.X/ofs-n6001/verification

58 - Compile UVM IP This option compiles the UVM IP

59 - Compile UVM RTL and Testbench This option compiles the UVM RTL and Testbench

60 - Simulate UVM dfh_walking_test and log waveform This option runs the dfh_walking test based on the environment variable "UVM_TEST_NAME=dfh_walking_test" in the evaluation script. A user can change the test being run by modifying this variable

61 - Simulate all UVM test cases (Regression Mode) This option runs the n6001 regression mode, cycling through all UVM tests defined in the verification/tests/test_pkg.svh file
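The variables reported by option 57 could also be exported manually before running the UVM options. The fragment below simply mirrors the example output above; the Synopsys install paths and the ofs-X.X.X version directory must be adjusted to the local installation:

```shell
# Example settings mirroring option 57's output; adjust all paths and the
# ofs-X.X.X placeholder to match your installation.
export DESIGNWARE_HOME=/home/synopsys/vip_common/vip_Q-2020.03A
export UVM_HOME=/home/synopsys/vcsmx/S-2021.09-SP1/linux64/rhel/etc/uvm
export VCS_HOME=/home/synopsys/vcsmx/S-2021.09-SP1/linux64/rhel
export VERDIR=/home/user_area/ofs-X.X.X/ofs-n6001/verification
export VIPDIR=/home/user_area/ofs-X.X.X/ofs-n6001/verification
```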

3.1.11 ADP BUILD ALL PROJECT MENU

Builds the complete OFS flow; useful for regression testing and overnight builds

This menu runs a sequence of tests (compilation, build and simulation) and executes them sequentially. After the script completes successfully, a set of binary files is produced which you can use to evaluate your hardware. Log files are also produced indicating whether the tests passed.

A user can run a sequence of tests and execute them sequentially. In the example below, when the user selects option 62 from the main menu, the script executes 24 tests (main menu options 2, 9, 12, 13, 14, 15, 16, 17, 18, 32, 34, 35, 37, 39, 40, 44, 45, 53, 55, 56, 57, 58, 59 and 60). These 24 menu options are chosen to build the complete OFS flow, covering build, compile and simulation.

Menu Option Result
62 - Build and Simulate Complete n6001 Project This option generates a log file with a date and timestamp, for example:
Log file written to /home/guest/ofs-2.3.1/log_files/n6001_log_2022_11_10-093649/ofs_n6001_eval.log
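The timestamped path in the example output follows a simple date-based naming scheme, which could be reproduced as follows (the directory layout here is only an illustration of the convention, not the script's actual code):

```shell
# Sketch of the date/timestamp log naming seen in the example output.
# log_files/ is created relative to the current working directory.
LOG_DIR="log_files/n6001_log_$(date +%Y_%m_%d-%H%M%S)"
mkdir -p "$LOG_DIR"
LOG_FILE="$LOG_DIR/ofs_n6001_eval.log"
touch "$LOG_FILE"
```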

Definition of Multi-Test Set-up

Menu option 62 above in the evaluation script can be refined and tailored to the user's needs, and is principally defined by the variable below

MULTI_TEST[A,B]=C

where

A= Total number of menu options in the script
B= The order in which the test is run (starting from 0)
C= Menu option in the script

Example 1
MULTI_TEST[62,0]=2

A= 62 is the total number of options in the script
B= 0 indicates that this is the first test to be run in the script
C= Menu option in the script, i.e. 2 - List of Documentation for ADP n6001 Project

Example 2
MULTI_TEST[62,0]=2
MULTI_TEST[62,1]=9

In the example above, two tests are run in order (0, then 1), executing the following menu options: 2 - List of Documentation for ADP n6001 Project and 9 - Check ADP software versions for ADP n6001 Project
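In bash terms, MULTI_TEST behaves like an associative array keyed by "total,order". A minimal sketch of how such a table could drive a sequential run (the loop and variable names are illustrative, not the script's actual code):

```shell
# Illustrative bash sketch: MULTI_TEST[62,0]=2 style entries drive the
# order in which menu options are executed. Requires bash 4+ (declare -A).
declare -A MULTI_TEST
MULTI_TEST[62,0]=2   # first test: menu option 2
MULTI_TEST[62,1]=9   # second test: menu option 9

TOTAL=62
for order in 0 1; do
  echo "Running menu option ${MULTI_TEST[$TOTAL,$order]}"
done
```

Running this prints one "Running menu option N" line per entry, in the order given by the second index.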

The user can also modify the build time by de-selecting options they do not wish to use, see below for a couple of use-case scenarios.

Default Use Case

A user can run a sequence of tests and execute them sequentially. In the example below, when the user selects option 62 from the main menu, the script executes 24 tests (main menu options 2, 9, 12, 13, 14, 15, 16, 17, 18, 32, 34, 35, 37, 39, 40, 44, 45, 53, 55, 56, 57, 58, 59 and 60). All other tests marked with an "X" are not run.

Use Case for ADP FIM/PR BUILD MENU

In the example below, when the user selects option 62 from the main menu, the script only runs options from the ADP FIM/PR BUILD MENU (7 options: main menu options 12, 13, 14, 15, 16, 17 and 18). All other tests marked with an "X" are not run.


4 n6001 Common Test Scenarios

This section describes the most common compile and build scenarios for a user evaluating an acceleration card on their server. The Pre-Requisite column indicates the menu commands that must be run before executing the test, e.g. to run Test 5, a user needs to have run options 13, 15 and 16 before running options 23, 24, 25, 32 and 33.

Test | Test Scenario | Pre-Requisite Menu Options | Menu Options
Test 1 | FIM Build | - | 13
Test 2 | Partial Reconfiguration Build | 13 | 15, 16
Test 3 | Program FIM and perform Remote System Upgrade | 13 | 21, 22
Test 4 | Bind PF and VF to vfio-pci drivers | - | 23, 24, 25
Test 5 | Build, compile and test AFU on hardware | 13, 15, 16 | 23, 24, 25, 32, 33
Test 6 | Build, compile and test AFU Basic Building Blocks on hardware | 13, 15, 16 | 23, 24, 25, 37, 38
Test 7 | Build, compile and test oneAPI on hardware | 13, 15, 16 | 39, 40, 41, 44, 45, 49, 50, 51, 52, 53, 54
Test 8 | Build and Simulate Unit Tests | - | 55, 56
Test 9 | Build and Simulate UVM Tests | - | 57, 58, 59, 60

Notices & Disclaimers

Intel® technologies may require enabled hardware, software or service activation. No product or component can be absolutely secure. Performance varies by use, configuration and other factors. Your costs and results may vary. You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein. No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document, with the sole exception that you may publish an unmodified copy. You may create software implementations based on this document and in compliance with the foregoing that are intended to execute on the Intel product(s) referenced in this document. No rights are granted to create modifications or derivatives of this document. The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade. You are responsible for safety of the overall system, including compliance with applicable safety-related requirements or standards. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission of the Khronos Group™.