11. Terminology

We define some terms that will be used throughout this document.

Adapter

A supported PCI Express cable adapter. This is the PCI Express hardware installed in the Cluster Nodes. Some Cluster Nodes may have an integrated PCI Express chipset that connects the system to a PCIe-enabled backplane. For the sake of this guide, we still in some cases refer to this as an adapter.

Adapter number

Each adapter in a Cluster Node is identified by an adapter number.

Cluster Node

A computer which is part of the PCI Express interconnect, which means it has a PCI Express network connection to other nodes. All Cluster Nodes together constitute the cluster. A Cluster Node can be attached to multiple independent fabrics.

SBC

Single Board Computer (SBC).

CPU architecture

The CPU architecture relevant in this guide is characterized by the addressing width of the CPU (32 or 64 bit) and the instruction set (x86, PowerPC, SPARC, ARM etc.). Two CPUs that match in both characteristics have the same CPU architecture for the scope of this guide.

Device Lending

A solution that enables PCI Express devices to be dynamically reallocated to other systems on the PCIe network. It uses Transaction Layer Packet (TLP) rerouting over Non-Transparent Bridging (NTB) to enable native access to remote devices.

eXpressWare

Dolphin's PCI Express software suite is named eXpressWare and enables applications to communicate over PCI Express cables and backplanes. Several interfaces and APIs are supported, from standard TCP/IP networking down to the lowest-level direct remote memory access and TLP rerouting. Each API has its benefits and can be selected based on application requirements.

Fabric

A fabric is an independent, closed communication network that connects a number of machines (here: all nodes in your cluster). Thus, with one adapter in each Cluster Node and all PCIe connections set up, the cluster is using a single fabric. Adding a second adapter to each Cluster Node and connecting them would create a cluster with two fabrics.

Link

The cable between two adapters or between an adapter and a PCIe switch.

Cluster Management Node (frontend)

The single computer that runs the software that monitors and configures the Cluster Nodes. The lightweight cluster management service communicates with the Cluster Nodes out-of-band, which means via Ethernet.

Installation machine

The installation script is typically executed on the Cluster Management Node, but it can also be executed on another machine that is neither a Cluster Node nor the Cluster Management Node, provided it has network (ssh) access to all Cluster Nodes and the Cluster Management Node. The machine on which the installation script runs is the installation machine.

Kernel build machine

The interconnect drivers are kernel modules and thus need to be built for the exact kernel running on the node (otherwise, the kernel will refuse to load them). To build kernel modules on a machine, the kernel-specific include files and kernel configuration have to be installed - these are not installed by default on most distributions. You will need to have one kernel build machine available which has these files installed (contained in the kernel-devel RPM that matches the installed kernel version) and that runs the exact same kernel version as the Cluster Nodes. Typically, the kernel build machine is one of the Cluster Nodes itself, but you can choose to build the kernel modules on any other machine that fulfills the requirements listed above.
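As a quick check, the following shell commands (a sketch for RPM-based distributions; package names vary by distribution) verify that a candidate kernel build machine runs the expected kernel and has the matching development files installed:

```shell
# Show the running kernel version; the kernel-devel package must match it exactly.
uname -r

# On RPM-based distributions, check for the matching kernel-devel package.
# (On Debian/Ubuntu the equivalent package is linux-headers-$(uname -r).)
rpm -q "kernel-devel-$(uname -r)" || echo "matching kernel-devel is missing"
```

Compare the `uname -r` output on the build machine with the output on the Cluster Nodes; the versions must be identical for the built modules to load.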

Cluster

All Cluster Nodes constitute the cluster.

Network Manager

The Dolphin Network Manager is a daemon process named dis_networkmgr running on the Cluster Management Node. It is part of eXpressWare and manages and controls the cluster using the Node Manager running on all Cluster Nodes. The Network Manager knows the interconnect status of all Cluster Nodes.

The service name of the Network Manager is dis_networkmgr.

Node Manager

The Node Manager is a daemon process that runs on each Cluster Node and gives the Network Manager remote access to the interconnect driver and other Cluster Node status information. It reports status and performs actions like configuring the installed adapter.

The service name of the Node Manager is dis_nodemgr.

self-installing archive (SIA)

A self-installing archive (SIA) is a single executable shell command file for Linux that is used to compile and install eXpressWare in all required variants. It greatly simplifies the deployment and management of a PCI Express based cluster.

Windows Installer (MSI)

The Windows Installer is an engine for the installation, maintenance, and removal of software on modern Microsoft Windows systems. The installation information, and often the files themselves, are packaged in installation packages, loosely relational databases structured as OLE Structured Storage Files and commonly known as "MSI files", from their default file extension.

VxWorks Windows Installer (MSI)

The VxWorks Windows Installer is an engine for the installation, maintenance, and removal of the VxWorks software on modern Microsoft Windows systems. The installation information, and often the files themselves, are packaged in installation packages, loosely relational databases structured as OLE Structured Storage Files and commonly known as "MSI files", from their default file extension.

SCI / D

D is the abbreviation for the Scalable Coherent Interface (SCI) class of host adapters and Dolphin's SCI interconnect, introduced in 1993. SCI uses a switchless topology and therefore scales easily to large clusters.

DX

This is a PCI Express Gen 1 compliant interconnect based on the ASI standard. It requires a dedicated switch to build clusters larger than two Cluster Nodes. The DX product family was introduced on the market in 2006. It consists of a PCI Express adapter card (DXH510), a 10-port switch (DXS510) and an 8-slot PCI expansion box (DXE510).

IX

This is a PCI Express Gen 2 interconnect from Dolphin based on standard PCI Express switches from IDT. The first PCI Express adapter card (IXH610) was introduced on the market in December 2010. A compliant XMC adapter (IXH620), a 7-slot expansion box (IXE600) and an 8-port switch box (IXS600) were added to the interconnect family in 2011. The IDT switches were declared End Of Life (EOL) by Renesas in 2021.

PX

This is a PCI Express Gen 3 interconnect from Dolphin based on standard PCI Express chips from Broadcom/Avago/PLX. The first PCI Express adapter card (PXH810) was introduced on the market in October 2015. The PXH810 and PXH812 cards are compatible with the IXS600 8-port switch.

The PCI Express Gen 3 x16 SFF-8644 based PXH820 and PXH830 cards were introduced in 2016. The PXH820 and PXH830 cards are compatible with the MXS824 switch.

MX

This is a PCI Express Gen 3.0 and 4.0 interconnect from Dolphin based on standard PCI Express PFX Switchtec chips from Microchip (previously Microsemi). The first PCI Express Gen 3.0 adapter card (MXH830) was introduced on the market in September 2017. The first PCI Express Gen 4.0 cards (MXH930/MXH932) were introduced on the market in July 2019. The MXP924 PXIe system switch module and the MXC948 CompactPCI Serial switch were introduced in 2021. The MXS924 switch was introduced in 2022. The MXH930, MXH830 and MXH832 cards are compatible with the MXS824 switch. The MXH94x and MXH95x Samtec FireFly based adapters were introduced in 2021.

INX

This is a PCI Express Gen 3 interconnect from Dolphin based on the Intel NTB functionality available with some Intel CPUs. The software was introduced in July 2014.

SuperSockets

SuperSockets is a Berkeley sockets compliant socket API provided by Dolphin. SuperSockets is currently supported on systems using Linux and Windows.
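Because SuperSockets is compliant with the Berkeley sockets API, unmodified socket code can run over it. The following minimal sketch is plain Python over the loopback interface and contains nothing SuperSockets-specific; under eXpressWare, code like this could be carried over the PCIe interconnect instead:

```python
import socket

# Standard Berkeley-sockets client/server round-trip on loopback.
# Nothing here is SuperSockets-specific; the same unmodified code
# is what a SuperSockets-accelerated application would execute.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, _ = server.accept()

client.sendall(b"ping")
reply = conn.recv(4)
print(reply)                         # b'ping'

for s in (client, conn, server):
    s.close()
```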

SISCI

SISCI (Software Infrastructure for Shared-Memory Cluster Interconnect) is the user-level API to create applications that make direct use of the low level PCI Express interconnect shared memory capabilities.

To run SISCI applications, a service named dis_sisci has to be running; it loads the required kernel module and sets up the SISCI devices.

SmartIO

A part of eXpressWare enabling PCI Express devices to be dynamically added to a local system or shared between nodes on the PCIe network.

NodeId

Each Cluster Node in a fabric is identified by an assigned NodeId.

x1, x2, x4, x8, x16

PCI Express combines multiple lanes (serial high-speed communication channels using few electrical connections) into communication paths with a higher bandwidth. With PCI Express Gen 1, each lane carries 2.5 Gbit/s of traffic; with PCI Express Gen 2, each lane carries 5.0 Gbit/s; and with PCI Express Gen 3, each lane carries 8.0 Gbit/s. Combining 8 lanes into a single communication path is called x8 and thus delivers 40 Gbit/s of bandwidth for Gen 2 or 64 Gbit/s of bandwidth for Gen 3, while x16 doubles this bandwidth using 16 lanes and delivers 128 Gbit/s for Gen 3 in each direction.
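The arithmetic above can be sketched as follows. The figures are the raw signaling rates used in this guide (encoding overhead, 8b/10b for Gen 1/2 and 128b/130b for Gen 3, is ignored); `link_bandwidth` is an illustrative helper, not part of eXpressWare:

```python
# Raw per-lane signaling rates in Gbit/s, per PCI Express generation.
LANE_RATE_GBITS = {1: 2.5, 2: 5.0, 3: 8.0}

def link_bandwidth(gen, lanes):
    """Aggregate raw bandwidth in Gbit/s, per direction, for a xN link."""
    return LANE_RATE_GBITS[gen] * lanes

print(link_bandwidth(2, 8))   # x8  Gen 2 -> 40.0
print(link_bandwidth(3, 8))   # x8  Gen 3 -> 64.0
print(link_bandwidth(3, 16))  # x16 Gen 3 -> 128.0
```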