HPC support¶
pytest-cocotb can submit simulation jobs to HPC cluster schedulers instead of running them locally. This is useful when simulations are resource-intensive and need to run on dedicated compute nodes.
Overview¶
The HPC support is built on two layers:
hpc-runner — an external package that provides job submission to schedulers (local, SGE, SLURM).
HPC runner classes — wrappers around cocotb’s simulator runners that redirect execution through hpc-runner.
The --modules option loads environment modules (e.g. verilator/5.024) on the compute node before the simulator runs.
Supported simulators¶
| Simulator | HPC class | Base cocotb class |
|---|---|---|
| Verilator | HpcVerilator | Verilator |
| Icarus Verilog | HpcIcarus | Icarus |
| Xcelium | HpcXcelium | Xcelium |
| VCS | HpcVcs | Vcs |
| Questa | HpcQuesta | Questa |
How it works¶
Each HPC runner class uses Python’s MRO (method resolution order) to override
execution. The HpcExecutorMixin is placed before the simulator class
in the inheritance chain:
class HpcVerilator(HpcExecutorMixin, Verilator):
pass
This ensures that _execute_cmds() resolves to the mixin’s version, which
submits commands through the scheduler instead of running them as local
subprocesses. All other simulator-specific logic (build commands, test
commands, etc.) is inherited unchanged from the base cocotb class.
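The MRO trick described above can be demonstrated with a minimal, self-contained sketch. The dummy Verilator class here stands in for cocotb's real runner; only the method-resolution behavior is the point:

```python
# Minimal sketch: because HpcExecutorMixin precedes the simulator class in
# the bases, its _execute_cmds() wins under Python's MRO. The Verilator
# class below is a stand-in for cocotb's real runner class.
class Verilator:
    def _execute_cmds(self, cmds):
        return f"local: {cmds}"

class HpcExecutorMixin:
    def _execute_cmds(self, cmds):
        # Submit through the scheduler instead of spawning subprocesses.
        return f"hpc: {cmds}"

class HpcVerilator(HpcExecutorMixin, Verilator):
    pass

# The mixin appears before Verilator in the resolution order, so the
# HPC version of _execute_cmds() is the one that runs.
print(HpcVerilator.__mro__)
print(HpcVerilator()._execute_cmds("make"))
```

All other attributes and methods still resolve to the simulator class, which is why build and test command construction is inherited unchanged.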
The mixin:
- Skips local PATH checks for the simulator binary (it will be available after module loading on the compute node).
- Extracts cocotb's environment variable changes and passes them to the job as env_vars and env_append dictionaries.
- Redirects simulator output to log_file when set.
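The env_vars/env_append split can be illustrated with a hypothetical helper (this is not the plugin's actual code, just the idea: new variables go to env_vars, while values appended to an existing variable are sent as env_append so the compute node can extend its own environment):

```python
import os

def split_env_changes(new_env):
    """Hypothetical sketch: split an environment snapshot into env_vars
    (new or fully replaced variables) and env_append (the suffix that was
    appended to an existing variable, e.g. extra PYTHONPATH entries)."""
    env_vars, env_append = {}, {}
    for key, value in new_env.items():
        old = os.environ.get(key)
        if old is None or not value.startswith(old):
            env_vars[key] = value  # brand-new or replaced outright
        elif value != old:
            # Send only the appended part to the job.
            env_append[key] = value[len(old):].lstrip(os.pathsep)
    return env_vars, env_append
```

A variable that grew on the submit host is thus re-appended on the compute node rather than clobbering whatever the loaded modules put there.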
Scheduler configuration¶
The scheduler is configured by hpc-runner and is selected automatically based on the environment. By default:
- If running on a node with SGE (SGE_ROOT set), the SGE scheduler is used.
- If running on a node with SLURM (SLURM_CONF or squeue available), the SLURM scheduler is used.
- Otherwise, the local scheduler runs jobs as subprocesses.
The local scheduler still supports module loading, so --modules works even
without a cluster.
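The detection order above can be sketched as a small function. This is a hedged re-implementation of the documented behavior, not hpc-runner's actual code, which may differ in detail:

```python
import os
import shutil

def detect_scheduler():
    """Sketch of the auto-detection order described above:
    SGE first, then SLURM, then the local fallback."""
    if os.environ.get("SGE_ROOT"):
        return "sge"
    if os.environ.get("SLURM_CONF") or shutil.which("squeue"):
        return "slurm"
    return "local"
```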
Environment modules¶
Use --modules to load environment modules before simulation:
pytest --simulator xcelium --hdl-toplevel top --sources rtl/top.sv \
--modules xcelium/23.09 --modules python/3.11
Modules are loaded in the order specified, before the simulator command runs.
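Conceptually, the repeated --modules values become a shell prefix that runs before the simulator command. The helper below is purely illustrative (the plugin's real implementation lives in hpc-runner):

```python
def module_prefix(modules):
    """Hypothetical sketch: join --modules values into the shell prefix
    executed before the simulator command, preserving the order given."""
    return " && ".join(f"module load {m}" for m in modules)

print(module_prefix(["xcelium/23.09", "python/3.11"]))
# module load xcelium/23.09 && module load python/3.11
```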
Example usage¶
Run on an HPC cluster with Xcelium, loading the required modules:
pytest --simulator xcelium --hdl-toplevel top --sources rtl/top.sv \
--modules xcelium/23.09 --modules python/3.11
The plugin automatically:
- Creates an HPC-enabled runner via get_hpc_runner("xcelium").
- Attaches the scheduler from hpc_runner.get_scheduler().
- Submits build and test commands as cluster jobs.
- Waits for each job to complete and raises on failure.
API reference¶
- pytest_cocotb.runners.get_hpc_runner(sim_name: str) type¶
Get an HPC-enabled runner class for the given simulator name.
- class pytest_cocotb.mixin.HpcExecutorMixin¶
Mixin that overrides cocotb runner execution to submit via hpc-runner.
Place this before the simulator class in the MRO so that _execute_cmds is resolved from here first:
class HpcXcelium(HpcExecutorMixin, Xcelium): pass
All commands (build and test) are submitted through hpc-runner’s auto-detected scheduler. The local scheduler runs jobs as subprocesses with module-loading support; cluster schedulers (SGE, SLURM) submit to the grid.