FLAME GPU 2
FLAME GPU is a GPU accelerated agent-based simulation library for domain independent complex systems simulations. Version 2 is a complete re-write of the existing library offering greater flexibility, an improved interface for agent scripting and better research software engineering, with CUDA/C++ and Python interfaces.
FLAME GPU provides a mapping between formal agent specifications, with C++ based scripting, and optimised CUDA code. This includes a number of key Agent-based Modelling (ABM) building blocks such as multiple agent types, agent communication and birth and death allocation.
Agent-based (AB) modellers are able to focus on specifying agent behaviour and run simulations without explicit understanding of CUDA programming or GPU optimisation strategies.
Simulation performance is significantly increased in comparison with CPU alternatives. This allows simulation of far larger model sizes with high performance at a fraction of the cost of grid based alternatives.
Massive agent populations can be visualised in real time as agent data is already located on the GPU hardware.
Project Status
FLAME GPU 2 is currently in a pre-release (release candidate) state, and although we hope there will not be significant changes to the API prior to a stable release, there may be breaking changes as we fix issues, adjust the API and improve performance. The use of native Python agent functions (agent functions expressed in Python syntax which are transpiled to C++) is currently supported (see examples) but classed as an experimental feature.
If you encounter issues while using FLAME GPU, please provide bug reports, feedback or ask questions via GitHub Issues and Discussions.
Documentation and Support
Installation
Pre-compiled python wheels are available for installation from Releases, and can also be installed via pip from whl.flamegpu.com. Wheels are not currently manylinux compliant. Please see the latest release for more information on the available wheels and installation instructions.
C++/CUDA installation is not currently available. Please refer to the section on Building FLAME GPU.
Creating your own FLAME GPU Model
Template repositories are provided as a simple starting point for your own FLAME GPU models, with separate template repositories for the CUDA C++ and Python interfaces. See the template repositories for further information on their use.
CUDA C++: FLAME GPU 2 example template project
Building FLAME GPU
FLAME GPU 2 uses CMake as a cross-platform build process, for configuring and generating build files, e.g. a `Makefile` or `.vcxproj`. This is used to build the FLAMEGPU2 library, examples, tests and documentation.

Requirements
Building FLAME GPU has the following requirements. There are also optional dependencies which are required for some components, such as Documentation or Python bindings.
- CMake `>= 3.25.2`
- CUDA `>= 12.0` (Linux) or `>= 12.4` (Windows)
  - FLAME GPU aims to support the 2 most recent major CUDA versions, currently `12` and `13`.
  - For native Windows builds, CUDA `12.0`-`12.3` may work for some but not all parts of FLAME GPU, due to C++20 compilation issues and MSVC support.
  - A Compute Capability `>= 5.0` (CUDA 12.x) or `>= 7.5` (CUDA 13.x) NVIDIA GPU is required for execution.
- C++20 capable C++ compiler (host), compatible with the installed CUDA version
  - Microsoft Visual Studio 2022 (Windows)
    - Note: Visual Studio must be installed before the CUDA toolkit is installed. See the CUDA installation guide for Windows for more information.
- Optionally:
  - cpplint for linting code
  - Doxygen to build the documentation
  - Python `>= 3.9` for python integration
    - With the `setuptools`, `wheel`, `build` and optionally `venv` python packages installed
    - On Windows, CUDA `>= 12.4` is required for python integration
  - swig `>= 4.1.0` for python integration (with C++20 support)
    - Swig `>= 4.1.0` will be automatically downloaded by CMake if not provided (where possible)
    - Swig `4.2.0` and `4.2.1` are known to encounter issues in some cases; consider using an alternate SWIG version
  - MPI (e.g. MPICH, OpenMPI) for distributed ensemble support
    - MPI 3.0+ tested; older MPIs may work but are not tested
  - FLAMEGPU2-visualiser dependencies
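Before configuring, a quick shell loop can confirm which of the core tools are already on your `PATH`. This is a sketch only; it does not verify that the versions found meet the minimums listed above.

```shell
# Check which core build tools are available on PATH (a sketch; it does not
# verify that the installed versions satisfy the requirements above).
missing=0
for tool in cmake nvcc swig python3; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: found ($(command -v "$tool"))"
    else
        echo "$tool: MISSING"
        missing=$((missing + 1))
    fi
done
echo "missing tools: $missing"
```

For any missing tool, check the version requirements above before installing, e.g. `cmake --version` and `nvcc --version` report the installed versions.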
Building with CMake
Building via CMake is a three step process, with slight differences depending on your platform.
1. Create a build directory for an out-of-tree build
2. Configure CMake into the build directory
   - Using the CMake GUI or CLI tools
   - Specifying build options such as the CUDA Compute Capabilities to target, the inclusion of Visualisation or Python components, or performance impacting features such as `FLAMEGPU_SEATBELTS`. See CMake Configuration Options for details of the available configuration options.
   - CMake will automatically find and select compilers, libraries and python interpreters based on current environment variables and default locations. See Mastering CMake for more information.
   - Python dependencies must be installed in the selected python environment. If needed, you can instruct CMake to use a specific python implementation using the `Python_ROOT_DIR` and `Python_EXECUTABLE` CMake options at configure time.
3. Build compilation targets using the configured build system
   - See Available Targets for a list of available targets.
Linux
To build under Linux using the command line, you can perform the following steps.
For example, to configure CMake for `Release` builds, for consumer Pascal GPUs (Compute Capability `61`), with python bindings enabled, producing the static library and the `boids_bruteforce` example binary:

```bash
# Create the build directory and change into it
mkdir -p build && cd build
# Configure CMake from the command line, passing configure-time options
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_CUDA_ARCHITECTURES=61 -DFLAMEGPU_BUILD_PYTHON=ON
# Build the required targets. In this case, the library and one example
cmake --build . --target flamegpu boids_bruteforce -j 8
# Alternatively, make can be invoked directly
make flamegpu boids_bruteforce -j8
```
Windows
Under Windows, you must instruct CMake which Visual Studio version and architecture to build for, using the CMake `-A` and `-G` options. This can be done through the GUI or the CLI.

I.e. to configure CMake for consumer Pascal GPUs (Compute Capability `61`), with python bindings enabled, and build the static library and the `boids_bruteforce` example binary in the Release configuration:

```bat
REM Create the build directory
mkdir build
cd build
REM Configure CMake from the command line, specifying the -A and -G options. Alternatively use the GUI
cmake .. -A x64 -G "Visual Studio 17 2022" -DCMAKE_CUDA_ARCHITECTURES=61 -DFLAMEGPU_BUILD_PYTHON=ON
REM You can then open Visual Studio manually from the .sln file, or via:
cmake --open .
REM Alternatively, build from the command line specifying the build configuration
cmake --build . --config Release --target flamegpu boids_bruteforce --verbose
```
On Windows, by default CMake will select the newest version of CUDA available when configuring. If you have multiple versions of CUDA installed then you can select an earlier installed CUDA version (e.g. CUDA 12.4) by additionally passing `-T cuda=12.4` when calling CMake configure (`cmake ..`).

Configuring and Building a single example
It is also possible to configure and build individual examples as standalone CMake projects.
I.e. to configure and build the `game_of_life` example in Release mode from the command line, using Linux as an example:

```bash
cd examples/game_of_life
mkdir -p build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DCMAKE_CUDA_ARCHITECTURES=61
cmake --build . --target all
```
CMake Configuration Options
| Option | Value | Description |
|---|---|---|
| `CMAKE_BUILD_TYPE` | `Release`/`Debug`/`MinSizeRel`/`RelWithDebInfo` | Select the build configuration for single-target generators such as `make` |
| `CMAKE_CUDA_ARCHITECTURES` | e.g. `60`, `"60;70"` | CUDA Compute Capabilities to build/optimise for, as a `;` separated list. See CMAKE_CUDA_ARCHITECTURES. Defaults to `all-major` or equivalent. Alternatively use the `CUDAARCHS` environment variable |
| `FLAMEGPU_SEATBELTS` | `ON`/`OFF` | Enable/Disable additional runtime checks which harm performance but increase usability. Default `ON` |
| `FLAMEGPU_BUILD_PYTHON` | `ON`/`OFF` | Enable the Python target `pyflamegpu` via Swig. Default `OFF`. Python packages `setuptools`, `build` & `wheel` required |
| `FLAMEGPU_BUILD_PYTHON_VENV` | `ON`/`OFF` | Use a python `venv` when building the python Swig target. Default `ON`. Python package `venv` required |
| `FLAMEGPU_BUILD_PYTHON_PATCHELF` | `ON`/`OFF` | Under Linux, use `patchelf` to remove the explicit runtime dependency on versioned `libnvrtc-builtins.so` for `pyflamegpu`. Default `OFF`. `patchelf` required |
| `FLAMEGPU_BUILD_TESTS` | `ON`/`OFF` | Build the C++/CUDA test suite. Default `OFF` |
| `FLAMEGPU_BUILD_TESTS_DEV` | `ON`/`OFF` | Build the reduced-scope development test suite. Default `OFF` |
| `FLAMEGPU_ENABLE_GTEST_DISCOVER` | `ON`/`OFF` | Run individual CUDA C++ tests as independent `ctest` tests. This dramatically increases test suite runtime. Default `OFF` |
| `FLAMEGPU_VISUALISATION` | `ON`/`OFF` | Enable Visualisation. Default `OFF` |
| `FLAMEGPU_VISUALISATION_ROOT` | `path/to/vis` | Provide a path to a local copy of the visualisation repository |
| `FLAMEGPU_ENABLE_NVTX` | `ON`/`OFF` | Enable NVTX markers for improved profiling. Default `OFF` |
| `FLAMEGPU_WARNINGS_AS_ERRORS` | `ON`/`OFF` | Promote compiler/tool warnings to errors at build time. Default `OFF` |
| `FLAMEGPU_RTC_EXPORT_SOURCES` | `ON`/`OFF` | At runtime, export dynamic RTC files to disk. Useful for debugging RTC models. Default `OFF` |
| `FLAMEGPU_RTC_DISK_CACHE` | `ON`/`OFF` | Enable/Disable caching of RTC functions to disk. Default `ON` |
| `FLAMEGPU_VERBOSE_PTXAS` | `ON`/`OFF` | Enable verbose PTXAS output during compilation. Default `OFF` |
| `FLAMEGPU_CURAND_ENGINE` | `XORWOW`/`PHILOX`/`MRG` | Select the CUDA random engine. Default `XORWOW` |
| `FLAMEGPU_ENABLE_GLM` | `ON`/`OFF` | Experimental feature for GLM type support within models. Default `OFF` |
| `FLAMEGPU_ENABLE_MPI` | `ON`/`OFF` | Enable MPI support for distributed CUDAEnsembles; each MPI worker should have exclusive access to its GPUs, e.g. 1 MPI worker per node. Default `OFF` |
| `FLAMEGPU_ENABLE_ADVANCED_API` | `ON`/`OFF` | Enable advanced API functionality (C++ only), providing access to internal sim components for high-performance extensions. No stability guarantees are provided around this interface and the returned objects. Documentation is limited to that found in the source. Default `OFF` |
| `FLAMEGPU_SHARE_USAGE_STATISTICS` | `ON`/`OFF` | Share usage statistics (telemetry) to support evidencing usage/impact of the software. Default `ON` |
| `FLAMEGPU_TELEMETRY_SUPPRESS_NOTICE` | `ON`/`OFF` | Suppress the notice encouraging telemetry to be enabled, which is emitted once per binary execution if telemetry is disabled. Defaults to `OFF`, or the value of a system environment variable of the same name |
| `FLAMEGPU_TELEMETRY_TEST_MODE` | `ON`/`OFF` | Submit telemetry values to the test mode of TelemetryDeck. Intended for use during development of FLAMEGPU rather than general use. Defaults to `OFF`, or the value of a system environment variable of the same name |
| `FLAMEGPU_ENABLE_LINT_FLAMEGPU` | `ON`/`OFF` | Enable/Disable creation of the `lint_flamegpu` target. Default `ON` if this repository is the root CMAKE_SOURCE_DIR, otherwise `OFF` |
| `FLAMEGPU_SWIG_MINIMUM` | `4.1.0` | The minimum version of SWIG required |
| `FLAMEGPU_SWIG_DOWNLOAD` | `4.3.0` | The version of SWIG to download if the required version is not found |
| `FLAMEGPU_SWIG_EXACT` | `ON`/`OFF` | Require the exact version of SWIG specified in `FLAMEGPU_SWIG_MINIMUM`. This enables downgrading swig. Default `OFF` |

For a full list of available CMake configuration options, run the following from the `build` directory:

```bash
cmake -LH ..
```
Available Targets
| Target | Description |
|---|---|
| `all` | Linux target containing the default set of targets, including everything but the documentation and lint targets |
| `ALL_BUILD` | The Windows equivalent of `all` |
| `all_lint` | Run all available linter targets |
| `flamegpu` | Build the FLAME GPU static library |
| `pyflamegpu` | Build the python bindings for FLAME GPU |
| `docs` | The FLAME GPU API documentation (if available) |
| `tests` | Build the CUDA C++ test suite, if enabled by `FLAMEGPU_BUILD_TESTS=ON` |
| `tests_dev` | Build the reduced-scope development test suite, if enabled by `FLAMEGPU_BUILD_TESTS_DEV=ON` |
| `<example>` | Each individual model has its own target, i.e. `boids_bruteforce` corresponds to `examples/boids_bruteforce` |
| `lint_<other>` | Lint the `<other>` target, i.e. `lint_flamegpu` will lint the `flamegpu` target |

For a full list of available targets, run the following after configuring CMake:

```bash
cmake --build . --target help
```
Usage
Once compiled individual models can be executed from the command line, with a range of default command line arguments depending on whether the model implements a single Simulation, or an Ensemble of simulations.
To see the available command line arguments use the `-h` or `--help` options, for either C++ or python models.

I.e. for a `Release` build of the `game_of_life` model, run:

```bash
./bin/Release/game_of_life --help
```

Visual Studio

If wishing to run examples within Visual Studio, it is necessary to right click the desired example in the Solution Explorer and select `Debug > Start New Instance`. Alternatively, if `Set as StartUp Project` is selected, the main debugging menus can be used to initiate execution. To configure command line arguments for execution within Visual Studio, right click the desired example in the Solution Explorer and select `Properties`; in this dialog, select `Debugging` in the left hand menu to display the entry field for `command arguments`. Note, it may be necessary to change the configuration, as the properties dialog may be targeting a different configuration to the current build configuration.

Environment Variables
Several environment variables are used or required by FLAME GPU 2.
Running the Test Suite(s)
CUDA C++ Test Suites
The test suite for the CUDA/C++ library can be executed using CTest, or by manually running the test executable(s).
CTest can be used to orchestrate running multiple test suites for different aspects of FLAME GPU 2.
The test suite can be executed using CTest by running `ctest`, or `ctest -VV` for verbose output of sub-tests, from the build directory.

More verbose CTest output for the GoogleTest based CUDA C++ test suite(s) can be enabled by configuring CMake with `FLAMEGPU_ENABLE_GTEST_DISCOVER` set to `ON`. This however will dramatically increase test suite execution time.

1. Configure CMake to build the desired test suites, using `FLAMEGPU_BUILD_TESTS=ON`, `FLAMEGPU_BUILD_TESTS_DEV=ON` and optionally `FLAMEGPU_ENABLE_GTEST_DISCOVER=ON`
2. Build the `tests` and/or `tests_dev` targets as required
3. Run the test suites via ctest, using `-VV` for more-verbose output. Multiple tests can be run concurrently using `-j <jobs>`. Use `-R <regex>` to only run matching tests.

```bash
ctest -VV -j 8
```
To run the CUDA/C++ test suite(s) manually, which allows use of `--gtest_filter`:

1. Configure CMake with `FLAMEGPU_BUILD_TESTS=ON`
2. Build the `tests` target
3. Run the test suite executable for the selected configuration, i.e.

```bash
./bin/Release/tests
```
Python Testing via pytest
To run the python test suite:
1. Configure CMake with `FLAMEGPU_BUILD_PYTHON=ON`
2. Build the `pyflamegpu` target
3. Activate the generated python `venv` for the selected configuration, which has `pyflamegpu` and `pytest` installed
   - If using Bash (Linux, Bash for Windows):

     ```bash
     source lib/Release/python/venv/bin/activate
     ```

   - If using `cmd`:

     ```bat
     call lib\Release\python\venv\Scripts\activate.bat
     ```

   - Or if using PowerShell:

     ```powershell
     . lib\Release\python\venv\Scripts\activate.ps1
     ```

4. Run `pytest` on the `tests/python` directory. This may take some time.

   ```bash
   python3 -m pytest ../tests/python
   ```
Usage Statistics (Telemetry)
Support for academic software is dependent on evidence of impact. Without evidence it is difficult or impossible to justify investment to add features and provide maintenance. We collect a minimal amount of anonymous usage data so that we can gather usage statistics that enable us to continue to develop the software under a free and permissive licence.
Information is collected when a simulation, ensemble or test suite run has completed.
The TelemetryDeck service is used to store telemetry data. All data is sent to their Ingest API v2 endpoint at https://nom.telemetrydeck.com/v2/. For more details please review the TelemetryDeck privacy policy.
We do not collect any personal data such as usernames, email addresses or hardware identifiers, but we do generate a random user identifier. This identifier is salted and hashed by TelemetryDeck.
More information can be found in the FLAMEGPU documentation.
Telemetry is enabled by default, but can be opted out of by:

- Setting the environment variable `FLAMEGPU_SHARE_USAGE_STATISTICS` to `OFF`, `false` or `0` (case insensitive).
  - If this is set during the first CMake configuration, it will be used for all subsequent CMake configurations until the CMake cache is cleared or it is manually changed.
  - If this is set during simulation, ensemble or test execution (i.e. runtime), it will also be respected.
- Setting the `FLAMEGPU_SHARE_USAGE_STATISTICS` CMake option to `OFF` or another false-like CMake value, which will default telemetry to be off for executions.
- Programmatically overriding the default value by:
  - Calling `flamegpu::io::Telemetry::disable()` or `pyflamegpu.Telemetry.disable()` prior to the construction of any `Simulation`, `CUDASimulation` or `CUDAEnsemble` objects.
  - Setting the `telemetry` config property of a `Simulation.Config`, `CUDASimulation.SimulationConfig` or `CUDAEnsemble.EnsembleConfig` to `false`.
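As an example, the environment variable opt-out above might be used as follows from a shell. This is a sketch; the binary path in the comment is illustrative, reusing the `game_of_life` example from the Usage section.

```shell
# Disable telemetry for every run launched from this shell session.
# The value is case insensitive; "false" and "0" also work.
export FLAMEGPU_SHARE_USAGE_STATISTICS=OFF
echo "FLAMEGPU_SHARE_USAGE_STATISTICS=$FLAMEGPU_SHARE_USAGE_STATISTICS"

# Alternatively, prefix a single invocation instead, e.g.:
#   FLAMEGPU_SHARE_USAGE_STATISTICS=false ./bin/Release/game_of_life
```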
Contributing
Feel free to submit Pull Requests, create Issues or open Discussions.
See CONTRIBUTING.md for more detailed information on how to contribute to FLAME GPU.
Authors and Acknowledgment
See Contributors for a list of contributors towards this project.
If you refer to the technical, algorithmic or performance aspects of FLAME GPU then please cite "FLAME GPU 2: A framework for flexible and performant agent based simulation on GPUs" (DOI: https://doi.org/10.1002/spe.3207). If you use this software in your work, please cite DOI 10.5281/zenodo.5428984. Release specific DOIs are also provided via Zenodo.
Alternatively, CITATION.cff provides citation metadata, which can also be accessed from GitHub.
License
FLAME GPU is distributed under the MIT Licence.
Known issues
There are currently several known issues which will be fixed in future releases (where possible). For a full list of known issues please see the Issue Tracker.