Dependency Graph

The dependency graph functionality is new to FLAME GPU 2 and provides a more natural means of specifying the execution order of a model.

Specifying a dependency, e.g. a depends on b, ensures that function a will not run until function b has completed. This can be used to define the order in which you want behaviours to take place, and to ensure that a function which outputs messages has completed before another function attempts to read them.

Dependency specification cannot be mixed with manually specified layers introduced in the next section.

Specifying Dependencies

Dependencies can be specified between AgentFunctionDescription, SubmodelDescription and HostFunctionDescription objects, using the dependsOn() method available on each.

// Declare that agent_fn2 depends on agent_fn1
agent_fn2.dependsOn(agent_fn1);

// Declare that host_fn1 depends on agent_fn2
host_fn1.dependsOn(agent_fn2);
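
The snippets in this section assume the relevant description objects already exist. A minimal sketch of how objects such as agent_fn1 and agent_fn2 might be obtained is shown below; the model, agent and function names are placeholders, and the sketch assumes a recent FLAME GPU 2 release in which the description methods return copyable handle objects (older releases return references instead).

#include "flamegpu/flamegpu.h"

// Two trivial agent functions; their bodies are irrelevant here,
// only the order in which they run matters
FLAMEGPU_AGENT_FUNCTION(agent_fn1_impl, flamegpu::MessageNone, flamegpu::MessageNone) {
    // e.g. output messages here
    return flamegpu::ALIVE;
}

FLAMEGPU_AGENT_FUNCTION(agent_fn2_impl, flamegpu::MessageNone, flamegpu::MessageNone) {
    // e.g. read messages here
    return flamegpu::ALIVE;
}

int main() {
    flamegpu::ModelDescription model("dependency_example");
    flamegpu::AgentDescription agent = model.newAgent("agent");

    // Register the agent functions, producing the description objects
    // that dependencies are declared between
    flamegpu::AgentFunctionDescription agent_fn1 = agent.newFunction("agent_fn1", agent_fn1_impl);
    flamegpu::AgentFunctionDescription agent_fn2 = agent.newFunction("agent_fn2", agent_fn2_impl);

    // agent_fn2 will not run until agent_fn1 has completed
    agent_fn2.dependsOn(agent_fn1);

    // agent_fn1 has no dependencies, so it is a root (see Specifying Roots below)
    model.addExecutionRoot(agent_fn1);

    // Generate execution layers from the graph (see Generating Layers below)
    model.generateLayers();
    return 0;
}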

Any of the objects can depend on multiple other objects:

// Declare that agent_fn6 depends on agent_fn3, agent_fn4 and agent_fn5
agent_fn6.dependsOn(agent_fn3, agent_fn4, agent_fn5);

Specifying Roots

Any functions or submodels which have no dependencies are roots. These must be added to the dependency graph:

// Add agent_fn1 as a root
model.addExecutionRoot(agent_fn1);

You do not need to manually add every function or submodel to the graph. Adding the roots is enough, as the others will be included as a result of the dependency specifications.
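
For example, in the following chain only agent_fn1 needs to be added as a root; the other functions are pulled into the graph by their dependency declarations (a sketch reusing hypothetical function description objects):

// A simple chain: agent_fn1 -> agent_fn2 -> agent_fn3
agent_fn2.dependsOn(agent_fn1);
agent_fn3.dependsOn(agent_fn2);

// Adding the single root is sufficient; agent_fn2 and agent_fn3
// are included automatically through their dependencies
model.addExecutionRoot(agent_fn1);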

Host Layer Functions

In order to add a host layer function to the dependency graph, a HostFunctionDescription object must be created to wrap it:

// Define a host function called host_fn1
FLAMEGPU_HOST_FUNCTION(host_fn1) {
    // Behaviour goes here
}

// ... other code ...

// Wrap it in a HostFunctionDescription, giving it the name "HostFunction1"
HostFunctionDescription hf("HostFunction1", host_fn1);

// Specify that it depends on an agent function "f"
hf.dependsOn(f);

If you are using the layers API directly, you do not need to wrap your host layer functions in HostFunctionDescription objects.
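
For comparison, below is a sketch of the direct layers approach (the alternative to the dependency graph, not to be mixed with it), where the plain host function is added to a layer without a wrapper. It assumes a recent API in which newLayer() returns a copyable LayerDescription handle.

// Using the layers API directly: add the host function to a layer as-is
flamegpu::LayerDescription layer = model.newLayer();
layer.addHostFunction(host_fn1);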

Generating Layers

When you have specified all your dependencies and roots, you must instruct the model to generate execution layers from the dependency graph:

// Generate the actual execution layers from the dependency graph
model.generateLayers();
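
Once the layers have been generated, the model can be used to build and run a simulation as usual. A minimal sketch, with agent population setup and configuration omitted:

// Construct a simulation from the model; the generated layers define the execution order
flamegpu::CUDASimulation simulation(model);

// Run the simulation for 100 steps (hypothetical step count)
simulation.SimulationConfig().steps = 100;
simulation.simulate();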

If you wish to inspect the generated layers, you can use the getConstructedLayersString() method of the model description to obtain a string representation of them:

// Get the constructed layers and store them in variable actualLayers
std::string actualLayers = model.getConstructedLayersString();

// Print the layers to the console
std::cout << actualLayers << std::endl;

Visualising the Dependencies

FLAME GPU 2 can automatically produce a GraphViz (DOT) format representation of your dependency graph. You can use this to visually validate that behaviours will happen in the order you expect.

// Produce a diagram of the dependency graph, saved as graphdiagram.gv
model.generateDependencyGraphDOTDiagram("graphdiagram.gv");

As an example, the following code would produce the graph below in a file named diamond.gv, where f, f2, f3 and f4 are agent functions named "Function1" to "Function4":

f2.dependsOn(f);
f3.dependsOn(f);
f4.dependsOn(f2, f3);
model.addExecutionRoot(f);
model.generateDependencyGraphDOTDiagram("diamond.gv");

The generated diamond.gv would contain:

digraph {
  Function1[style = filled, color = red];
  Function2[style = filled, color = red];
  Function4[style = filled, color = red];
  Function3[style = filled, color = red];
  Function1 -> Function2;
  Function2 -> Function4;
  Function1 -> Function3;
  Function3 -> Function4;
}

Accessing the DependencyGraph

In general you should not need to access the dependency graph directly, as all relevant functionality is available via the model description. If for some reason you do need direct access, you can request it from a ModelDescription as follows:

// Access the DependencyGraph of model
flamegpu::DependencyGraph& graph = model.getDependencyGraph();