This documentation is provided with the HEAT environment and is relevant for this HEAT instance only.

Building Node Templates: Introduction

This brief chapter sets the scene for the Node Template guide. After reading you should know what a HEAT package looks like, which components you must supply, and how they fit together inside a running cluster. The remaining pages will drill into the practical work: writing runners, authoring manifests, deploying, and wiring templates into a DAG via a Session Template.

The big picture

| Concept | Analogy | Purpose in HEAT |
| --- | --- | --- |
| Runner | Expert | A container image that knows how to do one or more things. |
| Node Template | Skill | Declares what the thing is (input/output shape, configuration schema). |
| Session Template | Plan | A DAG that orchestrates when the thing is executed. Used per session. |
| Manifest (.heat) | Resume | Registers your experts & skills with the cluster so the system can use them. |

Minimum ingredients for a new feature

  1. Runner image - package your logic and include the heat-runtime Python library.
  2. Node templates - generic, idempotent task declarations that reference a runner type.
  3. HEAT manifest - single JSON file (*.heat) that lists your runners, node templates, and high-level feature metadata.
  4. Registration step - upload the manifest via API or Cluster Manager. HEAT persists the definitions and starts scheduling.

Once a manifest is registered, you can iterate on session templates without rebuilding images; only node configuration needs tweaking.

Anatomy of a .heat manifest

Below is a trimmed skeleton; the full schema is linked from the $schema property. Use it as a starting point, and validate locally before upload.

my-feature.heat
{
  "$schema": "https://heatcommon.blob.core.windows.net/schema/heat-manifest/v1.json",
  "manifestVersion": "1.0.0",
  "name": "my-feature",
  "description": "Enriches VR telemetry with AI driven tagging",
  "owner": "simulation-team@company.io",
  "website": "https://example.com",
  "createdAt": "2025-05-07T10:30:00Z",
  "feature": {
    "internalName": "myFeature",
    "displayName": "AI Tagging",
    "icon": "bolt",
    "isEnabled": true,
    "ownershipType": "ThirdParty",
    "description": "Adds semantic tags to raw simulation data."
  },
  "runnerTypes": [
    {
      "name": "ai-tagger",
      "containerImage": "registry.example.com/ai-tagger:1.0.0",
      "cpuLimit": "2",
      "memoryLimit": "4Gi",
      "description": "Tags events using an ML model",
      "enabled": true,
      "featureInternalName": "myFeature"
    }
  ],
  "nodeTemplates": [
    {
      "name": "tag-events",
      "type": "Processing",
      "acceptsMultipleInputs": false,
      "requiresConfiguration": true,
      "configurationSchema": {
        "type": "object",
        "properties": { "model": { "type": "string" } }
      },
      "featureInternalName": "myFeature",
      "supportedRunners": ["ai-tagger"],
      "hasNonJsonArtifacts": false
    }
  ]
}

The example above assumes a Runner built with the HEAT runtime, published at registry.example.com/ai-tagger:1.0.0, that requires at most 2 CPU cores and 4 GiB of memory. It also registers a node template called tag-events that the runner supports.

This manifest simply tells HEAT that these concepts exist; it is critical that your runner logic actually implements the node templates you declare. In this instance, the ai-tagger application is expected to support the tag-events node template, because HEAT will provision your runner whenever a node requires processing with that template.
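Because a node template can declare `requiresConfiguration`, a runner typically checks incoming configuration against the manifest's `configurationSchema`. The sketch below uses a minimal hand-rolled type check over a trimmed copy of the manifest above; in practice a full JSON Schema validator (e.g. the `jsonschema` library) is the better choice.

```python
import json

# Trimmed copy of the nodeTemplates section from the example manifest.
manifest = json.loads("""{
  "nodeTemplates": [{
    "name": "tag-events",
    "requiresConfiguration": true,
    "configurationSchema": {
      "type": "object",
      "properties": { "model": { "type": "string" } }
    }
  }]
}""")

def check_config(template, config):
    """Minimal stdlib check of a node configuration against the template's
    configurationSchema; a real validator such as `jsonschema` handles far
    more of JSON Schema than this sketch does."""
    type_map = {"string": str, "object": dict,
                "number": (int, float), "boolean": bool}
    schema = template.get("configurationSchema", {})
    for key, spec in schema.get("properties", {}).items():
        if key in config and not isinstance(config[key], type_map[spec["type"]]):
            return False
    return True

template = manifest["nodeTemplates"][0]
print(check_config(template, {"model": "gpt-tagger-v2"}))  # True
print(check_config(template, {"model": 42}))               # False
```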

Runtime behaviour

Tasks are created in HEAT when the state of a session's graph changes. Processing a node generally produces an artifact (output data) known as the Node Output, which means downstream nodes (children) must be reprocessed with the latest data. Each such piece of work exists as a task within HEAT, which captures the upstream output at the point of task creation. Updating a node may therefore generate tasks for its children.
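The fan-out from an updated node to its descendants can be illustrated with a small model. This is a simplified sketch of the idea, not HEAT's actual scheduler: a breadth-first walk over a child map, emitting one task per downstream node.

```python
from collections import deque

def tasks_for_update(children, updated):
    """Illustrative model of task fan-out: when a node's output changes,
    every downstream (child) node needs a task so it reprocesses with the
    latest data. `children` maps node name -> list of direct children."""
    tasks, seen = [], set()
    queue = deque(children.get(updated, []))
    while queue:
        node = queue.popleft()
        if node in seen:  # diamond-shaped graphs yield one task per node
            continue
        seen.add(node)
        tasks.append({"node": node, "reason": f"upstream {updated} changed"})
        queue.extend(children.get(node, []))
    return tasks

# A small graph: tag-events feeds two report nodes.
graph = {"ingest": ["tag-events"], "tag-events": ["report-a", "report-b"]}
print([t["node"] for t in tasks_for_update(graph, "tag-events")])
# ['report-a', 'report-b']
```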

Here, a runner pod represents your code running in the cluster via the HEAT runtime. The scheduler ensures your code is provisioned and running as a pod within the cluster whenever matching tasks exist.

  1. The Runner Pod starts and continuously polls the Task API for work that matches the node templates it advertises.
  2. For each task it receives, the runner processes inputs, respects the provided configuration, and posts a JSON/binary result.
  3. HEAT updates state, potentially enqueuing downstream tasks. Idempotence is critical - each task must succeed when replayed with identical inputs.
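The poll/process/post cycle above can be sketched as a loop. This is an illustrative skeleton, not the heat-runtime API: `fetch_task`, `handle`, and `post_result` are hypothetical callables standing in for the real Task API client, and the demo feeds it an in-memory task list.

```python
import time

def poll_loop(fetch_task, handle, post_result, max_idle=3):
    """Sketch of a runner's cycle: poll for work, process it, post the
    result. `handle` must be idempotent -- replaying a task with identical
    inputs must succeed. The callables are hypothetical stand-ins for the
    real heat-runtime client."""
    idle = 0
    while idle < max_idle:
        task = fetch_task()              # poll the Task API for matching work
        if task is None:
            idle += 1
            time.sleep(0)                # real code would back off longer
            continue
        idle = 0
        result = handle(task["inputs"], task.get("configuration", {}))
        post_result(task["id"], result)  # JSON/binary result back to HEAT

# Demo with an in-memory stand-in for the Task API:
tasks = [{"id": "t1", "inputs": [2, 3], "configuration": {"scale": 10}}]
results = {}
poll_loop(lambda: tasks.pop(0) if tasks else None,
          lambda inputs, cfg: sum(inputs) * cfg.get("scale", 1),
          results.__setitem__)
print(results)  # {'t1': 50}
```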

Typical folder layout

A typical feature project consists of your HEAT manifest, your dependencies, and the containerisation pieces that describe how to build your image. A runner is a containerised application whose entry point typically executes your Python application and starts the runtime.

my-feature/
├── my-feature.heat      # manifest
├── docker-compose.yml   # for building the docker image
├── Dockerfile           # for building the docker image
├── requirements.txt     # dependencies (i.e. the HEAT runtime)
└── src/
    └── main.py          # your source code

Where next?

| Next section | Focus |
| --- | --- |
| Developing Runners | Build/extend the Python entry-point & package the image. |
| Building a Manifest | Field-by-field walkthrough and validation tips. |
| Deploying Changes | CI guidance and Cluster Manager upload. |
| Using in a Session Template | Referencing your node templates inside a DAG. |