Welcome to HEAT
HEAT is a scalable platform that supports the full data lifecycle, from ingestion and storage to analytics and visualization. The flexible architecture can accommodate virtually any data source, letting you define declarative transformation pipelines and modular workflows. With specialized modules for simulation data processing and robust developer tooling, HEAT streamlines the process of converting raw, unstructured data into actionable human performance insights.
This documentation is designed for system administrators, integrators, and developers looking to configure, maintain, and extend a HEAT environment. We recommend following the guide below to best leverage HEAT’s features and to familiarize yourself with our best practices for unlocking actionable insights from your data.
Getting Started
Before diving into specific tasks or role-based guides, it’s helpful to understand the core concepts that define how HEAT operates. This section outlines the foundational ideas, provides a quick look at data ingestion, and points you to more detailed topics.
1. Understand Core Concepts
HEAT is built around four core functions that transform raw data into actionable insights:
- Capture: Collect and ingest data from various sources such as simulators or surveys.
- Store: Persist data to a connected data source so it’s readily accessible for analysis.
- Analyze: Apply transformations and algorithms to detect patterns and derive insights from stored data.
- Visualize: Present findings through an intuitive dashboard designed for instructors and trainees.
The goal is to provide clear, actionable insights - whether that’s improving individual performance, reducing training costs, or accelerating overall training throughput.
For a deeper dive into HEAT’s building blocks - such as Projects, Sessions, and Session Templates - see Core Concepts.
2. Explore the Services
HEAT is composed of multiple services and tools that work in tandem to accomplish the four core functions. Below is an overview of each key service, with links to more detailed sections:
Data Collector enables ingestion from supported simulation engines such as Unreal, Unity, and VBS. By connecting these engines (or other sources) to HEAT, you establish a consistent data pipeline for analyzing human performance at scale. See Data Collector for more on setup and supported platforms.
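The exact event schema depends on the engine and integration in question, but it can help to picture the data a collector emits. Below is a minimal, purely illustrative Python sketch of a telemetry event - every field name here is a hypothetical placeholder, not the Data Collector’s actual schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical shape of a telemetry event emitted by a simulation engine.
# Field names are illustrative only; consult your Data Collector schema.
@dataclass
class TelemetryEvent:
    source: str      # e.g. "unreal", "unity", "vbs"
    trainee_id: str  # identifier of the trainee being observed
    timestamp: str   # ISO-8601 event time
    metrics: dict    # arbitrary key/value performance measurements

event = TelemetryEvent(
    source="unreal",
    trainee_id="trainee-042",
    timestamp=datetime.now(timezone.utc).isoformat(),
    metrics={"reaction_time_ms": 315, "accuracy": 0.92},
)
print(asdict(event))
```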
Cluster Manager handles environment-wide monitoring, troubleshooting, and configuration for your HEAT deployment. It provides logging, health checks, resource usage dashboards, and tooling for developers to streamline integration. See Cluster Manager for details on usage and best practices.
HEAT Auth offers secure authentication and role-based access control within the HEAT ecosystem. It ensures data privacy, enforces user privileges, and integrates with external identity providers where required. Visit HEAT Auth to see how to configure user roles and implement single sign-on (SSO).
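How a token is obtained depends on your identity provider and deployment. As a rough sketch, assuming HEAT Auth exposes a standard OAuth2 client-credentials flow (the URL, client ID, and secret below are placeholders, not documented values):

```python
import requests

# Hypothetical OAuth2 client-credentials exchange against HEAT Auth.
# The URL and credentials are placeholders for illustration only.
resp = requests.post(
    "https://heat.example.com/auth/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "my-integration",
        "client_secret": "…",  # supplied by your HEAT administrator
    },
    timeout=10,
)
resp.raise_for_status()
access_token = resp.json()["access_token"]  # bearer token for API calls
```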
Dashboard delivers an intuitive view of captured and analyzed data, focusing on personalized performance metrics. It’s designed for quick insight into key trends and comparisons that matter most to trainers and trainees. Head to Dashboard for customization options and feature details.
HEAT API exposes programmatic endpoints for ingestion, retrieval, and session management. External systems can push data into HEAT or query existing data flows to support a wide range of integrations. Check HEAT API for endpoint references, authentication guidelines, and usage examples.
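Endpoint paths, parameters, and response shapes are documented in HEAT API. Purely for illustration, a query for recent sessions might look like the following sketch - the path, parameters, and base URL are all assumptions:

```python
import requests

access_token = "…"  # bearer token obtained from HEAT Auth

# Hypothetical session query; the path and parameters are illustrative
# assumptions, not the documented surface - see HEAT API for the reference.
resp = requests.get(
    "https://heat.example.com/api/v1/sessions",
    params={"project": "pilot-training", "limit": 10},
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=10,
)
resp.raise_for_status()
for session in resp.json():
    print(session)
```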
Analytics & Algorithms consists of runners and session templates. Runners are secure, containerized processes that implement HEAT’s data workflows. HEAT includes powerful, built-in runners for simulation-specific analytics and common data transformations. Developers can also build custom runners and workflows using the HEAT runtime to add algorithms for domain-specific needs, all while benefiting from the platform’s scalable infrastructure. See Analytics & Algorithms for details.
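The runner contract itself is defined by the HEAT runtime and covered in Analytics & Algorithms. Conceptually, though, a runner is a containerized step that consumes inputs and emits outputs. A minimal sketch of that pattern, assuming a JSON-over-stdin/stdout contract purely for illustration:

```python
import json
import sys

# Sketch of a containerized transformation step: read telemetry events on
# stdin, compute a simple aggregate, and emit the result on stdout. The real
# input/output wiring is defined by the HEAT runtime and is not shown here.
def run(events):
    times = [
        e["metrics"]["reaction_time_ms"]
        for e in events
        if "reaction_time_ms" in e.get("metrics", {})
    ]
    return {
        "event_count": len(events),
        "mean_reaction_time_ms": sum(times) / len(times) if times else None,
    }

if __name__ == "__main__":
    events = json.load(sys.stdin)       # expects a JSON list of events
    json.dump(run(events), sys.stdout)  # aggregated session-level metrics
```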
3. Environment Setups & Requirements
Under the hood, HEAT runs on a Kubernetes foundation, ensuring consistent deployment and scalability across multiple environments. Whether you’re leveraging full-scale cloud resources or operating in an air-gapped scenario, each deployment profile supports the same core HEAT functionality - subject to the available compute and network resources.
HEAT can be delivered in three primary configurations:
- Azure: Ideal for a cloud-first approach, benefiting from Azure’s managed services, high-availability features, and on-demand scaling.
- On-Prem: Deployed on a private Kubernetes cluster, suitable for organizations with strict security/compliance requirements or data-sovereignty concerns.
- Local: A smaller-scale Kubernetes setup for isolated environments, demos, or development/testing, with reduced high-availability needs.
While these profiles differ in HA capabilities, scale-on-demand features, and infrastructure prerequisites, the platform’s core feature set and behavior remain consistent. HEAT instances are managed and licensed by VRAI, but non-Azure editions require Kubernetes infrastructure that may involve initial setup by our team.
Note: This documentation does not cover the installation or deployment of a new HEAT instance. For details on environment selection, resource requirements, and networking considerations, see Environments & System Requirements.
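Once an instance is up, standard Kubernetes tooling can be used to sanity-check it. A minimal sketch using the official Kubernetes Python client, assuming HEAT’s services run in a namespace called heat (the namespace is an assumption; your deployment may differ):

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (requires cluster access).
config.load_kube_config()

# List pods in the (assumed) "heat" namespace and report their phase.
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod(namespace="heat").items:
    print(f"{pod.metadata.name}: {pod.status.phase}")
```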
4. Ingest Your First Data
Getting data into HEAT generally involves creating an Ingest, which generates an endpoint tied to a specific Session Template and Project. When data is sent to that endpoint, HEAT automatically creates a new session and begins processing. Most HEAT deployments include multiple managed data sources as well as a default Session Template for the primary simulator or integration target. You can, however, create additional templates, projects, or data sources as needed.
In short, you’ll need:
- A simulation or integration to push data into HEAT (e.g., a simulator or an external service).
- A suitable data source to store ingested data (e.g., your database, blob store, or one of the managed sources we provide).
- A Session Template that defines your data workflow (e.g., a DAG template describing the workflow and its inputs/outputs).
- An Ingest in HEAT, which provides the endpoint for your simulation/integration to push data into.
- A Dashboard configuration to visualize the processed data.
All of this can be achieved programmatically via the HEAT API or through the Cluster Manager.
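To make that flow concrete, here is a rough programmatic sketch: create an Ingest via the API, then push a payload to the endpoint it returns. Every path, field, and parameter below is hypothetical - see Data Ingestion and HEAT API for the actual interfaces.

```python
import requests

BASE = "https://heat.example.com/api/v1"  # hypothetical API base URL
headers = {"Authorization": "Bearer <token from HEAT Auth>"}

# 1. Create an Ingest bound to a project and session template
#    (field names are illustrative, not the documented schema).
ingest = requests.post(
    f"{BASE}/ingests",
    json={"project": "pilot-training", "session_template": "default"},
    headers=headers,
    timeout=10,
).json()

# 2. Push data to the returned endpoint; HEAT creates a new session and
#    starts the workflow defined by the session template.
requests.post(
    ingest["endpoint"],
    json={"metrics": {"reaction_time_ms": 315, "accuracy": 0.92}},
    headers=headers,
    timeout=10,
).raise_for_status()
```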
For a detailed walkthrough on setting up these components and starting your first data ingestion, see Data Ingestion.
5. Iteration & Advanced Topics
- Session Management: Review or modify stored sessions, track performance metrics across multiple runs.
- Custom Pipelines: Build your own session templates and runners for domain-specific logic.
- Dashboard Customization: Add new widgets or pages to the HEAT dashboard to visualize specialized metrics.
- Security & Access Control: Manage user roles, authentication, and data governance.
For a deeper understanding, continue exploring: