Overview

exalsius manages the full lifecycle of GPU resources for AI development and inference. The following sections cover each phase:

  1. Manage nodes — Import GPU nodes into your node pool via SSH.
  2. Deploy clusters — Provision Kubernetes clusters on selected nodes.
  3. Start workspaces — Launch notebooks, dev pods, or inference endpoints on your clusters.

Each section builds on the previous one. Follow them end-to-end or jump to the stage you need.

REST API

You can also access exalsius programmatically. The REST API exposes all resources and operations available through the platform.
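As a rough sketch of programmatic access, the snippet below builds an authenticated HTTP request with Python's standard library. The base URL, the `/nodes` path, and the bearer-token header are placeholders, not the documented exalsius API; check the REST API documentation for the actual endpoints and authentication scheme.

```python
import urllib.request

# Hypothetical base URL and credential -- substitute the real values
# from the REST API documentation.
BASE_URL = "https://api.example.com/v1"
TOKEN = "YOUR_API_TOKEN"  # placeholder credential

def build_request(path: str) -> urllib.request.Request:
    """Construct an authenticated GET request for an API resource."""
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/json",
        },
    )

# Example: request the node list (hypothetical resource path).
req = build_request("/nodes")
print(req.full_url)  # https://api.example.com/v1/nodes
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would return the JSON payload for that resource, assuming the endpoint and token are valid.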

Open the REST API documentation