Quickstart

This guide walks you through the full exalsius workflow: install the CLI, log in, import a GPU node, deploy a cluster, and start your first AI workspace.

Closed beta

exalsius is currently in closed beta. Reach out to run.it@exalsius.ai to request access.

Prerequisites

  • Python 3.12 or newer
  • An exalsius account (see closed beta note above)
  • A GPU machine you can reach via SSH, or a cloud provider account to provision one
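
Before you import a machine, you can confirm it is reachable over SSH without an interactive password prompt. A minimal preflight sketch — the address and username are placeholders, so substitute your node's details:

```shell
# Non-interactive SSH reachability check.
# BatchMode=yes fails immediately instead of prompting for a password,
# so this is safe to run in a script; ConnectTimeout bounds the wait.
ssh -o BatchMode=yes -o ConnectTimeout=5 -p 22 ubuntu@203.0.113.10 true \
  && echo "SSH OK" \
  || echo "SSH check failed"
```

If the check fails, fix key-based access to the node first — the import step relies on the same SSH credentials.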

Virtual environment

We recommend installing the CLI in a dedicated virtual environment:

python -m venv .venv && source .venv/bin/activate

1. Install the CLI

pip install exls

Verify the installation:

exls --help

For uv, pipx, and troubleshooting, see the installation guide.

2. Log in

exls login

This opens your browser to complete authentication. On headless systems, the CLI displays a device code and QR code instead, so you can authenticate from another device. For more details, see the authentication guide.

3. Import a node

Add a GPU machine to your node pool. Interactive mode walks you through each field (endpoint, SSH username, pricing, SSH key); alternatively, pass everything as flags.

exls nodes import
The CLI prompts for the node's endpoint, SSH username, pricing, and SSH key. You can import an existing key or provide a path to a new one. After confirming, the CLI asks whether you want to add another node.

exls nodes import-ssh \
  --endpoint <ip:port> \
  --username <ssh-username> \
  --ssh-key-path ~/.ssh/id_rsa
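
If you don't have a key pair for the node yet, you can generate one and point `--ssh-key-path` at the private key. A sketch with an illustrative filename:

```shell
# Generate a dedicated ed25519 key pair (the filename is illustrative);
# -N "" sets an empty passphrase, -q suppresses ssh-keygen's banner.
ssh-keygen -t ed25519 -f ./exls-node-key -N "" -q

# Confirm both halves exist: the private key (./exls-node-key) is what
# you pass via --ssh-key-path; the .pub half goes on the node.
ls ./exls-node-key ./exls-node-key.pub
```

Remember that the public key must be in the node's `~/.ssh/authorized_keys` before the import can connect.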

Verify the node was imported:

exls nodes list

Expected output:

┏━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ ID           ┃ Hostname ┃ Import Time    ┃ Status    ┃ Provider ┃ Instance Type  ┃ Price ┃ Username ┃ SSH Key  ┃ Endpoint         ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ 2bc47866-... │ node-a   │ 47 minutes ago │ ADDED     │ AWS      │ g5.xlarge      │ $1.20 │ ubuntu   │ exls-key │ 203.0.113.10:22  │
│ e8e5408a-... │ node-b   │ 47 minutes ago │ AVAILABLE │ GCP      │ a2-highgpu-1g  │ $1.35 │ ubuntu   │ exls-key │ 203.0.113.11:22  │
│ ca7579ce-... │ node-c   │ 47 minutes ago │ AVAILABLE │ Azure    │ Standard_NC4as │ $1.10 │ ubuntu   │ exls-key │ 203.0.113.12:22  │
└──────────────┴──────────┴────────────────┴───────────┴──────────┴────────────────┴───────┴──────────┴──────────┴──────────────────┘

See manage nodes for details on node types and configuration.

4. Deploy a cluster

Create a cluster on the nodes you just imported. Running exls clusters deploy without arguments automatically enters interactive mode.

exls clusters deploy
The CLI prompts you to name the cluster, select worker nodes from your node pool, and optionally prepare the cluster for LLM inference. After showing a summary, it asks for confirmation before deploying.

exls clusters deploy \
  --name my-cluster \
  --worker-nodes e8e5408a-... ca7579ce-... \
  --prepare-llm-inference-environment \
  --follow
Use --name and --worker-nodes to avoid interactive prompts. Add optional flags like --prepare-llm-inference-environment as needed. The --follow flag streams deployment logs so you can watch the cluster come up.

When deployment finishes, verify the cluster is ready:

exls clusters list

Expected output:

┏━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━┓
┃ ID                   ┃ Name        ┃ Status ┃ Created At ┃ Updated At ┃ Creator     ┃
┡━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━┩
│ 0f77aece-faba-...    │ testcluster │ READY  │ 6 days ago │ 6 days ago │ my-username │
└──────────────────────┴─────────────┴────────┴────────────┴────────────┴─────────────┘

See deploy clusters for multi-node setups, telemetry, and naming options.

5. Start a workspace

Deploy a workspace on your cluster. exalsius supports jupyter, marimo, dev-pod, and llm-inference workspace types. For all workspace commands and options, see start workspaces.

Run the deploy command without flags — the CLI prompts for any required values:

exls workspaces deploy jupyter
The CLI prompts for the Jupyter password, selects your cluster automatically (or asks you to pick one if you have several), displays a deployment summary, and asks for confirmation.

Pass all options as flags:

exls workspaces deploy dev-pod --ssh-password <your-password>
When you have a single cluster, exalsius selects it automatically. With multiple clusters, you are prompted to choose one. The CLI displays a summary and asks for confirmation before deploying.

Other workspace types

Each workspace type prompts for its own required values when flags are omitted:

  • jupyter, marimo — password
  • llm-inference — HuggingFace token and model name
  • dev-pod — requires --ssh-password or --ssh-public-key explicitly
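
For the dev-pod's key-based option, you can generate a pair and supply the public key. A sketch — the filename is illustrative:

```shell
# Generate a key pair for the dev-pod (illustrative filename).
ssh-keygen -t ed25519 -f ./devpod-key -N "" -q

# The public key is what --ssh-public-key expects, e.g.:
#   exls workspaces deploy dev-pod --ssh-public-key "$(cat ./devpod-key.pub)"
cat ./devpod-key.pub
```

Whether the flag takes the key text inline or a file path may differ from this sketch; check `exls workspaces deploy dev-pod --help` for the exact form.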

Check the workspace status:

exls workspaces list

Connect using the address shown in the Access field, which is filled in automatically as soon as the workspace is ready.

First-time startup

The first workspace on a new cluster takes longer to start while container images are pulled. Subsequent workspaces start faster.

Clean up

When you are done, delete the resources you created to avoid unnecessary costs:

exls workspaces delete <WORKSPACE-ID-or-NAME>
exls clusters delete <CLUSTER-ID-or-NAME>
exls nodes delete <NODE-ID-or-NAME>

exls clusters delete will prompt for confirmation before removing the cluster and its associated resources.

Next steps

You have a running AI workspace on your own GPU cluster. From here: