CLI Reference#

embedl-hub#

embedl-hub end-to-end Edge-AI workflow CLI

Usage

embedl-hub [OPTIONS] COMMAND [ARGS]...

Options

-V, --version#

Print embedl-hub version and exit.

-v, --verbose#

Increase verbosity (-v, -vv, -vvv).

--install-completion#

Install completion for the current shell.

--show-completion#

Show completion for the current shell, to copy it or customize the installation.
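A typical session chains the commands documented below. The following sketch is illustrative only: the project name, model file, and device name are placeholders, and it assumes a valid API key.

```console
$ embedl-hub auth --api-key <your-key>
$ embedl-hub init -p "My Flower Detector App" --artifact-dir ~/my-artifacts
$ embedl-hub compile tflite qai-hub -m model.onnx -s 1,3,224,224 -d "Samsung Galaxy S24"
$ embedl-hub profile tflite qai-hub --from-run latest -d "Samsung Galaxy S24"
$ embedl-hub invoke tflite qai-hub --from-run latest -i input.npz -d "Samsung Galaxy S24"
```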

auth#

Store the API key for embedl-hub CLI.

Examples#

Configure the API key:

$ embedl-hub auth --api-key <your-key>

Usage

embedl-hub auth [OPTIONS]

Options

--api-key <api_key>#

Required Set or update API key. Generate one at https://hub.embedl.com/profile.

compile#

Compile a model for on-device deployment.

Usage

embedl-hub compile [OPTIONS] COMMAND [ARGS]...

onnxruntime#

Compile using ONNX Runtime toolchain.

Usage

embedl-hub compile onnxruntime [OPTIONS] COMMAND [ARGS]...

embedl-onnxruntime#

Compile a model using embedl-onnxruntime over SSH.

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Compile model.onnx on a remote device:

$ embedl-hub compile onnxruntime embedl-onnxruntime -m model.onnx --host 192.168.1.10 --user pi

Usage

embedl-hub compile onnxruntime embedl-onnxruntime [OPTIONS]

Options

-m, --model <model>#

Required Path to the ONNX model file to compile, or to a directory containing the ONNX model and any associated data files.

--host <host>#

Required SSH hostname or IP address of the remote device.

--user <username>#

Required SSH username for authentication.

--port <port>#

SSH port number.

Default:

22

--key-file <key_file>#

Path to the SSH private key file.

-s, --size <size>#

Input size of the model (e.g., 1,3,224,224).

--exec-path <exec_path>#

Path to the embedl-onnxruntime executable on the remote device. Defaults to ‘embedl-onnxruntime’ (assumes it is on $PATH).

--cli-args <extra_args>#

Additional CLI arguments forwarded verbatim to embedl-onnxruntime. Repeatable.

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.

qai-hub#

Compile a model to ONNX Runtime via Qualcomm AI Hub.

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Compile model.onnx for Samsung Galaxy S24:

$ embedl-hub compile onnxruntime qai-hub -m model.onnx -s 1,3,224,224 -d "Samsung Galaxy S24"

Usage

embedl-hub compile onnxruntime qai-hub [OPTIONS]

Options

-m, --model <model>#

Required Path to the TorchScript or ONNX model file to compile, or to a directory containing the ONNX model and any associated data files.

-d, --device <device>#

Required Target device name for deployment. Use the list-devices command to view all available options.

-s, --size <size>#

Required Input size of the model (e.g., 1,3,224,224).

--quantize-io#

Quantize input and output tensors.

Default:

False

--data <data_path>#

Path to calibration data directory for quantization.

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.

tensorrt#

Compile using TensorRT toolchain.

Usage

embedl-hub compile tensorrt [OPTIONS] COMMAND [ARGS]...

trtexec#

Compile a model to TensorRT engine via trtexec over SSH.

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Compile model.onnx on a remote NVIDIA device:

$ embedl-hub compile tensorrt trtexec -m model.onnx --host 192.168.1.10 --user nvidia

Usage

embedl-hub compile tensorrt trtexec [OPTIONS]

Options

-m, --model <model>#

Required Path to the ONNX model file to compile, or to a directory containing the ONNX model and any associated data files.

--host <host>#

Required SSH hostname or IP address of the remote device.

--user <username>#

Required SSH username for authentication.

--port <port>#

SSH port number.

Default:

22

--key-file <key_file>#

Path to the SSH private key file.

-s, --size <size>#

Input size of the model (e.g., 1,3,224,224).

--exec-path <exec_path>#

Path to the trtexec executable on the remote device. Defaults to ‘trtexec’ (assumes it is on $PATH).

--cli-args <extra_args>#

Additional CLI arguments forwarded verbatim to trtexec. Repeatable.

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.

tflite#

Compile using TFLite toolchain.

Usage

embedl-hub compile tflite [OPTIONS] COMMAND [ARGS]...

local#

Compile an ONNX model to TFLite locally via onnx2tf.

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Compile model.onnx to TFLite format:

$ embedl-hub compile tflite local -m model.onnx

Usage

embedl-hub compile tflite local [OPTIONS]

Options

-m, --model <model>#

Required Path to the ONNX model file to compile, or to a directory containing the ONNX model and any associated data files.

--fp16#

Enable FP16 quantization for the TFLite model.

Default:

False

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.

qai-hub#

Compile a model to TFLite via Qualcomm AI Hub.

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Compile model.onnx for Samsung Galaxy S24:

$ embedl-hub compile tflite qai-hub -m model.onnx -s 1,3,224,224 -d "Samsung Galaxy S24"

Usage

embedl-hub compile tflite qai-hub [OPTIONS]

Options

-m, --model <model>#

Required Path to the TorchScript or ONNX model file to compile, or to a directory containing the ONNX model and any associated data files.

-d, --device <device>#

Required Target device name for deployment. Use the list-devices command to view all available options.

-s, --size <size>#

Required Input size of the model (e.g., 1,3,224,224).

--quantize-io#

Quantize input and output tensors.

Default:

False

--data <data_path>#

Path to calibration data directory for quantization.

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.

init#

Configure persistent CLI context.

Stores values used by all other commands in a local config file (~/.config/embedl-hub/config.yaml). The config file records:

  • The active project (created automatically when the name does not yet exist on the server).

  • The artifact directory where every compile, profile, and invoke run writes its outputs.
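After init runs, the config file holds these two values. The sketch below shows a plausible layout; the exact field names are an assumption for illustration — only the file path and the two recorded values are documented above.

```yaml
# ~/.config/embedl-hub/config.yaml (illustrative field names)
project: My Flower Detector App
artifact_dir: /home/user/my-artifacts
```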

Examples#

Create a new project with a random name:

$ embedl-hub init

Create or set a named project:

$ embedl-hub init -p "My Flower Detector App"

Set a custom artifact directory:

$ embedl-hub init --artifact-dir ~/my-artifacts

Usage

embedl-hub init [OPTIONS]

Options

-p, --project <project>#

Project name or id

--artifact-dir <artifact_dir>#

Directory for storing run artifacts. Persisted in config so all commands use the same location.

invoke#

Run inference on a compiled model.

Usage

embedl-hub invoke [OPTIONS] COMMAND [ARGS]...

onnxruntime#

Invoke using ONNX Runtime toolchain.

Usage

embedl-hub invoke onnxruntime [OPTIONS] COMMAND [ARGS]...

embedl-onnxruntime#

Run inference on an ONNX Runtime model on a remote device via SSH.

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Invoke model on a remote device:

$ embedl-hub invoke onnxruntime embedl-onnxruntime -m my_model.onnx -i input.npz --host 192.168.1.10 --user pi

Usage

embedl-hub invoke onnxruntime embedl-onnxruntime [OPTIONS]

Options

-m, --model <model>#

Path to a compiled ONNX model file.

--from-run <from_run>#

Load a compiled model from a previous run. Use “latest” to pick the most recent matching run, or provide a run-ID prefix.

-i, --input <input_data>#

Required Path to input data (.npz file mapping input names to arrays).

--host <host>#

Required SSH hostname or IP address of the remote device.

--user <username>#

Required SSH username for authentication.

--port <port>#

SSH port number.

Default:

22

--key-file <key_file>#

Path to the SSH private key file.

--exec-path <exec_path>#

Path to the embedl-onnxruntime executable on the remote device. Defaults to ‘embedl-onnxruntime’ (assumes it is on $PATH).

--cli-args <extra_args>#

Additional CLI arguments forwarded verbatim to embedl-onnxruntime. Repeatable.

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.
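The .npz file passed to -i/--input maps input tensor names to NumPy arrays. A minimal sketch of creating one, assuming NumPy is installed; the input name "input" and the 1,3,224,224 shape are placeholders — use your model's actual input names and sizes:

```python
import numpy as np

# One array per model input, keyed by the input tensor's name.
# "input" is a placeholder; inspect your model for the real name.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
np.savez("input.npz", input=batch)

# Sanity-check that the file round-trips correctly
loaded = np.load("input.npz")
print(loaded.files)           # ['input']
print(loaded["input"].shape)  # (1, 3, 224, 224)
```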

qai-hub#

Run inference on an ONNX Runtime model via Qualcomm AI Hub.

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Invoke an .onnx model on Samsung Galaxy S25:

$ embedl-hub invoke onnxruntime qai-hub -m my_model.onnx -i input.npz -d "Samsung Galaxy S25"

Usage

embedl-hub invoke onnxruntime qai-hub [OPTIONS]

Options

-m, --model <model>#

Path to a compiled ONNX model file or directory.

--from-run <from_run>#

Load a compiled model from a previous run. Use “latest” to pick the most recent matching run, or provide a run-ID prefix.

-i, --input <input_data>#

Required Path to input data (.npz file mapping input names to arrays).

-d, --device <device>#

Required Target device name for deployment. Use the list-devices command to view all available options.

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.
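With --from-run, a compile run can feed straight into an invoke without repeating -m. An illustrative pairing (device name and file paths are placeholders):

```console
$ embedl-hub compile onnxruntime qai-hub -m model.onnx -s 1,3,224,224 -d "Samsung Galaxy S25"
$ embedl-hub invoke onnxruntime qai-hub --from-run latest -i input.npz -d "Samsung Galaxy S25"
```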

tensorrt#

Invoke using TensorRT toolchain.

Usage

embedl-hub invoke tensorrt [OPTIONS] COMMAND [ARGS]...

trtexec#

Run inference on a TensorRT model on a remote NVIDIA device via SSH.

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Invoke a TensorRT engine on a remote NVIDIA device:

$ embedl-hub invoke tensorrt trtexec -m my_model.engine -i input.npz --host 192.168.1.10 --user nvidia

Usage

embedl-hub invoke tensorrt trtexec [OPTIONS]

Options

-m, --model <model>#

Path to a compiled TensorRT engine file.

--from-run <from_run>#

Load a compiled model from a previous run. Use “latest” to pick the most recent matching run, or provide a run-ID prefix.

-i, --input <input_data>#

Required Path to input data (.npz file mapping input names to arrays).

--host <host>#

Required SSH hostname or IP address of the remote device.

--user <username>#

Required SSH username for authentication.

--port <port>#

SSH port number.

Default:

22

--key-file <key_file>#

Path to the SSH private key file.

--exec-path <exec_path>#

Path to the trtexec executable on the remote device. Defaults to ‘trtexec’ (assumes it is on $PATH).

--cli-args <extra_args>#

Additional CLI arguments forwarded verbatim to trtexec. Repeatable.

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.

tflite#

Invoke using TFLite toolchain.

Usage

embedl-hub invoke tflite [OPTIONS] COMMAND [ARGS]...

qai-hub#

Run inference on a TFLite model via Qualcomm AI Hub.

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Invoke a .tflite model on Samsung Galaxy S25:

$ embedl-hub invoke tflite qai-hub -m my_model.tflite -i input.npz -d "Samsung Galaxy S25"

Usage

embedl-hub invoke tflite qai-hub [OPTIONS]

Options

-m, --model <model>#

Path to a compiled .tflite model file.

--from-run <from_run>#

Load a compiled model from a previous run. Use “latest” to pick the most recent matching run, or provide a run-ID prefix.

-i, --input <input_data>#

Required Path to input data (.npz file mapping input names to arrays).

-d, --device <device>#

Required Target device name for deployment. Use the list-devices command to view all available options.

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.

list-devices#

List available devices (default subcommand: ‘embedl’).

Usage

embedl-hub list-devices [OPTIONS] COMMAND [ARGS]...

embedl#

List all available target devices from Embedl Cloud.

Usage

embedl-hub list-devices embedl [OPTIONS]

qai-hub#

List all available target devices from Qualcomm AI Hub.

Usage

embedl-hub list-devices qai-hub [OPTIONS]

log#

Show past runs from the artifact directory.

Usage

embedl-hub log [OPTIONS] [RUN_IDS]... COMMAND [ARGS]...

Options

-n, --count <count>#

Maximum number of runs to display.

Default:

20

-a, --all#

Show all runs including failed ones.

Default:

False

--failed#

Show only failed runs.

Default:

False

--oneline#

Compact one-line-per-run output.

Default:

False

--full-command#

Show the full binary path in CLI commands.

Default:

False

-pn, --project-name <project_name>#

Project name to show logs for. Defaults to the configured project.

Arguments

RUN_IDS#

Optional argument(s)

Run ID(s) or prefixes to display. Bypasses status filters.
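For instance, to list the five most recent runs in compact form, or to inspect a specific run by ID prefix (the prefix a1b2c3 is hypothetical):

```console
$ embedl-hub log --oneline -n 5
$ embedl-hub log a1b2c3
```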

profile#

Profile a compiled model on a target device.

Usage

embedl-hub profile [OPTIONS] COMMAND [ARGS]...

onnxruntime#

Profile using ONNX Runtime toolchain.

Usage

embedl-hub profile onnxruntime [OPTIONS] COMMAND [ARGS]...

embedl-onnxruntime#

Profile an ONNX Runtime model on a remote device via SSH.

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Profile model on a remote device:

$ embedl-hub profile onnxruntime embedl-onnxruntime -m my_model.onnx --host 192.168.1.10 --user pi

Usage

embedl-hub profile onnxruntime embedl-onnxruntime [OPTIONS]

Options

-m, --model <model>#

Path to a compiled ONNX model file.

--from-run <from_run>#

Load a compiled model from a previous run. Use “latest” to pick the most recent matching run, or provide a run-ID prefix.

--host <host>#

Required SSH hostname or IP address of the remote device.

--user <username>#

Required SSH username for authentication.

--port <port>#

SSH port number.

Default:

22

--key-file <key_file>#

Path to the SSH private key file.

--exec-path <exec_path>#

Path to the embedl-onnxruntime executable on the remote device. Defaults to ‘embedl-onnxruntime’ (assumes it is on $PATH).

--cli-args <extra_args>#

Additional CLI arguments forwarded verbatim to embedl-onnxruntime. Repeatable.

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.

qai-hub#

Profile an ONNX Runtime model via Qualcomm AI Hub.

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Profile an .onnx model on Samsung Galaxy S25:

$ embedl-hub profile onnxruntime qai-hub -m my_model.onnx -d "Samsung Galaxy S25"

Usage

embedl-hub profile onnxruntime qai-hub [OPTIONS]

Options

-m, --model <model>#

Path to a compiled ONNX model file or directory.

--from-run <from_run>#

Load a compiled model from a previous run. Use “latest” to pick the most recent matching run, or provide a run-ID prefix.

-d, --device <device>#

Required Target device name for deployment. Use the list-devices command to view all available options.

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.

tensorrt#

Profile using TensorRT toolchain.

Usage

embedl-hub profile tensorrt [OPTIONS] COMMAND [ARGS]...

trtexec#

Profile a TensorRT model on a remote NVIDIA device via SSH.

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Profile a TensorRT engine on a remote NVIDIA device:

$ embedl-hub profile tensorrt trtexec -m my_model.engine --host 192.168.1.10 --user nvidia

Usage

embedl-hub profile tensorrt trtexec [OPTIONS]

Options

-m, --model <model>#

Path to a compiled TensorRT engine file.

--from-run <from_run>#

Load a compiled model from a previous run. Use “latest” to pick the most recent matching run, or provide a run-ID prefix.

--host <host>#

Required SSH hostname or IP address of the remote device.

--user <username>#

Required SSH username for authentication.

--port <port>#

SSH port number.

Default:

22

--key-file <key_file>#

Path to the SSH private key file.

--exec-path <exec_path>#

Path to the trtexec executable on the remote device. Defaults to ‘trtexec’ (assumes it is on $PATH).

--cli-args <extra_args>#

Additional CLI arguments forwarded verbatim to trtexec. Repeatable.

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.

tflite#

Profile using TFLite toolchain.

Usage

embedl-hub profile tflite [OPTIONS] COMMAND [ARGS]...

aws#

Profile a TFLite model on the Embedl device cloud (AWS).

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Profile a .tflite model on Samsung Galaxy S25:

$ embedl-hub profile tflite aws -m my_model.tflite -d "Samsung Galaxy S25"

Profile with custom TFLite benchmark parameters:

$ embedl-hub profile tflite aws -m my_model.tflite -d "Samsung Galaxy S25" -p num_threads=2 -p warmup_runs=10

Usage

embedl-hub profile tflite aws [OPTIONS]

Options

-m, --model <model>#

Path to a compiled .tflite model file.

--from-run <from_run>#

Load a compiled model from a previous run. Use “latest” to pick the most recent matching run, or provide a run-ID prefix.

-d, --device <device>#

Required Target device name for deployment. Use the list-devices command to view all available options.

-p, --param <benchmark_params>#

Benchmark parameter as a key=value pair. Can be specified multiple times. Example: -p num_threads=4 -p warmup_runs=5.

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.

qai-hub#

Profile a TFLite model via Qualcomm AI Hub.

Artifacts are written to the directory configured by embedl-hub init --artifact-dir.

Examples#

Profile a .tflite model on Samsung Galaxy S25:

$ embedl-hub profile tflite qai-hub -m my_model.tflite -d "Samsung Galaxy S25"

Usage

embedl-hub profile tflite qai-hub [OPTIONS]

Options

-m, --model <model>#

Path to a compiled .tflite model file.

--from-run <from_run>#

Load a compiled model from a previous run. Use “latest” to pick the most recent matching run, or provide a run-ID prefix.

-d, --device <device>#

Required Target device name for deployment. Use the list-devices command to view all available options.

-pn, --project-name <project_name>#

Name of the project to use for the run. (Overrides context set with ‘embedl-hub init’)

-rn, --run-name <run_name>#

Optional name for the run. (If not set, a random name will be generated)

-t, --tag <tags>#

Tag to log for the run, in key=value format. Repeatable.

show#

Print the active project name and artifact directory.

Usage

embedl-hub show [OPTIONS]