# Workflow Walkthroughs
Step-by-step guides for the three first-class workflows. The legacy demo aliases still exist for one release cycle, but this page uses the workflow/tool grammar that new automation should prefer.
**Profile note:**

- Base install supports `inspect-series` plus a light `seasonal_naive` forecast baseline.
- Use `ts-agents[recommended]` or `uv sync` for the full forecasting and activity-recognition walkthroughs below.
## Inspect a series
### 1. Run the workflow

```bash
uv run ts-agents workflow run inspect-series \
  --input-json '{"series":[1,2,3,4,5,6,7,8,9,10]}' \
  --output-dir outputs/inspect
```

This single command:
- computes descriptive statistics
- estimates periodicity and autocorrelation
- writes a structured JSON summary
- writes a Markdown report and an autocorrelation plot
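The autocorrelation estimate is a standard sample statistic. As a rough illustration (not the workflow's actual implementation), it can be computed like this:

```python
def autocorrelation(series, lag):
    """Sample autocorrelation at a given lag (illustrative sketch only)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[t] - mean) * (series[t + lag] - mean) for t in range(n - lag))
    return cov / var

# The same inline series used in the command above.
series = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(round(autocorrelation(series, 1), 3))  # → 0.7 for this linear ramp
```

A strong lag-1 autocorrelation like this is what the workflow surfaces visually in `autocorrelation.png`.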
### 2. Interpret the outputs

All artifacts land in `outputs/inspect/` by default.
| Artifact | What it tells you |
|---|---|
| `summary.json` | Stats, periodicity, autocorrelation, and recommended next steps |
| `autocorrelation.png` | Visual lag structure for the inspected series |
| `report.md` | Markdown summary of the diagnostic run |
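The JSON summary is convenient for downstream scripts. The sketch below assumes hypothetical key names (`stats`, `periodicity`); check your own `summary.json` for the real schema:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical summary.json payload; real key names may differ.
example = {"stats": {"mean": 5.5, "std": 2.87}, "periodicity": {"period": None}}

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "summary.json"
    path.write_text(json.dumps(example))
    # In a real run, point this at outputs/inspect/summary.json instead.
    summary = json.loads(path.read_text())

print(summary["stats"]["mean"])  # → 5.5
```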
### 3. Customize the run
```bash
# Use a bundled series instead of inline JSON
uv run ts-agents workflow run inspect-series \
  --run-id Re200Rm200 \
  --variable bx001_real \
  --output-dir outputs/inspect-re200

# Increase the autocorrelation horizon
uv run ts-agents workflow run inspect-series \
  --input-json '{"series":[1,2,3,4,5,6,7,8,9,10]}' \
  --max-lag 12 \
  --output-dir outputs/inspect-maxlag
```

Key flags:
| Flag | Default | Options |
|---|---|---|
| `--run-id` / `--variable` | off | bundled dataset lookup |
| `--input-json` | off | inline JSON or JSON file path |
| `--max-lag` | auto | any positive integer |
| `--output-dir` | workflow-specific temp path | any path |
## Forecasting comparison
### 1. Run the workflow

```bash
uv run ts-agents workflow run forecast-series \
  --input-json '{"series":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]}' \
  --horizon 5 \
  --methods seasonal_naive,arima,theta \
  --output-dir outputs/forecast
```

This compares forecasting methods on a deterministic inline series and writes comparison artifacts.
### 2. Interpret the outputs

Artifacts land in `outputs/forecast/` by default.
| Artifact | What it tells you |
|---|---|
| `forecast_comparison.json` | Per-method RMSE, MAE, and MAPE, plus the best method |
| `forecast_comparison.png` | Overlay plot of each method's forecast vs. actuals |
| `forecast.json` | The selected best-model forecast in JSON form |
| `forecast.csv` | The selected best-model forecast as CSV |
| `report.md` | Markdown table with metrics and a recommendation |
Example report snippet:

```text
Compared Methods: arima, theta | Best Method (RMSE): arima
arima: RMSE=0.0000, MAE=0.0000, MAPE=0.00%
theta: RMSE=0.0148, MAE=0.0134, MAPE=0.07%
Recommendation: ARIMA (RMSE: 0.0000)
```
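RMSE, MAE, and MAPE are the standard forecast-error metrics. A minimal sketch of how they are defined (illustrative; not necessarily the workflow's exact implementation):

```python
import math

def forecast_errors(actual, predicted):
    """Return (RMSE, MAE, MAPE-in-percent) for two equal-length sequences."""
    errs = [a - p for a, p in zip(actual, predicted)]
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    mae = sum(abs(e) for e in errs) / len(errs)
    mape = 100.0 * sum(abs(e) / abs(a) for e, a in zip(errs, actual)) / len(errs)
    return rmse, mae, mape

# A perfect forecast scores 0.0 on all three, as in the arima row above.
print(forecast_errors([16, 17, 18, 19, 20], [16, 17, 18, 19, 20]))  # → (0.0, 0.0, 0.0)
```

Note that MAPE is undefined when an actual value is zero, which is why RMSE is used as the tie-breaking criterion in the report.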
### 3. Customize the run
```bash
# More methods and a longer horizon
uv run ts-agents workflow run forecast-series \
  --input-json '{"series":[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]}' \
  --methods seasonal_naive,arima,ets,theta \
  --horizon 8 \
  --output-dir outputs/forecast-longer
```

Key flags:
| Flag | Default | Options |
|---|---|---|
| `--methods` | `seasonal_naive,arima,theta` | `seasonal_naive`, `arima`, `ets`, `theta` (comma-separated) |
| `--horizon` | 1 | any positive integer |
| `--run-id` / `--variable` | off | bundled dataset lookup |
| `--output-dir` | workflow-specific temp path | any path |
## Activity recognition (synthetic data)
### 1. Generate data and run the workflow

```bash
uv run python data/make_synthetic_labeled_stream.py \
  --scenario gait --seconds 40 --seed 1337 \
  --out data/demo_labeled_stream.csv

uv run ts-agents workflow run activity-recognition \
  --input data/demo_labeled_stream.csv \
  --label-col label \
  --value-cols x,y,z \
  --output-dir outputs/activity
```

This workflow:
- reads a labeled-stream CSV
- sweeps candidate window sizes
- evaluates the best window on a held-out split
- writes plots, JSON metrics, and a Markdown report
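The window sweep rests on cutting the labeled stream into fixed-size windows, each tagged with its majority label. A simplified sketch of that idea (the workflow's actual segmentation may differ):

```python
from collections import Counter

def segment(values, labels, window_size, step=None):
    """Cut a labeled stream into fixed-size windows with majority labels.
    Illustrative sketch only; not the workflow's real implementation."""
    step = step or window_size  # non-overlapping windows by default
    windows, window_labels = [], []
    for start in range(0, len(values) - window_size + 1, step):
        windows.append(values[start:start + window_size])
        chunk = labels[start:start + window_size]
        window_labels.append(Counter(chunk).most_common(1)[0][0])
    return windows, window_labels

values = list(range(10))
labels = ["walk"] * 5 + ["sit"] * 5
windows, window_labels = segment(values, labels, window_size=4)
print(window_labels)  # → ['walk', 'sit']
```

Larger windows capture more context per example but yield fewer training examples and blur activity transitions, which is why the sweep scores several candidate sizes.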
### 2. Interpret the outputs

All artifacts land in `outputs/activity/` by default.
| Artifact | What it tells you |
|---|---|
| `window_selection.json` | Per-window scores and the chosen `best_window_size` |
| `window_scores.png` | Visual score vs. window size |
| `eval.json` | Final metrics, confusion matrix, and class counts |
| `confusion_matrix.png` | Heatmap of predicted vs. true labels |
| `report.md` | Markdown summary of the workflow run |
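The confusion matrix in `eval.json` supports further analysis, such as per-class recall. The key names below are assumptions for illustration; verify them against your own `eval.json`:

```python
# Hypothetical eval.json contents; real key names may differ.
eval_data = {
    "labels": ["sitting", "walking"],
    "confusion_matrix": [[40, 2], [3, 55]],  # rows = true label, cols = predicted
}

# Per-class recall: diagonal count divided by the row total.
recalls = {}
for i, label in enumerate(eval_data["labels"]):
    row = eval_data["confusion_matrix"][i]
    recalls[label] = row[i] / sum(row)

for label, recall in recalls.items():
    print(f"{label}: recall={recall:.3f}")
```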
### 3. Customize the run
```bash
# Different synthetic scenario
uv run python data/make_synthetic_labeled_stream.py \
  --scenario industrial --seconds 40 --seed 1337 \
  --out data/demo_labeled_stream_industrial.csv

# Narrower sweep with an explicit classifier and metric
uv run ts-agents workflow run activity-recognition \
  --input data/demo_labeled_stream_industrial.csv \
  --label-col label \
  --value-cols x,y,z \
  --window-sizes 32,64,128 \
  --classifier minirocket \
  --metric balanced_accuracy \
  --output-dir outputs/activity-industrial
```

Key flags:
| Flag | Default | Options |
|---|---|---|
| `--window-sizes` | `32,64,96,128,160` | comma-separated integers |
| `--classifier` | auto | `auto`, `minirocket`, `rocket`, `knn` |
| `--metric` | `balanced_accuracy` | `accuracy`, `balanced_accuracy`, `f1_macro` |
| `--output-dir` | workflow-specific temp path | any path |
## Activity recognition (WISDM data)
This WISDM walkthrough is a source-checkout workflow: it uses the repo-root dataset `data/wisdm_subset.csv`, which is not bundled into the published wheel.
### Build a custom WISDM stream
To download the full WISDM dataset and build your own labeled stream:
```bash
python data/make_demo_labeled_stream_wisdm.py \
  --subject 1600 --device watch --sensor accel \
  --activities walking,jogging,sitting,standing \
  --trim-policy per_class_seconds \
  --per-class-seconds walking=180,jogging=60,sitting=180,standing=180 \
  --out data/demo_labeled_stream.csv
```

Then run the workflow on that CSV:
```bash
uv run ts-agents workflow run activity-recognition \
  --input data/demo_labeled_stream.csv \
  --label-col label \
  --value-cols x,y,z \
  --output-dir outputs/activity-wisdm
```

Output artifacts are the same as in the synthetic walkthrough: `window_selection.json`, `window_scores.png`, `eval.json`, `confusion_matrix.png`, and `report.md`.