# Using and Customizing Skills
Skills are reusable workflow definitions that tell an AI agent how to use ts-agents workflows and tools for a specific task. Each skill is a Markdown file (`SKILL.md`) with YAML frontmatter describing the workflow, required tools, and step-by-step instructions.
## The 5 canonical skills
| Skill | Description |
|---|---|
| activity-recognition | End-to-end labeled-stream activity recognition: generate data, select window size, evaluate classifier, produce plots and report |
| forecasting | Forecast future values, choose baselines, compare methods (ARIMA, ETS, Theta) |
| classification | Supervised time-series classification with KNN/DTW, ROCKET, or HIVE-COTE |
| diagnostics | Quick EDA: descriptive stats, autocorrelation, periodicity, and spectral density |
| decomposition | Trend/seasonal/residual decomposition via STL, MSTL, or Holt-Winters |
All skills live in `skills/<name>/SKILL.md` and are synced to agent-specific directories on export.
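Because of this one-file-per-directory layout, skill discovery reduces to a single glob. A minimal sketch in Python (a hypothetical helper for illustration, not part of the ts-agents API):

```python
from pathlib import Path

def discover_skills(root: str = "skills") -> list[str]:
    """Return the skill names for every <root>/<name>/SKILL.md found on disk."""
    return sorted(p.parent.name for p in Path(root).glob("*/SKILL.md"))
```

This mirrors what `uv run ts-agents skills list` reports, under the assumption that registration is purely file-based.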
## Invoking a skill
Skills work with any agent that reads project-level instruction files. Point the agent at the skill and provide context:
### Claude Code

```
Run the activity-recognition skill on data/demo_labeled_stream.csv
```

### Codex / other agents

```
Use the forecasting skill to inspect method availability and run
workflow run forecast-series on a single-series CSV with horizon 12
```
The agent reads the SKILL.md, discovers the required tools, and executes the workflow.
## Customizing skill outputs
### Workflow-first outputs vs low-level tool outputs
Prefer first-class workflows when the goal is an artifact bundle:

- `workflow run inspect-series` for summary/report artifacts
- `workflow run forecast-series` for forecast plots, CSVs, and Markdown reports
- `workflow run activity-recognition` for window-size and evaluation artifacts
Use `tool run` when you need one targeted analysis function, a compatibility wrapper, or a saved JSON payload from a specific low-level tool. That split is especially important for forecasting: the workflow is artifact-first, while the `forecast_*_with_data` wrappers are compatibility/data surfaces.
### Low-level plot-producing tools
First-class workflows write artifact files directly under `--output-dir`, or into an auto-generated run directory when `--output-dir` is omitted; that is the canonical machine-facing contract. The lower-level `*_with_data` tools below expose artifact refs for targeted plotting tasks. Save them with `--json` when you want a machine-readable record of the tool output and artifact paths. `--extract-images` remains available only for legacy text outputs that still contain embedded `[IMAGE_DATA:...]` tokens.
```bash
uv run ts-agents tool run stl_decompose_with_data \
  --run Re200Rm200 --var bx001_real \
  --json \
  --save outputs/stl.json
```

The table below lists every tool that produces a plot, organized by analysis phase:
| Phase | Tool | Plot description |
|---|---|---|
| Diagnostics | `compute_autocorrelation_with_data` | ACF bar chart by lag |
| Spectral | `compute_psd_with_data` | Log-log power spectral density |
| Spectral | `compute_coherence_with_data` | Coherence vs frequency |
| Decomposition | `stl_decompose_with_data` | 4-panel: original, trend, seasonal, residual |
| Decomposition | `mstl_decompose_with_data` | 4-panel: original, trend, seasonal, residual |
| Decomposition | `holt_winters_decompose_with_data` | 4-panel: original, trend, seasonal, residual |
| Patterns | `detect_peaks_with_data` | Series with peak markers |
| Patterns | `segment_changepoint_with_data` | Series with changepoint lines |
| Patterns | `analyze_matrix_profile_with_data` | 2-panel: series + matrix profile distances |
| Patterns | `find_motifs_with_data` | Series with highlighted top motif |
| Patterns | `find_discords_with_data` | Series with highlighted anomaly |
| Patterns | `segment_fluss_with_data` | Series with segment boundaries |
| Patterns | `analyze_recurrence_with_data` | 2D recurrence plot |
**Forecasting.** `forecast_arima_with_data`, `forecast_ets_with_data`, `forecast_theta_with_data`, `forecast_seasonal_naive_with_data`, `forecast_ensemble_with_data`, and `compare_forecasts_with_data` are now data-only compatibility wrappers. Treat them as structured forecast summaries, not plot-producing tools. Use `workflow run forecast-series --output-dir ...` when you want forecast plots, CSVs, and Markdown reports as artifacts.
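Since every `--save` file is plain JSON, the artifact paths can be collected after the fact with a short script. A minimal sketch, assuming each payload exposes a `result.artifacts` list whose entries carry a `path` field (treat that exact shape as an assumption to verify against your own saved output):

```python
import json
from pathlib import Path

def collect_artifact_paths(json_files: list[str]) -> list[str]:
    """Gather result.artifacts[*].path entries from saved tool outputs."""
    paths: list[str] = []
    for name in json_files:
        payload = json.loads(Path(name).read_text())
        for artifact in payload.get("result", {}).get("artifacts", []):
            if "path" in artifact:
                paths.append(artifact["path"])
    return paths
```

Missing or artifact-free payloads simply contribute nothing, so the same helper works for the data-only forecast wrappers.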
## PDF reports via a mixed workflow/tool skill
A skill can orchestrate many plot-producing tools, save their structured output, compose a Quarto document, and render to PDF — all in one workflow. Here is a complete SKILL.md example:
```yaml
---
name: comprehensive-report
description: Full diagnostic, decomposition, pattern, and forecast analysis rendered as a PDF report
tasks: [report, pdf, comprehensive]
workflows:
  - forecast-series
tools:
  - describe_series_with_data
  - compute_autocorrelation_with_data
  - compute_psd_with_data
  - stl_decompose_with_data
  - detect_peaks_with_data
  - segment_changepoint_with_data
  - analyze_matrix_profile_with_data
  - forecast_theta_with_data
  - forecast_ets_with_data
  - forecast_arima_with_data
---
```
## Objective
Produce a single PDF report with diagnostic/decomposition/pattern plots plus
saved forecast summaries. If you need forecast plots, CSVs, or a generated
forecast report, call `workflow run forecast-series` separately and attach
those artifacts instead of expecting them from the compatibility wrappers. The
`workflows:` frontmatter above is the explicit signal that this skill also uses
the public workflow surface in addition to the listed low-level tools.
## Workflow
For the given `--run` and `--var`, execute each phase below.
Save each tool response as JSON so you can read both the structured data and
any `result.artifacts[*].path` entries. Save the forecasting tool outputs as JSON and
summarize them in the report instead of expecting forecast PNGs from the
compatibility wrappers.
### 1. Diagnostics
```bash
uv run ts-agents tool run describe_series_with_data \
--run $RUN --var $VAR \
--json \
--save outputs/$RUN/describe.json
uv run ts-agents tool run compute_autocorrelation_with_data \
--run $RUN --var $VAR \
--json \
--save outputs/$RUN/acf.json
uv run ts-agents tool run compute_psd_with_data \
--run $RUN --var $VAR \
--json \
--save outputs/$RUN/psd.json
```
### 2. Decomposition
```bash
uv run ts-agents tool run stl_decompose_with_data \
--run $RUN --var $VAR \
--json \
--save outputs/$RUN/stl.json
```
### 3. Pattern detection
```bash
uv run ts-agents tool run detect_peaks_with_data \
--run $RUN --var $VAR \
--json \
--save outputs/$RUN/peaks.json
uv run ts-agents tool run segment_changepoint_with_data \
--run $RUN --var $VAR \
--json \
--save outputs/$RUN/changepoints.json
uv run ts-agents tool run analyze_matrix_profile_with_data \
--run $RUN --var $VAR \
--json \
--save outputs/$RUN/mp.json
```
### 4. Forecasting summaries
```bash
uv run ts-agents tool run forecast_theta_with_data \
--run $RUN --var $VAR --param horizon=30 \
--json \
--save outputs/$RUN/theta.json
uv run ts-agents tool run forecast_ets_with_data \
--run $RUN --var $VAR --param horizon=30 \
--json \
--save outputs/$RUN/ets.json
uv run ts-agents tool run forecast_arima_with_data \
--run $RUN --var $VAR --param horizon=30 \
--json \
--save outputs/$RUN/arima.json
```
### 5. Compose the Quarto document
Create `outputs/$RUN/report.qmd` with this structure:
```
---
title: "Comprehensive Analysis: $RUN / $VAR"
format: pdf
---
## Diagnostics
(paste key stats from describe.json)


## Decomposition

## Patterns



## Forecasting
Summarize `theta.json`, `ets.json`, and `arima.json`:
- horizon and forecast length
- point forecast values
- intervals / uncertainty fields when present
If you also ran `workflow run forecast-series`, reference its forecast plot,
CSV, and Markdown artifacts here instead of expecting PNGs from the forecast
wrappers.
## Summary
(synthesize findings across all phases)
```
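If you prefer to generate the document programmatically rather than by hand, the stitching step can be sketched in Python. This is illustrative only: the file names match the saves from step 4, and the body lines are placeholders for the summaries you would write from the JSON contents:

```python
import json
from pathlib import Path

def compose_report(run: str, var: str, out_dir: str) -> str:
    """Assemble a minimal report.qmd that references the saved forecast JSON."""
    out = Path(out_dir)
    lines = [
        "---",
        f'title: "Comprehensive Analysis: {run} / {var}"',
        "format: pdf",
        "---",
        "",
        "## Forecasting",
    ]
    for name in ("theta", "ets", "arima"):
        path = out / f"{name}.json"
        if path.exists():
            json.loads(path.read_text())  # fail fast on a corrupt save
            lines.append(f"- `{name}.json` found; summarize its point forecasts here")
    qmd = "\n".join(lines) + "\n"
    (out / "report.qmd").write_text(qmd)
    return qmd
```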
### 6. Render to PDF
```bash
cd outputs/$RUN && quarto render report.qmd --to pdf
```
## Guardrails
- If any tool fails, log the error and continue with remaining tools.
- If the series has fewer than 50 points, skip forecasting and warn the user.
- Always verify that the plot-producing tool JSON files (`acf.json`, `psd.json`, `stl.json`, `peaks.json`, `changepoints.json`, `mp.json`) contain the expected `result.artifacts[*].path` entries before composing the QMD.
- Treat the saved forecast JSON files as the forecasting source of truth unless
you explicitly ran `workflow run forecast-series`; do not expect
  `--extract-images` output from the forecast wrappers.

## Example prompts
These prompts ask an agent to run the comprehensive workflow:

```
Run a full analysis of Re200Rm200 / bx001_real: diagnostics, STL decomposition, peak detection, changepoint segmentation, matrix profile, and Theta + ARIMA forecasts with horizon 30. Save the diagnostic/decomposition/pattern plots, summarize the forecast JSON outputs, and render a PDF report.
```

```
I need a comprehensive PDF report for every variable in the Re200Rm200 run. For each variable: compute ACF, PSD, STL decomposition, detect peaks, and forecast 20 steps with ETS. Compile the plots plus the saved forecast summaries into a single Quarto PDF.
```
## Requesting deeper analysis
Chain skills in a single prompt:

```
Run diagnostics on Re200Rm200 / bx001_real to check for seasonality, then decompose with STL, then forecast the trend component 30 steps ahead using ARIMA and Theta. Compare the forecasts.
```
This produces diagnostics artifacts, decomposition plots, and a forecast comparison — all in one session.
## Modifying skill behavior
Edit `skills/<name>/SKILL.md` directly:
- Add a step — append to the `## Workflow` section (e.g., "7. Compare STL and Holt-Winters residuals")
- Change defaults — edit parameter values in the workflow steps
- Add guardrails — insert validation steps (e.g., "If the series has fewer than 100 points, warn the user and stop")
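A guardrail like the series-length check above can be expressed as a small precondition helper. This is a hypothetical sketch for illustration, not a function ts-agents ships:

```python
def check_min_points(series: list[float], minimum: int = 100) -> bool:
    """Return True when the series is long enough to proceed; warn otherwise."""
    if len(series) < minimum:
        print(f"Warning: series has {len(series)} points (< {minimum}); stopping.")
        return False
    return True
```

An agent following the skill would run the check before each phase and skip or stop per the guardrail's wording.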
After editing, re-export so all agents pick up the change:
```bash
uv run ts-agents skills export --all-agents
```

## Creating a new skill
```
skills/
  my-skill/
    SKILL.md    # required — workflow definition
```

`SKILL.md` structure:
```
---
name: my-custom-skill
description: One-line description of what the skill does
tasks: [keyword1, keyword2]
tools: [tool_a, tool_b]
---

## Objective
What problem this skill solves.

## Workflow
1. Step one — call `tool_a` with these parameters ...
2. Step two — call `tool_b` ...
3. Summarize results.

## Guardrails
- Validation rules, limits, warnings.
```

Validate and export:
```bash
uv run ts-agents skills validate
uv run ts-agents skills export --all-agents
```

## Export and sync commands
```bash
# List registered skills
uv run ts-agents skills list

# Validate frontmatter
uv run ts-agents skills validate

# Export to all agent directories (copy)
uv run ts-agents skills export --all-agents

# Export with symlinks (local dev, Unix only)
uv run ts-agents skills export --all-agents --symlink

# Export to a single agent
uv run ts-agents skills export --agent claude

# Generate an aggregate summary
uv run ts-agents skills export --out skills/SKILLS.md
```

Supported agents: `claude`, `codex`, `gemini`, `windsurf`, `github`.
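As a rough idea of what `skills validate` might check (the real implementation may differ, and the required-key set here is an assumption based on the frontmatter examples above), the presence of required YAML frontmatter keys can be verified in a few lines:

```python
def missing_frontmatter_keys(skill_md: str) -> list[str]:
    """Return required keys absent from a SKILL.md's YAML frontmatter."""
    required = ("name", "description", "tasks")
    parts = skill_md.split("---")
    if len(parts) < 3:  # no frontmatter block at all
        return list(required)
    frontmatter = parts[1]
    keys = {line.split(":", 1)[0].strip()
            for line in frontmatter.splitlines() if ":" in line}
    return [k for k in required if k not in keys]
```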