Examples¶
This page provides practical examples for common tasks with sleap-io. Each example includes working code that you can copy and adapt for your needs.
Prerequisites
All examples assume you have sleap-io installed:
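pip install sleap-io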
Or run any example script directly with uv:
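A minimal sketch (the script name is a placeholder; uv's --with flag pulls sleap-io into the run environment on the fly):
uv run --with sleap-io my_example.py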
This automatically handles dependencies without needing to manage environments.
Most examples use import sleap_io as sio for brevity.
Basic I/O operations¶
Load and save in different formats¶
Convert between supported formats with automatic format detection.
import sleap_io as sio
# Load from SLEAP file
labels = sio.load_file("predictions.slp")
# Save to NWB file
labels.save("predictions.nwb")
Tip
sleap-io automatically detects the format from the file extension. Supported formats include .slp, .nwb, .labelstudio.json, .h5 (JABS), and .mat (LEAP).
See also
- Labels.save: Save method with format options
- Formats: Complete list of supported formats
Convert labels to raw arrays¶
Extract pose data as NumPy arrays for analysis or visualization.
import sleap_io as sio
labels = sio.load_slp("tests/data/slp/centered_pair_predictions.slp")
# Convert predictions to point coordinates in a single array
trx = labels.numpy()
n_frames, n_tracks, n_nodes, xy = trx.shape
assert xy == 2 # x and y coordinates
# Convert to array with confidence scores appended
trx_with_scores = labels.numpy(return_confidence=True)
n_frames, n_tracks, n_nodes, xy_score = trx_with_scores.shape
assert xy_score == 3 # x, y, and confidence score
Expected output shapes
For a dataset with 100 frames, 2 tracks, and 3 nodes:
- Without scores: (100, 2, 3, 2)
- With scores: (100, 2, 3, 3)
See also
- Labels.numpy: Full documentation of array conversion options
Video operations¶
Read video data¶
Load and access video frames directly.
import sleap_io as sio
video = sio.load_video("test.mp4")
n_frames, height, width, channels = video.shape
frame = video[0] # Get first frame
height, width, channels = frame.shape
# Access specific frames
middle_frame = video[n_frames // 2]
last_frame = video[-1]
Info
Video loading uses imageio-ffmpeg by default. For alternative backends, install optional dependencies:
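For example, installing an alternative reader makes the corresponding backend available (the package choices below are assumptions; check the installation docs for the officially supported extras):
pip install opencv-python  # assumed: enables the OpenCV backend
pip install av             # assumed: enables the PyAV backend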
See also
- sio.load_video: Video loading function
- Video: Video class documentation
Re-encode video¶
Fix video seeking issues by re-encoding with optimal settings.
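A minimal sketch of the workflow, assuming save_video accepts a loaded Video object and re-encodes it with its default settings (see the options linked below):
import sleap_io as sio
# Load the original video ("input.mp4" is a placeholder path)
video = sio.load_video("input.mp4")
# Write it back out with the default, seek-friendly encoding settings
sio.save_video(video, "fixed.mp4")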
Why re-encode?
Some video formats are not readily seekable at frame-level accuracy. Re-encoding with default settings ensures reliable seeking with minimal quality loss.
See also
- save_video: Video saving options and codec settings
Trim labels and video¶
Extract a subset of frames with corresponding labels.
import sleap_io as sio
# Load existing data
labels = sio.load_file("labels.slp")
# Create a new labels file with frames 1000-2000 from video 0
clip = labels.trim("clip.slp", list(range(1_000, 2_000)), video=0)
# The new file contains:
# - A trimmed video saved as "clip.mp4"
# - Labels with adjusted frame indices
Tip
The trim method automatically:
- Creates a new video file with only the specified frames
- Adjusts frame indices in the labels to match the new video
- Preserves all instance data and tracks
See also
- Labels.trim: Full trim method documentation
Data creation¶
Create labels from raw data¶
Build a complete labels dataset programmatically.
import sleap_io as sio
import numpy as np
# Create skeleton
skeleton = sio.Skeleton(
nodes=["head", "thorax", "abdomen"],
edges=[("head", "thorax"), ("thorax", "abdomen")]
)
# Create video
video = sio.load_video("test.mp4")
# Create instance from numpy array
instance = sio.Instance.from_numpy(
points=np.array([
[10.2, 20.4], # head
[5.8, 15.1], # thorax
[0.3, 10.6], # abdomen
]),
skeleton=skeleton
)
# Create labeled frame
lf = sio.LabeledFrame(video=video, frame_idx=0, instances=[instance])
# Create labels
labels = sio.Labels(videos=[video], skeletons=[skeleton], labeled_frames=[lf])
# Save
labels.save("labels.slp")
Creating predicted instances
To create predictions with confidence scores:
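A hedged sketch that follows the Instance.from_numpy example above; the exact keyword names (points, point_scores, score) may differ between sleap-io versions, so check PredictedInstance.from_numpy before copying:
import numpy as np
import sleap_io as sio
# Assumes `skeleton` and `video` were created as in the example above
predicted = sio.PredictedInstance.from_numpy(
    points=np.array([
        [10.2, 20.4],  # head
        [5.8, 15.1],   # thorax
        [0.3, 10.6],   # abdomen
    ]),
    point_scores=np.array([0.95, 0.88, 0.76]),  # per-node confidence
    score=0.86,  # overall instance confidence
    skeleton=skeleton,
)
lf = sio.LabeledFrame(video=video, frame_idx=0, instances=[predicted])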
See also
- Model: Complete data model documentation
- Labels: Labels container class
- Instance: Instance class for manual annotations
- PredictedInstance: Instance class for predictions
Dataset management¶
Make training/validation/test splits¶
Split your dataset for machine learning workflows.
import sleap_io as sio
# Load source labels
labels = sio.load_file("labels.v001.slp")
# Make splits and export with embedded images
labels.make_training_splits(
n_train=0.8,
n_val=0.1,
n_test=0.1,
save_dir="split1",
seed=42
)
# Splits are saved as self-contained SLP package files
labels_train = sio.load_file("split1/train.pkg.slp")
labels_val = sio.load_file("split1/val.pkg.slp")
labels_test = sio.load_file("split1/test.pkg.slp")
Info
The .pkg.slp extension indicates a self-contained package with embedded images, making the splits portable and shareable.
See also
- Labels.make_training_splits: Full documentation of splitting options
Working with dataset splits (LabelsSet)¶
Manage multiple related datasets as a group.
import sleap_io as sio
# Load source labels
labels = sio.load_file("labels.v001.slp")
# Create splits and get them as a LabelsSet
labels_set = labels.make_training_splits(n_train=0.8, n_val=0.1, n_test=0.1)
# Access individual splits
train_labels = labels_set["train"]
val_labels = labels_set["val"]
test_labels = labels_set["test"]
# Save the entire LabelsSet
labels_set.save("splits/") # Saves as SLP files by default
# Save as Ultralytics YOLO format
labels_set.save("yolo_dataset/", format="ultralytics")
# Load a LabelsSet from a directory
loaded_set = sio.load_labels_set("splits/")
Loading from specific files
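The call below is an assumption: load_labels_set may also accept an explicit mapping of split names to file paths instead of a directory (paths here are placeholders), so verify against the API reference before relying on it.
loaded_set = sio.load_labels_set({
    "train": "splits/train.slp",
    "val": "splits/val.slp",
    "test": "splits/test.slp",
})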
Tip
LabelsSet is particularly useful when exporting to formats that expect separate train/val/test files, like YOLO.
See also
- LabelsSet: LabelsSet class documentation
- load_labels_set: Loading function for label sets
Data manipulation¶
Fix video paths¶
Update file paths when moving projects between systems.
import sleap_io as sio
# Load labels without trying to open the video files
labels = sio.load_file("labels.v001.slp", open_videos=False)
# Fix paths using prefix replacement
labels.replace_filenames(prefix_map={
"D:/data/sleap_projects": "/home/user/sleap_projects",
"C:/Users/sleaper/Desktop/test": "/home/user/sleap_projects",
})
# Save labels with updated paths
labels.save("labels.v002.slp")
Path separators
The prefix map handles path separators automatically, but be consistent with forward slashes (/) for cross-platform compatibility.
Tip
Use open_videos=False when loading to avoid errors from missing videos at the old paths.
See also
- Labels.replace_filenames: Additional path manipulation options
Save labels with embedded images¶
Create self-contained label files with embedded video frames.
import sleap_io as sio
# Load source labels
labels = sio.load_file("labels.v001.slp")
# Save with embedded images for frames with user labeled data and suggested frames
labels.save("labels.v001.pkg.slp", embed="user+suggestions")
Embedding options
"user"
: Only frames with manual annotations"user+suggestions"
: Manual annotations plus suggested frames"all"
: All frames with any labels (including predictions)"source"
: Embed source video if labels were loaded from embedded data
See also
- Labels.save: Complete save options including embedding
Replace skeleton¶
Change the skeleton structure while preserving existing annotations.
import sleap_io as sio
# Load existing labels with skeleton nodes: "head", "trunk", "tti"
labels = sio.load_file("labels.slp")
# Create a new skeleton with different nodes
new_skeleton = sio.Skeleton(["HEAD", "CENTROID", "TAIL_BASE", "TAIL_TIP"])
# Replace skeleton with node correspondence mapping
labels.replace_skeleton(
new_skeleton,
node_map={
"head": "HEAD",
"trunk": "CENTROID",
"tti": "TAIL_BASE"
# "TAIL_TIP" will have NaN values since there's no correspondence
}
)
# Save with the new skeleton format
labels.save("labels_with_new_skeleton.slp")
Warning
Nodes without correspondence in the node_map will have NaN values in the resulting instances.
Tip
This is particularly useful when converting between different annotation tools or skeleton conventions.
See also
- Labels.replace_skeleton: Additional skeleton manipulation options
Convert to and from numpy arrays¶
Work with pose data as NumPy arrays for filtering or analysis.
import sleap_io as sio
import numpy as np
labels = sio.load_file("predictions.slp")
# Convert to array of shape (n_frames, n_tracks, n_nodes, xy)
trx = labels.numpy()
# Apply temporal filtering (example: simple moving average along the time axis)
window_size = 5
kernel = np.ones(window_size) / window_size
trx_filtered = np.apply_along_axis(
    lambda x: np.convolve(x, kernel, mode="same"), axis=0, arr=trx
)
# Update the labels with filtered data
labels.update_from_numpy(trx_filtered)
# Save the filtered version
labels.save("predictions.filtered.slp")
Advanced filtering with movement
For more sophisticated analysis and filtering, check out the movement library for pose processing.
Warning
When updating from numpy, the array shape must match the original data structure exactly.
See also
- Labels.numpy: Array conversion options
- Labels.update_from_numpy: Updating labels from arrays
- movement: Advanced pose processing library
NWB format operations¶
Working with NWB files¶
Neurodata Without Borders (NWB) provides a standardized format for neurophysiology data. sleap-io offers comprehensive NWB support with automatic format detection.
import sleap_io as sio
# Load any NWB file - automatically detects if it contains
# annotations (PoseTraining) or predictions (PoseEstimation)
labels = sio.load_nwb("pose_data.nwb")
# Save with automatic format detection
# Uses "annotations" if data has user labels, "predictions" otherwise
sio.save_nwb(labels, "output.nwb")
# Force specific format
sio.save_nwb(labels, "training.nwb", nwb_format="annotations")
sio.save_nwb(labels, "inference.nwb", nwb_format="predictions")
# Export with embedded video frames for sharing complete datasets
sio.save_nwb(labels, "dataset_export.nwb", nwb_format="annotations_export")
Format auto-detection
The harmonization layer automatically determines the appropriate format:
- Annotations: Used when data contains user-labeled instances (training data)
- Predictions: Used when data contains only predicted instances (inference results)
- Annotations Export: Use explicitly to create self-contained files with embedded video frames
Save training data with rich metadata¶
Include detailed experimental metadata when saving training annotations.
from sleap_io.io.nwb_annotations import save_labels
# Save with comprehensive metadata
save_labels(
labels,
"training_data.nwb",
session_description="Mouse skilled reaching task - training dataset",
identifier="mouse_01_session_03_annotations",
session_start_time="2024-01-15T09:30:00",
annotator="John Doe",
nwb_kwargs={
# Session metadata
"session_id": "session_003",
"experimenter": ["John Doe", "Jane Smith"],
"lab": "Motor Control Lab",
"institution": "University of Example",
# Experimental details
"experiment_description": "Skilled reaching task with food pellet reward",
"protocol": "Protocol 2024-001",
"surgery": "Cranial window implant over M1",
# Subject information
"subject": {
"subject_id": "mouse_01",
"age": "P90",
"sex": "M",
"species": "Mus musculus",
"strain": "C57BL/6J",
"weight": "25g"
}
}
)
Metadata best practices
Include as much metadata as possible for reproducibility:
- Experimental protocol details
- Subject information
- Recording conditions
- Annotator identity for tracking labeling provenance
Export dataset with embedded videos¶
Create self-contained NWB files with video frames for sharing complete datasets.
from sleap_io.io.nwb_annotations import export_labels, export_labeled_frames
# Method 1: Export complete dataset with all videos
export_labels(
labels,
output_dir="export/",
nwb_filename="complete_dataset.nwb",
as_training=True, # Include manual annotations
include_videos=True, # Embed all video frames
include_skeleton=True # Include skeleton definition
)
# Method 2: Export only frames with labels as a new video
export_labeled_frames(
labels,
output_path="labeled_frames.avi", # MJPEG video output
labels_output_path="labeled_frames.nwb", # Corresponding labels
fps=30.0, # Output frame rate
scale=1.0 # Video scale factor
)
# The export includes a FrameMap JSON file tracking frame origins
import json
with open("labeled_frames.frame_map.json", "r") as f:
frame_map = json.load(f)
print(f"Exported {frame_map['total_frames']} frames from {len(frame_map['videos'])} videos")
Export formats
- Full export: Includes all video frames, creating large but complete files
- Labeled frames only: Exports just frames with annotations, reducing file size
- Frame provenance: JSON metadata tracks which frames came from which source videos
Convert between NWB and other formats¶
Use NWB as an interchange format between different pose tracking tools.
import sleap_io as sio
# Load from DeepLabCut
dlc_data = sio.load_file("dlc_predictions.h5")
# Save as NWB predictions
sio.save_nwb(dlc_data, "dlc_in_nwb.nwb", nwb_format="predictions")
# Load SLEAP training data
sleap_labels = sio.load_file("training.slp")
# Export as NWB with videos for sharing
sio.save_nwb(sleap_labels, "training_export.nwb", nwb_format="annotations_export")
# Convert NWB back to SLEAP format
nwb_labels = sio.load_nwb("training_export.nwb")
nwb_labels.save("converted.slp")
Format preservation
NWB format preserves:
- Complete skeleton structure with node names
- Track identities
- Confidence scores
- User vs predicted instance types
- Video metadata (when using annotations_export)
See also
- NWB Format Documentation: Complete NWB format reference
- load_nwb: NWB loading function
- save_nwb: NWB saving function with format options