Update pre-commit-config.yaml + pyproject.toml + ceil rerun & transformer dependencies version (#1520)

* chore: update .gitignore

* chore: update pre-commit

* chore(deps): update pyproject

* fix(ci): multiple fixes

* chore: pre-commit apply

* chore: address review comments

* Update pyproject.toml

Co-authored-by: Ben Zhang <5977478+ben-z@users.noreply.github.com>
Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>

* chore(deps): add todo

---------

Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>
Co-authored-by: Ben Zhang <5977478+ben-z@users.noreply.github.com>
Steven Palma
2025-07-17 14:30:20 +02:00
committed by GitHub
parent 0938a1d816
commit 378e1f0338
78 changed files with 1450 additions and 636 deletions

View File

@@ -26,6 +26,7 @@ pip install -e ".[docs]"
You will also need `nodejs`. Please refer to their [installation page](https://nodejs.org/en/download)
---
**NOTE**
You only need to generate the documentation to inspect it locally (if you're planning changes and want to
@@ -63,6 +64,7 @@ doc-builder preview lerobot docs/source/
The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR: a bot will add a comment with a link to where the documentation with your changes lives.
---
**NOTE**
The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` & restart `preview` command (`ctrl-c` to stop it & call `doc-builder preview ...` again).
@@ -89,6 +91,7 @@ Sections that were moved:
[ <a href="#section-b">Section A</a><a id="section-a"></a> ]
```
and of course, if you moved it to another file, then:
```
@@ -119,7 +122,6 @@ and objects like True, None or any strings should usually be put in `code`.
Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
````
```
# first line of code

View File

@@ -5,17 +5,18 @@ In this tutorial, we'll show how to use asynchronous inference (_async inference
**Try async inference with all the policies** supported by LeRobot!
**What you'll learn:**
1. Why asynchronous inference matters and how it compares to more traditional, sequential inference.
2. How to spin-up a `PolicyServer` and connect a `RobotClient` from the same machine, and even over the network.
3. How to tune key parameters (`actions_per_chunk`, `chunk_size_threshold`) for your robot and policy.
If you get stuck, hop into our [Discord community](https://discord.gg/s3KuuzsPFb)!
In a nutshell: with *async inference*, your robot keeps acting while the policy server is already busy computing the next chunk of actions---eliminating "wait-for-inference" lags and unlocking smoother, more reactive behaviours.
In a nutshell: with _async inference_, your robot keeps acting while the policy server is already busy computing the next chunk of actions---eliminating "wait-for-inference" lags and unlocking smoother, more reactive behaviours.
This is fundamentally different from synchronous inference (sync), where the robot stays idle while the policy computes the next chunk of actions.
---
## Getting started with async inference
You can read more information on asynchronous inference in our [blogpost](https://huggingface.co/blog/async-robot-inference). This guide is designed to help you quickly set up and run asynchronous inference in your environment.
@@ -53,40 +54,53 @@ python src/lerobot/scripts/server/robot_client.py \
--aggregate_fn_name=weighted_average \ # CLIENT: the function to aggregate actions on overlapping portions
--debug_visualize_queue_size=True # CLIENT: whether to visualize the queue size at runtime
```
In summary, you need to specify instructions for:
- `SERVER`: the address and port of the policy server
- `ROBOT`: the type of robot to connect to, the port to connect to, and the local `id` of the robot
- `POLICY`: the type of policy to run, and the model name/path on the server for the checkpoint to run. You also need to specify which device the server should use, and how many actions to output at once (capped at the policy's max actions value).
- `CLIENT`: the threshold for the chunk size before sending a new observation to the server, and the function to aggregate actions on overlapping portions. Optionally, you can also visualize the queue size at runtime, to help you tune the `CLIENT` parameters.
Importantly,
- `actions_per_chunk` and `chunk_size_threshold` are key parameters to tune for your setup.
- `aggregate_fn_name` is the function used to aggregate actions on overlapping portions. You can either pick one from the registry of functions or add your own in `robot_client.py` (see [here](NOTE:addlinktoLOC) and the sketch below)
- `debug_visualize_queue_size` is a useful tool to tune the `CLIENT` parameters.
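For intuition, below is a minimal sketch of what such an aggregation function could look like. The name, signature and blending rule are illustrative only and do not correspond to the actual registry entries in `robot_client.py`:

```python
import numpy as np


def weighted_average_overlap(old_chunk: np.ndarray, new_chunk: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend the overlapping portion of two action chunks (illustrative only).

    old_chunk: actions left over from the previous chunk, shape (k, action_dim).
    new_chunk: freshly received chunk, shape (n, action_dim), with n >= k.
    alpha: weight given to the newer chunk on the overlap.
    """
    k = min(len(old_chunk), len(new_chunk))
    blended = (1 - alpha) * old_chunk[:k] + alpha * new_chunk[:k]
    # Beyond the overlap, keep the tail of the new chunk unchanged.
    return np.concatenate([blended, new_chunk[k:]], axis=0)
```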
Done! You should see your robot moving around by now 😉
---
## Done! You should see your robot moving around by now 😉
## Async vs. synchronous inference
Synchronous inference relies on interleaving action chunk prediction and action execution. This inherently results in *idle frames*: frames where the robot sits idle awaiting the policy's output, a new action chunk.
Synchronous inference relies on interleaving action chunk prediction and action execution. This inherently results in _idle frames_: frames where the robot sits idle awaiting the policy's output, a new action chunk.
In turn, inference is plagued by evident real-time lags, where the robot simply stops acting due to the lack of available actions.
With robotics models increasing in size, this problem risks becoming only more severe.
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/sync.png" width="80%"></img>
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/sync.png"
width="80%"
></img>
</p>
<p align="center">
<i>Synchronous inference</i> makes the robot idle while the policy is
computing the next chunk of actions.
</p>
<p align="center"><i>Synchronous inference</i> makes the robot idle while the policy is computing the next chunk of actions.</p>
To overcome this, we design async inference, a paradigm where action planning and execution are decoupled, resulting in (1) higher adaptability and, most importantly, (2) no idle frames.
Crucially, with async inference, the next action chunk is computed *before* the current one is exhausted, resulting in no idleness.
Crucially, with async inference, the next action chunk is computed _before_ the current one is exhausted, resulting in no idleness.
Higher adaptability is ensured by aggregating the different action chunks on overlapping portions, obtaining an up-to-date plan and a tighter control loop.
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/async.png" width="80%"></img>
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/async.png"
width="80%"
></img>
</p>
<p align="center">
<i>Asynchronous inference</i> results in no idleness because the next chunk is
computed before the current chunk is exhausted.
</p>
<p align="center"><i>Asynchronous inference</i> results in no idleness because the next chunk is computed before the current chunk is exhausted.</p>
---
@@ -105,6 +119,8 @@ python -m lerobot.scripts.server.policy_server \
```
</hfoption>
<hfoption id="API example">
<!-- prettier-ignore-start -->
```python
from lerobot.scripts.server.configs import PolicyServerConfig
from lerobot.scripts.server.policy_server import serve
@@ -115,6 +131,8 @@ config = PolicyServerConfig(
)
serve(config)
```
<!-- prettier-ignore-end -->
</hfoption>
</hfoptions>
@@ -147,6 +165,8 @@ python src/lerobot/scripts/server/robot_client.py \
```
</hfoption>
<hfoption id="API example">
<!-- prettier-ignore-start -->
```python
import threading
from lerobot.robots.so100_follower import SO100FollowerConfig
@@ -201,6 +221,8 @@ if client.start():
# (Optionally) plot the action queue size
visualize_action_queue_size(client.action_queue_size)
```
<!-- prettier-ignore-end -->
</hfoption>
</hfoptions>
@@ -216,20 +238,30 @@ The following two parameters are key in every setup:
</thead>
<tbody>
<tr>
<td><code>actions_per_chunk</code></td>
<td>
<code>actions_per_chunk</code>
</td>
<td>50</td>
<td>How many actions the policy outputs at once. Typical values: 10-50.</td>
<td>
How many actions the policy outputs at once. Typical values: 10-50.
</td>
</tr>
<tr>
<td><code>chunk_size_threshold</code></td>
<td>
<code>chunk_size_threshold</code>
</td>
<td>0.7</td>
<td>When the queue is ≤ 50% full, the client sends a fresh observation. Value in [0, 1].</td>
<td>
When the queue is ≤ 50% full, the client sends a fresh observation.
Value in [0, 1].
</td>
</tr>
</tbody>
</table>
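As a rough mental model (a hedged sketch, not the actual `RobotClient` implementation), the client-side trigger controlled by these two parameters can be thought of as:

```python
from queue import Queue


def should_send_observation(action_queue: Queue, actions_per_chunk: int, chunk_size_threshold: float) -> bool:
    # Request a new chunk once the queue has drained to the threshold fraction
    # of a full chunk, e.g. 0.7 * 50 = 35 remaining actions with the defaults.
    return action_queue.qsize() <= chunk_size_threshold * actions_per_chunk
```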
<Tip>
Different values of `actions_per_chunk` and `chunk_size_threshold` do result in different behaviours.
Different values of `actions_per_chunk` and `chunk_size_threshold` do result
in different behaviours.
</Tip>
On the one hand, increasing `actions_per_chunk` reduces the likelihood of ending up with no actions to execute, as more actions will be available when the new chunk is computed.
@@ -249,10 +281,18 @@ We found the default values of `actions_per_chunk` and `chunk_size_threshold` to
- We found values around 0.5-0.6 to work well. If you want to tweak this, spin up a `RobotClient` with `--debug-visualize-queue-size` set to `True`. This will plot the evolution of the action queue size at runtime, and you can use it to find the value of `chunk_size_threshold` that works best for your setup.
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/queues.png" width="80%"></img>
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/queues.png"
width="80%"
></img>
</p>
<p align="center">
<i>
The action queue size is plotted at runtime when the
`--debug-visualize-queue-size` flag is passed, for various levels of
`chunk_size_threshold` (`g` in the SmolVLA paper).
</i>
</p>
<p align="center"><i>The action queue size is plotted at runtime when the `--debug-visualize-queue-size` flag is passed, for various levels of `chunk_size_threshold` (`g` in the SmolVLA paper).</i></p>
---

View File

@@ -6,21 +6,22 @@ PR [#777](https://github.com/huggingface/lerobot/pull/777) improves the LeRobot
### What changed?
| | Before PR #777 | After PR #777 |
| --------------------------------- | ------------------------------------------------- | --------------------------------------------------------------------------- |
| **Joint range** | Degrees `-180...180°` | **Normalised range** Joints: `-100...100`, Gripper: `0...100` |
| **Zero position (SO100 / SO101)** | Arm fully extended horizontally | **In middle of the range for each joint** |
| **Boundary handling** | Software safeguards to detect ±180° wrap-arounds | No wrap-around logic needed due to mid-range zero |
| | Before PR #777 | After PR #777 |
| --------------------------------- | ------------------------------------------------- | ------------------------------------------------------------ |
| **Joint range** | Degrees `-180...180°` | **Normalised range** Joints: `-100...100`, Gripper: `0...100` |
| **Zero position (SO100 / SO101)** | Arm fully extended horizontally | **In middle of the range for each joint** |
| **Boundary handling** | Software safeguards to detect ±180° wrap-arounds | No wrap-around logic needed due to mid-range zero |
---
### Impact on existing datasets
* Recorded trajectories created **before** PR #777 will replay incorrectly if loaded directly:
* Joint angles are offset and incorrectly normalized.
* Any models directly finetuned or trained on the old data will need their inputs and outputs converted.
- Recorded trajectories created **before** PR #777 will replay incorrectly if loaded directly:
- Joint angles are offset and incorrectly normalized.
- Any models directly finetuned or trained on the old data will need their inputs and outputs converted.
### Using datasets made with the previous calibration system
We provide a migration example script for replaying an episode recorded with the previous calibration here: `examples/backward_compatibility/replay.py`.
Below we take you through the modifications that are done in the example script to make the previous calibration datasets work.
@@ -33,20 +34,31 @@ Below we take you through the modifications that are done in the example script
Let's break this down.
New codebase uses `.pos` suffix for the position observations and we have removed `main_` prefix:
<!-- prettier-ignore-start -->
```python
key = f"{name.removeprefix('main_')}.pos"
```
<!-- prettier-ignore-end -->
For `"shoulder_lift"` (id = 2), the 0 position is changed by -90 degrees and the direction is reversed compared to old calibration/code.
<!-- prettier-ignore-start -->
```python
action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
```
<!-- prettier-ignore-end -->
For `"elbow_flex"` (id = 3), the 0 position is changed by -90 degrees compared to old calibration/code.
<!-- prettier-ignore-start -->
```python
action["elbow_flex.pos"] -= 90
```
<!-- prettier-ignore-end -->
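Putting the three snippets above together, a hedged helper (illustrative only; it assumes the old actions are keyed like `main_shoulder_lift`, without the `.pos` suffix) could look like:

```python
def convert_old_action(action: dict[str, float]) -> dict[str, float]:
    """Map an action from an old-calibration dataset to the new convention (sketch)."""
    # Rename keys: drop the "main_" prefix and add the ".pos" suffix.
    converted = {f"{name.removeprefix('main_')}.pos": value for name, value in action.items()}
    # shoulder_lift: zero shifted by -90 degrees and direction reversed.
    converted["shoulder_lift.pos"] = -(converted["shoulder_lift.pos"] - 90)
    # elbow_flex: zero shifted by -90 degrees.
    converted["elbow_flex.pos"] -= 90
    return converted
```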
To use degrees normalization we then set the `--robot.use_degrees` option to `true`.
```diff
python examples/backward_compatibility/replay.py \
--robot.type=so101_follower \
@@ -63,6 +75,7 @@ Policies output actions in the same format as the datasets (`torch.Tensors`). Th
To find these transformations, we recommend first trying to replay an episode of the dataset your policy was trained on, using the section above.
Then, add these same transformations on your inference script (shown here in the `record.py` script):
```diff
action_values = predict_action(
observation_frame,

View File

@@ -7,11 +7,13 @@ LeRobot offers multiple options for video capture, including phone cameras, buil
To instantiate a camera, you need a camera identifier. This identifier might change if you reboot your computer or re-plug your camera, a behavior that mostly depends on your operating system.
To find the camera indices of the cameras plugged into your system, run the following script:
```bash
python -m lerobot.find_cameras opencv # or realsense for Intel Realsense cameras
```
The output will look something like this if you have two cameras connected:
```
--- Detected Cameras ---
Camera #0:
@@ -31,7 +33,6 @@ Camera #0:
> [!WARNING]
> When using Intel RealSense cameras on `macOS`, you could get this [error](https://github.com/IntelRealSense/librealsense/issues/12307): `Error finding RealSense cameras: failed to set power state`. This can be solved by running the same command with `sudo` permissions. Note that using RealSense cameras on `macOS` is unstable.
## Use Cameras
Below are two examples, demonstrating how to work with the API.
@@ -39,10 +40,10 @@ Below are two examples, demonstrating how to work with the API.
- **Asynchronous frame capture** using an OpenCV-based camera
- **Color and depth capture** using an Intel RealSense camera
<hfoptions id="shell_restart">
<hfoption id="Open CV Camera">
<!-- prettier-ignore-start -->
```python
from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
from lerobot.cameras.opencv.camera_opencv import OpenCVCamera
@@ -70,10 +71,12 @@ try:
finally:
camera.disconnect()
```
<!-- prettier-ignore-end -->
</hfoption>
<hfoption id="Intel Realsense Camera">
<!-- prettier-ignore-start -->
```python
from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig
from lerobot.cameras.realsense.camera_realsense import RealSenseCamera
@@ -103,15 +106,18 @@ try:
finally:
camera.disconnect()
```
<!-- prettier-ignore-end -->
</hfoption>
</hfoptions>
## Use your phone
<hfoptions id="use phone">
<hfoption id="Mac">
To use your iPhone as a camera on macOS, enable the Continuity Camera feature:
- Ensure your Mac is running macOS 13 or later, and your iPhone is on iOS 16 or later.
- Sign in both devices with the same Apple ID.
- Connect your devices with a USB cable or turn on Wi-Fi and Bluetooth for a wireless connection.
@@ -125,40 +131,67 @@ Your iPhone should be detected automatically when running the camera setup scrip
If you want to use your phone as a camera on Linux, follow these steps to set up a virtual camera
1. *Install `v4l2loopback-dkms` and `v4l-utils`*. Those packages are required to create virtual camera devices (`v4l2loopback`) and verify their settings with the `v4l2-ctl` utility from `v4l-utils`. Install them using:
1. _Install `v4l2loopback-dkms` and `v4l-utils`_. Those packages are required to create virtual camera devices (`v4l2loopback`) and verify their settings with the `v4l2-ctl` utility from `v4l-utils`. Install them using:
<!-- prettier-ignore-start -->
```bash
sudo apt install v4l2loopback-dkms v4l-utils
```
2. *Install [DroidCam](https://droidcam.app) on your phone*. This app is available for both iOS and Android.
3. *Install [OBS Studio](https://obsproject.com)*. This software will help you manage the camera feed. Install it using [Flatpak](https://flatpak.org):
<!-- prettier-ignore-end -->
2. _Install [DroidCam](https://droidcam.app) on your phone_. This app is available for both iOS and Android.
3. _Install [OBS Studio](https://obsproject.com)_. This software will help you manage the camera feed. Install it using [Flatpak](https://flatpak.org):
<!-- prettier-ignore-start -->
```bash
flatpak install flathub com.obsproject.Studio
```
4. *Install the DroidCam OBS plugin*. This plugin integrates DroidCam with OBS Studio. Install it with:
<!-- prettier-ignore-end -->
4. _Install the DroidCam OBS plugin_. This plugin integrates DroidCam with OBS Studio. Install it with:
<!-- prettier-ignore-start -->
```bash
flatpak install flathub com.obsproject.Studio.Plugin.DroidCam
```
5. *Start OBS Studio*. Launch with:
<!-- prettier-ignore-end -->
5. _Start OBS Studio_. Launch with:
<!-- prettier-ignore-start -->
```bash
flatpak run com.obsproject.Studio
```
6. *Add your phone as a source*. Follow the instructions [here](https://droidcam.app/obs/usage). Be sure to set the resolution to `640x480`.
7. *Adjust resolution settings*. In OBS Studio, go to `File > Settings > Video`. Change the `Base(Canvas) Resolution` and the `Output(Scaled) Resolution` to `640x480` by manually typing it in.
8. *Start virtual camera*. In OBS Studio, follow the instructions [here](https://obsproject.com/kb/virtual-camera-guide).
9. *Verify the virtual camera setup*. Use `v4l2-ctl` to list the devices:
<!-- prettier-ignore-end -->
6. _Add your phone as a source_. Follow the instructions [here](https://droidcam.app/obs/usage). Be sure to set the resolution to `640x480`.
7. _Adjust resolution settings_. In OBS Studio, go to `File > Settings > Video`. Change the `Base(Canvas) Resolution` and the `Output(Scaled) Resolution` to `640x480` by manually typing it in.
8. _Start virtual camera_. In OBS Studio, follow the instructions [here](https://obsproject.com/kb/virtual-camera-guide).
9. _Verify the virtual camera setup_. Use `v4l2-ctl` to list the devices:
<!-- prettier-ignore-start -->
```bash
v4l2-ctl --list-devices
```
<!-- prettier-ignore-end -->
You should see an entry like:
```
VirtualCam (platform:v4l2loopback-000):
/dev/video1
```
10. *Check the camera resolution*. Use `v4l2-ctl` to ensure that the virtual camera output resolution is `640x480`. Change `/dev/video1` to the port of your virtual camera from the output of `v4l2-ctl --list-devices`.
10. _Check the camera resolution_. Use `v4l2-ctl` to ensure that the virtual camera output resolution is `640x480`. Change `/dev/video1` to the port of your virtual camera from the output of `v4l2-ctl --list-devices`.
<!-- prettier-ignore-start -->
```bash
v4l2-ctl -d /dev/video1 --get-fmt-video
```
<!-- prettier-ignore-end -->
You should see an entry like:
```
>>> Format Video Capture:
>>> Width/Height : 640/480

View File

@@ -4,18 +4,22 @@ In this tutorial you will go through the full Human-in-the-Loop Sample-Efficient
HIL-SERL is a sample-efficient reinforcement learning algorithm that combines human demonstrations with online learning and human interventions. The approach starts from a small set of human demonstrations, uses them to train a reward classifier, and then employs an actor-learner architecture where humans can intervene during policy execution to guide exploration and correct unsafe behaviors. In this tutorial, you'll use a gamepad to provide interventions and control the robot during the learning process.
It combines three key ingredients:
1. **Offline demonstrations & reward classifier:** a handful of human-teleop episodes plus a vision-based success detector give the policy a shaped starting point.
2. **On-robot actor / learner loop with human interventions:** a distributed Soft Actor Critic (SAC) learner updates the policy while an actor explores on the physical robot; the human can jump in at any time to correct dangerous or unproductive behaviour.
3. **Safety & efficiency tools:** joint/end-effector (EE) bounds, crop region of interest (ROI) preprocessing and WandB monitoring keep the data useful and the hardware safe.
It combines three key ingredients: 1. **Offline demonstrations & reward classifier:** a handful of human-teleop episodes plus a vision-based success detector give the policy a shaped starting point. 2. **On-robot actor / learner loop with human interventions:** a distributed Soft Actor Critic (SAC) learner updates the policy while an actor explores on the physical robot; the human can jump in at any time to correct dangerous or unproductive behaviour. 3. **Safety & efficiency tools:** joint/end-effector (EE) bounds, crop region of interest (ROI) preprocessing and WandB monitoring keep the data useful and the hardware safe.
Together these elements let HIL-SERL reach near-perfect task success and faster cycle times than imitation-only baselines.
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/hilserl-main-figure.png" alt="HIL-SERL workflow" title="HIL-SERL workflow" width="100%"></img>
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/hilserl-main-figure.png"
alt="HIL-SERL workflow"
title="HIL-SERL workflow"
width="100%"
></img>
</p>
<p align="center"><i>HIL-SERL workflow, Luo et al. 2024</i></p>
<p align="center">
<i>HIL-SERL workflow, Luo et al. 2024</i>
</p>
This guide provides step-by-step instructions for training a robot policy on a real robot using LeRobot's HilSerl implementation.
@@ -29,6 +33,7 @@ This guide provides step-by-step instructions for training a robot policy using
## What kind of tasks can I train?
One can use HIL-SERL to train on a variety of manipulation tasks. Some recommendations:
- Start with a simple task to understand how the system works.
- Push cube to a goal region
- Pick and lift cube with the gripper
@@ -53,6 +58,7 @@ pip install -e ".[hilserl]"
The training process begins with proper configuration for the HILSerl environment. The configuration class of interest is `HILSerlRobotEnvConfig` in `lerobot/envs/configs.py`, which is defined as:
<!-- prettier-ignore-start -->
```python
class HILSerlRobotEnvConfig(EnvConfig):
robot: RobotConfig | None = None # Main robot agent (defined in `lerobot/robots`)
@@ -72,7 +78,7 @@ class HILSerlRobotEnvConfig(EnvConfig):
reward_classifier_pretrained_path: str | None = None # For reward model
number_of_steps_after_success: int = 0 # For reward classifier, collect more positive examples after a success to train a classifier
```
<!-- prettier-ignore-end -->
### Finding Robot Workspace Bounds
@@ -131,6 +137,7 @@ Create a configuration file for recording demonstrations (or edit an existing on
5. Configure `robot`, `cameras`, and other hardware settings
Example configuration section:
```json
"mode": "record",
"repo_id": "username/pick_lift_cube",
@@ -150,6 +157,7 @@ HIL-Serl learns actions in the end-effector space of the robot. Therefore, the t
For that we need to define a version of the robot that takes actions in the end-effector space. Check the robot class `SO100FollowerEndEffector` and its configuration `SO100FollowerEndEffectorConfig` for the default parameters related to the end-effector space.
<!-- prettier-ignore-start -->
```python
class SO100FollowerEndEffectorConfig(SO100FollowerConfig):
"""Configuration for the SO100FollowerEndEffector robot."""
@@ -172,6 +180,7 @@ class SO100FollowerEndEffectorConfig(SO100FollowerConfig):
}
)
```
<!-- prettier-ignore-end -->
The `Teleoperator` defines the teleoperation device. You can check the list of available teleoperators in `lerobot/teleoperators`.
@@ -189,9 +198,16 @@ To setup the gamepad, you need to set the `control_mode` to `"gamepad"` and defi
```
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/gamepad_guide.jpg?raw=true" alt="Figure shows the control mappings on a Logitech gamepad." title="Gamepad Control Mapping" width="100%"></img>
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/gamepad_guide.jpg?raw=true"
alt="Figure shows the control mappings on a Logitech gamepad."
title="Gamepad Control Mapping"
width="100%"
></img>
</p>
<p align="center">
<i>Gamepad button mapping for robot control and episode management</i>
</p>
<p align="center"><i>Gamepad button mapping for robot control and episode management</i></p>
**Setting up the SO101 leader**
@@ -215,7 +231,10 @@ During the online training, press `space` to take over the policy and `space` ag
<div class="video-container">
<video controls width="600">
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so101_leader_tutorial.mp4" type="video/mp4" />
<source
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so101_leader_tutorial.mp4"
type="video/mp4"
/>
</video>
</div>
@@ -231,6 +250,7 @@ python -m lerobot.scripts.rl.gym_manipulator --config_path src/lerobot/configs/e
```
During recording:
1. The robot will reset to the initial position defined in the configuration file `fixed_reset_joint_positions`
2. Complete the task successfully
3. The episode ends with a reward of 1 when you press the "success" button
@@ -239,13 +259,13 @@ During recording:
6. The process automatically continues to the next episode
7. After recording all episodes, the dataset is pushed to the Hugging Face Hub (optional) and saved locally
### Processing the Dataset
After collecting demonstrations, process them to determine optimal camera crops.
Reinforcement learning is sensitive to background distractions, so it is important to crop the images to the relevant workspace area.
Visual RL algorithms learn directly from pixel inputs, making them vulnerable to irrelevant visual information. Background elements like changing lighting, shadows, people moving, or objects outside the workspace can confuse the learning process. Good ROI selection should:
- Include only the essential workspace where the task happens
- Capture the robot's end-effector and all objects involved in the task
- Exclude unnecessary background elements and distractions
@@ -267,6 +287,7 @@ python -m lerobot.scripts.rl.crop_dataset_roi --repo-id username/pick_lift_cube
5. The script outputs cropping parameters and creates a new cropped dataset
Example output:
```
Selected Rectangular Regions of Interest (top, left, height, width):
observation.images.side: [180, 207, 180, 200]
@@ -274,11 +295,15 @@ observation.images.front: [180, 250, 120, 150]
```
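To make the `(top, left, height, width)` format concrete, here is an illustrative crop applied to an image array; this helper is not part of the LeRobot pipeline, which consumes these parameters through the training configuration:

```python
import numpy as np


def crop_frame(frame: np.ndarray, roi: tuple[int, int, int, int]) -> np.ndarray:
    # roi follows the (top, left, height, width) convention printed above.
    top, left, height, width = roi
    return frame[top : top + height, left : left + width]


# e.g. crop_frame(side_image, (180, 207, 180, 200)) for observation.images.side
```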
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/crop_dataset.gif" width="600"/>
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/crop_dataset.gif"
width="600"
/>
</p>
<p align="center"><i>Interactive cropping tool for selecting regions of interest</i></p>
<p align="center">
<i>Interactive cropping tool for selecting regions of interest</i>
</p>
**Updating Configuration**
@@ -294,8 +319,7 @@ Add these crop parameters to your training configuration:
**Recommended image resolution**
Most vision-based policies have been validated on square inputs of either **128×128** (default) or **64×64** pixels. We therefore advise setting the resize_size parameter to [128, 128] or [64, 64] if you need to save GPU memory and bandwidth. Other resolutions are possible but have not been extensively tested.
### Training a Reward Classifier
@@ -332,13 +356,13 @@ Example configuration section for data collection:
```json
{
"mode": "record",
"repo_id": "hf_username/dataset_name",
"dataset_root": "data/your_dataset",
"num_episodes": 20,
"push_to_hub": true,
"fps": 10,
"number_of_steps_after_success": 15
"mode": "record",
"repo_id": "hf_username/dataset_name",
"dataset_root": "data/your_dataset",
"num_episodes": 20,
"push_to_hub": true,
"fps": 10,
"number_of_steps_after_success": 15
}
```
@@ -395,21 +419,25 @@ python -m lerobot.scripts.train --config_path path/to/reward_classifier_train_co
To use your trained reward classifier, configure the `HILSerlRobotEnvConfig` to use your model:
<!-- prettier-ignore-start -->
```python
env_config = HILSerlRobotEnvConfig(
reward_classifier_pretrained_path="path_to_your_pretrained_trained_model",
# Other environment parameters
)
```
<!-- prettier-ignore-end -->
or set the argument in the json config file.
```json
{
"reward_classifier_pretrained_path": "path_to_your_pretrained_model"
"reward_classifier_pretrained_path": "path_to_your_pretrained_model"
}
```
Run `gym_manipulator.py` to test the model.
```bash
python -m lerobot.scripts.rl.gym_manipulator --config_path path/to/env_config.json
```
@@ -422,11 +450,13 @@ The reward classifier will automatically provide rewards based on the visual inp
Create the necessary json configuration files for the reward classifier and the environment. Check the examples [here](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/tree/main).
2. **Collect a dataset**:
```bash
python -m lerobot.scripts.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
```
3. **Train the classifier**:
```bash
python -m lerobot.scripts.train --config_path src/lerobot/configs/reward_classifier_train_config.json
```
@@ -459,6 +489,7 @@ python -m lerobot.scripts.rl.learner --config_path src/lerobot/configs/train_con
```
The learner:
- Initializes the policy network
- Prepares replay buffers
- Opens a `gRPC` server to communicate with actors
@@ -473,6 +504,7 @@ python -m lerobot.scripts.rl.actor --config_path src/lerobot/configs/train_confi
```
The actor:
- Connects to the learner via `gRPC`
- Initializes the environment
- Executes rollouts of the policy to collect experience
@@ -496,10 +528,19 @@ The training proceeds automatically:
- A successful experiment is one where the human has to intervene at the start but then reduces the amount of interventions as the policy improves. You can monitor the intervention rate in the `wandb` dashboard.
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/hil_effect.png?raw=true" alt="Figure shows the control mappings on a Logitech gamepad." title="Gamepad Control Mapping" width="100%"></img>
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/hil_effect.png?raw=true"
alt="Figure shows the control mappings on a Logitech gamepad."
title="Gamepad Control Mapping"
width="100%"
></img>
</p>
<p align="center"><i>Example showing how human interventions help guide policy learning over time</i></p>
<p align="center">
<i>
Example showing how human interventions help guide policy learning over time
</i>
</p>
- The figure plots the episodic reward over interaction steps and shows the effect of human interventions on policy learning.
- The orange curve is an experiment without any human interventions, while the pink and blue curves are experiments with human interventions.
@@ -510,7 +551,9 @@ The training proceeds automatically:
If you have `wandb.enable` set to `true` in your configuration, you can monitor training progress in real-time through the [Weights & Biases](https://wandb.ai/site/) dashboard.
### Guide to Human Interventions
The learning process is very sensitive to the intervention strategy. It will take a few runs to understand how to intervene effectively. Some tips and hints:
- Allow the policy to explore for a few episodes at the start of training.
- Avoid intervening for long periods of time. Try to intervene in situations where you need to correct the robot's behaviour when it goes off track.
- Once the policy starts achieving the task, even if it's not perfect, you can limit your interventions to simple, quick actions like a grasping command.
@@ -518,26 +561,36 @@ The learning process is very sensitive to the intervention strategy. It will tak
The ideal behaviour is that your intervention rate should drop gradually during training as shown in the figure below.
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/intervention_rate_tutorial_rl.png?raw=true" alt="Intervention rate" title="Intervention rate during training" width="100%"></img>
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/intervention_rate_tutorial_rl.png?raw=true"
alt="Intervention rate"
title="Intervention rate during training"
width="100%"
></img>
</p>
<p align="center"><i>Plot of the intervention rate during a training run on a pick and lift cube task</i></p>
<p align="center">
<i>
Plot of the intervention rate during a training run on a pick and lift cube
task
</i>
</p>
### Key hyperparameters to tune
Some configuration values have a disproportionate impact on training stability and speed:
- **`temperature_init`** (`policy.temperature_init`) initial entropy temperature in SAC. Higher values encourage more exploration; lower values make the policy more deterministic early on. A good starting point is `1e-2`. We observed that setting it too high can make human interventions ineffective and slow down learning.
- **`policy_parameters_push_frequency`** (`policy.actor_learner_config.policy_parameters_push_frequency`) interval in *seconds* between two weight pushes from the learner to the actor. The default is `4 s`. Decrease to **1-2 s** to provide fresher weights (at the cost of more network traffic); increase only if your connection is slow, as this will reduce sample efficiency.
- **`policy_parameters_push_frequency`** (`policy.actor_learner_config.policy_parameters_push_frequency`) interval in _seconds_ between two weight pushes from the learner to the actor. The default is `4 s`. Decrease to **1-2 s** to provide fresher weights (at the cost of more network traffic); increase only if your connection is slow, as this will reduce sample efficiency.
- **`storage_device`** (`policy.storage_device`) device on which the learner keeps the policy parameters. If you have spare GPU memory, set this to `"cuda"` (instead of the default `"cpu"`). Keeping the weights on-GPU removes CPU→GPU transfer overhead and can significantly increase the number of learner updates per second.
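For orientation only, here are the three knobs gathered in one place as a plain Python mapping from the dotted config paths above to the values recommended in this section (the exact config classes and nesting may differ in your setup):

```python
# Illustrative overrides mirroring the dotted config paths discussed above.
overrides = {
    "policy.temperature_init": 1e-2,  # initial SAC entropy temperature
    "policy.actor_learner_config.policy_parameters_push_frequency": 2,  # seconds between weight pushes (default 4)
    "policy.storage_device": "cuda",  # keep learner weights on-GPU if memory allows
}
```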
Congrats 🎉, you have finished this tutorial!
> [!TIP]
> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
Paper citation:
```
@article{luo2024precise,
title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},

View File

@@ -11,7 +11,6 @@ This guide explains how to use the `gym_hil` simulation environments as an alter
Currently, the main environment is a Franka Panda robot simulation based on MuJoCo, with tasks like picking up a cube.
## Installation
First, install the `gym_hil` package within the LeRobot environment:
@@ -25,8 +24,6 @@ pip install -e ".[hilserl]"
- A gamepad or keyboard to control the robot
- A Nvidia GPU
## Configuration
To use `gym_hil` with LeRobot, you need to create a configuration file. An example is provided [here](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/gym_hil_env.json). Key configuration sections include:
@@ -35,14 +32,15 @@ To use `gym_hil` with LeRobot, you need to create a configuration file. An examp
```json
{
"type": "hil",
"name": "franka_sim",
"task": "PandaPickCubeGamepad-v0",
"device": "cuda"
"type": "hil",
"name": "franka_sim",
"task": "PandaPickCubeGamepad-v0",
"device": "cuda"
}
```
Available tasks:
- `PandaPickCubeBase-v0`: Basic environment
- `PandaPickCubeGamepad-v0`: With gamepad control
- `PandaPickCubeKeyboard-v0`: With keyboard control
@@ -65,6 +63,7 @@ Available tasks:
```
Important parameters:
- `gripper_penalty`: Penalty for excessive gripper movement
- `use_gripper`: Whether to enable gripper control
- `end_effector_step_sizes`: Size of the steps in the x,y,z axes of the end-effector
@@ -76,40 +75,49 @@ Important parameters:
To run the environment, set mode to null:
<!-- prettier-ignore-start -->
```bash
python -m lerobot.scripts.rl.gym_manipulator --config_path path/to/gym_hil_env.json
```
<!-- prettier-ignore-end -->
### Recording a Dataset
To collect a dataset, set the mode to `record` whilst defining the repo_id and number of episodes to record:
<!-- prettier-ignore-start -->
```bash
python -m lerobot.scripts.rl.gym_manipulator --config_path path/to/gym_hil_env.json
```
<!-- prettier-ignore-end -->
### Training a Policy
To train a policy, check out the configuration example available [here](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/train_gym_hil_env.json) and run the actor and learner servers:
<!-- prettier-ignore-start -->
```bash
python -m lerobot.scripts.rl.actor --config_path path/to/train_gym_hil_env.json
```
<!-- prettier-ignore-end -->
In a different terminal, run the learner server:
<!-- prettier-ignore-start -->
```bash
python -m lerobot.scripts.rl.learner --config_path path/to/train_gym_hil_env.json
```
<!-- prettier-ignore-end -->
The simulation environment provides a safe and repeatable way to develop and test your Human-In-the-Loop reinforcement learning components before deploying to real robots.
Congrats 🎉, you have finished this tutorial!
> [!TIP]
> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
Paper citation:
```
@article{luo2024precise,
title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},

View File

@@ -3,6 +3,7 @@
This tutorial will explain how to train a neural network to control a real robot autonomously.
**You'll learn:**
1. How to record and visualize your dataset.
2. How to train a policy using your data and prepare it for evaluation.
3. How to evaluate your policy and visualize the results.
@@ -14,7 +15,10 @@ By following these steps, you'll be able to replicate tasks, such as picking up
<div class="video-container">
<video controls width="600">
<source src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/lerobot_task.mp4" type="video/mp4" />
<source
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/lerobot_task.mp4"
type="video/mp4"
/>
</video>
</div>
@@ -51,6 +55,8 @@ python -m lerobot.teleoperate \
```
</hfoption>
<hfoption id="API example">
<!-- prettier-ignore-start -->
```python
from lerobot.teleoperators.so101_leader import SO101LeaderConfig, SO101Leader
from lerobot.robots.so101_follower import SO101FollowerConfig, SO101Follower
@@ -74,10 +80,13 @@ while True:
action = teleop_device.get_action()
robot.send_action(action)
```
<!-- prettier-ignore-end -->
</hfoption>
</hfoptions>
The teleoperate command will automatically:
1. Identify any missing calibrations and initiate the calibration procedure.
2. Connect the robot and teleop device and start teleoperation.
@@ -104,6 +113,8 @@ python -m lerobot.teleoperate \
```
</hfoption>
<hfoption id="API example">
<!-- prettier-ignore-start -->
```python
from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
from lerobot.teleoperators.koch_leader import KochLeaderConfig, KochLeader
@@ -134,6 +145,8 @@ while True:
action = teleop_device.get_action()
robot.send_action(action)
```
<!-- prettier-ignore-end -->
</hfoption>
</hfoptions>
@@ -144,11 +157,13 @@ Once you're familiar with teleoperation, you can record your first dataset.
We use the Hugging Face Hub features for uploading your dataset. If you haven't previously used the Hub, make sure you can log in via the CLI using a write-access token; this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).
Add your token to the CLI by running this command:
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```
Then store your Hugging Face repository name in a variable:
```bash
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
@@ -174,6 +189,8 @@ python -m lerobot.record \
```
</hfoption>
<hfoption id="API example">
<!-- prettier-ignore-start -->
```python
from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
from lerobot.datasets.lerobot_dataset import LeRobotDataset
@@ -270,40 +287,49 @@ robot.disconnect()
teleop.disconnect()
dataset.push_to_hub()
```
<!-- prettier-ignore-end -->
</hfoption>
</hfoptions>
#### Dataset upload
Locally, your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}`. At the end of data recording, your dataset will be uploaded to your Hugging Face page (e.g. https://huggingface.co/datasets/cadene/so101_test), which you can obtain by running:
```bash
echo https://huggingface.co/datasets/${HF_USER}/so101_test
```
Your dataset will be automatically tagged with `LeRobot` for the community to find it easily, and you can also add custom tags (in this case `tutorial` for example).
You can look for other LeRobot datasets on the hub by searching for `LeRobot` [tags](https://huggingface.co/datasets?other=LeRobot).
You can also push your local dataset to the Hub manually, running:
```bash
huggingface-cli upload ${HF_USER}/record-test ~/.cache/huggingface/lerobot/{repo-id} --repo-type dataset
```
#### Record function
The `record` function provides a suite of tools for capturing and managing data during robot operation:
##### 1. Data Storage
- Data is stored in the `LeRobotDataset` format and saved to disk during recording.
- By default, the dataset is pushed to your Hugging Face page after recording.
- To disable uploading, use `--dataset.push_to_hub=False`.
##### 2. Checkpointing and Resuming
- Checkpoints are automatically created during recording.
- If an issue occurs, you can resume by re-running the same command with `--resume=true`.
- To start recording from scratch, **manually delete** the dataset directory.
##### 3. Recording Parameters
Set the flow of data recording using command-line arguments:
- `--dataset.episode_time_s=60`
Duration of each data recording episode (default: **60 seconds**).
- `--dataset.reset_time_s=60`
@@ -312,7 +338,9 @@ Set the flow of data recording using command-line arguments:
Total number of episodes to record (default: **50**).
##### 4. Keyboard Controls During Recording
Control the data recording flow using keyboard shortcuts:
- Press **Right Arrow (`→`)**: Early stop the current episode or reset time and move to the next.
- Press **Left Arrow (`←`)**: Cancel the current episode and re-record it.
- Press **Escape (`ESC`)**: Immediately stop the session, encode videos, and upload the dataset.
@@ -327,13 +355,14 @@ Avoid adding too much variation too quickly, as it may hinder your results.
If you want to dive deeper into this important topic, you can check out the [blog post](https://huggingface.co/blog/lerobot-datasets#what-makes-a-good-dataset) we wrote on what makes a good dataset.
#### Troubleshooting:
- On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).
## Visualize a dataset
If you uploaded your dataset to the hub with `--control.push_to_hub=true`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id given by:
```bash
echo ${HF_USER}/so101_test
```
@@ -356,6 +385,8 @@ python -m lerobot.replay \
```
</hfoption>
<hfoption id="API example">
<!-- prettier-ignore-start -->
```python
import time
@@ -388,6 +419,8 @@ for idx in range(dataset.num_frames):
robot.disconnect()
```
<!-- prettier-ignore-end -->
</hfoption>
</hfoptions>
@@ -396,6 +429,7 @@ Your robot should replicate movements similar to those you recorded. For example
## Train a policy
To train a policy to control your robot, use the [`python -m lerobot.scripts.train`](../src/lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/so101_test \
@@ -408,14 +442,16 @@ python -m lerobot.scripts.train \
```
Let's explain the command:
1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/so101_test`.
2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../src/lerobot/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
4. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
3. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
Training should take several hours. You will find checkpoints in `outputs/train/act_so101_test/checkpoints`.
To resume training from a checkpoint, below is an example command to resume from `last` checkpoint of the `act_so101_test` policy:
```bash
python -m lerobot.scripts.train \
--config_path=outputs/train/act_so101_test/checkpoints/last/pretrained_model/train_config.json \
@@ -427,17 +463,20 @@ If you do not want to push your model to the hub after training use `--policy.pu
Additionally, you can provide extra `tags`, specify a `license` for your model, or make the model repo `private` by adding this: `--policy.private=true --policy.tags=\[ppo,rl\] --policy.license=mit`
#### Train using Colab
If your local computer doesn't have a powerful GPU, you could use Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).
#### Upload policy checkpoints
Once training is done, upload the latest checkpoint with:
```bash
huggingface-cli upload ${HF_USER}/act_so101_test \
outputs/train/act_so101_test/checkpoints/last/pretrained_model
```
You can also upload intermediate checkpoints with:
```bash
CKPT=010000
huggingface-cli upload ${HF_USER}/act_so101_test${CKPT} \
@@ -467,6 +506,8 @@ python -m lerobot.record \
```
</hfoption>
<hfoption id="API example">
<!-- prettier-ignore-start -->
```python
from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
from lerobot.datasets.lerobot_dataset import LeRobotDataset
@@ -539,9 +580,12 @@ for episode_idx in range(NUM_EPISODES):
robot.disconnect()
dataset.push_to_hub()
```
<!-- prettier-ignore-end -->
</hfoption>
</hfoptions>
As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
1. There is an additional `--control.policy.path` argument which indicates the path to your policy checkpoint (e.g. `outputs/train/eval_act_so101_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_so101_test`).
2. The name of the dataset begins with `eval` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_so101_test`).

View File

@@ -3,6 +3,7 @@
This tutorial will explain how to train a neural network to control a robot in simulation with imitation learning.
**You'll learn:**
1. How to record a dataset in simulation with [gym-hil](https://github.com/huggingface/gym-hil) and visualize the dataset.
2. How to train a policy using your data.
3. How to evaluate your policy in simulation and visualize the results.
@@ -55,13 +56,21 @@ Note that to teleoperate the robot you have to hold the "Human Take Over Pause P
**Gamepad Controls**
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/gamepad_guide.jpg?raw=true" alt="Figure shows the control mappings on a Logitech gamepad." title="Gamepad Control Mapping" width="100%"></img>
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/gamepad_guide.jpg?raw=true"
alt="Figure shows the control mappings on a Logitech gamepad."
title="Gamepad Control Mapping"
width="100%"
></img>
</p>
<p align="center">
<i>Gamepad button mapping for robot control and episode management</i>
</p>
<p align="center"><i>Gamepad button mapping for robot control and episode management</i></p>
**Keyboard controls**
For keyboard controls use the `spacebar` to enable control and the following keys to move the robot:
```bash
Arrow keys: Move in X-Y plane
Shift and Shift_R: Move in Z axis
@@ -74,14 +83,21 @@ For keyboard controls use the `spacebar` to enable control and the following key
If you uploaded your dataset to the hub you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id.
<p align="center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/dataset_visualizer_sim.png" alt="Figure shows the dataset visualizer" title="Dataset visualization" width="100%"></img>
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/dataset_visualizer_sim.png"
alt="Figure shows the dataset visualizer"
title="Dataset visualization"
width="100%"
></img>
</p>
<p align="center">
<i>Dataset visualizer</i>
</p>
<p align="center"><i>Dataset visualizer</i></p>
## Train a policy
To train a policy to control your robot, use the [`python -m lerobot.scripts.train`](../src/lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/il_gym \
@@ -93,25 +109,29 @@ python -m lerobot.scripts.train \
```
Let's explain the command:
1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/il_gym`.
2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../src/lerobot/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
4. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
3. We provided `policy.device=cuda` since we are training on a Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
Training should take several hours; 100k steps (the default) will take about 1 hour on an Nvidia A100. You will find checkpoints in `outputs/train/il_sim_test/checkpoints`.
#### Train using Colab
If your local computer doesn't have a powerful GPU, you could use Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).
#### Upload policy checkpoints
Once training is done, upload the latest checkpoint with:
```bash
huggingface-cli upload ${HF_USER}/il_sim_test \
outputs/train/il_sim_test/checkpoints/last/pretrained_model
```
You can also upload intermediate checkpoints with:
```bash
CKPT=010000
huggingface-cli upload ${HF_USER}/il_sim_test${CKPT} \
@@ -144,9 +164,9 @@ mjpython -m lerobot.scripts.rl.eval_policy --config_path=path/to/eval_config_gym
</hfoptions>
> [!WARNING]
> While the main workflow of training ACT in simulation is straightforward, there is significant room for exploring how to set up the task, define the initial state of the environment, and determine the type of data required during collection to learn the most effective policy. If your trained policy doesn't perform well, investigate the quality of the dataset it was trained on using our visualizers, as well as the action values and various hyperparameters related to ACT and the simulation.
Congrats 🎉, you have finished this tutorial. If you want to continue using LeRobot in simulation, follow this [Tutorial on reinforcement learning in sim with HIL-SERL](https://huggingface.co/docs/lerobot/hilserl_sim).
> [!TIP]
> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).

View File

@@ -1,6 +1,10 @@
<div class="flex justify-center">
<a target="_blank" href="https://huggingface.co/lerobot">
<img alt="HuggingFace Expert Acceleration Program" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/lerobot-logo-thumbnail.png" style="width: 100%"></img>
<img
alt="HuggingFace Expert Acceleration Program"
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/lerobot-logo-thumbnail.png"
style="width: 100%"
></img>
</a>
</div>

View File

@@ -5,45 +5,56 @@
Currently only available from source.
Download our source code:
```bash
git clone https://github.com/huggingface/lerobot.git
cd lerobot
```
Create a virtual environment with Python 3.10, using [`Miniconda`](https://docs.anaconda.com/miniconda/install/#quick-command-line-install)
```bash
conda create -y -n lerobot python=3.10
```
Then activate your conda environment; you have to do this each time you open a shell to use lerobot:
```bash
conda activate lerobot
```
When using `miniconda`, install `ffmpeg` in your environment:
```bash
conda install ffmpeg -c conda-forge
```
> [!TIP]
> This usually installs `ffmpeg 7.X` for your platform compiled with the `libsvtav1` encoder. If `libsvtav1` is not supported (check supported encoders with `ffmpeg -encoders`), you can:
>
> - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
>
> ```bash
> conda install ffmpeg=7.1.1 -c conda-forge
> ```
>
> - _[On Linux only]_ If you want to bring your own ffmpeg: Install [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), and make sure you use the corresponding ffmpeg binary to your install with `which ffmpeg`.
Install 🤗 LeRobot:
```bash
pip install -e .
```
### Troubleshooting
If you encounter build errors, you may need to install additional dependencies: `cmake`, `build-essential`, and `ffmpeg libs`.
To install these for linux run:
```bash
sudo apt-get install cmake build-essential python-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev
```
For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)
## Optional dependencies
@@ -51,20 +62,26 @@ For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/
LeRobot provides optional extras for specific functionalities. Multiple extras can be combined (e.g., `.[aloha,feetech]`). For all available extras, refer to `pyproject.toml`.
### Simulations
Install environment packages: `aloha` ([gym-aloha](https://github.com/huggingface/gym-aloha)), `xarm` ([gym-xarm](https://github.com/huggingface/gym-xarm)), or `pusht` ([gym-pusht](https://github.com/huggingface/gym-pusht))
Example:
```bash
pip install -e ".[aloha]" # or "[pusht]" for example
```
### Motor Control
For Koch v1.1, install the Dynamixel SDK; for SO100/SO101/Moss, install the Feetech SDK.
```bash
pip install -e ".[feetech]" # or "[dynamixel]" for example
```
### Experiment Tracking
To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
```bash
wandb login
```

View File

@@ -21,16 +21,13 @@ Please refer to the [`MotorsBus`](https://github.com/huggingface/lerobot/blob/ma
For a good example of how it can be used, you can have a look at our own [SO101 follower implementation](https://github.com/huggingface/lerobot/blob/main/lerobot/robots/so101_follower/so101_follower.py)
Use these if compatible. Otherwise, you'll need to find or write a Python interface (not covered in this tutorial):
- Find an existing SDK in Python (or use bindings to C/C++)
- Or implement a basic communication wrapper (e.g., via pyserial, socket, or CANopen)
You're not alone—many community contributions use custom boards or firmware!
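If you go the custom-wrapper route, a minimal sketch along these lines can serve as a starting point. This is purely illustrative and assumes a hypothetical board speaking a simple text protocol over `pyserial`; the port, baudrate, and packet format are placeholders you would replace with your hardware's actual protocol:
<!-- prettier-ignore-start -->
```python
import serial  # pyserial


class SimpleSerialBus:
    """Illustrative wrapper around a serial link to a hypothetical motor board."""

    def __init__(self, port: str = "/dev/ttyUSB0", baudrate: int = 1_000_000):
        self.port = port
        self.baudrate = baudrate
        self.serial: serial.Serial | None = None

    @property
    def is_connected(self) -> bool:
        return self.serial is not None and self.serial.is_open

    def connect(self) -> None:
        # Open the serial port; a short timeout keeps reads from blocking forever.
        self.serial = serial.Serial(self.port, self.baudrate, timeout=0.1)

    def write_position(self, motor_id: int, position: int) -> None:
        # Placeholder packet format ("M<id> P<pos>\n"): adapt to your board's protocol.
        self.serial.write(f"M{motor_id} P{position}\n".encode())

    def read_position(self, motor_id: int) -> int:
        # Query the current position and parse the reply (again, a made-up protocol).
        self.serial.write(f"M{motor_id} ?\n".encode())
        return int(self.serial.readline().decode().strip())

    def disconnect(self) -> None:
        if self.is_connected:
            self.serial.close()
```
<!-- prettier-ignore-end -->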
For Feetech and Dynamixel, we currently support these servos:
- Feetech:
- STS & SMS series (protocol 0): `sts3215`, `sts3250`, `sm8512bl`
- SCS series (protocol 1): `scs0009`
- Dynamixel (protocol 2.0 only): `xl330-m077`, `xl330-m288`, `xl430-w250`, `xm430-w350`, `xm540-w270`, `xc430-w150`
If you are using Feetech or Dynamixel servos that are not in this list, you can add those in the [Feetech table](https://github.com/huggingface/lerobot/blob/main/lerobot/motors/feetech/tables.py) or [Dynamixel table](https://github.com/huggingface/lerobot/blob/main/lerobot/motors/dynamixel/tables.py). Depending on the model, this will require you to add model-specific information. In most cases though, there shouldn't be a lot of additions to do.
@@ -41,6 +38,8 @@ In the next sections, we'll use a `FeetechMotorsBus` as the motors interface for
You'll first need to specify the config class and a string identifier (`name`) for your robot. If your robot has special needs that you'd like to be able to change easily, they should go here (e.g. port/address, baudrate).
Here, we'll add the port name and one camera by default for our robot:
<!-- prettier-ignore-start -->
```python
from dataclasses import dataclass, field
@@ -64,6 +63,7 @@ class MyCoolRobotConfig(RobotConfig):
}
)
```
<!-- prettier-ignore-end -->
Have a look at our [Cameras tutorial](./cameras) to understand how to detect and add your camera.
@@ -71,6 +71,7 @@ Next, we'll create our actual robot class which inherits from `Robot`. This abst
Here we'll create a simple 5-DoF robot with one camera. It could be a simple arm, but notice that the `Robot` abstract class does not assume anything about your robot's form factor. You can let your imagination run wild when designing new robots!
<!-- prettier-ignore-start -->
```python
from lerobot.cameras import make_cameras_from_configs
from lerobot.motors import Motor, MotorNormMode
@@ -96,10 +97,11 @@ class MyCoolRobot(Robot):
)
self.cameras = make_cameras_from_configs(config.cameras)
```
<!-- prettier-ignore-end -->
## Step 2: Define Observation and Action Features
These two properties define the _interface contract_ between your robot and tools that consume it (such as data collection or learning pipelines).
> [!WARNING]
> Note that these properties must be callable even if the robot is not yet connected, so avoid relying on runtime hardware state to define them.
@@ -109,6 +111,8 @@ These two properties define the *interface contract* between your robot and tool
This property should return a dictionary describing the structure of sensor outputs from your robot. The keys match what `get_observation()` returns, and the values describe either the shape (for arrays/images) or the type (for simple values).
Example for our 5-DoF arm with one camera:
<!-- prettier-ignore-start -->
```python
@property
def _motors_ft(self) -> dict[str, type]:
@@ -130,6 +134,8 @@ def _cameras_ft(self) -> dict[str, tuple]:
def observation_features(self) -> dict:
return {**self._motors_ft, **self._cameras_ft}
```
<!-- prettier-ignore-end -->
In this case, observations consist of a simple dict storing each motor's position and a camera image.
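For intuition, here is what such an observation could look like at runtime. This is only a hypothetical sketch: the joint names, image size, and values below are placeholders, not the actual ones defined earlier in this tutorial.
<!-- prettier-ignore-start -->
```python
import numpy as np

# Hypothetical observation matching observation_features for a 5-DoF arm with one camera.
# Keys follow the "<motor>.pos" pattern plus one entry per camera; names and shapes are placeholders.
observation = {
    "joint_1.pos": 12.5,
    "joint_2.pos": -48.2,
    "joint_3.pos": 30.0,
    "joint_4.pos": 5.7,
    "joint_5.pos": 0.0,
    "front_cam": np.zeros((480, 640, 3), dtype=np.uint8),  # H x W x C camera frame
}
```
<!-- prettier-ignore-end -->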
### `action_features`
@@ -137,10 +143,13 @@ In this case, observations consist of a simple dict storing each motor's positio
This property describes the commands your robot expects via `send_action()`. Again, keys must match the expected input format, and values define the shape/type of each command.
Here, we simply reuse the joint proprioceptive features (`self._motors_ft`) from `observation_features`: the action sent will simply be the goal position for each motor.
<!-- prettier-ignore-start -->
```python
def action_features(self) -> dict:
return self._motors_ft
```
<!-- prettier-ignore-end -->
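As a quick illustration of this contract (again with placeholder joint names), a caller such as a recording or inference loop would then pass a dict of goal positions to `send_action()`:
<!-- prettier-ignore-start -->
```python
# Hypothetical action matching action_features: one goal position per motor.
action = {
    "joint_1.pos": 15.0,
    "joint_2.pos": -45.0,
    "joint_3.pos": 28.0,
    "joint_4.pos": 0.0,
    "joint_5.pos": 10.0,
}
# robot.send_action(action)  # would command each motor to its goal position
```
<!-- prettier-ignore-end -->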
## Step 3: Handle Connection and Disconnection
@@ -150,16 +159,19 @@ These methods should handle opening and closing communication with your hardware
This property should simply reflect that communication with the robot's hardware is established. When this property is `True`, it should be possible to read and write to the hardware using `get_observation()` and `send_action()`.
<!-- prettier-ignore-start -->
```python
@property
def is_connected(self) -> bool:
return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
```
<!-- prettier-ignore-end -->
### `connect()`
This method should establish communication with the hardware. Moreover, if your robot needs calibration and is not calibrated, it should start a calibration procedure by default. If your robot needs some specific configuration, this should also be called here.
<!-- prettier-ignore-start -->
```python
def connect(self, calibrate: bool = True) -> None:
self.bus.connect()
@@ -171,25 +183,31 @@ def connect(self, calibrate: bool = True) -> None:
self.configure()
```
<!-- prettier-ignore-end -->
### `disconnect()`
This method should gracefully terminate communication with the hardware: free any related resources (threads or processes), close ports, etc.
Here, we already handle this in our `MotorsBus` and `Camera` classes so we just need to call their own `disconnect()` methods:
<!-- prettier-ignore-start -->
```python
def disconnect(self) -> None:
self.bus.disconnect()
for cam in self.cameras.values():
cam.disconnect()
```
<!-- prettier-ignore-end -->
## Step 4: Support Calibration and Configuration
LeRobot supports saving and loading calibration data automatically. This is useful for joint offsets, zero positions, or sensor alignment.
> Note that depending on your hardware, this may not apply. If that's the case, you can simply leave these methods as no-ops:

<!-- prettier-ignore-start -->
```python
@property
def is_calibrated(self) -> bool:
    return True
@@ -202,7 +220,8 @@ LeRobot supports saving and loading calibration data automatically. This is usef
This should reflect whether your robot has the required calibration loaded.
<!-- prettier-ignore-start -->
```python
@property
def is_calibrated(self) -> bool:
return self.bus.is_calibrated
@@ -216,6 +235,8 @@ The goal of the calibration is twofold:
It should implement the logic for calibration (if relevant) and update the `self.calibration` dictionary. If you are using Feetech or Dynamixel motors, our bus interfaces already include methods to help with this.
<!-- prettier-ignore-start -->
```python
def calibrate(self) -> None:
self.bus.disable_torque()
@@ -245,11 +266,13 @@ def calibrate(self) -> None:
self._save_calibration()
print("Calibration saved to", self.calibration_fpath)
```
<!-- prettier-ignore-end -->
### `configure()`
Use this to set up any configuration for your hardware (servo control modes, controller gains, etc.). This should usually be run at connection time and be idempotent.
<!-- prettier-ignore-start -->
```python
def configure(self) -> None:
with self.bus.torque_disabled():
@@ -260,6 +283,7 @@ def configure(self) -> None:
self.bus.write("I_Coefficient", motor, 0)
self.bus.write("D_Coefficient", motor, 32)
```
<!-- prettier-ignore-end -->
## Step 5: Implement Sensors Reading and Action Sending
@@ -269,6 +293,7 @@ These are the most important runtime functions: the core I/O loop.
Returns a dictionary of sensor values from the robot. These typically include motor states, camera frames, various sensors, etc. In the LeRobot framework, these observations are what will be fed to a policy in order to predict the actions to take. The dictionary keys and structure must match `observation_features`.
<!-- prettier-ignore-start -->
```python
def get_observation(self) -> dict[str, Any]:
if not self.is_connected:
@@ -284,6 +309,7 @@ def get_observation(self) -> dict[str, Any]:
return obs_dict
```
<!-- prettier-ignore-end -->
### `send_action()`
@@ -291,6 +317,7 @@ Takes a dictionary that matches `action_features`, and sends it to your hardware
For simplicity, we won't be adding any modification of the actions in our example here.
<!-- prettier-ignore-start -->
```python
def send_action(self, action: dict[str, Any]) -> dict[str, Any]:
goal_pos = {key.removesuffix(".pos"): val for key, val in action.items()}
@@ -300,6 +327,7 @@ def send_action(self, action: dict[str, Any]) -> dict[str, Any]:
return action
```
<!-- prettier-ignore-end -->
## Adding a Teleoperator

View File

@@ -10,8 +10,8 @@ This repository contains example notebooks for using LeRobot. These notebooks de
We provide a ready-to-run Google Colab notebook to help you train ACT policies using datasets from the Hugging Face Hub, with optional logging to Weights & Biases.
| Notebook | Colab |
| :------------------------------------------------------------------------------------------------------ | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Train ACT with LeRobot](https://github.com/huggingface/notebooks/blob/main/lerobot/training-act.ipynb) | [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/lerobot/training-act.ipynb) |
Expected training time for 100k steps: ~1.5 hours on an NVIDIA A100 GPU with batch size of `64`.

View File

@@ -3,9 +3,18 @@
SmolVLA is Hugging Face's lightweight foundation model for robotics. Designed for easy fine-tuning on LeRobot datasets, it helps accelerate your development!
<p align="center">
<img
src="https://cdn-uploads.huggingface.co/production/uploads/640e21ef3c82bd463ee5a76d/aooU0a3DMtYmy_1IWMaIM.png"
alt="SmolVLA architecture."
width="500"
/>
<br />
<em>
Figure 1. SmolVLA takes as input (i) multiple camera views, (ii) the
robot's current sensorimotor state, and (iii) a natural language
instruction, encoded into contextual features used to condition the action
expert when generating an action chunk.
</em>
</p>
## Set Up Your Environment
@@ -32,6 +41,7 @@ We recommend checking out the dataset linked below for reference that was used i
In this dataset, we recorded 50 episodes across 5 distinct cube positions. For each position, we collected 10 episodes of pick-and-place interactions. This structure, repeating each variation several times, helped the model generalize better. We tried a similar dataset with 25 episodes, and it was not enough, leading to poor performance. So data quality and quantity are definitely key.
After you have your dataset available on the Hub, you are good to go to use our finetuning script to adapt SmolVLA to your application.
</Tip>
## Finetune SmolVLA on your data
@@ -56,7 +66,8 @@ cd lerobot && python -m lerobot.scripts.train \
```
<Tip>
You can start with a small batch size and increase it incrementally, if the
GPU allows it, as long as loading times remain short.
</Tip>
Fine-tuning is an art. For a complete overview of the options for finetuning, run
@@ -66,12 +77,20 @@ python -m lerobot.scripts.train --help
```
<p align="center">
<img
src="https://cdn-uploads.huggingface.co/production/uploads/640e21ef3c82bd463ee5a76d/S-3vvVCulChREwHDkquoc.gif"
alt="Comparison of SmolVLA across task variations."
width="500"
/>
<br />
<em>
Figure 2: Comparison of SmolVLA across task variations. From left to right:
(1) pick-place cube counting, (2) pick-place cube counting, (3) pick-place
cube counting under perturbations, and (4) generalization on pick-and-place
of the lego block with real-world SO101.
</em>
</p>
## Evaluate the finetuned model and run it in real-time
As when recording an episode, it is recommended that you are logged in to the Hugging Face Hub. You can follow the corresponding steps: [Record a dataset](./getting_started_real_world_robot#record-a-dataset).