chore(rl): move rl related code to its directory at top level (#2002)

* chore(rl): move rl related code to its directory at top level

* chore(style): apply pre-commit to renamed headers

* test(rl): fix rl imports

* docs(rl): update rl headers doc
Author: Steven Palma
Date: 2025-09-23 16:32:34 +02:00
Committed by: GitHub
Parent: 9d0cf64da6
Commit: d6a32e9742
12 changed files with 44 additions and 41 deletions
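
Every hunk below applies the same rename: the `lerobot.scripts.rl` package becomes `lerobot.rl`. If you maintain launch scripts or notes that still reference the old path, here is a minimal sketch for finding and rewriting them; the search roots and the launcher filename are placeholders, and GNU `sed` syntax is assumed (BSD/macOS `sed` needs `-i ''`):

```bash
# Locate remaining references to the old module path (adjust roots to your checkout).
grep -rn "lerobot\.scripts\.rl" src/ docs/ tests/

# Rewrite old invocations in place (placeholder file; GNU sed assumed).
sed -i 's/lerobot\.scripts\.rl/lerobot.rl/g' scripts/launch_training.sh

# Sanity-check that the renamed modules resolve (assumes lerobot is installed).
python -c "import lerobot.rl.gym_manipulator, lerobot.rl.learner, lerobot.rl.actor"
```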


@@ -518,7 +518,7 @@ During the online training, press `space` to take over the policy and `space` ag
 Start the recording process, an example of the config file can be found [here](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/env_config_so100.json):

 ```bash
-python -m lerobot.scripts.rl.gym_manipulator --config_path src/lerobot/configs/env_config_so100.json
+python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config_so100.json
 ```

 During recording:
@@ -549,7 +549,7 @@ Note: If you already know the crop parameters, you can skip this step and just s
 Use the `crop_dataset_roi.py` script to interactively select regions of interest in your camera images:

 ```bash
-python -m lerobot.scripts.rl.crop_dataset_roi --repo-id username/pick_lift_cube
+python -m lerobot.rl.crop_dataset_roi --repo-id username/pick_lift_cube
 ```

 1. For each camera view, the script will display the first frame
@@ -618,7 +618,7 @@ Before training, you need to collect a dataset with labeled examples. The `recor
 To collect a dataset, you need to modify some parameters in the environment configuration based on HILSerlRobotEnvConfig.

 ```bash
-python -m lerobot.scripts.rl.gym_manipulator --config_path src/lerobot/configs/reward_classifier_train_config.json
+python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/reward_classifier_train_config.json
 ```

 **Key Parameters for Data Collection**
@@ -764,7 +764,7 @@ or set the argument in the json config file.
 Run `gym_manipulator.py` to test the model.

 ```bash
-python -m lerobot.scripts.rl.gym_manipulator --config_path path/to/env_config.json
+python -m lerobot.rl.gym_manipulator --config_path path/to/env_config.json
 ```

 The reward classifier will automatically provide rewards based on the visual input from the robot's cameras.
@@ -777,7 +777,7 @@ The reward classifier will automatically provide rewards based on the visual inp
 2. **Collect a dataset**:

 ```bash
-python -m lerobot.scripts.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
+python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
 ```

 3. **Train the classifier**:
@@ -788,7 +788,7 @@ The reward classifier will automatically provide rewards based on the visual inp
 4. **Test the classifier**:

 ```bash
-python -m lerobot.scripts.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
+python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
 ```

 ### Training with Actor-Learner
@@ -810,7 +810,7 @@ Create a training configuration file (example available [here](https://huggingfa
 First, start the learner server process:

 ```bash
-python -m lerobot.scripts.rl.learner --config_path src/lerobot/configs/train_config_hilserl_so100.json
+python -m lerobot.rl.learner --config_path src/lerobot/configs/train_config_hilserl_so100.json
 ```

 The learner:
@@ -825,7 +825,7 @@ The learner:
 In a separate terminal, start the actor process with the same configuration:

 ```bash
-python -m lerobot.scripts.rl.actor --config_path src/lerobot/configs/train_config_hilserl_so100.json
+python -m lerobot.rl.actor --config_path src/lerobot/configs/train_config_hilserl_so100.json
 ```

 The actor:
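
For completeness outside the diff: the learner and actor are meant to run concurrently with the same config. A minimal launcher sketch, assuming the config path shown above and that backgrounding the learner (rather than using a second terminal) is acceptable:

```bash
#!/usr/bin/env bash
# Sketch: run learner and actor together. The config path comes from the hunks
# above; backgrounding and the cleanup trap are assumptions, not part of this commit.
set -euo pipefail
CONFIG=src/lerobot/configs/train_config_hilserl_so100.json

python -m lerobot.rl.learner --config_path "$CONFIG" &
LEARNER_PID=$!
trap 'kill "$LEARNER_PID" 2>/dev/null || true' EXIT

python -m lerobot.rl.actor --config_path "$CONFIG"
```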