Commit Graph

1047 Commits

Author SHA1 Message Date
Robin Glauser
ed49c9935a Adding magnitude encoding bits for feetech motors according to https://github.com/Kotakku/FT_SCServo_Debug_Qt/blob/master/servo/sms_sts.h and https://gitee.com/ftservo/FTServo_Python/blob/main/scservo_sdk/sms_sts.py (#2223) 2025-10-17 15:15:03 +02:00
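A minimal sketch of the sign-magnitude encoding this commit deals with, assuming the sign is carried in a dedicated bit (bit 15 here) and the remaining bits hold the absolute value; the actual bit per register should be taken from the linked sms_sts sources.

```python
# Sketch of sign-magnitude encoding for Feetech SMS/STS-style registers.
# Assumption: sign in bit 15, magnitude in the lower bits; check sms_sts.h
# for the actual bit used by each register.
SIGN_BIT = 15

def encode_sign_magnitude(value: int, sign_bit: int = SIGN_BIT) -> int:
    """Encode a signed integer into sign-magnitude form."""
    magnitude = abs(value)
    if magnitude >= (1 << sign_bit):
        raise ValueError(f"|{value}| does not fit in {sign_bit} bits")
    return magnitude | (1 << sign_bit) if value < 0 else magnitude

def decode_sign_magnitude(raw: int, sign_bit: int = SIGN_BIT) -> int:
    """Decode a sign-magnitude register value back to a signed integer."""
    magnitude = raw & ((1 << sign_bit) - 1)
    return -magnitude if raw & (1 << sign_bit) else magnitude

assert decode_sign_magnitude(encode_sign_magnitude(-300)) == -300
```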
Infinity4B
52455d03a7 fix eval-related doc errors (#2183)
Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
2025-10-17 14:34:21 +02:00
Steven Palma
4afb253825 fix(dependencies): wandb > 0.22.0 uses a different version of protobuf (#2230) 2025-10-17 13:59:31 +02:00
Steven Palma
96c664e09f fix(scripts): warmup in find cameras script (#2229) 2025-10-17 13:59:10 +02:00
Steven Palma
8bd0aec618 chore(ci): relax stale bot for PRs (#2222) 2025-10-16 17:44:50 +02:00
Pepijn
e82e7a02e9 feat(train): add accelerate for multi gpu training (#2154)
* Enhance training and logging functionality with accelerator support

- Added support for multi-GPU training by introducing an `accelerator` parameter in training functions.
- Updated `update_policy` to handle gradient updates based on the presence of an accelerator.
- Modified logging to prevent duplicate messages in non-main processes.
- Enhanced `set_seed` and `get_safe_torch_device` functions to accommodate accelerator usage.
- Updated `MetricsTracker` to account for the number of processes when calculating metrics.
- Introduced a new feature in `pyproject.toml` for the `accelerate` library dependency.

* Initialize logging in training script for both main and non-main processes

- Added `init_logging` calls to ensure proper logging setup when using the accelerator and in standard training mode.
- This change enhances the clarity and consistency of logging during training sessions.

* add docs and only push model once

* Place logging under accelerate and update docs

* fix pre commit

* only log in main process

* main logging

* try with local rank

* add tests

* change runner

* fix test

* don't push to hub in multi-GPU tests

* pre download dataset in tests

* small fixes

* fix path optimizer state

* update docs, and small improvements in train

* simplify accelerate main process detection

* small improvements in train

* fix OOM bug

* change accelerate detection

* add some debugging

* always use accelerate

* cleanup update method

* cleanup

* fix bug

* scale lr decay if we reduce steps

* cleanup logging

* fix formatting

* incorporate PR feedback

* add min memory to cpu tests

* use accelerate to determine logging

* fix precommit and fix tests

* chore: minor details

---------

Co-authored-by: AdilZouitine <adilzouitinegm@gmail.com>
Co-authored-by: Steven Palma <steven.palma@huggingface.co>
2025-10-16 17:41:55 +02:00
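A minimal sketch of the generic Accelerate pattern this commit describes (wrap model/optimizer/dataloader, replace `loss.backward()`, gate logging to the main process). This is not the LeRobot train script; the model and dataloader are assumed to be provided by the caller.

```python
# Generic multi-GPU training loop with Hugging Face Accelerate.
from accelerate import Accelerator

def train(model, optimizer, dataloader, num_steps: int = 1000):
    accelerator = Accelerator()
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

    for step, batch in enumerate(dataloader):
        loss = model(batch)          # assumes the model returns a scalar loss
        accelerator.backward(loss)   # replaces loss.backward()
        optimizer.step()
        optimizer.zero_grad()

        # Log (and push) only from the main process to avoid duplicate messages.
        if accelerator.is_main_process and step % 100 == 0:
            print(f"step={step} loss={loss.item():.4f}")
        if step >= num_steps:
            break
```

Such a script is launched with `accelerate launch --num_processes <n_gpus> train.py` (generic Accelerate usage, not the LeRobot CLI).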
Ryan Pennings
845b359d39 Fix homunculus teleoperator input lag (#2196)
Removes input lag by reworking the serial reading loop:
- remove the serial flush, as it only clears the output buffer
- read all data in the input buffer on each loop and use the latest line as the state, clearing the input buffer

Previously only one line was read per loop, which, combined with the teleoperator script's busy_wait (which slows the _read_loops down), caused a backlog in the input buffer.

Co-authored-by: Martino Russi <77496684+nepyope@users.noreply.github.com>
2025-10-16 11:39:05 +02:00
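A sketch of the "drain the buffer, keep the latest line" fix described above, using pyserial directly; the port name and baudrate are placeholders.

```python
# Drain the serial input buffer on each call and return only the newest line.
import serial

ser = serial.Serial("/dev/ttyACM0", baudrate=115200, timeout=0)  # placeholder port
_pending = b""

def read_latest_state() -> str | None:
    """Read everything waiting on the port; return the newest complete line."""
    global _pending
    if ser.in_waiting:
        _pending += ser.read(ser.in_waiting)   # read all bytes currently buffered
    lines = _pending.split(b"\n")
    _pending = lines[-1]                       # keep any partial trailing line
    complete = [ln for ln in lines[:-1] if ln.strip()]
    return complete[-1].decode(errors="ignore") if complete else None
```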
Steven Palma
a6ff3cfebb chore(deps): libero dep pointing to main (#2201) 2025-10-14 18:19:49 +02:00
Jade Choghari
271d92dcaa feat(sim): add metaworld env (#2088)
* add metaworld

* smol update

Signed-off-by: Jade Choghari <chogharijade@gmail.com>

* update design

* Update src/lerobot/envs/metaworld.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Jade Choghari <chogharijade@gmail.com>

* update

* small changes

* iterate on review

* small fix

* small fix

* add docs

* update doc

* add better gif

* smol doc fix

* update gymnasium

* add note

* deprecate gym-xarm

* more changes

* update doc

* comply with mypy

* more fixes

* update readme

* precommit

* update pusht

* add pusht instead

* changes

* style

* add changes

* update

* revert

* update v2

* chore(envs): move metaworld config to its own file + remove comments + simplify _format_raw_obs (#2200)

* update final changes

---------

Signed-off-by: Jade Choghari <chogharijade@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
2025-10-14 17:21:18 +02:00
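The commit above wires Meta-World into LeRobot's env layer. As a rough usage sketch, the rollout loop below uses the plain gymnasium API with a stand-in env id, since the exact id/config registered by the new metaworld module isn't shown in the log.

```python
# Generic gymnasium rollout loop. "Pendulum-v1" is a stand-in: substitute the
# Meta-World env id registered by src/lerobot/envs/metaworld.py.
import gymnasium as gym

env = gym.make("Pendulum-v1")
obs, info = env.reset(seed=0)
for _ in range(200):
    action = env.action_space.sample()      # replace with a policy's action
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```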
Michel Aractingi
8e940bf361 Feat/expand add features (#2202)
* make add_feature take multiple features at a time and rename to add_features

* - New function: modify_features, which combines removing and adding features.
 - This function is useful when we want to add one feature and remove another: doing both in one pass avoids copying and recreating the dataset multiple times
2025-10-14 16:19:50 +02:00
Steven Palma
6e8be57eb2 chore(policies): deprecate pi0fast (#2203) 2025-10-14 16:00:42 +02:00
Francesco Capuano
723013c71b feat(scripts): Introduce build_inference_frame/make_robot_action util to easily allow API-based Inference (#2143)
* fix: expose a function explicitly building a frame for inference

* fix: first make dataset frame, then make ready for inference

* fix: reducing reliance on lerobot record for the policy's outputs too

* fix: encapsulating squeezing out + device handling from predict action

* fix: remove duplicated call to build_inference_frame and add a function to only perform data type handling (whole conversion is: keys matching + data type conversion)

* fix(policies): right utils signature + docstrings (#2198)

---------

Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
2025-10-14 15:47:32 +02:00
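An illustrative sketch of the kind of conversion the commit describes (key matching plus dtype, batch-dimension and device handling); the function below is hypothetical and is not the actual `build_inference_frame` signature.

```python
# Convert a dict of numpy observations into batched torch tensors on a device.
import numpy as np
import torch

def to_inference_frame(obs: dict[str, np.ndarray], device: str = "cpu") -> dict[str, torch.Tensor]:
    frame = {}
    for key, value in obs.items():
        tensor = torch.from_numpy(np.asarray(value)).float()
        if "image" in key:                           # assumption: images arrive as HWC uint8
            tensor = tensor / 255.0                  # scale to [0, 1]
            tensor = tensor.permute(2, 0, 1)         # HWC -> CHW
        frame[key] = tensor.unsqueeze(0).to(device)  # add batch dimension
    return frame
```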
Steven Palma
bf6ac5e110 fix(datasets): conversion script function naming (#2199)
Co-authored-by: gagalo123 <bamianweifen@gmail.com>
2025-10-14 14:36:32 +02:00
Steven Palma
3ce5bcf24d feat(deps): add setuptools dependency (#2187) 2025-10-14 14:00:52 +02:00
Francesco Capuano
6f5bb4d4a4 fix outdated example in docs (#2182)
* fix outdated example

Signed-off-by: Francesco Capuano <74058581+fracapuano@users.noreply.github.com>

* Update docs/source/il_robots.mdx

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Francesco Capuano <74058581+fracapuano@users.noreply.github.com>

---------

Signed-off-by: Francesco Capuano <74058581+fracapuano@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-13 16:43:23 +02:00
Francesco Capuano
f29311ccb0 fix: very minor fix but hey devil is in details (#2168)
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
2025-10-13 10:44:53 +02:00
Michel Aractingi
0c79cf8f4e Add missing finalize calls in example (#2175)
- add missing calls to dataset.finalize in the example recording scripts
- add section in the dataset docs on calling dataset.finalize
2025-10-11 21:15:43 +02:00
Michel Aractingi
f2ff370459 Incremental parquet writing (#1903)
* incremental parquet writing

* add .finalise() and a backup __del__ for stopping writers

* fix missing import

* pre-commit fixes; added back the use of embed images

* added lazy loading for hf_Dataset to avoid frequently reloading the dataset during recording

* fix bug in video timestamps

* Added proper closing of parquet file before reading

* Added rigorous testing to validate the consistency of the meta data after creation of a new dataset

* fix bug in episode index during clear_episode_buffer

* fix(empty concat): check for empty paths list before data files concatenation

* fix(v3.0 message): updating v3.0 backward compatibility message.

* added fixes for the resume logic

* answering co-pilot review

* reverting some changes and style nits

* removed unused functions

* fix chunk_id and file_id when resuming

* - fix parquet loading when resuming
- add test to verify the parquet file integrity when resuming, so that data files are not overwritten

* added general function get_file_size_in_mb and removed the one for video

* fix table size value when resuming

* Remove unnecessary reloading of the parquet file when resuming record.
Write to a new parquet file when resuming record

* added back reading parquet file for image datasets only

* - respond to Qlhoest comments
- Use pyarrows `from_pydict` function
- Add buffer for episode metadata to write to the parquet file in batches to improve efficiency
- Remove the use of `to_parquet_with_hf_images`

* fix(dataset_tools): update with the new logic using proper finalize
- fix bug in finding the latest path of the metadata, which was pointing to the data files
- add check for the metadata size in case the metadata buffer was not written yet

* nit in flush_metadata_buffer

* fix(lerobot_dataset) return the right dataset len when a subset of the dataset is requested

---------

Co-authored-by: Harsimrat Sandhawalia <hs.sandhawalia@gmail.com>
2025-10-11 11:01:30 +02:00
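A sketch of incremental Parquet writing in the spirit of this change: keep one `ParquetWriter` open, append row batches built with `Table.from_pydict`, and close it on finalize (with a backup `__del__`). The schema and class name below are illustrative, not LeRobot's actual writer.

```python
# Incremental Parquet writing with pyarrow.
import pyarrow as pa
import pyarrow.parquet as pq

class IncrementalParquetWriter:
    def __init__(self, path: str, schema: pa.Schema):
        self._schema = schema
        self._writer = pq.ParquetWriter(path, schema)

    def write_batch(self, columns: dict[str, list]) -> None:
        table = pa.Table.from_pydict(columns, schema=self._schema)
        self._writer.write_table(table)          # appended as a new row group

    def finalize(self) -> None:
        self._writer.close()                     # writes the parquet footer

    def __del__(self):                           # backup in case finalize() is never called
        try:
            self._writer.close()
        except Exception:
            pass

schema = pa.schema([("episode_index", pa.int64()), ("action", pa.list_(pa.float32()))])
writer = IncrementalParquetWriter("episode_000.parquet", schema)
writer.write_batch({"episode_index": [0, 0], "action": [[0.1, 0.2], [0.3, 0.4]]})
writer.finalize()
```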
Juan Pizarro
25f60c301b use TeleopEvents.RERECORD_EPISODE in gym_manipulator (#2165)
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
2025-10-11 00:15:42 +02:00
Jade Choghari
0699b46d87 refactor(envs): add custom-observation-size (#2167) 2025-10-10 20:41:37 +02:00
Michel Aractingi
b8f7e401d4 Dataset tools (#2100)
* feat(dataset-tools): add dataset utilities and example script

- Introduced dataset tools for LeRobotDataset, including functions for deleting episodes, splitting datasets, adding/removing features, and merging datasets.
- Added an example script demonstrating the usage of these utilities.
- Implemented comprehensive tests for all new functionalities to ensure reliability and correctness.

* style fixes

* move example to dataset dir

* missing license

* fixes mostly path

* clean comments

* move tests to functions instead of class based

* - fix video editing: decode, delete frames, and re-encode video
- copy unchanged video and parquet files to avoid recreating the entire dataset

* Fortify tooling tests

* Fix type issue resulting from saving numpy arrays with shape 3,1,1

* added lerobot_edit_dataset

* - revert changes in examples
- remove hardcoded split names

* update comment

* fix comment
add lerobot-edit-dataset shortcut

* Apply suggestion from @Copilot

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Michel Aractingi <michel.aractingi@huggingface.co>

* style nit after copilot review

* fix: bug in dataset root when editing the dataset in place (without setting new_repo_id)

* Fix bug in aggregate.py when accumulating video timestamps; add tests to fortify aggregate videos

* Added missing output repo id

* migrate delete episode to using pyav instead of decoding, writing frames to disk and encoding again.
Co-authored-by: Caroline Pascal <caroline8.pascal@gmail.com>

* added modified suffix in case repo_id is not set in delete_episode

* adding docs for dataset tools

* bump av version and add back time_base assignment

* linter

* modified push_to_hub logic in lerobot_edit_dataset

* fix(progress bar): fixing the progress bar issue in dataset tools

* chore(concatenate): removing no longer needed concatenate_datasets usage

* fix(file sizes forwarding): forwarding files and chunk sizes in metadata info when splitting and aggregating datasets

* style fix

* refactor(aggregate): Fix video indexing and timestamp bugs in dataset merging

There were three critical bugs in aggregate.py that prevented correct dataset merging:

1. Video file indices: Changed from += to = assignment to correctly reference
   merged video files

2. Video timestamps: Implemented per-source-file offset tracking to maintain
   continuous timestamps when merging split datasets (was causing non-monotonic
   timestamp warnings)

3. File rotation offsets: Store timestamp offsets after rotation decision to
   prevent out-of-bounds frame access (was causing "Invalid frame index" errors
   with small file size limits)

Changes:
- Updated update_meta_data() to apply per-source-file timestamp offsets
- Updated aggregate_videos() to track offsets correctly during file rotation
- Added get_video_duration_in_s import for duration calculation

* Improved docs for split dataset and added a check for the possible case that the split size results in zero episodes

* chore(docs): update merge documentation details

Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>

---------

Co-authored-by: CarolinePascal <caroline8.pascal@gmail.com>
Co-authored-by: Jack Vial <vialjack@gmail.com>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
2025-10-10 12:32:07 +02:00
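A sketch of the per-source-file timestamp offset tracking described in the aggregate fix above, in plain Python (not the actual aggregate.py code): each merged source's timestamps are shifted by the accumulated duration of the files already written, keeping the merged stream monotonic.

```python
# Offset timestamps per source file when concatenating recordings.
def offset_timestamps(sources: list[dict]) -> list[dict]:
    """Each source has 'timestamps' (seconds, starting at 0) and 'duration_s'."""
    merged, offset = [], 0.0
    for src in sources:
        merged.extend({"timestamp": t + offset} for t in src["timestamps"])
        offset += src["duration_s"]    # update the offset only after the file is placed
    return merged

sources = [
    {"timestamps": [0.0, 0.033, 0.066], "duration_s": 0.1},
    {"timestamps": [0.0, 0.033], "duration_s": 0.066},
]
# Timestamps from the second file continue after the first, staying monotonic.
print([round(f["timestamp"], 3) for f in offset_timestamps(sources)])
```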
Pepijn
656fc0f059 Remove validate_robot_cameras_for_policy (#2150)
* Remove validate_robot_cameras_for_policy, as with the rename processor the image keys can be renamed and mapped

* fix precommit
2025-10-10 11:34:21 +02:00
Steven Palma
829d2d1ad9 fix(docs): local docs links (#2149) 2025-10-09 15:20:07 +02:00
Pepijn
4ccf28437a Add act documentation (#2139)
* Add act documentation

* remove citation as we link the paper

* simplify docs

* fix pre commit
2025-10-08 20:07:14 +02:00
Steven Palma
9a49e57c72 refactor(datasets): add compress_level parameter to write_image() and set it to 1 (#2135)
* refactor(datasets): add compress_level parameter to write_image() and set it to 1

* docs(dataset): add docs to write_image()
2025-10-08 20:06:56 +02:00
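A sketch of the trade-off behind `compress_level=1`, using Pillow directly (the `write_image()` wrapper itself is LeRobot-specific): lower zlib levels write PNGs faster at the cost of slightly larger files.

```python
# PNG compression level trade-off with Pillow.
import numpy as np
from PIL import Image

img = Image.fromarray(np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8))
img.save("frame_fast.png", compress_level=1)   # zlib level 1: fast write, bigger file
img.save("frame_small.png", compress_level=9)  # zlib level 9: slow write, smaller file
```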
Steven Palma
6c28ef894a chore(docs): add missing license headers (#2140) 2025-10-08 14:27:52 +02:00
Steven Palma
bf3c8746b7 feat(devices): add lazy loading for 3rd party robots cameras and teleoperators (#2123)
* feat(devices): add lazy loading for 3rd party robots cameras and teleoperators

Co-authored-by: Darko Lukić <lukicdarkoo@gmail.com>

* feat(devices): load device class based on assumptions in naming

* docs(devices): instructions for using 3rd party devices

* docs: address review feedback

* chore(docs): add example for 3rd party devices

---------

Co-authored-by: Darko Lukić <lukicdarkoo@gmail.com>
2025-10-07 17:46:22 +02:00
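A sketch of "load device class based on assumptions in naming": a hypothetical `make_device()` that derives the module and class names from the device type string and imports the module only on demand. The naming scheme below is an assumption; the real discovery rules are in the PR and docs.

```python
# Lazy, convention-based loading of a 3rd party device class.
import importlib

def make_device(device_type: str, **kwargs):
    module_name = f"lerobot_{device_type}"                                 # e.g. lerobot_acme_arm (hypothetical)
    class_name = "".join(p.capitalize() for p in device_type.split("_"))   # e.g. AcmeArm
    module = importlib.import_module(module_name)                          # imported only when requested
    return getattr(module, class_name)(**kwargs)
```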
Pepijn
9f32e00f90 fix(async): Add pre and post processing to async inference and update docs (#2132)
* Add pre and post processing to async inference and update docs

* precommit fix typo

* fix tests

* refactor(async): no None branching for processors in _predict_action_chunk

---------

Co-authored-by: Steven Palma <steven.palma@huggingface.co>
2025-10-07 15:10:31 +02:00
Michel Aractingi
fcaa0ea5f9 remove extra time base set. (#2133)
Co-authored-by: CarolinePascal <caroline8.pascal@gmail.com>
2025-10-07 14:09:36 +02:00
Iulia Feroli
5ac9356135 Update README.md to fix broken link to example notebook for visuals (#2117)
The folder structure of examples seems to have changed, with an extra `dataset` folder, and the notebook has also changed names.

Signed-off-by: Iulia Feroli <iuliaferoli@gmail.com>
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
2025-10-07 09:43:32 +02:00
Steven Palma
b74e2a6113 feat(deps): ceil dependency versions (#2091) 2025-10-05 17:53:43 +02:00
Pepijn
a4bed41132 Improve docs pi (#2110)
* Improve docs and add numpy to pi install requirements

* fix formatting

* update command

* remove numpy dep
2025-10-03 12:06:18 +02:00
Michel Aractingi
5c8dd883be fix bug in augment_dataset_quantile_stats.py that was not detecting… (#2106)
* fix bug in `augment_dataset_quantile_stats.py` that was not detecting the image features because we were looping over hf_dataset. Now we loop over the dataset itself

* Update src/lerobot/datasets/v30/augment_dataset_quantile_stats.py

Signed-off-by: Michel Aractingi <michel.aractingi@huggingface.co>

---------

Signed-off-by: Michel Aractingi <michel.aractingi@huggingface.co>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-02 18:28:44 +02:00
Michel Aractingi
38f6fc816b (chore) improve v3 message, allow converting local datasets to V3 (#1948)
Co-authored-by: CarolinePascal <caroline8.pascal@gmail.com>
2025-10-02 15:49:18 +02:00
Pepijn
abde7be3b3 Add OpenPi, Pi0 and Pi0.5 (#1910)
* initial commit

* change device in test

* do detailed import

* adhere to python 3.11 syntax

* fix autodocstring

* additionally

* do same in other files

* add model. prefix to all keys in state dict

* use dummy stats

* add pi05

* also shorten action_steps

* fix test

* all test pass! and fix tokenizer max length between 05 and 0

* remove test

* fix transformer dependency

* fix test

* split pi0 and pi05 policies into separate files

* fix test

* fix push to hub test

* add some comments, license and readme

* remove warning in config

* add pi05 to factory

* remove check

* rename action_horizon to chunk_size

* clean up padding of state and action (more in line with lerobot pi0)

* add openpi image transforms for training and add more flexibility to _preprocess_images similar to lerobot pi0

* fix key match from pytorch state dict (similar keys to openpi implementation now)

* also for pi05

* update to python 3.11

* revert to openpi transformer replace python 3.11

* fix(modeling pi0): nit  warning message

* use safeauto_docstring

* fix: remove unused param

* fix from pretrained

* add preprocess tests

* also compile forward method

* Do not add model prefix to normalization

* use same name for action and state dim as lerobot pi0 and remove fixed image keys

* load from pretrained_path

* temp: hardcode base model

* fix override self.pretrained_path = None overwrite

* rename to loss

* remove additional image augmentations, lerobot dataset already does this

* Add docs

* put tests in test folder

* Add test to instantiate all base models

* go back to python 3.10

* update docs

* adapt docs pi05

* change docs: finetune base model options

* minor docs fixes and dependencies

* remove todo

* cast float64 to float32 for mps

* skip if no transformers

* fix tests

* add new models to modelcard

* add back init

* fix circular input

* feat: only run pi test on GPU

* remove require_nightly_gpu

* replace decorator test_pi0_openpi

* rename action_dim, state_dim to max_action_dim, max_state_dim

* fix doc and constants

* cleanup tests

* fix from pretrained

* fix tests

* add comment pi0 pi05 tests, add image features to pi0 pi05 hub tests

* fix, state is included in language not in flow head

* Move test to specific folder

* and paligemma task with newline

* remove add_special_tokens, not needed

* feedback pr

* Remove previous pi0 and rename pi0_openpi and pi05_openpi

* Add Quantile stats to LeRobotDataset (#1985)

* - Add RunningQuantileStats class for efficient histogram-based quantile computation
- Integrate quantile parameters (compute_quantiles, quantiles) into LeRobotDataset
- Support quantile computation during episode collection and aggregation
- Add comprehensive function-based test suite (24 tests) for quantile functionality
- Maintain full backward compatibility with existing stats computation
- Enable configurable quantiles (default: [0.01, 0.99]) for robust normalization

* style fixes, make quantiles computation by default to new datasets

* fix tests

* - Added DEFAULT_QUANTILES=[0.01, 0.10, 0.50, 0.90, 0.99] to be computed for each feature instead of being chosen by the user
- Fortified tests.

* - add helper functions to reshape stats
- add missing test for quantiles

* - Add QUANTILE normalization mode to normalize the data with the 1st and 99th percentiles.
- Add QUANTILE10 normalization mode to normalize the data with the 10th and 90th percentiles.

* style fixes

* Added missing lisence

* Simplify compute_stats

* - added script `augment_dataset_quantile_stats.py` so that we can add quantile stats to existing v3 datasets that don't have quantiles
- modified quantile computation: instead of using the bin edge for the value, interpolate the values within the bin

* rename pi0/pi05 files

* Remove open pi patch and use custom transformer branch for now

* renaming

* fix

* Revert "fix"

This reverts commit 1ea65730ac2cbca6e5869df734fbd4392561b3c6.

* fix naming

* feat(pi0/pi0.5): add pipeline (#2009)

* feat(processor): convert openpi model with processor

* TODO: Make tests work

* fix(modeling_pi0openpi): update attention mask value and time scaling; improve task handling in tests

- Changed the attention mask value from `self.config.attention_mask_value` to a fixed value of `-2.3819763e38`.
- Updated time scaling in the `sample_noise` method to use a constant factor of `0.999` and an offset of `0.001`.
- Enhanced task handling in tests to ensure proper formatting and batch size consistency.
- Cleaned up commented-out test code for clarity.

* refactor(pi0): rename PI0OpenPIConfig and PI0OpenPIPolicy to PI0Config and PI0Policy

- Updated imports and references throughout the codebase to reflect the new naming convention.
- Introduced a new processor file for PI0 to handle pre-processing and post-processing steps.
- Adjusted tests to utilize the renamed classes, ensuring consistency and functionality.
- Enhanced clarity and maintainability by removing outdated naming conventions.

* refactor(pi05): rename PI0OpenPIPolicy to PI0Policy and update configuration

- Renamed `PI0OpenPIPolicy` to `PI0Policy` for consistency with naming conventions.
- Updated the `PI05OpenPIConfig` to include a new `tokenizer_max_length` attribute and changed the normalization mode for state from `MEAN_STD` to `QUANTILES`.
- Simplified model initialization in `PI05OpenPIPolicy` by removing unused `dataset_stats` parameter.
- Added a new processor class for `Pi05PrepareStateTokenizerProcessorStep` with `@dataclass` for improved readability.
- Introduced a test script to compare the integration of the PI0OpenPI policy with the original implementation, ensuring local testing compatibility.

* refactor(pi05): update imports and rename configuration classes

- Changed imports to reflect the new naming convention for PI05 configuration and policy classes.
- Renamed `PI05OpenPIConfig` to `PI05Config` and `PI05OpenPIPolicy` to `PI05Policy` for consistency.
- Introduced a new processor file for PI05, implementing pre-processing and post-processing steps.
- Updated tests to utilize the renamed classes, ensuring functionality and consistency across the codebase.

* update(pi05): increase tokenizer_max_length for improved processing

- Changed the `tokenizer_max_length` from 48 to 200 to enhance the model's capability in handling longer sequences.
- This adjustment aims to improve the overall performance and flexibility of the PI05 configuration.

* add default for state (max_state_dim)

* correct naming

* fix import

* cleanup code

* remove unused test

* use quantiles for action

* move to device

* remove discrete state assert

* fix pi05 test

* move pi05 to device

* use base models in comparison tests

* small renames for tests

* change number of tokens pi05 test

* fix openpi tokenization in test

* fix hub test

* fix test

* assert lerobot vs openpi tests

---------

Co-authored-by: Pepijn <pepijn@huggingface.co>

* add headers

* add back previously removed imports

* update if statement load processor with dataset stats

* remove to avoid circular import

* inject dataset stats for pretrained models

* check normalization before applying

* add link to quantile augment script

* fix(policies): transformers import for ci in PI0 & PI05 (#2039)

* fix(policies): transformers import for ci in PI0

* fix(policies): transformers import for ci in PI05

* test(processor): fix expected raise when normalization types are missing (#2040)

* switch normalization order pipeline for pi05

* Fix/quantiles script (#2064)

* refactor the augment-stats-with-quantiles script:
- add parallelization for faster processing
- shift the quantile normalization to [-1, 1]

* fix replay buffer tests

* fix comment

* overwrite the pipeline normalization features with the policy features

* remove double normalization overwrite

* cleanup from pretrained

* remove typo

* also set norm_map

* fix(augment_quantiles) images incorrectly divided by 255

* clamp quantiles

* link to lerobot base models

* rename tests

* incorporate PR feedback

* update docstring for RunningQuantileStats

* update doc links

* Revert "clamp quantiles"

This reverts commit 172207471c8f2cb62958e9a9e6a0535ba3ff67d4.

* fix self.paligemma

* fix tests related to quantiles that were scaled to [0,1], the new range is [-1, 1]

* fix libero doc and use different transformer branch

* use fix branch instead of feat

* update results libero

* add new line

* fix formatting

* precommit

* update results libero

* update libero doc

* update title

* final changes

* add quantiles to test

* run pre commit

---------

Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
Co-authored-by: Steven Palma <steven.palma@huggingface.co>
2025-10-02 13:14:45 +02:00
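A sketch of histogram-based running quantile estimation in the spirit of the RunningQuantileStats bullets in the commit above: accumulate a fixed-bin histogram, read quantiles off the cumulative distribution, and interpolate within the bin instead of returning the edge. The bin range and count below are assumptions, not LeRobot's defaults.

```python
# Histogram-based running quantiles.
import numpy as np

class RunningQuantiles:
    def __init__(self, low: float = -10.0, high: float = 10.0, num_bins: int = 1000):
        self.edges = np.linspace(low, high, num_bins + 1)
        self.counts = np.zeros(num_bins, dtype=np.int64)

    def update(self, values: np.ndarray) -> None:
        hist, _ = np.histogram(values, bins=self.edges)
        self.counts += hist

    def quantile(self, q: float) -> float:
        cdf = np.cumsum(self.counts) / self.counts.sum()
        idx = int(np.searchsorted(cdf, q))
        prev = cdf[idx - 1] if idx > 0 else 0.0
        frac = (q - prev) / max(cdf[idx] - prev, 1e-12)   # interpolate inside the bin
        return float(self.edges[idx] + frac * (self.edges[idx + 1] - self.edges[idx]))

rq = RunningQuantiles()
rq.update(np.random.randn(10_000))
print(rq.quantile(0.01), rq.quantile(0.99))   # roughly the 1st / 99th percentiles
```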
Akhil Ivaturi
b6c528a438 Making Envs module pass MyPy checks (#2048)
* Fix configs.py None MyPy error

* Use img_tensor instead of img in utils.py

* Add type assertion in factory.py

* Resolve merge conflict

* Uncomment envs module for mypy checks in pyproject.toml

---------

Signed-off-by: Adil Zouitine <adilzouitinegm@gmail.com>
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
2025-10-01 16:11:48 +02:00
Adil Zouitine
6d331310ab feat(mypy): configure mypy settings and add module overrides for gradual typing (#2101) 2025-10-01 15:14:41 +02:00
Adil Zouitine
5dfdec9288 feat(mypy): enable type checking for envs module and configure mypy settings in pyproject.toml (#2099)
* feat(mypy): enable type checking for envs module and configure mypy settings in pyproject.toml

* Add mypy configuration to check only the envs module.
* Exclude examples, benchmarks, and tests from type checking.
* Set ignore_missing_imports to true and follow_imports to skip.

* chore: comment out mypy configuration in pyproject.toml and pre-commit-config.yaml

* Comment out mypy settings to disable type checking for the envs module.
* Update pre-commit configuration to reflect changes in mypy settings.
2025-10-01 13:19:51 +02:00
Caroline Pascal
50977a2c28 fix(video_path): setting video_path to None during conversion for images datasets (#2095) 2025-10-01 11:03:52 +02:00
Adil Zouitine
a0d7627d81 feat(train): include input and output features in processor overrides for normalization (#2088) (#2090)
Signed-off-by: AdilZouitine <adilzouitinegm@gmail.com>
2025-09-29 17:37:26 +02:00
Adil Zouitine
1ad2da403d feat(policies): add noise parameter to action prediction methods (#2063)
* feat(policies): add noise parameter to action prediction methods

- Introduced `ActionSelectKwargs` TypedDict for better type hinting.
- Updated `predict_action_chunk` and `select_action` methods in `PreTrainedPolicy` and its subclasses to accept a `noise` parameter.
- Modified `generate_actions` and `conditional_sample` methods in `DiffusionModel` to utilize the new noise parameter for action generation.

* refactor(policies): make ActionSelectKwargs TypedDict fields optional

- Updated `ActionSelectKwargs` to inherit with `total=False`, allowing for optional fields.
2025-09-29 17:02:19 +02:00
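A sketch of the `total=False` TypedDict pattern behind `ActionSelectKwargs`: every field becomes optional, so callers pass `noise` only when they want it. The field set and the placeholder policy below are illustrative, not LeRobot's actual definitions.

```python
# Optional keyword arguments for action selection via a total=False TypedDict.
from typing import TypedDict
try:
    from typing import Unpack            # Python >= 3.11
except ImportError:
    from typing_extensions import Unpack

import torch

class ActionSelectKwargs(TypedDict, total=False):
    noise: torch.Tensor                   # optional: callers may omit it entirely

def select_action(batch: dict[str, torch.Tensor], **kwargs: Unpack[ActionSelectKwargs]) -> torch.Tensor:
    noise = kwargs.get("noise")           # None unless the caller provides it
    action = batch["observation.state"][:, :6]        # placeholder for a real policy
    return action + noise if noise is not None else action

batch = {"observation.state": torch.zeros(1, 12)}
select_action(batch)                                   # deterministic
select_action(batch, noise=0.01 * torch.randn(1, 6))   # with injected noise
```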
Adil Zouitine
2d3a605b3c Revert feat(normalization): add validation for empty features in NormalizerProcessorStep and UnnormalizerProcessorStep (#2087)
Revert "feat(normalization): add validation for empty features in NormalizerProcessorStep and UnnormalizerProcessorStep (#2087)"

This reverts commit f173265354.
2025-09-29 16:55:52 +02:00
Adil Zouitine
f173265354 feat(normalization): add validation for empty features in NormalizerProcessorStep and UnnormalizerProcessorStep (#2087)
* feat(normalization): add validation for empty features in NormalizerProcessorStep and UnnormalizerProcessorStep

* refactor(normalization): streamline feature reconstruction logic in _NormalizationMixin

* refactor(tests): remove unused preprocessor initialization in test_act_backbone_lr

---------

Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
2025-09-29 16:02:15 +02:00
Steven Palma
bbcf66bd82 chore: enable simplify in ruff lint (#2085) 2025-09-29 15:06:56 +02:00
Steven Palma
c378a325f0 chore: enable pyupgrade ruff lint (#2084) 2025-09-29 13:28:53 +02:00
Qizhi Chen
90684a9690 Improve V3 aggregate implementation (#2077)
* fix return type

* improve apply with vectorized op

* Update src/lerobot/datasets/aggregate.py

Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
2025-09-29 11:18:54 +02:00
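A sketch of the general vectorization idea in the commit above: replace a per-row Python `.apply()` with a single array-wide operation. The columns are illustrative, not the actual aggregate.py code.

```python
# Row-wise apply vs. a vectorized column operation.
import numpy as np
import pandas as pd

df = pd.DataFrame({"episode_index": np.arange(5), "frame_index": np.arange(5) * 10})

slow = df.apply(lambda row: row["frame_index"] + 100, axis=1)  # one Python call per row
fast = df["frame_index"] + 100                                 # one numpy addition

assert (slow.values == fast.values).all()
```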
Steven Palma
f59eb54f5c chore: remove unused code (#2062) 2025-09-29 10:49:36 +02:00
Qizhi Chen
62e9849ffd use abs path when concatenating (#2076) 2025-09-28 14:18:22 +02:00
Francesco Capuano
e3b572992e Save Cropped Dataset to Hub (#2071)
* fix: cast fps argument from dataset to int

* fix: typo

* fix: specify repo-id
2025-09-27 16:07:53 +02:00
Jade Choghari
5b647e3bcb docs(fix): libero example command (#2060)
Signed-off-by: Jade Choghari <chogharijade@gmail.com>
2025-09-26 15:09:42 +02:00