* Enhance OpenCVCamera with FOURCC support and validation
- Added FOURCC configuration option to OpenCVCamera and OpenCVCameraConfig for specifying video format.
- Implemented _validate_fourcc method to validate and set the camera's FOURCC code.
- Updated _configure_capture_settings to apply FOURCC settings before FPS and resolution.
- Enhanced camera detection to include default FOURCC code in camera info.
- Updated documentation to reflect the new FOURCC parameter and its implications for performance.
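A minimal sketch of the ordering this commit describes, applying FOURCC before FPS and resolution with plain OpenCV (the helper name and defaults are illustrative, not lerobot's API):

```python
import cv2

def open_capture(index: int, fourcc: str = "MJPG", fps: int = 30, width: int = 640, height: int = 480) -> cv2.VideoCapture:
    cap = cv2.VideoCapture(index)
    # Set FOURCC first: switching the pixel format afterwards can reset the
    # FPS and resolution already negotiated with the driver.
    cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*fourcc))
    cap.set(cv2.CAP_PROP_FPS, fps)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    return cap
```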
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Add tests for FOURCC configuration in OpenCVCamera
- Implemented tests to validate FOURCC configuration and its application in OpenCVCamera.
- Added checks for valid FOURCC codes and ensured that invalid codes raise appropriate errors.
- Included a test for camera connection functionality using specified FOURCC settings.
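A hypothetical shape for the invalid-code check (the constructor arguments and error type here are assumptions, not the suite's actual code):

```python
import pytest

from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig

def test_invalid_fourcc_raises():
    # A FOURCC code is exactly four characters; anything longer should be rejected.
    with pytest.raises(ValueError):
        OpenCVCameraConfig(index_or_path=0, fourcc="TOOLONG")  # hypothetical args
```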
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix circular import in __init__.py - change to relative import
* Update src/lerobot/cameras/opencv/configuration_opencv.py
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
Signed-off-by: hls <56255627+forgetwhatuwant@users.noreply.github.com>
* Update src/lerobot/cameras/opencv/configuration_opencv.py
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
Signed-off-by: hls <56255627+forgetwhatuwant@users.noreply.github.com>
* fix(camera_opencv): ensure MSMF hardware transform compatibility on Windows before importing OpenCV
* Revert the import from a relative import (`.`) back to the absolute import (`lerobot.`), as it was previously
* opencv/config: satisfy Ruff SIM102 by merging nested if for fourcc validation
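For reference, Ruff SIM102 flags a nested `if` that can be collapsed into one condition; a self-contained illustration of the merge (the real validation in configuration_opencv.py may differ in its details):

```python
def _validate_fourcc(fourcc: str | None) -> None:
    # Before, SIM102 flagged the nested form:
    #     if fourcc is not None:
    #         if len(fourcc) != 4:
    #             raise ValueError(...)
    # After merging, a single condition with identical behavior:
    if fourcc is not None and len(fourcc) != 4:
        raise ValueError(f"fourcc must be a 4-character code, got {fourcc!r}")
```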
* style(opencv/config): apply ruff-format changes
---------
Signed-off-by: hls <56255627+forgetwhatuwant@users.noreply.github.com>
Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>
Co-authored-by: forgetwhatuwant <forgetwhatuwant@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
* fix: update policy handling and type annotations
Added type hints and addressed the mypy errors.
* fix: rename should_push_to_hub to push_to_hub
I found that there are other dependencies on push_to_hub, so I changed the property name back to the original one.
* fix: typo
* fix: changed the position of try-except block
As Copilot noted, raising before `hf_hub_download` would stop the program even when it is able to download.
* fix: update pre-commit configuration and mypy settings
add `--follow-imports=silent` to the args to skip errors that have no relationship with src/lerobot/configs
* fix: remove the specific path in .pre-commit-config.yaml
* feat: enhance type hints to satisfy mypy strict mode
* fix: remove duplicate FileNotFoundError check in PreTrainedConfig
* fix: make "pre-commit run --all-files" pass
* fix: replace logging with logger for better logging practices
* fix: reverted extra lint and format changes
* fix: reverted extra changes outside the "configs" module
* Update src/lerobot/configs/policies.py
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
Signed-off-by: tetsugo02 <131431116+tetsugo02@users.noreply.github.com>
* fix: add logging for scratch job
---------
Signed-off-by: Adil Zouitine <adilzouitinegm@gmail.com>
Signed-off-by: tetsugo02 <131431116+tetsugo02@users.noreply.github.com>
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
* feat(mypy-compliant): Ensure the model module passes MyPy type checks
* fix
* uncomment pyproject.toml for model module
* fix
* fix
---------
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
* refactor(env): introduce explicit gym ID handling in EnvConfig/factory
This commit introduces properties for the gym package/ID associated
with an environment config. They default to the current values
(`gym_{package_name}/{task_id}`) to avoid breaking changes, but allow
for easier use of external gym environments.
Subclasses of `EnvConfig` can override the default properties to allow
the factory to import (i.e. register) the gym env from a specific module,
and also instantiate the env from any ID string.
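A minimal sketch of the override pattern, with a stand-in base class (the property name and package strings are assumptions, not lerobot's exact API):

```python
from dataclasses import dataclass

@dataclass
class EnvConfig:  # stand-in for lerobot's EnvConfig, for illustration only
    task: str = "PushT"

    @property
    def gym_id(self) -> str:
        # Default preserves the historical "gym_{package_name}/{task_id}" form.
        return f"gym_pusht/{self.task}"

@dataclass
class ExternalEnvConfig(EnvConfig):
    task: str = "CustomTask-v1"

    @property
    def gym_id(self) -> str:
        # Subclasses can point the factory at any registered gym ID string.
        return f"my_external_envs/{self.task}"
```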
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* more changes
* quality
* fix test
---------
Co-authored-by: Ben Sprenger <ben.sprenger@rogers.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com>
* Prevents resource leak in video_utils when getting width and height
Added the with statement when opening the image to ensure that the file handle is properly closed after its contents are read.
Otherwise, shutil.rmtree(img_dir) will fail when called after the encode_video_frames function completes.
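The fix is the standard context-manager pattern; a sketch, assuming PIL is what probes the dimensions:

```python
from PIL import Image

def get_image_size(path: str) -> tuple[int, int]:
    # The with statement guarantees the file handle is closed, so a later
    # shutil.rmtree(img_dir) cannot fail on a still-open file.
    with Image.open(path) as img:
        return img.size  # (width, height)
```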
Signed-off-by: Lycoris <32864669+lycoris1129@users.noreply.github.com>
---------
Signed-off-by: Lycoris <32864669+lycoris1129@users.noreply.github.com>
* Enhance training and logging functionality with accelerator support
- Added support for multi-GPU training by introducing an `accelerator` parameter in training functions.
- Updated `update_policy` to handle gradient updates based on the presence of an accelerator.
- Modified logging to prevent duplicate messages in non-main processes.
- Enhanced `set_seed` and `get_safe_torch_device` functions to accommodate accelerator usage.
- Updated `MetricsTracker` to account for the number of processes when calculating metrics.
- Introduced a new feature in `pyproject.toml` for the `accelerate` library dependency.
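A minimal sketch of the accelerator-aware gradient update described above (the real `update_policy` carries more state than this):

```python
from accelerate import Accelerator

def update_policy(policy, batch, optimizer, accelerator: Accelerator | None = None):
    loss = policy.forward(batch)["loss"]  # assumed output format, for illustration
    if accelerator is not None:
        accelerator.backward(loss)  # handles scaling and cross-process sync
    else:
        loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.detach()
```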
* Initialize logging in training script for both main and non-main processes
- Added `init_logging` calls to ensure proper logging setup when using the accelerator and in standard training mode.
- This change enhances the clarity and consistency of logging during training sessions.
* add docs and only push model once
* Place logging under accelerate and update docs
* fix pre commit
* only log in main process
* main logging
* try with local rank
* add tests
* change runner
* fix test
* don't push to hub in multi-GPU tests
* pre-download dataset in tests
* small fixes
* fix path optimizer state
* update docs, and small improvements in train
* simplify accelerate main process detection
* small improvements in train
* fix OOM bug
* change accelerate detection
* add some debugging
* always use accelerate
* cleanup update method
* cleanup
* fix bug
* scale lr decay if we reduce steps
* cleanup logging
* fix formatting
* incorporate PR feedback
* add min memory to cpu tests
* use accelerate to determine logging
* fix precommit and fix tests
* chore: minor details
---------
Co-authored-by: AdilZouitine <adilzouitinegm@gmail.com>
Co-authored-by: Steven Palma <steven.palma@huggingface.co>
* Remove input lag by changing the serial reading loop
- remove the serial flush, as this only clears the output buffer
- read all data in the input buffer per loop and use the latest line as the state, clearing the input buffer
Previously only one line was read per loop, which, combined with the teleoperator script loop's busy_wait function (which slows the _read_loops down), caused a backlog in the input buffer.
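A sketch of the drain-and-take-latest pattern, assuming pyserial:

```python
import serial

def read_latest_state(port: serial.Serial) -> bytes | None:
    # Drain everything waiting in the input buffer and keep only the newest
    # complete line, so a slow caller (e.g. one throttled by busy_wait)
    # cannot build up a backlog.
    latest = None
    while port.in_waiting:
        line = port.readline()
        if line:
            latest = line
    return latest
```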
Co-authored-by: Martino Russi <77496684+nepyope@users.noreply.github.com>
* make add_feature take multiple features at a time and rename to add_features
* - New function: modify_features, a combination of remove_features and add_features.
- This function is important when we want to add one feature and remove another: doing both in one pass avoids copying and recreating the dataset multiple times (see the sketch below).
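A toy illustration of the one-pass idea, not lerobot's implementation (the row format and callables here are made up):

```python
def modify_features(rows: list[dict], add: dict, remove: list[str]) -> list[dict]:
    # One traversal: drop removed keys and attach new ones per row, instead of
    # one full copy for the removal and a second full copy for the addition.
    return [
        {
            **{k: v for k, v in row.items() if k not in remove},
            **{name: fn(row) for name, fn in add.items()},
        }
        for row in rows
    ]

rows = [{"obs": 1, "old": 0}, {"obs": 2, "old": 0}]
out = modify_features(rows, add={"double_obs": lambda r: 2 * r["obs"]}, remove=["old"])
# [{'obs': 1, 'double_obs': 2}, {'obs': 2, 'double_obs': 4}]
```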
* fix: expose a function explicitly building a frame for inference
* fix: first make dataset frame, then make ready for inference
* fix: reducing reliance on lerobot record for policy's outputs too
* fix: encapsulating squeezing out + device handling from predict action
* fix: remove duplicated call to build_inference_frame and add a function to only perform data type handling (whole conversion is: keys matching + data type conversion)
* fix(policies): right utils signature + docstrings (#2198)
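A hedged sketch of the second step (dtype/device handling after the dataset frame is built); the function name and frame layout are assumptions:

```python
import numpy as np
import torch

def make_ready_for_inference(frame: dict, device: torch.device) -> dict:
    # Key matching is assumed already done; this step only converts types,
    # adds the batch dimension, and moves tensors to the target device.
    batch = {}
    for key, value in frame.items():
        tensor = torch.as_tensor(value) if isinstance(value, np.ndarray) else value
        if isinstance(tensor, torch.Tensor):
            tensor = tensor.unsqueeze(0).to(device)
        batch[key] = tensor
    return batch
```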
---------
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
* incremental parquet writing
* add .finalise() and a backup __del__ for stopping writers
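A minimal sketch of the incremental-writer pattern with pyarrow (lerobot's wrapper adds episode bookkeeping on top; the `finalise` spelling follows the commit):

```python
import pyarrow as pa
import pyarrow.parquet as pq

class IncrementalParquetWriter:
    def __init__(self, path: str, schema: pa.Schema):
        self._schema = schema
        self._writer = pq.ParquetWriter(path, schema)

    def write_batch(self, columns: dict) -> None:
        # Append one row group instead of rewriting the whole file.
        self._writer.write_table(pa.Table.from_pydict(columns, schema=self._schema))

    def finalise(self) -> None:
        # The file is only readable once the footer has been written on close.
        if self._writer is not None:
            self._writer.close()
            self._writer = None

    def __del__(self):
        self.finalise()  # backup in case finalise() was never called
```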
* fix missing import
* pre-commit fixes; added back the use of embed images
* added lazy loading for the hf_dataset to avoid frequently reloading the dataset during recording
* fix bug in video timestamps
* Added proper closing of parquet file before reading
* Added rigorous testing to validate the consistency of the metadata after creation of a new dataset
* fix bug in episode index during clear_episode_buffer
* fix(empty concat): check for empty paths list before data files concatenation
* fix(v3.0 message): updating v3.0 backward compatibility message.
* added fixes for the resume logic
* addressing Copilot review comments
* reverting some changes and style nits
* removed unused functions
* fix chunk_id and file_id when resuming
* - fix parquet loading when resuming
- add test to verify parquet file integrity when resuming, so that data files are not overwritten
* added general function get_file_size_in_mb and removed the one for video
* fix table size value when resuming
* Remove unnecessary reloading of the parquet file when resuming record.
Write to a new parquet file when resuming record
* added back reading parquet file for image datasets only
* - respond to Qlhoest comments
- Use pyarrow's `from_pydict` function
- Add a buffer for episode metadata, writing to the parquet file in batches to improve efficiency (see the sketch below)
- Remove the use of `to_parquet_with_hf_images`
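A sketch of that metadata buffering, flushed via `pa.Table.from_pydict` (the buffer shape is illustrative, not the actual implementation):

```python
import pyarrow as pa
import pyarrow.parquet as pq

metadata_buffer: list[dict] = []

def flush_metadata_buffer(writer: pq.ParquetWriter) -> None:
    # Turn the buffered rows into columns and write them as one batch.
    if not metadata_buffer:
        return
    columns = {key: [row[key] for row in metadata_buffer] for key in metadata_buffer[0]}
    writer.write_table(pa.Table.from_pydict(columns))
    metadata_buffer.clear()
```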
* fix(dataset_tools): adapt to the new logic using proper finalize
Fixed a bug in finding the latest path of the metadata, which was pointing to the data files.
Added a check for the metadata size in case the metadata buffer was not written yet.
* nit in flush_metadata_buffer
* fix(lerobot_dataset): return the right dataset length when a subset of the dataset is requested
---------
Co-authored-by: Harsimrat Sandhawalia <hs.sandhawalia@gmail.com>
* feat(dataset-tools): add dataset utilities and example script
- Introduced dataset tools for LeRobotDataset, including functions for deleting episodes, splitting datasets, adding/removing features, and merging datasets.
- Added an example script demonstrating the usage of these utilities.
- Implemented comprehensive tests for all new functionalities to ensure reliability and correctness.
* style fixes
* move example to dataset dir
* missing license
* fixes, mostly paths
* clean comments
* move tests to functions instead of class based
* - fix video editing: decode, delete frames, and re-encode video
- copy unchanged video and parquet files to avoid recreating the entire dataset
* Fortify tooling tests
* Fix type issue resulting from saving numpy arrays with shape (3, 1, 1)
* added lerobot_edit_dataset
* - revert changes in examples
- remove hardcoded split names
* update comment
* fix comment
add lerobot-edit-dataset shortcut
* Apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Michel Aractingi <michel.aractingi@huggingface.co>
* style nit after copilot review
* fix: bug in dataset root when editing the dataset in place (without setting new_repo_id)
* Fix bug in aggregate.py when accumulating video timestamps; add tests to fortify aggregate videos
* Added missing output repo id
* migrate delete_episode to using PyAV instead of decoding, writing frames to disk, and encoding again.
Co-authored-by: Caroline Pascal <caroline8.pascal@gmail.com>
* added a "modified" suffix in case repo_id is not set in delete_episode
* adding docs for dataset tools
* bump av version and add back time_base assignment
* linter
* modified push_to_hub logic in lerobot_edit_dataset
* fix(progress bar): fixing the progress bar issue in dataset tools
* chore(concatenate): removing no longer needed concatenate_datasets usage
* fix(file sizes forwarding): forwarding files and chunk sizes in metadata info when splitting and aggregating datasets
* style fix
* refactor(aggregate): Fix video indexing and timestamp bugs in dataset merging
There were three critical bugs in aggregate.py that prevented correct dataset merging:
1. Video file indices: Changed from += to = assignment to correctly reference
merged video files
2. Video timestamps: Implemented per-source-file offset tracking to maintain
continuous timestamps when merging split datasets (was causing non-monotonic
timestamp warnings)
3. File rotation offsets: Store timestamp offsets after rotation decision to
prevent out-of-bounds frame access (was causing "Invalid frame index" errors
with small file size limits)
Changes:
- Updated update_meta_data() to apply per-source-file timestamp offsets
- Updated aggregate_videos() to track offsets correctly during file rotation
- Added get_video_duration_in_s import for duration calculation
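A hedged sketch of the per-source-file offset tracking (bug 2 above), not the actual aggregate.py code:

```python
def merged_timestamps(source_files: list[tuple[list[float], float]]) -> list[float]:
    # Each entry is (frame_timestamps, duration_in_s) for one source video.
    # Timestamps from each source are shifted by the accumulated duration of
    # everything merged before it, keeping the merged stream monotonic.
    out: list[float] = []
    offset = 0.0
    for timestamps, duration_s in source_files:
        out.extend(ts + offset for ts in timestamps)
        offset += duration_s  # stored after the rotation decision (bug 3)
    return out
```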
* Improved docs for split dataset, and added a check for the case where the split size results in zero episodes
* chore(docs): update merge documentation details
Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>
---------
Co-authored-by: CarolinePascal <caroline8.pascal@gmail.com>
Co-authored-by: Jack Vial <vialjack@gmail.com>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>