Streamline processor loading logic
committed by uzhilinsky
parent fc7b7bc694, commit 06c632b144

@@ -104,11 +104,13 @@ The training config is used to determine which data transformations should be applied
There are also a number of checkpoints available as exported JAX graphs, which we trained ourselves using our internal training code. These can be served using the following command:
```bash
-uv run scripts/serve_policy.py --env ALOHA policy:exported --policy.dir=s3://openpi-assets/exported/pi0_aloha/model --policy.processor=trossen_biarm_single_base_cam_24dim
+uv run scripts/serve_policy.py --env ALOHA policy:exported --policy.dir=s3://openpi-assets/exported/pi0_aloha/model [--policy.processor=trossen_biarm_single_base_cam_24dim]
```
-In this case, the data transformations are taken from the default policy, and the processor name is used to determine which norm stats should be used to normalize the transformed data.
+For these exported models, norm stats are loaded from processors that are exported along with the model, while data transformations are defined in the corresponding default policy (see `create_default_policy` in [scripts/serve_policy.py](scripts/serve_policy.py)). The processor name is optional; if it is not provided, we do the following (a sketch of this fallback follows the list):
+- Try using the default environment processor name
+- Load the processor if there is only one available
+- Raise an error if multiple processors are available, asking the user to provide a processor name
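
A minimal sketch of this fallback order, assuming a hypothetical `resolve_processor` helper and a hypothetical `DEFAULT_ENV_PROCESSORS` mapping (the actual logic lives in [scripts/serve_policy.py](scripts/serve_policy.py)):

```python
# Illustrative sketch only: resolve_processor and DEFAULT_ENV_PROCESSORS are
# hypothetical stand-ins, not openpi APIs.
from typing import Sequence

# Hypothetical mapping from environment name to its default processor name.
DEFAULT_ENV_PROCESSORS = {
    "ALOHA": "trossen_biarm_single_base_cam_24dim",
}


def resolve_processor(env: str, available: Sequence[str], requested: str | None = None) -> str:
    """Pick a processor name using the fallback order described above."""
    # An explicitly requested processor (e.g. --policy.processor) always wins.
    if requested is not None:
        return requested
    # 1. Try the default processor name for this environment.
    default = DEFAULT_ENV_PROCESSORS.get(env)
    if default is not None and default in available:
        return default
    # 2. If exactly one processor was exported with the model, use it.
    if len(available) == 1:
        return available[0]
    # 3. Otherwise the choice is ambiguous: ask for an explicit name.
    raise ValueError(
        f"Multiple processors available ({', '.join(sorted(available))}); "
        "please provide a processor name."
    )


print(resolve_processor("ALOHA", ["trossen_biarm_single_base_cam_24dim"]))
```

The chosen processor then determines which exported norm stats are used to normalize the transformed data before inference.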
### Running with Docker: