# DesktopEnv: An Environment towards Human-like Computer Task Mastery
## Setup guide
### For members of the team
todo
### For users of the environment
todo
## Road map of infra (Proposed)
- [x] Explore VMware, and whether it can be connected to and controlled through the `mouse` package
- [x] Explore whether Windows and macOS can be installed
  - macOS is closed source and cannot legally be installed on non-Apple hardware
  - Windows is legally available and can be installed
- [x] Build a gym-like Python interface for controlling the VM (see the interface sketch after this list)
- [x] Record human actions (mouse movements, clicks, keystrokes) for annotation, with support for replay and compression (see the recording sketch after this list)
- [x] Build a simple task, e.g. open a browser, navigate to a website, click a button, and close the browser
- [x] Set up a pipeline and build agent implementations (zero-shot) for the task
- [x] Decide which tasks inside DesktopEnv to focus on, and start wrapping up the environment for public release
- [x] Start annotating the examples for ~~training~~ and testing
- [x] Add error handling for file transfer, file opening, etc.
- [x] Add the OS accessibility tree to the observation space
- [ ] Add pre-processing and post-processing action support for benchmark setup and evaluation
- [ ] Add multiprocessing support to make reinforcement learning more efficient
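
As a rough sketch of the gym-like interface item above (the class name, action format, and stubbed internals here are illustrative assumptions, not the actual DesktopEnv API):

```python
# Minimal, runnable sketch of a gym-like VM interface following the
# classic reset()/step() contract. All names and internals are
# assumptions for illustration only.
from typing import Any, Dict, Tuple

class DesktopEnvSketch:
    def reset(self) -> Dict[str, Any]:
        # A real implementation would revert the VM to a clean snapshot,
        # e.g. via VMware's `vmrun revertToSnapshot`.
        return self._observe()

    def step(self, action: Dict[str, Any]) -> Tuple[Dict[str, Any], float, bool, dict]:
        # A real implementation would forward the action to a
        # mouse/keyboard controller running inside the VM.
        print(f"executing action: {action}")
        obs = self._observe()
        reward, done = 0.0, False  # task-specific success evaluation goes here
        return obs, reward, done, {}

    def _observe(self) -> Dict[str, Any]:
        # Placeholder observation; the real one would carry a screenshot
        # and, per the roadmap, the OS accessibility tree.
        return {"screenshot": None, "accessibility_tree": None}

if __name__ == "__main__":
    env = DesktopEnvSketch()
    obs = env.reset()
    obs, reward, done, info = env.step({"type": "click", "x": 100, "y": 200})
```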
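And a minimal sketch of the recording item, assuming the off-the-shelf Python `mouse` and `keyboard` packages (the actual annotation tool is based on DuckTrack, and a real recorder would capture both streams concurrently rather than in sequence):

```python
# Sketch of recording and replaying human input with the `mouse` and
# `keyboard` packages (`pip install mouse keyboard`); illustration only.
import mouse
import keyboard

# Record mouse events (stops on a right-button press by default), then
# keyboard events (stops when Escape is pressed).
mouse_events = mouse.record()
key_events = keyboard.record(until="escape")

# Replay each stream; a speed_factor above 1 compresses playback in time.
mouse.play(mouse_events, speed_factor=2.0)
keyboard.play(key_events, speed_factor=2.0)
```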
## Road map of benchmark, tools, and resources (Proposed)
- [ ] Improve the annotation tool based on DuckTrack, making it more robust and aligning it with the accessibility tree
- [ ] Annotate the steps for completing each task
- [ ] Build a website for the project
- [ ] Crawl all the resources we explored from the internet and make them easy to access
- [ ] Set up ways for the community to contribute new examples