# DesktopEnv: An Environment towards Human-like Computer Task Mastery
## Setup guide
### x86_64
todo
### Apple Silicon
Please refer to https://docs.google.com/document/d/1KBdeZwmZs2Vi_Wsnngb3Wf1-RiwMMpXTftwMqP2Ztak/edit#heading=h.uh0x0tkl7fuw
## Roadmap of infra (Proposed)
- [x] Explore VMware, and whether it can be connected to and controlled through the mouse package
- [x] Explore Windows and macOS, and whether they can be installed
  - macOS is closed-source and cannot legally be installed
  - Windows is legally available and can be installed
- [x] Build a Gym-like Python interface for controlling the VM (see the interface sketch below)
- [x] Record human actions (mouse movement, clicks, keystrokes) for annotation, so that traces can be replayed and compressed (see the recording sketch below)
- [x] Build a simple task, e.g. open a browser, navigate to a website, click a button, and close the browser
- [x] Set up a pipeline and build a zero-shot agent implementation for the task
- [x] Decide which tasks inside DesktopEnv to focus on, and start wrapping up the environment for public release
- [x] Start annotating examples for ~~training~~ and testing
- [x] Handle errors during file passing, file opening, etc.
- [x] Add the accessibility tree from the OS to the observation space (see the accessibility-tree sketch below)
- [x] Add pre-processing and post-processing action support for benchmark setup and evaluation
- [ ] Multiprocess support, so that reinforcement learning can run more efficiently (see the parallel-rollout sketch below)
- [ ] Experiment logging and visualization system
- [ ] Add more tasks, maybe scale to 300 for v1.0.0, and create a dynamic leaderboard
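
The Gym-like interface above is easiest to picture as a `reset`/`step` loop. Below is a minimal sketch; `DesktopVMEnv` and its observation/action formats are illustrative assumptions, not the finalized API:

```python
# Hypothetical sketch of a Gym-style interface for controlling the VM;
# DesktopVMEnv and its observation/action formats are illustrative
# assumptions, not the finalized DesktopEnv API.
from typing import Any, Dict, Tuple


class DesktopVMEnv:
    """Gym-like wrapper around a running VM."""

    def reset(self) -> Dict[str, Any]:
        """Revert the VM to a clean snapshot and return the first observation."""
        return {"screenshot": b"", "accessibility_tree": None}

    def step(self, action: Dict[str, Any]) -> Tuple[Dict[str, Any], float, bool, Dict]:
        """Execute one mouse/keyboard action in the VM and return
        (observation, reward, done, info), mirroring gym.Env.step()."""
        observation = {"screenshot": b"", "accessibility_tree": None}
        return observation, 0.0, True, {}


# Typical agent loop:
env = DesktopVMEnv()
obs = env.reset()
done = False
while not done:
    action = {"type": "click", "x": 100, "y": 200}  # the agent decides here
    obs, reward, done, info = env.step(action)
```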
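
For action recording and replay, one possible approach is a timestamped event log, shown here with the pynput library; the actual DuckTrack-based recorder may work differently:

```python
# Hypothetical record-and-replay sketch using pynput; the project's
# DuckTrack-based recorder may work differently.
import json
import time

from pynput import mouse
from pynput.mouse import Button, Controller

events = []
start = time.time()


def on_click(x, y, button, pressed):
    # Store each press/release with a timestamp relative to the start,
    # so the trace can later be replayed (and compressed offline).
    events.append({"t": time.time() - start, "x": x, "y": y,
                   "button": button.name, "pressed": pressed})


# Record clicks for five seconds.
with mouse.Listener(on_click=on_click):
    time.sleep(5)

with open("trace.json", "w") as f:
    json.dump(events, f)

# Replay the trace, preserving the original timing.
ctrl = Controller()
last_t = 0.0
for e in json.load(open("trace.json")):
    time.sleep(e["t"] - last_t)
    last_t = e["t"]
    ctrl.position = (e["x"], e["y"])
    (ctrl.press if e["pressed"] else ctrl.release)(Button[e["button"]])
```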
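
For the accessibility-tree observation, a Linux guest exposes its tree over AT-SPI, which can be dumped with pyatspi; the exact observation format we serialize is not shown here:

```python
# Rough sketch of dumping the accessibility tree on a Linux guest via AT-SPI
# (pyatspi); the observation format DesktopEnv actually uses is not shown.
import pyatspi


def dump(node, depth=0):
    """Recursively print the role and name of each accessible node."""
    print("  " * depth + f"{node.getRoleName()}: {node.name!r}")
    for i in range(node.childCount):
        dump(node.getChildAtIndex(i), depth + 1)


desktop = pyatspi.Registry.getDesktop(0)  # root of the tree for display 0
for i in range(desktop.childCount):
    dump(desktop.getChildAtIndex(i))
```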
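
For multiprocess support, the intent is to run several environments in parallel, in the spirit of a vectorized Gym env. A rough sketch, with a trivial stub standing in for the real env:

```python
# Rough sketch of running several environments in parallel processes, in the
# spirit of a vectorized Gym env; the stub below stands in for the real env.
import multiprocessing as mp


class StubEnv:
    """Trivial placeholder with the reset/step interface sketched above."""

    def reset(self):
        return {}

    def step(self, action):
        return {}, 1.0, True, {}


def rollout(env_id: int, max_steps: int) -> float:
    """Run one short episode in its own process and return the episode return."""
    env = StubEnv()
    env.reset()
    total = 0.0
    for _ in range(max_steps):
        _, reward, done, _ = env.step({"type": "noop"})
        total += reward
        if done:
            break
    return total


if __name__ == "__main__":
    # Four worker processes, each driving its own VM-backed environment.
    with mp.Pool(processes=4) as pool:
        print(pool.starmap(rollout, [(i, 10) for i in range(4)]))
```
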
## Roadmap of benchmark, tools, and resources (Proposed)
- [ ] Improve the annotation tool based on DuckTrack, making it more robust and aligning it with the accessibility tree (see the hit-testing sketch below)
- [ ] Annotate the steps for completing each task
- [ ] Build a website for the project
- [ ] Crawl all the resources we have explored on the internet, and make them easy to access
- [ ] Set up ways for the community to contribute new examples
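
For aligning recorded events with the accessibility tree, one natural primitive is hit-testing a click against node bounding boxes. The `Node` structure below is an assumption for illustration:

```python
# Rough sketch of aligning a recorded click with an accessibility-tree node;
# the Node structure with a bounding box is an assumption for illustration.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Node:
    role: str
    name: str
    bbox: tuple  # (x, y, width, height) in screen coordinates
    children: List["Node"] = field(default_factory=list)


def hit_test(node: Node, x: int, y: int) -> Optional[Node]:
    """Return the deepest node whose bounding box contains (x, y)."""
    bx, by, bw, bh = node.bbox
    if not (bx <= x < bx + bw and by <= y < by + bh):
        return None
    for child in node.children:
        hit = hit_test(child, x, y)
        if hit is not None:
            return hit  # prefer the deepest (most specific) match
    return node


# Example: a click at (120, 40) resolves to the button, not the window.
root = Node("window", "Browser", (0, 0, 800, 600),
            [Node("button", "Back", (100, 30, 40, 20))])
print(hit_test(root, 120, 40).name)  # -> "Back"
```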