# OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
<p align="center">
<a href="https://os-world.github.io/">Website</a>
<a href="">Paper</a>
</p>
## Updates
- 2024-04-04: We released our [paper](), [environment and benchmark](https://github.com/xlang-ai/OSWorld), and [project page](https://os-world.github.io/). Check it out!
## Install
### Non-virtualized platform
If you are working on a non-virtualized platform, i.e., your own physical machine rather than a virtualized environment such as AWS, Azure, or Kubernetes, follow the instructions below. If you are on a virtualized platform, please refer to the [virtualized platform](https://github.com/xlang-ai/OSWorld?tab=readme-ov-file#virtualized-platform) section instead.
1. First, clone this repository and `cd` into it. Then, install the dependencies listed in `requirements.txt`. It is recommended that you use the latest version of Conda to manage the environment, but you can also choose to manually install the dependencies. Please ensure that the version of Python is >= 3.9.
```bash
# Clone the OSWorld repository
git clone https://github.com/xlang-ai/OSWorld
# Change directory into the cloned repository
cd OSWorld
# Optional: Create a Conda environment for OSWorld
# conda create -n osworld python=3.9
# conda activate osworld
# Install required dependencies
pip install -r requirements.txt
```
2. Install [VMware Workstation Pro](https://www.vmware.com/products/workstation-pro/workstation-pro-evaluation.html) (on Apple Silicon machines, install [VMware Fusion](https://www.vmware.com/go/getfusion) instead) and configure the `vmrun` command. Verify the installation by running the following:
```bash
vmrun -T ws list
```
If the installation and the environment variable setup are successful, you will see a message listing the currently running virtual machines.
3. Obtain the virtual machine image. If you are using Linux or Windows with an x86_64 CPU, download the virtual machine image, start it, and take the initial snapshot by executing the following commands. Remove the `nogui` parameter if you wish to view the activities within the virtual machine.
```bash
# Download the prepared Ubuntu virtual machine image
gdown https://drive.google.com/drive/folders/1HX5gcf7UeyR-2UmiA15Q9U-Wr6E6Gio8 -O Ubuntu --folder
# Start the virtual machine headlessly (drop "nogui" to show the VM window)
vmrun -T ws start "Ubuntu/Ubuntu.vmx" nogui
# Save the initial state as a snapshot named "init_state"
vmrun -T ws snapshot "Ubuntu/Ubuntu.vmx" "init_state"
```
For macOS with Apple chips, download the specially prepared virtual machine image and set it up by executing the following commands:
```bash
# Download the virtual machine image prepared for Apple Silicon
gdown https://drive.google.com/drive/folders/xxx -O Ubuntu --folder
# Start the virtual machine
vmrun -T fusion start "Ubuntu/Ubuntu.vmx"
# Save the initial state as a snapshot named "init_state"
vmrun -T fusion snapshot "Ubuntu/Ubuntu.vmx" "init_state"
```
### Virtualized platform
We are working on supporting it 👷. Please hold tight!
## Quick Start
Run the following minimal example to interact with the environment:
```python
from desktop_env.envs.desktop_env import DesktopEnv

example = {
    "id": "94d95f96-9699-4208-98ba-3c3119edf9c2",
    "instruction": "I want to install Spotify on my current system. Could you please help me?",
    "config": [
        {
            "type": "execute",
            "parameters": {
                "command": [
                    "python",
                    "-c",
                    "import pyautogui; import time; pyautogui.click(960, 540); time.sleep(0.5);"
                ]
            }
        }
    ],
    "evaluator": {
        "func": "check_include_exclude",
        "result": {
            "type": "vm_command_line",
            "command": "which spotify"
        },
        "expected": {
            "type": "rule",
            "rules": {
                "include": ["spotify"],
                "exclude": ["not found"]
            }
        }
    }
}

env = DesktopEnv(
    path_to_vm="Ubuntu/Ubuntu.vmx",
    action_space="pyautogui"
)

obs = env.reset(task_config=example)
obs, reward, done, info = env.step("pyautogui.rightClick()")
```
You should see logs confirming that the system is running normally, including the successful creation of the environment, the completion of setup, and the execution of actions. At the end, you will observe a right-click on the screen, which means you are ready to go.
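Beyond a single action, you can keep stepping the environment with further `pyautogui` action strings. Below is a minimal sketch that reuses the `env` object from the snippet above; the specific actions are illustrative placeholders, not a solution to the example task.
```python
# Continue interacting with the already-reset environment from the snippet
# above; the actions below are illustrative placeholders only.
follow_up_actions = [
    "pyautogui.hotkey('ctrl', 'alt', 't')",              # e.g. open a terminal on Ubuntu
    "pyautogui.typewrite('echo hello', interval=0.05)",  # type a command slowly
]
for action in follow_up_actions:
    obs, reward, done, info = env.step(action)
    if done:
        break
```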
## Run Benchmark
### Run the Baseline Agent
If you wish to run the baseline agent used in our paper, you can execute the following command as an example under the GPT-4V pure-screenshot setting:
```bash
python run.py --path_to_vm Ubuntu/Ubuntu.vmx --headless --observation_type screenshot --model gpt-4-vision-preview --result_dir ./results
```
The results, which include screenshots, actions, and video recordings of the agent's task completion, will be saved in the `./results` directory in this case. You can then run the following command to obtain the result:
```bash
python show_result.py
```
### Run Evaluation of Your Agent
Please start by reading through the [agent interface](https://github.com/xlang-ai/OSWorld/blob/main/mm_agents/README.md) and the [environment interface](https://github.com/xlang-ai/OSWorld/blob/main/desktop_env/README.md).
Correctly implement the agent interface and import your customized version in the `run.py` file.
Afterward, you can execute a command similar to the one in the previous section to run the benchmark on your agent.
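For orientation, the sketch below shows roughly what a custom agent might look like before being wired into `run.py`. The class name, the `predict` signature, and the returned action format are assumptions for illustration only; the linked agent and environment interface documents define the actual contract.
```python
# Illustrative sketch only: the class name, method names, and return format
# are assumptions, not the actual OSWorld interface (see mm_agents/README.md).
class MyAgent:
    def __init__(self):
        self.history = []

    def reset(self):
        """Clear per-task state before starting a new episode."""
        self.history = []

    def predict(self, instruction: str, obs: dict) -> list[str]:
        """Map the task instruction and the latest observation (e.g. a
        screenshot) to one or more pyautogui action strings."""
        self.history.append(obs)
        # A trivial hard-coded policy, for illustration only.
        return ["pyautogui.hotkey('ctrl', 'alt', 't')"]
```
You would then import your agent in `run.py` in place of the baseline agent and launch the benchmark with the same command as in the previous section.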
## Citation
If you find this environment useful, please consider citing our work:
```
@article{OSWorld,
  title={},
  author={},
  journal={arXiv preprint arXiv:xxxx.xxxx},
  year={2024}
}
```