Agent

Prompt-based Agents

Supported Models

We currently support the following models as foundation models for the agents:

  • GPT-3.5 (gpt-3.5-turbo-16k, ...)
  • GPT-4 (gpt-4-0125-preview, gpt-4-1106-preview, ...)
  • GPT-4V (gpt-4-vision-preview, ...)
  • Gemini-Pro
  • Gemini-Pro-Vision
  • Claude 3, Claude 2 (claude-3-haiku-20240307, claude-3-sonnet-20240229, ...)
  • ...

And those from the open-source community:

  • Mixtral 8x7B
  • Qwen, Qwen-VL
  • CogAgent
  • Llama 3
  • ...

In the future, we will integrate and support more foundation models for digital agents, so stay tuned.

How to use

from mm_agents.agent import PromptAgent

agent = PromptAgent(
    model="gpt-4-vision-preview",
    observation_type="screenshot",
)
agent.reset()

# Say we have an instruction and an observation (a raw screenshot in bytes)
instruction = "Please help me to find the nearest restaurant."
with open("path/to/observation.jpg", "rb") as f:
    obs = {"screenshot": f.read()}

response, actions = agent.predict(
    instruction,
    obs
)
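
The predicted actions can then be executed in the benchmark environment and the loop repeated with fresh observations. The sketch below assumes an OSWorld-style DesktopEnv whose step returns an observation, reward, done flag, and info dict; check the desktop_env package in this repository for the exact import path, constructor, and reset arguments.

# Minimal agent-environment loop (a sketch, assuming an OSWorld-style
# DesktopEnv with a reset/step API; adapt names to your checkout).
from desktop_env.desktop_env import DesktopEnv
from mm_agents.agent import PromptAgent

env = DesktopEnv(action_space="pyautogui")
agent = PromptAgent(
    model="gpt-4-vision-preview",
    observation_type="screenshot",
)
agent.reset()

instruction = "Please help me to find the nearest restaurant."
obs = env.reset()  # pass task_config=... when running a benchmark example
done = False
while not done:
    response, actions = agent.predict(instruction, obs)
    for action in actions:
        # Each predicted action is executed in the environment, which
        # returns the next observation for the agent.
        obs, reward, done, info = env.step(action)
        if done:
            break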

Observation Space and Action Space

We currently support the following observation spaces:

  • a11y_tree: the accessibility tree of the current screen
  • screenshot: a screenshot of the current screen
  • screenshot_a11y_tree: a screenshot of the current screen with the accessibility tree overlay
  • som: the set-of-mark trick applied to the current screen, with a metadata table describing the marks

And the following action spaces:

  • pyautogui: valid Python code that uses the pyautogui library
  • computer_13: a set of enumerated actions designed by us (see the illustrative examples below)
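
For intuition, here is what an action might look like in each space. A pyautogui action is simply executable Python source; the computer_13 dict uses hypothetical field names for illustration only, so consult the action definitions in this repository for the authoritative schema.

# pyautogui action space: the agent emits executable Python code strings
pyautogui_action = "import pyautogui\npyautogui.click(960, 540)"

# computer_13 action space: one of a fixed set of enumerated action types.
# The field names below are hypothetical, not the repository's real schema.
computer_13_action = {
    "action_type": "CLICK",
    "x": 960,
    "y": 540,
}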

To feed an observation into the agent, maintain the obs variable as a dict whose keys match the agent's observation_type:

# continue from the previous code snippet
with open("path/to/observation.jpg", "rb") as f:
    obs = {
        "screenshot": f.read(),
        "a11y_tree": ""  # placeholder: the serialized accessibility tree
    }
response, actions = agent.predict(
    instruction,
    obs
)

Efficient Agents, Q* Agents, and more

Stay tuned for more updates.