The BEHAVIOR Benchmark


Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments

BEHAVIOR is a benchmark for embodied AI agents: 100 household activities to be completed in simulation. In BEHAVIOR, an AI agent controls a robot embodiment, making decisions based on acquired virtual sensor signals and executing them with control action commands. The complexity, realism, and diversity of the activities make the benchmark a significant challenge for modern AI solutions.
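The interaction follows a standard sense-decide-act loop. Below is a minimal sketch of that loop, assuming iGibson’s gym-style Python API; the config file name is illustrative, and should be adapted to the configs shipped with the behavior and iGibson repositories:

from igibson.envs.igibson_env import iGibsonEnv

# Load a BEHAVIOR activity (the config file name here is illustrative;
# see the configs shipped with the behavior and iGibson repositories).
env = iGibsonEnv(config_file="behavior_onboard_sensing.yaml", mode="headless")

obs = env.reset()  # initial virtual sensor signals (e.g., RGB-D, proprioception)
for _ in range(100):
    action = env.action_space.sample()  # stand-in for a policy: action = policy(obs)
    obs, reward, done, info = env.step(action)  # execute the control command
    if done:  # activity solved or episode terminated
        break
env.close()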

In this documentation, you will find the information needed to use BEHAVIOR to train, develop, and evaluate your solutions. You will have to install and use iGibson, our open-source simulation environment, together with the BEHAVIOR Dataset of Objects (more in the installation instructions). You will also find details about the benchmark setup, such as the available embodiments, the control and sensing alternatives, the metrics, and the activities. This documentation also includes useful information to get started with our baselines.

We hope BEHAVIOR becomes a useful evaluation tool for AI and robotics. Please contact us if you have any questions or suggestions.

Citation

If you use BEHAVIOR or its assets and models, please consider citing the following publications:

@inproceedings{srivastava2021behavior,
      title={BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments}, 
      author={Sanjana Srivastava and Chengshu Li and Michael Lingelbach and Roberto Mart\'in-Mart\'in and Fei Xia and Kent Vainio and Zheng Lian and Cem Gokmen and Shyamal Buch and Karen Liu and Silvio Savarese and Hyowon Gweon and Jiajun Wu and Li Fei-Fei},
      booktitle={Conference on Robot Learning (CoRL)},
      year={2021}
}
@inproceedings{li2021igibson,
      title={iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks}, 
      author={Chengshu Li and Fei Xia and Roberto Martín-Martín and Michael Lingelbach and Sanjana Srivastava and Bokui Shen and Kent Vainio and Cem Gokmen and Gokul Dharan and Tanish Jain and Andrey Kurenkov and Karen Liu and Hyowon Gweon and Jiajun Wu and Li Fei-Fei and Silvio Savarese},
      booktitle={Conference on Robot Learning (CoRL)},
      year={2021}
}

Repositories

There are three main repositories necessary to evaluate with BEHAVIOR:

  • behavior (https://github.com/StanfordVL/behavior): the benchmark code, baselines, and examples.

  • iGibson (https://github.com/StanfordVL/iGibson): our open-source simulation environment.

  • bddl (https://github.com/StanfordVL/bddl): the BEHAVIOR Domain Definition Language and the activity definitions.

Datasets

There are three datasets necessary to evaluate agents in BEHAVIOR:

  • The iGibson 2.0 Dataset of Scenes: New versions of the fully interactive scenes, more densely populated with objects. They are downloaded as part of the installation procedure (see the Installation Section).

  • The BEHAVIOR Dataset of Objects: Collection of object models annotated with physical and semantic properties. The 3D models are free to use within iGibson 2.0 for BEHAVIOR; however, to preserve the artists’ copyright, the models are encrypted and may only be used within iGibson 2.0. Details on how to get access to the iGibson and BEHAVIOR datasets in a bundle can be found in the Installation Section.

  • The BEHAVIOR Dataset of Activity Definitions: Collection of definitions in a logic language (BDDL, our BEHAVIOR Domain Definition Language) that specify the initial configuration of the scene and the valid goal states of each activity. This dataset is shipped with the bddl package (https://github.com/StanfordVL/bddl). Activities cannot be defined uniquely: different people may provide different definitions. Therefore, we provide two valid definitions per activity, generated by human annotators and filtered to be executable in our scenes, i.e., initial states are guaranteed to be sampleable and goal states to be reachable. These definitions have been used to create the activity instances that humans solved in virtual reality to generate the BEHAVIOR Dataset of Human Demonstrations (see below). A simplified example of a BDDL definition is shown after this list.
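For illustration, here is a simplified, hypothetical definition in the spirit of the BEHAVIOR activity cleaning_microwave_oven. BDDL follows a PDDL-like syntax: objects are typed with WordNet synsets, :init describes the initial configuration to be sampled in a scene, and :goal gives the logical condition for success. The actual activity files shipped with the bddl package are more detailed:

(define (problem cleaning_microwave_oven_simplified)
    (:domain igibson)
    (:objects
        microwave.n.02_1 - microwave.n.02
        rag.n.01_1 - rag.n.01
        floor.n.01_1 - floor.n.01
    )
    (:init
        (dusty microwave.n.02_1)
        (onfloor rag.n.01_1 floor.n.01_1)
        (inroom floor.n.01_1 kitchen)
    )
    (:goal
        (not (dusty ?microwave.n.02_1))
    )
)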

Apart from these necessary datasets, we provide an additional dataset in BEHAVIOR:

  • The BEHAVIOR Dataset of Human Demonstrations: Collection of 500 successful human executions of the BEHAVIOR activities in iGibson 2.0 using a virtual reality interface. Humans control the BEHAVIOR Robot embodiment (see the Embodiments Section). The dataset includes all state-action pairs and can be deterministically replayed to generate any new observations (see the sketch below). We also include a first set of processed observations. More information and links can be found in the BEHAVIOR Dataset of Human Demos Section.
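As a sketch of how deterministic replay works, assume the recorded actions of a demonstration are available as a per-step dataset in an HDF5 log. The file layout, dataset names, and config below are illustrative; the actual replay utilities ship with the behavior repository (see the Examples section):

import h5py
from igibson.envs.igibson_env import iGibsonEnv

# Illustrative sketch: re-executing the logged action sequence in the same
# deterministic simulation regenerates all intermediate states/observations.
env = iGibsonEnv(config_file="behavior_vr.yaml", mode="headless")  # illustrative config
env.reset()
with h5py.File("demo.hdf5", "r") as log:      # illustrative demo file name
    for action in log["action"]:              # one logged action per simulation step
        obs, reward, done, info = env.step(action)
env.close()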

Documentation

General information about our benchmark can be found on our website (http://behavior.stanford.edu) or in our publication, available on arXiv.

For specific documentation about the iGibson simulator, please visit http://svl.stanford.edu/igibson/docs or refer to our publications: the iGibson 2.0 arXiv preprint and the iGibson 1.0 arXiv preprint.

More information about BDDL can be found at https://github.com/StanfordVL/bddl and in our BEHAVIOR publication.

Examples

We include multiple code examples in the folder behavior/examples. Take a look there to get started. Additional useful examples can be found in the folder igibson/examples of the iGibson repository.