Getting Started

Installation

# Clone the BulletArm Repo
git clone https://github.com/ColinKohler/BulletArm.git && cd BulletArm

# Install dependencies
pip install -r requirements.txt

# Install BulletArm either using pip
pip install .
# OR by adding it to your PYTHONPATH
export PYTHONPATH=/path/to/BulletArm/:$PYTHONPATH

Block Stacking Demo

To verify that your installation is in working order, we recommend running the block stacking demo.

python tutorials/block_stacking_demo.py

Below, we walk through the demo code and briefly describe the important details.

# The env_factory provides the entry point to BulletArm
from bulletarm import env_factory

def runDemo():
  env_config = {'render': True}
  # The env_factory creates the desired number of PyBullet simulations to run in
  # parallel. The task that is created depends on the environment name and the
  # task config passed as input.
  env = env_factory.createEnvs(1, 'block_stacking', env_config)

  # Start the task by resetting the simulation environment.
  obs = env.reset()
  done = False
  while not done:
    # We get the next action using the planner associated with the block stacking
    # task and execute it.
    action = env.getNextAction()
    obs, reward, done = env.step(action)
  env.close()

if __name__ == '__main__':
  runDemo()
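The loop above is the standard reset/step interaction pattern: reset the environment, then repeatedly query the planner for an action and step the simulation until the episode ends. As a self-contained sketch that runs without PyBullet installed, the same structure looks like this, where `ToyEnv` is a hypothetical stand-in (not part of BulletArm) that mimics the interface used above:

```python
class ToyEnv:
  """Hypothetical stand-in with the same reset/step/getNextAction shape
  as the demo environment above. Not part of BulletArm."""
  def __init__(self, episode_length=3):
    self.episode_length = episode_length
    self.t = 0

  def reset(self):
    # Start a new episode and return an initial (dummy) observation.
    self.t = 0
    return self.t

  def getNextAction(self):
    # A real planner would compute a task-specific action here.
    return 1

  def step(self, action):
    # Advance the episode; reward 1.0 only on successful completion.
    self.t += 1
    done = self.t >= self.episode_length
    reward = 1.0 if done else 0.0
    return self.t, reward, done

  def close(self):
    pass

env = ToyEnv()
obs = env.reset()
done = False
total_reward = 0.0
while not done:
  action = env.getNextAction()
  obs, reward, done = env.step(action)
  total_reward += reward
env.close()
print(total_reward)  # 1.0
```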

Tutorials

We provide a number of tutorials, including an introductory tutorial demonstrating how to collect data for training an RL agent. Examples showing how to extend BulletArm to create new tasks or new robots are also included.
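The data-collection pattern used when training an RL agent can be sketched as follows. This is a minimal illustration, not the tutorial code itself: `ToyEnv` is a hypothetical stand-in with the same reset/step shape as the demo environment, and a plain `deque` serves as the replay buffer of `(obs, action, reward, next_obs, done)` transitions.

```python
import random
from collections import deque

class ToyEnv:
  """Hypothetical stand-in environment (not BulletArm)."""
  def __init__(self, episode_length=3):
    self.episode_length = episode_length
    self.t = 0

  def reset(self):
    self.t = 0
    return self.t

  def getNextAction(self):
    return 1  # a real planner would compute a task-specific action

  def step(self, action):
    self.t += 1
    done = self.t >= self.episode_length
    return self.t, float(done), done

# Replay buffer: old transitions are evicted once maxlen is reached.
buffer = deque(maxlen=10000)

env = ToyEnv()
for episode in range(5):
  obs = env.reset()
  done = False
  while not done:
    action = env.getNextAction()
    next_obs, reward, done = env.step(action)
    buffer.append((obs, action, reward, next_obs, done))
    obs = next_obs

# Sample a small batch of transitions for a training update.
batch = random.sample(list(buffer), 4)
print(len(buffer))  # 15 (5 episodes x 3 steps)
```

Collecting transitions with the planner in this way yields expert demonstration data that can seed an off-policy learner before switching to the agent's own actions.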