Tutorials¶
Here is a set of examples showing how to use different MyoSuite models and non-stationarities. Jupyter notebooks can be found here
Test Environment¶
Example of how to use an environment, e.g. sending random movements
import myosuite
import gym
env = gym.make('myoElbowPose1D6MRandom-v0')
env.reset()
for _ in range(1000):
    env.mj_render()
    env.step(env.action_space.sample())  # take a random action
env.close()
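The loop above relies only on the standard Gym interface: reset, step, and action_space.sample. As a rough illustration of what that loop expects (a toy stand-in, not part of MyoSuite or Gym), any object exposing those calls works the same way:

```python
import random

class ToyEnv:
    """Minimal stand-in exposing the Gym-style calls used in the loop above."""

    class _Space:
        def sample(self):
            return random.uniform(-1.0, 1.0)  # a random action

    def __init__(self):
        self.action_space = self._Space()
        self._t = 0

    def reset(self):
        self._t = 0
        return 0.0  # initial observation

    def step(self, action):
        self._t += 1
        # Gym-style return: observation, reward, done flag, info dict
        return float(action), 0.0, self._t >= 10, {}

    def close(self):
        pass

env = ToyEnv()
env.reset()
for _ in range(10):
    env.step(env.action_space.sample())
env.close()
```

Real MyoSuite environments additionally provide `mj_render()` for MuJoCo visualization, which the toy class above omits.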
Activate and visualize finger movements¶
Example of how to generate a movement, e.g. index finger flexion, and visualize the result
import myosuite
import gym
env = gym.make('myoHandPoseRandom-v0')
env.reset()
for _ in range(1000):
    env.mj_render()
    env.step(env.action_space.sample())  # take a random action
env.close()
Test trained policy¶
Example of using a trained policy, e.g. for elbow flexion, and changing non-stationarities
import myosuite
import gym
import pickle
policy = "iterations/best_policy.pickle"
pi = pickle.load(open(policy, 'rb'))
env = gym.make('myoElbowPose1D6MRandom-v0')
obs = env.reset()
for _ in range(1000):
    env.mj_render()
    action = pi.get_action(obs)[0]  # mjrl policies return [action, info]
    obs, reward, done, info = env.step(action)
env.close()
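The loading pattern itself is plain `pickle`; the path `iterations/best_policy.pickle` is wherever the trainer saved the policy. A self-contained sketch of the same save/load round trip, using a hypothetical stand-in policy object in place of a real trained one:

```python
import io
import pickle

class StandInPolicy:
    """Hypothetical placeholder for a trained policy object."""

    def get_action(self, obs):
        # mjrl-style: a list of [action, info-dict]
        return [0.0, {}]

buf = io.BytesIO()                # in-memory stand-in for a .pickle file
pickle.dump(StandInPolicy(), buf) # save, as a trainer would
buf.seek(0)
pi = pickle.load(buf)             # load, as in the tutorial above
action = pi.get_action(None)[0]
```

With a real file, `pickle.load(open(path, 'rb'))` replaces the `BytesIO` buffer, exactly as shown in the tutorial.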
Test Muscle Fatigue¶
This example shows how to add fatigue to a model. It tests random actions on a model without and then with muscle fatigue.
import myosuite
import gym
env = gym.make('myoElbowPose1D6MRandom-v0')
env.reset()
for _ in range(1000):
    env.mj_render()
    env.step(env.action_space.sample())  # take a random action

# Add muscle fatigue
env = gym.make('myoFatiElbowPose1D6MRandom-v0')
env.reset()
for _ in range(1000):
    env.mj_render()
    env.step(env.action_space.sample())  # take a random action
env.close()
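The fatigued dynamics live inside the `myoFati*` environment itself. As an illustration of the general idea only (a toy first-order model with made-up rate constants, not MyoSuite's actual fatigue implementation), a fatigue state can accumulate under activation and recover at rest:

```python
def fatigue_step(f, activation, dt=0.01, k_fatigue=1.0, k_recovery=0.1):
    """One Euler step of a toy fatigue state f in [0, 1].

    Fatigue grows with muscle activation and decays at rest.
    The rate constants are illustrative, not MyoSuite's.
    """
    df = k_fatigue * activation * (1.0 - f) - k_recovery * f
    return min(1.0, max(0.0, f + dt * df))

# Sustained full activation drives fatigue up ...
f = 0.0
for _ in range(1000):
    f = fatigue_step(f, activation=1.0)
high = f

# ... and rest lets it recover.
for _ in range(1000):
    f = fatigue_step(f, activation=0.0)
low = f
```

In the MyoSuite environment the effect shows up as reduced force output over time for the same actions, which is why the fatigued model behaves differently under identical random inputs.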
Test Sarcopenia¶
This example shows how to add sarcopenia or muscle weakness to a model. It tests random actions on a model without and then with muscle weakness.
import myosuite
import gym
env = gym.make('myoElbowPose1D6MRandom-v0')
env.reset()
for _ in range(1000):
    env.mj_render()
    env.step(env.action_space.sample())  # take a random action

# Add muscle weakness
env = gym.make('myoSarcElbowPose1D6MRandom-v0')
env.reset()
for _ in range(1000):
    env.mj_render()
    env.step(env.action_space.sample())  # take a random action
env.close()
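Conceptually, sarcopenia reduces the force each muscle can produce. As a rough sketch (the helper function and the 0.5 scaling factor below are illustrative assumptions, not MyoSuite's implementation), weakness can be modeled by scaling down the per-muscle peak forces:

```python
def apply_weakness(max_forces, strength_scale=0.5):
    """Return per-muscle force limits scaled by a weakness factor in (0, 1]."""
    assert 0.0 < strength_scale <= 1.0
    return [f * strength_scale for f in max_forces]

healthy = [100.0, 250.0, 80.0]      # per-muscle peak forces (N), illustrative
weakened = apply_weakness(healthy)  # half strength, as a sarcopenia proxy
```

In the `myoSarc*` environments this kind of scaling is already baked into the model, so the same random actions produce visibly weaker movements.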
Test Physical tendon transfer¶
This example shows how to load a model with a physical tendon transfer.
import myosuite
import gym
env = gym.make('myoHandKeyTurnFixed-v0')
env.reset()
for _ in range(1000):
    env.mj_render()
    env.step(env.action_space.sample())  # take a random action

# Add tendon transfer
env = gym.make('myoTTHandKeyTurnFixed-v0')
env.reset()
for _ in range(1000):
    env.mj_render()
    env.step(env.action_space.sample())  # take a random action
env.close()
Resume Learning of policies¶
When using mjrl,
it might be necessary to resume training of a policy locally. This can be done with the following instruction:
python3 hydra_mjrl_launcher.py --config-path config --config-name hydra_biomechanics_config.yaml hydra/output=local hydra/launcher=local env=myoHandPoseRandom-v0 job_name=[Absolute Path of the policy] rl_num_iter=[New Total number of iterations]
Load DEP-RL Baseline¶
See here for more detailed documentation of deprl.
To load and execute the pre-trained DEP-RL baseline, make sure that the deprl
package is installed.
import gym
import myosuite
import deprl

# we can pass arguments to the environments here
env = gym.make('myoLegWalk-v0', reset_type='random')
policy = deprl.load_baseline(env)
obs = env.reset()
for i in range(1000):
    env.mj_render()
    action = policy(obs)
    obs, *_ = env.step(action)
env.close()