In a nutshell, the key problems with prosthetics today are cost and control: a prosthetic arm can cost as much as $100,000, and the user typically has to steer every movement manually. We've developed a 3D printed arm that costs around $200. Using computer vision and reinforcement learning, we train the arm to detect objects and work out how best to grasp and manipulate them to complete a task, so the user doesn't have to control each step by hand.
These are some of the key components our research and work focus on:
Making even a very simple prosthetic arm can cost $1,000 in materials alone, but 3D scanning and printing can shrink that cost to as little as $4 (not including other hardware). Leveraging existing open source designs such as the Moveo arm and HACKberry, we're developing an improved design that integrates all sensors, motors, GPUs and a depth camera (for grasping and object detection) into a single arm. The goal is an arm that is easier to use and much cheaper than the arms available today. A test version built from the Moveo arm as a base platform and a HACKberry hand will be used to evaluate our RL and CNN algorithms.
The algorithm we use for grasp detection is a Generative Grasping Convolutional Neural Network (GG-CNN), largely inspired by this paper. The network takes in real-time depth images, detects the objects in them, and predicts a grasp pose at every pixel for different grasping tasks and objects, parameterised as a grasp quality, grasp angle and gripper width, all in a fraction of a second. The best grasp is then selected and a velocity command (v) is issued to the prosthetic arm.
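To make the idea concrete, here is a minimal sketch of a GG-CNN-style network in PyTorch. The layer sizes and the 300x300 input are illustrative assumptions rather than the exact architecture from the paper; the key point is that the network is fully convolutional and emits per-pixel quality, angle (encoded as sin/cos of twice the angle so it wraps correctly) and width maps over the whole image.

```python
import torch
import torch.nn as nn

class GGCNNSketch(nn.Module):
    """Fully convolutional net: depth image in, per-pixel grasp maps out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 9, stride=3, padding=4), nn.ReLU(),
            nn.Conv2d(32, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 8, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 16, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 32, 9, stride=3, padding=4, output_padding=2), nn.ReLU(),
        )
        # One single-channel head per grasp parameter, predicted at every pixel.
        self.quality = nn.Conv2d(32, 1, 3, padding=1)  # grasp quality in [0, 1]
        self.cos2 = nn.Conv2d(32, 1, 3, padding=1)     # cos(2 * angle)
        self.sin2 = nn.Conv2d(32, 1, 3, padding=1)     # sin(2 * angle)
        self.width = nn.Conv2d(32, 1, 3, padding=1)    # gripper width

    def forward(self, depth):
        x = self.decoder(self.encoder(depth))
        q = torch.sigmoid(self.quality(x))
        # Decoding through atan2 keeps the angle continuous across +/- pi/2.
        angle = 0.5 * torch.atan2(self.sin2(x), self.cos2(x))
        return q, angle, self.width(x)

# Pick the best grasp: the pixel with the highest predicted quality.
net = GGCNNSketch()
depth = torch.zeros(1, 1, 300, 300)    # stand-in for a real depth frame
q, angle, width = net(depth)
best = torch.argmax(q.flatten()).item()
row, col = divmod(best, q.shape[-1])   # pixel coordinates of the chosen grasp
```

From the chosen pixel, the grasp pose in camera coordinates can be recovered via the depth camera intrinsics, and the velocity command (v) is computed toward that pose.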
We are experimenting with Deep Deterministic Policy Gradient (DDPG), Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO) in simulation to train an agent. The goal is for the agent to learn grasping techniques for unstructured environments and to automatically adapt the control system of the prosthetic arm to the needs of individual patients. We're also experimenting with the DAgger algorithm, learning by imitation from a patient performing actions that are then replicated by the prosthetic arm. An example of our simulation in PyBullet can be seen here.
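As a sketch of what the simulation training loop looks like, here is how such an agent could be trained with PPO from Stable-Baselines3. The environment ID `GraspEnv-v0` is a hypothetical placeholder for a Gym-compatible PyBullet grasping environment, not our actual interface; swapping in DDPG (or TRPO from sb3-contrib) is a one-line change.

```python
import gym
from stable_baselines3 import PPO

# Hypothetical ID for a Gym-registered PyBullet grasping environment.
env = gym.make("GraspEnv-v0")

# Train a PPO agent on the grasping task.
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)

# Roll out the trained policy for one episode.
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
```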
For the arm, we will use the prosthetic hand design as the end effector and the Moveo arm as the base. The base system (base arm) is a platform for verifying the functionality of the prosthetic end: the Moveo arm emulates the joints of a human arm across several degrees of freedom, while the prosthetic end acts as the end effector. The Moveo positions the prosthetic hand at a predefined pose for executing a grab, and we then perform the grab movement on the prosthetic at various angles (pitch, yaw, twist) to test the forces and movements a grab requires.
The CNN layer will run on a Jetson TX1 as a separate entity: the program takes the camera signal, processes the images, and computes centroid values for the detected objects. The centroids, along with object IDs, are transmitted to a ROS layer, which then executes a control decision.
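A minimal sketch of the detection-side ROS node is below, using standard rospy messages. The topic names are illustrative, and `detect_objects` is a hypothetical stand-in for the CNN inference call, not our actual code.

```python
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import Int32
from geometry_msgs.msg import PointStamped

def on_frame(msg):
    """Run the CNN on one camera frame and publish the result."""
    # detect_objects is a hypothetical helper standing in for CNN inference;
    # assumed to return an object ID and the centroid of the detected object.
    object_id, cx, cy, cz = detect_objects(msg)

    centroid = PointStamped()
    centroid.header.stamp = rospy.Time.now()
    centroid.point.x, centroid.point.y, centroid.point.z = cx, cy, cz

    # The ROS control layer subscribes to these two topics.
    id_pub.publish(Int32(data=object_id))
    centroid_pub.publish(centroid)

rospy.init_node("cnn_detector")
id_pub = rospy.Publisher("/detections/object_id", Int32, queue_size=1)
centroid_pub = rospy.Publisher("/detections/centroid", PointStamped, queue_size=1)
rospy.Subscriber("/camera/image_raw", Image, on_frame)
rospy.spin()
```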
We are also exploring the grasping task from another angle: how to combine neural network approaches with symbolic AI.