
Smart Prosthetic. Powered By AI.

Vision Arm is designed to be a cheaper, fully automated alternative to today's prosthetic arms. It uses 3D printing to bring down design and manufacturing costs, and deep learning techniques to improve hand manipulation.

About


The Problem

In a nutshell, the key problems with prosthetics today are:

  • Shortage. The WHO estimates that 30 million people need prosthetic devices, and cost and accessibility barriers keep most of them from getting one.
  • Expensive. Prosthetic arms today can cost anywhere from $5,000 for a purely cosmetic arm to $20,000 - $100,000 or more for an advanced myoelectric arm controlled by muscle movements. Without health insurance, this is out of reach for most people.
  • Manual or difficult control. It can take 3+ months for a user to learn to use a prosthetic, and having to control it manually makes object manipulation difficult.


Our Solution

We've developed a 3D printed arm that costs $200, versus existing arms that can reach $100,000. Through computer vision and reinforcement learning algorithms, we train the arm to detect objects and work out how best to grasp and manipulate them to complete a task, so the user doesn't have to control it manually.

  • 3D Printing to improve prosthetic arm design & reduce costs.
  • GG-CNN for detecting objects and grasping poses to manipulate objects in real-time.
  • Reinforcement Learning to improve prosthetic arm training and grasping tasks in unstructured environments.

Through support and mentorship from:

Snow

Vision Arm Features

These are some of the key components that our research and work are focused on:

3D Printed Design

Making even a very simple prosthetic arm can cost $1,000 in materials alone, but 3D scanning and printing can shrink that cost to as little as $4 (not including other hardware). Building on existing open-source designs such as the Moveo arm and HACKberry, we're developing an improved design that integrates all sensors, motors, GPUs and the depth camera (for grasping and object detection) into one arm. The goal is an arm that is easier to use and much cheaper than arms today. A test rig using the Moveo arm as a base platform and the HACKberry hand will be used to evaluate our RL and CNN algorithms.

Convolutional Neural Network (GG-CNN)

The algorithm we use for grasp detection is a Generative Grasping Convolutional Neural Network (GG-CNN), largely inspired by this paper. It takes real-time depth images, detects objects, and predicts a grasp pose at every pixel, parameterised as a grasp quality, angle and gripper width for each pixel of the input image, computed in a fraction of a second. The best grasp is selected and a velocity command (v) is issued to the prosthetic arm.
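As a rough illustration, the sketch below shows a small GG-CNN-style network in PyTorch: a fully convolutional encoder-decoder over a single-channel depth image with one output map per grasp parameter, plus the post-processing that picks the best grasp. Layer sizes and the input resolution are illustrative assumptions, not our exact network.

```python
import torch
import torch.nn as nn

class GGCNN(nn.Module):
    """Minimal GG-CNN-style sketch: per-pixel grasp quality, angle and width."""
    def __init__(self):
        super().__init__()
        # Encoder-decoder over a 1-channel depth image (300x300 assumed here)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 9, stride=3, padding=4), nn.ReLU(),
            nn.Conv2d(16, 8, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(8, 8, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 8, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 16, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 16, 9, stride=3, padding=4, output_padding=2), nn.ReLU(),
        )
        # One head per grasp parameter, predicted at every pixel
        self.quality = nn.Conv2d(16, 1, 1)   # grasp quality map
        self.cos2t = nn.Conv2d(16, 1, 1)     # cos(2*angle) map
        self.sin2t = nn.Conv2d(16, 1, 1)     # sin(2*angle) map
        self.width = nn.Conv2d(16, 1, 1)     # gripper width map

    def forward(self, depth):
        x = self.decoder(self.encoder(depth))
        return self.quality(x), self.cos2t(x), self.sin2t(x), self.width(x)

# Select the best grasp from one depth frame
net = GGCNN()
depth = torch.randn(1, 1, 300, 300)                 # stand-in for a real-time depth image
q, cos2t, sin2t, width = net(depth)
idx = torch.argmax(q.view(-1))                      # pixel with the highest grasp quality
y, x = divmod(idx.item(), q.shape[-1])
angle = 0.5 * torch.atan2(sin2t.view(-1)[idx], cos2t.view(-1)[idx])
print(f"best grasp at ({x}, {y}), angle {angle.item():.2f} rad, width {width.view(-1)[idx].item():.2f}")
```

The (x, y) pixel, angle and width from this step are what get turned into the velocity command sent to the arm.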

Imitation & Reinforcement Learning

We are experimenting with Deep Deterministic Policy Gradient (DDPG), Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO) in simulation to train an agent. The goal is for the agent to learn grasping techniques for unstructured environments and to automatically adapt the prosthetic arm's control system to the needs of individual patients. We're also experimenting with the DAgger algorithm to learn, through imitation, from a patient performing actions that are then replicated by the prosthetic arm. An example of our simulation in Pybullet can be seen here.
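The sketch below shows roughly how a PPO agent can be trained in simulation with stable-baselines3. The environment ID "VisionArmGraspEnv-v0" is a placeholder for our PyBullet grasping environment, not a published package; any Gym-compatible environment with the same interface would work, and the other algorithms are trained the same way.

```python
import gym
from stable_baselines3 import PPO

# Hypothetical PyBullet grasping environment (placeholder name)
env = gym.make("VisionArmGraspEnv-v0")

# PPO shown here; DDPG / TRPO variants follow the same train-and-evaluate pattern
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=200_000)     # reward assumed to be successful grasps
model.save("ppo_grasp")

# Roll out the trained policy in simulation
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```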


Credit: Sri Harsha Kunda (designer), Moveo arm (base platform), HACKberry (prosthetic arm).

Our Test Prototype

For the arm, we will use the prosthetic hand design as the end effector and the Moveo arm as the base. The base system (Moveo arm) is a platform to verify the prosthetic end's functionality: it emulates the joints of a human arm across various degrees of freedom, while the prosthetic end acts as the end effector. The Moveo moves the prosthetic end into a predefined position for executing a grab. At that point we perform the grab movement with the prosthetic at various angles (pitch, yaw, twist) to test the forces and movements required for a grab.
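A rough PyBullet version of this test loop is sketched below: move to a predefined pre-grasp pose, then sweep the wrist through pitch, yaw and twist angles while logging the applied joint torques. The URDF filename, joint indices and angle values are placeholders for our Moveo + HACKberry model, not the actual test parameters.

```python
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                               # use p.GUI to visualize
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
arm = p.loadURDF("moveo_hackberry.urdf", useFixedBase=True)   # hypothetical model file

# Predefined joint pose the Moveo base drives to before the grab (placeholder values)
PREGRASP = [0.0, -0.6, 1.2, 0.0, 0.4]
for joint, target in enumerate(PREGRASP):
    p.setJointMotorControl2(arm, joint, p.POSITION_CONTROL, targetPosition=target)

# Sweep the wrist axes and record the torque needed to hold each grab angle
WRIST_PITCH, WRIST_YAW, WRIST_TWIST = 5, 6, 7     # joint indices are placeholders
for angle in (-0.4, -0.2, 0.0, 0.2, 0.4):
    for joint in (WRIST_PITCH, WRIST_YAW, WRIST_TWIST):
        p.setJointMotorControl2(arm, joint, p.POSITION_CONTROL, targetPosition=angle)
        for _ in range(240):                      # settle for ~1 s at 240 Hz
            p.stepSimulation()
        state = p.getJointState(arm, joint)
        print(joint, angle, state[3])             # applied motor torque during the grab
```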


Architecture

The CNN layer runs on a Jetson TX1 as a separate entity: the program takes the camera signal and processes the images along with the centroid values of the detected objects. The centroid values and object IDs are transmitted to a ROS layer, which then executes a control decision.
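A minimal sketch of that handoff is shown below: a ROS node on the Jetson publishes each detected object's centroid, with the object ID carried alongside it. The node name, topic name and the convention of putting the ID in the header frame are illustrative assumptions, not our exact message definitions.

```python
import rospy
from geometry_msgs.msg import PointStamped

def publish_detection(pub, object_id, cx, cy, cz):
    """Publish one object's centroid; the frame_id field carries the object ID (assumed convention)."""
    msg = PointStamped()
    msg.header.stamp = rospy.Time.now()
    msg.header.frame_id = str(object_id)
    msg.point.x, msg.point.y, msg.point.z = cx, cy, cz
    pub.publish(msg)

if __name__ == "__main__":
    rospy.init_node("ggcnn_detections")
    pub = rospy.Publisher("/vision_arm/centroids", PointStamped, queue_size=10)
    rate = rospy.Rate(10)                          # ~10 Hz detection loop
    while not rospy.is_shutdown():
        # In the real node, centroids and IDs come from the GG-CNN running on the depth stream
        publish_detection(pub, object_id=1, cx=0.3, cy=0.0, cz=0.2)
        rate.sleep()
```

On the other side, the ROS control layer subscribes to this topic and turns each detection into a motion command for the arm.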

Blog Posts


Robotic hand that can see for itself

Alishba Imran, May 2020

More about the problem and how we're solving it using CNNs and RL.

Continue Reading...


Neural-Symbolic AI Approach to Humanoid Manipulation

Alishba Imran, August 2020

The grasping task and how we can combine neural net approaches with Symbolic AI.

Continue Reading...

Have Questions? Let's Connect.