Introduction
On May 9th, we had the amazing opportunity to showcase our capstone project at the College of Engineering 2024 Senior Design Expo at California State University, Long Beach.
The inspiration for our project was to give quadriplegics greater independence at a reasonable cost. Current market wheelchairs for quadriplegics include controllers fitted inside the user’s mouth that can be activated with the tongue, tubes that the user must blow into to move the wheelchair based on different pressures, or pressure sensors that the user must touch with their head to move in a certain direction. However, all of these products share common issues: they are expensive, costing thousands of dollars, and require physical devices attached to the user.
Motivated by this, our team sought to devise an innovative solution. We developed a facial recognition-based wheelchair that processes input from a simple USB webcam, mapping facial positions to control wheelchair movement based on the direction the user is facing. To keep cost down, we repurposed old hoverboard motors to drive the wheelchair, and manufactured custom mounts for them. This technology frees the user from having anything physically on them while costing less than $500.
We were honored to be awarded 1st place by our judges for our project. This recognition was a testament to the hard work and creativity our team put into the project.
I’m thankful to my teammates Hector Castro, Salvador Macias Silva, and Eduardo Aguallo for their dedication and hard work. I’m immensely grateful to our professor Surajit Roy, along with the shop techs Jose Rivera and Chris Vallejo, for their guidance throughout the process. This project has not only been a significant academic achievement but also a deeply rewarding experience that has shaped my aspirations for the future.
Parts List
Here is a list of the parts we used in this project, with an explanation of each choice below.
- Motors
- Battery
- Raspberry Pi
- Arduino Mega
- Arduino Screw Terminal
- Motor Drivers
- Jumper Wires
- Momentary Push Buttons
- Toggle Switch
- Electronics Box
- 3 Pin Electrical Connector
- Electrical Wire
- Car Seat Pressure Sensor
- Wheelchair Bag
- Camera Mount
- Camera
- Steel Bars
Hoverboard Motor Wheels
At first we planned on using power chair motors to drive the wheelchair, but we quickly realized that this would not be viable. Since we were modifying a regular wheelchair, a direct-drive system would require extensive modifications to the existing wheels. This being the case, we opted for a friction drive instead. I came across a couple of videos in which someone had converted a regular bike to a friction drive using a hoverboard motor, and I realized we could apply the same technique with one hoverboard motor per wheel to drive the wheelchair.
These hoverboard motors are brushless DC motors built into the hoverboard wheel. Buying the motors alone off Amazon costs about $72 per wheel, so it may be cheaper to buy an entire hoverboard, or to look for a broken one at a reduced cost.
I’ve made another post on how I was able to control these motors with the Arduino Mega and my laptop which you can see here:
Battery
We originally planned on using power chair batteries, which would have cost about $150, but since we bought a complete hoverboard we could simply reuse its battery.
This battery is a 10S2P lithium-ion pack rated at 36 V and 158.4 Wh. Since it is the battery that came with the motors, it provides plenty of voltage and current for the torque we need.
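For reference, the 10S2P designation means ten lithium-ion cells in series (10 × 3.6 V nominal ≈ 36 V) with two such strings in parallel; at 36 V, the 158.4 Wh rating works out to 158.4 / 36 = 4.4 Ah of capacity.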
Motor Driver
This $19 board lets you easily control a motor with inputs for direction, brake, and speed, and it reads the motor's built-in hall sensors to report speed. For this project you need two of them.
Arduino
I decided to use the Elegoo Mega for this project since I already had it, it has all the necessary ports, and it is fast enough for our application. The Arduino version is about $50, while the Elegoo version is half that, and they work exactly the same.
Jumper Wires
We used jumper wires to connect the motor drivers and the Arduino.
Screw Terminal
The Arduino Mega has no way to hold the jumper wires securely in its terminals, so we used a screw terminal to make sure none of the wires come loose.
Raspberry Pi
The Arduino Mega alone doesn't have enough computational power to perform facial recognition, which is why it is connected to a Raspberry Pi 4 running Ubuntu.
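Putting it together, the full control pipeline is: webcam → Raspberry Pi (facial recognition and ROS nodes) → USB serial → Arduino Mega → motor drivers → motors. The Programming section below walks through each piece.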
Camera
For the camera we went with a Logitech C270 webcam because it connects to the Pi via USB and is relatively cheap at around $23.
Car Seat Pressure Sensor
This pressure sensor is typically used to detect when someone is sitting down in a car seat to warn them to wear their seat belt. In our case, we will just use it to detect when someone is sitting in the wheelchair.
Camera Mount
We chose this camera mount because of its vast adjustability and relatively low cost.
Electronics Box
All the electronics are housed inside this box. Holes were drilled into it to provide access to the Raspberry Pi and to route the wires out, and a control panel and charging port were also added.
Wheelchair Bag
This bag hangs on the handles of the wheelchair and houses the electronics box. A small cutout was added for the wires to route through.
3 Pin Electrical Connector
These 3 pin connectors are typically used for automotive wiring, but they were perfect for our case. For ease of maintenance, the motors' phase wires were connected through these connectors, which are an appropriate gauge and, unlike the old connectors, are easily detachable.
Electrical Wire
24 gauge wire was needed to extend the hall sensor wires far enough to reach the back of the wheelchair, where the electronics box is located. The 16 gauge wire was used for the same reason, but for the motor phase wires.
Toggle Switch
This toggle switch is used to turn the wheelchair on or off and is attached to the electronics box.
Momentary Push Buttons
These buttons are used to alter the maximum speed that the wheelchair can go and they are attached to the electronics box.
Steel Bars
The steel required for the motor mounts was bought by our professor and cost about $250 total. We received four identical steel bars, which we then cut to size to manufacture the mounts.
I’ve made another post on how we designed, prototyped, and manufactured these mounts which you can see here:
Assembly
We created a model of the wheelchair in SolidWorks in order to test-fit the motor mounts, which you can see below.
Electronics
After the mounts were manufactured, the motors' wires needed to be extended using the 16 gauge wire for the phase wires and the 24 gauge for the hall sensor wires.
The charging port that came with the hoverboard was fitted into a hole in the electronics box, as seen in the photo below. The toggle switch was also added, but the direction of the switch had to be reversed since we want the off position to close the circuit instead of the on position. Lastly, two buttons were added to the control panel to control the speed of the motors.
The picture below shows all of the electronics inside the box.
Testing
The video below shows our first trial with the metal mounts, before any controls were implemented. The motors simply run constantly, spinning in opposite directions. This video also showcases the on/off switch in action.
This video showcases the use of the laptop controls for the motors while they are attached to the wheelchair. At this point, the motors can only be given manual commands: which direction to spin, what PWM signal to apply, and whether or not the brake should be engaged.
This next video showcases the next big milestone in the project: the motors can now be controlled via a USB keyboard.
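For reference, below is a minimal sketch of this kind of keyboard teleoperation. This is not our exact milestone code: the key map, and the reuse of the ±0.5 speed convention and cmd_vel topic from the nodes in the next section, are assumptions.

#!/usr/bin/env python3
# Minimal keyboard teleop sketch: reads w/a/s/d from the terminal and
# publishes Twist messages on cmd_vel (signs match the face tracking node,
# where left is angular.z = -0.5 and right is +0.5).
import sys
import termios
import tty
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

KEY_BINDINGS = {
    'w': (0.5, 0.0),   # forward
    's': (-0.5, 0.0),  # backward
    'a': (0.0, -0.5),  # left
    'd': (0.0, 0.5),   # right
}

def get_key(settings):
    # Read a single keypress without waiting for Enter
    tty.setraw(sys.stdin.fileno())
    key = sys.stdin.read(1)
    termios.tcsetattr(sys.stdin, termios.TCSADRAIN, settings)
    return key

def main(args=None):
    rclpy.init(args=args)
    node = Node('keyboard_teleop')
    pub = node.create_publisher(Twist, 'cmd_vel', 10)
    settings = termios.tcgetattr(sys.stdin)
    msg = Twist()
    try:
        while True:
            key = get_key(settings)
            if key == '\x03':  # Ctrl-C exits
                break
            # Unmapped keys publish zero velocity, i.e. stop
            linear, angular = KEY_BINDINGS.get(key, (0.0, 0.0))
            msg.linear.x = linear
            msg.angular.z = angular
            pub.publish(msg)
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()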
Programming
Face Tracking Node – Pi
This first ROS 2 node handles the facial tracking. Once it has determined which direction your face is tilted, it publishes a linear and angular velocity that the next node subscribes to.
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
import cv2
import mediapipe as mp
import numpy as np
import time

class FaceTrackingNode(Node):
    def __init__(self):
        super().__init__('face_tracking_node')
        self.publisher_ = self.create_publisher(Twist, 'cmd_vel', 10)
        self.cmd_msg = Twist()

    def camera_input(self):
        mp_face_mesh = mp.solutions.face_mesh
        face_mesh = mp_face_mesh.FaceMesh(min_detection_confidence=0.5, min_tracking_confidence=0.5)
        mp_drawing = mp.solutions.drawing_utils
        drawing_spec = mp_drawing.DrawingSpec(thickness=1, circle_radius=1)

        cap = cv2.VideoCapture(0)

        while cap.isOpened():
            success, image = cap.read()
            if not success:
                continue
            start = time.time()

            # Flip the image horizontally for a later selfie-view display
            # Also convert the color space from BGR to RGB
            image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB)

            # To improve performance
            image.flags.writeable = False

            # Get the result
            results = face_mesh.process(image)

            # To improve performance
            image.flags.writeable = True

            # Convert the color space from RGB to BGR
            image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

            img_h, img_w, img_c = image.shape
            face_3d = []
            face_2d = []

            if results.multi_face_landmarks:
                for face_landmarks in results.multi_face_landmarks:
                    for idx, lm in enumerate(face_landmarks.landmark):
                        if idx == 33 or idx == 263 or idx == 1 or idx == 61 or idx == 291 or idx == 199:
                            if idx == 1:
                                nose_2d = (lm.x * img_w, lm.y * img_h)
                                nose_3d = (lm.x * img_w, lm.y * img_h, lm.z * 3000)
                            x, y = int(lm.x * img_w), int(lm.y * img_h)

                            # Get the 2D coordinates
                            face_2d.append([x, y])

                            # Get the 3D coordinates
                            face_3d.append([x, y, lm.z])

                    # Convert them to NumPy arrays
                    face_2d = np.array(face_2d, dtype=np.float64)
                    face_3d = np.array(face_3d, dtype=np.float64)

                    # The camera matrix
                    focal_length = 1 * img_w
                    cam_matrix = np.array([[focal_length, 0, img_h / 2],
                                           [0, focal_length, img_w / 2],
                                           [0, 0, 1]])

                    # The distortion parameters
                    dist_matrix = np.zeros((4, 1), dtype=np.float64)

                    # Solve PnP
                    success, rot_vec, trans_vec = cv2.solvePnP(face_3d, face_2d, cam_matrix, dist_matrix)

                    # Get the rotational matrix
                    rmat, jac = cv2.Rodrigues(rot_vec)

                    # Get the angles
                    angles, mtxR, mtxQ, Qx, Qy, Qz = cv2.RQDecomp3x3(rmat)

                    # Convert to degrees of rotation
                    x = angles[0] * 360
                    y = angles[1] * 360
                    z = angles[2] * 360

                    # See where the user's head is tilting. The velocity
                    # components are reset every frame so the command drops
                    # back to zero when the head returns to center.
                    text = "Center"
                    UP_X = UP_Z = DOWN_X = DOWN_Z = 0.0
                    LEFT_X = LEFT_Z = RIGHT_X = RIGHT_Z = 0.0
                    if y < -10:
                        text = "Left"
                        LEFT_X = 0.0
                        LEFT_Z = -0.5
                    if y > 10:
                        text = "Right"
                        RIGHT_X = 0.0
                        RIGHT_Z = 0.5
                    if x < -8:
                        text = "Forward"
                        UP_X = 0.5
                        UP_Z = 0.0
                    if x > 15:
                        text = "Backward"
                        DOWN_X = -0.5
                        DOWN_Z = 0.0

                    self.cmd_msg.linear.x = UP_X + DOWN_X + LEFT_X + RIGHT_X
                    self.cmd_msg.angular.z = UP_Z + DOWN_Z + LEFT_Z + RIGHT_Z
                    self.publisher_.publish(self.cmd_msg)

                    # Display the nose direction
                    nose_3d_projection, jacobian = cv2.projectPoints(nose_3d, rot_vec, trans_vec, cam_matrix, dist_matrix)
                    p1 = (int(nose_2d[0]), int(nose_2d[1]))
                    p2 = (int(nose_2d[0] + y * 10), int(nose_2d[1] - x * 10))
                    cv2.line(image, p1, p2, (255, 0, 0), 3)

                    # Add the text on the image
                    cv2.putText(image, text, (20, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 2)
                    cv2.putText(image, "x: " + str(np.round(x, 2)), (500, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
                    cv2.putText(image, "y: " + str(np.round(y, 2)), (500, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
                    cv2.putText(image, "z: " + str(np.round(z, 2)), (500, 150), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

                end = time.time()
                totalTime = end - start
                # fps = 1 / totalTime
                # print("FPS: ", fps)
                # cv2.putText(image, f'FPS: {int(fps)}', (20, 450), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 255, 0), 2)

                mp_drawing.draw_landmarks(
                    image=image,
                    landmark_list=face_landmarks,
                    connections=mp_face_mesh.FACEMESH_CONTOURS,
                    landmark_drawing_spec=drawing_spec,
                    connection_drawing_spec=drawing_spec)

            cv2.imshow('Head Pose Estimation', image)
            if cv2.waitKey(5) & 0xFF == 27:
                break

        cap.release()

def main(args=None):
    rclpy.init(args=args)
    node = FaceTrackingNode()
    node.camera_input()  # runs until the Esc key is pressed
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
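If you want to run this node yourself, it needs OpenCV, MediaPipe, and the rclpy bindings on the Pi; the first two are the usual opencv-python and mediapipe packages on PyPI, and rclpy comes with the ROS 2 installation.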
The video below shows the facial tracking feature in action.
Arduino Control Node – Pi
This node receives the linear and angular velocity from the Face Tracking Node and converts them into target speeds for the motors. These speeds are formatted as a string, which is then sent to the Arduino via USB.
#!/usr/bin/env python3
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist
import serial

class ArduinoControlNode(Node):
    def __init__(self):
        super().__init__('arduino_control_node')
        self.serial_port = '/dev/ttyACM0'  # Change this to your Arduino's serial port
        self.serial_baudrate = 115200      # Change this to match the Arduino's baudrate
        self.serial_timeout = 1            # Adjust timeout as needed
        self.serial_connection = serial.Serial(self.serial_port, self.serial_baudrate, timeout=self.serial_timeout)
        self.subscription = self.create_subscription(Twist, 'cmd_vel', self.cmd_vel_callback, 10)
        self.last_linear_x = 0.0
        self.last_angular_z = 0.0

    def cmd_vel_callback(self, msg):
        # Process the twist command received from ROS
        linear_x = msg.linear.x    # Linear velocity along the x-axis
        angular_z = msg.angular.z  # Angular velocity about the z-axis

        # Determine target speeds based on the twist command
        if linear_x > 0:
            # Forward motion
            target_speed_1 = 0.5
            target_speed_2 = 0.5
        elif linear_x < 0:
            # Backward motion
            target_speed_1 = -0.5
            target_speed_2 = -0.5
        elif angular_z > 0:
            # Right turn
            target_speed_1 = 0.5
            target_speed_2 = -0.5
        elif angular_z < 0:
            # Left turn
            target_speed_1 = -0.5
            target_speed_2 = 0.5
        else:
            # Stop
            target_speed_1 = 0.0
            target_speed_2 = 0.0

        # Convert target speeds to a motor command string
        linear_change = linear_x - self.last_linear_x
        angular_change = angular_z - self.last_angular_z
        motor_command = f"{target_speed_1 + linear_change},{target_speed_2 + angular_change}\n"
        self.serial_connection.write(motor_command.encode())

        # Update the last velocities
        self.last_linear_x = linear_x
        self.last_angular_z = angular_z

def main(args=None):
    rclpy.init(args=args)
    arduino_control_node = ArduinoControlNode()
    rclpy.spin(arduino_control_node)
    arduino_control_node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
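Before wiring everything through ROS, the same serial protocol can be exercised directly from a laptop for bench testing. Here is a minimal sketch, assuming the Arduino enumerates as /dev/ttyACM0 and uses the 115200 baud rate from the node above:

#!/usr/bin/env python3
# Minimal bench test for the "target_speed_1,target_speed_2\n" protocol.
# The port name is an assumption; check with `ls /dev/ttyACM*` on Linux.
import time
import serial

with serial.Serial('/dev/ttyACM0', 115200, timeout=1) as arduino:
    time.sleep(2)                # give the Arduino time to reset after the port opens
    arduino.write(b"0.5,0.5\n")  # both motors forward at the target speed
    time.sleep(3)
    arduino.write(b"0.0,0.0\n")  # stop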
Arduino Code
The Arduino receives the target speed for each motor from the Raspberry Pi. The actual speed of each motor is then read, and both the actual and target speeds are fed through a PID controller to regulate the motors. This ensures that the motors maintain a constant speed even under heavy load.
#include <Arduino.h>

// Constants
const double WHEEL_CIRCUMFERENCE_M = 0.56515;  // Motor wheel circumference (meters)
const int MAX_PWM_VALUE_DEFAULT = 50;
const int MAX_PWM_VALUE_RED = 100;
const int MAX_PWM_VALUE_BOTH = 255;            // Maximum PWM value
const unsigned long SPEED_TIMEOUT = 500000;    // 0.5 s (in µs) with no pulses means the wheel has stopped

const byte Red_Switch = 30;
const byte Blue_Switch = 29;
const byte Pressure_Switch = 53;

// PID constants
double Kp = 3;    // Proportional gain
double Ki = 2;    // Integral gain
double Kd = 0.5;  // Derivative gain

// Pin Declarations
const int PIN_DIR1 = 45;    // Motor 1 direction signal
const int PIN_BRAKE1 = 47;  // Motor brake signal (active low)
const int PIN_PWM1 = 3;     // PWM motor 1 speed control
const int PIN_SPEED1 = 6;   // SC Speed Pulse Output from RioRand board

const int PIN_DIR2 = 49;    // Motor 2 direction signal
const int PIN_BRAKE2 = 51;  // Motor brake signal (active low)
const int PIN_PWM2 = 5;     // PWM motor 2 speed control
const int PIN_SPEED2 = 23;  // SC Speed Pulse Output from RioRand board

// Speed variables
double target_speed_1 = 0;  // Desired speed for motor 1
double target_speed_2 = 0;  // Desired speed for motor 2
double actual_speed_1 = 0;  // Measured speed for motor 1
double actual_speed_2 = 0;  // Measured speed for motor 2

unsigned long last_uS1 = 0;
unsigned long last_uS2 = 0;
double last_error1 = 0;
double last_error2 = 0;

void setup() {
  // Set pin directions for motor 1
  pinMode(PIN_SPEED1, INPUT);
  pinMode(PIN_PWM1, OUTPUT);
  pinMode(PIN_BRAKE1, OUTPUT);
  pinMode(PIN_DIR1, OUTPUT);

  // Set pin directions for motor 2
  pinMode(PIN_SPEED2, INPUT);
  pinMode(PIN_PWM2, OUTPUT);
  pinMode(PIN_BRAKE2, OUTPUT);
  pinMode(PIN_DIR2, OUTPUT);

  pinMode(Red_Switch, INPUT_PULLUP);
  pinMode(Blue_Switch, INPUT_PULLUP);
  pinMode(Pressure_Switch, INPUT_PULLUP);

  // Set initial pin states: brakes engaged, motors stopped
  digitalWrite(PIN_BRAKE1, HIGH);
  digitalWrite(PIN_DIR1, false);
  analogWrite(PIN_PWM1, 0);

  digitalWrite(PIN_BRAKE2, HIGH);
  digitalWrite(PIN_DIR2, true);
  analogWrite(PIN_PWM2, 0);

  // Initialize Serial communication
  Serial.begin(115200);
}

void loop() {
  // Read the control panel buttons and the seat pressure switch
  bool redSwitchPressed = digitalRead(Red_Switch) == LOW;
  bool blueSwitchPressed = digitalRead(Blue_Switch) == LOW;
  bool pressureSwitchOn = digitalRead(Pressure_Switch) == LOW;  // LOW means someone is sitting in the chair

  // The buttons select the maximum speed cap
  int MAX_PWM_VALUE;
  if (redSwitchPressed && blueSwitchPressed) {
    MAX_PWM_VALUE = MAX_PWM_VALUE_BOTH;
  } else if (redSwitchPressed) {
    MAX_PWM_VALUE = MAX_PWM_VALUE_RED;
  } else {
    MAX_PWM_VALUE = MAX_PWM_VALUE_DEFAULT;
  }

  // Read serial input only if the pressure switch is on
  if (pressureSwitchOn && Serial.available() > 0) {
    String command = Serial.readStringUntil('\n');

    // Parse command and set target speeds
    // Format of command: "target_speed_1,target_speed_2"
    target_speed_1 = command.substring(0, command.indexOf(',')).toFloat();
    target_speed_2 = command.substring(command.indexOf(',') + 1).toFloat();

    // Release brakes
    digitalWrite(PIN_BRAKE1, LOW);
    digitalWrite(PIN_BRAKE2, LOW);
  }

  // Read the speed from the input pin for motors 1 & 2
  ReadSpeed1();
  ReadSpeed2();
  // Serial.print("Actual speed 1: ");
  // Serial.println(actual_speed_1);
  // Serial.print("Actual speed 2: ");
  // Serial.println(actual_speed_2);

  // Compute PID control for motor 1
  double error1 = target_speed_1 * 10 - actual_speed_1;
  Serial.print("Error 1: ");
  Serial.println(error1);
  double output_1 = Kp * error1 + Ki * error1 * (micros() - last_uS1) / 1E6 + Kd * (error1 - last_error1) / ((micros() - last_uS1) / 1E6);
  last_error1 = error1;
  last_uS1 = micros();

  // Apply the control signal to motor 1, clamped so analogWrite
  // never receives more than MAX_PWM_VALUE
  int pwm_value_1 = constrain(map(abs(output_1), 0, 1, 0, MAX_PWM_VALUE), 0, MAX_PWM_VALUE);
  digitalWrite(PIN_DIR1, output_1 <= 0 ? LOW : HIGH);
  analogWrite(PIN_PWM1, pwm_value_1);

  // Compute PID control for motor 2
  double error2 = target_speed_2 * 10 - actual_speed_2;
  Serial.print("Error 2: ");
  Serial.println(error2);
  double output_2 = Kp * error2 + Ki * error2 * (micros() - last_uS2) / 1E6 + Kd * (error2 - last_error2) / ((micros() - last_uS2) / 1E6);
  last_error2 = error2;
  last_uS2 = micros();

  // Apply the control signal to motor 2
  int pwm_value_2 = constrain(map(abs(output_2), 0, 1, 0, MAX_PWM_VALUE), 0, MAX_PWM_VALUE);
  digitalWrite(PIN_DIR2, output_2 >= 0 ? LOW : HIGH);
  analogWrite(PIN_PWM2, pwm_value_2);

  //delay(500);
}

void ReadSpeed1() {
  static bool lastState1 = false;      // Saves the last state of the speed pin
  static unsigned long lastPulse_uS1;  // The time (µs) when the speed pin last changed
  static unsigned long timeout_uS1;    // Timer used to determine the wheel is not spinning

  // Read the current state of the input pin
  bool state1 = digitalRead(PIN_SPEED1);

  // Check if the pin has changed state
  if (state1 != lastState1)
  {
    // Calculate how long has passed since the last transition
    unsigned long current_uS1 = micros();
    unsigned long elapsed_uS1 = current_uS1 - lastPulse_uS1;

    // Calculate the frequency of the input signal
    double period1 = elapsed_uS1 * 2.0;
    double frequency1 = 1.0 / period1 * 1E6;

    // If the frequency is excessively high, then ignore it
    if (frequency1 > 3000) frequency1 = 0;

    // Calculate the actual speed (assigns the global so loop() can use it)
    actual_speed_1 = WHEEL_CIRCUMFERENCE_M * frequency1;
    if (actual_speed_1 > 50) actual_speed_1 = 0;

    // Save the last state and next timeout time
    lastPulse_uS1 = current_uS1;
    timeout_uS1 = lastPulse_uS1 + SPEED_TIMEOUT;
    lastState1 = state1;
  }
  else if (micros() > timeout_uS1)
  {
    actual_speed_1 = 0;
    lastPulse_uS1 = micros();
  }
}

void ReadSpeed2() {
  static bool lastState2 = false;      // Saves the last state of the speed pin
  static unsigned long lastPulse_uS2;  // The time (µs) when the speed pin last changed
  static unsigned long timeout_uS2;    // Timer used to determine the wheel is not spinning

  // Read the current state of the input pin
  bool state2 = digitalRead(PIN_SPEED2);

  // Check if the pin has changed state
  if (state2 != lastState2)
  {
    // Calculate how long has passed since the last transition
    unsigned long current_uS2 = micros();
    unsigned long elapsed_uS2 = current_uS2 - lastPulse_uS2;

    // Calculate the frequency of the input signal
    double period2 = elapsed_uS2 * 2.0;
    double frequency2 = 1.0 / period2 * 1E6;

    // If the frequency is excessively high, then ignore it
    if (frequency2 > 3000) frequency2 = 0;

    // Calculate the actual speed (assigns the global so loop() can use it)
    actual_speed_2 = WHEEL_CIRCUMFERENCE_M * frequency2;
    if (actual_speed_2 > 50) actual_speed_2 = 0;

    // Save the last state and next timeout time
    lastPulse_uS2 = current_uS2;
    timeout_uS2 = lastPulse_uS2 + SPEED_TIMEOUT;
    lastState2 = state2;
  }
  else if (micros() > timeout_uS2)
  {
    actual_speed_2 = 0;
    lastPulse_uS2 = micros();
  }
}
The video below shows all parts of the code working together. This was the final milestone of the project, and it was pretty much complete at this point.
Final Product
The video below was taken at our Senior Expo, where Salvador demonstrates the final product in action.
Conclusion
We achieved everything that we wanted with our project, but it could definitely use some improvements. One of our main concerns is implementing better safety features. Currently, the user cannot look away from the camera without the wheelchair moving in that direction. We hope to implement voice recognition so that the user can tell the wheelchair to turn the camera controls off, leaving them free to look around without moving. This could be done using ROS as well, but we did not have enough time to look further into it.
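As a purely hypothetical sketch of what that could look like (we never built this), a small node using the SpeechRecognition Python package could listen for "stop" and "go" and publish an enable flag that the face tracking node would check before publishing velocities. The topic name and key phrases here are made up for illustration.

#!/usr/bin/env python3
# Hypothetical voice-toggle node (not part of our project): publishes a Bool
# on a made-up 'camera_controls_enabled' topic that the face tracking node
# could check before publishing velocities.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Bool
import speech_recognition as sr

def main(args=None):
    rclpy.init(args=args)
    node = Node('voice_toggle_node')
    pub = node.create_publisher(Bool, 'camera_controls_enabled', 10)
    recognizer = sr.Recognizer()
    msg = Bool()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        while rclpy.ok():
            audio = recognizer.listen(source, phrase_time_limit=3)
            try:
                phrase = recognizer.recognize_google(audio).lower()
            except (sr.UnknownValueError, sr.RequestError):
                continue  # nothing intelligible was heard
            if "stop" in phrase:
                msg.data = False  # user wants to look around freely
            elif "go" in phrase:
                msg.data = True   # re-enable head tracking control
            else:
                continue
            pub.publish(msg)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()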