Navigation Aid for the Visually Impaired


Opsis reduces the difficulty blind people face when moving around by helping them map their surroundings through haptic and audio feedback.

Opsis lets users know where surrounding objects are, how close they are, and what type of room they are in, all through simple voice commands.


Why should blind people be at risk of injury when performing a task as simple as going to the bathroom or finding a chair to sit in?

With Opsis, blind people can easily detect and maneuver around obstacles and more accurately identify the type of place they are in.

Use Cases

1. Object/Obstacle Detection for blind people

2. Location Detection for blind people

3. Navigation in environments with no lighting

Competitive Analysis



Perception uses a depth sensor with haptic feedback to solve obstacle detection.

Opsis provides a better user experience by eliminating unnecessary movements during obstacle detection.


Ultra Cane uses ultrasonic sensors to detect the area around the tip of the cane.

Opsis offers much better precision and a better user experience because it uses many more vibration motors and a larger field of view.

Tech Specs / Requirements

Functional requirements

  • Detects all obstacles that are within 3.5 meters (FOV: 90° horizontal and 70° vertical)

  • Depending on the distance between obstacles and the user, each of the 10 vibration motors independently provides a different vibration strength

  • Allows optional location/room detection and notification via sound

  • The whole system is controlled by three simple voice commands:

    • Start: starts the program
    • Stop: stops the program
    • Location: performs location detection
  • Response time is less than 250 ms

  • The entire system lasts up to 2.2 hours
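The distance-dependent vibration behavior in the requirements above can be sketched as a simple mapping function. The linear ramp and the 8-bit (0-255) PWM duty range are our assumptions; only the 3.5 m detection range comes from the requirements.

```python
# Sketch of a distance-to-vibration mapping for one motor.
# Obstacles at or beyond the 3.5 m detection range produce no vibration;
# closer obstacles produce a proportionally stronger PWM duty cycle.

MAX_RANGE_M = 3.5   # detection range from the functional requirements
MAX_DUTY = 255      # assumed 8-bit PWM resolution on the MCU

def vibration_duty(distance_m: float) -> int:
    """Return a PWM duty (0-255) for one motor given obstacle distance in meters."""
    if distance_m >= MAX_RANGE_M:
        return 0                      # out of range: motor off
    distance_m = max(distance_m, 0.0)
    # Linear ramp: 0 m -> full strength, 3.5 m -> off.
    return round(MAX_DUTY * (1.0 - distance_m / MAX_RANGE_M))
```

A nonlinear (e.g. inverse-square) ramp could also be used to emphasize very close obstacles; the linear version is the simplest baseline.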

Non-functional requirements

  • Durable

  • Lightweight

  • Easy to wear & take off

Interaction Diagram

This diagram shows how our components interact with each other.

Once the user starts the program by speaking “start” into the mic, the Intel Compute Stick receives depth data from the depth sensor and generates vibration commands. These commands are sent to the MCU, and the corresponding vibration motors on the waistband vibrate.
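The depth-to-vibration step above can be sketched as follows. We assume the horizontal field of view is split into 10 vertical strips, one per waistband motor, and that each motor is driven by the nearest obstacle in its strip; the strip layout and the 0-255 duty scale are our assumptions, not part of the firmware protocol.

```python
# Sketch: map one row of per-pixel depth readings (meters) from the depth
# sensor to per-motor vibration duties (0-255), one duty per waistband motor.

MAX_RANGE_M = 3.5   # detection range from the functional requirements
NUM_MOTORS = 10     # number of vibration motors on the waistband

def motor_commands(depth_row_m, num_motors=NUM_MOTORS):
    """Return a list of num_motors PWM duties for one row of depth pixels."""
    strip = len(depth_row_m) // num_motors
    duties = []
    for i in range(num_motors):
        # Nearest obstacle within this motor's slice of the field of view.
        nearest = min(depth_row_m[i * strip:(i + 1) * strip])
        if nearest >= MAX_RANGE_M:
            duties.append(0)                       # nothing in range
        else:
            duties.append(round(255 * (1.0 - nearest / MAX_RANGE_M)))
    return duties
```

In the real system this list would be packed into a BLE packet and sent to the Nordic MCU, which sets the corresponding PWM channels.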

To perform location detection, the user says “location”. This triggers the Intel Compute Stick to capture RGB image data from the depth sensor and send it to Google’s Cloud Vision API. Google Cloud Vision analyzes the objects in the image and generates a location label, such as “kitchen” or “bathroom”. This location label is then spoken to the user through the speaker.

To shut the device down, the user can say “stop”.

Opsis Front View

Opsis Back View


1. Architecture Diagram

This diagram shows the two primary processing blocks. The left block receives commands over BLE to power our haptic feedback system. The right block processes information from the depth sensor and sends the relevant information to the speaker via an aux cable.

2. Bluetooth MCU

This schematic shows how the MCU interfaces with the components it needs to operate. It also shows the connection between the antenna and the MCU.

3. Power Regulation & NFETs

This schematic shows all the components needed for power regulation on the board and PWM control of the vibrators.




Intel RealSense R200 Depth Camera

Vendor: Newegg

Price: $99

Product link

Library for capturing data from the Intel RealSense camera (C/C++)

Intel Realsense API

Intel Compute Stick

Vendor: Amazon

Price: $123

Processor: 1.44 GHz Intel Atom

Memory: 2GB RAM

Wi-Fi: 802.11ac

Bluetooth 4.0

Product link

Haptic Motors

Vendor: Adafruit

Price: $1.95

Product link

Speaker with Mic

Price: varies

Nordic nRF51822 BLE MCU


Google Cloud Vision

Image analytics: classifies images and detects their contents

Google Cloud Vision API (REST API)


Wi-Fi: IEEE 802.11ac

Bluetooth 4.0

USB 3.0

Team Members

Jaeho Bang

Jonathan Appiagyei

Paul Pan

Ryan Park