Weekly Update 9/24 – 9/30

What we did this week as a group:

  • Successfully connected multiple ultrasonic sensors to the Raspberry Pi and got distance data transferring from all of them

  • Tested the accuracy of the ultrasonic sensors' distance data, along with their valid distance and angle ranges. What we learned:
    1. Readings fluctuate a lot right after startup and then slowly stabilize
    2. Each sensor has a limited range of roughly 2 meters and about 15 degrees
    3. Sensors occasionally glitch, producing readings that are way off; a filter such as the median of 5 consecutive readings will be needed (see the sketch after this list)
    4. Interference between sensors can make both extremely inaccurate, so sensors need to be far enough from each other to produce accurate data
    5. Interference between sensors gets extremely high when there are no obstacles within 1 m of the sensors
  • Successfully connected the camera to the Raspberry Pi and enabled night vision

  • Devised a baseline algorithm for plotting the surrounding environment from distance data

  • Tested camera latency
    • Day: max ~2 s
    • Night: max ~3 s
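
As noted in finding 3 above, a simple median filter should suppress the occasional glitched reading. Here is a minimal sketch of the idea in Python, using the gpiozero module we already use to read the sensors (pin numbers are placeholders):

    from statistics import median
    from gpiozero import DistanceSensor

    # Placeholder wiring: use whichever GPIO pins the sensor is connected to.
    sensor = DistanceSensor(echo=24, trigger=23, max_distance=2)

    def filtered_distance(sensor, samples=5):
        # Take several raw readings and return the median, so a single
        # wildly-off reading cannot dominate the result.
        return median(sensor.distance for _ in range(samples))

The median is preferable to the mean here because one far-off glitch shifts a mean noticeably but leaves the median untouched.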

Hubert

  • Set up the connection between the camera and the Raspberry Pi
  • Installed the Raspberry Pi camera module driver
  • Installed the uv4l video processing & streaming library
  • Tested live-streaming of camera video and tried multiple streaming settings.
    • H264 streaming results in long latency; because decoding depends on key frames and bandwidth is limited, performance is poor (stutter & huge lag)
    • MJPEG streaming does not depend on key frames, so the bandwidth bottleneck does not stutter the stream as badly
      • Achieved a framerate of about 10 fps, with max latency of ~2 s in daylight and ~3 s with night vision
    • H264 can be streamed in VLC player
    • MJPEG can be streamed in a browser & VLC player
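
For quick latency checks on a laptop, the MJPEG stream can also be opened programmatically. Below is a minimal Python sketch using OpenCV; the URL is an assumption (uv4l's HTTP server typically listens on port 8080, but the exact host, port, and path depend on the server configuration):

    import cv2  # opencv-python

    # Assumed stream address; adjust to match the uv4l server settings.
    STREAM_URL = "http://raspberrypi.local:8080/stream/video.mjpeg"

    cap = cv2.VideoCapture(STREAM_URL)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("Pakr camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()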

Zilei

  • Connected ultrasonic sensors to Raspberry Pi
  • Tested the range and accuracy of ultrasonic sensors
  • Drafted the baseline algorithm for outlining the surrounding environment based on sensor data (illustrated below)
  • Refined the Java functions drafted by Yuqing
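
The actual plotting functions are written in Java (see Yuqing's section below); purely to illustrate the baseline idea, here is a Python sketch with a hypothetical sensor layout: each sensor sits at a known position on the vehicle facing a known direction, so each distance reading maps to one (x, y) obstacle point in the vehicle's frame.

    import math

    # Hypothetical layout: (x, y) mounting position in meters and the
    # direction each sensor faces in degrees (0 = straight ahead).
    SENSORS = [
        {"pos": (0.8, 0.0),   "angle": 0},    # front center
        {"pos": (-0.8, 0.3),  "angle": 180},  # rear left
        {"pos": (-0.8, -0.3), "angle": 180},  # rear right
    ]

    def obstacle_points(distances):
        # Convert one distance reading per sensor into (x, y) points
        # outlining nearby obstacles.
        points = []
        for sensor, d in zip(SENSORS, distances):
            sx, sy = sensor["pos"]
            a = math.radians(sensor["angle"])
            points.append((sx + d * math.cos(a), sy + d * math.sin(a)))
        return points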

Yuqing

  • Connected ultrasonic sensors to Raspberry Pi
  • Drafted a Python script that uses the gpiozero module to read data from the ultrasonic sensors (see the sketch after this list)
  • Drafted Java functions that will be used to plot the position of the vehicle and the outline of obstacles around it
  • Helped devise the baseline algorithm for plotting the outline
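
A minimal sketch of what the gpiozero-based script looks like, with placeholder pin assignments for three sensors:

    from time import sleep
    from gpiozero import DistanceSensor

    # Placeholder pins; one (echo, trigger) pair per sensor.
    sensors = [
        DistanceSensor(echo=24, trigger=23, max_distance=2),
        DistanceSensor(echo=8, trigger=7, max_distance=2),
        DistanceSensor(echo=5, trigger=6, max_distance=2),
    ]

    while True:
        # Print the latest distance from each sensor, in meters.
        for i, s in enumerate(sensors):
            print(f"sensor {i}: {s.distance:.2f} m")
        sleep(0.2)

Note that max_distance=2 matches the ~2 m usable range we measured; gpiozero caps readings at that value.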

Weekly Update 9/17 – 9/23

What we did this week as a group:

  • Placed orders for Raspberry Pi, ultrasonic sensors, camera, and wires
  • Set up project website
  • Started building ParkSmart Android App
  • Brainstormed on how to integrate the data from ultrasonic sensors and camera
    • Sketch
    • As shown in the sketch, we hope to draw an outline of the obstacles around the vehicle based on the ultrasonic sensor data, and then display that outline in the app so as to give the driver a better visual understanding of the surroundings
    • We will then pass the data to our algorithm to provide instructions based on the spatial outline (a hypothetical sketch of this data flow follows the list)
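
We have not settled on the sensor-to-app interface yet, so purely as an illustration of the data flow above, here is one hypothetical shape the outline payload sent to the app could take (field names and units are assumptions, not a final design):

    import json

    def outline_message(points):
        # Hypothetical packaging of (x, y) obstacle points, in meters,
        # for the Android app to draw.
        return json.dumps({
            "type": "outline",
            "points": [{"x": round(x, 2), "y": round(y, 2)} for x, y in points],
        })

    print(outline_message([(1.2, 0.0), (1.1, 0.4)]))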

Hubert Yu

  • Set up the Android development environment
  • Uploaded Android demo projects and introduced basic Android development
  • Explored Android & RPi video streaming options
    • H.264 https://raspberrypi.stackexchange.com/questions/7446/how-can-i-stream-h-264-video-from-the-raspberry-pi-camera-module-via-a-web-serve
    • MJPEG https://blog.miguelgrinberg.com/post/stream-video-from-the-raspberry-pi-camera-to-web-browsers-even-on-ios-and-android
    • Mp4 https://code.tutsplus.com/tutorials/streaming-video-in-android-apps--cms-19888
    • High level idea:
      • Camera stream -> MJPEG -> android app
      • Camera stream -> H.264 -> android app
      • Camera stream -> MP4 (compressed) -> android app
    • https://github.com/mbebenita/Broadway/issues/59
      • H264 streaming player and open-source app (looks very useful)

Zilei Gu

  • Created the initial repository for the Android app
  • Set up the Android development environment & explored Android development panels
  • Android UI crash course
  • More readings on parallel parking (double touch: https://crucialminutiae.wordpress.com/2007/08/13/the-perfect-parallel-park/)
  • Readings on RPi wireless access point (https://github.com/SurferTim/documentation/blob/6bc583965254fa292a470990c40b145f553f6b34/configuration/wireless/access-point.md and https://learn.adafruit.com/setting-up-a-raspberry-pi-as-a-wifi-access-point/overview)

Yuqing Ma

What is Pakr?

Welcome to Carnegie Mellon University: ECE Capstone Projects – ParkSmart.

Pakr is an ECE capstone project done by a group of three: Zilei Gu, Hubert Yu, and Yuqing Ma.

Many new models of automobiles offer amazing parking assistance systems: distance detectors, back-up cameras, etc. While these features can greatly improve your driving and parking experience, many older vehicles don't have a console system compatible with such upgrades. Installing a back-up camera and radar system often means replacing the entire console and incurring a much higher cost. In addition, even though parking assistance systems exist on the market, they lack clear instructions on how to maneuver the vehicle based on the spatial feedback. That's why we're building an integrated parking assistant, Pakr, that provides useful, executable information to drivers through an Android app.

Our system will detect distance through ultrasonic sensors and provide an image of the parking spot to the driver through an Android app. The distance detection system alerts the driver when the vehicle gets too close to surrounding objects. The image in the app gives the driver visual information on the vehicle's position relative to those objects. In addition, Pakr provides real-time audio instructions on how to maneuver the steering wheel.

Want to know more about Pakr? Subscribe to our blog and read our weekly updates!