Weekly Update 10/22 – 10/28

What we did as a team

  • Attached ultrasonic sensors, camera, and gyroscope to the testing vehicle
  • Measured dimensions of the vehicle and discussed solutions for sensor attachment & wiring
  • Generated an obstacle outline with all 9 ultrasonic sensors
    • Discussed potential changes to the sensor positions

Hubert

  • Attached camera to the back of the vehicle and connected camera to RPi
  • Implemented rapid video streaming within a customized view in the Android app
  • Added a view container for the path prediction overlay on top of the video stream
  • Path prediction conceptual effect:

Zilei

  • Added car dimension constants to our library
  • Finished sensor initialization & indexing for RPISensorAdpator, which initializes all sensor coordinates & types for later use by the drawing tool
    • differentiating between front-left, front-right, left, right, and back ultrasonic sensors so that the angle of detection can be based on the type of the sensor placement
    • positions & types of the sensors relative to the car are initialized at compile time
  • Finished the RPi-to-Android numerical data parser, which completes the entire numerical data pipeline
    • splitting the transmitted list into individual readings
    • changing the RPi output format so that only a few digits are transmitted
  • Tested drawing with dynamic sensor readings
  • Worked on socket networking to transfer data; we have a working, dynamically updated obstacle outline as of Sunday
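A minimal sketch of the formatting and splitting logic described above (the function names and the one-decimal format here are illustrative, not our exact wire format):

```python
def format_readings(readings, digits=1):
    # Format raw sensor distances (cm) so that only a few digits
    # are transmitted over the socket; digits=1 is an illustrative choice.
    return ",".join(f"{r:.{digits}f}" for r in readings)

def parse_readings(payload):
    # Split the comma-separated payload back into floats
    # (the Android side does the equivalent in Java).
    return [float(tok) for tok in payload.split(",")]
```

For example, `format_readings([12.34, 19.99])` yields `"12.3,20.0"`, and `parse_readings` recovers `[12.3, 20.0]` on the other end.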


Yuqing

  • Experimentally determined the relationship between steering wheel angle and car wheel angle
    • Maximum car wheel turn angle is ~20 degrees (19.89)
    • When the car wheel is turned to its max angle, the roll calculated from the gyroscope reading is around ±30
    • Hence, we are going to approximate the car wheel angle by multiplying the gyro roll reading by 2/3
  • Modified plotting algorithm based on measurements of the testing vehicle to reflect more realistic ratio
  • Connected 4 ultrasonic sensors and gyroscope to the front Arduino
  • Connected ultrasonic sensors to the back Arduino
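The wheel-angle approximation above can be sketched in a few lines (the clamp to the measured maximum is our own addition, not part of the measurement):

```python
MAX_WHEEL_ANGLE = 19.89  # measured maximum car wheel turn angle, in degrees

def wheel_angle_from_roll(gyro_roll):
    # Gyro roll spans roughly +-30 degrees at full steering lock while the
    # wheels span roughly +-20 degrees, hence the 2/3 scaling factor.
    angle = gyro_roll * 2.0 / 3.0
    # Clamp to the physically measured maximum turn angle.
    return max(-MAX_WHEEL_ANGLE, min(MAX_WHEEL_ANGLE, angle))
```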


Weekly Update 10/15 – 10/21

What we did as a group

  • Placed order for the testing platform (a kids' ride-on car with a weight capacity of 130 lb)
  • Further discussed details on our baseline algorithm for generation of parking instructions
    • Based on the feedback given by Professor Low, we plan to first devise a parking algorithm primarily based on the step-by-step instructions given at https://www.wikihow.com/Parallel-Park, regardless of how the obstacles are positioned around the vehicle
    • Basic logic: install sensors at specific spots on the vehicle body; give the driver instructions on how to turn the steering wheel based on the readings from the 2 key ultrasonic sensors
    • Step 1: Pull the vehicle forward parallel to the parking spot until sensor A, installed roughly where the gas tank is, detects a sudden decrease in its distance reading => Instruct the driver to turn the steering wheel all the way to the right
    • Step 2: Let the vehicle reverse until sensor B (roughly where the right side mirror is) detects a minimum distance => Instruct the driver to turn the steering wheel all the way to the left
    • Of course more refinements (where to place the sensors, how much to turn the steering wheel at each step, whether more steps are needed) will be needed once we have the testing vehicle and can experiment with the mechanics of the ride-on car
    • We also need to take into consideration that our system should have a feature that lets the user know whether the empty space is even big enough for the car to back into
  • Github Repo https://github.com/zgu444/pakr
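The two-step baseline above can be sketched as a tiny state machine. The threshold constants below are placeholders, not measured values, and "sudden decrease" is simplified to an absolute threshold for illustration:

```python
# Hypothetical thresholds; real values will come from experiments
DROP_THRESHOLD_CM = 40   # "sudden decrease" at sensor A (near the gas tank)
MIN_DISTANCE_CM = 25     # minimum distance at sensor B (near the right mirror)

def next_instruction(step, sensor_a_cm, sensor_b_cm):
    # Baseline two-step parallel-parking logic.
    # Returns (new_step, instruction or None).
    if step == 1 and sensor_a_cm < DROP_THRESHOLD_CM:
        return 2, "Turn the steering wheel all the way to the right and reverse"
    if step == 2 and sensor_b_cm < MIN_DISTANCE_CM:
        return 3, "Turn the steering wheel all the way to the left"
    return step, None  # keep doing what you're doing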


Hubert

  • Finished coding UI for Android App
    • Successfully plotted dummy ultrasonic sensor data
  • Finished integrating all previous code snippets (plotting algorithm, socket client code) into Android App
  • Enabled streaming in custom Android App video view


Zilei

  • Modified the SensorAdaptor code to allow drawing of obstacle outlines based on real sensor readings
    • Communication of the sensor distances between the RPi & the Android app
    • changed individual sensor reads to a batch read of sensor data instead


Yuqing

Weekly Update 10/08 – 10/14

What we did as a team

  • Brainstormed on several questions raised by the TAs
    1. How does the system detect curbs?
      • We have come up with 2 potential solutions:
        • Using CV to detect curbs from the images obtained from the rear camera
        • Using ultrasonic sensors angled towards the ground to detect sudden jump in the readings
    2. How does the system determine the current angle of the steering wheel?
      • Attach a gyroscope to the steering wheel
    3. How do we quantitatively measure the effectiveness of our parking algorithm?
      • We plan to use 3 main criteria to measure the usefulness of our parking instruction
        1. Whether the instruction allows the driver to park in the allocated parking spot within 5 reverses (stretch goal being 3 reverses according to the PA driver’s exam manual)
        2. If the instruction is correct according to common practices. For instance, if the back of the right side of the car is too close to the curb, the algorithm should ask the driver to pull forward to the right or reverse to the left (determined by the available spaces around the car).
        3. After the car is parked according to the instructions, we measure the distance from each of the 4 sides of the car to the 4 sides of the allocated parking space, as well as the angle the side of the car forms with the curb. The more equal the front/back and left/right distances, and the closer the car is to parallel with the curb, the more effective our algorithm is.
  • Worked on design presentation and design review report
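Criterion 3 could be turned into a simple numeric score along these lines (the equal weighting of distance imbalance and angle is a placeholder, not a decided metric):

```python
def parking_score(front_cm, back_cm, left_cm, right_cm, angle_deg):
    # 0 is a perfect park: equal front/back and left/right clearances
    # and the car exactly parallel to the curb. Lower is better.
    imbalance = abs(front_cm - back_cm) + abs(left_cm - right_cm)
    return imbalance + abs(angle_deg)
```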


Hubert

  • Configured Wi-Fi and Ethernet-based MJPEG (a stream of individually compressed JPEG frames, with no inter-frame compression) video streaming and demonstrated the infeasibility of streaming raw video.
  • Configured Wi-Fi streaming of a compressed video stream.
    • Installed and debugged GStreamer 1.0-based H.264 camera video processing pipelines.
    • Installed an RTSP-based video streaming server (gst-rtsp-server) and managed to stream compressed H.264 video with a latency of less than 1 second (to an RTSP Android player app).
  • Started integrating the functions we previously drafted (graph-plotting function, client-side socket functions, and video streaming) into the Parkr Android App
    • Adapted the previous vanilla Java JApplet-based plotting library implementation to Android 2D graphics
    • Embedded the video stream in the Android app front page
    • Integrated the previous socket-based server-client code into the Android environment


Zilei

  • Worked extensively on refining the design review document
  • Pieced all the conclusions we have arrived at during the discussion into concrete diagrams and design documentation
  • Refined the path prediction algorithm by dividing it into smaller and more specific submodules, and produced a corresponding diagram
  • Refined and updated the instruction generation flow based on the results of our discussion and produced a detailed flowchart

  • Generated communication diagram to better organize the structure of the program which allows us to better explain and track the communication between different modules of the system

Yuqing

  • Connected 4 ultrasonic sensors to Arduino
  • Used Arduino’s NewPing module to implement distance detection for sensors
  • Used NewPing’s median function to obtain the median of every 5 readings from each sensor (this gave us much more stable and accurate readings than reading the sensors directly from the RPi)
  • Experimented with different placements of the ultrasonic sensors and noticed that the interference between sensors is not as great as it was with the Raspberry Pi
  • Set up connection between Arduino and RPi so that Arduino can send all readings to RPi as a comma separated string at fixed intervals
  • Updated server side socket python script based on the changes made
  • Assisted in refining the design review document
  • Updated blog post
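The RPi side of the Arduino link above can be sketched as a small parser for one comma-separated line (mapping NewPing's 0-on-no-echo to "no obstacle" is an assumption about our protocol, and the sample line is made up):

```python
def parse_arduino_line(line):
    # Parse one line from the Arduino, e.g. "12,34,0,56\n", into a list
    # of distances in cm. NewPing reports 0 when no echo is received
    # within the max distance; we map that to None (no obstacle in range).
    values = []
    for tok in line.strip().split(","):
        cm = int(tok)
        values.append(None if cm == 0 else cm)
    return values
```

In the real setup this line would come from the serial port at each fixed interval rather than from a string literal.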

Weekly Update 10/01 – 10/07

What we did this week as a group:

  • Worked on documenting our design choices to prepare for design review
    • Choice of processor might be changed due to limited video compression power
    • Arduino might be added to mediate the distance sensor drivers


Hubert

  • Worked on different compression formats for video streaming and recorded the latency
    • there’s a tradeoff between bandwidth requirements and compression (processing) power requirements


Zilei & Yuqing

  • Worked on a prototype for communicating distance information between the Raspberry Pi & the Android app through sockets
    • Python code on the Raspberry Pi
    • Java socket code so it’s compatible when moved to the phone
    • considering the JSON format so the data structure can be understood by both ends
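A minimal sketch of the Raspberry Pi side of this prototype with the JSON idea (the message shape and the newline delimiter are assumptions for illustration, not a settled protocol):

```python
import json
import socket

def send_distances(sock, distances):
    # Serialize sensor distances as JSON, which both the Python server
    # and the Java client can parse, followed by a newline delimiter so
    # the Android side can read the stream line by line.
    payload = json.dumps({"distances": distances}) + "\n"
    sock.sendall(payload.encode("utf-8"))
```

On the phone, the Java client would read one line per update and decode it with any standard JSON library.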