Weekly Update 11/26 – 12/2

What we did as a team

  • Integrated parking algorithm and instruction generation into the Android App
  • Conducted various tests
    • distance tests for the ultrasonic sensors
    • predicted path tests
    • user testing of the parking algorithm (with Yuqing driving)
    • setup for testing sensor reading accuracy

Hubert

  • Rewired the circuits for a cleaner presentation of the vehicle
  • Reconfigured the Arduino code to auto-restart after a time-out

Zilei

  • Finished working on the parking algorithm
    • adjusting the finishing conditions
    • adjusting the verbosity of the instructions during the first phase
    • adjusting the range for an appropriate distance to the right during the first phase
    • adjusting the detection mechanism for switching from state 2 to state 3
      • previously this relied on a single reading point; due to unstable values there, we changed it to use 2 readings (see the sketch below)
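A minimal sketch of the idea, assuming the transition is triggered by a distance falling below a threshold (the names and the threshold value are illustrative, not the app's actual code):

    # Require two consecutive readings past the threshold before switching
    # from state 2 to state 3, so a single glitched value cannot trigger it.
    THRESHOLD_CM = 50  # assumed trigger distance

    def should_switch(prev_cm, curr_cm):
        return prev_cm < THRESHOLD_CM and curr_cm < THRESHOLD_CM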

Yuqing

  • Finished working on audio instructions for parking algorithm
  • Designated driver

 

Weekly Update 11/19 – 11/25

What we did as a group:

  • We all had a reduced workload because of Thanksgiving. Work was mostly done remotely. We continued working on the algorithm and the audio recording. We also did feasibility research for a new feature we were considering.

Hubert:

  • Feasibility research on an animation for instructing drivers to get closer to or further from the right
    • This applies during the stage where we’re trying to reach the position parallel with the reference car in front of the parking spot.

Zilei:

  • Progress on the algorithm
    • Still using a text overlay for now, pending integration with the audio in the upcoming week
    • Redesigned the different sub-states within a state
      • the code is currently messy, but it covers the different scenarios fairly comprehensively

Yuqing:

  • Recording of the audio files
  • Playing audio files from the android app

 

Weekly Update 11/12 – 11/18

What we worked on as a group

  • Code implementation for parking algorithm
  • Preliminary testing for state detection accuracy

  • Preliminary draft for instruction generation
    • Verbal instruction
    • Plot of desired position on our obstacle outline

 

Hubert

  • Continued work on headless start
    • the emulator doesn’t have cellular and cannot create a mobile hotspot
  • Plotted the desired position for the initial parallel position

  • Added code infrastructure for debugging and plotting destination car positions

Zilei

  • Set up the code structure for the parking algorithm
  • State detection & handling of different cases (see the sketch after this list)
    • Required environment: existence of a reference vehicle
    • Displays debug statements to a text field
    • Idle state
      • idle, waiting for the button press to start
      • could potentially add beeping when the vehicle gets too close to its surroundings
    • Search state
      • the most complicated of all
      • will rely on readings from
        • a designated parking sensor on the rear right end of the vehicle, to determine whether the car is parallel with the reference car with respect to the y-axis
        • all right sensors & the parking sensor, to determine whether the distance is too close, in range, or too far, or whether the reference car is completely out of sight
          • also the angle with respect to the reference car
    • Reverse right state
    • Reverse left state
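The states above can be sketched as a small state machine. This is only an illustration in Python (the actual implementation is in the Android app), and the helper logic, sensor names, and threshold are assumptions:

    from enum import Enum, auto

    class ParkingState(Enum):
        IDLE = auto()           # waiting for the start button
        SEARCH = auto()         # getting parallel with the reference car
        REVERSE_RIGHT = auto()  # wheel fully right, backing into the spot
        REVERSE_LEFT = auto()   # wheel fully left, straightening out

    def is_parallel(readings):
        # Stub: the real check compares the rear-right parking sensor
        # against the right-side sensors.
        return readings.get("rear_right_cm", 999) < 60

    def step(state, readings, start_pressed=False):
        """Advance the state machine one tick on fresh sensor readings."""
        if state is ParkingState.IDLE and start_pressed:
            return ParkingState.SEARCH
        if state is ParkingState.SEARCH and is_parallel(readings):
            return ParkingState.REVERSE_RIGHT
        # ...the reverse-right -> reverse-left transition would go here
        return state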

Yuqing

  • Restart & end-algorithm buttons
    • change the button text
    • communicate with the algorithm class

  • Researched packages available for playing verbal instructions
  • Integrated one sample sound into the App

Weekly Update 11/04 – 11/11

What we did as a team:

  • Integrated all parts of the system and demoed the first version of the app (without parking instructions) on an Android device
  • Experimentally determined the exact reference points for generating parallel parking instructions
    • We marked out a parking spot in an empty space
    • Yuqing drove the testing vehicle while Zilei and Hubert experimented with giving different instructions at different reference points
    • We iterated the process with different reference points and different instructions, and decided on the first version of our reference points
    • Testing setup
    • What we found out:
      • The parking spot needs to be at least 3 m × 1.2 m
      • The exact point at which the driver needs to turn the steering wheel all the way to the right
      • The exact point at which the driver needs to turn the steering wheel all the way to the left
      • This systematic approach ensures mostly successful parallel parking into the allocated spot

Hubert

  • Updated the ultrasonic sensor driver to minimize the delay of outline plotting
  • Refined the gyroscope driver to handle errors that occur during accidental disconnection

Zilei

  • Drafted a flow chart for the parking instruction generation algorithm

  • Helped experimentally determine the reference points for the testing vehicle in real life
    • Removed the “straight reverse” stage
    • Determined the approximate size of the spot we need
    • Determined when to steer left from full right

Yuqing

  • Refined the path plotting algorithm to project the car path more realistically by measuring the curvature offset of the car path in the video image
  • Fixed multiple bugs in the current plotting algorithm

Weekly Update 10/29 – 11/03

What we did as a group

  • Reconnected jumper wires
  • Changed our network protocol so that no socket is reused and the RPi can handle the video streaming between socket reads/writes. This solved our previous problem of not being able to stream & draw at the same time
  • Changed the ultrasonic sensor polling rate so that (1) no outdated data is used to draw the contour, and (2) the delay for each reading is lower. We improved our reading delay significantly
  • Made progress on the path prediction shown on the outline graph and the path prediction overlay on the streamed video. Testing still needs to be done next week to fine-tune these
  • Prepared for the midpoint demo: updated our schedule & distributed work for the next 5 weeks

Hubert

  • Rearranged the socket code so that the RPi can execute stream-related code when the outline drawing isn’t requesting data
  • Worked on the Arduino code for the ultrasonic sensor sampling algorithm (see the sketch after this list)
    • Changed continuous writes to pull-based writes
    • Changed the median method to sample fewer values, lowering delay without a noticeable decrease in accuracy
    • Worked around a serial race condition to decrease delay
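For illustration, the RPi side of the pull-based protocol might look like the sketch below, assuming the Arduino answers a poll byte with one comma-separated line of readings (the port name, baud rate, and poll byte are assumptions):

    import serial  # pyserial

    ser = serial.Serial("/dev/ttyACM0", 115200, timeout=0.1)

    def poll_readings():
        ser.reset_input_buffer()   # drop any stale line left in the buffer
        ser.write(b"R")            # ask the Arduino for one fresh sample
        line = ser.readline().decode(errors="ignore").strip()
        return [float(x) for x in line.split(",")] if line else None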

Yuqing

  • Debugged the socket connection & analyzed the problem
  • Worked on the front end and aesthetics
    • Buttons
    • Shaded area
  • Worked on the path prediction overlay & finished the path prediction graph on the outline

Zilei

  • Debugged the socket connection & analyzed the problem
  • Debugged the Arduino readings after the change from continuous to pull-based writes
  • Prepared for the demo & revised the Gantt chart

Weekly Update 10/22 – 10/28

What we did as a team

  • Attached ultrasonic sensors, camera, and gyroscope to the testing vehicle
  • Measured dimensions of the vehicle and discussed solutions for sensor attachment & wiring
  • Drew the obstacle outline with all 9 ultrasonic sensors
    • Discussed potential changes to the sensor positions

Hubert

  • Attached the camera to the back of the vehicle and connected it to the RPi
  • Realized rapid video streaming within a customized view in the Android app
  • Added a view container for the path prediction overlay on the streamed video
  • Path prediction conceptual effect (image)

Zilei

  • Added car dimension constants to our library
  • Finished sensor initialization & indexing for RPISensorAdaptor, which initializes all sensor coordinates & types for later use by the drawing tool
    • differentiates between front-left, front-right, left, right, and back ultrasonic sensors so that the angle of detection can be based on the type of the sensor placement
    • positions & types of the sensors relative to the car are initialized at compile time
  • Finished the RPi-to-Android numerical data parser; this completes the entire numerical data pipeline (see the sketch after this list)
    • splitting a list
    • formatting the RPi output so that only a few digits are transmitted
  • Tested drawing with dynamic sensor readings
  • Worked on socket networking to transfer data; we have a working, dynamically updated obstacle outline as of Sunday
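A sketch of the idea behind the format, assuming readings in centimeters (the parser on the Android side is Java; both directions are mirrored here in Python for brevity):

    def encode(readings):
        # one decimal place keeps the payload short ("only a few digits")
        return ",".join(f"{r:.1f}" for r in readings)

    def decode(payload):
        # splitting a list: comma-separated values back into floats
        return [float(x) for x in payload.split(",")]

    assert decode(encode([123.456, 87.3])) == [123.5, 87.3]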

 

Yuqing

  • Experimentally determined the relationship between the steering wheel angle and the car wheel angle (see the sketch after this list)
    • The maximum car wheel turn angle is ~20 degrees (19.89)
    • When the car wheel is turned to its max angle, the roll calculated from the gyroscope reading is around ±30 degrees
    • Hence, we are going to approximate the car wheel angle by multiplying the gyro roll reading by 2/3
  • Modified the plotting algorithm based on measurements of the testing vehicle to reflect a more realistic ratio
  • Connected 4 ultrasonic sensors and the gyroscope to the front Arduino
  • Connected ultrasonic sensors to the back Arduino
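The roll-to-wheel-angle approximation from the measurements above, as a minimal sketch (clamping to the measured maximum is our assumption):

    MAX_WHEEL_ANGLE_DEG = 19.89  # measured maximum car wheel turn angle

    def wheel_angle_from_roll(roll_deg):
        # ~30 degrees of gyro roll maps to ~20 degrees of wheel angle,
        # hence the 2/3 scaling; clamp to the physical limit.
        angle = roll_deg * 2.0 / 3.0
        return max(-MAX_WHEEL_ANGLE_DEG, min(MAX_WHEEL_ANGLE_DEG, angle))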

 

Weekly Update 10/15 – 10/21

What we did as a group

  • Placed an order for the testing platform (a kids’ ride-on car with a weight capacity of 130 lb)
  • Further discussed details of our baseline algorithm for generating parking instructions (see the sketch after this list)
    • Based on the feedback given by Professor Low, we plan to first devise a parking algorithm primarily based on the step-by-step instructions given here: https://www.wikihow.com/Parallel-Park, regardless of how the obstacles are positioned around the vehicle
    • Basic logic: install sensors at specific spots on the vehicle body; give the driver instructions on how to turn the steering wheel based on the readings from the 2 key ultrasonic sensors
    • Step 1: Pull the vehicle forward parallel to the parking spot until sensor A, installed roughly where the gas tank is, detects a sudden decrease in distance => instruct the driver to turn the steering wheel all the way to the right
    • Step 2: Let the vehicle reverse until sensor B, installed roughly where the right side mirror is, detects a minimum distance => instruct the driver to turn the steering wheel all the way to the left
    • Of course, more refinements (where to place the sensors, how much to turn the steering wheel at each step, whether more steps are needed) will be made once we have the testing vehicle and can experiment with the mechanics of the ride-on car
    • We also need to take into consideration that our system should have a feature that lets the user know whether the empty space is even big enough for the car to back into
  • Github Repo: https://github.com/zgu444/pakr
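The Step 1 trigger can be sketched as a simple edge detector on sensor A's readings; the drop threshold is a placeholder to be tuned on the testing vehicle:

    DROP_CM = 30  # assumed minimum drop that counts as "sudden"

    def reached_turn_point(prev_cm, curr_cm):
        # True when the distance falls sharply, i.e. sensor A has just
        # come alongside the reference car in front of the spot.
        return prev_cm - curr_cm > DROP_CM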

 

Hubert

  • Finished coding the UI for the Android App
    • Successfully plotted dummy ultrasonic sensor data
  • Finished integrating all previous code snippets (plotting algorithm, socket client code) into the Android App
  • Enabled streaming in the custom Android App video view

 

Zilei

  • Modified the SensorAdaptor code to allow drawing obstacle outlines based on real sensor readings
    • Communication of sensor distances between the RPi & the Android app
    • changed individual sensor reads to batch reads of sensor data

 

Yuqing

Weekly Update 10/08 – 10/14

What we did as a team

  • Brainstormed on several questions raised by the TAs
    1. How does the system detect curbs?
      • We have come up with 2 potential solutions:
        • Using CV to detect curbs from the images obtained from the rear camera
        • Using ultrasonic sensors angled towards the ground to detect a sudden jump in the readings
    2. How does the system determine the current angle of the steering wheel?
      • Attach a gyroscope to the steering wheel
    3. How do we quantitatively measure the effectiveness of our parking algorithm?
      • We plan to use 3 main criteria to measure the usefulness of our parking instructions (see the sketch after this list)
        1. Whether the instructions allow the driver to park in the allocated parking spot within 5 reverses (the stretch goal being 3 reverses, per the PA driver’s exam manual)
        2. Whether the instructions are correct according to common practice. For instance, if the back right of the car is too close to the curb, the algorithm should ask the driver to pull forward to the right or reverse to the left (determined by the available space around the car).
        3. After the car is parked according to the instructions, we measure the distances from the 4 sides of the car to the 4 sides of the allocated parking space and the angle the side of the car forms with the curb. The more equal the front/back and left/right distances, and the closer the car is to parallel with the curb, the more effective our algorithm is.
  • Worked on design presentation and design review report
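For comparison between runs, criterion 3 can be collapsed into a single score; this is only a sketch, and the equal weighting of the three terms is an assumption:

    def effectiveness_penalty(front_cm, back_cm, left_cm, right_cm, angle_deg):
        # Lower is better: penalize unequal front/back and left/right
        # clearances and any deviation from parallel to the curb.
        return (abs(front_cm - back_cm)
                + abs(left_cm - right_cm)
                + abs(angle_deg))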

 

Hubert

  • Configured WiFi- and Ethernet-based MJPEG (uncompressed picture stream) video streaming and demonstrated the (in)feasibility of streaming raw video
  • Configured WiFi streaming of a compressed video stream
    • Installed and debugged gstreamer-1.0 based h264 camera video processing pipelines
    • Installed an RTSP-based video streaming server (gst-rtsp-server) and managed to stream compressed h264 video with a latency of less than 1 second (to an RTSP Android player app)
  • Started integrating the functions we previously drafted (graph-plotting function, client-side socket functions, and video streaming) into the Parkr Android App
    • Adapted the previous vanilla Java JApplet-based plotting library implementation to Android 2D graphics
    • Embedded the streaming video in the Android app front page
    • Integrated the previous socket-based server-client code into the Android environment

 

Zilei

  • Worked extensively on refining the design review document
  • Pieced all the conclusions we arrived at during our discussions into concrete diagrams and design documentation
  • Refined the path prediction algorithm by dividing it into smaller, more specific submodules, and produced the corresponding diagram
  • Refined and updated the instruction generation flow based on the results of our discussion and produced a detailed flowchart

  • Generated a communication diagram to better organize the structure of the program, allowing us to better explain and track the communication between the different modules of the system

Yuqing

  • Connected 4 ultrasonic sensors to the Arduino
  • Used Arduino’s NewPing module to implement distance detection for the sensors
  • Used NewPing’s median function to obtain the median of every 5 readings from each sensor (this gave us much more stable and accurate readings than using the RPi; see the sketch after this list)
  • Experimented with different placements of the ultrasonic sensors and noticed that the effect of interference between sensors is not as great as it was with the Raspberry Pi
  • Set up the connection between the Arduino and the RPi so that the Arduino can send all readings to the RPi as a comma-separated string at fixed intervals
  • Updated the server-side socket Python script based on the changes made
  • Assisted in refining the design review document
  • Updated the blog post
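The median filtering itself happens on the Arduino via NewPing's ping_median(); the Python analogue below is shown only to illustrate the idea (the read_once reader is hypothetical):

    from statistics import median

    def filtered_reading(read_once, n=5):
        # Take n raw readings and return the median, so a single
        # glitched value cannot skew the result.
        return median(read_once() for _ in range(n))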

Weekly Update 10/01 – 10/07

What we did this week as a group:

  • Worked on documenting our design choices to prepare for the design review
    • The choice of processor might change due to limited video compression power
    • An Arduino might be added to mediate the distance sensor drivers

 

Hubert

  • Worked on different compression formats for video streaming and recorded the latency
    • there’s a tradeoff between bandwidth requirements and compression power requirements

 

Zilei & Yuqing

  • Worked on a prototype for communicating distance information between the Raspberry Pi & the Android app through a socket (see the sketch below)
    • Python code on the Raspberry Pi
    • Java socket on the client side, so it’s compatible when moved to the phone
    • considering JSON format so the data structure can be understood by both ends
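A minimal sketch of the RPi side of this prototype, sending the latest readings as one JSON line per connection (the host, port, and payload shape are assumptions from this discussion):

    import json
    import socket

    def serve_once(get_readings, host="0.0.0.0", port=5000):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                payload = json.dumps({"distances_cm": get_readings()})
                conn.sendall((payload + "\n").encode())

On the Java side, a BufferedReader.readLine() followed by a JSON parse would recover the same structure.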

Weekly Update 9/24 – 9/30

What we did this week as a group:

  • Successfully connected multiple ultrasonic sensors to the Raspberry Pi and realized distance data transfer from multiple sensors

  • Tested the accuracy of the distance data from the ultrasonic sensors and the valid distance and angle ranges of the sensors. What we learned:
    1. The accuracy of the sensor data fluctuates a lot at the very beginning and then slowly stabilizes
    2. A sensor has a limited range of roughly 2 meters and around 15 degrees
    3. Sensors occasionally glitch, producing distance readings that are way off. An algorithm like taking the median of 5 readings will need to be used
    4. Interference between sensors can drive both sensors to be extremely inaccurate. Therefore, sensors need to be far enough from each other to produce accurate data
    5. Interference between sensors gets extremely high if there are no obstacles within 1 m of the sensors
  • Successfully connected the camera to the Raspberry Pi and enabled the night-vision camera

  • Devised a baseline algorithm for plotting the surrounding environment based on distance data

  • Tested camera latency during the day and at night (latency screenshots)

Hubert

  • Realized the connection between the camera and the Raspberry Pi
  • Installed the Raspberry Pi camera module driver
  • Installed the uv4l camera video processing & streaming library
  • Tested live-streaming of the camera video and tried multiple streaming settings
    • H264 streaming results in long latency; due to limited compression power & the bandwidth limit, performance is bad (stutter & huge lag)
    • MJPEG streaming does not depend on key frames & thus a bandwidth bottleneck does not stutter the stream as badly
      • Achieved a framerate of about 10 fps, with a max latency of 2 s in daylight and 3 s with night vision
    • H264 can be streamed in the VLC player
    • MJPEG can be streamed in a browser & the VLC player

Zilei

  • Connected ultrasonic sensors to Raspberry Pi
  • Tested the range and accuracy of ultrasonic sensors
  • Drafted the baseline algorithm for outlining the surrounding environment based on sensor data
  • Refined the Java functions drafted by Yuqing

Yuqing

  • Connected ultrasonic sensors to the Raspberry Pi
  • Drafted a Python script that uses the gpiozero module to read data from the ultrasonic sensors (see the sketch below)
  • Drafted Java functions that will be used to plot the position of the vehicle and the outline of the obstacles around it
  • Helped devise the baseline algorithm for plotting the outline
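For illustration, reading one HC-SR04-style sensor with gpiozero looks roughly like this; the GPIO pin numbers are assumptions, not our actual wiring:

    from gpiozero import DistanceSensor

    # max_distance matches the roughly 2 m usable range we measured
    sensor = DistanceSensor(echo=24, trigger=23, max_distance=2)

    def distance_cm():
        return sensor.distance * 100  # .distance is reported in meters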