Jason Lu’s Status Report for 4/27

This Week’s Work

  • Worked on final presentation slides and presented them
  • Worked with the team on getting everything installed on the bicycle and hooked up
  • Spent a little bit of time continuing to figure out a solution for the magnetometer

Schedule

Out of the three deliverables from the last status update, I made progress on all three (in orange) but did not finish them. Most of my time was spent getting everything working on the bicycle.

  1. Get magnetometer into working state for final demo
  2. Make alerts on UI less noisy
  3. Complete more tests

Upcoming Deliverables

The main deliverables (aside from the class assignments such as video, report, etc.) for this week are essentially the same as last week, except changing “complete more tests” to “complete all tests”:

  1. Get magnetometer into working state for final demo
  2. Make alerts on UI less noisy
  3. Complete all tests

State of the UI

Here’s a more detailed rundown of the main changes to the UI since the last update:

Making the UI less “flickery” (part 2)

As a recap of what this is for: when we were testing, we noticed that the system is very “flickery” in the sense that the blind spot indicators, range indicators, and alerts can change their values very rapidly. For example, the blind spot indicators would appear and disappear quickly, which is not a good experience for the user.

I implemented some throttling last cycle, but this week I added an additional change: only show the blind spot indicators if the vehicle is moving towards you (i.e., its relative velocity is negative). This seemed to further reduce the number of false blind spot indicators.
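
As a rough illustration, the new gating condition looks something like the following JavaScript sketch (track.inSideLane and track.velocity are hypothetical names, not our actual code):

// Minimal sketch: only treat a detection as a blind spot threat if the
// vehicle is approaching, i.e., its relative velocity is negative.
function shouldShowBlindSpotIndicator(track) {
  const approaching = track.velocity < 0; // negative = closing on the bicycle
  return track.inSideLane && approaching;
}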

I also disabled the forward/rear collision warnings as they are too sensitive right now. We will need to discuss whether this is a feature we want to keep.

Magnetometer

There was a lot of continued frustration this past week trying to get the magnetometer to work correctly. Despite comparing my code against what Adafruit does in their library and trying different functions in my I2C library, I couldn’t get my code to work.

However, I noticed that using Python and Adafruit’s Python library for the MMC5603 (together with the calibration data) actually gives good results! I’m still mystified as to why their code works and mine doesn’t, and I suspect that I’ll need a logic analyzer to inspect the I2C messages to figure out why.

So instead, my new approach is to write another Python script that talks to the magnetometer using Adafruit’s library, and then use a pipe (like we’re doing with the radar data) to send the readings over to the JavaScript UI code.
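
On the UI side, the receiving end could look something like this minimal sketch (the FIFO path and one-heading-per-line format are assumptions, not a final design):

// Minimal sketch: read heading values that the Python script writes,
// one per line, into a FIFO created beforehand with mkfifo.
const fs = require("fs");
const readline = require("readline");

const rl = readline.createInterface({
  input: fs.createReadStream("/tmp/heading_fifo"), // hypothetical path
});

rl.on("line", (line) => {
  const headingDegrees = parseFloat(line);
  if (!Number.isNaN(headingDegrees)) {
    updateBikeHeading(headingDegrees); // hypothetical UI update function
  }
});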

Jason Lu’s Status Report for 4/20

These Two Weeks’ Work

  • Worked on the UI code (see “State of the UI” for a description of what happened)
  • Worked on final presentation slides
  • Conducted some testing with my team

Schedule

Out of the two deliverables from the last status update, I completed one (in green) and did not finish the other (in red).

  1. Complete implementation of turn signal control
  2. Complete radar and UI integration

Upcoming Deliverables

The deliverables for this week are as follows:

  1. Get magnetometer into working state for final demo
  2. Make alerts on UI less noisy
  3. Complete more tests

Special Questions

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks?

Here are some tools that I had to learn for this project:

  • Figma – the website that I used to mock up the UI before I implemented it.
  • ElectronJS and associated tooling (e.g., Vite) – ElectronJS is the framework that I used to build the UI. I’ve used HTML, JavaScript, and CSS before, but never the ElectronJS framework itself. Additionally, I had to learn how to use the associated tooling, such as how to package the application and use the build system. Some additional libraries I learned to use include:
    • onoff – GPIO control in JavaScript
    • i2c-bus – I2C control in JavaScript

Some other specific knowledge I got included:

  • Figuring out how to use the magnetometer over I2C and how to calibrate things

What learning strategies did you use to acquire this new knowledge?

In general, for tools, I learned things very informally (i.e., without watching a tutorial or reading a textbook). For Figma, I essentially just messed around with it and looked things up as needed. Figma’s official documentation was useful, along with videos.

For Electron, I used their documentation and Googled things when I got stuck. Reading GitHub issues and StackOverflow answers was also very helpful.

For learning how to use the magnetometer, reading the MMC5603 datasheet and looking at Adafruit’s C++ library implementation was really helpful. And of course, lots of trial and error. For calibration, reading blog posts and looking at sample code proved to be very useful.

For Raspberry Pi-related issues, I read posts from people’s blogs, the Raspberry Pi forums, questions and answers on the StackExchange network, and the documentation.

State of the UI

Here’s a more detailed rundown of the changes to the UI since the last update:

Integration with radar

The implementation work was done as of the previous status report, but in these two weeks Jack got the front radar working and we verified that the radar and UI integration works for both the front and rear radars.

Making the UI less “flickery”

When we were testing, we noticed that the system is very “flickery” in the sense that the blind spot indicators, range indicators, and alerts can change their values very rapidly. For example, the blind spot indicators would appear and disappear quickly, which is not a good experience for the user.

To fix this, I implemented a deactivation delay for the blind spot indicators: each indicator must stay activated for at least 2 seconds after the last detection of a vehicle in the side lane before it is allowed to deactivate.

For the range indicator, I ended up implementing something akin to throttling, where updates to the range indicator are limited to 1 Hz (although while testing it before writing this, I noticed there might be a slight bug that occasionally lets faster updates through).
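
Here’s a minimal sketch of both mechanisms (the helper functions are placeholders, not our actual code):

// Blind spot indicator: hold for at least 2 s after the last detection.
const HOLD_MS = 2000;
let lastDetectionTime = 0;

function onSideLaneDetection() {
  lastDetectionTime = Date.now();
  showBlindSpotIndicator(); // hypothetical
}

function maybeDeactivateIndicator() {
  if (Date.now() - lastDetectionTime >= HOLD_MS) {
    hideBlindSpotIndicator(); // hypothetical
  }
}

// Range indicator: throttle updates to at most 1 Hz.
const RANGE_PERIOD_MS = 1000;
let lastRangeUpdate = 0;

function onRangeReading(range) {
  const now = Date.now();
  if (now - lastRangeUpdate >= RANGE_PERIOD_MS) {
    lastRangeUpdate = now;
    renderRangeIndicator(range); // hypothetical
  }
}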

The collision warnings are still too sensitive right now. I’ll have to think of some way to make them less sensitive.

Adventures in magnetometer calibration

In the last status report, I talked about an issue where physically rotating the magnetometer 90 degrees only produces a reported change of 60 degrees. I spent a significant amount of time over the last two weeks trying to calibrate the magnetometer to see if I could improve this.

In my first attempt, I used some Python calibration code modified from code on this Adafruit page (with plotting code taken from this Jupyter notebook from Adafruit) to determine the hard iron offsets. That gave me some calibration values, but when I applied them in code the magnetometer broke even more: rotating the magnetometer by a lot only changed the heading by a few degrees.

Then, I attempted a more advanced calibration using a piece of software called MotionCal (which calculates both hard and soft iron offsets). I spent a decent amount of time getting it to build on the RPi 4 and modifying it to take input from my Python script that interfaces with the magnetometer, instead of reading from a serial port as MotionCal originally expects. This StackOverflow answer had code that I used for bridging between my Python script and MotionCal. My original idea was to create a virtual serial port that MotionCal could use directly so I wouldn’t have to modify MotionCal, but it turned out this approach didn’t create the type of device MotionCal wanted, so I ended up having to modify MotionCal too. In the end, I effectively piped data from the Python script to MotionCal.

I also used this StackExchange answer, whose sample code was helpful in determining what to send from the Python script and in what format (e.g., multiplying the raw values by 10) for MotionCal to work. It was also helpful for learning how to apply the soft iron offsets once I obtained them from MotionCal.
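
For reference, applying the offsets follows the standard hard/soft iron model: subtract the hard iron offsets from the raw reading, then multiply by the soft iron correction matrix. A minimal sketch (the offset and matrix values below are placeholders; the real ones come from MotionCal):

// Values from calibration – placeholders, not our real numbers.
const hardIron = [0.0, 0.0, 0.0]; // per-axis hard iron offsets
const softIron = [
  [1, 0, 0],
  [0, 1, 0],
  [0, 0, 1],
]; // 3x3 soft iron correction matrix

function applyCalibration([x, y, z]) {
  // 1. Subtract the hard iron offsets
  const hx = x - hardIron[0];
  const hy = y - hardIron[1];
  const hz = z - hardIron[2];
  // 2. Multiply by the soft iron correction matrix
  return [
    softIron[0][0] * hx + softIron[0][1] * hy + softIron[0][2] * hz,
    softIron[1][0] * hx + softIron[1][1] * hy + softIron[1][2] * hz,
    softIron[2][0] * hx + softIron[2][1] * hy + softIron[2][2] * hz,
  ];
}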

With all these changes, I still didn’t see any improvement – rotating the magnetometer a lot would not result in a significant change in the heading :(. So, as of this writing, the magnetometer is still not functional as a compass. Oddly enough, after reverting my changes the magnetometer seemingly no longer works even as it did before all this (where rotating 90 degrees physically reported a 60-degree change). I’ll need to spend some time before the demo getting this fixed, but I may just use the workaround mentioned in the last status report instead of trying to get calibration to work.

Personally, I’m starting to think that something’s wrong with our magnetometer. The plots I get when calibrating are very different from what Adafruit gets (see the animation in the top left corner of their page): my code draws three circles very far apart instead of overlapping ones like theirs.

It turns out I actually have an IMU (MPU6050) from a previous class, so I could have used that instead of the magnetometer, but I think at this point it may be too late to switch. I’ll have to look into how much effort it would take to get the IMU up and running, but from what I remember from trying to use it to calculate heading in my previous class, it will also need calibration, which might be difficult.

 

Jason Lu’s Status Report for 4/6

This Week’s Work

  • Implemented physical control of turn signal lights
  • Implemented auto-cancellation using magnetometer
  • Started integration with Jack’s radar code

For details, please see the “State of the UI” section below.

Schedule

Out of the two deliverables from last week, I partially completed one (in orange) and did not make any progress on the other (in red).

  1. Complete implementation of turn signal control
  2. Complete radar implementation

The only real work remaining for the turn signals on my end is to mitigate the issue where we need to physically turn the compass 90 degrees for it to report a change of 60 degrees – I have a potential workaround (see below).

However, I am behind on the radar implementation task. I will check with Jack on whether this is something he can take on, since theoretically we can reuse what he builds for the RCW for the FCW. I’m also realizing that “radar implementation” is somewhat ill-defined – I think a good definition at this point is to have code that filters the points from the radar, identifies which of them correspond to a vehicle directly in front (for FCW), and sends the distance and relative velocity of that car to the UI.

Upcoming Deliverables

The deliverables for this week are similar to last week’s, except changing “radar implementation” to “radar and UI integration”.

  1. Complete implementation of turn signal control
  2. Complete radar and UI integration

State of the UI

Here’s a more detailed rundown of the state of the UI as of this writing.

Turn Signal Lights

I implemented code to drive the actual turn signal lights through GPIO pins, which in turn control transistor gates. There is a minor issue where the on-screen turn signal indicator isn’t synchronized with the physical turn signal light, but for now it should be OK.
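
For reference, driving a light through onoff looks roughly like this (a minimal sketch; the pin number and flash period are illustrative, not our final values):

// Minimal sketch: flash a turn signal light connected through a transistor.
const { Gpio } = require("onoff");

const leftSignal = new Gpio(6, "out"); // GPIO6 drives the transistor gate
let lit = false;

const flasher = setInterval(() => {
  lit = !lit;
  leftSignal.writeSync(lit ? 1 : 0); // toggle the physical light
}, 500);

function cancelLeftSignal() {
  clearInterval(flasher);
  leftSignal.writeSync(0); // make sure the light ends up off
}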

Turn Signal Auto-Cancellation

For our turn signal auto-cancellation system, we are using the Adafruit MMC5603 Triple-Axis Magnetometer as a compass. As a quick refresher, the idea is that when we activate a turn signal, we record the current bicycle heading. Then, once the bicycle turns beyond a certain angle (currently targeting ±60 degrees from the starting heading), we automatically turn off the turn signals.
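
The cancellation check itself is simple; here’s a minimal sketch (the wrap-around handling is my assumption about how two compass headings should be compared, and the handler names are hypothetical):

const CANCEL_THRESHOLD_DEG = 60; // target from above
let startHeading = null;

function onTurnSignalActivated(currentHeading) {
  startHeading = currentHeading; // record heading at activation
}

function onHeadingUpdate(currentHeading) {
  if (startHeading === null) return;
  // Smallest signed difference between two headings (handles the 359 -> 1 wrap)
  const delta = ((currentHeading - startHeading + 540) % 360) - 180;
  if (Math.abs(delta) >= CANCEL_THRESHOLD_DEG) {
    cancelTurnSignals(); // hypothetical
    startHeading = null;
  }
}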

Adafruit provides libraries for Arduino and Python, but since the UI code is written in JavaScript there weren’t any pre-written libraries, so I ended up writing my own code to interface with it.

As mentioned last week, I’m using the i2c-bus library to communicate with the MMC5603 chip over I2C. With help from the MMC5603 datasheet and Adafruit’s C++ library for the MMC5603, we now have support for obtaining the raw magnetic field measurements from the MMC5603.

To convert those raw readings into a heading, I implemented the algorithm described in this Digilent article.
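
The core of that computation is just an atan2 over the horizontal field components; a minimal sketch (ignoring tilt compensation and calibration):

function headingFromRaw(magX, magY) {
  let heading = Math.atan2(magY, magX) * (180 / Math.PI);
  if (heading < 0) heading += 360; // normalize to [0, 360)
  return heading;
}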

There is an issue where the magnetometer reports only 60 degrees of angle change when it’s actually rotated 90 degrees. We can work around this in code for now by setting a lower auto-cancellation threshold (perhaps 40 degrees, so the signal turns off when the bicycle physically rotates 60 degrees), but we may need calibration. From what I briefly read online, though, that won’t be a simple process.

I also implemented the ability to override the bike angle to some fake angle in the debug menu. Here’s a video I recorded (a CMU account is required) showing the overriding angle feature and the turn signal auto-cancellation.

Integration with radar

The UI side is ready to receive real radar data! The general overview is that Jack’s code will do all the data processing, identify one vehicle each in front, behind, left, and right, and send their distances and velocities in a JSON string to my program. We will be using named pipes, created using mkfifo.

On the UI side, I based my code off of this StackOverflow answer. There was a slight hiccup initially when we were testing passing data from Python to JavaScript: the Python side seemed to buffer multiple JSON strings before actually sending them over the pipe, resulting in a crash on the JavaScript side when it received a single string with multiple objects at the top level, which isn’t valid JSON. We fixed this by inserting a flush call after sending each string on the Python side.
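
For reference, the reading side looks roughly like this minimal sketch (the FIFO path and JSON field names are illustrative; the real schema is whatever Jack’s code sends):

const fs = require("fs");
const readline = require("readline");

const rl = readline.createInterface({
  input: fs.createReadStream("/tmp/radar_fifo"), // created with mkfifo
});

rl.on("line", (line) => {
  let msg;
  try {
    // One JSON object per line; the sender flushes after each write so we
    // never receive several objects concatenated into one read.
    msg = JSON.parse(line);
  } catch (e) {
    return; // ignore partial or garbled lines
  }
  // e.g., msg = { front: { distance: 12.3, velocity: -1.5 }, rear: { ... } }
  updateRadarZones(msg); // hypothetical UI update function
});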

 

 

Jason Lu’s Status Report for 3/30

This Week’s Work

  • Completed initial implementation of main screen code (with faked sensor data)
  • Implemented button inputs for turn signal control

For details, please see the “State of the UI” section

Schedule

Out of the three deliverables from last week, I completed one (in green), started a task (in orange), and did not make any progress on the last task (in red).

  1. Complete initial implementation of main screen code (with faked sensor data)
  2. Complete implementation of turn signal control
  3. Complete radar tuning

For the turn signal controls, the remaining tasks are:

  1. Assign GPIO pins for the turn signal LED control transistors and add code to toggle the GPIO pins based on the turn signal state
  2. Implement the magnetometer. Unfortunately there aren’t any prebuilt JavaScript libraries for the magnetometer we’re using, so I’m writing the code to operate the magnetometer over I2C. A backup option is to use the Python script that Johnny wrote and add code to have it send the magnetometer readings over an IPC connection to the UI code, where they would be parsed.

I am behind on the turn signal and radar tuning tasks. For the necessary schedule adjustments, please see the team posts. Most likely this week I will continue to prioritize the turn signal implementation work.

Upcoming Deliverables

The deliverables for this week are similar to last week’s, except with the removal of the UI implementation task and changing “radar tuning” to “radar implementation”.

  1. Complete implementation of turn signal control
  2. Complete radar implementation

State of the UI

Since the update last week, the UI has received the following major updates:

  • Implemented turn signal button injection from the debug screen – we can simulate turn signal button presses now
  • Implemented both forward and rear collision warnings
  • Added ability to simulate a radar not detecting any vehicles
  • Implemented physical turn signal button input handling – we can control the turn signals by pressing the real buttons now!

Screenshot of the forward collision warning being triggered – the system takes into account the distance and relative velocity to the vehicle in front.

There was quite an adventure trying to get GPIO to work with Node.js. After trying a few different libraries, I settled on the onoff library.

There are a couple of nuances with using this library:

  • This library doesn’t support setting the pull-up/pull-down resistors for the GPIO pins – the README file in the onoff repository lists a few strategies, such as modifying the device tree or adding commands to the Pi’s config.txt. However, the Raspberry Pi’s GPIO pins numbered <= 8 are apparently set to pull-up by default. Therefore, I chose GPIO6 and GPIO7 so we don’t have to worry about the resistor settings. If we end up needing a different GPIO pin, we can explore the suggested options.
  • GPIO numbering has changed on the RPi 5 and on older Raspberry Pis running kernels >= 6.6: instead of passing 7 as the GPIO number, we pass 519 (see the sketch after this list). Details are available here if you’re interested.
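
A minimal sketch of handling the new numbering (assuming the offset is 512, which matches 7 becoming 519; the handler is hypothetical):

const { Gpio } = require("onoff");

// 0 on older kernels, 512 on the RPi 5 / kernels >= 6.6 (so 7 becomes 519)
const GPIO_BASE = 512;

const leftButton = new Gpio(GPIO_BASE + 6, "in", "both");
const rightButton = new Gpio(GPIO_BASE + 7, "in", "both");

leftButton.watch((err, value) => {
  if (err) throw err;
  handleLeftButtonEdge(value); // hypothetical; 0/1 meaning depends on wiring
});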

I’ve also started work on implementing the magnetometer (MMC5603). Since there’s no JavaScript library for it, I’m implementing the code using the datasheet and the i2c-bus library. Currently, I’m still working on the initialization code.
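
One way to sanity-check the wiring is to read the chip’s product ID register with i2c-bus; a minimal sketch (the address 0x30 and register 0x39 returning 0x10 are my reading of the datasheet – treat them as assumptions):

const i2c = require("i2c-bus");

const BUS_NUMBER = 1;        // the Pi's primary I2C bus
const MMC5603_ADDR = 0x30;   // 7-bit I2C address per the datasheet
const REG_PRODUCT_ID = 0x39; // should read back 0x10

const bus = i2c.openSync(BUS_NUMBER);
const productId = bus.readByteSync(MMC5603_ADDR, REG_PRODUCT_ID);
console.log(`Product ID: 0x${productId.toString(16)}`);
bus.closeSync();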

 

Jason Lu’s Status Report for 3/23

This Week’s Work

  • Ethics assignment related tasks
  • Continued UI development (see “State of the UI” section below)

Schedule

I was unable to complete any of this week’s tasks:

  1. Complete initial implementation of main screen code (with faked sensor data)
  2. Start integration with Jack’s radar code
  3. Start implementation of turn signal control
  4. Start radar tuning for forward collision warning

I’m close to completing the UI; the remaining tasks on my plate (at least what’s needed for integration) are:

  • Add UI elements for rear and forward collision warnings
  • Finish up code to simulate turn signal button presses (almost done at time of writing)
  • Allow overriding a radar zone’s data to show no car detected – currently we can only control the range and velocity of the car detected in the zone, not whether a car is present at all.

With respect to the UI itself, I am currently on track although there is a risk of slippage depending on how long these remaining tasks take. According to the schedule, I have until tomorrow to complete the visualizations, so there isn’t much time left.

Because of the delay in starting the other tasks, I’ve shifted the dates for the radar tuning (forward collision tuning) and turn signal implementation to 3/23 – 3/29. This has not resulted in any delays yet, but further delays in these tasks (which are a high risk!) run the risk of delaying the overall electrical/software integration and causing a cascade of delays downstream. I’m hoping I can make good progress on these during the lab times this week.

Upcoming Deliverables

The main deliverables for this week are similar to last week except for removing integration with Jack’s radar code since that task doesn’t start until 3/30, and changing “start” to “complete” for turn signal implementation and radar tuning since their deadlines are soon:

  1. Complete initial implementation of main screen code (with faked sensor data)
  2. Complete implementation of turn signal control
  3. Complete radar tuning

State of the UI

Please see this video I recorded for the current state of the UI: Google Drive link (you will need a CMU account to see this).

Jason Lu’s Status Report for 3/16

This Week’s Work

  • Finished initial mockup for the UI (see below)
  • Started work on UI implementation (see below)

Schedule

Out of the 4 targets from the last update, I’ve completed 2 tasks (highlighted in green) and 2 tasks are unstarted (highlighted in red).

  1. Complete UI mockup
  2. Start implementation of the UI in code
    1. Last time I set the target as “Start implementation of the UI in code (with faked sensor data)” but I realized that didn’t make sense so I removed the “with faked sensor data” for this update.
  3. Start implementation of turn signal control
  4. Start radar tuning

With respect to the software side of the UI, I am currently on track. However, I am behind with respect to the turn signal control software and radar tuning, as both should have been started by this point. I plan to discuss the radar timeline with the rest of the team, as I may not have time this upcoming week to tune the radar.

Upcoming Deliverables

The main deliverables for this week are:

  1. Complete initial implementation of main screen code (with faked sensor data)
  2. Start integration with Jack’s radar code
  3. Start implementation of turn signal control
  4. Start radar tuning

Updated UI Mockup

After last week, I realized that the range indicator for a bicycle, which looked similar to a Wi-Fi indicator, could be ambiguous. For example, if the range indicator is at full strength, does that mean the car is near or far?

Therefore, I redesigned the range indicator to be similar to what Toyota has for their parking assist proximity sensing system (see “Distance Display” under this page from the 2021 Venza manual). Under the new system, an arc lights up corresponding to the range of the detected car, with farther arcs indicating a longer distance. The color of the arc also serves as a visual cue, with green indicating far, yellow indicating medium, and red indicating close range.

Other changes this week include:

  • Removed the vehicle indicator, previously shown as grey boxes on the old UI
  • Fixed empty space above the emergency braking background
  • Added turn signals (based on Tesla’s turn signal design – see here in the Model Y owner’s manual)

For comparison, here’s the old design:

Old UI from before this week

And here’s the new design:

New revision of the UI (click for large view)

From left to right, the screens display:

  1. Vehicles detected both in front and behind – the front vehicle is at a medium range, and the rear vehicle is at a far range
  2. No vehicles detected in front or behind
  3. Forward collision warning triggered
  4. Rear collision warning triggered
  5. Blind spot monitoring – a vehicle is in the left lane beside us
  6. The last two panels are for animating the turn signal – in this case, we have a left turn signal

Future work will include adding a hazard toggling button and perhaps a debug panel or some other way to show debug information.

State of the UI

Here’s the state of the UI at the time of writing:

The icons are exported from Figma, and the flashing turn signals are driven by the turn signal application logic.

Jason Lu’s Status Report for 3/9

This Two Week’s Work

  • Finished up design report
  • Created initial mockup for the UI (see below)

Schedule

Out of the 2 targets from the last update, I haven’t completed either (highlighted in red):

  1. Complete the UI mockup by 2/28 – I completed an initial mockup after the 2/28 deadline, but I just realized I need to make some pretty big revisions to it, so it isn’t complete
  2. Start implementation of the UI in code

I am a little behind as I haven’t started on the UI implementation nor completed the UI mockup. I aim to complete the mockup and start the implementation this week.

Therefore, I needed to extend the due date of the mockup from 2/28 to 3/13 (which also pushes back the UI implementation start date), along with changing the UI implementation end date from 3/15 to 3/24, as I’m not confident I can finish it in the original time frame of one week (the new allocated time is 1.5 weeks). This should be OK as I had extra slack time anyway, so there aren’t any delays due to this. There are a lot of tasks going on now, so there is a risk of further slippage, but for now this is the plan.

Upcoming Deliverables

The main deliverables for this week are:

  1. Complete UI mockup
  2. Start implementation of the UI in code (with faked sensor data)
  3. Start implementation of turn signal control
  4. Start radar tuning

UI Mockup

For the UI mockup, I built it using Figma. I’d heard of Figma but never used it before; thankfully, it seems quite easy to get up and running. Here’s a picture showing mockups of various scenarios:

Mockup of various scenarios

Note that the UI uses portrait orientation and is inspired in part by Tesla’s visualization (a little bit like the image under “Driving Status” in the Tesla Model Y Owner’s Manual). From left to right, the screens display:

  1. Vehicles detected both in front and behind. The signal strength indicates how close the car is, although as I’m typing this I realize that this can be very confusing (e.g., does max signal strength mean close or far?). I will explore how to redo this. The grey rectangles are placeholders for vehicle symbols (e.g., like how Tesla shows grey vehicle models for other cars) and show up only if a vehicle is detected.
  2. No vehicles detected in front or behind
  3. Forward collision warning triggered
  4. Rear collision warning triggered
  5. Blind spot monitoring – a vehicle is in the left lane beside us

Jason Lu’s Status Report for 02/24

This Week’s Accomplishments

  • Worked on design review slides
  • Flashed firmware for the Pi – whatever was on the Pi originally did not want to boot
  • Chose a UI framework (see below)
  • Built a hello world app in Electron (see below)

Schedule

Out of the 5 targets from last week, I’ve completed 4 of them (green text).

  1. Complete the design review slides
  2. Get the RPi up and running
  3. Select a graphical framework
  4. Write a barebones hello world GUI application
  5. Begin UI mockup

I originally slipped a bit behind schedule, as software bringup was supposed to be done by 02/21 and I only finished it on 02/24. Nonetheless, I am back on schedule as of this moment, although there is a risk of slipping behind again if I don’t finish the UI mockup by 02/28.

Upcoming Deliverables

The main deliverables for this week are:

  1. Complete the UI mockup by 02/28
  2. Start implementation of the UI in code

Choosing a UI framework

Step 1: Options

I ruled out using a web app simply because we need to interface with hardware, which is more difficult from a web app. I’d need to write native code even with a web app to interface with the hardware, and trying to connect the native code and the web app seems difficult to me.

Here are some options that I first considered:

  • Electron – I knew of this as a cross-platform UI framework that uses Chromium under the hood
  • Qt
  • Gtk

During testing, I also added a Rust framework conceptually similar to Electron, called Tauri. I’d seen it before in my free time, and I saw it again while browsing Rust UI frameworks at https://areweguiyet.com/. I chose to add it because I saw Electron’s high memory usage and wanted something conceptually similar but hopefully lighter, and I was curious whether Tauri might accomplish that.

 

Step 2: Evaluation Criteria

Here are the criteria that I used to evaluate the frameworks. Note that not every factor is weighted equally.

  1. Cross-platform – Can this run on both MacOS and Linux? Our RPi runs Raspbian, and I plan to primarily develop on my MacBook so if the framework is cross-platform I can work on my MacBook and build for Linux from the same code base.
  2. Language/Tooling Familiarity – Given that we have limited time for developing things, I don’t want to spend too much time on learning the language primarily used for the framework nor learning how to use the framework itself.
  3. Adoption – Who are the major users of it? If it is a framework that is used by major apps, it means that this framework should be production-ready and is proven to be usable for real-world tasks.
  4. Baseline resource usage – How much CPU and RAM do we use rendering roughly the same things? Our Raspberry Pi 4, while it does have 8 GB RAM, is at the end of the day more of an embedded system that does not have the power of a desktop. Spending too many resources on rendering the UI would increase power draw and take up performance that could be used for other tasks like radar data processing and collision time estimation.
  5. Cross-compileable – Can we easily build packages for the RPi 4 from a host computer? While technically I could build on the RPi 4, it’s easier to build on the host computer and copy the package to the RPi 4 so we don’t have to install build tools on the Pi itself. Building on a laptop is probably a lot faster too.

Step 3: Testing

I created hello world apps in all 4 frameworks and measured their memory and CPU usage. Testing was conducted on a M1 MacBook Air running MacOS Sonoma (14.2.1) with 16 GB RAM. Admittedly the laptop is much faster than an RPi 4 and isn’t even running the same OS, but I’m hoping that the general trends will be similar across MacOS and Linux.

The memory usage was obtained by using Activity Monitor and summing up the relevant processes’ memory usage (e.g., Electron has multiple processes running, so we need to account for all of them). Calculating CPU usage was admittedly a lot less scientific: I just watched the CPU usage of the processes and grabbed the maximum value, and only if it repeatedly hit it – e.g., hitting 1% CPU usage just once wouldn’t count, only fluctuating up to 1% a few times.

The code for all four demo applications is in our shared GitHub repository.

Results

For the full results with citations, please see the file in Google Drive.

Screenshots

Here are some screenshots of the hello world apps:

“Hello world” using Electron
“Hello world” using Qt
“Hello world” using Gtk
“Hello world” using Tauri

Step 4: Decision

All four frameworks are cross-platform so we can ignore that as a deciding factor, although if one of them wasn’t it would have been a major negative point.

For Tauri, I was unfortunately unable to find any applications I knew of that used it, which made me a little more hesitant about whether it would be as production-ready and usable for real-world apps as the other three. Therefore, I ruled it out.

Between Qt, Gtk, and Electron: Qt and Gtk have similar performance requirements, while Electron sadly uses 5x the memory and has a bit more measurable overhead.

However, I feel much more comfortable using HTML/CSS/JavaScript for building UIs since I’ve used them before for web development. In contrast, I’m not as familiar with building UIs in Qt and Gtk (either through code or the GtkBuilder system), which would take additional time to learn. I was also unfamiliar with Qt’s tooling such as Qt Creator (although apparently it isn’t required, just recommended), so learning that would incur some additional time expense. Language-wise, I feel more comfortable with JavaScript/HTML/CSS or C than with C++. So overall, I feel a lot more comfortable using Electron than the other two due to the smaller learning curve.

So in the end, it is a tradeoff between performance overhead and ease-of-development.

Given that consideration, I’d rather use Electron despite the performance overhead, simply because the lower learning curve is more important to me. The memory usage, despite being 5x as much, shouldn’t be an issue on the Pi when we have 8 GB (although it will make it harder to downsize to a board with less RAM). The fact that Electron can easily be packaged for Linux from MacOS is a bonus, whereas packaging the other two from MacOS would be harder (see references).

DECISION: Use Electron for the framework

Building a Hello World App

Scaffolding

First, I had to scaffold a basic Electron app. The Electron tutorial mentioned the create-electron-app command, so I decided to use that to avoid spending time on boilerplate.

Reading the documentation for create-electron-app, I saw I had a choice between Webpack- and Vite-based templates. I’ve used Webpack in the past, while Vite was completely unfamiliar to me, but based on https://vitejs.dev/guide/why and https://blog.replit.com/vite it seems to do the job of Webpack (although not all of it), so I went with that. TypeScript is nice because it gives us typing information, so I used the vite-typescript template.

The final scaffolding commandline was:

npm init electron-app@latest ui -- --template=vite-typescript

Here’s a screenshot of the app:

Screenshot of the app that was generated using create-electron-app

Testing Packaging

I also wanted to see if I could package this for a Linux system. I spun up an ARM Linux VM on my computer (using UTM and the Debian 12 image from UTM) and built a .deb package for Debian using electron-forge. After installing it, I verified that I can build packages for Linux from my Mac and have them run!

A screenshot of BikeBuddy packaged on MacOS running in a Linux VM

Jason Lu’s Status Report for 02/17

Personal Progress

  • Worked on the design review slides along with my team
  • I developed a block diagram for our system:

  • Since we were planning to use a power bank, I wanted to figure out if USB negotiation is needed for higher power. Decision: no need for negotiation, see power allocation
  • Researched how to connect multiple wires together (without a PCB or breadboard) since we plan to directly wire the battery pack to the radars, lights, and potentiometer circuit. Decision: Go with wire nuts
    • My original idea was to solder things, but I got spooked by this Reddit post which talked about solder melting, although in hindsight it was about code compliance and house wiring, which don’t apply to us (we won’t be pulling that much current)
    • https://diy.stackexchange.com/questions/135925/when-to-use-electrical-tape-rather-than-wire-nuts rules out electrical tape because it doesn’t hold things together, which we need
    • Then I remembered that I’ve used wire nuts at work before, so I wondered about using them to tie everything together (I forgot what they were called, so I had to Google “twist on wire connectors” lol)

Schedule

Out of the 4 deliverables from last week, I only met one (green = completed, red = incomplete). However, I’m still on schedule as I have until 02/21 to complete initial software bringup and I plan to test the Pi and start software bringup this week.

  1. Get the RPi up and running
  2. Select a graphical framework
  3. Write a barebones hello world GUI application
  4. Select and order a display to attach to the RPi (done with team)

Deliverables

Here are the deliverables for this week:

  1. Complete the design review slides
  2. Get the RPi up and running
  3. Select a graphical framework
  4. Write a barebones hello world GUI application
  5. Begin UI mockup

 

Jason Lu’s Status Report for 02/10

This week, I focused on the following tasks:

  • Working on and finishing up the proposal presentation slides
  • Working on and completing the Gantt chart for our team which is displayed below:
    • I originally developed the Gantt chart using a program called Project Libre, which was listed in the lecture slides, but it was extremely laggy when scrolling on my computer, so I switched to another program called GanttProject
    • Thank God Project Libre was able to export in Microsoft Project format and GanttProject could import it, so I didn’t lose too much time!
    • There were some weird issues with the length of tasks that required manual editing, and I also ran into a bug in GanttProject where assigning people using the right-click menu caused duplicate events to show up in the resource allocation page (which displays how many tasks each person is assigned). I had to use the properties page for each task as a workaround.

  • Attending the mandatory lab meetings and participating in the peer review process
  • Researching the difference between the RPi 5 and 4, which ended up boiling down to higher performance for the RPi 5 in exchange for higher power draw, based on https://hackaday.com/2023/09/28/a-raspberry-pi-5-is-better-than-two-pi-4s/ and https://bret.dk/raspberry-pi-5-review/#Raspberry-Pi-5-Benchmarks
  • Researching viable radars for us to use
    • I discovered that the K-LD7 radar can detect vehicles up to 30 m away and reports targets over serial, which is nice because we don’t need to do any signal processing – unlike the similar K-LC7, which only gives us raw I/Q (sinusoidal) data that we would apparently have to process ourselves
    • However, I later discovered that the K-LD7 radar cannot detect stationary targets, which might pose a problem if a car is traveling at the exact same speed as the bicycle
    • Because of that, I spent a lot of time researching what other radar transceivers exist. Some interesting ones I found were:
      • DISTANCE2GOL from Infineon, along with other Infineon evaluation boards, since I didn’t want to design our own PCB, especially with RF
      • TI evaluation boards for their automotive radars – it would be great if we could use their automotive radars since they are battle-tested in real cars, but the evaluation boards are insanely expensive, like this one
      • Radars from SEEEDStudio, like this 60 GHz module, but from reading the description they looked more suited to human detection than vehicle detection
      • Acconeer radars – These radars had great specs but the evaluation boards were pretty expensive
    • Professor Tamal suggested looking at a previous team that used microwave radars for their bicycle safety system (Team B3 in F21); they used the SEN0306, which looked promising. However, I was a little hesitant about the range, since team B3’s final report states that it only worked up to 11 m – that technically meets our requirements but is shorter than the K-LD7’s 30 m
    • I eventually realized that the K-LD7 product page literally lists blind spot monitoring as a use case, which made me much more comfortable with it

My personal progress is on track according to the schedule; my only task this week was to choose the central computer, and we did.

My deliverables for the following week are:

  1. Get the RPi up and running
  2. Select a graphical framework
  3. Write a barebones hello world GUI application
  4. Select and order a display to attach to the RPi