Shiyi Zhang’s Status Report for 04/29/2023

Personal Accomplishments

  1. Constructed two wooden stands to keep the sensors stationary. One sensor needs to be on the monitor while the other needs to be on the table. The stand intended for the monitor can lock onto it, and the other will need to be duct-taped to the table on demo day.
  2. Conducted tests. I conducted various tests for the UI and the sensors. The test results can be found in the group status report for this week.

Schedule

My progress is on schedule.

Next week

I will be using the newly arrived alligator clips next week to extend our wires. This will allow us to place the sensors connected to the Arduino farther away on both the monitor and the table. I will also be working with my teammates on the final report, poster, and video. Furthermore, I will use the remaining time to search for possible corner cases and ensure that they do not occur on demo day.

Shiyi Zhang’s Status Report for 04/22/2023

Personal Accomplishments

This week, I’ve been working on bringing together all the different components of our project. The application can now wake up from sleep mode, take orders for the current customer, process a checkout, and then return to sleep mode if there are no new customers in line.

Additionally, I’ve styled our web application to make it visually appealing for anyone looking for some fast food. I’ve also filmed a video that shows the entire ordering process for our final presentation slides.

I’ve also conducted some tests to measure the response time of our sensors. On average, it takes our sensors about 1-2 seconds to detect the presence of a customer and begin the ordering process. Additionally, our tests revealed that when a customer is not speaking close enough to our microphone, it takes approximately 1 second for our system to recognize their speech.

Schedule

My progress is on schedule.

Next week

I’ve found some minor bugs in our application, and I will be addressing them next week. One issue was the occasional skipping of an alert that reminds the customer that it is time to speak or be silent. Another was that there was occasionally a mismatch between the speech recognition result and the frontend display of that result, because we use two different libraries for the same job.

Shiyi Zhang’s Status Report for 04/08/2023

Tests

For the parts I am responsible for (the distance sensors, the user interface, and possibly a camera), I have conducted some unit tests on the PIR sensors and the user interface. Our distance sensors have not arrived yet, so tests for them will be delayed, but we are expecting them to arrive next week.

  1. User interface: The duration of the timeout given to the text transcription (i.e., audio input to text) has a direct impact on the completeness of the transcribed text. After testing timeouts of 1, 2, and 3 seconds, I found that 3 seconds was the safest option, as the text rarely got cut off. However, this extended delay came at the expense of user experience. In contrast, a timeout of 1 second provided a better user experience but required the customer to speak quickly with no gap at all, or risk having their speech cut off. After weighing these options, I ultimately decided to go with a timeout of 1 second. In addition, I have tested edge cases including receiving no speech for longer than 30 seconds (should go into INACTIVE mode and delete the current, incomplete order), checking out (should submit order), and receiving unrecognizable speech (should wait). They work as intended. However, I have not tested the UI with the sensors and the camera installed. My plan is to test whether the UI can reflect the number of people waiting in line, whether it can remind the customer to get closer to the mic, and whether it can switch to the appropriate page when no customer is around.
  2. Sensors and the camera: My plan is to experiment with tilting the sensors to find the optimal angle for detecting people within a specific distance range while ignoring those beyond it. There are several factors to consider, including the location of the sensors and how to distinguish between an individual and a large crowd. Once we have the sensors installed and calibrated, I will evaluate their performance in terms of accuracy and speed. Specifically, I’ll be looking at how accurately the system can count the number of people in line (the actual number of people vs. the number we compute), as well as how fast the camera/OpenCV can process the data (within how many seconds the count is produced).
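The UI edge cases tested above can be captured in a small state machine. Below is a minimal sketch of that behavior; the class and method names are hypothetical (not our actual UI code), and only the 30-second INACTIVE timeout, the checkout rule, and the wait-on-unrecognizable-speech rule come from the tests described above:

```python
import time

class KioskUI:
    """Sketch of the ordering session states exercised by the unit tests."""
    INACTIVE_TIMEOUT = 30  # seconds of silence before the session is dropped

    def __init__(self):
        self.state = "INACTIVE"
        self.order = []
        self.last_speech_time = None

    def customer_detected(self):
        self.state = "ACTIVE"
        self.last_speech_time = time.monotonic()

    def on_speech(self, text, now=None):
        now = time.monotonic() if now is None else now
        if text is None:          # unrecognizable speech: wait, keep the order
            return self.state
        self.last_speech_time = now
        if text == "checkout":    # checking out submits the order
            submitted = self.order
            self.order = []
            self.state = "INACTIVE"
            return submitted
        self.order.append(text)
        return self.state

    def tick(self, now=None):
        """Called periodically; drops an incomplete order after 30 s of silence."""
        now = time.monotonic() if now is None else now
        if (self.state == "ACTIVE"
                and now - self.last_speech_time > self.INACTIVE_TIMEOUT):
            self.order = []       # delete the current, incomplete order
            self.state = "INACTIVE"
```

Each branch here mirrors one of the edge cases listed above, which is what made them straightforward to test in isolation.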

Personal Accomplishments

This week, my focus has been on integrating the backend and the frontend. Nina added flag variables and a new interface for the frontend, which is now used by the frontend to read the status of the speech recognition and natural language processing parts of the system. As a result of the changes, the frontend now has the ability to detect when it’s time for customers to speak and when the system is processing and won’t accept any audio inputs. Additionally, I implemented code that can transcribe speech to text to display on the screen. This will enable customers who are hard of hearing to view their order.
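The flag-based coordination between backend and frontend can be sketched as a small shared-state object. This is an illustration only, not Nina’s actual interface; all the names here (`SpeechFlags`, `READY_FOR_SPEECH`, `PROCESSING`) are hypothetical:

```python
import threading

class SpeechFlags:
    """Shared flags the frontend polls to decide whether to accept audio."""
    READY_FOR_SPEECH = "ready"    # customer may speak now
    PROCESSING = "processing"     # backend busy; audio input disabled

    def __init__(self):
        self._lock = threading.Lock()
        self._status = self.PROCESSING
        self._transcript = ""     # latest speech-to-text result for the display

    def set_status(self, status, transcript=""):
        with self._lock:          # backend writes under the lock
            self._status = status
            self._transcript = transcript

    def snapshot(self):
        with self._lock:          # frontend reads a consistent pair
            return self._status, self._transcript

flags = SpeechFlags()
flags.set_status(SpeechFlags.READY_FOR_SPEECH)
# ... backend receives audio and transcribes it ...
flags.set_status(SpeechFlags.PROCESSING, transcript="one cheeseburger")
status, text = flags.snapshot()   # frontend shows the transcript on screen
```

Keeping both values behind one lock is what prevents the frontend from ever pairing an old transcript with a new status.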

Aside from the frontend work, I’ve also been working on the hardware aspect of the project. Since the distance sensors have not arrived yet, I have been exploring the use of OpenCV to better understand what the customer is doing. As a result, the system can now detect the number of people waiting in line, as well as identify if a person is present.
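Counting people in line from camera data largely reduces to filtering the bounding boxes a detector returns (for example, OpenCV’s HOG people detector via `detectMultiScale`). The sketch below shows only that filtering step; the region of interest and the area threshold are made-up values for illustration:

```python
def count_people_in_line(boxes, roi, min_area=2000):
    """Count detections whose center falls inside the queue region.

    boxes: (x, y, w, h) rectangles, as returned by an OpenCV people
           detector such as HOGDescriptor.detectMultiScale.
    roi:   (x1, y1, x2, y2) region of the frame where the line forms.
    min_area: discard tiny boxes, which are usually false positives.
    """
    x1, y1, x2, y2 = roi
    count = 0
    for (x, y, w, h) in boxes:
        cx, cy = x + w / 2, y + h / 2
        if w * h >= min_area and x1 <= cx <= x2 and y1 <= cy <= y2:
            count += 1
    return count

def customer_present(boxes, roi, min_area=2000):
    """A customer is considered present if at least one box is in the ROI."""
    return count_people_in_line(boxes, roi, min_area) > 0
```

Separating this step from the detector itself also makes it easy to test the counting logic without a camera attached.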

Schedule

My progress has been slightly delayed because the distance sensors haven’t arrived yet, and we have just switched from the RPi to an Arduino. However, to avoid any further delays, I’ve implemented a backup solution using OpenCV and a camera. This should ensure that our progress won’t be affected even if the sensors never arrive. We also have a second backup plan in place, which involves using PIR sensors. I have already written the necessary code for this option, so we are prepared.

Next week

Once the distance sensors arrive, my plan is to install them on our Arduino and then debug the code I have prepared for them. Additionally, I intend to integrate these sensors with a camera via the OpenCV library, so that the OpenCV part knows when and when not to check the surroundings.
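The planned hand-off can be sketched as a simple gate: the expensive OpenCV check runs only while a distance sensor reports someone in range. The function names and the 100 cm threshold below are hypothetical:

```python
def check_surroundings(read_distance_cm, run_camera_count, max_range_cm=100):
    """Run the (expensive) OpenCV people count only when the sensor fires.

    read_distance_cm: callable returning the latest sensor reading in cm,
                      or None if no echo was received.
    run_camera_count: callable that grabs a frame and counts people.
    Returns the people count, or 0 without touching the camera when the
    reading is out of range.
    """
    distance = read_distance_cm()
    if distance is None or distance > max_range_cm:
        return 0                  # nobody close enough; skip the OpenCV pass
    return run_camera_count()
```

Injecting the two callables keeps the gating logic testable before the real sensors arrive.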

Shiyi Zhang’s Status Report for 04/01/2023

Personal Accomplishments

This week I’ve been working on integrating the backend code, the frontend code, and the hardware components to prepare for the interim demo. Our voice-operated system is now capable of detecting the presence of customers, responding to their speech, and placing orders. However, there are still a few minor bugs that pop up occasionally due to unhandled edge cases. For example, a particular thread occasionally crashes unexpectedly, whereas this issue does not occur on a Mac. We plan to address these issues next week. Overall, we’re in good shape for the demo. In addition to the integration, I’ve been doing some research on utilizing three extra PIR sensors (right now we are using just one) or a combination of a webcam and OpenCV to provide a more detailed understanding of the presence of a customer. I’ve narrowed it down to an approach called Multiple Sensor Fusion: combining sensor data, or data derived from disparate sources, such that the resulting information has less uncertainty than would be possible if these sources were used individually. While OpenCV is a viable option, I’m leaning towards the more “hardware” approach of using sensors, as this is an ECE project.
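As a concrete illustration of the fusion idea, one common scheme is an inverse-variance weighted average: the fused estimate’s variance is never larger than the best individual sensor’s, which is exactly the reduced-uncertainty property described above. A sketch (the readings and variances are made up, not measurements from our sensors):

```python
def fuse_readings(readings):
    """Inverse-variance weighted fusion of independent estimates.

    readings: list of (estimate, variance) pairs from different sensors.
    Returns (fused_estimate, fused_variance). The fused variance,
    1 / sum(1/var_i), is at most the smallest individual variance.
    """
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    fused = sum(w * est for (est, _), w in zip(readings, weights)) / total
    return fused, 1.0 / total
```

With equal variances this degenerates to a plain average; with unequal variances the more reliable sensor dominates, which is the behavior we’d want when mixing PIR readings with camera-derived estimates.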

On schedule

Yes, my progress is on schedule.

Next week

I’ll be collaborating with Nina to make sure that our code is compatible with the Raspberry Pi. Additionally, I’ll be working on developing code to implement Multiple Sensor Fusion and see if it works better.

Shiyi Zhang’s Status Report for 03/25/2023

Personal Accomplishments

This week, I’ve been working on a few different things for our project. Firstly, I’ve been downloading the necessary packages and test-running the parts that we’ve completed so far. In addition to this, I’ve also been working on connecting our microphone to the Pi.

One particular challenge I encountered was with downloading the en_core_web_sm package under Spacy. Although I had no trouble downloading Spacy itself, downloading this specific package was unsuccessful. It turned out that it was looking for a variable in a system-level C file, which was undefined on the 32-bit Raspberry Pi OS we were using. I tried installing a different OS, the 64-bit Raspberry Pi OS, but unfortunately, that didn’t work either. After spending a day searching online for a solution, I came across a modified version of the 64-bit Raspberry Pi OS created by an online user that might do the trick. I installed it, and thankfully it worked.

I was unable to connect our Neat Bumblebee II microphone to the Pi because it receives power and transmits audio data over a single USB-C to USB-A cable. Unfortunately, the Pi’s only USB-C port is reserved for the power supply and does not support audio data transmission, and the Pi’s USB-A ports do not support power delivery (PD). As an alternative, I tried connecting a Bluetooth wireless headphone with a built-in microphone to the Pi. While the Pi was able to detect the headphone, it did not recognize it as an audio device, which suggests a driver may be needed. My next task for the upcoming week will be to troubleshoot this issue.

On schedule

Yes, my progress is on schedule.

Next week

This weekend, my top priority will be to find a solution to connect our Neat Bumblebee II microphone to the Pi or identify an alternative microphone that is compatible. Once the microphone is set up, I will focus on creating additional pages for the frontend. As we have decided not to ask customers to state all of the items they want to order in one sentence, I will work on creating a page where customers state one item at a time.

Shiyi Zhang’s Status Report for 03/18/2023

Personal Accomplishments

  • Frontend

This past week, my main focus has been on making adjustments to our frontend code. Specifically, I’ve been transitioning it from Django to Tkinter. The reason is simply that our current project priorities lie elsewhere and we need the frontend to be operational as quickly as possible. However, we may switch back to Django once we’re nearing completion of the project, since it provides us with more styling options.

Currently, the pages can wait for output variables from the backend, such as the system response, and then display the text like a typewriter. The pages are also able to disable audio inputs while waiting for responses from the backend or while the typewriter animation is still running.
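The typewriter effect can be driven by Tkinter’s `after()` scheduler. In the sketch below the frame-generation logic is separated from the scheduling so it can be tested without a display; the names are hypothetical, and in the real UI `show` would be something like `lambda s: label.config(text=s)` and `schedule` would wrap `root.after`:

```python
def typewriter_frames(text):
    """Yield the successive strings shown while 'typing' text on screen."""
    for i in range(1, len(text) + 1):
        yield text[:i]

def schedule_typewriter(text, show, schedule, delay_ms=50, on_done=None):
    """Drive the frames through a Tkinter-style scheduler.

    show:     callable that updates the label with the current frame.
    schedule: callable like root.after(delay_ms, fn).
    on_done:  called when typing finishes, e.g. to re-enable audio input.
    """
    frames = typewriter_frames(text)

    def step():
        try:
            show(next(frames))
        except StopIteration:
            if on_done:
                on_done()         # typing finished; re-enable the mic
            return
        schedule(delay_ms, step)  # queue the next character

    step()
```

The `on_done` hook is where the audio-input lockout described above would be lifted.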

  • Sensor

I have installed the operating system, a fan, and some heat sinks onto our Raspberry Pi. The code for the sensor has also been transferred to the RPi and is functioning correctly.

On schedule

Yes, my progress is on schedule.

Next week

I will fetch Lisa’s NLP code and incorporate it into my workspace, connecting it to my frontend. Additionally, I will also integrate the sensor code with the frontend.

Shiyi Zhang’s Status Report for 03/11/2023

Personal Accomplishments

During Spring break, I continued working on the client-side UI and now have two pages: one that appears when user speech is detected, and another that displays our menu and added items.

Page #1

This page is supposed to be voice-operated, but as we have not yet received the microphone, I have decided to use a click button that listens to the laptop’s microphone for now. By utilizing the Web Speech API and its JavaScript functions, the page is capable of displaying real-time transcribed text in the provided text area.

Page #2

This is where the customer views the menu and reviews their order before checkout.

Schedule

The client-side UI is close to completion, but it’s currently not talking to any sub-system such as the Django backend, so my progress is a bit behind schedule. I don’t think it’s too much of a problem since the mic/tool kit will arrive next week, and utilizing the outputs from the sensors should not take too long.

Next week

Next week I will be working on making the sensors and the mic work with the backend and, if I have time, making them work with the frontend as well. I will work with Lisa on the mic part since she is responsible for language parsing.

Shiyi Zhang’s Status Report for 02/25/2023

Personal Accomplishments

Over the past week, I’ve been working on creating the cart page for our kiosk. To make things easier, we decided to use Django, which will allow us to write our backend in Python. For the frontend, we thought it would be best to stick with HTML and rely on Bootstrap for styling. With this approach, we can keep things simple while still creating a great user experience.

Here’s what the page will look like (some places are displaying source code because they are expecting outputs from the backend, which is currently under development):

Code:

On schedule?

This week I’m a bit behind on schedule as we have a lot of pages to create. Originally I planned to make at least two pages per week. However, due to other commitments, such as wrapping up another project and taking a final exam for a half-semester class, I fell behind. I will get us back on track next week.

Deliverables for next week

I’m planning to create two additional web pages. The first page will be displayed when the kiosk is listening to the customer speaking, and the second page will be an error page that will only be shown when the customer’s audio quality is poor and we need them to repeat their request. These pages will help improve the overall user experience by providing clear instructions and feedback to our customers.

Shiyi Zhang’s Status Report for 02/18/2023

Related ECE Courses

I learned about modularity and ethics in 15-440 Distributed Systems, 17-214 Principles of Software Construction, and 17-437 Web Application Development. An example of using the modularity principle would be that, when designing a distributed system that involves clients, proxies, and servers, it is important to divide the system into separate, independent components or modules. For our project, we have divided the system into speech recognition (backend), motion detection (backend), database (backend), user interfaces (frontend), and hardware components such as microphones and infrared sensors. Each person in our team is responsible for one or more of these modules.

Personal Accomplishments

One of the tasks assigned to me was to make the infrared sensor work with the rest of the system. The sensor should signal the Raspberry Pi when a customer is within 0.3 – 1m away from the kiosk.

HC-SR501 PIR sensor (front & back)

After doing some research, I made a list of hardware components I needed:

1 x HC-SR501 PIR sensor

1 x 830-point solderless breadboard

1 x Raspberry Pi holder compatible with Raspberry Pi 4B

1 x T-shape GPIO Extension Board

1 x 20cm 40-pin Flat Ribbon Cable

5 x HC-SR501 Motion PIR sensor

Resistors

Jumper wires

I ordered them online, and they arrived this past week. I brought the Raspberry Pi home with me from the class inventory and spent Friday assembling them.

The next step was to write a Python script that would allow us to visualize when motion is detected. I downloaded the Thonny IDE and used it to write the code because of its vanilla-like interface.

import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM)        # use Broadcom (BCM) pin numbering
GPIO.setwarnings(False)
PIR_PIN = 23                  # GPIO pin wired to the sensor's digital output

GPIO.setup(PIR_PIN, GPIO.IN)
print('Starting up the PIR Module (click on STOP to exit)')
time.sleep(1)                 # let the sensor settle before polling
print('Ready')

while True:
    if GPIO.input(PIR_PIN):   # pin reads HIGH while motion is detected
        print('Motion Detected')
    time.sleep(2)             # poll every 2 s to avoid duplicate detections

It prints “Motion Detected” when the sensor detects movement. Along the way I discovered a few places that caused bugs and fixed them by:

  • Giving a 1-sec sleep to settle the sensor before entering the infinite loop.
  • Giving a 2-sec sleep in each iteration to avoid multiple motion detections. 

After running this code, the shell should look like:

On schedule?

Yes, my progress is on schedule.

Deliverables for next week

It doesn’t make sense to add more code for the sensor until the backend is a bit more developed (it is currently at the design stage). Therefore, I will move on to working on the frontend (customer UI & staff UI). I will be discussing with Lisa, who is responsible for the speech recognition part, what to display on the web pages, as this is partly dependent on how, and how quickly, speech is parsed.

Team Status Report For 2/18/2023

Principles of Engineering, Science, and Mathematics
  1. Modularity – We broke our design down into smaller chunks that each manage a cohesive group of tasks. For example, the program that runs on the Raspberry Pi consists of two modules: one monitors the infrared sensor and wakes up the main backend loop; the other manages the heavy-lifting for speech parsing and recognition. These modules can further be broken down into submodules such as signal processing, speech-to-text translation, and text parsing (NLP).
  2. Ethicality – One of the main goals of our project is to improve the welfare of fast-food restaurant employees. We believe that the success of our system will alleviate the burden of kitchen staff, enabling them to focus only on preparing food. Our infrared sensor and ordering station will also accommodate customers in wheelchairs as well as children.
Risks

Since we are still in the design phase of our project, the most significant risk that could jeopardize its success is failing to consider important design requirements, which would lead to fundamental flaws in our design. To mitigate this risk, we will carefully review feedback from our design presentation and discuss potential problems with our instructors.

Design Changes

We finalized our design for the design review presentation and created a system diagram for the current design:

We have already requested and received a Raspberry Pi 4 with 8GB memory from the ECE inventory. Once we present our design and receive feedback, we will start ordering the hardware components (infrared sensor, microphone, and sound shield).

Schedule

We reformatted our schedule and took spring break into consideration. 

Here’s the updated version: