Team Status Report for 4/27/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

  1. We are currently using the software solution, the distance estimation (DE) feature in the OR model, for the proximity module. Although its accuracy is decent, with all measurements falling within ± 30cm, the uncertainty is around 20%, which may jeopardize the use case requirement for detection distance. This risk can be mitigated by connecting an Arduino board between the Jetson and the ultrasonic sensor to obtain accurate distances. However, this alternative would increase the weight of the product, possibly exceeding the use case requirement for device weight, and introduce latency in data transmission. It would also increase the development time needed to transfer the distance data to the Jetson. This is the tradeoff we still need to consider: accuracy vs. weight and latency. 
  2. After connecting the camera module to the OR model, we realized that there is latency on every frame, possibly due to the recognition delay. Even when the camera is turned to a different object, the Jetson outputs the correct object only around 5 seconds after the change. This could seriously jeopardize the success of the project because our use case requirement is a recognition delay of less than 2.5 seconds. The risk can be mitigated by using an alternative method of capturing frames: a single-frame capture can be used instead of the video stream, which can potentially resolve the delay. However, the Jetson Nano capturing a frame, transferring it to the model, and deleting the captured frame's history may take more time than the current delay. This alternative could also delay product delivery because of the additional time needed to modify the program. 
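If we take the Arduino-plus-ultrasonic route from risk 1, the Arduino would time the HC-SR04's echo pulse and convert it to a distance before sending it to the Jetson. Below is a minimal sketch of that conversion; the function name and the temperature correction are our own illustration, not existing project code:

```python
def echo_to_distance_cm(pulse_duration_s: float, temperature_c: float = 20.0) -> float:
    """Convert an HC-SR04 echo pulse width to a one-way distance in cm.

    The sensor reports the round-trip time of the ultrasonic ping, so the
    distance is (speed of sound * duration) / 2. The speed of sound varies
    slightly with temperature (~331.3 + 0.606 * T m/s).
    """
    speed_of_sound_cm_s = (331.3 + 0.606 * temperature_c) * 100.0
    return pulse_duration_s * speed_of_sound_cm_s / 2.0
```

At 20 °C, a 10 ms echo corresponds to roughly 1.7 m, which is within the 2 m detection range we care about.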

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

Besides the change from ultrasonic sensor integration to the DE feature for the proximity module, no design changes have been made. 

Provide an updated schedule if changes have occurred.

Josh and Shakthi will work on integration and testing of the headless device. Meanwhile, Meera will work on the case for the device. Afterward, Josh, Meera, and Shakthi will conduct user testing and work on the final demo. 

List all unit tests and overall system test carried out for experimentation of the system. 

Testing                       Metric                                 Result
Object Recognition Model      > 70% on identifying an object         95% (38/40 images, 5 objects)
Distance Estimation Feature   ± 30cm of actual object distance       4 distances tested; average 21.5% uncertainty within ± 30cm
Text-to-Speech Module         User testing for surrounding sounds,   100% (20/20 person, 20/20 couch, 20/20 chair, 20/20 cat, 20/20 cellphone)
                              20 trials per object
Vibration Module              > 95% accuracy on vibration            100% (20/20 person, 20/20 nothing)
Device Controls (buttons)     100% accuracy on controls              100% (20/20 button A, 20/20 button B)
Module Integration (weight)   < 450g overall product weight          192g (device) + 209g (battery) = 401g < 450g
Recognition Delay             < 2.5s to recognize an object          ~8s delay over 20s of testing

Note: the recognition delay misses its requirement because of frame latency in the OR model. 

List any findings and design changes made from your analysis of test results and other data obtained from the experimentation.

  • Chose the pre-trained model instead of the model trained on an indoor object dataset:

Model        Real objects   Detected   Falsely detected   Percentage (%)
Pre-trained  58             49         5                  84.4
Trained      58             21         4                  36.2
  • Chose Distance Estimation feature in the OR model instead of ultrasonic sensor
    • Ultrasonic sensor does not work well with Jetson Nano
    • DE feature rarely goes over ± 30cm, although some calibration is necessary
Actual (m)   Detected (m)   Off (m)
1.80         1.82           +0.02
1.20         0.89           -0.31
0.20         0.38           +0.18
2.20         1.94           -0.26
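One cheap way to do the calibration the DE feature needs is a least-squares linear fit from detected to actual distance using the measurements above. This is a sketch under our own assumptions (the fit routine and variable names are illustrative, not project code):

```python
def fit_linear_calibration(detected, actual):
    """Least-squares fit of actual ~ a * detected + b over paired samples."""
    n = len(detected)
    mx = sum(detected) / n
    my = sum(actual) / n
    sxx = sum((x - mx) ** 2 for x in detected)
    sxy = sum((x - mx) * (y - my) for x, y in zip(detected, actual))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Measured pairs from the table above (meters).
detected = [1.82, 0.89, 0.38, 1.94]
actual   = [1.80, 1.20, 0.20, 2.20]
a, b = fit_linear_calibration(detected, actual)

def calibrate(d):
    """Apply the fitted correction to a raw DE reading."""
    return a * d + b
```

Because the identity mapping (a = 1, b = 0) is one of the candidate fits, the corrected readings can never have a larger squared error on the calibration set than the raw ones.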

Team Status Report for 4/20/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

1. We have completed the integration of the speech module and the object recognition module into the overall system and conducted primitive user testing during development. We have now looped back to the integration of the proximity module and found that the timing issues we faced were not due to the hardware or software we built, but to the Jetson Nano's innate inability to handle this kind of real-time processing. We found online that other users attempting the same integration of the HC-SR04 ultrasonic sensor with the NVIDIA Jetson Nano faced the same challenges, and the workaround is to offload the ultrasonic sensor to its own microcontroller. This hardware route would require us to completely separate the proximity module from the rest of the system. The alternative is a software solution that gets a distance estimation from the OR module, using the camera alone to approximate the distance between the user and the objects. This logic is already implemented in our OR module; our original idea was to use the ultrasonic sensor to compute distance, for improved accuracy and reduced latency, so that the proximity module would not rely on the OR module. However, due to this unforeseen turn of events, we may have to attempt the software solution. We aim to make the call on which direction to take this weekend. As Meera, our hardware lead, is away due to an unfortunate family emergency, we hope to get her input when she can on which (hardware/software) solution to take. For now, we will attempt the software route and conduct testing to see if it is viable for the final demo.

2. After connecting the camera module to the OR model, we realized that there is latency on every frame, possibly due to the recognition delay. Even when the camera is turned to a different object, the Jetson outputs the correct object only around 5 seconds after the change. This could seriously jeopardize the success of the project because our use case requirement is a recognition delay of less than 2.5 seconds. The risk can be mitigated by using an alternative method of capturing frames: a single-frame capture can be used instead of the video stream, which can potentially resolve the delay. However, the Jetson Nano capturing a frame, transferring it to the model, and deleting the captured frame's history may take more time than the current delay. This alternative could also delay product delivery because of the additional time needed to modify the program. 
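A common pattern for this kind of frame latency is to stop feeding the model a backlog of buffered frames and always hand it only the newest one. The following single-slot buffer is a minimal sketch under our own assumptions (class and method names are hypothetical): a capture thread would call put() once per camera frame, and the OR loop would call get() whenever it is ready.

```python
import threading

class LatestFrame:
    """Single-slot frame buffer.

    The consumer always sees the newest frame; stale frames are silently
    overwritten, so recognition never works through a queue of old frames.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None
        self._seq = 0  # how many frames have been put so far

    def put(self, frame):
        with self._lock:
            self._frame = frame  # overwrite, never queue
            self._seq += 1

    def get(self):
        with self._lock:
            return self._frame, self._seq
```

This trades dropped frames for responsiveness, which matches the use case: the user cares about the object in front of them now, not five seconds ago.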

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

As mentioned above, the HC-SR04 ultrasonic sensor integration seems to be incompatible with the NVIDIA Jetson Nano due to its inability to handle real-time processing. To address this problem, we have two potential solutions that would change the existing design of the system. We have outlined the two options below but have yet to make a concrete design change, due to unfortunate circumstances in our team and the need to direct our focus to the upcoming final presentation.

1. Hardware Option: Microcontroller to handle Ultrasonic Sensor

Entirely offload the proximity sensor from the NVIDIA Jetson Nano and have a separate Arduino microcontroller handle the ultrasonic sensor. We have found projects online that integrate an Arduino with the NVIDIA Jetson Nano and believe this is a feasible solution, provided our hardware lead Meera is also on board.
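If we go this route, the Jetson side would read distance readings from the Arduino over serial. Below is a sketch of just the parsing logic, assuming a simple hypothetical line protocol of the form DIST:<cm>; the protocol, function name, and range check are our own illustration, not a settled design:

```python
def parse_distance_line(line: str):
    """Parse one serial line of the form 'DIST:<cm>'.

    Returns the distance in cm as a float, or None for malformed or
    partial lines, which are common when a read lands mid-message.
    """
    line = line.strip()
    if not line.startswith("DIST:"):
        return None
    try:
        value = float(line[len("DIST:"):])
    except ValueError:
        return None
    # HC-SR04 is only rated for roughly 2-400 cm; reject junk outside that.
    return value if 0 <= value <= 400 else None
```

In practice this would sit behind a pyserial readline() loop, with None results simply skipped.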

2. Software Option: Distance estimation from OR module

As described above, pulling the distance data from the existing logic used to detect the closest object to the user in the OR module is our alternative route. The downsides to this include the lowered accuracy of the distance estimation done solely using an image as compared to using an ultrasonic sensor. The upside would be a much quicker design & implementation change as compared to the hardware route.

As the final presentation is coming soon, we may use the software route as a temporary solution and later switch to the microcontroller hardware route to ensure full functionality for the final demo. The hardware route will certainly add development time, and we risk cutting it close to the final demo, but we will achieve more robust functionality with this approach. As for cost, we believe we can acquire the Arduino from the class inventory, so this should not add much to our budget. The software route would be a quick implementation at zero cost.

Provide an updated schedule if changes have occurred.

Josh and Shakthi will work on integration and testing of headless device 4/20-4/25. Josh, Meera, and Shakthi will conduct testing and final demo work.

Team Status Report for 4/6/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

At this stage of the project, we have not identified many major risks that could affect the course of the next few weeks. However, this may still change, and we are ready to solve problems as they come. One risk that stands out is the constant running of the proximity module. In the current design, the ultrasonic sensor is always on and measures distance continuously; during testing we realized this may be an unnecessary load on the Jetson. It can be mitigated by having the mode toggle control the ultrasonic sensor of the proximity module as well, not just the vibration motor output. This is a call we will make once the OR module is integrated into the system and we can test the overall latency and functionality of the device.

A second risk is the development time and cost of 3D printing a device case. Given the few weeks we have remaining, we decided that the extended development time 3D printing may require is not worth the small difference in user experience compared to laser cutting or other methods of building the case and strap that make the device wearable. Hence, that is the approach we will be taking for the device case.

A final risk is that the PCB may need to be redesigned. Since we are having issues with the proximity module code, we plan to test whether the issues stem from interfering signals on the PCB. If this is the case, we will need to redesign the PCB and order a new one as soon as possible, so that we are able to test that one.

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We have decided to deviate from the original plan to 3D print our device case and instead look into laser cutting or simply hand-building our case using other materials. This change was necessary because of the cost and time required to 3D print with the minimal experience we have as a group. This will help us reduce cost and we believe this is a practical move given the remaining timeline of this project.

Provide an updated schedule if changes have occurred.

Integrating the OR model to NVIDIA Jetson will be extended due to unexpected system errors in the Jetson. 

Now that you have some portions of your project built, and entering into the verification and validation phase of your project, provide a comprehensive update on what tests you have run or are planning to run. In particular, how will you analyze the anticipated measured results to verify your contribution to the project meets the engineering design requirements or the use case requirements?

After we combine all subsystems on the Jetson, we plan to test the device in terms of accuracy. We will use several different indoor settings and check whether the device can detect the closest object within 2 meters and transfer the result to the audio output. We will validate the device by checking whether the result matches the actual closest object. The project will be successful if the accuracy is above 70%, which was the use case requirement. In addition to quantitative testing, we will reach out to our contacts at the Library of Accessible Media Pittsburgh to ask for volunteers to test our device. This will happen closer to the completion of the project, with some time set aside to fine-tune the device based on the feedback we receive.
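The 70% accuracy check described above can be made mechanical by logging each trial as a (predicted, actual) pair and scoring the run. A small helper sketch (the names are illustrative, not existing project code):

```python
def passes_accuracy_requirement(trials, threshold=0.70):
    """Score a test run against the 70% use case requirement.

    trials: list of (predicted_label, actual_label) pairs, one per trial.
    Returns (accuracy, passed).
    """
    correct = sum(1 for predicted, actual in trials if predicted == actual)
    accuracy = correct / len(trials)
    return accuracy, accuracy > threshold
```

For example, the 38-out-of-40 result from the OR model testing would score 0.95 and pass.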



Team Status Report for 3/23/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

  • YOLOv4 OR model: We decided to step back from YOLOv9 to YOLOv4 to incorporate the DE feature, because YOLOv5 through v9, unlike YOLOv4, do not let us manipulate the detected output data. Because training the model is not possible for YOLOv4, there is a risk of not being able to focus on indoor objects. However, after some research, we found that the pre-trained model uses MS COCO, which has 330K images with 1.5 million object instances across 80 object categories. This is a much better annotated dataset than the 640-image one we could find online, so it makes sense to use the pre-trained model. It is also possible to filter out specific outdoor objects, such as a car or a bus, in the DE feature, so we can still focus on indoor objects. 
  • PCB Assembly: Due to a delay in receiving the transistors for the PCB assembly, we have been set back by one week, as Meera had to wait to fix the PCB for a first round of testing. We aim to get a first version of the PCB ready in the first half of the coming week, but this one-week delay will certainly cut into the time we allocated for testing, modifying the current design, and re-ordering a new PCB if necessary. The contingency plan is now to get the first iteration of the PCB ready as quickly as possible and work on testing. 
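Filtering outdoor classes out of the pre-trained model's output, as described above, could look like the following sketch. The class list here is a hypothetical subset of the 80 MS COCO categories, and detections are assumed to be dicts with a "class" key; the real DE feature's data structures may differ.

```python
# Hypothetical subset of MS COCO class names we would treat as outdoor-only.
OUTDOOR_CLASSES = {
    "car", "bus", "truck", "train", "motorcycle", "bicycle",
    "airplane", "boat", "traffic light", "stop sign",
    "fire hydrant", "parking meter",
}

def filter_indoor(detections):
    """Drop outdoor-only detections so the pre-trained model can be kept
    while the output stays focused on indoor obstacles."""
    return [d for d in detections if d["class"] not in OUTDOOR_CLASSES]
```

This keeps the well-annotated pre-trained weights while narrowing what the user actually hears about.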

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

  • A change from YOLOv9 to YOLOv4 has been decided in our software module. Considering that the test setting will be a well-lit indoor environment with a few indoor objects, we expect the accuracy drop from YOLOv9 to YOLOv4 will not significantly impact the project. 

Provide an updated schedule if changes have occurred.

There has been a change in my (Shakthi’s) work schedule. I decided to move working on the speech module till after the implementation of the vibration module as the vibration module had a shorter end-to-end data flow and required a lot of the same processing of data as the speech module does. Now that I’ve completed the first iteration of the vibration module, I will be going back to finish up the speech module and work on its integration with the rest of the system.

There also has been a change in the work schedule for the OR model. Because we are using Yolov4 with the DE feature, the schedule has been adjusted accordingly. The testing stage has been pushed back by a week.

 

Due to the delays in getting the transistors, the hardware development schedule has also been pushed back until we get the transistors this week.

Team Status Report for 3/16/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The major risks remain the same as previous weeks: the weight of the device, the PCB connection between Jetson and peripherals, the identification of partial frames of objects, and the OR model version.

Another risk is the accuracy of the DE feature. Because it uses a reference image and a known object size to estimate the distance of an identified object, a misidentified obstacle will produce an incorrect distance and an incorrect nearest object, and the system will then announce the wrong obstacle to the user. This risk will be mitigated by raising the accuracy of the model with a better training method. A few adjustments to epochs and image resolutions will be made to achieve the greatest precision.
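For reference, the reference-image method described above is the standard similar-triangles estimate: calibrate a focal length from an object of known real width photographed at a known distance, then invert the relationship at run time. A sketch with made-up example numbers (the function names and the 45 cm chair are our own illustration):

```python
def focal_length_px(known_width_m, known_distance_m, width_px):
    """Calibrate the camera's focal length (in pixels) from a reference
    image of an object with known real width at a known distance."""
    return width_px * known_distance_m / known_width_m

def estimate_distance_m(known_width_m, focal_px, width_px):
    """Similar triangles: distance = real width * focal length / pixel width."""
    return known_width_m * focal_px / width_px

# Example: a 0.45 m wide chair appears 180 px wide in a reference shot at 2 m.
f = focal_length_px(0.45, 2.0, 180)
```

The weak point is visible in the formula: if the model mislabels the object, the wrong known_width_m is used and the distance is off by the same ratio, which is exactly the risk described above.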

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

The only potential change to the design of the system is that, if the pre-trained model identifies a batch of test objects better than the model trained on our own dataset, the pre-trained Yolov9-e.pt will be used as the weights of the OR model. 

Provide an updated schedule if changes have occurred.

Since we are still waiting for our order of transistors for the PCB, and have not yet ordered the audio converter, the hardware development schedule has been pushed back slightly:

Another update to the schedule is that the integration of the DE feature has been pushed back for another week due to its unexpected complexity and learning curve. 

Team Status Report for 3/9/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The major risks remain the same as last week: the weight of the device, the PCB connection between Jetson and peripherals, and the identification of partial frames of objects.

A new risk that could jeopardize the success of the project is the longevity of support for the YOLOv5 model. Although it is actively supported and updated by Ultralytics, we cannot guarantee that this version will be supported in the next few years. To mitigate this risk, we plan to upgrade to the YOLOv9 model, the most recent version (currently being updated). Another reason for this approach is that we found an open-source project that has already integrated a distance estimation feature into the OR model. Therefore, we can reduce development time and focus on training the model and creating a data processor to manage the output. If this development runs into issues due to the ongoing deployment by the developers, we plan to stick with the YOLOv5 model and meet the MVP. 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

There is currently one change to the existing design of the system: the upgrade from YOLOv5 to YOLOv9. This change is necessary to raise the accuracy of object recognition and to mitigate the risk of the module not being supported in the future. It can also reduce the development time of integrating a distance estimation feature, by referring to an open-source project that combines this model with the feature. 

Provide an updated schedule if changes have occurred.

By 03/11, retraining the Yolov9 + Dist. Est. feature will be completed. Then, by 03/15, implementing the data processor will be done, so that the testing can be done by 03/16.

Please write a paragraph or two describing how the product solution you are designing will meet a specified need…

Part A (written by Josh): … with consideration of global factors.

The product solution considers its influence in a global setting. Our navigation aid device is designed to be easily worn, with a simple neck-wearable structure. There are only two buttons on the device to control all the alert and object recognition settings, so visually impaired people can use the device without technological concerns or visual necessity. The only requirement is learning the functionality of each button, and we delegate this instruction to the user's helper. 

Another global factor considered is that the device outputs results in English. Because English is one of the most widely used languages, the product can be used not only by people in the United States but also by anyone who knows indoor objects by their English names. 

Part B (written by Shakthi): … with consideration of cultural factors. 

The product solution considers cultural factors by accounting for indoor items commonly used across many cultures. That is, the design accounts for items like a sofa, table, chair, trash bin, and shelf, which can be found in most indoor settings. Furthermore, as mentioned in part A, English is used to identify the items, so cultures where English is a first or second language can easily use the device.

Most importantly, the device aims to positively influence the community of visually impaired people. Its goal is to give them the confidence to move around indoor settings without safety concerns. After interviews with several blind people, we designed the device around the common struggles and challenges they face in daily life. We hope that our product can strengthen the relationship between people with visual needs and those without. 

Part C (written by Meera): … with consideration of environmental factors. 

The product solution considers environmental factors by helping users dispose of waste properly. A trash bin is one of the indoor objects in the dataset, so users can know when a bin is in front of them. This design encourages visually impaired users to put trash into the bin. 

Furthermore, this navigation device utilizes a rechargeable battery, so it reduces the total amount of product that may go to waste after its usage. In addition, we are connecting the sensor, vibration motor, and logic level converter to the PCB using headers and jumper wires instead of soldering them onto the PCB so that we can reuse them if the PCB needs to be redesigned. We are attempting to avoid using disposable items as much as possible to avoid harming the environment.

Team Status Report for 02/24/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The major risks remain the same as last week: the weight of the device, the PCB connection between Jetson and peripherals, and the identification of partial frames of objects.

A new risk that could jeopardize the success of the project is the dependency on the object recognition model. We have realized that training the YOLOv4 model with our own dataset is no longer possible due to broken dependencies in Darknet, the framework that the recognition model relies on. Therefore, we have changed our plan and will upgrade the model to YOLOv5, which is more recent than YOLOv4 and is maintained by a more reliable team, Ultralytics. The risk of such dependencies can be mitigated by upgrading the version incrementally as time permits. Our reach goal is to upgrade to YOLOv7, which is relatively new, and attach a distance estimation module to the new version. 

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

No major changes have been made to our design. Following suggestions from the LAMP advisory board, we are focusing on datasets for the OR model that incorporate hallways, stairs, doors, trash cans, and/or pets, since these are obstacles they identified as common and necessary to detect. One design change is that the OR model has moved from YOLOv4 to YOLOv5 due to the outdated dependency and unsupported module. 

Provide an updated schedule if changes have occurred.

Because the OR model has been upgraded to version 5, it needs a new distance estimation feature to be integrated. Therefore, we have postponed testing the image recognition model by a few days and added some time to work on integrating the feature to the upgraded model. 

Team Status Report for 02/17/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

  • Weight of the device:

We initially estimated a weight of 200 grams for the whole device, which in retrospect was a vast underestimate. Our on-board computer (NVIDIA Jetson Nano) alone comes to 250 grams, alongside other heavy components such as the PCB/Arduino, the rechargeable battery pack, and other sensors. We also intend to 3D print a case to improve the overall look and feel of the device. Given all of this, the total weight is going to be around 400-450g. We now run the risk of the device being too bulky, uncomfortable, and impractical for our use case. Although we will certainly make efforts along the way to reduce weight where we can, our backup plan is to offload the battery pack, and potentially the Jetson, to the user's waist so that the weight is distributed and less of a disturbance. 

  • Connection to peripherals:

We plan to connect the peripherals (buttons, sensor, and vibration motor) to the GPIO pins of the Jetson, with a custom PCB in between to manage the voltage and current levels. A risk with this approach is that custom PCBs take time to order, and there may not be enough time to redesign a PCB if there are bugs. We plan to manage this risk by first breadboarding the PCB circuit to ensure it is adequate for safely connecting the peripherals before we place the PCB order. Our contingency plan in case the PCB still has bugs is to replace the PCB with an Arduino, which will require us to switch to serial communication between the Jetson and Arduino and will cause us to reevaluate our power and weight requirements.

  • Recognition of partial frame of an object 

Although we plan to find a dataset of indoor objects that includes some partial images of objects, recognition of a cropped image of an object at close distance can be inaccurate. To mitigate this risk, we plan to implement a history referral system that tracks the history of recognized objects and determines the object from that history when the current confidence is below a chosen threshold. Then, even when a user walks so close to an object that the product can no longer recognize it, the system can still produce a result by using the history. 
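The history referral system described above could be sketched as follows. The window length, threshold, and majority-vote fallback are our own illustrative choices, not a settled design:

```python
from collections import Counter, deque

class RecognitionHistory:
    """Fall back to recent history when the current detection is weak,
    e.g. for a partially cropped object at close distance."""
    def __init__(self, maxlen=10, threshold=0.5):
        self.history = deque(maxlen=maxlen)  # recent confident labels
        self.threshold = threshold

    def update(self, label, confidence):
        if confidence >= self.threshold:
            self.history.append(label)
            return label
        if self.history:
            # Low confidence: report the most common recent label instead.
            return Counter(self.history).most_common(1)[0][0]
        return label  # no history yet; pass the weak guess through
```

For example, after a couple of confident "chair" detections, a low-confidence misread as the user closes in would still be announced as "chair".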

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

  • Switch from RPi to NVIDIA Jetson Nano:

In our previous report, we mentioned taking a network-based approach that offloads the bulk of the processing from the RPi to a server hosted on the cloud. This raised the issue of making the device WiFi-dependent, and we quickly decided against that approach, as we wanted our device to be accessible and easy to use rather than an added chore for the user. After some research, we found that switching from the RPi to the NVIDIA Jetson Nano as our on-board computer made the most sense for the purposes of our project, resolving both the problem of overexerting the RPi and the reliance on a network connection to a server. The NVIDIA Jetson has a higher-performance, more powerful GPU that makes it better suited to run our object recognition model on board. Here is an updated block diagram:

As for changes in the cost, we have been able to get an NVIDIA Jetson Nano from the class inventory and so there is no additional cost. However, we have had to place a purchase order for a Jetson-compatible camera as the ones in the inventory were all taken. This was $28 out of our budget, which we believe we can definitely afford, and we don’t foresee any extra costs due to this switch.

  • Extra device control (from 1 button to 2 buttons):

Our design prior to this modification was such that the vibration module that alerts the user of objects in their way would be the default mode for our device, and there would be one button working as follows: single-press for single object identification, double-press for continuous speech identification. As we discussed this implementation further, we realized that having the vibration module turned on by default during the speech settings may be uncomfortable and possibly distracting for the user. To avoid overstimulating the user, we decided to make both the vibration and speech modules controllable via buttons, allowing the user to choose the combination of modes they want to use. This change is reflected in the above block diagram, which now shows buttons A and B.

The added cost for this change should be fairly minimal, as buttons cost around $5-10, and it will greatly improve the user experience.
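The two-button control scheme can be modeled as a tiny state machine. Below is a sketch under our own assumptions (class and property names are hypothetical; gating the ultrasonic sensor on the vibration mode is an idea we are still considering, not a settled design):

```python
class DeviceModes:
    """Buttons A and B independently toggle the speech and vibration
    modules, so the user picks any combination of the two."""
    def __init__(self):
        self.speech_on = False
        self.vibration_on = False

    def press_a(self):
        self.speech_on = not self.speech_on

    def press_b(self):
        self.vibration_on = not self.vibration_on

    @property
    def ultrasonic_active(self):
        # Assumption: only run the proximity sensor when vibration
        # alerts are on, to reduce load on the Jetson.
        return self.vibration_on
```

Keeping the mode logic in one small class like this also makes it easy to unit-test the button behavior without any hardware attached.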

  • Custom PCB:

Since we have switched to using the Jetson and plan to configure the GPIO pins for connecting peripherals, we now need to design and order a custom PCB for voltage conversion and current limiting. This change was necessary because the operating voltage of peripherals and the GPIO pin tolerances are different, and require a circuit in between to ensure safe operation without damaging any of the devices.

The added cost of this change is the cost of ordering a PCB as well as shipping costs associated with this. Since we are using a Jetson from ECE inventory, and the rest of our peripherals are fairly inexpensive, this should not significantly impact our remaining budget.

Provide an updated schedule if changes have occurred.

The hardware development schedule has changed slightly since we are now ordering a custom PCB. The plan for hardware development this week was to integrate the camera and sensor with the Jetson, but since these devices haven’t been delivered yet, we will focus on PCB design this week and will push hardware integration to the following week.

During testing of a pre-trained object recognition and distance estimation model, we realized that the model only detects a small set of objects, which are irrelevant to indoor settings. Therefore, we have decided to train the model ourselves using a dataset of common indoor objects. The workload of searching for a suitable dataset and training the model has been added to the schedule, which pushes back the object recognition model testing stage by around a week. 

 

Please write a paragraph or two describing how the product solution you are designing will meet a specified need.

Part A: … with respect to considerations of public health, safety or welfare.

Our product aims to protect visually impaired people from encountering an unnoticed danger that would not be detected using a cane alone. Not only does the product notify the user what the object is, it also alerts them that there is an obstacle right in front. We are projecting the safety distance to be 2 meters, so that the user has time to avoid an obstacle in their own way. 

If the product is successfully implemented, it can also benefit blind people in a psychological sense. Users no longer need to worry about running into an obstacle and getting hurt, which can significantly reduce their anxiety when walking in an unfamiliar environment. In addition, the user has the option to switch the device to a manual mode, in which they press a button to identify the object in front of them. This alleviates the stress of hearing the recognized objects announced every second.

Part B: … with consideration of social factors. 

The visually impaired face significant challenges with indoor navigation, often relying on assistance from those around them or on guide dogs. To address this, our device combines depth sensors, cameras, object recognition algorithms, and speech synthesis to provide an intuitive and independent navigation experience. The driving factor for this project is to improve inclusivity and accessibility in our society, aiming to empower individuals to participate freely in social activities and navigate public spaces with autonomy. Through our collaboration with the Library of Accessible Media, Pittsburgh, we also hope to involve our target audience in the developmental stages, as well as in testing during the final stages of our project.

Part C: … with consideration of economic factors.

Guide dogs are expensive to train and care for, and can cost tens of thousands of dollars for the visually impaired dog owner. Visually impaired people may also find it difficult to care for their guide dog, making them inaccessible options for many people. Our device aims to provide the services of guide dogs without the associated costs and care. Our device would reach a much lower price point and would be available for use immediately, while guide dogs require years of training. This makes indoor navigation aid more financially accessible to visually impaired people.

Team Status Report for 02/10/2024

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

The RPi may overheat under constant use, and constant use also requires a large battery capacity, which increases device weight.

Plan: Make the RPi responsible only for pre-processing the camera data and compiling it, along with any necessary metadata, into a package to be sent to a server. This way the RPi does not do the bulk of the processing and only acts as a channel to the server. The implementation will require us to set up a cloud server that runs our object recognition model and listens for incoming packages from the RPi. The RPi will periodically (every 1 second, for example) send an image to the server. This allows for near-real-time detection while minimizing the load on the RPi.
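
The RPi-side packaging step above could look roughly like the following sketch. The field names, endpoint, and `capture_jpeg()` helper are hypothetical placeholders, since the protocol has not been fixed yet:

```python
# Sketch of the RPi-side pipeline: bundle a captured frame with metadata
# into a package for the recognition server. Field names, the endpoint,
# and capture_jpeg() are hypothetical; the protocol is not yet fixed.
import base64
import json
import time

def build_package(frame_bytes: bytes, device_id: str) -> bytes:
    """Wrap raw JPEG bytes and metadata into a JSON package."""
    return json.dumps({
        "device_id": device_id,
        "timestamp": time.time(),
        "frame": base64.b64encode(frame_bytes).decode("ascii"),
    }).encode("utf-8")

# Main loop sketch: capture, package, POST to the server, sleep 1 s.
# while True:
#     pkg = build_package(capture_jpeg(), "rpi-01")        # capture_jpeg() is hypothetical
#     urllib.request.urlopen("http://<server>/detect", data=pkg)  # assumed endpoint
#     time.sleep(1.0)
```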

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

One change was made to the existing design regarding the functionality of the ultrasonic sensor. To help reduce latency when the user confronts an obstacle, we plan to connect the sensor directly to the vibration module rather than integrating it with the recognition model. The cost will remain the same, but the role of the ultrasonic sensor is reduced from estimating an object's distance and detecting the object to simply detecting that an object exists in front of the user. We therefore plan to add distance estimation from a single-camera model. Preferably, we will integrate a single model that can take on both roles, to relieve the workload of integrating two different models.
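
Whether implemented in glue hardware or in a small script, the detection logic amounts to a threshold on the echo time. A sketch, assuming an HC-SR04-style sensor timing model (the 2 m threshold matches our use-case target; the sensor model itself is an assumption):

```python
# Sketch: converting an HC-SR04-style echo pulse width to distance and
# deciding whether to trigger the vibration module. The sensor timing
# model is an assumption; the 2 m threshold matches our use-case target.

SPEED_OF_SOUND_MPS = 343.0   # at roughly 20 degrees C
ALERT_DISTANCE_M = 2.0

def echo_to_distance_m(echo_pulse_s: float) -> float:
    """Round-trip echo time -> one-way distance in meters."""
    return SPEED_OF_SOUND_MPS * echo_pulse_s / 2.0

def should_vibrate(echo_pulse_s: float) -> bool:
    """Trigger the vibration module when an object is within range."""
    return echo_to_distance_m(echo_pulse_s) <= ALERT_DISTANCE_M

print(should_vibrate(0.006))  # ~1.03 m -> True
print(should_vibrate(0.015))  # ~2.57 m -> False
```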

Provide an updated schedule if changes have occurred.

Updated schedule to accommodate design changes and the need for further research:

2/10-2/16: Continue testing detection models to decide which ones to integrate

2/10-2/14: Continue research into camera modules, TOF cameras, and depth detection without ultrasonic sensors

2/12: Contact LAMP and set up a time to get feedback and suggestions for features

2/14: Order parts this week (RPi and potentially camera modules)