Team Status Report for 12.07

Here is what we did this week:

To optimize the YOLO-based object detection system, we migrated its operations to a cloud-based platform, creating an API endpoint. The Raspberry Pi now sends cropped images of the table area to the cloud for processing, reducing local computational delays. This change allows the Pi to allocate more resources to other critical tasks, such as feedback control for the charging pad and gantry system. After testing the cloud service 30 times, we observed an average detection time of 1.3 seconds and maintained an accuracy rate of 90%. These results met our requirements.
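
As a rough illustration, the Pi-side request might look like the sketch below, assuming the endpoint accepts a JPEG upload and returns JSON (the upload field name and response keys are illustrative, not the confirmed schema; the URL is the one listed later in this report):

    # Minimal sketch of the Pi-side call to the cloud detection endpoint.
    import requests

    DETECT_URL = "http://35.196.159.246:8000/detect_phone"

    def detect_phone(jpeg_bytes, timeout=5.0):
        # Upload the cropped table image; the service replies with the detection result.
        resp = requests.post(
            DETECT_URL,
            files={"image": ("table.jpg", jpeg_bytes, "image/jpeg")},
            timeout=timeout,
        )
        resp.raise_for_status()
        return resp.json()  # e.g. {"found": true, "center": [x, y]} (assumed schema)

    with open("cropped_table.jpg", "rb") as f:
        print(detect_phone(f.read()))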

We improved the vision detection workflow by modifying the coordinate mapping system to align with the cropped images. Testing the gantry system under the updated configuration showed consistent alignment precision within 1.5 cm across 20 trials. These adjustments reduce data size and improve accuracy.
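
The mapping is essentially a shift by the crop origin plus a pixel-to-centimeter scale; a minimal sketch follows, where the crop origin and calibration constant are placeholders rather than our measured values:

    # Sketch: convert a detected phone center in the cropped image to table coordinates.
    CROP_X0, CROP_Y0 = 120, 80   # crop origin inside the full camera frame (px, placeholder)
    CM_PER_PX = 0.05             # calibration constant: table cm per pixel (placeholder)

    def detection_to_table_cm(cx, cy, coords_in_full_frame=False):
        if coords_in_full_frame:
            cx, cy = cx - CROP_X0, cy - CROP_Y0  # re-base onto the cropped table area
        return cx * CM_PER_PX, cy * CM_PER_PX    # scale pixels to centimeters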

In terms of feedback control, we explored using computer vision to detect the charging pad’s light as an indicator of charging status. This approach provided faster and more reliable feedback compared to the app-based system, which suffers from delayed polling rates, especially in the background. Testing the light-based feedback on 10 images yielded a 90% success rate, with one failure due to improper HSV threshold settings. Adjustments to the threshold values are planned to improve reliability further.
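
The check itself can be as simple as an HSV mask over the pad's light; the sketch below shows the idea with OpenCV, though the HSV bounds and pixel-count threshold here are illustrative placeholders, not our tuned values:

    # Sketch of the light-based charging check (bounds/threshold are placeholders).
    import cv2
    import numpy as np

    LOWER_BLUE = np.array([100, 150, 80])
    UPPER_BLUE = np.array([130, 255, 255])

    def pad_light_on(bgr_image, min_pixels=50):
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_BLUE, UPPER_BLUE)
        return cv2.countNonZero(mask) >= min_pixels  # enough lit pixels => charging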

On the hardware side, we addressed stability issues with the gantry system by replacing PLA components with steel rails and shafts. These changes, along with revised screw-mounting methods, enhanced the stability of the center manipulator. To resolve cable interference issues, we increased the height of the charging pad and adjusted the distance between the glass layers.

System integration tests were conducted to verify end-to-end functionality, including phone detection, gantry movement, and charging activation. While the system worked as expected, minor adjustments to ensure precise positioning are ongoing.

Looking ahead, we will focus on finalizing system integration and refining the interaction between software and hardware components. Specific tasks include tuning the gantry’s tracking and alignment feedback mechanisms and completing documentation, including a video, poster, and final report.

Bruce’s Status Report for 12.07

This week, I optimized the YOLO-based object detection system by migrating it to a cloud-based solution using a free-tier instance. I constructed an API endpoint, which allows the Raspberry Pi to send images to the cloud for processing and receive the responses. The reason for this change is that when we ran 20 tests on the Pi, YOLO itself worked well, but once we incorporated other tasks such as the CV-based feedback control, the Pi’s computational power proved insufficient, and running YOLO locally caused noticeable delays. After switching YOLO to the cloud, the Pi can devote more computational power to other tasks, like feedback control for the charging pad and gantry system. The cloud app is registered as a service to make sure it is always available; I tested it 30 times, and the average detection time is 1.3 s, below our requirement, while accuracy remains around 90 percent since we are using the same model.

(YOLO service on cloud || endpoint: http://35.196.159.246:8000/detect_phone)
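
On the server side, the shape of the service is roughly the sketch below; this assumes FastAPI and the ultralytics package, with placeholder weights and an illustrative response schema, so the actual deployed code may differ:

    # Illustrative server-side sketch of the /detect_phone endpoint.
    import io
    from fastapi import FastAPI, UploadFile
    from PIL import Image
    from ultralytics import YOLO

    app = FastAPI()
    model = YOLO("yolov8n.pt")  # placeholder weights, not our trained model file

    @app.post("/detect_phone")
    async def detect_phone(image: UploadFile):
        img = Image.open(io.BytesIO(await image.read()))
        result = model(img)[0]
        for box in result.boxes:
            if result.names[int(box.cls)] == "cell phone":
                x0, y0, x1, y1 = box.xyxy[0].tolist()
                return {"found": True, "center": [(x0 + x1) / 2, (y0 + y1) / 2]}
        return {"found": False}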

I also improved the vision detection workflow by modifying how images are processed. Instead of sending the entire picture for detection, the Raspberry Pi now sends cropped images representing only the table area to the cloud. This change reduced data size and processing time and also improved accuracy. Additionally, I updated the gantry system’s coordinate calculations to match the revised coordinate system; we tested about 20 times, and each time the alignment was within 1.5 cm.
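
The crop-before-send step is straightforward array slicing plus JPEG encoding; in the sketch below, the table bounding box is a placeholder for the calibrated region we actually use:

    # Sketch of cropping the table region before uploading (box values are placeholders).
    import cv2

    TABLE_BOX = (80, 120, 560, 440)  # x0, y0, x1, y1 of the table area in the frame

    def cropped_table_jpeg(frame):
        x0, y0, x1, y1 = TABLE_BOX
        crop = frame[y0:y1, x0:x1]            # keep only the table region
        ok, buf = cv2.imencode(".jpg", crop)  # smaller payload for the cloud API
        assert ok
        return buf.tobytes()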

What’s more, for feedback control, I used computer vision to detect the light indicator on the charging pad as a real-time feedback mechanism. This approach provides immediate feedback on whether a device is charging, which is faster and more reliable than relying on the app. I initially tried reading the cloud database from the app, but that app-based feedback suffers from delayed polling rates, especially when the app runs in the background, so incorporating the light-based feedback is the better approach. I tested on about 10 images, and only once did it fail to correctly identify the blue light on the back of the charging pad (which indicates it is charging), so I still need to tune the HSV threshold values.
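
One way to update the thresholds is a small sweep over candidate HSV bounds scored against the labeled test images; the harness below is hypothetical (file names, labels, and the candidate range are all illustrative):

    # Hypothetical sweep over the lower hue bound, scored on labeled sample images.
    import cv2
    import numpy as np

    SAMPLES = [("pad_on_1.jpg", True), ("pad_on_2.jpg", True), ("pad_off_1.jpg", False)]

    def accuracy(h_low):
        lower, upper = np.array([h_low, 150, 80]), np.array([130, 255, 255])
        correct = 0
        for path, is_on in SAMPLES:
            hsv = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
            detected = cv2.countNonZero(cv2.inRange(hsv, lower, upper)) >= 50
            correct += (detected == is_on)
        return correct / len(SAMPLES)

    best = max(range(90, 121, 5), key=accuracy)  # keep the bound that scores best
    print("best lower hue bound:", best)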

As mentioned before, I conducted several unit and system-level tests to evaluate performance and identify areas for improvement. Unit tests included verifying API response times for YOLO processing, ensuring coordinate adjustments matched the new cropped image detection system, and validating the detection of the charging pad’s light under different lighting conditions.

I am currently on schedule; next week is basically just fixing small bugs and integrating everything.

Bruce’s Status Report for 11.30

This week I worked closely with my teammates to ensure that the gantry system, a critical component of our design, became fully operational. Together, we collaborated to identify and address mechanical and software-related issues that were hindering its performance. By developing and fine-tuning the gantry control code, we achieved smooth and precise movement, ensuring the charging pad can accurately align with the detected device’s position.

(The problem is the deformation of this 3D part, which makes it hard for the belt to rotate smoothly)

On the vision system front, I successfully deployed our object detection software onto the Raspberry Pi. During the deployment process, I noticed that the original code caused high latency when running on the Pi’s constrained hardware resources. To address this, I reviewed and optimized the code, making key modifications that significantly improved its efficiency and performance. These changes have resulted in faster detection and response times, which are critical for seamless interaction with the gantry system.

The final step this week involved integrating the software and hardware subsystems to evaluate the entire system’s functionality. I conducted extensive testing, including a comprehensive suite of unit tests to ensure the individual components were functioning correctly and integration tests to validate the interaction between subsystems. These tests demonstrated promising results, with the system operating effectively and meeting our performance expectations.

I am currently on schedule. There aren’t many specific tasks left for next week, since we have done most of them. Therefore, next week I am going to focus on testing and resolving some minor system-integration issues.

Bruce’s Status Report for 11.16

This week I focused on charging app development and CV development, and did some testing of those subsystems.

Charging App Development

  • Completed the development of the iOS and macOS apps, enabling users to monitor their phone’s charging status in real-time.
  • Successfully integrated Google Cloud Firebase storage, providing a centralized system where users can view and manage the charging statuses of all their devices from the macOS app.
  • Added features to enhance the user experience, such as real-time updates and seamless synchronization across devices.

Computer Vision System
  • Finalized the object detection module, achieving approximately 90% accuracy in detecting phones on the table.
  • Implemented a two-frame difference technique to identify significant changes between video frames, signaling the potential placement of a phone; this avoids running the YOLO model on every frame, saving compute (see the sketch after this list).
  • Incorporated a YOLO model to confirm the detection, identify the phone, and calculate its center coordinates for precise localization.
  • Enhanced the detection pipeline to minimize processing time while maintaining high accuracy.
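
The gating logic for the two-frame difference is sketched below; the intensity and pixel-count thresholds are illustrative placeholders, not the values tuned on our camera:

    # Sketch of the two-frame-difference gate that decides when to invoke YOLO.
    import cv2

    DIFF_THRESHOLD = 25        # per-pixel intensity change considered significant
    MIN_CHANGED_PIXELS = 2000  # how much of the frame must change to trigger YOLO

    def frame_changed(prev_gray, curr_gray):
        diff = cv2.absdiff(prev_gray, curr_gray)
        _, mask = cv2.threshold(diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
        return cv2.countNonZero(mask) >= MIN_CHANGED_PIXELS

    # Main-loop idea: run this cheap check on every frame, and only call the YOLO
    # model (to confirm a phone and compute its center) when it fires.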

The project is on track, with key software systems functional and aligned with the project timeline. Significant progress has been made on both the app and vision subsystems, ensuring they are ready for integration.

Next Week’s Plan:

  1. Enhance Charging Pad Stability:
    • Focus on improving the stability and reliability of the charging pad system to ensure consistent wireless charging performance.
  2. Optimize Phone Detection:
    • Fine-tune the YOLO model to reduce false positives and further improve accuracy.
    • Test the system with various phone models and orientations to enhance robustness.
  3. Integrate Vision and Gantry Systems:
    • Begin integrating the computer vision system with the gantry system, enabling the seamless transfer of phone location data to control the movement of the charging pad.
    • Test the communication between the vision system and the Raspberry Pi to ensure smooth coordination.

Testing
Software Testing (iOS and macOS Apps) (Already did):
Ensure real-time updates of charging status for multiple phones through Firebase. For this test, we charged multiple different devices (iPhone 12 Pro, iPhone 13 Pro, iPhone 14) simultaneously and measured the time taken for status changes to reflect in the apps, repeating 10 times. The results show an average update delay under 500 milliseconds, meeting the real-time requirement, and verified seamless synchronization across the iOS and macOS platforms.

Data Consistency Testing (Already did)
Verified data consistency between Firebase and the app interfaces by making real-time changes to device charging status and observing updates on both the iOS and macOS platforms. The test was conducted 15 times, and the data were consistent every time, updating within a very short period (< 500 milliseconds).

Object Detection Accuracy Testing (Already did)
Evaluated the accuracy of the YOLO model with a dataset of 3 phone placements under various conditions, such as different lighting, orientations, and phone models. The test was conducted 20 times; in 18 of the trials all phone locations were correctly identified, giving an overall detection accuracy of 90 percent, with occasional false positives for phone-like objects.

Bruce’s Status Report for 11.09

This week, my main focus was on finalizing the phone app. I have finalized our design of the app on iPhone; right now it allows users to see the current charging status, the estimated time to full charge, and the thermal state of the phone.

App UI when it is charging

App UI when it is not charging

App icon on the phone

So right now the app looks better, contains animations, and includes an app icon. However, one problem is that on iOS, to protect user privacy, the public API for reading the battery percentage only has a granularity of 5%. I also checked existing apps in the App Store, and those that show battery percentage have the same limitation.

I am currently on track.

Next week I will mainly focus on integrating this with the web browser and the macOS app, so that users can see their device info on their computer while their phones are charging. I will also help build the gantry system, aiming to get it mostly finished next week.

Team Status Report for 11.02

This past week, our team focused on advancing the structural, motor, and charging systems of the table. With the final structural components arriving early in the week, we examined the aluminum extrusions and connectors for the table frame, ensuring they fit within our design specifications. We created detailed 3D models using Fusion 360 and submitted critical parts, like the table corners and charging pad holders, for 3D printing. These components will provide support for the transparent layers and integrate with the gantry system, allowing precise positioning of the charging pad.

For the motor system, we configured the motors on the Raspberry Pi, establishing essential commands like turning and stopping. Leveraging compatibility with the Nvidia Jetson Orin Nano, we extended this motor functionality to the Jetson platform, which will allow us to translate coordinate data from the vision system into precise motor commands.
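
As a sketch of what those commands can look like, the snippet below assumes a step/dir stepper driver and the RPi.GPIO library, with placeholder pin numbers; Jetson.GPIO exposes a compatible interface, which is what makes extending the code to the Jetson straightforward:

    # Illustrative motor commands (step/dir driver and pin numbers are assumptions).
    import time
    import RPi.GPIO as GPIO

    STEP_PIN, DIR_PIN = 20, 21  # placeholder BCM pin numbers

    GPIO.setmode(GPIO.BCM)
    GPIO.setup([STEP_PIN, DIR_PIN], GPIO.OUT)

    def turn(steps, clockwise=True, step_delay=0.001):
        GPIO.output(DIR_PIN, GPIO.HIGH if clockwise else GPIO.LOW)
        for _ in range(steps):                 # one pulse per motor step
            GPIO.output(STEP_PIN, GPIO.HIGH)
            time.sleep(step_delay)
            GPIO.output(STEP_PIN, GPIO.LOW)
            time.sleep(step_delay)

    def stop():
        GPIO.output(STEP_PIN, GPIO.LOW)  # idle the step line between moves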

On the charging module side, we successfully integrated the charging pad with the Jetson Nano, establishing reliable communication between the two. Initial tests show that our setup can charge an iPhone 13 Pro in approximately 45 minutes. However, we identified stability issues with the charging pad, where it occasionally stops charging unexpectedly. We also experimented with materials and configurations to minimize interference from the gantry’s magnetic components, adjusting the design to ensure reliable charging without compromising the table’s layout.

We are currently on schedule.

Our primary objectives for the upcoming week are as follows:

  1. We will continue assembling and testing the structural and mechanical components, including verifying the 3D-printed parts for stability and functionality.
  2. We aim to complete the gantry system’s mechanical assembly and begin functional testing, focusing on motor control and precise movement based on coordinate inputs from the vision system.
  3. Enhancing the charging module’s stability will be a top priority to prevent disruptions in charging. We will conduct further tests to identify the root cause of the charging interruptions and make the necessary adjustments to achieve consistent charging performance.

Bruce’s Status Report for 11.02

This week, my main focus was the development of the charging module. I worked on the integration of the charging pad with the Jetson Nano, our project’s central processing unit. By the end of the week, I successfully incorporated the charging pad with the Jetson Nano.

One of the key metrics I measured this week was the charging efficiency of the system. With the current setup, it takes approximately 45 minutes to fully charge an iPhone 13 Pro. Currently, the charging pad is not stable enough, which means it sometimes suddenly stops charging the device. I haven’t figured out the reason yet, so I am going to focus on solving this next week.

One big consideration was the unique structural layout of the table. The design must accommodate both the moving parts of the gantry and the magnetic properties of the charging pad without interference. Through testing, I discovered that certain materials and structural arrangements could impact the strength and stability of the magnetic field around the pad. I adjusted the layout and experimented with different materials to mitigate this issue, but further refinement will be necessary as we move toward finalizing the design.

I am currently on schedule.

In the upcoming week, the primary objective will be to improve the stability of the charging pad during the charging process.  I will also begin to build the gantry system to make sure we have enough time to finalize the project.

Bruce’s Status Report for 10.26

This week, we received our Stereo Camera module (NVIDIA JETSON NANO/XAVIER NX, DUAL OV2311 Monochrome Camera Module). I tested the existing phone detection algorithm which I developed last week, implemented with YOLO, on the new camera hardware. Initial tests indicated a slight drop in detection accuracy, with only 12 out of 15 tests successfully detecting the phone’s location. To address this, I am currently retraining the YOLO model to better suit the parameters and specifications of the new camera setup.

(As shown above, the current YOLO model detects the calculator as a cell phone)
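
The retraining itself follows a standard workflow, roughly as sketched below assuming the ultralytics package; the base weights, dataset YAML, and epoch count are placeholders, not my exact configuration:

    # Minimal retraining sketch (paths and hyperparameters are placeholders).
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")             # start from pretrained weights
    model.train(data="table_phones.yaml",  # images captured with the new stereo camera
                epochs=50, imgsz=640)
    metrics = model.val()                  # check accuracy on held-out images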

Next, I finished the project’s ethics documentation, focusing on our commitment to public health, safety, and welfare. This report outlines the ethical considerations for the smart charging table, ensuring that we adhere to necessary standards.

Then, I improved the software app by integrating the phone API. Previously, I had designed the UI of the software, but the values shown were presets. Now I successfully call the iPhone API to retrieve device information, which allows the UI to display the actual phone temperature and charging status instead of preset values. The update offers a more realistic demonstration of the charging process.

I am currently on schedule with our project timeline. Each milestone this week has contributed to refining the system’s functionality and user experience, ensuring that our project objectives are met.

Next Steps:

Continue Training the YOLO Model to improve detection accuracy with the new camera.

Further Test and Validate the Updated UI, ensuring data accuracy for all phone models.

Collaborate with the Team to begin designing the gantry system with the camera-based detection system, ensuring seamless communication with the central unit (Jetson Nano) once all our parts arrive.

Bruce’s Status Report for 10.20

The first task I completed was finalizing the system implementation and testing plan in our design report. This is a crucial document that will guide the integration and verification processes for the project’s three core subsystems: the gantry system, vision system, and charging system. The plan outlines detailed steps for ensuring that each subsystem functions as expected and interacts seamlessly with the others. For instance, the testing plan includes scenarios that simulate real-world usage conditions, such as cases where the phone might be placed at varying orientations or positions on the table, or where the gantry system must make rapid adjustments. It also includes edge case testing, ensuring that the system can handle situations like multiple phones or objects being placed on the table simultaneously.

In parallel, I worked extensively on the computer vision system. Using the YOLO (You Only Look Once) object detection algorithm, I developed the first version of our vision system, which is capable of identifying the location of a phone on the table with an accuracy rate exceeding 90%. Since our parts have not arrived yet, I used my own camera to take pictures and detect the presence of a phone. To simulate the charging-table situation, I intentionally placed the phones face down, so the camera captures the backs of the phones. This first-version implementation ensures the system can detect phones reliably in most situations, which is essential for the proper functioning of the gantry system. The YOLO algorithm, known for its speed and precision in object detection, was trained on a custom dataset that reflects the specific conditions of our table, such as the camera angle, lighting conditions, and various phone models. I dedicated significant time to training the model and testing it against different cases to ensure accuracy, and the results so far have been very promising. The system successfully detects phones in a variety of orientations and under different lighting conditions, which provides confidence in its real-world application. However, there is still room for optimization, particularly in reducing the time it takes for the system to process each frame and communicate the phone’s coordinates to the gantry system. It is also possible to improve accuracy in some edge cases, such as when the light is insufficient or too strong.
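
In code terms, this first-version check amounts to running the model on an image, filtering for the phone class, and returning the box center; the sketch below assumes the ultralytics package, with placeholder weights and confidence threshold:

    # Sketch of the first-version detection step (weights/threshold are placeholders).
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")

    def phone_center(image_path, min_conf=0.5):
        result = model(image_path)[0]
        for box in result.boxes:
            if result.names[int(box.cls)] == "cell phone" and float(box.conf) >= min_conf:
                x0, y0, x1, y1 = box.xyxy[0].tolist()
                return (x0 + x1) / 2, (y0 + y1) / 2  # phone center in image pixels
        return None  # no phone detected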

I have made several iterations of testing, refinement, and retraining to reach the desired accuracy level. I invested time in adjusting hyperparameters and improving the training dataset to enhance the system’s performance. Moreover, I spent time debugging certain cases where the system initially struggled to detect phones placed at extreme angles or partially obscured by other objects. These issues have largely been resolved, and I am confident that the system is now reliable enough for initial integration with the gantry system.

At this stage, I am happy to report that my progress is on schedule. The development of the vision system and completion of the design report are major milestones that were planned for this week, and I am confident that we are moving forward according to the timeline we established. The next step will be to begin developing the other systems as soon as our parts arrive. I will also continue to improve my algorithm for the vision system and continue developing the app for our smart charging table.

In terms of deliverables for next week, my focus will be on continuing to develop the app we have right now. Additionally, I plan to work on optimizing the vision system. While the system currently performs well, there are improvements to be made in speed and accuracy, particularly in handling more challenging scenarios like detecting phones in low light or at difficult angles. Reducing the latency of the system’s response is also a priority, as faster detection will allow the gantry system to move more quickly and efficiently.

Bruce’s Status Report for 10.05

This week, I took on the role of presenting the design introduction for our project. To ensure a smooth delivery, I thoroughly prepared my part, focusing on clearly conveying the main concepts and design rationale. I spent significant time rehearsing the presentation and refining the slides to make sure they were visually engaging and easy to follow. I also gathered feedback from my teammates during practice sessions to further improve the clarity and flow of the content.

In addition, I was responsible for selecting the components for the charging pad module in our purchasing list. I researched extensively to determine the best options for our charging system, comparing various models and their specifications to ensure compatibility with our requirements. I also coordinated with my teammates to finalize other essential components, such as mechanical parts, ensuring our choices align well with the overall design and meet the project’s technical needs. This involved multiple discussions and iterations to balance cost, availability, and performance.

Lastly, I made improvements to the existing features of our app. I implemented a communication protocol that allows the app to connect with other devices via WiFi, enabling real-time data exchange and expanding its functionality. Additionally, I started investigating the iPhone API to access information about temperature and charging status, which will further enhance the app’s capabilities. I looked through the API’s documentation and experimented with different methods to retrieve the relevant data effectively. These improvements aim to make the app more user-friendly and provide more comprehensive information to users, preparing for our final design.


I am currently on schedule, and for the next steps:

  1. Finalize the purchase of the selected components and begin testing their integration into the system.
  2. Continue refining the app by completing the implementation of the iPhone API to gather temperature and charging status data, ensuring seamless integration.
  3. Work on the prototype of the charging pad module to verify its functionality and compatibility with the rest of the system.
  4. Collaborate with my teammates to set up a testing environment for the system, allowing us to identify any issues early and iterate on improvements.