Team Status Report for 12.07

Here is what we did this week:

To optimize the YOLO-based object detection system, we migrated its operations to a cloud-based platform, creating an API endpoint. The Raspberry Pi now sends cropped images of the table area to the cloud for processing, reducing local computational delays. This change allows the Pi to allocate more resources to other critical tasks, such as feedback control for the charging pad and gantry system. After testing the cloud service 30 times, we observed an average detection time of 1.3 seconds and maintained an accuracy rate of 90%. These results met our requirements.

We improved the vision detection workflow by modifying the coordinate mapping system to align with the cropped images. Testing the gantry system under the updated configuration showed consistent alignment precision within 1.5 cm across 20 trials. These adjustments reduce data size and improve accuracy.

In terms of feedback control, we explored using computer vision to detect the charging pad’s light as an indicator of charging status. This approach provided faster and more reliable feedback compared to the app-based system, which suffers from delayed polling rates, especially in the background. Testing the light-based feedback on 10 images yielded a 90% success rate, with one failure due to improper HSV threshold settings. Adjustments to the threshold values are planned to improve reliability further.

On the hardware side, we addressed stability issues with the gantry system by replacing PLA components with steel rails and shafts. These changes, along with revised screwing methods, enhanced the stability of the center manipulator. To resolve cable interference issues, we increased the height of the charging pad and adjusted the distance between the glass layers.

System integration tests were conducted to verify end-to-end functionality, including phone detection, gantry movement, and charging activation. While the system worked as expected, minor adjustments to ensure precise positioning are ongoing.

Looking ahead, we will focus on finalizing system integration and refining the interaction between software and hardware components. Specific tasks include tuning the gantry’s tracking and alignment feedback mechanisms and completing documentation, including a video, poster, and final report.

Bruce’s Status Report for 12.07

This week, I optimized the YOLO-based object detection system by migrating it to a cloud-based solution using the free tier. I constructed an API endpoint that allows the Raspberry Pi to send images to the cloud for processing and receive responses. The reason for this change is that across 20 tests on the Pi, YOLO itself worked quite well, but once we incorporated other tasks such as CV for feedback control, the Pi's computational power proved insufficient, and running YOLO locally introduced noticeable delays. After switching YOLO to the cloud, the Pi can devote more computational power to other tasks, such as feedback control for the charging pad and gantry system. The cloud app is registered as a service to ensure it is always available. I tested it 30 times; the average detection time was 1.3 s, below our requirement, and accuracy remained around 90 percent since we are using the same model.

(yolo service on cloud || endpoint:  http://35.196.159.246:8000/detect_phone)
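The always-on registration mentioned above can be sketched as a systemd unit file (a hypothetical sketch: the unit name, paths, and entry script below are assumptions, not our actual configuration):

```ini
# /etc/systemd/system/yolo-detect.service  (hypothetical name and paths)
[Unit]
Description=YOLO phone-detection API
After=network.target

[Service]
WorkingDirectory=/home/app/yolo-service
ExecStart=/usr/bin/python3 server.py --port 8000
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

With a unit like this, `systemctl enable --now yolo-detect` would start the service at boot and restart it automatically if it crashes.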

I also improved the vision detection workflow by modifying how images are processed. Instead of sending the entire picture for detection, the Raspberry Pi now sends cropped images representing only the table area to the cloud. This change reduced data size and processing time and also improved accuracy. Additionally, I updated the gantry system's coordinate calculations to match the revised coordinate system; we tested about 20 times, and each time the alignment was within 1.5 cm.
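The Pi-side client can be sketched roughly as follows. This is a sketch under assumptions: the endpoint's exact request/response format and the crop-box pixel values are illustrative, not our real configuration.

```python
import json
import urllib.request

import numpy as np

ENDPOINT = "http://35.196.159.246:8000/detect_phone"
# (x1, y1, x2, y2) pixel bounds of the table area; illustrative values
TABLE_BOX = (120, 80, 1160, 620)

def crop_table(frame: np.ndarray, box=TABLE_BOX) -> np.ndarray:
    """Keep only the table region so less data crosses the network."""
    x1, y1, x2, y2 = box
    return frame[y1:y2, x1:x2]

def detect_phone(jpeg_bytes: bytes, timeout: float = 5.0) -> dict:
    """POST a JPEG-encoded crop (e.g. from cv2.imencode) and return the JSON reply."""
    req = urllib.request.Request(
        ENDPOINT,
        data=jpeg_bytes,
        headers={"Content-Type": "image/jpeg"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())
```

In the real pipeline the crop would be encoded (e.g. with OpenCV's `cv2.imencode(".jpg", ...)`) before being passed to `detect_phone`.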

What’s more, in terms of feedback control, I used computer vision to detect the light indicator on the charging pad as a real-time feedback mechanism. This approach provides immediate feedback on whether a device is charging, which is faster and more reliable than relying on the app. I initially tried reading the cloud database from the app, but the app-based feedback suffers from delayed polling rates, especially when the app runs in the background, so the light-based feedback is a better approach. I tested on about 10 images, and only once did it fail to correctly identify the blue light on the back of the charging pad (which indicates it is charging), so I may still need to tune the HSV threshold values.
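The light-based check can be sketched as an HSV threshold on the region around the pad's LED. The HSV bounds and trigger fraction below are illustrative starting values, not our tuned thresholds.

```python
import numpy as np

# OpenCV-style HSV ranges: H in [0, 180), S and V in [0, 256).
# These blue bounds are illustrative, not our calibrated values.
BLUE_LO = np.array([100, 120, 120])
BLUE_HI = np.array([130, 255, 255])

def is_charging(hsv_roi: np.ndarray, min_fraction: float = 0.02) -> bool:
    """Return True if enough pixels in the LED region fall in the blue range.

    `hsv_roi` is the region around the pad's indicator LED, already converted
    to HSV (e.g. with cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)).
    """
    mask = np.all((hsv_roi >= BLUE_LO) & (hsv_roi <= BLUE_HI), axis=-1)
    return bool(mask.mean() >= min_fraction)
```

Tuning then amounts to adjusting `BLUE_LO`/`BLUE_HI` (and possibly `min_fraction`) until the blue LED is detected reliably under our lighting conditions.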

As mentioned before, I conducted several unit and system-level tests to evaluate performance and identify areas for improvement. Unit tests included verifying API response times for YOLO processing, ensuring coordinate adjustments matched the new cropped image detection system, and validating the detection of the charging pad’s light under different lighting conditions.

I am currently on schedule, and next week will mainly involve fixing small bugs and integrating everything.

Steven’s Status Report 12.07

This week, our team successfully conducted the final presentation, effectively showcasing the significant advancements made in our hardware system. The presentation highlighted the structural improvements to the gantry system, particularly addressing the friction issues caused by the charging cable interacting with the charging pad connection. To resolve this, I enhanced our CAD model by increasing the distance between the glass interlayers on the tabletop by 2mm and redesigned the charging pad housing. These modifications ensure a better fit for the charging pad and prevent the cable from rubbing against the tabletop, thereby improving the system’s durability and performance.

In addition to the structural enhancements, I dedicated considerable effort to unit testing and recalibrating the gantry subsystem. This involved fine-tuning the subsystem to accurately move specified distances in the XY direction and ensuring reliable interaction with the computer vision system. Moving from one end of the gantry system to the other, over 57 cm of travel distance, the average error is about 1.8 mm. We have also run several tests on the Z-axis manipulator and hardcoded its travel value; each Z-axis run shows a 2-3 mm error, and while we are still working to improve stability, this margin of error is acceptable for the design. Through rigorous testing, we confirmed that the vision system accurately detects the coordinates of a cell phone and effectively directs the gantry to position the charging pad at the target location.

Looking ahead to next week, our focus will shift to further system integration and enhancing inter-system interactions. I will be collaborating with Bruce on state feedback detection and vision calibration for the gantry’s end effector, ensuring precise positioning after the charging pad is moved to its designated location. These efforts aim to refine the overall system functionality and ensure seamless operation across all components.

Overall, this week has been highly productive, with key improvements made to both the structural design and the functional integration of our gantry system. These advancements position us well for continued progress in the upcoming phases of the project.

Harry’s Status Report for 12.07

Based on our final presentation, the remaining work consists of software (gantry system tracking, alignment feedback control) and hardware (workload testing and cable management) tasks.

For the past week, after our presentation on Wednesday, we began testing charging pad alignment with the camera. The work I completed (primarily together with Steven on the hardware side) includes:

  1. Tested the original center manipulator, using test code to let it drag the charging pad. This configuration was not stable: the drag sometimes failed, and the manipulator was very unstable with the PLA shaft and rail. Out of around 20 trials, the center manipulator failed to drag the charging pad with its wire in around 8. Therefore, we decided to modify the material of the center manipulator, the design of the corner fasteners, and the charging pads.
  2. Changed to a steel rail and shaft, revised the screwing methods, and fixed the shaft and rail directly to the center manipulator. These changes ensure the center manipulator is stable and can reliably drag the charging pad.
  3. Helped with the corner fastener redesign and the charging pad structure redesign.
  4. Helped with software integration tests: camera detects the phone –> gantry moves the charging pad to the phone's position –> charging starts. Still tuning.
  5. Increased the distance between the two glass boards and raised the height of the charging pad so the USB-C wire does not come out of the bottom of the charging pad. 20 trials were run and 18 succeeded; in one failed trial the charging module was not fixed well to the charging pad, and in the other the manipulator failed to drag the charging pad.
  6. Fine-tuned the CV system. We placed the camera at several different positions to ensure it can view the whole tabletop. With the camera's position fixed, we recorded it and began fine-tuning the movement of the center manipulator, transforming pixel distance into real-world distance and adjusting input values accordingly.
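The pixel-to-real-world transform in step 6 can be sketched as a simple linear map. Only the 57 cm travel span comes from our measurements; the pixel corner and span values are placeholders, and the sketch assumes an overhead, roughly distortion-free camera view.

```python
# Placeholder calibration values (not our recorded camera position):
TABLE_W_CM, TABLE_H_CM = 57.0, 57.0   # gantry travel area, per our 57 cm span
PIX_ORIGIN = (120, 80)                # pixel coords of the table's top-left corner
PIX_SPAN = (1040, 540)                # pixel width/height of the table area

def pixel_to_world(px: float, py: float) -> tuple:
    """Map a pixel coordinate inside the table view to gantry cm coordinates,
    assuming a fixed overhead camera and a roughly linear (unwarped) view."""
    x0, y0 = PIX_ORIGIN
    sx = TABLE_W_CM / PIX_SPAN[0]     # cm per pixel, x direction
    sy = TABLE_H_CM / PIX_SPAN[1]     # cm per pixel, y direction
    return ((px - x0) * sx, (py - y0) * sy)
```

Fine-tuning then means adjusting the recorded origin and span until commanded gantry moves land the pad where the camera says the phone is.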

Next week: video, poster, and final report are ongoing. Overall on schedule, but there is still a lot of paperwork to do.

Steven’s Weekly Status Report 11.30


This week I continued to refine the hardware and control system for the gantry system. To ensure a close enough charging distance, I weighed the tradeoff between table thickness and charging effectiveness. Through testing, I found that a 3 mm desktop gives the best charging results, i.e., charging through a case without having to fully align the charging pad with the phone. However, the acrylic sheet we started with was not strong enough: its center section deformed significantly and blocked the movement of the charging pad. As a result, I urgently purchased and cut glass panels, redesigned the CAD for the corner fixing parts, and tested to ensure that the charging pad runs smoothly between the two layers of table top, dragged by magnets.

I also debugged the motor driver and wiring for the manipulator's up-and-down movement to ensure that it can perform the specified motion and drag the charging pad under the table. After experimentation, the gantry system now moves the charging pad well. Since the motor wiring had become confusing, I organized it and centralized the motor drivers into a cardboard box to avoid disconnections and other problems. Next week I will focus mainly on improving the system's accuracy, workload testing, and the smoothness of the integration.

As you designed, implemented, and debugged projects, what new tools or knowledge did you find you needed to learn to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

As an ECE student, I started with no specialized knowledge of gantry systems. I referenced many belt designs for 3D printers and other three-dimensional gantry units from the internet. At the same time, the motor driver was more complex than we thought. Since I had not used a stepper motor professionally before, it took me a lot of time to debug the PWM signals; I browsed Arduino user forums and YouTube tutorials to properly match our motor driver with the motor. Since the design process required a lot of rework, I also brushed up on skills such as SolidWorks and 3D-printing slicing through online tutorials and by learning from my mechanical engineering friends, which helped us efficiently complete our tests and improvements.

Team Status Report for 11.30

Our most crucial work for the past week is described in subsections as below.

  1. Gantry system debugging based on problems exhibited in the interim demo: the slow and uneven belt movement was caused by the deformation of a 3D-printed part. We reprinted this part in ABS, and the system can now move on all x, y, and z axes.
  2. The center manipulator operates as intended. It is attached to the manipulator on the center rail, and a shaft and rail are assembled so the center part can move up and down.
  3. Glass boards are used to replace the acrylic boards, keeping structural changes minimal.
  4. Code is updated for testing the whole gantry system.
  5. The charging module and charging pads are redesigned.
  6. The charging success rate is ensured. With roughly 2.6 mm glass, the center manipulator can use magnets to attract the charging pad and move it around. Testing shows a phone on top of the table can be successfully charged. Video demos were taken and may be shown in the presentation slides.

At present, the construction and testing of the overall hardware system have been completed, and more tests on integration and workload are needed.

Harry’s Status Report for 11.30

The work I finished for the past week is described as below:

  1. Center part design/printing/purchasing/assembly. Assembled the center part, which contains the stepper motor, base, shaft, rail, and upper part. The rail and shaft are 3D-printed; steel versions are ready to be cut (to be performed in the machine shop). The upper part (primarily for attracting the charging pad with magnets and moving it) can move up and down on the shaft. A video was taken. This part is fixed onto the center manipulator.
  2. Charging pad (the charging pad was reprinted twice before it worked as intended). Steven will show the charging results in detail.
  3. Acrylic boards are replaced by glass boards and fixed to the top of the table. The corners are reprinted to make sure they match the size of the glass board as well as the height of the charging pad.
  4. Whole-system overview, with the motor drivers fixed into a box and the power supply under the table. All hardware parts are completed, assembled, and tested many times. The gantry runs well with odometry and can operate on all x, y, and z axes. The center manipulator can drag the charging pad around using magnets.
  5. System integration has already begun: the camera controls gantry movement. Bruce will update on this.

Tools and knowledge needed to accomplish these tasks:

  1. Mechanical design skills
  2. Understanding of RPi programming logic (e.g., GPIO)
  3. Gantry system principles
  4. Testing and troubleshooting methods: hardware, software, mechanical, external force
  5. Quick 3D-printing skills
  6. System integration skills: how to connect software, hardware, and mechanical parts together, and how to control the subsystems

Learning strategies:

  1. Online documentation and blogs for RPi use
  2. Online videos for the specific motor driver
  3. RPi/motor/motor driver operational manuals
  4. Consulting other students and professors

We will continue system integration and work on slides.

Bruce’s Status Report for 11.30

This week I worked closely with my teammates to ensure that the gantry system, a critical component of our design, became fully operational. Together, we collaborated to identify and address mechanical and software-related issues that were hindering its performance. By developing and fine-tuning the gantry control code, we achieved smooth and precise movement, ensuring the charging pad can accurately align with the detected device’s position.

(The problem was the deformation of this 3D-printed part, which made the belt hard to rotate smoothly.)

On the vision system front, I successfully deployed our object detection software onto the Raspberry Pi. During the deployment process, I noticed that the original code caused high latency when running on the Pi’s constrained hardware resources. To address this, I reviewed and optimized the code, making key modifications that significantly improved its efficiency and performance. These changes have resulted in faster detection and response times, which are critical for seamless interaction with the gantry system.

The final step this week involved integrating the software and hardware subsystems to evaluate the entire system’s functionality. I conducted extensive testing, including a comprehensive suite of unit tests to ensure the individual components were functioning correctly and integration tests to validate the interaction between subsystems. These tests demonstrated promising results, with the system operating effectively and meeting our performance expectations.

I am currently on schedule. There aren't many specific tasks left for next week, since we have completed most of them. Therefore, next week I will focus on testing and resolving minor system integration issues.

Steven’s Status Report 11.16

Weekly Progress and Testing Plans

Harry and I spent about 30 hours this week building and testing the gantry system. We completed the hardware construction and testing of two sets of motor drives and belt drive systems; our motors can now drive the manipulator smoothly. We encountered many technical problems during the build. First of all, our motor drive initially had insufficient torque and could not drive the belt system during actual system verification. I reduced the microstep setting on the stepper driver from 6400 to 3200 steps per revolution and increased the drive voltage from 9 V to 12 V. We verified that this configuration could successfully drive the system by linking the motor to the belt and testing it with the whole gantry system.

We are still trying to test whether we can control the end effector using the difference in speed and rotation direction of the two motors, because of a delay in the end-effector part we 3D-printed at TechSpark. In the meantime, we have been verifying the charging of our wireless charging pads this week. We tested our 5 W and 12 W wireless charging modules as well as Apple's original wireless charging module; the 12 W module had the best results. We tested it on 3 mm and 5 mm acrylic boards, and with the 12 W pad we were able to charge a caseless phone smoothly through both thicknesses. For phones with cases, charging through 5 mm acrylic requires a very tight fit between the phone and the charging pad. We will print the charging-plate pallets next week for the 5 mm charging experiment to see if the 12 W module can fully meet our needs. We also plan to complete the gantry system's grab-and-release of the charging pad next week and provide an API for the computer vision system to move the pad.
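The microstepping change (6400 to 3200 microsteps per revolution) doubles the step angle, which typically leaves more usable torque per step at the cost of positional resolution. For a belt axis, the resulting linear resolution can be checked with a quick calculation; the pulley tooth count and belt pitch below are assumptions (a common 20-tooth GT2 setup), not measured values from our build.

```python
def steps_per_mm(microsteps_per_rev: int,
                 pulley_teeth: int = 20,
                 belt_pitch_mm: float = 2.0) -> float:
    """Linear resolution of a belt-driven axis: steps needed to travel 1 mm."""
    mm_per_rev = pulley_teeth * belt_pitch_mm  # belt travel per motor revolution
    return microsteps_per_rev / mm_per_rev

# With the assumed 20-tooth GT2 pulley:
#   6400 microsteps/rev -> 160 steps/mm (finer positioning, less torque per step)
#   3200 microsteps/rev ->  80 steps/mm (coarser positioning, more torque)
```

Even at the coarser setting, 80 steps/mm is well below our millimeter-scale accuracy targets, so trading resolution for torque was a reasonable choice.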

Bruce’s Status Report for 11.16

This week I focused on charging app development and CV development, and did some testing related to those subsystems.

Charging App Development

  • Completed the development of the iOS and macOS apps, enabling users to monitor their phone’s charging status in real-time.
  • Successfully integrated Google Cloud Firebase storage, providing a centralized system where users can view and manage the charging statuses of all their devices from the macOS app.
  • Added features to enhance the user experience, such as real-time updates and seamless synchronization across devices.

Computer Vision System

  • Finalized the object detection module, achieving approximately 90% accuracy in detecting phones on the table.
  • Implemented a two-frame difference technique to identify significant changes between video frames, signaling the potential placement of a phone, and avoiding running the YOLO model on every frame to save compute.
  • Incorporated a YOLO model to confirm the detection, identify the phone, and calculate its center coordinates for precise localization.
  • Enhanced the detection pipeline to minimize processing time while maintaining high accuracy.
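The two-frame difference gate above can be sketched as follows; the pixel threshold and trigger fraction are illustrative, not our tuned values.

```python
import numpy as np

def frame_changed(prev_gray: np.ndarray, cur_gray: np.ndarray,
                  pixel_thresh: int = 25, trigger_fraction: float = 0.01) -> bool:
    """Return True if enough pixels changed between consecutive grayscale frames."""
    # Cast to a signed type so the subtraction cannot wrap around at 0/255.
    diff = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return bool((diff > pixel_thresh).mean() > trigger_fraction)
```

In the capture loop, the YOLO model would only be invoked when `frame_changed(...)` is True, e.g. right after a phone lands on the table, rather than on every frame.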

The project is on track, with key software systems functional and aligned with the project timeline. Significant progress has been made on both the app and vision subsystems, ensuring they are ready for integration.

Next Week’s Plan:

  1. Enhance Charging Pad Stability:
    • Focus on improving the stability and reliability of the charging pad system to ensure consistent wireless charging performance.
  2. Optimize Phone Detection:
    • Fine-tune the YOLO model to reduce false positives and further improve accuracy.
    • Test the system with various phone models and orientations to enhance robustness.
  3. Integrate Vision and Gantry Systems:
    • Begin integrating the computer vision system with the gantry system, enabling the seamless transfer of phone location data to control the movement of the charging pad.

    • Test the communication between the vision system and the Raspberry Pi to ensure smooth coordination.

Testing
Software Testing (iOS and macOS Apps) (already completed):
Ensure real-time updates of charging status for multiple phones through Firebase. For this test, we charged multiple different devices (iPhone 12 Pro, iPhone 13 Pro, iPhone 14) simultaneously and measured the time taken for status changes to reflect in the apps, repeated 10 times. The results show an average update delay under 500 milliseconds, meeting real-time requirements, and verified seamless synchronization across the iOS and macOS platforms.

Data Consistency Testing (already completed)
Verified data consistency between Firebase and the app interfaces by making real-time changes to device charging status and observing updates on both the iOS and macOS platforms. The test was conducted 15 times; the data were consistent every time and updated within a very short period (< 500 milliseconds).

Object Detection Accuracy Testing (already completed)
Evaluated the accuracy of the YOLO model with a dataset of 3 phone placements under various conditions such as different lighting, orientations, and phone models. The test was conducted 20 times; in 18 of them all phone locations were correctly identified, achieving an overall detection accuracy of 90 percent, with occasional false positives for phone-like objects.