Bruce’s Status Report for 10.26

This week, we received our Stereo Camera module (NVIDIA JETSON NANO/XAVIER NX, DUAL OV2311 Monochrome Camera Module). I tested the YOLO-based phone detection algorithm I developed last week on the new camera hardware. Initial tests indicated a slight drop in detection accuracy, with only 12 out of 15 tests successfully detecting the phone’s location. To address this, I am currently retraining the YOLO model to better suit the parameters and specifications of the new camera setup.

(As shown above, the current YOLO model detects a calculator as a cell phone.)
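For reference, below is a minimal sketch of how the detector runs on a single camera frame. It assumes the Ultralytics YOLO Python API; the weights filename, test image, and confidence threshold are illustrative placeholders, not our final configuration.

import cv2
from ultralytics import YOLO

# Load retrained weights ("phone_detector.pt" is a hypothetical filename)
model = YOLO("phone_detector.pt")

# One channel of the OV2311 stereo pair, captured ahead of time
frame = cv2.imread("stereo_left.png")

# Run detection; the 0.5 confidence threshold is an assumed starting point
results = model(frame, conf=0.5)

for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(f"phone at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), confidence {float(box.conf):.2f}")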

Next, I finished the project’s ethics documentation, focusing on our commitment to public health, safety, and welfare. This report outlines the ethical considerations for the smart charging table, ensuring that we adhere to necessary standards.

Then, I improved the software app by integrating the phone API. Previously, I had designed the UI of the software, but the values shown were presets. Now I can successfully call the iPhone API to retrieve device information, which allows the UI to display the actual phone temperature and charging status instead of preset values. The update offers a more realistic demonstration of the charging process.

I am currently on schedule with our project timeline. Each milestone this week has contributed to refining the system’s functionality and user experience, ensuring that our project objectives are met.

Next Steps:

Continue Training the YOLO Model to improve detection accuracy with the new camera.

Further Test and Validate the Updated UI, ensuring data accuracy for all phone models.

Collaborate with the Team to begin designing the gantry system with the camera-based detection system, ensuring seamless communication with the central unit (Jetson Nano) once all our parts arrive.

Steven’s Status Report 10.26

Due to an unforeseen procurement issue, we were not able to obtain the aluminum profiles, motors, and accessories needed to build our gantry system as planned this week, so we could not begin assembling it. I therefore spent most of my time this week on the parts of the project that didn’t require physical hardware.

This week I further refined the CAD modeling and simulation. Following up on last week’s design of the motor support structure, the motors can now be fixed in their designated positions and function properly in the environment we are working in. I fabricated the motor support structure at Roboclub using a 3D printer; however, because of a slicing problem, the printed parts did not reach the hardness we needed during testing, so next week we will continue 3D printing to get a strong enough motor mount.

 

Carrying on from last week’s motor controller development, I wired the Jetson Orin Nano this week. There was some difficulty with the pin assignments since we now have more peripherals, but because the motor driver only needs simple digital pulse signals, we were able to assign it to a few free pins. The wiring was tested with a logic analyzer and works properly.

We have been notified that our hardware will arrive this Friday afternoon, and next week we hope to finish connecting and debugging the motor controllers and begin building and debugging the gantry system.

Team Status Report 10.26

The work we completed this week can be discussed in the following sections:

  1. Camera Integration and Algorithm Development:
    • We have integrated a new Stereo Camera module (NVIDIA JETSON NANO/XAVIER NX, DUAL OV2311 Monochrome Camera Module) into our system. Initial tests with the YOLO model showed a decrease in detection accuracy; hence, we are currently retraining the model to adapt to the new camera’s specifications.
    • Our work on the software app saw considerable advancement with the successful integration of the iPhone API, allowing real-time display of device information such as phone temperature and charging status.
  2. Hardware and Mechanical Design:
    • Due to a delay in receiving aluminum profiles and motor accessories, our plans to assemble the gantry system were postponed. Instead, we focused on refining the CAD models and simulating motor placements.
    • The motor support structures were fabricated using a 3D printer at the Roboclub. However, issues with the slicing process led to insufficient hardness, necessitating further printing next week.
  3. Electrical and Control Systems:
    • Wiring the Jetson Orin Nano proved challenging due to the increased number of peripherals, but we successfully assigned the motor driver to a few free pins, since it only requires simple pulse signals. The setup was verified using a logic analyzer.
    • We are preparing for the arrival of the hardware this Friday, with plans to connect and debug the motor controllers and start constructing the gantry system.
  4. Documentation and Compliance: We carefully went through the comments on our design report, which identified both quantitative and qualitative problems. For example, the use-case requirements section needs to include quantitative specifications, and several other sections were missing important details. We are therefore refining the report’s content and will continue working on it next week to include more detail regarding our design.

Next week, our team will focus on enhancing the YOLO model’s detection accuracy through continued retraining and testing. We’ll also conduct further tests on the updated user interface to ensure accurate data display for all phone models. Once we receive all necessary parts, we’ll begin assembling the gantry system and cutting the rails. Additionally, we’ll work on refining our design review report to address any remaining issues.

 

 

Harry’s Status Report for 10.26

This week we obtained the camera borrowed from the 18500 inventory: Stereo Camera NVIDIA JETSON NANO/XAVIER NX, DUAL OV2311 Monochrome Camera Module. The camera’s specifications can be found on the following website: https://www.arducam.com/product/arducam-2mp-stereo-camera-for-raspberry-pi-nvidia-jetson-nano-xavier-nx-dual-ov2311-monochrome-global-shutter-camera-module/

The Arducam 2MP Stereo Camera MIPI Module is tailored for integration with the Jetson Nano/Xavier NX platforms, enhancing our capabilities in stereo vision for applications like depth sensing, 3D mapping, and SLAM. This module incorporates two synchronized 2MP monochrome global-shutter OV2311 image sensors, providing the high resolution and sensitivity essential for precise depth information. It connects via the MIPI CSI-2 interface and operates with V4L2 camera drivers. Although it can achieve frame rates up to 50fps at 3200×1300 resolution, it does not support ISP processing on the Nvidia Jetson platform, so we must utilize external image processing tools and applications that comply with the V4L2 framework. The camera module includes a selection of mounts and cables, but does not come with the Jetson board itself. We are therefore working on our OpenCV code for the camera, currently using a Gaussian blur for image smoothing and the Sobel operator for edge detection. Once the code is done early next week, we will begin testing on the camera.
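As a reference for where the code stands, here is a minimal sketch of this preprocessing pipeline; the input filename and kernel sizes are illustrative assumptions, not final values.

import cv2
import numpy as np

# The OV2311 sensors are monochrome, so we load the frame as grayscale
frame = cv2.imread("test_frame.png", cv2.IMREAD_GRAYSCALE)

# Gaussian blur to suppress sensor noise before differentiation
blurred = cv2.GaussianBlur(frame, (5, 5), 1.0)

# Sobel gradients in x and y, combined into a gradient magnitude image
grad_x = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)
grad_y = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(np.sqrt(grad_x ** 2 + grad_y ** 2))

cv2.imwrite("edges.png", edges)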

Apart from the camera and the Jetson Nano testing, I spent a long time with my teammates going over the design review report. During last week’s meeting, many problems with our design report were pointed out, and we need to fix all of them. For some of them we have a clear idea of the fix, while for the remaining parts we still need to consult the professors and TAs. We will continue refining our design review report next week.

Overall, my progress is on schedule. For next week, I plan to continue testing the camera we obtained last week. At the same time, as mentioned earlier, our motors and linear rails have arrived, so we will divide the work and start cutting the rails and constructing the gantry system. We will start testing the motors next week as well.

Team Weekly Status Report 10.20

This week, our focus was on advancing multiple aspects of the smart charging table project, including both hardware and software developments, while ensuring that we remain on schedule. A key milestone was the finalization of our design review report, where each team member contributed significantly to different sections. For example, in the related work section, we examined a previous team’s similar project, identified their challenges, and integrated improvements into our design. This reflection on prior work led to several modifications to our original design, particularly in how we will combine the gantry system with the robotic control ideas.

On the hardware side, we received and began working with the Nvidia Jetson Orin Nano, which will serve as the central control unit for our project. Our efforts this week were focused on familiarizing ourselves with its interface, setting up the initial programming environment, and conducting basic circuit tests in Techspark. We set up the Jetson Orin Nano using a DisplayPort connector with a keyboard and monitor, successfully gathering information about the GPIO header pinout from resources online. We used this pinout to test basic functionalities, such as powering an LED and a motor driver through breadboard circuits. We also explored communication protocols such as UART, SPI, and I2C, which will be critical for integrating various sensors and components moving forward.

In parallel, we continued testing the camera system, despite not yet receiving the stereo camera we ordered. Using the available camera, we implemented basic image filtering tests, simulating real-world conditions by positioning the camera to capture electronics on a table. We are still refining these tests, taking into account environmental factors like lighting and object positioning. Once the stereo camera arrives, we plan to compare its performance with the Nvidia Jetson Nano’s built-in camera to determine which setup is more efficient for our application.

Regarding the vision system, we made significant progress by implementing the YOLO object detection algorithm. The first version of our system is now capable of detecting phones on the table with over 90% accuracy, even when the phone is face down, simulating the real charging scenario. The algorithm was trained on a custom dataset to reflect the specific conditions we anticipate in real use, such as varied lighting and phone orientations. While this initial implementation is promising, we are working to further optimize the system by reducing latency and improving performance in edge cases, such as low-light conditions or when the phone is obscured.

For the gantry system, we are developing the motor control system using TB6600 motor controllers. This week, we wrote the initial control code using the Jetson.GPIO library and successfully tested it on the Jetson Orin Nano. The next step is to integrate the motor controllers with the actual motors, once they arrive, and begin full power load testing with a second controller for more complex gantry operations.

Consideration of Global Factors
The smart charging table we are designing addresses a global need for efficient, multi-device wireless charging in a variety of settings, from personal homes to commercial spaces. With the growing number of mobile devices worldwide, especially in fast-growing markets, there is a clear need for versatile and reliable charging solutions that can cater to different phone models and device types. Our system, leveraging the Nvidia Jetson Orin Nano’s processing power and the precision of our gantry system, is capable of identifying and charging multiple devices simultaneously and automatically, making it applicable to global consumers who demand flexibility and efficiency in their device management. By designing a system that can adapt to varying environmental conditions and device types, we are catering to a broad audience beyond just technologically advanced or academic environments. What’s more, our product is fully automatic, which ensures that the system can be deployed in any region, helping users efficiently manage their devices without requiring advanced technical knowledge.

Consideration of Cultural Factors
Our product design takes into account the diverse ways in which people across different cultures use and interact with technology. For example, in regions where multiple-device ownership is common, our system’s ability to detect and charge several devices at once offers a significant advantage. We have also considered user interface design, ensuring that the system is intuitive and easy to use for people with varying levels of technological expertise. The seamless operation of our gantry and vision systems ensures that the system can be used by anyone, regardless of their background or experience with similar products. Additionally, we have taken care to ensure that the design is adaptable to different environments and cultural contexts, whether it is being used in a high-tech office or a traditional household. Lastly, our software is going to support multiple languages, which will make the product easy to use for people who do not understand English.

Consideration of Environmental Factors
Our design incorporates environmental sustainability by focusing on energy efficiency and reducing material waste. The gantry system has been optimized to minimize power consumption without sacrificing performance, and the use of 3D-printed components allows us to limit material waste during production. Moreover, by encouraging users to centralize their device charging in one place, we are promoting more efficient energy usage compared to having multiple chargers in various locations. The smart charging table’s longevity and adaptability also reduce electronic waste, as the system is designed for durability and can accommodate future technology updates, extending its usable life. These considerations ensure that our system is not only practical and efficient but also environmentally responsible.

(A was written by Steven Zhang, B was written by Bruce Cheng, C was written by Harry Huang)

Steven’s Weekly Status Report 10.20

This week my work has been focused on following up on the progress of parts procurement and driver development for the motor controllers. I worked on the TB6600, a stepper motor controller. My current design is to connect the + and – terminals of the motor controller to the motor, use the Jetson GPIOs to drive the controller’s Pulse, Direction, and Enable terminals, and control it through the Jetson.GPIO library. This week I finished writing the initial control code and successfully ran it on the Jetson. I used a digital logic analyzer to check the waveforms of the GPIO pins to make sure they were working properly. Next week, when the controllers and motors arrive, I will start working on the hardware connections to actually control the motors from the Jetson, and add a second controller for full power load experiments.
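As a sketch of what that initial control code looks like, the snippet below pulses the TB6600 through Jetson.GPIO; the pin numbers, step count, and timing are illustrative assumptions rather than our final wiring.

import time
import Jetson.GPIO as GPIO

# BOARD pin numbers for the TB6600 inputs (assumed free pins, not final wiring)
PUL, DIR, ENA = 33, 31, 29

GPIO.setmode(GPIO.BOARD)
GPIO.setup([PUL, DIR, ENA], GPIO.OUT, initial=GPIO.LOW)

def step(n_steps, forward=True, pulse_s=0.0005):
    # The TB6600 advances the motor one (micro)step per rising edge on PUL
    GPIO.output(DIR, GPIO.HIGH if forward else GPIO.LOW)
    for _ in range(n_steps):
        GPIO.output(PUL, GPIO.HIGH)
        time.sleep(pulse_s)
        GPIO.output(PUL, GPIO.LOW)
        time.sleep(pulse_s)

try:
    step(400, forward=True)  # 400 pulses = one revolution at an assumed 2x microstep setting
finally:
    GPIO.cleanup()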

Meanwhile, this week I’ve been working on the precision design of the mechanical hardware. I designed some 3D-printed parts, such as the motor supports and the fixed and moving pulleys for the gantry system. We plan to start 3D printing parts next week to ensure the mechanical stability and viability of the whole system.

We submitted our purchase request two weeks ago, and since most of our purchases come from Amazon, I expect that we should receive most of our parts next week. Next week we will be working on building and field testing the hardware. I plan to spend some time completing the aluminum frame build and basic experimentation with the gantry system.

Bruce’s Status Report for 10.20

The first task I completed was finalizing the system implementation and testing plan in our design report. This is a crucial document that will guide the integration and verification processes for the project’s three core subsystems: the gantry system, vision system, and charging system. The plan outlines detailed steps for ensuring that each subsystem functions as expected and interacts seamlessly with the others. For instance, the testing plan includes scenarios that simulate real-world usage conditions, such as cases where the phone might be placed at varying orientations or positions on the table, or where the gantry system must make rapid adjustments. It also includes edge case testing, ensuring that the system can handle situations like multiple phones or objects being placed on the table simultaneously.

In parallel, I worked extensively on the computer vision system. Using the YOLO (You Only Look Once) object detection algorithm, I developed the first version of our vision system, which is capable of identifying the location of a phone on the table with an accuracy rate exceeding 90%. Since our parts have not arrived yet, I used my own camera to take pictures and detect the presence of the phone. To simulate the charging table situation, I intentionally placed the phone face down, so the camera captures the back of the phone. This first version ensures the system’s ability to detect phones reliably in most situations, which is essential for the proper functioning of the gantry system. The YOLO algorithm, known for its speed and precision in object detection, was trained on a custom dataset that reflects the specific conditions of our table, such as the camera angle, lighting conditions, and various phone models. I dedicated significant time to training the model and testing it against different cases to ensure accuracy, and the results so far have been very promising. The system successfully detects phones in a variety of orientations and under different lighting conditions, which provides confidence in its real-world application. However, there is still room for optimization, particularly in reducing the time it takes to process each frame and communicate the phone’s coordinates to the gantry system. It is also possible to improve accuracy in some edge cases, such as when the light is insufficient or too strong.
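For a sense of the training loop, here is a minimal sketch of fine-tuning on a custom dataset using the Ultralytics package (an assumed implementation; the dataset path, base model, and epoch count are illustrative placeholders).

from ultralytics import YOLO

# Start from pretrained weights ("yolov8n.pt" is an assumed base model)
model = YOLO("yolov8n.pt")

# dataset.yaml points at train/val images labeled with a single "phone" class
model.train(data="dataset.yaml", epochs=50, imgsz=640)

# Evaluate on the validation split, then try a sample table image
metrics = model.val()
results = model("table.jpg")
print(len(results[0].boxes), "phone(s) detected")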

I have made several iterations of testing, refinement, and retraining to reach the desired accuracy level. I invested time in adjusting hyperparameters and improving the training dataset to enhance the system’s performance. Moreover, I spent time debugging certain cases where the system initially struggled to detect phones placed at extreme angles or partially obscured by other objects. These issues have largely been resolved, and I am confident that the system is now reliable enough for initial integration with the gantry system.

At this stage, I am happy to report that my progress is on schedule. The development of the vision system and the completion of the design report are major milestones that were planned for this week, and I am confident that we are moving forward according to the timeline we established. The next step will be to begin developing the other subsystems as soon as our parts arrive. I will also continue to improve the vision system algorithm and keep developing the app for our smart charging table.

In terms of deliverables for next week, my focus will be on continuing to develop the app we have right now. Additionally, I plan to work on optimizing the vision system. While the system currently performs well, there are improvements that can be made in terms of speed and accuracy, particularly in handling more challenging scenarios like detecting phones in low light or at difficult angles. Reducing the latency of the system’s response is also a priority, as faster detection will allow the gantry system to move more quickly and efficiently.

Harry’s Status Report for 10.20

During the week prior to fall break, we focused on our design review report for the project. I completed the use case/design requirements, design trade studies, and related work sections. In the related work section, I looked into a team that did a similar project last year, discussed their problems, and identified ways we could do better. Some of these thoughts are reflected in the trade studies section, because we modified our design several times to combine the gantry system and robot ideas.

Also, we received our central control unit, the Nvidia Jetson Orin Nano, and tested some basic circuits during class time at Techspark. In addition, we ran some tests on the camera we already have, because the camera we ordered hasn’t arrived yet. We looked into some basic implementations of image filtering and segmentation, and we will continue to work with our original camera next week. After we receive the stereo camera, we will compare the two devices and pick the one that is easier to program.

For me, this week was mainly about familiarizing myself with the Nvidia Jetson Orin Nano. There are detailed descriptions on the Nvidia website regarding the port names and usage. With my own keyboard and monitor, I was able to connect to the Jetson Nano using the DisplayPort connector and begin programming. At the same time, I gathered information about the GPIO header pinout:

https://jetsonhacks.com/nvidia-jetson-orin-nano-gpio-header-pinout/

I followed this pinout to implement some basic tests, for example using wires and a breadboard to light up an LED, power a motor driver, etc. These pins also support the UART, SPI, and I2C communication protocols. I also tested the existing camera with the Jetson Nano to complete some basic image filtering. It works now, but further testing is needed because this time I took pictures at random; we should specifically test taking pictures from below the table to view the electronic devices on top of it. We confirmed that Techspark has transparent acrylic boards, but we haven’t tested them together with the camera yet. I also dug into the camera settings and found that ambient light can affect image filtering results, so we assume the table will be placed indoors without strong light from above.
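As a concrete example, the LED test boils down to a few lines of Jetson.GPIO; the pin number here is an illustrative pick from the linked pinout, not necessarily the one I used.

import time
import Jetson.GPIO as GPIO

LED_PIN = 7  # BOARD pin 7 on the 40-pin header (illustrative choice)

GPIO.setmode(GPIO.BOARD)
GPIO.setup(LED_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    for _ in range(10):  # blink ten times through a current-limiting resistor
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()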

Overall, my progress is on schedule. This week I began to familiarize myself with the Nvidia Jetson Nano and implemented some basic tests. Next week we will combine the Jetson Nano with the components that arrive, continue developing the image filtering code with our existing camera, and begin constructing the gantry system once we receive the corresponding components.

Bruce’s Status Report for 10.5

This week, I took on the role of presenting the design introduction for our project. To ensure a smooth delivery, I thoroughly prepared my part, focusing on clearly conveying the main concepts and design rationale. I spent significant time rehearsing the presentation and refining the slides to make sure they were visually engaging and easy to follow. I also gathered feedback from my teammates during practice sessions to further improve the clarity and flow of the content.

In addition, I was responsible for selecting the components for the charging pad module in our purchasing list. I researched extensively to determine the best options for our charging system, comparing various models and their specifications to ensure compatibility with our requirements. I also coordinated with my teammates to finalize other essential components, such as mechanical parts, ensuring our choices align well with the overall design and meet the project’s technical needs. This involved multiple discussions and iterations to balance cost, availability, and performance.

Lastly, I made improvements to the existing features of our app. I implemented a communication protocol that now allows the app to connect with other devices via WiFi, enabling real-time data exchange and expanding its functionality. Additionally, I started investigating the iPhone API to access information about temperature and charging status, which will further enhance the app’s capabilities. I looked through the API’s documentation and experimented with different methods to retrieve the relevant data effectively. These improvements aim to make the app more user-friendly and provide more comprehensive information to users, preparing for our final design.
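The protocol details aren’t final, but conceptually the table-side endpoint looks like the sketch below: a small TCP server that receives JSON status messages from the app over WiFi. The port number and message fields are assumptions for illustration.

import json
import socket

HOST, PORT = "0.0.0.0", 5005  # listen on all interfaces; the port is an assumed value

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen(1)
    conn, addr = server.accept()
    with conn:
        # e.g. the app sends b'{"temp_c": 31.2, "charging": true}'
        message = json.loads(conn.recv(1024).decode())
        print(f"{addr[0]}: temperature {message['temp_c']} C, charging: {message['charging']}")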

   

I am currently on schedule, and for the next steps:

  1. Finalize the purchase of the selected components and begin testing their integration into the system.
  2. Continue refining the app by completing the implementation of the iPhone API to gather temperature and charging status data, ensuring seamless integration.
  3. Work on the prototype of the charging pad module to verify its functionality and compatibility with the rest of the system.
  4. Collaborate with my teammates to set up a testing environment for the system, allowing us to identify any issues early and iterate on improvements.

Steven’s Status Report 10.5

My first priority this week was to complete the mechanical modeling, simulation, and part selection for the gantry system. For the power configuration, I chose common NEMA stepper motors: a set of two motors to drive the horizontal and vertical movement of the entire system, and a separate motor to move the arm up and down to grab the charging pads. Since I hadn’t designed a gantry system before, I researched and borrowed some designs from cheaper 3D printers, using a combination of two sets of moving and fixed pulleys and 6mm belts for the drive, as well as a set of MGN9 linear rails to guide and constrain the motion. I finished picking and filling out the drivetrain parts on the purchase list, and we’ve submitted it to the TA for review. Since most of the parts are being purchased from Amazon, we hope to complete the development of the motor controller drivers and the basic gantry build over fall break and the following week.
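As a quick sanity check on positioning resolution, the belt drive’s travel per step follows from the pulley and microstep settings. Only the 6mm belt width is fixed above; the belt pitch, pulley tooth count, and microstep setting below are assumptions for illustration.

STEPS_PER_REV = 200   # typical NEMA stepper, 1.8 degrees per full step
MICROSTEPPING = 8     # assumed TB6600 microstep setting
BELT_PITCH_MM = 2.0   # assumed GT2-style tooth pitch
PULLEY_TEETH = 20     # assumed drive pulley tooth count

mm_per_rev = BELT_PITCH_MM * PULLEY_TEETH              # 40 mm of belt per revolution
mm_per_step = mm_per_rev / (STEPS_PER_REV * MICROSTEPPING)
print(f"travel per microstep: {mm_per_step:.4f} mm")   # 0.0250 mm with these values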

In the meantime, since we’re picking parts together this week, we think that choosing the thickness of the acrylic or glass panels is an essential consideration. We experimented with both 3mm and 5mm acrylic sheets and found that 3mm offers a good balance of strength and charging performance. We will use Techspark’s acrylic boards for the time being in subsequent experiments, given the possibility that glass may break during testing. While waiting for the hardware to arrive, I also plan to finish testing the vision system with the team, and we will decide soon whether to use traditional vision or a machine learning model such as YOLO for cell phone device tracking. If machine learning is to be used, we will immediately start collecting image data of cell phone devices and begin developing our first version of the model.