Tedd’s Status Report – 10/18

This week I worked on getting a visible plot from the depth coordinates provided by the Intel RealSense. With my preliminary Python script, I was able to use OpenCV and matplotlib to get a pretty good output of which pins should be actuated and how far each pin should be actuated. Below is a simplified sketch of the script, followed by a few example outputs:
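This is a minimal sketch of the approach rather than the exact script: the 30×30 pin grid and the 1 m depth cutoff are placeholder values, not final design parameters.

```python
import numpy as np
import cv2
import matplotlib.pyplot as plt
import pyrealsense2 as rs

PIN_GRID = (30, 30)   # placeholder pin-board resolution (width, height)
MAX_DEPTH_M = 1.0     # placeholder cutoff: ignore anything beyond 1 m

pipeline = rs.pipeline()
pipeline.start()
try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()

    # Raw depth values are in camera-specific units; convert to meters.
    depth = np.asanyarray(depth_frame.get_data()).astype(np.float32)
    depth *= depth_frame.get_units()

    # Pixels with no depth reading come back as 0; treat them as background.
    depth[depth == 0] = MAX_DEPTH_M
    depth = np.clip(depth, 0.0, MAX_DEPTH_M)

    # Downsample the depth image so each cell corresponds to one pin.
    pins = cv2.resize(depth, PIN_GRID, interpolation=cv2.INTER_AREA)

    # Closer objects should push pins out farther.
    extension = MAX_DEPTH_M - pins

    plt.imshow(extension, cmap="viridis")
    plt.colorbar(label="pin extension (m)")
    plt.title("Which pins to actuate, and by how much")
    plt.show()
finally:
    pipeline.stop()
```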

As you can see, the depth camera was pretty accurate and is able to capture simple objects and display them, pixelated, on a plot. However, I realized that more complex objects like faces are not translated well onto the plot. I may need to increase the resolution of the depth camera (a sketch of that change is below) and see if that gives us better results. For now this is a preliminary script, and it seems to be working well; I will automate this process in the future as well.
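One likely first fix is to explicitly request a higher-resolution depth stream instead of the default. The 1280×720 @ 30 fps mode below is an assumption; supported modes vary by RealSense model.

```python
import pyrealsense2 as rs

# Request a specific (higher-resolution) depth mode rather than the default.
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)

pipeline = rs.pipeline()
pipeline.start(config)
```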


Team Status Report – 10/4

The most significant risks that could jeopardize the success of the project are the actuator and rack-and-pinion mechanism not working properly and the depth map not giving us accurate results. To actuate the pins successfully, we need the rack-and-pinion system to work; otherwise we will experience speed delays and inaccuracies. An inaccurate depth map could cause the wrong pins to be actuated, even though they are faithfully following the information from the depth map. The first risk is being managed by ensuring that we have proper design requirements so that the hardware does not run into any issues. The second risk is being managed by rigorously testing the depth camera and verifying that the height map from the camera is accurate; further testing is currently being done on converting the height map to a depth map (a rough sketch of that conversion is below). We also have contingency plans in place to make sure that our product will still be successful: for the gantry system, we can sacrifice speed for more accurate actuation, and for the depth camera, we can always switch to another camera, so our options are not limited.
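As a rough illustration of the height-map-to-depth-map conversion we are testing, here is a sketch; the pin travel value is a placeholder until the actuator specs are finalized.

```python
import numpy as np

PIN_TRAVEL_MM = 20.0  # placeholder: full mechanical travel of one pin

def heightmap_to_depthmap(height_map: np.ndarray) -> np.ndarray:
    """Normalize a raw height map and rescale it to per-pin travel (mm)."""
    h = height_map.astype(np.float32)
    h -= h.min()
    peak = h.max()
    if peak > 0:
        h /= peak  # now in [0, 1], with 1 = tallest point in the scene
    return h * PIN_TRAVEL_MM  # millimeters each pin should extend
```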

No changes have been made to the existing design of the system since the design presentation. We are confident in our design, including the requirements, block diagram, and system specifications. We may run into issues that require us to pivot from the current design, but so far we remain confident in it.

This is our schedule from the design presentation; we are still on track, and no changes are necessary.

A picture of visible progress this week:

Tedd’s Status Report – 10/4

This week, I started working on getting the Intel RealSense camera to work. While we were anticipating using the OAK-D cameras, we realized that other teams had already checked them out, leaving us with the RealSense camera. There is nothing wrong with the RealSense camera, but we found out that it isn't fully compatible with Macs. However, I have an Intel NUC running Ubuntu 24.04, so this is not a huge issue. After downloading the necessary libraries, I was able to run the RealSense Viewer application and turn on the camera. Below is a quick setup check, followed by an image displaying a height map of me sitting in a chair holding a guitar:
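This is a minimal sketch, assuming the pyrealsense2 Python wrapper is installed (e.g. via pip install pyrealsense2); it simply confirms the SDK can enumerate the camera:

```python
import pyrealsense2 as rs

# List every RealSense device the SDK can see; an empty list usually
# means a driver/USB issue rather than a code problem.
ctx = rs.context()
devices = ctx.query_devices()
if len(devices) == 0:
    print("No RealSense device found")
for dev in devices:
    print(dev.get_info(rs.camera_info.name),
          dev.get_info(rs.camera_info.serial_number))
```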

The next step after this is to capture a good depth map for different shapes and see if I can convert the readings into actual measurements.
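One way to start on that conversion is pyrealsense2's get_distance(), which applies the camera's depth scale and returns meters directly. A minimal sketch, reading the distance at the center pixel:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()

    # get_distance() converts the raw 16-bit depth value at a pixel
    # into meters using the camera's depth scale.
    cx, cy = depth.get_width() // 2, depth.get_height() // 2
    print(f"Distance at image center: {depth.get_distance(cx, cy):.3f} m")
finally:
    pipeline.stop()
```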

Tedd’s Status Report – 9/27

This week, we focused on nailing down the design presentation. We broke it into different parts, and I took the use-case requirements and the testing and verification portions. We also incorporated feedback other groups gave on our proposal presentation, and realized that we need to add more numbers and be more thorough in our explanations. This week I also looked into the Intel RealSense, but realized that it is not fully compatible with Macs, so I will be waiting for the OAK-D Pro to be lent to us, as it is fully compatible with Macs.

Tedd’s Status Report – 9/20

This week, I looked at the various computer vision libraries that are compatible with our project, including ones that could help with generating the height map. We have not finalized which depth camera we will be using, so I am assuming for now that it will be the Intel RealSense. Right now we are tasked with working on the design of our product, which means looking over all of our design choices for the gantry and the actuators. I think we are right on schedule.

Introduction and Project Summary

As an initiative to make museums and art exhibits more interactive, we propose to build LivePin, a real-time 3D reconstruction system in the form of a pin art board. The device will allow a user to place any 3D object in front of a camera and reproduce it as a tactile 3D impression on the board. Previously, you would have to physically press an object into a pin art board to create an impression; with our system, all you need to do is hold something in front of a camera and the pins will move automatically. We aim to design a budget-friendly, efficient automated pin art board that captures the nostalgia of this wonderful childhood toy and turns it into a more technologically advanced interactive display. Additionally, while there are existing display boards that use a similar concept (like flip-dot displays), we know of no device that moves pins automatically to create interactive designs. We hope to bring this unique idea to fruition for the future of museum and art exhibits, as well as for children around the world.