Team Status Report for 04/26/2025

This past week has been spent working on optimizing the height map to 3D model algorithm and finishing upgrades to the manipulator. All that’s left to do is characterize the limits of our system and gather data that compares our scans to a traditional 3D scanner’s.

Theo finished the linear bearings upgrade to the manipulator and has made a new design for the electronics housing.

Yon worked on printing parts, including the test object, and on writing the object comparison algorithm.

Sophia started testing and bug-fixing the code that automatically imports the height map into an object in Blender. So far it creates the plane, but it has trouble finding the correct file path for the height map. This should be fixed before the final demo; in the worst case, it takes about a minute to manually create the plane and apply the height map to create the object in Blender.
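
For reference, the manual fallback is only a few calls to Blender’s Python API; here is a minimal sketch of what the automated import is meant to do (the file path, subdivision level, and displacement strength are illustrative placeholders):

```python
# Minimal sketch of the Blender import step (run inside Blender's Python
# console or as a script). Path and parameter values are illustrative.
import os
import bpy

height_map_path = os.path.abspath("height_map.png")  # hypothetical path

# Create the plane that the height map will displace.
bpy.ops.mesh.primitive_plane_add(size=2.0)
plane = bpy.context.active_object

# Give the plane enough geometry for the displacement to act on.
subdiv = plane.modifiers.new(name="Subdivision", type='SUBSURF')
subdiv.subdivision_type = 'SIMPLE'
subdiv.levels = 6

# Load the height map; a wrong path raises RuntimeError here, which is
# the failure mode we're currently debugging.
img = bpy.data.images.load(height_map_path)
tex = bpy.data.textures.new(name="HeightMap", type='IMAGE')
tex.image = img

# Displace the plane by the height map to form the object surface.
disp = plane.modifiers.new(name="Displace", type='DISPLACE')
disp.texture = tex
disp.strength = 0.1  # tune to the scan's height scale
```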

Unit tests for the manipulator included measuring out precise rotations and verifying that the manipulator could be placed on, and rotate, objects of different heights and sizes. We’ll continue to exercise this during our system test.

There are two main areas of the image processing system that need to be tested: image alignment and model creation. Image alignment can be easily tested by running several differently shaped objects through the system and ensuring the resulting bounding box represents aligned images. The model creation part is harder to test, which is why Yon has been working on the test object and the model similarity score algorithm. It will generate a similarity score between our scan and the ground truth, along with a quality score comparing our scan with the benchmark scan of the same object. We can only really test this on an object for which a ground truth exists, so this won’t exactly be a ‘unit test’. We will also run scans on a variety of different objects and perform visual inspections on them.
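
As a rough sketch of the image-alignment check, assuming aligned scans are grayscale images of a darker object on the white cover sheet (file names and threshold are illustrative):

```python
import cv2
import numpy as np

def object_bbox(image_path, thresh=200):
    """Bounding box (x, y, w, h) of dark object pixels on a light background."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    ys, xs = np.nonzero(img < thresh)
    return xs.min(), ys.min(), xs.max() - xs.min() + 1, ys.max() - ys.min() + 1

def bbox_iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

# Aligned scans of the same object should give near-identical boxes, e.g.:
# assert bbox_iou(object_bbox("scan_0.png"), object_bbox("scan_90.png")) > 0.95
```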

Unit tests for the software consisted of running each file individually to make sure it worked fully before adding the next file to the system pipeline. For example, we ran the .NET C# file (which runs the scanner) many times on its own, then ran Main.py (which in turn runs the .NET file), then ran the align file repeatedly to make sure it worked, then added the align file to Main.py and ran the combined pipeline many times, and so on. It was an iterative process: run individual files, add each to the system once it had proven itself, then run the system to iron out integration issues.
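
A simplified sketch of how those stages compose in Main.py; the project name and command-line arguments below are illustrative stand-ins, not our exact interface:

```python
import subprocess

def run_scan(output_path, dpi=300):
    # The .NET scanner runner, exercised standalone many times first.
    subprocess.run(["dotnet", "run", "--project", "Scanner", "--",
                    output_path, str(dpi)], check=True)

def run_align(scan_paths, out_dir):
    # The align step, folded into Main.py only after passing on its own.
    subprocess.run(["python", "align.py", *scan_paths, "--out", out_dir],
                   check=True)

if __name__ == "__main__":
    scans = [f"scan_{i}.png" for i in range(4)]
    for path in scans:
        run_scan(path)
    run_align(scans, "aligned")
```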

Our overall system test consists of:

  1. Place object on scanner
  2. Place manipulator on top of object
  3. Plug in manipulator and scanner to user’s computer (both USB)
  4. Run the project script. The system alternates between scans and rotations until the object has been rotated back to its starting position (a sketch of this loop follows the list).
  5. The image processing automatically derives the normal map of the object and then generates a height map.
  6. The height map is imported to Blender for the user to inspect and interact with.
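
A minimal sketch of the scan/rotate loop in step 4, assuming the manipulator accepts a one-byte command over serial; the port name, command byte, rotation count, and settle delay are placeholders:

```python
import time
import serial  # pyserial

NUM_ROTATIONS = 4        # hypothetical: 90-degree steps back to the start
ROTATE_CMD = b'R'        # hypothetical firmware command byte
SETTLE_DELAY_S = 2.0     # let the lift/rotate/set-down physically finish

def run_scan_cycle(scan, port="COM3"):
    """Alternate scans and rotations; `scan` saves one image to the given path."""
    with serial.Serial(port, 115200, timeout=1) as mcu:
        for i in range(NUM_ROTATIONS):
            scan(f"scan_{i}.png")       # e.g. shells out to the scanner runner
            mcu.write(ROTATE_CMD)       # lift, rotate, and set the object down
            time.sleep(SETTLE_DELAY_S)  # the delay sizing we tuned by hand
```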

The only work required from the user is during setup. After starting the script, it takes only a minute or two (depending on the DPI of the scan, a modifiable parameter) for the object to become viewable in Blender.

Theo’s Status Report for 04/26/2025

This past week was spent putting the finishing touches on the manipulator. We installed the new mount, linear bearings, and longer screws and spacers. The resulting mechanism is extremely smooth and capable of fitting objects as thick as the frame’s space allows. The last thing that needs fixing is the mounting for the electronics housing. The motor shield on the ESP32 board makes designing housing around the outgoing wires difficult, but this next iteration ought to work. Functionally, the manipulator is complete. Besides the 3D print, all of my remaining time will be spent helping characterize the manipulator, gathering data, and working on our reports.

Theo’s Status Report for 04/19/2025

This past week was spent designing and implementing upgrades for the manipulator hardware. The linear bearings arrived, so I made a mount to fit them and used M5 screws, spacers, and nuts to accommodate them. The motion is very smooth and stable, but the thicker mount takes up the majority of the 1.5″ clearance the current screws allow. I’ll swap out the screws with 3″ long ones this upcoming week.

The electronics housing is the other piece I’ve been working on. The current design ought to work: it fits the microcontroller and DC air pump inside of a 3D-printed shell that clicks together and can be screwed onto one of the T-channels that make up the manipulator.

Functionally, the manipulator is fine. This last bit of upgrades should give us the full capabilities we planned for. After this, most of my time will be spent helping the others improve the image processing or user interface.

On learning new knowledge: I started using a new CAD software called OnShape during this project, which had a bit of a learning curve but ended up being faster and easier to use than SolidWorks. I found it easiest to keep a tab with the documentation open while working so I could quickly find solutions to any problems I had. I also learned a lot about the math behind computing our normal maps, height maps, and 3D objects. Most of this came from Yon explaining it and from our group reading through the relevant papers together.

Team Status Report for 04/12/2025

This past week, the most significant problems we’ve encountered have been with the quality of the scans. As mentioned in Theo’s status report for this week, the stepper-suction adapter casts a black shadow behind the object that shows up in the normal map, and the rotation is still misaligned. The new mount has tighter tolerances that prevent it from sliding around, but it is still extremely difficult to place the suction cup directly in the center of the object (which is required for each vector in the normal map to align). We’ve 3D printed a white stepper-suction adapter to see if that helps (it ought to blend in with the white acrylic cover sheet), and we’ve started looking into a software-level solution to the scan alignment issue. As detailed in Yon’s status report, we have two approaches to the alignment issue, which we will test this coming week. These introduce only minor changes to our system, adding a small amount of complexity to the ‘image alignment’ step. If the image alignment goes well and produces better normal maps, we can then move on to ensuring the height map turns out well, since the results are currently skewed by the misaligned normal map. We currently have a normal-to-height-map module, as detailed in Sophia’s status report, but we may need to look for alternatives or adapt it to our use case of small objects, since the module was typically used on larger ones.
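
For context, a standard way to integrate a normal map into a height map is the Frankot-Chellappa FFT method; the sketch below is that textbook approach, not necessarily what our current module does:

```python
import numpy as np

def height_from_normals(normals):
    """Frankot-Chellappa integration of an (H, W, 3) unit-normal map."""
    nz = np.clip(normals[..., 2], 1e-3, None)
    p = -normals[..., 0] / nz                  # surface gradient dz/dx
    q = -normals[..., 1] / nz                  # surface gradient dz/dy
    h, w = p.shape
    u, v = np.meshgrid(np.fft.fftfreq(w) * 2 * np.pi,
                       np.fft.fftfreq(h) * 2 * np.pi)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                          # avoid dividing by zero at DC
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0                              # height is defined up to a constant
    return np.real(np.fft.ifft2(Z))
```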

As we continue to improve the manipulator, we’re also looking to begin testing with objects other than a quarter. Yon has designed a test object that we can start using when we’re ready, and the manipulator ought to work with any object that fits in its 6 in × 6 in × 1 in space. This will also serve as the validation of our system as a whole. By comparing the scan we take, the original model we designed, and the scans that commercial 3D scanners take, we can use the Hausdorff distance to quantify how close our scans come to commercial ones.
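
As a rough illustration of that comparison, here is a minimal sketch of a symmetric Hausdorff distance between two meshes, assuming both are already registered in the same coordinate frame and using the trimesh and scipy libraries (file names are placeholders):

```python
import trimesh
from scipy.spatial.distance import directed_hausdorff

def hausdorff_score(mesh_a_path, mesh_b_path, n_samples=10000):
    """Symmetric Hausdorff distance between points sampled from two meshes."""
    a = trimesh.load(mesh_a_path, force='mesh')
    b = trimesh.load(mesh_b_path, force='mesh')
    pa, _ = trimesh.sample.sample_surface(a, n_samples)
    pb, _ = trimesh.sample.sample_surface(b, n_samples)
    # Worst-case nearest-neighbor gap, taken in both directions.
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# e.g. our scan vs. the ground-truth model, and vs. a commercial scan:
# print(hausdorff_score("our_scan.obj", "ground_truth.obj"))
# print(hausdorff_score("our_scan.obj", "commercial_scan.obj"))
```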

[Image: a scan from our current system, and a normal map generated from it and its rotated counterparts.]

Theo’s Status Report for 04/12/2025

This past week, I’ve spent most of my time working on the manipulator hardware. We 3D printed a new mount with tighter tolerances, and it’s done a good job at minimizing the mount’s movement. The tight tolerance makes it hard to slide up and down, so we’ll probably use the linear bearings in our next upgrade (they arrived this Friday). We also laser cut the acrylic sheet and used it while scanning. It does a decent job at blocking out the noise from the air tubing, but makes it much more difficult to align the suction cup on top of the object (even more so when trying to center it). The current issues with our manipulator are off-center scans and a shadow being cast by the black stepper-suction adapter. We’re reprinting the adapter in white so it can match with the cover sheet, and we’re looking into making the acrylic cover sheet thicker. We have an extra 1/8″ sheet that we can cut identically and lay on top.

Here’s what a single scan in the current process and the generated normal map look like. The black shadow behind the object is entirely due to the stepper-suction adapter.


Team Status Report for 03/29/25

This past week was focused on testing the integration of our subsystems. We continued working on the Mac incompatibility issue but haven’t made progress yet. We’ve tentatively pushed it to the back of our task list, since it’s the least essential thing we’ve yet to do.

While Theo completed the manipulator and started testing for any issues, Yon and Sophia integrated the mapping code with the current control code. We were able to run an entire “scan,” with one of us rotating a coin on the scanner between each pass. Even with imprecise rotations and a scanner DPI of 100, we still observed a decent output map. Since the manipulator is completely working (besides some upgrades and adjustments mentioned in Theo’s status report), we will be ready to integrate everything we have so far on Monday.

We are still on track, with the next steps being to characterize and improve the manipulator while implementing our code as a Blender plugin.

Theo’s Status Report for 03/29/25

This week was mainly spent working on the stepper-suction adapter, which allows the suction cup to rotate with the stepper motor while staying connected to the air pump. It worked on its second iteration, and we were able to put together the entire manipulator. The suction and rotation work as desired, but we’ve yet to fully characterize what is and isn’t possible. All that’s left to do before demo day is make sure the delays in our serial Python code are properly sized to let the entire physical rotation/suction happen. We’ll be able to do this within a few minutes on Monday.

I noticed with some of my first tests that the mount can become slanted on the guide pillars, which makes sense because those holes did not print out to the exact proper size. Rather than trying to get this right, I’m leaning toward implementing linear ball bearings that would function identically to the holes, but be properly sized, smoother, and sturdier than just a hole in our 3D print. I’ll order these on Monday.

The suction cup also sticks out relatively far down the manipulator, pushing the mount up the majority of the guide pillars that we planned to leave as slack for thicker objects. While this isn’t a point of failure, the remedy is as simple as having the stepper motor’s mounting hole extend less far down from the mount. Since we’ll be reprinting to ensure the linear bearings fit anyway, one more print should solve both problems at once.

I’ve also continued work on the 3D-printed electronics housing that will be attached to the structure of the manipulator, but that is my last priority at the moment.

The two pictures attached below are the suction cup with the stepper-suction adapter, and the complete prototype demonstrating a slanted mount while resting on top of a coin. 

Theo’s Status Report for 03/22/2025

This past week was mainly spent working on the 3D printed parts for the manipulator. The newest iteration of our mount finally got the hole size right, so it slides up and down relatively smoothly. It will be interesting to see how well it works once the suction cup is attached to the stepper motor.

The adapter for the suction cup and stepper motor is coming along well. It seems like there’s enough clearance in the area that we don’t need to cut the stepper motor axle. The next iteration needs to be longer and have a slightly larger hole for the suction cup, but it should be doable. I’ll have the STL to Yon by Monday for more testing.

The last 3D-printed piece I’ll be working on is the electronics housing. It will be a long box along one of the T-channels on the frame, housing the microcontroller and DC air pump. There will be a micro-USB port and a 12V DC adapter port. I’m leaving this for last, until we confirm full functionality of the manipulator itself.

Everything is still on schedule; hopefully the adapter won’t take too many iterations to finish. It prints quickly (about 40 minutes), so I may spend an afternoon in our 3D printing area prototyping to get it right.

Theo’s Status Report for 3/15/2025

This past week I focused on implementing the suction cup for our manipulator and helping Sophia debug the NAPS2 code we’re using to talk to our scanner. We’re able to toggle the suction cup on and off and have succeeded in lifting objects as heavy as my phone. With a T-connector, scissors, and hot glue, I was able to create a right-angle connection between the suction cup and the air tubing running to the pump. From here, we just need a 3D-printed piece that connects to a bearing on the stepper motor and aligns the suction cup with the shaft. I’m already working on its design, and we should have it 3D printed by Wednesday. The stepper motor shaft will also need to be cut to accommodate this; I’ve already talked with the TechSpark machine shop, and we can make the cut anytime this week.

Team Status Report for 3/08/2025

Since we’ve all been on schedule so far, we took Spring Break off. During the last week of February, we made progress on our prototype and have moved closer to system integration/testing.

Theo built the prototype’s structure, and we’re now just waiting on the suction cup and its mounting components to finish it. See his section for specifics and a photo of the prototype on our scanner. He also wrote some basic serial code for sending rotation and pumping commands, as well as Python code for interfacing over serial. He’ll be working with Sophia to integrate a better version of this into the final control software.

Besides the serial code, Sophia ran trial tests of the flatbed scanner with the controller software and found issues with no clear solution. After many attempts to diagnose what was going wrong with the scanner-to-computer communication, and after Theo tried to set up the project and found it incompatible with newer versions of Linux, she will pivot to a more modular approach that uses distinct, more compatible libraries for each OS, keeping each OS’s scanning process in its own file. This allows a more incremental approach to getting each OS working well without jeopardizing the state of the others, and it avoids relying on an outdated .NET framework (NAPS2 needed .NET 4.8 and was only supported up to version 8, while the current version is 9).
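
A minimal sketch of that per-OS dispatch; the module names are hypothetical stand-ins for the separate scanning files, and the backend noted for each OS is just one plausible choice:

```python
import platform

def scan(output_path, dpi=300):
    """Dispatch to the per-OS scanning module (names are placeholders)."""
    system = platform.system()
    if system == "Windows":
        from scan_windows import scan as scan_impl   # e.g. WIA-based
    elif system == "Darwin":
        from scan_macos import scan as scan_impl     # e.g. ImageCapture-based
    else:
        from scan_linux import scan as scan_impl     # e.g. SANE-based
    return scan_impl(output_path, dpi=dpi)
```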

Yon finished working out the math for normal map construction and detailed his findings in the design report. He also identified 3D scanners we can use for qualification, which gives both Yon and Theo some work next week in designing and manufacturing a test object. Now that a basic Python version of the normal map code is implemented and can be used for testing Theo and Sophia’s subsystems, Yon will turn to implementing the normal map math from the design report. He also still has to identify a normal-map-to-depth-map pipeline, which could be a custom implementation, a C library, or an external Blender plugin or tool.
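
For readers unfamiliar with the underlying math: classic photometric stereo recovers per-pixel normals by least squares from images lit from known directions. Our formulation (where the object rotates under the scanner’s fixed light) is detailed in the design report; the sketch below is only the textbook version:

```python
import numpy as np

def normals_from_images(images, light_dirs):
    """Least-squares photometric stereo.

    images: (k, H, W) grayscale stack; light_dirs: (k, 3) light directions.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                            # pixels as columns
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W), albedo * normal
    n = G / np.clip(np.linalg.norm(G, axis=0, keepdims=True), 1e-8, None)
    return n.T.reshape(h, w, 3)                          # unit normal per pixel
```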

Part A was written by Theo, Part B by Yon, and Part C by Sophia.

Part A: On the global scale, our project has possible uses in the fields of archeology, history, and design. There are no limiting factors in who across the world could access/utilize flatbed 3D scanning besides a possible language barrier in the software (which is easily translatable).

People not in the academic environment would be more likely to use this as hobbyists who want to get detailed scans of their art, sculptures, or other detailed objects. There is a small subculture of photographers who use scanners for their high DPI, and such a group would likely be eager to map their hobby into a third dimension that they can interact with via Blender or other .obj-friendly software.

There is an emphasis throughout our project on making the scanning process user-friendly and hands-off. While this is mainly meant to accommodate repetitive data acquisition, less tech-savvy people would only have to deal with a few more steps than when using a flatbed scanner normally (place manipulator, plug in cables, run software).

Part B: Our project implements a cheap and accessible way to 3D scan small objects. One significant area of application for this technology is in archeology and preservation, where cheap, quick, and onsite digitization of cultural artifacts can help preserve cultures and assist dialog and discourse around them.

That said, all technology is a double-edged sword. The ability to create quick replicas of historical artifacts makes them vulnerable to pop-culture-ification, which could manifest as cultural appropriation.

Part C: Our project has a low power draw, which is an environmental boon, especially considering that its competitors are larger, more complicated machines that use more energy. Our project also leverages an existing technology, reusing devices rather than requiring the purchase of a larger machine that consumes far more material and energy in manufacturing and use.

The simplicity of our project also lends itself to environmentalism, since we don’t use any cloud storage, AI features, or other energy-hungry processes. We don’t even use a separate battery, drawing power from the computer over USB. Open-source projects like ours are also generally more sustainable than fully commercialized alternatives.

Biological specimens and discoveries can even be captured for future research and knowledge using our project. Since archaeology is a key audience, it’s not a stretch to extend that to biology in general. Scanning small bones, skeletons, feathers, footprints, or other small biological features would be possible as long as they aren’t particularly frail. This contributes to the knowledge bank of biological organisms, furthering science’s understanding. The Smithsonian, for example, has a publicly accessible bank of 3D scans, and our project would be perfectly suited to contributing small, detailed objects to it.