Sophia’s Status Report for 03/22/2025

This week, I started on the Blender add-on. I've successfully created an add-on named "Flatbed 3D Scan" that appears in Blender, and it runs the "execute" function inside its Python file when clicked. Next week I'll make it call the Main.py file from last week, which runs the scan. I need to look further into whether there's a way to pass arguments to the add-on, or a way to make a UI window pop up for entering arguments.
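As a rough sketch of the hand-off I'm planning (the path handling and the --rotations flag are placeholders, not the real interface), the add-on's execute function could launch Main.py in a separate process so Blender's UI stays responsive:

```python
import subprocess
import sys

def run_scan(main_py_path, rotations=4):
    """Launch Main.py as a child process; return its exit code and output.

    Command-line arguments are one way to feed user input from the add-on
    to the scan script. The --rotations flag here is hypothetical.
    """
    result = subprocess.run(
        [sys.executable, str(main_py_path), "--rotations", str(rotations)],
        capture_output=True,
        text=True,
    )
    return result.returncode, result.stdout
```

Inside Blender, a call like this would sit in the operator's execute method, with the return code deciding what status the add-on reports.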

For Main.py, I refactored it a bit, along with the scanner-controller dotnet project and serial_proto.py, to put them in the right directories and call serial_proto directly from Main.py. The sequence looks okay. I still need to implement try/except blocks to make sure the program doesn't hit unhandled errors. The dotnet project still isn't working on Mac, so we're going to look into that more next week as well.
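A minimal sketch of the kind of error handling I have in mind (the step name and the exception set are illustrative; the real Main.py would catch whatever its scanner, serial, and dotnet calls can actually raise): wrap each pipeline stage so a failure produces a clean message instead of a raw traceback.

```python
import subprocess

def run_step(name, fn, *args):
    """Run one pipeline stage, converting expected failures into a clean exit.

    The exception list here is a placeholder for whichever errors the
    scanner, serial, and dotnet calls can realistically raise.
    """
    try:
        return fn(*args)
    except (OSError, subprocess.CalledProcessError) as exc:
        raise SystemExit(f"{name} failed: {exc}")
```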

So overall, next week will be spent on quality-of-life updates to the Blender add-on UI and on bug fixing while testing the system.

Team Status Report for 03/15/2025

Theo received the suction cup we were waiting on this week and began working on it. It can hold and let go of objects as heavy as a phone, and he was able to make a right-angled connection between the suction cup and the tubing. This is important for the last design component of the manipulator: a 3D printed part that connects the suction cup to a bearing on the stepper motor, centering it with the shaft. He’s already started working on this and will have it ready to print before Wednesday.

Sophia, with Theo’s help, solved what was wrong with the NAPS2 code so the scanner controller software works. She’s started working on the Main.py file that will connect together the scanner controller, the microcontroller manipulator device, the image processing, and the Blender add-on UI.

A risk that is coming to light in our design choices thus far is the 3D printed mount’s smoothness while sliding up and down the pillars on our manipulator. This issue is mainly due to the tolerances on the mount’s holes. We’ve needed to reprint it multiple times while making the holes larger, and this issue affects the holes the stepper motor will go into as well. If the holes are made large enough but the sliding still isn’t smooth, we are considering linear bearings on the pillars/mount to ensure smooth motion. We’ll weigh the costs and benefits of this change if it proves necessary.

The schedule has not changed significantly. Even though Yon's subsystem is a little behind because he was sick last week, we built padding into the implementation of the entire image processing system, which leaves the schedule unchanged.

Theo’s Status Report for 3/15/2025

This past week I focused on implementing the suction cup for our manipulator and helping Sophia debug the NAPS2 code we’re using to talk to our scanner. We’re able to toggle the suction cup on and off and have succeeded in lifting objects as heavy as my phone. With a t-connector, scissors, and hot glue, I was able to create a right-angle connection between the suction cup and the air tubing running to the pump. From here, we just need a 3D-printed piece that connects to a bearing on the stepper motor and aligns the suction cup with the shaft. I’m already working on its design, and we should have it 3D-printed by Wednesday. The stepper motor shaft will also need to be cut to accommodate this; I’ve already talked with the TechSpark machine shop, and we are able to make this cut anytime this week.

Sophia’s Status Report for 03/15/2025

With Theo's help, I got the scanner controller software working, still using the NAPS2 library. It required a different scanning context than I expected: Windows has two scanning contexts, and the one labeled for Windows Forms, not the plain Windows one, was needed in this instance.

I started writing the Main file that will call the scanner controller software, command the microcontroller manipulator device, and eventually interact with the image processing software and the Blender add-on UI. I chose Python for this since there would be a lot of moving between file directories and some command-line calls for the dotnet project that encompasses the scanner controller software. I also updated the scanner controller software to account for the Main file creating the scanning directory, so now it just has to make sure it doesn't repeat file names for new scans.
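As a sketch of the non-repeating file names (the scan_NN naming and .png extension are my placeholders, not the project's actual convention):

```python
from pathlib import Path

def next_scan_path(scan_dir, prefix="scan", ext=".png"):
    """Return the first unused scan file path in scan_dir.

    Assumes the Main file has already created scan_dir; the scan_NN
    naming scheme is a placeholder.
    """
    scan_dir = Path(scan_dir)
    n = 0
    while (scan_dir / f"{prefix}_{n:02d}{ext}").exists():
        n += 1
    return scan_dir / f"{prefix}_{n:02d}{ext}"
```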

The next steps are to implement the manipulator device calls in Main.py and ensure the scan-and-rotate process works, build the Blender add-on, and integrate Yon's image processing software. Initially I thought to make the Blender add-on do all of the calls, but now I think it'd be better for the add-on to be almost exclusively UI, with a start button that calls Main.py. So technically I'm a touch behind schedule, but as long as I finish the Blender add-on next week and Main.py is working, we should be fine, since we left two weeks of buffer time.

Team Status Report for 3/08/2025

Since we’ve all been on schedule so far, we took Spring Break off. During the last week of February, we made progress on our prototype and have moved closer to system integration/testing.

Theo built the prototype’s structure, and we’re now just waiting on the suction cup and its mounting components to finish it. See his section for specifics and a photo of the prototype on our scanner. He also made some basic serial code for sending rotation and pumping commands, as well as python code for interfacing over serial. He’ll be working with Sophia to integrate a better version of this in the final control software.

Besides the serial code, Sophia ran trial tests of the flatbed scanner with the controller software and found issues with no clear solution. After many attempts to diagnose what was going wrong with the scanner-to-computer communication, and after Theo tried to set up the project but found it incompatible with newer versions of Linux, she'll pivot to a more modular approach that uses a distinct, more compatible library for each OS, keeping the individual OS scanning processes in separate files. This allows a more incremental approach: each OS can be made to work well without jeopardizing the others, and the software won't rely on an outdated dotnet framework (NAPS2 needed version 4.8 and was only supported at all up to version 8, while the current version is 9).

Yon finished working out the math for normal map construction and detailed his findings in the design report. He also identified 3D scanners we can use for qualification, which gives both Yon and Theo some work next week in designing and manufacturing a test object. Now that a basic Python version of the normal map code is implemented and can be used for testing Theo and Sophia's subsystems, Yon will turn to implementing the normal map math from the design report. He also still has to identify a normal-map-to-depth-map pipeline, which could be a custom implementation, a C library, or an external tool such as another Blender plugin.

A was written by Theo, B was written by Yon, and C was written by Sophia.

Part A: On the global scale, our project has possible uses in the fields of archeology, history, and design. There are no limiting factors in who across the world could access/utilize flatbed 3D scanning besides a possible language barrier in the software (which is easily translatable).

People not in the academic environment would be more likely to use this as hobbyists who want to get detailed scans of their art, sculptures, or other detailed objects. There is a small subculture of photographers who use scanners for their high DPI, and such a group would likely be eager to map their hobby into a third dimension that they can interact with via Blender or other .obj-friendly software.

There is an emphasis throughout our project on making the scanning process user-friendly and hands-off. While this is mainly meant to accommodate repetitive data acquisition, less tech-savvy people would only have to deal with a few more steps than when using a flatbed scanner normally (place manipulator, plug in cables, run software).

Part B: Our project implements a cheap and accessible way to 3D scan small objects. One significant area of application for this technology is in archeology and preservation, where cheap, quick, and onsite digitization of cultural artifacts can help preserve cultures and assist dialogue and discourse around them.

That said, all technology is a double-edged sword. The ability to create quick replicas of historical artifacts makes them vulnerable to pop-culture-ification, which could manifest as cultural appropriation.

Part C: Our project has a low power draw, which is an environmental boon, especially considering that its competitors are larger, more complicated machines that use more energy. Our project also leverages an existing technology, reusing a device rather than requiring the purchase of a larger machine that consumes much more material and energy in manufacturing and usage.

The simplicity of our project also lends itself to environmentalism, since we don't use any cloud storage, AI features, or other energy-intensive processes. We don't even use a separate battery, drawing power from the computer through USB. Open-source projects like ours are also generally more sustainable than fully commercialized products.

Organisms and environmental discoveries can even be captured for future research and knowledge using our project. Since archaeology is a key audience, it's not a stretch to extend that into general biology. Scanning small bones, skeletons, feathers, footprints, or other small biological features would be possible as long as they aren't particularly frail. This contributes to the knowledge bank of biological organisms, furthering science's understanding. The Smithsonian, for example, has a public-access bank of 3D scans, so our project would be perfectly suited to contributing small, detailed objects.

Sophia’s Status Report for 03/08/2025

The week before spring break, I started working with Theo on the serial code to get communication between the computer and the microcontroller on the manipulator device. Since the device connects over USB, it's a matter of opening the serial port and feeding commands to the manipulator device at the right times.
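The command format below is purely illustrative (the project's real protocol lives in Theo's serial code; the 'R'/'P' mnemonics are made up); it just shows the shape of encoding a command for a byte stream:

```python
def encode_command(action, value=0):
    """Encode a manipulator command as a newline-terminated ASCII line.

    'R' = rotate by <value> degrees, 'P' = pump on/off (<value> 1/0).
    These mnemonics are hypothetical, not the project's real protocol.
    """
    if action not in ("R", "P"):
        raise ValueError(f"unknown action: {action}")
    return f"{action}{value}\n".encode("ascii")
```

With pyserial, bytes like these would be written to the open port, and the microcontroller firmware would parse one line per command.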

As for the program to automatically capture scans using the flatbed scanner, I've encountered a stubborn problem. The file works fine until a practical trial with the flatbed scanner: when I try to use the scanner with it, I get a "lost connection with the scanner" error in the middle of scanning. I hear the scanner go off, but then it loses connection and doesn't save the file. Online searches weren't helpful, only suggesting to unplug and replug/restart the scanner, which I tried a few times unsuccessfully.

I guessed it had something to do with my computer's file access permissions for the scanner, so I tried moving the project to more accessible file locations that definitely wouldn't require admin, running the script as an admin, checking whether the scanner was missing an access permission, and double-checking that the device drivers were up to date; nothing fixed the issue. It's extra confusing because I was able to scan and save just fine from both the scanner's native software and the NAPS2 library software. I asked Theo to try downloading and running the project to see whether the issue was with Windows or with my machine in particular. However, he ran into a lot of issues setting up the dotnet project, including incompatibilities between the dotnet version the project required and his version of Linux.

So, in light of this, I believe the best approach is to pivot away from the universal NAPS2 scan library, which requires a dotnet project, to a series of files that don't rely on an existing framework. There would be a master file that receives the command from the Blender UI, checks the OS, and then calls a corresponding file to make the scans for that OS. This way, we would have a file for each OS, and each OS could use a compatible scanning library. It also lets us incrementally verify each OS, finishing one before moving on to the next, and guarantees that something works in general even if it's not compatible with every OS. Currently, I'm looking at WIA (Windows Image Acquisition) for Windows, SANE (Scanner Access Now Easy) for Linux, and ImageCaptureCore (an Apple API) for Mac. Since two of these are native to their OSes and Linux is generally good with setting up libraries, I think these will work out better.
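The master file's dispatch could look something like this (the backend labels match the libraries named above, but how each one is wired up is still undecided):

```python
import platform

# Per-OS scanning backends under consideration; final wiring is TBD.
BACKENDS = {
    "Windows": "wia",                 # Windows Image Acquisition
    "Linux": "sane",                  # Scanner Access Now Easy
    "Darwin": "imagecapturecore",     # Apple's ImageCaptureCore API
}

def pick_backend(system=None):
    """Choose the scanning backend for this OS; None if unsupported."""
    return BACKENDS.get(system or platform.system())
```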

Theo’s Status Report for 3/08/2025

I’ve taken the past week off for Spring Break since I’m on schedule. The week before, we received the parts for the manipulator’s structure, so I actually built it (see image below). The parts all fit together besides the 3D printed mount, which needs slightly larger holes for the stepper motor to fit inside. We’ll 3D print this when we return to campus. So far, we’ve noticed slight instability and rough motion when trying to move the 3D printed mount up and down the screws with spacers. If the next iteration’s larger holes don’t enable the smooth and stable motion we want, then we’ll look into linear bearings or something similar.

We've also theorized further about possible solutions for picking the object up during rotation. Currently, our best idea is solenoids on the mount that push off of the second layer of T-channel extrusions. This would likely work, and we would use at least two in order to deliver equal, symmetric force while sliding up the screws.

My next steps include finishing the troubleshooting on this 3D print, helping Sophia with integrating my basic serial test code with her control software, and designing + optimizing the 3D printed shaft-suction cup piece that we’ll use to connect the vacuum pump tubes and the stepper motor to the suction cup.

Our manipulator prototype on our scanner.

Yon’s Status Report for 2/22/2025

This week I worked on parametrizing the normal map computation and 3D printed components for the manipulator. I made progress on the normal map math but still need some more work to fully parametrize it. That puts me a little behind schedule, but I gave myself some buffer time for implementing the math in code, which should be very quick since it's computed naively per pixel. I had to reprint the manipulator parts a few times due to printer issues, but we now have that component made and handed off to Theo for assembly.
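For context, the per-pixel step ultimately reduces to normalizing a gradient vector; a generic version is below (the actual parametrization in our design report may differ from this textbook form):

```python
import math

def normal_from_gradients(p, q):
    """Unit surface normal from depth gradients p = dz/dx, q = dz/dy.

    A flat surface (p = q = 0) maps to the straight-up normal (0, 0, 1).
    This is the standard formula, not necessarily our final parametrization.
    """
    inv = 1.0 / math.sqrt(p * p + q * q + 1.0)
    return (-p * inv, -q * inv, inv)
```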

Next week I will finally finish the math, and begin testing the manipulator. I can help characterize rotation accuracy and scan quality with/without a cover. I also already have some code written for the normal map computation with n=4 rotations so we could run a full system test if manipulator testing goes well.

Team Status Report for 2/22/2025

This week has been one of preparation and prototyping. Our Adafruit and first Amazon orders arrived, and we were able to run a test circuit where we checked that both the stepper motor and air pump could be powered and controlled. Electronics-wise, everything is fine; we’ll be waiting for the rest of our orders to come in before we can build our complete prototype. The 3D printed electronics mount had too little clearance in all of its holes, so we’ll be re-printing it with an extra 0.25mm of radius this week.

Software-wise, NAPS2 is proving to be the correct library choice. Sophia was able to implement OS-specific functionality, and the preliminary work runs on Windows and Linux. There is also a consistent file system for saving completed scans. The next step will be testing computer-scanner interactions.

On the signals end, Yon has continued to make progress on the scan/object mapping and has found new research that we may be able to draw inspiration from.

As of right now, we're all on schedule. The only foreseeable issue in the project is the possibility of an object with an abrasive surface scratching the bed of the scanner while rotating. We'll tackle this during the characterization of our prototype, and the most likely solution will be motorizing the vertical movement of the electronics mount so that we pick up the object before rotating it and set it down before we scan it.

Theo’s Status Report for 2/22/2025

This week, I practiced more for my presentation before presenting on Wednesday and spent time figuring out the suction cup's connection to the stepper motor. I found a shaft coupler that doubles as a mounting platform, to which I'll attach a 3D printed mount for the suction cup that allows it to be rotated by the stepper motor while connected to the air pump. I ordered this, along with some more air tubing connectors and 3mm mounting screws (our stepper motor didn't come with any), in our second Amazon order. Our first Amazon order and our Adafruit order came in later this week, and I picked them up along with the 3D printed circuits mount (see electronics in GitHub). The holes on the mount needed slightly more clearance, so I added 0.25mm to the radius of each hole. We'll 3D print it this weekend or next week.

On Saturday, I started prototyping with the electronics that had come in from Adafruit and was able to control the stepper motor and air pump over serial. I’ve attached a picture of the setup below. Now I’ll wait for our new 3D printed mount, the suction cup, and the rest of the structure/hardware to come in before building a complete prototype.