Team Status Report for 3/08/2025

Since we’ve all been on schedule so far, we took Spring Break off. During the last week of February, we made progress on our prototype and have moved closer to system integration/testing.

Theo built the prototype’s structure, and we’re now just waiting on the suction cup and its mounting components to finish it. See his section for specifics and a photo of the prototype on our scanner. He also wrote basic serial code for sending rotation and pumping commands, as well as Python code for interfacing over serial. He’ll be working with Sophia to integrate a better version of this into the final control software.

Besides the serial code, Sophia ran trial tests of the flatbed scanner with the controller software and found issues with no clear solution. After many attempts at diagnosing what was going wrong with the scanner-to-computer communication, and after Theo tried to set up the project himself but found it incompatible with newer versions of Linux, she’ll be pivoting to a more modular approach that uses a distinct, more compatible library for each OS, keeping each OS’s scanning process in its own file. This means a more incremental approach, ensuring each OS works well without jeopardizing the state of the others, and it won’t rely on an outdated dotnet framework (NAPS2 needed .NET Framework 4.8, the project was only supported up to .NET 8, and the current release is .NET 9).

Yon finished working out the math for normal map construction and detailed his findings in the design report. He also identified 3D scanners we can use for qualification, which gives both Yon and Theo some work next week in designing and manufacturing a test object. Now that a basic Python version of the normal map code is implemented and can be used for testing Theo and Sophia’s subsystems, Yon will turn to implementing the normal map math from the design report. He also still has to identify a normal-map-to-depth-map pipeline, which could be achieved with a custom implementation, a C library, or externally through another Blender plugin/tool.
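The design report’s actual parametrization isn’t reproduced here, but the per-pixel flavor of the computation can be illustrated with a toy photometric-stereo sketch: assume a Lambertian surface, and that rotating the object 90° between scans is equivalent to stepping the scanner’s light direction 90° around each pixel at some fixed elevation angle (the 45° default below is an arbitrary placeholder, not a measured value).

```python
import math

def normal_from_four(i0, i1, i2, i3, elev=math.radians(45)):
    """Toy per-pixel normal recovery from four scans at 90-degree rotations.

    Assumes Lambertian shading with light directions (s,0,c), (0,s,c),
    (-s,0,c), (0,-s,c), where s/c are the sine/cosine of the (assumed)
    light elevation. This is an illustration, not the report's math.
    """
    s, c = math.sin(elev), math.cos(elev)
    # Differences of opposite scans isolate the x and y components;
    # the sum of all four recovers the z component.
    nx = (i0 - i2) / (2.0 * s)
    ny = (i1 - i3) / (2.0 * s)
    nz = (i0 + i1 + i2 + i3) / (4.0 * c)
    mag = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
    return (nx / mag, ny / mag, nz / mag)
```

Running something like this naively over four aligned grayscale scans would yield a normal map; the real implementation would follow the parametrization worked out in the design report rather than this fixed-elevation toy model.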

Part A was written by Theo, Part B by Yon, and Part C by Sophia.

Part A: On the global scale, our project has possible uses in the fields of archeology, history, and design. There are no limiting factors in who across the world could access/utilize flatbed 3D scanning besides a possible language barrier in the software (which is easily translatable).

People not in the academic environment would be more likely to use this as hobbyists who want to get detailed scans of their art, sculptures, or other detailed objects. There is a small subculture of photographers who use scanners for their high DPI, and such a group would likely be eager to map their hobby into a third dimension that they can interact with via Blender or other .obj-friendly software.

There is an emphasis throughout our project on making the scanning process user-friendly and hands-off. While this is mainly meant to accommodate repetitive data acquisition, less tech-savvy people would only have to deal with a few more steps than when using a flatbed scanner normally (place manipulator, plug in cables, run software).

Part B: Our project implements a cheap and accessible way to 3D scan small objects. One significant area of application for this technology is in archeology and preservation, where cheap, quick, and onsite digitization of cultural artifacts can help preserve cultures and assist dialogue and discourse around them.

That said, all technology is a double-edged sword. The ability to create quick replicas of historical artifacts makes them vulnerable to pop-culture-ification, which could manifest as cultural appropriation.

Part C: Our project has a low power draw, which is an environmental boon, especially considering that its competitors are larger, more complicated machines that use more energy. Our project also leverages an existing technology, reusing a device the user already owns rather than requiring the purchase of a larger machine that consumes far more material and energy in manufacturing and usage.

The simplicity of our project also lends itself to environmentalism, since we don’t use any cloud storage, AI features, or other massively energy-hungry processes. We don’t even use a separate battery, drawing power from the computer over USB. Open-source projects like ours are also generally more sustainable than fully commercialized products.

Biological specimens and discoveries can even be captured for future research and knowledge using our project. Since archaeology is a key audience, it’s not a stretch to extend that into general biology. Scanning small bones, skeletons, feathers, footprints, or other small biological features would be possible as long as they aren’t particularly frail, contributing to the knowledge bank of biological organisms and furthering science’s understanding. The Smithsonian, for example, maintains a public-access bank of 3D scans, and our project would be perfectly suited to contributing scans of small, detailed objects.

Sophia’s Status Report for 03/08/2025

The week before spring break, I started working with Theo on the serial code to establish communication between the computer and the microcontroller on the manipulator device. Since the device connects over USB, it’s a matter of opening the serial port and feeding commands to the manipulator at the right times.
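A minimal sketch of that host-side flow, assuming pyserial and a made-up line-based protocol (the `ROT`/`PUMP` command strings, port name, and baud rate are all illustrative, not our firmware’s real interface):

```python
# Hypothetical host-side serial interface. The command strings, default
# port, and baud rate are illustrative assumptions, not the real protocol.

def rotate_cmd(degrees: int) -> bytes:
    """Encode a rotation command, e.g. rotate_cmd(90) -> b'ROT 90\n'."""
    return f"ROT {degrees}\n".encode("ascii")

def pump_cmd(on: bool) -> bytes:
    """Encode a pump on/off command."""
    return f"PUMP {1 if on else 0}\n".encode("ascii")

def open_link(port: str = "/dev/ttyACM0", baud: int = 115200):
    """Open the USB serial port to the microcontroller (requires pyserial)."""
    import serial  # imported lazily so the encoders work without hardware
    return serial.Serial(port, baud, timeout=2)
```

Once a link is open, the control software would interleave writes like `link.write(pump_cmd(True))` and `link.write(rotate_cmd(90))` with the scan captures.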

As for the program to automatically capture scans using the flatbed scanner, I’ve encountered a blocking issue. The file works fine up until a practical trial with the flatbed scanner. However, when I try to use the scanner with it, I get a “lost connection with the scanner” error in the middle of scanning. I hear the scanner start, but then it loses connection and doesn’t save the file. Online searches weren’t helpful, only suggesting to unplug and replug or restart the scanner, which I tried a few times unsuccessfully. I guessed it was something to do with file access permissions between my computer and the scanner, so I tried moving the project to more accessible file locations that definitely wouldn’t require admin rights, running the script as an admin, checking whether the scanner was missing an access permission, and double-checking that the device drivers were up to date; nothing fixed the issue. It’s extra confusing because I was able to scan and save just fine from the scanner’s native software and from the NAPS2 library software. I asked Theo to download and run the project to see whether it was an issue with Windows or my machine in particular. However, he ran into many issues setting up the dotnet project due to incompatibilities between the dotnet version the project required and his version of Linux.

So, in light of this, I believe the best approach is to pivot away from the universal scan library of NAPS2, which requires a dotnet project, to a series of files that don’t rely on an existing framework. A master file would receive the command from the Blender UI, check the OS, and call a corresponding file to make the scans for that OS. This way, we would have a file for each OS, and each OS would be able to use a compatible scanning library. It also lets us incrementally verify each OS, finishing one before moving on to the next, and guarantees that something works even if it isn’t compatible with every OS. Currently, I’m looking at WIA (Windows Image Acquisition) for Windows, SANE (Scanner Access Now Easy) for Linux, and ImageCaptureCore (Apple’s API) for Mac. Since two of these are native to their OS and Linux is generally good with setting up libraries, I think these will work out better.
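That master-file dispatch could be sketched like this (the backend module names are hypothetical placeholders for the per-OS files described above):

```python
import platform

# Hypothetical mapping from OS to a per-OS scanning module; the module
# names are placeholders, not the actual project files.
_BACKENDS = {
    "Windows": "scan_wia",   # WIA (Windows Image Acquisition)
    "Linux": "scan_sane",    # SANE (Scanner Access Now Easy)
    "Darwin": "scan_icc",    # ImageCaptureCore wrapper on macOS
}

def pick_backend() -> str:
    """Return the name of the OS-specific scanning module to invoke."""
    os_name = platform.system()
    try:
        return _BACKENDS[os_name]
    except KeyError:
        raise RuntimeError(f"No scanning backend for OS: {os_name}")
```

The master file would then `importlib.import_module(pick_backend())` and invoke that module’s scan routine, so each OS-specific file stays independent.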

Theo’s Status Report for 3/08/2025

I’ve taken the past week off for Spring Break since I’m on schedule. The week before, we received the parts for the manipulator’s structure, so I actually built it (see image below). The parts all fit together besides the 3D printed mount, which needs slightly larger holes for the stepper motor to fit inside. We’ll 3D print this when we return to campus. So far, we’ve noticed slight instability and rough motion when trying to move the 3D printed mount up and down the screws with spacers. If the next iteration’s larger holes don’t enable the smooth and stable motion we want, then we’ll look into linear bearings or something similar.

We’ve also further theorized possible solutions for picking the object up during rotation. Currently, our best idea is solenoids on the mount that push off of the second layer of T-channel extrusions. This would likely work, and we would use at least two in order to deliver equal, symmetric force while sliding up the screws.

My next steps include finishing the troubleshooting on this 3D print, helping Sophia with integrating my basic serial test code with her control software, and designing + optimizing the 3D printed shaft-suction cup piece that we’ll use to connect the vacuum pump tubes and the stepper motor to the suction cup.

Our manipulator prototype on our scanner.

Yon’s Status Report for 2/22/2025

This week I worked on parametrizing the normal map computation and on 3D printing components for the manipulator. I made progress on the normal map math, but it still needs some more work to be fully parametrized. That puts me a little behind schedule, but I gave myself some buffer time for implementing the math in code, which should be very quick since it’s computed naively per pixel. I had to reprint the manipulator parts a few times due to printer issues, but we now have that component made and handed off to Theo for assembly.

Next week I will finally finish the math, and begin testing the manipulator. I can help characterize rotation accuracy and scan quality with/without a cover. I also already have some code written for the normal map computation with n=4 rotations so we could run a full system test if manipulator testing goes well.

Team Status Report for 2/22/2025

This week has been one of preparation and prototyping. Our Adafruit and first Amazon orders arrived, and we were able to run a test circuit where we checked that both the stepper motor and air pump could be powered and controlled. Electronics-wise, everything is fine; we’ll be waiting for the rest of our orders to come in before we can build our complete prototype. The 3D printed electronics mount had too little clearance in all of its holes, so we’ll be re-printing it with an extra 0.25mm of radius this week.

Software-wise, NAPS2 is proving to be the correct library choice. Sophia was able to implement OS-specific functionality, and the preliminary work runs on Windows and Linux. There is also a consistent file system for saving completed scans. The next step will be testing computer-scanner interactions.

On the signals end, Yon has continued to make progress on the scan/object mapping and has found new research that we may be able to draw inspiration from.

As of right now, we’re all on schedule. The only foreseeable issue in the project is the possibility of an object with an abrasive surface scratching the bed of the scanner while rotating. We’ll tackle this during the characterization of our prototype, and the most likely solution will be motorizing the vertical movement of the electronics mount so that we pick up the object before rotating it and set it down before we scan it.

Theo’s Status Report for 2/22/2025

This week, I practiced more for my presentation before presenting on Wednesday and spent time figuring out the suction cup’s connection to the stepper motor. I found a shaft coupler that doubles as a mounting platform, to which I’ll attach a 3D printed mount for the suction cup that allows it to be rotated by the stepper motor while connected to the air pump. I ordered this, along with some more air tubing connectors and 3mm mounting screws (our stepper motor didn’t come with any), in our second Amazon order. Our first Amazon order and our Adafruit order came in later this week, and I picked them up along with the 3D printed circuits mount (see electronics in GitHub). The holes on the mount needed the slightest bit more clearance, so I added 0.25mm to the radius of each hole. We’ll 3D print it this weekend or next week.

On Saturday, I started prototyping with the electronics that had come in from Adafruit and was able to control the stepper motor and air pump over serial. I’ve attached a picture of the setup below. Now I’ll wait for our new 3D printed mount, the suction cup, and the rest of the structure/hardware to come in before building a complete prototype.

Sophia’s Status Report for 02/22/2025

This week, I added in a check for different OS to the scanner controller program. So far, it correctly recognizes and adjusts the scanning context based on the OS, since NAPS2 uses a different scanning context object for each OS. I also added a “scans” folder to save each scanned .png file to. Each scan process/cycle will create a new scan folder (in the format of scan0, scan1, scan2, etc.) in the scans directory in order to keep scan cycles cleanly separated. Each scan .png file will be saved in its respective scan cycle folder (in the format of scanPage0.png, scanPage1.png, etc.). So, for example if we take 3 scans of an object, then you could find scanner-controller/scans/scan0/scanPage0.png, scanner-controller/scans/scan0/scanPage1.png, and scanner-controller/scans/scan0/scanPage2.png to send to the image processing component. I would just need to make sure the program knows what scan number it is on, essentially passing a scan ID between functions.
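As a sketch of that layout, assuming a `scans` root directory (the helper names here are my own, not the actual controller code):

```python
from pathlib import Path

def next_scan_dir(scans_root: Path) -> Path:
    """Create and return the next scanN cycle folder under the scans root."""
    scans_root.mkdir(parents=True, exist_ok=True)
    n = 0
    while (scans_root / f"scan{n}").exists():
        n += 1  # probe scan0, scan1, ... until an unused ID is found
    cycle = scans_root / f"scan{n}"
    cycle.mkdir()
    return cycle

def page_path(cycle: Path, page: int) -> Path:
    """Path for the page-th scanned image within one scan cycle."""
    return cycle / f"scanPage{page}.png"
```

Because `next_scan_dir` derives the next ID from what already exists on disk, one option is to avoid threading a scan ID between functions at all; this is only a sketch of that alternative.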

Additionally, I implemented an exception for the case where no scanners are detected. I also did more research into the NAPS2 documentation and how to properly connect the scanner. I usually use WSL to run code that’s stored on my Windows filesystem, and while I can run the code fine through Windows, I think I’ve found a way to get the scanner recognized in WSL too. So, I believe I’d be able to test both the Windows and Linux environments from my laptop alone; Yon would just have to test the Mac OS version.

The scanner I ordered has arrived. Next week I will begin testing communication and integration with it. I am currently on schedule.

Team Status Report for 02/15/2025

We started working on our individual components, specifically creating the device model in CAD, setting up the software project, and doing further research for the signal processing. We decided to use a suction cup as our friction tip and specified the hardware and electronics necessary to use it in our first prototype. Instead of testing two different prototypes with different manipulators, we can simply test with the suction on and off.

We have ordered all parts that we believe will be necessary at the moment. We should get all of them by next week and be able to start prototyping and testing with them. The 3D printed part we’ll need is also designed and can be printed anytime.

Our next steps are to progress on our individual tasks to keep on track before attempting and testing integration of the components.

Theo’s Status Report for 2/15/25

This past week I prioritized finishing the manipulator design and ordering the parts. I also finished the CAD mockup, including the first prototype of a 3D printed mount for the electronics. It will freely move vertically on top of the manipulator to account for objects of different heights/thicknesses. I’ll 3D print this mount and test it once the ordered parts have arrived and the rest of the manipulator is put together.

Though I’m still behind on my tasks from the roadmap, a lot of future time was allocated to debugging the hardware, characterizing the working prototype, and designing a new version with a suction cup manipulator (instead of just a friction tip); that time can effectively serve as slack to be spent on this prototype instead. We’ve gone ahead and ordered the air pump, tubing, and suction cup with the rest of our parts. The suction cup is the same material and size (silicone for a high coefficient of friction, 13mm diameter to fit behind a dime or similar coin) as we would’ve made the friction tip, so we will simply test the prototype with and without activating suction instead of needing to build an entirely new prototype/add-on. I believe this will be a massive time save, and I don’t foresee any issues with the current design. I’ve uploaded the SolidWorks assembly and relevant files to GitHub.

Besides working on the design and prototype, I’ve worked with my team on the design review presentation slides. I’ll be presenting on either Monday or Wednesday, and the tentative plan is to finish a draft early enough tonight or tomorrow so that our TA can look it over and give advice.

Sophia’s Status Report for 02/15/2025

I created a branch in GitHub for the scanner controller software. I also created the dotnet project and added the universal-scanner NAPS2 library, which took a lot longer than you’d think; the dependencies felt never-ending. I had also never worked with dotnet before, as far as I remember, so figuring out projects and solutions was a difficult time. Additionally, I added simple scan sample code to ensure all the needed NAPS2 libraries were correctly identified.

I also worked on the design presentation. I edited and added most of the slides needed for this sprint and filled out my own relevant software sections, ensuring that the needed topics were covered in the presentation slides.

My next steps are to create mock data and unit tests for the scanner controller code to ensure the simple single scans are working before moving on to scanning multiple times with a delay between.