Sophia’s Status Report for 04/12/2025

Over these two weeks, I added a pop-up UI window to the Blender add-on. After you press “Flatbed 3D Scan” in the menu, a window appears where you can select the file path (which should be where Main.py and the other needed files are stored), input the number of scans to take (minimum of 3 and maximum of 12, though this isn’t supported in the normal map calculations yet), and set the DPI for the scans (fully functional). A sketch of how this kind of dialog works is below.
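
A minimal sketch of how a pop-up dialog like this can be built with Blender’s operator properties; the property names and defaults here are illustrative placeholders, not our exact code.

    import bpy

    class FlatbedScanOperator(bpy.types.Operator):
        bl_idname = "object.flatbed_3d_scan"
        bl_label = "Flatbed 3D Scan"

        # Property names and defaults are placeholders for illustration
        file_path: bpy.props.StringProperty(
            name="Project Folder", subtype='DIR_PATH',
            description="Folder containing Main.py and supporting files")
        num_scans: bpy.props.IntProperty(
            name="Number of Scans", default=3, min=3, max=12)
        dpi: bpy.props.IntProperty(name="DPI", default=600, min=75)

        def invoke(self, context, event):
            # Show a dialog with the properties above before execute() runs
            return context.window_manager.invoke_props_dialog(self)

        def execute(self, context):
            # self.file_path, self.num_scans, and self.dpi are filled in
            # from the dialog and can be handed to the scanning pipeline
            return {'FINISHED'}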

I also found an existing Blender add-on, “DeepBump,” that has a module to convert a normal map to a height map. I adapted some of its code into our project. It’s now part of the overall system run and produces a height map, though not a very good one because of our alignment issues. We may need to look for a different height-map method depending on how well this one works on an aligned normal map. It seems to use gradients from the normal map in the Frankot-Chellappa algorithm, but since our project focuses on very small objects, I’m not sure a gradient-based approach is the right way to go.
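
For reference, here is a minimal sketch of the Frankot-Chellappa integration itself (not DeepBump’s actual code): the normals are converted to surface gradients, and the height is recovered in the Fourier domain. Constant factors only affect the height scale.

    import numpy as np

    def frankot_chellappa(normals):
        """Recover a height map from an (H, W, 3) array of unit normals."""
        nz = np.clip(normals[..., 2], 1e-6, None)   # avoid divide-by-zero
        p = -normals[..., 0] / nz                   # dz/dx from the normals
        q = -normals[..., 1] / nz                   # dz/dy from the normals
        h, w = p.shape
        u = np.fft.fftfreq(w)[None, :]              # horizontal frequencies
        v = np.fft.fftfreq(h)[:, None]              # vertical frequencies
        P, Q = np.fft.fft2(p), np.fft.fft2(q)
        denom = u**2 + v**2
        denom[0, 0] = 1.0                           # dodge 0/0 at the DC term
        Z = (-1j * u * P - 1j * v * Q) / denom
        Z[0, 0] = 0.0                               # pin the mean height to zero
        return np.real(np.fft.ifft2(Z))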

Verification so far has mostly been manual testing of the software system: running an individual file I worked on, then running it through Main.py, then through the Blender add-on, and so on, making sure each layer works. It’s difficult to set up unit testing since many of the functions depend on external signals, image file outcomes, and what the physical hardware does, so running the system with the hardware plugged in usually gives the clearest picture. Checking that files are saved appropriately, and investigating whenever there is a crash or error, has so far led to a functioning software subsystem. Future verification for image alignment would be manually checking the files to see whether the object is properly aligned. Verification for the height map would be a visual check of whether it looks accurate to what we’d expect from the scans. Importing into Blender would rely on Blender’s built-in errors as well as a manual check that the .obj is created and appears in the Blender editor. We also run all of these manual tests on Theo’s computer, both to confirm the software works on a system other than mine (so I haven’t built in anything specific to my machine) and to cover the Linux OS via his dual-boot laptop.

The next step would be to align the images, ensure the height map is accurate enough, and then figure out how to import it into Blender as an .obj.
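
The import step itself may be small: assuming we generate the .obj on disk, Blender’s built-in OBJ importer can be invoked from the add-on. Note the operator name changed across Blender versions, so this sketch branches on the version.

    import bpy

    def import_scan_obj(filepath):
        # Blender 3.2+ added the faster built-in importer; older
        # versions used the legacy Python importer.
        if bpy.app.version >= (3, 2, 0):
            bpy.ops.wm.obj_import(filepath=filepath)
        else:
            bpy.ops.import_scene.obj(filepath=filepath)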

Yon’s Status Report for 03/29/2025

This week we ran my subsystem together with Sophia’s scanner controller for the first time. We rotated the coin manually, so the normal map is noisy, but we see expected behavior for the areas we don’t spin (a flat normal map). I also designed the test object we’re going to use and printed it.

This sets us up well for the demo this coming week, as Theo just implemented his system, so we’ll be able to run a full system test. I’ll also scan the test object on the benchmark scanner and probably try to run it on ours (maybe for the demo) to begin doing qualification. Lastly, I’ve been thinking about how we might threshold the object to avoid performing the computationally heavy normal-to-depth-map operation on the parts of the scan we don’t care about. I think this should be simple if we just threshold the normal map, but I’ll test it and then work it into my subsystem; a sketch of the idea is below.
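
A minimal sketch of what that thresholding could look like, assuming the background (bare scanner glass) reads as a roughly flat, straight-up normal; the tolerance value is a guess to be tuned.

    import numpy as np

    def foreground_mask(normals, flat_tol=0.05):
        # Pixels whose normal deviates from straight-up (0, 0, 1) by more
        # than flat_tol are treated as object; the rest are background.
        deviation = np.linalg.norm(normals - np.array([0.0, 0.0, 1.0]), axis=-1)
        return deviation > flat_tol

    # The expensive normal->depth integration can then run only on the
    # bounding box of the mask instead of the full scan.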

Sophia’s Status Report for 03/29/2025

This week, we tested the system end to end: scanning, checking that the stepper motor rotates, saving the scans, and feeding them into the math in a Python file to make the normal map. After a bit of directory/save-name tweaking, the system works, even starting and running fully from the Blender add-on. The only thing we didn’t test is the manipulator rotating the coin/object itself; Theo rotated it manually for the testing because we were waiting on a 3D printed part to mount the suction piece on the manipulator device. We were actually able to get a decently clear normal map of the coin, just a regular U.S. quarter. My personal contribution was adjusting the software so it could integrate Yon’s normal-map-making Python file.
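
For context, the standard photometric-stereo formulation behind this kind of normal map computation looks roughly like the sketch below. This is not Yon’s actual code, and the lighting directions for a flatbed scanner are an assumption here; it just shows the least-squares structure of the math.

    import numpy as np

    def estimate_normals(images, light_dirs):
        """images: K aligned grayscale scans, each (H, W);
        light_dirs: (K, 3) unit lighting directions, one per scan."""
        h, w = images[0].shape
        I = np.stack([im.reshape(-1) for im in images])   # (K, H*W) intensities
        L = np.asarray(light_dirs, dtype=float)           # (K, 3)
        # Solve L @ G = I per pixel in the least-squares sense,
        # where G = albedo * normal.
        G, *_ = np.linalg.lstsq(L, I, rcond=None)         # (3, H*W)
        albedo = np.linalg.norm(G, axis=0)
        normals = G / np.clip(albedo, 1e-8, None)
        return normals.T.reshape(h, w, 3), albedo.reshape(h, w)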

We’ve decided to put Mac-compatible software on hold, as there are complications with Mac and the framework we were using to command the scanner. We’ll look into Mac-specific scanning libraries after getting the whole system, including .obj creation, working on Windows and Linux. Better to have at least one OS working completely than to have it work only partially on all three.

The next step is expanding the Blender UI so the user can select the file path needed for the scanning process, the COM port for the microcontroller (though maybe we could detect that automatically; see the sketch below), and possibly the scanning DPI. We could maybe also get the command line progress messages from the scanning process displayed in Blender. We also need to make sure the suction manipulation works in a full system run, and integrate .obj 3D model creation. I’d say we’re on track and have a pretty good setup for the demos this upcoming week.
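
Auto-detecting the COM port might be doable with pyserial’s port listing; a minimal sketch, with the USB vendor/product IDs as placeholders for whatever our microcontroller actually reports.

    import serial.tools.list_ports

    def find_microcontroller_port(vid=0x2341, pid=0x0043):
        # The VID/PID here are an Arduino Uno's, purely as an example;
        # substitute the IDs our board enumerates with.
        for port in serial.tools.list_ports.comports():
            if port.vid == vid and port.pid == pid:
                return port.device  # e.g. "COM4" on Windows, "/dev/ttyACM0" on Linux
        return None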

Sophia’s Status Report for 03/22/2025

This week, I started on the Blender add-on. I’ve successfully made an add-on named “Flatbed 3D Scan” appear in Blender, and it runs the “execute” function inside its Python file when clicked. So, next week I’ll be making it call the Main.py file from last week, which runs the scan. I need to look further into whether there’s a way to pass arguments to the add-on, or a way to make a UI window pop up for entering arguments.
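
The skeleton of an add-on like this is small; a sketch of the registration pattern, with the menu location and the Main.py invocation as assumptions rather than our exact code.

    import subprocess
    import sys

    import bpy

    bl_info = {"name": "Flatbed 3D Scan", "blender": (3, 0, 0), "category": "Object"}

    class FlatbedScanOperator(bpy.types.Operator):
        bl_idname = "object.flatbed_3d_scan"
        bl_label = "Flatbed 3D Scan"

        def execute(self, context):
            # Run Main.py as a separate process so a scanning crash
            # can't take Blender down with it (path is a placeholder).
            subprocess.run([sys.executable, "Main.py"], check=False)
            return {'FINISHED'}

    def menu_func(self, context):
        self.layout.operator(FlatbedScanOperator.bl_idname)

    def register():
        bpy.utils.register_class(FlatbedScanOperator)
        bpy.types.VIEW3D_MT_object.append(menu_func)

    def unregister():
        bpy.types.VIEW3D_MT_object.remove(menu_func)
        bpy.utils.unregister_class(FlatbedScanOperator)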

For Main.py, I refactored it a bit, along with the scanner-controller dotnet project and serial_proto.py, in order to put them in the right directories and call serial_proto directly from Main.py. So the sequence looks okay. I need to implement try-catches to make sure the program doesn’t hit unhandled errors. The dotnet project still isn’t working on Mac, so we’re going to look more into that next week as well.
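
The error handling could look roughly like this; a sketch assuming the scanner controller is invoked with `dotnet run`, with the project path and arguments as placeholders.

    import subprocess
    import sys

    def run_scanner_controller(args):
        # Wrap the dotnet call so failures surface as clear messages
        # instead of unhandled exceptions.
        try:
            subprocess.run(
                ["dotnet", "run", "--project", "scanner-controller", "--", *args],
                check=True)
        except FileNotFoundError:
            print("dotnet was not found on this machine", file=sys.stderr)
            raise
        except subprocess.CalledProcessError as err:
            print(f"Scanner controller exited with code {err.returncode}",
                  file=sys.stderr)
            raise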

So overall, next week will be spent adding quality-of-life updates to the Blender add-on UI and bug fixing while testing the system.

Sophia’s Status Report for 03/15/2025

With Theo’s help, I got the scanner controller software working, still using the NAPS2 library. It required changing the scanning contexts to different ones than I expected: Windows had two different scanning contexts, and the one labeled for Windows Forms, not Windows, was the one needed in this instance.

I started writing the Main file that calls the scanner controller software, commands the microcontroller manipulator device, and will eventually interact with the image processing software and the Blender add-on UI. I chose Python for this since there’s a lot of moving between file directories and some command line calls for the dotnet project that encompasses the scanner controller software. I also updated the scanner controller software to account for the Main file creating the scanning directory, so it now just has to make sure it doesn’t repeat file names for new scans.
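
The intended sequence, roughly, as a sketch; `serial_proto.connect()` and `rotate()` are hypothetical names standing in for whatever the serial interface ends up exposing.

    import subprocess

    import serial_proto  # our serial link to the manipulator microcontroller

    def run_scan_cycle(scan_dir, num_scans, dpi):
        link = serial_proto.connect()    # hypothetical helper
        for i in range(num_scans):
            out_file = f"{scan_dir}/scanPage{i}.png"
            subprocess.run(["dotnet", "run", "--project", "scanner-controller",
                            "--", out_file, str(dpi)], check=True)
            if i < num_scans - 1:
                link.rotate()            # hypothetical: advance the object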

The next steps would be to implement the manipulator device calls in the Main.py file and ensure the scan-and-rotate process works, build the Blender add-on, and integrate Yon’s image processing software. Initially I thought to make the Blender add-on do all of the calls, but now I think it’d be better for the add-on to be almost exclusively UI, with a start button that calls the Main.py file. So technically I’m a touch behind schedule, but as long as I finish the Blender add-on next week and the Main.py file is working, it should be fine since we left two weeks of buffer time.

Sophia’s Status Report for 03/08/2025

The week before spring break, I started working with Theo on the serial code to get communication between the computer and the microcontroller on the manipulator device. Since the device connects over USB, it’s a matter of opening the serial port and feeding commands to the manipulator device at the right time.
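
At its simplest, that communication could look like this with pyserial; the port name, baud rate, and command word are assumptions for illustration.

    import serial

    # "COM3" and 115200 baud are placeholders for our actual settings.
    with serial.Serial("COM3", 115200, timeout=2) as link:
        link.write(b"ROTATE\n")      # hypothetical command to the manipulator
        ack = link.readline()        # block until the microcontroller replies
        if ack.strip() != b"OK":
            raise RuntimeError(f"unexpected response: {ack!r}")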

As for the program to automatically capture scans using the flatbed scanner, I’ve encountered a problem. The file runs fine up until a practical trial with the flatbed scanner. However, when I try to use the scanner with it, I encounter a “lost connection with the scanner” error in the middle of scanning. I hear the scanner start, but then it loses connection and doesn’t save the file. Online searching wasn’t helpful, only suggesting to unplug and replug or restart the scanner, which I tried a few times unsuccessfully. I guessed it was something to do with file access permissions between my computer and the scanner, so I tried moving the project to more accessible file locations that definitely wouldn’t require admin rights, running the script as an admin, checking whether there was an access permission the scanner was missing, and double-checking that the device drivers were up to date, and nothing fixed the issue. It’s extra confusing because I was able to scan and save just fine from the scanner’s native software and from the NAPS2 library’s own software. I asked Theo to try downloading the project and running it to see if it was an issue with Windows or my machine in particular. However, he encountered a lot of issues trying to set up the dotnet project, plus incompatibilities between the dotnet version the project required and his version of Linux.

So, in light of this, I believe the best approach is to pivot away from the universal NAPS2 scan library that requires a dotnet project, and instead use a series of files that don’t rely on an existing framework. There would be a master file that receives the command from the Blender UI, checks the OS, and then calls a corresponding file to make the scans for that OS. This way, we would have a file for each OS, and each OS could use a compatible scanning library. It also means we can incrementally ensure each OS works, finishing one before moving on to the next, and guarantees that something works in general even if it isn’t compatible with every OS. Currently, I’m looking at WIA (Windows Image Acquisition) for Windows, SANE (Scanner Access Now Easy) for Linux, and ImageCaptureCore (an Apple API) for Mac. Since two of these are native to their OSes and Linux is generally good with setting up libraries, I think these will work out better.
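
The master-file dispatch could be a few lines; a sketch where the per-OS script names are placeholders for the files described above.

    import platform
    import subprocess
    import sys

    def scan(out_path, dpi):
        system = platform.system()
        if system == "Windows":
            script = "scan_windows.py"   # would use WIA
        elif system == "Linux":
            script = "scan_linux.py"     # would use SANE
        elif system == "Darwin":
            script = "scan_mac.py"       # would use ImageCaptureCore
        else:
            raise RuntimeError(f"unsupported OS: {system}")
        subprocess.run([sys.executable, script, out_path, str(dpi)], check=True)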

Sophia’s Status Report for 02/22/2025

This week, I added a check for the different OSes to the scanner controller program. So far, it correctly recognizes and adjusts the scanning context based on the OS, since NAPS2 uses a different scanning context object for each OS. I also added a “scans” folder to save each scanned .png file to. Each scan process/cycle creates a new scan folder (in the format scan0, scan1, scan2, etc.) in the scans directory to keep scan cycles cleanly separated. Each scan .png file is saved in its respective scan cycle folder (in the format scanPage0.png, scanPage1.png, etc.). So, for example, if we take 3 scans of an object, you could find scanner-controller/scans/scan0/scanPage0.png, scanner-controller/scans/scan0/scanPage1.png, and scanner-controller/scans/scan0/scanPage2.png to send to the image processing component. I would just need to make sure the program knows which scan number it is on, essentially passing a scan ID between functions.
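
The folder-numbering scheme is simple enough to sketch. The actual controller is a dotnet project, but in Python the idea amounts to this.

    import os

    def next_scan_dir(base="scanner-controller/scans"):
        # Find the first unused scanN folder so earlier scan cycles
        # are never overwritten.
        os.makedirs(base, exist_ok=True)
        n = 0
        while os.path.exists(os.path.join(base, f"scan{n}")):
            n += 1
        path = os.path.join(base, f"scan{n}")
        os.makedirs(path)
        return path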

Additionally, I implemented an exception for when no scanners are detected. I also did more research into the NAPS2 documentation and how to properly connect the scanner. I usually use WSL to run code that’s stored on my Windows filesystem, and while I can run the code fine through Windows, I think I’ve found a way to get the scanner recognized in WSL too. So, I believe I’d be able to test both the Windows and Linux environments from my laptop alone; Yon would just have to test the Mac version.

The scanner I ordered arrived. Next week I will begin testing communication and integration with it. I am currently on schedule.

Sophia’s Status Report for 02/15/2025

I created a branch in GitHub for the scanner controller software. I also created the dotnet project and added the universal scanner NAPS2 library, which took a lot longer than you’d think; the dependencies felt never-ending. I’d also never worked with dotnet before, as far as I remember, so figuring out projects and solutions was a difficult time. Additionally, I added simple scan sample code to ensure all the needed NAPS2 libraries were correctly identified.

I also worked on the design presentation. I edited and added most of the slides needed for this sprint and filled out my own relevant software sections, ensuring the needed topics were covered in the presentation.

My next steps are to create mock data and unit tests for the scanner controller code to ensure the simple single scans are working before moving on to scanning multiple times with a delay in between.

Sophia’s Status Report for 02/08/2025

Most of my work time this week was spent preparing the proposal presentation, including fleshing out the requirements and quantitative measures, making diagrams, and creating the slides themselves. I worked with my teammates over calls to flesh it out, and my team agreed that I would be presenting the proposal. I spent time practicing to make sure I explained the project clearly, not just reading directly from the slides, and asked my teammates about their parts of the project so I had a thorough understanding of all the elements.

Decided on scanner and library – I researched what software libraries could potentially work for our project, particularly which libraries are available to command scanners. I found one called NAPS2 that should work with any popular brand of scanner. I also decided on a scanner to buy: a 4800 dpi Canon scanner. 4800 dpi is plenty of detail for the scans, Canon is a very common brand, and the scanner is small and shouldn’t be too difficult to move or store. It’s also less than $100, which leaves plenty of budget for materials for the hardware of our device.

My progress is currently on schedule. Next week, I will start using the NAPS2 library and creating the software that sends commands directly to the scanner, receives the scans, and then passes them to Yon’s image processing software. Depending on how far I get, I could even start unit testing it. Hopefully it will be in a decent state by the time the ordered scanner arrives for practical testing.