Hannah’s Status Update Week 3/22

This past week has been another COVID-19 week, so I didn’t accomplish much besides reading documentation on different techniques for blob detection. However, on Sunday we finally decided how to split up the work, and I will now for sure be focusing on the 3D visualizer (taking the g-code and creating a 3D model from it).

So this upcoming week, I plan to create a function that will take in a parsed list of g-code commands (consisting only of X-Y-Z movements) and create a 3D model in Python. Then I will create a separate function that will take this 3D model and output a basic STL (CAD) file.
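As a starting point, here is a minimal sketch of what those two functions might look like, assuming the parser hands me a list of dicts with optional X/Y/Z keys; that input format, the ribbon width, and all the names below are placeholders, not the final design.

```python
# Minimal sketch: accumulate absolute positions from parsed moves, then write
# each toolpath segment as a thin flat ribbon (two triangles) to an ASCII STL.
# The dict-based move format is an assumption about what the parser will emit.

def moves_to_points(moves, start=(0.0, 0.0, 0.0)):
    """Accumulate absolute XYZ positions from a list of parsed moves."""
    x, y, z = start
    points = [start]
    for move in moves:
        x = move.get("X", x)
        y = move.get("Y", y)
        z = move.get("Z", z)
        points.append((x, y, z))
    return points


def write_segments_as_stl(points, path, width=0.4):
    """Write each consecutive pair of points as a ribbon of two triangles."""
    half = width / 2.0
    with open(path, "w") as f:
        f.write("solid toolpath\n")
        for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
            # Offset the segment sideways along its XY perpendicular.
            dx, dy = x1 - x0, y1 - y0
            length = (dx * dx + dy * dy) ** 0.5 or 1.0
            px, py = -dy / length * half, dx / length * half
            a = (x0 + px, y0 + py, z0)
            b = (x0 - px, y0 - py, z0)
            c = (x1 + px, y1 + py, z1)
            d = (x1 - px, y1 - py, z1)
            for tri in ((a, b, c), (b, d, c)):
                # Normals are left as a placeholder; most viewers recompute them.
                f.write("  facet normal 0 0 1\n    outer loop\n")
                for vx, vy, vz in tri:
                    f.write(f"      vertex {vx:.4f} {vy:.4f} {vz:.4f}\n")
                f.write("    endloop\n  endfacet\n")
        f.write("endsolid toolpath\n")
```

The ribbon width stands in for the extrusion width; a real visualizer would probably sweep a proper cross-section along each segment instead of a flat strip.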

Joshua’s Status Update for Week of 3/22

This week, I had more discussions with Hannah and Lucas, and I think we’re getting back on track. We had to reassign some tasks because each of us now has restrictions on the resources we have access to. Since Hannah has done all the research on the g-code and I offered to do the blob code, I started working on the blob comparison code as well as the blob detection code. Nothing is really working yet, so I’m glad the demo got pushed back a week. This week I will finish the blob code and integrate it with the core plugin.
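For reference, the direction I’m taking looks roughly like the sketch below, built around OpenCV’s SimpleBlobDetector; the thresholds and the simple count-based comparison are placeholders until I can tune against real print footage.

```python
# Rough sketch of the blob detection / comparison code using OpenCV's
# SimpleBlobDetector. Parameter values and the comparison heuristic are
# placeholders, not the final tuned pipeline.
import cv2


def detect_blobs(gray_image):
    """Detect blobs in an 8-bit grayscale frame."""
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 50  # placeholder threshold
    detector = cv2.SimpleBlobDetector_create(params)
    return detector.detect(gray_image)


def blob_counts_differ(expected_img, observed_img, tolerance=2):
    """Flag a potential print error if blob counts differ by more than a tolerance."""
    expected = detect_blobs(expected_img)
    observed = detect_blobs(observed_img)
    return abs(len(expected) - len(observed)) > tolerance


# Hypothetical usage:
# frame = cv2.imread("print_frame.jpg", cv2.IMREAD_GRAYSCALE)
# reference = cv2.imread("expected_frame.jpg", cv2.IMREAD_GRAYSCALE)
# error_suspected = blob_counts_differ(reference, frame)
```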

Team Status Update for Week of 3/22

We redid our Gantt chart, trimming out tasks made unnecessary by our rescoping of the project and reassigning some tasks to better accommodate those of us who currently do not have access to on-campus resources. This week we will focus on the g-code visualizer and on finishing the blob detection.

The biggest risk factor is our hardware limiting the software implementation we have chosen. We are manipulating very large data sets and performing multiple searches during each error check. Since our design can be checked just by streaming in a prerecorded video of a print job, we don’t necessarily need an RPi to prove our system works. So in the worst case, we can just run our plugin on a laptop.
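As a quick illustration that the worst case is workable, feeding the error checks from a recorded file is straightforward with OpenCV; the check_frame callback below is a stand-in for whatever entry point the plugin ends up exposing.

```python
# Sketch of the laptop fallback: stream frames from a prerecorded print video
# into the same per-frame checks a live RPi camera feed would get.
import cv2


def stream_recorded_print(video_path, check_frame):
    cap = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:          # end of the recording
                break
            check_frame(frame)  # hypothetical per-frame error check
    finally:
        cap.release()


# stream_recorded_print("benchy_print.mp4", lambda frame: None)  # placeholder recording
```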

Another risk factor is the system having a higher false positive rate than expected. Although a high false positive rate is preferable to a high false negative rate, we also want the false positive rate to be low. If we come across this issue, we can tune the various similarity metrics we use in order to mitigate this risk. This is unchanged from the design report.

A minor risk is that the fixture we construct to hold our sensors and devices is too heavy to meet specifications. Lucas has designed the armature already and is able to print it at home, so he’ll be in direct control of the weight of the fixture.

We only have access to a single functioning printer…one that has already broken down once. If the printer were put out of commission again, we would lose not only our only test platform but also our only avenue for printing mounts and other mechanical hardware.

Team Status Update for Week of 3/15

Because of the COVID-19 predicament, we didn’t get much work done as a team, but we did meet with each other and with the course faculty and TAs over Zoom to discuss our project. The results of the discussion are outlined in our Statement of Work.

This upcoming week we will work on the error detection functions, the core system, and fixing up the broken Printrbot and the associated software components.

Joshua’s Status Update for Week of 3/15

Considering the circumstances with COVID-19, I didn’t do as much this week as in previous weeks. However, during Spring Break, I did begin implementation of the main system plugin. This included a half-runnable/half-pseudocode Python implementation of the RPi camera I/O access, creating the prototype for a basic OctoPrint plugin with the appropriate handlers to access the g-code, and experimenting with the possibility of multithreading. I also attempted to begin the point cloud code. This can all be found in the team’s GitHub repo. I also wrote the Statement of Work.
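For context, the plugin prototype follows the general shape sketched below, based on OctoPrint’s plugin mixins and G-code hooks; the class name and the logging are placeholders rather than our actual plugin code.

```python
# Skeleton of a basic OctoPrint plugin that sees each G-code command as it is
# queued. Names here are illustrative; the real prototype lives in our repo.
import octoprint.plugin


class PrintMonitorPlugin(octoprint.plugin.StartupPlugin):
    def on_after_startup(self):
        self._logger.info("Print monitor plugin started")

    def intercept_gcode(self, comm_instance, phase, cmd, cmd_type, gcode,
                        *args, **kwargs):
        # Record movement commands so the checker can later compare them
        # against what the camera sees.
        if gcode in ("G0", "G1"):
            self._logger.debug("Queued move: %s", cmd)
        return None  # leave the command unchanged


__plugin_implementation__ = PrintMonitorPlugin()
__plugin_hooks__ = {
    "octoprint.comm.protocol.gcode.queuing":
        __plugin_implementation__.intercept_gcode,
}
```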

This upcoming week I need to refocus on the main system code, provide solid system error messages, and give better access to the necessary IO and handlers.

Hannah’s Status Update Week 3/15

This first week “back” has been extremely chaotic and I personally haven’t been able to get much done. My teammates and I met via Zoom to discuss how we are thinking about changing our project, and I believe that I will continue to work on our blob/edge detection as we originally had planned.

This week, since it’s been difficult to focus given the circumstances, I’ve been researching different ways to approach this problem and how to combine multiple algorithms to make the blob detection as robust as possible.

Hopefully the upcoming week will be more productive as I get settled into virtual school life.

Hannah’s Status Update Week 2/23

This past week, I spent a good chunk of time refining my contribution to the design to match our new overall design (switching from the custom SBC and a very hardware-based project to using an RPi and focusing much more on the algorithms for error detection).

In particular, Joshua and I began beefing up our research and ideas for exactly how to make the error detection as robust as possible. We settled on blob detection, edge detection, and 3D point cloud analysis. Over the course of this week, I focused mostly on the blob detection and edge detection.
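As a rough illustration of the edge-detection half, something like the OpenCV sketch below is the starting point; the blur kernel and Canny thresholds are placeholders that will need tuning on real print footage.

```python
# Illustrative edge-detection sketch: Gaussian blur followed by Canny.
import cv2


def extract_print_edges(gray_image, low=50, high=150):
    blurred = cv2.GaussianBlur(gray_image, (5, 5), 0)  # suppress sensor noise
    return cv2.Canny(blurred, low, high)               # binary edge map


# edges = extract_print_edges(cv2.imread("layer.jpg", cv2.IMREAD_GRAYSCALE))  # hypothetical frame
```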

We also had to reconsider which cameras to use, since the RPi only supports connecting one camera directly. We decided to keep the camera we chose before (which connects via UART serial) and to have the second camera be one that is easily compatible with the RPi. We settled on the RPi Camera Board v2 because it is compatible with the RPi, has an image sensor very similar to our current camera’s, and has the same lens mount as the other camera, so if we decide to buy wider lenses, the same lens can be put on both camera modules.

This upcoming week I will hopefully finally start implementing the blob detection. If the camera setup is done at the same time, we should be able to start testing soon 🙂

*note of a minor inconvenience

The reason I haven’t been able to start implementing is that my laptop is so old that it no longer automatically upgrades its software. I have figured out how to override this, but I need to back up my laptop first, and I found that my backup hard drive is somehow broken(???), so I needed to buy a new one. I will be starting as soon as I can.

Team Status Update for Week of 2/23

This week we spent time incorporating the written and verbal feedback we received from the design presentation into our design report. First, we reread the papers we found in order to get a better understanding of past designs. We refined our design further, and our report contains specific details on our system. In particular, we discuss the required math and give more descriptive justifications for our design choices.

Joshua’s Status Update for Week of 2/23

What I did:

I presented the Design Review. I also re-read “Vision based error detection for 3D printing processes” by Felix Baumann and Dieter Roller to fully understand the details of their algorithm. By reading “Initial Work on the Characterization of Additive Manufacturing (3D Printing) Using Software Image Analysis” by Jeremy Straub, I got more of a run-through of other ways to model the printed object.


Hannah’s Status Update Week 2/16

This past week I finished up the trade study on the different camera modules, and we came to a conclusion on which one to get.

We decided to go with the TTL Serial JPEG Camera with NTSC Video from Adafruit [link]. We chose this one because of its price and some extra capabilities that will make our time working with the module easier, such as a DSP management tool that lets us tweak basically all of the camera’s features as well as simplify testing. We are also purchasing a separate camera lens that covers a viewing angle of 116° rather than the standard 56° [link].

Other than this, I worked a lot on learning more about different edge detection methods and ways to parse and reconstruct a 3D model from g-code. I found a repo on GitHub for g-code reading, which I plan to modify to better suit our needs [link].
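To give a sense of what the parsing step involves, a stripped-down version might look like the sketch below; this is only illustrative and not the repo’s code.

```python
# Tiny sketch of G-code parsing: pull X/Y/Z values out of G0/G1 move lines.
def parse_moves(gcode_lines):
    moves = []
    for line in gcode_lines:
        line = line.split(";", 1)[0].strip()  # drop comments
        parts = line.split()
        if not parts or parts[0] not in ("G0", "G1"):
            continue
        move = {}
        for token in parts[1:]:
            if token[0] in "XYZ":
                move[token[0]] = float(token[1:])
        if move:
            moves.append(move)
    return moves


# parse_moves(["G1 X10.0 Y5.0 E0.4", "G1 Z0.2"]) -> [{"X": 10.0, "Y": 5.0}, {"Z": 0.2}]
```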