Hannah’s Status Update Week 4/19

This past week was a lot of playing around with the parsing and graphing code I wrote, getting it to automatically slice the image into the different layers.
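
To give a flavor of what the slicing does (a minimal sketch with illustrative names, not my actual code), the core idea is just grouping the parsed moves by their Z height:

```python
# Minimal sketch of the layer-slicing idea (names are illustrative).
# Assumes `moves` is the parsed list of (x, y, z) positions from the g-code.
def slice_into_layers(moves):
    """Group (x, y, z) positions by Z so each group is one printed layer."""
    layers = {}
    for x, y, z in moves:
        layers.setdefault(round(z, 3), []).append((x, y))
    return layers  # {z_height: [(x, y), ...]} in print order
```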

This week I also worked on “injecting” bad/faulty g-code into my plotter so that we can actually start to compare the good and bad models against Joshua’s edges from images of what the print should look like.
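
One simple way to do the injection (a sketch with a placeholder file name and line range, not necessarily our final approach) is to nudge the X coordinates of a stretch of G1 moves so the plotter draws a visible defect:

```python
# Sketch of injecting a fault: shift the X coordinate on a range of
# G1 moves so the plot/print shows a visible defect. The file name
# and line range below are placeholders.
import re

def inject_shift(lines, start, end, dx=2.0):
    """Offset X by dx mm on g-code lines start..end (inclusive)."""
    out = []
    for i, line in enumerate(lines):
        if start <= i <= end and line.startswith("G1") and "X" in line:
            line = re.sub(r"X(-?\d+\.?\d*)",
                          lambda m: f"X{float(m.group(1)) + dx:.3f}", line)
        out.append(line)
    return out

with open("good.gcode") as f:
    faulty = inject_shift(f.read().splitlines(), 500, 600)
```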

Hannah’s Status Update Weeks 4/5 & 4/12

Over the past two weeks I’ve been working on creating a grapher for the prints we want, using their g-code files.

I’ve finished my first passes at it, but the issue now arising is its presentation when it comes time to compare it to the actual image.

I have two examples included (both g-code files I got off of Thingiverse).

The first is a puzzle piece. From all around, it looks good and similar to the slicer’s view of it, but when looking at the graph from the view we think our camera would be at, there are gaps in between each layer. I think this is because of how Python auto-scales the plot axes (I’m not entirely sure of this and will work on fixing it).
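
If the auto-scaling really is the culprit, one possible fix (a sketch, not verified yet) is to force equal limits on all three axes so the plot can’t stretch each dimension independently:

```python
# A guess at the fix: give the 3D axes a cube-shaped data box so
# each axis can't be auto-scaled independently. `xs, ys, zs` are
# the plotted coordinates.
import numpy as np

def set_axes_equal(ax, xs, ys, zs):
    r = max(np.ptp(xs), np.ptp(ys), np.ptp(zs)) / 2
    cx, cy, cz = np.mean(xs), np.mean(ys), np.mean(zs)
    ax.set_xlim(cx - r, cx + r)
    ax.set_ylim(cy - r, cy + r)
    ax.set_zlim(cz - r, cz + r)
```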

The second object was a classic Coke bottle. This one had some different issues. With the puzzle piece, there’s a final line of the extruder returning to its original position, but it doesn’t get in the way of the piece. For the Coke bottle, the extruder returns to such a high Z value that it shrinks the look of the print in the graph. I will probably need to go into the actual g-code file, find where this last step is, and change its value so that this issue is fixed.
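
Rather than hand-editing every g-code file, it might be cleaner to filter that last travel move out at plot time. A rough sketch, assuming the parser tags each move with whether it extrudes:

```python
# Sketch: drop any move above the tallest extruding layer before
# plotting, so the final "return home" move can't blow up the Z axis.
def drop_high_travels(cmds):
    """cmds: list of (x, y, z, extruding) tuples from the parser."""
    top = max(z for _, _, z, extruding in cmds if extruding)
    return [c for c in cmds if c[2] <= top]
```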

So this upcoming week is really about fixing my graphs and getting these images to Joshua so we can test the edge detection. Then, working with Lucas and the more advanced parser, we want to figure out how long each layer takes and, from there, perform multiple comparison tests for one object.
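
For the layer timing, the rough idea (sketched below with an assumed input format) is that g-code feedrates (the F parameter) are in mm/min, so each move’s duration is its distance divided by the current feedrate:

```python
# Sketch of per-layer timing: each move takes distance / feedrate,
# with F in mm/min. `moves` is assumed parsed as (x, y, z, feedrate).
import math

def layer_seconds(moves):
    """Sum estimated move durations per Z height: {z: seconds}."""
    times, last = {}, None
    for x, y, z, f in moves:
        if last is not None and f:
            dist = math.dist(last, (x, y, z))               # mm
            times[z] = times.get(z, 0.0) + dist / (f / 60.0)  # mm/min -> mm/s
        last = (x, y, z)
    return times
```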

Hannah’s Status Update Week 3/22

This past week was another COVID-19 week, so I didn’t accomplish much besides reading up on documentation for different blob detection techniques. However, on Sunday we finally decided how to split up the work, and I will now for sure be focusing on the 3D visualizer (taking the g-code and creating a 3D model from it).

So this upcoming week, I plan to create a function that takes in a parsed list of g-code commands (consisting only of x-y-z movements) and creates a 3D model using Python. Then I will create a separate function that takes this 3D model and outputs a basic STL (CAD) file.
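
For a sense of what that first function might look like (a minimal sketch under assumptions about the parser’s output, not the final visualizer):

```python
# Minimal first pass: take parsed x-y-z movements and draw them as
# a 3D path with matplotlib. Names and input format are assumptions.
import matplotlib.pyplot as plt

def plot_model(points):
    """points: list of (x, y, z) positions from the g-code parser."""
    xs, ys, zs = zip(*points)
    ax = plt.figure().add_subplot(projection="3d")
    ax.plot(xs, ys, zs, linewidth=0.5)
    plt.show()
```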

Hannah’s Status Update Week 3/15

This first week “back” has been extremely chaotic and I personally haven’t been able to get much done. My teammates and I met via Zoom to discuss how we are thinking about changing our project, and I believe that I will continue to work on our blob/edge detection as we originally had planned.

This week, since it’s been difficult to focus given the circumstances, I’ve been researching different ways to approach this problem and how to combine multiple algorithms to make the blob detection as robust as possible.

Hopefully the upcoming week will be more productive as I get settled into virtual school life.

Hannah’s Status Update Week 2/23

This past week, I spent a good chunk of time refining my contribution to the design to match our new overall design (switching from the custom SBC and a very hardware-based project to using an RPi and focusing a lot more on the algorithms for error detection).

In particular, Joshua and I began beefing up our research and ideas for what exactly we wanted to do to make the error detection as robust as possible. We came to the conclusion of doing blob detection, edge detection, and 3D point cloud analysis, and over the course of this week I focused mostly on the blob detection and edge detection.
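
As a baseline to build from (a sketch with OpenCV defaults and a placeholder image, not our tuned pipeline), the two detectors could start out like this:

```python
# Baseline sketch of blob + edge detection with OpenCV defaults.
# "print.jpg" is a placeholder image of the print.
import cv2

img = cv2.imread("print.jpg", cv2.IMREAD_GRAYSCALE)

# Blob detection with default parameters.
detector = cv2.SimpleBlobDetector_create()
keypoints = detector.detect(img)

# Edge detection with Canny and hand-picked thresholds.
edges = cv2.Canny(img, 100, 200)

out = cv2.drawKeypoints(img, keypoints, None, (0, 0, 255),
                        cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
cv2.imwrite("blobs.png", out)
cv2.imwrite("edges.png", edges)
```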

We also had to reconsider what cameras to use, since the RPi only supports one camera connected in parallel. We decided to keep the camera we chose before (which connects via UART serial) and to have the second camera be one that is easily compatible with the RPi. We settled on the RPi Camera Board v2 because it is compatible with the RPi, has an image sensor very similar to our current camera’s, and has the same lens mount as the other camera, so if we decide to buy wider lenses, the same lens can be put on both camera modules.

This upcoming week I will hopefully finally start implementing the blob detection. If the camera setup is done at the same time, we should be able to start testing soon 🙂

*Note of a minor inconvenience

The reason I haven’t been able to start implementing is that my laptop is so old that it no longer automatically upgrades its software. I’ve figured out how to override this, but I need to back up my laptop first, and I found that somehow my backup hard drive is broken (???), so I needed to buy a new one. I will be starting as soon as I can.

Hannah’s Status Update Week 2/16

This past week I finished up the trade study on the different camera modules and we came to a conclusion on what to get.

We decided to go with the TTL Serial JPEG Camera with NTSC Video from Adafruit [link]. We chose this one because of its price and some extra capabilities that will make our time working with the module easier, such as a DSP management tool that will let us tweak basically all of the camera’s features as well as simplify testing. We are also purchasing a separate camera lens that covers a viewing angle of 116° rather than the standard 56° [link].

Other than this, I worked a lot on learning more about different edge detection methods and ways to parse and reconstruct a 3D image out of g-code. I found a repo on GitHub for g-code reading, which I plan to modify to better suit our needs [link].

Hannah’s Status Update Week 2/9

This week I started to look into all of the different camera systems and how the different components (image sensor, lens mount, and lens) work together. Based on some light research, most error detection systems that use a camera need at least 60 fps. We also care about the power usage and the overall size of whatever chip we get. These requirements were my main focus when looking at different camera/chip systems.

I also began to look up different ways to implement our g-code-to-2D-image function, and found g-code translator code on GitHub that uses Python to read g-code and produce a 3D graphical representation of the 3D print. I plan to continue looking at this repo this upcoming week to learn more about what they did and how they best implemented the different components of the translation.

Hannah’s Status Update Week 2/2

I presented for our Project Proposal, and the feedback we got was extremely helpful. I was proud of myself and my teammates for how we structured the slides and how we answered questions.

Other work that I’ve done this week includes starting the trade studies for different camera modules that we may use in our project. I’m taking a deep look into the physical size of each camera, as well as its pixel count and pixel size. There are a lot of camera modules out there that have a bunch of additional features (e.g., shape detection, color blocking, motion detection) that we may want to take advantage of.