Joshua’s Status Update for Week of 2/23

What I did:

I presented the Design Review. I also re-read "Vision based error detection for 3D printing processes" by Felix Baumann and Dieter Roller to fully understand the details of their algorithm. Reading "Initial Work on the Characterization of Additive Manufacturing (3D Printing) Using Software Image Analysis" by Jeremy Straub gave me a run-through of other ways to model the printed object.

I went back through the literature because I do not think that our one- (possibly two-) camera setup is enough to build a meaningful model of what is currently being printed. I wanted to quantify our actual coverage rate (CVR) because I believe a 100% CVR is not attainable with our current camera placement: the bulky extruder head occludes the print bed behind the printed object, and our current configuration of a front-facing camera plus the built-in Ultimaker3 camera cannot cover the print surfaces facing away from those cameras.

My position on the design is that we are currently building a TOF laser system to patch the inadequacies of the camera we specced. I would rather have decided on a minimum-detectable-object-size requirement, fixed the vertical and horizontal fields of view (probably the height and width of the Ultimaker3's build volume, respectively), and calculated the needed resolution, as described in this article by National Instruments:

Sensor Resolution = Image Resolution = (pixel count to detect object) × (FOV in mm / smallest object size in mm)
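
As a sanity check, here is a minimal sketch of that calculation in Python. The 215 mm and 300 mm FOVs are assumptions based on the Ultimaker3's build volume, and the two-pixel detection count is the usual rule of thumb; none of these are final numbers.

    # Minimal sketch of the sensor-resolution formula above.
    # Assumed numbers: the FOV spans the Ultimaker3 build volume
    # (215 mm wide, 300 mm tall), and we want at least 2 pixels
    # across the smallest (0.5 mm) object we care about.

    def required_resolution(fov_mm, smallest_object_mm, pixels_to_detect=2):
        """Pixels needed along one axis to detect the smallest object."""
        return pixels_to_detect * (fov_mm / smallest_object_mm)

    horizontal = required_resolution(fov_mm=215, smallest_object_mm=0.5)
    vertical = required_resolution(fov_mm=300, smallest_object_mm=0.5)
    print(f"required resolution: {horizontal:.0f} x {vertical:.0f} px")
    # -> roughly 860 x 1200 px, which is already more than the
    #    Ultimaker3 camera's usual 800x600 operating mode delivers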

In addition, the beam of the TOF laser we chose is actually conical (see page 6 of the data sheet), so using this particular device to target the extruder/filament specifically is not as useful as previously thought: the extruder head is bulky and will always sit closer to the TOF laser than the actual extrusion point, so the head is what the cone hits first.
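
To make that geometry concrete, here is a small sketch of how the cone's spot diameter grows with distance. The 25° full divergence angle is purely a placeholder assumption, not the figure from our data sheet:

    import math

    # Minimal sketch of the cone geometry. The spot diameter grows
    # linearly with distance, so whatever sits nearer the sensor
    # (the bulky extruder head) intercepts the beam well before the
    # extrusion point does. The 25-degree full divergence angle is
    # an assumed placeholder, not the data sheet value.

    def spot_diameter_mm(distance_mm, full_angle_deg=25.0):
        """Diameter of the illuminated spot at a given distance."""
        return 2.0 * distance_mm * math.tan(math.radians(full_angle_deg / 2.0))

    for d in (50, 100, 200):
        print(f"at {d:3d} mm: spot is ~{spot_diameter_mm(d):.0f} mm across")
    # -> at 100 mm the spot is already ~44 mm wide, far wider than
    #    the filament, so a reading there is dominated by the head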

I think a better design, one that reduces sensing and data-transfer latency, is to just spec out better cameras and use their output in a multi-faceted way. We definitely need to keep our Canny edge detection algorithm (sketched below), but we can also track the printed object and the extruder head to ensure that the extruder is not clogged (i.e. causing a premature-stop error); all of this comes from the same set of images. I think that a structured-light method (using a top-down camera) would help tighten up the model we get from the multiple camera views; I suspect a lot of our recurrent problems could be solved with this technique, but we can make do without it. In any case, using structured light effectively would probably require injecting g-code so that the extruder moves out of the way every N layers. I also believe we need more cameras or other sensors to fully capture the object, not just the view from the front or a corner.
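
For reference, a single Canny pass per frame is cheap. Below is a minimal OpenCV sketch of the kind of per-frame edge extraction I mean; the file name and the 100/200 hysteresis thresholds are placeholders, not values we have tuned for our cameras.

    import cv2

    # Minimal sketch of per-frame Canny edge extraction. "frame.png"
    # and the 100/200 hysteresis thresholds are placeholders, not
    # values tuned for our printer cameras.
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # Light blur first so sensor noise does not produce spurious edges.
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)

    # Canny returns a binary edge map; the same frame could also feed
    # extruder-head tracking, so one capture serves both checks.
    edges = cv2.Canny(blurred, 100, 200)
    cv2.imwrite("edges.png", edges)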

I am basically pushing for the removal of the TOF lasers unless their essentially binary output (i.e. is something there or not?) can be shown to provide information that a stronger camera specification, placement, and algorithm cannot. In general, I want the team to cultivate a data-driven attitude and not make our requirements subservient to the tools from the outset (although I do recognize we have a $600 budget). I am also pushing for the team to focus on developing, testing, and demoing solely on the Ultimaker3 (i.e. not thinking at all about any other printer).

Am I behind?:

Yes.

Goals for Next Week:

By the end of this week, I should have completed:

  • Getting the team on track w.r.t. meeting deadlines promptly
  • Literature skimmage
  • Fully thought out design with justification(s) for each decision
  • Complete Design Review Report

in order to begin next week:

  • g-code visualizer code
  • System state code

Pursuant to the goals above, these are my results for this week:

  • Set up the Ultimaker3 on the network and gained access to the built-in camera. I tried researching the specs for the camera, but only found speculation that it could be a 1600×1200 sensor at an extremely low frame rate that generally operates at 800×600 resolution.
  • Performed quick calculations (via the formula above) to get the required resolution to detect a 0.5 mm object with our specific horizontal and vertical FOVs.
  • Compared the SONY Spresense 5MP Camera module & the SONY Spresense main board with the RPI 8MP Camera Module v2 & RPI 3B+.
  • Using the RPI 8MP Camera Module v2 as a reference, I calculated the working-distance-to-focal-length ratio (see the sketch after this list). I discovered that lenses with a focal length of less than 12 mm cause distortion (which we may be able to correct in software); however, focal lengths greater than 12 mm require working distances on the order of a meter, which is not feasible for this project since our build volume is 215×215×300 mm. So we would have to get a 1/4" format lens with a 3.6 mm focal length.
  • We finally agreed to use only one printer; we switched to the Printbot from the RoboClub, since it affords us more hardware flexibility than the Ultimaker3, which is limited in its support of Octoprint.
  • Lucas found, and I read, "In situ real time defect detection of 3D printed parts" by Oliver Holzmond and Xiaodong Li.
  • Even though I'm more of a hardware person, I moved myself solely to the software team because I think the algorithm is the single most important part of the project.
  • I worked on refining the design and changing it where necessary; our final design is presented in the report.
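
Regarding the lens calculation above: the working distance follows from the pinhole approximation WD ≈ f × (FOV / sensor width). A minimal sketch, assuming a 3.6 mm sensor width (the nominal horizontal size of a 1/4" format sensor) and a 215 mm FOV across the build plate:

    # Minimal sketch of the pinhole approximation behind the working
    # distance numbers: WD = f * (FOV / sensor width). The 3.6 mm
    # sensor width is the nominal horizontal size of a 1/4" format
    # sensor; the 215 mm FOV is the Ultimaker3 build plate width.

    def working_distance_mm(focal_length_mm, fov_mm, sensor_width_mm=3.6):
        """Approximate camera-to-scene distance needed to frame the FOV."""
        return focal_length_mm * (fov_mm / sensor_width_mm)

    for f in (3.6, 8.0, 12.0):
        print(f"f = {f:4.1f} mm -> WD ~ {working_distance_mm(f, 215):.0f} mm")
    # -> a 3.6 mm lens needs ~215 mm of standoff, while a 12 mm lens
    #    already needs ~717 mm, which is why longer focal lengths are
    #    impractical around a 215x215x300 mm build volume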
