Team Status Update for 05/03/2020

This week, our team mainly helped each other work on the demo video and final report. The whole team helped write the script, but Jeremy was mainly responsible for the animation and for editing the video. Alex and Chakara are mainly responsible for the final report, with Jeremy filling in his portions and proofreading the report.

Currently, there are no significant risks. 

There were no changes made to the existing design of our system. 

Below is the updated Gantt chart. 

https://docs.google.com/spreadsheets/d/1GGzn30sgRvBdlpad1TIZRK-Fq__RTBgIKN7kDVB3IlI/edit#gid=1867312600

Chakara’s Status Update for 05/03/2020

This week, I mainly helped our team prepare for the final presentation. I then started to help the team write our video script and recorded audio files for the overview and triangulation sections.

After giving Jeremy some feedback on the video and animations, I mainly worked on the final report. I am currently drafting the final report for Alex and Jeremy to fill in their parts and proofread the final version.

I am currently on schedule and hope to finish the final report by the deadline.

Team Status Update for 04/26/2020

This week, our team mainly worked on making necessary fixes to parts of our pipeline, such as ICP and triangulation. We then created a driver application that can run the whole pipeline with different parameters, and we also updated our verification script. The driver can now output result CSV files to help us write our final presentation slides and report.

Currently, there are no significant risks.

There were no changes made to the existing design of our system.

Below is the updated Gantt chart.

https://docs.google.com/spreadsheets/d/1GGzn30sgRvBdlpad1TIZRK-Fq__RTBgIKN7kDVB3IlI/edit#gid=1867312600

Chakara’s Status Update for 04/26/2020

This week, I started off by helping Alex write the verification script.

After seeing results from different objects, we realized that the triangulation, although it looks good visually, does not give satisfactory results, so I spent a lot of time fixing the triangulation algorithm. The Screened Poisson algorithm was working decently well and was fast, but when we used our metrics to compare points from our resulting meshes to the ground-truth meshes, our accuracy was not as good. Also, there were a few points outside of 5% of the longest axis of the ground-truth model, which means we failed our accuracy requirement.

I then switched back to our Delaunay triangulation algorithm from pyvista and adjusted different parameters. Although this algorithm takes much longer, the accuracy results were much more satisfactory. The runtime was around 3-4 times longer than the Poisson algorithm for most meshes but was still within our timing requirement. No points were outside of 5% of the longest axis of the ground-truth model, and only a few points were outside of 2% of the longest axis.
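As a rough sketch of this step (assuming pyvista's Python API; the file names and the alpha/tol values are illustrative placeholders, not our final tuned parameters):

    import numpy as np
    import pyvista as pv

    # Load the global point cloud; the file name is a placeholder.
    points = np.load("global_point_cloud.npy")  # (N, 3) xyz array

    cloud = pv.PolyData(points)
    # alpha is the distance value that bounds which cells are kept in
    # the output; tol controls the discarding of closely spaced points.
    volume = cloud.delaunay_3d(alpha=0.02, tol=0.001)
    mesh = volume.extract_geometry()  # extract the triangulated surface
    mesh.save("reconstructed_mesh.ply")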

After fixing everything, I worked with Alex to write our driver. We added different command-line parameters to the script so that we can adjust the pipeline and test it more efficiently. This also helps when we look into tradeoffs and validation for the final presentation and final report, and users can use our program by just running this driver script. Moreover, I also helped write the driver so that it can output the results in CSV format for easier analysis.
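A minimal sketch of what such a driver can look like (the argument names, pipeline call, and CSV columns here are illustrative; our actual driver exposes more parameters):

    import argparse
    import csv

    parser = argparse.ArgumentParser(description="Run the scanning pipeline")
    parser.add_argument("--model", required=True, help="path to the input model")
    parser.add_argument("--alpha", type=float, default=0.02,
                        help="Delaunay alpha parameter")
    parser.add_argument("--csv", default="results.csv",
                        help="output CSV file for run metrics")
    args = parser.parse_args()

    # run_pipeline is a stand-in for the laser detection -> point cloud
    # construction -> triangulation -> verification stages.
    # accuracy, runtime = run_pipeline(args.model, alpha=args.alpha)
    accuracy, runtime = 0.0, 0.0  # placeholders for the real metrics

    # Append one row of metrics per run for later analysis.
    with open(args.csv, "a", newline="") as f:
        csv.writer(f).writerow([args.model, args.alpha, accuracy, runtime])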

After that, I worked together with both Jeremy and Alex on creating the final presentation slides so that we can show the different tradeoffs and our whole project.

I'm currently on schedule, and next week I hope to work with both Jeremy and Alex to finish the final report and the two videos.

Chakara’s Status Update for 04/19/2020

This week, on top of working with the other team members on preparing for the in-lab demo and thinking of different tests we could perform, I mainly worked on improving the triangulation algorithm. As mentioned in last week's report, the meshes triangulated using different algorithms were not satisfactory. The Screened Poisson Reconstruction proposed in Kazhdan and Hoppe's paper looked the most promising, so I decided to try to fix it.

I first tried running the algorithm on other example pcd files from the web, and the Poisson Reconstruction algorithm worked perfectly.

I looked deeper into the pcd files, and the main difference was that the example pcd files also contain the FIELDS normal_x, normal_y, and normal_z, which carry normal information. This confirmed my speculation that the missing normals were the problem.

I then tried orienting the normals to align with the direction of the camera position and the laser position, but the reconstruction still didn't work.

After that, I tried adjusting the search parameter used when estimating the normals. I changed from the default search tree to o3d.geometry.KDTreeSearchParamHybrid, which lets me adjust the number of neighbors and the radius to consider when estimating the normal for each point in the point cloud. After estimating the normals, I orient them so that they all point inward, then invert them so that they all point directly outward from the surface. The results are much more promising. The smoothed results were less accurate, so I decided to skip smoothing.
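A minimal sketch of this step, assuming open3d's Python API (the file path, radius, and neighbor count are placeholders to tune):

    import numpy as np
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scan.pcd")  # placeholder path

    # Estimate per-point normals from up to max_nn neighbors within radius.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)
    )

    # Orient the normals toward an interior point so they all point inward,
    # then invert them so they all point outward from the surface.
    pcd.orient_normals_towards_camera_location(pcd.get_center())
    pcd.normals = o3d.utility.Vector3dVector(-np.asarray(pcd.normals))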

I then realigned the normals to get rid of the weird vase shape by making sure that the z-axis alignment of the normals was at the center of the point cloud. 

After that, I helped Alex work on the verification by writing a function to compute the accuracy percentage. I used Alex's work on computing the distances between the target and the source and wrote a simple function to check that 100% of the points are within 5% of the longest axis and that 90% of the points are within 2% of the longest axis.
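A sketch of that check (the function name is hypothetical; distances is the per-point distance array from Alex's script, and longest_axis is the length of the ground-truth model's longest axis):

    import numpy as np

    def meets_accuracy_requirement(distances, longest_axis):
        """True if 100% of points are within 5% of the longest axis
        and at least 90% are within 2% of it."""
        d = np.asarray(distances)
        within_5 = np.mean(d <= 0.05 * longest_axis)  # fraction within 5%
        within_2 = np.mean(d <= 0.02 * longest_axis)  # fraction within 2%
        return within_5 == 1.0 and within_2 >= 0.9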

I am currently on schedule. For next week, I hope to fully finish verification and help the team prepare for the final presentation and demo videos. 

Chakara’s Status Update for 04/11/2020

This week, after Alex made fixes to the point cloud construction algorithm, I tried testing the new pcd files on our current triangulation algorithm. The rendered object looked perfect.

However, when testing a more complicated object such as the monkey model, the triangle meshes looked a little rough and might not meet our accuracy requirements.

I looked into the point cloud, and it is very detailed, so the problem must be with the Delaunay technique we are currently using. Although I tried changing different parameters, the results were still not satisfactory.

Thus, I started looking into other libraries that might offer other techniques, and I ended up trying open3d this week. The first technique I tried was to compute a convex hull of the point cloud and generate a mesh from that. The result was not at all satisfactory, since the convex hull is defined as the smallest convex shape that encloses all of the points in the set; it therefore only creates a rough outer surface around the monkey.
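For reference, a sketch of that attempt with open3d (the file path is a placeholder):

    import open3d as o3d

    pcd = o3d.io.read_point_cloud("monkey.pcd")  # placeholder path
    # compute_convex_hull returns the hull mesh plus the indices of the
    # points on the hull; it can only capture the outer shape.
    hull_mesh, hull_indices = pcd.compute_convex_hull()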

After that, I tried the ball-pivoting technique, which implements the Ball Pivoting algorithm proposed by F. Bernardini et al. The surface reconstruction is done by rolling a ball with a given radius over the point cloud; whenever the ball touches three points, a triangle is created. I could adjust the radii of the balls used for the surface reconstruction, and I computed the radii from the closest distances between the points in the point cloud, multiplied by a constant. Using a constant smaller than 5, the results were not satisfactory. The results got more accurate as I increased the constant; however, a constant above 15 took longer than 5 minutes to compute on my computer, which would not pass our efficiency requirement, and the results were still not as good as I had hoped. I tried different smoothing techniques, but they did not help much.
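A sketch of the ball-pivoting attempt, again assuming open3d (the constant of 5 reflects the lower end of the range described above; the path is a placeholder):

    import numpy as np
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("monkey.pcd")  # placeholder path
    pcd.estimate_normals()  # ball pivoting needs per-point normals

    # Derive the ball radius from the nearest-neighbor distances,
    # scaled by the constant we experimented with (roughly 5 to 15).
    dists = pcd.compute_nearest_neighbor_distance()
    radius = 5 * np.mean(dists)

    mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(
        pcd, o3d.utility.DoubleVector([radius, radius * 2])
    )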

The next technique I tried was the Poisson technique, which implements the Screened Poisson Reconstruction proposed by Kazhdan and Hoppe. With this method, I can vary the depth, width, scale, and linear fit of the algorithm. The depth is the maximum depth of the tree that will be used for surface reconstruction. The width specifies the target width of the finest-level octree cells. The scale specifies the ratio between the diameter of the cube used for reconstruction and the diameter of the samples' bounding cube. And the linear-fit option tells the reconstructor whether to use linear interpolation to estimate the positions of iso-vertices.
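A sketch of the Poisson variant (the parameter values shown are open3d's defaults, not our tuned ones; the path is a placeholder):

    import open3d as o3d

    pcd = o3d.io.read_point_cloud("monkey.pcd")  # placeholder path
    pcd.estimate_normals()  # Poisson reconstruction requires normals

    # depth: max tree depth; width: target width of the finest octree
    # cells; scale: reconstruction-cube to bounding-cube diameter ratio;
    # linear_fit: use linear interpolation for iso-vertex positions.
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8, width=0, scale=1.1, linear_fit=False
    )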

The results are accurate and look smooth once I normalize the normals, but there is a weird surface. Looking at the pcd file and a voxel grid (below), there are no points where this weird rectangular surface lies.

Currently, I assume that the weird surface comes from the direction the normals are oriented in, since the location of the surface changes when I orient the normals differently.

I'm currently a little behind schedule, since I had hoped to fully finish triangulation, but luckily our team allocated enough slack time for me to fix this. If I finish early, I hope to help the team work on the testing benchmarks and on adding noise.

For next week, I hope to fix this issue, either by applying other techniques on top of the Poisson technique or by changing to the marching cubes algorithm, which also seems viable.

Team Status Update for 04/11/2020

This week, our team focused mainly on fixing accuracy issues with the laser detection, point cloud construction, and triangulation algorithms. Most of our work was done separately. For next week, we plan on preparing for the demo, finishing adding noise, writing testing benchmarks, and making any necessary final fixes.

Currently, there are no significant risks. 

There were no changes made to the existing design of our system. 

Below is the updated Gantt chart. 

https://docs.google.com/spreadsheets/d/1GGzn30sgRvBdlpad1TIZRK-Fq__RTBgIKN7kDVB3IlI/edit#gid=1867312600

Team Status Update for 04/04/2020

This week, our team focused mainly on the integration of the different parts we have been working on: simulation of the laser line camera input, global point cloud construction, and triangulation. We have also been preparing for the demo, which will be on the upcoming Monday.

Currently, there are no significant risks. 

There were no changes made to the existing design of our system. 

Below is the updated Gantt chart. 

https://docs.google.com/spreadsheets/d/1GGzn30sgRvBdlpad1TIZRK-Fq__RTBgIKN7kDVB3IlI/edit#gid=1867312600

Chakara’s Status Update for 04/04/2020

This week, I mainly worked on integrating my triangulation work with Alex and Jeremy's work. For the triangulation module to work properly on the global point cloud Alex constructed, I needed to fine-tune a few different parameters. Below are the point cloud images from two different perspectives.

The initial mesh looks like this. 

After adjusting the parameters (mainly the distance value that controls the output of this filter and the tolerance that controls the discarding of closely spaced points), we achieved the results below.

I’m currently on schedule as our team mainly focused on how to most appropriately demo our project. 

For next week, I hope to fine-tune the triangulation algorithm and help Alex finish writing the testing benchmarks. I also hope to help the team add noise to our input data.

Team Status Update for 03/28/2020

This week, each of our team members had their own separate tasks.

We currently have no significant risks. Some minor risks include warping of the point cloud by the mesh triangulation algorithm, which may make it difficult to meet our originally proposed accuracy requirements. Another risk is that if we choose to go down the path of developing a web application for the demo, there may be some connectivity or load issues. To alleviate this risk, we may prepare a demo video just in case.

There were no changes made to the existing design of the system from last week.

Below is the updated Gantt chart with physical-component-related tasks removed.
https://docs.google.com/spreadsheets/d/1GGzn30sgRvBdlpad1TIZRK-Fq__RTBgIKN7kDVB3IlI/edit#gid=1867312600