Stephen Dai’s Status Report for 11/11/23

I made A LOT of additions/changes to the computer vision code this week. Here is an overview:

  1. If a node is detected but doesn’t have at least two detected neighbors, then this node is removed. This is done to remove stray circles that might be detected as nodes. By doing this, we err on the side of missing real nodes rather than identifying fake nodes/circles. To compensate, we relax the Hough circle transform’s parameters so it is more lenient in its circle detection.
  2. I changed the way that component subimages are generated. Previously, I created a box with hard-coded X-by-Y dimensions based on the coordinates of its nodes. Now I do the following: I create a straight line from one node to the other, then sweep that line upward and downward (or left and right, depending on whether the component is horizontal or vertical), stopping once the swept line contains no black pixels. Then I add a fixed margin of 20 pixels, and the result is a box that is guaranteed to encompass the entire component.
  3. I finally found a good combination of preprocessing functions to run on component subimages. I spent a lot of time trying to find a way to remove shadows and paper grain from the images, both of which were seriously degrading the individual component classification. The combination I ended up with is median blur -> non-local means denoising -> adaptive thresholding -> median blur. In my testing, this combination does really well at removing paper grain (noise) and smoothing out shadows. I don’t think I will stray from this combination in the future, besides fine-tuning parameters.
  4. I added an extra step before feature detection and matching. I run the Hough circle transform on the processed subimage to determine whether the component symbol is one that contains a circle (voltage source, current source, lightbulb). If it is, then feature matching is performed only against those components in the dataset. If it is not, then the component must be a wire, resistor, switch, or LED, so I perform probabilistic Hough line detection, find the maximum and minimum X or Y coordinates of the detected segments (depending on whether the component is vertical or horizontal), and take the difference. If the difference is small (less than a third of the subimage width/height), the component must be a wire; otherwise, it must be a resistor, switch, or LED. I did this because the individual component detection was quite poor: sometimes a wire would get classified as a voltage/current source, which made no sense. I figured that because wires are so much simpler than every other component, I could special-case them and skip feature matching for them entirely.

The improvement I want to make next week is to make node detection more consistent in the presence of shadows. I will experiment with new preprocessing algorithms, as I did for the subimage preprocessing. The other thing I want to improve is the feature detection, by tweaking some of its parameters.

Tests that I have run and am planning to run look like this:

Node detection/subimage generation: I take images of drawn circuits and then feed them into my subimage generator. I validate that the detected nodes correspond to the drawn nodes by showing an image of detected circles overlaid on top of the circuit image. I also visually validate that the subimages generated properly encompass the entire component.

Component classification: In order to determine which image preprocessing algorithms I wanted to use, I tried many different combinations of algorithms and displayed the image resulting from each one. This way I could intuitively tune which algorithms to use and which parameters to change. To validate the output, I print the top three component classifications. Based on our use-case requirements, the top choice should be the correct classification 90% of the time.

Full circuit classification: I run everything together and print the output of the circuit classification, which is a list of the five best circuit classifications. Based on our use case requirements, one of the five classifications should be correct 90% of the time.

The next step with my testing is to actually measure these results so I have a quantitative measure of how well my subsystems are performing. For individual component classification, I will work with a testing dataset of 50 drawn components from our user group and see if we hit the 90% mark. For full circuit classification, I will work with a testing dataset of 28 different circuit images from our user group and see if we hit the 90% mark as well.
