This week was spent running tests of our use case requirements. I tested the bot's performance when starting at the edge of a 1.5 m radius circle in which three plastic bottles were placed at random. Through this testing we discovered and attempted to fix multiple issues. First, the bot was too slow; we addressed this by increasing the robot's movement speed and by switching our tracking algorithm from CSRT, which is more accurate, to KCF, which is faster. With this speed-versus-accuracy tradeoff, however, we ran into many false positives, where the bot would incorrectly classify tape, outlets, and windows as bottles. To deal with this, we raised the confidence threshold for identified objects to around 0.4 to filter out potential obstacles, and we debugged a case where the angle calculation caused the bot to rotate indefinitely. We also began setting configurations on the LiDAR camera so that it works in low ambient light, which is the suggested configuration for best performance. So far it is unclear whether this change helps much, but we have been looking into the different settings we can apply to the camera.
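As a rough illustration of the threshold change, filtering detector output by confidence might look like the sketch below. The detection tuples and the `filter_detections` helper are hypothetical names for illustration, not our actual detector's API; only the 0.4 cutoff comes from our testing.

```python
# Hypothetical sketch: drop low-confidence detections to cut false positives
# (tape, outlets, windows). Names are illustrative, not our detector's API.

CONF_THRESHOLD = 0.4  # the value we settled on during testing

def filter_detections(detections, threshold=CONF_THRESHOLD):
    """Keep only detections whose confidence clears the threshold.

    Each detection is (label, confidence, bounding_box).
    """
    return [d for d in detections if d[1] >= threshold]

# Example: one low-confidence hit (likely tape) gets filtered out.
detections = [
    ("bottle", 0.72, (10, 20, 40, 90)),
    ("bottle", 0.31, (200, 50, 30, 80)),  # probably tape, below threshold
    ("bottle", 0.55, (120, 60, 35, 85)),
]
kept = filter_detections(detections)
print([d[1] for d in kept])
```

The tradeoff is the usual one: raising the threshold too far starts rejecting real bottles, so the cutoff has to be tuned against the actual false-positive sources in the room.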
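The indefinite-rotation bug is the kind of failure that typically comes from an unnormalized heading error. A common fix, sketched here with hypothetical names rather than our actual control code, is to wrap the error into [-pi, pi) before commanding a turn, so the robot always rotates the short way instead of chasing an ever-growing angle:

```python
import math

def normalize_angle(angle):
    """Wrap an angle in radians into [-pi, pi).

    Without this, a raw (target - current) difference can exceed pi,
    and a naive controller keeps turning in one direction indefinitely.
    """
    return (angle + math.pi) % (2 * math.pi) - math.pi

# Example: a raw heading error of 350 degrees should really be a
# short turn of about -10 degrees in the opposite direction.
raw_error = math.radians(350)
short_turn = math.degrees(normalize_angle(raw_error))
print(short_turn)  # approximately -10 degrees
```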

On the hardware side, we replaced the sliced-off half of the lexan plate on top of our bot with a plastic panel, reintroducing a barrier for the bottles to bounce off of so that they do not fly completely out of the storage area due to the aggressiveness of the intake.

We started testing with obstacles, but performance with these is not great (see the group status report). We plan to refocus most of our effort on our primary use case requirement to make its performance as solid as possible.
