This week, much of our effort went into our design presentation. In creating it, we finalized many of our technical requirements, including performance specs, the camera, the power source, and the general setup. The camera shoots at 100fps, so we will still need to downsample its output to reach the frame rate we are looking for, but we're happy with the camera overall, so this isn't a major concern. We also found a suitable power source, the PD Pioneer portable laptop charger, which can supply 5V/3A to our J25 DC power jack. This should allow the Nano to run at maximum capacity rather than limiting its performance due to insufficient incoming power (a side effect the spec sheet lists when powering the Nano at 5V/2A through the micro-USB option). This leaves us feeling like we're in a good place moving forward: we can proceed with our implementation knowing that our components should all be capable of working together and putting us on the path toward a complete final product.
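The downsampling step itself should be simple. A minimal sketch, assuming we just keep every Nth frame; the 25fps target here is only for illustration, since the post doesn't state the exact target rate:

```python
CAMERA_FPS = 100
TARGET_FPS = 25  # assumed target, for illustration only

def downsample(frames, camera_fps=CAMERA_FPS, target_fps=TARGET_FPS):
    """Keep every (camera_fps // target_fps)-th frame of a captured sequence."""
    step = camera_fps // target_fps
    return frames[::step]

# One second of capture at 100fps reduces to 25 frames.
one_second = list(range(100))
kept = downsample(one_second)  # → [0, 4, 8, ..., 96]
```

This only works cleanly when the camera rate is an integer multiple of the target rate; otherwise frame timestamps would need to be considered.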

With the design of the CNN finalized (an AlexNet encoder), initial coding has been done to lay the groundwork for its implementation. While we currently lack enough data for meaningful testing, this is a good initial start for the ML side of the project.

On the non-CNN software side of the project, progress continues. The presented design consisted primarily of 2 processes using shared memory to pass data, but we are re-evaluating it to see whether there is a better way to handle the intake and processing of frames concurrently. While that research and brainstorming is ongoing, the core idea behind the design remains largely the same: 2 distinct processes, with 1 simply taking in video, making frame packets, and placing the packets into some sort of object shared with the other process. This is the largest roadblock in this part of the project right now, as finding the best way to handle shared, read-write memory is not as trivial as initially hoped and is requiring extra research into Python's multiprocessing module.

Beyond that, the batch thread dispatching and reconstruction processes are both implemented (granted, the reconstruction currently just produces a correctly ordered list of the frames). Critical points in the program, such as where input from the Nano's GPIO will be checked for recording/not recording, have been given placeholder comments and alternative snippets of code that provide similar functionality in the short term. This should allow initial testing of this system, with a placeholder method in place of the CNN for now, hopefully within the next week, to gauge whether this approach will satisfy the backend requirements we need it to and let us shift more focus elsewhere on the project.
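As one possible shape for this design, here is a minimal sketch built on Python's multiprocessing module, using a `Queue` as the shared object between the two processes. Names like `FramePacket` and `process_frame` are hypothetical, and `process_frame` stands in for the CNN placeholder described above:

```python
from dataclasses import dataclass, field
from multiprocessing import Process, Queue

@dataclass(order=True)
class FramePacket:
    index: int                        # capture order, used for reconstruction
    data: bytes = field(compare=False)

def process_frame(data):
    """Placeholder for the CNN: passes frame data through unchanged."""
    return data

def capture(in_q, frames):
    """Producer process: wrap each frame in an indexed packet and enqueue it."""
    for i, frame in enumerate(frames):
        in_q.put(FramePacket(i, frame))
    in_q.put(None)                    # sentinel: recording has stopped

def worker(in_q, out_q):
    """Consumer process: dequeue packets and run the (placeholder) CNN."""
    while (pkt := in_q.get()) is not None:
        out_q.put(FramePacket(pkt.index, process_frame(pkt.data)))

def reconstruct(packets):
    """Reorder processed packets into a correctly ordered list of frames."""
    return [p.data for p in sorted(packets)]

if __name__ == "__main__":
    in_q, out_q = Queue(), Queue()
    frames = [b"frame0", b"frame1", b"frame2"]
    # In the real system, capture and worker would each run in their own
    # Process, e.g. Process(target=capture, args=(in_q, frames)).start();
    # here they run sequentially just to show the data flow.
    capture(in_q, frames)
    worker(in_q, out_q)
    ordered = reconstruct([out_q.get() for _ in frames])
```

A `multiprocessing.Queue` avoids hand-managing shared read-write memory, since the module handles the locking and serialization; whether that or `multiprocessing.shared_memory` better suits the frame rates involved is exactly the open question being researched.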

