Remote Interface!

This week, in preparation for what became a rather successful Demo2, I continued to work through the CSM-Perception software integration process and, perhaps more significantly, programmed the remote interface! After preliminary work on the application last week, I realized that my approach, based on Apple's antiquated tutorial, was flawed and required major changes. As a result, I scrapped the initial codebase and began from scratch once again, this time following a more intuitive set of educational videos on basic Swift development from the "CodeWithChris" account on YouTube.

Thus, over the week, I constructed the entire Remote Interface! The application follows the full FSM pathway designed to ensure a consistent user experience. The user first requests snapshot frames from the CSM containing bounding boxes for each of the detected potential targets within view, with each target labeled by a unique index. The user can then scroll through the possible target selection indices to confirm his or her intended tracking focus, sending that target's bounding box edges as a reply to the CSM hosting compute device. Finally, at any point, the user can elect to stop the recording by pressing the "Stop Tracking" button, which sends a message to the CSM that is parsed to end the current filming term. The only features left to add to this application are the Bluetooth communication (iPhone to MacBook, as opposed to iPhone to Jetson, due to COVID-19 shipping restrictions) and greater overall robustness.
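The request/confirm/stop flow above can be sketched as a small finite-state machine. This is a minimal illustration, not the app's actual code: the state names, event names, and message strings here are hypothetical stand-ins, since the post does not list the real identifiers or the wire format sent to the CSM.

```swift
// Hypothetical sketch of the Remote Interface FSM; state/event/message
// names are illustrative, not taken from the actual project.
enum RemoteState {
    case idle                // no snapshot requested yet
    case reviewingSnapshot   // indexed bounding boxes shown to the user
    case tracking            // CSM is filming the confirmed target
}

enum RemoteEvent {
    case requestSnapshot
    case confirmTarget(index: Int)
    case stopTracking
}

struct RemoteInterfaceFSM {
    private(set) var state: RemoteState = .idle

    // Returns the message to send to the CSM host device,
    // or nil if the event is not valid in the current state.
    mutating func handle(_ event: RemoteEvent) -> String? {
        switch (state, event) {
        case (.idle, .requestSnapshot):
            state = .reviewingSnapshot
            return "SNAPSHOT_REQUEST"
        case (.reviewingSnapshot, .confirmTarget(let index)):
            state = .tracking
            return "TRACK \(index)"
        case (.tracking, .stopTracking):
            state = .idle
            return "STOP_TRACKING"
        default:
            return nil   // ignore events that don't apply in this state
        }
    }
}
```

Modeling the flow this way means an out-of-order tap (e.g. "Stop Tracking" before any target is confirmed) is simply ignored rather than sending a message the CSM would have to reject.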

This week was fascinating, tiring, and exciting. I had never seriously programmed in Swift before this week. I had seen Swift syntax before, but that was the extent of my knowledge of what I came to see as an intuitively designed and impressively impactful programming language. A brief demo (~40 seconds) of the Remote Interface's functionality can be seen below.

