Two of the most resource-consuming problems in machine learning are feature engineering and hyperparameter optimization. In both industry and academia, these problems hinder the development of optimal models because they are computationally expensive and are typically solved through trial and error. The current state of the art uses Graphics Processing Units (GPUs) to parallelize training, but this hardware is expensive and is often shared between researchers. The aim of this project is to develop a scalable solution using a Field Programmable Gate Array (FPGA) that acts as a compute backend for existing machine learning frameworks.
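To illustrate what "acting as a backend for an existing framework" could look like, here is a minimal sketch using PyTorch as the assumed host framework: a custom autograd op whose forward and backward passes would be dispatched to the accelerator. The `fpga_matmul` function is a hypothetical stand-in for the FPGA offload (in this sketch it simply falls back to the CPU so the example runs as-is); it is not the project's actual implementation.

```python
import torch

def fpga_matmul(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Hypothetical offload point: a real backend would copy the operands to
    # FPGA memory, launch the matrix-multiply kernel, and copy the result back.
    return a @ b  # CPU fallback so the sketch is runnable

class FpgaLinear(torch.autograd.Function):
    """Linear layer whose matrix multiplies are routed through fpga_matmul."""

    @staticmethod
    def forward(ctx, x, weight):
        ctx.save_for_backward(x, weight)
        return fpga_matmul(x, weight.t())

    @staticmethod
    def backward(ctx, grad_out):
        x, weight = ctx.saved_tensors
        grad_x = fpga_matmul(grad_out, weight)      # gradient w.r.t. the input
        grad_w = fpga_matmul(grad_out.t(), x)       # gradient w.r.t. the weight
        return grad_x, grad_w

# Usage: the custom op drops into an ordinary training loop.
x = torch.randn(8, 16, requires_grad=True)
w = torch.randn(4, 16, requires_grad=True)
y = FpgaLinear.apply(x, w)
y.sum().backward()
```

Because the framework only sees a regular autograd op, existing models and training loops would not need to change; only the offload point behind `fpga_matmul` would target the FPGA.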
Take a look at the Proposal Presentation here:
https://docs.google.com/presentation/d/1gVesa2Qr5i0uIbOTQh9IQaoD9_IxPD9D55bPpI1QF9c/edit?usp=sharing