To build this kind of FPGA implementation, we compress the 2D CNN by applying quantization. Rather than fine-tuning an already trained network, this step entails retraining the CNN from scratch with constrained bitwidths for the weights and activations. This process is known as quantization-aware training (QAT).
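As a rough illustration, a minimal QAT sketch in PyTorch might look like the following. The toy network, layer sizes, 8-bit settings, and the straight-through-estimator fake-quantization scheme are assumptions chosen for illustration, not the actual design used here.

```python
# Minimal quantization-aware training (QAT) sketch in PyTorch.
# The network, bitwidths, and data below are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FakeQuant(torch.autograd.Function):
    """Uniform fake quantization with a straight-through estimator (STE)."""
    @staticmethod
    def forward(ctx, x, bits):
        qmax = 2 ** (bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # STE: pass gradients straight through the rounding step.
        return grad_output, None

class QuantConv2d(nn.Conv2d):
    """Conv layer whose weights are fake-quantized to `w_bits` during training."""
    def __init__(self, *args, w_bits=8, **kwargs):
        super().__init__(*args, **kwargs)
        self.w_bits = w_bits

    def forward(self, x):
        w_q = FakeQuant.apply(self.weight, self.w_bits)
        return F.conv2d(x, w_q, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

class TinyCNN(nn.Module):
    """Toy 2D CNN trained from scratch with quantized weights and activations."""
    def __init__(self, a_bits=8):
        super().__init__()
        self.a_bits = a_bits
        self.conv1 = QuantConv2d(1, 8, 3, padding=1, w_bits=8)
        self.conv2 = QuantConv2d(8, 16, 3, padding=1, w_bits=8)
        self.fc = nn.Linear(16 * 7 * 7, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = FakeQuant.apply(x, self.a_bits)   # quantize activations
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = FakeQuant.apply(x, self.a_bits)
        return self.fc(x.flatten(1))

# Training-step skeleton: gradients flow through the fake-quant ops via the STE,
# so the network learns weights that tolerate the constrained bitwidths.
model = TinyCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(4, 1, 28, 28), torch.randint(0, 10, (4,))
loss = F.cross_entropy(model(x), y)
loss.backward()
opt.step()
```

The key point is that quantization is simulated inside the forward pass while training proceeds in floating point, so the weights adapt to the low-precision representation that the FPGA hardware will ultimately use.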