Because during conversion the model was quantized from float32 to int8. See my [part 3](https://medium.com/analytics-vidhya/facenet-on-modile-part-3-cc6f6d5752d6) for a deeper explanation.
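For reference, here is a minimal sketch of how float32-to-int8 post-training quantization happens during TensorFlow Lite conversion. The model path, input shape, and calibration data below are assumptions for illustration, not the exact setup from the article:

```python
import tensorflow as tf

# Hypothetical SavedModel path; the article's actual FaceNet export may differ.
converter = tf.lite.TFLiteConverter.from_saved_model("facenet_saved_model")

# Enable default optimizations so float32 weights are quantized to int8.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_data_gen():
    # Dummy calibration inputs shaped like FaceNet images (assumed 160x160 RGB);
    # in practice, yield real preprocessed face crops here.
    for _ in range(100):
        yield [tf.random.uniform([1, 160, 160, 3], dtype=tf.float32)]

# A representative dataset lets the converter calibrate activation ranges for int8.
converter.representative_dataset = representative_data_gen

tflite_model = converter.convert()
with open("facenet_int8.tflite", "wb") as f:
    f.write(tflite_model)
```

The quantized model is smaller and faster on mobile, but its embeddings can differ slightly from the float32 version, which is why the outputs do not match exactly after conversion.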