Replies: 1 comment 1 reply
-
Hmm, that is interesting. All of my testing has been with a 64-bit OS on the RPi3, yes, so that might be the main issue. In general I've found that the .tflite models are ~10-20% faster than ONNX on ARM, which is why they are the recommended default, but perhaps on 32-bit that isn't the case.
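If you want to check which backend is faster on a particular board, here is a minimal comparison sketch. It assumes a recent openwakeword where `Model()` accepts an `inference_framework` argument and the pre-trained models are already downloaded; it is an illustrative benchmark, not the project's own test script, and argument names may differ between versions.

```python
# Minimal sketch: compare .tflite vs ONNX inference time per 80 ms frame.
# Assumes openwakeword with an `inference_framework` argument ("tflite"/"onnx")
# and that the default pre-trained models are already available locally.
import time
import numpy as np
from openwakeword.model import Model

frame = np.zeros(1280, dtype=np.int16)  # one 80 ms frame at 16 kHz

for framework in ("tflite", "onnx"):
    try:
        model = Model(inference_framework=framework)
    except Exception as e:  # e.g. onnxruntime not installed on 32-bit ARM
        print(f"{framework}: could not initialize ({e})")
        continue
    model.predict(frame)  # warm-up call
    n = 100
    start = time.perf_counter()
    for _ in range(n):
        model.predict(frame)
    per_frame_ms = (time.perf_counter() - start) / n * 1000
    print(f"{framework}: ~{per_frame_ms:.1f} ms per 80 ms frame")
```

Anything well under 80 ms per frame is fast enough to keep up with real-time audio.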
-
Hi, thanks for the great work!
The detection works very well on my Linux desktop computer. However, on my Pi 3 with a 32-bit OS I only achieve a runtime of ~200-250 ms per 80 ms audio frame just by calling owwModel.predict() with one keyword on the default .tflite models (rough timing sketch below). As it is 32-bit ARM, I have not managed to install onnxruntime (yet), and hence I deactivated the onnxruntime dependencies. Does this explain why my performance is so much slower than what is mentioned in the description? Or is there another trick (e.g. a 64-bit OS) to reach the mentioned performance on the Pi 3?
Thanks !
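For reference, this is roughly how I measure the per-frame latency. It is a minimal sketch assuming the openwakeword `Model` API with a single pre-trained .tflite keyword model ("alexa" is just a hypothetical choice) and 16 kHz audio in 1280-sample (80 ms) frames; parameter names such as `wakeword_models` and `inference_framework` may differ across versions.

```python
# Minimal sketch: time owwModel.predict() on a single 80 ms frame.
# Assumes openwakeword is installed and the pre-trained models are downloaded.
import time
import numpy as np
from openwakeword.model import Model

# One keyword, tflite backend (argument names may vary by version).
owwModel = Model(wakeword_models=["alexa"], inference_framework="tflite")

frame = np.zeros(1280, dtype=np.int16)  # 80 ms of silence at 16 kHz
owwModel.predict(frame)  # warm-up call

n = 50
start = time.perf_counter()
for _ in range(n):
    owwModel.predict(frame)
elapsed_ms = (time.perf_counter() - start) / n * 1000
print(f"~{elapsed_ms:.0f} ms per 80 ms frame (real-time needs < 80 ms)")
```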