Expert talk: TensorFlow Dev Summit 2019 Impressions
Update focused on performance improvements
Two weeks have passed since TensorFlow Dev Summit 2019, and we are really excited to share our first thoughts on the updates.
The main announcement was the alpha release of TensorFlow 2.0, which brings a conceptual shift to building ML/DL applications, especially in terms of prototyping speed.
To get a better sense of the changes, we at DataRoot Labs tried it out on a few of our projects using the tf_upgrade_v2 script, and to our surprise it just worked: the converted code looked good and ran without errors. We didn’t see any significant performance improvements, and the reason is clear: much of the converted code simply maps to tf.compat.v1. Still, it was easy to replace most of the “compat” calls with the new functionality in the updated APIs. Debugging also became much easier with eager execution enabled by default, and we really enjoy the way tf.function removes a lot of the old boilerplate (with the new paradigm in mind, of course).
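To give a flavour of the new style, here is a minimal sketch of the kind of code tf.function replaces: eager execution runs the ops immediately, while the decorator traces the function into a graph when performance matters. The dense_step function and its shapes are purely illustrative, not taken from our projects.

```python
import tensorflow as tf  # TF 2.0 alpha or later

# In TF 1.x this forward pass would need placeholders, a graph and a Session.
# With eager execution on by default, the ops run immediately; wrapping the
# function in tf.function traces it into a graph for speed when it is called.
@tf.function
def dense_step(x, w, b):
    # A single dense layer with ReLU, executed as a compiled graph.
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([4, 8])
w = tf.Variable(tf.random.normal([8, 2]))
b = tf.Variable(tf.zeros([2]))
print(dense_step(x, w, b))  # eager-style call, graph execution under the hood
```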
Other exciting news our team enjoyed is the set of updates to TensorFlow.js and TensorFlow Lite. The main TensorFlow Lite improvements cover usability, model conversion, optimization (e.g. quantization) and performance (e.g. GPU acceleration). The TensorFlow.js 1.0 update, in turn, focuses on performance, demonstrated on MobileNet v1 inference speed, which is claimed to be 9 times faster than the previous year's release.
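As a rough illustration of the conversion and quantization workflow, here is a minimal sketch using the TF Lite converter on a Keras model. The model file names are hypothetical, and the exact converter API was still settling down in the 2.0 alpha, so treat this as a sketch rather than a drop-in recipe.

```python
import tensorflow as tf  # TF 2.0 alpha or later

# Load a trained Keras model (the .h5 path here is a placeholder).
model = tf.keras.models.load_model("mobilenet_v2.h5")

# Convert to TF Lite with default post-training optimization (weight quantization).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("mobilenet_v2_quant.tflite", "wb") as f:
    f.write(tflite_model)
```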
We are currently testing inference speed with a quantized version of MobileNet v2, both on mobile devices and in browsers. Early benchmarks are very impressive, showing a 2.4x average inference speedup for a real-time pose estimation task! This makes it possible to develop a whole new range of applications with on-device inference, which is amazing!
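A simple way to sanity-check the latency of a quantized .tflite model in Python looks roughly like the sketch below, which feeds dummy inputs through the TF Lite interpreter. The model path, input and iteration count are placeholders, not our actual benchmark setup.

```python
import time
import numpy as np
import tensorflow as tf

# Load the quantized model (file name is a placeholder) and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v2_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Time repeated inference on a dummy input of the model's expected shape/dtype.
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
start = time.perf_counter()
runs = 50
for _ in range(runs):
    interpreter.set_tensor(inp["index"], dummy)
    interpreter.invoke()
    _ = interpreter.get_tensor(out["index"])
print(f"avg latency: {(time.perf_counter() - start) / runs * 1000:.1f} ms")
```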
Our next step is to check out what's new in TensorFlow Federated, Privacy, Probability, and Agents, all mentioned at the summit 😉