Posted by Paul Ruiz, Developer Relations Engineer
We’re excited to announce that the TensorFlow Lite plugin for Flutter has been officially migrated to the TensorFlow GitHub account and released!
Three years ago, Amish Garg, one of our talented Google Summer of Code contributors, wrote a widely used TensorFlow Lite plugin for Flutter. The plugin was so popular that we decided to migrate it to our official repo, making it easier for the Google team to maintain directly. We are grateful to Amish for his contributions to the TensorFlow Lite Flutter plugin.
Through the efforts of developers in the community, the plugin has been updated to the latest version of TensorFlow Lite, and a collection of new features and example apps has been added, such as object detection through a live camera feed.
So what is TensorFlow Lite? TensorFlow Lite is a way to run TensorFlow models locally on-device, with support for mobile, embedded, web, and edge platforms. TensorFlow Lite’s cross-platform support and on-device performance optimizations make it a great addition to the Flutter development toolbox. Our goal with this plugin is to make it easy to integrate TensorFlow Lite models into Flutter apps across mobile platforms, with desktop support currently in development through the efforts of our developer community. Find pre-trained TensorFlow Lite models on model repos like Kaggle Models, or create your own custom TensorFlow Lite models.
Let’s take a look at how you could use the Flutter TensorFlow Lite plugin for image classification:
TensorFlow Lite Image Classification with Flutter
First, you’ll need to install the plugin from pub.dev. Once the plugin is installed, you can load a TensorFlow Lite model into your Flutter app and define the input and output tensor shapes. If you’re using the MobileNet model, then the input tensor will be a 224 by 224 RGB image, and the output will be a list of confidence scores for the trained labels.
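As a quick sketch, adding the plugin to your project looks something like this (the version number below is illustrative, so check pub.dev for the latest release):
# pubspec.yaml (version shown is illustrative)
dependencies:
  flutter:
    sdk: flutter
  tflite_flutter: ^0.10.0
You can also run flutter pub add tflite_flutter to do the same thing from the command line.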
import 'package:tflite_flutter/tflite_flutter.dart';

late Interpreter interpreter;
late Tensor inputTensor;
late Tensor outputTensor;

// Load model
Future<void> _loadModel() async {
  final options = InterpreterOptions();
  // Load model from assets
  interpreter = await Interpreter.fromAsset(modelPath, options: options);
  // Get tensor input shape [1, 224, 224, 3]
  inputTensor = interpreter.getInputTensors().first;
  // Get tensor output shape [1, 1001]
  outputTensor = interpreter.getOutputTensors().first;
}
To make things a bit more organized, you can also load the labels for the 1000 classes that MobileNet was trained on:
import 'package:flutter/services.dart' show rootBundle;

// Load labels from assets
Future<void> _loadLabels() async {
  final labelTxt = await rootBundle.loadString(labelsPath);
  labels = labelTxt.split('\n');
}
For the sake of brevity, let’s go ahead and skip the pre-processing steps, though you can find them in full in the repo’s image classification example.
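That said, here is a rough sketch of what the pre-processing involves, assuming the image package is used to decode and resize the input (the helper name imageToMatrix is illustrative, not from the repo):
import 'package:image/image.dart' as img;

// Illustrative pre-processing sketch; see the repo's image classification
// example for the canonical version.
List<List<List<num>>> imageToMatrix(img.Image image) {
  // Resize to the 224 by 224 input that MobileNet expects.
  final resized = img.copyResize(image, width: 224, height: 224);
  // Unpack each pixel into its [r, g, b] channel values.
  return List.generate(
    resized.height,
    (y) => List.generate(
      resized.width,
      (x) {
        final pixel = resized.getPixel(x, y);
        return [pixel.r, pixel.g, pixel.b];
      },
    ),
  );
}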
When you’re ready to run inference, you can create a new input and output based on the tensor shapes that you defined earlier, then call run on the interpreter to get your final results.
// Run inference
Future<void> runInference(
  List<List<List<num>>> imageMatrix,
) async {
  // Tensor input [1, 224, 224, 3]
  final input = [imageMatrix];
  // Tensor output [1, 1001]
  final output = [List<int>.filled(1001, 0)];
  // Run inference
  interpreter.run(input, output);
  // Get first output tensor
  final result = output.first;
}
Now that you have your results, you can match them to your labels and use them in your app.
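As a minimal sketch, that matching step could look like the following; dividing by 255 assumes a quantized model whose raw scores are 0 to 255 integers:
// Illustrative post-processing: pair each non-zero score with its label
// and sort highest-first. The 255 divisor assumes a quantized model.
final classification = <String, double>{};
for (var i = 0; i < result.length; i++) {
  if (result[i] != 0) {
    classification[labels[i]] = result[i] / 255.0;
  }
}
final topResults = classification.entries.toList()
  ..sort((a, b) => b.value.compareTo(a.value));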
What’s next?
To explore what else you can do with the Flutter TensorFlow Lite plugin, check out the official GitHub repository where you can find examples for text classification, super resolution, style transfer, and more!
Additionally, we are working on a new plugin specifically for MediaPipe Tasks, a low-code tool for easily performing common on-device machine learning tasks. This includes image classification and object detection, like you’ve just learned about, as well as audio classification, face landmark detection, and gesture recognition, alongside a whole lot more.
We look forward to all the exciting things you make, so be sure to share them with @googledevs, @TensorFlow, and your developer communities!