Google's ML Kit Flutter Plugin #
A Flutter plugin to use Google's standalone ML Kit for Android. Stay tuned for iOS; it is coming soon!
![pose](./screenshots/pose.png?raw=true) ![image labeling](./screenshots/imagelabeling.png?raw=true) ![demo](./screenshots/giff.gif) ![barcode](./screenshots/barcode.png?raw=true) ![text detector](./screenshots/text_detector.jpg?raw=true)
Note #
From version 0.2 the way to create an instance of a detector has changed.

Creating an instance before version 0.2:

final exampleDetector = GoogleMlKit.ExampleDetector

From version 0.2 onwards:

final exampleDetector = GoogleMlKit.vision.ExampleDetector
//Or
final exampleDetector = GoogleMlKit.nlp.ExampleDetector
Currently supported APIs #
Vision #
- Pose Detection
- Digital Ink Recognition
- Image Labelling
- Barcode Scanning
- Text Recognition
- Face Detection
NLP #
- Language Detection
- On-Device Translation
Usage #
Add this plugin as a dependency in your pubspec.yaml.
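For example, pinned to the version this page documents:

dependencies:
  google_ml_kit: ^0.3.0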
- In your project-level build.gradle file, make sure to include Google's Maven repository in both your buildscript and allprojects sections (for all APIs); a sketch follows this list.
- The plugin is written using the bundled API models. This means the models are bundled with the plugin, so there are no extra dependencies to implement on your part and it should work out of the box.
- If you wish to reduce the APK size you may replace the bundled model dependencies with the models provided within Google Play Services; to know more about this see the links below.
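A minimal sketch of the project-level build.gradle change mentioned in the first point (merge it into your existing repositories blocks):

buildscript {
    repositories {
        google()
    }
}

allprojects {
    repositories {
        google()
    }
}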
Procedure to use vision APIs #
1. First, create an InputImage

Prepare the input image (the image you want to process).
import 'dart:ui';

import 'package:camera/camera.dart';
import 'package:flutter/foundation.dart';
import 'package:google_ml_kit/google_ml_kit.dart';

// filePath, file, camera (a CameraDescription) and cameraImage (a CameraImage
// from the camera plugin's image stream) are assumed to be defined by your app.

// From a file path
final inputImage = InputImage.fromFilePath(filePath);

// From a File
final inputImage = InputImage.fromFile(file);

// From a CameraImage (if you are using the camera plugin):
// concatenate the bytes of all planes into a single buffer
final WriteBuffer allBytes = WriteBuffer();
for (final Plane plane in cameraImage.planes) {
  allBytes.putUint8List(plane.bytes);
}
final bytes = allBytes.done().buffer.asUint8List();

final Size imageSize =
    Size(cameraImage.width.toDouble(), cameraImage.height.toDouble());

// map the camera's sensor orientation to an InputImageRotation
InputImageRotation imageRotation = InputImageRotation.Rotation_0deg;
switch (camera.sensorOrientation) {
  case 0:
    imageRotation = InputImageRotation.Rotation_0deg;
    break;
  case 90:
    imageRotation = InputImageRotation.Rotation_90deg;
    break;
  case 180:
    imageRotation = InputImageRotation.Rotation_180deg;
    break;
  case 270:
    imageRotation = InputImageRotation.Rotation_270deg;
    break;
}

final inputImageData = InputImageData(
  size: imageSize,
  imageRotation: imageRotation,
);

final inputImage = InputImage.fromBytes(bytes: bytes, inputImageData: inputImageData);
See the InputImage class to know more about the supported image formats.
2. Create an instance of a detector
final barcodeScanner = GoogleMlKit.vision.barcodeScanner();
final digitalInkRecogniser = GoogleMlKit.vision.digitalInkRecogniser();
3. Call processImage() or the relevant function of the respective detector

4. Call close() once the detector is no longer needed
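Putting steps 2-4 together with the barcode scanner from step 2 (a minimal sketch; error handling omitted):

final barcodeScanner = GoogleMlKit.vision.barcodeScanner();
final barcodes = await barcodeScanner.processImage(inputImage);
// ... use the results ...
await barcodeScanner.close();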
Digital Ink recognition #
Read this to know how to implement Digital Ink Recognition.
Pose Detection #
- A Google Play Services model is not available for this API, so no extra setup is required.
- Create PoseDetectorOptions:

final options = PoseDetectorOptions(
    poseDetectionModel: PoseDetectionModel.BasePoseDetector,
    // or PoseDetectionModel.AccuratePoseDetector to use the accurate pose detector
    selectionType: LandmarkSelectionType.all,
    poseLandmarks: poseLandmarks); // the list of PoseLandmarks you want

Note: To obtain the default poseDetector, no options need to be specified. It gives all available landmarks using the BasePoseDetector model. The same applies to the other detectors as well.
- Calling processImage(InputImage inputImage) returns Map<int, PoseLandMark>:

final landMarksMap = await poseDetector.processImage(inputImage);

Use the map to extract the data. See this example to get a better idea.
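A minimal sketch of obtaining a detector and iterating the result; the poseDetector() factory and the PoseLandMark fields are assumptions based on the plugin's other vision APIs:

final poseDetector = GoogleMlKit.vision.poseDetector();
final landMarksMap = await poseDetector.processImage(inputImage);
landMarksMap.forEach((type, landmark) {
  // assumed fields; check PoseLandMark for the exact names
  print('Landmark $type at (${landmark.x}, ${landmark.y})');
});
await poseDetector.close();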
Image Labeling #
If you choose the Google Play Services way, add the following to your app's AndroidManifest.xml:
<application ...>
  ...
  <meta-data
      android:name="com.google.mlkit.vision.DEPENDENCIES"
      android:value="ica" />
  <!-- To use multiple models: android:value="ica,model2,model3" -->
</application>
The same applies to all the other models as well.
Create ImageLabelerOptions. This uses Google's base model:

final options = ImageLabelerOptions(confidenceThreshold: confidenceThreshold);
// confidenceThreshold defaults to 0.5
// and lies between 0.0 and 1.0
To use custom tflite models:

CustomImageLabelerOptions options = CustomImageLabelerOptions(
    customModel: CustomTrainedModel.asset, // or CustomTrainedModel.file to use files stored on the device
    customModelPath: "file path");
To use AutoML Vision Edge models:

final options = AutoMlImageLabelerOptions(
    customTrainedModel: CustomTrainedModel.asset, // or CustomTrainedModel.file
    customModelPath: "file path");
Calling processImage() returns List<ImageLabel>:
final labels = await imageLabeler.processImage(inputImage);
To know more see this example
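A minimal sketch of reading the labels; the imageLabeler() factory signature and the ImageLabel field names are assumptions, so check the class for the exact names:

final imageLabeler = GoogleMlKit.vision.imageLabeler(options);
final labels = await imageLabeler.processImage(inputImage);
for (final label in labels) {
  // assumed fields: the label text and its confidence score
  print('${label.label}: ${label.confidence}');
}
await imageLabeler.close();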
Barcode Scanner #
Obtain a BarcodeScanner instance:

BarcodeScanner barcodeScanner = GoogleMlKit.vision.barcodeScanner(
    formats: formats); // an optional list of BarcodeFormats
Supported BarcodeFormats: to use a specific format, use Barcode.FORMAT_Default, Barcode.FORMAT_Code_128, etc.
Call processImage(). It returns List<Barcode>:
final result = await barcodeScanner.processImage(inputImage);
To know more see this example
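A minimal sketch of reading the results; the Barcode value fields are assumptions, so check the Barcode class for the exact names:

final result = await barcodeScanner.processImage(inputImage);
for (final barcode in result) {
  // assumed field: the decoded display value of the barcode
  print(barcode.value.displayValue);
}
await barcodeScanner.close();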
Text Recognition #
Calling processImage() returns a RecognisedText object:
final text = await textDetector.processImage(inputImage);
To know more see this example
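A minimal sketch; the textDetector() factory and the text field on RecognisedText are assumptions based on the plugin's naming:

final textDetector = GoogleMlKit.vision.textDetector();
final text = await textDetector.processImage(inputImage);
print(text.text); // assumed field: the full recognised string
await textDetector.close();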
Face Detection #
To know more see this example
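A minimal sketch; the faceDetector() factory and a List of Face results are assumptions based on the other vision detectors:

final faceDetector = GoogleMlKit.vision.faceDetector();
final faces = await faceDetector.processImage(inputImage);
print('Found ${faces.length} face(s)');
await faceDetector.close();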
Language Detection #
- Call identifyLanguage(text) to identify the language of the text.
- Call identifyPossibleLanguages(text) to get a list of IdentifiedLanguage, which contains all the possible languages that are above the specified threshold. Default is 0.5.
- To get info on the identified BCP-47 tag, use this class.
To know more see this example.
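A minimal sketch; the languageIdentifier() factory name is an assumption based on the plugin's nlp namespace:

final languageIdentifier = GoogleMlKit.nlp.languageIdentifier();
final language = await languageIdentifier.identifyLanguage('Bonjour tout le monde');
print(language); // a BCP-47 tag, e.g. 'fr'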
On-Device Translator #
- Create an OnDeviceTranslator object:
final _onDeviceTranslator = GoogleMlKit.nlp.onDeviceTranslator(
    sourceLanguage: TranslateLanguage.ENGLISH,
    targetLanguage: TranslateLanguage.SPANISH);
- Call _onDeviceTranslator.translateText(text) to translate the text.

Note: Make sure the models are downloaded before calling translateText().
Managing translate language models explicitly

- Create a TranslateLanguageModelManager instance:
final _languageModelManager = GoogleMlKit.nlp.translateLanguageModelManager();
- Call _languageModelManager.downloadModel(TranslateLanguage.ENGLISH) to download a model.
- Call _languageModelManager.deleteModel(TranslateLanguage.ENGLISH) to delete a model.
- Call _languageModelManager.isModelDownloaded(TranslateLanguage.ENGLISH) to check whether a model is downloaded.
- Call _languageModelManager.getAvailableModels() to get a list of all downloaded models.
To know more see this example.
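Putting it together, a minimal end-to-end sketch using only the calls above (error handling omitted):

final _languageModelManager = GoogleMlKit.nlp.translateLanguageModelManager();
await _languageModelManager.downloadModel(TranslateLanguage.ENGLISH);
await _languageModelManager.downloadModel(TranslateLanguage.SPANISH);

final _onDeviceTranslator = GoogleMlKit.nlp.onDeviceTranslator(
    sourceLanguage: TranslateLanguage.ENGLISH,
    targetLanguage: TranslateLanguage.SPANISH);
final translated = await _onDeviceTranslator.translateText('Hello world');
print(translated);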
Contributing #
Contributions are welcome. In case of any problems, open an issue. Create an issue before opening a pull request for non-trivial fixes. In case of trivial fixes, open a pull request directly.