llama_library_flutter 0.0.3

A library for inference of Meta's LLaMA model (and other llama.cpp-compatible models).

Llama Library #

Llama Library is a library for running inference with any LLaMA/LLM AI model on the edge, without an API key or internet quota. Note that the resources required depend on the model you want to run.

Copyright (c) 2024 GLOBAL CORPORATION - GENERAL DEVELOPER

đŸ“šī¸ Docs #

  1. Documentation
  2. Youtube
  3. Telegram Support Group
  4. Contact Developer (see the social media links in the GitHub profile README)

đŸ”–ī¸ Features #

  1. ✅ đŸ“ąī¸ Cross-platform support (devices, edge serverless functions)
  2. ✅ đŸ“œī¸ Standardized code style
  3. ✅ âŒ¨ī¸ CLI (a terminal tool to help you use this library or scaffold a project)
  4. ✅ đŸ”Ĩī¸ API (if you are developing a bot/userbot, you can use this library directly without the CLI: just add the dependency and go đŸš€ī¸)
  5. ✅ đŸ§Šī¸ Customizable extensions (add your own extensions to speed up development)
  6. ✅ âœ¨ī¸ Pretty output (friendly for newcomers)

â”ī¸ Fun Fact #

  • This library is used in 100% of the projects I create (apps, servers, bots, userbots).

  • This library supports all models from llama.cpp. Which model you can run depends on your device specs: high-end hardware can run larger models at speed, while low-end hardware should stick to tiny/small models.

âš ī¸ Important information #

To get good AI results, you should have hardware that supports AI workloads; otherwise, the output quality will be poor.

đŸ“ˆī¸ Proggres #

  • 10-02-2025: first stable release with core features

Resources #

  1. MODEL

đŸ“Ĩī¸ Install Library #

  1. Dart
dart pub add llama_library
  2. Flutter
flutter pub add llama_library_flutter ggml_library_flutter
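
Alternatively, the dependencies can be declared directly in pubspec.yaml. A sketch for a Flutter project (the version constraints are illustrative; check pub.dev for the latest releases):

```yaml
dependencies:
  flutter:
    sdk: flutter
  # Versions below are illustrative assumptions, not pinned requirements.
  llama_library_flutter: ^0.0.3
  ggml_library_flutter: ^0.0.1
```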

đŸš€ī¸ Quick Start #

A minimal quick-start script to give you a feel for the library; as you can see, usage is very simple.


import 'dart:convert';
import 'dart:io';
import 'package:llama_library/llama_library.dart';
import 'package:llama_library/scheme/scheme/api/api.dart';
import 'package:llama_library/scheme/scheme/respond/update_llama_library_message.dart';

void main(List<String> args) async {
  print("start");
  File modelFile = File(
    "../../../../../big-data/deepseek-r1/deepseek-r1-distill-qwen-1.5b-q4_0.gguf",
  );
  final LlamaLibrary llamaLibrary = LlamaLibrary(
    sharedLibraryPath: "libllama.so",
    invokeParametersLlamaLibraryDataOptions:
        InvokeParametersLlamaLibraryDataOptions(
      invokeTimeOut: Duration(minutes: 10),
      isThrowOnError: false,
    ),
  );
  await llamaLibrary.ensureInitialized();
  llamaLibrary.on(
    eventType: llamaLibrary.eventUpdate,
    onUpdate: (data) {
      final update = data.update;
      if (update is UpdateLlamaLibraryMessage) {
        /// streaming update
        if (update.is_done == false) {
          stdout.write(update.text);
        } else if (update.is_done == true) {
          print("\n\n");
          print("-- done --");
        }
      }
    },
  );
  await llamaLibrary.initialized();
  final res = await llamaLibrary.invoke(
    invokeParametersLlamaLibraryData: InvokeParametersLlamaLibraryData(
      parameters: LoadModelFromFileLlamaLibrary.create(
        model_file_path: modelFile.path,
      ),
      isVoid: false,
      extra: null,
      invokeParametersLlamaLibraryDataOptions: null,
    ),
  );
  if (res["@type"] == "ok") {
    print("Success: model loaded");
  } else {
    print("Failed to load model");
    exit(1);
  }
  stdin.listen((e) async {
    print("\n\n");
    final String text = utf8.decode(e).trim();
    if (text == "exit") {
      llamaLibrary.dispose();
      exit(0);
    } else {
      await llamaLibrary.invoke(
        invokeParametersLlamaLibraryData: InvokeParametersLlamaLibraryData(
          parameters: SendLlamaLibraryMessage.create(
            text: text,
            is_stream: false,
          ),
          isVoid: true,
          extra: null,
          invokeParametersLlamaLibraryDataOptions: null,
        ),
      );
    }
  });
}
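
Building on the quick start above, here is a hedged sketch of sending a prompt with streaming enabled, assuming `is_stream: true` toggles token-by-token delivery through the same `UpdateLlamaLibraryMessage` event handler (the field and class names follow the quick-start code; treat the flag's semantics as an assumption if your version differs):

```dart
// Sketch: send a prompt with streaming enabled, assuming the same API
// surface as the quick start. Tokens would then arrive via the
// eventUpdate handler as UpdateLlamaLibraryMessage updates with
// is_done == false, until a final update sets is_done == true.
await llamaLibrary.invoke(
  invokeParametersLlamaLibraryData: InvokeParametersLlamaLibraryData(
    parameters: SendLlamaLibraryMessage.create(
      text: "Explain GGUF quantization in one sentence.",
      is_stream: true, // assumed flag; the quick start uses is_stream: false
    ),
    isVoid: true, // fire-and-forget; output is consumed by the update handler
    extra: null,
    invokeParametersLlamaLibraryDataOptions: null,
  ),
);
```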

Reference #

  1. ggerganov/llama.cpp: the upstream project, bridged over FFI, that makes this library able to run
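
Because the FFI bridge loads a native llama.cpp binary, the `sharedLibraryPath` passed to `LlamaLibrary` (hard-coded as `"libllama.so"` in the quick start) generally needs to vary by platform. A hedged helper sketch; the exact file names are assumptions and must match the binaries you actually ship:

```dart
import 'dart:io';

/// Pick a platform-appropriate shared-library file name for llama.cpp.
/// The names below are conventional assumptions, not guarantees; align
/// them with the native binaries bundled with your app.
String defaultLlamaSharedLibraryPath() {
  if (Platform.isWindows) return "llama.dll";
  if (Platform.isMacOS || Platform.isIOS) return "libllama.dylib";
  return "libllama.so"; // Linux / Android, as in the quick start
}
```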


Example Project Use This Library #

A minimal, simple example application using this library: YouTube Video



License #

Apache-2.0
