whisper4dart_bindings_generated library

Classes

ggml_backend
ggml_backend_buffer
ggml_backend_buffer_type
ggml_backend_buffer_usage
Backend buffer
ggml_backend_dev_caps
functionality supported by the device
ggml_backend_dev_props
all the device properties
ggml_backend_dev_type
Backend device
ggml_backend_device
ggml_backend_event
ggml_backend_feature
Get a list of feature flags supported by the backend (returns a NULL-terminated array)
ggml_backend_graph_copy
Utils
ggml_backend_reg
ggml_backend_sched
ggml_bf16_t
Google Brain half-precision bfloat16
ggml_cgraph
ggml_context
ggml_cplan
The compute plan that needs to be prepared for ggml_graph_compute(); see https://github.com/ggerganov/ggml/issues/287
ggml_ftype
model file types
ggml_init_params
ggml_log_level
ggml_numa_strategy
numa strategies
ggml_object
ggml_object_type
ggml_op
available tensor operations:
ggml_op_pool
ggml_prec
precision
ggml_sched_priority
scheduling priorities
ggml_sort_order
sort rows
ggml_status
ggml_tensor
n-dimensional tensor
ggml_tensor_flag
this tensor...
ggml_threadpool
ggml_threadpool_params
Threadpool params. Use ggml_threadpool_params_default() or ggml_threadpool_params_init() to populate the defaults.
ggml_type
NOTE: always add types at the end of the enum to keep backward compatibility
ggml_type_traits
ggml_type_traits_cpu
ggml_unary_op
UnnamedStruct1
UnnamedStruct2
whisper_ahead
whisper_aheads
whisper_alignment_heads_preset
whisper_context
C interface
whisper_context_params
whisper_full_params
Parameters for the whisper_full() function. If you change the order or add new parameters, make sure to update the default values in whisper.cpp: whisper_full_default_params()
whisper_grammar_element
whisper_gretype
grammar element type
whisper_model_loader
whisper_sampling_strategy
Available sampling strategies
whisper_state
whisper_timings
Performance information from the default state.
whisper_token_data
WhisperDartBindings
Bindings for src/whisper4dart.h.

Typedefs

FILE = _IO_FILE
ggml_abort_callback = Pointer<NativeFunction<Bool Function(Pointer<Void> data)>>
Abort callback. If not NULL, called before ggml computation. If it returns true, the computation is aborted.
ggml_backend_buffer_t = Pointer<ggml_backend_buffer>
ggml_backend_buffer_type_t = Pointer<ggml_backend_buffer_type>
ggml_backend_dev_t = Pointer<ggml_backend_device>
ggml_backend_eval_callback = Pointer<NativeFunction<Bool Function(Int node_index, Pointer<ggml_tensor> t1, Pointer<ggml_tensor> t2, Pointer<Void> user_data)>>
ggml_backend_event_t = Pointer<ggml_backend_event>
ggml_backend_graph_plan_t = Pointer<Void>
ggml_backend_reg_t = Pointer<ggml_backend_reg>
ggml_backend_sched_eval_callback = Pointer<NativeFunction<Bool Function(Pointer<ggml_tensor> t, Bool ask, Pointer<Void> user_data)>>
Evaluation callback for each node in the graph (set with ggml_backend_sched_set_eval_callback). When ask == true, the scheduler wants to know whether the user wants to observe this node; this allows the scheduler to batch nodes together in order to evaluate them in a single call.
ggml_backend_sched_t = Pointer<ggml_backend_sched>
The backend scheduler allows multiple backend devices to be used together. It handles compute buffer allocation, assignment of tensors to backends, and copying of tensors between backends. The backends are selected based on:
ggml_backend_t = Pointer<ggml_backend>
ggml_binary_op_f32_t = Pointer<NativeFunction<Void Function(Int, Pointer<Float>, Pointer<Float>, Pointer<Float>)>>
ggml_custom1_op_f32_t = Pointer<NativeFunction<Void Function(Pointer<ggml_tensor>, Pointer<ggml_tensor>)>>
ggml_custom1_op_t = Pointer<NativeFunction<Void Function(Pointer<ggml_tensor> dst, Pointer<ggml_tensor> a, Int ith, Int nth, Pointer<Void> userdata)>>
custom operators v2
ggml_custom2_op_f32_t = Pointer<NativeFunction<Void Function(Pointer<ggml_tensor>, Pointer<ggml_tensor>, Pointer<ggml_tensor>)>>
ggml_custom2_op_t = Pointer<NativeFunction<Void Function(Pointer<ggml_tensor> dst, Pointer<ggml_tensor> a, Pointer<ggml_tensor> b, Int ith, Int nth, Pointer<Void> userdata)>>
ggml_custom3_op_f32_t = Pointer<NativeFunction<Void Function(Pointer<ggml_tensor>, Pointer<ggml_tensor>, Pointer<ggml_tensor>, Pointer<ggml_tensor>)>>
ggml_custom3_op_t = Pointer<NativeFunction<Void Function(Pointer<ggml_tensor> dst, Pointer<ggml_tensor> a, Pointer<ggml_tensor> b, Pointer<ggml_tensor> c, Int ith, Int nth, Pointer<Void> userdata)>>
ggml_fp16_t = Uint16
IEEE 754-2008 half-precision float16. TODO: make this not an integral type.
ggml_from_float_t = Pointer<NativeFunction<Void Function(Pointer<Float> x, Pointer<Void> y, Int64 k)>>
ggml_guid_t = Pointer<Pointer<Uint8>>
ggml_log_callback = Pointer<NativeFunction<Void Function(Int32 level, Pointer<Char> text, Pointer<Void> user_data)>>
TODO: these functions were sandwiched in the old optimization interface; is there a better place for them?
ggml_threadpool_t = Pointer<ggml_threadpool>
ggml_to_float_t = Pointer<NativeFunction<Void Function(Pointer<Void> x, Pointer<Float> y, Int64 k)>>
ggml_unary_op_f32_t = Pointer<NativeFunction<Void Function(Int, Pointer<Float>, Pointer<Float>)>>
custom operators
ggml_vec_dot_t = Pointer<NativeFunction<Void Function(Int n, Pointer<Float> s, Size bs, Pointer<Void> x, Size bx, Pointer<Void> y, Size by, Int nrc)>>
Internal types and functions exposed for tests and benchmarks
whisper_encoder_begin_callback = Pointer<NativeFunction<Bool Function(Pointer<whisper_context> ctx, Pointer<whisper_state> state, Pointer<Void> user_data)>>
Encoder begin callback. If not NULL, called before the encoder starts. If it returns false, the computation is aborted.
whisper_logits_filter_callback = Pointer<NativeFunction<Void Function(Pointer<whisper_context> ctx, Pointer<whisper_state> state, Pointer<whisper_token_data> tokens, Int n_tokens, Pointer<Float> logits, Pointer<Void> user_data)>>
Logits filter callback. Can be used to modify the logits before sampling. If not NULL, called after applying temperature to the logits.
whisper_new_segment_callback = Pointer<NativeFunction<Void Function(Pointer<whisper_context> ctx, Pointer<whisper_state> state, Int n_new, Pointer<Void> user_data)>>
Text segment callback. Called on every newly generated text segment. Use the whisper_full_...() functions to obtain the text segments.
whisper_progress_callback = Pointer<NativeFunction<Void Function(Pointer<whisper_context> ctx, Pointer<whisper_state> state, Int progress, Pointer<Void> user_data)>>
Progress callback
whisper_token = Int32