VertexAITextModelRequestParams class

Parameters for a Vertex AI Text model request.

Annotations
  • @immutable

Constructors

VertexAITextModelRequestParams.new({double temperature = 0.2, int maxOutputTokens = 1024, double topP = 0.95, int topK = 40, List<String> stopSequences = const [], int candidateCount = 1})
Parameters for a Vertex AI Text model request.
const
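Example (a minimal usage sketch; the import path 'package:vertex_ai/vertex_ai.dart' is an assumption and may differ for your package):

import 'package:vertex_ai/vertex_ai.dart'; // assumed export path

void main() {
  // Any omitted parameter falls back to the defaults shown in the
  // constructor signature above (temperature 0.2, maxOutputTokens 1024, ...).
  const params = VertexAITextModelRequestParams(
    temperature: 0.1,        // more deterministic output
    maxOutputTokens: 256,    // cap the response length
    stopSequences: ['\n\n'], // stop at the first blank line
  );

  // The class is @immutable and the constructor is const, so a single
  // instance can be reused across requests.
  print(params.temperature);
}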

Properties

candidateCount → int
The number of response variations to return.
final
hashCode → int
The hash code for this object.
no setter, override
maxOutputTokens → int
Maximum number of tokens that can be generated in the response. A token is approximately four characters. 100 tokens correspond to roughly 60-80 words.
final
runtimeType → Type
A representation of the runtime type of the object.
no setter, inherited
stopSequences → List<String>
Specifies a list of strings that tells the model to stop generating text if one of the strings is encountered in the response. If a string appears multiple times in the response, then the response truncates where it's first encountered. The strings are case-sensitive.
final
temperature → double
The temperature is used for sampling during response generation, which occurs when topP and topK are applied. Temperature controls the degree of randomness in token selection. Lower temperatures are good for prompts that require a more deterministic and less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 is deterministic, meaning that the highest probability response is always selected.
final
topK → int
Top-K changes how the model selects tokens for output. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.
final
topP → double
Top-P changes how the model selects tokens for output. Tokens are selected from the most probable (see top-K) to the least probable until the sum of their probabilities equals the top-P value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-P value is 0.5, then the model selects either A or B as the next token by using temperature and excludes C as a candidate. (An illustrative sketch follows the property list.)
final
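The interaction between topK, topP, and temperature can be pictured with a short, purely illustrative sketch of the selection rule described above. This is not the service's actual sampler, and pickNextToken is a hypothetical helper:

import 'dart:math';

// Illustrative only: a rough sketch of the top-K / top-P / temperature
// selection rule described in the property docs. The real sampling happens
// server-side in the Vertex AI text model.
String pickNextToken(
  Map<String, double> tokenProbs, {
  int topK = 40,
  double topP = 0.95,
  double temperature = 0.2,
}) {
  // Rank tokens from most to least probable and keep at most top-K of them.
  final ranked = tokenProbs.entries.toList()
    ..sort((a, b) => b.value.compareTo(a.value));
  final kept = <MapEntry<String, double>>[];
  var cumulative = 0.0;
  for (final entry in ranked.take(topK)) {
    kept.add(entry);
    cumulative += entry.value;
    // Stop once the summed probability reaches top-P. With A=0.3, B=0.2,
    // C=0.1 and topP=0.5, A and B survive and C is excluded.
    if (cumulative >= topP) break;
  }
  // Re-weight the survivors with temperature (lower = more deterministic)
  // and sample one of them. Assumes temperature > 0.
  final weights =
      kept.map((e) => pow(e.value, 1 / temperature).toDouble()).toList();
  final total = weights.reduce((a, b) => a + b);
  var r = Random().nextDouble() * total;
  for (var i = 0; i < kept.length; i++) {
    r -= weights[i];
    if (r <= 0) return kept[i].key;
  }
  return kept.last.key;
}

void main() {
  print(pickNextToken({'A': 0.3, 'B': 0.2, 'C': 0.1}, topP: 0.5));
}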

Methods

noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
toMap() → Map<String, dynamic>
Converts this object to a Map. (A serialization sketch follows the method list.)
toString() → String
A string representation of this object.
override
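A hedged serialization sketch using toMap together with dart:convert's jsonEncode; the exact keys of the returned Map are not documented here, and the import path is an assumption:

import 'dart:convert';

import 'package:vertex_ai/vertex_ai.dart'; // assumed export path

void main() {
  const params = VertexAITextModelRequestParams(
    maxOutputTokens: 512,
    stopSequences: ['END'],
  );
  // toMap() is documented above; jsonEncode turns the resulting Map into a
  // JSON string suitable for logging or embedding in a request body.
  print(jsonEncode(params.toMap()));
}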

Operators

operator ==(covariant VertexAITextModelRequestParams other) → bool
The equality operator.
override