LLMChain<LLMType extends BaseLanguageModel<Object, LanguageModelOptions, LanguageModelResult<Object>>, LLMOptions extends LanguageModelOptions, MemoryType extends BaseMemory> constructor

const LLMChain<LLMType extends BaseLanguageModel<Object, LanguageModelOptions, LanguageModelResult<Object>>, LLMOptions extends LanguageModelOptions, MemoryType extends BaseMemory>({
  required LLMType llm,
  LLMOptions? llmOptions,
  required BasePromptTemplate prompt,
  MemoryType? memory,
  BaseOutputParser<Object?, OutputParserOptions, Object?>? outputParser,
  String outputKey = defaultOutputKey,
  bool returnFinalOnly = true,
})

NOTE: Chains are the legacy way of using LangChain and will eventually be removed. Use LangChain Expression Language (LCEL) instead, a more flexible and powerful way to combine components.
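As a rough sketch of the LCEL alternative the note recommends, a prompt template can be piped directly into a model (this assumes the `Runnable` piping API of langchain_dart, i.e. that `PromptTemplate` exposes `pipe` and the resulting chain exposes `invoke`):

final prompt = PromptTemplate.fromTemplate('Tell me a {adjective} joke');
// Sketch: compose prompt and model into a Runnable chain via pipe().
final chain = prompt.pipe(llm);
// invoke() takes a map of input variables instead of a single string.
final res = await chain.invoke({'adjective': 'bad'});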

Chain to run queries against LLMs.

It takes a prompt template, formats it with the user input, and returns the response from the LLM.

Example:

final prompt = PromptTemplate.fromTemplate('Tell me a {adjective} joke');
final chain = LLMChain(llm: OpenAI(apiKey: '...'), prompt: prompt);
final res = await chain.run('bad');
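When the prompt template has more than one input variable, `run` (which accepts a single value) is not enough. A sketch of invoking the chain with a map instead, assuming the `call` method inherited from the base chain class and the default output key shown in the signature above:

final prompt = PromptTemplate.fromTemplate(
  'Tell me a {adjective} joke about {subject}',
);
final chain = LLMChain(llm: OpenAI(apiKey: '...'), prompt: prompt);
// call() takes a map keyed by the prompt's input variables.
final res = await chain.call({'adjective': 'bad', 'subject': 'cats'});
// The result map is keyed by outputKey (defaultOutputKey unless overridden).
print(res[LLMChain.defaultOutputKey]);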

Implementation

const LLMChain({
  required this.llm,
  this.llmOptions,
  required this.prompt,
  super.memory,
  this.outputParser,
  this.outputKey = defaultOutputKey,
  this.returnFinalOnly = true,
}) : super(defaultOptions: const ChainOptions());