LLMChain<LLMType extends BaseLanguageModel<Object, LanguageModelOptions, LanguageModelResult<Object>>, LLMOptions extends LanguageModelOptions, MemoryType extends BaseMemory> constructor

```dart
const LLMChain<LLMType extends BaseLanguageModel<Object, LanguageModelOptions, LanguageModelResult<Object>>, LLMOptions extends LanguageModelOptions, MemoryType extends BaseMemory>({
  required LLMType llm,
  LLMOptions? llmOptions,
  required BasePromptTemplate prompt,
  MemoryType? memory,
  BaseOutputParser<Object?, OutputParserOptions, Object?>? outputParser,
  String outputKey = defaultOutputKey,
  bool returnFinalOnly = true,
})
```
NOTE: Chains are the legacy way of using LangChain and will eventually be removed. Use LangChain Expression Language (LCEL) instead, a more flexible and powerful way to combine different components.
Chain to run queries against LLMs.
It takes in a prompt template, formats it with the user input and returns the response from an LLM.
Example:
```dart
final prompt = PromptTemplate.fromTemplate('Tell me a {adjective} joke');
final chain = LLMChain(llm: OpenAI(apiKey: '...'), prompt: prompt);
final res = await chain.run('bad');
```
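When the prompt template has more than one input variable, `run` (which accepts a single value) no longer applies; the chain can instead be invoked with a map of inputs. The sketch below illustrates this, together with the optional `memory` parameter. It is a hypothetical example assuming the langchain_dart API (`ConversationBufferMemory` and `chain.call` as named here), not an excerpt from the package docs:

```dart
// Hypothetical sketch: multi-variable prompt plus optional memory.
final prompt = PromptTemplate.fromTemplate(
  'Tell me a {adjective} joke about {subject}',
);
final chain = LLMChain(
  llm: OpenAI(apiKey: '...'),
  prompt: prompt,
  memory: ConversationBufferMemory(), // assumed: keeps prior turns in context
);
// With multiple input variables, pass a map instead of using run().
final res = await chain.call({'adjective': 'bad', 'subject': 'cats'});
// The response is stored under the chain's output key ('output' by default).
print(res[chain.outputKey]);
```

Because `returnFinalOnly` defaults to `true`, `res` contains only the final parsed output under `outputKey`; intermediate generation metadata is dropped.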
Implementation
```dart
const LLMChain({
  required this.llm,
  this.llmOptions,
  required this.prompt,
  super.memory,
  this.outputParser,
  this.outputKey = defaultOutputKey,
  this.returnFinalOnly = true,
}) : super(defaultOptions: const ChainOptions());
```