# How to use reference examples
The quality of extractions can often be improved by providing reference examples to the LLM.

While this tutorial focuses on how to use examples with a tool-calling model, the technique is generally applicable and will also work with JSON mode or prompt-based techniques.

We'll use OpenAI's GPT-4 this time for its robust support for `ToolMessage`s:
```bash
# npm
npm i @langchain/openai zod uuid

# yarn
yarn add @langchain/openai zod uuid

# pnpm
pnpm add @langchain/openai zod uuid
```
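You'll also need an OpenAI API key. `ChatOpenAI` picks it up from the `OPENAI_API_KEY` environment variable. As a minimal sketch for a Node.js environment (the key below is a hypothetical placeholder), you can also set it in code before constructing the model:

```typescript
// Hypothetical placeholder key. In practice, prefer exporting
// OPENAI_API_KEY in your shell or loading it from a .env file.
process.env.OPENAI_API_KEY = "sk-...";
```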
Let's define a prompt:
```typescript
import {
  ChatPromptTemplate,
  MessagesPlaceholder,
} from "@langchain/core/prompts";

const SYSTEM_PROMPT_TEMPLATE = `You are an expert extraction algorithm.
Only extract relevant information from the text.
If you do not know the value of an attribute asked to extract, you may omit the attribute's value.`;

// Define a custom prompt to provide instructions and any additional context.
// 1) You can add examples into the prompt template to improve extraction quality.
// 2) Introduce additional parameters to take context into account (e.g., include
//    metadata about the document from which the text was extracted).
const prompt = ChatPromptTemplate.fromMessages([
  ["system", SYSTEM_PROMPT_TEMPLATE],
  // ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓
  new MessagesPlaceholder("examples"),
  // ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑
  ["human", "{text}"],
]);
```
Test out the template:
```typescript
import { HumanMessage } from "@langchain/core/messages";

const promptValue = await prompt.invoke({
  text: "this is some text",
  examples: [new HumanMessage("testing 1 2 3")],
});

promptValue.toChatMessages();
```
```text
[
SystemMessage {
lc_serializable: true,
lc_kwargs: {
content: "You are an expert extraction algorithm.\n" +
"Only extract relevant information from the text.\n" +
"If you do n"... 87 more characters,
additional_kwargs: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "You are an expert extraction algorithm.\n" +
"Only extract relevant information from the text.\n" +
"If you do n"... 87 more characters,
name: undefined,
additional_kwargs: {}
},
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "testing 1 2 3", additional_kwargs: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "testing 1 2 3",
name: undefined,
additional_kwargs: {}
},
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "this is some text", additional_kwargs: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "this is some text",
name: undefined,
additional_kwargs: {}
}
]
```
## Define the schema

Let's re-use the people schema from the quickstart.
```typescript
import { z } from "zod";

const personSchema = z
  .object({
    name: z.optional(z.string()).describe("The name of the person"),
    hair_color: z
      .optional(z.string())
      .describe("The color of the person's hair, if known"),
    height_in_meters: z
      .optional(z.string())
      .describe("Height measured in meters"),
  })
  .describe("Information about a person.");

const peopleSchema = z.object({
  people: z.array(personSchema),
});
```
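As a quick aside (this check isn't part of the original quickstart), note that every attribute on `personSchema` is optional, so Zod accepts a person with missing fields. A minimal sketch using the schema defined above:

```typescript
// Parsing succeeds even when optional attributes are absent;
// .parse() throws if the data doesn't match the schema.
const parsed = peopleSchema.parse({
  people: [{ name: "Alan", hair_color: "brown" }], // height_in_meters omitted
});
console.log(parsed.people[0].name); // -> "Alan"
```

This mirrors what the extractor should do: attributes the model can't find are simply omitted rather than filled with placeholder values.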
## Define reference examples

Examples can be defined as a list of input-output pairs. Each example contains an example input text and an example output showing what should be extracted from the text.

The example below is a bit more advanced: the format of the example needs to match the API used (e.g., tool calling, JSON mode, etc.). Here, the formatted examples will match the format expected by the OpenAI tool calling API, since that's what we're using.
To provide reference examples to the model, we will mock out a fake chat history containing successful usages of the given tool. Because the model can choose to call multiple tools at once (or the same tool multiple times), the example's outputs are an array:
```typescript
import {
  AIMessage,
  type BaseMessage,
  HumanMessage,
  ToolMessage,
} from "@langchain/core/messages";
import { v4 as uuid } from "uuid";

type OpenAIToolCall = {
  id: string;
  type: "function";
  function: {
    name: string;
    arguments: string;
  };
};

type Example = {
  input: string;
  toolCallOutputs: Record<string, any>[];
};

/**
 * This function converts an example into a list of messages that can be fed into an LLM.
 *
 * This code serves as an adapter that transforms our example into a list of messages
 * that can be processed by a chat model.
 *
 * The list of messages for each example includes:
 *
 * 1) HumanMessage: This contains the content from which information should be extracted.
 * 2) AIMessage: This contains the information extracted by the model.
 * 3) ToolMessage: This provides confirmation to the model that the tool was requested correctly.
 *
 * The inclusion of ToolMessage is necessary because some chat models are highly optimized for agents,
 * making them less suitable for an extraction use case.
 */
function toolExampleToMessages(example: Example): BaseMessage[] {
  const openAIToolCalls: OpenAIToolCall[] = example.toolCallOutputs.map(
    (output) => {
      return {
        id: uuid(),
        type: "function",
        function: {
          // The name of the tool the example shows being called.
          // We use "extract" as a generic name here.
          name: "extract",
          arguments: JSON.stringify(output),
        },
      };
    }
  );
  const messages: BaseMessage[] = [
    new HumanMessage(example.input),
    new AIMessage({
      content: "",
      additional_kwargs: { tool_calls: openAIToolCalls },
    }),
  ];
  const toolMessages = openAIToolCalls.map((toolCall) => {
    // Return a mocked successful result for each tool call.
    return new ToolMessage({
      content: "You have correctly called this tool.",
      tool_call_id: toolCall.id,
    });
  });
  return messages.concat(toolMessages);
}
```
Next, let's define our examples and then convert them into message format.
```typescript
const examples: Example[] = [
  {
    input:
      "The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.",
    toolCallOutputs: [{}],
  },
  {
    input: "Fiona traveled far from France to Spain.",
    toolCallOutputs: [
      {
        name: "Fiona",
      },
    ],
  },
];

const exampleMessages = [];
for (const example of examples) {
  exampleMessages.push(...toolExampleToMessages(example));
}
```
Let's test out the prompt:
```typescript
const promptValue = await prompt.invoke({
  text: "this is some text",
  examples: exampleMessages,
});

promptValue.toChatMessages();
```
```text
[
SystemMessage {
lc_serializable: true,
lc_kwargs: {
content: "You are an expert extraction algorithm.\n" +
"Only extract relevant information from the text.\n" +
"If you do n"... 87 more characters,
additional_kwargs: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "You are an expert extraction algorithm.\n" +
"Only extract relevant information from the text.\n" +
"If you do n"... 87 more characters,
name: undefined,
additional_kwargs: {}
},
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.",
additional_kwargs: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "The ocean is vast and blue. It's more than 20,000 feet deep. There are many fish in it.",
name: undefined,
additional_kwargs: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: { content: "", additional_kwargs: { tool_calls: [ [Object] ] } },
lc_namespace: [ "langchain_core", "messages" ],
content: "",
name: undefined,
additional_kwargs: {
tool_calls: [
{
id: "8fa4d00d-801f-470e-8737-51ee9dc82259",
type: "function",
function: [Object]
}
]
}
},
ToolMessage {
lc_serializable: true,
lc_kwargs: {
content: "You have correctly called this tool.",
tool_call_id: "8fa4d00d-801f-470e-8737-51ee9dc82259",
additional_kwargs: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "You have correctly called this tool.",
name: undefined,
additional_kwargs: {},
tool_call_id: "8fa4d00d-801f-470e-8737-51ee9dc82259"
},
HumanMessage {
lc_serializable: true,
lc_kwargs: {
content: "Fiona traveled far from France to Spain.",
additional_kwargs: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "Fiona traveled far from France to Spain.",
name: undefined,
additional_kwargs: {}
},
AIMessage {
lc_serializable: true,
lc_kwargs: { content: "", additional_kwargs: { tool_calls: [ [Object] ] } },
lc_namespace: [ "langchain_core", "messages" ],
content: "",
name: undefined,
additional_kwargs: {
tool_calls: [
{
id: "14ad6217-fcbd-47c7-9006-82f612e36c66",
type: "function",
function: [Object]
}
]
}
},
ToolMessage {
lc_serializable: true,
lc_kwargs: {
content: "You have correctly called this tool.",
tool_call_id: "14ad6217-fcbd-47c7-9006-82f612e36c66",
additional_kwargs: {}
},
lc_namespace: [ "langchain_core", "messages" ],
content: "You have correctly called this tool.",
name: undefined,
additional_kwargs: {},
tool_call_id: "14ad6217-fcbd-47c7-9006-82f612e36c66"
},
HumanMessage {
lc_serializable: true,
lc_kwargs: { content: "this is some text", additional_kwargs: {} },
lc_namespace: [ "langchain_core", "messages" ],
content: "this is some text",
name: undefined,
additional_kwargs: {}
}
]
```
## Create an extractor

Here, we'll create an extractor using `gpt-4`.
```typescript
import { ChatOpenAI } from "@langchain/openai";

// We will be using tool calling mode, which
// requires a tool calling capable model.
const llm = new ChatOpenAI({
  // Consider benchmarking with the best model you can to get
  // a sense of the best possible quality.
  model: "gpt-4-0125-preview",
  temperature: 0,
});

// For function/tool calling, we can also supply a name for the schema
// to give the LLM additional context about what it's extracting.
const extractionRunnable = prompt.pipe(
  llm.withStructuredOutput(peopleSchema, { name: "people" })
);
```
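As a side note (not from the original tutorial): because `withStructuredOutput` was given a Zod schema, the runnable's output is already parsed into a plain object. You can recover the corresponding TypeScript type with `z.infer`:

```typescript
// The extraction runnable's output type, inferred from the Zod schema. Roughly:
// { people: { name?: string; hair_color?: string; height_in_meters?: string }[] }
type People = z.infer<typeof peopleSchema>;
```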
## Without examples 😿

Notice that even though we're using `gpt-4`, it's unreliable with a very simple test case! We run it 5 times below to emphasize this:
```typescript
const text = "The solar system is large, but earth has only 1 moon.";

for (let i = 0; i < 5; i++) {
  const result = await extractionRunnable.invoke({
    text,
    examples: [],
  });
  console.log(result);
}
```
```text
{
people: [ { name: "earth", hair_color: "grey", height_in_meters: "1" } ]
}
{ people: [ { name: "earth", hair_color: "moon" } ] }
{ people: [ { name: "earth", hair_color: "moon" } ] }
{ people: [ { name: "earth", hair_color: "1 moon" } ] }
{ people: [] }
```
## With examples 😻

Reference examples help fix the failure! Note that our first reference example demonstrates a "negative" case where nothing should be extracted, which teaches the model to return an empty list when the text mentions no people:
```typescript
const text = "The solar system is large, but earth has only 1 moon.";

for (let i = 0; i < 5; i++) {
  const result = await extractionRunnable.invoke({
    text,
    // Example messages from above
    examples: exampleMessages,
  });
  console.log(result);
}
```
```text
{ people: [] }
{ people: [] }
{ people: [] }
{ people: [] }
{ people: [] }
```
```typescript
await extractionRunnable.invoke({
  text: "My name is Hair-ison. My hair is black. I am 3 meters tall.",
  examples: exampleMessages,
});
```
```text
{
people: [ { name: "Hair-ison", hair_color: "black", height_in_meters: "3" } ]
}
```