Information Extraction with LLMs
This section contains a collection of prompts for exploring the information extraction capabilities of large language models (LLMs).
Background
The following prompt tests an LLM's ability to perform an information extraction task: extracting model names from the abstract of a machine learning paper.
Prompt
Your task is to extract model names from machine learning paper abstracts. Format your response as an array, for example: ["model_name"]. If you don't find any model names in the abstract, or you are not sure, return ["NA"].
**Abstract:**
Large Language Models (LLMs), such as ChatGPT and GPT-4, have revolutionized natural language processing research and demonstrated potential in Artificial General Intelligence (AGI). However, the expensive training and deployment of LLMs present challenges to transparent and open academic research. To address these issues, this project open-sources the Chinese LLaMA and Alpaca…
Prompt Template
Your task is to extract model names from machine learning paper abstracts. Format your response as an array, for example: ["model_name"]. If you don't find any model names in the abstract, or you are not sure, return ["NA"].
Abstract: {input}
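As a minimal sketch, the `{input}` placeholder in the template can be filled programmatically before the prompt is sent to a model (the helper name `build_prompt` is illustrative, not part of the original):

```python
# Template mirroring the prompt above; {input} is the only substitution slot.
TEMPLATE = (
    "Your task is to extract model names from machine learning paper abstracts. "
    'Format your response as an array, for example: ["model_name"]. '
    "If you don't find any model names in the abstract, or you are not sure, "
    'return ["NA"].\n\nAbstract: {input}'
)

def build_prompt(abstract: str) -> str:
    """Substitute the paper abstract into the {input} slot of the template."""
    return TEMPLATE.format(input=abstract)

prompt = build_prompt("Large Language Models (LLMs), such as ChatGPT and GPT-4, ...")
print(prompt)
```

The filled-in string is what gets passed as the `content` of the user message in the code examples below.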
Code
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Your task is to extract model names from machine learning paper abstracts. Your response is an array of the model names in the format [\"model_name\"]. If you don't find model names in the abstract or you are not sure, return [\"NA\"]\n\nAbstract: Large Language Models (LLMs), such as ChatGPT and GPT-4, have revolutionized natural language processing research and demonstrated potential in Artificial General Intelligence (AGI). However, the expensive training and deployment of LLMs present challenges to transparent and open academic research. To address these issues, this project open-sources the Chinese LLaMA and Alpaca…"
        }
    ],
    temperature=1,
    max_tokens=250,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)

print(response.choices[0].message.content)
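Because the prompt asks the model to reply with a JSON-style array, the reply can be parsed with `json.loads`. A minimal sketch, assuming the model follows the requested format (the sample reply string and the helper name `parse_model_names` are illustrative, not an actual API response):

```python
import json

def parse_model_names(reply: str) -> list[str]:
    """Parse the model's array-formatted reply; fall back to ["NA"] if malformed."""
    try:
        names = json.loads(reply)
    except json.JSONDecodeError:
        return ["NA"]
    if not isinstance(names, list):
        return ["NA"]
    return [str(n) for n in names]

# Illustrative reply for the abstract above:
print(parse_model_names('["ChatGPT", "GPT-4", "LLaMA", "Alpaca"]'))
# → ['ChatGPT', 'GPT-4', 'LLaMA', 'Alpaca']
```

The fallback to `["NA"]` mirrors the prompt's own convention for the "no model found / not sure" case, so downstream code only has to handle one shape of result.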
import fireworks.client

fireworks.client.api_key = "<FIREWORKS_API_KEY>"

completion = fireworks.client.ChatCompletion.create(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    messages=[
        {
            "role": "user",
            "content": "Your task is to extract model names from machine learning paper abstracts. Your response is an array of the model names in the format [\"model_name\"]. If you don't find model names in the abstract or you are not sure, return [\"NA\"]\n\nAbstract: Large Language Models (LLMs), such as ChatGPT and GPT-4, have revolutionized natural language processing research and demonstrated potential in Artificial General Intelligence (AGI). However, the expensive training and deployment of LLMs present challenges to transparent and open academic research. To address these issues, this project open-sources the Chinese LLaMA and Alpaca…"
        }
    ],
    stop=["<|im_start|>", "<|im_end|>", "<|endoftext|>"],
    stream=True,
    n=1,
    top_p=1,
    top_k=40,
    presence_penalty=0,
    frequency_penalty=0,
    prompt_truncate_len=1024,
    context_length_exceeded_behavior="truncate",
    temperature=0.9,
    max_tokens=4000
)

# With stream=True the call returns the response incrementally;
# print each chunk's delta as it arrives.
for chunk in completion:
    print(chunk.choices[0].delta.content or "", end="")