Prompt Hub - Question Answering with LLMs

This section contains a collection of prompts for testing the question-answering capabilities of large language models (LLMs).




Closed-Domain Question Answering with LLMs

Background

The following prompt tests an LLM's ability to answer closed-domain questions, i.e., questions restricted to a specific topic or field of knowledge.

Prompt

Patient's facts:

* 20-year-old female
* History of anorexia nervosa and depression
* Blood pressure 100/50, pulse 50, height 5'5''
* Referred by her nutritionist, but in denial of her illness
* Reports eating fine, but is severely underweight

Please rewrite the data above into a medical note, using exclusively the information above.
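The fact list above can be assembled into the full prompt programmatically, which keeps the template reusable across patients. A minimal sketch (the `build_note_prompt` helper is illustrative, not part of the original guide):

```python
def build_note_prompt(facts):
    """Join a list of patient facts into the closed-domain rewriting prompt."""
    bullet_lines = "\n".join(f"- {fact}" for fact in facts)
    return (
        "Patient's facts:\n"
        f"{bullet_lines}\n\n"
        "Please rewrite the data above into a medical note, "
        "using exclusively the information above."
    )

facts = [
    "20 year old female",
    "with a history of anorexia nervosa and depression",
    "blood pressure 100/50, pulse 50, height 5'5''",
]
print(build_note_prompt(facts))
```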

Code

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Patient's facts:\n- 20 year old female\n- with a history of anorexia nervosa and depression\n- blood pressure 100/50, pulse 50, height 5'5''\n- referred by her nutritionist but is in denial of her illness\n- reports eating fine but is severely underweight\n\nPlease rewrite the data above into a medical note, using exclusively the information above."
        }
    ],
    temperature=1,
    max_tokens=500,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
print(response.choices[0].message.content)

import fireworks.client

fireworks.client.api_key = "<FIREWORKS_API_KEY>"

completion = fireworks.client.ChatCompletion.create(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    messages=[
        {
            "role": "user",
            "content": "Patient's facts:\n- 20 year old female\n- with a history of anorexia nervosa and depression\n- blood pressure 100/50, pulse 50, height 5'5''\n- referred by her nutritionist but is in denial of her illness\n- reports eating fine but is severely underweight\n\nPlease rewrite the data above into a medical note, using exclusively the information above."
        }
    ],
    stop=["<|im_start|>", "<|im_end|>", "<|endoftext|>"],
    stream=True,
    n=1,
    top_p=1,
    top_k=40,
    presence_penalty=0,
    frequency_penalty=0,
    prompt_truncate_len=1024,
    context_length_exceeded_behavior="truncate",
    temperature=0.9,
    max_tokens=4000
)
# stream=True returns the completion incrementally; print each chunk as it arrives
for chunk in completion:
    print(chunk.choices[0].delta.content or "", end="")

Open-Domain Question Answering with LLMs

Background

The following prompt tests an LLM's ability to answer open-domain questions, i.e., factual questions posed without any supporting evidence.

Prompt

In this conversation between a human and the AI, the AI is helpful and friendly, and when it does not know the answer it says "I don't know".

AI: Hi, how can I help you?
Human: Can I get McDonalds at the SeaTac airport?
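The honesty-primed conversation frame above can wrap any user question. A minimal sketch (the `open_domain_prompt` helper is illustrative, not part of the original guide):

```python
TEMPLATE = (
    "In this conversation between a human and the AI, the AI is helpful "
    "and friendly, and when it does not know the answer it says "
    '"I don\'t know".\n\n'
    "AI: Hi, how can I help you?\n"
    "Human: {question}"
)

def open_domain_prompt(question):
    """Insert the user's question into the 'say I don't know' conversation template."""
    return TEMPLATE.format(question=question)

print(open_domain_prompt("Can I get McDonalds at the SeaTac airport?"))
```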

Code

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "In this conversation between a human and the AI, the AI is helpful and friendly, and when it does not know the answer it says \"I don't know\".\n\nAI: Hi, how can I help you?\nHuman: Can I get McDonalds at the SeaTac airport?"
        }
    ],
    temperature=1,
    max_tokens=250,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
print(response.choices[0].message.content)

import fireworks.client

fireworks.client.api_key = "<FIREWORKS_API_KEY>"

completion = fireworks.client.ChatCompletion.create(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    messages=[
        {
            "role": "user",
            "content": "In this conversation between a human and the AI, the AI is helpful and friendly, and when it does not know the answer it says \"I don't know\".\n\nAI: Hi, how can I help you?\nHuman: Can I get McDonalds at the SeaTac airport?"
        }
    ],
    stop=["<|im_start|>", "<|im_end|>", "<|endoftext|>"],
    stream=True,
    n=1,
    top_p=1,
    top_k=40,
    presence_penalty=0,
    frequency_penalty=0,
    prompt_truncate_len=1024,
    context_length_exceeded_behavior="truncate",
    temperature=0.9,
    max_tokens=4000
)
# stream=True returns the completion incrementally; print each chunk as it arrives
for chunk in completion:
    print(chunk.choices[0].delta.content or "", end="")

Science Question Answering with LLMs

Background

The following prompt tests an LLM's question-answering capabilities on science topics.

Prompt

Answer the question based on the context below. Keep the answer short and concise. Respond "Unsure about answer" if not sure about the answer.

Context: Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.

Question: What was OKT3 originally sourced from?
Answer:
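The context-plus-question structure above is a reusable grounded-QA template. A minimal sketch of filling it in (the `science_qa_prompt` helper is illustrative, not part of the original guide):

```python
def science_qa_prompt(context, question):
    """Combine a supporting context and a question into the grounded QA prompt."""
    return (
        "Answer the question based on the context below. Keep the answer "
        'short and concise. Respond "Unsure about answer" if not sure '
        "about the answer.\n\n"
        f"Context: {context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(science_qa_prompt(
    "Teplizumab traces its roots to a New Jersey drug company called "
    "Ortho Pharmaceutical.",
    "What was OKT3 originally sourced from?",
))
```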

Code

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "user",
            "content": "Answer the question based on the context below. Keep the answer short and concise. Respond \"Unsure about answer\" if not sure about the answer.\n\nContext: Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.\n\nQuestion: What was OKT3 originally sourced from?\nAnswer:"
        }
    ],
    temperature=1,
    max_tokens=250,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
print(response.choices[0].message.content)

import fireworks.client

fireworks.client.api_key = "<FIREWORKS_API_KEY>"

completion = fireworks.client.ChatCompletion.create(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    messages=[
        {
            "role": "user",
            "content": "Answer the question based on the context below. Keep the answer short and concise. Respond \"Unsure about answer\" if not sure about the answer.\n\nContext: Teplizumab traces its roots to a New Jersey drug company called Ortho Pharmaceutical. There, scientists generated an early version of the antibody, dubbed OKT3. Originally sourced from mice, the molecule was able to bind to the surface of T cells and limit their cell-killing potential. In 1986, it was approved to help prevent organ rejection after kidney transplants, making it the first therapeutic antibody allowed for human use.\n\nQuestion: What was OKT3 originally sourced from?\nAnswer:"
        }
    ],
    stop=["<|im_start|>", "<|im_end|>", "<|endoftext|>"],
    stream=True,
    n=1,
    top_p=1,
    top_k=40,
    presence_penalty=0,
    frequency_penalty=0,
    prompt_truncate_len=1024,
    context_length_exceeded_behavior="truncate",
    temperature=0.9,
    max_tokens=4000
)
# stream=True returns the completion incrementally; print each chunk as it arrives
for chunk in completion:
    print(chunk.choices[0].delta.content or "", end="")

References

Question Answering with LLMs

