Basics of Prompt Engineering
Using Generative AI to create content
What is Prompt Engineering?
● Prompt engineering is a relatively new discipline for developing and
optimizing prompts to efficiently use language models (LMs) for a wide
variety of applications and research topics.
● Prompt engineering skills help to better understand the capabilities and
limitations of large language models (LLMs).
● Researchers use prompt engineering to improve the capacity of LLMs on a
wide range of common and complex tasks such as question answering and
arithmetic reasoning.
● Developers use prompt engineering to design robust and effective prompting
techniques that interface with LLMs and other tools.
Text Summarization
● One of the standard tasks in natural language generation is text
summarization.
● Text summarization can include many different flavors and domains.
● One of the most promising applications of language models is the ability
to summarize articles and concepts into quick and easy-to-read
summaries.
● Let's try a basic summarization task using prompts.
Example 1:
Prompt:
Explain antibiotics
A:
Output:
Antibiotics are a type of medication used to treat bacterial infections. They
work by either killing the bacteria or preventing them from reproducing,
allowing the body’s immune system to fight off the infection. Antibiotics are
usually taken orally in the form of pills, capsules, or liquid solutions, or
sometimes administered intravenously. They are not effective against viral
infections, and using them inappropriately can lead to antibiotic resistance.
Explanation
● The "A:" is an explicit prompt format that's used in question
answering.
● Used to tell the model that there is an expected answer
● Let's just assume that this is too much information and want to
summarize it further. We can instruct the model to summarize into
one sentence like so:
Example 2:
Prompt:
Antibiotics are a type of medication used to treat bacterial
infections. They work by either killing the bacteria or preventing
them from reproducing, allowing the body’s immune system to fight off
the infection. Antibiotics are usually taken orally in the form of
pills, capsules, or liquid solutions, or sometimes administered
intravenously. They are not effective against viral infections, and
using them inappropriately can lead to antibiotic resistance.
Explain the above in one sentence:
Output:
Antibiotics are medications used to treat bacterial
infections by either killing the bacteria or stopping
them from reproducing, but they are not effective against
viruses and overuse can lead to antibiotic resistance.
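If you want to run this one-sentence summarization prompt programmatically, a minimal sketch follows. It assumes the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY environment variable; the model name is only a placeholder, and any chat-capable model should work.
# Minimal sketch: sending the one-sentence summarization prompt to a chat model.
# Assumes the OpenAI Python SDK (openai >= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

passage = (
    "Antibiotics are a type of medication used to treat bacterial infections. "
    "They work by either killing the bacteria or preventing them from reproducing, "
    "allowing the body's immune system to fight off the infection. Antibiotics are "
    "usually taken orally in the form of pills, capsules, or liquid solutions, or "
    "sometimes administered intravenously. They are not effective against viral "
    "infections, and using them inappropriately can lead to antibiotic resistance."
)

prompt = passage + "\n\nExplain the above in one sentence:"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # keep the summary deterministic
)
print(response.choices[0].message.content)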
Information Extraction
● While language models are trained to perform natural language
generation and related tasks, they are also very capable of performing
classification and a range of other natural language processing (NLP)
tasks.
● Here is an example of a prompt that extracts information from a
given paragraph.
Example:
Prompt:
Author-contribution statements and acknowledgements in research
papers should state clearly and specifically whether, and to
what extent, the authors used AI technologies such as ChatGPT
in the preparation of their manuscript and analysis. They
should also indicate which LLMs were used. This will alert
editors and reviewers to scrutinize manuscripts more carefully
for potential biases, inaccuracies and improper source
crediting. Likewise, scientific journals should be transparent
about their use of LLMs, for example when selecting submitted
manuscripts.
Mention the large language model based product mentioned in the
paragraph above:
Output:
The large language model based product mentioned
in the paragraph above is ChatGPT.
There are many ways we can improve the results above, but this is
already very useful.
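The same extraction prompt can be wrapped in a small helper so it can be reused on other paragraphs. The sketch below relies on the client set up in the earlier summarization sketch and the same placeholder model; extract_product is a hypothetical name, not part of any library.
# Minimal sketch: reusing the information-extraction prompt as a template.
# extract_product is a hypothetical helper; it relies on the `client` created
# in the earlier summarization sketch and the same placeholder model name.
def extract_product(paragraph: str) -> str:
    prompt = (
        paragraph
        + "\n\nMention the large language model based product mentioned in the paragraph above:"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content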
Question Answering
● One of the best ways to get the model to respond with specific answers is to
improve the format of the prompt.
● A prompt could combine instructions, context, input, and output indicators to
get improved results.
● While these components are not required, it is good practice: the more
specific you are with the instruction, the better the results you will get.
● Below is an example of how this would look with a more structured
prompt.
Example:
Prompt:
Answer the question based on the context below. Keep the answer short.
Respond "Unsure about answer" if not sure about the answer.
Context: Teplizumab traces its roots to a New Jersey drug company called
Ortho Pharmaceutical. There, scientists generated an early version of the
antibody, dubbed OKT3. Originally sourced from mice, the molecule was able
to bind to the surface of T cells and limit their cell-killing potential.
In 1986, it was approved to help prevent organ rejection after kidney
transplants, making it the first therapeutic antibody allowed for human
use.
Question: What was OKT3 originally sourced from?
Answer:
Output:
Mice.
Context obtained from Nature.
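To make this structure explicit in code, the instruction, context, question, and answer indicator can be assembled from a template. The sketch below reuses the client and placeholder model from the earlier sketches; answer_from_context is a hypothetical helper.
# Minimal sketch: assembling instruction, context, question, and answer
# indicator into one structured prompt. answer_from_context is a hypothetical
# helper; it reuses the `client` and placeholder model from the earlier sketches.
QA_TEMPLATE = (
    "Answer the question based on the context below. Keep the answer short. "
    'Respond "Unsure about answer" if not sure about the answer.\n\n'
    "Context: {context}\n\n"
    "Question: {question}\n\n"
    "Answer:"
)

def answer_from_context(context: str, question: str) -> str:
    prompt = QA_TEMPLATE.format(context=context, question=question)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content
Passing the Teplizumab paragraph as the context and "What was OKT3 originally sourced from?" as the question should reproduce the short answer above.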
Code Generation
One application where LLMs are quite effective is code generation. GitHub Copilot is a great example of
this. There are a vast number of code-generation tasks you can perform with clever prompts.
Let's look at a few examples below.
First, let's try a simple program that greets the user.
Example:
Prompt:
/*
Ask the user for their name and say "Hello"
*/
Output:
let name = prompt("What is your name?");
console.log(`Hello, ${name}!`);
You can see that we didn't even need to specify the language to use.
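The same comment-style prompt can also be sent through an API, and if you do want a particular language you can say so explicitly. The sketch below reuses the client and placeholder model from the earlier sketches; the system message shown is just one way to pin the output language.
# Minimal sketch: sending the comment-style code-generation prompt to a chat
# model. The optional system message is one way to request a specific output
# language; the client and model name are the same assumptions as before.
code_prompt = '/*\nAsk the user for their name and say "Hello"\n*/'

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Respond with Python code only."},
        {"role": "user", "content": code_prompt},
    ],
    temperature=0,
)
print(response.choices[0].message.content)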