Presentation Session 2 - Context Grounding
Transitioning to agentic automation
6 Part webinar series by the UiPath Community
March – April 2025
Meet the team!
Satish Prasad
Solution Consultant @ IRIS
UiPath MVP
Rohit Radhakrishnan
Community Lead - Asia Pacific & Japan
@ UiPath
Mannoj Batra
Vice President @ NatWest Group
UiPath MVP
Mukesh Kala
RPA Manager @ Boundaryless Group
UiPath MVP
Vibhor Shrivastava
Director Community Advocacy
@ UiPath
6 Part Series
Register for each session here
How Context Affects Understanding
Text Messages Without Context
Imagine receiving a message that just says, "It's ready!"
• If you know the context (e.g., you ordered pizza), you understand it's about your food being
ready.
• Without context, you’re left guessing—What’s ready? Dinner? A project?
Let's Start Simple
The Importance of Tone and Setting
If someone says, "You're unbelievable!" in a joking tone while smiling, they
might mean you're funny.
If they say it angrily, it might mean they’re upset with you.
Context like tone and facial expression clarifies the meaning.
Maps and Locations
If a friend says, "Turn left at the next light," you need to know where you are to make
sense of it.
If you're in a new city, this instruction without context becomes confusing.
How Context Enables Effective Human
Communication
Humans rely heavily on context to communicate effectively. Here's how:
Clarity in Ambiguity:
Words or phrases often have multiple meanings.
Example: "Bank" could mean a riverbank or a financial institution. The sentence, "I saw a bank on my way to work," makes
sense only if you know the topic of conversation or setting.
Understanding Relationships:
If you’re at a family dinner and someone says, "Pass the salt," you know they’re talking to the closest person and
referring to the salt on the table, not a random salt shaker somewhere else.
Making Decisions:
If someone says, "Let’s go out," the context (time, weather, location) influences what you plan—whether it’s a walk,
a dinner, or a weekend trip.
How LLMs Use Context to Generate Responses
Large Language Models (LLMs) like ChatGPT are designed to process and generate human-like text.
To do this effectively, they rely on context, which means understanding the information provided to them to make their
responses relevant and accurate.
Imagine you’re having a conversation with a friend.
You naturally remember what was said earlier to make your next statement logical.
LLMs work in a similar way—they "look back" at the input text and use it to generate meaningful responses.
How LLMs "Understand" Context
Breaking Text into Tokens:
• LLMs divide the input text into small chunks called tokens. These tokens can be words, parts of words, or even characters.
• For example, the sentence "I love apples" might be split into tokens like ["I", "love", "apples"].
Context Window:
• LLMs process the tokens in a context window. This window represents how much recent information the model can "remember" while
generating a response.
• If the input is too long, only the most recent tokens might be considered.
Predicting the Next Word:
• Based on the tokens in the context window, the LLM predicts the next word or phrase that fits best.
• Example: If the input is "I love apples and bananas, but my favorite fruit is...", the model predicts something like "mangoes" because it
fits the context.
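The three steps above can be sketched in Python. This is a toy word-level tokenizer and a fixed-size context window, purely for illustration; real LLMs use subword tokenizers (e.g. BPE) and far larger windows.

```python
# Toy illustration of tokenization and a context window.
# Real LLMs split text into subword tokens, not whole words.

def tokenize(text):
    """Split text into word-level tokens (a simplification)."""
    return text.split()

def apply_context_window(tokens, window_size):
    """Keep only the most recent `window_size` tokens, as an LLM would
    when the input exceeds its context window."""
    return tokens[-window_size:]

tokens = tokenize("I love apples and bananas but my favorite fruit is")
print(tokens[:3])                       # ['I', 'love', 'apples']
print(apply_context_window(tokens, 4))  # ['my', 'favorite', 'fruit', 'is']
```

The second call shows the key limitation from the slide: once the input outgrows the window, only the most recent tokens survive, so earlier information is no longer available to the model.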
Examples of How LLMs Use Context
Conversation Flow:
• If you say, "Tell me about dogs," and then ask, "What about their diet?" the LLM uses the previous sentence to know
you’re still talking about dogs.
Complex Queries:
• Input: "Summarize this article and explain how it applies to marketing."
• The LLM first processes the entire article (context) and then tailors the explanation to marketing.
Revisiting Earlier Inputs:
• Input 1: "Write a poem about rain."
• Input 2: "Now make it funny."
• The LLM remembers the first input and modifies its approach to make the poem humorous.
Scenario 1: Responding to a Helpdesk Query
Without Context:
Prompt: "How do I reset my password?"
Expected Output: The model generates a generic response: "You can reset your password by going to the login page and clicking 'Forgot Password.' Follow the instructions provided."
With Context:
Prompt:
"A corporate user is unable to log into their Active Directory account due to a forgotten
password. Provide a step-by-step guide for resetting it using the organization's self-service
password reset (SSPR) portal, ensuring security compliance and minimal downtime."
Expected Output:
Steps to Reset Active Directory Password via SSPR:
• Access the SSPR Portal: Open a browser and navigate to the organization's Self-Service
Password Reset Portal at [URL].
• Enter Credentials: Input your Active Directory username or corporate email address.
• Verify Identity: Complete the authentication process using Multi-Factor Authentication
(MFA) (e.g., OTP via SMS/email, security questions, or biometric verification).
• Set a New Password: Choose a compliant password following corporate security policies
(e.g., minimum 12 characters, uppercase, lowercase, numbers, special characters, and no
previous passwords).
• Confirm and Apply Changes: Click "Submit" to reset the password. If successful, you will
receive a confirmation message.
• Login & Sync: Log in using the new password on your workstation. If you use multiple
devices, ensure password synchronization by logging into VPN or a domain-joined system.
Troubleshooting:
If the reset fails, check network connectivity, MFA device availability, or contact IT Support at
[Helpdesk Contact].
Scenario 2: Debugging a Code Snippet
Objective: Show how ChatGPT responds when context is missing versus when it's provided.
Without Context:
Input: "Fix this code for me."
def calculate_total(price, tax):
    return price + tax_rate  # bug: tax_rate is not defined, and the tax parameter is unused
print(calculate_total(100))  # bug: called with only one argument
With Context:
Input: "I'm trying to calculate the total price, including a 10% tax. Can you fix this code
for me?"
def calculate_total(price, tax):
    return price + tax_rate
print(calculate_total(100))
Likely Output: The model now understands the goal and provides a targeted fix:
def calculate_total(price, tax_rate=0.1):
    return price + (price * tax_rate)
print(calculate_total(100))  # 110.0
Context helps the model understand the intention, not just the syntax.
Limitations of Context in LLMs
While LLMs are good at handling context, they have some limitations:
Limited Memory (Context Window):
• LLMs can only "remember" a certain amount of text at a time.
• If the input is too long, they may lose track of earlier information.
No True Understanding:
• LLMs don’t actually "know" things.
• They predict text based on patterns, so their responses depend heavily on the quality of the input.
Active & Long-Term Memory
Active Memory - an aggregated log of all the processes and tasks an agent executes during its runtime. It is retained as chunks of action-observation pairs in Context Grounding indexes, managed automatically by Agent Builder.
Long-Term Memory - a combination of available knowledge bases and reflections on active-memory observations. The knowledge bases are managed by users setting up Context Grounding; the reflections are managed implicitly by Agent Memory itself.
Active vs Long-Term Memory – Key Differences
Long-Term Memory:
• Lives across sessions
• Persistent context (days/weeks/months)
• Remembers: "What did we discuss last time?"
• Stored knowledge, accessible later
• Persists; can be updated/deleted
• Example: "A sales bot remembers a customer's preferences over multiple calls"
Active Memory:
• Lives in the current session
• Short-term context (a few minutes/hours)
• Remembers: "What are we discussing now?"
• Fast, temporary recall
• Clears after the session ends
• Example: "An HR bot remembers your last question in this chat"
Vector Database
• Traditional databases store structured data (rows, columns).
• Vector Databases store unstructured data (text, images, audio) in a way AI can understand the meaning.
• They store information as vectors (in other words, collections of numbers).
• These vectors capture the meaning (semantics) and allow semantic search and similarity comparisons.
• Enable fast, relevant retrieval of grounding information.
How does it work?
1. Text Data → Embeddings: Convert documents, emails, FAQs, etc., into vectors using embedding models (OpenAI, Hugging Face, etc.).
2. Store Vectors in a Vector DB: Vectors are stored in specialized databases (Pinecone, Chroma, FAISS, etc.).
3. Query by Similarity Search: When a user asks a question, it is converted to a vector and compared to stored vectors by similarity, not keywords.
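The pipeline above can be sketched with a toy in-memory "vector DB". The three-dimensional vectors and document names here are invented for illustration; a real system would obtain high-dimensional embeddings from a model and store them in a dedicated database such as the ones named above.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "vector DB": document -> embedding (hypothetical 3-d vectors;
# a real embedding model would produce these).
vector_db = {
    "reset your password via the portal": [0.9, 0.1, 0.0],
    "quarterly sales report":             [0.1, 0.8, 0.3],
    "office wifi setup guide":            [0.2, 0.2, 0.9],
}

def search(query_vector, db):
    """Return documents ranked by similarity to the query vector."""
    return sorted(db, key=lambda doc: cosine_similarity(query_vector, db[doc]),
                  reverse=True)

# A query whose (hypothetical) embedding is close to the password document:
print(search([0.85, 0.15, 0.05], vector_db)[0])
# reset your password via the portal
```

Note the ranking is driven by vector similarity, not shared keywords — which is exactly why semantic search still works when users phrase things differently.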
Types of similarity measures in Vector Databases
Euclidean Distance (L2 Norm)
• Measures the straight-line distance between two vectors.
• Think of it as finding the shortest path between two points in space.
• Best for: Image recognition, recommendation systems.
Cosine Similarity
• Measures the angle between two vectors instead of distance.
• Even if two vectors have different magnitudes, they can be considered similar if they point in the same
direction.
• Best for: Text and document search.
Manhattan Distance (L1 Norm)
• Measures block-wise (grid-based) distance between vectors.
• Think of walking through city blocks instead of taking a straight-line path.
• Best for: Path planning, logistics, and recommendation engines.
Hamming Distance
• Counts the number of differences between two binary vectors.
• Best for: Error detection, DNA sequence comparison.
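The four measures above can be written in a few lines of plain Python. This is for illustration only; production vector databases compute these over high-dimensional vectors using optimized index structures.

```python
import math

def euclidean(a, b):
    """L2 norm: straight-line distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_similarity(a, b):
    """Angle-based similarity: 1.0 means same direction,
    regardless of the vectors' magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def manhattan(a, b):
    """L1 norm: block-wise (grid-based) distance."""
    return sum(abs(x - y) for x, y in zip(a, b))

def hamming(a, b):
    """Number of positions where two binary vectors differ."""
    return sum(x != y for x, y in zip(a, b))

print(euclidean([0, 0], [3, 4]))           # 5.0
print(cosine_similarity([1, 2], [2, 4]))   # 1.0 (different magnitude, same direction)
print(manhattan([0, 0], [3, 4]))           # 7
print(hamming([1, 0, 1, 1], [1, 1, 1, 0])) # 2
```

The cosine example makes the slide's point concrete: [1, 2] and [2, 4] are far apart by Euclidean distance, yet maximally similar by cosine, because they point the same way.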
Why Do We Need a Vector Database?
• Unstructured data dominates enterprise knowledge (80-90% of data).
• Keyword search fails when people phrase things differently.
• Vector Databases enable:
• Semantic search: Understands meaning.
• Scalability: Handles millions of documents.
• Speed: Fast similarity search at scale.
Context Grounding in UiPath: Customizing LLMs
Demo:
• Create a Storage Bucket in UiPath
• Upload and manage documents/data
• Create an Index for quick search (acts as the vector DB – a UiPath-managed vector database)
• Use pre-built UiPath activities in Studio Web
Thank You
Register for each session here