
Advanced Gen-AI Development

Presented By: Mona Monir (mtawakol@iti.gov.eg)
Prepared By: Hany Saad (hghiett@iti.gov.eg)
Gen-AI based startups and real products
• Use cases for real products using Gen AI:
• Loop-x (AI Agents for different roles): https://loop-x.co
• Gemelo ai (Twin AI creator): https://gemelo.ai
• Apriora AI (Recruitment and Interviews): https://www.apriora.ai
• Velents AI (Recruitment and Interviews): https://www.velents.com
• Nancy AI (Recruitment and Interviews): https://nancy-ai.com
• Mariana AI (Medical): https://marianaai.com

OpenAI Models - OpenAI API
Fine Tuning and RAG
Foundation Models and Fine-Tuning
➢ What are Foundation Models?
Foundation models are large-scale AI models pre-trained on vast, diverse datasets. These models are designed to be highly
generalizable, providing the base for a wide range of downstream tasks. Examples of foundation models include GPT-4, BERT, and
CLIP. These models are versatile, capable of handling various data types such as text, images, and audio.

➢ Fine-Tuning
Fine-tuning refers to the process of customizing a pre-trained foundation model for a specific task by training it on additional,
domain-specific data. This approach allows organizations to adapt foundation models to their needs without the need for extensive
retraining from scratch. For example, a general language model like GPT can be fine-tuned to better handle finance-related text,
resulting in models like FinBERT.

➢ Benefits of Fine-Tuning
• Customization: Fine-tuning enhances the model’s performance for specific applications, making it more relevant for niche use
cases.
• Efficiency: It requires less computational power and data compared to training a model from scratch.
• Scalability: Fine-tuned models can be deployed across various domains, from healthcare to finance and beyond.

Fine-Tuning – Getting started

https://platform.openai.com/docs/api-reference/fine-tuning/create
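
A minimal sketch of what that API call looks like from the OpenAI Python SDK (the file name, model name, and the assumption that the
training data is already chat-formatted JSONL are illustrative, not from the slides):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the training data: a JSONL file with one {"messages": [...]} example per line.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Create the fine-tuning job on a base model (model name is an example).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)

# 3. Check the job status; once it succeeds, the job record includes the fine-tuned model name.
print(client.fine_tuning.jobs.retrieve(job.id).status)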
Retrieval-Augmented Generation (RAG)
➢ What is RAG?
Retrieval-Augmented Generation (RAG) enhances a language model's
capabilities by allowing it to access external data sources in real-time. It
works by retrieving relevant information from databases or other sources
and combining it with the user's query to generate a more contextually
accurate and up-to-date response. This is particularly useful when dealing
with constantly evolving fields like healthcare or finance, where accessing
the latest information is crucial.

Retrieval-Augmented Generation vs. Fine-Tuning
RAG differs from fine-tuning primarily in how each method handles information:
• RAG dynamically retrieves external data from databases or APIs during inference, enabling the model to access and incorporate
real-time information and ensuring the generated responses are up to date. It doesn't alter the model's internal knowledge but
enhances its answers with current, relevant data.
• In contrast, fine-tuning involves retraining the model with a domain-specific dataset to embed specialized knowledge directly
into the model itself. This permanently embeds domain-specific knowledge within the model, making it more adept at particular
tasks, but the model cannot adapt to new or updated information unless it is retrained.

RAG data pipeline flow
High-level flow for a data pipeline that supplies
grounding data for a RAG application:
1. Documents are either pushed or pulled into a data
pipeline.
2. The data pipeline processes each document
individually by completing the following steps:
a. Chunk document: Breaks down the document
into semantically relevant parts that ideally
have a single idea or concept.
b. Enrich chunks: Adds metadata fields that the
pipeline creates based on the content in the
chunks. The data pipeline categorizes the
metadata into discrete fields, such as title,
summary, and keywords.
c. Embed chunks: Uses an embedding model to
vectorize the chunk and any other metadata
fields that are used for vector searches.
d. Persist chunks: Stores the chunks in the search
index.
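
A simplified sketch of steps a–d, using the OpenAI embeddings endpoint and a plain Python list as a stand-in for the search index
(the chunking rule, metadata fields, and model name below are illustrative assumptions, not a production pipeline):

from openai import OpenAI

client = OpenAI()
search_index = []  # stand-in for a real search index or vector database

def chunk_document(text, max_chars=500):
    # a. Chunk document: naive fixed-size split; real pipelines split on semantic boundaries.
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def process_document(doc_id, title, text):
    for i, chunk in enumerate(chunk_document(text)):
        # b. Enrich chunks: attach metadata fields (summary/keywords could be generated here).
        record = {"doc_id": doc_id, "chunk_id": i, "title": title, "text": chunk}
        # c. Embed chunks: vectorize the chunk text for vector search.
        record["embedding"] = client.embeddings.create(
            model="text-embedding-3-small", input=chunk
        ).data[0].embedding
        # d. Persist chunks: store the enriched, embedded chunk in the index.
        search_index.append(record)

process_document("manual-001", "Product manual", open("manual.txt").read())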
RAG-Embeddings
Embeddings are a way to represent words, sentences, or even entire
documents as dense vectors in a high-dimensional space.

An embedding is a mathematical representation of an object, such as text.


When a neural network is being trained, many representations of an object are
created. Each representation has connections to other objects in the network.
The purpose of embeddings is to capture the semantic meaning of text, such that
words or phrases with similar meanings are located closer to each other in this
vector space.
For example:
• The words “king” and “queen” might be close to each other in an
embedding space because they share similar semantics (both are royalty).

Embedding similarity: the distance between any two items can be calculated
mathematically and can be interpreted as a measure of relative similarity
between those two items.

The AI model is trained in such a way that these vectors capture the essential
features and characteristics of the underlying data.
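
A small sketch of embedding similarity in practice, assuming the OpenAI embeddings endpoint (any embedding model behaves the same
way): embed a few words and compare them with cosine similarity.

import math
from openai import OpenAI

client = OpenAI()

def embed(text):
    # Model name is an example; any text-embedding model returns a vector like this.
    return client.embeddings.create(model="text-embedding-3-small", input=text).data[0].embedding

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

king, queen, banana = embed("king"), embed("queen"), embed("banana")
print(cosine_similarity(king, queen))   # expected to be relatively high (related meanings)
print(cosine_similarity(king, banana))  # expected to be lower (unrelated meanings)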

RAG-Embeddings
(Figure: example embedding space with separate "Good News" and "Bad News" clusters)
RAG – Creating Embeddings

API Reference - OpenAI API
RAG – Vector databases
A vector database stores unstructured data (text, images, audio, video, etc.) in the form of vector embeddings.
Each data point, whether a word, a document, an image, or any other entity, is transformed into a numerical vector using ML
techniques.
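
As one concrete illustration, a minimal sketch with Chroma, an open-source vector database (the collection name and documents are
made up; Chroma embeds the documents with its default embedding function unless you pass vectors yourself):

import chromadb

chroma = chromadb.Client()  # in-memory instance; persistent clients are also available
collection = chroma.create_collection(name="product_docs")

# Store a few documents; they are embedded and indexed internally.
collection.add(
    ids=["doc1", "doc2", "doc3"],
    documents=[
        "The warranty period is 24 months from the date of purchase.",
        "Installation requires mounting the bracket before attaching the unit.",
        "Support is available by email and phone on business days.",
    ],
)

# Semantic search: returns the stored documents closest to the query in vector space.
results = collection.query(query_texts=["How long is the warranty?"], n_results=1)
print(results["documents"][0][0])  # expected to return the warranty document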

RAG – Putting all together
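
Putting the pieces together in code, a hedged end-to-end sketch: retrieve the chunks most similar to the user's question from the
vector store (here, the Chroma collection from the previous example), insert them into the prompt, and let the model answer from
that context. The prompt wording and model name are assumptions.

from openai import OpenAI

openai_client = OpenAI()

def rag_answer(question, collection, n_results=3):
    # 1. Retrieve: find the stored chunks most similar to the question.
    hits = collection.query(query_texts=[question], n_results=n_results)
    context = "\n".join(hits["documents"][0])
    # 2. Augment: combine the retrieved context with the user's question.
    messages = [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
    # 3. Generate: the model grounds its answer in the retrieved chunks.
    response = openai_client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

print(rag_answer("How long is the warranty?", collection))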

RAG re-represented

RAG developer's stack
RAG Architectures

Don’t RAG But CAG

Cache-Augmented Generation (CAG)

AI Agents and Agentic Framework
Agentic Framework and Multi-Agents
An AI agent is a system or program that can autonomously perform tasks on behalf of a user or another system by designing its own
workflow and utilizing available tools.

Non-agentic AI chatbots operate without access to tools, memory, or reasoning.

Agentic AI chatbots learn to adapt to user expectations over time, providing a more personalized experience and more comprehensive
responses.

An agentic framework allows AI systems to operate through multiple agents, each with specific tasks or roles. This framework
supports dynamic, iterative processes that mirror human problem-solving by allowing agents to collaborate and provide feedback to
one another. Agentic frameworks can be integrated with Large Language Models (LLMs) and are well suited to automating complex
workflows, enhancing decision-making, and improving the quality of AI outputs.

Multi-agent systems consist of multiple AI agents, each specialized for a different function. These agents can communicate and
collaborate, handling subtasks autonomously. For example, one agent could gather information while another processes it and yet
another evaluates the output. This reduces the need for manual intervention and can lead to faster, more accurate solutions in
tasks like software development, business operations, and content creation.
AI Workflows vs AI Agents

Key features of an AI agent
• Reasoning: This core cognitive process involves using logic and available information to draw conclusions, make inferences, and
solve problems. AI agents with strong reasoning capabilities can analyze data, identify patterns, and make informed decisions based
on evidence and context.

• Acting: The ability to take action or perform tasks based on decisions, plans, or external input is crucial for AI agents to interact
with their environment and achieve goals. This can include physical actions in the case of embodied AI, or digital actions like
sending messages, updating data, or triggering other processes.

• Observing: Gathering information about the environment or situation through perception or sensing is essential for AI agents to
understand their context and make informed decisions. This can involve various forms of perception, such as computer vision,
natural language processing, or sensor data analysis.

• Planning: Developing a strategic plan to achieve goals is a key aspect of intelligent behavior. AI agents with planning capabilities
can identify the necessary steps, evaluate potential actions, and choose the best course of action based on available information
and desired outcomes. This often involves anticipating future states and considering potential obstacles.

• Collaborating: Working effectively with others, whether humans or other AI agents, to achieve a common goal is increasingly
important in complex and dynamic environments. Collaboration requires communication, coordination, and the ability to
understand and respect the perspectives of others.

• Self-refining: The capacity for self-improvement and adaptation is a hallmark of advanced AI systems. AI agents with self-refining
capabilities can learn from experience, adjust their behavior based on feedback, and continuously enhance their performance and
capabilities over time. This can involve machine learning techniques, optimization algorithms, or other forms of self-modification.

https://cloud.google.com/discover/what-are-ai-agents
Agentic framework – Reasoning paradigms

https://www.ibm.com/think/topics/ai-agents#Reasoning+paradigms
Types of AI Agents

https://www.ibm.com/think/topics/ai-agents#Types+of+AI+agents
https://www.digitalocean.com/resources/articles/types-of-ai-agents
Agentic RAG
Agentic RAG describes an AI agent-based implementation of RAG. Specifically, it incorporates AI agents into the RAG pipeline to
orchestrate its components and perform additional actions beyond simple information retrieval and generation to overcome the
limitations of the non-agentic pipeline.

https://weaviate.io/blog/what-is-agentic-rag
Core components of an AI agent
The core components of an AI agent are:
• LLM (with a role and a task)
• Memory (short-term and long-term)
• Planning (e.g., reflection, self-critics, query routing, etc.)
• Tools (e.g., calculator, web search, etc.)

https://weaviate.io/blog/what-is-agentic-rag
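
A rough sketch of how these components fit together: an LLM with a role, the running message history as short-term memory, and one
tool, using OpenAI tool calling. The web_search tool, its schema, and the loop structure below are illustrative assumptions, not a
specific framework.

import json
from openai import OpenAI

client = OpenAI()

def web_search(query):
    # Placeholder tool; a real agent would call an actual search API here.
    return f"(pretend search results for: {query})"

tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for up-to-date information.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

# LLM with a role and a task + short-term memory (the running message history).
messages = [
    {"role": "system", "content": "You are a research assistant. Use tools when you need fresh facts."},
    {"role": "user", "content": "Summarize this week's AI news."},
]

while True:
    reply = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=tools
    ).choices[0].message
    if not reply.tool_calls:       # the model decided no tool is needed: final answer
        print(reply.content)
        break
    messages.append(reply)         # remember the assistant's tool request
    for call in reply.tool_calls:  # acting step: execute each requested tool
        result = web_search(**json.loads(call.function.arguments))
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})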
Agentic frameworks

https://www.ibm.com/think/insights/top-ai-agent-frameworks
AI orchestration frameworks

AI orchestration
AI orchestration is the process of coordinating different AI tools and systems so they work together effectively.

AI orchestration, done well, increases efficiency and effectiveness because it streamlines processes and ensures the AI tools
you're using communicate, share data, and function as one system.

LangChain
LangChain is an open-source framework for developing applications powered by language models. It provides a modular and
composable API for building chains of operations that can be used to create a wide variety of applications, such as chatbots,
question-answering systems, and document summarization tools.

LangChain integrates with a wide range of model providers, including OpenAI and Hugging Face, giving access to many pre-trained
language models. This allows developers to focus on building their applications without having to worry about the underlying
infrastructure.

LangChain is still under development, but it has already been used to create a number of impressive applications, including:
• A chatbot that can answer questions about the world
• A system that can summarize long documents
• A tool that can generate creative text formats, like poems, code, scripts, musical pieces, email, letters, etc.

LangChain is a powerful tool that can be used to create a wide variety of applications. It is easy to learn and use, and it is backed by
a large and active community.

Here are some of the key features of LangChain:


• Modular and composable API: LangChain provides a modular and composable API that makes it easy to build chains of
operations. This allows developers to create complex applications without having to write a lot of code.
• Access to pre-trained language models: LangChain integrates with model providers such as OpenAI and Hugging Face, which
provides access to a wide range of pre-trained language models. This allows developers to get started quickly and easily.
• Active community: LangChain has a large and active community of developers who are constantly contributing new features
and improvements. This ensures that LangChain is always up-to-date and that developers have access to the latest resources.
If you are interested in building applications powered by language models, then LangChain is a great place to start. It is a powerful
tool that is easy to learn and use.
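
A minimal sketch of LangChain's composable style, assuming the langchain-openai integration package and a recent LCEL-style
LangChain release (package layout and model name may differ in your installed version):

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Chain of operations: prompt template -> chat model -> string output parser.
prompt = ChatPromptTemplate.from_template("Summarize the following text in one sentence:\n{text}")
llm = ChatOpenAI(model="gpt-4o-mini")  # model name is an example
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "LangChain is a framework for composing LLM calls into applications."}))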

Microsoft Semantic Kernel
Semantic Kernel is an open-source SDK that lets you easily
build agents that can call your existing code. As a highly
extensible SDK, you can use Semantic Kernel with models
from OpenAI, Azure OpenAI, Hugging Face, and more! By
combining your existing C#, Python, and Java code with
these models, you can build agents that answer questions
and automate processes.
It integrates Large Language Models (LLMs) like OpenAI,
Azure OpenAI, and Hugging Face with conventional
programming languages like C#, Python, and Java.
Semantic Kernel achieves this by allowing you to define
plugins that can be chained together in just a few lines of
code.
What makes Semantic Kernel special, however, is its ability
to automatically orchestrate plugins with AI. With Semantic
Kernel planners, you can ask an LLM to generate a plan that
achieves a user's unique goal. Afterwards, Semantic Kernel
will execute the plan for the user.

OpenAI APIs – Advanced tools
OpenAI APIs – Assistants API Overview
The Assistants API allows you to build AI assistants within your own applications. An Assistant has instructions and can leverage models, tools, and
files to respond to user queries. The Assistants API currently supports three types of tools: Code Interpreter, File Search, and Function calling.

How Assistants work


1. Assistants can call OpenAI’s models with specific instructions to tune their personality and capabilities.
2. Assistants can access multiple tools in parallel. These can be both OpenAI-hosted tools — like code_interpreter and file_search — or tools you
build / host (via function calling).
3. Assistants can access persistent Threads. Threads simplify AI application development by storing message history and truncating it when the
conversation gets too long for the model’s context length. You create a Thread once, and simply append Messages to it as your users reply.
4. Assistants can access files in several formats — either as part of their creation or as part of Threads between Assistants and users. When using
tools, Assistants can also create files (e.g., images, spreadsheets, etc) and cite files they reference in the Messages they create.
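
A condensed sketch of that flow with the OpenAI Python SDK; the Assistants endpoints live under client.beta, and helper names such
as create_and_poll may vary between SDK versions:

from openai import OpenAI

client = OpenAI()

# 1. Create an Assistant with instructions, a model, and a hosted tool.
assistant = client.beta.assistants.create(
    name="Data helper",
    instructions="You analyze data and answer concisely.",
    model="gpt-4o-mini",  # model name is an example
    tools=[{"type": "code_interpreter"}],
)

# 3. Create a persistent Thread and append the user's Message to it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(thread_id=thread.id, role="user",
                                    content="What is 15% of 2480?")

# Run the Assistant on the Thread and read back its latest reply.
run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=assistant.id)
if run.status == "completed":
    latest = client.beta.threads.messages.list(thread_id=thread.id).data[0]
    print(latest.content[0].text.value)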

Assistant API – Code interpreter
Code Interpreter
What it is: The Code Interpreter allows an OpenAI Assistant to write and run
Python code within a secure environment.

Use Case: Suppose you need to analyze a CSV file containing sales data. The
Code Interpreter can read the file, perform calculations (like summing up total
sales), and even create a graph to visualize the data. If the code doesn’t work
initially, it can try different approaches until it gets it right.

Example: You ask, "Analyze this sales data and show me the total revenue."
The Assistant can load your CSV file, write the necessary Python code, and
present you with the result.
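
A sketch of the sales-data scenario above, assuming the Assistants v2 file-attachment fields (an "assistants" file upload plus
tool_resources); exact parameter names may differ between API versions:

from openai import OpenAI

client = OpenAI()

# Upload the data file so the Assistant's Python sandbox can read it (file name is an example).
sales_file = client.files.create(file=open("sales.csv", "rb"), purpose="assistants")

analyst = client.beta.assistants.create(
    model="gpt-4o-mini",
    instructions="You are a data analyst. Write and run Python code to answer questions.",
    tools=[{"type": "code_interpreter"}],
    tool_resources={"code_interpreter": {"file_ids": [sales_file.id]}},
)

thread = client.beta.threads.create(messages=[
    {"role": "user", "content": "Analyze this sales data and show me the total revenue."}
])
run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=analyst.id)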

Assistant API – File search
File Search
What it is: File Search lets the Assistant access and retrieve information from
documents you upload, beyond its pre-existing knowledge.

Use Case: Imagine you upload a manual for a product your company makes.
The Assistant can search through that manual to answer specific questions,
like "What’s the warranty period?" by finding the relevant section in the
document.

Example: You upload a PDF of a product guide and ask, "What are the
installation steps?" The Assistant can locate and provide the steps directly
from your document.

Assistant API – Function calling
Function Calling
What it is: Function Calling allows you to define specific functions that the
Assistant can call during a conversation. The Assistant knows when to use
these functions and what arguments to pass to them.

Use Case: If you have a weather forecasting function, the Assistant can
automatically call this function when you ask about the weather, providing
you with accurate, up-to-date information.

Example: You might ask, "What’s the weather like today?" The Assistant will
recognize this as a request for weather data and use the appropriate function
to fetch and return the current weather for your location.
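
A single-turn sketch of the weather example, shown here with Chat Completions tool calling (the Assistants API variant works
similarly via run tool outputs); the get_weather function and its schema are made-up placeholders:

import json
from openai import OpenAI

client = OpenAI()

def get_weather(city):
    # Placeholder; a real application would call a weather API here.
    return {"city": city, "forecast": "sunny", "temperature_c": 24}

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get today's weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather like in Cairo today?"}]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools).choices[0].message

if reply.tool_calls:
    call = reply.tool_calls[0]
    # The model chose the function and its arguments; the application executes it.
    result = get_weather(**json.loads(call.function.arguments))
    messages += [reply, {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)}]
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(final.choices[0].message.content)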

Structured output
What it is: Structured Outputs ensure that the responses generated by the OpenAI Assistant are always formatted according to a
specified JSON Schema.

Use Case: Suppose you’re building an app that needs to process user input, such as booking information. You define a JSON
Schema that specifies exactly how the booking details should be structured (e.g., with fields like: name, date, time). The Assistant
will always provide the output in this structured format, reducing the risk of errors.

Example: If you ask the Assistant to "Create a booking for John on September 10th at 3 PM," it will return the details in the exact
JSON format you specified, like this:
{ "name": "John", "date": "2024-09-10", "time": "15:00"}
This ensures the data is always ready to be processed by your application without additional checks.
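
A sketch of the booking example using the JSON-Schema response_format option on Chat Completions (the schema mirrors the fields
above; exact parameter shapes may vary by API version):

import json
from openai import OpenAI

client = OpenAI()

booking_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "date": {"type": "string"},
        "time": {"type": "string"},
    },
    "required": ["name", "date", "time"],
    "additionalProperties": False,
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is an example
    messages=[{"role": "user", "content": "Create a booking for John on September 10th at 3 PM."}],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "booking", "schema": booking_schema, "strict": True},
    },
)

booking = json.loads(response.choices[0].message.content)
print(booking)  # e.g. {"name": "John", "date": "2024-09-10", "time": "15:00"}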

OpenAI Agents SDK

https://platform.openai.com/docs/guides/agents
https://openai.github.io/openai-agents-python/
OpenAI Cookbook

Model Context Protocol (MCP)

Introducing the Model Context Protocol \ Anthropic

What’s next?

Resources

Option 1: Udemy (Subscription):


• RAG, AI Agents and Generative AI with Python and OpenAI 2025
• Generative AI Architectures with LLM, Prompt, RAG, Vector DB
• 2025 Master Langchain and Ollama - Chatbot, RAG and Agents
• Mastering Ollama: Build Private Local LLM Apps with Python

Option 2: Free courses


• LangChain for LLM Application Development
• LangChain: Chat with Your Data
• Multi AI Agent Systems with crewAI
• Open-Source Models with Hugging Face
• Run LLM Models Locally using Ollama

More resources

Different learning paths

Resources – Udemy (Subscription)
• Gen AI and OpenAI APIs:
• Generative AI For Beginners with ChatGPT and OpenAI API: https://banquemisr25.udemy.com/course/generative-ai-with-chatgpt-and-openai-api
• Complete OpenAI API Course: https://banquemisr25.udemy.com/course/complete-openai-api-course-connect-to-chatgpt-api-more/
• Generative AI using OpenAI API for Beginners: https://banquemisr25.udemy.com/course/open-ai-api-for-beginners-using-python/
• APIs, RAG and AI Agents:
• RAG, AI Agents and Generative AI with Python and OpenAI 2025 (Beginner: Including Python intro): https://banquemisr25.udemy.com/course/generative-ai-rag/
• Generative AI Architectures with LLM, Prompt, RAG, Vector DB (Beginner: Hugging-face, Ollama, RAG, Vector DB): https://www.udemy.com/course/generative-ai-architectures-with-llm-prompt-rag-vector-db/?couponCode=LETSLEARNNOW
• Basic to Advanced: Retreival-Augmented Generation (RAG): https://banquemisr25.udemy.com/course/basic-to-advanced-retreival-augmented-generation-rag-course/
• Build Autonomous AI Agents From Scratch With Python (Beginner: simple Agent): https://banquemisr25.udemy.com/course/build-autonomous-ai-agents-from-scratch-with-python/
• 2025 Master Langchain and Ollama - Chatbot, RAG and Agents (Intermediate): https://banquemisr25.udemy.com/course/ollama-and-langchain
• Ollama & Local LLM:
• Mastering Ollama: Build Private Local LLM Apps with Python: https://banquemisr25.udemy.com/course/master-ollama-python
• Zero to Hero in Ollama: Create Local LLM Applications: https://banquemisr25.udemy.com/course/ollama-starttech
• Ollama Zero to Hero: Build Chat, Vision Games & AI Agents: https://banquemisr25.udemy.com/course/ollama-docker-api-library-full-course/
• AI Agents (Intermediate):
• AI Agentic Design Patterns with Ollama & OpenAI Guide: https://banquemisr25.udemy.com/course/ai-agentic-design-patterns/
• AI-Agents: Automation & Business with LangChain & LLM Apps (Intermediate): https://banquemisr25.udemy.com/course/ai-agents-automation-business-with-langchain-llm-apps/
• GitHub Copilot:
• AI For Developers With GitHub Copilot, Cursor AI & ChatGPT: https://banquemisr25.udemy.com/course/ai-for-developers-with-github-copilot-cursor-ai-chatgpt/
• GitHub Copilot Beginner to Pro - AI for Coding & Development: https://banquemisr25.udemy.com/course/github-copilot/
• Other advanced topics (Optional)
• Generative AI application design and development (Intermediate: Hugging-face, LangChain, RAG): https://www.udemy.com/course/generative-ai-app-dev/?couponCode=LETSLEARNNOW
• LLM Engineering: Master AI, Large Language Models & Agents (Intermediate: Generative AI, RAG, LoRA and AI Agents): https://www.udemy.com/course/llm-engineering-master-ai-and-large-language-models/
• LangChain in Action: Develop LLM-Powered Applications: https://banquemisr25.udemy.com/course/langchain-in-action-develop-llm-powered-applications/
• AI Agents: Building Teams of LLM Agents that Work For You (AutoGen, ChatGPT API, Streamlit, Google Cloud): https://banquemisr25.udemy.com/course/ai-agents-building-teams-of-llm-agents-that-work-for-you/
• 2025 Bootcamp: Generative AI, LLM Apps, AI Agents, Cursor AI: https://banquemisr25.udemy.com/course/bootcamp-generative-artificial-intelligence-and-llm-app-development
• Python intro (Optional)
• Python Basics Course: https://banquemisr25.udemy.com/course/python-entry-level-programmer-certification-pcep/

Resources – MaharaTech talks (Free)
• Staff development – Recorded sessions:
• https://drive.google.com/drive/folders/1A23GwazLX9L76XK0WZlFvD2szreR-yJA

• Mahara-Tech talk (Prepared from Staff Dev sessions):


• Exploring Generative AI: Tools, Use Cases and Future Trends: https://maharatech.gov.eg/course/view.php?id=2294
• Gen-AI: Tools for the Modern Developer: https://maharatech.gov.eg/course/view.php?id=2299

Resources (DeepLearning.ai – Free)
• Python intro (Optional):
• AI Python for Beginners: https://www.deeplearning.ai/short-courses/ai-python-for-beginners/
• Prompt engineering for Devs, OpenAI APIs
• ChatGPT Prompt Engineering for Developers: https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/
• Building Systems with the ChatGPT API: https://www.deeplearning.ai/short-courses/building-systems-with-chatgpt/
• Reasoning with o1: https://www.deeplearning.ai/short-courses/reasoning-with-o1/
• Open-Source models & Hugging-face:
• Open-Source Models with Hugging Face: https://www.deeplearning.ai/short-courses/open-source-models-hugging-face/
• Introducing Multimodal Llama 3.2: https://www.deeplearning.ai/short-courses/introducing-multimodal-llama-3-2/
• Ollama – Run LLM Locally:
• Learn Ollama in 15 Minutes - Run LLM Models Locally for FREE (YouTube): https://www.youtube.com/watch?v=UtSSMs6ObqY
• Ollama Course – Build AI Apps Locally (YouTube – Intermediate - Optional): https://www.youtube.com/watch?v=GWB9ApTPTv4
• LangChain, RAG, AI Agents:
• LangChain for LLM Application Development: https://www.deeplearning.ai/short-courses/langchain-for-llm-application-development/
• LangChain: Chat with Your Data: https://www.deeplearning.ai/short-courses/langchain-chat-with-your-data/
• AI Agents in LangGraph (Intermediate): https://www.deeplearning.ai/short-courses/ai-agents-in-langgraph/
• Functions, Tools and Agents with LangChain (Intermediate): https://www.deeplearning.ai/short-courses/functions-tools-agents-langchain/
• AI Agent and multi-agent:
• Multi AI Agent Systems with crewAI (Beginner): https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/
• AI Agentic Design Patterns with AutoGen (Beginner – Optional): https://www.deeplearning.ai/short-courses/ai-agentic-design-patterns-with-autogen/
• RAG (Intermediate):
• Building and Evaluating Advanced RAG Applications: https://www.deeplearning.ai/short-courses/building-evaluating-advanced-rag/
• Building Multimodal Search and RAG: https://www.deeplearning.ai/short-courses/building-multimodal-search-and-rag/
• RAG using LlamaIndex (Optional):
• Building Agentic RAG with LlamaIndex: https://www.deeplearning.ai/short-courses/building-agentic-rag-with-llamaindex/
• JavaScript RAG Web Apps with LlamaIndex: https://www.deeplearning.ai/short-courses/javascript-rag-web-apps-with-llamaindex/
• Other (Optional):
• How Transformer LLMs Work: https://www.deeplearning.ai/short-courses/how-transformer-llms-work/
• Generative AI for Software Development: https://www.deeplearning.ai/courses/generative-ai-for-software-development/

Resources (YouTube - Free)
• YouTube Playlists (Free)
• Generative AI in a Nutshell: https://www.youtube.com/watch?v=2IK3DFHRFfw
• Generative AI Tools (From Edureka): https://www.youtube.com/watch?v=gMa_QHSAxOY&list=PL9ooVrP1hQOFIOOkGN2gbjvse8jcz-mux
• Introduction to Generative AI and LLMs (From Microsoft, covering intro RAG, Fine Tuning, and most dev topics):
https://www.youtube.com/playlist?list=PLlrxD0HtieHj2nfK54c62lcs3-YSTx3Je
• RAG:
• Learn RAG From Scratch – Python AI Tutorial from a LangChain Engineer: https://www.youtube.com/watch?v=sVcwVQRHIc8
• AI Agents:
• How To Create Ai Agents From Scratch (CrewAI, Zapier, Cursor): https://www.youtube.com/watch?v=PM9zr7wgJX4
• The Complete Guide to Building AI Agents for Beginners: https://www.youtube.com/watch?v=MOyl58VF2ak
• Python Advanced AI Agent Tutorial - LlamaIndex, Ollama and Multi-LLM!: https://www.youtube.com/watch?v=JLmI0GJuGlY
• ADVANCED Python AI Agent Tutorial - Using RAG: https://www.youtube.com/watch?v=ul0QsodYct4
• GitHub Copilot for developers (From Microsoft): https://www.youtube.com/playlist?list=PLlrxD0HtieHgr23PS05FIncnih4dH9Na5
• Ollama:
• Learn Ollama in 15 Minutes - Run LLM Models Locally for FREE (YouTube): https://www.youtube.com/watch?v=UtSSMs6ObqY
• Ollama Course – Build AI Apps Locally (YouTube – Intermediate - Optional): https://www.youtube.com/watch?v=GWB9ApTPTv4
• Other (Optional):
• Advanced Dev topics + Practical project (From FreeCodeCamp.org): https://www.youtube.com/watch?v=mEsleV16qdo
• Generative AI for Developers – Comprehensive Course: https://www.youtube.com/watch?v=F0GQ0l2NfHA
• AI Agents: https://www.youtube.com/watch?v=7WK0w9Z9mPE

Resources (Microsoft - Free)
• Microsoft free courses (Free):
• Generative AI for Beginners
• Generative AI for Beginners - .NET
• Generative AI with JavaScript
• AI for Beginners
• AI Agents for Beginners - A Course
• Data Science for Beginners
• ML for Beginners
• Mastering GitHub Copilot for C#/.NET Developers
• Mastering GitHub Copilot for Paired Programming

Assignment

Continue your ChatGPT clone app:


1. You can request access to paid OpenAI subscription from this link
2. Add vision capability (can attach images and ask about them)
3. Add file input, so users can attach files and ask about them (try file upload
or OpenAI Assistant API)
4. Try OpenAI Embedding API
5. Try OpenAI Fine-tuning API
6. Follow this course, to create simple RAG system
(Chat with your files).

Prepare your final project Idea

Thank You
