Introduction to Generative Models.pptx
Introduction to Generative Models
Unleashing the Power of Artificial Creativity
Submitted by: Abdul Latif Faqeeri
Submitted to: Ms. Rama Rani
Univ Roll: 22073010
What are Generative Models?
● Generative models create new data based on
existing patterns
● Popular examples include Generative Adversarial
Networks (GANs) and Variational Autoencoders
(VAEs)
● They enable machines to generate art, music, and
text
● Generative models have applications in data
augmentation, simulation, and creativity
Photo by Pexels
Autoencoders
● An unsupervised deep learning
algorithm that learns from unlabeled data
● Are Artificial Neural Networks
● Useful for dimensionality
reduction and feature learning
● x̄ (x bar) is the reconstruction of the input x
● z is a latent representation or code,
and s is a nonlinearity such as the sigmoid
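The notation above can be made concrete with a tiny NumPy sketch of one autoencoder forward pass (all sizes and weight values here are illustrative, not from the slides):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))  # the nonlinearity s

rng = np.random.default_rng(0)
x = rng.random(8)                          # an 8-dimensional input
W_enc = 0.1 * rng.standard_normal((3, 8))  # encoder weights (3-D latent space)
W_dec = 0.1 * rng.standard_normal((8, 3))  # decoder weights
b_enc, b_dec = np.zeros(3), np.zeros(8)

z = sigmoid(W_enc @ x + b_enc)       # latent code z
x_bar = sigmoid(W_dec @ z + b_dec)   # reconstruction x̄
loss = np.mean((x - x_bar) ** 2)     # reconstruction error to minimize
```

Training consists of adjusting the weights so that this reconstruction error shrinks across the dataset.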
Stacked Autoencoder
Implementing a stacked Autoencoder Using Keras
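A minimal Keras sketch of such a stacked autoencoder, with layer sizes in the spirit of the Géron reference cited at the end; the random training array is a stand-in for real image data such as Fashion MNIST:

```python
import numpy as np
from tensorflow import keras

# Encoder and decoder each have more than one hidden layer: a "stacked" AE.
stacked_encoder = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),
    keras.layers.Dense(100, activation="selu"),
    keras.layers.Dense(30, activation="selu"),   # 30-D codings
])
stacked_decoder = keras.Sequential([
    keras.Input(shape=(30,)),
    keras.layers.Dense(100, activation="selu"),
    keras.layers.Dense(28 * 28, activation="sigmoid"),
    keras.layers.Reshape([28, 28]),
])
stacked_ae = keras.Sequential([stacked_encoder, stacked_decoder])
stacked_ae.compile(loss="binary_crossentropy", optimizer="adam")

# Inputs double as targets: the network learns to reproduce its input.
X_train = np.random.rand(64, 28, 28).astype("float32")  # stand-in data
stacked_ae.fit(X_train, X_train, epochs=1, verbose=0)
```

Note the symmetric shape: 784 → 100 → 30 → 100 → 784, with the 30-dimensional bottleneck forcing the network to compress.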
Visualizing the Reconstructions
One way to ensure that an autoencoder is properly trained is to compare
the inputs and the outputs: the differences should not be too significant.
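One simple way to make this comparison is a small helper that draws originals above their reconstructions (the figure path and grid size are arbitrary choices for illustration):

```python
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import numpy as np

def plot_reconstructions(images, reconstructions, n=5, path="recon_demo.png"):
    """Top row: original images; bottom row: the autoencoder's outputs."""
    fig, axes = plt.subplots(2, n, figsize=(1.5 * n, 3))
    for i in range(n):
        axes[0, i].imshow(images[i], cmap="binary")
        axes[1, i].imshow(reconstructions[i], cmap="binary")
        axes[0, i].axis("off")
        axes[1, i].axis("off")
    fig.savefig(path)
    plt.close(fig)

# Demo with random arrays; in practice pass X_valid and model.predict(X_valid).
imgs = np.random.rand(5, 28, 28)
plot_reconstructions(imgs, imgs)
```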
Unsupervised Pretraining using Stacked Autoencoder
Transfer Learning
Training One Autoencoder at a Time
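The pretraining idea can be sketched as follows: train an autoencoder on plentiful unlabeled data, then reuse its (frozen) encoder as the lower layers of a classifier trained on a small labeled set. The layer sizes and the 10-class head below are illustrative:

```python
from tensorflow import keras

# Stand-in for an encoder pretrained as part of a stacked autoencoder.
pretrained_encoder = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),
    keras.layers.Dense(100, activation="selu"),
    keras.layers.Dense(30, activation="selu"),
])
pretrained_encoder.trainable = False  # freeze; optionally unfreeze later

# New classification head on top of the reused encoder (transfer learning).
classifier = keras.Sequential([
    pretrained_encoder,
    keras.layers.Dense(10, activation="softmax"),
])
classifier.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
```

After the head has converged, the encoder layers can be unfrozen and fine-tuned at a low learning rate.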
Convolutional Autoencoders
● When dealing with images, the dense autoencoders seen so far will not work well (unless the images are very small)
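For images, convolutional layers replace the dense ones: the encoder downsamples with pooling and the decoder upsamples with transposed convolutions. A minimal sketch (filter counts are illustrative):

```python
from tensorflow import keras

conv_encoder = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(16, 3, padding="same", activation="selu"),
    keras.layers.MaxPool2D(2),                      # 28x28 -> 14x14
    keras.layers.Conv2D(32, 3, padding="same", activation="selu"),
    keras.layers.MaxPool2D(2),                      # 14x14 -> 7x7
])
conv_decoder = keras.Sequential([
    keras.Input(shape=(7, 7, 32)),
    keras.layers.Conv2DTranspose(16, 3, strides=2, padding="same",
                                 activation="selu"),      # 7x7 -> 14x14
    keras.layers.Conv2DTranspose(1, 3, strides=2, padding="same",
                                 activation="sigmoid"),   # 14x14 -> 28x28
])
conv_ae = keras.Sequential([conv_encoder, conv_decoder])
conv_ae.compile(loss="binary_crossentropy", optimizer="adam")
```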
Denoising Autoencoders
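A denoising autoencoder corrupts its inputs during training and learns to output the clean originals. One simple way to build one, sketched here with illustrative sizes, is to insert a GaussianNoise layer (active only at training time) at the start of the encoder:

```python
from tensorflow import keras

denoising_encoder = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),
    keras.layers.GaussianNoise(0.2),   # corrupts inputs at training time only
    keras.layers.Dense(100, activation="selu"),
    keras.layers.Dense(30, activation="selu"),
])
denoising_decoder = keras.Sequential([
    keras.Input(shape=(30,)),
    keras.layers.Dense(100, activation="selu"),
    keras.layers.Dense(28 * 28, activation="sigmoid"),
    keras.layers.Reshape([28, 28]),
])
denoising_ae = keras.Sequential([denoising_encoder, denoising_decoder])
# Targets are the clean images; the noise is injected internally, so
# training looks like: denoising_ae.fit(X_train, X_train, ...)
denoising_ae.compile(loss="binary_crossentropy", optimizer="adam")
```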
Variational Autoencoders
● An important category of autoencoders was introduced in 2013 by
Diederik Kingma and Max Welling and quickly became one of the
most popular variants: variational autoencoders (VAEs)
● VAEs are quite different from all the autoencoders we have
discussed so far, in these particular ways:
1. They are probabilistic autoencoders, meaning that their
outputs are partly determined by chance, even after training (as
opposed to denoising autoencoders, which use randomness only
during training)
2. Most importantly, they are generative autoencoders, meaning that
they can generate new instances that look like they were sampled
from the training set.
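The probabilistic part lives in the encoder: instead of one code, it outputs a mean and a log-variance, and a sampling layer draws z from that distribution (the reparameterization trick). Below is a sketch of the encoder half, with illustrative sizes; a full VAE adds a decoder and a KL-divergence loss term:

```python
import tensorflow as tf
from tensorflow import keras

class Sampling(keras.layers.Layer):
    """Draw z = mean + exp(log_var / 2) * epsilon, with epsilon ~ N(0, I)."""
    def call(self, inputs):
        mean, log_var = inputs
        epsilon = tf.random.normal(tf.shape(log_var))
        return mean + tf.exp(log_var / 2) * epsilon

codings_size = 10
inputs = keras.Input(shape=(28, 28))
h = keras.layers.Flatten()(inputs)
h = keras.layers.Dense(150, activation="selu")(h)
codings_mean = keras.layers.Dense(codings_size)(h)     # mu
codings_log_var = keras.layers.Dense(codings_size)(h)  # log(sigma^2)
codings = Sampling()([codings_mean, codings_log_var])
variational_encoder = keras.Model(inputs, [codings_mean, codings_log_var, codings])
```

Because z is sampled, the same input yields slightly different codes on every pass, which is what makes the autoencoder probabilistic even after training.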
Application in Natural Language Processing
● Generative models are used in language modeling
and text generation
● They can generate realistic text, dialogue, and even
poetry
● They have applications in chatbots, language
translation, and content creation
● Generative models can inspire creative writing and
art
Challenges in Generative Model Training
● Training generative models can be computationally
intensive
● Mode collapse is a common problem, where the model
generates limited variety
● Evaluating the quality of generated content is subjective
and challenging
● Overfitting and loss of diversity are other challenges to
address
Applications in Medical Imaging
● Generative models are used for medical image
generation and synthesis
● They can create synthetic images for training and
data augmentation
● Generative models aid in disease diagnosis and
treatment planning
● They enable medical research and simulations
Ethical Considerations
● Generative models raise concerns about misuse and
manipulation
● Deep fakes and fake news propagation are examples of
potential harm
● Regulations and guidelines need to be developed to
mitigate risks
● User education and awareness are crucial in addressing
ethical challenges
Future Trends in Generative Models
● Advancements in generative models continue to
expand their possibilities
● Integration with reinforcement learning can enhance
creativity
● Research focuses on improving stability and diversity
of generated content
● Applications in robotics, virtual reality, and design are
emerging
Open-source Frameworks for Generative Models
● Popular open-source frameworks include TensorFlow,
PyTorch, and Keras
● These frameworks provide pre-trained models and
resources
● Community support facilitates knowledge sharing and
collaboration
● Developers can experiment and contribute to the field
Generative Adversarial Networks (GANs)
Unlocking the Power of AI Creativity
Introduction to GANs
● GANs are a class of machine learning models.
● They consist of two neural networks: a generator and a
discriminator.
● The generator creates synthetic data, while the
discriminator tries to distinguish between real and fake
data.
● GANs have revolutionized the field of AI by enabling
advanced image and content generation.
How GANs Work
● The generator network takes random
noise as input and generates data.
● The discriminator network receives
both real and fake data and learns to
distinguish between them.
● The generator and discriminator are
trained simultaneously through a
process of competition and
cooperation.
● This iterative process leads to the
generator generating increasingly
realistic data.
Let’s go ahead and build a simple GAN for Fashion MNIST.
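Here is a minimal sketch of such a GAN with its two-phase training step, written with an explicit GradientTape loop so the alternation is visible; the layer sizes are illustrative, and random images stand in for the full Fashion MNIST pipeline:

```python
import tensorflow as tf
from tensorflow import keras

codings_size = 30
generator = keras.Sequential([
    keras.Input(shape=(codings_size,)),
    keras.layers.Dense(100, activation="selu"),
    keras.layers.Dense(28 * 28, activation="sigmoid"),
    keras.layers.Reshape([28, 28]),
])
discriminator = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),
    keras.layers.Dense(100, activation="selu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
g_opt = keras.optimizers.Adam(1e-4)
d_opt = keras.optimizers.Adam(1e-4)
bce = keras.losses.BinaryCrossentropy()

def train_step(real_images):
    batch = tf.shape(real_images)[0]
    # Phase 1: discriminator learns real (label 1) vs. generated (label 0).
    noise = tf.random.normal([batch, codings_size])
    with tf.GradientTape() as tape:
        fake = generator(noise)
        d_real = discriminator(real_images)
        d_fake = discriminator(fake)
        d_loss = (bce(tf.ones_like(d_real), d_real)
                  + bce(tf.zeros_like(d_fake), d_fake))
    grads = tape.gradient(d_loss, discriminator.trainable_variables)
    d_opt.apply_gradients(zip(grads, discriminator.trainable_variables))
    # Phase 2: generator tries to make the discriminator say "real" on fakes.
    noise = tf.random.normal([batch, codings_size])
    with tf.GradientTape() as tape:
        d_fake = discriminator(generator(noise))
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    grads = tape.gradient(g_loss, generator.trainable_variables)
    g_opt.apply_gradients(zip(grads, generator.trainable_variables))
    return float(d_loss), float(g_loss)

d_loss, g_loss = train_step(tf.random.uniform([8, 28, 28]))  # one step on stand-in data
```

In a real run, this step is repeated over batches of Fashion MNIST images for many epochs, and generated samples are inspected periodically.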
The Power of GANs
● GANs consist of a generator and a discriminator network
● The generator learns to create realistic data, while the
discriminator learns to distinguish real from fake
● GANs have revolutionized image synthesis and style
transfer
● They are used in deepfake technology and image super-
resolution
Applications of GANs
● GANs have been used in various
creative fields, such as art, music,
and fashion.
● They can generate realistic
images, textures, and even
human-like faces.
● GANs have also been used for
data augmentation, anomaly
detection, and domain adaptation.
● The applications of GANs
continue to expand across
industries.
Training Challenges
● GAN training can be challenging
due to mode collapse and
instability.
● Mode collapse occurs when the
generator produces limited
variations of data.
● Instability refers to the difficulty in
achieving equilibrium between the
generator and discriminator.
● Researchers are actively working
on techniques to stabilize GAN
training.
GAN Variants
● Several variants of GANs have been
developed to address specific
challenges.
● DCGAN (Deep Convolutional GAN)
uses convolutional networks for
image generation.
● CycleGAN can translate images
from one domain to another without
paired training data.
● StyleGAN allows control over the
style and characteristics of the
generated images.
Ethical Considerations
● GANs raise ethical concerns
regarding the generation of deep
fakes and manipulated content.
● They can be misused for phishing,
impersonation, and spreading
misinformation.
● Regulations and ethical guidelines
are being developed to mitigate
these risks.
● Responsible and ethical use of
GANs is crucial for preserving
trust in AI technology.
Future Developments
● Research in GANs is advancing
rapidly, pushing the boundaries of
AI creativity.
● Improved training algorithms and
architectures are being
developed.
● GANs are expected to have
broader applications in virtual
reality, advertising, and
healthcare.
● The future holds tremendous
potential for the continued development of GANs.
Diffusion Models
● The ideas behind diffusion models have been around
for many years
● They were first formalized in their modern form in a
2015 paper by Jascha Sohl-Dickstein et al. from
Stanford University and UC Berkeley
● The authors applied tools from thermodynamics to
model a diffusion process
● What is a diffusion process?
Diffusion Models
● The core idea is to train a model to learn the reverse
process: start from the completely mixed state and
gradually “unmix” the milk from the tea
● The authors obtained promising results in image
generation, but since GANs produced more convincing
images back then, diffusion models did not get as
much attention
● In 2020, Jonathan Ho et al., also from UC Berkeley,
managed to build a diffusion model capable of generating
highly realistic images, which they called a denoising
diffusion probabilistic model (DDPM)
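The forward (noising) half of that process, which the DDPM is trained to reverse, can be written in a few lines of NumPy; the linear beta schedule below is a simple illustrative choice:

```python
import numpy as np

# Forward diffusion: gradually mix a signal with Gaussian noise over T steps.
T = 1000
beta = np.linspace(1e-4, 0.02, T)   # per-step noise variances (illustrative)
alpha = 1.0 - beta
alpha_bar = np.cumprod(alpha)       # cumulative product, shrinks toward 0

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(alpha_bar_t) * x0, (1 - alpha_bar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(42)
x0 = rng.random((28, 28))            # stand-in "image"
x_mid = q_sample(x0, 500, rng)       # partly noised
x_end = q_sample(x0, T - 1, rng)     # almost pure noise (alpha_bar[T-1] is tiny)
```

The model's job is the reverse direction: given a noisy x_t, predict the noise so that x_0 can be gradually recovered.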
Diffusion Models
● A few months later, a 2021 paper by OpenAI researchers
Alex Nichol and Prafulla Dhariwal analyzed the DDPM
architecture and proposed several improvements that
allowed DDPMs to finally beat GANs.
References
Géron, A. (2019). Hands-On Machine Learning with Scikit-Learn, Keras,
and TensorFlow (2nd ed.). O'Reilly Media.
Thank You
