Different Architectures Supported by Azure ML

1. Azure Machine Learning service (AML):
 • Azure Machine Learning service is a comprehensive platform that provides end-to-end machine learning lifecycle management.
 • It supports various deployment targets, including Azure Container Instances, Azure Kubernetes Service (AKS), and Azure Functions.
 • With AML, you can package your model as a Docker container along with its dependencies and deploy it as a scalable, production-ready service (see the sketch after this list).
 • It offers features like model versioning, automated scaling, and monitoring, making it suitable for enterprise-grade ML deployments.
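
For illustration, here is a minimal sketch of that flow using the azureml-core (v1) Python SDK. The model file, entry script, and environment names are placeholders, not part of the original document:

```python
# Minimal sketch using the azureml-core (v1) SDK; names and files are placeholders.
from azureml.core import Workspace, Model, Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()  # reads the workspace details from config.json

# Register the serialized model with the workspace (this enables versioning).
model = Model.register(ws, model_path="model.pkl", model_name="demo-model")

# score.py must define init() and run(data); environment.yml lists dependencies.
env = Environment.from_conda_specification("demo-env", "environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Deploy to Azure Container Instances; use AksWebservice instead for AKS.
deploy_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)
service = Model.deploy(ws, "demo-service", [model], inference_config, deploy_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```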
2. Azure Functions:
 • Azure Functions is a serverless compute service that allows you to run your code in a stateless, event-driven environment.
 • You can deploy your ML model as an Azure Function, which can be triggered by events or HTTP requests.
 • Azure Functions automatically scales based on the incoming workload, making it suitable for lightweight models or scenarios with unpredictable traffic patterns.
 • It is a cost-effective option, as you only pay for the actual execution time of your functions.
3. Azure Container Instances (ACI):
 • Azure Container Instances enables you to run containers in Azure without managing the underlying infrastructure.
 • You can create a container image that includes your ML model and its dependencies, and then deploy it as a container instance (a minimal scoring app for such an image is sketched after this list).
 • ACI is suitable for quick and easy deployment of individual containers, especially for scenarios with short-lived workloads or sporadic burst traffic.
 • It provides flexibility in terms of resource allocation and is a good choice for rapid prototyping or development environments.
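
As an illustration of what such a container might run, here is a minimal Flask scoring app, assuming a scikit-learn-style model serialized as model.pkl (the file name and input shape are assumptions):

```python
# app.py - a minimal scoring server to bake into the ACI container image.
# Assumes model.pkl sits next to this file and has a scikit-learn-style API.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

with open("model.pkl", "rb") as f:   # load the model once at container start
    model = pickle.load(f)

@app.route("/score", methods=["POST"])
def score():
    payload = request.get_json()                 # e.g. {"features": [[1.0, 2.0]]}
    preds = model.predict(payload["features"])
    return jsonify({"predictions": preds.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)           # the container exposes port 8080
```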
4. Azure Kubernetes Service (AKS):
 • Azure Kubernetes Service is a managed container orchestration service that simplifies the deployment, management, and scaling of containerized applications.
 • You can package your ML model as a Docker container and deploy it on AKS, which offers scalability, high availability, and automated management capabilities.
 • AKS provides features like load balancing, automatic scaling, and rolling updates, making it suitable for production-grade ML deployments (see the configuration sketch after this list).
 • It supports deploying multiple replicas of your model for better performance and fault tolerance.
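
A sketch of how those autoscaling and replica features map to deployment options in the azureml-core (v1) SDK; the replica counts and resource sizes here are illustrative assumptions:

```python
# Sketch: AKS deployment configuration with the azureml-core (v1) SDK.
from azureml.core.webservice import AksWebservice

aks_config = AksWebservice.deploy_configuration(
    cpu_cores=1,
    memory_gb=2,
    autoscale_enabled=True,      # let AKS add/remove replicas with load
    autoscale_min_replicas=2,    # keep at least 2 replicas for fault tolerance
    autoscale_max_replicas=10,
)
# Pass aks_config (plus an AKS compute target) to Model.deploy, as in the
# ACI sketch above, to get load balancing and rolling updates from AKS.
```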
5. Azure Batch AI:
 • Azure Batch AI is a platform that provides job scheduling and management for AI and ML workloads. (Note: Microsoft has since retired Batch AI in favor of Azure Machine Learning compute clusters.)
 • You can use Azure Batch AI to distribute the inference or training tasks of your ML model across a cluster of virtual machines.
 • It is designed for computationally intensive workloads that require parallel processing, and it can scale up to large clusters.
 • Azure Batch AI offers flexibility in terms of virtual machine configuration and job scheduling options, making it suitable for complex ML workloads.

6. Azure IoT Edge:
 • Azure IoT Edge allows you to deploy and run ML models on edge devices or IoT devices.
 • You can package your ML model as a Docker container and deploy it to edge devices using Azure IoT Edge.
 • Azure IoT Edge provides offline capabilities, local inferencing, and the ability to run models on devices with limited computing resources.
 • It supports modular deployment, enabling you to deploy pre-processing and post-processing modules along with your ML model.

Different Deployment Scenarios

Scenario 1: We have a pre-trained model

1. Azure Functions:
 • Azure Functions is a serverless compute service that allows you to run your code in a stateless, event-driven environment.
 • You can create a Python function that loads the pickle file, performs predictions, and returns the results (a minimal sketch follows this list).
 • Azure Functions automatically scales based on the incoming workload, ensuring your function can handle varying traffic patterns.
 • It is well suited for lightweight models and scenarios where you need on-demand scaling and event-driven execution.
 • Azure Functions can be triggered by events (such as HTTP requests, timers, or message queues) or can be integrated with other Azure services.
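
A minimal sketch of such a function, assuming the pickle file ships alongside the function code and the model exposes a scikit-learn-style predict():

```python
# __init__.py - HTTP-triggered Azure Function scoring a pickled model.
# Assumes model.pkl is deployed with the function and exposes .predict().
import json
import os
import pickle

import azure.functions as func

# Load once per worker process rather than once per request.
_model_path = os.path.join(os.path.dirname(__file__), "model.pkl")
with open(_model_path, "rb") as f:
    _model = pickle.load(f)

def main(req: func.HttpRequest) -> func.HttpResponse:
    try:
        body = req.get_json()                    # e.g. {"features": [[1.0, 2.0]]}
    except ValueError:
        return func.HttpResponse("Send a JSON body", status_code=400)
    preds = _model.predict(body["features"])
    return func.HttpResponse(
        json.dumps({"predictions": preds.tolist()}),
        mimetype="application/json",
    )
```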
2. Azure Container Instances (ACI):
 • Azure Container Instances provides a way to run containers in Azure without managing the underlying infrastructure.
 • You can create a Docker container that includes your Python code, dependencies, and the pickle file with the pre-trained model.
 • ACI allows you to deploy the container quickly and easily, providing a scalable environment to perform predictions.
 • It is suitable when you need a more persistent and isolated environment compared to Azure Functions.
 • ACI is a good choice when you have a larger model or specific dependencies that require containerization.

Both Azure Functions and ACI offer advantages depending on your specific requirements. If your
model and associated code are relatively small and you expect low to moderate traffic, Azure
Functions might be a suitable choice due to its serverless nature and event-driven capabilities. On
the other hand, if you have a larger model or need more control over the environment, ACI can
provide a scalable container-based solution.

Costing:
Usage idea:
For a 300 MB dataset on a 64-bit system, a reasonable starting point would be an Azure Function instance with at least 1-2 GB of memory, assuming a single inference at a time.

Azure Functions (Premium tier):

Type 1: 1 core, 3.5 GB RAM, 250 GB storage
Price per hour: $0.21 (~ Rs. 17)

Type 2: 2 cores, 7 GB RAM, 250 GB storage
Price per hour: $0.43 (~ Rs. 35)

Type 3: 4 cores, 14 GB RAM, 250 GB storage
Price per hour: $0.85 (~ Rs. 70)

Azure Container Instances (ACI):

Type 1: 2 vCPUs, 8 GB RAM
Price per hour: $0.12 (~ Rs. 10)

Type 2: 2 vCPUs, 12 GB RAM
Price per hour: $0.13 (~ Rs. 11)
Other costs:

 • Azure Container Registry
 • Azure Block Blob Storage
 • Key Vault
 • Application Insights

Scenario 2: We have to train, test, and deploy a model

1. Azure Batch AI
 • Scale and parallel processing: Azure Batch AI is designed to handle large-scale and parallel AI workloads. If your prediction workload involves processing a large number of inputs simultaneously or requires significant computational resources, Azure Batch AI can distribute the workload across a cluster of virtual machines for faster processing (a single-machine analogy is sketched after this list).
 • Model packaging and dependencies: With Azure Batch AI, you can package your model, including the pickle file and associated dependencies, as a Docker container. This container can be deployed to the Azure Batch AI cluster for execution. If your model has complex dependencies or requires custom software configurations, Azure Batch AI allows you to specify those in the container environment.
 • Job management and scheduling: Azure Batch AI provides job scheduling and management capabilities, allowing you to define and schedule inference jobs. It can handle the orchestration of running multiple jobs on the cluster, including managing resource allocation, monitoring, and job dependencies.
 • Training and inference flexibility: While Azure Batch AI is well suited for training and distributed inference tasks, it may introduce additional complexity if your goal is simply to perform predictions using a pre-trained model. The overhead of setting up a cluster and managing jobs might be more than what is necessary for a prediction-only workload.
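
To make the parallelism idea concrete, here is a single-machine analogy using the Python standard library; it chunks the input and scores the chunks in parallel worker processes. This only illustrates the concept, not the Batch AI API:

```python
# Single-machine analogy for distributed batch scoring: split the input into
# chunks and score them in parallel worker processes. Batch AI applies the
# same idea across VMs in a cluster; this is not the Batch AI API itself.
import pickle
from concurrent.futures import ProcessPoolExecutor

def score_chunk(rows):
    with open("model.pkl", "rb") as f:   # each worker loads its own copy
        model = pickle.load(f)
    return model.predict(rows).tolist()

def parallel_predict(rows, n_workers=4, chunk_size=1000):
    chunks = [rows[i:i + chunk_size] for i in range(0, len(rows), chunk_size)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(score_chunk, chunks)
    return [pred for chunk in results for pred in chunk]  # flatten in order
```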

Costing:
Usage idea:
For a 300 MB dataset on a 64-bit system, a reasonable starting point would be a virtual machine with at least 1-2 GB of memory, assuming a single inference at a time.

Azure Batch AI:

Type 1: D4ds instance, 16 GB RAM, 150 GB temporary storage
Price per hour: $0.226 (~ Rs. 19)

Different Endpoints

Online Endpoint:

 • You have low-latency requirements.
 • Your model can answer the request in a relatively short amount of time.
 • Your model's inputs fit in the HTTP payload of the request.
 • You need to scale out in terms of the number of requests.

When using a managed online endpoint, you pay for the compute and networking charges; there is no additional surcharge. A minimal invocation sketch follows.
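
For illustration, invoking a managed online endpoint is a plain HTTPS call; the URI, key, and payload shape below are placeholders:

```python
# Sketch: calling a managed online endpoint over HTTPS. The scoring URI,
# key, and payload shape are placeholders taken from the endpoint details.
import requests

scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
headers = {
    "Authorization": "Bearer <endpoint-key>",
    "Content-Type": "application/json",
}
payload = {"features": [[1.0, 2.0]]}  # shape depends on your scoring script

resp = requests.post(scoring_uri, json=payload, headers=headers, timeout=30)
resp.raise_for_status()
print(resp.json())
```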

Batch Endpoint:

 • You have expensive models or pipelines that require a longer time to run.
 • You want to operationalize machine learning pipelines and reuse components.
 • You need to perform inference over large amounts of data, distributed across multiple files.
 • You don't have low-latency requirements.
 • Your model's inputs are stored in a Storage Account or in an Azure Machine Learning data asset.
 • You can take advantage of parallelization.

Invoking a batch endpoint triggers an asynchronous batch inference job. Compute resources are automatically provisioned when the job starts and automatically deallocated as the job completes, so you only pay for compute when you use it (see the sketch below).
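
A hedged sketch of invoking a batch endpoint with the azure-ai-ml (v2) Python SDK; the endpoint name and data path are placeholders, and the exact invoke() parameters may vary between SDK versions:

```python
# Sketch with the azure-ai-ml (v2) SDK; endpoint and data names are
# placeholders, and invoke() parameters may differ across SDK versions.
from azure.ai.ml import Input, MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Point the job at a file or folder in storage; compute is provisioned
# when the job starts and released when it completes.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="<batch-endpoint>",
    input=Input(type="uri_file", path="azureml://datastores/workspaceblobstore/paths/data/input.csv"),
)
print(job.name)  # poll or stream this job to collect the scored output
```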

Blob Storage:
Storage cost:
$0.018 per GB per month (Hot tier)
$0.01 per GB per month (Cool tier)

Different costs associated with batch endpoints:
1. Storage Cost: Azure Blob Storage charges for the amount of data stored in the storage account. The cost depends on the storage tier (Hot, Cool, or Archive) chosen for your data. Each tier has different pricing rates per GB per month. You will be billed for the storage capacity used by your 1 GB CSV file.

2. Data Transfer Cost: If you transfer data between Azure regions or between Azure services, there
may be data transfer costs associated with moving the CSV file to the appropriate Azure region for
processing. Data transfer costs vary depending on the data volume and the source and destination
regions.

3. Batch Endpoint Execution Cost: When using the batch endpoint for prediction, Azure may charge
for the compute resources utilized during the batch processing. The cost is based on factors such as
the number of instances used, the duration of the batch execution, and the specific compute
resource configuration (e.g., memory, CPU).

4. Networking Cost: Azure may apply networking costs for data ingress and egress if your batch
endpoint needs to access data from Azure Blob Storage or transfer the prediction results to another
service or location. These costs depend on the data volume transferred and the network egress
rates.

5. Azure Machine Learning Usage Cost: If you're using Azure Machine Learning to deploy and
manage your batch endpoint, there may be additional costs associated with the usage of Azure
Machine Learning services, such as the deployment of compute instances, workspace management,
and other related features.

Total Costing Estimate:

1. Storage cost: $0.018 per GB per month (Hot tier)

2. Data transfer cost: $0.01 - $0.02 per GB

3. Execution cost: 2 vCPUs, 12 GB RAM at $0.12 per hour (~ Rs. 10)

4. Networking cost: $0.005 - $0.065 per GB (egress)

5. Some miscellaneous costs (Container Registry, Key Vault, Application Insights)
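
To make these line items concrete, here is a back-of-the-envelope calculation for one batch-scoring run over the 1 GB CSV, using the rates above; the 2-hour job duration is an assumption:

```python
# Back-of-the-envelope estimate for one 1 GB batch-scoring job, using the
# rates listed above. The 2-hour job duration is an assumption.
GB = 1.0
storage_hot = 0.018 * GB     # $/GB-month (Hot tier) for the stored CSV
data_transfer = 0.02 * GB    # upper bound of the $0.01-0.02 per GB range
execution = 0.12 * 2         # $0.12/hour instance running for ~2 hours
networking = 0.065 * GB      # upper bound of the egress rates

total = storage_hot + data_transfer + execution + networking
print(f"Estimated cost per run: ${total:.3f}")   # ~ $0.34, before misc. charges
```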
