Postgraduate Program in
DATA SCIENCE
with specialization in
Artificial Intelligence and Machine Learning or Data Engineering
In partnership with edX | Aligned with Harvard University
Ranked 1st as per ‘Best Global University’ indicator rankings in 2024-25
About the Program
Foundation of Data Science
The program builds your ability to solve real-world business problems by working
with data—right from extracting and cleaning it to analyzing, interpreting, and
deriving predictive insights.
Specialization Option 1: Artificial Intelligence and Machine Learning
Build a strong foundation in data analytics and statistics—then specialize in
deep learning to design, train, and deploy intelligent systems for vision,
language, and automation.
Specialization Option 2: Data Engineering
Build strong data analytics capabilities and specialize in designing scalable,
reliable, and cost-effective data pipelines for modern, real-time analytics
ecosystems.
About the Program Partner
Hero Vired has partnered with edX, a global online learning platform, to give its
customers access to a curated selection of job-relevant courses and programs.
Together with hundreds of world-class institutions, edX offers thousands of
learning opportunities in critical disciplines like AI, sustainability, and finance.
Harvard University, a prestigious Ivy League institution, stands as a beacon of
academic excellence. Founded in 1636, it's the oldest university in the United
States. Renowned for its rigorous curriculum and world-class faculty, Harvard
consistently ranks among the top universities globally, often securing the #1 spot
in prestigious rankings.
Harvard University is ranked 1st as per ‘Best Global University’ indicator
rankings in 2024-25. Source: Harvard edX.
Postgraduate Program in
Data Science with specialization
Foundation of Data Science
This intensive 18-week program prepares you to solve business challenges using data. You’ll learn
to explore and structure raw datasets, extract meaningful patterns, validate hypotheses, and build
predictive models. The curriculum emphasizes hands-on application, helping you build real-world
capabilities in analytical thinking, problem solving, and communication of insights, before enabling
you to specialize in your chosen field of Data Engineering or AIML.
Specialization-1: Artificial Intelligence & Machine Learning
• Learn how machines "learn" like the human brain using data.
• Build smart systems that: Recognize images | Understand text | Generate content | Make decisions
• Master deep learning techniques used in real-world applications: Face recognition | Chatbots | Content recommendation | AI-generated images/videos
• Improve models to be faster, smaller, and more efficient.
• Learn deployment of AI models into real-world apps and products.
• Gain the skills to build, fine-tune, and deploy AI systems from scratch.
• Ideal for those ready to go beyond analytics into cutting-edge AI and deep learning.

Specialization-2: Data Engineering
• Learn to build and manage systems that move and process large volumes of data efficiently.
• Understand the complete data lifecycle: Collection | Storage | Transformation | Accessibility for business use
• Design scalable and clear data structures.
• Automate tasks and manage workflows using industry-standard tools.
• Work with both real-time and batch data: Daily reports | Live dashboards
• Set up systems for: Data cleaning | Error checking | Smooth backend operations
• Learn core principles of: Data security | Governance | Compliance
• Optimize systems for: Speed | Cost-efficiency | Reliability
Learning Hours
Total Duration - 08 Months
• Live Sessions - 180+ hrs
• Self Paced - 20+ hrs
• Total Projects & Industry Sessions - 20+ hrs
Total hours of learning: 220+ | Total weekly effort: 10-12 hrs

Foundation of Data Science - 5 Months - 110+ hrs Live Sessions
Specialization-1 - 3 Months - 70+ hrs Live Sessions
Specialization-2 - 3 Months - 70+ hrs Live Sessions
Program Highlights
Hands-on Learning
Work on real-world projects and case studies in areas like AI, analytics, and data engineering, applying practical skills through interactive exercises.
Capstone Projects
Solve real-world problems end-to-end, building AI solutions or data pipelines, with a focus on practical application and portfolio development.
Expert-Led Live Training
Learn from experienced professionals through interactive, real-time sessions focused on data-driven decision-making, AI, and engineering.
Career Development Focus
Build a portfolio, gain practical exposure, and receive career guidance for roles in analytics, AI, deep learning, and data engineering.
Comprehensive Curriculum
Covers the full data journey, from data analysis and statistical techniques to predictive models, ensuring readiness for advanced roles in data science and engineering.
Real-World Application
Apply knowledge through hands-on projects, such as working on AI-powered solutions or building production-ready data pipelines, with a focus on practical skills.
No Prior Coding Experience Required
Designed for individuals from diverse academic backgrounds, with no prior coding or technical experience necessary to pursue data science, AI, or data engineering roles.
Job-Ready Focus
Prepare for industry-specific roles, whether in analytics, AI, model engineering, or data engineering, with a focus on building production-ready, scalable systems.
Tools & Technologies
Python
Eligibility
• Bachelor's degree in any discipline
• STEM background preferred
Learning Outcomes
• Frame Business Challenges as Data Problems – Understand how to
interpret business challenges and transform them into analytical or AI-driven
data solutions.
• Prepare, Analyze, and Visualize Data – Clean, explore datasets, and identify
trends. Apply statistical thinking, then communicate your insights through
compelling visualizations and narratives.
• Apply Statistical and Machine Learning Methods – Utilize machine learning
techniques and statistical analysis to build predictive models, evaluate
performance, and assess risks.
• Design and Implement AI Models – Work with AI models in areas like
computer vision, natural language processing, and recommendation
systems, using advanced deep learning architectures like CNNs, RNNs, and
transformers.
• Build Scalable Data Architectures and Pipelines – Design
robust data architectures and create scalable, automated
data pipelines using modern techniques in cloud
computing and orchestration tools.
• Capstone Project – Complete an end-to-end project that demonstrates your
ability to solve real-world problems using data science, AI, and scalable
data systems.
Curriculum*
PROGRAM DURATION : 08-Months
Approximately 10-12 hours of student effort expected per week
Projects integrated throughout the curriculum*
Foundation of Data Science | Duration- 5 Months
Module 1: Deep Dive into Database & Query Writing
Learning outcome: Build a strong foundation in managing and querying structured data using relational databases.
- Understand how databases store, structure, and manage data for business use.
- Write SQL queries to filter, join, and aggregate datasets for insights.
- Use advanced SQL techniques like subqueries, CTEs, and window functions.
- Apply SQL skills in analytics tasks like reporting, segmentation, and performance tracking.
Tools used: SQL (MySQL or equivalent), DBMS platforms
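For a flavor of the SQL covered here, a minimal sketch run from Python against an in-memory SQLite database (table and column names are illustrative, not from the program materials; SQLite 3.25+ is assumed for window-function support):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, '2024-01-05', 120.0), (1, '2024-02-10', 80.0),
        (2, '2024-01-20', 200.0), (2, '2024-03-02', 50.0);
""")

# A CTE plus a window function: rank each customer's orders by amount.
query = """
WITH ranked AS (
    SELECT customer_id, order_date, amount,
           RANK() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS rnk
    FROM orders
)
SELECT customer_id, order_date, amount FROM ranked WHERE rnk = 1;
"""
for row in conn.execute(query):
    print(row)  # each customer's largest order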
Module 2: Python for Data Analysis
Learning outcome: Learn to write Python code and automate data workflows essential for analytics and reporting.
- Write clean, efficient Python code using variables, loops, and functions.
- Work with data structures (lists, dictionaries, sets) for organizing data.
- Read and write data files (especially CSVs) using Python I/O functions.
- Use control structures and lambda functions for efficient data processing.
Tools used: Python, Jupyter Notebook
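A small illustrative sketch of these Python patterns (the file name and data are made up for the example):

import csv

# Write a tiny CSV, then read it back into a list of dicts.
rows = [
    {"city": "Delhi", "sales": 120},
    {"city": "Mumbai", "sales": 95},
    {"city": "Pune", "sales": 60},
]
with open("sales.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["city", "sales"])
    writer.writeheader()
    writer.writerows(rows)

with open("sales.csv", newline="") as f:
    data = [dict(r) for r in csv.DictReader(f)]

# Lambdas, comprehensions, and a set for quick transformations.
top = sorted(data, key=lambda r: int(r["sales"]), reverse=True)
high_sales = {r["city"] for r in top if int(r["sales"]) > 80}
print(top[0]["city"], high_sales)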
Module 3: Statistical Analysis & Decision Making
Learning outcome: Gain the ability to analyze data statistically and support decision-making with evidence.
- Summarize datasets using central tendency, dispersion, and skewness.
- Apply probability concepts and distributions in real-world scenarios.
- Conduct hypothesis testing and interpret p-values, t-tests, and ANOVA.
- Translate statistical outputs into clear business recommendations.
Tools used: Python (NumPy, SciPy)
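As an illustration of the hypothesis-testing workflow, a hedged sketch using SciPy on synthetic data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100, scale=15, size=50)   # e.g. baseline daily sales
variant = rng.normal(loc=108, scale=15, size=50)   # e.g. sales after a promotion

# Two-sample t-test: is the difference in means statistically significant?
t_stat, p_value = stats.ttest_ind(variant, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null: the difference is statistically significant.")
else:
    print("Fail to reject the null at the 5% level.")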
Module 4: Exploratory Data Analysis (EDA)
Learning outcome: Learn to transform, clean, and explore data to identify trends and prepare it for modeling.
- Conduct univariate, bivariate, and multivariate exploratory analysis.
- Handle missing values, detect outliers, and apply data transformation techniques.
- Create group summaries, pivot tables, and correlation visualizations.
- Use EDA techniques to prepare high-quality, analysis-ready datasets.
Tools used: Python (Pandas, Seaborn, Matplotlib)
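A compact illustrative EDA sketch with Pandas (the dataset is synthetic):

import pandas as pd
import numpy as np

df = pd.DataFrame({
    "region": ["North", "South", "North", "South", "North"],
    "month": ["Jan", "Jan", "Feb", "Feb", "Feb"],
    "revenue": [100.0, np.nan, 150.0, 90.0, 2000.0],
})

# Handle missing values and flag outliers with a simple z-score rule.
df["revenue"] = df["revenue"].fillna(df["revenue"].median())
z = (df["revenue"] - df["revenue"].mean()) / df["revenue"].std()
df["outlier"] = z.abs() > 2

# Group summaries and a pivot table for quick structure checks.
print(df.groupby("region")["revenue"].agg(["mean", "count"]))
print(df.pivot_table(index="region", columns="month", values="revenue"))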
Module 5: Machine Learning using Python
Learning outcome: Discover how to build predictive models that solve real business problems.
- Understand machine learning concepts and build models using real-world data.
- Apply regression, classification, and clustering techniques effectively.
- Perform data preprocessing, feature engineering, and model tuning.
- Evaluate model performance using industry-standard metrics and validation methods.
Tools used: Python (Scikit-learn, Pandas, Matplotlib), Jupyter Notebook
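A minimal scikit-learn sketch of the split-fit-evaluate loop, using a bundled toy dataset rather than program data:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Fit a classifier and score it on held-out data.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("f1:", f1_score(y_test, pred))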
HarvardX Module: Introduction to Data Science & Python
Topics covered: Gain insights into Python and learn regression models (Linear,
Multilinear, and Polynomial) and classification models (kNN, Logistic),
utilizing popular libraries such as sklearn, Pandas, Matplotlib, and NumPy.
*The curriculum is subject to changes
Curriculum*
Specialization - Artificial Intelligence and Machine Learning | Duration- 3 Months
Module 1: Deep Learning Fundamentals
Learning outcome: Lay the foundation for building intelligent systems by understanding how deep learning works under the hood.
- Learn the building blocks of neural networks—from perceptrons to multilayer architectures and activation functions.
- Explore core math concepts like matrices, derivatives, and probability behind deep learning.
- Build simple neural nets using NumPy and understand optimization through backpropagation and gradient descent.
- Gain hands-on experience with TensorFlow and Keras to structure and train deep models efficiently.
Tools used: NumPy, Pandas, Matplotlib, TensorFlow, Keras
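A minimal Keras sketch of such a multilayer network, trained with gradient descent on random stand-in data (just to make the example runnable):

import numpy as np
from tensorflow import keras

X = np.random.rand(256, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")   # a toy binary target

# A small MLP: dense layers with nonlinear activations.
model = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])

# Backpropagation and gradient descent happen inside fit().
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]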
Module 2: Computer Vision with Deep Learning
Learning outcome: Master visual recognition systems that power technologies like self-driving cars and image search.
- Understand convolution operations, pooling, and the architecture of CNNs like VGG, ResNet, and Inception.
- Apply advanced vision techniques for object detection (YOLO, R-CNN) and image segmentation (U-Net, Mask R-CNN).
- Learn transfer learning, model regularization, and data augmentation to build high-performance vision models.
- Build and fine-tune models on real datasets using tools like Roboflow and ResNet.
Tools used: TensorFlow, Keras, OpenCV, Roboflow
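A hedged sketch of the transfer-learning workflow described above: freeze a pretrained CNN backbone and train a new classification head. Input shape and class count are placeholders:

from tensorflow import keras

# Pretrained ResNet50 backbone without its original classifier head.
base = keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False  # freeze pretrained convolutional features

model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dropout(0.3),                    # light regularization
    keras.layers.Dense(2, activation="softmax"),  # e.g. a two-class problem
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # with your image dataset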
Module 3: Language and Sequential Modeling
Learning outcome: Equip yourself to build models that understand language and sequential patterns.
- Learn how RNNs, LSTMs, and GRUs work and apply them to tasks like sentiment analysis.
- Understand modern NLP with attention, transformers, and models like BERT and DistilBERT.
- Work on real-world text data—tokenizing, embedding, and training language models.
- Fine-tune pre-trained transformer models for tasks like summarization and question answering.
Tools used: TensorFlow, Keras, Hugging Face Transformers, NLTK
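For illustration, applying a pre-trained transformer to sentiment analysis via the Hugging Face pipeline API (this downloads a default checkpoint, a DistilBERT variant, on first run):

from transformers import pipeline

# A ready-made inference pipeline around a pre-trained transformer.
classifier = pipeline("sentiment-analysis")
print(classifier("The course projects were genuinely hands-on and useful."))
# -> [{'label': 'POSITIVE', 'score': 0.99...}]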
Module 4: Generative AI, Reinforcement Learning & Deployment
Learning outcome: Explore frontier techniques in AI—creating data, training agents, and deploying scalable systems.
- Build generative models like GANs and autoencoders to create images and representations.
- Train reinforcement learning agents to learn and optimize behavior over time.
- Apply model compression and tuning to improve performance and deploy to edge or cloud.
- Use MLOps tools and frameworks to turn models into reliable, production-grade services.
Tools used: TensorFlow, Keras, FastAPI, Docker, MLflow, Streamlit, Hugging Face, ONNX
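A minimal sketch of serving a model behind a FastAPI endpoint, as in the deployment topics above; the model here is a stand-in function, where a real service would load a saved artifact:

from typing import List
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    values: List[float]

def predict_stub(values: List[float]) -> float:
    # Stand-in for a loaded model's predict call (e.g. Keras or ONNX Runtime).
    return sum(values) / max(len(values), 1)

@app.post("/predict")
def predict(features: Features):
    return {"prediction": predict_stub(features.values)}

# Run with: uvicorn main:app --reload  (assuming this file is main.py)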
Capstone: The capstone lets you prove your AI/ML chops: identify a real-world
prediction problem, build and train end-to-end models (data prep -> feature
engineering -> model selection and hyperparameter tuning -> deployment), and
present accuracy, fairness, and inference-speed results. It's your hands-on
showcase of applied machine-learning expertise.
*The curriculum is subject to changes
Curriculum*
Specialization - Data Engineering | Duration- 3 Months
Module 1: Foundations of the Modern Data Stack
Learning outcome: Understand the architecture and modeling foundations of modern data platforms.
- Explore the full lifecycle of a modern data stack—from ingestion to BI—and map common tools to each stage.
- Design dimensional models and implement fact-dimension schemas using tools like dbt and DuckDB.
- Create and test dbt models, build incremental models, and generate documentation.
- Get hands-on with DAG concepts and scheduling basics to build reliable pipelines.
Tools used: dbt, DuckDB, Apache Airflow, SQL
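An illustrative Airflow 2.x DAG showing the scheduling basics mentioned above (task logic is a placeholder):

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data")          # placeholder task logic

def transform():
    print("build and test models")  # e.g. trigger dbt runs here

with DAG(
    dag_id="daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_extract >> t_transform  # extract runs before transform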
Module 2: Distributed Processing & Cloud Platforms
Learning outcome: Learn how to process large datasets efficiently using distributed systems and cloud infrastructure.
- Use Spark's DataFrame API for transformations, joins, and aggregations; optimize for large datasets.
- Learn the fundamentals of cloud infrastructure: storage (S3), IAM roles, cost monitoring, and lifecycle rules.
- Explore Databricks and Delta Lake concepts like bronze/silver/gold layers, time travel, and VACUUM.
- Practice end-to-end processing by combining Spark jobs with cloud storage and notebook workflows.
Tools used: Apache Spark, Delta Lake, Databricks, AWS S3, AWS Console
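A small PySpark sketch of the DataFrame joins and aggregations covered here, on inline toy data:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("demo").getOrCreate()

orders = spark.createDataFrame(
    [(1, "IN", 120.0), (2, "US", 80.0), (3, "IN", 200.0)],
    ["order_id", "country", "amount"],
)
countries = spark.createDataFrame(
    [("IN", "India"), ("US", "United States")], ["country", "name"]
)

# Join, group, and aggregate: total and count of orders per country.
result = (
    orders.join(countries, "country")
    .groupBy("name")
    .agg(F.sum("amount").alias("total"), F.count("*").alias("orders"))
)
result.show()
spark.stop()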
Module 3: Real-Time Data & Stream Processing
Learning outcome: Build the ability to handle real-time data using industry-standard streaming tools.
- Understand the building blocks of stream processing using AWS Kinesis Firehose and related services.
- Build a streaming pipeline to count tweets or page views in near real-time.
- Visualize streaming metrics using Grafana dashboards with <90s lag.
- Monitor system performance and track job metrics in Prometheus and Grafana.
Tools used: AWS Kinesis, boto3, Grafana OSS, Prometheus
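An illustrative Kinesis producer using boto3; the stream name is hypothetical, the stream must already exist, and AWS credentials are assumed to be configured in the environment:

import json
import time
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Push a few synthetic page-view events into the stream.
for i in range(10):
    event = {"event": "page_view", "page": f"/product/{i}", "ts": time.time()}
    kinesis.put_record(
        StreamName="clickstream-demo",          # hypothetical stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(i % 4),                # spreads records across shards
    )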
Module 4: Data Quality, Observability & Governance
Learning outcome: Ensure trust, compliance, and visibility across your data pipelines.
- Integrate Great Expectations with Airflow for automated data validation and failure alerts.
- Learn how to observe DAG runs, Spark jobs, and system health through Prometheus-Grafana dashboards.
- Deploy OpenMetadata to scan tables and visualize data lineage; enforce compliance fields.
- Understand how governance supports BI & AI readiness through metadata, ownership, and access tracking.
Tools used: Great Expectations, Apache Airflow, OpenMetadata, Prometheus, Grafana OSS
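A hedged sketch of automated validation in the spirit of the Great Expectations integration above; the API differs across Great Expectations versions, and this uses the classic pandas-style interface:

import pandas as pd
import great_expectations as ge

# Wrap a DataFrame so expectation methods become available.
df = ge.from_pandas(pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": [120.0, None, 90.0],
}))

df.expect_column_values_to_not_be_null("order_id")
result = df.expect_column_values_to_not_be_null("amount")

if not result.success:
    print("Validation failed:", result.result["unexpected_count"], "null values")
    # In an Airflow task, raise an exception here to fail the run and alert.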
Module 5: Optimization, Security & Cost Management
Learning outcome: Design resilient, cost-effective, and secure data systems at scale.
- Apply storage and query optimizations using Delta Lake techniques (Z-ordering, caching, compaction).
- Enforce row/column-level security, implement IAM roles, and demonstrate encryption at rest and in transit.
- Detect cost drivers in streaming pipelines and apply techniques like autoscaling, retries, and caching.
- Draft pipeline runbooks and deliver before/after optimization reports with performance gains.
Tools used: Delta Lake, Apache Spark, AWS Cost Explorer, Redshift, OpenMetadata, Airflow
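A hedged sketch of the Delta Lake maintenance operations above, using the delta-spark Python API (Delta Lake 2.x+); the table path is a placeholder:

from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("maintenance").getOrCreate()

# Hypothetical gold-layer table path.
table = DeltaTable.forPath(spark, "s3://my-bucket/gold/orders")

# Compact small files and cluster data by a frequently filtered column.
table.optimize().executeZOrderBy("order_date")

# Remove unreferenced files, respecting the default retention window.
table.vacuum()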
Capstone: The capstone lets you prove your data-engineering chops: frame a
real-world data-pipeline challenge, architect and deploy an end-to-end solution
(ingestion -> storage -> transformation -> orchestration -> serving), and
present performance and cost results. It's your hands-on showcase of modern
data-platform skills.
*The curriculum is subject to changes
Key Faculty Profiles
Kartik Mudaliar
• MS, Computer Science | KTH Royal Institute of Technology
• B.Tech, IT | Dharmsinh Desai University
8+ Years of Experience
Faculty - L&T Technology | Faculty - Infosys
Upendra Kumar
• M.Tech | Mahamaya Technical University
17+ Years of Experience
Data Science & Machine Learning Trainer
Synergistic Compusoft Pvt. Ltd.
Shakul Malik
• Master’s in Computer Science | MDU, Rohtak
• B.Sc. in Computer Science | MDU, Rohtak
14+ Years of Experience
Sr. Data Architect - Atharva AI | Data Engineering Trainer - TCS | Data Analyst Trainer - Michelin Tyres
Key Faculty Profiles
Vigneshwar V
• Master's degree, Manufacturing Systems and Management |
College of Engineering, Guindy
9+ Years of Experience
Senior AI Consultant & Corporate Trainer
NTT DATA
Soumita Mukherjee
• MBA, Marketing | GIM, Goa
• Bachelor in Design | NIFT
15+ Years of Experience
Account Manager - Amazon | Marketing Manager - Pidilite Industries | Category Manager - HUL
Dr. Nitin Sachdeva
• Leading AI solutions at TVS
• Ph.D., Delhi University
19+ Years of Experience
Principal Data Scientist - TVS | Senior Manager - Protiviti India
Key Faculty Profiles
Vasudev Gupta
• Master's degree, Artificial Intelligence & Machine Learning |
Indian Institute of Technology, Kanpur
10+ Years of Experience
Head of Data Science & AI
DecisionTree Analytics & Services
Dr. Avinash Kumar Singh
• Doctor of Philosophy (Ph.D.), Information Technology |
Indian Institute Of Information Technology Allahabad
14+ Years of Experience
AI Consultant, Mentor and Coach
Robaita
Jayantilal Bhanushali
• Bachelor of Technology - BTech, Computer Science |
University of Mumbai
11+ Years of Experience
Deputy Vice President - AI in Cybersecurity
Banking Sector
Key Faculty Profiles
Rajan Chettri
• Masters in Computer Application |
Sikkim Manipal Institute of Technology - SMU
15+ Years of Experience
Senior Subject Matter Expert (SRE/DevOps)
mthree
Sample Portfolio Projects
Foundation of Data Science | Duration- 5 Months
Amazon – Smarter Product Suggestions
Amazon’s massive review data shows who bought what and how
they rated it. You’ll clean it, group purchases by customer, and build
three recommenders: top sellers, “bought X, bought Y,” and matrix
factorisation. Then test which gives the best 5 picks per user—and
wrap the best one in a mini web app for personalized suggestions.
Netflix – Will You Finish the Show?
Netflix wants to predict if viewers will finish a new series or drop off.
You’ll clean rating and genre data, create features like “days since
release” and “episodes per week,” then train three models to predict
viewing time. Use explainable charts to show insights (e.g., comedies
released on Fridays do better), and finish with a dashboard where staff
can tweak show details and see predicted viewing hours update.
Airbnb – Fair-Price Helper for Hosts
New Airbnb hosts want to know what to charge per night. Using city
listing data, you’ll clean details like room size and amenities, group
homes by neighborhood, and add flags like “summer weekend.” Test
three pricing models—basic to advanced—and compare their accuracy.
The best one powers a simple form: enter home details and date, get a
smart price suggestion with a safe range.
Starbucks – How Much Coffee to Brew?
Starbucks stores need to forecast just the right number of drinks.
Using daily sales, offers, and weather data, you’ll spot patterns like
“rainy Mondays boost latte sales.” Test three forecasting
methods—including Facebook’s Prophet—and see which predicts next
month best. Wrap it all in a one-page dashboard where managers
enter their store ID and get drink prep plans for the weeks ahead.
Tesla – Spotting EV Batteries at Risk
Tesla wants to catch battery failures early to avoid costly replacements.
Using test data like charge cycles and voltage curves, you’ll clean and
extract warning signs such as “capacity drop.” Train a rule-based
model and a neural net, and compare how early and accurately they
flag issues. Highlight key voltage curve triggers in a visual chart. End
with a short brief estimating yearly warranty claims saved.
Sample Portfolio Projects
Specialization - Artificial Intelligence and Machine Learning | Duration- 3 Months
Google Photos – Spot Your Pet Automatically
Google Photos auto-tags pets—now you’ll train it to do the same.
Using a cat-vs-dog image set, build a CNN that learns to tell them
apart. Resize and augment photos, fine-tune MobileNetV2, and
export to TensorFlow Lite for mobile testing. End with a Streamlit
app where users drop a photo and get an instant “Cat” or “Dog”
label with confidence score.
Disney+ – Auto-Generate Subtitles
Disney wants fast, accurate subtitles in many languages. You’ll
build a speech-to-text pipeline using English audio clips. Convert
audio to spectrograms, train an RNN with Attention or Transformer
to transcribe speech, and aim for over 90% accuracy. End with a
demo that takes any .wav file and outputs time-aligned
captions—helping producers speed up localisation.
Tesla – Lane-Keeping Vision Model
To drive safely, Tesla cars must detect lane lines at all times. Using
Udacity’s dash-cam dataset, you’ll label lane pixels and train a U-Net
segmentation model to highlight them in real time. Augment footage
for day/night conditions and aim for an IoU above 0.75 in low light.
Instagram – AI Style-Filter Creator
Instagram wants a pop-art selfie filter. You’ll train a CycleGAN on
selfies and pop-art paintings—no paired images needed. The model
learns to stylize photos while keeping faces clear. Monitor progress
with TensorBoard, then export the best version to Core ML. Wrap up
with an iPhone-ready demo where users snap a selfie and see it
transform live.
Apple Siri – Smarter Voice Commands
Siri needs to learn new smart-home commands like “dim the nursery
lights.” You’ll use a small speech dataset plus 20 custom-recorded
phrases, convert audio to MFCCs, and train a 1D CNN + BiLSTM to
classify intents. Visualize clusters with t-SNE, then deploy the model to
an iOS shortcut that triggers a dummy HomeKit device—proving Siri can learn
new skills with minimal data.
Sample Portfolio Projects
Specialization - Data Engineering | Duration- 3 Months
Netflix – Build a Mini Data Lakehouse
As a Netflix data engineer, you’ll collect daily viewing logs (CSVs),
drop them into S3, and process them into Bronze -> Silver -> Gold
tables with PySpark. Automate the pipeline using Airflow, saving
clean Parquet files to a lakehouse folder. Then use dbt to track
top-10 shows per country in BigQuery. A Grafana dashboard shows
file counts, runtimes, and failures—keeping pipeline health visible at
a glance.
Nike – Real-Time Clickstream Pipeline
Nike’s e-commerce team needs instant insight when shoppers
abandon their carts. You’ll stream synthetic website clicks through
Kafka, process them on-the-fly with Spark Structured Streaming,
and land 1-minute aggregates (cart starts vs checkouts) in a fast
NoSQL store such as Cassandra. Build a Grafana board showing
live conversion rates and set an alert if they dip under a set
threshold. The goal: demonstrate how real-time data engineering
can help marketing teams trigger timely discount pop-ups.
Zillow – Bulk Property ETL to a Warehouse
Zillow shares millions of price updates monthly. Your task: download
zipped CSVs, validate them with Great Expectations, and load into
PostgreSQL or Redshift weekly. Use Airflow to automate the
flow—download, unzip, clean, load—while logging outcomes. Add a
Trino/Presto layer for analysts to query data, and write a short
runbook to replay missed loads safely.
Spotify – Data Catalog & Lineage Demo
Spotify engineers often ask, “Where did this field come from?” You’ll
deploy OpenMetadata (or Apache Atlas) via Docker, connect it to
MySQL and Spark, and capture lineage as ETL jobs run. Add owners,
tags, and descriptions to key tables. Wrap up with a short video tour
showing how to trace a column from raw logs to
dashboards—showcasing data governance in action.
Uber Eats – Cost-Optimised Delivery
Data Pipeline
Uber Eats wants nightly delivery-time reports without overspending on
cloud costs. You’ll pull raw order JSONs from S3, convert them to
Parquet with AWS Glue, and load them into Redshift Spectrum with
date-based partitions to cut query costs. Use AWS Cost Explorer to
compare spend before and after, then write a one-page savings
memo—showing how smart data engineering balances speed and
budget.
Certification
On successful completion of the program, you will be eligible for the following certificate*
Specialization - 1
CERTIFICATE OF COMPLETION
This is to certify that
Name Surname
has successfully completed the
POSTGRADUATE PROGRAM IN DATA SCIENCE
WITH SPECIALIZATION IN ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
MONTH YEAR - MONTH YEAR
This program is offered by Harvard University in collaboration with
GetSmarter, an edX partner.
Name Here
CEO, Hero Vired
Specialization - 2
CERTIFICATE OF COMPLETION
This is to certify that
Name Surname
has successfully completed the
POSTGRADUATE PROGRAM IN DATA SCIENCE
WITH SPECIALIZATION IN DATA ENGINEERING
MONTH YEAR - MONTH YEAR
This program is offered by Harvard University in collaboration with
GetSmarter, an edX partner.
Name Here
CEO, Hero Vired
*Certificates are indicative and subject to change
Certification
Opportunity to gain additional industry and government approved certification
from Nasscom FutureSkills Prime.
[Sample Nasscom FutureSkills Prime certificate: Name Surname, Program Name]
Gold - 70% or above score | Silver - 60%-69% score | Bronze - 50%-59% score
Additional Certification aligned to Competency Standards
developed by SSC NASSCOM in collaboration with Industry
& approved by the Government.
Note - For the Nasscom certification, only one attempt will be provided by Hero Vired for learners who complete the program
*Certificates are indicative and subject to change
Certification
On successful completion of the program, you will be eligible for the following certificate*
Verified Course Certificate (on successful completion of courses on edX) and
Hero Vired Program Completion certificate (on successful completion of edX course and
successfully meeting the Hero Vired completion criteria)
*Certificates are indicative and subject to change
Setting you up for Success
We are geared to provide our learners with the right opportunities across growing
functions in the Data Science domain, arming them with career preparedness and
interview support.
FUNCTIONS | SALARY RANGE**

Foundation of Data Science
Functions: Data Analyst, Business Analyst, Junior Data Scientist
Salary range: Analyst - 3-5 LPA; Sr. Analyst - 5-8 LPA

Specialization 1 - Artificial Intelligence and Machine Learning
Functions: Data Analyst, Junior Data Scientist, AI/ML Analyst, Deep Learning Engineer
Salary range: Analyst - 4-6 LPA (Freshers); Sr. Analyst - 6-12 LPA (up to 3 years of relevant experience)

Specialization 2 - Data Engineering
Functions: Data Analyst, Business Analyst, ETL Developer, Data Engineer
Salary range: Analyst - 4-8 LPA (Freshers); Sr. Analyst - 6-12 LPA (up to 3 years of relevant experience)
*Note - This will be subject to performance and applicable terms & conditions
*Note - Assured interview opportunities for professionals with a current CTC of up to ₹10 LPA
**Based on experience & performance
THE HERO GROUP
IN EDUCATION
The Hero Group has made significant contributions in the field
of K12, medical education and higher education.
IN PRIMARY AND HIGHER SECONDARY EDUCATION
Raman Munjal Vidya Mandir | BCM Chain of Schools | Green Meadows School
IN HIGHER EDUCATION
ISB (Founding Members) | BML Munjal University | Dayanand Medical College & Hospital
THE HERO STORY
The Hero Group is one of the leading business conglomerates in the world.
The company saw its humble beginnings in 1956, when the four Munjal brothers
migrated to Ludhiana from Kamalia (now in Pakistan). As first-generation
entrepreneurs, they started out by manufacturing bicycle components and then
rapidly expanded the business. From there, they continued their growth story
by diversifying and deepening their expertise across domains.
Today, the US $5 billion diversified Hero Group is a conglomerate of Indian
companies with primary interests and operations in automotive
manufacturing, financing, renewable energy, electronics and education.
Wait Nahi, Great Kar! (Don't wait, make it great!)
Want more information on the program?
Reach us at 1800 309 3939 | Visit us at www.herovired.com