UNIT - 2 Data Analysis

Data analysis involves collecting and analyzing raw data to draw insights and make informed decisions. It includes specifying data needs, collecting data from various sources, cleaning the data, analyzing it using statistical techniques, interpreting results, and reporting findings. The main goals are to understand past performance, identify patterns and relationships, and make predictions to improve business outcomes.


What is Data Analysis?

“Data is everywhere”: in spreadsheets, on social media platforms, in product reviews and feedback. In today’s information age it is created at blinding speed and, when analyzed correctly, can be a company’s most valuable asset. “To grow your business, even to grow in your life, sometimes all you need to do is analysis!”

Table of Contents
 What is Data Analysis?
 Why Data Analysis is important?
 Types of Data Analysis Methods
 What is the Data Analysis Process?
 Top Data Analysis Tools
 How to Become a Data Analyst?

What is Data Analysis?


Before jumping into the term “Data Analysis”, let’s discuss the term “Analysis”.
Analysis is a process of answering “How?” and “Why?”. For example, how was
the growth of XYZ Company in the last quarter? Or why did the sales of XYZ
Company drop last summer? To answer such questions we take the data that we
already have and filter out what we need. This filtered subset of the larger
collection becomes the target of data analysis. Sometimes we take multiple
datasets and analyze them together to find a pattern. For example, we might take
summer sales data for three consecutive years to find out whether last summer’s
fall in sales was caused by a specific product we were selling or was a recurring
problem. It’s all about looking for a pattern, and we analyze things or events that
have already happened in the past.

Why is Data Analysis Important?


Let’s say you own a business that sells dairy products. Your business model is
pretty simple: you buy products from the supplier and sell them to customers.
Let’s assume the biggest challenge for your business is finding the right amount of
stock at a given time. You can’t stock excess dairy products because they are
perishable; if they go bad you can’t sell them, resulting in a direct loss for you.
At the same time, you cannot understock, as that may result in the loss of potential
customers. Data analytics can help you predict customer demand at a given
time. Using that result, you can stock your supplies appropriately and, in turn,
minimize losses. In simple words, using data analysis you can find out the times
of the year when your store has the fewest or the most customers, and stock your
supplies accordingly. These are some of the reasons why analysis of data is
important.

Types of Data Analysis Methods


The major Data Analysis methods are:
1. Descriptive Analysis
2. Diagnostic Analysis
3. Predictive Analysis
4. Prescriptive Analysis
5. Statistical Analysis

1. Descriptive Analysis

Descriptive analysis looks at past events for insight into how to approach future
events. It examines past performance by mining historical data to understand the
causes of success or failure. Almost all management reporting, such as sales,
marketing, operations, and finance, uses this type of analysis.
Example: Take DMart. We can look at a product’s sales history and find out
which products have sold the most or are in the greatest demand by looking at the
product sales trends, and based on that analysis we can decide to stock those
items in larger quantities for the coming year.
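
To make this concrete, here is a minimal descriptive-analysis sketch in pandas; the product names and unit counts are invented for illustration, not real DMart data.

```python
# A minimal descriptive-analysis sketch with pandas. The products and unit
# counts below are invented for illustration, not real DMart data.
import pandas as pd

sales = pd.DataFrame({
    "product": ["rice", "milk", "rice", "soap", "milk", "rice"],
    "units":   [10, 5, 12, 3, 7, 9],
})

# Which products sold the most historically?
print(sales.groupby("product")["units"].sum().sort_values(ascending=False))
```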

2. Diagnostic Analysis

Diagnostic analysis works hand in hand with descriptive analysis. Where
descriptive analysis finds out what happened in the past, diagnostic analysis
finds out why it happened, what measures were taken at the time, and how
frequently it has happened. It basically gives a detailed explanation of a
particular scenario by understanding behavior patterns.
Example: Let’s take the example of DMart again. If we want to find out why
a particular product is in high demand, is it because of the brand or because
of the quality? All this information can easily be identified using diagnostic
analysis.

3. Predictive Analysis

Using the information we have received from descriptive and diagnostic analysis,
we can predict future data: predictive analysis finds out what is likely to happen
in the future. Working with future data doesn’t mean we have become
fortune-tellers; by looking at past trends and behavioral patterns we forecast
what might happen.
Example: The best examples are the Amazon and Netflix recommender
systems. You might have noticed that whenever you buy a product on Amazon,
the checkout page shows a recommendation saying customers who purchased this
item also purchased another product. That recommendation is based on past
customer purchase behavior: by analyzing it, analysts create associations between
products, which is why you see recommendations when you buy something.
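
Below is a toy sketch of the purchase-association idea behind such recommenders; the orders are invented for illustration, and real recommender systems use far more sophisticated collaborative-filtering techniques.

```python
# A toy sketch of the purchase-association idea: count how often pairs of
# products appear in the same order. The orders below are invented.
from collections import Counter
from itertools import combinations

orders = [
    {"phone", "case", "charger"},
    {"phone", "case"},
    {"laptop", "mouse"},
    {"phone", "charger"},
]

pair_counts = Counter()
for order in orders:
    for pair in combinations(sorted(order), 2):
        pair_counts[pair] += 1

# Products most often bought together drive "customers also bought" lists.
print(pair_counts.most_common(3))
```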
4. Prescriptive Analysis

This is an advanced method built on predictive analysis. When you predict
something, you usually end up with many options and get confused about which
one will actually work. Prescriptive analysis helps find the best option to make
it happen. Where predictive analysis forecasts future data, prescriptive analysis
helps make what we have forecasted happen. It is the highest level of analysis,
used for choosing the best optimal solution by looking at descriptive, diagnostic,
and predictive data.
Example: The best example is Google’s self-driving car: by looking at past
trends and forecasted data it identifies when to turn or when to slow down,
much like a human driver.

5. Statistical Analysis

Statistical analysis is a technique for analyzing datasets in order to summarize
their important and main characteristics, generally with the help of some visual
aids. This approach can be used to gather knowledge about the following aspects
of the data:
1. The main characteristics or features of the data.
2. The variables and their relationships.
3. The important variables that can be used for our problem.

What is the Data Analysis Process?


Data analysis can transform raw data into meaningful insights for your business
and your decision-making. While there are several different ways of collecting
and interpreting this data, most data-analysis processes follow the same six
general steps.
1. Specify Data Requirements
2. Collect Data
3. Clean and Process the Data
4. Analyse the Data
5. Interpretation
6. Report
1. Specify Data Requirements
In step 1 of the data analysis process, define what you want to answer through
data. This typically stems from a business problem or question, such as:
 How can we reduce production costs without sacrificing quality?
 How do customers view our brand?
 How can we increase sales opportunities using our current resources?
2. Collect Data
 Find Your Source: Determine what information can be collected from
existing sources, and what you need to find elsewhere.
 Standardize Collection: Create a file storage and naming system ahead of time.
 Keep Track: Keep data organized in a log with dates and add any source notes
as you go.
Internal Sources: Customer service data, Marketing analytics, Sales statistics,
Human resource data
External Sources: Social media APIs, Google public data, Public government
data, Global finance data, Official research statistics

3. Clean and Process the Data


Ensure your data is correct and usable by identifying and removing any errors or
corruption.
 Monitor Errors: Keep a record and look at trends of where most errors are
coming from.
 Validate Accuracy: Research and invest in data tools that allow you to clean
your data in real-time.
 Scrub for Duplicate Data: Identify and remove duplicates so you save time
during analysis.
 Delete all Formatting: Standardise the look of your data by removing any
formatting styles.
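
As a small illustration, here is a minimal pandas sketch of the scrubbing and formatting steps above; the DataFrame is invented for the example.

```python
# A minimal data-cleaning sketch with pandas: delete formatting quirks and
# scrub duplicate rows. The DataFrame is invented for the example.
import pandas as pd

df = pd.DataFrame({
    "customer": ["Alice", "alice ", "Bob", "Alice"],
    "spend":    [100, 100, 250, 100],
})

df["customer"] = df["customer"].str.strip().str.lower()  # standardize the look
df = df.drop_duplicates()                                # remove duplicates
print(df)
```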
4. Analyse the Data
Different data analysis techniques allow you to understand, interpret, and derive
conclusions based on your business question or problem.

Descriptive Analysis
 Analysis of data that helps show variables in a meaningful way and find
patterns.
 Measure of Tendency: the central position of a frequency distribution for a
group of data.
 Measure of Spread: summarising a group of data by describing how spread
out the scores are.

Inferential Analysis
 Exploring the relationship between multiple variables to make predictions.
 Correlation: describes the relationship between two variables.
 Regression: shows or predicts the relationship between two variables.
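
A minimal sketch of both families, computed with pandas on invented numbers, might look like this:

```python
# Descriptive measures (central tendency, spread) and a simple inferential
# correlation, computed with pandas on invented numbers.
import pandas as pd

data = pd.Series([21, 32, 46, 28, 37])
print("mean:", data.mean())   # measure of central tendency
print("std: ", data.std())    # measure of spread

ads = pd.Series([1, 2, 3, 4, 5])
sales = pd.Series([10, 19, 31, 42, 48])
print("correlation:", ads.corr(sales))  # relationship between two variables
```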

5. Interpretation
As you interpret the result of your data, ask yourself these key questions:
 Does the data answer your question? How?
 Does the data help you defend against any objections? How?
 Are there any limitations or angles you haven’t considered?
6. Report
Data Analysis can be used to report to different people:
 A primary collaborator or client
 Executive and business leaders
 A technical supervisor

 Keep it Succinct: Organize data in a way that makes it easy for different
audiences to skim through it to find the information most relevant to them.
 Make it Visual: Use data visualization techniques, such as tables and charts,
to communicate the message clearly.
 Include an Executive Summary: This allows someone to analyze your
findings upfront and harness your most important points to influence their
decisions.

What is NLP?
NLP stands for Natural Language Processing, a field at the intersection of
computer science, human language, and artificial intelligence. It is the
technology used by machines to understand, analyse, manipulate, and interpret
human language. It helps developers organize knowledge for performing
tasks such as translation, automatic summarization, Named Entity
Recognition (NER), speech recognition, relationship extraction, and topic
segmentation.

Natural Language Processing – Overview


Natural language processing (NLP) is a subfield of Artificial Intelligence (AI).
It is a widely used technology for personal assistants across various business
fields. The technology takes the speech provided by the user, breaks it down
for proper understanding, and processes it accordingly. It is a recent and
effective approach, which is why it is in high demand in today’s market. NLP
is an upcoming field in which many transitions, such as compatibility with smart
devices and interactive conversations with humans, have already been made
possible. Natural Language Processing combines computer science, linguistics,
and machine learning to study how computers and humans communicate in
natural language. The goal of NLP is for computers to be able to interpret and
generate human language. This not only improves the efficiency of work done by
humans but also helps in interacting with machines. NLP bridges the gap of
interaction between humans and electronic devices.
Some common techniques used in NLP include:
1. Tokenization: the process of breaking text into individual words or phrases.
2. Part-of-speech tagging: the process of labeling each word in a sentence with
its grammatical part of speech.
3. Named entity recognition: the process of identifying and categorizing named
entities, such as people, places, and organizations, in text.
4. Sentiment analysis: the process of determining the sentiment of a piece of
text, such as whether it is positive, negative, or neutral.
5. Machine translation: the process of automatically translating text from one
language to another.
6. Text classification: the process of categorizing text into predefined categories
or topics.
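
As a small illustration of the first two techniques, here is a sketch using the NLTK library (assuming nltk is installed and its tokenizer and tagger data have been downloaded); other libraries such as spaCy would work equally well.

```python
# A minimal sketch of tokenization and part-of-speech tagging with NLTK.
# Assumes nltk is installed and its tokenizer/tagger data have been fetched,
# e.g. nltk.download("punkt") and nltk.download("averaged_perceptron_tagger").
import nltk

text = "The battery life of this camera is too short."
tokens = nltk.word_tokenize(text)   # 1. tokenization
tagged = nltk.pos_tag(tokens)       # 2. part-of-speech tagging
print(tokens)
print(tagged)
```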
Recent advances in deep learning, particularly in the area of neural networks, have
led to significant improvements in the performance of NLP systems. Deep
learning techniques such as Convolutional Neural Networks (CNNs) and
Recurrent Neural Networks (RNNs) have been applied to tasks such as sentiment
analysis and machine translation, achieving state-of-the-art results.

Advantages of NLP
o NLP helps users to ask questions about any subject and get a direct
response within seconds.
o NLP offers exact answers to the question, meaning it does not offer
unnecessary or unwanted information.
o NLP helps computers communicate with humans in their own languages.
o It is very time efficient.
o Most companies use NLP to improve the efficiency and accuracy of
documentation processes and to identify information in large databases.

Disadvantages of NLP
A list of disadvantages of NLP is given below:

o NLP may not show context.
o NLP is unpredictable.
o NLP may require more keystrokes.
o NLP is unable to adapt to new domains and has limited functionality;
that is why NLP is typically built for a single, specific task.

Working of Natural Language Processing (NLP)

Working in natural language processing (NLP) typically involves using
computational techniques to analyze and understand human language.
This can include tasks such as language understanding, language
generation, and language interaction.
The field is divided into three different parts:
1. Speech Recognition — The translation of spoken language into text.
2. Natural Language Understanding (NLU) — The computer’s ability to
understand what we say.
3. Natural Language Generation (NLG) — The generation of natural
language by a computer.
1. Speech Recognition:
 First, the computer must take natural language and convert it into
machine-readable language. This is what speech recognition or
speech-to-text does. This is the first step of NLU.
 Hidden Markov Models (HMMs) are used in the majority of voice
recognition systems nowadays. These are statistical models that use
mathematical calculations to determine what you said in order to
convert your speech to text.
 HMMs do this by listening to you talk, breaking it down into small units
(typically 10-20 milliseconds), and comparing it to pre-recorded
speech to figure out which phoneme you uttered in each unit (a
phoneme is the smallest unit of speech). The program then examines
the sequence of phonemes and uses statistical analysis to determine
the most likely words and sentences you were speaking.
2. Natural Language Understanding (NLU):
The next and hardest step of NLP is the understanding part.
 First, the computer must comprehend the meaning of each word. It
tries to figure out whether the word is a noun or a verb, whether it’s in
the past or present tense, and so on. This is called Part-of-Speech
tagging (POS).
 A lexicon (a vocabulary) and a set of grammatical rules are also built
into NLP systems.
 The machine should be able to grasp what you said by the conclusion
of the process. There are several challenges in accomplishing this
when considering problems such as words having several meanings
(polysemy) or different words having similar meanings (synonymy),
but developers encode rules into their NLU systems and train them to
learn to apply the rules correctly.
3. Natural Language Generation (NLG):
NLG is much simpler to accomplish. NLG converts a computer’s
machine-readable language into text and can also convert that text into
audible speech using text-to-speech technology.
 First, the NLP system identifies what data should be converted to text.
If you asked the computer a question about the weather, it most likely
did an online search to find your answer, and from there it decides that
the temperature, wind, and humidity are the factors that should be
read aloud to you.
 Then, it organizes the structure of how it’s going to say it. This is
similar to NLU, except backward. The NLG system constructs full
sentences using a lexicon and a set of grammar rules.
 Finally, text-to-speech takes over. The text-to-speech engine uses a
prosody model to evaluate the text and identify breaks, duration, and
pitch. The engine then combines all the recorded phonemes into one
cohesive string of speech using a speech database.

Applications of NLP
There are the following applications of NLP -

1. Question Answering

Question Answering focuses on building systems that automatically answer
questions asked by humans in a natural language.

2. Spam Detection

Spam detection is used to detect unwanted e-mails getting to a user's inbox.


3. Sentiment Analysis

Sentiment Analysis is also known as opinion mining. It is used on the web to
analyse the attitude, behaviour, and emotional state of the sender. This
application is implemented through a combination of NLP (Natural Language
Processing) and statistics, assigning values to the text (positive, negative, or
neutral) and identifying the mood of the context (happy, sad, angry, etc.).

4. Machine Translation

Machine translation is used to translate text or speech from one natural
language to another natural language.
Example: Google Translate

5. Spelling correction

Microsoft provides spelling correction in word-processing software such as
MS Word and PowerPoint.

6. Speech Recognition

Speech recognition is used for converting spoken words into text. It is used in
applications, such as mobile, home automation, video recovery, dictating to
Microsoft Word, voice biometrics, voice user interface, and so on.

7. Chatbot

Implementing chatbots is one of the important applications of NLP. Chatbots
are used by many companies to provide chat-based customer service.
8. Information extraction

Information extraction is one of the most important applications of NLP. It is
used for extracting structured information from unstructured or semi-structured
machine-readable documents.

9. Natural Language Understanding (NLU)

It converts a large set of text into more formal representations, such as first-
order logic structures, that are easier for computer programs to manipulate
than raw natural language.

 Algorithmic Trading: Algorithmic trading is used for predicting stock
market conditions. Using NLP, this technology examines news
headlines about companies and stocks and attempts to comprehend
their meaning in order to determine if you should buy, sell, or hold
certain stocks.

 Summarizing Information: On the internet there is a lot of
information, and much of it comes in the form of long documents or
articles. NLP is used to decipher the meaning of the data and then
provide shorter summaries so that humans can comprehend it more
quickly.

Predictive Modeling: History, Types, Applications
What Is Predictive Modeling?
Predictive modeling uses known results to create, process, and
validate a model that can be used to forecast future outcomes. It is a
tool used in predictive analytics, a data mining technique that
attempts to answer the question, "what might happen in the future?"

KEY TAKEAWAYS

 Predictive modeling uses known results to create, process, and
validate a model that can be used to make future predictions.
 Regression and neural networks are two of the most widely
used predictive modeling techniques.
 Companies can use predictive modeling to forecast events,
customer behavior, and financial, economic, and market risks.

Types of Predictive Modeling


Several different types of predictive modeling can be used to
analyze most datasets to reveal insights into future events.

Classification Models

Classification models use machine learning to place data into
categories or classes based on criteria set by a user. There are
several types of classification algorithms, some of which are:

 Logistic regression: An estimate of the probability of an event
occurring, usually as a binary classification such as a yes or no answer.
 Decision trees: A series of yes/no, if/else, or other binary
results placed into a visualization known as a decision tree.
 Random forest: An algorithm that combines unrelated
decision trees using classification and regression.
 Neural networks: Machine learning models that review large
volumes of data for correlations that emerge only after millions
of data points are reviewed.
 Naïve Bayes: A modeling system based on Bayes' Theorem,
which determines conditional probability.
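
A minimal classification sketch using scikit-learn's logistic regression, on a synthetic dataset generated purely for illustration, might look like this:

```python
# A minimal classification sketch with scikit-learn's logistic regression.
# The dataset is synthetic, generated purely for illustration.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)   # yes/no classifier
print("test accuracy:", model.score(X_test, y_test))
```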

Clustering Models

Clustering is a technique that groups data points. Analysts assume that data
in the same group should have similar characteristics, while data in different
groups should have very different properties. Some popular clustering
algorithms are:

 K-Means: K-means is a modeling technique that uses groups to identify
the central tendencies of different groups of data.
 Mean-Shift: In mean-shift modeling, the mean of a group is
shifted by the algorithm so that "bubbles," or maxima of a
density function, are identified. When the points are plotted on
a graph, data appear to be grouped around central points
called centroids.
 Density-Based Spatial Clustering of Applications with Noise (DBSCAN):
DBSCAN is an algorithm that groups data points together
based on an established distance between them. This model
establishes relationships between different groups and
identifies outliers.
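
Here is a minimal K-Means sketch with scikit-learn; the six 2-D points are invented so the two clusters are obvious.

```python
# A minimal K-Means sketch with scikit-learn: six invented 2-D points that
# clearly form two groups around two centroids.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[1, 2], [1, 4], [1, 0], [10, 2], [10, 4], [10, 0]])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

print(kmeans.labels_)           # cluster assignment for each point
print(kmeans.cluster_centers_)  # the learned centroids
```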

Outlier Models

A dataset always has outliers (values outside its normal range). For
instance, if you had the numbers 21, 32, 46, 28, 37, and 299, you
can see the first five numbers are somewhat similar, but 299 is too
far from the others. Thus, it is considered an outlier. Some
algorithms used to identify outliers are:

 Isolation Forest: An algorithm that detects few and different data
points in a sample.
 Minimum Covariance Determinant (MCD): Covariance is the
relationship of change between two variables. The MCD
measures the mean and covariance of a dataset that
minimizes the influence outliers have on the data.
 Local Outlier Factor (LOF): An algorithm that identifies
nearest neighboring data points and assigns scores, allowing
those furthest away to be identified as outliers.
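
A minimal outlier-detection sketch with scikit-learn's Isolation Forest, reusing the small illustrative sample from the text above, might look like this:

```python
# A minimal outlier-detection sketch with Isolation Forest, reusing the
# small illustrative sample from the text above.
import numpy as np
from sklearn.ensemble import IsolationForest

values = np.array([21, 32, 46, 28, 37, 299]).reshape(-1, 1)
clf = IsolationForest(random_state=0).fit(values)

# predict() returns -1 for outliers and 1 for inliers; the extreme value
# 299 is expected to be flagged as -1 here.
print(clf.predict(values))
```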

Time Series Models

Commonly used before other types of modeling, time series modeling uses
historical data to forecast events. A few of the common time series models are:

 ARIMA: The autoregressive integrated moving average model uses
autoregression, integration (differences between observations), and
moving averages to forecast trends or results.
 Moving Average: The moving average uses the average of a
specified period, such as 50 or 200 days, which smooths out
fluctuations.
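
For example, a 3-period moving average can be computed in one line with pandas (the sales figures are invented):

```python
# A 3-period moving average with pandas; the daily sales figures are invented.
import pandas as pd

sales = pd.Series([120, 135, 128, 150, 149, 160, 155, 170])
print(sales.rolling(window=3).mean())  # smooths out short-term fluctuations
```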
Advantages and Disadvantages of Predictive
Modeling
Predictive Modeling Pros and Cons

Advantages
 Easy to generate actionable insights

 Can test different scenarios

 Increases decision-making speed

Disadvantages
 Computations can be difficult to explain

 Bias due to human input

 High learning curve

What is Sentiment Analysis?


Sentiment analysis is a popular task in natural language processing. The
goal of sentiment analysis is to classify text based on the mood or
mentality expressed in it, which can be positive, negative, or neutral.

Sentiment Analysis
Sentiment analysis is the process of classifying whether a block of text is
positive, negative, or neutral. The goal of sentiment analysis is to analyze
people’s opinions in a way that can help businesses expand. It focuses not
only on polarity (positive, negative, and neutral) but also on emotions (happy,
sad, angry, etc.). It uses various Natural Language Processing approaches,
such as rule-based, automatic, and hybrid methods.
Example: If we want to analyze whether a product satisfies customer
requirements, or whether there is a need for this product in the market, we
can use sentiment analysis to monitor that product’s reviews. Sentiment
analysis is also efficient when there is a large set of unstructured data that
we want to classify by automatically tagging it. Net Promoter Score (NPS)
surveys are used extensively to learn how a customer perceives a product or
service. Sentiment analysis has also gained popularity for its ability to process
large volumes of NPS responses and obtain consistent results quickly.

Why perform Sentiment Analysis?

Sentiment analysis captures the contextual meaning of words that indicates
the social sentiment toward a brand, and it helps a business determine
whether the product it is manufacturing will find demand in the market.
According to surveys, 80% of the world’s data is unstructured. That data,
whether it comes in the form of emails, texts, documents, or articles, needs
to be analyzed and put into a structured form.
1. Sentiment analysis stores data in an efficient, cost-friendly manner.
2. Sentiment analysis solves real-time issues and can help you handle
real-time scenarios.

Types of Sentiment Analysis

1. Fine-grained sentiment analysis: This depends on polarity. The
categories can be very positive, positive, neutral, negative, or very
negative, with ratings on a scale of 1 to 5: a rating of 5 is very positive,
2 is negative, 3 is neutral, and so on.
2. Emotion detection: Sentiments such as happy, sad, angry, upset, jolly,
and pleasant come under emotion detection. It is often implemented
with the lexicon method of sentiment analysis.
3. Aspect-based sentiment analysis: This focuses on a particular aspect.
For instance, if a person wants to check a feature of a cell phone, such
as the battery, screen, or camera quality, aspect-based analysis is used.
4. Multilingual sentiment analysis: This deals with multiple languages,
where the classification into positive, negative, and neutral still needs
to be done. It is highly challenging and comparatively difficult.
Sentiment analysis focuses on the polarity of a text (positive, negative,
neutral), but it also goes beyond polarity to detect specific feelings and
emotions (angry, happy, sad, etc.), urgency (urgent vs. not urgent), and
even intentions (interested vs. not interested).

Depending on how you want to interpret customer feedback and queries, you
can define and tailor your categories to meet your sentiment analysis needs.
In the meantime, here are some of the most popular types of sentiment
analysis:

Graded Sentiment Analysis

If polarity precision is important to your business, you might consider
expanding your polarity categories to include different levels of positive and
negative:

 Very positive
 Positive
 Neutral
 Negative
 Very negative

This is usually referred to as graded or fine-grained sentiment analysis, and
could be used to interpret 5-star ratings in a review, for example:

 Very Positive = 5 stars
 Very Negative = 1 star
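
This mapping is trivial to express in code; here is a sketch with illustrative cutoffs:

```python
# A trivial illustrative mapping from star ratings to graded labels.
GRADES = {5: "very positive", 4: "positive", 3: "neutral",
          2: "negative", 1: "very negative"}

for stars in (5, 3, 1):
    print(stars, "stars ->", GRADES[stars])
```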

Emotion detection

Emotion detection sentiment analysis allows you to go beyond polarity to
detect emotions, like happiness, frustration, anger, and sadness.

Many emotion detection systems use lexicons (i.e. lists of words and the
emotions they convey) or complex machine learning algorithms.

One of the downsides of using lexicons is that people express emotions in
different ways. Some words that typically express anger, like bad or kill
(e.g. your product is so bad or your customer support is killing me) might
also express happiness (e.g. this is bad ass or you are killing it).

Aspect-based Sentiment Analysis

Usually, when analyzing the sentiment of a text you’ll want to know which
particular aspects or features people are mentioning in a positive, neutral, or
negative way.
That’s where aspect-based sentiment analysis can help. For example, in the
product review "The battery life of this camera is too short", an aspect-based
classifier would be able to determine that the sentence expresses a negative
opinion about the battery life of the product in question.

Multilingual sentiment analysis

Multilingual sentiment analysis can be difficult. It involves a lot of
preprocessing and resources. Most of these resources are available online
(e.g. sentiment lexicons), while others need to be created (e.g. translated
corpora or noise detection algorithms), but you’ll need to know how to code to
use them.

Alternatively, you could detect language in texts automatically with a language
classifier, then train a custom sentiment analysis model to classify texts in the
language of your choice.


How does Sentiment Analysis work?

There are four approaches used:

1. Rule-based approach: The lexicon method, tokenization, and parsing
fall under the rule-based approach. The approach counts the number
of positive and negative words in the given dataset: if there are more
positive words than negative words the sentiment is positive, and
vice versa.
2. Machine Learning approach: This approach works with machine
learning techniques. First, models are trained on datasets and predictive
analysis is done; next, words are extracted from the text. This extraction
can be done using techniques such as Naive Bayes, Support Vector
Machines, hidden Markov models, and conditional random fields.
3. Neural network approach: In the last few years neural networks
have evolved at a very fast rate. This approach uses artificial neural
networks, which are inspired by the structure of the human brain, to
classify text into positive, negative, or neutral sentiments. It includes
Recurrent Neural Networks, Long Short-Term Memory, Gated Recurrent
Units, etc. to process sequential data like text.
4. Hybrid approach: This is the combination of two or more approaches,
e.g. the rule-based and machine learning approaches. The advantage is
that its accuracy is higher than that of the other approaches.
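
As an illustration of the rule-based approach, here is a minimal word-counting sketch; the tiny positive and negative lexicons are invented for the example, whereas real systems use large curated lexicons.

```python
# A minimal rule-based sentiment sketch: count positive vs. negative words.
# The tiny lexicons are invented; real systems use large curated lexicons.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "poor", "hate", "terrible", "sad"}

def rule_based_sentiment(text: str) -> str:
    tokens = text.lower().split()              # naive tokenization
    pos = sum(t in POSITIVE for t in tokens)   # positive-word count
    neg = sum(t in NEGATIVE for t in tokens)   # negative-word count
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

print(rule_based_sentiment("the camera is great but the battery is bad bad"))
```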

Applications
Sentiment analysis has a wide range of applications:
1. Social Media: Comments on social media platforms such as Instagram
are analyzed and categorized as positive, negative, or neutral.
2. Customer Service: In the Play Store, ratings in the form of 1 to 5 stars
are processed with the help of sentiment analysis approaches.
3. Marketing Sector: In marketing, a particular product can be reviewed
as good or bad.
4. Reviewer side: Reviewers can look at the comments, check them, and
give an overall review of the product.

Challenges of Sentiment Analysis

There are major challenges in the sentiment analysis approach:

1. If the sentiment is carried by tone, it becomes really difficult to detect
whether a comment is pessimistic or optimistic.
2. If the data is in the form of emoji, you need to detect whether it
conveys something good or bad.
3. Detecting ironic, sarcastic, or comparative comments is really hard.
4. Correctly handling comparative and neutral statements is a big task.
