
A Joyful AI Research Journey🌳😊

🌳AI Projects: NLP🍀✨/NLP Deep Dive

Understanding Probability Normalization in Naive Bayes

yjyuwisely 2023. 9. 9. 23:33

In the Naive Bayes classifier, probability normalization plays a vital role whenever we want the scores we compute to reflect the relative likelihood of each class compared with the others.

When predicting class labels with the Naive Bayes formula, we compute, for each class, the product of the feature probabilities (multiplied by the class prior). These raw products generally do not sum to 1 across classes, so they are hard to interpret as probabilities. This is where normalization comes in.

Let's understand this with a basic example:

Suppose we have two classes, A and B, with equal priors, and we compute the product of feature probabilities for each:

P(features|A) = 1/2 and P(features|B) = 1/4.

To get the probability of the data point belonging to class A or B, we normalize by the sum of the two products:

Normalized P(A|features) 
= P(features|A) / [P(features|A) + P(features|B)] 
= 0.5 / (0.5 + 0.25) 
= 2/3 ≈ 0.67.

Similarly, for class B:

Normalized P(B|features) 
= P(features|B) / [P(features|A) + P(features|B)] 
= 0.25 / (0.5 + 0.25) 
= 1/3 ≈ 0.33.

After normalization, the probabilities for classes A and B sum to 1 (2/3 + 1/3 = 1). This is how Naive Bayes turns raw scores into probabilistically coherent class predictions.
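
The same normalization can be sketched in a few lines of Python. The raw_scores dictionary and the normalize helper below are illustrative names for this post, not part of any library API:

```python
def normalize(scores):
    """Divide each raw class score by the sum of all scores so the
    results form a probability distribution that sums to 1."""
    total = sum(scores.values())
    return {label: score / total for label, score in scores.items()}

# Raw products of feature probabilities from the example above,
# assuming equal class priors.
raw_scores = {"A": 1/2, "B": 1/4}

posteriors = normalize(raw_scores)
print(posteriors)  # {'A': 0.666..., 'B': 0.333...}
```

In practice, libraries such as scikit-learn perform this step internally (for example when you call predict_proba), usually working in log space so that multiplying many small feature probabilities does not underflow.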
