
Computing the Posterior Probability Using Bayes' Theorem

yjyuwisely 2023. 9. 11. 22:24

To determine P(J∣F,I), the probability that the speaker is Jill Stein given that the words 'freedom' and 'immigration' were spoken, we apply Bayes' Theorem:

P(J∣F,I) = [P(J) × P(F∣J) × P(I∣J)] / P(F,I)

Where:

  • P(J) is the prior probability (the overall likelihood that a given speech was delivered by Jill Stein). In our case, P(J) = 0.5.
  • P(F∣J) and P(I∣J) are the likelihoods. These represent the probabilities of Jill Stein saying the words 'freedom' and 'immigration', respectively. Both are given as 0.1.
  • P(F,I) is the evidence, or marginal likelihood. This is the overall probability of the words 'freedom' and 'immigration' being said, regardless of the speaker. It can be computed by summing over both candidates (with G denoting the other candidate):

P(F,I) = P(J) × P(F∣J) × P(I∣J) + P(G) × P(F∣G) × P(I∣G)

By plugging in the numbers, we can compute P(J∣F,I):

P(J∣F,I) = (0.5 × 0.1 × 0.1) / P(F,I) = 0.005 / P(F,I)

Once P(F,I) has been calculated from the formula above, substitute its value into this equation to obtain P(J∣F,I).
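As a quick check, here is a minimal Python sketch of the calculation. The values for Jill Stein (P(J) = 0.5, P(F∣J) = P(I∣J) = 0.1) come from the text above; the prior and likelihoods for the other candidate (p_g, p_f_given_g, p_i_given_g) are hypothetical placeholders, since they are not specified here, so substitute your own figures.

```python
# Minimal sketch of the Bayes' Theorem calculation above.
# NOTE: the values for the other candidate are illustrative assumptions only.

# Given in the post
p_j = 0.5            # P(J): prior probability that Jill Stein is the speaker
p_f_given_j = 0.1    # P(F|J): probability she says 'freedom'
p_i_given_j = 0.1    # P(I|J): probability she says 'immigration'

# Assumed for illustration (replace with the actual values for candidate G)
p_g = 0.5            # P(G): prior for the other candidate
p_f_given_g = 0.7    # P(F|G): hypothetical likelihood of 'freedom'
p_i_given_g = 0.2    # P(I|G): hypothetical likelihood of 'immigration'

# Evidence: P(F,I) = P(J)P(F|J)P(I|J) + P(G)P(F|G)P(I|G)
p_f_i = p_j * p_f_given_j * p_i_given_j + p_g * p_f_given_g * p_i_given_g

# Posterior: P(J|F,I) = P(J)P(F|J)P(I|J) / P(F,I)
p_j_given_f_i = (p_j * p_f_given_j * p_i_given_j) / p_f_i

print(f"P(F,I)   = {p_f_i:.4f}")
print(f"P(J|F,I) = {p_j_given_f_i:.4f}")
```

The numerator is always 0.5 × 0.1 × 0.1 = 0.005; the resulting posterior depends entirely on the values you plug in for the other candidate.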

This approach, rooted in Bayes' Theorem, provides the foundation for the Naive Bayes classifier in the realm of Natural Language Processing.
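To show where this leads, here is a small, hedged sketch of a Naive Bayes text classifier using scikit-learn's MultinomialNB. The tiny corpus and speaker labels are made up purely for illustration; the point is that the classifier internally applies the same Bayes' Theorem logic, with the 'naive' assumption that words are conditionally independent given the speaker.

```python
# Illustrative sketch: Naive Bayes text classification (hypothetical toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy corpus of speeches labeled by speaker (illustrative only)
speeches = [
    "freedom and equality for all",
    "immigration reform and freedom",
    "tax policy and the economy",
    "jobs the economy and trade",
]
speakers = ["Jill Stein", "Other", "Other", "Other"][:0] or [
    "Jill Stein", "Jill Stein", "Other", "Other"]

# Bag-of-words features: each speech becomes a vector of word counts
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(speeches)

# MultinomialNB estimates P(speaker) and P(word | speaker) from the counts
clf = MultinomialNB()
clf.fit(X, speakers)

# Posterior probabilities P(speaker | words) for a new phrase
new_phrase = vectorizer.transform(["freedom and immigration"])
print(dict(zip(clf.classes_, clf.predict_proba(new_phrase)[0])))
```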
