2.3 Model for interpreting the data

If the event “is fake” is known and “has !” is unknown, then the conditional probability function allows us to compare the probabilities of observing “has !” versus “doesn't have !” among fake articles:

\(P(\text{has ! | is fake})\) and \(P(\text{doesn't have ! | is fake})\)

If the event “is fake” is unknown and “has !” is known, then the likelihood function allows us to evaluate the relative compatibility of the observed data, “has !”, with the article being fake or real:

\(L(\text{is fake | has !})\) and \(L(\text{is real | has !})\)

where,

\(L(\text{is fake | has !}) = P(\text{has ! | is fake})\) and

\(L(\text{is real | has !}) = P(\text{has ! | is real})\)

| type | has ! = FALSE | has ! = TRUE |
|------|--------------:|-------------:|
| fake | 73.3% | 26.7% |
| real | 97.8% | 2.2% |
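As a quick numerical check, the two likelihoods can be read straight off the table and compared. The sketch below (variable names are our own) uses the table's proportions:

```python
# By definition, each likelihood equals a conditional probability of the data:
#   L(is fake | has !) = P(has ! | is fake)
#   L(is real | has !) = P(has ! | is real)
likelihood_fake = 0.267  # P(has ! | is fake), from the table
likelihood_real = 0.022  # P(has ! | is real), from the table

# The data "has !" is far more compatible with the article being fake:
likelihood_ratio = likelihood_fake / likelihood_real
print(round(likelihood_ratio, 1))
```

The ratio of roughly 12 says that exclamation points are about twelve times as common among fake articles as among real ones.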

Since the likelihood function is not a true probability model, we need a normalizing constant. Here it will be \(P(\text{has !})\), i.e., the probability of observing the data.

\[\begin{equation} \begin{split} \text{normalizing constant} &= P(\text{has !}) \\ &= P(\text{has ! }\cap \text{ is real}) + P(\text{has ! }\cap \text{ is fake}) \\ &= P(\text{is real}) \cdot P(\text{has ! | is real}) + P(\text{is fake}) \cdot P(\text{has ! | is fake}) \\ &= 0.6 \times 0.022 + 0.4 \times 0.267 \\ &= 0.12 \end{split} \end{equation}\]

The formula used to calculate the normalizing constant is also called the law of total probability.
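The calculation above can be sketched in a few lines. The prior probabilities 0.4 and 0.6 are the values used in the derivation:

```python
# Law of total probability:
#   P(has !) = P(is real) * P(has ! | is real) + P(is fake) * P(has ! | is fake)
p_fake = 0.4             # prior P(is fake)
p_real = 0.6             # prior P(is real)
p_excl_given_fake = 0.267  # P(has ! | is fake), from the table
p_excl_given_real = 0.022  # P(has ! | is real), from the table

normalizing_constant = p_real * p_excl_given_real + p_fake * p_excl_given_fake
print(round(normalizing_constant, 2))  # 0.12
```

This constant is exactly what is needed to turn the likelihoods into posterior probabilities that sum to one.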