For Naive Bayes, how do I test if my distributions are valid?
You know this! Think about it for a second. What is a valid distribution? One that sums to 1, right? So you need to make sure that the total probability mass assigned by any distribution you learn sums to 1. For NB, you have two kinds of distributions: the prior over labels P(Y) and the class-conditionals over features given a label P(W | Y). In the case of vanilla NB, validity should be guaranteed by a proper implementation with the Counter and CounterMap classes. Validity becomes more of an issue for your feature conditionals when you use smoothing to give mass to unseen events. In short: for each distribution you learn (the prior, and the conditional P(W | Y = y) for each label y), sum the probabilities over all possible events and check that the sum equals (or is very close to) 1.
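Here is a minimal sketch of such a sanity check in Python. It assumes the distributions are stored as plain dicts mapping events to probabilities (not the course's Counter/CounterMap classes), and the example probabilities are made up purely for illustration:

```python
import math

def check_distribution(dist, tol=1e-6):
    """Return True if the probabilities in `dist` sum to (approximately) 1."""
    total = sum(dist.values())
    return math.isclose(total, 1.0, abs_tol=tol)

# Hypothetical prior over labels P(Y).
prior = {"ham": 0.7, "spam": 0.3}
assert check_distribution(prior)

# Hypothetical smoothed conditional P(W | Y = "spam"). Every word in the
# vocabulary (plus any unknown-word bucket your smoothing adds) must be
# included in the sum, or the total will fall short of 1.
conditional_spam = {"viagra": 0.5, "free": 0.3, "<UNK>": 0.2}
assert check_distribution(conditional_spam)
```

Note that the conditional check is per label: for each label y, the values P(w | y) summed over the whole vocabulary should come to 1, so with smoothing you run this check once for every label.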