Is the assumption of a Gaussian distribution reasonable?
Most outlier tests assume that the data (all values except the suspected outlier) were sampled from a Gaussian distribution. If this assumption is not true, the method may flag "outliers" that actually belong to the same distribution as the rest of the data. This is especially a problem with lognormally distributed data, whose long right tail routinely produces large values that look extreme when judged against a Gaussian, as the sketch below illustrates.
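The following is a minimal simulation sketch, not part of the original text, assuming Python with NumPy and SciPy. It uses Grubbs' test (one common Gaussian-based outlier test) purely for illustration: the test is applied to samples in which every value comes from the same distribution, and the sketch compares how often it "finds" an outlier when the data are Gaussian versus lognormal.

```python
import numpy as np
from scipy import stats

def grubbs_flags_outlier(x, alpha=0.05):
    """Return True if a two-sided Grubbs' test flags the most extreme value."""
    n = len(x)
    g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)      # Grubbs' statistic
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)            # critical t value
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return g > g_crit

rng = np.random.default_rng(0)     # illustrative seed and sample sizes
n, trials = 20, 2000

# Fraction of samples in which an "outlier" is flagged, even though every
# value was drawn from the same distribution (no true outliers exist).
gauss_rate = np.mean([grubbs_flags_outlier(rng.normal(size=n))
                      for _ in range(trials)])
lognorm_rate = np.mean([grubbs_flags_outlier(rng.lognormal(size=n))
                        for _ in range(trials)])

print(f"Flag rate, Gaussian data:  {gauss_rate:.1%}")   # close to the 5% alpha level
print(f"Flag rate, lognormal data: {lognorm_rate:.1%}") # considerably higher
```

With Gaussian data the test flags a value in roughly the fraction of samples set by alpha, but with lognormal data it does so far more often, mistaking ordinary values from the long right tail for outliers.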