Which technique had the fastest estimated sampling time? Haphazard sampling was the fastest.
Further thoughts: It may also be that the right-hand skew, combined with the less diffuse error distribution in the haphazard model, actually reflects a fat tail. If that is the case, then some very important outliers could be hiding in that tail. On second thought, I don't think I would want to use this sampling technique anywhere that missing the effects of outlying values or rare phenomena could have a major impact on the stakeholders involved in decision making. I wouldn't use it to help with ecological assessments around environmental safety, conservation issues involving extremely endangered species, or economically and culturally vital species such as salmon or herring populations. If there is error, the randomized model seems to represent it more effectively, with a more dispersed deviation around the mean, thereby avoiding the right skew and excess kurtosis.
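I also tried to think about how I might actually check this idea rather than just guess at it. Below is a minimal simulation sketch of my own (none of it comes from the lab itself): I assume a right-skewed population, model "haphazard" selection as a convenience bias toward easy-to-reach units, and then compare the bias, spread, and skewness of the estimation errors against simple random sampling. The lognormal population, the 75% "easy units" cutoff, and every other number are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical, right-skewed population (e.g., organism counts per quadrat).
# The lognormal shape and every parameter below are assumptions for illustration only.
population = rng.lognormal(mean=2.0, sigma=0.8, size=10_000)
true_mean = population.mean()

def skewness(x):
    # Sample skewness: positive values indicate a longer right-hand tail.
    x = np.asarray(x)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

n_trials, n = 2_000, 30
errors = {"random": [], "haphazard": []}

# Crude stand-in for haphazard selection: the sampler tends to reach only the
# "easy" (smaller) units and rarely encounters the large, outlying ones.
easy_units = population[population < np.quantile(population, 0.75)]

for _ in range(n_trials):
    errors["random"].append(rng.choice(population, n, replace=False).mean() - true_mean)
    errors["haphazard"].append(rng.choice(easy_units, n, replace=False).mean() - true_mean)

for name, err in errors.items():
    err = np.array(err)
    print(f"{name:>9}: bias {err.mean():6.2f}  sd {err.std():5.2f}  skew {skewness(err):5.2f}")
```

Whether the haphazard errors come out tighter but skewed depends entirely on the biased-selection assumption above; the point is only that the shape of the error distribution can be simulated and inspected instead of assumed.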
Was one sampling strategy more accurate than another? I believe so. I think random sampling shows a more accurate distribution of the deviation. However, haphazard sampling is faster and easier. If the errors in haphazard sampling are predictable and can be accounted for, it may still be appropriate under certain circumstances.
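On that last point, one way I could imagine "accounting for" a predictable haphazard error is a simple calibration: pair a few haphazard estimates with random-sample estimates of the same quantity, measure the average offset, and subtract it from later haphazard estimates. The numbers in the sketch below are invented just to show the arithmetic, and the whole idea only works under the strong assumption that the bias is stable.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented calibration data: 20 paired estimates of the same quantity.
# The haphazard values are assumed to run consistently low; the random
# values are assumed to be roughly unbiased. All numbers are hypothetical.
haphazard_estimates = rng.normal(loc=8.0, scale=0.5, size=20)
random_estimates = rng.normal(loc=10.0, scale=1.0, size=20)

# Estimate the systematic offset of the haphazard method.
estimated_bias = haphazard_estimates.mean() - random_estimates.mean()

# Apply the correction to a fresh haphazard estimate.
new_haphazard_value = 8.3
corrected = new_haphazard_value - estimated_bias
print(f"estimated bias: {estimated_bias:.2f}, corrected estimate: {corrected:.2f}")
```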