When code doesn’t behave the same way every time, it’s tough to unit test it. Sometimes we want a random element in the code. For instance, maybe we want an error report now and then when a certain problem occurs, but every time would be too much. How can we test that something occurs 10% of the time?
Code that uses random number generation to determine behavior will never be perfectly predictable. We can run a method many times and expect to get close to the desired percentage, but not always. When tests fail occasionally for legitimate reasons, you get flickers in the continuous integration build, and people start ignoring build failures. “Oh, that one just fails sometimes. No big deal.” When you hear “the build is red sometimes, no big deal,” that’s a red flag right there. Let’s avoid that.
Code that is random in production should not be random in tests. One way to avoid this is to pass in a seed for random number generation. This gets you the same random number generation each run, but it’s cryptic (“Why should I get exactly 36% response from this test?”) and brittle to refactoring. If the order of the checks in the tested code changes, the test results for that seed will change.
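For illustration, here’s what the seed approach looks like in Scala (a minimal sketch; the seed 42 is arbitrary):

import scala.util.Random

// The same seed produces the same sequence of doubles on every run,
// so the test is deterministic... until the code under test changes
// how many numbers it draws, or in what order.
val seededRng = new Random(42L)
seededRng.nextDouble() // identical value on every run of the program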
Here’s one way to make the tests nice.
Step 1. Pass in a source of random numbers, defaulting to really-random. (In Scala this is easiest with by-name parameters; in another language, pass a function.)
A simple class whose method returns true 40% of the time:
import scala.util.Random

class Sometimes(rand: => Double = Random.nextDouble()) {
  def shouldI: Boolean = rand < 0.4
}
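Production code just uses the default and gets real randomness; for example (the caller here is hypothetical):

val sometimes = new Sometimes() // default: a fresh Random.nextDouble() per access
// if (sometimes.shouldI) sendErrorReport() // hypothetical production caller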
Step 2. In the test, create a sequence that contains numbers evenly distributed from 0 to 1, in a random order:
def evenDistribution(n: Int) = Random.shuffle(Range(0,n).map(_.toDouble).map(_/n))
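For example, evenDistribution(5) yields the values 0.0, 0.2, 0.4, 0.6, 0.8 in some shuffled order (the order below is just one possibility):

evenDistribution(5) // e.g. Vector(0.6, 0.0, 0.8, 0.2, 0.4)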
Step 3. Get an iterator over that sequence, and use its next() function to produce the random number. In Scala, notSoRandom.next() is evaluated every time the rand parameter is accessed.
val notSoRandom = evenDistribution(100).iterator
val sometimes = new Sometimes(notSoRandom.next())
Step 4. Be sure the test evaluates the random generator until the sequence is exhausted. Check that the random behavior occurred the expected number of times.
def results = Stream.continually(sometimes.shouldI).take(100)
results.count(t => t) should equal(40)
This achieves randomness in the order of the triggers, while guaranteeing the aggregate frequency that we want to test. I like that it expresses what I want to test (“it returns true 40% of the time”) without specifying which forty of the hundred calls returned true.
UPDATE: This code is available, along with a function that’ll give you an even distribution without knowing the length ahead of time, here: https://github.com/jessitron/NotTooRandom
-----
Full code: test and class
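For convenience, here’s the class and test assembled from the snippets above into one runnable sketch. The ScalaTest style (AnyFlatSpec with should matchers) is my assumption, based on the should equal syntax; adapt to your framework of choice.

import scala.util.Random
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.matchers.should.Matchers

class Sometimes(rand: => Double = Random.nextDouble()) {
  def shouldI: Boolean = rand < 0.4
}

class SometimesSpec extends AnyFlatSpec with Matchers {

  // Evenly distributed values in [0, 1), in random order.
  def evenDistribution(n: Int) =
    Random.shuffle(Range(0, n).map(_.toDouble).map(_ / n))

  "Sometimes" should "return true 40% of the time" in {
    val notSoRandom = evenDistribution(100).iterator
    val sometimes = new Sometimes(notSoRandom.next())

    // Evaluate exactly 100 times, exhausting the sequence.
    val results = Seq.fill(100)(sometimes.shouldI)
    results.count(t => t) should equal(40)
  }
}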
A simpler approach to testing code that depends on a source of random numbers is to inject your random number generator and stub it in tests with a deterministic source of randomness. The easiest way to do this is with a seed value. I've used this technique successfully to test statistical clustering algorithms; it produced tests that exercised the whole algorithm and yet weren't brittle.
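Roughly, the injection looks like this (a sketch; the class is the post's example reworked to take a Random instance):

import scala.util.Random

// Inject the generator itself; production passes a real Random,
// the test passes one constructed with a fixed seed.
class Sometimes(rand: Random = new Random()) {
  def shouldI: Boolean = rand.nextDouble() < 0.4
}

val sometimes = new Sometimes(new Random(42L)) // deterministic in tests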
An alternate approach for acceptance tests, where you don't want to mock or don't have direct access to the classes you'd need to provide a seed value for: if you expect something to happen 10% of the time, run the test 1000 times and ensure that the count of occurrences falls within an acceptable delta of 100. This is what I had to do recently with an acceptance test suite for some R code that used non-deterministic machine learning libraries without an easily configurable seed value.
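In Scala, that might look something like this (a sketch; rareThingHappened is a hypothetical stand-in for the behavior being counted, and the delta of 30, roughly three standard deviations here, is an arbitrary tolerance):

import scala.util.Random

// Hypothetical stand-in for the genuinely random behavior under test.
def rareThingHappened(): Boolean = Random.nextDouble() < 0.1

val trials = 1000
val expected = 100 // 10% of 1000
val delta = 30     // tolerance: too tight flickers, too loose misses bugs
val occurrences = Seq.fill(trials)(rareThingHappened()).count(t => t)
assert(math.abs(occurrences - expected) <= delta)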