Random testing

From Wikipedia, the free encyclopedia

Random testing is a black-box software testing technique where programs are tested by generating random, independent inputs. The outputs are compared against the software specification to determine whether each test passes or fails.[1] In the absence of a specification, the exceptions of the language are used as the oracle: if an exception arises during test execution, there is a fault in the program. Random testing is also used as a way to avoid biased testing.
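
The basic loop can be sketched as follows. This is a minimal illustration only; the function under test, its behavior, and the helper names are hypothetical and not taken from the article. It shows random, independent inputs being generated and an escaping exception being treated as a fault when no specification is available.

#include <cstdlib>
#include <iostream>
#include <stdexcept>

// Hypothetical function under test (illustration only): rejects negative input.
int functionUnderTest(int x) {
    if (x < 0) throw std::invalid_argument("negative input");
    return x / 2;
}

// Minimal random-testing loop: with no specification available,
// an exception raised during execution is treated as a fault.
void randomTest(int runs) {
    for (int i = 0; i < runs; i++) {
        int input = std::rand() - RAND_MAX / 2;   // random, independent input
        try {
            functionUnderTest(input);             // execute the program under test
        } catch (const std::exception& e) {
            std::cout << "fault for input " << input << ": " << e.what() << "\n";
        }
    }
}

int main() {
    randomTest(1000);
}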

YouTube Encyclopedic

  • Random Testing - Software Testing
  • Chapter 3: Random Testing
  • Random Testing

Transcription

Okay, we finally got to my favorite testing topic, which is random testing. Random testing just means that test cases are created, at least in part, using input from a random number generator. This became my favorite testing method a few years ago, when I noticed that, without even realizing it, I had written a random tester for every piece of software I ever wrote where I actually cared about its correctness. So I've written at least a dozen random testers. Let's look at how this works. We have a random test case generator. The test case generator takes as its input pseudorandom numbers, so PRNG here stands for pseudorandom number generator. Of course, on computers we almost never get real random number generators. What we get instead are pseudorandom numbers, which are algorithmically generated, but for purposes of creating test cases they're good enough. The nice thing about pseudorandom numbers is that they're repeatable. When we start using a pseudorandom number generator, we give it a seed which completely determines the sequence of random numbers it's going to generate. So if we ever want to generate the same random numbers, all we have to do is remember the seed. This is nice because we don't have to save voluminous random tests; we can just remember what seed we started with if we ever want to get back a particular set of random tests. The other thing that goes into a random test case generator, usually, to make a good one, is a significant amount of domain knowledge. By domain knowledge, I just mean that you have to understand some properties of the software under test. We'll talk in quite a bit of detail about what form this domain knowledge might take and how you encode it as part of the random test case generator. Generated test cases come out of the random test case generator and go into the software under test. The software under test executes and produces some output. The output is inspected by a test oracle. The oracle, as we've already learned, makes a determination whether the output of the software under test is good or bad. If the output is good, that is to say, if it passes whatever checks we have, we just go back and do it again. On the other hand, if the output is not okay, we save the test case somewhere for later inspection and go back and do more random testing. The key to making this all work is to wrap the entire random testing tool chain in some sort of driver script which runs it automatically. What we do is start the random tester on some otherwise unused machine and go and do other things, or go home. And while we're doing other things, the random testing loop executes hundreds, thousands, or millions of times. The next time we feel like seeing what's going on, maybe coming to work in the morning, we basically just look at what kind of test cases have been saved. If anything interesting turned up, we have some follow-up work to do, like creating a reportable test case and debugging. If nothing interesting happened, then that's good: we didn't introduce any new bugs, and we can rebuild the latest version of the software under test and start the testing loop again. Generally, random testing is not necessarily part of the test suite for the software under test; rather, it's a separate testing loop that gets run independently, acting as a separate or external quality assurance mechanism.
If the random test case generator is well done, and if we give a sufficient amount of CPU resources to the testing loop, and if it's not finding any problems, random testing can significantly increase our confidence that the software under test is working as intended. And it turns out that, in general, there are only a couple of things that are hard about making this work. First of all, it can be tricky to come up with a good random test case generator, and second, it can be tricky to come up with a good oracle. Of course, we've already said that these are the hard things about testing in general: making test cases and determining whether outputs are correct. Basically, the same thing is the case here, but the character of the problems that we run into while doing random testing is a little bit different, and that's what we're going to spend the next several units of this course looking at. What I'd like to do now is go over a couple of real examples. One of them involves a very large random tester testing quite complicated pieces of software. The other one is almost trivial: it's a tiny, almost one-line random tester testing a small function. So let's take a look at this.
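
The driver loop described in the transcript can be sketched roughly as follows; the software under test, the oracle, and the seed range here are made up purely for illustration. The point is the seed-based repeatability: only the seed of a failing run needs to be saved in order to reproduce the whole test case later.

#include <cstdint>
#include <iostream>
#include <random>
#include <vector>

// Hypothetical software under test and oracle, for illustration only.
bool softwareUnderTest(int input) { return input != 12345; }   // pretend bug
bool oracleSaysOk(bool output)    { return output; }

// Driver loop in the spirit of the transcript: each test case is derived
// from a PRNG seed, so a failing case can be reproduced from the seed alone
// instead of storing the whole test input.
int main() {
    std::vector<std::uint32_t> failingSeeds;
    for (std::uint32_t seed = 0; seed < 100000; seed++) {
        std::mt19937 prng(seed);                       // seed determines the whole test
        int input = std::uniform_int_distribution<int>(0, 99999)(prng);
        bool output = softwareUnderTest(input);
        if (!oracleSaysOk(output)) {
            failingSeeds.push_back(seed);              // save for later inspection
        }
    }
    for (std::uint32_t s : failingSeeds) {
        std::cout << "failing seed: " << s << "\n";
    }
}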

History of random testing

Random testing for hardware was first examined by Melvin Breuer in 1971, and an initial effort to evaluate its effectiveness was made by Pratima and Vishwani Agrawal in 1975.[2]

In software, Duran and Ntafos examined random testing in 1984.[3]

The use of hypothesis testing as a theoretical basis for random testing was described by Howden in Functional Program Testing and Analysis. The book also contains the development of a simple formula for estimating the number of tests N that are needed to have confidence at least 1 − 1/n in a failure rate of no larger than 1/n. The formula is the lower bound N ≥ n log n, which indicates the large number of failure-free tests needed to have even modest confidence in a modest failure-rate bound.[4]
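
The bound can be recovered with a standard calculation (this derivation is a routine rewording of the idea, not quoted from Howden): a program whose failure rate is 1/n survives N independent random tests with probability (1 − 1/n)^N, and requiring that probability to be at most 1/n gives

\left(1 - \tfrac{1}{n}\right)^{N} \le \tfrac{1}{n}
\quad\Longrightarrow\quad
N \ge \frac{\ln n}{-\ln\!\left(1 - \tfrac{1}{n}\right)} \approx n \ln n,

since −ln(1 − 1/n) ≈ 1/n for large n. The 459-test figure quoted under Strengths and weaknesses below is the n = 100 instance of the same inequality, computed exactly rather than via the approximation.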

Overview

Consider the following C++ function:

int myAbs(int x) {
    if (x > 0) { 
        return x;
    }
    else {
        return x; // bug: should be '-x'
    }
}

Now the random tests for this function could be {123, 36, -35, 48, 0}. Only the value '-35' triggers the bug. If there is no reference implementation to check the result, the bug could still go unnoticed. However, an assertion could be added to check the results, like:

void testAbs(int n) {
    for (int i=0; i<n; i++) {
        int x = getRandomInput();
        int result = myAbs(x);
        assert(result >= 0);
    }
}

A reference implementation is sometimes available, e.g. when implementing a simple algorithm in a much more complex way for better performance. For example, to test an implementation of the Schönhage–Strassen algorithm, the standard "*" operation on integers can be used:

int getRandomInput() {
    // …
}

void testFastMultiplication(int n) {
    for (int i=0; i<n; i++) {
        long x = getRandomInput();
        long y = getRandomInput();
        long result = fastMultiplication(x, y);
        assert(x * y == result);
    }
}

While this example is limited to simple types (for which a simple random generator can be used), tools targeting object-oriented languages typically explore the program under test to find generators (constructors and methods returning objects of a given type) and call them using random inputs (either themselves generated the same way or, where possible, generated using a pseudo-random generator). Such approaches then maintain a pool of randomly generated objects and use a probability for either reusing a generated object or creating a new one.[5]
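
A rough sketch of the pool idea follows. The Account class, its constructor arguments, and the reuse probability are made up for illustration; tools such as AutoTest implement this far more elaborately.

#include <cstddef>
#include <random>
#include <vector>

// Hypothetical class under test; its constructor acts as the "generator".
class Account {
public:
    explicit Account(int initialBalance) : balance(initialBalance) {}
    void deposit(int amount) { balance += amount; }
    int getBalance() const { return balance; }
private:
    int balance;
};

// Pool-based strategy: with probability reuseProbability an already-created
// object is picked from the pool; otherwise a new object is constructed
// from a random input and added to the pool.
Account& pickOrCreate(std::vector<Account>& pool, std::mt19937& prng,
                      double reuseProbability) {
    std::bernoulli_distribution reuse(reuseProbability);
    if (!pool.empty() && reuse(prng)) {
        std::uniform_int_distribution<std::size_t> index(0, pool.size() - 1);
        return pool[index(prng)];                    // reuse an existing object
    }
    std::uniform_int_distribution<int> arg(-1000, 1000);
    pool.emplace_back(arg(prng));                    // call the constructor with a random input
    return pool.back();
}

int main() {
    std::mt19937 prng(42);
    std::vector<Account> pool;
    for (int i = 0; i < 100; i++) {
        Account& a = pickOrCreate(pool, prng, 0.5);
        a.deposit(std::uniform_int_distribution<int>(0, 100)(prng));  // random method call
    }
}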

On randomness

According to the seminal paper on random testing by D. Hamlet:

[..] the technical, mathematical meaning of "random testing" refers to an explicit lack of "system" in the choice of test data, so that there is no correlation among different tests.[1]

Strengths and weaknesses

Random testing is praised for the following strengths:

  • It is cheap to use: it does not need to be smart about the program under test.
  • It does not have any bias: unlike manual testing, it does not overlook bugs because there is misplaced trust in some code.
  • It is quick to find bug candidates: it typically takes a couple of minutes to perform a testing session.
  • If the software is properly specified: it finds real bugs.

The following weaknesses have been described:

  • It only finds basic bugs (e.g. null pointer dereferencing).
  • It is only as precise as the specification, and specifications are typically imprecise.
  • It compares poorly with other techniques to find bugs (e.g. static program analysis).
  • If different inputs are randomly selected on each test run, this can create problems for continuous integration because the same tests will pass or fail randomly.[6]
  • Some argue that it would be better to thoughtfully cover all relevant cases with manually constructed tests in a white-box fashion, than to rely on randomness.[6]
  • It may require a very large number of tests for modest levels of confidence in modest failure rates. For example, it will require 459 failure-free tests to have at least 99% confidence that the probability of failure is less than 1/100.[4] (A quick check of this figure follows below.)
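
The 459 in the last point follows from the same failure-rate inequality given in the History section; as a quick check (standard arithmetic, not taken from the cited source), the smallest N with (1 − 1/100)^N ≤ 1/100 is N ≥ ln(0.01) / ln(0.99):

#include <cmath>
#include <iostream>

// Smallest number of failure-free tests N with (1 - 1/100)^N <= 1/100.
int main() {
    double n = std::ceil(std::log(0.01) / std::log(0.99));
    std::cout << n << "\n";   // prints 459
}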

Types of random testing

With respect to the input

  • Random input sequence generation (i.e. a sequence of method calls; see the sketch after this list)
  • Random sequence of data inputs (sometimes called stochastic testing)
  • Random data selection from an existing database
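
To illustrate the first variant, the sketch below applies a random sequence of method calls to a container and checks a size invariant after every call. The choice of container, operations, and invariant is made up for illustration; it is not drawn from any particular tool.

#include <cassert>
#include <cstddef>
#include <random>
#include <vector>

// Random input sequence generation: a random sequence of method calls
// (push/pop) is applied while a simple model of the size acts as the oracle.
int main() {
    std::mt19937 prng(7);
    std::uniform_int_distribution<int> op(0, 1);
    std::uniform_int_distribution<int> value(-100, 100);

    std::vector<int> stack;
    std::size_t expectedSize = 0;
    for (int i = 0; i < 10000; i++) {
        if (op(prng) == 0) {
            stack.push_back(value(prng));          // random call: push
            expectedSize++;
        } else if (!stack.empty()) {
            stack.pop_back();                      // random call: pop
            expectedSize--;
        }
        assert(stack.size() == expectedSize);      // oracle: model of the size
    }
}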

Guided vs. unguided

  • undirected random test generation - with no heuristics to guide its search
  • directed random test generation - e.g. "feedback-directed random test generation"[7] (sketched below) and "adaptive random testing"[8]
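
The following is a highly simplified sketch of the feedback-directed idea[7]: previously generated call sequences that execute without violating a check are kept in a pool and extended, while violating sequences are reported. The Counter class and its consistency check are made up for illustration and are not from the cited paper.

#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

// Made-up class under test with a contract-style check: the counter
// should never go negative, but decrement() has no guard.
struct Counter {
    int value = 0;
    void increment() { value++; }
    void decrement() { value--; }
    bool consistent() const { return value >= 0; }
};

// Feedback-directed loop: extend a randomly chosen valid call sequence by
// one random call; keep it if it still passes the check, report it otherwise.
int main() {
    std::mt19937 prng(1);
    std::vector<std::vector<int>> pool = {{}};     // pool of valid call sequences
    std::uniform_int_distribution<int> call(0, 1); // 0 = increment, 1 = decrement

    for (int i = 0; i < 1000; i++) {
        std::uniform_int_distribution<std::size_t> pick(0, pool.size() - 1);
        std::vector<int> seq = pool[pick(prng)];
        seq.push_back(call(prng));                 // extend with a random call

        Counter c;                                 // replay the extended sequence
        for (int op : seq) {
            if (op == 0) c.increment(); else c.decrement();
        }
        if (c.consistent()) {
            pool.push_back(seq);                   // feedback: reuse valid sequences
        } else {
            std::cout << "violating sequence of length " << seq.size() << "\n";
        }
    }
}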

Implementations

Some tools implementing random testing:

  • QuickCheck - a well-known test tool, originally developed for Haskell but ported to many other languages, that generates random sequences of API calls based on a model and verifies system properties that should hold true after each run.
  • Randoop - generates sequences of method and constructor invocations for the classes under test and creates JUnit tests from these.
  • Simulant - a Clojure tool that runs simulations of various agents (e.g. users with different behavioral profiles) based on a statistical model of their behavior, recording all the actions and results into a database for later exploration and verification.
  • AutoTest - a tool integrated into EiffelStudio that automatically tests Eiffel code with contracts, based on the eponymous research prototype.[5]
  • York Extensible Testing Infrastructure (YETI) - a language-agnostic tool which targets various programming languages (Java, JML, CoFoJa, .NET, C, Kermeta).
  • GramTest - a grammar-based random testing tool written in Java; it uses BNF notation to specify input grammars.

Critique

Random testing has only a specialized niche in practice, mostly because an effective oracle is seldom available, but also because of difficulties with the operational profile and with generation of pseudorandom input values.[1]

A test oracle is an instrument for verifying whether the outcomes match the program specification or not. An operational profile is knowledge about the usage patterns of the program, and thus about which parts are more important.

For programming languages and platforms which have contracts (e.g. Eiffel, .NET, or various extensions of Java like JML, CoFoJa...), contracts act as natural oracles and the approach has been applied successfully.[5] In particular, random testing finds more bugs than manual inspections or user reports (albeit different ones).[9]

References

  1. ^ a b c Richard Hamlet (1994). "Random Testing". In John J. Marciniak (ed.). Encyclopedia of Software Engineering (1st ed.). John Wiley and Sons. ISBN 978-0471540021.
  2. ^ Agrawal, P.; Agrawal, V. D. (1 July 1975). "Probabilistic Analysis of Random Test Generation Method for Irredundant Combinational Logic Networks". IEEE Transactions on Computers. C-24 (7): 691–695. doi:10.1109/T-C.1975.224289.
  3. ^ Duran, J. W.; Ntafos, S. C. (1 July 1984). "An Evaluation of Random Testing". IEEE Transactions on Software Engineering. SE-10 (4): 438–444. doi:10.1109/TSE.1984.5010257.
  4. ^ a b Howden, William (1987). Functional Program Testing and Analysis. New York: McGraw Hill. pp. 51–53. ISBN 0-07-030550-1.
  5. ^ a b c "AutoTest - Chair of Software Engineering". se.inf.ethz.ch. Retrieved 15 November 2017.
  6. ^ a b "Is it a bad practice to randomly-generate test data?". stackoverflow.com. Retrieved 15 November 2017.
  7. ^ Pacheco, Carlos; Shuvendu K. Lahiri; Michael D. Ernst; Thomas Ball (May 2007). "Feedback-directed random test generation" (PDF). ICSE '07: Proceedings of the 29th International Conference on Software Engineering: 75–84. ISSN 0270-5257.
  8. ^ T.Y. Chen; F.-C. Kuo; R.G. Merkel; T.H. Tse (2010), "Adaptive random testing: The ART of test case diversity", Journal of Systems and Software, 83 (1): 60–66, doi:10.1016/j.jss.2009.02.022, hdl:10722/89054
  9. ^ Ilinca Ciupa; Alexander Pretschner; Manuel Oriol; Andreas Leitner; Bertrand Meyer (2009). "On the number and nature of faults found by random testing". Software Testing, Verification and Reliability. 21: 3–28. doi:10.1002/stvr.415.
