Feature requests for UnitTest class


#1

It was a nice surprise when I saw the UnitTest and UnitTestRunner classes appear in git recently. Great to have a lightweight set of utility classes that work “the juce way” to simplify testing.

Are the classes already ready to use by the public, or will they still change a lot? In the version I pulled yesterday, the implementations of UnitTestRunner::getNumResults() and UnitTestRunner::getResult() were missing from juce_UnitTest.cpp, so this is probably still work in progress.

One addition I’d love to see is a pair of virtual methods - like JUnit’s setUp() and tearDown() - that are automatically called by UnitTest::performTest before and after running the test case. (That way I could abort tests that have fatally failed by simply returning from UnitTest::runTest(), without having to worry about calling a tearDown() method myself.)

The motivation is that there is quite a bit of initialization common to a set of tests that I want to pull up into a common base class. I can’t put that in the constructor/destructor because the classes under test rely on juce being properly initialized / not yet shut down. Of course I could avoid that by managing the instantiation and registration of tests manually, but I’d rather use the neat trick with the static instance automatically registering the test.
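As a framework-free illustration of what such hooks could look like (SimpleTest, performTest and the trace string are all hypothetical names here, not JUCE API), the wrapping could be as simple as:

```cpp
#include <string>

// Hypothetical sketch (not JUCE's actual API): a base class whose
// performTest() brackets runTest() with setUp()/tearDown(), so returning
// early from runTest() can never skip the teardown step.
class SimpleTest
{
public:
    virtual ~SimpleTest() = default;

    void performTest()
    {
        setUp();
        runTest();    // may return early on a fatal failure...
        tearDown();   // ...teardown still runs afterwards
    }

    std::string trace;   // records the call order, for demonstration only

protected:
    virtual void setUp()    {}     // override to acquire shared fixtures
    virtual void tearDown() {}     // override to release them
    virtual void runTest() = 0;    // the actual test body
};

struct ExampleTest : public SimpleTest
{
    void setUp() override    { trace += "setUp;"; }
    void runTest() override  { trace += "run;"; return; }  // early return is safe
    void tearDown() override { trace += "tearDown;"; }
};
```

Note that an exception thrown from runTest() would still skip tearDown() in this shape, so a real implementation might want a scope guard around the teardown call.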


#2

Glad you like them! In fact, I didn’t realise that I hadn’t implemented those two methods yet - doh! If only I’d written a unit test for the unit tester, I might have spotted that!

TBH I’ve never actually used or researched any 3rd party test systems, I just wrote something that did what I needed for my internal tests, so there may be some great ideas out there which I didn’t think of adding. I’ll certainly add a setup/teardown method, that could be a handy addition, thanks!


#3

From my experience with JUnit, the static methods in the Assert class were quite useful for keeping the tests small and readable and for getting more helpful error messages:
http://junit.sourceforge.net/javadoc/org/junit/Assert.html

If a comparison with Assert.assertEquals(…) fails, the expected and actual value that were compared are appended to the error message. Often you get a pretty good idea what exactly went wrong without having to run the test again in your debugger to see why the comparison failed.

Something like this would be a very welcome addition to the UnitTest class for me (although I’m not sure at all whether it’s a good idea to use templates here, or whether one should add distinct signatures for the most commonly used types):

    template <class ValueType>
    void expectEquals (ValueType actual, ValueType expected, const String& failureMessage_ = String::empty)
    {
        bool result = (actual == expected);
        String failureMessage = failureMessage_;
        if (! result)
            failureMessage << " (expected " << expected << " but was " << actual << ")";

        expect (result, failureMessage);
    }
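For anyone reading along without JUCE at hand, here’s a framework-free sketch of the same message-building idea (std::string and std::to_string standing in for juce::String’s stream operators; describeFailure is a made-up name, not part of any library):

```cpp
#include <string>

// Sketch: on a mismatch, append the expected and actual values to the
// failure message, so the log alone often tells you what went wrong.
template <class ValueType>
std::string describeFailure (ValueType actual, ValueType expected,
                             std::string failureMessage = std::string())
{
    if (actual != expected)
        failureMessage += " (expected " + std::to_string (expected)
                        + " but was "  + std::to_string (actual) + ")";

    return failureMessage;
}
```

So a failed comparison of 3 against an expected 4 with the message "sum" would produce "sum (expected 4 but was 3)", while a passing comparison leaves the message untouched.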

#4

Ah, that’s an excellent idea for a helper method - thanks!


#5

Thanks a lot for adding those methods.

For anyone else interested in using the UnitTest class, here’s a small command line application to execute all tests in your project and print the outcome to the console. It should automatically pick up all tests found in your project (if you defined them as suggested in the UnitTest class documentation).

You can specify which test(s) to run as command-line parameters; if no parameter is given, all tests are executed. “--list” prints out a list of all unit tests in your project.

///////////////////////////////////////////////////////////////////////////////
class UnitTestApplication  : public JUCEApplication
{

public:
    UnitTestApplication()
    {
        // never create any Juce objects in the constructor - do all your initialisation
        // in the initialise() method.
    }

    ~UnitTestApplication()
    {
        // all your shutdown code must have already been done in the shutdown() method -
        // nothing should happen in this destructor.
    }

    void initialise (const String& commandLine)
    {
        Array<UnitTest*>& allTests = UnitTest::getAllTests();
        int numTestsPassed = 0;
        int numTestsFailed = 0;

        if (commandLine == "--list")
        {   // just list the names of all available UnitTests and quit
            for (int i=0; i<allTests.size(); i++)
            {
                std::cout << allTests[i]->getName() << "\n";
            }
            JUCEApplication::quit();
            return;
        }

        UnitTestRunner runner;
        if (commandLine.isEmpty())
        {  // run all tests defined in the included files
            runner.runAllTests(true);   
        }
        else
        {   // parse names of the tests to run from the command line (separated by blanks)
            StringArray namesInCmdLine;
            namesInCmdLine.addTokens(commandLine, " ", "\"");         
            for (int i=0; i<namesInCmdLine.size(); i++)
            {   // test can be surrounded with quotation marks
                namesInCmdLine.set(i, namesInCmdLine[i].unquoted());
            }

            // find the tests specified in the command line
            Array<UnitTest*> testsToRun;
            for (int i=0; i<allTests.size(); i++)
            {
                int indexInCmdLine = namesInCmdLine.indexOf(allTests[i]->getName());
                if (indexInCmdLine >= 0)
                {
                    testsToRun.add(allTests[i]);
                    namesInCmdLine.remove(indexInCmdLine);
                }
            }

            // if there are still names left in the list...
            if (namesInCmdLine.size() > 0)
            {   // ...mark them as failed because the test class could not be found
                std::cout << "Invalid unit-test(s) specified in command line: " << namesInCmdLine.joinIntoString(" ") << "\n";
                numTestsFailed += namesInCmdLine.size();
            }

            // run only the specified tests
            runner.runTests(testsToRun, true);
        }

        // iterate over all results
        for (int i=0; i<runner.getNumResults(); i++)
        {
            const UnitTestRunner::TestResult* result = runner.getResult(i);

            // count total passes and failures
            numTestsPassed += result->passes;
            numTestsFailed += result->failures;

            // print messages for those tests that failed
            StringArray messages = result->messages;
            for (int j=0; j<messages.size(); j++)
            {
                std::cout << result->unitTestName << ": " << messages[j] << "\n";
            }
        }

        if (numTestsFailed > 0)
        {   // return the number of failed tests
            JUCEApplication::getInstance()->setApplicationReturnValue(numTestsFailed);
            std::cout << numTestsFailed << " tests failed, and " << numTestsPassed << " tests were successful.\n";
        }
        else
        {
            std::cout << "all " << numTestsPassed << " tests passed.\n";
        }

        JUCEApplication::getInstance()->quit();
    }

    void shutdown()
    {

    }

    const String getApplicationName()
    {
        return T("UnitTestApplication");
    }

    const String getApplicationVersion()
    {
        return T("1.0");
    }

};

@jules:
Is it the right approach to put all code in the initialise method? This app is meant to be a command-line tool, but occasionally one of my tests will try to open a Window to show details about a failure, that’s why I need all the Juce GUI and messaging stuff to be initialized.


#6

Cool - thanks for sharing!

Yep, that’ll work just fine.


#7

Just one minor thing:
Isn’t there an ‘i’ missing in UnitTest::initalise? Or is it yet another British way to spell initialize?


#8

Bugger. I always mis-type it like that, but normally notice before getting as far as checking it in! Thanks, I’ll correct that…


#9

Cool beanz!

I use Google’s C++ testing suite because, well, I’m already using it! :smiley:

But it’ll be nice to likely see some test harnesses and mocks coming and I’ll be sure to open source any mocks that I make that might be useful…


#10

I might have a dumb question, but can I run unit tests on a class inherited from a class that already has unit tests? Won’t it clash when the test methods are called?


#11

It’s not intended to handle complicated virtual classes - the idea is that you just write a dedicated, simple class to run each type of test.


#12

Oh, then I misunderstood the principle… I had a few of my classes inherit from UnitTest; in each I added the test functions (set up / test / tear down) and ran everything from one central UnitTestRunner (triggered by a GUI button).

This way I could test each (or so) method of classes enabled with UnitTest support, and make sure the basic methods are working correctly.

The good thing about this is that each test is run for each instance of the plugin. For example, I test setParameter by setting a random value and reading it back, to see if the parameter engine behind it is working correctly.


#13

Well, I suppose you could do it that way, but that’s not how it was intended to work.

The UnitTest class object represents the test itself, not the thing that’s being tested. I’d recommend keeping the two concepts in separate classes.
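A minimal sketch of that separation (all names hypothetical; ParameterStore stands in for the real parameter engine) might look like this: the test object drives a round-trip check like the setParameter idea above, but the class under test knows nothing about the test.

```cpp
// ParameterStore is the thing being tested: a stand-in for a plugin's
// parameter engine. It has no knowledge of any test code.
class ParameterStore
{
public:
    void  setParameter (int index, float value)  { values[index] = value; }
    float getParameter (int index) const         { return values[index]; }

private:
    float values[8] = {};
};

// The test is a separate, dedicated class. It knows about ParameterStore,
// but not the other way around, so the two concepts stay decoupled.
class ParameterRoundTripTest
{
public:
    bool run()
    {
        ParameterStore store;   // a fresh instance for each run

        for (int i = 0; i < 8; ++i)
        {
            const float value = 0.125f * (float) i;  // deterministic stand-in for a random value

            store.setParameter (i, value);

            if (store.getParameter (i) != value)     // round-trip check
                return false;
        }

        return true;
    }
};
```

In JUCE terms, ParameterRoundTripTest would be the UnitTest subclass, while ParameterStore (or the plugin) remains an ordinary class with no test machinery mixed in.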


#14

Indeed, I think I reached the limits of this solution: as I try to test plugins (that share some common code, in which there are unit tests), the UnitTestRunner launches one test per instance of each plugin sharing the code! Which, in my case, brings more trouble than solutions :lol: