Smart Tech for a better Web
Everyone talks about test-driven development and unit testing, but most people finishing their exams don't have a clue what it actually means. To be honest, I was surprised that people can graduate from university having implemented just one or two JUnit tests. Well, here are some words about testing.
A unit test is nothing more than a few lines of test code that check whether the business logic you wrote actually works. To help you with this, tools like JUnit and PHPUnit (or SimpleTest in a PHP environment) have been written. They are frameworks that give you some methods to work with. To be clear: a unit test is a class that instantiates another class and checks whether the input you pass in produces the correct results.
Hardcore people say that before you write the business logic, you have to write a test case. This is test-driven development: write your test, then write the actual business logic. For me it usually works like this: I write my business logic, and as soon as I have something cool working, I write a test case. Sometimes I do it the other way around; it depends. But that's a matter of taste. As usual, I recommend not being too extreme about anything and just trying out what works best for you.
What do you have to test? Everything. Well, not really. A good coverage ratio is around 70%. More means you are even testing your exceptions; less means you have forgotten some classes. There are tools that help you check how much you have tested: in the Java world it's Cobertura, in PHP it's Spike PHPCoverage, for example.
Usually your test class has to extend a class called TestCase or similar; in PHPUnit it's PHPUnit_Framework_TestCase. That makes your classes executable in the testing environment. Then you implement methods - every method that implements an actual test is prefixed with "test".
An example from Log4PHP:
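(The original snippet is not preserved here; this is a minimal sketch of what such a test method looks like - the class and method names are hypothetical stand-ins in Log4PHP's style, not the original code.)

```php
<?php
// Hypothetical sketch of a Log4PHP-style test class.
class LoggerLayoutSimpleTest extends PHPUnit_Framework_TestCase {

    // Prefixed with "test", so the framework picks it up automatically.
    public function testSimpleLayout() {
        $layout = new LoggerLayoutSimple();
        // ... assertions on $layout would follow here
    }
}
```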
Testing frameworks usually look for methods like this and execute them one by one. In newer JUnit versions you annotate test methods with @Test instead. OK, and this is a valid PHP test case:
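(The original code is not preserved here; this sketch reconstructs it from the description that follows - the class and method names are hypothetical stand-ins, only $v, $e and "blub" come from the text.)

```php
<?php
// Hypothetical reconstruction of the described test case.
class LoggerConfigTest extends PHPUnit_Framework_TestCase {

    public function testGetValue() {
        $conf = new LoggerConfig();    // the class under test
        $v = $conf->getValue('key');   // actual result
        $e = "blub";                   // expected result
        $this->assertEquals($e, $v);
    }
}
```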
Again, it's Log4PHP. I instantiate the class I intend to test and call a method on it. The result is stored in $v, and what I expect is stored in $e. All testing frameworks provide so-called assert methods, which let you compare the expectation with the actual result (or check it in other ways). In my case, "blub" is expected. If the assertion fails, my tool shows me the error.
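The discovery mechanism behind all of this is easy to sketch in plain Java: scan a class for methods whose names start with "test" and invoke them one by one. This is only an illustrative sketch of the idea, not JUnit's actual implementation, and the test class is made up:

```java
import java.lang.reflect.Method;

// Sketch of what a test runner does: find methods whose names start
// with "test", invoke them, and count passes and failures.
public class MiniRunner {

    static class StringUtilTest {  // hypothetical test class
        public void testUpper() {
            assertEquals("BLUB", "blub".toUpperCase());
        }
        public void testEmpty() {
            assertEquals("", "".toUpperCase());
        }
        static void assertEquals(Object expected, Object actual) {
            if (!expected.equals(actual)) {
                throw new AssertionError("expected " + expected + " but was " + actual);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int passed = 0, failed = 0;
        StringUtilTest instance = new StringUtilTest();
        for (Method m : StringUtilTest.class.getDeclaredMethods()) {
            if (!m.getName().startsWith("test")) continue;  // discovery rule
            try {
                m.invoke(instance);   // failures arrive wrapped in an exception
                passed++;
            } catch (Exception e) {
                failed++;
            }
        }
        System.out.println(passed + " passed, " + failed + " failed");
    }
}
```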
If you change business logic or refactor something, you can simply execute the old tests and see whether something goes wrong. People say: before putting an echo or System.out.println somewhere, write a test. I agree. A good testing ratio makes your software stable and keeps side effects under control, even as the software grows. And before you deliver, you run all your tests. This way you make sure that everything is fine.
To make it short: test methods should never depend on the success of other test methods. Each test method must be executable on its own. Otherwise your life will be hell - think of a thousand methods depending on each other. The same goes for test data: a test case must not depend on data another test method created. That, too, will make your life hell.
I know that this is very difficult with databases. Testing frameworks usually give you "setUp" and "tearDown" methods, which are called before and after each test method. You can create your test data in the database in these methods. But this is very time-consuming; test runs of this kind can easily take hours. Best is: keep your tables small and make an SQL file for each package or even each test case. There is no best practice that fits every project.
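The setUp/tearDown idea can be sketched in plain Java, with an in-memory list standing in for a real database table so the example stays self-contained. The names follow the JUnit 3 convention, but the class and data are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the fixture lifecycle: setUp creates fresh test data,
// tearDown drops it again, so no test depends on another's leftovers.
public class UserTableTest {

    private List<String> userTable;  // stand-in for a database table

    // Called before each test method: create fresh test data.
    public void setUp() {
        userTable = new ArrayList<>();
        userTable.add("alice");
        userTable.add("bob");
    }

    // Called after each test method: remove the data again.
    public void tearDown() {
        userTable.clear();
    }

    public boolean testCountUsers() {
        return userTable.size() == 2;
    }

    public static void main(String[] args) {
        UserTableTest t = new UserTableTest();
        t.setUp();
        boolean ok = t.testCountUsers();
        t.tearDown();
        System.out.println(ok ? "OK" : "FAILED");
    }
}
```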
In my last project we used Excel (= manager-friendly) to maintain the data and then generated SQL files out of it. Time-consuming and complex, but it worked. The relations between those tables were too strong; we couldn't have kept the data consistent any other way.
Having said the above, you can imagine that you should think about testability BEFORE you code. Spaghetti code isn't testable. Write short methods. Keep in mind what "Separation of Concerns" means. The same goes for databases: make them pluggable. Foreign keys are not always good - think twice, they are mighty but can be evil too. I have worked in projects where it was very hard or even impossible to delete any data. If you write your application to be testable, in many cases it will end up well designed.
This doesn't mean you should make all methods public! Test cases should live in the same package as the classes they test, so you can work with package-scoped methods as well. But whatever you do: testable code is good, but it should never break encapsulation just for the sake of testability.
In most cases, private methods are covered by the tests of the public methods that call them. Test coverage tools help you find the gaps.
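A small sketch of that point - the class and the 19% VAT rate are made-up examples, not from any real project:

```java
// Sketch: the private helper is exercised through the public method,
// so a test of the public API covers it automatically.
public class PriceCalculator {

    public int grossPrice(int netCents) {
        return netCents + vat(netCents);
    }

    // Private helper - no need to open it up just for testing.
    private int vat(int netCents) {
        return netCents * 19 / 100;  // assumed 19% VAT for the example
    }

    public static void main(String[] args) {
        // Asserting on grossPrice implicitly tests vat() as well.
        System.out.println(new PriceCalculator().grossPrice(100));
    }
}
```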
Try to test all public methods! If you need to create files, use the temp folder. Set up databases for the tests; use Jetty and HSQL if you need them embedded - in PHP it's PDO and SQLite. Don't waste your time testing exceptions that practically never happen. Spend 20% of your time to get 80% of the tests! Try to test business logic - if that means creating complex objects, so be it; maybe you can share that setup logic. But don't bother testing getters and setters. They are called thousands of times within normal tests anyway, no need for that.
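The temp-folder advice looks like this in plain Java (the file name and content are hypothetical; a real test framework would do the cleanup in tearDown):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: tests that need files should create them in the OS temp
// folder and delete them afterwards, leaving no stale artifacts.
public class TempFolderTest {

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("mytest");   // OS temp folder
        Path file = dir.resolve("export.csv");            // hypothetical file
        Files.write(file, "id;name\n1;alice\n".getBytes());

        // the assertion a test would make
        boolean ok = Files.exists(file) && Files.size(file) > 0;

        // tear down: delete what we created
        Files.delete(file);
        Files.delete(dir);
        System.out.println(ok ? "OK" : "FAILED");
    }
}
```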
Plan as much time for writing the test case as for the implementation itself. Too much? Not really! Imagine you write business code for 5 hours, manually reading thousands of outputs on the console. You delete them, check the code in, and later want to fix something again. Put the outputs back in? Were they commented out? Is the code ugly now? Put all of that into a test case instead. Nobody complains if you need 5 hours for it, as long as it's done properly. Sometimes a test case takes 3 days and the fix just 1 hour. This can happen in EAI environments, where multiple systems communicate. No problem - this is business-critical, and automated tests are the best thing you can do.