In medicine and journalism there are models, principles that guide the practitioner. For example, doctors have the Hippocratic Oath, historically taken as a pledge to practice medicine ethically.
According to The Elements of Journalism, a book by Bill Kovach and Tom Rosenstiel, there are nine elements of journalism.
So I thought I would propose a similar set for testing:
1. Testing’s first obligation is to show the truth.
This means lifting the veil of false certainty that everything works, and validating any assumption that claims there is nothing that reduces the value of the product.
2. Its first loyalty is to the client(s).
Here the clients can be users, stakeholders, or managers. It's important to know who the clients are in the first place.
3. The essence of testing is questioning a product in order to evaluate it.
This is inspired by the definition that James Bach gives for testing.
4. Testers must maintain independence from the people who give them information.
For example, data gathered from a developer should not be trusted 100 percent. People will give you what they think, or what they assume, is right.
5. Testers should inform based on their own analysis and judgment.
This means they should not simply take the information they receive and pass it straight on to others. If the initial information is false, it will only produce more misinformation.
6. The information provided by testers should also be reviewed.
This means a bug can be marked invalid, or reports can be reviewed by someone else.
7. The information about the product to test should be significant, interesting and relevant.
Testing should consider risk areas and priorities based on circumstances and context, and the information it produces should be useful.
8. The information provided should be comprehensive and proportional.
This means informing the client straight to the point, and in an acceptable format. No one will read a 100-page report when it's not needed.
9. Testers should be allowed to exercise their personal conscience.
This means the approach should be left for testers to decide, giving them relative independence.
I wanted to write a Python script that compares two folders and checks for differences in files, file sizes, permissions, etc. Not only comparing one folder against another at the same point in time, but also comparing a folder against itself after modifications were made, like after a new build.
The purpose is to spot potential problems. If you identify a file whose size changed, you can then open it and compare its contents. A log file is expected to change, so you analyze it differently. But for other files the difference might be the key to spotting a failure.
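A minimal sketch of what such a script might look like, assuming we only care about file sizes and permission bits (the function names and the `os.walk`/`os.stat` approach here are my own illustration, not an existing tool):

```python
import os
import stat

def snapshot(root):
    """Walk a folder tree and record (size, permissions) for every file,
    keyed by path relative to the root."""
    info = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            rel = os.path.relpath(path, root)
            info[rel] = (st.st_size, stat.filemode(st.st_mode))
    return info

def compare(before, after):
    """Report files that appeared, disappeared, or changed size/permissions."""
    changes = []
    for rel in sorted(set(before) | set(after)):
        if rel not in before:
            changes.append(("added", rel))
        elif rel not in after:
            changes.append(("removed", rel))
        elif before[rel] != after[rel]:
            changes.append(("changed", rel))
    return changes
```

Because `snapshot` returns a plain dictionary, the two use cases are the same call: snapshot two different folders, or snapshot the same folder before and after a new build and compare the results.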
There is a human instinct not to trust change, yet still to pursue it. Change is new behavior, and new means unknown, or untested. People tend to move like cattle in the same direction because they assume someone else has gone before and it's safe. Some shops, for example, get more visitors simply because people see the crowd, so they go in too, increasing the visitor count.
As testers we have the same need: to check whether the new stuff is OK, whether it's safe.
Going back to my problem of comparing folders and checking the differences: because I didn't understand the behavior change, I was about to write a rather time-consuming Python script.
But then I saw that these steps:
ls -laR folder1 > folder1.txt
ls -laR folder2 > folder2.txt
diff folder1.txt folder2.txt
did the job.
It solved my problem, so I gave up on a universal implementation (something I could maybe use on Windows too) and on listing the differences in a more human-readable format.
That is partly because I am lazy, but also because my purpose is to test, not to implement code. It is to take responsibility for preventing possible issues rather than creating things that only appear useful.
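For the record, a cross-platform version would not even have been much work: Python's standard library ships `filecmp.dircmp`, which already finds files present on only one side and files that differ. A sketch (note that by default `dircmp` compares `os.stat` signatures, size and modification time, not permissions or full contents):

```python
import filecmp

def report(dcmp, prefix=""):
    """Recursively print the differences found by filecmp.dircmp."""
    for name in dcmp.left_only:
        print(f"only in first:  {prefix}{name}")
    for name in dcmp.right_only:
        print(f"only in second: {prefix}{name}")
    for name in dcmp.diff_files:
        print(f"differs:        {prefix}{name}")
    for name, sub in dcmp.subdirs.items():
        report(sub, prefix + name + "/")

# Usage: report(filecmp.dircmp("folder1", "folder2"))
```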