If this blog sounds familiar, it might be because you’ve already read it. This, however, is the translated version of Andre’s Dutch blog, which you can re-read right here.
I recently read a blog about the factors that can make test automation a success. This was written with the help of TRIMS, which you can read about in the following blog: https://automationintesting.com/2019/08/trims-automation-in-testing-strategy.html
TRIMS stands for Targeted, Reliable, Informative, Maintainable and Speedy. In other words, your test automation should:
1. serve a purpose (other than just automating for the heck of automating)
2. be reliable, and definitely not return ‘false positives’
3. be informative and provide a basis for additional ‘exploratory’ tests
4. be low maintenance. Software changes continuously, and it is an illusion that test code never needs to be adapted in the future.
5. provide quick insights, but above all be easy and quick to maintain. If maintenance or getting test results takes too long, it will definitely undo the successes you achieved by implementing test automation.
My vision of test automation embraces the five key factors mentioned above, and I recognize all of them in my current project. Within this project, I mainly work with Postman (https://www.getpostman.com/downloads/). In this blog I will take you on a journey showing how to use Postman and implement these success factors. Despite this focus, you will probably also recognize aspects that can help improve more test tools than just Postman.
Targeted
It’s tempting to start automating immediately. Still, it is wise to first think about which scenarios to test. Make sure you always work risk-oriented, and look first at which functionality has been added and/or changed. Do you know which scenarios you will be testing? Then think about how you can or want to do the validations, without writing too much (JavaScript) code yet. This helps you figure out whether you want to do similar assertions; maybe these are easy to feed with variables. Maintaining automated tests is much easier when you only work with one piece of code that can be reused in a lot of checks. If you get results per scenario that only differ on certain aspects, then create variables for these results. Define these variables in the pre-request script for each scenario, and call them from tests on a higher (sub)folder level. This way your code stays maintainable. You can read more about this in the chapter “Maintainable” further on in this blog.
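To illustrate the idea, here is a minimal sketch, assuming a hypothetical scenario; the variable names (expectedStatusCode, expectedAnimalStatus) are my own and not taken from the original examples:

// Pre-request script of an individual scenario: define only what differs per scenario
pm.variables.set("expectedStatusCode", 200);
pm.variables.set("expectedAnimalStatus", "available");

// Test defined once on a higher (sub)folder level, reused by every scenario
pm.test("Response matches the expectations of this scenario", function () {
    var expectedStatusCode = Number(pm.variables.get("expectedStatusCode"));
    pm.expect(pm.response.code).to.eql(expectedStatusCode);

    var actualResponseBody = pm.response.json();
    pm.expect(actualResponseBody.status).to.eql(pm.variables.get("expectedAnimalStatus"));
});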
Besides this, it is advisable to describe scenarios that at least cover the base functionality of the API under test, and to add detailed descriptions where possible. Writing them down provides quick insight into whether all ‘functional’ requirements have been covered. On the other hand, it helps you avoid testing purely technical requirements; these detailed functionalities are probably already covered by unit tests.

Reliable
It’s a well-known saying that, as a tester, you need to make sure that whatever you’re checking actually checks what you think you’re checking. Still following? In short: “test your test”! It often happens that test code returns false positives, so be aware of this. Never stop thinking about the assertion methods you’re using to validate a ‘functional’ requirement. Want to make quick progress by using the right validations? Then I definitely recommend checking out the possibilities of the Chai library, and in particular this page: https://www.chaijs.com/api/bdd/. This page clearly shows what is possible, but also which checks are usually more reliable than the alternatives, marked with ‘// Not recommended‘ and ‘// Recommended‘ tags. Even though Postman accepts JavaScript in general, the BDD-style Chai library is highly recommended for both readability and maintainability.
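As a small illustration of that ‘Recommended’ versus ‘Not recommended’ distinction (the animal object below is a made-up example, not taken from the Chai page itself):

var actualAnimal = pm.response.json();

pm.test("Animal has the expected name and status", function () {
    // Not recommended: a truthiness check passes for any non-empty value
    pm.expect(actualAnimal.name).to.be.ok;

    // Recommended: assert the exact type and value you expect
    pm.expect(actualAnimal.name).to.be.a("string");
    pm.expect(actualAnimal.name).to.equal("doggie");
    pm.expect(actualAnimal.status).to.equal("available");
});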
False positives
I would like to share a practical example of false positives. Take a situation in which you think you’ve got a fantastic JSON schema and a beautiful test. For example, you have a schema in which you clearly state that certain elements should be of type ‘string’, ‘number’ or ‘array’, but the ‘required’ keyword turns out to be missing. With validation based on either tv4 (https://github.com/geraintluff/tv4) or ajv (https://github.com/epoberezkin/Ajv), your JSON validation is then worthless: if your request returns an error response in which none of the elements has a name that overlaps with the expected elements, the validation will still state that the response matches the schema, simply because nothing in the response contradicts it. This can’t be right! From this we learn: whatever you add to your JSON schema, always use the ‘required’ keyword.
Seeing that Postman’s own website (https://learning.getpostman.com/docs/postman/scripts/postman_sandbox/) labels the tv4 method as ‘Deprecated’, I will provide an example of the ajv method. Small side note: for readability purposes I used a very simple JSON schema (which works perfectly fine for experimenting purposes):
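The original screenshot is not reproduced here, but a minimal sketch along the same lines could look as follows (the schema and field names are my own simplified example, using the ajv module that the Postman sandbox exposes through require):

var Ajv = require('ajv');
var ajv = new Ajv();

// A deliberately simple schema. Without "required", an error response that
// contains none of these properties would still pass validation.
var expectedSchema = {
    "type": "object",
    "properties": {
        "id": { "type": "number" },
        "name": { "type": "string" },
        "photoUrls": { "type": "array" }
    },
    "required": ["id", "name", "photoUrls"]
};

pm.test("Response body matches the JSON schema", function () {
    var actualResponseBody = pm.response.json();
    pm.expect(ajv.validate(expectedSchema, actualResponseBody), JSON.stringify(ajv.errors)).to.be.true;
});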

Additional recommendation
If your software already returns JSON output, but your developers can’t or won’t provide you with a ready-to-use JSON schema, you can also generate and fine-tune schemas yourself. There are several online tools, but I can imagine that for security reasons you don’t wish to share any sensitive information on the internet. In that case you might consider an XMLSpy license; I personally really enjoy working with this tool. Despite the name, it can deal with both XML and JSON. XMLSpy also offers many user-friendly ways to add enumerations, maximum string lengths and many other elements to a JSON schema, without you having to know how to write such a schema yourself.
Informative
Whenever a test fails, you want to be able to quickly determine whether this is a bug or perhaps a fault in the test. In any case, you want to have as much information as possible to do your analysis quickly and easily and come to a conclusion, or even a solution. There are several ways to get this supporting information. For example, you can write data to the Postman console (shortcut CTRL + ALT + C). There is also the possibility to add extra feedback to the ‘default’ assertions, or to use variables in the description of assertions. I will discuss a few of them with the help of some examples:
- console.log("Your custom message") or console.log(customObject)
- pm.expect(actualUsedTestDataFilePath, "test setup failed, try changing your work directory").to.match(expectedTestDataFilePath);
- pm.expect.fail("The reason why this test case fails is that your code did X, so probably condition Y or Z was not met while expected")
- pm.test(request.name + " – variable name caught (" + actualNewAnimalName + ") and saved to environment variables for follow-up test cases", function () { … });
Console.log
Console.log has several possibilities. The two options I use most are displaying a functionally worded string, which can be just text, and displaying the value of a variable. For example: console.log("there are " + actualResponseBody.length + " animals found with status 'available'"); where actualResponseBody is of course a previously defined variable, in this case of type ‘Array’. It is also possible to return arrays or complete (JSON) objects, as shown in the example screenshot below. This can be really useful for debugging and exploring test situations.
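A minimal sketch of both uses (the variable name is illustrative):

// Parse the response once and reuse it
var actualResponseBody = pm.response.json();

// A functionally worded string combined with a value from a variable
console.log("there are " + actualResponseBody.length + " animals found with status 'available'");

// A complete array or object, useful for debugging and exploring test situations
console.log(actualResponseBody);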

pm.expect.fail
In case we find recurring error situations, e.g. in configurations, it can be useful to make the test fail deliberately within an if/else statement, with a very specific error message. Where console.log only shows feedback in the console, this feedback is also returned directly in the test tab (after running an individual request) and/or in the test runner (when running a test suite). Below you can find another example.
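The original example was shown as a screenshot; a minimal sketch of the idea, assuming a hypothetical configuration check, could look like this:

pm.test("Service is reachable and correctly configured", function () {
    if (pm.response.code === 401) {
        // A known, recurring configuration problem gets its own explicit message
        pm.expect.fail("This test case fails because the API returned 401; probably the authorisation token in the environment is missing or expired");
    } else if (!pm.response.json().status) {
        pm.expect.fail("This test case fails because the response contains no 'status' element, so the service is probably misconfigured");
    } else {
        pm.expect(pm.response.json().status).to.equal("available");
    }
});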

pm.expect(x, "customized feedback")
Despite the many ways to write assertions, it is often possible to pass an optional message alongside the regular feedback. Instead of just writing pm.expect(actualValue).to.equal(expectedValue); you can place a second parameter behind the first variable, containing the literal feedback you would like returned whenever the assertion fails. So instead of “Expect ’text X’ to equal ’text Y'”, this would result in something along the lines of “The provided Label A shows the wrong result: expect ’text X’ to equal ’text Y'”. The image below shows the results of both ways of writing:
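A minimal sketch of both variants, assuming a hypothetical label field:

var actualLabel = pm.response.json().name;
var expectedLabel = "doggie";

pm.test("Label check without and with customized feedback", function () {
    // Default feedback only: "expected 'text X' to equal 'text Y'"
    pm.expect(actualLabel).to.equal(expectedLabel);

    // The customized message is prepended to the default feedback when the assertion fails
    pm.expect(actualLabel, "The provided Label A shows the wrong result").to.equal(expectedLabel);
});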

pm.test(request.name + " – variable name caught (" + actualNewAnimalName + ")"
At times, when creating tests and/or playing with the scenarios, it can be helpful to get your feedback directly, without looking at the console or the test runner. For me personally it really helps to have certain elements (such as unique keys from the database) returned in the test name. This way you can quickly check whether the right check is being applied to the right features and whether the scope is still correct.
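A minimal sketch, assuming a hypothetical POST that creates a new animal; pm.info.requestName is the newer equivalent of the request.name used above:

var actualNewAnimalName = pm.response.json().name;

// The caught value becomes part of the test name, so it is visible at a glance
pm.test(pm.info.requestName + " – variable name caught (" + actualNewAnimalName + ") and saved to environment variables for follow-up test cases", function () {
    pm.expect(actualNewAnimalName).to.be.a("string");
    pm.environment.set("actualNewAnimalName", actualNewAnimalName);
});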

Maintainable
I quickly found out that moving the tests for individual scenarios to a higher (sub)folder saved a lot of copy-paste work and drastically improved maintainability: code no longer needed to be changed in dozens of places at the same time. Another eye-opener was using if/else statements combined with “request.method” (for example, see the image below). This made it possible to create flows within a single (sub)folder in which you can do requests of different types, such as “POST”, “GET” and “PUT”, without unnecessarily making tests fail and without having to create separate folder structures for the different request types. On top of that, it became possible to initialize complete prerequisite situations, e.g. by using a “POST” or “PUT” request and then checking the results with a “GET” request. All of this while keeping our maintainable checks on a higher folder level.
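The image itself is not reproduced here, but a minimal sketch of such a folder-level test, written against pm.request.method (the newer equivalent of the legacy request.method), could look like this; the field names are illustrative:

// Folder-level test script: runs for every request in the (sub)folder
if (pm.request.method === "POST" || pm.request.method === "PUT") {
    // Setup requests only need to succeed and hand over an id for the follow-up GET
    pm.test(pm.info.requestName + " – prerequisite created", function () {
        pm.response.to.have.status(200);
        pm.environment.set("actualAnimalId", pm.response.json().id);
    });
} else if (pm.request.method === "GET") {
    // The actual functional check happens on the GET request
    pm.test(pm.info.requestName + " – created animal can be retrieved", function () {
        var actualResponseBody = pm.response.json();
        pm.expect(String(actualResponseBody.id)).to.eql(String(pm.environment.get("actualAnimalId")));
    });
}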

Order of test levels
When your tests are defined on folder or (as can be seen in the image above) collection level, they are executed in order when you run your scenario. In the image below you can see that, on scenario level, only one test has been created; in the ‘Test Results’ tab, however, two tests have been executed. The first of these was created at collection level, as described above. Depending on the complexity of your project, it is up to you to decide whether you create these ‘generic’ tests on (sub)folder level or on collection level.
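As a sketch of what that split could look like (both snippets are illustrative, not the original screenshots):

// Tests tab on collection (or sub-folder) level: a generic check for every request
pm.test("Response time is acceptable and the body is valid JSON", function () {
    pm.expect(pm.response.responseTime).to.be.below(2000);
    pm.response.to.be.json;
});

// Tests tab on the individual scenario (request) level: one specific check
pm.test("Newly added animal has status 'available'", function () {
    pm.expect(pm.response.json().status).to.equal("available");
});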

Reducing the amount of tests
It can also be worthwhile to use variables in your pre-request scripts in order to reduce the maintenance burden. When you initialise the pre-request script per individual scenario, you can reduce the number of tests, and reducing the number of tests keeps your entire Postman collection easy to maintain. Example: at my current project we have designed “Happy Flow” and “Error Flow” situations per service. The error flows return responses with a fixed structure, including a clear description of why the request failed. We check this structure at the main level; the detailed message is put into a variable in the pre-request script, and that variable is then used by the tests on the higher level. Using multiple levels in this way reduces the number of individual tests while still covering a lot of detailed information across multiple scenarios.
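A minimal sketch of that error-flow setup, assuming a hypothetical error structure with ‘code’ and ‘message’ elements:

// Pre-request script of an individual "Error Flow" scenario: only the detail differs
pm.variables.set("expectedErrorCode", 404);
pm.variables.set("expectedErrorMessage", "Pet not found");

// Single test on the higher (sub)folder level, shared by all error-flow scenarios
pm.test("Error response has the agreed structure and detail message", function () {
    var actualResponseBody = pm.response.json();
    pm.expect(pm.response.code).to.eql(Number(pm.variables.get("expectedErrorCode")));
    pm.expect(actualResponseBody).to.have.property("message");
    pm.expect(actualResponseBody.message).to.eql(pm.variables.get("expectedErrorMessage"));
});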
Variable Nomenclature
For the sake of good maintainability and readability, I am used to labelling all variables that contain actual response content with the prefix ‘actual’. Likewise, variables that contain information about the expected outcome are labelled ‘expected’. This sounds very simple, but it is also very powerful. I have experienced a shared nomenclature, and explicitly labelling the expected outcome, as a big plus, especially when you are not the only one writing the tests. Being able to understand each other’s tests reduces the risk of misinterpretation.
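In practice it is as simple as this (the names are illustrative):

// 'actual' = taken from the response, 'expected' = taken from the test design
var actualAnimalStatus = pm.response.json().status;
var expectedAnimalStatus = pm.variables.get("expectedAnimalStatus");

pm.test("Animal status matches the expected status", function () {
    pm.expect(actualAnimalStatus).to.eql(expectedAnimalStatus);
});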
Traceability of Variables
Structuring your variables can also be very useful when implementing new scenarios. Lately I have started naming variables after their place of origin. While doing this I distinguish collection variables from non-collection variables (for example, a variable used only in a pre-request script, or at scenario and/or subfolder level). This way I can quickly check whether I need to introduce a new variable or can overwrite an existing one. Next to collection and non-collection variables I distinguish a third category: variables initialised during a test. Example: collect data from one response and put it into a variable, which I can re-use at a later stage of a test flow.
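A minimal sketch of such a naming scheme; the prefixes are my own convention, not a Postman feature:

// Collection variable, defined once on collection level (prefix col_)
console.log("Base URL for this run: " + pm.collectionVariables.get("col_baseUrl"));

// Non-collection variable, set locally in a scenario's pre-request script (prefix loc_)
pm.variables.set("loc_expectedAnimalStatus", "available");

// Variable initialised during a test and re-used later in the flow (prefix run_)
pm.test("Catch the animal id for follow-up requests", function () {
    pm.environment.set("run_actualAnimalId", pm.response.json().id);
});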
Changes
Despite being very careful about maintainability, it may still happen that you have to change many similar lines of code, for example when there is a major (unforeseen) change in the source application. If this happens, do not forget the very useful Postman feature that lets you export your entire collection at any given moment. The export is a JSON file, which you can edit in your favourite editor. The well-known CTRL+F or “find and replace” function in your editor comes in handy to change a lot of tests/data/scenarios in one go. For big changes like these, I use UltraEdit. Afterwards, simply import the changed file into your Postman GUI and carry on testing!
Speedy
Gather feedback as fast as possible. With this in mind, be aware of the run time of your collection with all its test scenarios and assertions. Go through your collection regularly and scan for redundant assertions or even complete redundant scenarios. If you have the chance, explore the unit tests that already exist for the application. Are there unit tests that overlap with your Postman assertions? Then consider whether you really want them in your collection as well. I am sure you do not want to check every possible value of an enumeration list when this has already been done in a much faster unit test.
Testdata vs Performance
With performance in mind, it is best to keep your test data set as small as possible. Currently I am working on a project where a document (raw file) has to be sent along with each call. Instead of a relatively heavy .docx or .pdf file, try using a plain text file, especially when the test is focused on the metadata and not on the file itself. This reduces the processing time and improves your run performance.
Is the file size or extension really important for testing? In that case, implement separate scenarios, or even separate collections, to cover this topic! Coming back to the T of the TRIMS model: it is always recommended to test with a specific target/risk in mind. So only use big files if your target is to test big files. Is your target a small change in metadata? Then do not include the big-file tests, but execute those separately, and only when you really need to. This will save you a lot of time and improve your performance.
Newman
If you compare a user interface with an underlying code layer in terms of speed, the underlying code always wins, and when you use Postman this is no different. Use Postman for setting up new regression sets, for debugging software and test scenarios and, above all, for exploratory testing. But be aware: if you want to run regression sets on a regular basis, it is better to use the much faster Newman, the command-line version of Postman. Newman uses the exported collection and environment files from Postman. Orchestration can be done with Newman itself and (preferably) with a CI/CD tool in your CI/CD pipeline.
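On the command line this boils down to something like newman run collection.json -e environment.json. For orchestration from a Node.js-based pipeline, a minimal sketch using Newman’s library API (the file names are placeholders) could look like this:

// run-regression.js (requires: npm install newman)
const newman = require('newman');

newman.run({
    collection: require('./petstore-collection.json'),   // exported from Postman
    environment: require('./test-environment.json'),     // exported from Postman
    reporters: 'cli'
}, function (err, summary) {
    if (err) { throw err; }
    console.log('Collection run completed with ' + summary.run.failures.length + ' failing assertions.');
});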
Do not hesitate to contact me
Talking about Speedy: if you want to ‘hit the ground running’ with Postman, please feel free to contact me through my LinkedIn page, or use the Newspark LinkedIn page. If you do, I can help you with JSON exports of all the examples mentioned above. All examples use the open-source Pet store API and come with working scripts. So, in conclusion: do you have experience with Postman using the TRIMS approach? Or are you dealing with these examples in another way? Please let me know and feel free to connect on LinkedIn!
