Postman – time to improve and professionalize

Last year I wrote a blog about the basics of Postman and the TRIMS mnemonic. This time I would like to focus on some improvements and the lessons I learned after writing that blog. I will first start with general lessons learned, and after that I will show some nice tricks and code examples that make life a bit easier.

Lessons learned

We often do not realize how much we actually learn by just doing our daily job until we start to summarize it. Below I will share some useful insights, including examples from my daily work.

Risk-based testing

I cannot repeat it enough: think about what you want and need to test before you start copy-pasting existing scenarios (with just some small changes) and thereby create chaos. You might test a lot that way, but you won’t have an overview of what you are actually testing. Yes, due to time pressure (and my enthusiastic work spirit) I occasionally make this mistake as well. Most of the time I found it was not the most effective way to reach my goal, which is to get a good impression of the delivered software, so that I end up with the feeling that it is of sufficiently high quality to go live.

It is better to first understand what the software does and should do, and then to brainstorm about risks. Preferably do this together with the developers, and possibly with a business analyst as well if they are involved. With them you should create an overview of the high-level scenarios you want to test. This way you will know upfront which scenarios can be combined in the same flow of subsequent REST calls, including catching returned data from the headers and/or bodies of these calls and re-using it in follow-up calls (see the sketch below). If you did this preparation well, it becomes easier to link the scenarios together and create re-usable code for the assertions you want to do.
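To illustrate catching and re-using returned data, here is a minimal sketch of a Tests script, assuming the open source Pet store API mentioned at the end of this blog; the variable name petId and the follow-up call are illustrative.

// Tests tab of a 'create pet' request: catch the id from the response body
let createdPet = pm.response.json();
pm.collectionVariables.set("petId", createdPet.id);
// A follow-up request can now re-use the caught value in its URL or body:
// GET {{hostname}}/pet/{{petId}}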

Another nice example of a pitfall is copy-pasting a lot of scenarios of two existing API services (in our situation, an update service and a search service) which are combined to provide bulk updates, to improve the performance and reduce the number of calls. It is easy to copy-paste the scenarios which are already used for the regression tests of the separate services. However, that might not be very interesting, because that code under test is re-used by the developers. It is far more interesting to think about the situations which create exceptions or weird behavior, especially the ones which are introduced by combining these services. So don’t test all possible input, but think about the newly created possibilities for the output.

So to summarize: when you need to test a new service which actually combines two existing services, don’t retest everything, but just test the newly introduced risks.

Code reviews

It is strange to think that code created by developers needs to be checked by a tester because he/she might have made mistakes, while on the other hand, the developers and other team members should just trust the tester and his test work, even though this test work nowadays contains a lot of coding and/or scripting as well. Looking at my situation: I don’t have a developer’s background, so I would expect an even bigger question mark over my code and scripting skills. However, up until now I still have to request a code review myself, and even in the hurry of finishing sprints we often skip the review of the test code. Of course that is a risk, and a missed opportunity to improve the quality and skills of the testers on your team. My experience is that when my code is reviewed by a developer, I learn a lot about coding possibilities and the developer learns a lot about the tester’s mindset: a win-win situation. After reading this blog, just add a test review subtask to your default subtask template to ensure you always keep learning from each other and improve the overall quality.

Balance between re-using and cloning code/scenarios

Based on my previous blog I had a nice conversation with someone who read it. We talked about how I use Postman and how Postman should be used. One of his questions was how to deal with scenarios that are exactly the same except for one difference, for example in the header, for a validation on authorization. My advice was to just accept that you will sometimes have to clone the exact same scenarios. Don’t waste your time on all kinds of difficult and creative ways to avoid duplicating the same code in a few places. I told him that for only two slightly different situations he shouldn’t use the Postman data files feature (although this actually is nice for a lot of other situations, e.g. when you want to test in bulk or when you want to test different and complex data). The advice should always be to find the right balance between cloning code and re-using code: make use of the possibilities, but also think about readability, maintainability and so on. The advantage of cloning a scenario is that you can easily give the additional scenario a clearer title explaining what you test. Besides, you will have more possibilities to do additional customized tests, for example for an error flow, which will generally validate different results than a happy flow.

Half-automated test support

It is great when Postman can do almost all the work for you. However, some situations are too time consuming to automate. You should accept this and test them manually, but at the same time think about how tools like Postman can support you in generating test data, so that you can do the validations faster and more reliably. If you think well about useful names for variables/key values, and for instance include timestamps/UUIDs in your test data, it can speed up the debugging process and help with tracing where error situations/bugs are introduced (see the sketch below).
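As a minimal sketch of this idea: Postman’s built-in dynamic variables can generate UUIDs in a Pre-request Script; the variable name traceablePetName is a hypothetical example, not a convention from our project.

// Pre-request Script: build a traceable test value with a timestamp and a UUID,
// so you can find it back later in logs and in the UI
let uuid = pm.variables.replaceIn("{{$guid}}");
let timestamp = Date.now();
pm.collectionVariables.set("traceablePetName", "testpet-" + timestamp + "-" + uuid);
console.log("Created test data: " + pm.collectionVariables.get("traceablePetName"));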

For my current project, our UIs do not change much, so in my opinion it is a waste of time to automate the UI testing itself (and afterwards maintain the test code/environment). But for the rare situations in which the UIs do change, I want to do some quick checks with a wide variety of data. To do so I work with data files a lot, to generate the needed variety of data (which in the end will be displayed in the UIs). The sketch below gives a simplified impression of how this works, and here you can find more background information about this and how to set it up.

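To give an impression of the data file approach, here is a minimal sketch, assuming a hypothetical data file pets.json selected via the collection runner; the field names are illustrative.

// pets.json, a hypothetical data file:
// [
//   { "petName": "impala", "petStatus": "available" },
//   { "petName": "doggie", "petStatus": "sold" }
// ]

// Tests tab: read the values of the current iteration and validate the response
let expectedName = pm.iterationData.get("petName");
pm.test("Pet name matches the data file", function () {
    pm.expect(pm.response.json().name).to.eql(expectedName);
});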
Improvements

Hooks

The usage of hooks is very common if you google it in combination with e.g. Cucumber or Specflow; however, you will find fewer results if you try to find it in combination with Postman. That’s a shame, because it can be quite helpful to create re-usable hooks.

For example, we extensively use a script which cleans the initialized variables, except the ones which are needed for multiple projects. The easiest solution is to create a dummy call to the hostname with a re-usable Pre-request Script for cleaning the environment variables and the global variables.

This has a few benefits. The first reason for doing this is that your list of variables (on your personal/local machine) will not get polluted and become endlessly long, for example when you run tests and afterwards, during refactoring, change the variable names. If you don’t clean your variables, this list gets polluted very fast with a lot of unused variables. The second reason is to avoid unexpected failures when running the automated tests on the build pipeline. After refactoring, variables might still be initialized locally, so the tests run without failures on your machine; however, when you run them in your build pipeline and forget to initialize them in the complete regression set, these tests are likely to fail.

Of course you have the possibility to just add the script below in a pre-request or test tab with if/else statements, but for us it works best to do it in the Pre-request Script of a dummy call; this way we can just copy-paste the ‘hooks after’ folder into other projects.

// The script below cleans both environment and global variables, and only saves the environment variables which are needed for each project (like api-key and hostname)

// start clean script
// save the project-wide variables before clearing everything
let tempApikey = pm.environment.get("xxx-apikey-YYY");
let tempHostname = pm.environment.get("hostname");
pm.environment.clear();
console.log("Cleared all environment variables");
// restore the saved project-wide variables
pm.environment.set("xxx-apikey-YYY", tempApikey);
pm.environment.set("hostname", tempHostname);
pm.globals.clear();
console.log("Cleared all globals");
// end clean script

Re-usable Libraries

From my experience I know it can be quite useful to have structured libraries with custom-made code which can be re-used for multiple assertions and validations (like the standardized functions which can be used in the step definition files of Specflow/Cucumber test cases). Unfortunately, Postman does not have that possibility (or they hide it so well that I am not aware of it). Luckily there are some ‘dirty’ workarounds, like the eval function: you can store executable JavaScript as a string within a variable, and every time you feed that variable to eval, the JavaScript within it is executed. Below you can see how this re-usable code is initialized within the Pre-request Script of the collection (so on the highest level possible). The if statement is used to avoid that this code is executed every time (which avoids unnecessary run time). Once the code is set, it stays in your global variables until you reset the global variables again (so be aware that if you need to change the code of the library function, you should first clear your global variables, otherwise it will not be updated). Again a small note: the situation below could also be tested without the eval function; I simplified the example to make it easier to understand.
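Since the original screenshots are not included here, below is a minimal sketch of this initialization, assuming the Pet store API; the variable name libraryFunctions and the assertions are illustrative, not the exact code from our project.

// Collection-level Pre-request Script: store the library function once as a string
if (!pm.globals.has("libraryFunctions")) {
    console.log("Storing library functions in the global variables");
    pm.globals.set("libraryFunctions", `({
        validatePetName: function (expectedName) {
            pm.test("Status code is 200", function () {
                pm.response.to.have.status(200);
            });
            pm.test("Pet name is " + expectedName, function () {
                pm.expect(pm.response.json().name).to.eql(expectedName);
            });
        }
    })`);
}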

When the initial library function is set, it can be used at all levels of the Tests tab (i.e. collection level, sub folder level or individual request level). Below is an example at the individual request level. The log was based on three requests which all have the exact same code within the Tests tab, as shown in the example for the ‘impala’. The first line in the log (where the global function is stored to the global variables) proves that the code within the if statement is only executed once (from within the collection-level Pre-request Script), during the first call. From then on, it is executed from the script stored within the global variables, and you won’t see the initialization any more for the other two calls.
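A sketch of what such a Tests tab could look like, following the hypothetical libraryFunctions example above:

// Tests tab of an individual request: load the library from the global variables and use it
let lib = eval(pm.globals.get("libraryFunctions"));
lib.validatePetName("impala");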

Improve readability

In my previous blog I already wrote that it is recommended to use clear names to improve traceability. This can be done by adding the prefix ‘collectionVariable’ to the functional name of the variable. But when my code was reviewed, it became clear that it might still be hard to read sometimes: variable names can become quite long, and the variables are then cut off in the collection variable overview. For this reason we decided to add the suffix ‘FromCollectionVariables’ at the end of the variable name instead of a prefix in front of it. This made the code much more readable.
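A small sketch of the difference, with a hypothetical pet id variable:

// Before: the functional part of the name gets cut off in the overview
// let petId = pm.collectionVariables.get("collectionVariablePetIdOfCreatedPet");
// After: the functional part comes first and stays visible
let petId = pm.collectionVariables.get("petIdOfCreatedPetFromCollectionVariables");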

Continuous Learning and Continuous Improvement

As mentioned above with the code reviews, you always keep learning from each other and by doing so improve the overall quality. If you want to improve me or yourself, do not hesitate to get in touch. Feel free to contact me through my LinkedIn page, or use the Newspark LinkedIn page. If you do, I can also help you with the JSON exports of all the examples mentioned above. All examples use the open source Pet store API and come with working scripts.