
QA and Testing Best Practices - Part 2: Tips and Tricks

Pawel Kowalski | April 8, 2019


Welcome to part 2 of our series where we share our experience with QA and testing. In part 1, we described our QA process and the tools we use, and discussed end-to-end testing with TestCafe. In this part, we delve deeper and explore some best practices for planning, organizing, and writing your tests.

Organizing end-to-end tests in your project

Structure

The first step when planning end-to-end tests for your project is to split your tests into a logical structure that helps you identify what's actually being tested within your test suite. If a single test has 50 lines and a lot of expects, that's a sign you are testing too much within one test.

Splitting your tests will also help you write test names that help you identify problems and understand what you have covered in your tests.

This is an example of how NOT to organize your tests:

// Bad -- This example is how NOT to do it
test('Create, Read, Update, Delete pattern using AJAX and customization', async t => {
  await t
    .click(homePage.link.ajax)
    .typeText(feedback.input.create_message, 'Lorem ipsum')
    .click(feedback.radio.radioExcellent)
    .click(feedback.button.submit)
    .click(feedback.button.refresh)
    .expect(feedback.table.tableRows.count)
    .eql(1)
    .expect(feedback.data.rating.innerText)
    .eql(feedback.txt.createRating)
    .expect(feedback.data.message.innerText)
    .eql(feedback.txt.createMessage);
  let customization_id = await feedback.data.id.innerText;
  await t.typeText(feedback.input.update_id, customization_id).click(feedback.radio.radioMeh);
  await t
    .typeText(feedback.input.update_message, 'Dolor ipsum')
    .click(feedback.button.update)
    .click(feedback.button.refresh)
    .expect(feedback.data.rating.innerText)
    .eql(feedback.txt.updatedRating)
    .expect(feedback.data.message.innerText)
    .eql(feedback.txt.updatedMessage);
  await t
    .typeText(feedback.input.delete_id, customization_id)
    .click(feedback.button.delete)
    .click(feedback.button.refresh)
    .expect(feedback.table.tableRows.count)
    .eql(0);

});

Instead, this is how you SHOULD do it:


// Good  -- how it should be done.
test('customizations_delete_all cleans feedback correctly', async t => {
  await t.expect(feedback.table.tableRows.count).gte(0);

  // clean database
  await t.navigateTo('/feedback/clean').wait(500);

  await t.expect(feedback.table.tableRows.count).eql(0);
});

test('Create, Read', async t => {
  await t
    .click(feedback.radio.create.excellent)
    .typeText(feedback.input.create_message, 'Lorem ipsum')
    .click(feedback.button.submit)
    .click(feedback.button.refresh);

  await t.expect(feedback.table.tableRows.count).eql(1);
  await t.expect(feedback.data.rating.innerText).eql(feedback.txt.createRating);
  await t.expect(feedback.data.message.innerText).eql(feedback.txt.createMessage);
});

test('Update, Read', async t => {
  let customization_id = await feedback.data.id.innerText;

  await t
    .typeText(feedback.input.update_id, customization_id)
    .click(feedback.radio.update.meh)
    .typeText(feedback.input.update_message, 'Dolor ipsum')
    .click(feedback.button.update)
    .click(feedback.button.refresh);

  await t.expect(feedback.data.rating.innerText).eql(feedback.txt.updatedRating);
  await t.expect(feedback.data.message.innerText).eql(feedback.txt.updatedMessage);
});

test('Delete, Read', async t => {
  let customization_id = await feedback.data.id.innerText;

  await t
    .typeText(feedback.input.delete_id, customization_id)
    .click(feedback.button.delete)
    .click(feedback.button.refresh);

  await t.expect(feedback.table.tableRows.count).eql(0);
}).after(async t => {
  /*
    At the end, create one entry to make sure that when the DB clean test passes,
    it actually had something to clear
  */
  await t
    .click(feedback.radio.create.excellent)
    .typeText(feedback.input.create_message, 'This should be cleared by next test run')
    .click(feedback.button.submit);

});


The output of the first example looks like this:

Feedback - CRUD using Ajax

✓ Create, Read, Update, Delete pattern using AJAX and customization

This means it can fail in any one of four separate operations inside the test.

The second example communicates more precisely which part of the feature works and which part failed (if anything fails):

Feedback - CRUD using Ajax
✓ customizations_delete_all cleans feedback correctly
✓ Create, Read
✓ Update, Read
✓ Delete, Read

The only downside is that the second example will run slower, because it will load the page multiple times. To avoid this, you can set the disablePageReloads setting on test or fixture to disable page reloads between tests.


Environments

If you are like us, and you like to clear/seed the database from your tests using the browser, you will need to implement some safety switches so that your users won't accidentally land on a page that deletes all their data. Assuming you have a feature called feedback, you would create a feedback_clean endpoint to remove rows created by the previous test run.

We protect users by only allowing them to access endpoints on staging, where we run tests before the code gets promoted to the production environment.

In platformOS, there is a quick and reusable way to do that. Create an authorization_policy and use it on any page that you want to protect:

// authorization_policies/is_staging.liquid
---
name: is_staging
---
{% if context.environment == "staging" %}true{% endif %}


// modules/feedback/public/views/pages/feedback_clean.liquid
---
slug: feedback/clean
authorization_policies:
  - is_staging
---

{% execute_query 'modules/feedback/clean' %}

Note: Actually, this is not necessary, because platformOS has another safety switch, which prevents executing the customizations_delete_all mutation on production. Still, it's a good practice to protect the endpoint, because the underlying implementation (usage of protected mutation) might change without the person managing the page knowing. See our example implementation here.


Page objects

Page object is a design pattern that has become popular in test automation for enhancing test maintenance and reducing code duplication. A page object is an object-oriented class that serves as an interface to a page of your application under test. The tests then use the methods of this page object class whenever they need to interact with that page of the UI. The benefit is that if the UI changes for the page, the tests themselves don't need to change; only the code within the page object needs to change. Subsequently, all changes to support that new UI are located in one place. (from the Selenium documentation)
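
For illustration, here is a minimal page object sketch in the style used throughout this article (the selectors and attribute names are hypothetical):

// page-object.js - a minimal page object sketch; selectors are hypothetical
import { Selector } from 'testcafe';

export default class Feedback {
  constructor() {
    this.input = {
      create_message: Selector('[data-test="create-message"]')
    };

    this.button = {
      submit: Selector('[data-test="submit-feedback"]'),
      refresh: Selector('[data-test="refresh-feedback"]')
    };

    this.table = {
      tableRows: Selector('[data-test="feedback-table"] tbody tr')
    };
  }
}

Tests import and instantiate this class, so when the markup changes, only the page object needs updating.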

A couple of best practices to follow when using page objects:

  1. Page objects should serve you, not the other way around. Don't try to extract everything into page objects. If your objects get so bloated that you see a hundred lines and variables, you have quite possibly gone overboard, and your test code will become hard to understand and maintain. Maybe it's time to split the fixture into multiple ones, or to verify that you only test the end result: the user's goal.
  2. Try to write self-contained tests that do not depend on files that are not directly connected to the feature currently tested. This isn't always 100% possible; prefer duplication over multiple dependencies when that happens. Example: if you have the same validation message in 3 forms, duplicate those messages. Otherwise, if one validation message changes in one place, at least 2 tests will fail, and you will have to extract them anyway. If you are working on only one project and don't care about replicability in the future, you can usually get away with ignoring this rule. Our case is quite the opposite: everything we do, we need to do with replicability in mind, so we need to think about scale at every step of development, including testing.
  3. Make the fixture start directly at the feature URL. Do not test getting there by clicking; this will speed up your tests, and you won't test clicking the same links 50 times. Navigating between modules should be left for the integration part of the test suite.
  4. Extract helpers that you repeat in multiple projects and page objects into your own npm package (example: @platform-os/testcafe-helpers). Test your helpers! You have to be sure that they work.
  5. Read about other risks of using page objects the wrong way in this post, which, I think, correctly identifies problems with maintaining page objects when they are misused.


platformOS modules

If you keep your tests independent, you can clearly see what is a "feature test" and what is an "integration test". This will also make it very easy for you to distribute the tests and run them on your CI.

A feature (module) test checks if a feature is working correctly without any external factors. Think about it as you think about a pure function: for a given input, you always expect the same output.


# Example placement of tests inside modules
marketplace-nearme-example ⇒ tree modules/**/test
modules/admincms/test
├── index.js
└── page-object.js
modules/contacts/test
├── index.js
└── page-object.js
modules/direct_s3_upload/test
├── index.js
└── page-object.js

Integration (flow) tests check whether modules work correctly together in a project environment. For example, if you don't treat the menu as a module, check if it allows users to get where they want to go. The general idea, though, is to treat as much as possible as modules; they are easier to encapsulate, duplicate across projects, and test individually.
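
As a sketch, a project-level flow test could look like this (the menu and contacts page objects are hypothetical):

// Example - an integration (flow) test across modules; page objects are hypothetical
test('Menu leads the user to the contacts module', async t => {
  await t.click(menu.link.contacts);

  await t.expect(contacts.form.newContact.exists).ok();
});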


Integration and team communication

End-to-end tests can save you a lot of time, especially if you integrate them with your automated workflow. Make sure to keep an eye on these aspects when thinking about introducing them into your company:

  1. You cannot deploy when tests are failing - this is critical. You should test everything important AND you should not try to work around failing tests by skipping them.
  2. Skips in our projects are used as a TODO/bug report with a proper message and a comment explaining what needs to be done for the test to be viable. Regression tests are an amazing tool for a developer assigned to fix a bug; they work similarly to the TDD approach.

    // Example
    test.skip('Unknown language has layout - unskip when fixed', async t => {
      await t.navigateTo('/multilanguage?language=js&name=John&url=https://documentation.platform-os.com');

      const footer = Selector('footer');
      await t.expect(footer.exists).ok();
    });
  3. You have to be sure that if your tests are passing, your app can be safely deployed. Ideally, your continuous integration service will deploy it for you.
  4. Using different reporters, you can inform your team about failing tests. Usually, your CI will inform you about a build failing, but a build can fail for multiple reasons and won't tell you much about the tests, at least not without additional work on your part. Adding a Slack reporter can make the feedback loop much shorter (see the sketch below).
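
As a sketch, assuming you use a community reporter plugin such as testcafe-reporter-slack (check the plugin's README for webhook configuration), wiring it up could look like this:

// Example - install a reporter plugin and run tests with it
npm install --save-dev testcafe-reporter-slack
testcafe chrome tests/ --reporter slack

TestCafe resolves the "slack" reporter name to the installed testcafe-reporter-slack package.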


Test features from the user's perspective

We found that the best approach is to only test the end result (goal) of a feature from the user's perspective. For example, if you test a user story that says "User can log in", don't test if every character inside the flash notice is correct — check if it's a success or if a logout button appeared on the page. If you test the search page, test if there are more than 0 results. Then at some point, you might want to separately test the "Show 15/25/50 results" button.
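
For example, a goal-oriented login test could look like this (a sketch; the login page object and the logout button selector are hypothetical):

// Example - assert the user's goal, not the exact notice copy
test('User can log in', async t => {
  await t
    .typeText(login.input.email, 'user@example.com')
    .typeText(login.input.password, 'correct-horse-battery-staple')
    .click(login.button.submit);

  // The goal: the user is logged in, so a logout button is visible
  await t.expect(Selector('[data-test="logout-button"]').exists).ok();
});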


Writing tests

The only way to improve your testing skills is to actually write tests, experience some pain points, and find out what to do about them. Sometimes you already know all the pieces, but haven't yet connected the dots — this section is meant to show you some connections between those dots.

Detecting unidentified elements

If something is very hard to select, adding an HTML attribute might be the best way out of CSS selector hell. We prefer HTML data-* attributes, for example, data-test="avatar-image". Writing code that's easy to test is in itself a skill worth learning. Let us point out that using the BEM notation (or anything similar) for CSS and data attributes for selecting elements in JS should make your life much easier.


// Example (illustrative markup)
<a href="/" class="nav-link active" data-active="true" data-test="nav-home">Home</a>

Please note that the active CSS class is used only in CSS, for styling. The data attribute active is used by JavaScript. Decoupling these is a best practice to follow. Structuring your code like this makes it easier to test as well.


Flow hooks

Utilize before, after, beforeEach, afterEach hooks to DRY up your code and to help you harness concurrency, if necessary.


// Example - last test of the suite
test('Delete, Read', async t => {
  let customization_id = await feedback.data.id.innerText;

  await t
    .typeText(feedback.input.delete_id, customization_id)
    .click(feedback.button.delete)
    .click(feedback.button.refresh);
   
  await t.expect(feedback.table.tableRows.count).eql(0);
}).after(async t => {
  /*
    At the end, create one entry.
    Next time tests are run, it will have some data to work with.
  */
  await t
    .click(feedback.radio.create.excellent)
    .typeText(feedback.input.create_message, 'This should be cleared by next test run')
    .click(feedback.button.submit);

});


Separating actions from assertions

Separate actions from assertions; an empty line goes a long way toward improving code readability.

// Example
await t.setFilesToUpload(updateProfile.input.ajax, [`${uploads}/hero.png`]);

await t.expect(updateProfile.files.ajax.textContent).contains('hero.png');
await t.expect(updateProfile.files.ajaxImage.count).eql(1);


Avoiding excessive chaining

It is very tempting to just chain everything into one big chain of commands in TestCafe, but it hurts readability. Shorter statements make failures easier to spot and keep lines from wrapping under Prettier formatting. We try to keep one .expect per line when asserting.


// Bad
await t
  .click(homePage.link.ajax)
  .typeText(feedback.input.create_message, 'Lorem ipsum')
  .click(feedback.radio.radioExcellent)
  .click(feedback.button.submit)
  .click(feedback.button.refresh)
  .expect(feedback.table.tableRows.count)
  .eql(1)
  .expect(feedback.data.rating.innerText)
  .eql(feedback.txt.createRating)
  .expect(feedback.data.message.innerText)
  .eql(feedback.txt.createMessage);

// Good
await t
  .click(homePage.link.ajax)
  .typeText(feedback.input.create_message, 'Lorem ipsum')
  .click(feedback.radio.radioExcellent)
  .click(feedback.button.submit)
  .click(feedback.button.refresh);

await t.expect(feedback.table.tableRows.count).eql(1);
await t.expect(feedback.data.rating.innerText).eql(feedback.txt.createRating);
await t.expect(feedback.data.message.innerText).eql(feedback.txt.createMessage);


Cleaning/seeding data

One of the best concepts you can implement when writing tests is to have tests start with the same state of the application every time they are run.

Depending on your case you can either:

  • Clean your database at the beginning of the test run, seed data, and then proceed with testing. You can do that in a browser (like we do) or programmatically (e.g., by calling an API endpoint that clears your whole database). See the sketch after this list.
  • Clean everything after the tests, and seed data right after that. This approach leaves the database in its "start state" after tests finish, for you to inspect if necessary.
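
A minimal sketch of the first approach, reusing the staging-only /feedback/clean endpoint shown earlier (the staging URL is a placeholder):

// Example - reset state before each test via a staging-only endpoint
fixture('Feedback - CRUD using Ajax')
  .page('https://staging.example.com/feedback') // placeholder staging URL
  .beforeEach(async t => {
    await t.navigateTo('/feedback/clean');
    await t.navigateTo('/feedback');
  });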

Generating fake data

At some point you will be testing user names, addresses, credit cards, dates, and other data that you might want to hardcode. There is a better way: use a library that generates fake data for you, like Faker.js.


// Example
import faker from 'faker';

const name = faker.name.findName(); // Rowan Nikolaus
const email = faker.internet.email(); // Kassandra.Haley@erich.biz

Using non-hardcoded, generated data will test your application more completely. If you use the same email every time (e.g. example@example.com), you might not catch a bug that prevents users from using a + sign in their email (e.g. pawel+test@platform-os.com).

Tests should also test for edge cases:

  • when input is left empty
  • when a number field receives a string
  • when you insert an emoji or other UTF-8 characters
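
A sketch that registers one test per edge-case value (the feedback page object is reused from earlier examples; flash.error is a hypothetical selector):

// Example - one test per edge-case value
const edgeCases = ['😅', '-1', '2147483648', '<script>alert(1)</script>'];

edgeCases.forEach(value => {
  test(`Feedback form handles edge-case input: ${value}`, async t => {
    await t
      .typeText(feedback.input.create_message, value, { replace: true })
      .click(feedback.button.submit);

    await t.expect(feedback.flash.error.exists).notOk();
  });
});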

There is a joke that describes this strategy very well:

A code tester walks into a bar. Orders a beer. Orders ten beers. Orders 2.15 billion beers. Orders -1 beer. Orders nothing. Orders a cat. Tries to leave without paying.

One might argue that it's impossible to test everything, and we agree. But try not to limit yourself to testing only the happy path of your perfect user. Users make mistakes and your application can either handle them or not.

Note: Randomizing data means you will probably create many more database entries (e.g. users), which can be a very good thing if handled correctly.


Writing smart CSS selectors

Don't write overly specific CSS selectors; this helps you avoid failures when something in the HTML structure changes. You can't avoid all issues, of course, but you can make your tests more robust by utilizing attributes other than CSS classes, for example: type, name, data-*, and scoping.

Scope using .find(). For example, if you have multiple forms, it's quite likely that each form has only one submit button; scope your selectors to each form, and then find the submit button within it.


// Bad 
Selector('.mt-3:nth-of-type(2) > form[action="/api/customizations"] input[name="customization_id"].form-control'),
Selector('.row > .mt-3:nth-of-type(2) label:nth-of-type(2)').withText('Meh')

// Good
this.form = {
  profile: Selector('[data-test="profile-form"]')
};

this.input = {
  avatar: this.form.profile.find('[id~="avatar"]'),
  banner: this.form.profile.find('[id~="banner"]')
};

Scoping also helps with making future updates smaller:


// Bad - selectors are duplicated; a change in the forms' markup requires updating 4 places
this.form = {
  html: Selector('[data-test="html-form"]'),
  profile: Selector('[data-test="profile-form"]'),
};

this.submit = {
  html: Selector('[data-test="html-form"] button'),
  profile: Selector('[data-test="profile-form"] button')
};

// Good
this.form = {
  html: Selector('[data-test="html-form"]'),
  profile: Selector('[data-test="profile-form"]'),
};

this.submit = {
  html: this.form.html.find('button'),
  profile: this.form.profile.find('button')
};

Fixing a bug starts with a regression test

When your client reports a bug, it is very tempting to jump right into fixing it. We would argue that this is not the best approach. This is more a piece of process advice, but following this simple process has allowed us to catch, fix, and never reintroduce a lot of bugs after they were reported.

  1. Create a card/issue/ticket in your ticketing system.
  2. Write a test that reproduces the bug; this sometimes reveals that the bug is actually somewhere else. Include the ticket/issue number inside the test name for future reference (see the sketch after this list).
  3. Try not to cover only the simplest case; think about what else users might do in this scenario as well. Repeated regressions are frustrating for users.
  4. Fix the bug — the test should be green now.
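
A minimal sketch of such a regression test (the ticket number, page object, and selectors are hypothetical):

// Example - a regression test named after the ticket it reproduces
test('Issue #482: profile can be saved with an empty bio', async t => {
  await t
    .selectText(updateProfile.input.bio)
    .pressKey('delete')
    .click(updateProfile.button.save);

  await t.expect(updateProfile.flash.success.exists).ok();
});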

Now you are safe. Your test will fail if someone tries to break the feature in the same way, and your users will have more confidence in your team.


Bonus tip: Single Page Applications

Because single page applications don't need page reloads, and a reload can lose the application's state, you should look into the disablePageReloads setting to prevent unnecessary reloads.


// Example 1 - Set on fixture
fixture.disablePageReloads('Registration');

test('Too short password is not accepted', () => {});

// Example 2 - Set on test
fixture('Registration');

test.disablePageReloads('Too short password is not accepted', () => {});

// Example 3 - Set on fixture and override in test
fixture.disablePageReloads('Registration');
test('Email taken shows validation error', () => {});
test.enablePageReloads('Too short password is not accepted', () => {});

Bonus tip: Performance testing

TestCafe has access to all performance metrics in the browser via ClientFunction, and if you want to, you can use it to measure Real User Metrics and create assertions that keep your performance numbers in check. This will prevent your application from getting slowed down by inefficient code. It will also help your team respect performance budgets, if you have those.

Below, you can see a snapshot of the performance object available in modern browsers. You can calculate differences between these points in time to know how long certain operations took or how much system memory your app uses.


// Snapshot of the `performance` object at your disposal
{
  "timeOrigin": 1552050498975.833,
  "timing": {
    "navigationStart": 1552050498983,
    "unloadEventStart": 0,
    "unloadEventEnd": 0,
    "redirectStart": 0,
    "redirectEnd": 0,
    "fetchStart": 1552050498992,
    "domainLookupStart": 1552050498996,
    "domainLookupEnd": 1552050499066,
    "connectStart": 1552050499066,
    "connectEnd": 1552050499190,
    "secureConnectionStart": 1552050499122,
    "requestStart": 1552050499190,
    "responseStart": 1552050499327,
    "responseEnd": 1552050499330,
    "domLoading": 1552050499333,
    "domInteractive": 1552050499603,
    "domContentLoadedEventStart": 1552050499603,
    "domContentLoadedEventEnd": 1552050499674,
    "domComplete": 1552050500921,
    "loadEventStart": 1552050500922,
    "loadEventEnd": 1552050500922
  },
  "navigation": {
    "type": 0,
    "redirectCount": 0
  }
}

// Browser example
performance.timing.domInteractive - performance.timing.connectStart
// => 358 (milliseconds)
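
Putting it together, here is a minimal sketch of a performance budget assertion via ClientFunction (the page URL and the 1000 ms budget are placeholders; keep in mind the workaround mentioned in the note below):

// Example - a sketch of a performance budget assertion
import { ClientFunction } from 'testcafe';

// Runs in the browser and reads the Navigation Timing API
const getTimeToInteractive = ClientFunction(() => {
  const { timing } = window.performance;
  return timing.domInteractive - timing.navigationStart;
});

fixture('Performance').page('https://example.com'); // placeholder URL

test('Page becomes interactive within the budget', async t => {
  const timeToInteractive = await getTimeToInteractive();

  await t.expect(timeToInteractive).lt(1000); // placeholder budget (ms)
});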

Note: At the time of writing this article, a small workaround is needed, read about it here: https://github.com/DevExpress/testcafe/issues/3546#issuecomment-471503749 — it may be fixed by the time you are reading this though.

Examples

We tried to implement most of this article's advice in our public examples repository to ensure the stability of this resource.

We hope you found these tips and tricks helpful. If so, stay tuned for part 3, where we talk about troubleshooting and speeding up your development process.


