testjavascript / nodejs-integration-tests-best-practices

✅ Beyond the basics of Node.js testing. Including a super-comprehensive best practices list and an example app (March 2024)


Best practices ideas

goldbergyoni opened this issue

Example here:
#43

YO, two strategic topics:

  1. Repo name and concept - Seems like we're going with the repo name 'Integration Tests Best Practices'. Any counter-thoughts? Maybe 'Node.js Tests Best Practices', which is even more generic and opens the door for more topics, but is also less focused?

  2. Best practices sketch - I'm including below a first draft of best-practice ideas and their categorization (there will be more, and each will include a longer explanation with a code example). Does this feel like the right taxonomy, and will it end up as interesting content?

The golden principles

Super simple, declarative, and short (7 LOC) tests
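To make this concrete, here is a minimal sketch of the target shape - a complete integration test in roughly seven declarative lines (the route, payload, and the shared `axiosAPIClient` are illustrative; the client itself is sketched under 'Basic tests' below):

```javascript
test('When adding a valid order, Then it gets approved', async () => {
  const orderToAdd = { userId: 1, productId: 2, mode: 'approved' };

  const { status, data } = await axiosAPIClient.post('/order', orderToAdd);

  expect({ status, data }).toMatchObject({ status: 200, data: { mode: 'approved' } });
});
```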

Web server setup (3)

  • Random port - Done
  • Return the full address (maybe)
  • Expose open and close methods - Done (see the sketch after this list)
  • Mind remote environment (maybe)
  • Same process - Done
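For the random-port and open/close bullets, a minimal sketch of an app entry point, assuming Express (file layout and names are illustrative):

```javascript
// entry-point sketch: expose start/stop and bind to a random port so many
// test workers can run the same app in parallel without port conflicts
const express = require('express');

let connection; // kept so stop() can close the server later

function start() {
  const app = express();
  // ...define routes and middleware here...
  return new Promise((resolve) => {
    // Port 0 asks the OS for any free port; return the full address to the caller
    connection = app.listen(0, () => resolve(connection.address()));
  });
}

function stop() {
  return new Promise((resolve) => {
    connection ? connection.close(() => resolve()) : resolve();
  });
}

module.exports = { start, stop };
```

The test setup can then await `start()`, read the random port from the returned address, and call `stop()` in a global teardown - all within the same process as the tests.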

Infrastructure setup (6)

  • Use docker-compose - Done
  • In global-setup - Done (see the sketch after this list)
  • Optimize speed - Done
  • Keep up in dev env - Done
  • Use production migration to build-up - Done
    - RAM folder - Not yet
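A hedged sketch of the docker-compose + global-setup idea, assuming Jest's `globalSetup` hook and a docker-compose.yml that defines a `db` service (both names are illustrative):

```javascript
// global-setup.js (wired via Jest's globalSetup option) - start infrastructure
// once per test run instead of once per test file
const { execSync } = require('child_process');

module.exports = async () => {
  // Starts the DB in the background; if it's already up (dev environment),
  // docker-compose leaves the running container in place
  execSync('docker-compose up -d db', { stdio: 'inherit' });
  // ...wait for the DB to accept connections, then run the production migrations...
};
```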

Basic tests (5)

  • Axios, not supertest (configure a global instance that doesn't throw when the HTTP status !== 200; see the sketch after this list)
  • Generate a JWT secret for authentication
  • Assert on whole objects, including the status
  • Structure describe blocks by route and stories
  • Keep unit-test good practices (AAA, naming)
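A sketch of that shared Axios instance; how the port is obtained is illustrative - here it's assumed to come from the address returned by the entry-point sketch above:

```javascript
const axios = require('axios');
const { start } = require('./entry-point'); // the start/stop sketch from above

let axiosAPIClient;

beforeAll(async () => {
  const { port } = await start(); // random port from the returned address
  axiosAPIClient = axios.create({
    baseURL: `http://127.0.0.1:${port}`,
    validateStatus: () => true, // never throw - tests assert on the status explicitly
  });
});
```

This is what enables the assertion style shown earlier: asserting on a single object that includes the status, with no try/catch around every call.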

Tests isolation (8)

  • Intercept any outside calls for isolation
  • Define network interception in beforeEach, clean up in afterEach
  • Disable all network requests except those which were explicitly defined (nock.disableNetConnect, plus nock.enableNetConnect for allowed hosts; see the sketch after this list)
  • When you have a default but need to override it in a specific test - create a unique request pattern or remove the global scope
  • Simulate collaborator failures
  • Explicitly define the request schema to test the outgoing call (how explicit to be?)
  • Record requests to discover the various integration patterns and collaborators to build the tests upon
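A sketch of that interception lifecycle with nock (the collaborator host and route are illustrative):

```javascript
const nock = require('nock');

beforeEach(() => {
  // Block any real outgoing HTTP traffic, except the API under test on localhost
  nock.disableNetConnect();
  nock.enableNetConnect('127.0.0.1');
  // Default happy-path interception for a collaborator service
  nock('http://users-service').get(/\/user\/.+/).reply(200, { allowed: true });
});

afterEach(() => {
  nock.cleanAll(); // remove all interceptors so tests don't leak into each other
  nock.enableNetConnect();
});
```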

Dealing with data (7)

  • Each test acts on its own records - avoid coupling
  • Seed only metadata
  • Clean up only at the end
  • Use a randomizing factory (e.g. Rosie-like; see the sketch after this list)
  • Check the response schema
  • Test large responses:
    - More than a single page in paging (Should write)
    - Insert two in parallel (Should write)
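A sketch of the randomizing-factory idea using Rosie (entity and field names are illustrative):

```javascript
const { Factory } = require('rosie');

// Every built record gets unique values, so parallel tests never collide on data
Factory.define('order')
  .attr('externalIdentifier', () => `id-${Date.now()}-${Math.floor(Math.random() * 1e6)}`)
  .attr('mode', 'approved');

// Each test builds (and inserts) its own record, overriding only what it asserts on
const orderToAdd = Factory.build('order', { mode: 'pending' });
```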

Error handling and metrics

  • Test various error-handling flows and outcomes (see the sketch after this list)
    - Test metrics
  • Test OpenAPI documentation
  • Test the contracts
  • Test for memory leaks
  • Tag pure black-box tests to reuse against remote environment
  • Test DB/ORM migrations
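For the error-flows bullet, a sketch that reuses the nock and Axios pieces above (the error shape is illustrative):

```javascript
test('When the users service is down, Then the API responds with a well-formed 503', async () => {
  // Remove the default happy-path interceptor, then define a failing one
  // (the 'override in a specific test' bullet from the isolation section)
  nock.cleanAll();
  nock('http://users-service').get(/\/user\/.+/).reply(500);

  const { status, data } = await axiosAPIClient.post('/order', { userId: 1, productId: 2 });

  expect({ status, data }).toMatchObject({ status: 503, data: { errorType: 'dependency-failure' } });
});
```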

Message queue related testing (8)

  • Flatten the test, get rid of callbacks
  • Thoughtful decision about real vs fake
  • Test a poisoned message (see the sketch after this list)
  • Test idempotency
  • Test DLQ
  • E2E (Retry, DLQ, names)
  • Test ACK
  • Test start failure, Zombie
  • Test for metadata, JWT
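For the poisoned-message bullet (and the 'flatten the test' one), a hedged sketch assuming a hypothetical in-memory fake queue that emits a 'nack' event when the consumer rejects a message - FakeMessageQueue and startMQConsumer are illustrative, not a real API:

```javascript
const { once } = require('events');

test('When a poisoned (unparseable) message arrives, Then it is rejected without requeue', async () => {
  const fakeQueue = new FakeMessageQueue(); // hypothetical EventEmitter-based fake
  await startMQConsumer(fakeQueue); // wire the app's consumer to the fake

  fakeQueue.publish('user-deleted', '{not-valid-json');

  // Flatten the callback flow: await the 'nack' event instead of nesting callbacks
  const [rejection] = await once(fakeQueue, 'nack');
  expect(rejection.requeue).toBe(false); // a poisoned message must not loop forever
});
```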

Workflow (6)

  • Tune a test runner for ongoing testing
  • Start with integration tests
  • Focus on feature coverage
  • Test various error handling
  • The KATA
  • Slow test
  • When unit

Other ideas

  • Parallelize requests

More will come here. Suggest more?

The homepage readme moved here:
https://github.com/testjavascript/integration-tests-a-z/tree/awesome-homepage

Before I spend a long time writing 40 best practices, LMK if you have thoughts on:

  1. The content - See msg above
  2. The homepage look&feel
  3. The bullet format example (number 1 in the list only: 'Place a start and stop method within your app entry point')

Wow, this is amazing!!! Kudos!

  1. I'm fine with 'Integration Tests Best Practices'; this is a broad topic already IMO, and if we want to add other topics we can create a dedicated repo.
  2. The content looks interesting and valuable!

I just don't understand the 'Golden principles' part. What should it contain?

Hi!
Just a couple of thoughts, in no specific order:

Content related aspects

  • Debugging flaky tests, and practices to avoid them
  • Expanding on the 'test metrics' part - do we stop at the metric output, or do we discuss what to do with it? i.e. sending it to Prometheus, analyzing the data, etc.
  • Adding qualitative metrics to our tests? How do we know a test has value? When should we write a test, and vice versa.
  • Performance optimization of large test bases [expanding on 'optimize speed']: parallel execution, remote execution, etc.
  • Running an isolated subset of tests locally during dev, and emphasizing that all tests should run before committing.
  • Deriving causality - using tools to link specific commits (outside the project) to failing tests:
    This can be achieved by executing tests on a mocked request along with a real-world request and analyzing the delta if one appears.
    I assume this can be achieved with some contract-based configuration.

General Aspects

  • The repo's name can contain 'Awesome' and be part of the awesome collective
  • Adding a checklist of things to have, in order of importance, will help people get on board faster IMO
  • A real-world example - let's make something that shows off this whole thing, with comments pointing to the relevant parts in the repo
  • Boilerplates for the popular CLIs? e.g. testable-nest, testable-express, etc.
  • TypeScript?

Again, just random thoughts here.

@silicakes Gold stuff here. First few questions:

Debugging flaky tests and practices to avoid it

Tests should not be flaky; all of these guidelines together are meant to yield great, non-flaky tests. Are you referring to a specific situation? Why would tests be flaky?

Performance optimization of large test-bases [ expanding on 'optimize speed' ]: Parallel execution, remote execution etc..

Kindly elaborate on remote execution?

Boilerplates for the popular CLIs? i.e testable-nest, testable-express etc..

Sounds interesting, can you share a few more words on this idea?

The repos name can contain 'Awesome'

Awesome libs === link aggregators. We show here a full app with a lot of code and non-trivial practices; would it do us justice to be put in the same basket as those libs?

Tests should not be flaky; all of these guidelines together are meant to yield great, non-flaky tests. Are you referring to a specific situation? Why would tests be flaky?

That's right; however, sometimes you'll get a flaky test from not adhering to these principles, and it could be nice to show a debugging workflow to discover the cause.

Kindly elaborate on remote execution?

Executing tests on CI and remote machines, executing just a subset of tests during development, etc.
Basically, everything that revolves around providing feedback as fast as possible.

Boilerplates for the popular CLIs

The same way you have templates for various frameworks and tools (e.g. npx create-react-app myApp --template typescript), you could have a couple that set up the initial testing configuration and bootstrap.

Awesome libs

I agree, we're not going to aggregate links, disregard that.
(We might create an aggregation of testing libraries and tools though, which could be a nice appendix to these practices.)