workduck-io / mex-integration-monorepo


nx-serverless-template


Template Layout


.
├── stacks/    # stack for each serverless configuration/template and its associated files
├── libs/      # shared libraries
├── tools/
├── README.md
├── jest.config.js
├── jest.preset.js
├── nx.json
├── package.json
├── serverless.base.ts  # base configuration for serverless
├── tsconfig.base.json
├── workspace.json
├── .editorconfig
├── .eslintrc.json
├── .gitignore
├── .husky              # git hooks
├── .nvmrc
├── .prettierignore
├── .prettierrc

Prerequisites


  • Nodejs protip: use nvm

    ⚠️ Version: lts/fermium (v14.17.x). If you're using nvm, run `nvm use` to ensure your local Node version matches your lambda's runtime.

  • 📦 Package Manager

    • Yarn

      (or)

    • NPM Pre-installed with Nodejs

  • 💅 Code format plugins

    In your preferred code editor, install plugins for the tools listed above.

Usage


Install project dependencies

  • Using Yarn

    ```shell
    yarn
    ```

Generate a new stack

```shell
nx workspace-generator serverless <STACK_NAME>
```

Set the basePath of the custom domain manager for each new stack in its serverless.ts file.

Stack name shouldn't include special characters or whitespace.

Run with the -d or --dry-run flag for a dry run.
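The basePath mentioned above lives in each stack's serverless.ts. A hypothetical excerpt is sketched below; the field names follow the serverless-domain-manager plugin's `customDomain` config, and the service and domain names are illustrative, not taken from this repository:

```typescript
// Hypothetical excerpt from stacks/<STACK_NAME>/serverless.ts.
// Domain, service name, and surrounding structure are illustrative.
const serverlessConfiguration = {
  service: 'my-stack',
  custom: {
    customDomain: {
      domainName: 'api.example.com',
      basePath: 'my-stack', // must be unique per stack
      createRoute53Record: true,
    },
  },
};

export default serverlessConfiguration;
```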

Generate a new library

```shell
nx g @nrwl/node:lib --skipBabelrc --tags lib <LIBRARY_NAME>
```

Library name shouldn't include special characters or whitespace.

Run with the -d or --dry-run flag for a dry run.

Package stack
  • To package a single stack

    ```shell
    nx run <STACK_NAME>:build --stage=<STAGE_NAME>
    ```

  • To package stacks affected by a change

    ```shell
    nx affected:build --stage=<STAGE_NAME>
    ```

  • To package all stacks

    ```shell
    nx run-many --target=build --all --stage=<STAGE_NAME>
    ```
    
Deploy stack to cloud

  • To deploy a single stack

    ```shell
    nx run <STACK_NAME>:deploy --stage=<STAGE_NAME>
    ```

  • To deploy stacks affected by a change

    ```shell
    nx affected:deploy --stage=<STAGE_NAME>
    ```

  • To deploy all stacks

    ```shell
    nx run-many --target=deploy --all --stage=<STAGE_NAME>
    ```
    
Remove stack from cloud

  • To remove a single stack

    ```shell
    nx run <STACK_NAME>:remove --stage=<STAGE_NAME>
    ```

  • To remove stacks affected by a change

    ```shell
    nx affected:remove --stage=<STAGE_NAME>
    ```

  • To remove all stacks

    ```shell
    nx run-many --target=remove --all --stage=<STAGE_NAME>
    ```
      
Run tests

  • To run tests in a single stack

    ```shell
    nx run <STACK_NAME>:test --stage=<STAGE_NAME>
    ```

  • To run tests affected by a change

    ```shell
    nx affected:test --stage=<STAGE_NAME>
    ```

  • To run tests in all stacks

    ```shell
    nx run-many --target=test --all --stage=<STAGE_NAME>
    ```
      
Run offline / locally

  • To run offline, configure the serverless-offline plugin as documented here and run the command below

    ```shell
    nx run <STACK_NAME>:serve --stage=<STAGE_NAME>
    ```
      
Understand your workspace

```shell
nx dep-graph
```
    

Further help

Nx Cloud

Computation Memoization in the Cloud

Nx Cloud pairs with Nx to enable you to build and test code up to 10 times faster.

Visit Nx Cloud to learn more and enable it.

Currently not active.

    Purpose


    This repository was created to enable bi-directional sync in Mex!

    What is Bi-directional sync?

It is a way to keep information in sync across multiple services/tools: artifacts generated by one service/tool can be consumed by another, and vice versa, while preserving context.

For example, suppose users report bugs on Slack and an APM converts those Slack reports into Jira tickets. Each ticket is assigned to a developer, who is responsible for resolving it; on resolution, the APM updates the Slack report. In our ideal world this process is bi-directional sync: the Slack channel, Jira, and GitHub stay in sync with each other. Every update in the Slack channel updates the Jira ticket, and every update in the Jira ticket updates the GitHub issue. The flow of information is not limited to a single direction: on resolution of the ticket, the Jira ticket and the Slack channel are both updated, keeping everyone in the loop at all times.

    Where is the innovation?

We don't want to be like Zapier, where information flows in a single direction. We want information to flow in both directions by default. This sync even includes the context of the information. For example, when a Google Doc is shared on Slack, the comments on the doc and the threaded replies on the message are synced, so the user gets the entire context irrespective of the platform.

    Broad Overview


    Basic architecture

There are seven main microservices at work here, corresponding to seven different lambdas in AWS Lambda. Each lambda's output is queued before it reaches the next. The microservices are:

    1. Gatekeeper
    2. ServiceHandler
    3. EventFilter
    4. FlowService
    5. RuleEngine
    6. TransformEngine
    7. AuthService

The EventFilter, FlowService, RuleEngine, and TransformEngine form the Integration Logic Layer.

    Gatekeeper

The job of the gatekeeper is to ensure that the integration logic is only invoked when the event is valid. It performs basic security checks and verifies the origin of requests. If the request is valid, it goes on to the serviceHandler.
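The README doesn't spell out the gatekeeper's checks, but a common way to verify the origin of incoming webhooks (as Slack and GitHub do) is HMAC signature verification. The sketch below assumes an HMAC-SHA256 hex signature; the function name and scheme are assumptions, not this repository's implementation:

```typescript
import { createHmac, timingSafeEqual } from 'crypto';

// Hypothetical sketch of a gatekeeper-style origin check: recompute the
// HMAC-SHA256 of the raw request body and compare it to the signature
// the caller sent alongside the request.
function isValidSignature(body: string, signature: string, secret: string): boolean {
  const expected = createHmac('sha256', secret).update(body).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so guard the lengths first
  return a.length === b.length && timingSafeEqual(a, b);
}
```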

    ServiceHandler

    The serviceHandler is the core of the integration logic. It largely has two functions:

• Convert the incoming event into the desired MexFormat and then invoke the integration logic.
• Convert the output of the integration logic from MexFormat back to the service format and then invoke the service using the credentials it receives from the authService.
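MexFormat's actual shape isn't documented in this README; a minimal sketch of the two conversions, with an assumed shape for both formats, might look like:

```typescript
// Hypothetical sketch of the two serviceHandler conversions described above.
// Both shapes are illustrative, not the repository's real schemas.
interface MexEvent {
  source: string;
  type: string;
  payload: unknown;
}

interface ServiceEvent {
  event_type: string;
  data: unknown;
}

function toMexFormat(service: string, raw: ServiceEvent): MexEvent {
  return { source: service, type: raw.event_type, payload: raw.data };
}

function fromMexFormat(event: MexEvent): ServiceEvent {
  return { event_type: event.type, data: event.payload };
}
```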

    EventFilter

The eventFilter filters incoming events based on:

• Supported event types (service-based)
• Whether the event was emitted by Mex itself
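The two checks above can be sketched as a single predicate; the service names, event types, and `emittedBy` field here are illustrative assumptions:

```typescript
// Hypothetical sketch of the eventFilter logic described above.
const SUPPORTED_EVENTS: Record<string, string[]> = {
  slack: ['message', 'reaction_added'], // illustrative event names
  jira: ['issue_updated'],
};

interface IncomingEvent {
  service: string;
  type: string;
  emittedBy: string;
}

function shouldProcess(event: IncomingEvent): boolean {
  const supported = SUPPORTED_EVENTS[event.service] ?? [];
  if (!supported.includes(event.type)) return false;
  // Drop events Mex emitted itself, so a sync doesn't re-trigger itself
  if (event.emittedBy === 'mex') return false;
  return true;
}
```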

    FlowService

This microservice is actually a miniservice and performs multiple roles. It exposes the APIs for CRUD of flows and provides the config for executing flows (the config includes templates, transformations, auth level, etc.). It is also responsible for retrieving from the database the flows related to the event.

    RuleEngine

The ruleEngine checks whether the event follows the rules specified in the flow config. Basic JSON-based rules are currently supported.
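A basic JSON rule check can be sketched as follows; the rule shape (`field`/`equals`) is an assumption for illustration, not the repository's actual rule schema:

```typescript
// Hypothetical sketch of evaluating basic JSON-based rules.
type Rule = { field: string; equals: unknown };

function matchesRules(event: Record<string, unknown>, rules: Rule[]): boolean {
  // An event passes only if every rule's field holds the expected value.
  return rules.every((rule) => event[rule.field] === rule.equals);
}
```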

    TransformEngine

The transformEngine, as the name suggests, transforms the event into the desired output format based on the flow config (using the transformation logic from the action-request-helper library).
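The actual transformation logic lives in action-request-helper; as a rough illustration of a config-driven transform, one could imagine a field mapping of output field to input field (the shape below is an assumption, not that library's API):

```typescript
// Hypothetical sketch of a flow-config-driven transformation:
// a mapping of output field -> input field on the event.
type FieldMapping = Record<string, string>;

function transform(
  event: Record<string, unknown>,
  mapping: FieldMapping
): Record<string, unknown> {
  const output: Record<string, unknown> = {};
  for (const [target, source] of Object.entries(mapping)) {
    output[target] = event[source];
  }
  return output;
}
```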

    AuthService

Like the FlowService, the AuthService is a miniservice that performs multiple roles. It provides the APIs for CRUD of workspace auth and takes care of all the authentication utilities (maintaining access and refresh tokens, revoking permissions on bot removal, etc.). This is the last step before the event is sent to the serviceHandler.

    There is a queue for each microservice. The queue is used to ensure that the microservices are invoked in the correct order.

    Flow Constructs


While talking about flows, we will be using the following terms:

    1. Template
    2. Template Unit
    3. Flow
    4. Flow Unit
    5. Execution
    6. Execution Unit
    7. AuthType
    8. AuthScope
    9. Rules
    10. Transformations
    11. TLI (Top Level Identifier)

    About

License: MIT License


    Languages

TypeScript 98.3% · JavaScript 1.7% · Shell 0.0%