filipeforattini / ff-iac-github-actions

The next simple pipeline for your project.


Github Actions Fast Pipelines

semantic-release

This is a personal work in progress. Keep in mind your suggestions are welcome! :)

These workflows are highly opinionated kubectl-apply or helm-upgrade pipelines.

tldr;

  1. Configure your repository with these secrets.
  2. Create a directory .github/workflows in your repository and add this file.
  3. Create another directory, manifests, and add this file (a scaffold sketch follows this list).
  4. Commit and open your repository's Actions page! :)
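If it helps, here is a minimal scaffold of that layout, assuming the workflow and manifest files referenced in steps 2 and 3 are copied in afterwards:

# create the expected directories and placeholder files
mkdir -p .github/workflows manifests
touch .github/workflows/pipeline.yml manifests/k8s.yml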

Table of contents
  1. Introduction
    1. Features
    2. Motivation
    3. Repository Patterns
    4. Repository Examples
    5. Environments
    6. Repository Structure
    7. Repository Secrets
  2. Pipelines
    1. Flow
    2. Starting
    3. Static Web Application (app)
    4. Mobile Application (mob)
    5. Service (svc)
    6. Infrastructure as Code (iac)
    7. Package (pkg)
  3. Actions
    1. Config Scrapper

Introduction

Motivation

We, as solution builders, spend hours of precious time trying to build and deploy a solution. As the complexity of your solution rises, the challenges of its versioning and dependency management follow.

What can we do about this time invested in DevOps? Standardize solutions' deliverables in order to standardize their builds. With that in mind, we define a few patterns to improve the developer experience across build and deployment cycles.

"I've finished the staging environment, now I have to plan the production env." No: every environment should be equivalent and work better with autoscaling.

Repository Patterns

This pipeline assumes you have just 5 types of repositories:

| Name | Short | Description | Result |
|------|-------|-------------|--------|
| Web Application | app | Front-end application with internet-facing ingress | language-based container |
| Mobile Application | mob | Mobile application | apk, aab |
| Service | svc | Microservice that may (or may not) have ingress | nginx-based container |
| Infrastructure as Code | iac | Code that generates cloud infrastructure | - |
| Package | pkg | Embedded code artifact | language-based artifacts |

Those repositories must follow a naming pattern:

{ecosystem}-{type}-{name/client/integration}

Examples:

  • ff-svc-clients: microservice that manages clients' data
  • ff-app-budget: application that organizes the company finances
  • ff-mob-auth: 2FA mobile application
  • ff-iac-aws: infrastructure as code to manage the AWS environment
  • ff-pkg-csv: CSV handler package

Repository Examples

Check out a few repositories that implement this logic:

| Type | Repository |
|------|------------|
| app | filipeforattini/ff-app-react |
| app | filipeforattini/ff-app-vue |
| mob | filipeforattini/ff-mob-react-native |
| svc | filipeforattini/ff-svc-express |
| svc | filipeforattini/ff-svc-fastapi |
| svc | filipeforattini/ff-svc-flask |
| svc | filipeforattini/ff-svc-moleculer |
| svc | filipeforattini/ff-svc-nestjs |
| svc | filipeforattini/ff-svc-nextjs |

Environments

Every application should have 5 environments:

| Name | Short | Description |
|------|-------|-------------|
| Development | dev | Env for you and your team to test and explore |
| Staging | stg | Stable env for code shipping |
| Sandbox | sbx | Production-like env for external developers |
| Production | prd | Where the magic happens |
| Disaster Recovery | dry | Production copy |

Development (dev)

Should be synced with your latest code.

Here is where the magic happens:

  • Databases are born and die daily.
  • This is where your crazy ideas and prototypes appear.

In this environment, every service should probably connect to its dependencies' sandbox environments.

This environment should be inside your company's inner VPN.

Staging (stg)

Should be your internal sharing environment for testing and QA.

Here you test the versioning; it should be synced with your tags.

Here you will simulate every production upgrade.

In this environment, every service should probably connect to its dependencies' sandbox environments.

This environment should be inside your company's inner VPN.

Sandbox (sbx)

Should be your production-like environment for external partners and testers; a few flags or mocks should guarantee there are no financial consequences.

In this environment, every service should probably connect to its dependencies' sandbox environments.

This environment should be public and run the same version as production.

Production (prd)

The environment that your clients will access.

Should be similar to your staging environment.

Disaster Recovery (dry)

The environment that will replace your production environment when it goes offline.

Should be identical to your production environment.

May have a few features disabled.


Repository Structure

Your repository's structure should follow a few conventions:

| Directory | Description |
|-----------|-------------|
| /.github | Collection of GitHub artifacts |
| /manifests | Your solution's artifacts for deploys |
| /src | Your code |

Example:

├─ .github
│  ├─ workflows
│  │  └─ pipeline.yml
│  └─ dependabot.yml
├─ manifests
│  ├─ configs
│  │  └─ dev.env
│  ├─ dependencies
│  │  └─ dev.yml
│  ├─ secrets
│  │  └─ dev.gpg
│  ├─ k8s.yml
│  └─ helm.yml
├─ build
│  // distribution version of our code
└─ src
   // our code goes here

Repository Secrets

| Name | Description |
|------|-------------|
| GPG_PASSPHRASE | Passphrase used to encrypt and decrypt the files under manifests/secrets. |
| KUBE_CONFIG | Your ~/.kube/config file as base64. |
| PIPELINE_DEPLOY_TOKEN | A GitHub token, see the permissions below. |
| REGISTRY_USERNAME | Registry username. |
| REGISTRY_PASSWORD | Registry password. |
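As a rough sketch of how these could be set, assuming you use the GitHub CLI (not required by the pipeline, the repository settings UI works just as well):

# encode your kube config (GNU base64; on macOS use `base64 -i ~/.kube/config`)
base64 -w0 ~/.kube/config | gh secret set KUBE_CONFIG

# the remaining secrets can be set the same way (gh prompts for the value)
gh secret set GPG_PASSPHRASE
gh secret set PIPELINE_DEPLOY_TOKEN
gh secret set REGISTRY_USERNAME
gh secret set REGISTRY_PASSWORD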

Pipelines

Before starting: fork it! You don't want your pipeline broken by someone else's commit, leaving your customers in the dark if something happens.

This pipeline was designed to make every single build predictable and to enhance the developer experience with a set of features.

Features

Versioning with Semantic-Release

Linters:

  • Hadolint for Dockerfiles
  • ESLint for JavaScript
  • PyLint for Python

Static analysis:

  • GitLeaks for repository
  • Trivy for repository and image
  • Open Source Static Analysis Runner
  • GitHub's CodeQL analyzer
  • Dynamic container generator

Flow

This is a bird's-eye view of the whole pipeline.

%%{init:{"theme":"neutral"}}%%
%%{init: {'themeVariables': { 'background-color': 'transparent'}}}%%

flowchart
  start[Start]
  analysis[Analysis]
  node-test[Node Tests]
  python-test[Python Tests]
  static-analysis[Static Analysis]

  start --- analysis

  analysis --- |event/push| static-analysis
  analysis --- |event/pull_request| static-analysis
  analysis --- |event/workflow_dispatch| trigger-manual

  static-analysis --- |lang/javascript| node-test
  static-analysis --- |lang/python| python-test
  static-analysis --- |lang/go| go-test

  subgraph node:
    node-test --- node-release
    node-release --- node-trigger
  end

  subgraph python:
    python-test --- python-release
    python-release --- python-trigger
  end

  subgraph go:
    go-test --- go-release
    go-release --- go-trigger
  end

  node-trigger --- |deployment/dev| finish
  python-trigger --- |deployment/dev| finish
  go-trigger --- |deployment/dev| finish

  trigger-manual --- |env/xxx| finish

  analysis --- |event/deployment| build
  build --- |env/dev| env-dev
  build --- |env/stg| env-stg
  build --- |env/prd| env-prd

  subgraph dev:
    env-dev --- |app.dev.domain.io| DEV
    env-dev --- |app-commit.dev.domain.io| DEV
  end

  subgraph stg:
    env-stg --- |app.stg.domain.io| STG
  end

  subgraph prd:
    env-prd --- |app.prd.domain.io| PRD
  end

Insights:

  • Every push (commits with new code) should be done on other branches, git-flow style.
  • Every push should deploy a commit version into dev.
  • Every pull request or push into develop or main/master should be semantically versioned automatically and generate a tag.
  • Every tag should deploy a new version to stg.
  • Deploys to prd should be done manually while the project matures (see the sketch after this list).
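For that manual prd deploy, one option is to trigger the workflow_dispatch event defined in the multi-environment pipeline below. A sketch using the GitHub CLI (an assumption; any workflow_dispatch trigger works):

# trigger the pipeline manually, choosing the target environment input
gh workflow run pipeline.yml -f environment=prd

# optionally follow the run from the terminal
gh run watch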

Starting

Choose which pipeline your project will use by adding the file .github/workflows/pipeline.yml to your repository.

Suggestion for single-environment systems:

name: pipeline

on:
  push:
  deployment:
  release:
    types: [created]
  pull_request:
    types: [opened, reopened]

jobs:
  SVC:
    uses: filipeforattini/ff-iac-github-actions/.github/workflows/svc.yml@main
    secrets: inherit
    with:
      mainBranch: main
      containerRegistry: ghcr.io
      environmentsAsNamespaces: true

Suggestion for multi-environment systems:

name: pipeline

on:
  push:
  deployment:
  release:
    types: [created]
  pull_request:
    types: [opened, reopened]

  workflow_dispatch:
    inputs:
      environment:
        description: "Environment"
        required: true
        type: choice
        default: "dev"
        options:
          - dev
          - stg
          - prd

jobs:
  SVC:
    uses: filipeforattini/ff-iac-github-actions/.github/workflows/svc.yml@main
    secrets: inherit
    with:
      mainBranch: main
      containerRegistry: ghcr.io

Static Web Application

Example of pipeline.yml:

name: pipeline

on:
  push:
  deployment:
  release:
    types: [created]
  pull_request:
    types: [opened, reopened]

  workflow_dispatch:
    inputs:
      environment:
        description: "Environment"
        required: true
        type: choice
        default: "dev"
        options:
          - dev
          - stg
          - prd

jobs:
  APP:
    uses: filipeforattini/ff-iac-github-actions/.github/workflows/app.yml@main
    secrets: inherit
    with:
      mainBranch: main
      containerRegistry: ghcr.io
      environmentsAsNamespaces: true

Service

Example of pipeline.yml:

name: pipeline

on:
  push:
  deployment:
  release:
    types: [created]
  pull_request:
    types: [opened, reopened]

  workflow_dispatch:
    inputs:
      environment:
        description: "Environment"
        required: true
        type: choice
        default: "dev"
        options:
          - dev
          - stg
          - prd

jobs:
  SVC:
    uses: filipeforattini/ff-iac-github-actions/.github/workflows/svc.yml@main
    secrets: inherit
    with:
      mainBranch: main
      containerRegistry: ghcr.io
      environmentsAsNamespaces: true

Parameters

| Name | Required | Default | Description |
|------|----------|---------|-------------|
| mainBranch | false | master | Main repository branch; may interfere with versioning |
| ecosystem | false | - | Special prefix that will be added to the image name |
| containerRegistry | false | ghcr.io | Container registry to upload container images to |
| environmentsAsNamespaces | false | false | Separate environments as namespaces |
| staticAnalysis | false | false | Enable static analysis scans |
| autoVersioning | false | true | Enable auto versioning with semantic versioning |
| nodeMatrix | false | '[16, 17, 18]' | Node's testing matrix |
| nodeVersion | false | '18' | Node's container default version |
| pythonMatrix | false | '["3.8", "3.9", "3.10"]' | Python's testing matrix |
| pythonVersion | false | '3.10' | Python's container default version |
| goMatrix | false | '["1.18"]' | Go's testing matrix |
| goVersion | false | '1.18' | Go's container default version |
| platforms | false | linux/386,linux/amd64,linux/arm/v7,linux/arm/v8,linux/arm64,linux/ppc64le,linux/s390x | Multi-platform container builds |

Deploy with kubectl apply

Create a file k8s.yml in your manifests directory.

#@data/values
---
port: 1234

env:
  - name: TZ
    value: America/Sao_Paulo

ingress:
  enable: true
  className: traefik

  tls:
    enable: true
    domain: your.domain
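The #@data/values header suggests these values are rendered with Carvel's ytt against templates that live inside the reusable workflow. A rough local sketch of that render-and-apply step (the template file name here is a hypothetical placeholder, not part of this repository):

# overlay the data values onto a deployment template and apply the result
ytt -f deployment-template.yml -f manifests/k8s.yml | kubectl apply -f -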

MOB pipeline

Requirements
Android

Follow the Android Developer Guide for more insights.

Generate your key:

keytool -genkey -v \
  -keystore $HOME/.android/ff-pipeline.jks \
  -alias pipeline-key \
  -keyalg RSA \
  -keysize 2048 \
  -validity 10000

Export your key's certificate with keytool:

keytool -export \
  -rfc \
  -keystore $HOME/.android/ff-pipeline.jks \
  -alias pipeline-key \
  -file pipeline-key.pem

Add the secret ANDROID_KEYSTORE_PASSWORD with the password you used, and the secret ANDROID_KEYSTORE_CERT with the contents of pipeline-key.pem:

# cat pipeline-key.pem
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
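A sketch of setting both values as repository secrets with the GitHub CLI (an assumption; the repository settings UI works too):

# the password chosen when the keystore was generated (gh prompts for it)
gh secret set ANDROID_KEYSTORE_PASSWORD

# the exported certificate, read from the file created above
gh secret set ANDROID_KEYSTORE_CERT < pipeline-key.pem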

Add to your build.gradle config:

android {
  signingConfigs {
    config {
      storeFile file(System.getenv("ANDROID_KEYSTORE_PATH"))
      storePassword System.getenv("ANDROID_KEYSTORE_PASSWORD")
      keyAlias "pipeline-key"
      keyPassword System.getenv("ANDROID_KEYSTORE_PASSWORD")
    }
  }
}

Requirements

Configure your k8s cluster and get your ~/.kube/config.

Daily work

Commits & Versioning

git commit -m "action(scope): subject"

Where the actions are:

  • feat: new feature for the user, not a new feature for the build script
  • fix: bug fix for the user, not a fix for a build script
  • docs: documentation changes
  • style: formatting, missing semicolons, etc.; no changes to the production code
  • refactor: refactoring the production code, for example renaming a variable
  • test: adding missing tests, refactoring tests; no changes to the production code
  • chore: updating grunt tasks, etc.; no changes to the production code

Add BREAKING CHANGE to the commit message body and it will generate a new major version (see the examples below).
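A few examples of how semantic-release typically maps these commits to versions, assuming the default conventional-commit rules:

# patch bump (e.g. 1.2.3 -> 1.2.4)
git commit -m "fix(auth): handle expired tokens"

# minor bump (e.g. 1.2.3 -> 1.3.0)
git commit -m "feat(clients): add search endpoint"

# major bump (e.g. 1.2.3 -> 2.0.0); BREAKING CHANGE goes in the message body
git commit -m "feat(api): drop the v1 routes" -m "BREAKING CHANGE: v1 endpoints were removed"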

Secrets

gpg -v \
  --symmetric \
  --cipher-algo AES256 \
  --output ./manifests/secrets/dev.gpg \
  ./manifests/secrets/dev.env
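For reference, the decryption counterpart looks roughly like this (a sketch assuming GPG_PASSPHRASE holds the same passphrase used to encrypt):

gpg -v \
  --batch --yes \
  --decrypt \
  --passphrase "$GPG_PASSPHRASE" \
  --output ./manifests/secrets/dev.env \
  ./manifests/secrets/dev.gpg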

Thanks to:

Example ecosystem

This ecosystem generates a few data points per second as samples for our APIs.

Architecture

Fully independent

In this implementation, each service has its own resources.

%%{init:{"theme":"neutral"}}%%
%%{init: {'themeVariables': { 'background-color': 'transparent'}}}%%

flowchart
  https---ingress

  subgraph k8s
    ingress---|ff-svc-nestjs.dev.forattini.app|nestjs
    ingress---|ff-svc-nextjs.dev.forattini.app|nextjs
    ingress---|ff-svc-fastapi.dev.forattini.app|fastapi
    ingress---|ff-svc-moleculer.dev.forattini.app|moleculer

    subgraph ff-svc-nextjs
      nextjs---rabbitmq-nextjs[rabbit]
    end

    subgraph ff-svc-moleculer
      moleculer---postgres-moleculer[postgres]
      moleculer---mysql-moleculer[mysql]
      moleculer---redis-moleculer[redis]
      moleculer---rabbitmq-moleculer[rabbit]
      moleculer---etcd-moleculer[etcd]
      moleculer---nats-moleculer[nats]

      moleculer---rabbitmq-nextjs
    end

    subgraph ff-svc-fastapi
      fastapi---postgres-fastapi[postgres]
      fastapi---rabbitmq-fastapi[rabbit]

      fastapi---rabbitmq-nextjs
    end

    subgraph ff-svc-nestjs
      nestjs---postgres-nestjs[postgres]
      nestjs---rabbitmq-nestjs[rabbitmq]

      nestjs---rabbitmq-nextjs
    end
  end

Shared resources

In this implementation, all services connect to a shared set of resources.

%%{init:{"theme":"neutral"}}%%
%%{init: {'themeVariables': { 'background-color': 'transparent'}}}%%

flowchart
  https---ingress

  subgraph k8s
    ingress---|ff-svc-nestjs.dev.forattini.app|nestjs
    ingress---|ff-svc-nextjs.dev.forattini.app|nextjs
    ingress---|ff-svc-fastapi.dev.forattini.app|fastapi
    ingress---|ff-svc-moleculer.dev.forattini.app|moleculer

    subgraph ff-svc-moleculer
      moleculer---postgres-moleculer[postgres]
      moleculer---mysql-moleculer[mysql]
      moleculer---redis-moleculer[redis]
      moleculer---rabbitmq-moleculer[rabbitmq]
      moleculer---etcd-moleculer[etcd]
      moleculer---nats-moleculer[nats]
    end

    subgraph ff-svc-nextjs
      nextjs---rabbitmq-moleculer
      nextjs---postgres-moleculer
      nextjs---mysql-moleculer
      nextjs---redis-moleculer
    end

    subgraph ff-svc-fastapi
      fastapi---postgres-moleculer
      fastapi---rabbitmq-moleculer
    end

    subgraph ff-svc-nestjs
      nestjs---postgres-moleculer
      nestjs---rabbitmq-moleculer
    end
  end

Actions

Config Scrapper

License: The Unlicense