solarwinds-apm-bindings is an NPM package containing a binary Node add-on.
The package is installed as a dependency when the SolarWinds APM Agent (solarwinds-apm) is installed. On any install run, the agent first attempts to install a prebuilt add-on using node-pre-gyp; only if that fails does it attempt to build the add-on from source using node-gyp.
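The fallback behavior can be pictured with a small sketch. The function bodies below are stand-ins for illustration only, not the real node-pre-gyp/node-gyp invocations:

```javascript
// Conceptual sketch of the install fallback: try the prebuilt path first,
// build from source only if it fails. Stand-in functions, not the real tools.
function installPrebuilt () {
  throw new Error('no prebuilt add-on for this platform')
}

function buildFromSource () {
  return 'built from source'
}

function install () {
  try {
    return installPrebuilt() // node-pre-gyp: fetch a prebuilt tarball
  } catch (err) {
    return buildFromSource() // node-gyp: compile the add-on locally
  }
}

console.log(install()) // built from source
```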
This is a Linux-only package with no Mac or Windows support.
The package implements a low-level interface to liboboe, a closed-source library maintained by SolarWinds. liboboe implements communication and aggregation functions to enable efficient sampling of traces. Traces are sequences of entry and exit events which capture performance information.
Development must be done on Linux. To set up a development environment on a Mac, use a Docker container (see below). The Mac should have:

- Docker
- Xcode command line tools (installed automatically the first time a command such as `git` is run in the terminal)
- SSH keys set up at GitHub
Building with node-gyp (via node-pre-gyp) requires:

- Python (2 or 3, depending on the version of npm)
- make
- A proper C/C++ compiler toolchain, like GCC

Those are all available in the Docker dev container.
`git clone` the repo to start.

- The `src` directory contains the C++ code to bind to liboboe.
- The `oboe` directory contains liboboe and its required header files. liboboe is downloaded from: https://agent-binaries.cloud.solarwinds.com/apm/c-lib. Pre-release versions are at: https://agent-binaries.global.st-ssp.solarwinds.com/apm/c-lib.
- The `test` directory contains the test suite.
- The `.github` directory contains the files for GitHub Actions.
- The `dev` directory contains anything related to the dev environment.
- Start the Docker daemon (on a Mac this is simplest using Docker Desktop).
- Create a `.env` file and set keys for the backend:
  ```
  SW_TEST_PROD_SERVICE_KEY={a valid **production** service key}
  SW_APM_SERVICE_KEY={a valid service key to any of dev/staging/production}
  SW_APM_COLLECTOR={optional url of the collector at dev/staging}
  ```
- Run `npm run dev`. This will create a Docker container, set it up, and open a shell. The container will have all required build tools as well as nano installed, and access to GitHub SSH keys as configured. The repo code is mounted into the container.
- To open another shell in the same container use: `docker exec -it dev-bindings /bin/bash`

The setup script ensures a "clean" workspace with each run by removing artifacts and installed modules on each exit.
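The `.env` file follows standard `KEY=VALUE` lines. As a reference for how such a file is consumed, here is a minimal, hypothetical parser (the sample key values are made up for illustration):

```javascript
// Minimal KEY=VALUE parser for a .env-style file (illustrative only;
// the actual dev scripts may load the file differently).
function parseEnv (text) {
  const env = {}
  for (const line of text.split('\n')) {
    const match = line.match(/^([A-Z0-9_]+)=(.*)$/)
    if (match) env[match[1]] = match[2]
  }
  return env
}

// Sample content; the values are placeholders, not real keys.
const sample = 'SW_APM_SERVICE_KEY=abc:my-service\nSW_APM_COLLECTOR=collector.example.com'
console.log(parseEnv(sample).SW_APM_SERVICE_KEY) // abc:my-service
```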
This repo has a single GitHub package named `node`, scoped to `solarwindscloud/solarwinds-bindings-node` (the repo), which has multiple tagged images. Those images serve two main purposes:

- They complement the official node images (https://hub.docker.com/_/node) with specific end-user configurations.
- They provide the build environments for the multiple variations (OS, glibc/musl, Node version) of the package.
At times it may be useful to set up a "one off" Docker container to test a specific feature or build.

- Run `npm run dev:oneoff`. This will create a Docker container, set it up, and open a shell. The container will have access to GitHub SSH keys as configured. The repo code is copied into the container.
- To specify an image for the "one off" container, pass it as an argument. For example: run `npm run dev:oneoff node:latest` to get the latest official image, or `npm run dev:oneoff ghcr.io/solarwindscloud/solarwinds-bindings-node/node:14-alpine3.9` to get one of this repo's custom images.
Tests are run using Mocha.

- Run `npm test` to run the test suite against the collector specified in the `.env` file (`SW_APM_COLLECTOR`).

Note: the initial default-initialization test will always run against the production collector using `SW_TEST_PROD_SERVICE_KEY` from the `.env` file.

The `test` script in `package.json` runs `test.sh`, which then manages how Mocha runs each test file. To run individual tests use `npx mocha`. For example: `npx mocha test/config.test.js` will run the config tests.
Building is done using node-pre-gyp.

- Before a build, `setup-liboboe.js` must run at least once in order to create symbolic links to the correct version of liboboe so the `SONAME` field can be satisfied.
- Run `npx node-pre-gyp rebuild`. More granular commands are available; see the `node-pre-gyp` documentation.

The `install` and `rebuild` scripts in `package.json` run `setup-liboboe.js` as the first step before invoking `node-pre-gyp`. As a result, an initial `npm install` will set the links as required, so skipping directly to step 2 above is possible. That said, `setup-liboboe.js` can be run multiple times with no issues.
Debugging Node add-ons is not intuitive, but this might help (from Stack Overflow).

First, compile your add-on using `node-pre-gyp` with the `--debug` flag:

`node-pre-gyp --debug configure rebuild`

(The next point about changing the require path doesn't apply to solarwinds-apm-bindings because it uses the `bindings` module, and that will find the module in `Debug`, `Release`, and other locations.)
Second, if you're still in "playground" mode, you're probably loading your module with something like:

`var ObjModule = require('./ObjModule/build/Release/objModule');`

However, when you rebuild using `node-pre-gyp` in debug mode, `node-pre-gyp` throws away the Release version and creates a Debug version instead. So update the module path:

`var ObjModule = require('./ObjModule/build/Debug/objModule');`
Alright, now we're ready to debug our C++ add-on. Run gdb against the node binary, which is a C++ application. Now, node itself doesn't know about your add-on, so when you try to set a breakpoint on your add-on function (in this case, StringReverse) it complains that the specific function is not defined. Fear not, your add-on is part of the "future shared library load" it refers to, and will be loaded once you require() your add-on in JavaScript.
$ gdb node
...
Reading symbols from node...done.
(gdb) break StringReverse
Function "StringReverse" not defined.
Make breakpoint pending on future shared library load? (y or [n]) y
OK, now we just have to run the application:
(gdb) run ../modTest.js
...
Breakpoint 1, StringReverse (args=...) at ../objModule.cpp:49
If a signal is thrown gdb will stop on the line generating it.
Finally, see the gdb documentation for using output formats (and the full set of gdb docs).
tl;dr Push to feature branch. Create pull request. Merge pull request. Manual release. The package is always released in conjunction with the SolarWinds APM Agent. See the release process for details.

The package is `node-pre-gyp` enabled and is published in a two-step process. First, prebuilt add-on tarballs are uploaded to an S3 bucket; then an NPM package is published to the NPM registry. Prebuilt tarballs must be versioned with the same version as the NPM package, and they must be present in the S3 bucket prior to the NPM package itself being published to the registry.

There are many platforms that can use the prebuilt add-on but will fail to build it, hence the importance of the prebuilts.
- Push to main is disabled by branch protection.
- A push to a branch which changes any Dockerfile in the `.github/docker-node/` directory will trigger docker-node.yml.
- Workflow will:
  - Build all Dockerfiles and create a single package named `node` scoped to `solarwindscloud/solarwinds-bindings-node` (the repo). The package has multiple tagged images, one for each of the Dockerfiles from which it was built. For example, the image created from a file named `10-centos7-build.Dockerfile` has a `10-centos7-build` tag and can be pulled from `ghcr.io/solarwindscloud/solarwinds-bindings-node/node:10-centos7-build`. Since this repo is public, the images are also public.
- Workflow creates (or recreates) images used in other workflows.
- Manual trigger supported.
push Dockerfile ─► ┌───────────────────┐ ─► ─► ─► ─► ─►
│Build Docker Images│ build & publish
manual ──────────► └───────────────────┘
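The Dockerfile-name-to-image-tag mapping described above is mechanical; a hypothetical helper (not part of the actual workflow) makes it concrete:

```javascript
// Derive a registry image reference from a Dockerfile name: the file name
// minus its ".Dockerfile" suffix becomes the image tag.
function imageRef (dockerfile, repo = 'solarwindscloud/solarwinds-bindings-node') {
  const tag = dockerfile.replace(/\.Dockerfile$/, '')
  return `ghcr.io/${repo}/node:${tag}`
}

console.log(imageRef('10-centos7-build.Dockerfile'))
// ghcr.io/solarwindscloud/solarwinds-bindings-node/node:10-centos7-build
```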
- Push to main is disabled by branch protection.
- A push to a branch will trigger push.yml.
- Workflow will:
  - Build the pushed code on a default image (the `node` image from Docker Hub).
  - Run the tests against the build.
- Workflow confirms the code is not "broken".
- Manual trigger supported. Enables selecting the Node version.
- Naming a branch with a `-no-action` ending disables this workflow. Use for documentation branches edited via the GitHub UI.
push to branch ──► ┌───────────────────┐ ─► ─► ─► ─► ─►
│Single Build & Test│ contained build
manual (image?) ─► └───────────────────┘ ◄── ◄── ◄── ◄──
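The `-no-action` branch-name opt-out amounts to a simple suffix check. A hypothetical helper, not the actual workflow expression:

```javascript
// A branch whose name ends in "-no-action" opts out of the push workflow.
const skipsWorkflow = branch => branch.endsWith('-no-action')

console.log(skipsWorkflow('docs-update-no-action')) // true
console.log(skipsWorkflow('feature/fix-build'))     // false
```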
- Creating a pull request will trigger review.yml.
- Workflow will:
  - Build the pushed code on each of the Build Group images.
  - Run the tests on each build.
- Workflow confirms the code can be built in each of the required variations.
- Manual trigger supported.
pull request ────► ┌──────────────────┐ ─► ─► ─► ─► ─►
│Group Build & Test│ contained build
manual ──────────► └──────────────────┘ ◄── ◄── ◄── ◄──
- Merging a pull request will trigger accept.yml.
- Workflow will:
  - Clear the staging S3 bucket of prebuilt tarballs (if any exist for the version).
  - Create all Fallback Group images and install. Since the prebuilt tarball has been cleared, the install will fall back to building from source.
  - Build the pushed code on each of the Build Group images.
  - Package the built code and upload a tarball to the staging S3 bucket.
  - Create all Prebuilt Group images and install the prebuilt tarball on each.
- Workflow ensures the node-pre-gyp setup (config and S3 buckets) is working for a wide variety of potential customer configurations.
- Manual trigger supported. Enables selecting whether to run the tests after install (on both the Fallback and Prebuilt groups).
merge to main ─► ┌──────────────────────┐
│Fallback Group Install│
manual (test?) ──► └┬─────────────────────┘
│
│ ┌───────────────────────────┐ ─► ─► ─►
└─► │Build Group Build & Package│ S3 Package
└┬──────────────────────────┘ Staging
│
│ ┌──────────────────────┐ │
└─► │Prebuilt Group Install│ ◄── ▼
└──────────────────────┘
- Release process is `npm` and GitHub Actions triggered.
- To release:
  - On a branch run `npm version {major/minor/patch}` (e.g. `npm version patch`), then have the branch pass through the Push/Pull/Merge flow above.
  - When ready, manually trigger the Release workflow.
- Workflow will:
  - Build the pushed code in each of the Build Group images.
  - Package the built code and upload a tarball to the production S3 bucket.
  - Create all Target Group images and install the prebuilt tarball on each.
  - Publish an NPM package upon successful completion of all the steps above. When the version tag is `prerelease`, the package will be NPM-tagged the same. When it is a release version, the package will be NPM-tagged `latest`.
- Workflow ensures the node-pre-gyp setup is working in production for a wide variety of potential customer configurations.
- Publishing to the NPM registry exposes the NPM package (and the prebuilt tarballs in the production S3 bucket) to the public.
- Note: solarwinds-apm-bindings is not meant to be directly consumed. It is developed as a dependency of solarwinds-apm.
                  ┌───────────────────┐
manual ──────────►│Confirm Publishable│
└┬──────────────────┘
│
│ ┌────────────────────────────┐ ─► ─► ─►
└► │Build Group Build & Package │ S3 Package
└┬───────────────────────────┘ Production
│
│ ┌────────────────────┐ │
└─► │Target Group Install│ ◄── ▼
└┬───────────────────┘
│
│ ┌───────────┐
└─► │NPM Publish│
└───────────┘
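The prerelease-vs-latest NPM tag decision can be sketched as follows (an illustrative helper, not the actual workflow logic; the version strings are made up):

```javascript
// Pick the NPM dist-tag from the version string: a prerelease suffix
// (e.g. "1.2.3-prerelease.0") becomes the tag, otherwise use "latest".
function distTag (version) {
  const pre = version.match(/-([0-9A-Za-z]+)/)
  return pre ? pre[1] : 'latest'
}

console.log(distTag('11.1.0'))              // latest
console.log(distTag('11.1.0-prerelease.1')) // prerelease
```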
tl;dr There is no need to modify workflows. All data used is externalized.

- Local images are defined in docker-node.
- The S3 staging bucket is defined in package.json.
- The S3 production bucket is defined in package.json.
- Build Group images are those on which the various versions of the add-on are built. They include combinations to support different Node versions and libc implementations. Generally, builds are done on the lowest supported OS versions, so that the `glibc`/`musl` versions are the oldest/most compatible.
- Fallback Group images include OS and Node version combinations that can build from source.
- Prebuilt Group images include OS and Node version combinations that cannot build from source and thus require a prebuilt tarball.
- Target Group images include a wide variety of OS and Node version combinations. The group includes both images that can build from source as well as those which cannot.
- Create a Dockerfile with a unique name to be used as a tag. The convention is `{node-version}-{os-name-version}` (e.g. `16-ubuntu20.04.2.Dockerfile`). If the image is a build image, suffix the name with `-build`.
- Place the Dockerfile in the `docker-node` directory.
- Push to GitHub.

- Find available tags at Docker Hub, or use the path of an image published to the GitHub Container Registry (e.g. `ghcr.io/$GITHUB_REPOSITORY/node:14-centos7`).
- Add to the appropriate group json file in `config`.
- Create an `alpine` builder image and a `centos` builder image. Use the previous Node version's Dockerfiles as a guide.
- Create `alpine`, `centos` and `amazonlinux2` test images. Use the previous Node version's Dockerfiles as a guide.
- Follow "Adding an image to GitHub Container Registry" above.
- Follow "Modifying group lists" above.

- Remove the version's images from the appropriate group json file in `config`.
- Leave the `docker-node` Dockerfiles for future reference.
tl;dr No Actions used. Matrix and container directives used throughout.

- All workflows use `runs-on: ubuntu-latest`.
- For maintainability and security, custom actions are avoided.
- Configuration has been externalized. All image groups are loaded from external json files located in the `config` directory.
- Loading uses the fromJSON function and a standard two-job setup.
- Loading is encapsulated in a shell script. Since the script is not a "formal" action, it is placed in a `script` directory.
- All job steps are named.
- Jobs are linked using `needs:`.
The repo is defined with the following secrets.

For testing:
- `SW_APM_COLLECTOR`
- `SW_APM_SERVICE_KEY`
- `SW_TEST_PROD_SERVICE_KEY`

For S3 interaction:
- `PROD_AWS_ACCESS_KEY_ID`
- `PROD_AWS_SECRET_ACCESS_KEY`
- `PROD_AWS_ROLE_TO_ASSUME`
- `STAGING_AWS_ACCESS_KEY_ID`
- `STAGING_AWS_SECRET_ACCESS_KEY`
- `STAGING_AWS_ROLE_TO_ASSUME`
- `COMMON_ENVS_STAGING_AWS_ACCESS_KEY_ID`
- `COMMON_ENVS_STAGING_AWS_SECRET_ACCESS_KEY`

For release:
- `NPM_AUTH_TOKEN`
Copyright (c) 2016 - 2022 SolarWinds, LLC
Released under the Apache License 2.0
Fabriqué au Canada : Made in Canada 🇨🇦