A starter Squid project demonstrating the framework's structure and conventions. It accumulates Kusama account balances and serves them via a GraphQL API. For a full reference of Subsquid features, consult the Docs and FAQ.
- Quickstart
- Setup for Parachains
- Setup for Localnets, Devnets and Testnets
- Development flow
- Deploy the Squid
- Conventions
- Type Bundles
## Prerequisites

- node 16.x
- docker
## Quickstart

The scripts below use `make`. To get `make` working on Windows, one can use WSL; otherwise, just have a look at the one-liners in the Makefile.
```bash
# 1. Install dependencies
npm ci

# 2. Compile typescript files
make build

# 3. Start target Postgres database and detach
make up

# 4. Now start the processor (blocks the terminal)
make process

# 5. The command above blocks the terminal,
#    being busy with fetching the chain data,
#    transforming it and storing it in the target database.
#
#    To start the graphql server, open a separate terminal
#    and run
make serve
```

## Setup for Parachains

Subsquid provides Squid Archive data sources for most parachains. Use `lookupArchive(<network name>)` to look up the archive endpoint by the network name, e.g.
```typescript
processor.setDataSource({
  archive: lookupArchive("kusama", { release: "FireSquid" })
  //...
});
```

To make sure you're indexing the right chain, you can additionally filter by the genesis hash and other options provided by `LookupOptions`:
```typescript
processor.setDataSource({
  archive: lookupArchive("kusama", {
    release: "FireSquid",
    genesis: "0xb0a8d493285c2df73290dfb7e61f870f17b41801197a149ca93654499ea3dafe"
  }),
  //...
});
```

If the chain is not yet supported, please fill out the form to submit a request.
## Setup for Localnets, Devnets and Testnets

Non-production chains, e.g. devnets and testnets, are not supported by `lookupArchive`, so one has to provide a local Squid Archive as a data source.
Inspect archive/docker-compose.yml and provide the WebSocket endpoint for your node.
Then run (in a separate terminal window)
```bash
docker compose -f archive/docker-compose.yml up
```

The docker-compose file starts the archive gateway at port 8888, and it can be used with the processor immediately (even if the archive is not yet in sync, it will eventually catch up):

```typescript
processor.setDataSource({
  archive: `http://localhost:8888/graphql`,
});
```

Additionally, an explorer GraphQL API and playground are started at http://localhost:4350/graphql. Running it is optional, but it has proved to be a very useful tool for debugging, developing and exploring on-chain data thanks to the rich filtering interfaces it provides.
To drop the archive, run

```bash
docker compose -f archive/docker-compose.yml down -v
```

## Development flow

Start development by defining the schema of the target database via schema.graphql.
Schema definition consists of regular graphql type declarations annotated with custom directives.
A full description of the schema.graphql dialect is available here.
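For orientation, a minimal schema for this project's domain (Kusama account balances) might look like the sketch below; the entity and field names here are illustrative, not necessarily the repo's actual schema:

```graphql
type Account @entity {
  "SS58-encoded account address (illustrative example entity)"
  id: ID!
  balance: BigInt!
}
```

Each `@entity` type becomes a TypeORM entity class and a corresponding set of GraphQL queries served by the API.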
Mapping developers use TypeORM EntityManager
to interact with the target database during data processing. All necessary entity classes are
generated by the squid framework from schema.graphql. This is done by running the `npx squid-typeorm-codegen` command.
All database changes are applied through migration files located at db/migrations.
The sqd(1) tool provides several commands to drive the process.
It is all TypeORM under the hood.
```bash
# Connect to database, analyze its state and generate migration to match the target schema.
# The target schema is derived from entity classes generated earlier.
npx sqd db create-migration

# Create template file for custom database changes
npx sqd db new-migration

# Apply database migrations from `db/migrations`
npx sqd db migrate

# Revert the last performed migration
npx sqd db revert

# DROP DATABASE
npx sqd db drop

# CREATE DATABASE
npx sqd db create
```

The next part is optional, but highly advisable.
Event, call and runtime storage data come to mapping handlers as raw untyped json. While it is possible to work with raw untyped json data, it's extremely error-prone and the json structure may change over time due to runtime upgrades.
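To see why, consider this sketch of decoding such an event by hand across two runtime versions. The payload shapes below are made up for illustration, not real Kusama data:

```typescript
// Hypothetical raw shapes of the same Transfer event across runtime upgrades:
// older runtimes emit a positional array, newer ones a named object.
const rawV1020: unknown = ["alice-pubkey", "bob-pubkey", "100", "1"]; // [from, to, amount, fee]
const rawV9130: unknown = { from: "alice-pubkey", to: "bob-pubkey", amount: "100" };

interface Transfer {
  from: string;
  to: string;
  amount: bigint;
}

// Hand-written decoding: every shape change across spec versions
// is a silent breakage risk.
function decodeTransfer(raw: unknown): Transfer {
  if (Array.isArray(raw)) {
    const [from, to, amount] = raw as [string, string, string];
    return { from, to, amount: BigInt(amount) };
  }
  const o = raw as { from: string; to: string; amount: string };
  return { from: o.from, to: o.to, amount: BigInt(o.amount) };
}
```

Generated type-safe wrappers remove the need for hand-rolled checks like this, because they are derived directly from the on-chain metadata of every spec version.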
The squid framework provides tools for generating type-safe, spec-version-aware wrappers around events, calls and runtime storage items. Typegen generates type-safe classes in types/events.ts, types/calls.ts and types/storage.ts respectively. All historical runtime upgrades are accounted for out of the box. Typical usage is as follows (see src/processor.ts):
```typescript
function getTransferEvent(ctx: EventHandlerContext): TransferEvent {
  // instantiate the autogenerated type-safe class for the Balances.Transfer event
  const event = new BalancesTransferEvent(ctx);
  // for each runtime version, reduce the data to a common interface
  if (event.isV1020) {
    const [from, to, amount] = event.asV1020;
    return { from, to, amount };
  } else if (event.isV1050) {
    const [from, to, amount] = event.asV1050;
    return { from, to, amount };
  } else {
    const { from, to, amount } = event.asV9130;
    return { from, to, amount };
  }
}
```

Generation of type-safe wrappers for events, calls and storage items can now be done with a single command, as the specVersions are provided by FireSquid Archives (see typegen.json):
```bash
npx squid-substrate-typegen typegen.json
```

If for some reason a FireSquid Archive is not available, it is still possible to generate the spec versions file using `squid-substrate-metadata-explorer` (this may take some time):
```bash
npx squid-substrate-metadata-explorer \
  --chain wss://kusama-rpc.polkadot.io \
  --out kusamaVersions.jsonl
```

and then source the generated file in typegen.json:
```jsonc
{
  "outDir": "src/types",
  "specVersions": "kusamaVersions.jsonl", // the result of chain exploration
  "events": [ // list of events to generate
    "Balances.Transfer"
  ],
  "calls": [ // list of calls to generate
    "Timestamp.set"
  ],
  "storage": [
    "System.Account" // list of storage items. To generate wrappers for all storage items, set "storage": true
  ]
}
```

## Deploy the Squid

Subsquid offers a free hosted service for deploying your Squid. First, build and run the docker image locally and fix any errors or missing files in the Dockerfile:
```bash
bash scripts/docker-run.sh # optionally specify the DB port as an argument
```

After the local run, follow the instructions for obtaining a deployment key and submitting the Squid to Aquarium.
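As a reference point when fixing Dockerfile issues, squid Dockerfiles usually follow a standard node multi-stage pattern. The sketch below is purely illustrative and not the repo's actual Dockerfile:

```dockerfile
# build stage: install deps and compile TypeScript
FROM node:16-alpine AS builder
WORKDIR /squid
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# runtime stage: ship only compiled js, deps and migration assets
FROM node:16-alpine
WORKDIR /squid
COPY --from=builder /squid/lib ./lib
COPY --from=builder /squid/node_modules ./node_modules
COPY db ./db
COPY schema.graphql ./
```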
## Conventions

Squid tools assume a certain project layout:

- All compiled js files must reside in `lib` and all TypeScript sources in `src`. The layout of `lib` must reflect `src`.
- All TypeORM classes must be exported by `src/model/index.ts` (the `lib/model` module).
- The database schema must be defined in `schema.graphql`.
- Database migrations must reside in `db/migrations` and must be plain js files.
- `sqd(1)` and `squid-*(1)` executables consult the `.env` file for a number of environment variables.
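Under these conventions, a project tree looks roughly like this (illustrative, not exhaustive):

```
.
├── src/
│   ├── model/
│   │   └── index.ts      # exports all TypeORM entity classes
│   └── processor.ts
├── lib/                  # compiled js, mirrors src/
├── db/
│   └── migrations/       # plain js migration files
├── schema.graphql
└── .env
```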
## Type Bundles

Substrate chains that have blocks with metadata versions below 14 don't provide enough information to decode their data. For those chains, [external type definitions](https://polkadot.js.org/docs/api/start/types.extend) are required.

Type definitions (typesBundle) can be given to squid tools in two forms:

- as the name of a known chain (currently only `kusama`)
- as a json file with the structure described below:
```jsonc
{
  "types": {
    "AccountId": "[u8; 32]"
  },
  "typesAlias": {
    "assets": {
      "Balance": "u64"
    }
  },
  "versions": [
    {
      "minmax": [0, 1000], // block range with inclusive boundaries
      "types": {
        "AccountId": "[u8; 16]"
      },
      "typesAlias": {
        "assets": {
          "Balance": "u32"
        }
      }
    }
  ]
}
```

- `.types` - scale type definitions, similar to polkadot.js types
- `.typesAlias` - similar to polkadot.js type aliases
- `.versions` - per-block-range overrides/patches for the above fields

All fields in the types bundle are optional and are applied on top of a fixed set of well-known frame types.
Polkadot.js provides lots of specialized classes for various types of data.
Even primitives like u32 are exposed through special classes.
In contrast, the squid framework works only with plain js primitives and objects.
For instance, account data is passed to the handler context as a plain byte array. To convert it into the standard human-readable format, one should explicitly use the utility library `@subsquid/ss58`:
```typescript
// ...
from: ss58.codec('kusama').encode(rec.from),
to: ss58.codec('kusama').encode(rec.to),
```

It is possible to extend `squid-graphql-server(1)` with custom type-graphql resolvers and to add request validation. More details will be added later.
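As an aside on the SS58 encoding used above: the format is simple enough to sketch from scratch. The following is an illustrative re-implementation of what `ss58.codec('kusama').encode` computes for a 32-byte account ID, covering only the simple one-byte-prefix case (Kusama's prefix is 2), using Node's built-in blake2b; in real mappings, use `@subsquid/ss58`:

```typescript
import { createHash } from "node:crypto";

const BASE58_ALPHABET =
  "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

function base58Encode(bytes: Buffer): string {
  const hex = bytes.toString("hex");
  let n = hex.length > 0 ? BigInt("0x" + hex) : 0n;
  let out = "";
  while (n > 0n) {
    out = BASE58_ALPHABET[Number(n % 58n)] + out;
    n /= 58n;
  }
  // each leading zero byte is encoded as '1'
  for (const b of bytes) {
    if (b !== 0) break;
    out = "1" + out;
  }
  return out;
}

function ss58Encode(pubkey: Buffer, prefix: number): string {
  // simple case: one-byte network prefix (0..63) plus a 32-byte public key
  const payload = Buffer.concat([Buffer.from([prefix]), pubkey]);
  // checksum: first 2 bytes of blake2b-512 over "SS58PRE" + payload
  const checksum = createHash("blake2b512")
    .update(Buffer.concat([Buffer.from("SS58PRE"), payload]))
    .digest()
    .subarray(0, 2);
  return base58Encode(Buffer.concat([payload, checksum]));
}

// Alice's well-known dev account public key, encoded with the Kusama prefix:
const alice = Buffer.from(
  "d43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d",
  "hex"
);
console.log(ss58Encode(alice, 2));
```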