The Hasura DuckDB Connector allows for connecting to a DuckDB database or a MotherDuck hosted DuckDB database to give you an instant GraphQL API on top of your DuckDB data.
This connector is built using the Typescript Data Connector SDK and implements the Data Connector Spec.
Below, you'll find a matrix of all supported features for the DuckDB connector:
| Feature | Supported | Notes |
| ------- | --------- | ----- |
| Native Queries + Logical Models | ❌ | |
| Simple Object Query | ✅ | |
| Filter / Search | ✅ | |
| Simple Aggregation | ❌ | |
| Sort | ✅ | |
| Paginate | ✅ | |
| Table Relationships | ✅ | |
| Views | ❌ | |
| Distinct | ❌ | |
| Remote Relationships | ✅ | |
| Custom Fields | ❌ | |
| Mutations | ❌ | |
To use the DuckDB connector, you'll need the following prerequisites in place:
- The DDN CLI and Docker installed
- A supergraph
- A subgraph
- A MotherDuck-hosted DuckDB database, or a persistent DuckDB database file, to supply data to your API
The steps below explain how to initialize and configure a connector for local development. You can learn how to deploy a connector, after it's been configured, here.
```sh
ddn auth login
ddn connector init duckdb --subgraph my_subgraph --hub-connector hasura/duckdb
```
In the snippet above, we've used the subgraph `my_subgraph` as an example; however, you should change this value to match any subgraph which you've created in your project.
When you initialized your connector, the CLI generated a set of configuration files, including a Docker Compose file for the connector. Typically, connectors default to port `8080`. Each time you add a connector, we recommend incrementing the published port by one to avoid port collisions.

As an example, if your connector's configuration is in `my_subgraph/connector/duckdb/docker-compose.duckdb.yaml`, you can modify the published port to reflect a value that isn't currently being used by any other connectors:
```yaml
ports:
  - mode: ingress
    target: 8080
    published: "8082"
    protocol: tcp
```
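Before settling on a published port, it can be handy to confirm that nothing else is already listening on it. This is a small illustrative Python sketch (not part of the DDN tooling); the port numbers are just the example values from the snippet above:

```python
import socket


def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is currently accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1)
        # connect_ex returns 0 when something accepts the connection,
        # i.e. the port is already taken by another service.
        return sock.connect_ex((host, port)) != 0


if __name__ == "__main__":
    for candidate in (8080, 8081, 8082):
        print(candidate, "free" if port_is_free(candidate) else "in use")
```

If the port you planned to publish is reported as in use, bump the `published` value in the Compose file by one and check again.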
Now that our connector has been scaffolded out for us, we need to provide a connection string so that the data source can be introspected and the boilerplate configuration can be taken care of by the CLI.
The CLI has provided an `.env.local` file for our connector in the `my_subgraph/connector/duckdb` directory. We can add a key-value pair of `DUCKDB_URL` along with the connection string itself to this file, and our connector will use this to connect to our database.

After adding the `DUCKDB_URL`, the file should look like this example if connecting to a MotherDuck-hosted DuckDB instance:
```env
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://local.hasura.dev:4317
OTEL_SERVICE_NAME=my_subgraph_duckdb
DUCKDB_URL=md:?motherduck_token=eyJhbGc...
```
To connect to a local DuckDB file, you can add the persistent DuckDB database file to the `my_subgraph/connector/duckdb` directory. Since all files in this directory get mounted to the container at `/etc/connector/`, you can then point the `DUCKDB_URL` to the local file. Assuming that the DuckDB file is named `chinook.db`, the file should look like this example:
```env
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://local.hasura.dev:4317
OTEL_SERVICE_NAME=my_subgraph_duckdb
DUCKDB_URL=/etc/connector/chinook.db
```
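The mapping from a file in the connector directory to the in-container `DUCKDB_URL` is mechanical: only the filename matters, because the whole directory is mounted at `/etc/connector/`. The following is an illustrative Python sketch of that rule (the helper name `duckdb_url_for` is ours, not part of any Hasura tooling):

```python
from pathlib import PurePosixPath

# The connector directory is mounted here inside the container.
MOUNT_POINT = PurePosixPath("/etc/connector")


def duckdb_url_for(host_path: str) -> str:
    """Derive the in-container DUCKDB_URL for a database file placed in
    the connector directory, keeping only the filename."""
    return str(MOUNT_POINT / PurePosixPath(host_path).name)


print(duckdb_url_for("my_subgraph/connector/duckdb/chinook.db"))
# → /etc/connector/chinook.db
```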
With the connector configured, we can now use the CLI to introspect our database and create a source-specific configuration file for our connector.
```sh
ddn connector introspect --connector my_subgraph/connector/duckdb/connector.yaml
```
Hasura DDN uses a concept called "connector linking" to take NDC-compliant configuration JSON files for a data connector and transform them into an `hml` (Hasura Metadata Language) file as a `DataConnectorLink` metadata object. Metadata objects in `hml` files define our API.

First, we need to create this `hml` file with the `connector-link add` command, and then convert our configuration files into `hml` syntax and add them to this file with the `connector-link update` command.

Let's name the `hml` file the same as our connector, `duckdb`:
```sh
ddn connector-link add duckdb --subgraph my_subgraph
```
The new file is scaffolded out at `my_subgraph/metadata/duckdb/duckdb.hml`.

The generated file has two environment variables, one for reads and one for writes, that you'll need to add to your subgraph's `.env.my_subgraph` file. Each key is prefixed by the subgraph name, an underscore, and the name of the connector. Ensure the port value matches what is published in your connector's Docker Compose file.
As an example:
```env
MY_SUBGRAPH_DUCKDB_READ_URL=http://local.hasura.dev:<port>
MY_SUBGRAPH_DUCKDB_WRITE_URL=http://local.hasura.dev:<port>
```
These values are for the connector itself and use `local.hasura.dev` to ensure proper resolution within the Docker container.
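The naming rule above (subgraph name, underscore, connector name, upper-cased, followed by the `READ_URL`/`WRITE_URL` suffix) can be sketched as follows; `connector_env_keys` is a hypothetical helper for illustration, not a DDN CLI function:

```python
def connector_env_keys(subgraph: str, connector: str) -> tuple[str, str]:
    """Build the read/write environment-variable names for a connector
    link, following the <SUBGRAPH>_<CONNECTOR>_{READ,WRITE}_URL pattern."""
    prefix = f"{subgraph}_{connector}".upper()
    return f"{prefix}_READ_URL", f"{prefix}_WRITE_URL"


print(connector_env_keys("my_subgraph", "duckdb"))
# → ('MY_SUBGRAPH_DUCKDB_READ_URL', 'MY_SUBGRAPH_DUCKDB_WRITE_URL')
```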
Let's start our connector's Docker Compose file by running the following from inside the connector's subgraph:
```sh
docker compose -f docker-compose.duckdb.yaml up
```
Finally, now that our `DataConnectorLink` has the correct environment variables configured for the connector, we can run the update command to have the CLI look at the configuration JSON and transform it to reflect our database's schema in `hml` format. In a new terminal tab, run:
```sh
ddn connector-link update duckdb --subgraph my_subgraph
```
After this command runs, you can open your `my_subgraph/metadata/duckdb.hml` file and see your metadata completely scaffolded out for you 🎉
View the full documentation for the DuckDB connector here.
Check out our contributing guide for more details.
The DuckDB connector is available under the Apache License 2.0.