maoosi / prisma-appsync

⚡ Turns your ◭ Prisma Schema into a fully-featured GraphQL API, tailored for AWS AppSync.

Home Page: https://prisma-appsync.vercel.app

Question: How to use multiple Lambda data sources?

ocheezyy-lw opened this issue

I am looking into switching from PostGraphile to your tool. The two things I notice missing are support for Postgres routines/functions and support for attaching additional Lambda data sources through a provider.

I completely understand the routines/functions limitation, since Prisma is the one introspecting the database, but I think Lambda data sources could fit in here given that the project targets AppSync.

Do you think this is something that's possible to fit into the project? If so I would love to contribute in any way I can.

I do see the immediate concern of testing a Lambda locally, but there could be ways to work around that.

Hey @ocheezyy-lw, I’m not entirely sure I understand your ask.

Prisma-AppSync already is a Lambda Resolver. So you can write custom business logic inside the main handler function, like you would in any Lambda Data Source. If you are looking to extend the default CRUD operations, then you can add custom resolvers (see: https://prisma-appsync.vercel.app/features/resolvers.html).
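
To give a rough idea, a custom resolver added to the main handler could look something like this (just a sketch: the generated client path, the resolver name and the way the Prisma client is accessed are placeholders, so double-check against the docs linked above):

// handler.ts (sketch)
import { PrismaAppSync } from "./prisma/generated/prisma-appsync/client";

const prismaAppSync = new PrismaAppSync();

export const main = async (event: any) => {
  return await prismaAppSync.resolve({
    event,
    resolvers: {
      // hypothetical custom query declared via extendSchema / extendResolvers
      countPosts: async () => {
        return await prismaAppSync.prisma.post.count();
      },
    },
  });
};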

Could you please elaborate on what you have in mind and your use case?

Hey, sorry for the confusion. In my current use case I have Lambda functions that I have added to AppSync as data sources. The goal is to expose those Lambdas through AppSync, including input types and response types.

I understand the input types and response types will have to be created in custom-schema.gql with extendSchema on the generator, but I don't quite see how I could add a resolver targeting the Lambda from custom-resolvers.yml.

I could be missing some core knowledge here, as most of this was abstracted away the first time I did it, but I was hoping there would be a way to create the resolver and attach it to the Lambda data source that was created on AppSync. On top of that, it would be great if the Lambdas could be executed from the local dev environment, even if they have to target a deployed Lambda.

That's alright, just trying to understand your use case!

Is there a particular reason why you'd want to create multiple Lambda functions as AppSync data sources, instead of just the one provided by default with Prisma-AppSync?

I actually see now that I could just create custom resolvers inside the Prisma client.

That said, I have some Lambdas in place that use a decent number of npm packages. If I were to move all of that logic into custom resolvers, and either use one Lambda layer or install all the packages directly into the Prisma-AppSync resolver Lambda, would that affect execution time, cold start time, etc.?

My main reason for having separate Lambdas was concern about cold start time; besides that, it was just a little cleaner to deal with functions that were separated out, compared to one massive Lambda with multiple methods.

Let's assume we want to create a new sayHelloWorld Query that triggers a new say-hello-world.ts lambda data source function. Here is how to implement it...

1/ First, we will declare our new query inside custom-schema.gql:

# custom-schema.gql
extend type Query {
  sayHelloWorld: String!
}

2/ We do the same inside custom-resolvers.yaml, using a unique dataSource identifier. We will use say-hello-world, but this can be anything you want:

# custom-resolvers.yaml
- typeName: Query
  fieldName: sayHelloWorld
  dataSource: say-hello-world

3/ Then, we update the appsync generator config inside our schema.prisma file:

// schema.prisma
generator appsync {
    provider        = "prisma-appsync"
    extendSchema    = "./custom-schema.gql"
    extendResolvers = "./custom-resolvers.yaml"
}

4/ We create a new say-hello-world.ts file in our project:

// say-hello-world.ts
export const main = async (event: any) => {
  return "Hello world";
};

5/ We need to tell the local dev server that the say-hello-world dataSource identifier links to our say-hello-world.ts lambda function. To do so, we update our server.ts file:

// server.ts
// ...
import { join } from "path";
import { AppSyncSimulatorDataSourceConfig } from "amplify-appsync-simulator";

(async () => {
  // ...

  // load say-hello-world.ts file
  const sayHelloWorld = await import(join(process.cwd(), "say-hello-world.ts"));

  // register data sources for local dev server
  const dataSources: AppSyncSimulatorDataSourceConfig[] = [
    {
      type: "AWS_LAMBDA",
      name: "prisma-appsync", // link to the default prisma-appsync handler
      invoke: lambdaHandler.main,
    },
    {
      type: "AWS_LAMBDA",
      name: "say-hello-world", // link to our new lambda handler
      invoke: sayHelloWorld.main,
    },
  ];

  createServer({
    schema,
    lambdaHandler,
    resolvers,
    port,
    wsPort,
    watchers,
    dataSources, // add this
  });
})();

6/ That's it! You can now query sayHelloWorld from your local dev server API:

query {
  sayHelloWorld
}

7/ Note that you'd also need to adapt the default provided CDK boilerplate to link the say-hello-world dataSource to the lambda function:

if (['lambda', 'prisma-appsync'].includes(resolver.dataSource) && this.dataSources.lambda) {
    // ... existing boilerplate wiring resolvers to the default Prisma-AppSync data source
}
else if (resolver.dataSource === 'say-hello-world') {
    new appSync.Resolver(this, resolvername, {
        api: this.graphqlApi,
        typeName: resolver.typeName,
        fieldName: resolver.fieldName,
        dataSource: sayHelloWorldDataSource, // lambdaDataSource created for say-hello-world.ts
    })
}

More info on creating a lambdaDataSource using the CDK: https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_appsync.LambdaDataSource.html#example
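
For example, the data source could be created from a bundled Lambda function like this (construct names are illustrative, and the use of NodejsFunction is an assumption, so adapt it to how the boilerplate defines its functions):

// cdk/index.ts (sketch)
import * as lambda from "aws-cdk-lib/aws-lambda";
import { NodejsFunction } from "aws-cdk-lib/aws-lambda-nodejs";

// bundle say-hello-world.ts into its own Lambda function
const sayHelloWorldFn = new NodejsFunction(this, "sayHelloWorldFn", {
    entry: "say-hello-world.ts",
    handler: "main",
    runtime: lambda.Runtime.NODEJS_18_X,
});

// register it as an AppSync Lambda data source
const sayHelloWorldDataSource = this.graphqlApi.addLambdaDataSource(
    "sayHelloWorldDataSource",
    sayHelloWorldFn,
);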

Thank you so much for the very detailed response. It made it much easier to understand.

You didn't answer my question, though, about whether it makes sense to include extra packages in the resolver to allow for extended custom resolvers without using extra Lambda data sources.

For example, if I were to include the Pinpoint client in the resolver and then have a custom resolver that grabbed info from Prisma and executed a Pinpoint call in the resolver itself. Would that make sense to do, and if so, would it affect performance?

Yes, it's probably better to use a custom resolver for this.

As long as you don't go over AWS Lambda's size and space limits, you should be okay. And if you need to, you can increase the Lambda memory size to improve performance.
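
In the CDK boilerplate that's a single prop on the Lambda function, in the same spot as the runtime shown further down (the value below is just an example):

// inside the Lambda function props in the CDK boilerplate
memorySize: 1536, // in MB (AWS Lambda defaults to 128)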

Plus, the AWS SDK is already a part of the Lambda execution runtime, so you don't need to include extra libraries to use Pinpoint.

Awesome, thank you!

This is my final question for now, I promise. Are you planning on migrating from AWS SDK v2 to v3? If I were to move my resolvers into the handler, any changes I make would depend on that.

The AWS SDK being part of the Lambda execution runtime isn't something related to Prisma-AppSync - it is something that is available by default inside AWS Lambda functions.

To use v3 instead of v2, you simply need to configure your Lambda function to use the Node.js 18 runtime, which is already the default when using the provided CDK boilerplate:

runtime: lambda.Runtime.NODEJS_18_X,

Note that you'd still need to install the AWS SDK locally (yarn add @aws-sdk/[package] --dev) to access the SDK from the local dev server environment.

Please refer to this article for more info:
https://aws.amazon.com/blogs/compute/node-js-18-x-runtime-now-available-in-aws-lambda/
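
For illustration, here is roughly what a custom resolver mixing a Prisma query with a v3 Pinpoint call could look like (the resolver name, Prisma model, environment variable, and Pinpoint parameters are placeholders, so treat this as a sketch rather than working code):

// handler.ts (sketch)
import { PinpointClient, SendMessagesCommand } from "@aws-sdk/client-pinpoint";
import { PrismaAppSync } from "./prisma/generated/prisma-appsync/client";

const prismaAppSync = new PrismaAppSync();
const pinpoint = new PinpointClient({});

export const main = async (event: any) => {
  return await prismaAppSync.resolve({
    event,
    resolvers: {
      // hypothetical custom mutation: fetch a user with Prisma, then notify via Pinpoint
      notifyUser: async ({ args }: any) => {
        const user = await prismaAppSync.prisma.user.findUniqueOrThrow({
          where: { id: args.userId },
        });

        // parameters are illustrative only
        await pinpoint.send(
          new SendMessagesCommand({
            ApplicationId: process.env.PINPOINT_APP_ID,
            MessageRequest: {
              Addresses: { [user.email]: { ChannelType: "EMAIL" } },
              MessageConfiguration: {
                EmailMessage: {
                  SimpleEmail: {
                    Subject: { Data: "Hello" },
                    TextPart: { Data: "Hello from Pinpoint!" },
                  },
                },
              },
            },
          })
        );

        return true;
      },
    },
  });
};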