aws / aws-lambda-dotnet

Libraries, samples and tools to help .NET Core developers develop AWS Lambda functions.

Tracking .NET 8 Support

normj opened this issue

Update 2/22/2024

The .NET 8 Lambda runtime has been released.

https://aws.amazon.com/blogs/compute/introducing-the-net-8-runtime-for-aws-lambda/


Completed Tasks

  • Amazon.Lambda.RuntimeSupport, the Lambda runtime client, has been updated for .NET 8 including making the assembly trimmable for .NET 8.
  • Use new .NET 8 APIs in Amazon.Lambda.RuntimeSupport to ensure the .NET runtime understands the amount of memory allocated to the function #1578
  • Event packages have been updated for .NET 8 including making the assemblies trimmable for .NET 8.
  • .NET CLI Lambda template updated to have custom runtime templates target .NET 8 and use the provided.al2023 runtime.
  • Amazon.Lambda.Annotations has been updated to support building as an executable assembly. AOT trimming warnings have also been addressed. This allows you to target .NET 8 and deploy either as a self-contained executable or with AOT. A new project template using this feature, more documentation updates, and a .NET 8 Lambda build image for AOT will be coming. Check out James Eastham's video on how to use this feature now: https://www.youtube.com/watch?v=kyb16r-Oul0
  • .NET 8 version of the .NET Mock Lambda Test Tool has been released.
  • The .NET 8 OCI image has been published to ECR https://gallery.ecr.aws/lambda/dotnet
    • public.ecr.aws/lambda/dotnet:8
  • .NET 8 build image used for container builds especially Native AOT builds has been released https://gallery.ecr.aws/sam/build-dotnet8
  • Amazon.Lambda.Tools has been updated to use the .NET 8 build image unless overridden by the --container-image-for-build switch. Amazon.Lambda.Tools will automatically use container builds when building for Native AOT so that a matching Amazon Linux 2023 build environment is used.
  • Amazon.Lambda.AspNetCoreServer and Amazon.Lambda.AspNetCoreServer.Hosting have been updated to target .NET 8 and support Native AOT trimming.
  • Amazon.Lambda.Templates 7.0.0 has been released targeting the .NET Lambda templates to .NET 8.
  • Managed .NET 8 Lambda runtime using the identifier dotnet8 has been released.
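
With the managed runtime identifier above now live, a hedged sketch of creating a function on it with the AWS CLI looks like this (function name, handler, package, and role ARN are all placeholders):

# Create a zip-packaged function on the new managed runtime.
aws lambda create-function \
  --function-name my-dotnet8-function \
  --runtime dotnet8 \
  --handler MyAssembly::MyNamespace.Function::FunctionHandler \
  --zip-file fileb://package.zip \
  --role arn:aws:iam::123456789012:role/my-lambda-execution-role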

Thanks for tracking this. Just to clarify, should we expect all Amazon.Lambda.* packages to be .NET 8 compatible when support is ready?

Also, looking at the completed tasks, it seems a lot has already been done. How much work is remaining?

Hi @petro2050

I avoided making a list of what needs to be done because that might imply it is a finite list; as we know with software, new tasks and ideas pop up as we go. The long pole is deploying the managed runtime, which requires a lot of operational work to make sure everything is in place so it can be well monitored and supported.

From the Amazon.Lambda.* perspective, most everything has been verified for .NET 8. The outstanding libraries that still need to be verified are the ASP.NET Core bridge library Amazon.Lambda.AspNetCoreServer and our Amazon.Lambda.Annotations library. We are also working with the SAM team, which produces the build container images, to get a .NET 8 container image released. That will make using Native AOT with .NET 8 a lot easier.

@normj we appreciate that you are on top of this, as some IDE developers are significantly behind the eight ball on support.

Updated the description with some info on getting started now without the managed runtime.

Thank you for the information and the updates, @normj! We are really looking forward to this and appreciate it!

For projects currently deployed "as-is" on the AWS::Serverless::Function 6.x runtime, should we be planning on adding the custom runtime, or will .NET 8 also be available without the additional container/image/build setup? Thanks!

@jakenuts Using custom runtimes is a way to use .NET 8 in Lambda today. Eventually we will be releasing a new managed runtime for .NET 8. That means your AWS::Serverless::Function resources could target a dotnet8 runtime when it is released. Unfortunately I can't give a timeframe for when the new managed runtime will be available, other than that it is coming.

@normj & @jakenuts - with .NET 7.0 we found that building with <SelfContained>True</SelfContained> works fine when deployed with the dotnet6 Lambda runtime - the native runtime is ignored, given that the self-contained one is used.

One guesses that will continue to work for .NET 8.0, again with the dotnet6 runtime and hopefully dotnet8?
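
For anyone following along, the property group for that kind of self-contained deployment looks roughly like this (a sketch; the RID and exact properties in your project may differ):

<PropertyGroup>
  <!-- Sketch: publish a self-contained executable so the bundled .NET version
       is used instead of the one provided by the managed runtime. -->
  <OutputType>Exe</OutputType>
  <TargetFramework>net8.0</TargetFramework>
  <SelfContained>true</SelfContained>
  <RuntimeIdentifier>linux-x64</RuntimeIdentifier>
</PropertyGroup>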

That said, we're keen to try the new AL2023 image, as we otherwise hit a very bizarre runtime issue when deploying a container build of .NET 7.0 on the older al2 image. Fingers crossed this is resolved, as @normj was unable to reproduce it.

To us, the big value add of this new runtime is support for arm64 (with AOT) Lambdas and the lower cost!

Does it mean that we can do Native AOT builds for the .NET 8 managed runtime (with PublishAot=true)?

@Illivion you can indeed. To reshare Norm's update from the description, I have a video on YouTube :)

https://www.youtube.com/watch?v=kyb16r-Oul0

@jeastham1993 it looks like that video is using a custom runtime. To clarify - will it be possible to use Native AOT with the managed runtime once it's released, or will we still need to use a custom runtime for that?

@DillonN-build yep, you will be able to run Native AOT with the managed runtime. Native AOT compiled apps can run anywhere, and once the managed runtime is out we will release some guidance for using both custom runtimes and managed runtimes.

With the .NET 6 managed runtime, you can already use an executable assembly, and this is how native AOT works. We are just working out the finer details but, as I said, on release we will provide guidance on how to do that.

Sorry, all of this is a bit confusing. I have basically the latest setup, with VS 2022 17.8.1, which shipped with the release of .NET 8.

My confusion comes from this: I want to create an SQS function, but the template still defaults to .NET 6, never mind .NET 7, and by now it should really be .NET 8.

Is this discussion/issue about adding that support, or is it already supported and I can simply change the target to .NET 7/8? This is for SQS.

@jeastham1993 Any indication James as to when this feature will be supported and the managed runtime will be GA?

@alexander-manley We are working hard on getting the .NET 8 managed runtime out but we can't give an ETA. There is a lot of coordination across many teams with competing priorities so things are very fluid.

https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html#runtimes-future

This page says the indicative target launch is January 2024.

If anyone is interested, please try the preview version of the .NET 8 OCI image from ECR and let us know any feedback you have. Thanks!

https://gallery.ecr.aws/lambda/dotnet

public.ecr.aws/lambda/dotnet:8-preview.2023.12.14.16

Using an AWS base image for .NET

The company I work for has a use case that requires being able to install java (java-17-amazon-corretto-headless and a few other supporting packages) into the runtime image. In .NET 7, they used yum to accomplish this. Is there a way to support this use case in the new .net 8 images which don't appear to have yum in them any more?

@farmergreg yum has been replaced with the lighter-weight microdnf in the runtime image. I just tested, and the command dnf install java-17-amazon-corretto-headless works to find and install the package. You should generally be able to replace yum with dnf (or microdnf, which is what is really running).

@Beau-Gosse-dev thank you for the suggestion! I tried it, and it doesn't quite work for me in our Dockerfile. Here's the error I get:

[dotnet-8-runtime-aws-lambda 2/4] RUN dnf install java-17-amazon-corretto-headless:
0.778 Downloading metadata...
1.788 error: failed to remove /var/cache/yum/metadata/amazonlinux-2023-x86_64/repodata

The Dockerfile:

FROM public.ecr.aws/lambda/dotnet:8-preview.2023.12.14.16 AS dotnet-8-runtime-aws-lambda
RUN dnf install -y java-17-amazon-corretto-headless

After some testing, I've discovered the following: It appears that the above Dockerfile builds on Windows 11, but not on Ubuntu 22.04.3 LTS.

Hello,

Just reporting in. I have a .Net 8 based (Non AOT, Non Lambda Annotation) Asp.Net REST api that I've deployed via the latest preview image. I also have a .Net 8 Lambda Function deployed as the Authorizer in an ApiGateway with the same preview image. (public.ecr.aws/lambda/dotnet:8-preview, which currently is pointing at 8-preview.2023.12.15.17)

I'm not doing anything spectacular or out of the ordinary. This is a pretty bare-bones REST API that has some service classes that hit a MySQL RDS instance, and does some Cognito communication.

Up until today, this has all been running in .net 7. I started migrating everything to .net 8. So far everything is working exactly as it did with the public.ecr.aws/lambda/dotnet:7 image. I have made no changes to the code at all, other than to target .net 8 in the build.

Looking forward to the stable image hopefully soon.

Edit: This is using the AMD64 image.

Thanks,
Joe

I created the .Net 7 Container template project. Then switched the csproj to net8.0 and updated the Dockerfile image to public.ecr.aws/lambda/dotnet:8-preview. Deployed and works as expected.

In addition, I also tried arm64 public.ecr.aws/lambda/dotnet:8-preview-arm64. Deployed and works as expected.

@normj How are things looking, are we still expecting a January rollout of managed .NET 8 support?

If anyone is interested, please try the preview version of the .NET 8 OCI image from ECR and let us know any feedback you have. Thanks!
https://gallery.ecr.aws/lambda/dotnet

hi, I've just recently tried latest 8-preview with ASP.NET 8 Minimal API, container image with executable assembly and ARM64 platform.

All good so far!

Here is a code reference if anyone is curious: https://github.com/ahanoff/how-to/tree/main/aspnet8-minimal-api-lambda-container-image and a blog post: https://ahanoff.dev/blog/aspnet8-minimal-api-lambda-container-image

I got the Intel 8-preview image working as well. Looks good.

I have had some luck building a .NET 8 minimal API with AOT using the following Dockerfile:

FROM public.ecr.aws/lambda/provided:al2023 AS base
RUN dnf install clang-15.0.6-3.amzn2023.0.2.x86_64 --assumeyes
RUN dnf install libicu-67.1-7.amzn2023.0.3.x86_64 --assumeyes
RUN dnf install zlib-devel-1.2.11-33.amzn2023.0.4.x86_64 --assumeyes
RUN dnf install wget --assumeyes

RUN rpm --import https://packages.microsoft.com/keys/microsoft.asc
RUN wget -O /etc/yum.repos.d/microsoft-prod.repo https://packages.microsoft.com/config/fedora/37/prod.repo
RUN dnf install -y dotnet-sdk-8.0

ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["Testing.Api.csproj", "Testing.Api/"]
COPY ["Testing.Shared.csproj", "Testing.Shared/"]
RUN dotnet restore "Testing.Api/Testing.Api.csproj"
COPY . .
WORKDIR "/src/Testing.Api"
RUN dotnet build "Testing.Api.csproj" -c $BUILD_CONFIGURATION -o /app/build

ARG BUILD_CONFIGURATION=Release
RUN dotnet publish -c $BUILD_CONFIGURATION -o /app/publish

ENTRYPOINT ["/app/bootstrap"]

You can then output the binary, zip it, and create a Lambda function with it:
docker build --output=. --target=base .
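
For example, a rough sketch of those last steps, assuming the exported publish output lands under ./app/publish (function name, paths, and the role ARN are placeholders):

# Zip the exported publish output (assumed location) for a custom-runtime function.
(cd app/publish && zip -r ../../function.zip .)

# Create the function on the provided.al2023 custom runtime; the handler value
# is not used by custom runtimes but is still required for zip packages.
aws lambda create-function \
  --function-name my-aot-api \
  --runtime provided.al2023 \
  --handler bootstrap \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/my-lambda-execution-role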

How are the timescales for the .NET 8 managed runtime looking? Are we still on for this month? Thanks!

Reporting in... our other Lambdas based on this image all work correctly.

The lambda that I referenced above in a previous comment is being transitioned to a custom Ubuntu image that uses the nuget packages to implement the lambda. It appears that in our use case, Ubuntu better supports the package set that we need when compared to Amazon Linux.

I just checked the official website and the new date is February 2024.

https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html

We updated the estimate today to February. We are close and working hard on it but switching to Amazon Linux 2023 is providing some fresh challenges that we need some more time with.

I keep getting the following error using this example:

Error: .NET binaries for Lambda function are not correctly installed in the /var/task directory of the image when the image was built. The /var/task directory is missing.
INIT_REPORT Init Duration: 5.45 ms Phase: invoke Status: error Error Type: Runtime.ExitError
START RequestId: 47535845-75db-4dcf-8013-abdd6a67b6fb Version: $LATEST
RequestId: 47535845-75db-4dcf-8013-abdd6a67b6fb Error: Runtime exited with error: exit status 102
Runtime.ExitError
END RequestId: 47535845-75db-4dcf-8013-abdd6a67b6fb
REPORT RequestId: 47535845-75db-4dcf-8013-abdd6a67b6fb Duration: 7.48 ms Billed Duration: 8 ms Memory Size: 1024 MB Max Memory Used: 3 MB

We updated the estimate today to February. We are close and working hard on it but switching to Amazon Linux 2023 is providing some fresh challenges that we need some more time with.

Thank you for the update.

@werebear73-tritelph Can you share how you are building your image? Is it a multi-stage Dockerfile and if so can you share it?

@werebear73-tritelph are you using arm64 runtime?

@normj are you aware of any issues with Amazon.Lambda.AspNetCoreServer.Hosting running under .NET 8? I'm going in circles trying to figure out where my problem is. It doesn't occur when running the Web API project on localhost.

The behavior I'm seeing is logs being cut off within 1s of execution (sometimes mid-message); after waiting a few seconds, API Gateway times out, and after 15 minutes the Lambda itself times out.

@dguisinger - from what you are saying - we may have run into something similar to this in .net7.

We got as far as thinking it was related to the AL and .NET runtime combination, which we hoped would get resolved with AL2023 and .NET 8. We created a test repo, but @normj et al., per this issue, were unable to reproduce it.

Our workaround was to bundle in the runtime as a single executable.

@Simonl9l Interesting... not sure if it's the same issue or not.

I'm using a different architecture than you (WebAPI controllers wrapping MediatR CQRS handlers). As far as I can tell, load doesn't make a difference.

It feels like an async bug, but I can't track it down, and it's only happening in Lambda. It kind of feels like what you get if you forget to await a method... it starts executing, but then returns control before it's finished... but it's odd to me that something is keeping the process alive for the next 15 minutes until I hit the Lambda timeout. It's an instant response when I execute locally.

Even weirder, the logs always stop at one of two positions. One is immediately after getting "AWSSDK: Found credentials using the AWS SDK's default credential search", and the other is halfway through recording a line of text in the log, always stopping mid-sentence on the same character.

The whole reason we are using the Amazon.Lambda.AspNetCoreServer.Hosting NuGet package is so we can use the integrated Swagger UI. The Swagger UI seems to work just fine; there are no errors while loading the HTML content.

It's not specific to a particular Web API method; they all seem to exhibit this behavior.

@werebear73-tritelph are you using arm64 runtime?

@ahanoff
I was trying to use x86

@werebear73-tritelph Can you share how you are building your image? Is it a multi-stage Dockerfile and if so can you share it?

@normj
I have tried several different ways using information from several different sources (blogs, AWS docs, GitHub, and YouTube). I think I am doing something wrong. Initially, I was trying to build something similar to @dguisinger: an API with MediatR CQRS handlers. But I kept running into walls, so I have backed up to just deploying a template application and working from there back to my original vision.

Please understand that I am trying to recreate my previous results, but I can't seem to do so. Please disregard my previous message. These are the results that I get when attempting again. I think part of my issue is that I am using a different build command so that I can push an image directly to ECR to use for the Lambda.

FROM public.ecr.aws/lambda/provided:al2023 AS base
RUN dnf install clang-15.0.6-3.amzn2023.0.2.x86_64 --assumeyes
RUN dnf install libicu-67.1-7.amzn2023.0.3.x86_64 --assumeyes
RUN dnf install zlib-devel-1.2.11-33.amzn2023.0.4.x86_64 --assumeyes
RUN dnf install wget --assumeyes

RUN rpm --import https://packages.microsoft.com/keys/microsoft.asc
RUN wget -O /etc/yum.repos.d/microsoft-prod.repo https://packages.microsoft.com/config/fedora/37/prod.repo
RUN dnf install -y dotnet-sdk-8.0

ARG BUILD_CONFIGURATION=Release
WORKDIR /src
COPY ["./src/interfaces/ApplicationName.DomainUser.ApiLambdaAot/ApplicationName.DomainUser.ApiLambdaAot.csproj", "ApplicationName.DomainUser.ApiLambdaAot/"]
RUN dotnet restore "ApplicationName.DomainUser.ApiLambdaAot/ApplicationName.DomainUser.ApiLambdaAot.csproj"
COPY . .
WORKDIR "/src/ApplicationName.DomainUser.ApiLambdaAotcd"
RUN dotnet build "ApplicationName.DomainUser.ApiLambdaAot.csproj" -c $BUILD_CONFIGURATION -o /app/build

ARG BUILD_CONFIGURATION=Release
RUN dotnet publish -c $BUILD_CONFIGURATION -o /app/publish

ENTRYPOINT ["/app/bootstrap"]

Return from the AWS Lambda:

{
  "errorType": "Runtime.InvalidEntrypoint",
  "errorMessage": "RequestId: fa5c27cc-b2cf-4f36-a526-dd7ef1a41cf4 Error: fork/exec /app/bootstrap: no such file or directory"
}

Results in the AWS Lambda Log:

INIT_REPORT Init Duration: 1.17 ms	Phase: init	Status: error	Error Type: Runtime.InvalidEntrypoint
INIT_REPORT Init Duration: 0.63 ms	Phase: invoke	Status: error	Error Type: Runtime.InvalidEntrypoint
START RequestId: fa5c27cc-b2cf-4f36-a526-dd7ef1a41cf4 Version: $LATEST
RequestId: fa5c27cc-b2cf-4f36-a526-dd7ef1a41cf4 Error: fork/exec /app/bootstrap: no such file or directory
Runtime.InvalidEntrypoint
END RequestId: fa5c27cc-b2cf-4f36-a526-dd7ef1a41cf4
REPORT RequestId: fa5c27cc-b2cf-4f36-a526-dd7ef1a41cf4	Duration: 1.98 ms	Billed Duration: 2 ms	Memory Size: 1024 MB	Max Memory Used: 2 MB	

I will continue to work on this and will post here if I make any progress.

@Simonl9l I was going to try switching to a bundled runtime. Do you have an example of your project file's property group for setting up self contained/publish single file? I've tried a bunch of things but I keep getting runtime and target errors.

I did try some additional things:

  1. I removed async from my WebAPI method and did a Task.Run().Result to try running the internal code, same result
  2. I removed all my MediatR pipeline behaviors so it becomes a simple async call, again, same result
  3. I removed MediatR and called the async handler directly, same result
  4. I threw an exception handler into my WebAPI controller method and buried any exception that bubbles up while calling the async handler directly and returned a status code of 400. Worked fine, instantly returned my status code of 400.
  5. Added all my commented out code back in, using MediatR and behaviors, etc but leaving the call encapsulated with the try/catch block...... Worked once, failed multiple times afterwards, haven't been able to get it to work again.

I feel like I'm hitting a brick wall at this point

@dguisinger per my post above there is a link to our sample .NET 7 repo - it's .NET 7 but easily updatable to .NET 8. Here is the csproj.

Our general plan, given .NET 8 and AOT support with the slim host builder and the code-generated route mapping, is to just deploy native Linux binaries.

Here is the csproj

Is the csproj link working for others?

Edit: Oops, missed the original link when scanning: https://github.com/Layer9Labs/LambdaMinimalApiNet7

@dguisinger per my post above there is a link to our sample .NET 7 repo - it's .NET 7 but easily updatable to .NET 8. Here is the csproj.

@Simonl9l I gave this a try.... it appears to have built fine, but is telling me it can't find the executable. It looked like your serverless config was still using the full project name for your entry point whereas your project file was specifying an assembly name of "bootstrap". Did you have to do anything extra to get it to work?

I have to say, at this point I am extremely frustrated.... last night even my Swagger endpoint just randomly stopped working and hasn't come back. As far as I can tell, all I did at the time was add AWS debug logging to the config file. I removed that, and it still won't work. This has been a severe blow to productivity, and I can't decide if I should backport all my code to C# 10 or ditch Lambda for Docker. I shouldn't have to spend 10 hours a day for several days randomly commenting lines out or adding Console.WriteLine statements to try to figure out why code is basically locking up under the Lambda environment.

[Edit]. Switching "Microsoft.AspNetCore": "Information" back to "Warning" made swagger start working again. 😵‍💫 Makes no sense

[Edit 2] Removing Logging.AddConsole() and Logging.AddLambdaLogger() from the application makes it run my MediatR code successfully 75% of the time, with a 25% chance of timeouts..... I am .... befuddled.... the less logging/console output, the more it works

It looked like your serverless config was still using the full project name for your entry point whereas your project file was specifying an assembly name of "bootstrap". Did you have to do anything extra to get it to work?

@dguisinger the resulting executable needs to be named bootstrap for custom runtimes I believe. If you create a new .NET 7 AoT project from the Amazon.Lambda.Tools template, it will have a property in the csproj file something like <AssemblyName>bootstrap</AssemblyName>

It's been a few months, but when I last played around with a .NET 8 custom runtime lambda, I started from the .NET 7 AoT template and I think it was pretty easy to just upgrade the version to net8.0
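
For reference, the property group from that kind of template, retargeted to net8.0, looks roughly like this (a sketch; the exact properties in the template may differ):

<PropertyGroup>
  <!-- Sketch: custom-runtime Native AOT function; the executable must be named
       bootstrap so the provided runtime can find and start it. -->
  <OutputType>Exe</OutputType>
  <TargetFramework>net8.0</TargetFramework>
  <AssemblyName>bootstrap</AssemblyName>
  <PublishAot>true</PublishAot>
  <StripSymbols>true</StripSymbols>
</PropertyGroup>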

I'm trying to convert my .NET 8 Web API to work as a Lambda using @ahanoff's article (https://ahanoff.dev/blog/aspnet8-minimal-api-lambda-container-image) as a guide, but I'm running into the error: Error: fork/exec /lambda-entrypoint.sh: exec format error Runtime.InvalidEntrypoint

Running this locally in Docker I see [INFO] (rapid) exec '/var/runtime/bootstrap' (cwd=/var/task, handler=) and no errors.

Packages I'm using

  • Amazon.Lambda.AspNetCoreServer v8.1.1
  • Amazon.Lambda.AspNetCoreServer.Hosting v1.6.1

In my code I've added builder.Services.AddAWSLambdaHosting(LambdaEventSource.HttpApi);

and my Dockerfile

FROM public.ecr.aws/lambda/dotnet:8-preview AS base

FROM mcr.microsoft.com/dotnet/sdk:8.0 as build
WORKDIR /src
COPY . .

ARG NUGET_FEED_URL
ARG NUGET_FEED_USERNAME
ARG NUGET_FEED_PASSWORD

RUN dotnet nuget add source $NUGET_FEED_URL --username $NUGET_FEED_USERNAME --password $NUGET_FEED_PASSWORD --store-password-in-clear-text
RUN dotnet restore "TEST.Api/TEST.Api.csproj"

RUN dotnet build "TEST.Api/TEST.Api.csproj" --configuration Release --output /app/build

FROM build AS publish
RUN dotnet publish "TEST.Api/TEST.Api.csproj" \
    --configuration Release \
    --runtime linux-arm64 \
    --self-contained false \
    --output /app/publish \
    -p:PublishReadyToRun=true  

FROM base AS final
WORKDIR /var/task
COPY --from=publish /app/publish .
CMD ["TEST.Api"]

Build command

docker build --platform linux/amd64 --build-arg NUGET_FEED_URL=xyz --build-arg NUGET_FEED_USERNAME=xyz --build-arg NUGET_FEED_PASSWORD=xyz -t test-api:v1 .

I've tried invoking endpoints via Api Gateway and Lambda function urls both give the same error.

Apologies if this is not the proper place to post this. I'm new to Lambda and GitHub issues etiquette.
Thanks!

Your Dockerfile is publishing for arm64, but your command line is building on amd64, that might be the source of the error.

Your Dockerfile is publishing for arm64, but your command line is building on amd64, that might be the source of the error.

Thank you! Getting errors now when trying to build my app with arm64 but I can figure that out later. I switched everything to x86 just to get it working for now and everything is good. TY again!

@normj - any updates per SAM - #1611 (comment)

Also one hopes that the Rider AWS Toolkit will include support too?

I'm trying to convert my .NET 8 Web API to work as a Lambda using @ahanoff's article (https://ahanoff.dev/blog/aspnet8-minimal-api-lambda-container-image) as a guide, but I'm running into the error: Error: fork/exec /lambda-entrypoint.sh: exec format error Runtime.InvalidEntrypoint

Thanks @jc1231 for the report and @martincostello for the quick resolution. I've updated the code samples to default to the x86_64 architecture.

@Simonl9l We will get VS, SAM and Rider all updated for .NET 8. It might not happen all on the same exact day, but everything should be released close together.

As far as updates go, I can say the technical challenges we ran into with Amazon Linux 2023 have been resolved. For those curious, this PR, #1661, gives a pretty good description of the issue we were having. It was one of those issues that took some time to narrow down, but once we did, the fix was only a few lines of code.

@normj Thanks! Yes, seems you've all been busy with AL2023! Thanks for plugging away.

The thread above and others are all getting a tad confusing. It would be good if there was a summary wiki/MD page on this.

Are there any current instructions on how to build either an x86_64 or arm64 Lambda with AOT on a Mac (M1 Apple Silicon) for .NET 8.0?

All attempts seem to result in Host machine architecture (Arm64) differs from Lambda architecture (X64). Building Native AOT Lambda functions require the host and lambda architectures to match.

The understanding is that doing a container build (on macOS/M1) should still work with net8.0, given the Microsoft improvements since .NET 7.0?

@Simonl9l I think some of the documentation like here is out of date, and I'll make a note to get it updated. Previously arm64 Lambdas were not supported, but that has been fixed. You should be able to target your Lambda to arm64, and then you won't get the error about cross-architecture builds.
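
For example, a hedged sketch assuming the same -farch switch works for deploy-function as it does for package (the function name is a placeholder):

# Package and deploy targeting arm64 so the host, build container, and function
# architectures all match.
dotnet lambda package -farch arm64
dotnet lambda deploy-function MyArm64Function -farch arm64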

All, thanks a lot for this discussion and the valuable insights. I have also played around with running ASP.NET Core 8.0 Minimal APIs on AWS Lambda a little while ago. As I develop on a MacBook Pro M1, I wanted to share my approach and some settings just in case they turn out to be helpful for anybody.

For my exercise, I used the Minimal API starter project and made minimal changes.

Dockerfile

The Dockerfile looks like this:

ARG RUNTIME=linux-arm64

# Stage 1: Build
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /source
ARG RUNTIME

COPY . .
RUN dotnet publish --configuration Release --runtime ${RUNTIME} --self-contained

# Stage 2: Set up runtime
FROM public.ecr.aws/amazonlinux/amazonlinux:2023
WORKDIR /var/task
ARG RUNTIME

ENV ASPNETCORE_HTTP_PORTS=8080
ENV DOTNET_RUNNING_IN_CONTAINER=true
ENV DOTNET_VERSION=8.0.0
ENV ASPNET_VERSION=8.0.0

COPY --from=build /source/bin/Release/net8.0/${RUNTIME}/publish/* .

# For AWS Lambda, the entrypoint must be specified with an absolute path.
ENTRYPOINT ["/var/task/Lambda.Container"]

Building the Container

I use the following build script (build.sh) to build the container:

#!/bin/bash

REPOSITORY_NAME=lambda_container
TAG=1.0

docker buildx build --platform=linux/arm64 --tag ${REPOSITORY_NAME}:${TAG} .

Pushing the Container

Once built, I use the following script (push.sh) to push the container to AWS ECR:

#!/bin/bash

# AWS ACCOUNT AND REGION SETTINGS
ACCOUNT_ID=YOUR-ACCOUNT-ID
REGION=eu-central-1
REPOSITORY_NAME=lambda_container
TAG=1.0

# Let's assume a repository was created with the following command:
# % aws ecr create-repository --repository-name ${REPOSITORY_NAME} --image-scanning scanOnPush=true
#
# The AWS ECR server will be: ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com
# The repository URI will be: ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPOSITORY_NAME}
# See: https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html

# Authenticate Docker client.
aws ecr get-login-password --region ${REGION} | docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com

docker tag ${REPOSITORY_NAME}:${TAG} ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPOSITORY_NAME}:${TAG}
docker tag ${REPOSITORY_NAME}:${TAG} ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPOSITORY_NAME}:latest

# Push the image to AWS ECR.
docker push ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPOSITORY_NAME}:${TAG}
docker push ${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPOSITORY_NAME}:latest

Deploying the AWS Lambda Function

I use serverless for deployments. The configuration file (serverless.yml) looks like this:

service: lambda-container
frameworkVersion: '3'

provider:
  name: aws
  endpointType: REGIONAL
  region: eu-central-1
  # When deploying containers built for the linux/arm64 architecture, we must set the architecture.
  # Otherwise, the default value of x86_64 will be used. That then leads to an error when invoking the function.
  architecture: arm64
  memorySize: 128

functions:
  weatherForecast:
    image: YOUR-ACCOUNT-ID.dkr.ecr.eu-central-1.amazonaws.com/lambda_container:latest
    events:
      - http:
          path: /{proxy+}
          method: ANY
          cors: true

To deploy, you simply execute the following command:

serverless deploy

Running the Container Locally

Here is how I run the container locally, using a bash script (run.sh):

#!/bin/bash

REPOSITORY_NAME=lambda_container
CONTAINER_NAME=${REPOSITORY_NAME}
TAG=1.0

# Run the container locally.
docker run --detach --publish 8080:8080 --name ${CONTAINER_NAME} ${REPOSITORY_NAME}:${TAG}

If anyone is interested, please try the preview version of the .NET 8 OCI image from ECR and let us know any feedback you have. Thanks!

https://gallery.ecr.aws/lambda/dotnet

public.ecr.aws/lambda/dotnet:8-preview.2023.12.14.16

Using an AWS base image for .NET

After having played around with the amazonlinux:2023 base image (see my comment above), I've now also tested the dotnet:8-preview-arm64 OCI image. This works, too, and also supports internationalization (which the amazonlinux:2023 image does not out of the box). However, my function is measurably slower when compared to the managed dotnet6 runtime. I am not sure whether that is due to the OCI image being in preview at the moment.

It would be good to understand what this might look like once both the managed dotnet8 runtime and the OCI image are released. Will the managed runtime be faster "by design"? What can we expect?

Hi, do you know when this is to be released? .NET 6 is due to expire in 9 months and we have to migrate our entire infrastructure, but dotnet8 isn't yet listed on https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html#runtime-support-policy and is due to be released in February (which is now).

There's still another 20 days of February left 😄

@mikebollandajw We are on track for releasing in February, as the Lambda docs say, as long as we don't hit any hiccups. We are moving at the speed of Lambda deployments, which are very cautious as the changes roll out across the AWS regions.

The .NET 8 build image used for container builds, especially Native AOT builds, has been released: https://gallery.ecr.aws/sam/build-dotnet8

Version 5.10.0 of Amazon.Lambda.Tools has been released, which will use the image when you are doing a Native AOT build or when you use the --use-container-for-build switch.
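
For example (a hedged sketch; the function name is a placeholder, and the second form just shows overriding the image with the switch mentioned earlier in this thread):

# Force a container build using the default .NET 8 build image.
dotnet lambda deploy-function MyFunction --use-container-for-build true

# Or point the container build at a specific image.
dotnet lambda deploy-function MyFunction \
    --use-container-for-build true \
    --container-image-for-build public.ecr.aws/sam/build-dotnet8:latest-x86_64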

The .NET 8 build image used for container builds, especially Native AOT builds, has been released: https://gallery.ecr.aws/sam/build-dotnet8

Version 5.10.0 of Amazon.Lambda.Tools has been released, which will use the image when you are doing a Native AOT build or when you use the --use-container-for-build switch.

@normj, thanks! Can I ask which images I should use for building and deploying an ASP.NET Core 8.0 Minimal API as a Lambda function? I'm currently using the following for building and as a runtime:

  • Build Stage Base Image: mcr.microsoft.com/dotnet/sdk:8.0
  • Runtime Base Image: public.ecr.aws/lambda/dotnet:8-${ARCH} (where ARCH=arm64)

Should I replace the build stage image with the .NET build image (e.g., public.ecr.aws/sam/build-dotnet8:1-arm64)?

@ThomasBarnekow if your build is working fine now and you're not targeting native AOT, then it's probably fine to keep doing what you're doing.

Our .NET 8 SAM build image is built on Amazon Linux 2023, which is also the base OS that will run .NET 8 managed AWS Lambda functions. Having the same base OS between your build and runtime image is important for native AOT.

Another reason to use it would be if you wanted to already have SAM CLI or Amazon.Lambda.Tools bundled in your build image.

You can check out the full source of the image here: https://github.com/aws/aws-sam-build-images/blob/develop/build-image-src/Dockerfile-dotnet8

Which is based on the runtime image here: https://github.com/aws/aws-lambda-dotnet/tree/master/LambdaRuntimeDockerfiles/Images/net8

The .NET 8 build image used for container builds, especially Native AOT builds, has been released: https://gallery.ecr.aws/sam/build-dotnet8

Version 5.10.0 of Amazon.Lambda.Tools has been released, which will use the image when you are doing a Native AOT build or when you use the --use-container-for-build switch.

@normj - great to see this out - thanks to you and the team for the efforts! I can now build .NET 8.0 AOT arm64 Lambdas on my macOS machine!

However, at first blush (I'm on Apple Silicon), with a:

dotnet lambda package -farch arm64

It seem to pul the creek package I get this error:

... invoking 'docker run --name tempLambdaBuildContainer-54ee22ce-4a46-4df8-9cd3-13acd8823713 --rm --volume "/Users/{profile}/Projects/{repo path}":/tmp/source/ -i -u 502:20 -e DOTNET_CLI_HOME=/tmp/dotnet -e XDG_DATA_HOME=/tmp/xdg public.ecr.aws/sam/build-dotnet8:latest-arm64 dotnet publish "/tmp/source/" --output "/tmp/source/bin/Release/net8.0/publish" --configuration "Release" --framework "net8.0" --self-contained true /p:GenerateRuntimeConfigurationFiles=true --runtime linux-arm64 /p:StripSymbols=true' from directory /{repo path}
... docker run: Unable to find image 'public.ecr.aws/sam/build-dotnet8:latest-arm64' locally
... docker run: latest-arm64: Pulling from sam/build-dotnet8
... docker run: 3b4f997427bc: Already exists
... docker run: d80a9b6ce03f: Download complete
.... (etc)
... docker run: Digest: sha256:fbce9d067d8ecd07027bcce4c8be48325662523698ed384df71f362b5df94778
... docker run: Status: Downloaded newer image for public.ecr.aws/sam/build-dotnet8:latest-arm64
... docker run: System.IO.IOException: No space left on device : '/tmp/dotnet'
... docker run:    at System.IO.FileSystem.CreateParentsAndDirectory(String fullPath, UnixFileMode unixCreateMode)
... docker run:    at System.IO.FileSystem.CreateDirectory(String fullPath, UnixFileMode unixCreateMode)
... docker run:    at System.IO.Directory.CreateDirectory(String path)
... docker run:    at Microsoft.Extensions.EnvironmentAbstractions.DirectoryWrapper.CreateDirectory(String path)
... docker run:    at Microsoft.DotNet.Configurer.FileSystemExtensions.<>c__DisplayClass0_0.<CreateIfNotExists>b__0()
... docker run:    at Microsoft.DotNet.Cli.Utils.FileAccessRetrier.RetryOnIOException(Action action)
... docker run:    at Microsoft.DotNet.Configurer.DotnetFirstTimeUseConfigurer.Configure()
... docker run:    at Microsoft.DotNet.Cli.Program.ConfigureDotNetForFirstTimeUse(IFirstTimeUseNoticeSentinel firstTimeUseNoticeSentinel, IAspNetCertificateSentinel aspNetCertificateSentinel, IFileSentinel toolPathSentinel, Boolean isDotnetBeingInvokedFromNativeInstaller, DotnetFirstRunConfiguration dotnetFirstRunConfiguration, IEnvironmentProvider environmentProvider, Dictionary`2 performanceMeasurements)
... docker run:    at Microsoft.DotNet.Cli.Program.ProcessArgs(String[] args, TimeSpan startupTime, ITelemetry telemetryClient)
... docker run:    at Microsoft.DotNet.Cli.Program.Main(String[] args)
ERROR: Container build returned 1

Any suggestions ?

Edit:

I had to increase the Docker (desktop) Resources!

However, oddly, this build setup does not seem to be able to access my globally configured NuGet sources, and I have no idea how to add a private (token-secured) NuGet source to the dotnet lambda package build.

@Simonl9l I think what is going on is that your globally configured NuGet source is defined in a file that isn't accessible by the Docker container. By default, the Docker container can't access your whole drive, or anything on your filesystem at all. We tell it to mount the project or solution folder, which you can see here, so that it has access to the source code it needs to build.

You can try using the --code-mount-directory argument (see dotnet lambda package --help for more info) to mount more of your file system so that it includes the NuGet config file. But that may be a little heavy-handed if the file is buried in a completely different root directory. Maybe you could copy the NuGet config file closer to the source code?
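
For instance, a hedged sketch assuming the nuget.config sits one level above the project directory (paths are placeholders):

# Mount the parent directory into the build container so nuget.config is visible,
# while still building the project itself.
dotnet lambda package \
    --project-location ./src/MyFunction \
    --code-mount-directory ./src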

#1611 (comment)

Could someone address my question? It's always a million times easier if the templates show how it's done... Is this discussion about getting to that point, i.e. so we can have it work? If not, when will the templates be updated?

@Seabizkit The templates in Visual Studio will be updated to target .NET 8 when the .NET 8 Lambda managed runtime is released.

Thank you for your efforts! Any estimate on when the .NET 8 Lambda managed runtime will be released?

I'm having issues building the image with the --self-contained option for dotnet publish, as per @ThomasBarnekow's example Docker solution above, when I need to enable globalization and time zones, as per the following:

ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false \
    LC_ALL=en_US.UTF-8 \
    LANG=en_US.UTF-8
RUN apk add --no-cache \
    icu-data-full \
    icu-libs

Any thoughts on this? Or maybe someone can provide a Docker example with this also enabled.
Thank you in advance.

ICU data can also be included with a .NET application by using App-local ICU via a NuGet package: https://learn.microsoft.com/dotnet/core/extensions/globalization-icu#app-local-icu

Thank you @martincostello. Do you happen to have an example or even the NuGet package link? I don't seem to be able to find the NuGet package for it.

Thank you for your efforts! Any estimate on when the .NET 8 Lambda managed runtime will be released?

I'm having issues building the image with the --self-contained option for dotnet publish, as per @ThomasBarnekow's example Docker solution above, when I need to enable globalization and time zones, as per the following:

ENV DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false \
    LC_ALL=en_US.UTF-8 \
    LANG=en_US.UTF-8
RUN apk add --no-cache \
    icu-data-full \
    icu-libs

Any thoughts on this? Or maybe someone can provide a Docker example with this also enabled. Thank you in advance.

I've updated my Dockerfile based on the latest container releases. Here's the latest version, noting again that I am building an ASP.NET Core 8.0 Minimal API:

ARG PROJECT=<YOUR-PROJECT-NAME>
ARG OS=linux
ARG ARCH=arm64

# Stage 1: Build
FROM public.ecr.aws/sam/build-dotnet8:latest-${ARCH} AS build

WORKDIR /source
COPY . .

ARG PROJECT
ARG OS
ARG ARCH

RUN dotnet publish ${PROJECT}/${PROJECT}.csproj \
    --configuration Release \
    --runtime ${OS}-${ARCH} \
    --self-contained false \
    --output /publish \
    -p:PublishReadyToRun=true

# Stage 2: Set up runtime
FROM public.ecr.aws/lambda/dotnet:8-${ARCH}

WORKDIR /var/task
COPY --from=build /publish/* .

ARG PROJECT

ENV EXECUTABLE=${PROJECT}
ENV ASPNETCORE_HTTP_PORTS=<YOUR-PORT-NUMBER>

# For AWS Lambda, the entrypoint must be specified with an absolute path.
# When using a variable in the path, we must use the shell form of ENTRYPOINT.
ENTRYPOINT exec /var/task/${EXECUTABLE}

Thank you both, much appreciated. Just to add to the image you put up, @ThomasBarnekow: I'm running pretty much the same image, but on a multi-project clean architecture pattern, so I first restore the csproj files and later publish the project with the --no-restore option, as per the official Docker documentation:

# https://hub.docker.com/_/microsoft-dotnet
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /source

# copy csproj and restore as distinct layers
COPY *.sln .
COPY aspnetapp/*.csproj ./aspnetapp/
RUN dotnet restore

# copy everything else and build app
COPY aspnetapp/. ./aspnetapp/
WORKDIR /source/aspnetapp
RUN dotnet publish -c release -o /app --no-restore

# final stage/image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app ./
ENTRYPOINT ["dotnet", "aspnetapp.dll"]

@dide0100, my solution also follows a multi-project clean architecture pattern. I have a main project, identified as the PROJECT argument in the Dockerfile. I have additional projects for the Domain, Application, Infrastructure, and Presentation layers. There's yet another project for cross-cutting stuff and one last project for Presentation layer contracts, e.g., for use by API clients.

You can avoid the separate restore step by mounting a NuGet cache. For example:

FROM mcr.microsoft.com/dotnet/sdk:8.0

COPY . ./

RUN --mount=type=cache,id=nuget,target=/root/.nuget/packages \
    dotnet test {YOUR ARGS HERE}

RUN --mount=type=cache,id=nuget,target=/root/.nuget/packages \
    dotnet publish {YOUR ARGS HERE}

Thank you so much guys. Very helpful. Will try to figure out the last comment from @martincostello as I would appreciate a slightly more comprehensive example if possible. Once more thank you all.

This isn't for Lambda but for EKS; here's an example of a Dockerfile we use to build an ASP.NET Core 8 AOT API, using a cache to avoid needing to run dotnet restore separately to create its own layer:

FROM --platform=$BUILDPLATFORM mcr.microsoft.com/dotnet/sdk:8.0 AS build
ARG TARGETARCH

COPY . /source
WORKDIR /source

SHELL ["/bin/bash", "-o", "pipefail", "-c"]

RUN dpkg --add-architecture arm64 && \
    apt-get update && apt-get install --no-install-recommends --yes clang gcc-aarch64-linux-gnu llvm zlib1g-dev zlib1g-dev:arm64

RUN --mount=type=cache,id=nuget,target=/root/.nuget/packages \
    dotnet test --arch "${TARGETARCH//amd64/x64}" --configuration Release

RUN --mount=type=cache,id=nuget,target=/root/.nuget/packages \
    dotnet publish ./src/MyAppName --arch "${TARGETARCH//amd64/x64}" --output /app --self-contained true --use-current-runtime

FROM mcr.microsoft.com/dotnet/runtime-deps:8.0-jammy-chiseled-extra AS final
WORKDIR /app
EXPOSE 8080

ENV ASPNETCORE_FORWARDEDHEADERS_ENABLED=true

COPY --from=build /app .

ENTRYPOINT ["./MyAppName"]

When I use the dotnet lambda package command on a NET8/NativeAOT project, I get a bunch of Nuget Restore failures that I don't get when I publish normally.

I'm assuming this is because it's using the container build, and the feeds are private. I have the Azure Artifacts Credentials Provider tools installed locally to handle this normally, but presumably the container build isn't able to use this. Is there an alternative?

For what it's worth, I'm able to deploy a Lambda API in .NET 8 right now using Amazon.Lambda.Annotations and Amazon.Lambda.AspNetCoreServer.Hosting with dotnet lambda deploy-function just fine.

We just pushed out new versions of Amazon.Lambda.AspNetCoreServer and Amazon.Lambda.AspNetCoreServer.Hosting that have been updated to target .NET 8 and support Native AOT trimming. This was a major update for Amazon.Lambda.AspNetCoreServer to 9.0.0 but that was largely due to dropping .NET Core 3.1 support from the library.

Here are the release note changes: https://github.com/aws/aws-lambda-dotnet/blob/master/RELEASE.CHANGELOG.md#release-2024-02-15

To be clear this was just the release of the client libraries focusing on making them Native AOT compatible. The managed runtime is getting closer but not here yet.
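
For anyone wiring the hosting package up on .NET 8, a minimal (non-AOT) Program.cs sketch looks something like this; the route is just a placeholder:

var builder = WebApplication.CreateBuilder(args);

// Swaps Kestrel for the Lambda runtime when executing inside Lambda;
// locally the app still runs as a normal ASP.NET Core web app.
builder.Services.AddAWSLambdaHosting(LambdaEventSource.HttpApi);

var app = builder.Build();

app.MapGet("/", () => "Hello from .NET 8 on Lambda");

app.Run();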

When I use the dotnet lambda package command on a NET8/NativeAOT project, I get a bunch of Nuget Restore failures that I don't get when I publish normally.

I'm assuming this is because it's using the container build, and the feeds are private. I have the Azure Artifacts Credentials Provider tools installed locally to handle this normally, but presumably the container build isn't able to use this. Is there an alternative?

@jamiewinder I had the same problem with my private Azure Artifacts feed. However, the solution is fortunately very simple (but not so simple to find, unfortunately). You'll have to provide the username and personal access token (PAT) in an environment variable, following a NuGet-defined naming convention.

Let's assume you have the following nuget.config file with the second entry representing your private Azure Artifacts feed. Note the key Artifacts. As in my actual API, I've chosen a sample key that is identical to the name of the Azure Artifacts feed that you would see in the URL.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <packageSources>
        <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3"/>
        <add key="Artifacts"
             value="https://pkgs.dev.azure.com/your-org/your-project/_packaging/Artifacts/nuget/v3/index.json"/>
    </packageSources>
</configuration>

In the build stage of your Dockerfile, you'll need the following ARG before the first dotnet command (e.g., dotnet restore, dotnet publish):

ARG NuGetPackageSourceCredentials_Artifacts

In the Dockerfile I shared a little earlier, you'd put that line directly before the dotnet publish command. I didn't include it in what I shared because it wasn't necessary in that example.

Do make sure that the key (e.g., Artifacts) in your nuget.config is the postfix of your ARG environment variable (e.g., NuGetPackageSourceCredentials_Artifacts) in your Dockerfile.

As an environment variable, NuGetPackageSourceCredentials_Artifacts has the following format:

export NuGetPackageSourceCredentials_Artifacts="Username=${ArtifactsUsername};Password=${ArtifactsPassword}"

Note that ${ArtifactsPassword} is a personal access token (PAT) that is stored in the session token cache by the Azure Artifacts Credential Helper. Assuming you have successfully authenticated, you'll be able to retrieve that PAT as follows:

SessionTokenCache="$HOME/.local/share/MicrosoftCredentialProvider/SessionTokenCache.dat"
ArtifactsFeed="Artifacts"
ArtifactsUsername="email-address-used-to-access-azure-devops"
ArtifactsPassword=$(jq -r --arg ArtifactsFeed "$ArtifactsFeed" 'to_entries[] | select(.key | test($ArtifactsFeed)) | .value' "$SessionTokenCache")

When building a Docker container yourself, you'd use that information as follows (note the last --build-arg):

docker buildx build \
  --platform="${CONTAINER_PLATFORM}" \
  --tag "${REGISTRY_NAME}/${REPOSITORY_NAME}:${TAG}" \
  --build-arg="OS=${OS}" \
  --build-arg="ARCH=${ARCH}" \
  --build-arg="NuGetPackageSourceCredentials_Artifacts=${NuGetPackageSourceCredentials_Artifacts}" .

Thanks @ThomasBarnekow . In my case I'm not using a Dockerfile at all, but I can try using one to see if it works around the problem.

I'm also currently not reliant on using PAT tokens. Ultimately I want this to run in Azure DevOps and this'll make it tricky to do so since we're currently using a credentials provider which seems to work with the Lambda Tools just fine, just not when doing a container build.

We just pushed out new versions of Amazon.Lambda.AspNetCoreServer and Amazon.Lambda.AspNetCoreServer.Hosting that have been updated to target .NET 8 and support Native AOT trimming. This was a major update for Amazon.Lambda.AspNetCoreServer to 9.0.0 but that was largely due to dropping .NET Core 3.1 support from the library.

Here are the release note changes: https://github.com/aws/aws-lambda-dotnet/blob/master/RELEASE.CHANGELOG.md#release-2024-02-15

To be clear this was just the release of the client libraries focusing on making them Native AOT compatible. The managed runtime is getting closer but not here yet.

Still getting ValueError: Unsupported Lambda runtime dotnet8 from SAM CLI, though.

@nibblesnbits Keep an eye on this PR to be released in order to use SAM CLI with .NET 8: aws/aws-sam-cli#6429

Thank you @normj and team for all your hard work on this effort!

When will Amazon.CDK.Lib (v2.128.0) be updated? Amazon.CDK.AWS.Lambda.Runtime still only offers DOTNET_6 as a choice; DOTNET_8 is nowhere to be found.

@DennisJansenDev
You can use new Runtime("dotnet8", RuntimeFamily.DOTNET_CORE) until it's updated.
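
For example, a rough CDK (C#) stack snippet; the handler string and asset path are placeholders:

using Amazon.CDK.AWS.Lambda;

// Hypothetical sketch: until the CDK exposes Runtime.DOTNET_8, construct the
// runtime by name inside your stack class.
var dotnet8 = new Runtime("dotnet8", RuntimeFamily.DOTNET_CORE);

var function = new Function(this, "MyDotnet8Function", new FunctionProps
{
    Runtime = dotnet8,
    Handler = "MyAssembly::MyNamespace.MyFunction::FunctionHandler",
    Code = Code.FromAsset("./src/MyFunction/bin/Release/net8.0/publish")
});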

@DennisJansenDev

You can use new Runtime("dotnet8", RuntimeFamily.DOTNET_CORE) until it's updated.

Aaah perfect! Thank you very much! 😁

As people have seen, .NET 8 is starting to show up. We are not announcing it "released" yet because it takes a while to get all of the pieces connected to the Lambda runtime out.

@DennisJansenDev and @Dreamescaper The latest CDK release v2.129.0 now includes dotnet8. Please give it a shot!

Fantastic news! Well done @normj @Beau-Gosse-dev and everyone involved!

The release is done! Thank you all for your patience, and I'm looking forward to hearing what you all build with the new .NET 8 Lambda runtime.

https://aws.amazon.com/blogs/compute/introducing-the-net-8-runtime-for-aws-lambda/

Congratulations @normj and everyone involved. This release will bring a lot of benefits and performance improvements.

@normj you are my hero.

Thank you!
.NET 8 AOT on the AWS Lambda managed runtime
is 40% faster on cold start (init duration)
and (looks like a bug or something, but) 95% cheaper (billed duration).

Billed Duration.. is it correct? 🤔

Will this same GitHub issue track the .NET 8 managed runtime being released to AWS GovCloud?

Why does dotnet lambda package use Docker for building? It's not working in my WSL2 environment (I get errors like "no csproj found", seemingly a mount issue or something). By the way, dotnet publish -r linux-x64 works totally fine.