openiddict / openiddict-samples

.NET samples for OpenIddict

Home Page: https://documentation.openiddict.com/


Blazor BFF client example

damienbod opened this issue

Hi all

I created a Blazor BFF client with an OpenIddict server. I think this is a good way of securing Blazor hosted in an ASP.NET Core application. It also makes it easier to define a strong CSP and other security headers, since dynamic definitions can be used; it removes the access tokens from the public part of the client, allows the client itself to be authenticated, and removes the need for refresh tokens in the public part of the client. It also makes it possible to use SignalR in a more secure way (no need for an access token in the URL).

https://github.com/damienbod/AspNetCoreOpeniddict/tree/main/Blazor.BFF.OpenIddict

Let me know if you would like this and I'll create a PR next week; if not, no problem, I know there are a lot of opinions here.

Greetings and happy new year Damien

Hey @damienbod!

Greetings and happy new year Damien

Happy new year! 🎉

I created a Blazor BFF client with an OpenIddict server. I think this is a good way of securing Blazor hosted in an ASP.NET Core application.

Haha, what a nice timing! @florianwachs pinged me yesterday to know if OpenIddict had a BFF sample: https://twitter.com/fexdev/status/1476979361404309505 😄

I must admit I'm not a huge fan of the BFF approach - as I don't think the protection against token exfiltration it offers outweighs the additional complexity and increased latency it causes - but I can't deny there's a demand for that, so let's do it.

I see in your sample that there's no proxy part and that the "resource" stuff directly uses cookies. You'll probably want to add a new "Api" project and use YARP for the proxy part (otherwise there's no point using OIDC at all if your APIs only use cookies 😄).
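For illustration, here's a minimal sketch of what the YARP proxy part could look like in the BFF's Program.cs. It assumes the Yarp.ReverseProxy package and a "ReverseProxy" configuration section in appsettings.json; the cookie scheme and the idea of requiring authorization on the proxied endpoints are my assumptions, not something from the existing sample:

```csharp
using Microsoft.AspNetCore.Authentication.Cookies;

var builder = WebApplication.CreateBuilder(args);

// Cookie auth for the BFF session (configured in more detail elsewhere).
builder.Services
    .AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie();
builder.Services.AddAuthorization();

// Routes and clusters (downstream API addresses) live in the
// "ReverseProxy" section of appsettings.json.
builder.Services.AddReverseProxy()
    .LoadFromConfig(builder.Configuration.GetSection("ReverseProxy"));

var app = builder.Build();

app.UseAuthentication();
app.UseAuthorization();

// Only forward requests that carry an authenticated session cookie.
app.MapReverseProxy().RequireAuthorization();

app.Run();
```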

Let me know if you have any questions.

Thanks @kevinchalet

I like the BFF for a number of reasons, and I don't think the performance is much worse. I like the idea of only securing a single app instead of two. I see the following advantages and disadvantages:

Good

  • Single trusted application instead of two apps, public untrusted UI + public trusted API (reduced attack surface)
  • Trusted client protected with a secret or certificate
  • No access/reference tokens in the browser
  • No refresh token in the browser
  • Web sockets security improved (SignalR), no access/reference token in the URL
  • Backchannel logout, SSO logout possible
  • Improved CSP and security headers (can use dynamic data and block all other domains) => possible for better protection against XSS (depends on UI tech stack)
  • Can use MTLS, OIDC FAPI, client binding for all downstream API calls from trusted UI app, so much improved security possible for API calls.
  • No architectural requirement for public APIs outside the same domain; downstream APIs can be deployed in the protected zone.
  • Easier to build and deploy (my experience so far); easier for me means reduced costs.
  • Reduced maintenance due to reduced complexity. (This is my experience so far)

Bad

  • Non-domain API calls: downstream APIs require a redirect or a second API call (YARP, OBO, OAuth2 Resource Owner Password Credentials flow, certificate auth, etc.)
  • PWA support not out of the box
  • Performance worse if downstream APIs are required (i.e. an API call not on the same domain)
  • All UI API POST, DELETE, PATCH, PUT HTTP requests must use an anti-forgery token or force a CORS preflight, as well as SameSite protection (see the sketch after this list).
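To make the anti-forgery point above concrete, here's a minimal sketch of how an ASP.NET Core BFF could validate an anti-forgery header on all unsafe requests. The header and cookie names ("X-XSRF-TOKEN", "XSRF-TOKEN") are conventions I've chosen for the example, not something defined by the sample:

```csharp
using Microsoft.AspNetCore.Antiforgery;
using Microsoft.AspNetCore.Mvc;

var builder = WebApplication.CreateBuilder(args);

// Automatically validate the anti-forgery token on POST/PUT/PATCH/DELETE actions.
builder.Services.AddControllersWithViews(options =>
    options.Filters.Add(new AutoValidateAntiforgeryTokenAttribute()));

// Accept the request token from a header instead of a form field.
builder.Services.AddAntiforgery(options => options.HeaderName = "X-XSRF-TOKEN");

var app = builder.Build();

// Issue the request token in a readable (non-HttpOnly) cookie so the WASM
// client can echo it back in the X-XSRF-TOKEN header.
app.Use(async (context, next) =>
{
    var antiforgery = context.RequestServices.GetRequiredService<IAntiforgery>();
    var tokens = antiforgery.GetAndStoreTokens(context);
    context.Response.Cookies.Append("XSRF-TOKEN", tokens.RequestToken!,
        new CookieOptions { HttpOnly = false, SameSite = SameSiteMode.Strict, Secure = true });

    await next();
});

app.MapControllers();
app.Run();
```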

I will call the example “Dantooine”, good?

I don’t think the “basic” sample should implement a YARP downstream API as this is mostly not required. If a downstream API is used, then a YARP redirect, ROPC, OBO or certificate authentication can be used. As YARP with a public API is just one way of doing this, maybe this could be a second example. I could create a second YARP example after this, then maybe a third for service-to-service, a fourth for client binding with MTLS and so on, but I think the getting-started BFF should not include the downstream API. I have not required public downstream API calls in any projects so far, but at some stage I will have to use a public user API from my UI, I guess.

Looking forward to your feedback, greetings Damien

Great feedback, I'm feeling we'll have a nice debate so thank you for that! 😃

I don’t think the “basic” sample should implement a YARP downstream API as this is mostly not required.

I'm not so sure about that.

The whole point of using OIDC is to enable cross-domain authentication/authorization. There are two scenarios for that:

  • The authorization server/identity provider is hosted on a different domain, possibly managed by a third-party organization.
  • The resource servers/APIs are hosted on different domains (also possibly managed by someone else, tho' it's a less frequent case).

In your example, the APIs, the Blazor server and the WASM "client" are all part of the same "application" (they are hosted on the exact same domain), so it's only the authorization server that is external in that case. At this point, if you decide to self-host your own identity provider but don't have any public APIs meant to be used by other types of clients (e.g. mobile or desktop apps), then why bother with an external provider at all, since its only client will be the SPA "backend" anyway?

My feeling is that this is a niche scenario:

  • If you don't want to opt for a micro-services approach or end up hosting the APIs and the Blazor stuff on the same host, then using cookie authentication the good old way without ever using OIDC simplifies things a lot, and you still get the pros and cons of using cookies.
  • If you opt for a true micro-services approach, then you'll want the APIs to be split into multiple, separate and independent applications that will probably be hosted on different subdomains. And in this case, a proxy thing will be needed for the WASM application to communicate with the downstream APIs.

Single trusted application instead of two apps, public untrusted UI + public trusted API (reduced attack surface)

Well, by opting for the BFF pattern, you're actually just moving things: sure, the WASM application executed by the browser is no longer responsible for the OIDC dance, but it's transferred to the backend-for-frontend server, not removed completely. I don't think it reduces the attack surface in this case.

Trusted client protected with a secret or certificate

I'm feeling this is where the BFF pattern gives a false sense of security: sure, only the backend-for-frontend server application will be able to finalize the OIDC authentication dance, but there's nothing preventing me from copying the authentication cookie returned to the SPA and using it in a different application to send any API request to the BFF I want. And since it will simply transfer these calls to the downstream APIs, you end up with a "trusted client" that actually just forwards requests from an untrusted source.

No access/reference tokens in the browser
No refresh token in the browser

That's correct, but the only actual difference between that and a simple HttpOnly authentication cookie is that it's (normally) impossible to read it using JS APIs, making cookies indeed way harder to exfiltrate. But if your SPA has an XSS flaw, there's ultimately nothing that will prevent a malicious actor from planting scripts that will do bad things in the context of the user's browser.
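For reference, a minimal sketch of the hardened cookie configuration being discussed here (HttpOnly, Secure, SameSite=Strict); the lifetime values are placeholders, not something from the sample:

```csharp
using Microsoft.AspNetCore.Authentication.Cookies;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
    .AddCookie(options =>
    {
        options.Cookie.HttpOnly = true;                        // not readable from JS
        options.Cookie.SameSite = SameSiteMode.Strict;         // CSRF mitigation
        options.Cookie.SecurePolicy = CookieSecurePolicy.Always;
        options.SlidingExpiration = true;
        options.ExpireTimeSpan = TimeSpan.FromMinutes(60);
    });

var app = builder.Build();

app.UseAuthentication();

app.Run();
```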

Web sockets security improved (SignalR), no access/reference token in the URL

Avoiding sensitive stuff in URLs is definitely an improvement as it ensures it's not accidentally logged, but on the other hand, using cookies (or any "automatic" authentication mechanism like Basic or Integrated Windows Authentication, actually) comes with greater responsibility: you must check the origin of WS connections and implement antiforgery countermeasures to avoid CSRF attacks.
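As an illustration of the origin check mentioned above, here's a minimal sketch of a middleware that rejects cross-origin WebSocket upgrades before they reach a SignalR hub. The allowed origin, the hub path and the NotificationsHub class are placeholders I've made up for the example:

```csharp
using Microsoft.AspNetCore.SignalR;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSignalR();

var app = builder.Build();

// Placeholder: the origin(s) the Blazor WASM client is served from.
var allowedOrigins = new[] { "https://localhost:5001" };

// Reject cross-origin WebSocket upgrades before they reach the hub.
app.Use(async (context, next) =>
{
    if (context.WebSockets.IsWebSocketRequest &&
        (!context.Request.Headers.TryGetValue("Origin", out var origin) ||
         !allowedOrigins.Contains(origin.ToString())))
    {
        context.Response.StatusCode = StatusCodes.Status403Forbidden;
        return;
    }

    await next();
});

app.MapHub<NotificationsHub>("/hubs/notifications");
app.Run();

// Hypothetical hub, only here to make the sketch self-contained.
public class NotificationsHub : Hub { }
```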

Backchannel logout, SSO logout possible

To me, logout in the context of a SPA has always been nothing more than a UX thing, meant to tell the user whether he/she's still logged in or not: the DOM/network calls can always be manipulated to show things in the SPA that are only supposed to be visible to logged in users. The only thing that eventually matters is whether API calls are rejected or not after we decide a client application is no longer allowed to perform actions on behalf of the users (e.g because they logged out or because the authorization was revoked). Even with the ban of third-party cookies by browser vendors that broke the "check session" stuff, token revocation is of course still possible, so you can catch 401 responses and consider the user "logged out" in the SPA 😃
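A minimal sketch of the "catch 401 responses" idea on the Blazor WASM side, using a DelegatingHandler; the LoggedOutHandler name and the "account/login" path are hypothetical, not part of the sample:

```csharp
using Microsoft.AspNetCore.Components;

public class LoggedOutHandler : DelegatingHandler
{
    private readonly NavigationManager _navigation;

    public LoggedOutHandler(NavigationManager navigation) => _navigation = navigation;

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var response = await base.SendAsync(request, cancellationToken);

        if (response.StatusCode == System.Net.HttpStatusCode.Unauthorized)
        {
            // The cookie is gone or was revoked: treat the user as logged out.
            _navigation.NavigateTo("account/login", forceLoad: true);
        }

        return response;
    }
}
```

It would be registered with something like builder.Services.AddTransient&lt;LoggedOutHandler&gt;() and attached to the BFF HttpClient via AddHttpMessageHandler&lt;LoggedOutHandler&gt;().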

Improved CSP and security headers (can use dynamic data and block all other domains) => possible for better protection against XSS (depends on UI tech stack)

Correct me if I'm wrong but CSP and BFF are orthogonal things: you can implement a super strict CSP even without using BFF (of course, there are extra steps, like adding the remote IdP to connect-src to ensure things like configuration retrieval work flawlessly).

Can use MTLS, OIDC FAPI, client binding for all downstream API calls from trusted UI app, so much improved security possible for API calls.

Security of an entire system is only as strong as the weakest point in the chain. In this case, it's the WASM <-> BFF link that is the weakest link: while you can implement strong client authentication and token binding between the BFF and the IdP and the resource servers, it doesn't improve the overall security because the authentication cookie can still be copied and used in a different context than your SPA: your BFF proxy will still happily forward all the requests sent by your SPA or any other app.

I will call the example “Dantooine”, good?

Sounds great 😃

Cheers.

Hi @kevinchalet

Thanks for the great feedback!

The whole point of using OIDC is to enable cross-domain authentication/authorization. There are two scenarios for that: The authorization server/identity provider is hosted on a different domain, possibly managed by a third-party organization.

I almost always use an identity provider and almost never a standalone setup. Most applications are enterprise applications, and by using an IDP the existing company accounts can be used while being properly secured through an OIDC server. An OIDC client with no API is used a lot. If a second application is created, for example a mobile app, then we implement a second separate API made for that UI and re-use the business logic. The APIs normally have different security requirements. An API for a trusted backend is completely different to an API for a mobile app or SPA. Also, some IDPs force me to use service-to-service API security…

If you don't want to opt for a micro-services approach or end up hosting the APIs and the Blazor stuff on the same host, then using cookie authentication the good old way without ever using OIDC simplifies things a lot, and you still get the pros and cons of using cookies.

This is what I use in the common use case, but with a separate IDP. I avoid full standalone setups.

If you opt for a true micro-services approach, then you'll want the APIs to be split into multiple, separate and independent applications that will probably be hosted on different subdomains. And in this case, a proxy thing will be needed for the WASM application to communicate with the downstream APIs.

In this case I opt for service-to-service API security or the OBO flow and deploy most APIs in the protected zone with higher security. I avoid making APIs which can be used for multiple UIs, because they never match the client needs and are expensive to produce and maintain with no benefit. I aim for shared business logic with multiple APIs, each specific to the client using it. Then I secure this as best as possible. I use both network and application security for all APIs. The public UIs would be the weakest link and the risks need to be analysed.

I think copying an HttpOnly cookie protected with CSRF protections is hard. XSS is always a problem and hard to protect against. This risk is lower with a CSRF-protected, SameSite, HttpOnly cookie compared to using tokens in local/session storage.

Backchannel logout, SSO logout possible

In enterprise projects I often have the requirement for an SSO logout, where one logout signs the user out of multiple UIs. This is only possible using the backchannel.

A strict CSP uses nonces, and these change with every page load. This is easy when the SPA is backend hosted (i.e. using a Razor host file for the SPA root), since the dynamic bits are created on the backend. It is hard with meta CSP definitions or pure SPAs. I use NetEscapades.AspNetCore.SecurityHeaders to define all the security headers and the CSP if possible; it's really easy doing it this way.
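For illustration, a minimal sketch of the per-request nonce idea using plain middleware (i.e. without the NetEscapades.AspNetCore.SecurityHeaders package); the IdP origin in connect-src and the "csp-nonce" item key are placeholders:

```csharp
using System.Security.Cryptography;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    // Fresh nonce for every page load; _Host.cshtml can read it from Items
    // to decorate its <script> tags.
    var nonce = Convert.ToBase64String(RandomNumberGenerator.GetBytes(16));
    context.Items["csp-nonce"] = nonce;

    context.Response.Headers["Content-Security-Policy"] =
        "default-src 'self'; " +
        $"script-src 'self' 'nonce-{nonce}'; " +
        "connect-src 'self' https://idp.example.com; " +
        "frame-ancestors 'none'; base-uri 'self'";

    await next();
});

app.Run();
```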

Security of an entire system is only as strong as the weakest point in the chain. In this case, it's the WASM <-> BFF link that is the weakest link: while you can implement strong client authentication and token binding between the BFF and the IdP and the resource servers, it doesn't improve the overall security because the authentication cookie can still be copied and used in a different context than your SPA: your BFF proxy will still happily forward all the requests sent by your SPA or any other app.

This is true, but I believe there are fewer weak links/risks with a BFF UI. Copying a cookie is not so easy using JavaScript if the cookie is well protected. XSS attacks will always find a way - that is my attitude - so I try to reduce what is possible after an XSS attack.

I suggest 2 examples:

  • A Blazor WASM BFF with no downstream API, just the OIDC server rendered client.
  • A Blazor WASM BFF with downstream API using YARP

Or just:

  • A Blazor WASM BFF with API using YARP

Let me know what you prefer. I'll do the PR this week then.

Greetings Damien

Hey!

In enterprise projects I often have the requirement for an SSO logout, where one logout signs the user out of multiple UIs. This is only possible using the backchannel.

Backchannel logout stricto sensu is probably the easiest part; what's more involved is the revocation of the authentication cookie - something that is not handled natively by ASP.NET Core unless you use Identity and its security stamp feature, which only kicks in after 30 minutes by default - and the communication between your BFF and your SPA to notify it that the user has logged out.
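As a sketch of working around that 30-minute default, assuming the BFF uses ASP.NET Core Identity, the security stamp validation interval can be shortened in Program.cs:

```csharp
using Microsoft.AspNetCore.Identity;

// Validate the security stamp on every request instead of every 30 minutes,
// so a revoked session is rejected immediately. Trade-off: more frequent
// hits on the user store.
builder.Services.Configure<SecurityStampValidatorOptions>(options =>
    options.ValidationInterval = TimeSpan.Zero);
```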

A strict CSP uses nonces, and these change with every page load. This is easy when the SPA is backend hosted (i.e. using a Razor host file for the SPA root), since the dynamic bits are created on the backend. It is hard with meta CSP definitions or pure SPAs. I use NetEscapades.AspNetCore.SecurityHeaders to define all the security headers and the CSP if possible; it's really easy doing it this way.

Well, for "static" servers, CSP hashes are an excellent alternative to nonces (they actually provide a higher security level as the integrity of assets is guaranteed), 'tho you need to have adequate tooling to easily regenerate these hashes every time you update the assets. For "dynamic" servers, it's also interesting to note that NetEscapades.AspNetCore.SecurityHeaders also offers a tag helper for generating these hashes.

This is true, but I believe there are fewer weak links/risks with a BFF UI. Copying a cookie is not so easy using JavaScript if the cookie is well protected. XSS attacks will always find a way - that is my attitude - so I try to reduce what is possible after an XSS attack.

Fundamentally, the BFF pattern blurs the line between confidential clients and public ones, a pragmatic distinction that was very clear in the base OAuth 2.0 specification and that is way less clear with the BFF pattern: the "backend client" is assigned a client secret, and that's enough for its promoters to pretend it's a confidential/trusted client and assume all the security benefits it implies are granted. Well, no: it's a hybrid client with a trusted part - the BFF - and an untrusted part - the frontend - from which API calls are forwarded.

With a regular confidential client, you can be confident beyond a reasonable doubt that API calls actually originate from the real client (something that can be hardened by token binding for critical applications). With a BFF "confidential" client, requests are forwarded from an untrusted source and you can't be sure they really originate from the SPA: I can copy the cookie and use it in a different application, and the API will still think the calls originate from the BFF backend. IMHO, BFF backends shouldn't be treated as "trusted" clients.

Let me know what you prefer. I'll do the PR this week then.

The Blazor WASM BFF with no downstream API, just the OIDC server rendered client scenario seems to be well covered in your blog post from today, and you already have a repository for this sample, so I don't think there's a point in doing another identical example: I'll update the README to add a link to your repository for those interested in this pattern.

Blazor WASM BFF with API using YARP seems the best option to me as it likely represents the most common case for BFF.

Cheers.

Done: db2b880 🎉

@kevinchalet this is a really informative issue; thank you! While you mention not being a fan of BFF, I cannot seem to find any javascript/typescript based samples that use a more direct approach in the repository. Are you able to point to a SPA sample that you feel properly addresses the XSS/CSP concerns and is a good example of a browser-based client?

@anorborg as explained in #178, the JS samples have been replaced by .NET clients, which are much easier to maintain in the long term.

If you're an Angular user, both https://github.com/damienbod/angular-auth-oidc-client (developed by @damienbod 😃) and https://github.com/manfredsteyer/angular-oauth2-oidc are heavily used and have samples that should put you on the right track.

Great discussion and examples, it put me on the right track, thank you so much @kevinchalet and @damienbod!