sebadob / rauthy

OpenID Connect Single Sign-On Identity & Access Management

Home Page: https://sebadob.github.io/rauthy/


Support `/introspect` endpoint

polarathene opened this issue · comments

Short version: I suggest dropping /tokenInfo and instead implementing /introspect, which appears to be missing despite being the official OIDC endpoint for this purpose?


/tokenInfo

Presently there is a /tokenInfo endpoint:

(screenshot of the /tokenInfo endpoint in the Swagger UI)

When querying /.well-known/openid-configuration for the token introspection endpoint, it reports the /tokenInfo endpoint instead of /introspect:

{
  "introspection_endpoint":"http://localhost:8080/auth/v1/oidc/tokenInfo"
}

That endpoint, however, is not equivalent and appears to be non-standard / inconsistent across implementations (note: all of the following references have deprecated this endpoint):

I could not find any RFC where /tokeninfo is defined as a standardized endpoint (despite the rauthy Swagger docs implying it is an OIDC standard?):

I did a brief search before raising this issue and found no context for why this was implemented. The endpoint was already present with your open-source release commit, so no context from that history either.


/introspect

Given the above, it should be communicated more clearly that this is a legacy endpoint, or you could potentially consider dropping it, as I'm not sure what value it might provide for rauthy? If there is demand for it for some reason, I'm sure those users would engage here to clarify why.

Instead, /tokenInfo should be replaced by an /introspect endpoint, with the implementation conforming to RFC 7662, Section 2, where this endpoint is defined.
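For reference, a minimal sketch of what an RFC 7662 style call looks like from the client side (hypothetical URL and credentials, using reqwest; the point is the form-encoded request body and the JSON response carrying an `active` field):

```rust
use reqwest::Client;
use serde_json::Value;

// Minimal RFC 7662 sketch (hypothetical URL and credentials, not rauthy-specific):
// an authenticated POST with an application/x-www-form-urlencoded body, returning
// JSON. An unknown or expired token should yield `{ "active": false }`, not an error.
async fn introspect(token: &str) -> Result<Value, reqwest::Error> {
    let response: Value = Client::new()
        .post("https://auth.example.test/oidc/introspect")
        .basic_auth("my-client", Some("my-secret"))
        .form(&[("token", token), ("token_type_hint", "access_token")])
        .send()
        .await?
        .json()
        .await?;
    Ok(response)
}
```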

References for the /introspect endpoint documented by alternative open-source services:


Feedback - OpenAPI docs

I realize any standardized APIs should probably avoid straying from their case conventions, but I did find it a little odd to encounter some endpoints that used camelCase, and to a lesser extent those that used snake_case.

  • kebab-case is most appropriate for these when you have the control to decide, so that might be a breaking change you'd want to consider phasing in at some point.
  • For /tokenInfo I believe elsewhere it was most commonly /tokeninfo, although one resource did have /token-info. Mostly a non-issue given OIDC provides an endpoint to easily query these 👍

NOTE: It'd also be neat if the endpoints were documented publicly online instead of needing to run rauthy to lookup via swagger docs. I know we're all limited with time, so it's probably just wishful thinking 😄

If you're curious at all, the Ory Hydra OpenAPI docs are implemented using Redoc instead of Swagger, and Docusaurus instead of mdBook for docs in general; to integrate the two they've used Redocusaurus.

  • Docusaurus does support some more advanced features, but for the most part is markdown focused like most doc generators, so it shouldn't be too different if you were to consider migrating.
  • I'm not sure why, but both rauthy and kanidm docs on Github Pages seem rather slow to load each time I visit them, which isn't the case with Github Pages docs generated with MkDocs Material (another generator I have worked with).

Hey,

thanks a lot for the issue and your work.

Yes, the tokenInfo is an older endpoint and was one of the very first ones. It is there for oauth2 support and with OIDC you would not use any introspection endpoint, despite its naming. You would use the userinfo_endpoint.

introspection_endpoint -> OAuth2
userinfo_endpoint -> OIDC

The tokenInfo has the "wrong" tag oidc in the OpenAPI docs, but that's only because I never added any oauth specific tag. I actually can't even remember why I named it tokenInfo in the past, this is years old and I have not done anything to this endpoint in a long time. I would never use camelCase to name endpoints these days tbh, and yes it's true that the naming is off. There are some other very old endpoints like /oidc/rotateJwk for instance with that naming.
These are exactly the things I need to clean up before I can release a v1.0.0 and I would probably have missed the camelCase endpoints, thank you!

In the end, the exact names for all these OAuth endpoints are not strictly defined anywhere. All the snippets in the RFCs are just example names that most apps follow. The same is true for OIDC. This is why we have the openid-configuration. But yes, it makes sense to follow common naming and I have no issues renaming some of them with a breaking change. All endpoints in the openid-configuration are usually auto looked up anyway and I can deprecate others and phase them out over 2 releases. I would rather have breaking changes now and a cleaner v1.0.0 later.

I was pretty busy in the last 3 weeks pushing Hiqlite towards a first stable version. This will bring some improvements to Rauthy as well when it comes to HA deployments, because it is more stable and has a way better internal design than my older redhac. As soon as I have the first v0.1.0 there, I will do some work on Rauthy again. I want to improve the UI situation as well and am already exploring different approaches offline.


NOTE: It'd also be neat if the endpoints were documented publicly online instead of needing to run rauthy to lookup via swagger docs. I know we're all limited with time, so it's probably just wishful thinking 😄

I have this on the TODO as well, but yes as you said, time... -.-
I do have a public instance for testing experimental FedCM stuff though. I could open the API docs from that one.

I am not that happy with the SwaggerUI though, as it limits quite a few things I would like to do, for instance adding extra authentication on top or using Rauthy's API Keys for the Try out, but so far I did not find any easy way to achieve that with the given possibilities. However, it is super nice and easy for standard scenarios.
The reason why I went with it was actually the path of least resistance and the least amount of extra work. I do have all the docs right inside the code and most of it can be auto-generated instead of needing to maintain docs for the API itself in another place, where they could get out of sync super easily.

If you're curious at all, the Ory Hydra OpenAPI docs are implemented using Redoc instead of Swagger, and Docusaurus instead of mdBook for docs in general; to integrate the two they've used Redocusaurus.

I used Docusaurus quite a while ago and I was not too excited about it. It did not have an inbuilt search and you needed to do this via some addon (can't exactly remember), which was not working during local dev though, and it was just "slow" out of the box.

It's weird that you have loading issues with mdbook from Github. When I click the link it shows up immediately. I mean, it is just a bunch of static HTML pages and JS is only used for the search. So far I am super happy with mdbook, because it does what it should do while coming with almost no overhead at all.


I will take a look at all the endpoints as soon as I have Hiqlite in v0.1.0 and make sure their naming is consistent. Either rename them directly when they should be internal-only endpoints or just serve new + deprecated version for an extra release.

Thanks again for the detailed issue!

I quickly exposed the testing SwaggerUI and mentioned it in the docs:

Docs

Swagger UI

It is there for oauth2 support and with OIDC you would not use any introspection endpoint, despite its naming. You would use the userinfo_endpoint.

introspection_endpoint -> OAuth2
userinfo_endpoint -> OIDC

I understand.

I was just giving rauthy a spin locally with Dovecot (OAuth2 / OIDC docs) and Roundcube connected to it (I tried ory and kanidm prior, but both added much more friction to get something going quickly with minimal configuration).

  • Users have been raising issues about Dovecot not working well with their auth providers where they were using /introspect. I only had /userinfo working thus far and didn't quite grok what differences might be causing their issues with /introspect.
  • Initially I mistook /tokenInfo as equivalent to /introspect due to the lack of this endpoint in Swagger and /tokenInfo being reported as the /introspect endpoint from the openid-configuration request 😅 (later realizing they were totally different)

Yes, the tokenInfo is an older endpoint and was one of the very first ones. It is there for oauth2 support
I actually can't even remember why I named it tokenInfo in the past, this is years old and I have not done anything to this endpoint in a long time.
In the end, all these names for OAuth endpoints are not defined in a way as how their exact names should be. All the snippets in the RFCs are just example names that most apps follow. The same is true for OIDC.

Oh. Then perhaps I misunderstood the deprecations and popularity of /tokeninfo as an endpoint?

From RFC 7662:

(excerpt from RFC 7662 showing the introspection request format)

However, your tokenInfo endpoint only accepts application/json, not application/x-www-form-urlencoded, as the request type according to the Swagger docs. This was also the reason why Dovecot failed with its request, AFAIK:

dovecot: auth: Error: oauth2(jane.doe@example.test,172.16.42.5,<4OlUf9seAIKsECoF>): oauth2 failed: Introspection failed: Object doesn't begin with '{'

If I do a curl request without a JSON payload, the error is:

Payload error: Json deserialize error: EOF while parsing a value at line 1 column 0/

According to RFC 7662, the expected error response body should be { "active": false }? (The application/json response type is fine here.) The error status was a 400, but the Swagger docs only state 401/404, and from the RFC it doesn't seem like it should return a 400 status either. I'm guessing that's due to the error output above not being handled by rauthy, because of the lack of support for requests via application/x-www-form-urlencoded?

Software like Dovecot does not seem to support changing how that request is made. I can provide basic auth via URL https://id:secret@domain/endpoint for either the reverse proxy or rauthy to accept.


Slightly off-topic. Although I was unsuccessful at finding an RFC where /tokeninfo might have been defined separately for OAuth2 (/introspect via RFC 7662 was easy enough), this has finally cleared up some confusion I had with Dovecot's OAuth2 config docs:

### OAuth2 password database configuration

## url for verifying token validity. Token is appended to the URL
# tokeninfo_url = http://endpoint/oauth/tokeninfo?access_token=

## introspection endpoint, used to gather extra fields and other information.
# introspection_url = http://endpoint/oauth/me

## How introspection is made, valid values are
##   auth = GET request with Bearer authentication
##   get  = GET request with token appended to URL
##   post = POST request with token=bearer_token as content
##   local = Attempt to locally validate and decode JWT token
# introspection_mode = auth

They've got both documented there, not that the tokeninfo endpoint would work with rauthy in that scenario either. For context, introspection_url is also used with /userinfo and the default introspection_mode = auth, while introspection_mode = post is intended for calling /introspect (works with Ory Hydra).


It's weird that you have loading issues with mdbook from Github. When I click the link it shows up immediately.

I'll chalk it up to something playing up locally like DNS then 👍 I know it has caused some issues previously; I thought I had left it at 1.1.1.1 but it's back to automatic assignment via DHCP 🙄

I used Docusaurus quite a while ago and I was not too excited about it.

Last time I evaluated it was early 2021 IIRC, went with mkdocs material. No clue how it compares these days, but since the speed issue was most likely delayed DNS resolution on my end, nevermind the suggestion :)

Initially I mistook /tokenInfo as equivalent to /introspect due to the lack of this endpoint in Swagger and /tokenInfo being reported as the /introspect endpoint from the openid-configuration request 😅 (later realizing they were totally different)

No, you were right and I messed up. The issue was that I introduced a regression (probably a very long time ago) when I changed the payload validation on the whole API. It should accept form requests instead of json, but it does not anymore. This is the reason for the issues.

The endpoint itself catches any error, and in that case it will simply return { active: false }. The input validation however happens before the request actually gets into the handler logic, which is why you received the HTTP 400, even though that should never happen.
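Roughly, the situation looks like this (just a minimal actix-web sketch to illustrate, not Rauthy's actual code; `TokenRequest` and `validate_token` are placeholders):

```rust
use actix_web::{post, web, HttpResponse};
use serde::Deserialize;
use serde_json::{json, Value};

#[derive(Deserialize)]
struct TokenRequest {
    token: String,
}

// Placeholder for the actual token validation, just so the sketch compiles.
fn validate_token(_token: &str) -> Result<Value, ()> {
    Err(())
}

// Using the `Form` extractor means the body must be valid
// `application/x-www-form-urlencoded`; if extraction fails, actix-web answers
// with a 400 before this handler ever runs. Everything that fails *inside* the
// handler maps to the RFC 7662 "inactive" response instead.
#[post("/oidc/introspect")]
async fn introspect(payload: web::Form<TokenRequest>) -> HttpResponse {
    match validate_token(&payload.token) {
        Ok(info) => HttpResponse::Ok().json(info),
        Err(_) => HttpResponse::Ok().json(json!({ "active": false })),
    }
}
```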

Slightly off-topic. Although I was unsuccessful at finding an RFC where /tokeninfo might have been defined separately for OAuth2 (/introspect via RFC 7662 was easy enough), this has finally cleared up some confusion I had with Dovecot's OAuth2 config docs:

Tbh I am not sure about the naming. Some resources call it tokeninfo, others token_info or introspect. And I get why they come up with things like token_info, because you POST a token and the server returns the embedded information if it is valid. But yes, this is pretty confusing. I mean, even the dovecot config apparently uses both names. However, when I take a look at the example from its config, the tokeninfo would mean the access token would be inlined with the URL, which is of course a big security issue. So maybe tokeninfo was the first iteration, just like OIDC started with the implicit flow, which does things the same way.

Rauthy's behavior is (or should be) that of the introspection endpoint, with the only problem being that I screwed up in the past and changed the accepted encoding from Form to Json. I will fix this and build a nightly image for you for testing.
I will also implement some configurable form of endpoint authentication as the RFC suggests, to prevent token scanning.
The only weird thing is that they don't specify an exact method of authentication, but I will have a look at what I can do about it.

I'll chalk it up to something playing up locally like DNS then 👍 I know it has caused some issues previously; I thought I had left it at 1.1.1.1 but it's back to automatic assignment via DHCP 🙄

Got it, so then you probably had the well known "10 sec connection lags" you also get with SSH + DNS issues?

the tokeninfo would mean the access token would be inlined with the URL, which is of course a big security issue.

I don't think it is (if it's being treated akin to /userinfo)? An opaque access token (not a JWT ID token) would not really reveal much, or be different from the equivalent /userinfo request with Authorization: Bearer ... (aside from URL parameters not being good practice for such data; see the concern noted below regarding history/log capture of the URL vs a header).

/userinfo doesn't need to be protected AFAIK and is a GET, whereas /introspect returns more information (potentially sensitive from what I recall?), or was more flexible 🤷‍♂️ (you'd know this better than me, but it seems to vary across providers as some always issue JWT / ID tokens, rather than slimmer opaque access tokens).

The Dovecot introspection_mode = get however would be bad (it seems to optionally include the client id + secret this way too, but my expertise in this area is limited; I know it's bad due to being captured in browser history and server logs)


Tbh I am not sure about the naming. Some resources call it tokeninfo, others token_info or introspect. And I get why they come up with things like token_info, because you POST a token and the server returns the embedded information if it is valid. But yes, this is pretty confusing.

I think it's broadly adopted as /introspect these days?

Interestingly Auth0 has their /tokeninfo endpoint deprecated with advice to instead use /userinfo, not /introspect which would align with the Dovecot implementation of tokeninfo_url discussed above (Auth0 /tokeninfo was also POST with application/json). Auth0 doesn't appear to have/support an /introspect endpoint.


I will also implement some configurable form of endpoint authentication as the RFC suggests, to prevent token scanning.
The only weird thing is that they don't specify an exact method of authentication, but I will have a look at what I can do about it.

You could secure the access however you want, I imagine; the main intent is that the endpoint is protected. Some are quite flexible with this, such as supporting mTLS and other options, but you'll almost always find basic auth.

There is also the approach that Ory Hydra took, where route protection responsibility is delegated to a separate layer such as an API gateway (like Tyk) or reverse proxy (like Caddy / Traefik), which is something I can appreciate - but I can also get behind secure by default (I do have a gripe with kanidm forbidding opt-out of TLS when you have a proxy / LB in front already handling that, and their refusal to discuss budging from it).


Got it, so then you probably had the well known "10 sec connection lags" you also get with SSH + DNS issues?

I've run into some DNS related failures in recent months with a Dockerfile failing to run some rust binaries like cargo-binstall due to how DNS was handled there (depending on feature selection that affected DNS resolution if building static I think).

My router is old and sometimes acts up, but the Docker container itself also reliably failed to perform some queries (TXT records from Github that were multi-part I think, and reverse DNS queries to crates.io were flaky), which was related to DNS routing through the Docker embedded DNS service unless explicitly setting DNS to say 1.1.1.1.

I haven't noticed any hiccups on the host outside of Docker that come to mind, beyond the rauthy and kanidm docs recently. Github Pages is still fast with mkdocs material based docs, so I assumed the common denominator was mdbook. Not really able to spare the time to investigate it further, as everything else works fine and the docs are fine for a while after the delay (perhaps an hour later the DNS TTL expires and it performs the lookup again).


I will fix this and build a nightly image for you for testing.

Great thanks! ❤️

Really no rush, I still haven't thanked you for your detailed response on a discussion thread I opened in February and only just now found time to go through.

I really appreciate the effort and time you put into the project and user engagement 😎 I don't often see that when I engage with other maintainers.

/userinfo doesn't need to be protected AFAIK and is a GET, whereas /introspect returns more information (potentially sensitive from what I recall?), or was more flexible 🤷‍♂️ (you'd know this better than me, but it seems to vary across providers as some always issue JWT / ID tokens, rather than slimmer opaque access tokens).

/userinfo does not need protection because it validates the exact same token that is used for authorization, while the tokeninfo / introspect can validate any other given token.
Including sensitive params in GET requests is only bad because of logs all over the place, yes. The URL itself is encrypted in transit, but having this stuff logged is an issue.

Interestingly Auth0 has their /tokeninfo endpoint deprecated with advice to instead use /userinfo, not /introspect which would align with the Dovecot implementation of tokeninfo_url discussed above (Auth0 /tokeninfo was also POST with application/json). Auth0 doesn't appear to have/support an /introspect endpoint.

The /introspect would be the successor to /token_info, yes. These do have the same behavior. The /userinfo is an OIDC endpoint which kind of achieves the same goal, which is validating an access token, but behaves differently. As you mentioned already, the endpoint accepts a json instead of a form request and the response is different as well.

There is also the approach that Ory Hydra took, where route protection responsibility is delegated to a separate layer such as an API gateway (like Tyk) or reverse proxy (like Caddy / Traefik), which is something I can appreciate - but I can also get behind secure by default (I do have a gripe with kanidm forbidding opt-out of TLS when you have a proxy / LB in front already handling that, and their refusal to discuss budging from it).

There are lots of pros and cons to all of these solutions, for sure. With Rauthy, you can start it without TLS if you are behind a reverse proxy. The reason is simply that I run it inside Kubernetes, and all my Kubernetes nodes either create their own internal VPN, or I am using a service mesh like linkerd. These solutions already add their own encryption and access policies, so there is no point in having TLS inside TLS imho. You could also just reject the /introspect on the public API, which would be the easiest solution, but I like to have everything secure by default and opt-out if you need to.

So Rauthy now accepts 2 ways of auth for introspection:

  1. Provide a valid JWT Bearer token
  2. Provide the client_id:client_secret as Basic auth, where the client_id must be the same as in the token. This should not get you into trouble, because a client should only ever accept a JWT that is issued for itself and never for others. (A rough sketch of that check is below.)
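To illustrate the second point (hypothetical code, not Rauthy's actual implementation; the claim name is an assumption):

```rust
// Hypothetical sketch: the client_id from the Basic auth header must match the
// client the introspected token was issued for, e.g. via its `azp` claim.
struct Claims {
    azp: String,
}

fn basic_auth_allowed(basic_client_id: &str, claims: &Claims) -> bool {
    basic_client_id == claims.azp
}
```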

Really no rush, I still haven't thanked you for your detailed response on a discussion thread I opened in February and only just now found time to go through.

I really appreciate the effort and time you put into the project and user engagement 😎 I don't often see that when I engage with other maintainers.

Thank you, I appreciate it!

It's actually very helpful when I get such issues, because it's almost impossible to test everything myself all the time and keep an eye on it. But I added an integration test for the token introspection now, so there should not be any regression in the future.

You can test with

ghcr.io/sebadob/rauthy:0.25.0-20240805-lite

or

ghcr.io/sebadob/rauthy:0.25.0-20240805

This should fix your issue, hopefully.

@polarathene Did the nightly fix your issue?

While this shouldn't really be a concern going forward, for context: if the body content-type sent is not the expected one, the response is now a 400 status with:

Url encoded error: Content type error.

If you have the control to detect that the content-type of the request was not application/x-www-form-urlencoded, you could perhaps provide a better error message there. Minor UX improvement.
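Just as an illustration of what I mean (I don't know how Rauthy wires its extractors, so this is only a sketch of what actix-web allows in principle; the error text is made up):

```rust
use actix_web::{error, web, HttpResponse};

// Sketch only: attach a custom error handler to the `Form` extractor so a wrong
// content-type (or otherwise malformed form body) yields a clearer message.
fn form_config() -> web::FormConfig {
    web::FormConfig::default().error_handler(|err, _req| {
        error::InternalError::from_response(
            err,
            HttpResponse::BadRequest()
                .body("expected an application/x-www-form-urlencoded body with a `token` field"),
        )
        .into()
    })
}
```

It would then be registered with `.app_data(form_config())` on the relevant scope.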


When testing the endpoint manually via Insomnia, I got:

{
	"timestamp": 1723151215,
	"error": "Unauthorized",
	"message": "invalid AUTHORIZATION header"
}

...and my first thought was "Oh right, rauthy uses API-Key ...", forgetting that was for rauthy admin endpoints, not OIDC endpoints 😅

Since you're flexible with the supported authorization headers on this endpoint, I don't have much feedback here for UX improvement. I suppose the message could better clarify the supported kinds, Basic and Bearer? (the Swagger docs for the endpoint would be fine too, but /introspect isn't listed there yet for some reason, even though /tokenInfo is listed as deprecated)

I was able to authorize with the Bearer type by repeating the same token data I sent. I guess that makes sense, as I tested with the ID Token (JWT) that was provided to Dovecot to query the /userinfo endpoint with, and that user was the rauthy admin account.


curl request (Bearer)
curl --request POST \
  --url http://auth.example.localhost/auth/v1/oidc/introspect \
  --header 'Authorization: Bearer eyJhbGciOiJFZERTQSIsImtpZCI6IlNZdHRUZUpNWENFMlZCOHlOczRJN2t0eiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE3MjMxNTA3NTUsImV4cCI6MTcyMzE1MjU1NSwibmJmIjoxNzIzMTUwNzU1LCJpc3MiOiJodHRwczovL2F1dGguZXhhbXBsZS5sb2NhbGhvc3QvYXV0aC92MSIsInN1YiI6InphOVV4cEg3WFZ4cXJ0cEViVGhvcXZuMiIsImF1ZCI6InJvdW5kY3ViZSIsInR5cCI6IkJlYXJlciIsImF6cCI6InJvdW5kY3ViZSIsInNjb3BlIjoib3BlbmlkIGVtYWlsIHByb2ZpbGUiLCJlbWFpbCI6ImphbmUuZG9lQGV4YW1wbGUudGVzdCIsInByZWZlcnJlZF91c2VybmFtZSI6ImphbmUuZG9lQGV4YW1wbGUudGVzdCIsInJvbGVzIjpbInJhdXRoeV9hZG1pbiIsImFkbWluIl19.w-6NCkfYcugeuxl4_JMIqt75OOOOFsh4jpFRGb6apFY9OR8fCRWK_vBoFG-vH98bpr5BmxjaZSsEZ_cwclsGAw' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data token=eyJhbGciOiJFZERTQSIsImtpZCI6IlNZdHRUZUpNWENFMlZCOHlOczRJN2t0eiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE3MjMxNTA3NTUsImV4cCI6MTcyMzE1MjU1NSwibmJmIjoxNzIzMTUwNzU1LCJpc3MiOiJodHRwczovL2F1dGguZXhhbXBsZS5sb2NhbGhvc3QvYXV0aC92MSIsInN1YiI6InphOVV4cEg3WFZ4cXJ0cEViVGhvcXZuMiIsImF1ZCI6InJvdW5kY3ViZSIsInR5cCI6IkJlYXJlciIsImF6cCI6InJvdW5kY3ViZSIsInNjb3BlIjoib3BlbmlkIGVtYWlsIHByb2ZpbGUiLCJlbWFpbCI6ImphbmUuZG9lQGV4YW1wbGUudGVzdCIsInByZWZlcnJlZF91c2VybmFtZSI6ImphbmUuZG9lQGV4YW1wbGUudGVzdCIsInJvbGVzIjpbInJhdXRoeV9hZG1pbiIsImFkbWluIl19.w-6NCkfYcugeuxl4_JMIqt75OOOOFsh4jpFRGb6apFY9OR8fCRWK_vBoFG-vH98bpr5BmxjaZSsEZ_cwclsGAw
curl request (Basic)
# Basic Auth base64 encoded value generated via echo + sed (extracts the client_secret value from an .env file):
curl --request POST \
  --url http://auth.example.localhost/auth/v1/oidc/introspect \
  --header "Authorization: Basic $(echo -n "roundcube:$(sed -n 's/^RC_OAUTH2__CLIENT_SECRET=//p' secrets.env)" | base64 -w0)" \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  --data token=eyJhbGciOiJFZERTQSIsImtpZCI6IlNZdHRUZUpNWENFMlZCOHlOczRJN2t0eiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE3MjMxNTA3NTUsImV4cCI6MTcyMzE1MjU1NSwibmJmIjoxNzIzMTUwNzU1LCJpc3MiOiJodHRwczovL2F1dGguZXhhbXBsZS5sb2NhbGhvc3QvYXV0aC92MSIsInN1YiI6InphOVV4cEg3WFZ4cXJ0cEViVGhvcXZuMiIsImF1ZCI6InJvdW5kY3ViZSIsInR5cCI6IkJlYXJlciIsImF6cCI6InJvdW5kY3ViZSIsInNjb3BlIjoib3BlbmlkIGVtYWlsIHByb2ZpbGUiLCJlbWFpbCI6ImphbmUuZG9lQGV4YW1wbGUudGVzdCIsInByZWZlcnJlZF91c2VybmFtZSI6ImphbmUuZG9lQGV4YW1wbGUudGVzdCIsInJvbGVzIjpbInJhdXRoeV9hZG1pbiIsImFkbWluIl19.w-6NCkfYcugeuxl4_JMIqt75OOOOFsh4jpFRGb6apFY9OR8fCRWK_vBoFG-vH98bpr5BmxjaZSsEZ_cwclsGAw
{
	"active": true,
	"scope": "openid email profile",
	"client_id": "roundcube",
	"username": "za9UxpH7XVxqrtpEbThoqvn2",
	"exp": 1723152555
}

/introspect seems to work well 👍


I think I've noticed something behaving differently between Rauthy and Authelia when actually integrating /introspect with Dovecot and Roundcube. Both Dovecot and Roundcube have config to manipulate the login username.

I've still got to verify an observation: it seems like Authelia provides an unaltered username which retains mixed case, while in my earlier test with Rauthy the value seemed to be lowercased, which then failed to match the username field above. At first I thought this was Roundcube, so I tested with Authelia. I'll switch back to Rauthy to confirm. I don't think this observation is related to the /introspect endpoint though.

Feedback - Username vs User ID vs Email address

Is username correct? That seems to reflect a user ID that I have no control over (similar to how I can't use PUT for setting the secret of a client, only to generate a new one).

When testing the /introspect endpoint with Dovecot, the response did not have information that I could compare to the login username, which would then be used to match a user account / mailbox that Dovecot manages.

With Authelia the username configured would appear as the username in that /introspect response. That doesn't mean it'd actually match the login username Dovecot receives (XOAUTH / OAUTHBEARER provides a username to log in with and the OAuth2/OIDC token as the password to query /introspect or /userinfo with). Dovecot can modify the login username it received, for example by changing it to lower case, or if it received an email address, trimming off the @example.com suffix, to compare to any field of the endpoint response.

With rauthy, if /introspect were used (not that it should be; /userinfo is fine), making Dovecot work would require the email used as the login username to contain the rauthy username (the user ID). Thus /userinfo would similarly look like this:

{
	"id": "za9UxpH7XVxqrtpEbThoqvn2",
	"sub": "za9UxpH7XVxqrtpEbThoqvn2",
	"name": "Rauthy Admin",
	"roles": [
		"admin",
		"rauthy_admin"
	],
	"mfa_enabled": false,
	"email": "za9uxph7xvxqrtpebthoqvn2@example.test",
	"email_verified": true,
	"preferred_username": "za9uxph7xvxqrtpebthoqvn2@example.test",
	"given_name": "Rauthy",
	"family_name": "Admin",
	"locale": "en"
}

Instead of (jane.doe@example.test):

{
	"id": "za9UxpH7XVxqrtpEbThoqvn2",
	"sub": "za9UxpH7XVxqrtpEbThoqvn2",
	"name": "Rauthy Admin",
	"roles": [
		"rauthy_admin",
		"admin"
	],
	"mfa_enabled": false,
	"email": "jane.doe@example.test",
	"email_verified": true,
	"preferred_username": "jane.doe@example.test",
	"given_name": "Rauthy",
	"family_name": "Admin",
	"locale": "en"
}

That preferred_username perhaps should map to username for /introspect? I'm not sure if you can change preferred_username either (I haven't looked over the docs yet).

I think when I read about implementing SCIM, the advice somewhere was to key on user IDs, not email/username, since those can change. With Authelia the username and email are distinct; the username is used for login, while rauthy uses the email. Rauthy also presents entries in the UI as ID + Email, but I'm not sure how useful a random ID is?

Something to maybe consider :)


For reference, here are the equivalent responses from Authelia:

/userinfo:

{
	"amr": [
		"pwd"
	],
	"aud": [
		"roundcube"
	],
	"auth_time": 1723180947,
	"azp": "roundcube",
	"client_id": "roundcube",
	"email": "jane.doe@example.test",
	"email_verified": true,
	"iat": 1723181120,
	"iss": "https://auth.example.localhost",
	"name": "Jane Doe",
	"preferred_username": "jane",
	"sub": "c09c699f-4f25-4a34-9103-46f6e23e762c"
}

/introspect:

{
	"active": true,
	"client_id": "roundcube",
	"exp": 1723184721,
	"iat": 1723181120,
	"scope": "email openid profile",
	"sub": "c09c699f-4f25-4a34-9103-46f6e23e762c",
	"username": "jane"
}

While this shouldn't really be a concern going forward, for context: if the body content-type sent is not the expected one, the response is now a 400 status with:

Yes, of course it is; everything is validated, and that endpoint should be url encoded as mentioned in the RFC.
I do not have direct control over that error message because the validation happens inside the actix framework.


The documentation for that endpoint does not exist yet, it's only a nightly test image. But you can disable auth if you need:

# Can be set to `true` to disable authorization on `/oidc/introspect`.
# This should usually never be done, but since the auth on that endpoint is not
# really standardized, you may run into issues with your client app.
# If so, please open an issue about it.
# default: false
#DANGER_DISABLE_INTROSPECT_AUTH=false

Since it's a nightly test image, the documentation for auth is not included yet. So it works just like I described it above:

So Rauthy now accepts 2 ways of auth for introspection:

  • Provide a valid JWT Bearer token
  • Provide the client_id:client_secret as Basic auth, while the client_id must be the same as in the token. This should not get you into trouble, because a client should only ever accept a JWT that is issued for itself and never for others.

(the Swagger docs for the endpoint would be fine too but /introspect isn't listed there yet for some reason, even though /tokenInfo is listed as deprecated)

Yes, I forgot to add the endpoint to the OpenAPI spec generation. Will do that with the next image and mention the auth behavior there as well, that makes sense, thanks.


I've still got to verify an observation: it seems like Authelia provides an unaltered username which retains mixed case, while in my earlier test with Rauthy the value seemed to be lowercased, which then failed to match the username field above. At first I thought this was Roundcube, so I tested with Authelia. I'll switch back to Rauthy to confirm. I don't think this observation is related to the /introspect endpoint though.

I will change the username at this point to the user's email. This is the same behavior as for the preferred_username in the OIDC context. Rauthy does not provide or maintain any user-chosen username, only a user_id, which I will add as sub on that endpoint to be more clear about it.

I think when I read about implementing SCIM, the advice somewhere was to key on user IDs, not email/username, since those can change. With Authelia the username and email are distinct; the username is used for login, while rauthy uses the email. Rauthy also presents entries in the UI as ID + Email, but I'm not sure how useful a random ID is?

Correct, you just must never rely on the preferred_username for anything sensitive, as it is just some arbitrary, unvalidated value. Rauthy only includes it for convenience.

The documentation for that endpoint does not exist yet, it's only a nightly test image.

I assumed that the Swagger docs from the UI were meant to include an update for /introspect. The /tokenInfo endpoint was marked as deprecated there, hence the confusion.

Yes, I forgot to add the endpoint to the OpenAPI spec generation. Will do that with the next image and mention the auth behavior there as well, that makes sense, thanks.

👍


I will change the username at this point to the user's email. This is the same behavior as for the preferred_username in the OIDC context.

Thanks, that's what I thought 😅

Rauthy does not provide or maintain any user-chosen username, only a user_id, which I will add as sub on that endpoint to be more clear about it.

👍


Earlier I mentioned the following observation:

I've still got to verify an observation: it seems like Authelia provides an unaltered username which retains mixed case, while in my earlier test with Rauthy the value seemed to be lowercased, which then failed to match the username field above.

As was evident from the JSON response, and in the UI, the email was already lowercased by rauthy. I had not realized it was normalizing that, which is why I had the mismatch with username at /introspect using the /userinfo User ID (id) field.

I don't have an issue with normalizing to lower-case; Roundcube and Dovecot were doing the same by default too. Just something that confused me while testing /introspect due to copy/paste expectations 😓

As was evident from the JSON response, and in the UI, the email was already lowercased by rauthy. I had not realized it was normalizing that, which is why I had the mismatch with username at /introspect using the /userinfo User ID (id) field.

Ah okay, got it. Yes, in the beginning Rauthy was strict with the casing, but people had issues because they registered with Batman@batcave.io and wondered why they were not able to log in with batman@batcave.io. E-Mails are lowercase only anyway, so Rauthy always converts them before saving.
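Roughly (just a sketch of the idea, not the actual code):

```rust
// Sketch only: normalize the e-mail before persisting, so `Batman@batcave.io`
// and `batman@batcave.io` resolve to the same account.
fn normalize_email(input: &str) -> String {
    input.trim().to_lowercase()
}
```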


That would be the response after the changes, which might include additional fields, like cnf for instance, if they actually exist.

(screenshot of the updated /introspect response)

You may have another look with

ghcr.io/sebadob/rauthy:0.25.0-20240809-lite

if you like. But it should be fine now.

E-Mails are lowercase only anyway, so Rauthy always converts them before saving.

AFAIK technically they're not.

The local-part is case-sensitive; it's just common practice for providers to be case-insensitive to avoid bad actors and the like. Likewise there are non-standard features like Gmail ignoring any . in the local-part, and different delimiters for sub-addressing (when supported).

As part of the confusion, when looking into Roundcube to see if they were to blame, I found a lengthy discussion on that very concern; Roundcube normalizes to lowercase by default.

It's fine, it just wasn't something I noticed at the time. On the bright side, that's potentially the cause of some unresolved issues reported at Docker Mailserver in the past, so I need to update the docs about that default behaviour with Dovecot too :)


You may have another look with

Awesome, thanks!