json-api-dotnet / JsonApiDotNetCore

A framework for building JSON:API compliant REST APIs using ASP.NET and Entity Framework Core.

Home Page: https://www.jsonapi.net

Microservices architecture

verdie-g opened this issue

I'm considering using this library for a future project, but one requirement is that it follows a microservice architecture. I was wondering whether it's possible to have something like this:
[diagram: a gateway receives GET /articles/231/relationships/author, forwards one request to the Article Service and one to the Author Service, and combines the results into a single response]

That is, one gateway service that is just a pass-through to other services, each of which owns one or more resources.

In that drawing, the gateway receives a request to fetch article 231 and include its author. Articles and authors are owned by two different services, so the gateway sends two requests: one to get the article and one to get the author. Then it responds to the client with a single JSON:API document.

Of course, consistency is lost here because getting the author and the article is not done in a single transaction, but that's fine for my use case.

Is that a valid JSON:API use case? Can we already do that with a custom resource repository?

In that drawing, the gateway receives a request to fetch article 231 and include its author.

That's not what GET /articles/231/relationships/author does. It fetches the ID of the author related to article 231 and returns only that ID. If no author is associated, it returns data: null. If article 231 does not exist, it returns 404 Not Found. To fetch both the article and the author, use GET /articles/231?include=author instead.
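
To make the difference concrete, abbreviated response bodies for both requests could look like this (IDs and attribute values are made up):

```jsonc
// GET /articles/231/relationships/author -> only a resource identifier:
{
  "data": { "type": "authors", "id": "17" }
}

// GET /articles/231?include=author -> the article itself, plus the author
// as a full resource in the compound document:
{
  "data": {
    "type": "articles",
    "id": "231",
    "attributes": { "title": "..." },
    "relationships": {
      "author": { "data": { "type": "authors", "id": "17" } }
    }
  },
  "included": [
    { "type": "authors", "id": "17", "attributes": { "name": "..." } }
  ]
}
```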

Of course, consistency is lost here because getting the author and the article is not done in a single transaction, but that's fine for my use case.

Yes, referential integrity is lost if the Article Service and Author Service are independent systems, but that has nothing to do with transactions: reading data does not require a transaction. It's the database itself that prevents dangling references from being created in the first place, using foreign key constraints.

Is that a valid JSON:API use case? Can we already do that with a custom resource repository?

I wouldn't recommend doing so, unless it remains as simple as in your picture: a single to-one relationship. In that case, it's probably easier to hand-code it in a custom resource service. Effectively, you'd use JsonApiDotNetCore only for serialization. Repositories are not a good fit: they handle complex queries, composed of (nested) filters, sorting, pagination, sparse fieldsets, and inclusion of related resources, so you'd need to implement handling for all of that.

However, I wonder why you'd want to use JsonApiDotNetCore in the first place if you're spreading the data out over unrelated systems. It's similar to building a stored procedure for every operation, which negates the flexibility that JSON:API provides: you'd either not support that flexibility or hand-code all the possible combinations, which is a lot of work and unmaintainable. You're probably better off with another framework or a custom-built API.

There is no magical component within JsonApiDotNetCore that stitches together results from multiple data sources; doing that is very complex. For example, how would you efficiently sort and paginate over data aggregated from multiple systems? That's why data warehouses were invented: they crawl the source systems nightly to render aggregated reports.
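
For the simple case in your picture, that hand-coded read path could look roughly like the sketch below. It assumes our fine-grained IGetByIdService<TResource, TId> interface (double-check the exact signature against the version you use) and made-up Article/Author models; for brevity, the downstream services are assumed to return plain JSON, whereas in reality they would likely return JSON:API documents that need parsing.

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using JsonApiDotNetCore.Resources;
using JsonApiDotNetCore.Resources.Annotations;
using JsonApiDotNetCore.Services;

public sealed class Article : Identifiable<int>
{
    [Attr]
    public string Title { get; set; } = null!;

    public int AuthorId { get; set; }

    [HasOne]
    public Author? Author { get; set; }
}

public sealed class Author : Identifiable<int>
{
    [Attr]
    public string Name { get; set; } = null!;
}

// Handles GET /articles/{id} by fanning out to the two owning services.
// Because only the get-by-id endpoint is implemented, JsonApiDotNetCore
// is effectively used for routing and serialization only.
public sealed class ArticleGatewayService : IGetByIdService<Article, int>
{
    private readonly IHttpClientFactory _httpClientFactory;

    public ArticleGatewayService(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    public async Task<Article> GetAsync(int id, CancellationToken cancellationToken)
    {
        // First request: the service that owns articles.
        HttpClient articlesClient = _httpClientFactory.CreateClient("articles");
        Article article = await articlesClient.GetFromJsonAsync<Article>($"articles/{id}", cancellationToken)
            // A real implementation would translate this into a 404 response.
            ?? throw new InvalidOperationException($"Article '{id}' does not exist.");

        // Second request: the service that owns authors. The two reads are not
        // transactionally consistent, as discussed above.
        HttpClient authorsClient = _httpClientFactory.CreateClient("authors");
        article.Author = await authorsClient.GetFromJsonAsync<Author>($"authors/{article.AuthorId}", cancellationToken);

        return article;
    }
}
```

Registration would go through AddJsonApi(resources: builder => builder.Add<Article, int>()) and AddResourceService<ArticleGatewayService>(), if I remember the registration APIs correctly.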

One last tip: don't underestimate the complexities you're adding with a microservices design. See https://itnext.io/a-practical-guide-to-modular-monoliths-with-net-59da23c01137.

Thanks for correcting my inaccuracies.

I wonder why you'd want to use JsonApiDotNetCore in the first place if you're spreading the data out over unrelated systems

I think I would definitely use JADNC for a new project, because JSON:API is very appealing for an external web API and this library enables fast development velocity. But when your business grows a lot, you might have to split your applications.

Let me refine my question: where do you think JADNC stands in a large-scale company that had to split its monolith into a few smaller macroservices, mostly for scalability and development-cycle reasons? Do you have experience with that?

See https://itnext.io/a-practical-guide-to-modular-monoliths-with-net-59da23c01137.

Interesting paper from Google. It reminded me of Orleans, which they actually mention at the end:

The closest solutions to our proposal are Orleans [74] and Akka [3]. These frameworks also use abstractions to decouple the application and runtime [...] None of these systems support atomic rollouts, which is a necessary component to fully address challenges C2-C5.

However, Orleans does seem to support blue/green deployments, according to Orleans - Server configuration.

The way I see it, JADNC is at its heart a form of low-code platform. It is convention-based, so it takes minimal effort to consistently expose a database model through a standards-based REST API protocol in a performant and scalable way. This makes it suitable for publicly exposed APIs, where consumers use varying programming languages. The stateless design makes it possible to scale out to multiple instances based on load, and various controls protect against malicious users trying to pull all of your data or exhaust the database server. See https://www.jsonapi.net/getting-started/faq.html#how-do-i-optimize-for-high-scalability-and-prevent-denial-of-service.
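
For example, a minimal v5-style Program.cs might look like this (a sketch; AppDbContext is a placeholder for your EF Core model, and the option names are the ones documented on that FAQ page, quoted from memory):

```csharp
using JsonApiDotNetCore.Configuration;

var builder = WebApplication.CreateBuilder(args);

// AppDbContext is a regular EF Core DbContext (assumed to exist).
builder.Services.AddDbContext<AppDbContext>();

// Convention-based setup: the resource graph is derived from the EF Core model.
// The two options below cap how much data a single request can pull, which is
// part of what makes exposing the API publicly viable.
builder.Services.AddJsonApi<AppDbContext>(options =>
{
    options.MaximumPageSize = new PageSize(50); // hard upper bound for page[size]
    options.MaximumIncludeDepth = 2;            // limits ?include=a.b.c chains
});

var app = builder.Build();

app.UseRouting();
app.UseJsonApi();
app.MapControllers();

app.Run();
```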

Then there's extensibility. The callback-based resource definitions enable you to implement your own validations and business rules in code. Custom controllers can be added for file uploads or RPC-style commands. Taking it a step further, many internal building blocks are pluggable via the IoC container.
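
As a sketch, a resource definition that enforces a simple business rule could look like this (based on the v5 callbacks, with the Article model from the earlier sketch; names quoted from memory, so verify against the documentation):

```csharp
using System.Net;
using JsonApiDotNetCore.Configuration;
using JsonApiDotNetCore.Errors;
using JsonApiDotNetCore.Middleware;
using JsonApiDotNetCore.Resources;
using JsonApiDotNetCore.Serialization.Objects;

// Rejects creating an article without a title, as an example business rule.
public sealed class ArticleDefinition : JsonApiResourceDefinition<Article, int>
{
    public ArticleDefinition(IResourceGraph resourceGraph)
        : base(resourceGraph)
    {
    }

    public override Task OnWritingAsync(Article resource, WriteOperationKind writeOperation,
        CancellationToken cancellationToken)
    {
        if (writeOperation == WriteOperationKind.CreateResource && string.IsNullOrWhiteSpace(resource.Title))
        {
            // Throwing a JsonApiException produces a proper JSON:API error document.
            throw new JsonApiException(new ErrorObject(HttpStatusCode.UnprocessableEntity)
            {
                Title = "Articles must have a title."
            });
        }

        return Task.CompletedTask;
    }
}
```

Such a definition gets picked up once registered in the IoC container (via AddResourceDefinition<ArticleDefinition>(), if I remember correctly).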

From an enterprise-architecture (application landscape) perspective, I would position JSON:API as a facade that exposes an internal database in a platform-neutral way, where the resource model largely matches the database model. Every team would be responsible for its own (private) database, along with its API. So different systems/departments (shopping, inventory, payment, shipping, etc.) would each have their own team with their own API. All of them can be publicly exposed through one host by putting a gateway such as YARP in front, which handles authentication and rate limiting. On top of that, other teams leverage those APIs to build the public website, the customer service tool, etc. There's an open issue for adding OpenTelemetry support to enable distributed tracing (feedback welcome).
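
To give an idea, a minimal YARP gateway that routes per resource type could look like this (a sketch using YARP's in-memory configuration; route names, cluster names, and internal addresses are invented, and authentication/rate limiting are left out):

```csharp
using Yarp.ReverseProxy.Configuration;

var builder = WebApplication.CreateBuilder(args);

// Each team's API is a cluster; the gateway only routes requests.
// It does not merge responses from multiple services.
builder.Services.AddReverseProxy().LoadFromMemory(
    routes: new[]
    {
        new RouteConfig
        {
            RouteId = "articles",
            ClusterId = "articles-cluster",
            Match = new RouteMatch { Path = "/articles/{**rest}" }
        },
        new RouteConfig
        {
            RouteId = "authors",
            ClusterId = "authors-cluster",
            Match = new RouteMatch { Path = "/authors/{**rest}" }
        }
    },
    clusters: new[]
    {
        new ClusterConfig
        {
            ClusterId = "articles-cluster",
            Destinations = new Dictionary<string, DestinationConfig>
            {
                ["primary"] = new() { Address = "https://article-service.internal/" }
            }
        },
        new ClusterConfig
        {
            ClusterId = "authors-cluster",
            Destinations = new Dictionary<string, DestinationConfig>
            {
                ["primary"] = new() { Address = "https://author-service.internal/" }
            }
        }
    });

var app = builder.Build();
app.MapReverseProxy();
app.Run();
```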

When there's a need to invoke external actions from inside the API (e.g. sending email), the Transactional Outbox pattern can be used to enqueue an outgoing async message to NServiceBus, RabbitMQ, Kafka, etc. We have a sample for that in IntegrationTests/Microservices.
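
Stripped of messaging-library specifics, the heart of the pattern is that the outgoing message is saved in the same database transaction as the business change. A generic EF Core sketch with invented entity names (not the code from that sample):

```csharp
using System.Text.Json;
using Microsoft.EntityFrameworkCore;

public sealed class Customer
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string Email { get; set; } = null!;
}

// The outbox row is written in the same transaction as the business data,
// so the message is persisted if and only if the state change is.
public sealed class OutboxMessage
{
    public Guid Id { get; set; } = Guid.NewGuid();
    public string Type { get; set; } = null!;
    public string Payload { get; set; } = null!;
    public DateTime? ProcessedAt { get; set; }
}

public sealed class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options)
        : base(options)
    {
    }

    public DbSet<Customer> Customers => Set<Customer>();
    public DbSet<OutboxMessage> OutboxMessages => Set<OutboxMessage>();
}

public sealed class CustomerSignup
{
    private readonly AppDbContext _dbContext;

    public CustomerSignup(AppDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public async Task CreateAsync(Customer customer, CancellationToken cancellationToken)
    {
        _dbContext.Customers.Add(customer);

        _dbContext.OutboxMessages.Add(new OutboxMessage
        {
            Type = "customer-created",
            Payload = JsonSerializer.Serialize(new { customer.Id, customer.Email })
        });

        // A single SaveChangesAsync runs in one database transaction, covering both rows.
        await _dbContext.SaveChangesAsync(cancellationToken);
    }
}

// A background worker (not shown) polls for rows where ProcessedAt is null,
// publishes them to the broker, and then sets ProcessedAt.
```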

In a solution with well-defined architectural boundaries (domains, commands/queries and all that), it's usually a bad idea to switch to microservices, or even to NuGet packages. The further apart the parts are, the harder they become to maintain. It's basically giving up all compiler intelligence: now you have to deal with versioning, breaking changes, release schedules, centralized configuration, service discovery, network outages, retries, integration testing, release notes and migrations, and a lot more. What works best for my team is using solution filters (.slnf files). We have ~30 of our 100+ projects loaded in each of them.
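
For reference, a solution filter is just a small JSON file next to the solution, listing which projects to load (paths invented):

```json
{
  "solution": {
    "path": "Company.sln",
    "projects": [
      "src/Shopping.Api/Shopping.Api.csproj",
      "src/Shopping.Domain/Shopping.Domain.csproj",
      "test/Shopping.Api.Tests/Shopping.Api.Tests.csproj"
    ]
  }
}
```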

Hope this helps.

So different systems/departments (shopping, inventory, payment, shipping, etc.) would each have their own team with their own API. All of them can be publicly exposed through one host by putting a gateway

I think that's my biggest concern. If the gateway simply forwards requests to the right service, it leaks the internal architecture into the public interface. Going back to my drawing, the user has to know they can't include data that is internally owned by another service. I'm not sure how big of an issue that is, but it can be awkward for the user.

There's an open issue for adding OpenTelemetry support to enable distributed tracing (feedback welcome).

I have extensive experience with OTel; I'll have a look.

What works best for my team is using solution filters (.slnf files). We have ~30 of 100+ projects loaded in each of them.

Glad to hear that. I'm working at a company with 100+ repositories (not projects), so the build is a complex challenge.

I think that's my biggest concern. If the gateway simply forwards requests to the right service, it leaks the internal architecture into the public interface. Going back to my drawing, the user has to know they can't include data that is internally owned by another service. I'm not sure how big of an issue that is, but it can be awkward for the user.

If you're in one of those organizations that reorganize and shuffle all departments every two or three years for no good reason, then you should probably build something entirely custom, so you're in full control at all times. It remains painful and very costly, though. One can wonder what's fundamentally wrong with such an org, but it's not something you can control. On the other hand, if the architecture is stable, making requests to multiple endpoints is quite common practice. For example, see the GitHub API: you often need to issue a few requests to accomplish what you want.

Either way, it helps to have a deprecation strategy. Ever seen those websites that embed a Google Maps block, but instead of the map, it shows "this API version is no longer available"? Ensure analytics are in place, so you can contact your top customers to align your plans. And if your audience is mobile devices, checking for updates at startup is an easy way to ensure only the latest API version is used.

If you're in one of those organizations that reorganize and shuffle all departments every two or three years for no good reason

lol you know it

see the GitHub API: you often need to issue a few requests to accomplish what you want

Good example. I think it's acceptable to just document "for this endpoint you can only include relations X, Y, Z".

Thanks for your extensive answers; they gave me a good idea of when to use JADNC. I'll definitely consider using it for future projects.

Happy to share my insights, and thanks for bringing this up. It helps us better understand how our framework is being used and gives valuable insight into the needs and challenges of our users.