HackSoftware / Django-Styleguide

Django styleguide used in HackSoft projects

Issue with `InputSerializer` Classes and Schema Generation

Farhaduneci opened this issue · comments

Hello team,
I hope you're doing well. Once again, I want to express my appreciation for the outstanding effort put into this project.

I've come across an issue related to the styleguide that I'd like to share with you. It concerns the InputSerializer classes, which are fantastic in general. However, I've noticed that when attempting to generate schemas for the project, these serializers become problematic and make it hard to keep the codebase clean.

The root of the problem lies in how ref_name is used by drf-spectacular and other similar libraries that assist in schema generation.

The crux of the matter is that when multiple classes contain a serializer named InputSerializer, the library cannot distinguish between them.

Consequently, it either renders them all as a single class or raises an error (depending on which library you use; drf-spectacular does not raise errors).
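For context, here's a minimal sketch of the colliding setup (the view names and fields are placeholders, not from a real project):

from rest_framework import serializers
from rest_framework.views import APIView


class PlanCreateApi(APIView):
    # Nested per-view serializer, following the styleguide
    class InputSerializer(serializers.Serializer):
        name = serializers.CharField()


class PaymentCreateApi(APIView):
    # A different view, but the nested serializer class shares the same name
    class InputSerializer(serializers.Serializer):
        amount = serializers.IntegerField()

Both serializers end up under a single component name in the generated schema, so one silently overwrites the other.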

One potential solution is to define a unique ref_name for each serializer class. However, this approach would clutter the code, and I am keen on avoiding such an outcome.

class OutputSerializer(serializers.ModelSerializer):
    class Meta:
        model = Model
        exclude = ["field"]
        
        ref_name = "Something.OutputSerializer"

Therefore, I would greatly appreciate any insights or suggestions on how we can tackle this issue effectively.

Thank you for your attention and assistance.

@RadoRado I can't wait to hear your workaround. Please let me know if I'm doing something wrong, because I've seen you mention that your team is OK with drf-spectacular.

@Farhaduneci Hello 👋

I somehow missed the notification on that. Sorry about that 👍

I'll check the issue and get back to you.

Thanks, @RadoRado ✌️

@Farhaduneci Not to make you wait any longer:

  1. Can you provide me with an example piece of code that reproduces the problematic InputSerializer behavior, so I can iterate on top of it?
  2. What you are describing sounds like it can be solved using one of these tools - https://drf-spectacular.readthedocs.io/en/latest/customization.html#step-1-queryset-and-serializer-class

It's either explicitly defining get_serializer_class, or simply using @extend_schema.

Let me know if that's not the actual problem that you are describing 👍

Thanks for your response, Radoslav.

I'm currently using drf-spectacular version 0.26.2 in my project and have used @extend_schema exactly as you suggested, as you can see below in one of my list APIs:

@extend_schema(
    tags=spectacular_tags,
    responses=OutputSerializer,
)
def get(self, request):
    filters_serializer = self.FilterSerializer(data=request.query_params)
    filters_serializer.is_valid(raise_exception=True)

    plans = plan_list(filters=filters_serializer.validated_data)

    return get_paginated_response(
        pagination_class=self.Pagination,
        serializer_class=self.OutputSerializer,
        queryset=plans,
        request=request,
        view=self,
    )

The OutputSerializer for this API is:

class OutputSerializer(serializers.ModelSerializer):
    class Meta:
        model = Plan
        fields = [
            "name",
            "slug",
        ]

        ref_name = "Billing.Plans.PlanListAPI.OutputSerializer"

And these two generate the following schema together:

[screenshot of the generated schema]

Everything works fine up to this stage. However, if I remove the ref_name and define another OutputSerializer somewhere else in my project that also doesn't have a ref_name, like below:

class PaymentVerifyAPI(APIView):
    permission_classes = (AllowAny,)

    class OutputSerializer(serializers.Serializer):
        # class Meta:
        #     ref_name = "Billing.Payments.PaymentVerifyAPI.OutputSerializer"
        test = serializers.CharField()

    @extend_schema(
        responses=OutputSerializer,
    )
    def post(self, request):
       ...

Then the second API's OutputSerializer overrides the schema component generated for the first API's OutputSerializer! It's as if the nested definitions had no scope at all.

[screenshot of the overridden schema]

This is exactly the problem I'm facing. To solve it, I need to define ref_name for each and every API by hand, which is not a nice thing to do at all.

I hope this helps clarify the issue.

@Farhaduneci Okay, this is indeed a very weird behavior.

My suggestion / intuition (I haven't looked at the drf-spectacular code yet) is that they are using the class name as a unique schema name.

Can we test something:

Can you rename the OutputSerializer in the last API to, for example, PaymentVerifyOutputSerializer and see if this solves the issue 🤔

If that's the case, this will confirm the serializer class name <-> schema name mapping.
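To be concrete, something along these lines (sketched on top of your PaymentVerifyAPI example above):

class PaymentVerifyAPI(APIView):
    permission_classes = (AllowAny,)

    # Renamed from OutputSerializer, to test the class name <-> schema name theory
    class PaymentVerifyOutputSerializer(serializers.Serializer):
        test = serializers.CharField()

    @extend_schema(
        responses=PaymentVerifyOutputSerializer,
    )
    def post(self, request):
        ...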

It works, indeed. But this kinda doesn't feel cool, since it means giving up the convention of having the same name for all the input serializers.

Is it OK to prepend the API class name to the local serializers of each API?

@Farhaduneci It does not feel cool indeed.

Let me think about possible solutions / workarounds 👍

Thanks 😁

Please also check this comment of mine; it's somewhat related to what we're facing here.

#105 (comment)

I thought commenting on a closed issue would make it open again, but it didn't.

So that's why I'm referencing it here.

@Farhaduneci I've played around with this and here is my suggestion (so far), following the idea of pushing the "not so cool" parts deeper into the project's abstraction layers.

We usually have a set of "base" APIs that define specific framework-level behavior that's important for the project.

Here, we can use the same approach and define something like this:

from rest_framework.views import APIView


class BaseReadApi(APIView):
    def get_serializer_class(self, context=None):
        cls = self.OutputSerializer

        # Make sure the nested serializer always carries a Meta with a unique
        # ref_name, derived from the concrete API class name.
        if not hasattr(cls, "Meta"):
            cls.Meta = type("Meta", (), {})

        if not hasattr(cls.Meta, "ref_name"):
            cls.Meta.ref_name = f"{self.__class__.__name__}.OutputSerializer"

        return cls

The general idea is to make sure that our serializers always have a unique ref_name, which is then used internally by drf-spectacular when building the response schema.

Here's the relevant code - https://github.com/tfranzel/drf-spectacular/blob/master/drf_spectacular/openapi.py#L1530

And the returned serializer name is then used as a registry key - https://github.com/tfranzel/drf-spectacular/blob/master/drf_spectacular/openapi.py#L1581

That's why the override is happening.

Now, following this approach, we can have the following APIs:

class SomeApi(BaseReadApi):
    class OutputSerializer(serializers.Serializer):
        test = serializers.CharField()

    def get(self, request):
        return Response()


class OtherApi(BaseReadApi):
    class OutputSerializer(serializers.Serializer):
        shano = serializers.CharField()

    def get(self, request):
        return Response()

And the schemas are going to be correct.

Now, you can take that and make it more general (a BaseApi) or have it separate (BaseReadApi, BaseWriteApi) - this really depends on your taste & the additional context of your project.
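For example, here's a sketch of what a separate BaseWriteApi could look like, mirroring the same idea for the nested InputSerializer (assuming the same nested-class convention; this variant is not part of the snippet above):

class BaseWriteApi(APIView):
    def get_serializer_class(self, context=None):
        # Same idea as BaseReadApi, but for the request serializer
        cls = self.InputSerializer

        if not hasattr(cls, "Meta"):
            cls.Meta = type("Meta", (), {})

        if not hasattr(cls.Meta, "ref_name"):
            cls.Meta.ref_name = f"{self.__class__.__name__}.InputSerializer"

        return cls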

Hopefully this helps!

Hello @RadoRado ,

Thank you for your fantastic suggestion and the time you put into this.

Your idea of using the "BaseReadApi" and giving serializers a unique ref_name is really intriguing.

I believe it would be helpful to mention this kind of abstraction in the main text of the guideline, so that newcomers like me can implement it from the project kickstart. It's kinda hard to introduce a new base class when you already have a bunch of APIs defined in the code using plain APIView 😅

Thanks once again.
Your repo is a huge asset.

@Farhaduneci Yep, if you already have a big project, swapping in a new BaseApi doesn't sound like a good idea (or at least, it's going to face some internal resistance).

Another thing that you might do is to "proxy" the extend_schema decorator:

from drf_spectacular.utils import extend_schema as extend_schema_base


def extend_schema(*args, **kwargs):
    def decorator(f):
        responses = kwargs.get("responses", None)

        if responses is not None:
            # Use the serializer's qualified name (e.g. "SomeApi.OutputSerializer")
            # as the ref_name, unless one is already defined.
            ref_name = responses.__qualname__

            if not hasattr(responses, "Meta"):
                responses.Meta = type("Meta", (), {})

            if not hasattr(responses.Meta, "ref_name"):
                responses.Meta.ref_name = ref_name

        return extend_schema_base(*args, **kwargs)(f)

    return decorator

This seems to be doing the trick.
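For illustration, views would then import this proxy instead of the one from drf_spectacular (the module path below is hypothetical; adjust it to wherever the proxy lives in your project):

from rest_framework import serializers
from rest_framework.response import Response
from rest_framework.views import APIView

# Hypothetical module path where the proxied extend_schema is defined
from project.api.schema import extend_schema


class SomeApi(APIView):
    class OutputSerializer(serializers.Serializer):
        test = serializers.CharField()

    @extend_schema(responses=OutputSerializer)
    def get(self, request):
        return Response()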

Of course, the implementation needs to be double-checked, because I just wrote it & tested it quickly.

Cheers

@RadoRado, I've been using this solution for a few days now, and it appears to be working well. However, I've encountered occasional errors complaining that the __qualname__ attribute is not defined.

Additionally, I've noticed that each @extend_schema call might receive request as well as responses, which are used for the InputSerializer and OutputSerializer respectively. Here's the current code I'm employing, which seems to be working satisfactorily:

import secrets

from drf_spectacular.utils import extend_schema as extend_schema_base


def extend_schema(*args, **kwargs):
    def inner_decorator(view_func):
        responses = kwargs.get("responses", None)
        request = kwargs.get("request", None)

        if responses is not None:
            extend_schema_responses(responses)

        if request is not None:
            extend_schema_request(request)

        return extend_schema_base(*args, **kwargs)(view_func)

    return inner_decorator


def extend_schema_responses(responses):
    # Give the response serializer a unique (random) ref_name, unless it defines one
    if not hasattr(responses, "Meta"):
        responses.Meta = type("Meta", (), {})

    if not hasattr(responses.Meta, "ref_name"):
        responses.Meta.ref_name = secrets.token_hex(16)


def extend_schema_request(request):
    # Same for the request serializer
    if not hasattr(request, "Meta"):
        request.Meta = type("Meta", (), {})

    if not hasattr(request.Meta, "ref_name"):
        request.Meta.ref_name = secrets.token_hex(16)

@Farhaduneci this is looking good 👍

And if it does the job well - kudos!

For now, I'm closing this issue.

@Farhaduneci feel free to reopen it, if you want to provide some additional information and/or ask more questions related to this topic.

Cheers!