kubernetes-client / gen

Common generator scripts for all client libraries

Support client generation for CRD

tamalsaha opened this issue · comments

Following @mbohlool's advice in kubernetes/kube-openapi#13 (comment), I have managed to generate a swagger.json for my CRD. In the spirit of not reinventing the wheel, I was hoping to generate a Java client using this repo. But I noticed that the swagger.json file is hardcoded to the Kubernetes repo's swagger.json: https://github.com/kubernetes-client/gen/blob/master/openapi/preprocess_spec.py#L302

Do you have any suggestions on how I can go about generating a Java client from my CRD's swagger.json?

I tried using the swagger-generator directly, which produced something: https://github.com/tamalsaha/kube-openapi-generator . I am not sure whether I should be regenerating the types that are shared with Kubernetes, or how auth should work.

You can add a flag to the client generator to completely bypass the preprocessing step if you don't need it.

I did some experimentation with the Java client generator for our Voyager project and was able to generate a client. The generated client uses the models from the Kubernetes java-client. One limitation is that there is no util.Config, so instantiating the client is somewhat awkward.

Here are the relevant repos:

Here is the list of changes I had to make:

  • Update the swagger.json path. I think this can be fixed by passing the swagger.json location via a --build-arg.
  • Use my forked swagger-codegen to update templates for pom.xml, etc.
  • Update artifactId etc. in the Java generator config.
  • Update type and import mappings in the Java generator config so that the generated code uses the Kubernetes java-client models instead of regenerating them.
  • Skip generating the custom-object API client.
  • Pre-process swagger.json to remove definitions of official Kubernetes objects and shorten the prefix for Voyager-specific type definitions.
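The last step above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the actual preprocessing script from this repo; the prefix strings and definition names are assumptions chosen for the example, and a real implementation would also need to rewrite `$ref` pointers that reference the renamed definitions.

```python
# Sketch: strip official Kubernetes definitions from a CRD swagger.json and
# shorten the prefix on project-specific definitions. Prefixes are assumed.
K8S_PREFIX = "io.k8s."                    # official Kubernetes model names
CRD_PREFIX = "com.appscode.voyager."      # assumed project-specific prefix

def preprocess(spec):
    """Return a copy of the spec with official definitions dropped and
    project-specific definition names shortened."""
    kept = {}
    for name, schema in spec.get("definitions", {}).items():
        if name.startswith(K8S_PREFIX):
            continue                       # model comes from the java-client
        if name.startswith(CRD_PREFIX):
            name = name[len(CRD_PREFIX):]  # e.g. "v1beta1.Ingress"
        kept[name] = schema
    out = dict(spec)
    out["definitions"] = kept
    return out

spec = {
    "definitions": {
        "io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta": {},
        "com.appscode.voyager.v1beta1.Ingress": {},
    }
}
print(sorted(preprocess(spec)["definitions"]))  # ['v1beta1.Ingress']
```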

I would like to get your comments on how CRD projects should generate clients based on this experiment.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.