Netflix / ribbon

Ribbon is an Inter-Process Communication (remote procedure call) library with built-in software load balancers. The primary usage model involves REST calls with support for various serialization schemes.
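For a sense of that usage model, here is a minimal sketch adapted from the project's getting-started example: a templated GET against a hypothetical movie service, executed through Ribbon's RxJava-based API. Class and builder names follow the README and may differ between Ribbon versions.

```java
import com.netflix.ribbon.ClientOptions;
import com.netflix.ribbon.Ribbon;
import com.netflix.ribbon.http.HttpRequestTemplate;
import com.netflix.ribbon.http.HttpResourceGroup;
import io.netty.buffer.ByteBuf;
import rx.Observable;

public class MovieServiceClientSketch {
    public static void main(String[] args) {
        // A resource group bundles client options (retries, server list) for one target service.
        HttpResourceGroup group = Ribbon.createHttpResourceGroup("movieServiceClient",
                ClientOptions.create()
                        .withMaxAutoRetriesNextServer(3)
                        .withConfigurationBasedServerList("localhost:8080,localhost:8088"));

        // A request template captures the HTTP method and a parameterized URI.
        HttpRequestTemplate<ByteBuf> recommendationsByUserId =
                group.newTemplateBuilder("recommendationsByUserId", ByteBuf.class)
                        .withMethod("GET")
                        .withUriTemplate("/users/{userId}/recommendations")
                        .build();

        // Requests are built from the template and executed reactively; the load balancer
        // picks a server from the configured list for each execution.
        Observable<ByteBuf> response = recommendationsByUserId.requestBuilder()
                .withRequestProperty("userId", "user1")
                .build()
                .observe();

        ByteBuf firstChunk = response.toBlocking().first();
        System.out.println("received " + firstChunk.readableBytes() + " bytes");
    }
}
```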

Is this project still active

GrapeBaBa opened this issue

Hi team,

It seems this project has not been updated in a few months. I want to know whether Ribbon will be updated to use the latest RxNetty version.

Much of Ribbon is made obsolete by new features introduced in RxNetty 0.5. We are in the process of determining whether we will continue developing and supporting Ribbon or roll any missing features into RxNetty or a newer and much more lightweight version of Ribbon.

@elandau that worries me a little bit. There will be Spring Cloud users hesitant to move to Netty-based clients right away. I'd prefer a newer, lighter-weight version.

@elandau Thanks for all the great work!

I agree with @spencergibb. This is a bit worrying. We have invested a lot of effort in building our service stack on top of Ribbon. Is there any high-level roadmap resource that we can view, just so we can plan accordingly and be in a better position to react to future changes?

Hi there, we are also using this project as a base for all our microservices' clients, so whenever you have decided what you will do with the project, please let us know as soon as possible.

Personally, I think a lighter Ribbon version would be the right approach, as features like HTTP templating/proxying and the integration with RxNetty, Archaius, and Eureka are reason enough for Ribbon to exist.

Netflix OSS still touts this as a keystone project. Am I incorrect in assuming that? If so, what types of client-side load balancing is the Netflix team using? Has the architecture changed, or is Ribbon just not the direction moving forward? Many use this project, but it has become difficult to maintain. If RxNetty is the way forward, it may be worth investing more in client-side load balancing within that project or an external one. The example in that project is trivial and not as advanced as this one.

Dear Billy,

We haven’t been good at communicating outward, so I understand your frustration. Your question triggered a series of internal discussions, so I apologize for our delayed response. You touched on a point that we at Netflix had looked at only through the lens of day-to-day incremental problem solving, without really sitting down and framing the bigger picture.

Ribbon comprises multiple components, some of which are used in production internally and some of which were replaced by non-OSS solutions over time. This is because Netflix started moving to a more componentized architecture for RPC, with a focus on single-responsibility modules. So each Ribbon component gets a different level of attention at the moment.

More specifically, here are the components of Ribbon and their level of attention by our teams:

ribbon-core/ [deployed at scale in production]
ribbon-eureka/ [deployed at scale in production]
ribbon-evcache/ [not used]
ribbon-guice/ [not used]
ribbon-httpclient/ [we use everything not under com.netflix.http4.ssl. Instead, we use an internal solution developed by our cloud security team]
ribbon-loadbalancer/ [deployed at scale in production; see the sketch after this list]
ribbon-test/ [this is just an internal integration test suite]
ribbon-transport/ [not used]
ribbon/ [not used]
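For readers unfamiliar with the pieces deployed at scale, here is a minimal, self-contained sketch of client-side load balancing with ribbon-loadbalancer. The fixed server list and host names are made up for illustration; in a real deployment the list would typically be fed from Eureka via ribbon-eureka. Exact builder names may vary between Ribbon versions.

```java
import com.netflix.loadbalancer.BaseLoadBalancer;
import com.netflix.loadbalancer.LoadBalancerBuilder;
import com.netflix.loadbalancer.Server;

import java.util.Arrays;
import java.util.List;

public class ChooseServerSketch {
    public static void main(String[] args) {
        // Hypothetical hosts; with ribbon-eureka this list would be discovered dynamically.
        List<Server> servers = Arrays.asList(
                new Server("host1.example.com", 8080),
                new Server("host2.example.com", 8080),
                new Server("host3.example.com", 8080));

        // Build a load balancer over a fixed server list (default rule: round robin).
        BaseLoadBalancer lb = LoadBalancerBuilder.newBuilder()
                .buildFixedServerListLoadBalancer(servers);

        // Each chooseServer() call selects the target for one outgoing request.
        for (int i = 0; i < 6; i++) {
            Server chosen = lb.chooseServer(null);
            System.out.println(chosen.getHost() + ":" + chosen.getPort());
        }
    }
}
```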

Even the components deployed in production are wrapped in a Netflix-internal HTTP client, and we are not adding new functionality to them since they’ve been stable for a while. Any new functionality has been added to internal wrappers on top of Ribbon (such as request tracing and metrics). We have not made an effort to make those components Netflix-agnostic under Ribbon.

Recognizing these realities and deficiencies, we are placing Ribbon in maintenance mode. This means that if an external user submits a large feature request, internally we wouldn’t prioritize it highly. However, if someone were to do work on their own and submit complete pull requests, we’d be happy to review and accept. Our team has instead started building an RPC solution on top of gRPC. We are doing this transition for two main reasons: multi-language support and better extensibility/composability through request interceptors. That’s our current plan moving forward.

We currently contribute to the gRPC code base regularly. To help our teams migrate to a gRPC-based solution in production (and battle-test it), we are also adding load-balancing and discovery interceptors to achieve feature parity with the functionality Ribbon and Eureka provide. The interceptors are Netflix-internal at the moment. When we reach that level of confidence we hope to open-source this new approach. We don’t expect this to happen before Q3 of 2016.
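The load-balancing and discovery interceptors mentioned above are Netflix-internal and not shown here, but the extension point they build on is gRPC's standard client interceptor API. To illustrate the extensibility/composability argument, the sketch below attaches a hypothetical request-ID header to every outgoing call, standing in for the kind of cross-cutting concern (tracing, metrics, discovery, load balancing) that interceptors make composable; it is not Netflix's actual implementation.

```java
import io.grpc.CallOptions;
import io.grpc.Channel;
import io.grpc.ClientCall;
import io.grpc.ClientInterceptor;
import io.grpc.ForwardingClientCall.SimpleForwardingClientCall;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.Metadata;
import io.grpc.MethodDescriptor;

import java.util.UUID;

// Adds an "x-request-id" header to every call made through the channel.
public class RequestIdInterceptor implements ClientInterceptor {

    private static final Metadata.Key<String> REQUEST_ID =
            Metadata.Key.of("x-request-id", Metadata.ASCII_STRING_MARSHALLER);

    @Override
    public <ReqT, RespT> ClientCall<ReqT, RespT> interceptCall(
            MethodDescriptor<ReqT, RespT> method, CallOptions callOptions, Channel next) {
        return new SimpleForwardingClientCall<ReqT, RespT>(next.newCall(method, callOptions)) {
            @Override
            public void start(Listener<RespT> responseListener, Metadata headers) {
                headers.put(REQUEST_ID, UUID.randomUUID().toString());
                super.start(responseListener, headers);
            }
        };
    }

    public static void main(String[] args) {
        // Interceptors compose at channel-build time; several can be stacked in order.
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 8080)   // hypothetical target
                .usePlaintext()
                .intercept(new RequestIdInterceptor())
                .build();
        // ... create a generated stub from `channel` and issue RPCs as usual ...
        channel.shutdown();
    }
}
```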

I hope that answers your questions, but don’t hesitate to follow up with thoughts and suggestions.

Thank you for bringing our focus back on a matter we’ve neglected.

This is the answer we were looking for, and it's good to have a general idea of the direction (at Netflix) going forward, especially with open-source technology.

Load balancing is the key piece of this software: it quickly gives us HA in the cloud, without a single point of failure, when integrating services. In general, RPC is not going to be the direction for many teams writing microservices, so it will be interesting to see where companies move from here (REST with externalized models/schemas, protocol buffers with gRPC, MOM, or likely a mix).

Thanks for the heads up, and hope this gets communicated to other interested parties.

@drtechniko Since Ribbon was placed into maintenance mode [1], it doesn't look like any new PRs have been accepted [2]. What process should be followed?

However, if someone were to do work on their own and submit complete pull requests, we’d be happy to review and accept.

[1] https://github.com/Netflix/ribbon#project-status-on-maintenance
[2] https://github.com/Netflix/ribbon/pulls?utf8=%E2%9C%93&q=is%3Apr%20merged%3A%3E2016-04-12