Product Roadmap 2019
manishrjain opened this issue · comments
Here's the product roadmap for 2019:
- Stronger Type System
- Support official GraphQL spec natively (#933).
- Live streaming of updated responses
- Gremlin support
- Full JSON support across exports, bulk loading, and live loading.
- Upserts
Enterprise features:
- Binary backups
- Full Backups
- Incremental Backups
- Access Control Lists
- Audit Logs
- Encryption at Rest
- Dgraph cluster running across remote regions
- Point in time recovery
Tell us what more you'd like to see happen in 2019!
Would be nice if ACLs could be in the Community Edition.
Scott
@manishrjain Thanks for the update! Can you share any info about how you intend to implement the GraphQL spec while preserving the existing functionality? Custom directives?
As for Live Streaming, will this simply be the Subscription portion of the GraphQL spec, or do you intend to implement live queries (such as with a @live directive)?
Would be totally awesome if we could have a chat about OpenID and protecting schema queries based on roles from JWT tokens: https://blog.grandstack.io/authorization-in-graphql-using-custom-schema-directives-eafa6f5b4658. If required, I could provide a full-time OpenID Connect server (Keycloak) on a public endpoint for the duration of the implementation/testing.
how you intend to implement the GraphQL spec while preserving the existing functionality? Custom directives?
We will have a different endpoint called /graphql or something, which would run GraphQL queries. That way we can maintain and improve what we currently have, while also supporting the official GraphQL spec.
Any chance of you offering a paid managed instance?
We will have a different endpoint called /graphql or something, which would run GraphQL queries. That way we can maintain and improve what we currently have, while also supporting the official GraphQL spec.
@manishrjain For clarity, are we to understand that the /graphql endpoint would only implement a subset of the GraphQL+- API? Beyond backward compatibility, what are the reasons to stick with GraphQL+-?
GraphQL+- implements a lot of features, which are not part of the GraphQL spec, like variable assignment, math operations, shortest path queries, and so on. GraphQL isn't the right fit for all the features that one needs from a database.
But GraphQL is a great language for apps to be built on -- and that's the aim here: to support it to make building apps on Dgraph easier. Dgraph is a great graph DB, but also a great, general-purpose primary DB for apps; and we see more and more people/companies use Dgraph to build apps.
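For illustration, here is a sketch of the kind of GraphQL+- query the GraphQL spec has no equivalent for (the predicate names are invented for this example): value variables assigned in one block and math computed over them.

```graphql
{
  # Assign a value variable per node: follower count, doubled.
  var(func: has(follower)) {
    f as count(follower)
    score as math(f * 2)
  }
  # Reuse the computed variable to filter and sort.
  popular(func: ge(val(score), 10), orderdesc: val(score)) {
    name
    popularity: val(score)
  }
}
```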
Any chance of you offering a paid managed instance?
Not a priority. It would depend on how things go in 2019, and whether we have the resources needed to pursue what would effectively be another huge project (almost a mini-company) within Dgraph.
Support for multiple databases (schemas) on the same server is very important for us, for the reasons mentioned in the relevant issue #2693
But, GraphQL is a great language for apps to be built on
So, the intention is to have a standard GraphQL endpoint, so apps can directly access the Dgraph server?
Scott
For externally facing apps like Web/CLI/Mobile, you will still have to have an authentication proxy at the query/mutation level, which is going in the direction of Neo4j's plugin system. Not complete.
Wouldn't it make more sense to add role-based access control based on OIDC? Then queries/mutations can be protected by a simple group decorator, and the user's signed token has to contain the list of groups the user is subscribed to. The entire implementation of token verification is in this one file in the Kubernetes repo: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/plugin/pkg/authenticator/token/oidc/oidc.go A good place to start.
This would make it possible to expose /graphql endpoint to the world without having to rewrite every single call inside of API that ends up calling unprotected /graphql endpoint or with ACL's that don't fit authorization standards of any of the modern Web/CLI/Mobile apps.
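To make the group-decorator idea concrete, here is a minimal sketch of gating a query on a JWT "groups" claim. All names are hypothetical, and this only decodes the payload; a real gateway must verify the token signature against the OIDC provider (as the Kubernetes authenticator linked above does) before trusting any claim.

```javascript
// Decode the (unverified) payload segment of a JWT. Illustrative only:
// production code must verify the signature first.
function decodeJwtPayload(token) {
  const payload = token.split('.')[1];
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
}

// Allow the query only if the token's "groups" claim contains the
// required group, as in the proposal above.
function canRunQuery(token, requiredGroup) {
  const claims = decodeJwtPayload(token);
  return Array.isArray(claims.groups) && claims.groups.includes(requiredGroup);
}
```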
@styk-tv - There is also still the very important part of web interaction called user input sanitization, which is just as much, if not more, of a security aspect than RBACs or ACLs. I mean, we don't want to run into little Bobby Tables, right? 😁
How will Dgraph also handle user input sanitization (and the other things mentioned in the article linked to little Bobby)?
Scott
Scott, yes, GraphQL injections are real. So the question is: will Dgraph as a community work out authorization, DoS, and injections, or just say "hey, here is /graphql, we have worked out this issue", or follow in the footsteps of bare GraphQL and let Apollo grow into a superpower?
Hi, no plans for openCypher support yet?
No concrete plans for openCypher yet. We'll see if that changes as the year progresses.
Re: the /graphql endpoint, once we have that in place, we'll see what all functionality is provided by existing frameworks like Apollo, and what we need to build as a database. Of course, we want to prioritize the security of data, but at the same time, we don't want to "become" Apollo.
Maybe a simple openCypher to GraphQL+- conversion engine could help those of us still stuck with a Neo4j backend who want to scale with Dgraph. I wonder if, meanwhile, it is possible to build it externally as a library.
Re: /graphql endpoint, once we have that in place, we'll see what all functionality is provided by existing frameworks like Apollo, and what we need to build as a database. Of course, we want to prioritize the security of data, but at the same time, we don't want to "become" Apollo.
That is smart and for that reason, I think it would be a wasted effort to offer a pure GraphQL connection to Dgraph, with the intentions of using Dgraph as a "backend" for client-side apps. There must be a layer of business logic in front of Dgraph and behind the GraphQL endpoint for any sized application to be safe and work smartly.
Scott
Every few months I jump in to check on Dgraph, and I am excited to see manifested plans for GraphQL. I believe Dgraph has the potential to become a go-to general-purpose DB for apps.
For anyone asking about authentication, sanitization and similar: the standard practice is to have (in this case) Dgraph run behind something like graphql-yoga, which is the server component that adds the business logic / authentication layer.
In practice, every production GraphQL system looks like this:
Prisma / AppSync / PostGraphile / dGraph <—> GraphQL Server <— GraphQL Clients (Web, Mobile, other Servers etc)
As soon as Dgraph supports standard-spec GraphQL, its DB power can be stitched into the GraphQL Server. The server handles auth, caching, stitching of other sources, and serves the clients.
On first glance, it might be a little confusing to conceptually differentiate the components. Anybody coming by might want to read this post and that comment:
In my advisory work regarding GraphQL, I often have this Q&A (copied and pasted):
Question
If dGraph / Prisma / PostGraphile itself already provide a GraphQL interface (i.e, it is a GraphQL server already), at what situation do we need another layer of Apollo-server or graphql-yoga?
I don’t see any extra benefit adding another layer. Can you shed some light on this?
Answer
To provide additional logic like access control etc. E.g. Prisma does not support roles and can't distinguish between a logged-in and a logged-out user.
Also, the GraphQL Server is able to stitch multiple GraphQL endpoints (schemas) together.
So one can use an old MySQL database via Prisma jointly together with a GraphQL wrapped REST API of 3rd Party Providers, access & security logic all together with new data stores like graph and key-value dbs for incremental / feature based adoption.
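That extra access-control layer can be sketched as a tiny gateway-side wrapper around a resolver (all names here are hypothetical; graphql-shield is a production-grade version of this idea). The point is that the rule lives in the GraphQL server, so the data source stays role-agnostic.

```javascript
// Wrap a resolver so it only runs when the request context carries the
// required role. The rule lives in the gateway, not in the database.
function requireRole(role, resolver) {
  return (parent, args, context) => {
    const roles = (context.user && context.user.roles) || [];
    if (!roles.includes(role)) {
      throw new Error('Forbidden');
    }
    return resolver(parent, args, context);
  };
}

// A plain resolver that would normally delegate to the database layer.
const salariesResolver = () => [{ name: 'Alice', salary: 100000 }];

// Only members of the "hr" group may call it.
const guardedSalaries = requireRole('hr', salariesResolver);
```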
Question
Why are there so many GraphQL servers?
Answer
Conceptually, GraphQL is a REST replacement allowing a new form of communication. Just like you might string together services based on REST, you string them together based on GraphQL, only with the advantage that GraphQL is self-documenting, allowing an automatic merging of all the functionalities in the upstream chain without manually fetching & joining REST calls.
To communicate, a GraphQL endpoint needs to be served. This however, is not to be confused as having to be “the Server Endpoint” to i.e. public web and mobile clients.
Prisma / AppSync / PostGraphile / dGraph <—> GraphQL Server <— GraphQL Clients (Web, Mobile, other Servers etc)
This isn't quite correct IMHO. It would be more like this.
MongoDB<—>Prisma -|
MySQL<—>Prisma -|<— GraphQL Clients (Web, Mobile, other Servers etc)
PostGres<—> Prisma -|
or
PostGres<—>PostGraphile-|<— GraphQL Clients (Web, Mobile, other Servers etc)
or
all kinds of backends<—>AWS Appsync-|<— GraphQL Clients (Web, Mobile, other Servers etc)
or maybe
Dgraph<—>GraphQL-Yoga-|<— GraphQL Clients (Web, Mobile, other Servers etc)
or also maybe
Dgraph<—>Business Logic Server<—>GraphQL Gateway-|<— GraphQL Clients (Web, Mobile, other Servers etc)
In other words, Dgraph shouldn't be placed in the same boat as Prisma, AWS AppSync or PostGraphile. It is ONLY a database currently, and I highly doubt it will get to be much more; or rather, I don't think it should. Again, IMHO.
Scott
This isn't quite correct IMHO. It would be more like this.
MongoDB<—>Prisma -|
MySQL<—>Prisma -|<— GraphQL Clients (Web, Mobile, other Servers etc)
PostGres<—> Prisma -|
Prisma is only an adapter for managing old-school databases via GraphQL (table creation etc.) and should not talk to the public directly, as it doesn't provide access logic (just like a DB). As its website header says, "Prisma replaces traditional ORMs". It would be like having Mongoose for MongoDB exposed to the public. It's possible, but not advised. Besides, the whole point of GraphQL is to have one request serve everything in one query response. The diagram above would mean clients have 3 endpoints to manage and would have to merge & cache themselves. A GraphQL server is the one caching and stitching, e.g. for serving a nice friend-list feature: taking the userId from MySQL (via Prisma), friends' common connections / graph relations (from Dgraph) and analytics data (from a wrapped REST API from BigQuery), neatly packed for the frontend.
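That friend-list stitching can be sketched as a single gateway resolver fanning out to several sources. The fetchers here are hypothetical stand-ins for the backends named above (Prisma/MySQL, Dgraph, a wrapped REST API):

```javascript
// One gateway-level resolver joins rows from several backends into one
// response, so the client makes a single request.
function friendList(userId, { fetchUser, fetchFriends, fetchAnalytics }) {
  const user = fetchUser(userId);           // e.g. MySQL via Prisma
  const friends = fetchFriends(userId);     // e.g. graph relations from Dgraph
  const analytics = fetchAnalytics(userId); // e.g. wrapped BigQuery REST API
  return { ...user, friends, visits: analytics.visits };
}
```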
So native GraphQL support through Dgraph is basically just that: "Prisma" + DB. Or Postgres + PostGraphile, or Neo4j + GraphQL Plugin. So it very well lives in the same boat and would be a great alternative.
On a side note: AppSync is a very different thing in the list. AppSync is like GraphCMS or Meteor, to be understood as a Firebase-style solution for GraphQL.
When Dgraph brings GraphQL support, it's nothing more than a DB with a smart REST / GraphQL interface for easy stack integration. And that is great.
I was assuming yoga was in front of Prisma.
To be more correct, it should be this.
MongoDB<—>Prisma -business logic-GraphQL-Yoga-|
MySQL<—>Prisma -business logic-GraphQL-Yoga-|<— GraphQL Clients (Web, Mobile, other Servers etc)
PostGres<—> Prisma -business logic-GraphQL-Yoga-|
The point I was trying to make overall was, Dgraph shouldn't be connecting straight to GraphQL Clients (outside of servers).
Scott
The point I was trying to make overall was, Dgraph shouldn't be connecting straight to GraphQL Clients (outside of servers).
Haha 😅 then we just missed each other's notes. You are absolutely right: Dgraph GQL should not be exposed to public consumers and therefore should not bring yet another user/role manager.
@smolinari Thanks Scott. Your explanation led me to https://github.com/maticzav/graphql-shield which is a dream come true.
@D1no - Actually, I think Dgraph does need user/role management, because that would offer a way to get a form of logical separation within a database for the purposes of multi-tenancy. What Dgraph doesn't need is an extra GraphQL API, as mentioned here:
We will have a different endpoint called /graphql or something, which would run GraphQL queries.
The current GraphQL API should be specs compliant to begin with (and even then, GraphQL Clients outside of other servers shouldn't connect to it directly). I understand trying to keep backwards compatibility, but they shouldn't have broken away from the spec to begin with. That's what specs are for. 😁
I guess if the new endpoint is to offer a spec-compliant GraphQL API, it isn't intended for client-side access (as in devices on the web), and at some point in a major version GraphQL+- will be deprecated, then I guess I am for it too. I might have misunderstood the issue that started it all, as the OP was talking about Relay and Apollo, which are client-side (as in devices on the web) libraries. If that was my misunderstanding all along, then I apologize for wasting everyone's time. 😄
Scott
That makes me curious, please consider this: If /graphql is not to be used directly by the clients (no jwt auth mechanism, not recommended to be exposed outside) then what is the point of /graphql endpoint built in? In neo4j ALL my interactions with the db are using neo4j-graphql-js using bolt protocol (not through /graphql plugin).
To follow that example, if Dgraph could follow a similar pattern of creating a graphdb-graphql-js component, then no /graphql endpoint would be necessary. The difficulty here is allowing yoga/apollo tools to set up a standard GraphQL API: allowing transformations on schemas, permission decorators, and rules in IDL as standard type/query/mutation/subscription, and translating that into GraphQL+- requests with the assistance of a graphdb-js driver.
This way the graphdb-graphql-js component would allow anyone to integrate as they want, and /graphql could then simply be an example instance of yoga/apollo, with several examples potentially demonstrating permissions via graphql-shield or custom schema directives, middleware, and so on.
Since a GraphQL endpoint is usually consumed directly by public clients, I vote for GraphQL support as a new Dgraph client API that can only be used by a Dgraph client, or as a separate GraphQL-to-Dgraph conversion package/library, rather than an open public /graphql endpoint.
First reason: just like some others have already stated, separated security/auth/authz will be a problem. I'm afraid that in the end Dgraph will be pushed to become an OAuth/OpenID client, while not all of us use that; Dgraph should remain agnostic to that kind of technology.
Second reason is performance: conversion from a GraphQL query to a Dgraph query would be resource intensive. Let the backend app server handle it, while the Dgraph server focuses on the database.
Third reason is the confusion of multiple schemas: GraphQL itself can have more than one schema (for public, for admin, etc.). Let GraphQL schema variation and GraphQL query validation be handled by the backend app server before it's sent to Dgraph.
That makes me curious, please consider this: If /graphql is not to be used directly by the clients (no jwt auth mechanism, not recommended to be exposed outside) then what is the point of /graphql endpoint built in?
- ❌ Clients: End-User Devices
- ✅ Services: Other GraphQL Microservices & Middleware
@styk-tv A GraphQL API is like a very smart REST API. You can use it to serve clients on the web (browser, mobile app etc.) or other services (Apache server, BI, data warehouse etc.). The latter is meant for Dgraph: spec-compliant GraphQL means seamless service-to-service communication.
similar pattern of creating graphdb-graphql-js component then no /graphql endpoint would be necessary.
GraphQL is language/library independent; e.g. I have C# & Erlang services that consume GraphQL. It respects the spec, and anything that supports the spec can be easily integrated. That is what the endpoint is for. It elevates access via protocol instead of library transform logic, providing to any language:
- Automatic merging of schemas with other services
- No need for ORM code
- Self-documenting API for both consumption and mutation
The Gateway (like Apollo, shield etc.) deals with Auth/Roles etc and is completely unrelated to the DB. Those libraries are merely a wrapper for GraphQL spec.
Credit: adapted from @schickling and a pretty good post about service-to-service GraphQL.
E.g. service-to-service means you could build graph analytics of open-source commits in Dgraph and merge them effortlessly with GitHub's GraphQL API to go even deeper.
@D1no I think fundamentally we're talking about the same thing. Let's use the terminology from your diagram. My interests are in the GraphQL Gateway (with OpenID), so parts of the token data can participate in the queries (like who the user is or what groups they belong to). The Microservices layer needs to understand the database. So, in order not to mix concerns and not to retype the schema of the database in the Microservices layer, you need a smart set of components so you can define security against the schema. Currently this can be COMPLETELY abstracted using two methods in my case. By using neo4j-graphql-js, I have control over schema augmentation to autogenerate queries, mutations and subscriptions as I see fit without rewriting resolvers (or by writing only the ones I want, leaving types autogenerated). As well, in the same schema definition I can have role (group)-based access control defined (graphql-shield with OpenID), not only per query/mutation/subscription but also at the field level (again without retyping my resolvers, which is a massive job, especially when you have multiple siloed teams doing it). The above, BTW, is 100% reality with only two npm packages. This 95% decrease in effort for the Gateway + Microservices layers is what this discussion should be about. This is not 2013.
Now, in regards to language, it's the community adoption rates that should drive focus initially. Personally, you may have C# or Erlang, I may have Python, or @manishrjain may have Go, but you can't ignore the adoption and effort that goes into Node for Yoga/Apollo in the GraphQL Gateway and Microservices layers from your diagram. It would be silly to ignore this if this project wants to truly participate in the GraphQL community. So, having said that, a focus on a Node.js (npm) component dgraph-graphql-js should, in my humble opinion, be considered as part of the Dgraph 2019 roadmap: disconnected from the actual database and growing at its own pace, maintained by independent individuals, but still in sync with Dgraph's growing DB capability.
👍 I'm happy to see upserts on the list for 2019.
While it's possible with custom code inside a transaction, it's fairly brittle since all writes to these nodes would need to run through this custom code. If someone isn't aware of this (or if the db is accessed from outside application code), duplicate records can easily be written.
It also makes the application code unnecessarily complicated.
With the growth in popularity of graphql, the ability to use a native graph database behind a graphql app server (such as Apollo server) could be incredibly powerful. The ability to pass a client graphql query along (after the requested data has been authorized using something like graphql-middleware) in a single database request would make building app backends incredibly simple.
I'm glad that @manishrjain and the dgraph team are making this a priority now. However, it's critical that the interaction between the Graphql Server (i.e. Apollo Server) and dgraph be both simple enough to allow users to delegate entire queries to dgraph but also robust enough to provide the ability to customize the database query when simply delegating all incoming fields is not sufficient.
My suggestion would be to create a separate library that could be used in conjunction with the dgraph node.js client and would effectively generate a graphql+- query that can be passed to the dgraph client (similar to how neo4j-graphql-js generates a cypher query for the neo4j node.js driver). This would accomplish the following:
- It would allow users to build upon the existing graphql server ecosystem (such as Apollo Server, graphql-shield for field-level authorization, etc)
- It would not require any direct changes to dgraph or the dgraph client
- It would provide the ability to delegate entire queries but still provide the full capabilities of graphql+- for arbitrarily complex queries. (possibly by decorating the type definitions with custom directives similar to neo4j-graphql-js)
Add to this a fully managed hosted dgraph offering, and it would be incredibly fast and simple to get a powerful graphql backend up and running.
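To make that suggestion concrete, here is a toy sketch of what such a library could do (the dgraph-graphql-js name and this API are purely hypothetical): turn a flat field selection into a GraphQL+- query string that could be handed to the Dgraph client. A real translator would parse the GraphQL AST (e.g. via the graphql package) and handle arguments, nesting, and directives.

```javascript
// Toy translation of a flat GraphQL field selection into a GraphQL+-
// query string. Hypothetical sketch, not a real dgraph-graphql-js API.
function toDgraphQuery(type, uid, fields) {
  const body = fields.map((f) => '    ' + f).join('\n');
  return `{\n  ${type}(func: uid(${uid})) {\n${body}\n  }\n}`;
}
```

Usage: `toDgraphQuery('user', '0x1', ['name', 'age'])` yields a query block selecting `name` and `age` for node `0x1`, which the application would pass to the Dgraph client.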
be both simple enough to allow users to delegate entire queries to dgraph but also robust enough to provide the ability to customize the database query when simply delegating all incoming fields is not sufficient
That would require Dgraph to build out the middleware. That would require Dgraph to assume what business logic you need, and I'm talking about core business logic, like permissions i.e. who can see what in terms of data.
I have to, again, majorly disagree with this kind of thinking. The "Graph" in GraphQL has absolutely nothing to do with a graph database. It's just a coincidence, and maybe even a marketing ploy, that Dgraph took GraphQL as the basis for their API. And I don't mean marketing ploy in a bad sense.
If you listen to the videos created when Facebook originally introduced GraphQL, there was never (and still isn't, in fact) any mention of a graph database. Facebook doesn't even have one. When they speak of a graph, they are speaking of the "business graph": all the things that your business circles around. These things deal with data AND processes. It has to do with workflows and capabilities. It has nothing to do with a graph database and the relational data within it. The fact that a business has relationships in its data is the only connection between graph databases and business graphs.
And sure, the frontend client pulling up data straight out of Dgraph can be advantageous. But there still will, and should, be business logic in front of the data sources in your business or application, because you might have more than just Dgraph running in the background serving up data. There should never be a "direct" pass of a query to the database, because the Dgraph team simply can't, in a million years, know or guess your business needs in terms of data manipulation and protection. And they absolutely shouldn't be wasting their time guessing at what it is you want as a middleware. Dgraph should only understand how to store, manipulate and offer graph data. They can offer an API to query, persist and mutate that data. Whether it is GraphQL spec compliant or not has nothing to do with your application's GraphQL gateway. And it never will. And it never should.
Scott
@smolinari I'm not sure how you interpreted my post, but I don't believe you're disagreeing with my approach above.
That would require Dgraph to build out the middleware. That would require Dgraph to assume what business logic you need, and I'm talking about core business logic, like permissions i.e. who can see what in terms of data.
No, using an approach like what I've suggested above (similar to neo4j-graphql-js) would necessitate a graphql server which would handle all the business logic, including the ability to handle authorization however you see fit, etc. This business logic layer could use the library, let's call it dgraph-graphql-js, along with the dgraph client to delegate certain portions of your queries that precisely match your database schema.
similar to neo4j-graphql-js
Let's be clear about what this is. It is a way to translate or use Cypher within a GraphQL endpoint. It is not a business logic layer and it is not what was proposed by @manishrjain in the first post.
This business logic layer could use the library, let's call it dgraph-graphql-js
And that is my point. Who is going to write this layer? The Dgraph team? That is what I am saying. They shouldn't. It's not their realm of responsibility. At best, as mentioned in this issue (which started the more spec compliant epic above), the Dgraph team could build a further connector for Dgraph to connect with Prisma.
In the end, the Dgraph team shouldn't worry about ANY business logic layers or anything that may have to do with business logic.
The thing I'd like to see, and to me, it is much more important than getting their GQL+- API more GQL spec compliant or any kind of "pass-through" for their API language, is getting us a logical partitioning of the data by creating a user RBAC system within the database. I want to be able to use Dgraph in a multi-tenant/ multi-database user environment and can't because it is currently a one user, one database system.
And I really hope they don't put that in the enterprise features. The enterprise features should be more about administration of the database, once it starts getting bigger. That's when such services can be "sold" for good money.
Scott
@smolinari I'm sorry man, but logic has escaped you. You are talking about not including business rules, but then you ask for RBAC? This is exactly what @smkhalsa is talking about. Its place is not in the database but in a layer like neo4j-graphql-js (or the proposed dgraph-graphql-js). And you may think that GrandStack has nothing to do with Neo4j at all and is just a miracle of community work, but on a closer look you will see that it is a deliberate effort by Neo4j to bring the attention of the entire GraphQL (Apollo/Yoga) crowd directly to Neo4j. Hence I will say what I've said before: a separate JavaScript library is required. Effort in that direction is a requirement. The sooner the Dgraph team understands that adoption of Dgraph is closely related to GraphQL front-end solutions, the better for everyone. And on another note, making backup or clustering or autosharding an enterprise feature makes me sick. Might as well follow RedisGraph. Over and out.
but then you ask for RBAC.
This request isn't for business logic for application users. It's for data access for database/system users, so that a database can be logically separated between tenants. It's a way to make Dgraph multi-tenant capable. It's in the roadmap under Access Control Lists.
Instead of insulting me, maybe ask questions first, to get a better understanding. My logic is perfectly fine.
And thanks for pointing this out. GrandStack is exactly what I am talking about. It is a business logic layer. It sits between Neo4j and the consumer and forms a GraphQL API endpoint. Notice it requires sending Cypher queries to Neo4j. That's not what is being asked for here. In fact, they are probably learning that neo4j-graphql-js doesn't solve the whole problem for developers, so now GrandStack is the answer.
What is being asked for here, is that Dgraph should have an API more conform with GraphQL itself. Nobody is asking for a business layer in between. At least, they shouldn't be. And that is the issue. Nobody should be connecting their client apps directly to Dgraph.
Getting to the core of this discussion. This is what triggered this fork in the conversation:
But, GraphQL is a great language for apps to be built on -- and that's the aim here, is to support it to allow building apps easier on Dgraph....
That is incorrect thinking IMHO. It's not what Dgraph needs to be good at, because a GraphQL endpoint should be a couple layers above Dgraph in an application system environment. So making Dgraph GraphQL spec compliant is a waste of effort, IF it means some of the features won't be available and IF it means the idea is for apps to connect straight to the database. If the Dgraph team are changing their current API to drop the +/- bit, ok. But, that doesn't sound like the plan. If there is the intention to actually allow clients (end user devices) to connect directly to Dgraph, that is flawed thinking, because it puts access control to a single database in the client application, which is not secure!!!! In other words, let other solutions do that like Prisma. They are proper GraphQL servers and they can concern themselves with application security. Dgraph shouldn't be a proper GraphQL server, because there MUST be a business logic layer in between it and the GraphQL endpoint. Having a pure GraphQL / Client API (for end user devices) within Dgraph actually defeats the purpose of GraphQL and that is to allow clients (end user devices) to query for data from any number of back-ends.
And, I'll also throw this forward. Looking at neo4j-graphql-js, it's a mistake to begin with. A schema generated by the database basically breaks the core premise of NoSQL. The schema should be code-controlled, and the database should just dumbly follow it. Developers at Neo4j thinking neo4j-graphql-js is a good thing will be disappointed in the grander scheme of their application development, once they get to any kind of scale.
In fact, I'll also throw this forward and maybe @manishrjain can chime in. Who is the concept of offering a spec compliant GraphQL API supposed to help? Front-end developers? If yes, how? Back-end developers? If yes, how? DevOps? If yes, how? DB Admins? If yes, how? Business Owners? If yes, how?
Scott
GraphQL is for web. Cypher is for graphdb. Apache Spark (one of the top ETL frameworks) supports open cypher so users can run cypher queries on streaming and batch data so if you want to leverage the big data community I would recommend cypher.
Loading Wikidata. It's tricky right now due to Dgraph being strict in its schema enforcement (good) but making it difficult to ignore errors (bad, unless I missed the config option). The dataset is very useful and increasingly significant.
Maybe a simple openCypher to GraphQL+- conversion engine could help those of us still stuck with a Neo4j backend who want to scale with Dgraph. I wonder if, meanwhile, it is possible to build it externally as a library. @junknown
GraphQL is for web. Cypher is for graphdb. Apache Spark (one of the top ETL frameworks) supports open cypher so users can run cypher queries on streaming and batch data so if you want to leverage the big data community I would recommend cypher. @kant111
Once gremlin support is added, you can use this toolkit to transparently run cypher queries. https://github.com/opencypher/cypher-for-gremlin
Currently, in a mutation, it is not possible to add some value to an existing value without pulling the value into the application layer. We need to read the existing value at the application layer, update it, and set it back to the DB, which is not straightforward and sometimes error-prone. And this process of updating a value always increases network calls; eventually, the performance of the application decreases.
But it is very well possible in other SQL-like DBs, e.g.
UPDATE Table SET total=total+100 WHERE id=2
Also, other NoSQL graph databases like ArangoDB support these kinds of queries.
IMO this feature will help developers to save a lot of time and effort, and performance will improve.
Any indication when Live streaming of updated responses will be released?
Hey @joshhopkins, this will be part of the GraphQL Layer we're building. As soon as the GraphQL Layer is ready, we can have an idea of when Live streaming of updated responses will be available.
@ashim-k-saha not quite sure if I get the question right. Are you talking about something like increment/concatenation?
With the new transaction op called "upsert block" you'd be able to do something like updates. At first it will be upsert-focused, but with the Conditional Mutation in the Upsert Block you'll be able to do updates.
Check this issue for details #3059
@MichelDiz Not exactly increment/concatenation operations.
Let me try to explain with a simple example. Let's say I need to transfer $500 from account A to B.
To solve the above problem, currently,
- I need to fetch the information about the account A & B in the application layer.
- Next step will be to check, "Does A have more than $500 or not?"
// That is
if A.Amount > 500 {
// ...
}
- If the above condition becomes true, then I need to subtract $500 from account A. At the same time add $500 to account B.
// Which is basically
A.Amount = A.Amount - 500
B.Amount = B.Amount + 500
- Then with the help of mutation, I can save them into the database.
As far as I can understand @if from #3059, #3412 & #3612,
- Without fetching the data in the application layer, I can check the condition, "Does A have more than $500 or not?".
- But subtracting $500 from account A and adding $500 to account B is still not possible through mutation.
This type of problem can be solved easily in SQL databases with the following queries:
UPDATE ACCOUNT SET amount=amount - 500 WHERE id=A
UPDATE ACCOUNT SET amount=amount + 500 WHERE id=B
- And an unsigned int constraint will prevent negative values in the amount column.
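The four application-layer steps above can be sketched in plain Python, with a dict standing in for the database (all names here are illustrative):

```python
def transfer(accounts, src, dst, amount):
    # Step 1: fetch both balances at the application layer.
    src_balance = accounts[src]
    dst_balance = accounts[dst]
    # Step 2: check the condition ("Does src have more than $amount?").
    if src_balance <= amount:
        return False
    # Step 3: subtract from src and add to dst.
    accounts[src] = src_balance - amount
    accounts[dst] = dst_balance + amount
    # Step 4: "save" back to the store (here, the dict itself).
    return True


accounts = {"A": 700, "B": 100}
assert transfer(accounts, "A", "B", 500) is True
assert accounts == {"A": 200, "B": 600}
assert transfer(accounts, "A", "B", 500) is False  # only $200 left in A
```

Everything between steps 1 and 4 is extra network traffic and a race-condition risk that a server-side conditional update would avoid.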
Hey @joshhopkins, this will be part of the GraphQL layer we're building. As soon as the GraphQL layer is ready, we can get an idea of when live streaming of updated responses will be available.
Awesome. It's discussed here but I can't see anything definite: is the idea that GraphQL will be client-facing or used as a replacement for GraphQL+- on the application layer? If the former, are subscriptions for GraphQL+- (created, updated, and deleted triggers) also planned?
@nmabhinandan I would still think native support for openCypher is much better than doing all the conversions! I tried GraphQL, Gremlin, and openCypher, and I don't find GraphQL or Gremlin as clean and expressive as openCypher.
@joshhopkins it will be a layer built into Dgraph. We'll not replace GraphQL+-, as GraphQL alone doesn't support all GraphQL+- features. Both languages will coexist.
I don't think we have plans to add subscriptions to GraphQL+-, because we're going to have them in the GraphQL layer. Also, about the "triggers": I don't think there has been any request from the community. A detailed and well-supported request would be good to have, though.
@ashim-k-saha In the upsert block you could do something like:
upsert {
  query {
    me(func: eq(email, "aman@dgraph.io")) {
      v as uid
      cash as bank_amount
      B as math(cash - 330)
    }
  }
  mutation {
    set {
      uid(v) <bank_amount> val(B) .
    }
  }
}
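For parameterizing a block like that from application code, here is a hypothetical Python helper that only builds the upsert-block text for a given email and debit amount; actually sending it to Dgraph (via a client library or HTTP) is out of scope here, and the helper name is illustrative.

```python
def build_debit_upsert(email: str, amount: int) -> str:
    """Build a DQL upsert-block string that debits `amount` from the
    bank_amount of the account matching `email`. Construction only;
    nothing is sent to a server."""
    return f"""upsert {{
  query {{
    me(func: eq(email, "{email}")) {{
      v as uid
      cash as bank_amount
      B as math(cash - {amount})
    }}
  }}
  mutation {{
    set {{
      uid(v) <bank_amount> val(B) .
    }}
  }}
}}"""


block = build_debit_upsert("aman@dgraph.io", 330)
print(block)
```

Note that in a real application you would also want to escape or validate the interpolated values rather than format them in directly.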
I just saw that you added upserts to version 1.1. I haven't tried it yet, but I wanted to personally thank whoever wrote that feature. Dealing with upserts was close to 50% of my code on a small app I'm building with Dgraph as the sole data store.
GQL support -
GQL just got voted as an ISO standard
Dgraph really does look amazing and i can think of some really cool use cases for us. You guys have really put a lot of hard work into making this work and work well!
To me it seems pretty sad to make basic backup/restore an enterprise feature. I can understand partial backups, point-in-time recovery, etc. being in the enterprise plan, but basic backup/restore functionality is really a blocker for adoption.
I completely agree with @BradErz. No backups in the non-enterprise version is a blocker for us.
We came to the same conclusion as @BradErz, unfortunately.
Interesting conversation. It makes me think there will be a fine line between the community and enterprise versions for every new feature, which hinders adoption much more. I just don't want to have to monitor feature development continuously. I'd rather feel safe. Right now, I don't.
Gremlin support is very important for a GraphDB.
What about W3C's standardized SPARQL?
https://www.w3.org/TR/rdf-sparql-query/
Regarding subscriptions:
I would rather do CDC (change data capture) on Dgraph's WAL, exposed over gRPC.
That would allow us to do pub/sub and secondary workloads.
I really don't have any interest in using GraphQL.
Please add Open Cypher support
when "Gremlin support" in roadmap can be released?
Yes, both Gremlin and cypher support are in the roadmap for 2020
Do you plan to support geographically distributed clusters? I'm evaluating Dgraph, and this feature is very important for us.
Just a heads-up: 2019 is over! 😉
Maybe start a new thread "Product Roadmap 2020"?
Scott
Is there a dedicated issue for Gremlin support?
Splitting predicates into multiple groups for horizontal scaling:
https://discuss.dgraph.io/t/splitting-predicates-into-multiple-groups/5771/3
2020 goals
Closing this roadmap. For 2020 roadmap, please see here: #4724