bevyengine / bevy

A refreshingly simple data-driven game engine built in Rust

Home Page: https://bevyengine.org

Editor-Ready UI

cart opened this issue · comments

This is a Focus Area tracking issue

Before we can start work on the Bevy Editor, we need a solid UI implementation. Bevy UI already has nice "flexbox" layout, and we already have a first stab at buttons and interaction events. But Bevy UI still needs a lot more experimentation if we're going to find the "right" patterns and paradigms. Editor-Ready UI has the following requirements:

  • Embraces the Bevy architecture: Bevy ECS, Bevy Scenes, Bevy Assets, Bevy Events
  • A Canvas-style API for drawing widgets with shapes and anti-aliased curves
  • Define a consistent way to implement widgets
  • A core set of widgets: buttons, inputs, resizable panels, etc
  • Theme-ability
  • "Interaction" and "focus" events
  • Translation-friendly. We can't be anglo-centric here

Active Crates / Repos

No active crates or repos. Feel free to make one. Link to it in this issue and I'll add it here!

Sub Issues

No active issues discussing subtopics for this focus area. If you would like to discuss a particular topic, look for a pre-existing issue in this repo. If you can't find one, feel free to make one! Link to it in this issue and I'll add it to the index.

Documents

I wanted to share a few quick thoughts on UI systems in games. I'm a gamedev professionally, and I read this list of requirements and had a bit of a knee-jerk reaction. These are good features from a basic point of view, but sort of miss the boat on what I would consider the harder questions around UI:

  • Structure: Immediate Mode vs. Retained Mode
  • Performance: How fast is a no-op render (where nothing in the UI changes)?
  • Styling: CSS-like, or in-code?
  • Flexibility: How easy is it to build any UI, vs. just 'widgets'?

I'm happy to elaborate on any of these, but primarily I'd recommend trying to learn from where Unity is at the moment. They started with an immediate mode UI engine for both the Editor and the Runtime. Later, because of performance and flexibility, they adopted a retained mode UI for the Runtime. Now, as of two years ago, they've finally been building a (really fantastic) unified UI system for both the Editor and Runtime, open source.

This is a great talk on the new design (UIElements/UIToolkit): https://www.youtube.com/watch?v=zeCdVmfGUN0

I'd highly recommend taking as much learning as you can from their path. Frankly, both previous systems were a total mess. They seemed great in the moment, but just collapsed under any sort of complexity.

What makes UI challenging is that it's very hard to build incrementally; you have to plan a lot of these things in from the start. I don't think Bevy UI should be nearly as complicated as the latest Unity UI, but it's really worth learning from their path and not repeating their mistakes.

And also, if the intention for Bevy is to start off as a hobbyist engine, incremental is perfectly fine. I would just expect to have to rewrite the UI a few times as you want to attract larger projects.

Some early distillation of thoughts is happening here:
https://hackmd.io/lclwOxi3TmCTi8n5WRJWvw?view

At the moment anyone is free to add their particular domain experience. This is a stopgap; the suggestion is that Cart takes ownership of the hackmd, or that it's used as input for a GitHub wiki/markdown page.

Would it be worth also looking at SwiftUI as an example of prior art?

Add your write-up of it to the doc if you'd like 👍


Can we add "screen-reader friendly" and "multiple input types" (keyboard, mouse) as hard requirements for any UI framework / UI primitives?

At the risk of being very ignorant, is screen-reader friendliness a high priority for a visual editor? I have a feeling visually impaired folks will have difficulty getting use out of the editor. Making a visual scene editor friendly to visually impaired users sounds like an entire research project. Mouse + keyboard navigation is a clear win for accessibility so I think I'd want to focus efforts there, given our limited resources.

I'm certainly not saying we shouldn't make ourselves screen reader friendly, as that will absolutely be useful in other apps, I'm just questioning if it should be an Editor-UI priority. I'm curious what the use cases would be for screen readers + visual editors.

Please tell me if I'm wrong here.


Yeah no worries, it was a genuine question. I'm not sure how well screen readers are supported in other editors, I am coming from a context of developing on the open web and it is really important in that context. I imagine native software is a lot more complicated.

The extent to which it is important comes down to how visual the editor is. If there are lots of text inputs (e.g. a way to change transform values of an entity numerically rather than just using click + drag) then I think screen reader support would be something to consider. If at least in the short-term we are looking at a more purely graphical interface then it is less important.


Also for the record, I don't use screen readers so I don't actually know what users would prefer in this case. Maybe folks would rather just edit the RON files directly. Let's not worry about it too much. :-)

Have been looking at SwiftUI and one of the nice things about the declarative approach is that it does mean your data is well structured for adding things like that in the future.

The general concepts they use are really elegant and I suspect would translate well to Rust, although it may be an "all or nothing" commitment to go that route.

The main principle is defining a tree of very lightweight units composed together to create consistent and predictable results, with surprisingly little boilerplate and a small number of primitive/leaf components. Data dependencies are all declared up front so the framework knows what to update as and when the underlying data changes, and there's an "environment" concept so that data can be passed down the hierarchy without needing to be threaded through every hop.

I quite like the bounds calculation process too, where the parent gives its children size "suggestions", they respond with their desired sizes, and the parent determines the final positioning (but has to respect each child's desired size).
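
That negotiation can be sketched in a few lines of plain Rust (no Bevy or SwiftUI types; `Layout`, `Fixed`, `Flexible`, and `VStack` are hypothetical names just to illustrate the protocol): the parent proposes a size, each child answers with the size it wants, and the parent must respect the answers.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Size { w: f32, h: f32 }

trait Layout {
    /// Given a size proposed by the parent, report the size this element wants.
    fn desired_size(&self, proposal: Size) -> Size;
}

/// A leaf with a fixed size: it ignores the parent's proposal entirely.
struct Fixed(Size);
impl Layout for Fixed {
    fn desired_size(&self, _proposal: Size) -> Size { self.0 }
}

/// A flexible element: it accepts whatever the parent proposes.
struct Flexible;
impl Layout for Flexible {
    fn desired_size(&self, proposal: Size) -> Size { proposal }
}

/// A vertical stack: proposes an equal share of its height to each child,
/// then sums the answers, respecting them even if they differ from the share.
struct VStack(Vec<Box<dyn Layout>>);
impl Layout for VStack {
    fn desired_size(&self, proposal: Size) -> Size {
        let share = Size { w: proposal.w, h: proposal.h / self.0.len() as f32 };
        let mut total = Size { w: 0.0, h: 0.0 };
        for child in &self.0 {
            let d = child.desired_size(share);
            total.h += d.h;
            total.w = total.w.max(d.w);
        }
        total
    }
}
```

For example, a stack holding a fixed 100x30 child and a flexible child, proposed 200x100, offers each child 200x50; the fixed child keeps 100x30, the flexible one takes 200x50, and the stack reports 200x80 overall.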

Worth watching the WWDC '19 & '20 talks on it all for ideas.

@tektrip-biggles: FYI, @raphlinus has done a bunch of great research in this whole topic, as well as in comparison to existing systems like SwiftUI:

https://raphlinus.github.io/rust/druid/2019/10/31/rust-2020.html

A few thoughts:

Screen Reading: If the goal is for Bevy UI to support in-game UI as well as the editor, that means the UI will need to support gamepad-only and keyboard-only navigation of UI from the beginning. So in a lot of ways, you're already halfway there to basic screen-reading support. The bigger question is one of UX design (ie. does the Editor require the mouse?), rather than implementation.

FRP: Modern FRP-based approaches work fantastically for the web. I think they're really strong when you have a flat UI that is structured as a hierarchy. Game UI can be much more complex. In-Game UI often isn't a flat plane, nor a tree. There might be a myriad of text cards, health-bars, holographic displays, etc. Depending on the game, it may be hard to treat this as a single tree.

Additionally, there are entire projects currently figuring out FRP in Rust (Yew, etc). It's been a massive undertaking, and most require lots of macros, templates, generics, etc. And that's without having to build a renderer. So I worry about complexity and scope here.

What I'd favor is a general, high performance UI rendering engine, integrated with the ECS. An FRP crate could be built on top of it, but wouldn't be explicitly baked into the solution. That would allow UI-heavy and 2D games to use FRP as needed, but not require jamming all other use cases inside of it.

I think Godot can serve as a nice project to look at while developing Bevy UI.

@Kleptine

It's been a massive undertaking, and most require lots of macros, templates, generics, etc. And that's without having to build a renderer. So I worry about complexity and scope here.

What I'd favor is a general, high performance UI rendering engine, integrated with the ECS. An FRP crate could be built on top of it.

I think that's right. Especially the part about complexity and scope. I really really like developing UI in the FRP style and would love if Bevy supported it, but it's a mountain of work that we don't need to take on right now.

I think really nailing a UI rendering engine and ECS would give us a good foundation to build different higher-level UI experiments on. I could see a React-like UI system someday that treats the ECS UI the way React treats the DOM.


I 100% agree with this @ncallaway and I've had similar thoughts over the last few days. The high-level API is something that can - perhaps should - be developed only once the underlying system has been established and is found to be performant and fit for purpose.

If it means writing very verbose or repetitive code for the time-being, that is a worthwhile trade-off IMO.


Also agree with @Kleptine that the high-level stuff can be user-land crates for the time being.


One point @Kleptine

  • Styling: CSS-like, or in-code?

Why not start in-code and then we can add some optional CSS-like solution later (I'm thinking something like how CSS-in-JS works, i.e. it mostly compiles to target language at build time so everything is static at runtime).

I could see a React-like UI system someday that treats the ECS UI the way React treats the DOM.

Precisely my thoughts as well. The ECS would be a great way to store this information in a flat structure. You should take a look at the way Unity stores their UI components in a raw buffer (in the UIElements design talk). It's fairly similar.

Why not start in-code and then we can add some optional CSS-like solution later.

I think that could work out. Unity used in-code styling for their IMGUI implementation. One of the challenges was just that it was cumbersome to feed all of the different styles through all parts of your program. It might be better, though, if styles could be stored as constants somehow and exported statically.

So I think some more succinct form of styling would be nice. A CSS-like alternative could be added as a crate, although it might make the API a little more challenging to design. But I agree it's a fairly big task, probably too big for now. Personally, I would be averse to anything other than a CSS subset. There's enough UI styling languages out there that there's no need to reinvent the wheel.

Edit: Another downside of code-only styling is that you need to recompile to see changes. Bevy is already fast to recompile, but you might still have to spend a few minutes replaying the game to get back to the UI you were looking at. It'd be ideal if styling were an asset that could be hot-reloaded in-place, just like any other asset.


It'd be ideal if styling were an asset that could be hot-reloaded in-place, just like any other asset.

Anyone please correct me if I'm wrong, but I think styling as it currently works in Bevy UI can be saved/loaded from scene assets at runtime.

It might be better, though, if styles could be stored as constants somehow and exported statically.

I was assuming that this would be the case, or at least that UI styling would be built up out of composable functions (i.e. "mixins"). I think this would already be quite easy to do with Bevy UI, but I haven't actually tried it.
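
One way the "mixins" idea could look, as a hedged sketch: a mixin is just a function that mutates a style value, and mixins compose by applying in order. The `Style` struct here is a hypothetical simplified stand-in, not Bevy's actual `Style` type, and `card`/`large_text` are made-up example mixins.

```rust
// Hypothetical simplified style data; Bevy's real Style has many more fields.
#[derive(Default, Debug, Clone, PartialEq)]
struct Style {
    padding: f32,
    margin: f32,
    font_size: f32,
}

// A mixin is just a function that mutates a Style in place.
type Mixin = fn(&mut Style);

fn card(style: &mut Style) {
    style.padding = 16.0;
    style.margin = 8.0;
}

fn large_text(style: &mut Style) {
    style.font_size = 24.0;
}

/// Build a style by applying mixins left to right; later mixins win on conflicts.
fn styled(mixins: &[Mixin]) -> Style {
    let mut s = Style::default();
    for m in mixins {
        m(&mut s);
    }
    s
}
```

Usage would be `let s = styled(&[card, large_text]);`, and since mixins are plain functions they can live in constants-like modules and be shared across widgets without a styling language.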


I wrote in Discord about the possibility of adopting the Every Layout (https://every-layout.dev/layouts/) primitives as our "building blocks" for layout. I think we could potentially just copy the CSS from the Every Layout components into Bevy UI as some sort of "style mixins".

I'm a big fan of their composable approach to flexbox layout primitives, and since Bevy UI currently uses the flexbox model anyway this would be a good fit: https://every-layout.dev/rudiments/composition/

P.S you have to pay for access to the book, but the CSS itself is not patented, so we can use it.

As to screen reader support in the editor/UI, I can speak a bit to that.

I'm a blind game developer working on adding accessibility to Godot. It may not be as relevant here as it is in Godot since I gather that more code-based workflows are first-class here in a way they aren't with Godot, but a few use cases I have in mind:

  • Working alongside sighted developers who would prefer a visual editor. Right now we've got a handful of sighted developers building audio-only experiences for us, but I think the feedback loop could be drastically tightened if they could work alongside blind developers vs. filtering feedback through slow playtest cycles. Right now hiring me as a Unity developer is probably impossible since Unity itself is inaccessible, but my work makes Godot accessible enough that I'm able to build a game with a hybrid editor/CLI workflow. It'd be less hybrid if I'd started my accessibility work earlier.
  • Some tasks are hard to get right in text. I'm specifically thinking of tilemap editors, and have some PoC code that replaces Godot's tilemap editor with an alternate accessible interface. Screen reader access to the editor would let me do something similar with Bevy.
  • Accessible modding tools. Godot's system for DLCs/mods offers using the editor as one option. If I build a moddable game, being able to ship a cross-platform accessible editor would be nice.

I'm running up against some Godot limits that make accessibility challenging to implement, so tentatively pencil me in as willing to help with Bevy UI accessibility. My big condition for helping out is that it be baked directly into the UI (i.e. separate components are fine, but I'd want to ship it as a UI requirement that someone might choose to disable for whatever reason, rather than as a third-party crate). I'd also like it to be as integrated with the repo as the UI crate is, such that CI failures breaking accessibility are blockers. In other words, I'm fine with people not launching the screen reader system if they'd rather not, but I'd want UI consumers to automatically have it and be assured that it works with the most recent UI. Hope that's acceptable.

In terms of making my job easier, here are two bits of advice:

  1. Make keyboard navigation first-class. Godot has this problem right now. The tree widget is absolutely broken for keyboard navigation, and fixing it isn't a priority. I'd do it myself, but since I can't see what I'm doing, it's like coding with a strip of cloth between me and the keyboard. Not saying it needs to be exhaustive and platform-specific, but keyboard/gamepad support should be consistent, and fixing breakage should be prioritized.
  2. Send events for just about everything. Focus enters/leaves a widget. Text is added to, or removed from, a TextEdit. Focus/selection moves around an editor. I need to intercept just about everything and provide a speech representation. Godot has some gaps here too, and there's been a bit of resistance to adding signals I need, meaning accessibility becomes more and more hacked-on.
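
The granularity point 2 asks for might look like the following sketch, in plain Rust. None of these names exist in Bevy; `UiEvent` and `describe` are purely illustrative of the kind of event stream a screen-reader system would consume and turn into speech.

```rust
// Hypothetical fine-grained UI events a screen-reader system could subscribe to.
#[derive(Debug, Clone, PartialEq)]
enum UiEvent {
    FocusEntered { widget: String },
    FocusLeft { widget: String },
    TextInserted { widget: String, text: String },
    TextRemoved { widget: String, range: (usize, usize) },
    SelectionMoved { widget: String, index: usize },
}

/// A screen-reader system would map each event to a spoken representation.
fn describe(event: &UiEvent) -> String {
    match event {
        UiEvent::FocusEntered { widget } => format!("{widget}, focused"),
        UiEvent::FocusLeft { widget } => format!("left {widget}"),
        UiEvent::TextInserted { widget, text } => format!("{text}, typed into {widget}"),
        UiEvent::TextRemoved { widget, .. } => format!("text removed from {widget}"),
        UiEvent::SelectionMoved { widget, index } => {
            format!("selection at {index} in {widget}")
        }
    }
}
```

The important property is that every interaction emits an event by default, so accessibility code can observe everything without per-widget hacks.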

Anyhow, hope that helps. Sorry for going long. I'm about to launch an accessible Godot game, but am happy to try Bevy for my next title and work on accessibility. One good aspect of audio-only games is that they don't exactly require lots of resources--a blank screen and good audio are usually enough. :)

I'd vote for accessibility / screen-reader friendly support as part of Editor Ready UI too. I think @ndarilek lays out great reasons why accessibility is important in the Editor itself.

The other reasons why I'd vote to tackle it as early as possible (even if it does expand scope somewhat) are:

  • It's much easier to build-in early when you don't need it than bolt-on late when you do (similar to i18n).
  • If accessibility support is baked into the core of the UI, then it's much more likely that games using Bevy will be more accessible by default.

I don't necessarily think we'll be able to get the first version of the editor to be in a place where it integrates with popular screen-readers on all platforms, but I would really like the core of the UI system to at least have all the pieces in place so that if someone wanted to add screen-reader support the UI system doesn't get in the way.


Thank you so much @ndarilek!! This is really insightful.

Ideally, and especially because we have prior art in the form of Godot, we shouldn't need to rely on blind/partially sighted contributors to implement this. So sign me up as well.


Copying @ncallaway's comment from Discord.

@ncallaway:
From the todomvc I've been working on the missing pieces that'd be useful from the UI system:

  • Bug: Despawned Node Entity Continues to Affect Layout. #351
  • Bug: Despawned Node Entities are not removed from their parent Node's Children. #352
  • Bug: Move Interaction::Clicked to Release not Press (or have both available).
  • Bevy UI Feature: pixel-ratio scaling. #195
  • Bevy UI Feature: Additional styling (e.g. border, box-shadow, text-decoration, etc). [CSS / border] [CSS / box-shadow] [CSS / text-decoration]
  • Bevy UI Feature: Focus model / events (related to #54 ?).
  • Bevy UI Feature: Interaction::DoubleClicked (related to #389 ?).
  • Bevy UI Feature/Research: Animation (related to #91 ?).
  • UI Primitive: Scrolling container.
  • UI Primitive: Text Input component (single-line first, but eventually textarea too).

Based on discussion on Discord, I think #195 should be a priority right now. I'm keen to get going on some of the styling stuff but I don't want to have to re-do too many bits for DPI scaling.

There is an OpenGL accelerated GUI framework in Makepad: https://github.com/makepad/makepad which looks incredible, though the code doesn't seem super well documented nor easily usable outside of makepad.

@cart
At the risk of being very ignorant, is screen-reader friendliness a high priority for a visual editor?

Really appreciate that @ndarilek took the time to share their own experience with you.

I can also recommend reading the "Why?" section of @ndarilek's `godot-accessibility` README. (Also, I would like to acknowledge @cart's willingness to admit not having knowledge around this issue and openness to learning.)

Examples like "Working alongside sighted developers who would prefer a visual editor" really highlight to me the importance that the tools we create not exclude people from being part of a development team by being inaccessible.

And, the reality/virtue of developers being "lazy" means that, as suggested by @ndarilek, the best approach is to have the support included in the base system, enabled by default.

When @cart announced Bevy, it was only a few days after Godot had received significant (and justified) criticism in relation to accessibility in this thread on "Making Advanced GUI Applications with Godot". I'd intended to drop a link to the comments (which I'd recommend reading for more context/perspectives) but hadn't, so I'm glad @stefee raised accessibility as a consideration early on.

For people on the Twitters, I can also recommend following @ianhamilton_ ("Accessibility specialist, helping studios avoid excluding gamers with disabilities") which I've found helpful for gaining further insight into both the positive & negative sides of the current state of accessibility in relation to games.

Given the abundance of UI libraries and frameworks using Google's Material Design language specification, its visual components are an already very well-known abstraction for front-end devs: https://material.io/design

The spec answers a lot of the UI abstraction questions better than anything else I've seen.

I'm also a visually impaired developer, so sign me up for any accessibility-related testing and help. I have nothing to add as of now; everything was said above already, so this comment is basically an "I'm here as well" kind of message.

Notes are being assembled on hackmd to track Discord conversation.

Interesting prototype from @james.i.h on Discord: https://gist.github.com/jihiggins/bd4e6cfe76a28d8913d641ea2ea9ad65

For styling I think it would be good to take inspiration from the tailwind CSS framework. I personally hate working with CSS and I really hope that Bevy doesn't adopt it or something like it.

I personally hate working with Tailwind and love working with CSS, and I really hope that Bevy does adopt it or something like it :)

Instead of Tailwind I would recommend taking inspiration from Bulma, which has both utility and component-like classes. I think Bevy needs more built-in components/widgets than it has right now.


My 2 cents: I think choosing the way to go is a clash between major factions of the frontend community.

There is a big chunk of people, who love CSS, maybe because they do it as their day job and feel comfortable with it. They may want to use bevy as a hobby engine. Some may work on AAA titles, as there are also AAA HTML/CSS UI libraries available - example: Coherent Gameface.

There are people, who despise CSS. That's mostly people who do little to no work with it and feel overwhelmed because they have no clue how to structure and work with it to make everything fall in place. My fear is that this is mostly the AAA folk, who do no-/low-code (as seen in Unreal Engine and Unity, for example) and people who don't use the web (mobile and desktop devs).

This leads me to believe that there are three distinctively different main target audiences: The "web lobby", the "please-no-web lobby" and the "low-to-no-coders".

Bevy wants to be an engine by and for developers - as it reads on the website, so I assume it's for everyone who develops a game, including the three distinctions above. Getting all of that onto one page seems to be near impossible to me, if it all should be core bevy.

Which is why I would suggest to go and spike the issue. What do all of the paths have in common conceptually? What kinds of properties do they set, and how does layouting work? What happens when multiple screen proportions (4:3 vs 16:9, for example, but also mobile vs tablet and similar) and resolutions (1280 x 720 vs 2560 x 1440 and similar) have to be targeted? It may be a good idea to create a comparison matrix and have it filled out by different developers. Make it viewable to everyone and foster discussion, so that modern best practices are used.

Then come up with a base for them all (leaving out their respective golden kitchen sinks and special features) and set that as the Bevy internal standard. From that point, plugins can take over, so that a bevy_web_native_ui, bevy_tailwind_ui, bevy_flutter_ui, bevy_omg_is_my_ui_nice, bevy_whatever_its_ui, and so on can exist and implement whatever standard they want. Bevy users can then explicitly install the one they want and not care about the others. Some of them may be officially endorsed and land examples in the book, so people can get started faster.

For the editor, that would mean either building on the internal standard, or developing one of the plugins in the process. The former sounds cool, since it makes sure that the base is solid, rich enough, and already enables people to create things out of the box. The latter has the obvious benefit that it integrates with existing tools, attracts people who are familiar with the technology, and also already provides one of the plugins. Again, which one, though? You will always turn away some groups ☹

I'd second what @minecrawler wrote above. There is one important omission in it (sure, it wasn't meant to be exhaustive, but anyway 😉):

Which is why I would suggest to go and spike the issue. What do all of the paths have in common conceptually? What kinds of properties do they set, and how does layouting work? What happens when multiple screen proportions (4:3 vs 16:9, for example, but also mobile vs tablet and similar) and resolutions (1280 x 720 vs 2560 x 1440 and similar) have to be targeted? It may be a good idea to create a comparison matrix and have it filled out by different developers. Make it viewable to everyone and foster discussion, so that modern best practices are used. Then come up with a base for them all (leaving out their respective golden kitchen sinks and special features) and set that as the Bevy internal standard.

Namely the "composition in time" (the paragraph above discusses composition in space which is usually less of an issue). Composition in time is less visible, but much more aggressive and strict when it comes to options users/devs will get when using Bevy.

Please read https://potocpav.github.io/programming/2020/05/01/designing-a-gui-framework.html to understand "composition in time" and why it's a much bigger issue than "composition in space". A good test case for "composition in time" could be this: vlang/ui#7 (comment).


Btw. don't get the impression after reading the blog post that "composition in time" requires immediate mode UIs. It's fully agnostic, even though the author of that blog post uses an immediate mode library as the backend for his UI library.

While I don't disagree with your general point, I'm pretty sure there are plenty of people that have enough experience with CSS to see the real disadvantages.

I don't see any argument against CSS as long as it is implemented as an opt-in abstraction, and not as the core mechanic. Why shouldn't you have both?

I'm pretty sure there are plenty of people that have enough experience with CSS to see the real disadvantages.

I don't think you have to look too far. I do work with CSS in small and big applications in several professional corporate teams every day and even run trainings. I know that CSS has its pros and cons, just like every other technology out there. However, this discussion should not go in an emotional direction about why one tech is bad. We want to create a UI for Bevy, so we should be problem-oriented. Many people prefer CSS; many people don't like it. Let's roll with that and find a solution to make them all happy 🙂

"composition in time"

That's a very interesting topic which we have not touched yet. It's distinctly different from deciding whether we want to go web or not. However, I also fear that multiple architectures are possible.

While I am not sure I fully grasp how Concur works, I am not entirely a fan of it (re-creating elements on change sounds inefficient; especially animations might be very scary). The author also says that it is mostly viable with retained mode libraries like React (the VDOM only propagates what really changed to the browser, so it shouldn't recreate the entire button) and immediate mode UIs like ImGui, which are lightweight and entirely recreated on each draw anyway.

What I'd like to see, though, would be the usage of the ECS. Would it for example be possible to wire certain widgets into it by giving them a certain query, which provides them with the right data at the right time? A health bar could get a query, which delivers it the current and max values (plus styling) onChange, making the ECS the single source of truth.
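
That health-bar idea can be sketched in std-only Rust (no actual Bevy query types; `Health`, `HealthBar`, and `sync` are hypothetical names): a widget remembers the last value it saw from its query and only updates its rendered state when the underlying data changes, keeping the ECS-style store as the single source of truth.

```rust
// The component the widget's query would deliver each frame.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Health { current: u32, max: u32 }

struct HealthBar {
    last_seen: Option<Health>,
    fill: f32, // 0.0..=1.0, what would actually be rendered
}

impl HealthBar {
    fn new() -> Self {
        Self { last_seen: None, fill: 0.0 }
    }

    /// Called every frame with the queried value; returns true only when
    /// the widget actually changed (i.e. an onChange-style update fired).
    fn sync(&mut self, health: Health) -> bool {
        if self.last_seen == Some(health) {
            return false; // no-op frame: nothing to re-layout or re-draw
        }
        self.last_seen = Some(health);
        self.fill = health.current as f32 / health.max as f32;
        true
    }
}
```

In Bevy terms the `sync` call would be driven by a system iterating a query over `Health` components; the change check is what makes a no-op frame cheap.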

However, it is another point which we should talk about. Is it possible to abstract it so that we can package plugins (for this case, I doubt it, but may be wrong), or do we need to choose one? I'd very much like to hear more concepts :)

"composition in time"

...
What I'd like to see, though, would be the usage of the ECS. Would it for example be possible to wire certain widgets into it by giving them a certain query, which provides them with the right data at the right time? A health bar could get a query, which delivers it the current and max values (plus styling) onChange, making the ECS the single source of truth.

Yep, I'd very much like to see this as well. It's basically a sort of pub-sub pattern twice in a row: input sources (timer, ECS button component click, keyboard press, ...) publish to the "brokers" interested in them (subscribed to the given topic, in pub-sub terms), and then these notified "brokers" publish to the (e.g. ECS) components interested in them (which change the health bar in reaction).

Finding a neat syntax for this in a "structured programming language" appears to be nontrivial (in a no-concurrency environment this could be easily implemented using gotos, but elsewhere I can think only of heavyweight structures like Go channels, which still do not provide the syntactical simplicity one would expect from such a pattern).
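
To make the shape concrete, here is a tiny std-only sketch of one hop of that pub-sub idea (the `Broker` type and topic names are invented for illustration; a real Bevy version would route through events and systems): sources publish to a topic, and the broker fans out to subscribed callbacks, which would in turn mutate UI components.

```rust
use std::collections::HashMap;

// Maps topic names to the callbacks subscribed to them.
struct Broker<'a> {
    subscribers: HashMap<String, Vec<Box<dyn FnMut(&str) + 'a>>>,
}

impl<'a> Broker<'a> {
    fn new() -> Self {
        Self { subscribers: HashMap::new() }
    }

    /// Register a callback for a topic.
    fn subscribe(&mut self, topic: &str, f: impl FnMut(&str) + 'a) {
        self.subscribers
            .entry(topic.to_string())
            .or_default()
            .push(Box::new(f));
    }

    /// First hop: a source publishes; second hop: each subscriber reacts.
    fn publish(&mut self, topic: &str, payload: &str) {
        if let Some(subs) = self.subscribers.get_mut(topic) {
            for f in subs {
                f(payload);
            }
        }
    }
}
```

Chaining two brokers (input sources into the first, UI components subscribed to the second) gives the "double pub-sub" described above.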

However, it is another point which we should talk about. Is it possible to abstract it so that we can package plugins (for this case, I doubt it, but may be wrong), or do we need to choose one? I'd very much like to hear more concepts :)

I think the orthogonal distinction between "composition in space" and "composition in time" holds for any "concept". The question is how to implement this in existing languages - and here I expect many answers 😉 (just read the whole thread vlang/ui#7 incl. all linked threads recursively to get "shocked").


I don't know, maybe it's time to rethink "structured programming" on its own and revise it by adding such a "double pub-sub" construct? Mech lang does it (though it's a bit of an extreme - basically all "variables" are "live" by default, which is inefficient if the frequency with which the variable changes is lower than the cost of the bookkeeping it requires).


If we're discussing composition of values over time, that's exactly what FRP is designed for. I have already created a robust, production ready, feature-complete, thread-safe, zero-cost ultra lightweight FRP library for Rust:

https://crates.io/crates/futures-signals

I have used it to successfully create one of the fastest DOM frameworks in the world, though of course the FRP library is agnostic so it can be used for anything (not just the DOM).

Since Bevy already uses Futures internally, it is trivial to integrate futures-signals (since it is built on top of Futures, and integrates perfectly with Futures).

But integrating it with the ECS sounds rather difficult, since they are fundamentally different paradigms.
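
For readers unfamiliar with the FRP style, here is a std-only toy sketch of the core idea behind a crate like futures-signals: a mutable cell that notifies registered listeners whenever its value changes. This is NOT futures-signals' actual API (the real crate delivers changes asynchronously via Futures/Streams); `Signal`, `for_each`, and `set` here are simplified, synchronous stand-ins.

```rust
use std::{cell::RefCell, rc::Rc};

// A toy reactive cell: holds a value and a list of change listeners.
struct Signal<T: Clone> {
    value: RefCell<T>,
    listeners: RefCell<Vec<Box<dyn Fn(&T)>>>,
}

impl<T: Clone> Signal<T> {
    fn new(value: T) -> Rc<Self> {
        Rc::new(Self {
            value: RefCell::new(value),
            listeners: RefCell::new(Vec::new()),
        })
    }

    fn get(&self) -> T {
        self.value.borrow().clone()
    }

    /// Run the callback with the current value, then on every future change.
    fn for_each(&self, f: impl Fn(&T) + 'static) {
        f(&self.value.borrow());
        self.listeners.borrow_mut().push(Box::new(f));
    }

    /// Update the value and notify all listeners synchronously.
    fn set(&self, value: T) {
        *self.value.borrow_mut() = value;
        for f in self.listeners.borrow().iter() {
            f(&self.value.borrow());
        }
    }
}
```

A UI widget would call `for_each` once at construction and then automatically re-render whenever `set` fires; the open question in this thread is what owns the `set` side when the ECS is the source of truth.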

But integrating it with the ECS sounds rather difficult, since they are fundamentally different paradigms.

That's exactly what I said above: The question is how to implement this in existing languages - and here I expect many answers 😉.

Though I'm almost sure the "double pub-sub" concept could fit ECS pretty well (efficiently and readably/understandably). I'm experimenting with it a bit now (in Python though) and if I find something I'll come back at some point.


Btw. futures-signals is awesome - I'd be quite interested in your thoughts, @Pauan, on why integrating it with ECS sounds rather difficult (but maybe we should discuss this elsewhere).

Edit: I made a separate topic about "ECS versus signal-slot/pub-sub" - see Pauan/rust-signals#31 .

Concerning UI, I've been working with WPF (XAML), Xamarin, Unity, Unreal and Flutter both as solo developer and in small teams with graphic designers. Strictly speaking about developer experience, I prefer by far Flutter's way of doing things: using the same language to both write the logic and the UI and decoupling the two with the BLOC pattern and with streams. On the other hand, I saw my colleagues who don't know anything about programming having a much more pleasant experience using Unreal's blueprints.
For Bevy, if it's aimed to be "by developers for developers", I cannot recommend learning from Flutter enough, at least initially; then, if technically possible, integrating a visual designer to allow artists to design things by "dragging and dropping" widgets.

I've been a webdev for about 25 years. One thing that's become clear to me over the last few years: React was a good idea.

I've recently started playing with Bevy too. I've made lots of toy games over the years, often doing as much as I can from scratch. But lately I want to just use good, solid tools. I'm trying harder to not re-invent the wheel. I totally support anyone who wants to build a brand new FRP UI library for Bevy, but in my mind, there are already great UI building libraries out there, so why not stand on the shoulders of giants?

So, in the name of not reinventing the wheel, I would like to use React for my game UI. Doesn't look like it's possible to render a browser page into Bevy right now, but that got me thinking...

React actually already has a way of building apps that target platforms other than the web: React Native. And there are a number of targets ("platforms") that various non-Facebook people are maintaining.

It's apparently not easy, but making it easier to maintain a React Native platform is one of the major goals of their next rewrite: React Native Fabric. It seems to be deep in early alpha territory at the moment, but that might not prevent at least doing some research on it.

Perhaps the most straightforward implementation of React Native in Bevy would be to fire up a V8 instance, build your mini apps in JavaScript, and then just build the React Native rendering service that ties them into Bevy. I suspect one could do so iteratively, building out the various modules and attributes as you needed them.

Another approach would be to just adopt the React Native API, but to keep everything in Rust. That way you can ditch the JavaScript runtime. You would have some guarantees of having a robust, familiar API design but no real constraints on how it would be implemented or integrated. Still, at that point it seems like you might as well just design something from the ground up that's as Rust-y as possible.


@erikpukinskis Note that there are a lot of quirks with React, it's a complex beast. And it was also designed specifically for JS, it would be difficult to integrate it with Rust.

The idea of React is indeed quite good, it's essentially an immediate mode GUI, which is a lot nicer than the DOM (which is retained mode). However, we can gain the advantages of immediate mode GUI without React.

A Bevy-specific UI can be much lighter-weight, more intuitive, more Rust-y, have better integration with ECS, and have more features.

> I've been a webdev for about 25 years. One thing that's become clear to me over the last few years: React was a good idea. [...]

Just a heads up:

Elm made FRP popular and inspired React directly.
React only went back to JavaScript for the obvious reasons.

Elm then quickly realized that their own, new MVU-based "Elm architecture" has a couple of significant improvements compared to FRP, and dropped FRP already 4 years ago or so.


With the mockup that I am playing around with in #85 (comment), I took inspiration mainly from the visuals of VS Code: flat UI where only the colors are theme-able. What users would distribute to each other would be color themes (palettes), not CSS themes.

This would avoid all the work of a CSS-based architecture, which would probably support only a subset of CSS anyway. In my experience with GTK3, CSS themes may be very flexible, but I have never seen the GTK community fully satisfied with them; themes often break with GTK updates. And consider that the GTK/GNOME development community is far bigger and many years older than the Bevy community.

In my opinion, for the editor specifically, the focus should be on: 1) DPI scaling, 2) a clean look and 3) color theming. For use of the UI outside the editor, then explore theming all the way.

Hello! How about rg3d-ui? I've successfully been building an editor for the rg3d engine with it. IMO it has almost everything that is needed for an editor, including docking, windows, dropdown lists, text boxes, trees, file browsers, and even specific widgets that allow editing a Vector3 and so on. It can be easily integrated into Bevy - all you need to do is write a renderer (rg3d-ui knows nothing about rendering to the screen) and feed it window events.

I am not a game engine developer of any kind and not a GUI person either, but this talk seems very relevant to the problem.

Leaving my two cents, so to say.

https://youtu.be/yYq_dviv1B0

EDIT: Especially more towards the end >40 min or sth

Great talk -- the Our Machinery people are really smart folks.

I will say that while IMGUI works for smaller engines (and may even work for Bevy!), Unity started with an IMGUI approach and has since spent the last 4 years attempting to migrate off of it, for performance reasons. I think it's great for some basic UI, but performance suffers for more complicated layouts.

I might be mistaken, but I'm fairly sure UE's editor UI (Slate?) doesn't use IMGUI. https://docs.unrealengine.com/4.26/en-US/ProgrammingAndScripting/Slate/Architecture/

Yeah, I agree advanced layout isn't always necessary. :) But just a data point that Unity eventually found that it was. Bevy might never be nearly that complex.

Ah, or maybe you're just saying that IMGUI can tackle anything the UE editor does? Might be true!

A good example for me is something like flex-box, which makes certain types of UI (multi-column, size responsive, etc) very easy. IMGUI can't really do something like flex-box because you'd be re-solving the layout constraints every frame. You have to keep some state around because the layout itself is a very expensive operation.

NOTE: I am speculating very hard on this and I don't want to waste anybody's time on stupid meaningless thoughts, but I think what I have to say has some meaning, so sorry for long text.

In fact I really like the first (I think) proposed idea about how the UI should be made: with ECS, that is. In the grand scheme of things, if one just looks at the layout, it is mostly about manipulating "boxes", and CSS operates on exactly that. Every box has size and position components[1], and this should (?) be easy to batch, given that Bevy already does that automatically for in-game components. The same goes for any layouting. Flexbox is nothing but a container that breaks on constraints, so really all you have to program is a box that arranges its children into a dynamic number of boxes (given a set of breaking points, like in CSS); but that isn't a layout manager, this functionality is generic to all boxes. Of course I see why immediate mode struggles with layout, but I think there's no need for a "declarative" layout in this case; it should just be dynamic, achieved through, say, some columnCount(u8) component that tells the system this box wants to be broken into columns/rows, and how many.

You don't manipulate some global state; instead you manipulate the instances of entities directly, like one does in data-oriented development, so really, your UI drawing turns into "real-time game code". Immediate-mode UI is perfect for game engines in the sense that, just like a game, it draws every frame; and just as the talk above says, the UI usually requires a complete redraw very often anyway, because changes usually get reflected in different places in different views.

Any component can be contained in any other component because they are all generic boxes, that is, structs that have size and position; then it's a matter of composition. In this way, one could simply create "builders", basically shortcuts for UI elements, like some do with React components. Those are simply a set of instructions on what to put where, which get executed and refreshed later.

[1]: And other niceties like padding, margin, etc.

EDIT: Here's a hyper illustrative diagram of how it could work out (maybe)

diagram-01

EDIT: Now in the hackmd link I see that this approach was already considered, so I was talking here for nothing, sorry for wasted time, reader.


@mjoork:

> Flexbox is nothing but a container that breaks on constraints

I think you're underestimating how complicated flex-box is. It's a lot more than just line-wrapping on overflow...

In fact, line-wrapping is almost the exact opposite of what flex-box does: flex-box tries to keep everything on the same line, by automatically resizing children so that they fit within the line.

So in your example, flex-box would be resizing the children so that they all fit, without any wrapping.

But that's just scratching the surface... you can specify that some elements should have a fixed size (no resizing), but other elements on the same line should be dynamic (automatically resize). And you can set further constraints, for example you can say "this element should always be 2x the width of this other element". And you can say "I want this box to automatically resize, but keep it at this minimum width/height".

And then there is the difference between a horizontal flex-box and a vertical flex-box, and then there are various alignment modes (left/right/center/stretch), and because you can nest flex-boxes inside of other flex-boxes, that means the size of the parent can depend on the size of the children...

Doing layout for flex-box is very complicated, it is not something you want to do every frame. That doesn't mean you can't use immediate mode GUIs, it just means that the flex-box layout has to be done in user-land, not automatically inside the framework.

Ok, thanks for that explanation.

I just really want to know: what good will flexbox bring in this case? Is it justified? Can't Bevy just use a grid layout with differently sized (fixed-width or "auto") columns for something like this:

screenshot

I am just not sure that there are components that will really benefit from such a dynamic thing like a flexbox.


@mjoork I have quite a lot of experience with flex-box in HTML, and yes I think it is absolutely worth it. Simple cases don't need flex-box, but as soon as you try to do something complicated it really benefits from flex-box.

@Pauan It might seem like I am not familiar with the web at all 😄, but no, I did web dev and tried all the bells and whistles; apparently I really do underestimate flexbox for some reason. Perhaps the UI should be designed not to have complex components in the first place, because such components might contribute to bad UX. The impression I got from working in the Unreal Engine editor and Unity is that horizontal space is usually taken by the viewport or some big view panel. In this context, a flexbox that does a good job of keeping elements on one line loses its advantages, because it would be better if the components were compact horizontally. Horizontally compact components are better off just using columns and rows, which wouldn't wrap even when shrunk horizontally to really small sizes.

But again, this is just my impression.

Thank you for your feedback


@mjoork The point of flex-box is that it is both extremely powerful and flexible (allowing you to create almost any type of UI you want), while also being extremely simple to understand and use. Grids are great, but they are very limiting in what they can do.

If you want to have multiple lines (or a grid), you just put flexboxes inside of flexboxes. For example you can have a horizontal flexbox which contains multiple vertical flexboxes (or vice versa). By nesting various combinations of flexboxes you can create complex UIs by using simple parts. This allows you to create essentially any UI layout you want with just 2 primitives (and a handful of options).

Another way to think of it is that a horizontal flexbox is a row, and a vertical flexbox is a column. But unlike a grid you don't need the same number of rows/columns, it's a lot more flexible.

Then... a table? (Perhaps a grid isn't exactly a table, but the idea is the same.) A table doesn't require n_rows == n_cols, and you can size different rows and columns differently and also span items across them. Having items with different vertical sizes is bad UX anyway. It's also a single primitive. Not to mention that before other layout methods became available on the web, tables ruled the layout domain. With breakpoints they can be given that additional flexibility. They are also easier to understand than flexbox, because there's no magical computation going on behind them. In HTML they have the limitation of being either completely auto-sized or completely manually sized, but there is a way to make this possible in this domain.


A table is a grid. You misunderstand what I meant... I didn't mean that rows == columns, I meant that you cannot create layouts like this:

https://en.wikipedia.org/wiki/Holy_grail_(web_design)

And that's just a really simple layout, there are lots of examples of more complex layouts. Even the example you posted earlier (with the Static Mesh chair) cannot be done with just a grid, but it is trivially easy to do with flexbox.

> Not to mention that before other layout methods became available on the web, tables ruled the layout domain.

Yes, because flexbox did not exist at all, and so tables were literally the only layout option available. But everybody hated tables, which is why flexbox was developed. And then everybody immediately switched to flexbox as soon as it was available, because flexbox is simply superior to tables in every situation. The only time when tables are used is for actual tabular data.

> They are also easier to understand than flexbox, because there's no magical computation going on behind it.

That's incorrect, the algorithm for HTML tables is more complex than for flexbox. It also has a lot of the same issues as flexbox (parents depending on the size of children, mixture of static and dynamic sizes, etc.)

Flexbox is simpler and more powerful than tables, replacing flexbox with tables would be a step backwards.

Ok, I will agree with you just to stop this conversation, because I am clearly at a disadvantage; it's not like I am going to code this UI anyway.

One last thing, this is not true:

> Even the example you posted earlier (with the Static Mesh chair) cannot be done with just a grid, but it is trivially easy to do with flexbox.

And it was trivial. CodePen: https://codepen.io/Mjoork/pen/PomOEJb


@mjoork "CSS grids" is a very complex layout system with dozens of primitives and options. Certainly much more complex than flexbox (both in the implementation and the user API). And it's also less powerful than flexbox (since it's restricted to just a grid). And it's also slower to calculate than flexbox.

Just to give you a taste of the complexity of CSS grids: https://css-tricks.com/snippets/css/complete-guide-grid/

When I said "it cannot be done with grids", I meant that it cannot be done with a grid primitive, since the layout does not fit into a grid. Even your CodePen is using multiple nested grids (which can be accomplished far more simply by using nested flexboxes).

P.S. CSS grids do not replace flexbox, instead they're designed to complement flexbox, because flexbox is still often the better choice. CSS grids are used for layouts which are actually a grid, it's not intended as a flexible general-purpose do-everything API (unlike flexbox).

Even if Bevy implemented a CSS grid style system, it would still need a flexbox system in addition to that, because flexbox can do things that grids cannot.

P.P.S. None of my messages are an attack on you, I'm just explaining why flexbox is good (from a technical standpoint), because it's very powerful, reasonably fast, and very simple to use. None of the layout systems are fast enough to be run on every frame (unless the layout system is cripplingly simple), but like I said earlier it's still possible to use immediate mode GUIs, it just means the state has to be moved into user-land.

@Pauan sorry, I didn't mean my messages to sound that way; I didn't consider your messages attacking. Thank you for the explanation! Is state in the user-land bad?


> Is state in the user-land bad?

It's not necessarily good or bad, it's just different trade-offs. We'll have to experiment and see which type of state works best with ECS.

Could we ask other projects that have experience with state in ECS?

So, just for completeness and interest: there is this one framework that renders UI exactly the way games get rendered.

The guy has worked on the XBox SDK team and knows his stuff.

There are a couple of implications, and not all of them are replicable with Rust, but I still feel this approach can shine a light of inspiration on this question.

Timestamp direct to the architecture:
https://youtu.be/1QNxLNMq3Uw?t=553


@ShalokShalom Scenic does not have any way to create flexible UIs (UIs that can adjust based on the screen size). Instead it has a scene graph where everything is manually positioned in pixels. So you're saying things like "place this button at 0px,60px and place this other button at 0px,120px". That's it. There is no layout whatsoever, everything is manually positioned at fixed locations.

Its use case seems very different from our use case. They are prioritizing IoT devices (which are small and have low-power CPUs). They just want to display a simple UI for the operator of the IoT device. So their design makes sense for their use case. But what we need is something much more ambitious than that.

Yeah, and this is one of the aspects that are not relevant to our use case.

I meant more the way it is rendered, and there is no reason this could not be ported/implemented this way.

There are also a couple of other aspects that make no sense to use here, like that it's OTP-based 😉

It serves merely as an inspiration, as stated.


> I meant more the way it is rendered, and there is no way, this could not be ported/implemented this way.

Of course it could be... manually positioning elements is the simplest way of doing layout, it's trivial. It's also far too trivial, we need more than that.

It's also not about doing it manually. I give up 😄


@ShalokShalom But that's how Scenic works: you must manually specify the pixel position for every element. Yes it has a scene graph, so parents can affect the children, but that does not help you at all for positioning siblings. Every sibling has to be manually positioned. If you look at the Scenic examples, every single element has a translation specified.

Consider this simple scene graph:

Group1
+-- Group2
  +-- Button1
  +-- Button2
+-- Button3

Now let's say you wanted to position the Button elements so they don't overlap (which is usually what you want to do with a UI).

With Scenic, you have to specify the position of every single element. You need to manually move Button2 so it doesn't overlap with Button1, and you need to manually move Button3 so it doesn't overlap with Button1 or Button2. The parent-child hierarchy does not help us at all, we still have to specify every element individually.

And now let's say you wanted to add a new button inside of Group2... you would now have to manually move Button3 in order to make room for it. Moving Group2 doesn't work, moving Group1 doesn't work. Once again, the parent-child hierarchy does not help us.

The scene graph does not help at all, because we care about the visual relationship of elements (whether they are overlapping or not), and the visual relationship is completely different from the parent-child relationship. It's possible for elements to overlap even if they have completely different parents, and so doing layout requires global information, not parent-child information.

That's why every decent UI system has ways of doing auto-layout, including Unreal, which uses a flexbox/grid/wrap system. With a flexbox system, you do not need to specify the position for any of the Buttons, everything is handled automatically so that there are no overlaps.

For believers in ECS (myself included), there is research on ECS-based GUIs, with good prototypes showing how to deal with some of the harder problems (incl. drag & drop, etc.). Feel free to take a look: traffaillac/traffaillac.github.io#1 .

> @ShalokShalom But that's how Scenic works: you must manually specify the pixel position for every element. [...]

Yeah, an auto-scaling method could be implemented in Scenic, and I still consider its rendering method and overall architecture inspiring.

I think I accidentally started a more heated conversation than I intended. Whoops.

One thing I think everyone might keep in mind is that the different points of view in this thread primarily come from different use cases. There are three that I might summarize here:

  1. Bevy Editor UI. This use case only needs a UI library complex enough to render a game editor. This is a highly restricted use case, and can clearly get by without nicer features (see UE, which doesn't have data-driven styling or constraint-based layout). This library only needs to be advanced enough to make building editor tooling not a chore.

  2. Basic Game UI. 75% of games need very simple UI: a main menu, maybe some dialogue prompts and a few text bubbles. The structure of this UI is simple enough that basic layout, or even manually positioning elements, is plenty. No styling needed. IMGUI is great here.

  3. Advanced Game UI. The other 25% of games need an advanced UI system. Think EVE Online, which is essentially as or more complicated than any website or application. It's important to recognize that games are a super-set of software, and as such, the most software-like games will require many advanced features: flex-box, swappable styling, data-driven definitions, etc.

So when we're talking past each other, I think it's often because we're coming at this from one of these different use cases.

The question is (and this is largely for cart, I think), should there be a single unified UI library that supports each of these cases, or should it be fragmented? One tool means all of the knowledge you learn building editor extensions can be immediately applied to in-game UI. It also means that the Bevy Editor can benefit from the advanced features of use case 3. But it also means many developers will need to learn a more complex library even if they only need simple UI.

But there's not an obvious answer. It depends on the goals of Bevy project. Unity has to support all possible games on their platform. Bevy does not.

The idea has been to use the same UI system for the editor and for games just like Godot.

Easy answer for half of the question. The remaining question is then, does Bevy want to make EVE Online-like games possible with this same UI library? The same library that is intended for very basic UI?

I've already mentioned this above, but I still think the strongest option is to build a tight rendering framework for UI, and allow multiple layout / UI frameworks to build on top of it. This might be similar to the DOM / (React, Vue, Vanilla) web UI split, if you imagine the DOM as something that was actually performant and was stored in ECS.

Thank you all.
I think we are trying to design the perfect UI system for the engine.
Even so, this approach could prevent a lot of problems later on. Maybe we could do better.
IMO this type of perfectionist mindset is why most Rust crates are still pre-1.0. (Even so, this is also why I enjoy using Rust.)
Maybe if we could assemble what we need today and deliver results, Bevy could gather more fans and programmers to help us create better systems tomorrow.
I would say: develop what we need, not what would be perfect.
Especially given that Bevy is modular, we could change and replace systems easily, unlike other engines whose systems are so coupled to each other that changing one directly affects the others.
Another topic that I think would be fun: why not have an official game for Bevy, like Fortnite for Unreal Engine or Crysis for CryEngine, a game that can guide the development of Bevy?

A game of our own could shift the mindset from the perfect underlying implementation toward the most usable interface in practice.

@nical just put together this extremely insightful GUIs on GPU post for us, based on their past experiences building things like WebRender and Lyon. I'm dropping it here so interested people can read it (and so that we don't lose it).

I have experience combining ECS with UI frameworks, particularly UIKit, Flutter, SwiftUI, UnityUI.
Would it be something that BevyUI should be doing as well?

Just stumbled upon this library: https://github.com/jakobhellermann/bevy-inspector-egui#world-inspector. The world inspector looks very promising as a basic visual editor.

I'm just here to yeet another wrench into the proverbial pot 😏 Bevy should draw as much inspiration from https://svelte.dev/ as possible. I'll explain my rationale, but first the fun part: an example of what the API could look like...

// It's just a struct. Nothing special about it, except that Clone and
// Default are derived so the system below can construct and copy items.
#[derive(Clone, Default)]
struct NormalStruct {
    pub number: i32,
    pub should_remove: bool,
}

// This is where the magic happens. Like Svelte, this "compiles" a UI component into a collection
// of ECS components, bundles and systems. How those are laid out is purely an implementation
// detail. It could be an Entity for each UI node, a single entity for each component, or totally
// out-of-band. Bevy can use whatever paradigm is fastest; the developer doesn't care!
define_component! {
    MyListItem {
        item_just_clicked: bool,
        numeric: i32,
    }

    Template {
        <button bind:clicked={ item_just_clicked }>
            { format!("{:04}", numeric) }
        </button>
    }
}

// This component is marked `pub` meaning the bundle it generates will be pub in this module. It
// uses another component which is local to this module, otherwise you could import it using
// standard Rust `use` syntax. It also defines a system which will accompany the bundle.
define_component! {
    pub MyList {
        items: Vec<NormalStruct>,
        next_number: i32,
    }

    System {
        my_list_component
    }

    Template{
        <div>
        {#each item in items}
            <MyListItem numeric={item.number} bind:item_just_clicked={item.should_remove} />
        {/each}
        </div>
    }
}

// This is a standard ECS system. There is nothing special about it, apart from the fact that it
// will be registered for you iff (if and only if) at least one MyList component is included
// in the compiled UI tree.
fn my_list_component(mut query: Query<&mut MyList>) {
    for mut my_list in query.iter_mut() {
        // Remove items that were clicked
        my_list.items.retain(|i| !i.should_remove);

        // If there are no items left, then generate 10 new items.
        if my_list.items.is_empty() {
            for i in my_list.next_number..(my_list.next_number + 10) {
                my_list.items.push(NormalStruct {
                    number: i,
                    ..Default::default()
                });
            }

            my_list.next_number += 10;
        }
    }
}
use crate::my_list::MyList;

fn main() {
    App::build()
        .add_plugins(DefaultPlugins)
        .with_ui_root(MyList)
        .run();
}

I challenge you to write this in the other proposals. The `bind:` syntax does an immense amount of work for you.

UI Axioms

  • Layout is easy (flexboxes are fine). State binding, data-driven layout and back-propagation of state/events is very hard.
    • Ask yourself why there are so many web frameworks even though they all use the same HTML/Flexbox layout. The only thing they do differently, is state.
  • Humans have a very low cognitive saturation point (at least mine is 😂). UI is mentally demanding, a fact only grokked after years of suffering in the UI world, across a dozen ecosystems.
    • Managing an ECS hierarchy for UI myself, when you get into things like swapping components, lists, conditional renders or any other type of dynamic adding/removing/modifying components sounds like absolutely no fun. Reminds me of old WinForms and Unity.
  • Rust is not a markup language, it will always be bad at defining UI layout and state binding.
  • Compilers have no limits on cognitive saturation; we should strive to move as much brain-overhead to a compiler as possible.
  • Compilers write better optimized code than humans.
  • A clean break can be made between how you "define" a UI and how the engine "runs" the UI. Compilers have been doing this for ASM for years. Yet in the UI world we still effectively write ASM: we code directly in the paradigm that "runs" the UI.

So...

Separate the internal representation of UI (let's say one ECS Entity per UI Node, in retained-mode, for the sake of argument) from the developer experience.

  • The former optimizes runtime performance and binary size
  • The latter optimizes for ease of use and low cognitive overhead
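
For the sake of argument, here is a minimal, self-contained sketch of that split. None of this is Bevy code; `UiDef`, `Node`, and `compile` are purely hypothetical names. A declarative "definition" tree gets "compiled" into a flat retained representation, analogous to one ECS entity per UI node:

```rust
// Hypothetical sketch: the developer-facing "definition" of a UI...
#[derive(Debug)]
enum UiDef {
    Label(String),
    Button(String),
    Column(Vec<UiDef>),
}

// ...and the engine-facing retained representation: flat nodes with
// parent links, like entities in an ECS hierarchy.
#[derive(Debug, PartialEq)]
struct Node {
    kind: String,
    parent: Option<usize>,
}

// The "compiler": walks the definition tree and emits retained nodes.
fn compile(def: &UiDef, parent: Option<usize>, out: &mut Vec<Node>) {
    let id = out.len();
    match def {
        UiDef::Label(text) => out.push(Node { kind: format!("Label({})", text), parent }),
        UiDef::Button(text) => out.push(Node { kind: format!("Button({})", text), parent }),
        UiDef::Column(children) => {
            out.push(Node { kind: "Column".into(), parent });
            for child in children {
                compile(child, Some(id), out);
            }
        }
    }
}

fn main() {
    let def = UiDef::Column(vec![
        UiDef::Label("Items".into()),
        UiDef::Button("Add".into()),
    ]);
    let mut nodes = Vec::new();
    compile(&def, None, &mut nodes);
    for (i, node) in nodes.iter().enumerate() {
        println!("{}: {} parent={:?}", i, node.kind, node.parent);
    }
}
```

In a real implementation the "compiler" would presumably be a proc macro or build step emitting optimized spawn code, and `Node` would be a bundle of ECS components rather than one struct, but the definition/runtime split is the same idea.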

Edit: If there is interest in this I'll work up a running example with "hand compiled" code. It's a bit more work though, so I'm stopping myself in case the proposal is DOA anyway.

I like Godot, but one problem is that everything has to be packed as a special asset, so dynamic loading of anything not built in is a huge pain. I'd like to develop non-game software with it, as a kind of app platform, but it's very cumbersome. The UI has fancy asset loaders and importers, but they aren't available to the final app.

So I think whatever work makes the Bevy UI good also has implications for making it a next-generation app framework, not just for games.

Mmm, has anyone thought about integrating into Blender instead of doing a ground-up project? Armory3D did that when Blender dropped the old game engine from its codebase. It was hands down one of the best development experiences.

Mmm, anyone thought about integrating into blender [..]?

Yes. Look here. I think Blender is a fantastic FOSS tool for 3D art; however, it is a terrible game editor, since it lacks literally everything else. Also, trying to build on Blender means leveraging the Blender platform for something it was not meant for, instead of using Bevy for something it is meant for and making it a showcase at the same time.
To me, it does not make a whole lot of sense to create a game engine / application framework just for the sake of creating one. It should be used, to get a feeling for how it is used and what kinds of things feel tedious or even out of place while using it. Creating a new Editor from the ground up seems like a very good way to test out Bevy in a real project and feed all the experience back into the main project :)

I see what you're saying. In general, what I intended was building the nodes to represent Bevy logic and using Blender as a middleman between all objects, materials, cameras, animations, etc. I personally prefer no editor, to be honest; however, if the end goal is to give a full set of tools to game developers, there will never be anything better than Blender. I assume any game developer is already going to be using Blender in their pipeline. Don't mind me, just sharing perspective. Thank you for Bevy <3

I do think there's value in adding deeper integration between Bevy and Blender in the form of a "Bevy Blender plugin / exporter", and I think we should ultimately invest some time in this. However, I fully believe in a standalone Bevy Editor built as a Bevy app (for the many reasons listed above and elsewhere), and that will always be my first priority.

I wrote a comment in the editor issue, but seems like this issue is a bit more active, so once again I'll share my thoughts on editor UI here.

I believe that having a separate editor app is not the most convenient way, because the UI quickly becomes complicated in an attempt to be one-size-fits-all.

Instead, I suggest optionally including an editor in the game itself in the form of an overlay. This would force editors to be simple and modular, allowing users to create custom implementations for a particular project or even use some in production (e.g. a character editor).

At the same time if one would decide to bundle some editors together and release them as a standalone app it wouldn't be hard at all.

And the last question: how can I help? I'm well versed in all things UI, but not yet a Rust expert.

egui is fairly complete, looks good, and has a Bevy plugin, so what would be the downsides to adopting it as the official Bevy GUI?

egui does not integrate well with the ECS dataflow of Bevy. Also, the author has clearly stated that customization is a non-goal for egui; therefore, we cannot rely on a third-party crate that doesn't meet our criteria.

The problem domain here is so specialized that we should strive to make our own solution, regardless of how difficult it may be.

Also, the author clearly stated that customization is a non-goal for egui, therefore, we cannot rely on a third party crate that don't meet our criteria.

I don't want to go too offtopic, but what kind of customization? Having tried a ton of GUI frameworks over the years, from WinForms to WPF to Qt to all the web stuff like Ember/Angular/React to imgui to egui, I'd say egui is by far the easiest to make completely custom widgets with. A big part (probably over half) of my game's UI uses Painter and draws things completely custom, but still has event handling.

I'm not necessarily advocating for egui, just trying to understand what some more customizable alternatives are, and what customization Bevy would want that egui doesn't provide easily. Personally, I'm not sure how I feel about egui + ECS: on one hand it's not ideal; on the other, it's still better than every other GUI framework I've ever used. Again, not trying to advocate for egui, just trying to understand the landscape of options better.

The problem with egui is that all apps that use it have that imgui smell unless the author puts significant effort into customization and overriding the defaults.

As an alternative, ui4 is an ongoing experiment: a GUI toolkit designed for Bevy.

The problem with egui is that all apps that use it have that imgui smell unless the author puts significant effort into customization and reverting the defaults.

Really, that's your reasoning? Regardless of any "imgui smell", I personally think it looks excellent. Also, AFAIK, you can completely change the style at your whim.

[edit] I also think egui is very simple to use.

The issues with immediate-mode GUI have been discussed heavily already in this thread. Give it a read before we hash out the exact same discussion! :)

The issues with Immediate Mode gui has been discussed heavily already in the thread. Give it a read before we hash out the exact same discussion! :)

Yeah, I understand the potential performance issues. Hopefully, whatever is decided on will be as simple to use. Until then, Bevy remains editor-less (which, to me, is a huge negative), and this GUI issue seems to be deadlocked.

The problem with egui is that all apps that use it have that imgui smell unless the author puts significant effort into customization and reverting the defaults.

As an alternative, ui4 is an ongoing esperiment of a GUI toolkit designed for Bevy.

I see your point. One thing I know of, which doesn't completely fix this but could help, is https://github.com/jacobsky/egui-stylist, which aims to provide theme support + editor for egui. It's probably not the perfect solution for bevy, but leaving this here for completeness.
