uobikiemukot / yaft

yet another framebuffer terminal

yaft always redraws the whole line across the screen, even in tmux

oimaasi opened this issue · comments

I'm implementing image preview for the rover file browser using idump, and I want to run the whole thing in yaft + tmux. But it seems that yaft always tries to redraw the whole line, which also affects the image pane. A screenshot is worth a thousand words, so here is a screencast:
https://vimeo.com/268868152

But I also notice that this cross-pane interference only occurs when I idump some images. Normally, everything runs perfectly alright in yaft + tmux, and each pane refreshes individually. Also, the Linux console doesn't show this behavior.

Is it actually a bug, or is it a limitation of the current implementation?
Thank you.

I'm implementing image preview for the rover file browser using idump, and I want to run the whole thing in yaft + tmux. But it seems that yaft always tries to redraw the whole line, which also affects the image pane. A screenshot is worth a thousand words, so here is a screencast.

Yes, yaft always redraws a whole line at a time. That is the expected behavior.

For example, with a 2x2 screen resolution, you see the pixels on screen like this:

pixel1, pixel2
pixel3, pixel4

In memory, these pixels are laid out in a single line:

  • pixel1, pixel2, pixel3, pixel4

Suppose pixel1 and pixel3 are dirty and we need to redraw those two pixels.
There are several strategies:

  • method1:
    • write data [ new pixel1, old pixel2, new pixel3 ] to address pixel1, length 3
  • method2:
    • write data [ new pixel1 ] to address pixel1, length 1
    • write data [ new pixel3 ] to address pixel3, length 1

Usually method1 is more efficient (especially when many panes are rendering) and simpler to implement.
(If 10 lines are dirty, method2 would need to redraw 10 lines * 16 pixels per line = 160 separate writes.)

But I also notice that this cross-pane interference only occurs when I idump some images. Normally, everything runs perfectly alright in yaft + tmux, and each pane refreshes individually. Also, the Linux console doesn't show this behavior.

yaft knows which lines are dirty and need to be redrawn only if yaft is the only program drawing to the framebuffer. Normally, rendering collisions and rendering priority are handled by a display server, but there is no such program in the framebuffer environment.

Many thanks for the detailed explanations. After taking a look at the corresponding places in the code, I am slowly getting a better understanding of how yaft works.

Suppose we keep the current model of whole-line drawing; one possible solution would be to add another image buffer in addition to fb->wall. Let's call it fb->graphics for now. Then slightly modify idump so that it loads the image and converts it to the format suitable for the framebuffer, but writes to fb->graphics in yaft instead of its own instance of fb->fp. When yaft redraws a line, let it selectively copy each pixel from fb->wall or fb->graphics, depending on the current col and line.

I am now trying to implement the above. One principal question I have concerns inter-process communication. How should idump, as a separate process, elegantly pass the address of the image buffer to yaft? For obvious reasons, I don't want to throw everything into one "feature-rich" program ...

If you already see something conceptually flawed, or have better ideas, or know something which I should definitely take a look at, please tell me. Thank you.

I am now trying to implement the above. One principal question I have concerns inter-process communication. How should idump, as a separate process, elegantly pass the address of the image buffer to yaft? For obvious reasons, I don't want to throw everything into one "feature-rich" program ...

Maybe shared memory is the simplest choice.

suggested workflow:

  1. Share image buffer between yaft and idump
  2. idump always updates shared image buffer and actual framebuffer
  3. yaft always redraws the image buffer after its own framebuffer updates

But I don't recommend it.

If you already see something conceptually flawed, or have better ideas, or know something which I should definitely take a look at, please tell me. Thank you.

In my opinion, collision detection is not a terminal's job. A terminal is not a display server (like the Xorg server). yaft and idump are not the only programs that access the framebuffer, so the above-mentioned solution is not general-purpose.

workarounds:

  • use multiple monitors and specify "/dev/fb0" for yaft and "/dev/fb1" for idump or mplayer or something (you need at least two graphics devices, e.g. onboard and a discrete card)
  • use sixel (for lightweight rendering); sixel graphics is under yaft's control (but there are some problems with screen and tmux...)

sixel graphics is under yaft's control (but there are some problems with screen and tmux...)

I tried sdump, the static version. It works in yaft, but not at all when tmux is running: sdump just quietly exits without showing anything.

Anyway, I have one question concerning how yaft and sixel work together. Where and when do you update cellp->has_sixel and the corresponding pixmap? How does sdump tell yaft which range of cols and lines belongs to the sixel image?

Being able to understand those mechanisms would be very helpful for me to try out some ideas I have in mind. I would like to do something like this:

  • use tmux as tiling window manager
  • when drawing a picture, always do it in a specified tmux pane, which is wholly dedicated to that picture
  • with the tmux variables #{pane_top}, #{pane_bottom}, #{pane_left}, #{pane_right}, which are also measured in cols and lines, I can inform yaft to update the corresponding cells to 'has_sixel'
  • update pixmap of those cells (but how?)
  • if the graphics pane gets changed (closed, resized, moved, etc.), update the cell properties and redraw in yaft.

Not a general purpose solution, but that's what I can think of at the moment and would like to try out.
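For reference, the pane rectangle mentioned above can be queried from inside a running tmux session (a live session and pane are assumed):

```shell
# Print the active pane's cell coordinates, measured in cols and lines.
tmux display-message -p \
  'top=#{pane_top} bottom=#{pane_bottom} left=#{pane_left} right=#{pane_right}'
```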

use multiple monitors and specify "/dev/fb0" for yaft and "/dev/fb1" for idump or mplayer or something (you need at least two graphics devices, e.g. onboard and a discrete card)

Just a thought. How difficult is it to make a 'framebuffer multiplexer'? For instance, if we have 10 layers of virtual framebuffers, /dev/vfb0 should be drawn on the top and /dev/vfb9 at the bottom, with background pixels at each layer being 'transparent'. The multiplexer then puts together the final result for /dev/fb0 by picking the non-background pixel from the highest layer. Then we can draw yaft on vfb2, idump pictures on vfb1, wallpaper on vfb3 etc., without changing much of the existing code?

I assume such program already exists?

sixel graphics is under yaft's control (but there are some problems with screen and tmux...)

I tried sdump, the static version. It works in yaft, but not at all when tmux is running: sdump just quietly exits without showing anything.

All terminal multiplexers lack sixel integration, because the sixel sequence is complicated and only a few people need this feature.

Here is a tmux fork that supports sixel integration (I haven't tested it yet).

tmux itself will never support sixel.

GNU screen doesn't support sixel either, but it can pass sixel sequences through with img2sixel and its -P penetrate option. GNU screen is the more viable choice if you want to use sixel.

Anyway, I have one question concerning how yaft and sixel work together. Where and when do you update cellp->has_sixel and the corresponding pixmap? How does sdump tell yaft which range of cols and lines belongs to the sixel image?

Being able to understand those mechanisms would be very helpful for me to try out some ideas I have in mind. I would like to do something like this:
...

What you need to do is specify the pane's pseudo terminal as the output when you use the sdump or img2sixel command with the penetrate option. Each tmux pane has an independent pseudo terminal (use the tty command in the pane to identify the current pseudo terminal). And when you want to refresh the pane, just run the refresh-client tmux command.
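For example, assuming a running tmux session and an image file picture.png (both hypothetical here), the workflow looks roughly like this:

```shell
# Find the current pane's pseudo terminal (the path varies per pane).
tmux display-message -p '#{pane_tty}'

# Send the sixel stream directly to that pty (substitute the path
# printed above), then force tmux to repaint its client.
img2sixel -P picture.png > /dev/pts/3
tmux refresh-client
```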

As I mentioned above, I recommend using GNU screen.

use multiple monitors and specify "/dev/fb0" for yaft and "/dev/fb1" for idump or mplayer or something (you need at least two graphics devices, e.g. onboard and a discrete card)

Just a thought. How difficult is it to make a 'framebuffer multiplexer'? For instance, if we have 10 layers of virtual framebuffers, /dev/vfb0 should be drawn on the top and /dev/vfb9 at the bottom, with background pixels at each layer being 'transparent'. The multiplexer then puts together the final result for /dev/fb0 by picking the non-background pixel from the highest layer. Then we can draw yaft on vfb2, idump pictures on vfb1, wallpaper on vfb3 etc., without changing much of the existing code?

I don't know whether your approach is worthwhile or not, but it seems to reinvent the wheel.

In my opinion, what we need in the framebuffer environment is not another display server but a better terminal multiplexer. Current terminal multiplexers have little capability for supporting unusual terminal features (not only sixel; there are many other features that are rarely used but notable).

I assume such program already exists?

There are only a few projects I know of:

  • Xorg server (fbdev driver)
  • DirectFB
  • FramebufferUI (disappeared)