wslsys in alt buffer toggles DISABLE_NEWLINE_AUTO_RETURN
j4james opened this issue
Windows Terminal version
1.16.2641.0
Windows build number
10.0.19044.2364
Other Software
No response
Steps to reproduce
- Open a WSL bash shell in conhost or Windows Terminal.
- Switch to the alt buffer with `printf "\e[?1049h"`.
- Test the formfeed behavior with `printf "\e[10CX\b\fY\n"`.
- Run `wslsys`.
- Run the formfeed test again with `printf "\e[10CX\b\fY\n"`.
Expected Behavior
The test case above moves the cursor right 10 columns, outputs an `X`, a `BS` control (moving the cursor back a column), and an `FF` control (moving the cursor down a row), then finally outputs a `Y`.

In both cases the `Y` should be directly below the `X`.
Actual Behavior
After `wslsys` has been run, the `DISABLE_NEWLINE_AUTO_RETURN` mode is turned off, so the `FF` control also triggers a carriage return, and the `Y` is output in the first column.
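For illustration, assuming the cursor starts in column 1 so the `X` lands in column 11, the final `printf` produces roughly this (this diagram isn't from the original report, just a visualization of the behavior described above):

```
# Expected (both before and after running wslsys):
          X
          Y

# Actual after wslsys has been run (FF now also performs a carriage return):
          X
Y
```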
The problem is that the auto-return mode is tracked in two separate places. It's one of the modes of the `OutputMode` field in the `SCREEN_INFORMATION` class, but it's also stored in the `_fAutoReturnOnNewline` flag in the global `Settings`. These two places have different scope!
By default `_fAutoReturnOnNewline` is true, and the `DISABLE_NEWLINE_AUTO_RETURN` mode isn't set. But WSL toggles that state on startup, so in WSL the mode is set, and the `_fAutoReturnOnNewline` flag is false. When you switch to the alt buffer, though, you get a new `SCREEN_INFORMATION` instance with the defaults again, so the mode is again not set, but the global `_fAutoReturnOnNewline` is still false!
This isn't immediately a problem, because the line feed operations rely on the `_fAutoReturnOnNewline` flag, which still has the correct value. But once you run `wslsys`, it makes a call to `SetConsoleMode` which forces the `_fAutoReturnOnNewline` flag to be refreshed. And although it doesn't change the mode, the mode was already wrong at this point, so that incorrect state is now transferred to the `_fAutoReturnOnNewline` flag.
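A minimal sketch of that split state, assuming a simplified shape for the classes involved (this is illustrative only, based on the description above, not the actual conhost source):

```cpp
#include <windows.h>

// Rough sketch of the two places the auto-return state lives.
struct Settings
{
    bool _fAutoReturnOnNewline = true; // global: shared by every buffer
};

struct SCREEN_INFORMATION
{
    // Per-buffer: a freshly created alt buffer starts from these defaults,
    // so DISABLE_NEWLINE_AUTO_RETURN is clear even if WSL had set it in
    // the main buffer.
    DWORD OutputMode = ENABLE_PROCESSED_OUTPUT | ENABLE_WRAP_AT_EOL_OUTPUT;
};

// Hypothetical illustration of the refresh triggered by SetConsoleMode: the
// global flag is recomputed from the (already-incorrect) per-buffer mode,
// which is how the wrong state leaks into _fAutoReturnOnNewline.
void RefreshAutoReturn(Settings& g, const SCREEN_INFORMATION& buf)
{
    g._fAutoReturnOnNewline = !(buf.OutputMode & DISABLE_NEWLINE_AUTO_RETURN);
}
```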
From then on, all the line feed controls will trigger a carriage return, which can be problematic for apps that weren't expecting that behavior. One case in particular is `tmux`, which relies on line feed controls to move down without a carriage return when redrawing the screen. If that isn't correct, the entire display gets messed up. This is at least one of the causes of #6987.
holy hell
Like, how do you even find this kind of stuff. This is 👌
So, seems like if we change it so that changing `_fAutoReturnOnNewline` also sets `OutputMode` to match, this should just work?
If we just want a quick fix, we could make the `SCREEN_INFORMATION` class initialise the `OutputMode` based on the state of the `_fAutoReturnOnNewline` flag. We already do this for the `VIRTUAL_TERMINAL_PROCESSING` mode, which also mirrors its state globally (in the `Settings::_dwVirtTermLevel` field).
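Something like this hypothetical sketch (the constructor shape and field names are assumptions based on the description above, not the real code):

```cpp
#include <windows.h>

struct Settings
{
    bool _fAutoReturnOnNewline = true;
    DWORD _dwVirtTermLevel = 0;
};

struct SCREEN_INFORMATION
{
    DWORD OutputMode;

    // Seed the new buffer's OutputMode from the global flags, mirroring what
    // is already done for ENABLE_VIRTUAL_TERMINAL_PROCESSING via
    // Settings::_dwVirtTermLevel.
    explicit SCREEN_INFORMATION(const Settings& g)
        : OutputMode(ENABLE_PROCESSED_OUTPUT | ENABLE_WRAP_AT_EOL_OUTPUT)
    {
        if (g._dwVirtTermLevel > 0)
            OutputMode |= ENABLE_VIRTUAL_TERMINAL_PROCESSING; // existing behavior
        if (!g._fAutoReturnOnNewline)
            OutputMode |= DISABLE_NEWLINE_AUTO_RETURN;        // proposed addition
    }
};
```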
This isn't really correct though, because we can still have a situation where one buffer has the mode set and another has it reset, but the only state we care about internally is the global flag. So at least some of the buffers will be reporting a mode that doesn't match the active behavior.
Ideally we'd just make the `OutputMode` a global field itself, and then drop all of those other global fields that are mirroring some of the flags. However, there may be legacy reasons why we need to retain at least some of the current behavior.
Just so you know how things currently stand, this is my understanding of the various modes:
- The `ENABLE_LVB_GRID_WORLDWIDE` and `DISABLE_NEWLINE_AUTO_RETURN` modes are both essentially global, but they are also mirrored in the per-buffer `OutputMode` field. As a result, the reported mode won't necessarily reflect the active state. I don't think there's any reason why these shouldn't be made strictly global - that's really a bug fix.
- The `ENABLE_VIRTUAL_TERMINAL_PROCESSING` mode is mostly treated as a per-buffer mode, but it's also tracked globally, and that global state determines the initial state of any new buffers. I find that confusing, and since this is a relatively new mode, I wouldn't have thought it essential to preserve the current behavior, but some may still prefer that.
- The `ENABLE_PROCESSED_OUTPUT` and `ENABLE_WRAP_AT_EOL_OUTPUT` modes are both per-buffer options. From a VT point of view the latter mode should be global (it's the equivalent of `DECAWM`). The former doesn't have an exact VT equivalent, but it's similar to `CRM` mode, which would typically be global. For legacy reasons, though, I suspect we might need to preserve both of these as per-buffer options.
In short: The first two should definitely be strictly global modes, the third I think could reasonably be made global, but the last two might need to remain as per-buffer modes for backwards compatibility.
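For reference, all of these modes are reported to applications through the standard `GetConsoleMode` call on an output handle, regardless of whether conhost tracks them globally or per-buffer behind the scenes. A minimal Win32 sketch (ordinary API usage, not code from the issue) that dumps them for the current output buffer:

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD mode = 0;
    if (!GetConsoleMode(hOut, &mode))
        return 1;

    // Each flag is reported per handle/buffer, even for the modes that
    // conhost currently treats as global internally.
    std::printf("ENABLE_PROCESSED_OUTPUT            %d\n", !!(mode & ENABLE_PROCESSED_OUTPUT));
    std::printf("ENABLE_WRAP_AT_EOL_OUTPUT          %d\n", !!(mode & ENABLE_WRAP_AT_EOL_OUTPUT));
    std::printf("ENABLE_VIRTUAL_TERMINAL_PROCESSING %d\n", !!(mode & ENABLE_VIRTUAL_TERMINAL_PROCESSING));
    std::printf("DISABLE_NEWLINE_AUTO_RETURN        %d\n", !!(mode & DISABLE_NEWLINE_AUTO_RETURN));
    std::printf("ENABLE_LVB_GRID_WORLDWIDE          %d\n", !!(mode & ENABLE_LVB_GRID_WORLDWIDE));
    return 0;
}
```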
I'd bet there's a wacky chance that there's an app out there that uses different screen buffers (not VT ones) and relies on different modes in each. That just feels like the kind of weird gap that's been abused in the last 40 years.
That being said, if there's anything to do to help mitigate #6987, I think we should try it.
I think I'm cool with the breakdown you posted above. Let's just leave the last two per-buffer, but initialized by the global state.
> That just feels like the kind of weird gap that's been abused in the last 40 years.
I think that this has been "officially supported" ever since the `SetConsoleMode` API took a `HANDLE`.
We may want to draw a clear line between what is shared between console buffers allocated via `CreateConsoleScreenBuffer` ("real" console buffers implemented as multiple `SCREEN_INFORMATION` objects) and what is shared between alt buffers (where we used multiple `SCREEN_INFORMATION` objects only out of convenience¹).
¹ [irony intensifies]
I've had another idea for this which I think should be a cleaner solution. If we can't make all of the modes global, the next best option would be to make all of them per-buffer modes.
For legacy console apps, every buffer will have its own set of modes, and the `GetConsoleMode` API will always return an accurate state - there's no chance of the behavior being out of sync with the reported modes.
When creating a new buffer, the defaults are largely static:
- `ENABLE_PROCESSED_OUTPUT` - true
- `ENABLE_WRAP_AT_EOL_OUTPUT` - true
- `DISABLE_NEWLINE_AUTO_RETURN` - false
- `ENABLE_LVB_GRID_WORLDWIDE` - false
- `ENABLE_VIRTUAL_TERMINAL_PROCESSING` - only true for conpty or when the VT registry entry is set to 1.
And note that the VT default is decided at startup and then is static for the life of the process - it's not affected by mode changes.
The only special case is the VT alt buffer, which needs to give the impression that the modes are global. However, that can easily be achieved by copying the current modes from the main buffer into the alt buffer when it's created, and copying them back again when it's closed (we already do something similar for other buffer state).
I think this fixes all of the existing problems, without changing any of the legacy behavior, and the model is easy to understand (all modes work the same way, rather than a mix of global and non-global modes).
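A rough sketch of that alt buffer special case (the method names and structure here are hypothetical illustrations of the idea, not the actual conhost code):

```cpp
#include <windows.h>
#include <memory>

// Hypothetical per-buffer state; every buffer owns its own OutputMode.
struct SCREEN_INFORMATION
{
    DWORD OutputMode = ENABLE_PROCESSED_OUTPUT | ENABLE_WRAP_AT_EOL_OUTPUT;
    std::unique_ptr<SCREEN_INFORMATION> altBuffer;

    // Entering the VT alt buffer: copy the current modes across so the alt
    // buffer behaves as if the modes were global.
    void UseAlternateBuffer()
    {
        altBuffer = std::make_unique<SCREEN_INFORMATION>();
        altBuffer->OutputMode = OutputMode;
    }

    // Leaving the alt buffer: copy any mode changes back to the main buffer,
    // mirroring what is already done for other buffer state.
    void UseMainBuffer()
    {
        if (altBuffer)
        {
            OutputMode = altBuffer->OutputMode;
            altBuffer.reset();
        }
    }
};
```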
That seems like a fine solution to me!
What would/should the behavior be for input modes?
I'm asking because I really wish that I could call `CreateConsoleScreenBuffer` for a TUI program, turn off QuickEdit and turn on mouse input, and have QuickEdit restored when the program exits. As of now, this isn't possible to do. See also the `ENABLE_EXTENDED_FLAGS` behavior on `GetConsoleMode`/`SetConsoleMode`.
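For what it's worth, the closest thing possible today is to mutate the shared input mode by hand and restore it on the way out, roughly as in the plain Win32 sketch below; nothing ties the change to the buffer's lifetime, and the `ENABLE_EXTENDED_FLAGS` quirks mentioned above make even this imperfect:

```cpp
#include <windows.h>

int main()
{
    HANDLE hIn = GetStdHandle(STD_INPUT_HANDLE);

    // Save the current input mode so it can be restored later.
    DWORD savedMode = 0;
    if (!GetConsoleMode(hIn, &savedMode))
        return 1;

    // Disable QuickEdit and enable mouse input. ENABLE_EXTENDED_FLAGS must be
    // set for the QuickEdit flag to be honored; leaving ENABLE_QUICK_EDIT_MODE
    // out of the mask turns QuickEdit off.
    SetConsoleMode(hIn, ENABLE_EXTENDED_FLAGS | ENABLE_MOUSE_INPUT | ENABLE_WINDOW_INPUT);

    // ... run the TUI against a buffer from CreateConsoleScreenBuffer ...

    // Manually restore the previous mode; a new screen buffer doesn't do this
    // for us, which is the gap described above, and an abnormal exit skips it.
    SetConsoleMode(hIn, savedMode);
    return 0;
}
```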
(Semi-offtopic rant - although Dustin is obviously right that `SetConsoleMode` took a `HANDLE`, `CreateConsoleScreenBuffer` only returns one `HANDLE`, while the Win32 console programming model needs two handles to interact with a console, which leads to inevitable inconsistency.)
@zadjii-msft Thanks for the pointer, I didn't know about that.
Note though that per-process != per-buffer, and the two are not even incompatible with each other. I'd assert that per-buffer is "safe" in the sense that software using alternate buffers is really trying to explicitly save/restore state.
Per-process is more interesting/challenging - I can personally vouch for the existence of a mountain of code trying to "set up" modes for the benefit of child processes and revert them later. On the one hand, it'd be great to delete; on the other, there's a lot of fragility there. One callout would be the absurdly complex Ctrl+C handling, where the existence of `ENABLE_PROCESSED_INPUT` controls how the signal is delivered to multiple processes attached to the console according to `CREATE_NEW_PROCESS_GROUP`.