MightyPirates / OpenComputers

Home of the OpenComputers mod for Minecraft.

Home Page: https://oc.cil.li


GPU overhaul suggestion

JasonS05 opened this issue · comments

After watching this video by Bisqwit about how some old text mode programs could achieve detailed patterns with single-pixel resolution by continuously changing the GPU's font, I thought it would be really cool if OpenComputers could do something similar. Then, I thought, why stop there? So I've come up with a whole new system for OpenComputers' graphics. Unfortunately, I'm way too lazy to implement any of this myself. However, I did quite enjoy planning it out in detail!

Capabilities

Tier 1

Max resolution is 50x16 characters. VRAM is also 800 characters (50*16*1). So far no difference. However, the character set is now restricted to a fixed 256-character font (no Unicode). This set of 256 characters could be one of the various extended ASCII encodings.

Tier 2

Max resolution is 80x25 characters, with 4000 characters of VRAM (80*25*2) and a 16-color overwriteable palette. Again, no difference from original OpenComputers so far. However, this tier also has a 256-character font. Unlike tier 1, though, this font can be overwritten freely, and the screen will update accordingly automatically. When the GPU is power cycled (e.g. removed from the computer and added back in, or the computer is turned off and on again), it resets the font and color palette back to default.

Tier 3

Max resolution is 160x50 characters, with 24,000 characters of VRAM (160*50*3) and the same 256-color hybrid palette as the original tier 3 GPU. This should also be the same as original OpenComputers so far. However, unlike tiers 1 and 2, it has an 8192-character overwriteable font. It also has several graphics modes. When the GPU is power cycled, the font and palette are reset. Switching between text mode and graphics mode, or between graphics modes, also clears any data on the GPU the same way a power cycle does.

Graphics mode 1

Max resolution is 320x200. Color depth is 8 bits. VRAM is 192,000 pixels (320*200*3).

Graphics mode 2

Max resolution is 640x400. Color depth is 8 bits. No VRAM.

Graphics mode 3

Max resolution is 640x400. Color depth is 4 bits. VRAM is 256,000 pixels (640*400*1).

Graphics mode 4

Max resolution is 640x400. Color depth is 2 bits. This uses a four color overwriteable palette which initializes itself to 0x00_00_00, 0x55_55_55, 0xAA_AA_AA, and 0xFF_FF_FF. VRAM is 768,000 pixels (640*400*3).

Graphics mode 5

Max resolution is 1280x800. Color depth is 2 bits. No VRAM.

Graphics mode 6

Max resolution is 1280x800. Color depth is 1 bit. This uses a two color overwriteable palette which initializes itself to 0x00_00_00 and 0xFF_FF_FF. VRAM is 1,024,000 pixels (1280*800*1).

Possible graphics mode 0? (might be too OP)

Max resolution is 320x200. Color depth is 24 bits. No VRAM.

Reasoning/Rationale (not visible to users of the mod)

The tier 1 GPU will have 2kB of high speed RAM and 4kB of ROM. The RAM holds the screen data (800 characters) and VRAM (800 characters) at one byte per character. This leaves an extra 448 bytes unused, but that's not really a big deal. The ROM holds the font data at 16 bytes per character. Since each character is 8x16, with one bit per pixel, that works out to be 16 bytes. With 256 characters, that makes 4kB of ROM.

The tier 2 GPU will have 16kB of high speed RAM and 4kB of ROM. Each character is two bytes. 4 bits for the foreground color, 4 bits for the background color, and 8 bits for the character code. At a resolution of 80x25 that makes 4kB for the screen. The VRAM is twice the screen size so that makes another 8kB for the VRAM. The font is also overwriteable so that needs another 4kB of RAM to store that. The ROM is to initialize the font on startup. Altogether that makes 16kB of RAM.

The tier 3 GPU will have 256kB of high speed RAM and 128kB of ROM. Each character is four bytes. 8 bits for the foreground color, 8 bits for the background color, 13 bits for the character code, and 3 bits padding. With a font of 8192 characters that requires a 128kB ROM for initialization and 128kB RAM to store the font data. The screen, at 160x50 characters and four bytes per character, takes 32kB. The VRAM is triple the size of the screen so it takes 96kB of RAM. Altogether that makes 256kB of RAM.
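As a quick sanity check on the tier 3 text-mode budget above (treating "kB" as KiB, which is how the 448-byte remark for tier 1 works out), the three regions do fit in 256 KiB with a little slack:

```python
# Arithmetic check of the proposed tier 3 text-mode RAM budget.
FONT = 8192 * 16        # 8192 glyphs, 16 bytes each -> 131072 bytes (128 KiB)
SCREEN = 160 * 50 * 4   # 4 bytes per character cell -> 32000 bytes
VRAM = 3 * SCREEN       # triple the screen size     -> 96000 bytes

total = FONT + SCREEN + VRAM
assert total <= 256 * 1024  # 259072 <= 262144, about 3 KiB to spare
```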

For the graphics modes, the ROM is unused and the 256kB of RAM is split up in a variety of ways. I'm also assuming screens have a 1280x800 pixel resolution, since a 160x50 text mode with 8x16 pixel characters works out to 1280x800 pixels.

With 8 bits of color data per pixel and 320x200 resolution, a full screen takes 64kB of RAM, leaving triple the screen's size for VRAM. With 8 bits per pixel and 640x400 resolution, a full screen takes all 256kB of RAM, so there is no VRAM. Reducing the color depth to 4 bits frees one screen's worth of VRAM, and reducing again to 2 bits frees two more screens' worth. Doubling the resolution to 1280x800 consumes all the RAM again, leaving none for VRAM; halving the color depth to 1 bit frees one screen's worth of VRAM. For the mode with 320x200 resolution and 24 bits of color depth, 8 bits of padding are added to each pixel for 32 bits per pixel, so a full screen uses all 256kB of RAM.
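The pattern above can be checked mechanically: every proposed graphics mode (screen plus its VRAM, if any) lands on exactly the same byte count. This is a sketch recomputing the budget from the mode table given earlier, not code from any existing API:

```python
# Recompute the tier 3 RAM usage for each proposed graphics mode and
# verify it fits in the 256 KiB budget.
# Mode table: (width, height, bits per pixel, VRAM size in screens).
RAM_BYTES = 256 * 1024

MODES = {
    0: (320, 200, 32, 0),   # 24-bit color padded to 32 bits, no VRAM
    1: (320, 200, 8, 3),
    2: (640, 400, 8, 0),
    3: (640, 400, 4, 1),
    4: (640, 400, 2, 3),
    5: (1280, 800, 2, 0),
    6: (1280, 800, 1, 1),
}

def mode_usage(mode):
    w, h, bpp, vram_screens = MODES[mode]
    screen_bytes = w * h * bpp // 8
    return screen_bytes * (1 + vram_screens)

for m in MODES:
    # every mode comes out to exactly 256,000 bytes, under 262,144
    assert mode_usage(m) == 256_000 <= RAM_BYTES
</n>```

The fact that all seven modes use exactly 256,000 bytes is what makes the "trade resolution for depth for VRAM" scheme hang together.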

Graphical modes for lower tier screens

So far I've been assuming the screen to be tier 3. However, it is important to consider the limitations of tier 1 and 2 screens with graphical modes. The tier 1 screen has a max resolution of 400x256 and is strictly black and white. This means only graphical mode 6 is supported because it is the only mode with 1 bit of color depth. Additionally, the overwriteable palette that mode 6 supports is disabled. The tier 2 screen has a max resolution of 640x400 and only supports the 16 overwriteable colors, not the 240 fixed colors. This limits it to 16 distinct colors at a time, but each of the 16 colors may have any RGB value. This means only graphical modes 3 through 6 (inclusive) are supported. The tier 3 screen has a max resolution of 1280x800 and supports all graphical modes without any limitations.

API

The text mode API will be largely unaffected except for the changeover from a Unicode-based system to a font-based system. For tiers 1 and 2 this means strings will be interpreted strictly byte-by-byte, with each byte specifying a character. For tier 3, a slightly more complicated, Unicode-like system will be used. Bytes 0x00 to 0x7f (0 to 127) will be interpreted as the corresponding character. Bytes 0x80 to 0xff (128 to 255) will be interpreted as the first byte of a two-byte code: the lower 7 bits of the first byte provide the lower 7 bits of the character code, and the lower 6 bits of the second byte provide the upper 6 bits (the upper two bits of the second byte are ignored). If the second byte is 0x01, the two-byte code corresponds to whatever character the first byte would produce on a tier 1 or 2 GPU. For portability, tier 1 and 2 GPUs will provide a boolean argument to ignore 0x01 bytes, which defaults to false. Example two-byte codes:

0x9B, 0x10 (155, 16) -> character 2075
0xD3, 0x37 (211, 55) -> character 7123
0x82, 0x03 (130, 3) -> character 386
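A minimal sketch of this decoding scheme (my reading of the proposal, not an existing OpenComputers API), which reproduces the worked examples above:

```python
# Decode the proposed tier 3 two-byte character encoding:
# bytes 0x00-0x7f map to themselves; a byte >= 0x80 starts a two-byte
# code whose low 7 bits come from the first byte and whose high 6 bits
# come from the second byte (the second byte's top two bits are ignored).
def decode(data: bytes) -> list[int]:
    chars = []
    i = 0
    while i < len(data):
        b = data[i]
        if b < 0x80:
            chars.append(b)
            i += 1
        else:
            second = data[i + 1]
            chars.append(((second & 0x3F) << 7) | (b & 0x7F))
            i += 2
    return chars

# The worked examples from the text:
assert decode(bytes([0x9B, 0x10])) == [2075]
assert decode(bytes([0xD3, 0x37])) == [7123]
assert decode(bytes([0x82, 0x03])) == [386]
```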

For tier 3 GPUs, the same boolean argument, rather than ignoring 0x01 bytes, will instead enable two byte codes. When this option is set to false (the default value), tier 3 behavior mirrors tier 1 and 2 behavior.

There will also be a new function call, directWrite or some such, that directly fills in a rectangle of characters, color data and all. Data will be provided in a string. For tier 1 GPUs, there will be 1 byte per character. For tier 2, there will be 2 bytes per character, with the first byte being the character code, the upper four bits of the second byte being the foreground color, and the lower four bits of the second byte being the background color. For tier 3 GPUs there will be 4 bytes per character. The first three bits will be flags to disable overwriting the character code, foreground color, and background color, respectively. The next 13 bits will be the character code, the next 8 bits the foreground color, and the final 8 bits the background color.
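The per-character packing this implies might look like the following sketch (the layout is hypothetical, just following the bit assignments described above; directWrite itself is a proposed, not existing, function):

```python
# Pack one character cell for the proposed directWrite formats.

def pack_tier2(char: int, fg: int, bg: int) -> bytes:
    # 1 byte character code, then fg in the high nibble, bg in the low nibble
    return bytes([char & 0xFF, ((fg & 0xF) << 4) | (bg & 0xF)])

def pack_tier3(char, fg, bg, keep_char=False, keep_fg=False, keep_bg=False):
    # 3 flag bits, 13-bit character code, 8-bit fg, 8-bit bg -> 32 bits total
    word = (keep_char << 31) | (keep_fg << 30) | (keep_bg << 29)
    word |= (char & 0x1FFF) << 16
    word |= (fg & 0xFF) << 8
    word |= bg & 0xFF
    return word.to_bytes(4, "big")
```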

For writing in graphics mode with a tier 3 GPU, directWrite will be the only means of writing to the screen or VRAM. The data will be provided in compacted form. That is to say, for 1, 2, and 4 bit color depths, there will be 8, 4, and 2 pixels specified per byte, respectively. When the area being written to contains a number of pixels that is not a multiple of 8, 4, or 2, then extra pixels specified at the end of the last byte are ignored. If the string is too short the missing bytes are assumed to be 0x00. If the string is too long, the extra bytes are ignored. For 24 bit color depth (graphics mode 0), pixels are provided in blocks of four bytes in the order RGBA. The A byte in each block of four bytes is ignored.
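Packing pixels at sub-byte color depths, as described above, could be sketched like this (the bit order within a byte is my assumption; the proposal does not specify it):

```python
# Pack a list of pixel values into bytes at 1, 2, or 4 bits per pixel,
# with the first pixel in the highest bits of each byte.
def pack_pixels(pixels, bpp):
    assert bpp in (1, 2, 4)
    per_byte = 8 // bpp
    out = bytearray()
    for i in range(0, len(pixels), per_byte):
        byte = 0
        for j, p in enumerate(pixels[i:i + per_byte]):
            byte |= (p & ((1 << bpp) - 1)) << (8 - bpp * (j + 1))
        out.append(byte)  # trailing unused bits stay zero
    return bytes(out)
```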

Miscellaneous

Note to devs

I honestly don't expect you guys to do anything with this. The amount of work required is probably massive, and I designed this and wrote this up primarily for my own enjoyment. However, if you were to implement this in any form, even heavily modified, that would be awesome. The new graphics possibilities would be massive.

Some interesting observations

If you don't care about having colors, don't mind a little extra complexity in your code, and don't care about VRAM, there are a lot of advantages in choosing to use the 1280x800 graphics mode with 2 bit color depth instead of the 160x50 text mode. You will have to take care of handling the font on your own instead of relying on the GPU to do that for you, but it allows for beautiful anti-aliased fonts, not to mention that characters can be positioned anywhere, even in off-grid positions. This may also be pretty good for black and white graphics with dithering, although 640x400 with 4 bits of color depth is probably better suited for that with 16 shades of gray instead of 4, plus it gives you a full screen's worth of VRAM.

Because 160x50 text mode can only display 8000 characters on the screen at once and there are 8192 characters in the font, you can achieve a sort of pseudo graphics mode with 1280x800 resolution and 256 colors, with the limitation that each 8x16 block of pixels can only have two colors. This would entail setting each character code on screen to the corresponding character in the font so it uses the first 8000 font characters, and manipulating the graphics would then consist of changing the font, not the character codes on the screen. This would also mean the VRAM would only really be useful for swapping out colors, not actual pixel patterns.
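The addressing for this pseudo graphics mode works out neatly: with character n of the font placed at screen cell n (row-major, 160x50 cells of 8x16 pixels), setting a pixel at (x, y) on the 1280x800 surface means flipping one bit of font data. A sketch, where the byte/bit order within a glyph is my assumption (16 bytes per glyph, one byte per row, leftmost pixel in the most significant bit):

```python
# Map a pixel coordinate on the 1280x800 pseudo-bitmap to the byte offset
# and bit within the 128 KiB font data that controls it.
def font_address(x, y):
    cell = (y // 16) * 160 + (x // 8)   # which character cell, row-major
    byte = cell * 16 + (y % 16)         # byte offset into the font data
    bit = 7 - (x % 8)                   # bit within that byte
    return byte, bit
```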

Because of the way two byte character codes work, every one byte character code can also be represented with a two byte character code by adding 128 to the byte and following with a null byte. I don't see any reason to do this, unless someone really wants strict 16 bit characters that also play nice with tier 3 rendering even for character codes 0 through 127. However, this will not work for tiers 1 and 2.
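This equivalence is easy to verify against the two-byte decoding rule from the API section (sketch, using my reading of that rule):

```python
# Two-byte decoding as described in the API section: low 7 bits from the
# first byte, high 6 bits from the second byte.
def decode_pair(first, second):
    return ((second & 0x3F) << 7) | (first & 0x7F)

# Any one-byte code 0-127 can also be sent as (code + 128, 0x00)
# and decodes to the same character.
assert all(decode_pair(c + 128, 0x00) == c for c in range(128))
```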

Backwards compatibility

This will obviously be somewhat incompatible with existing OpenComputers software, especially software that depends on Unicode braille for graphics display. As such, I have thought of four options for backwards compatibility:

Option 1

Provide a new API call to switch between new and old behavior.

Option 2

Allow players to change GPUs between new and old behavior the same way they can change CPUs between Lua 5.2 and Lua 5.3.

Option 3

Make a new set of GPUs for the new behavior with different crafting recipes. This may also be useful to reflect the greater capabilities of the new GPUs (especially the tier 3), requiring a more expensive crafting recipe. The old GPUs might have (legacy) or (deprecated) appended to the end of their item names.

Option 4

Make this an official addon mod so it becomes opt-in, with players needing to download an additional .jar file to change the old behavior to the new behavior.

As someone doing OS development in OpenComputers, I would like this, for example for desktop programming.

Okay, so, as a start: removing Unicode support from OpenComputers is unacceptable and off the table. This mod isn't designed only for people who speak English. Extended ASCII encodings don't accurately cover every language, nor do they allow mixing languages, nor do they allow mixing block/line characters with languages in some combinations (due to running out of room).

The GPU's design constraint is not "OP-ness", it's rather "server->client bandwidth use". Bitmapped modes are inherently more complicated because of graphical interfaces requiring a lot more data to transfer; OpenComputers 2 solves this by bundling an H.264 encoder/decoder with the mod, but implementing one in OC1 would be a lot of work as it was in OC2. Trying to keep to similar bandwidth usage, in comparison, would make bitmapped interfaces slow to draw and even slower to interact with.

The community has long used 2x4 subpixels as a way to allow 320x200 "pseudo-bitmapped" color modes with OC1. A mode in which the Unicode glyphs are replaced by 16-bit pixel "tiles" (4x4 1bpp, 2x4 2bpp, 2x2 4bpp, 1x2 8bpp) would be much easier to add on top of the existing system, as the storage format would be the same, all you'd really need to do is replace the client-side renderer - and write functions which operate at the pixel level rather than the character level, optionally.
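For readers unfamiliar with the 2x4 subpixel trick mentioned above: a 2x4 block of on/off pixels maps onto one Unicode braille character (U+2800 plus the dot bits defined by the Braille Patterns block). A sketch, assuming `block[row][col]` booleans:

```python
# Braille Patterns dot-to-bit mapping: dots 1-3 are the left column
# (rows 0-2), dots 4-6 the right column (rows 0-2), dots 7-8 the bottom
# row; dot n sets bit (n - 1) of the offset from U+2800.
_DOT_BITS = {(0, 0): 0, (1, 0): 1, (2, 0): 2, (3, 0): 6,
             (0, 1): 3, (1, 1): 4, (2, 1): 5, (3, 1): 7}

def braille_char(block):
    bits = 0
    for (row, col), bit in _DOT_BITS.items():
        if block[row][col]:
            bits |= 1 << bit
    return chr(0x2800 + bits)
```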

Another option would be to simply have multiple distinct GPU types - just like computers used to have separate 2D and 3D cards, with one passing into the other.

This proposal, as-is, is off the table - not to mention, OC1 itself is in "maintenance mode", with no future feature updates planned from this team at least. Sorry.