Fix gray alpha png #2234
base: dev
Conversation
Is there a tester on cpp-tests?
I don't quite understand how you are not able to reproduce the result, because no other changes were done when I did the test, and definitely no change to the conversion function. Also, please always add screenshots of the actual and expected output.

Aside from that, the only way you would still be seeing a problem is if you forgot to pass the correct pixel format (RG8) when creating the texture. Without that, it would still be treated as RGBA, in which case the modification you made to the conversion would have no effect; all the change does is allow the existing handling to work with gray+alpha images.

You should confirm this in your own test with a breakpoint on the above section of code. So, please double-check your code and re-test after reverting the change to convertRG8ToRGBA8.
I think this custom gray+alpha shader solution is probably the best one.

Btw, I think the reason why convertRG8ToRGBA8() expands RG into this "weird" RGRG format is for backwards compatibility with GLES2. In GLES2 and old OpenGL versions there was only the GL_LUMINANCE_ALPHA pixel format for two-channel textures, which loaded the RGBA channels as LLLA, so in shaders you would usually read the ra channels. That format was removed in favor of RG8, and now you read rg. So expanding RG into RGRG is backwards compatible with both GLES2 and current graphics API formats. Though, not very efficient. :)
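To make that concrete, here is a small illustration (not code from this PR; it reuses the u_tex0 and v_texCoord names from the shader under review): once the RG8 data has been expanded to RGRG, the legacy GLES2-style .ra read and the modern .rg read pick up the same gray+alpha pair.

vec4 c = texture(u_tex0, v_texCoord);  // c holds (L, A, L, A) after the RGRG expansion
vec2 legacyGA = c.ra;                  // GL_LUMINANCE_ALPHA habit: gray in r, alpha in a
vec2 modernGA = c.rg;                  // RG8 habit: gray in r, alpha in g
// legacyGA == modernGA, which is what makes the RGRG layout backwards compatible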
void main()
{
    vec4 c = texture(u_tex0, v_texCoord);
    FragColor = v_color * vec4(c.r, c.r, c.r, c.g);
This will fail on GLES2, because for two-channel textures it uses the GL_LUMINANCE_ALPHA pixel format, which loads the data into the shader's rgba channels as LLLA. So for GLES2 you need to use (c.r, c.r, c.r, c.a). You either need an #if defined(GLES2), or use the compatibility macro RG8_CHANNEL defined in base.glsl. Check the videoTextureNV12.frag shader for an example.
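For illustration, a minimal sketch of the first option, assuming a GLES2 define is visible to the shader preprocessor as suggested above, and that the engine's usual compatibility layer maps texture() and FragColor appropriately on ES2 (the RG8_CHANNEL macro from base.glsl would be the alternative to the explicit conditional):

void main()
{
    vec4 c = texture(u_tex0, v_texCoord);
#if defined(GLES2)
    // GL_LUMINANCE_ALPHA: two-channel data arrives as LLLA, so alpha is in .a
    FragColor = v_color * vec4(c.r, c.r, c.r, c.a);
#else
    // RG8: gray in .r, alpha in .g
    FragColor = v_color * vec4(c.r, c.r, c.r, c.g);
#endif
}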
Would (c.r, c.r, c.r, c.a) work for both cases? For RG8, it's being loaded as RGRG into the RGBA channels, so instead of using R and G, why not just use R and A, since A has the same value as G?
I treat loading RG8 as RGRG as a bug, so I would go with an #if defined(GLES2). GLES2 will die at some point (in a few years? Currently just 4.1% of Android market share), so the R and A channel layout is deprecated legacy, and I wouldn't build a new feature on top of outdated stuff.
@@ -0,0 +1,16 @@
#version 310 es
precision highp float;
precision highp int;
Use lowp precision as default for both float and int. That's enough for simple color processing.
This shader was copied from positionTextureColor.frag, with the only section changed being what is in the main() function. The existing shaders use the same settings for precision etc.
I'm aware that all shaders for some reason were marked with highp, and I've actually benchmarked that it has a noticeable performance impact on mobile devices.
I didn't realise there was such a performance hit on mobile devices, so if there is no need to have them marked as highp, then at some point we should adjust the rest of the shaders as well.
I've tried to find the notes from the tests I did: I was rendering ~1 million vertices (300 triangles per batch across 1000 batches) with Axmol's standard position texture color shader, and the difference was 51ms (lowp) vs 54ms (highp), so about 5%. But this will depend on your game's fillrate: the more pixels you draw, the bigger the impact will be.
Btw, the device I was testing on didn't really support lowp, so it was falling back to mediump. On a device with true lowp the difference would be larger. But I think most devices only support mediump and highp.
precision highp int;

layout(location = COLOR0) in vec4 v_color;
layout(location = TEXCOORD0) in vec2 v_texCoord;
Use highp precision for texture coordinates.
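Putting the two precision suggestions together, the header of this shader could look roughly like the sketch below (an illustration only; COLOR0 and TEXCOORD0 are the location macros the existing Axmol shaders already use):

#version 310 es
// lowp is enough for simple colour processing; promote only what needs the extra range.
precision lowp float;
precision lowp int;

layout(location = COLOR0) in vec4 v_color;
// Texture coordinates are explicitly highp to avoid sampling artifacts on larger textures.
layout(location = TEXCOORD0) in highp vec2 v_texCoord;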
I've tracked the initialization of the texture through the loading path.

Since the format change, I don't see the point of the conversion from RG8 to RGRG, because the pixel format, in the end, still claims it's RGBA! It means that no code can rely on the actual layout of the data when the reported format is RGBA.
Sorry, I'm not clear on the purpose of your post, or what it is you're actually asking. I am also not overly familiar with that section of code, or why the conversions happen, so perhaps the person who can answer it for you is @halx99.
I think it's a bug if you request an RG texture and the actual image is RG and it still gets converted to RGBA (loading RGRG). But as I've said, it might have been done for backwards compatibility. Maybe it's ok to break it in this case, since it behaves like a bug: if you request RG and supply RG, then you should get RG. Now, if you request RGBA and the image is RG, then it's another question whether it should be loaded as RGRG or RRRG. Maybe that could be a separate feature, where you could request channel swizzles before loading data into the requested format.
Hello, just a quick note about the problems we have here. I tried to remove the RG to RGBA conversion but I still have cases where a conversion happens. Digging a bit, it seems that most of the API does not want to use the pixel format of the image resources :(
This does not follow the principle that textures are just x-channel buffers and that the semantics are given by the shader.

Now, this is a lot of work for the few hours I can give to Axmol, and I'm very afraid of the side effects in the many places I have no knowledge of. I doubt I would be very helpful here.
Agreed, it has been a big source of issues and frustration for us. Instead of this, there should be a way to convert the texture being loaded either by asking for another pixel format, or specifying some kind of channel swizzles. But this should be done as a parameter for loading functions, not a global variable.
Yeah, from your summary of the issues it seems there's a need to rethink and refactor how textures are loaded (and reloaded on GL context loss). But that would touch a lot of places and would include a lot of breaking changes. Not sure what the best way forward is in this case, or whether such changes would be welcome.
Perhaps the larger refactoring should be part of Axmol v3, since breaking changes would be expected in that major update.
This is mostly what is described by @rh101 in #2230, with the addition of the following:

- convertRG8ToRGBA8 now copies the pixels as RRRA instead of RARA (as introduced in "Remove deprecated pixel formats L8, A8, LA8" #1839).

I could not reproduce the output shown by @rh101 in their comment without the point above. From my understanding, the PNG is loaded into an Image with the correct pixel format (RG8), then this image is passed to a Texture2D alongside a render pixel format set to RGBA. The Texture2D subsequently calls updateWithMipmaps, which may convert the pixels into the render format. By the time the Texture2D reaches the Sprite, the pixel format of the texture is RGBA.
Tested with cpp-tests' Texture2D/5:PNG Test (Linux) and in my app (Linux and Android).
I hereby invoke @smilediver to double-check the loading of two-channel images, and @halx99 for the convertRG8ToRGBA8 part :)