Shader differences from 3.0 beta 2 to 3.2

I have some grayscale shader code that works fine in 3.0 beta 2 but when ported to 3.2, it fails to work correctly. I’m trying to make something grayscale using the following:

const std::string fsh = R"(
varying vec2 v_texCoord;
varying vec4 v_fragmentColor;

void main() {
    vec4 texel = v_fragmentColor * texture2D(CC_Texture0, v_texCoord);
    gl_FragColor = vec4(1.0 - texel.r, 1.0 - texel.g, 1.0 - texel.b, texel.a);
})";

GLProgram *program =
    GLProgram::createWithByteArrays(ccPositionTextureColor_noMVP_vert,
                                    fsh.c_str());

Sprite *someImage = Sprite::create("/path/to/texture");
someImage->setGLProgram(program);
addChild(someImage);

From the looks of things, the fragment color’s alpha value is being ignored. The previously transparent pixels in the original image are being inverted from rgb = vec3(0.0, 0.0, 0.0) to vec3(1.0, 1.0, 1.0), while the alpha component seems to just ‘stay’ at 1.0. I’m not sure if the shader faulted or something else went wrong; there are no GL errors being printed out.

Here’s a side-by-side comparison of the Sprite with and without the shader applied:

Any help would be appreciated, thanks.

Actually, I tested the code in 3.1.1 and it works fine.

EDIT: I previously mentioned that 3.2 alpha0 and 3.2 rc0 were unaffected. That’s not true; I loaded up an incorrect version when I compiled.

EDIT2: Looks like one of the changes from 3.1.1 to 3.2 is that RGBA8888 images now have their alpha premultiplied automatically and are stored as such, which makes the blend function for the Sprite { GL_ONE, GL_ONE_MINUS_SRC_ALPHA }. The only workaround I have found is to manually set the blend function to { GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA } when enabling the custom shader.

@rivm

I think the reason is premultiplied alpha.
In version 3.0 there was no premultiplication; it was re-added in version 3.2.
Take a pixel (r, g, b, a) as an example:
it is (r, g, b, a) in 3.0 and (r*a, g*a, b*a, a) in 3.2,
so in your shader you need to divide the RGB colour of the pixel by the alpha value.
That should work.

Sorry for our incompatibility.

Thanks! I took the premultiplied alpha values into account with the following change:

gl_FragColor = vec4(vec3(1.0) - (texel.rgb * vec3(1.0 / texel.a)), texel.a);

However, it looks like there’s still some white padding.

Here’s what I got:

This is what happens when I overlay the exact same sprite texture on top of each of the inverted ones:

Do you know what could be causing that?

Did you mean the thin white border that surrounds the logo?
If I am right, the reason is the sharp transition in alpha value between the image and the transparent background.
If we use GL_LINEAR (or one of its variants) in glTexParameter, an average of several nearby pixels will be sampled and used as the gl_FragColor.

The final colour is src.rgb * src.a + dst.rgb * (1 - src.a) if a non-premultiplied-alpha blend function is used.

  • Outside the border, src.rgb is (1,1,1) (according to the shader) and src.a is 0.
  • Inside the border, src.rgb is (vr,vg,vb) (according to the shader) and src.a is 1.

So right on the border, the sampled src.rgb is between (vr, vg, vb) and (1,1,1), and src.a is between 0 and 1,

which means src.rgb * src.a is not 0, so a white border comes out.

There are two solutions for this:

  • use GL_NEAREST in glTexParameter, which is not perfect: artifacts will appear if we scale the sprite;

  • use a smooth alpha transition at the border.

Calling:

Texture2D::setAliasTexParameters()

to get:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

Doesn’t seem to eliminate the averaging at the borders. In any case, since I’m applying this shader to particular sprites on a larger sprite sheet (some of which are scaled differently), making that texture unit aliased is going to create unwanted side effects for every other sprite using that texture as well, won’t it?

To that end, I’m wondering if there are actually any downsides with just setting the BlendFunc to a non-premultiplied alpha so I can apply my shader properly?

Theoretically, there is no way to get the same result just by setting a different blendFunc.

Take a pixel (r, g, b, a) as the example.
In 3.1.1, we would first invert the RGB value in the shader, getting (1-r, 1-g, 1-b, a), and then use the non-premultiplied blend factors { src.a, 1 - src.a } to get ((1-r) * a + (1-a) * dst.r, (1-g) * a + (1-a) * dst.g, (1-b) * a + (1-a) * dst.b) as the final value.
However, in 3.2, we get (1 - r*a, 1 - g*a, 1 - b*a, a) from the shader, and in order to get the same result we would need (1-r) * a / (1 - r*a) as the src blend factor. There is no blend factor for this.