Can we blend the content of different cameras?

Hello guys,

Camera::USER1 = C1(draws my shader , Depth = -1)
Camera::USER2 = C2(draws my game scene, Depth =1)

I made a lighting shader for my game. The shader was quite heavy on the GPU, so to render it I am using the techniques suggested by @stevetranby in this post. My issue is that the shader output has some black portions along with the lights, and I want to draw that shader above my gameplay scene. If I use it as is, the black portions are drawn on top of the game scene, which is not acceptable. So I tried using two cameras, C1 and C2. To get my shader working properly with this arrangement, I have to set additive blending on every node drawn by C2 so that C1 can blend with every node. As a result, every node blends with the other nodes in C1, which again is not what I was looking for.

An easy fix for this is to render my game scene into a RenderTexture by calling visit() on my gameplay layer, then pass the texture from the RenderTexture into my shader and blend the texture colors inside the shader. Sounds easy, but I am worried about performance, since I have to do it in update(), which doesn't sound like an optimized option (explicitly calling visit() on every node every frame).
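For reference, the RenderTexture approach described above looks roughly like this in cocos2d-x 3.x. This is only a sketch; `_gameLayer` and `_rainProgramState` are placeholder members, and the uniform name `u_scene` is an assumption:

```cpp
// Rough sketch (cocos2d-x 3.x): capture the game layer into an
// offscreen texture every frame and feed it to the rain shader.
void GameScene::update(float dt)
{
    auto visibleSize = Director::getInstance()->getVisibleSize();
    auto rt = RenderTexture::create(visibleSize.width, visibleSize.height);

    rt->begin();
    _gameLayer->visit();   // explicit visit() -- the per-frame cost I'm worried about
    rt->end();

    // Bind the captured frame as a sampler uniform on the rain shader,
    // so the shader itself can blend droplets over the scene color.
    _rainProgramState->setUniformTexture("u_scene", rt->getSprite()->getTexture());
}
```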

What I am looking for is an arrangement like the collision groups found in physics engines, where objects in one group don't collide with each other but do collide with other groups. Similarly, if we had an arrangement where nodes rendered by one camera blended only with nodes drawn by another camera, we could easily separate layers and achieve more effects quite easily. I don't know of any such arrangement already present in cocos2d-x. If anyone could point me in the right direction, it would be a huge help; pointers are more than welcome. I tried to explain my situation as best I could; if more info is needed, please ask away. Am I overthinking it?
P.S. English is not my first language, so please forgive any typos.

Thanks
Happy gaming :smile:

You may be overthinking it, but it sounds like you need to test things out. Maybe the way you’re doing it is performant enough for the devices you care about?

My question would be what is your goal of what you’re asking? It sounds like some type of lighting effect? Is this like a box2d lighting effect? Or something like fog of war?

You could blend two input render textures (say frame buffer for each camera) with your own shader to output the screen frame buffer. Blending some combination of nodes based on “collision masks” does sound expensive.

On first read it does sound like you’re adding a little extra complexity than necessary, but that’s likely just my not understanding your goals fully.

That's what my initial thoughts were; I just needed confirmation from a more erudite person.

[quote=“stevetranby, post:2, topic:26994”]
Maybe the way you’re doing it is performant enough for the devices you care about?
[/quote]Are you suggesting that explicitly calling visit() on the game layer won't incur a performance hit? AFAIK, for every visit() call, onDraw() is called on every node, and as a result the matrices are recalculated for the custom command.

[quote=“stevetranby, post:2, topic:26994”]
My question would be what is your goal of what you’re asking? It sounds like some type of lighting effect? Is this like a box2d lighting effect? Or something like fog of war?
[/quote]It's more like water droplets on the camera screen :stuck_out_tongue_winking_eye: depicting a storm during gameplay, where the water drops are drawn with an ellipse shape.

[quote=“stevetranby, post:2, topic:26994”]
You could blend two input render textures (say frame buffer for each camera) with your own shader to output the screen frame buffer.
[/quote]I'm a little lost here; can you please share a few more details?

[quote=“stevetranby, post:2, topic:26994”]
On first read it does sound like you’re adding a little extra complexity than necessary, but that’s likely just my not understanding your goals fully.
[/quote]Please allow me to explain my goals further.

I have a gameplay scene with a few custom shaders. Gameplay is working fine and is performant enough. My rain shader is a little heavy on the GPU, and to optimize it I am using a small FBO, as suggested by you in another post of yours. Now, the rain shader also has some black portions, as the area not occupied by droplets is black. If I use it as is, it renders black (with drops), since it is not blended with the game scene. If I pass the gameplay texture into the shader using a RenderTexture, then due to the different FBO sizes it doesn't show (I suspect; I need to confirm this). Right now the easiest fix seems to be blending the rain shader additively with the gameplay scene.

EDIT: please view the link; I have uploaded a working screen capture of the same. Please notice the sprites blending with each other.

I will probably test this out sometime next week. Thanks for the info.

I was recommending the process of rendering to a smaller FBO as one where you'd render your entire game/scene rather than just a part of it as you are attempting, so I'm interested to see how well it works for only some nodes (possibly something we'd use when we want some higher-resolution nodes in the game). This would be for a fill-rate-limited game and device, which usually means older hardware or devices with super-high resolution (causing excess fill-rate issues when applying screen-sized quads with shaders).

Visiting nodes can be expensive, but if the CPU has a lot of headroom it actually may not be an issue.

I’m also trying to think if there’s a better way or better suggestion as a solution.

Here's what I would (will) try first:
You might be able to get the camera to render to the RenderTexture with a clear background instead of black. If not, you should be able to render the rain RenderTexture on top of the game layer using additive blending to effectively mask out the black, which is probably what you were asking for in the first place.
Camera1 - masked to all game nodes; add a Sprite node with its texture set to rt->getTexture(), and set the blend mode on rt.
Camera2 - masked to only the rain nodes; set to render into RenderTexture rt, either with a black background and additive blending, or a clear background with normal alpha blending.

(AFAIK you can't really blend two cameras directly. If both cameras render to the same FBO, I believe you can disable the clear between cameras in order to blend the nodes, but then you could have just used one camera and made sure all nodes for the 2nd camera render last.)
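A minimal sketch of the suggestion above in cocos2d-x 3.x, using a per-frame visit() into the RenderTexture rather than camera FBO binding (names like `_rainLayer` are placeholders, and the z-order is arbitrary):

```cpp
// Render only the rain layer into an offscreen texture with a
// transparent clear color, then composite it over the game layer.
auto winSize = Director::getInstance()->getWinSize();
auto rt = RenderTexture::create(winSize.width, winSize.height);

rt->beginWithClear(0, 0, 0, 0);   // clear background, not opaque black
_rainLayer->visit();
rt->end();

// Show rt's texture on top of the game, blended additively so the
// black/empty areas of the rain texture add nothing to the scene.
auto rainSprite = rt->getSprite();
rainSprite->setBlendFunc({ GL_ONE, GL_ONE });
rainSprite->setPosition(winSize.width / 2, winSize.height / 2);
this->addChild(rainSprite, 100);  // z-order above the game layer
```

With the transparent clear color you could instead keep a normal alpha blend, which avoids the additive approach washing out bright overlaps.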

Instead of using a camera, we render fog-of-war information into a RenderTexture and then pass it as a uniform into our relatively simple tile shader to apply a darkening value to each tile's pixels. We've also worked on implementing the FBO solution for the entire scene to get around WP8 phone fill-rate limitations, mostly because those resolutions are high-DPI, whereas we are fine with mid-DPI quality where necessary.
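The fog-of-war technique above can be sketched like this in cocos2d-x 3.x. This is only an outline of the idea; `_fogRT`, `_fogNode`, `_tileMap`, and the uniform name `u_fogTex` are all placeholders:

```cpp
// Render fog-of-war coverage into an offscreen texture...
_fogRT->beginWithClear(0, 0, 0, 0);
_fogNode->visit();   // draws the revealed/darkened areas
_fogRT->end();

// ...and bind it as a second sampler on the tile map's shader.
// The fragment shader then multiplies each tile pixel by the
// sampled fog value to darken unexplored areas.
auto state = _tileMap->getGLProgramState();
state->setUniformTexture("u_fogTex", _fogRT->getSprite()->getTexture());
```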

You can set the FBO to transparent:

fbo->setClearColor(Color4F(0, 0, 0, 0));

That's what I did for my box2dlights port.

I only glanced over the posts, but I hope this helps.