Cutting an image into pieces based on a grid mask

Hi,

Lately I have seen several questions about jigsaw puzzles. They were about how to generate the set of pieces based on some piece masks. I would like to go one step further and try to cut the piece set based on a raster grid image like this:

The original image had a white contour instead of blue; I just painted it blue so it shows correctly here in the forums. This mask image is the same size as the image we want to cut into pieces (the one with the picture), with white outlines for the pieces and transparent pixels everywhere else.

I have seen other games using something like this to build the piece set, but I can’t see how to do it in a performant way. The only approach I can come up with is applying a flood fill algorithm and extracting each piece that way.

Any help will be really appreciated.

Cheers.

These puzzle topics have also been super interesting.

Let us rally some troops and talk about how to do this!

@IQD @stevetranby @grimfate @mannewalis @milos1290 @Heyalda @Zinitter @fradow

yeah… let’s talk about this… I’ve been thinking about a game like this, but still thinking about the best way to implement it. And suddenly there are threads like this… hahaha…

I think the easiest way to cut pieces is to have a mask for each piece, and use a RenderTexture to take only a part of the original image (with the appropriate blendFunc). That’s what I do, at least.

No matter what, to be able to move each piece independently, you are going to need a separate Node per piece. Using your image is going to make some things easier (no need to know in advance where each piece is, since that information is in the image), but I doubt it will be more performant.

The thing is, to be able to cut pieces from this image, you first need to detect the contour of each piece in order to generate a mask. It is far from impossible, but it is still a fair amount of work.

Another caveat: how are you going to handle anti-aliasing? Your image is very aliased, so it won’t look good. You need to generate a better, anti-aliased image, and then your mask generator must be able to handle alpha.

Here is what your mask generator should generate (only the alpha channel is relevant, the color is not):

@hexdump

@grimfate may give you some ideas with the example code in this post

RenderTexture should be a good idea. Your mask should cut out each unique piece.
I can’t give you much advice on the masking as I’m not good at it.

But if you want to randomise the puzzle pieces, you can label each unique piece and pair them up.

For each unique piece, you can define which pieces can match it in an array.
Then you can randomly generate a grid mask in a loop (see the sketch after the example below).

An illustrated example:
1 2 3
4 p 5
6 7 8
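
Here is a rough sketch of that idea (nothing cocos2d-x specific, and all names here are made up): each shape lists which shapes it can sit next to, and the grid is filled cell by cell with a random compatible shape.

```cpp
// Sketch only: PieceShape/generateGrid are hypothetical names, not from any library.
#include <cstdlib>
#include <vector>

struct PieceShape {
    int id;
    std::vector<int> fitsRightOf; // ids of shapes this one can sit to the right of
    std::vector<int> fitsBelow;   // ids of shapes this one can sit below
};

static bool contains(const std::vector<int>& v, int id) {
    for (int x : v) if (x == id) return true;
    return false;
}

// Fill a rows x cols grid with shape ids, honoring left/top compatibility.
std::vector<std::vector<int>> generateGrid(const std::vector<PieceShape>& shapes, int rows, int cols) {
    std::vector<std::vector<int>> grid(rows, std::vector<int>(cols, -1));
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            std::vector<int> candidates;
            for (const PieceShape& s : shapes) {
                bool okLeft = (c == 0) || contains(s.fitsRightOf, grid[r][c - 1]);
                bool okTop  = (r == 0) || contains(s.fitsBelow,  grid[r - 1][c]);
                if (okLeft && okTop) candidates.push_back(s.id);
            }
            // Fall back to any shape if the table has no match (depends on how complete it is).
            grid[r][c] = candidates.empty()
                             ? shapes[std::rand() % shapes.size()].id
                             : candidates[std::rand() % candidates.size()];
        }
    }
    return grid;
}
```

The resulting grid of shape ids can then be used to pick the right mask image for every cell.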

Alternatively, you may try using a ClippingNode, but I’m not sure about the performance of having too many clipping nodes.

Go check the ClippingNode test.
http://cocos2d-x.org/js-tests/
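
For reference, here is a minimal sketch of the ClippingNode approach in cocos2d-x v3 (file names are placeholders; it needs one clipper per piece, which is where the performance concern above comes from):

```cpp
// Inside some Layer/Scene, assuming USING_NS_CC and one alpha-mask image per piece.
auto stencil = Sprite::create("piece_mask.png");  // placeholder mask for a single piece

auto clipper = ClippingNode::create();
clipper->setStencil(stencil);
clipper->setAlphaThreshold(0.05f);                // draw only where the mask alpha exceeds this

auto photo = Sprite::create("photo.png");         // placeholder full picture
// Offset the photo so that the region belonging to this piece lies under the stencil,
// e.g. photo->setPosition(...) based on where the piece sits in the full image.
clipper->addChild(photo);

this->addChild(clipper);
```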

@slackmoehrle
You summon so many “troops” at once, lol :smile:

haha starting to think I need to make a tutorial for this idea if everyone wants to make this sort of game.

I agree with @Fradow. Individual masks and RenderTextures. As @Zinitter said, there is some code I’ve already posted on this idea, but that thread isn’t complete yet. Should give some idea on where to start though.


@grimfate

It would be great to have such a tutorial.

I would suggest start from “How to make RenderTexture work with mask”.

I read in your other post that different masks and backgrounds can give different results.

So I think you should standardize and make it clear how to prepare the mask.

@Zinitter @grimfate while waiting for a complete tutorial, here is my method to generate pieces using a mask like the image above:

/* Arguments:
 - texture: an already initialized RenderTexture, which must be the size of cutzone * MULTI_SAMPLING
 - cutzone: an already initialized mask, which must have a color where you want to keep the data. See image above
 - destination: a custom object that contains information about where this piece is located in your model
 - model: the sprite which contains your original image
 
 Warning: due to how the Renderer works in V3, you shouldn't reuse ANY Sprite/RenderTexture if you generate several pieces at once, because the actual render will be performed later.
 
 Macros I use:
 - MULTI_SAMPLING is defined to 2, so that the resolution of the piece is higher than normal, to look better
 - DEFAULT_SIZE is 628 * ccb scale (I use ccb to load my graphics, so the scale is dependent on it; you can probably use a constant)
*/
void generatePieceTexture(RenderTexture* texture, Sprite* cutzone, Piece* destination, Sprite* model)
{
    //Prepare the cutzone and model. The model must be above, and you should find a proper positioning
    cutzone->setAnchorPoint(ccp(0.5, 0.5));
    model->setAnchorPoint(ccp(0, 1));
    cutzone->setZOrder(1);
    model->setZOrder(2);
    cutzone->setScale(MULTI_SAMPLING);
    model->setScale(MULTI_SAMPLING * DEFAULT_SIZE / model->getContentSize().width);
    
    cutzone->setPosition(ccp((cutzone->getTextureRect().size.width * cutzone->getScaleX() / 2),
                             (cutzone->getTextureRect().size.height * cutzone->getScaleY() / 2)));
    
    model->setPosition(destination->getImagePosition(cutzone->getScale() * CCBLoaderGetScale()));
    
    CCSize size = CCSizeMake(cutzone->getTextureRect().size.width * MULTI_SAMPLING, cutzone->getTextureRect().size.height * MULTI_SAMPLING);
    texture->setContentSize(size);
    texture->setAnchorPoint(ccp(0.5, 0.5));
    
    //Here is the actual "cut", which is performed by visiting the model, then using a blendFunc to cut out what's necessary
    texture->beginWithClear(1, 1, 1, 1);
    model->visit();
    cutzone->setBlendFunc(ccBlendFunc{GL_ONE_MINUS_DST_ALPHA, GL_SRC_ALPHA});
    cutzone->visit();
    texture->end();
    texture->getSprite()->setPosition(ccp(size.width/2, size.height/2));
}
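
And a hypothetical usage sketch to go with it (the file names and the `pieces` container are placeholders, not from my actual project):

```cpp
// One fresh cutzone/model/RenderTexture per piece, per the warning above.
for (Piece* piece : pieces) {
    auto cutzone = Sprite::create("mask_for_this_piece.png"); // the piece's mask, like the image above
    auto model   = Sprite::create("original_photo.png");      // a fresh sprite of the picture to cut
    auto texture = RenderTexture::create((int)(cutzone->getContentSize().width  * MULTI_SAMPLING),
                                         (int)(cutzone->getContentSize().height * MULTI_SAMPLING));
    texture->retain(); // keep it alive until the deferred render has actually been performed

    generatePieceTexture(texture, cutzone, piece, model);
    // Later, texture->getSprite() can be added to the scene as this piece's visual.
}
```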

Hi again,

Thanks a lot for the answers, however (and I don’t want to sound rough) they do not answer my original question.

There’s a game made with cocos2d-x I came across the other day called Jigsaw Pirates: https://play.google.com/store/apps/details?id=com.greysprings.jigsawgames

Surprisingly, this game uses a technique to cut the pieces based on a raster grid mask like the one I posted. I know this because I decompiled the game, modified the grid image it has inside, and after recompiling, the game cut the pieces based on my modified grid image.

This way of doing it is pretty nice because you can have pieces of any shape. They don’t need to be regular pieces, and all the work of cutting them out is done by the computer. What’s more, you can apply the same grid to any image.

The real question is: how did these guys accomplish this? I have been thinking about it for some hours without coming up with a nice solution, and I would like to hear from you. At this point I have 2 possible solutions (no implementation, just ideas):

  1. A flood fill algorithm for each piece shape in the grid mask. One of the problems here is detecting where to start the flood fill when pieces have weird shapes.

  2. Some kind of algorithm to follow the grid lines in the image and extract the pieces from there.

Please stick to this question and let’s see if we can figure out how these smart guys did it. I’m really interested, not just to apply it in my game but to learn a new way of doing this.

Cheers.

I think they’ve used OpenCV for that. You should google more about OpenCV and how to use it.

Check this link:

It has some samples on how to use OpenCV; there is also one for Android that converts the camera image into a random puzzle (I think).

Here is another link in C++ that might help you

You can flood fill the entire image, using a counter to assign a new color to each area. The first color index would start at the top-left pixel, since it appears all puzzle grids need at least a 1 px border of color. You just go from the top-left pixel to the bottom-right pixel, check whether the pixel has already been labeled, and if not, flood fill from it with the current color index, then increment the index. This is similar to generating or improving procedural maps (biomes, finding connected areas, etc.).

Edit: If you have enough memory, you could render each color index to a different RenderTexture using a shader that converts that uniform color to white and everything else to black. You could also do this with the pixel data directly. This will probably work okay for small grids, but you may have to find a way to trim off the excess, which complicates the positioning of the masks.
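
To make that concrete, here is a rough sketch of the scan + flood fill labeling, working directly on the mask’s RGBA pixel data (the alpha threshold, names and signature are assumptions, not anyone’s actual code):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Labels every piece area with its own index (1, 2, 3, ...); 0 means "grid line".
// rgba is the mask's pixel buffer, e.g. obtained through cocos2d::Image::getData().
std::vector<int> labelPieces(const uint8_t* rgba, int width, int height, int& pieceCount) {
    std::vector<int> labels(width * height, 0);
    auto isLine = [&](int x, int y) {
        return rgba[(y * width + x) * 4 + 3] > 128;   // alpha threshold for the contour
    };

    pieceCount = 0;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (isLine(x, y) || labels[y * width + x] != 0) continue;

            int label = ++pieceCount;                  // new area found, new color index
            std::vector<std::pair<int, int>> stack;
            stack.push_back({x, y});
            while (!stack.empty()) {
                std::pair<int, int> p = stack.back();
                stack.pop_back();
                int px = p.first, py = p.second;
                if (px < 0 || py < 0 || px >= width || py >= height) continue;
                if (isLine(px, py) || labels[py * width + px] != 0) continue;
                labels[py * width + px] = label;
                stack.push_back({px + 1, py});
                stack.push_back({px - 1, py});
                stack.push_back({px, py + 1});
                stack.push_back({px, py - 1});
            }
        }
    }
    return labels;
}
```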

Hi,

@stevetranby
If I understand correctly, this is exactly what I am doing right now. Starting from the top left, I look for a pixel with transparency != 1.0f, and when I find one I do a flood fill and dump the original image to a sprite using the flood-filled zone. This way I create all the pieces. I also do something more: I use the alpha channel to add a little contour to the pieces, because, if you check the image I posted, the line is aliased, with only a thin line being totally opaque and some surrounding pixels being less opaque. This way I can build a nice contour from those less-opaque pixels and send the result to the sprite while creating it with the flood fill algorithm :).
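
For what it’s worth, the extraction step can look something like this (a simplified sketch, not my exact code; `labels` would come from the flood fill pass, and `photo`/`mask` are the RGBA pixel buffers of the picture and the grid image, both the same size):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Copies one labeled piece (plus the aliased contour pixels around it) into its own Sprite.
cocos2d::Sprite* extractPiece(const uint8_t* photo, const uint8_t* mask,
                              const std::vector<int>& labels,
                              int width, int height, int label)
{
    // Bounding box of the piece, grown by one pixel so the contour fits.
    int minX = width, minY = height, maxX = -1, maxY = -1;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (labels[y * width + x] == label) {
                minX = std::min(minX, x); maxX = std::max(maxX, x);
                minY = std::min(minY, y); maxY = std::max(maxY, y);
            }
    if (maxX < minX) return nullptr;
    minX = std::max(0, minX - 1); minY = std::max(0, minY - 1);
    maxX = std::min(width - 1, maxX + 1); maxY = std::min(height - 1, maxY + 1);

    // A contour pixel only belongs to this piece if it touches one of its labeled pixels.
    auto touchesPiece = [&](int x, int y) {
        return (x > 0          && labels[y * width + x - 1] == label) ||
               (x < width - 1  && labels[y * width + x + 1] == label) ||
               (y > 0          && labels[(y - 1) * width + x] == label) ||
               (y < height - 1 && labels[(y + 1) * width + x] == label);
    };

    int w = maxX - minX + 1, h = maxY - minY + 1;
    std::vector<uint8_t> out(w * h * 4, 0);
    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            if (labels[y * width + x] != label && !touchesPiece(x, y)) continue;
            int src = (y * width + x) * 4;
            int dst = ((y - minY) * w + (x - minX)) * 4;
            std::memcpy(&out[dst], &photo[src], 3); // copy the photo's RGB
            out[dst + 3] = 255 - mask[src + 3];     // fade the pixel out under the grid line
        }

    auto tex = new cocos2d::Texture2D();
    tex->initWithData(out.data(), out.size(), cocos2d::Texture2D::PixelFormat::RGBA8888,
                      w, h, cocos2d::Size(w, h));
    auto piece = cocos2d::Sprite::createWithTexture(tex);
    tex->release(); // the sprite retains the texture
    return piece;
}
```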

Anyway, I have to dig a bit more into this implementation, because it is fine for offline piece computation but not for doing it online.

@milos1290

Thanks, this is something I’m investigating too :).

Cheers.

@hexdump cool, I misread your post and thought you had already settled on the flood fill solution (oh wait, actually I read your 2nd post and noticed you were wondering where to start when there are weird shapes).

I would love to know if you find a more performant solution, but why do you need it to be an online solution? Are you working to allow people to draw their own puzzle grid in-game? If so, I bet they won’t mind having the app run the calculation once, and then you could save out the pieces to a folder/files with a unique id as a “strong cache”.

Our flood fill for our map generation should handle your image in less than 1 second on most hardware, so I’m curious whether you’re trying to process a lot of grids or very large grids?

I’m trying to think of a way to find the pieces automagically other than with a flood fill. I would think a marching-ants-style algorithm that finds the pieces by their edges instead of by filling would likely be no more performant. Maybe you could do everything on the GPU (OpenCV)?

Another idea, depending on how you are acquiring the grid in the first place: you could use SVG or some custom means of going from a drawing to paths, either using a vector drawing program or writing a drawing app within the game where you take the drawn touches and convert them to paths. It’s possible you could extract each piece’s information more efficiently that way.
(Probably not a great idea now that I’ve written it out.)

Often I look at something I’m trying to process, step back, and ask which parts benefit from automation and which parts would benefit from by-hand/manual effort.

Threading could also help in this instance, considering mobile and desktop are starting to have 4+ threads available going forward.

Just some thoughts.

Thanks. I really appreciate your comments. I’m building a little piece-extractor script in Python right now to get my apps going, but I will get back to the online solution later.

I want it to be online just because others are doing it :), and I like to check if I can do it too :smiley:

Cheers.
