experimental::TMXTiledMap maps bigger than 128x128 with 32-bit indices

I searched a bit and found that there is a limitation on the size of tilemaps at around 128x128, caused by an OpenGL limit on the number of vertices that can be rendered in one call.

I am using the experimental::TMXTiledMap, which only renders the tiles visible in the viewport (automatic culling). It does so by calculating which tiles are visible and building an index buffer that references only the vertices of those tiles. This index buffer is a vector of GLushort (16-bit).
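(For context, assuming the usual four vertices per tile quad: 16-bit indices can address at most 2^16 = 65,536 vertices, and 65,536 / 4 = 16,384 tiles, which is exactly a 128x128 map and matches the limit described above.)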

I changed the source code of experimental::TMXLayer like this:

In CCFastTMXLayer.h:
std::vector<GLuint> _indices; instead of std::vector<GLushort> _indices;

In CCFastTMXLayer.cpp, in updateIndexBuffer():
_indexBuffer = IndexBuffer::create(IndexBuffer::IndexType::INDEX_TYPE_SHORT_32, (int)_indices.size());
instead of
_indexBuffer = IndexBuffer::create(IndexBuffer::IndexType::INDEX_TYPE_SHORT_16, (int)_indices.size());

After this change, cocos could render maps bigger than 128x128 on win32.

My question: why is this limit hard-coded? Is it because of multi-platform support? I googled a bit and found that it should work on iOS as well (not tested):

on which someone answers:

This answer is incorrect, GL_UNSIGNED_INT is supported for glDrawElements if GL_OES_element_index_uint is implemented (which it is in iOS, according to OpenGLES/ES2/glext.h).
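For illustration, a minimal sketch of what such a guarded draw call could look like; the drawTiles function and the supportsUintIndices flag are assumptions for this example, not cocos2d-x API:

```cpp
// Include the GL/GLES header appropriate for your platform before this sketch.

// Minimal sketch: pick the widest index type the device supports at draw time.
// Assumes an element array buffer is already bound. 'supportsUintIndices' would
// be filled in once at startup by checking for GL_OES_element_index_uint.
void drawTiles(GLsizei indexCount, bool supportsUintIndices)
{
    if (supportsUintIndices)
    {
        // GL_UNSIGNED_INT indices are only legal on ES 2.0 when the
        // GL_OES_element_index_uint extension is present (core in ES 3.0+).
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
    }
    else
    {
        // 16-bit indices: at most 65,536 addressable vertices per draw call.
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, nullptr);
    }
}
```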

The original tilemap renderer was created a long long time ago. There’s been a bit of discussion on improving TMX support in cocos2d, but beyond the current state of FastTMX there’s not much that has been done yet.

I’m guessing it would make sense in the short term to add a check for that GL extension and use SHORT_32 if available. I don’t foresee any issue adding it in as long as it continues to default to _16 because I’d guess there are still active devices that don’t support it.
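A rough sketch of that short-term check, using Configuration::checkForGLExtension() (mentioned later in this thread); the pickIndexType helper is hypothetical, and the include paths and enum names simply follow the snippet above:

```cpp
#include "base/CCConfiguration.h"         // header paths may differ per version
#include "renderer/CCVertexIndexBuffer.h"
using namespace cocos2d;

// Hypothetical helper: query the extension once and fall back to 16-bit
// indices when it is missing, so older devices keep working.
IndexBuffer::IndexType pickIndexType()
{
    const bool has32Bit = Configuration::getInstance()
        ->checkForGLExtension("GL_OES_element_index_uint");
    return has32Bit ? IndexBuffer::IndexType::INDEX_TYPE_SHORT_32
                    : IndexBuffer::IndexType::INDEX_TYPE_SHORT_16;
}
```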

Feel free to submit an Issue/PR on github.

@stevetranby if I were to do this the correct way, it should probably happen in CCConfiguration, as there are more checks there involving what the GPU supports (https://github.com/cocos2d/cocos2d-x/blob/v3/cocos/base/CCConfiguration.cpp). Another option is a compile flag, letting developers decide whether they want to use it or not. Which one would you prefer?
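For comparison, the compile-flag option could look roughly like this; the macro name CC_TMX_USE_32BIT_INDICES is made up for this sketch and is not the flag used in the actual PR:

```cpp
// Requires the platform's GL/GLES header for GLuint/GLushort.

#ifndef CC_TMX_USE_32BIT_INDICES
#define CC_TMX_USE_32BIT_INDICES 0   // default off: keep 16-bit indices everywhere
#endif

#if CC_TMX_USE_32BIT_INDICES
using TMXIndex = GLuint;    // needs GL_OES_element_index_uint on ES 2.0 devices
#else
using TMXIndex = GLushort;  // safe everywhere, but caps maps at roughly 128x128
#endif

// The layer would then declare: std::vector<TMXIndex> _indices;
```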

I made a PR: https://github.com/cocos2d/cocos2d-x/pull/17006

Great!

Another idea is to allow for runtime selection in the short term (v3.x) while unsupported devices exist. One could make it performant by, for example, setting a function pointer (or lambda) at startup that creates the index buffer, and using a runtime allocation plus a void* for the index storage. This is probably related to adding it into CCConfiguration.cpp.
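A sketch of that runtime-selection idea; the IndexWriter type and makeIndexWriter function are illustrative, not existing cocos2d-x API:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Decide the index width once at startup, then route all index writes through
// a std::function so the per-frame fill code never branches on the type.
struct IndexWriter
{
    std::function<void(std::vector<uint8_t>&, uint32_t)> push;
    std::size_t bytesPerIndex = 2;
};

IndexWriter makeIndexWriter(bool use32Bit)
{
    IndexWriter w;
    w.bytesPerIndex = use32Bit ? 4 : 2;
    if (use32Bit)
    {
        w.push = [](std::vector<uint8_t>& buf, uint32_t index) {
            // Append the index as 4 raw bytes (the "runtime allocation + void*" idea).
            const auto* p = reinterpret_cast<const uint8_t*>(&index);
            buf.insert(buf.end(), p, p + 4);
        };
    }
    else
    {
        w.push = [](std::vector<uint8_t>& buf, uint32_t index) {
            const auto narrow = static_cast<uint16_t>(index);
            const auto* p = reinterpret_cast<const uint8_t*>(&narrow);
            buf.insert(buf.end(), p, p + 2);
        };
    }
    return w;
}
```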

Do your platform targets support INT_32?
If not, I'm curious what you'd do to support both with the compile-flag version?

Chunking up the map is probably the long-term solution; it has been discussed briefly by me and others on this board, and more so in other communities, including development of Minecraft/voxel clones.

I think we can use GLsizei, as glDrawElements accepts GLsizei. @ricardo do you have any idea why FastTilemap uses GLushort?

@zhangxm isn't GLsizei always 32-bit? And if we were to use it, there is still the issue of passing the type to the IndexBuffer.

Yep, GLsizei is 32-bit, as the docs say. If we change to GLsizei, we would need to change other code to make it work too.
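For reference, the OpenGL ES 2.0 prototype below shows that GLsizei is only the type of the count parameter; the width of the indices themselves is selected by the separate type argument:

```cpp
void glDrawElements(GLenum mode, GLsizei count, GLenum type, const GLvoid* indices);
// In core ES 2.0, 'type' may be GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT;
// GL_UNSIGNED_INT is only valid when GL_OES_element_index_uint is available.
```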

By doing that you would drop support for devices that do not support 32-bit indices (OpenGL ES without the extension GL_OES_element_index_uint). What share of all supported devices is that?

From what I found, GL_OES_element_index_uint was created for OpenGL ES 1.0. Is there any doc that says OpenGL ES 2.0 also needs it?

It seems there are OpenGL ES 2.0 devices that don't support GL_OES_element_index_uint, according to this page. So we should check for it at runtime?

And if a device doesn't support it and my game uses maps bigger than 128x128, it will render just part of the map. Say I chose to do it via a compile flag and it tried to render on a device without GL_OES_element_index_uint, would it crash?

Either way, the developer would have to target a certain Android version (API level 18 and up supports OpenGL ES 3.0):
https://developer.android.com/guide/topics/graphics/opengl.html

We can start using 32-bit indices instead of 16-bit indices if needed.
We are using 16-bit indices only because they take less memory… just that.
But modern phones have a lot of memory, so perhaps we can start using 32-bit indices for everything.
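(To put rough numbers on the memory difference, assuming two triangles, i.e. six indices, per tile: a 256x256 map needs 256 x 256 x 6 = 393,216 indices, which is about 768 KB with 16-bit indices versus about 1.5 MB with 32-bit indices.)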

I think this PR is good for now, since it defaults to disabled (i.e. set to use 16-bit indices). Assuming the dev team doesn't mind the extra defines?

I'm not sure what the correct failure mode is. Either revert to 16-bit and just render the corrupted tile map, or maybe add a debug-time assert, or even a release-mode message box, if 32-bit is enabled but checkForGLExtension() fails.
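A sketch of that fallback, using a hypothetical helper resolveUse32BitIndices (not part of the PR): assert in debug builds, and silently drop back to 16-bit in release builds so the game still runs.

```cpp
#include "base/CCConfiguration.h"   // header paths may differ per version
#include "base/ccMacros.h"          // CCASSERT
using namespace cocos2d;

// Hypothetical helper: honor the 32-bit request only if the extension exists.
bool resolveUse32BitIndices(bool requested)
{
    if (!requested)
        return false;

    const bool supported = Configuration::getInstance()
        ->checkForGLExtension("GL_OES_element_index_uint");

    // CCASSERT only fires in debug builds; release builds just fall back to 16-bit.
    CCASSERT(supported, "32-bit tile map indices requested but GL_OES_element_index_uint is not supported");
    return supported;
}
```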

A future improvement is probably best done by refactoring the TMX classes to support auto-chunking of a tile map for rendering. Initially this would only be done for the GL primitives, and later it could be done as higher-level world chunking (a la Minecraft and others). That said, 4.x could probably go 32-bit from the start if the device ecosystem of the minimum supported platforms mostly (or entirely) supports this GL extension?
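A conceptual sketch of that auto-chunking idea (the TileChunk type and buildChunks function are invented for this example): split the layer into chunks small enough that each stays safely below the 16-bit vertex limit and gets its own buffers and draw call.

```cpp
#include <algorithm>
#include <vector>

struct TileChunk
{
    int originX = 0, originY = 0;  // chunk position in tile coordinates
    int width = 0, height = 0;     // chunk size in tiles
    // Each chunk would own its own vertex/index buffers here.
};

std::vector<TileChunk> buildChunks(int mapWidth, int mapHeight, int chunkSize = 64)
{
    // 64x64 tiles * 4 vertices = 16,384 vertices, comfortably below 65,536.
    std::vector<TileChunk> chunks;
    for (int y = 0; y < mapHeight; y += chunkSize)
    {
        for (int x = 0; x < mapWidth; x += chunkSize)
        {
            TileChunk c;
            c.originX = x;
            c.originY = y;
            c.width   = std::min(chunkSize, mapWidth - x);
            c.height  = std::min(chunkSize, mapHeight - y);
            chunks.push_back(c);
        }
    }
    return chunks;
}
```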

Yep, the PR keeps compatibility and lets developers use larger indices. @stevetranby I agree with you that we need to do higher-level world chunking in the future.