I am creating an image-effects program using cocos2d-x, and one of its key features is gesture-based region definition (using the mouse on desktop systems and the touchscreen on Android and iOS), used for masking, anchor-point definition, and custom-shaped region-of-interest specification. One way to accomplish this would be to take the window-coordinate touch/click callbacks provided by the cc2dx framework and apply offsets based on an image’s position in the window to figure out which pixels (if any) the user touched or clicked and gestured over. I’d prefer a more direct approach, though: ideally the image sprite would generate its own callbacks, with touch coordinates expressed in its own coordinate system rather than that of its parent window, so that results would be the same regardless of any scaling, translation, or rotation applied to the image independently of the parent window. Which image wrapper class from cc2dx would be best for this use case? I realize I will probably have to build much of this functionality myself, but of the many provided image container/display classes, which would make the most sense as the starting point for a ‘smart sprite’ class capable of reporting the coordinates of user interactions in its own coordinate system?
thanks very much,
While browsing the class and method index, I discovered the convertTouchToNodeSpace function inherited from CCNode. It seems like this might do the trick, more or less… what do you think? There is also getGrid from CCNode, described as returning ‘A CCGrid object that is used when applying effects’, which sounds promising: what exactly does the returned CCGridBase pointer provide?