Hi everybody, I have a problem with the setCameraMask method.
In my Init function I define a camera:
_camera = Camera::createPerspective(60, visibleSize.width / visibleSize.height,
100, 1100);
_camera->setAnchorPoint(cocos2d::Vec2(0.5f, 0.5f));
_camera->setPosition3D(Vec3(x, y, z));
_camera->setCameraFlag(CameraFlag::USER1);
this->setCameraMask(static_cast<unsigned short> (CameraFlag::USER1), true);
this->addChild(_camera, 4);
When I add a new sprite with the following function:
auto position = _camera->getPosition();
auto node = Sprite::create("images/node_1.png");
node->setAnchorPoint(Vec2(0.5f, 0.5f));
node->setPosition(position);
node->setScale(0.3f);
auto radius = node->getContentSize().height / 2;
auto node_body = PhysicsBody::createCircle(radius, PHYSICSBODY_MATERIAL_DEFAULT, Vec2::ZERO);
node->setPhysicsBody(node_body);
node->setCameraMask(static_cast<unsigned short> (CameraFlag::USER1), true);
this->addChild(node,2);
the following happens
but if I use the following function to move the camera:
auto position3Ds = _camera->getPosition3D();
position3Ds.x += 10.0f;
_camera->setPosition3D(position3Ds);
the following happens
The sprite doesn’t change its position, but its PhysicsBody does.
Has anyone had the same problem? Can anybody help me?
Sorry, my English is bad
You haven’t given us enough consecutive code; saying “the next function” doesn’t help us visualize the complete workflow. Can you post more of the surrounding code?
I am running into a similar issue.
Here is my code for moving the camera:
virtual void onKeyboardEvent(EventKeyboard::KeyCode keyCode, bool isPressed) override
{
if (!isPressed)
return;
switch (keyCode)
{
case EventKeyboard::KeyCode::KEY_LEFT_ARROW:
if (auto camera = Camera::getDefaultCamera())
camera->runAction(MoveBy::create(0.2f, -Vec2::UNIT_X * 5));
break;
case EventKeyboard::KeyCode::KEY_RIGHT_ARROW:
if (auto camera = Camera::getDefaultCamera())
camera->runAction(MoveBy::create(0.2f, Vec2::UNIT_X * 5));
break;
}
}
And here is my code for moving Nodes that have a PhysicsBody attached:
Vec2 positionInNode;
void AddTouchListenerToNode(Node* node)
{
auto touchListener = EventListenerTouchOneByOne::create();
touchListener->setSwallowTouches(true);
touchListener->onTouchBegan = [](Touch* touch, Event* event)
{
auto node = event->getCurrentTarget();
auto bounds = node->getBoundingBox();
auto position = node->convertTouchToNodeSpace(touch);
bounds.origin.setZero();
if (!bounds.containsPoint(position))
return false;
auto parent = node->getParent();
auto positionInParent = parent->convertTouchToNodeSpace(touch);
auto nodePosition = node->getPosition();
positionInNode = positionInParent - nodePosition;
if (auto physicsBody = node->getPhysicsBody())
{
physicsBody->setGravityEnable(false);
physicsBody->setDynamic(false);
physicsBody->setAngularVelocity(0.0f);
physicsBody->setVelocity(Vec2::ZERO);
}
return true;
};
touchListener->onTouchMoved = [](Touch* touch, Event* event)
{
auto node = event->getCurrentTarget();
auto parent = node->getParent();
auto positionInParent = parent->convertTouchToNodeSpace(touch);
node->setPosition(positionInParent - positionInNode);
if (auto physicsBody = node->getPhysicsBody())
{
physicsBody->setAngularVelocity(0.0f);
physicsBody->setVelocity(Vec2::ZERO);
}
return true;
};
touchListener->onTouchEnded = [](Touch* touch, Event* event)
{
auto node = event->getCurrentTarget();
auto parent = node->getParent();
auto positionInParent = parent->convertTouchToNodeSpace(touch);
node->setPosition(positionInParent - positionInNode);
if (auto physicsBody = node->getPhysicsBody())
{
physicsBody->setGravityEnable(true);
physicsBody->setDynamic(true);
}
return true;
};
node->getEventDispatcher()->addEventListenerWithSceneGraphPriority(touchListener, node);
}
If the camera has not moved, I can successfully drag any of the physics-enabled Nodes, but once the camera has moved, I have to offset the screen touch point by the negative of the camera’s displacement before I can move the Node.
To summarize, the problem I am encountering is that Node::convertTouchToNodeSpace() does not take the camera’s matrix into account. Is there a function that can convert screen-space coordinates into node-space coordinates while taking a camera’s view matrix into account?