why does cocos2dx convert int to long in many classes?

why does cocos2dx convert int to long in many classes? Who can tell me?

To prepare for 64-bit CPU architectures.
We think 64-bit will be mainstream next year.

Minggo Zhang wrote:

To prepare for 64-bit CPU architectures.
We think 64-bit will be mainstream next year.

You mean that 64-bit CPU architectures will never use the int type?

No. It is because the size of int is not fixed.
For example, the size of int on a 64-bit CPU may still be 4 bytes.

Uhmm…

  1. What’s the problem with 32-bit integers? Just don’t mix integers and pointers and everything will be fine.
  2. So, you are sure that the size of long is 64-bit on a 64-bit CPU. Well, bad news for you: in the LLP64 data model, which is used by Windows (both x86-64 and IA-64), the size of long is 32-bit.

Want to be sure about the size of your integer variables? Use fixed-width types, e.g. int32_t.
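
For illustration, a small standalone sketch makes the difference visible (the LP64/LLP64 values in the comments are the commonly documented ones; what your own compiler prints is what counts):

#include <cstddef>
#include <cstdint>
#include <cstdio>

int main() {
    // Commonly documented widths:
    //   LP64  (64-bit Mac/Linux): int = 4, long = 8, size_t = 8
    //   LLP64 (64-bit Windows):   int = 4, long = 4, size_t = 8
    std::printf("int     : %zu bytes\n", sizeof(int));
    std::printf("long    : %zu bytes\n", sizeof(long));
    std::printf("size_t  : %zu bytes\n", sizeof(std::size_t));
    std::printf("int32_t : %zu bytes\n", sizeof(std::int32_t));  // always 4
    std::printf("int64_t : %zu bytes\n", sizeof(std::int64_t));  // always 8
    return 0;
}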

@dot squid
Thanks for sharing.
We don’t want to use fixed-size types. We just want to use the native size of the architecture: for example, 32-bit on a 32-bit CPU and 64-bit on a 64-bit CPU. I think it is better for performance.

Do you have any suggestions?

Yeah, my suggestion is to recall Donald Knuth’s thesis: premature optimization is the root of all evil. Especially when you don’t know how to optimize correctly.

64-bit arithmetic is not faster than 32-bit arithmetic (it can even be slower on some CPUs). Moreover, if you switch to 64-bit integers you’ll get

  1. a larger memory footprint (I’m sure it’s obvious why this happens; see the sketch at the end of this post);
  2. poorer memory bandwidth utilization;
  3. poorer cache line utilization.

So, leave int as it is. There are a lot of other places in Cocos2d-x requiring optimization.
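
Here is the memory-footprint point from above as a small sketch; the NodeInt/NodeLong structs are made up for the example, and the sizes in the comments assume LP64:

#include <cstdio>

struct NodeInt  { int  x, y, width, height; };   // 4 * 4 = 16 bytes on LP64
struct NodeLong { long x, y, width, height; };   // 4 * 8 = 32 bytes on LP64

int main() {
    std::printf("NodeInt : %zu bytes\n", sizeof(NodeInt));
    std::printf("NodeLong: %zu bytes\n", sizeof(NodeLong));
    // With a 64-byte cache line: 4 NodeInt objects fit per line vs. 2 NodeLong
    // objects, i.e. half the useful data per memory fetch for the same values.
    return 0;
}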

@dot squid
I agree with you.

I will discuss it with Ricardo, and I have asked him to take a look at this thread.

Hi,

Some thoughts:

  • dot squid is correct about Windows. In the LLP64 data model, long is still 32-bit.
  • But our code needs to be 64-bit friendly:
    > * cocos2d-x runs OK on 64-bit Mac, 64-bit Linux and ARM64.
    > * But it compiles with many warnings regarding implicit conversions from 64-bit to 32-bit types.
    > * And having compilation warnings is not good. We need to fix all of the compiler warnings.

99% of the warnings were not important, but I did fix some critical bugs related to those conversions.
So, the motivation to be 64-bit friendly is to remove all our compiler warnings, including the “Implicit conversion to 32-bit Type” warnings.

As an example, for the std containers in LP64 mode (Mac, Linux, ARM64), size() and count() return size_t, which is a 64-bit type.

So, part of our API needs to use size_t (where it makes sense) or long in order to prevent compiler warnings. (Casting a size_t to int is not a good idea).

So, the proposal is to update our API to be LP64 friendly: basically compile without warnings.

We do not need to replace ALL the ints with longs (or size_t) but part of our API needs to be updated.

UPDATE:
Info about LLP64 vs LP64 can be found here: http://en.wikipedia.org/wiki/64-bit_computing#64-bit_data_models

Yep, we should change int to long when it makes sense. But it will bring some more warnings too. For example, if we change the return type of MenuItemFont::getFontSize from int to long, then all the code that invokes MenuItemFont::getFontSize will produce warnings. If we want to fix those warnings, we have to change more code from int to long as well.
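
A small sketch of that cascade, using a simplified stand-in class rather than the real cocos2d-x code:

// Simplified stand-in, only to show how the warning moves to the call sites.
class MenuItemFont {
public:
    long getFontSize() const { return _fontSize; }  // was: int getFontSize() const
private:
    long _fontSize = 32;
};

void existingCallSite(const MenuItemFont& item) {
    // On LP64, with -Wconversion (or Clang's -Wshorten-64-to-32),
    // this line now warns: implicit conversion from 'long' to 'int'.
    int fontSize = item.getFontSize();
    (void)fontSize;
    // Fixing it means touching this code too, e.g.:
    //   auto fontSize = item.getFontSize();
    //   int  fontSize = static_cast<int>(item.getFontSize());
}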

Yes, it might (or might not) bring more warnings.
If it brings more warnings, we should fix them as well.

My suggestion is:

// old code: this code raises warnings in LP64
int i = array.size();

// in theory size() should return size_t
// so the correct code should be
size_t i = array.size();

// but I suggest replacing it with 'auto'.
// According to our C++ guidelines we should only use 'auto' for local variables,
// so we can use 'auto' in this case.
auto i = array.size();

ccArray.h is using long for the size, but I think it should use size_t, since it is a size (disclaimer: a few weeks ago I replaced int with long in ccArray… I think it should be size_t instead).

@Riq

So we should re-check the code that uses long and consider whether it is suitable to change it to size_t, auto, or int.
What’s your opinion?

What about this?

  • the API definition (parameters and return values) should never use auto
  • local variables could use auto… it is not mandatory, but auto is allowed for local variables
  • if a size is expected in an API like int get_size(), size_t (or std::size_t) should be used instead of int.
  • int or long should be used when it makes sense: by default int should be used. But if the compiler raises an “implicit conversion to 32-bit type” warning, and it is not a size, long should be used instead.
  • If the API is expecting a 64-bit type (either in 32-bit or 64-bit mode), it should use uint64_t or int64_t

thoughts?
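
A minimal sketch of what those rules could look like in practice (the function names below are invented for the example):

#include <cstddef>
#include <cstdint>
#include <vector>

// A size in the API uses std::size_t, never auto.
std::size_t getChildrenCount(const std::vector<int>& children) {
    return children.size();
}

// Default to int when the value is clearly small (e.g. a font size).
int getFontSize() { return 32; }

// An explicitly 64-bit quantity uses int64_t/uint64_t in both 32-bit and 64-bit mode.
std::int64_t getElapsedMicroseconds() { return 0; }

void example(const std::vector<int>& children) {
    // 'auto' is allowed (but not mandatory) for local variables.
    auto count = getChildrenCount(children);  // deduces std::size_t
    (void)count;
}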

My opinion:
Type-conversion warnings are normal in C++; it depends on the coder’s intention. For example, if I have a vector which only contains a few hundred elements, using int to loop over it is totally OK, although unsigned int or size_t would eliminate the warning. If int is enough for an API, for example an API that returns the screen resolution, using long is meaningless. It only brings overhead, and the warnings can be suppressed by other means, such as the preprocessor.
So whether to change the type or not depends on what the API is meant to do, rather than on the CPU architecture. If your API expresses its function well, the warning does no harm. In fact, few projects need such a large integer type.

@Riq
I agree with you.

@ning li

Thanks. But our goal is to fix all the warnings. Having warnings on our build is not healthy because we might not detect important issues.

“Implicit conversion to 32-bit type” warnings shouldn’t be normal.
If it is OK to convert a 64-bit value to a 32-bit one, then a static_cast<> should be added to the conversion to avoid the warning.

You have a good point regarding the screen resolution: int should be enough for that. If you have a long with the resolution, what we should do is cast the long to an int in order to fix the warning.
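
A minimal sketch of that fix, assuming a hypothetical getter that hands the resolution back as a long:

#include <cassert>
#include <limits>

// Hypothetical getter that returns the screen width as a long.
long getScreenWidthAsLong() { return 1920L; }

int getScreenWidth() {
    long width = getScreenWidthAsLong();
    // A screen width always fits in an int, so make the narrowing explicit;
    // the static_cast also silences the implicit-conversion warning.
    assert(width <= std::numeric_limits<int>::max());
    return static_cast<int>(width);
}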

Ricardo Quesada
Warnings indeed help us find errors, but writing healthy, warning-free code is tough in the C++ world; it depends on what kind of project it is. A generic-purpose library, such as a data-structure library, cannot predict how large the structures will be, so the STL uses size_t and leaves the decision to the platform. For less generic software, almost all decisions about integer types can be made by the software itself. If the business logic never has a chance to use a type bigger than int, why change it to long? That doesn’t make sense, and it confuses the user.

“Possible loss of data” conversion warnings cannot be avoided completely, especially when the business logic represents real-world things.
For example:
int sprite_on_screen(){return _sprite_vec.size();}

Should I change int to std::size_t? Changing to std::size_t would not introduce a warning and would adapt to different platforms, but it is silly because I know the return value will never be bigger than std::numeric_limits<int>::max() on any platform. So keep it simple, and just let the warning show.

ning li wrote:

Should I change int to std::size_t? Changing to std::size_t would not introduce a warning and would adapt to different platforms, but it is silly because I know the return value will never be bigger than std::numeric_limits<int>::max() on any platform. So keep it simple, and just let the warning show.

This is a direct route to unmaintainable, unportable code full of heisenbugs. Don’t do that. All those size_t’s, ptrdiff_t’s, and uintptr_t’s appeared not by accident, but by design, by people whose experience in C++ and software development in general is much, much greater than yours and mine. Trust them and follow the rules.

dot squid:
You are mixing something up. Unlike generic library designers, common C++ users do not face the problem the designers face (not being able to predict how their code will be used).

If you use the STL and never make a “possible loss of data” conversion, that means almost all primitive types in your project will become size_t. For example, if sprite_on_screen() returns a size_t, then the variable holding the return value must also be size_t: size_t n = sprite_on_screen(); and all the other places that use this n, again, have to change to size_t. Soon you will find your project is full of size_t. This is theoretically perfect for the compiler, but really bad from a design point of view: it means you don’t understand the business logic well.
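
For reference, this is the propagation being described (sprite_on_screen and _sprite_vec as in the earlier snippet; the rest is a sketch):

#include <cstddef>
#include <vector>

std::vector<int> _sprite_vec;  // stands in for the real sprite list

// Once the return type becomes size_t...
std::size_t sprite_on_screen() { return _sprite_vec.size(); }

void update() {
    // ...the variable holding it becomes size_t, the loop index becomes
    // size_t, and anything these values are passed to tends to follow.
    std::size_t n = sprite_on_screen();
    for (std::size_t i = 0; i < n; ++i) {
        // ...
    }
}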