Why does int size vary?
Why does the size of the int data type change from system to system and from architecture to architecture?
1 Answer
Well - god didn't descend onto us programmers and tell us that we should be using 8 bits in a byte or 4 bytes in an int. At the time C was created, these were the computers people were doing real work with: https://en.wikipedia.org/wiki/PDP-10 - 36-bit words and 36-bit integers, often handled as 9-bit bytes! Similarly, there were machines with 6-, 7-, and 8-bit bytes. People were still trying things out!
Of course, that's not true today, and we have settled on some common standards. But that doesn't mean things have stopped changing! Not too long ago, all consumer computers switched from 32-bit to 64-bit architectures (32-bit machines could only address 4GB of RAM), which also made it natural to make long ints bigger, since you have more space inside CPU registers anyway.
Plus, there will always be embedded development, where 64-bit longs may simply be too expensive in a lot of cases.
C aims to be a language that can run on any architecture, and the only way to make that happen is to not say "an int has to be this big, eat it". Inside limits.h (or climits if you're a C++ guy, http://tigcc.ticalc.org/doc/limits.html) you can see the range of numbers each type must at least be able to hold, but that's all the standard promises. C99 also guarantees that:
sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)
If you need types with an exact, fixed width, check out stdint.h, which defines types like uint64_t and int16_t.