
Expression evaluation with variables and literals

When I wrote my 'big factorials' code here on Sololearn, I struggled with a problem. I tried to do this (pseudocode): llu big_n = huge int * huge int; My assumption was that if I just provided a sufficiently large container, I could multiply two int variables and store the large result (yeah: pythonist ^^). What actually happened? int * int overflowed, and the distorted result got stored in the llu. It was a valuable lesson for me: int * int will be int, no warning.

Today I read that a number literal without decimals is interpreted as type int by the compiler. I thought: well, that would mean that two int literals would have to overflow as well, like: llu big_n = 2'100'000'000 + 3'000'000'000;

Funnily enough, that didn't happen here on Sololearn. It seemed like the compiler somehow 'understood' that we needed something bigger here. On my PC though, I got a warning 'Overflow in expression' and a crazy result. I suppose this has to do with how Sololearn is set up, but can anyone explain what exactly is going on?
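Here is a minimal sketch of the two situations (the values and variable names are made up just to illustrate; on most compilers int is 32 bits):

    #include <iostream>
    using namespace std;

    int main() {
        int a = 2000000000;                              // fits into int
        int b = 2;
        long long bad  = a * b;                          // int * int is evaluated as int: overflows before the assignment
        long long good = static_cast<long long>(a) * b;  // widen one operand, so the multiplication runs in long long
        cout << bad << '\n' << good << '\n';             // typically something garbled, then 4000000000
    }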

15th Jan 2019, 9:04 PM
HonFu
5 Answers
+ 2
HonFu, care to share that experimental code here so we can all see it?
16th Jan 2019, 8:47 AM
Ipang
+ 2
Ipang, I just wanted to recreate what I tried, but for some reason, today Sololearn agrees with my PC. It's just these 4 lines (C++):

    long long x = 2000000000 + 2000000000;
    cout << x;
    x = 2000000000ll + 2000000000;
    cout << x;

The first two lines lead to overflow, the second two don't, which is what was to be expected. What did I even do differently yesterday? oO
16th Jan 2019, 10:23 AM
HonFu
+ 2
HonFu, yes, the second example uses the "ll" suffix, which makes the literal a long long rather than an int. I guess that's why the addition didn't overflow in the second example; without that suffix an int was assumed, because 2000000000 is still within int range (cmiiw). Interesting test 👍
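A small sketch of that point (assuming the usual 32-bit int, and C++17 for is_same_v): the type of the sum is decided by the operands alone, which can be checked with decltype without even running the addition:

    #include <type_traits>

    // int + int stays int, no matter what it is later assigned to.
    static_assert(std::is_same_v<decltype(2000000000 + 2000000000), int>, "int + int is int");
    // One long long operand makes the whole addition long long.
    static_assert(std::is_same_v<decltype(2000000000ll + 2000000000), long long>, "ll + int is long long");

    int main() {}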
16th Jan 2019, 10:43 AM
Ipang
+ 2
HonFu, I don't dare say anything about the doubt in your last paragraph, I might be wrong 😁 But to some extent I guess we can think of it that way: the compiler is "smart" enough to deduce a literal's type, probably to optimise memory allocation 👍
16th Jan 2019, 11:04 AM
Ipang
+ 1
Ipang, thanks for your answer, it made me remember what I did differently yesterday: I wrote literals of numbers that didn't fit into an int. So the compiler does judge the literals: if the number is too large for an int, it gets a longer type. Consequently you may end up with ll * int, which leads to automatic conversion. On the other hand, if both operands are ints, they will overflow, unless I use something like the ll suffix to manually 'cast' one of them and force the conversion. So basically, literals work just like typed variables, convenient! But that means the compiler has a way to judge the literals, compare them to the maximum values of the different types, and infer the type from that, right?
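That seems to match a quick check (the sizes shown are what a typical 64-bit desktop compiler prints; the standard only promises that an unsuffixed decimal literal gets the first of int, long, long long that can hold it):

    #include <iostream>
    using namespace std;

    int main() {
        cout << sizeof(2'100'000'000) << '\n';   // typically 4: fits into int
        cout << sizeof(3'000'000'000) << '\n';   // typically 8: too big for int, so long or long long

        // Since 3'000'000'000 already has a wider type, the int literal is converted
        // and the addition happens in that wider type, so no overflow here.
        long long big = 2'100'000'000 + 3'000'000'000;
        cout << big << '\n';                     // 5100000000
    }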
16th Jan 2019, 10:53 AM
HonFu
HonFu - avatar