+ 1

Why keep the convention of starting at 0 when counting? It's obviously confusing when learning, and I'm guessing it leads to all kinds of mistakes?

1st Dec 2016, 7:38 AM
Abbass
3 Answers
+ 2
Because 0 represents an offset, and 'real' programmers know the addresses of even their pointers to data (maybe; it's a joke). Seriously: off-by-one errors are a fact of programming, and a lot of us don't separate 'count' and 'offset' well. Loops are tricky enough when you just want to include all values (< vs. <=, and let's see... 9 - 2 is 7, but elements 2 through 9 is actually 8 elements... now mix in some negations) without also having to remember to adjust for 1-based indexes. Yes, it mixes offsets and (human) positions, but in a portable way: you also get 'empty' tests (since 0 becomes a significant value), and I'd rather have a set of "that's 0, so it can't possibly be interfering" than a bunch of 1's that might be.
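A minimal C sketch of the loop-bounds and count-vs-offset point (illustrative values only):

```c
#include <stdio.h>

int main(void) {
    int data[10] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
    int n = 10;

    /* 0-based idiom: start at offset 0, stop with '<' -- no +1/-1 fudging */
    for (int i = 0; i < n; i++) {
        printf("%d ", data[i]);
    }
    printf("\n");

    /* count vs. offset: elements at offsets 2..9 inclusive are 9 - 2 + 1 = 8
       elements, not 9 - 2 = 7 -- the classic off-by-one trap described above */
    int first = 2, last = 9;
    printf("offsets %d..%d hold %d elements\n", first, last, last - first + 1);

    return 0;
}
```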
1st Dec 2016, 8:30 AM
Kirk Schafer
+ 1
At its most basic level, the computer starts counting from 0, so the convention is kept in programming languages. Some languages do index from 1, but 0-based indexing is now what most people expect, so newer languages also use 0. Yes, you are right that it seems weird at first, but you will get used to it.
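A small sketch of what "the computer counts from 0" means at the machine level: an array element's address is the base address plus (index × element size), so the first element naturally sits at offset 0. (Illustrative C; the exact byte offsets depend on the element type.)

```c
#include <stdio.h>

int main(void) {
    int a[4] = {10, 20, 30, 40};

    /* a[i] lives sizeof(int) * i bytes past the start of the array,
       so the very first element is at offset 0, not 1 */
    for (int i = 0; i < 4; i++) {
        printf("a[%d] is %td bytes from a[0] and holds %d\n",
               i, (char *)&a[i] - (char *)&a[0], a[i]);
    }
    return 0;
}
```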
1st Dec 2016, 7:54 AM
Sandeep Chatterjee
+ 1
Because that is how binary notation works: it counts up from 0. You can start from 1 if you want, but then that little value, the 0, is wasted, and your program will run a bit slower.
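A sketch of the point (not a benchmark): an n-bit index covers all 2^n positions only if 0 is used, and a 1-based index typically needs one extra subtraction before each access.

```c
#include <stdio.h>

int main(void) {
    /* an 8-bit index covers 256 positions (0..255) only if 0 is used;
       starting at 1 leaves one of the 256 bit patterns unused */
    int idx_min = 0, idx_max = 255;
    printf("8-bit index range: %d..%d (%d positions)\n",
           idx_min, idx_max, idx_max - idx_min + 1);

    int data[5] = {10, 20, 30, 40, 50};

    int pos = 3;                    /* a human, 1-based position             */
    int one_based = data[pos - 1];  /* 1-based access: extra "- 1" each time */
    int zero_based = data[pos];     /* 0-based access: index used directly   */
    printf("1-based position %d -> %d, 0-based offset %d -> %d\n",
           pos, one_based, pos, zero_based);

    return 0;
}
```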
1st Dec 2016, 11:39 AM
Seckar