Why aren't the C-supplied integer types good enough for basically any project?


I'm more of a sysadmin than a programmer. I spend an inordinate amount of time grovelling through programmers' code trying to figure out what went wrong, and a disturbing amount of that time is spent dealing with problems that arise when the programmer expected one definition of __u_ll_int32_t or whatever (yes, I know that's not real), but either expected the file defining that type to be somewhere other than where it is, or (far worse, but thankfully rare) expected the semantics of that definition to be something other than what they are.

As I understand C, it deliberately doesn't attach width guarantees to the integer types (and that is a good thing); instead it gives the programmer char, short, int, long, and long long, in all their signed and unsigned glory, with defined minima that the implementation (hopefully) meets. Furthermore, it gives the programmer various macros that the implementation must provide to tell you things like the width of a char, the largest unsigned long, etc. And yet the first thing any non-trivial C project seems to do is either import or invent a set of types that give it explicitly 8-, 16-, 32-, and 64-bit integers. This means that as the sysadmin, I have to have the definition files in place that the programmer expects (that is, after all, my job), but not all of the semantics of all those definitions are the same (this wheel has been re-invented many times), and there's no non-ad-hoc way that I know of to satisfy all of my users' needs here. (I've resorted at times to making a <bits/types_for_ralph.h>, which I know makes puppies cry every time I do it.)

What does defining the bit-width of numbers explicitly (in a language that deliberately doesn't want that) gain the programmer that makes it worth this configuration-management headache? Why isn't knowing the defined minima and the platform-provided MAX/MIN macros enough for what C programmers want to do? Why would you want to take a language whose main virtue is that it's portable across arbitrarily-bitted platforms, and then typedef yourself into specific bit widths?

When a C or C++ programmer (hereinafter addressed in the second person) is choosing the size of an integer variable, it's usually in one of the following circumstances:

  • You know (at least roughly) the valid range of the variable, based on the real-world value it represents. For example,
    • numPassengersOnPlane in an airline reservation system should accommodate the largest supported airplane, so it needs at least 10 bits. (Round up to 16.)
    • numPeopleInState in a census-tabulating program needs to accommodate the most populous state (currently about 38 million), so it needs at least 26 bits. (Round up to 32.)

In this case, you want the semantics of the int_leastN_t types from <stdint.h>. It's common for programmers to use the exact-width intN_t types here, when technically they shouldn't; however, 8/16/32/64-bit machines are so overwhelmingly dominant today that the distinction is merely academic.

You could use the standard types and rely on constraints like "int must be at least 16 bits", but a drawback of this is that there's no standard maximum size for the integer types. If int happens to be 32 bits when you only needed 16, then you've unnecessarily doubled the size of your data. In many cases (see below), that isn't a problem, but if you have an array of millions of numbers, you'll get lots of page faults.

  • Your numbers don't need to be that big, but for efficiency reasons you want a fast, "native"-sized data type instead of a small one that may require time wasted on bitmasking or zero/sign-extension.

This is what the int_fastN_t types in <stdint.h> are for. However, it's common to use the built-in int here, which in the 16/32-bit days had the semantics of int_fast16_t. It's not the native type on 64-bit systems, but it's usually good enough.

  • The variable is an amount of memory, an array index, or a casted pointer, and thus needs a size that depends on the amount of addressable memory.

This corresponds to the typedefs size_t, ptrdiff_t, intptr_t, etc. You have to use typedefs here, because there is no built-in type that's guaranteed to be memory-sized.

  • The variable is part of a structure that's serialized to a file using fread/fwrite, or is exchanged with a non-C language (Java, COBOL, etc.) that has its own fixed-width data types.

In these cases, you truly need an exact-width type.

  • You haven't thought about the appropriate type, and use int out of habit.

Often, this works well enough.


So, in summary, all of the typedefs in <stdint.h> have their use cases. However, the usefulness of the built-in types is limited, due to:

  • The lack of maximum sizes for these types.
  • The lack of a native memsize type.
  • The arbitrary choice between the LP64 (on Unix-like systems) and LLP64 (on Windows) data models on 64-bit systems.

As for why there are so many redundant typedefs of fixed-width (WORD, DWORD, __int64, gint64, FINT64, etc.) and memsize (INT_PTR, LPARAM, vptrdiff, etc.) integer types, it's mainly because <stdint.h> came late in C's development, and people are still using older compilers that don't support it, so libraries need to define their own. For the same reason, C++ has so many string classes.

