Arbitrary precision

  1. Is there a performance benefit to using an integer with fewer bits, for example i10 instead of i16, i25 instead of i32, or i50 instead of i64?
  2. Is the upper limit (65k) permanent, or will it be increased or even removed?
  3. For large numbers, how does Zig’s implementation compare to GMP? Are there any other advantages or disadvantages?
  4. Is there a plan for a bigfloat? I don’t mean something like f130, but just a single type.
  1. No. You can in fact lose performance, especially in packed structs, where the compiler generates all the bit shifting needed to obey the packed layout (see the first sketch below).
  2. Integers are stack allocated, and widths your machine can’t handle natively are emulated in software. An i65535 (the current maximum) takes roughly 8 KiB of sequential stack memory. I think this is a limit in LLVM, not Zig.
  3. GMP integers are dynamically sized and live in allocator-backed memory. If you need arbitrary-precision arithmetic, use GMP.
  4. Use mpf_t from GMP (see the GMP sketch below).
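
To make answers 1 and 2 concrete, here is a minimal sketch (the `Flags` struct and its fields are made up for illustration) that prints the storage sizes of a few odd-width integers and shows a packed struct whose field accesses force compiler-generated shifts and masks:

```zig
const std = @import("std");

// Hypothetical packed struct for illustration: every access to these
// oddly sized fields compiles into shift/mask operations.
const Flags = packed struct {
    kind: u3,
    count: u10,
    extra: u19,
};

pub fn main() void {
    // Outside packed structs, odd-width integers are padded up to a
    // hardware-friendly size: i25 still occupies 4 bytes, i50 occupies 8.
    std.debug.print("@sizeOf(i25)    = {} bytes\n", .{@sizeOf(i25)});
    std.debug.print("@sizeOf(i50)    = {} bytes\n", .{@sizeOf(i50)});

    // Very wide integers are emulated in software and live on the stack;
    // i65535 (the maximum bit width) takes roughly 8 KiB.
    std.debug.print("@sizeOf(i65535) = {} bytes\n", .{@sizeOf(i65535)});

    var f = Flags{ .kind = 2, .count = 512, .extra = 1 };
    f.count += 1; // read-modify-write: compiler-generated shifts and masks
    std.debug.print("count = {}, Flags packs into {} bits\n", .{ f.count, @bitSizeOf(Flags) });
}
```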
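For answers 3 and 4, a hedged sketch of using GMP from Zig through C interop. It assumes libgmp and its headers are installed and that you build with something like `zig build-exe gmp_demo.zig -lc -lgmp`; the `mpz_*`/`mpf_*` names are GMP’s documented API, reached here through `@cImport` (if the macro aliases don’t come through translate-c, the underlying `__gmpz_*`/`__gmpf_*` symbols can be called directly):

```zig
const std = @import("std");
const gmp = @cImport(@cInclude("gmp.h"));

pub fn main() void {
    // mpz_t: a dynamically sized integer, memory managed by GMP itself.
    var big: gmp.mpz_t = undefined;
    gmp.mpz_init_set_ui(&big, 2);
    gmp.mpz_pow_ui(&big, &big, 1000); // 2^1000, far wider than any fixed iN you would want on the stack

    var buf: [320]u8 = undefined; // 2^1000 has 302 decimal digits
    _ = gmp.mpz_get_str(&buf, 10, &big);
    std.debug.print("2^1000 = {s}\n", .{std.mem.sliceTo(&buf, 0)});
    gmp.mpz_clear(&big);

    // mpf_t: GMP's single arbitrary-precision float type (the "bigfloat").
    var third: gmp.mpf_t = undefined;
    gmp.mpf_init2(&third, 256); // 256 bits of mantissa precision
    gmp.mpf_set_ui(&third, 1);
    gmp.mpf_div_ui(&third, &third, 3);
    std.debug.print("1/3 ~ {d}\n", .{gmp.mpf_get_d(&third)});
    gmp.mpf_clear(&third);
}
```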