std.math.pow() at comptime causes compilation error?

When I try to compile the program below, I get the following error:

error: evaluation exceeded 1000 backwards branches
    const overflow_shift = math.floatExponentBits(T) + 1;

Program:

const std = @import("std");

const Num_samples: u32 = 256;

const Table = calculateTable();

fn calculateTable() [Num_samples]f32 {
    var buf: [Num_samples]f32 = undefined;

    var i: u32 = 0;
    while (i < Num_samples) : (i += 1) {
        buf[i] = std.math.pow(f32, @intToFloat(f32, i) / 255.0, 2.4);
    }

    return buf;
}

pub fn main() void {
    std.debug.print("{}\n", .{Table[15]});
}

Calling calculateTable() at runtime is apparently not a problem. Should this work?


There’s a limit on how many backward branches compile-time code execution may take. You can adjust it with @setEvalBranchQuota:

const Table = table_init: {
    @setEvalBranchQuota(11_000);
    break :table_init calculateTable();
};
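
For context on the number: the while loop itself only contributes Num_samples = 256 backward branches; judging by the error message, which points into the implementation of std.math.pow, most of the budget is spent inside the pow calls. That’s why the quota has to be raised well beyond 256; 11_000 is just a value with some headroom.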

That solves it! Do you know more about this feature? Will raising it consume a lot of memory, or what is the motivation for keeping the default limit at 1000?


I’m speculating here, but I think this limit serves as a warning flag to the programmer: it makes you stop and think about the balance between what should be evaluated at compile time and what should be left for runtime. In principle you could build programs where almost everything is evaluated at compile time, but then compile times (as in duration, i.e. waiting for the build) would inevitably grow substantially, taking Zig into the C++ and Rust zone of long compile times (xkcd: Compiling).


A limit has to be there if you want to make sure that a build eventually stops (either with success or failure); without such a system, people could mistakenly write programs that loop forever during the compile-time phase.

That wouldn’t be an insurmountable problem (you could still kill the build), but it would be an annoying one, because compile times depend on the hardware. Backward branches are a hardware-agnostic way of timing out builds. Occasionally you have to bump up the default limit if your application does a lot of stuff at comptime, but that’s pretty much it.
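
To make that concrete, here’s a small hypothetical snippet (mine, not from the original program) that deliberately blows past the default quota. The failure is deterministic: it triggers after exactly the same number of backward branches on any machine, and raising the quota the same way as above lets it compile:

const std = @import("std");

// Summing the first 100_000 integers at comptime takes about 100_000
// backward branches, so with the default quota of 1000 this fails with
// the same "evaluation exceeded 1000 backwards branches" error.
const big_sum = blk: {
    // Uncommenting the next line gives the evaluation enough budget:
    // @setEvalBranchQuota(200_000);
    var sum: u64 = 0;
    var i: u64 = 0;
    while (i < 100_000) : (i += 1) {
        sum += i;
    }
    break :blk sum;
};

pub fn main() void {
    std.debug.print("{}\n", .{big_sum});
}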


That makes sense. In C++ I sometimes write code to generate a file containing the table as C++ code, which I simply include from then on. As I understand it, Zig won’t recalculate/recompile the table for every unrelated change to other parts of the program.