@atomicRmw() on meta.Vector component

How do you perform @atomicRmw() on a component of a meta.Vector?
The program below will result in the error:

error: expected type '*f32', found '*align(16:0:4:2) f32'
_ = @atomicRmw(f32, &v[2], .Add, v[0] * v[1], .Monotonic);

It works fine on a normal array.

const std = @import("std");
const Vec4f = std.meta.Vector(4, f32);

pub fn main() void {
    var a = [_]f32{ 1.0, 2.0, 3.0, 4.0 };
    _ = @atomicRmw(f32, &a[2], .Add, a[0] * a[1], .Monotonic);
    std.debug.print("a={any}\n", .{a});

    var v = Vec4f{ 1.0, 2.0, 3.0, 4.0 };
    _ = @atomicRmw(f32, &v[2], .Add, v[0] * v[1], .Monotonic);
    std.debug.print("v={}\n", .{v});
}

Working around the compile error with @ptrCast(*f32, &v[2]) compiles, but yields the wrong result — judging by the output, the add lands on element 0 instead of element 2:

a={ 1.0e+00, 2.0e+00, 5.0e+00, 4.0e+00 }
v={ 3.0e+00, 2.0e+00, 3.0e+00, 4.0e+00 }

Vectors aren’t arrays; they’re chunks of a numeric type that the CPU has the opportunity to operate on in parallel. Vectors should be convertible to and from normal arrays, though, IIRC. If so, you could store your data in memory as f32s to operate on it atomically, and cast it to a vector after reading to do calculations.
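A minimal sketch of that approach, keeping the same old-style builtin signatures used above and relying on Zig’s coercion between arrays and vectors of matching length and element type:

```zig
const std = @import("std");
const Vec4f = std.meta.Vector(4, f32);

pub fn main() void {
    // Keep the authoritative data in a plain array, so element
    // pointers are ordinary *f32 and atomics work on them directly.
    var a = [_]f32{ 1.0, 2.0, 3.0, 4.0 };
    _ = @atomicRmw(f32, &a[2], .Add, a[0] * a[1], .Monotonic);

    // Arrays coerce to vectors of the same length and element type,
    // so SIMD math can be done on a copy...
    const v: Vec4f = a;
    const doubled = v * @splat(4, @as(f32, 2.0));

    // ...and the vector result coerces back to an array for storage.
    a = doubled;
    std.debug.print("a={any}\n", .{a});
}
```

The trade-off is an extra copy per round trip, but every atomic operation sees a plain *f32.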


Maybe that is for the best.

On the other hand, storing arrays of the vectors directly is convenient and works alright. So I figured the floats must be somewhere in memory, and it might be possible to access the underlying f32s directly. But perhaps that’s not possible with __m128 either, in a portable manner.
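For completeness, one way to reach the underlying floats without copying is to reinterpret the whole vector as an array, so the element pointer is a plain *f32. This is a sketch only: it assumes the in-memory layout of a vector matches an array of its elements, which the language may not guarantee on every target.

```zig
const std = @import("std");
const Vec4f = std.meta.Vector(4, f32);

pub fn main() void {
    var v = Vec4f{ 1.0, 2.0, 3.0, 4.0 };

    // Reinterpret the vector's storage as a [4]f32. Hypothetical /
    // layout-dependent: a Vec4f and a [4]f32 may not be guaranteed
    // to share a representation on all targets.
    const elems = @ptrCast(*[4]f32, &v);

    // Now &elems[2] is an ordinary *f32, so the atomic op is legal.
    _ = @atomicRmw(f32, &elems[2], .Add, v[0] * v[1], .Monotonic);
    std.debug.print("v={}\n", .{v});
}
```

Whether this is any more portable than poking into an __m128 is exactly the open question above.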