mesa: 367cf3a2e3e51466429a6446ef14ed398a5fb948 (12.1.0-devel)

$ ./bin/shader_runner tests/shaders/glsl-array-bounds-01.shader_test -auto
Probe color at (15,15)
  Expected: 0.000000 1.000000 0.000000
  Observed: 1.000000 0.000000 0.000000
PIGLIT: {"result": "fail" }

c264fdbc073a0dfc393f53a8be880f535fd4b988 is the first bad commit
commit c264fdbc073a0dfc393f53a8be880f535fd4b988
Author: Kenneth Graunke <kenneth@whitecape.org>
Date:   Mon Jun 20 11:20:51 2016 -0700

    glsl: Split arrays even in the presence of whole-array copies.

    Previously, we failed to split constant arrays.  Code such as

       int[2] numbers = int[](1, 2);

    would generate a whole-array assignment:

       (assign () (var_ref numbers)
                  (constant (array int 4) (constant int 1) (constant int 2)))

    opt_array_splitting generally tried to visit ir_dereference_array
    nodes, and avoid recursing into the inner ir_dereference_variable.
    So if it ever saw an ir_dereference_variable, it assumed this was a
    whole-array read and bailed.

    However, in the above case, there's no array deref, and we can
    totally handle it - we just have to "unroll" the assignment,
    creating assignments for each element.

    This was mitigated by the fact that we constant propagate whole
    arrays, so a dereference of a single component would usually get the
    desired single value anyway.  However, I plan to stop doing that
    shortly; early experiments with disabling constant propagation of
    arrays revealed this shortcoming.

    This patch causes some arrays in Gl32GSCloth's geometry shaders to
    be split, which allows other optimizations to eliminate unused GS
    inputs.  The VS then doesn't have to write them, which eliminates
    the entire VS (5 -> 2 instructions).  It still renders correctly.
    No other change in shader-db.

    v2: Drop !AOA check and improve a comment (feedback from Tim Arceri).

    Cc: mesa-stable@lists.freedesktop.org
    Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
    Reviewed-by: Timothy Arceri <timothy.arceri@collabora.com>

:040000 040000 5d96cd8ba322e2ac1857baf9e401b5d494a85ab2 2b920fe841296c89251cca630f51060de413cbb3 M	src

bisect run success
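The "unroll" step the commit message describes can be illustrated with a toy sketch. This is not Mesa's actual GLSL IR code; the function and data shapes below are simplified stand-ins for what opt_array_splitting now does when it meets a whole-array assignment with a constant right-hand side:

```python
# Toy sketch (hypothetical names, NOT Mesa's real opt_array_splitting code):
# turn one whole-array assignment into one assignment per element, so the
# array can be split into independent scalar variables.

def unroll_whole_array_assignment(split_vars, const_array):
    """Unroll `array = constant_array` into per-element assignments.

    split_vars:  the scalar variables standing in for array[0], array[1], ...
    const_array: the constant values on the right-hand side
    """
    assignments = []
    for var, value in zip(split_vars, const_array):
        # roughly: (assign () (var_ref numbers_i) (constant int value))
        assignments.append((var, value))
    return assignments

# int[2] numbers = int[](1, 2);  becomes two scalar assignments:
print(unroll_whole_array_assignment(["numbers_0", "numbers_1"], [1, 2]))
# [('numbers_0', 1), ('numbers_1', 2)]
```

Once the array is split this way, each scalar can be constant-propagated or dead-code-eliminated on its own, which is how the unused GS inputs in Gl32GSCloth got removed.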
I can't reproduce this. I tried both classic swrast and llvmpipe and it seems to be working fine.
I can reproduce this here, so I gave it a quick look: the GLSL IR is identical with and without the optimisations (MESA_GLSL=nopt), yet the result changes to pass. It seems the _mesa_optimize_program invocation from ir_to_mesa.cpp:get_mesa_program() is the one to blame. Something sounds quite fishy here...
I think the test is wrong; it should not be expecting a specific outcome. The spec says:

"Behavior is undefined if a shader subscripts an array with an index less than 0 or greater than or equal to the size the array was declared with."

And the test is doing:

float array[] = float[](1.0, 2.0, 3.0, 4.0);

void main()
{
    int idx = 20;
    if (array[idx] == 5.0)
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
    else
        gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}

So the result is undefined.
I've updated the test to have the same outcome regardless of which branch is taken. https://patchwork.freedesktop.org/patch/147474/ This should fix the problem.
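The actual fix is in the linked patch; purely as a hypothetical sketch of the idea, a branch-agnostic version of the shader could write the same color from both paths, so the probe passes no matter which value the undefined out-of-bounds read happens to produce:

```glsl
// Hypothetical sketch only -- see the linked patch for the real change.
// Both branches write green, so the test no longer depends on the
// (undefined) result of the out-of-bounds read.
float array[] = float[](1.0, 2.0, 3.0, 4.0);

void main()
{
    int idx = 20;
    if (array[idx] == 5.0)
        gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
    else
        gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
```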
-- GitLab Migration Automatic Message -- This bug has been migrated to freedesktop.org's GitLab instance and has been closed from further activity. You can subscribe and participate further through the new bug through this link to our GitLab instance: https://gitlab.freedesktop.org/mesa/mesa/issues/321.