Different results on different computers, MATLAB R2021b 64-bit, Windows 64-bit, both Intel chips

Hello everyone,
I was testing some code on two different machines, both 64-bit Windows, both running 64-bit MATLAB R2021b.
I was surprised that a simple operation with the same variables and the same precision produces slightly different results.
It is not a huge operation, actually just a matrix-vector multiplication of a vector A of size 1x16 with a matrix B of size 16x3, both in single precision, resulting in a vector C of size 1x3.
I checked the bit representation of the entries of both the vector A and the matrix B, and they are exactly the same on both machines.
But when I perform the matrix-vector multiplication C = A*B; the first entry is different on the two machines.
The funny thing is that when I perform C(1) = A*B(:,1); I get the same value on both machines, and I also get the same value (but the other of the two results) when I perform C(1) = sum(A.*B(:,1)');
So summarized:
  • when I perform C = A*B, the first entries are different on the two machines ('10111101111101110110011011110110' and '10111101111101110110011011111000')
  • when I perform C(1) = A*B(:,1), the values are the same on the two machines ('10111101111101110110011011111000')
  • when I perform C(1) = sum(A.*B(:,1)'), the values are the same on the two machines ('10111101111101110110011011110110')
How does this happen, and which value should I trust?
Thanks!
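Incidentally, the two bit patterns listed above are very close. Here is a quick sketch (in Python, using struct as a stand-in for MATLAB's typecast; only the two bit strings are taken from the post, everything else is illustrative) that decodes both and shows they differ by exactly two units in the last place:

```python
import struct

def bits_to_f32(b):
    # decode a 32-character IEEE-754 bit string (as printed by
    # dec2bin(typecast(x, 'uint32')) in MATLAB) back to a single
    return struct.unpack('f', struct.pack('I', int(b, 2)))[0]

# the two results reported above
r1 = bits_to_f32('10111101111101110110011011110110')  # from C = A*B
r2 = bits_to_f32('10111101111101110110011011111000')  # from C(1) = A*B(:,1)

# both are about -0.1208; the exponent field is 123, so one unit in the
# last place (ulp) here is 2^(123-127) * 2^-23 = 2^-27, and the two
# mantissas differ by 2, i.e. the results are exactly 2 ulps apart
print(r1, r2, r1 - r2)  # difference is 2^-26 = 1.4901161193847656e-08
```

So the two machines agree to within the last two bits of the mantissa, which is exactly the kind of discrepancy a different summation order can produce.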

4 comments

Does one have AMD and the other one have Intel?
No, both have Intel:
  • Intel® Xeon® Gold 6134 CPU, 3.20 GHz, 3.20 GHz
  • Intel® Core™ i7-8650U CPU 1.90 GHz, 2.11 GHz
Edit 1: I also tested a third machine, which gave the same results as the second above:
  • Intel® Xeon® CPU E5-2623 v3 3.00 GHz, 3.00GHz
Edit 2: I also tested a fourth machine, which gave the same results as the first above:
  • Intel® Xeon® Gold 6134 CPU, 3.20 GHz, 3.20 GHz
Edit: Just for information, I tested several such matrix-vector multiplications with different vectors but the same matrix (40,000 overall), and nearly all of them gave different results.
Can you post some actual small examples that demonstrate this? Either post the hex versions of the numbers, or maybe a mat file. I.e., post something so that we can use the exact same numbers to start with.
Do the two machines use different BLAS libraries? Or maybe the floating point rounding mode is set differently on the two machines for some reason.


Accepted Answer

First, about "which value to trust?"
Both values are equally trustworthy; the difference in results comes down to the multiplications and additions of the matrix multiplication being applied in a different order. There is no right or wrong choice there, so it is made based on performance considerations for each hardware architecture.
So for an individual matrix, you could use the symbolic toolbox and compare which machine got closer to the exact result, but this will come down to random luck for any specific input matrix.
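As a sketch of that kind of comparison (in Python with the standard fractions module rather than the Symbolic Math Toolbox, and with made-up single-precision data standing in for A and B(:,1)), one can compute the exact dot product with no rounding at all and measure how far each accumulation order lands from it:

```python
import struct
from fractions import Fraction

def f32(x):
    # round a double to the nearest IEEE-754 single (like MATLAB's single())
    return struct.unpack('f', struct.pack('f', x))[0]

# hypothetical single-precision data standing in for A and B(:,1)
a = [f32(v) for v in (0.1, -0.2, 0.3, -0.4)]
b = [f32(v) for v in (0.7, 0.6, -0.5, 0.4)]

# exact dot product: Fraction(x) represents the float x exactly,
# so this sum involves no floating-point rounding whatsoever
exact = sum(Fraction(x) * Fraction(y) for x, y in zip(a, b))

# sequential single-precision accumulation, like sum(A.*B(:,1)')
seq = 0.0
for x, y in zip(a, b):
    seq = f32(seq + f32(x * y))

# pairwise accumulation, one of the orders a SIMD BLAS kernel might use
pair = f32(f32(f32(a[0] * b[0]) + f32(a[1] * b[1])) +
           f32(f32(a[2] * b[2]) + f32(a[3] * b[3])))

# both orders are valid roundings; either may land closer to `exact`
err_seq = abs(Fraction(seq) - exact)
err_pair = abs(Fraction(pair) - exact)
print(float(err_seq), float(err_pair))
```

For the 1x16 case in the question the same approach applies; which order wins varies with the data, which is the "random luck" mentioned above.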
The harder question is "how does this come?"
We make sure that MATLAB commands are reproducible, by which we mean: if you are on the same MATLAB version, the same machine, the same OS, with the same number of threads allowed for MATLAB to use, and no change in any deep-down BIOS settings, then the outputs of the same command with the exact same inputs are always the same.
To see what might be going on, can you run
version -blas
on your machine? This will tell us about which version of the library that we call for matrix multiplication has been chosen. I suspect they might be using different instruction set levels (e.g., AVX2 vs AVX512).

7 comments

@Christine Tobler: Yes, you're right:
For the computers, which have the same output, we have the same CNR branches.
For the computers, which have different outputs, we have different CNR branches:
'Intel(R) Math Kernel Library Version 2019.0.3 Product Build 20190125 for Intel(R) 64 architecture applications, CNR branch AVX2'
'Intel(R) Math Kernel Library Version 2019.0.3 Product Build 20190125 for Intel(R) 64 architecture applications, CNR branch AVX512'
So you would say the differences are coming from the different branches? Does the hardware determine which one is used? Can I switch them, or is that not recommended (i.e., for performance reasons)?
Yes, I would say the differences are coming down to which branch is used. The newer machine has AVX512 registers, which allow faster speed, and the MKL library detects this and uses code which uses these new registers.
I'm not aware of a way to switch the branch chosen here by hand, although I assume there might be some deep-down way to do this.
However, even if possible, I wouldn't recommend doing this, because this would affect all computations in MATLAB, meaning that you wouldn't get some of the performance benefits of your new machine. Also, this would be a configuration of MATLAB that hasn't been tested (running AVX2 instructions on a machine that has AVX512 hardware), so you'd be losing some safety there, too.
The change in round-off you're experiencing here is likely to also happen with a MATLAB update, or at your next machine update, so changing to using AVX2 would only be delaying these types of portability concerns.
Thanks for the clarification!
After a lot of digging I finally got to this webpage and document set:
and the PDF version:
It is worth a read.
In particular, you CAN force MATLAB to use AVX2 across all machines if they are all AVX2-capable, so for example the Xeon and other new i7 CPUs will be "restrained" from AVX512 to AVX2. The performance loss does not seem to be significant on the Xeon-processor machine I tried, and it allows me to get bitwise-perfect matching MATLAB (and more significantly, Simulink) simulations between multiple workstations running different i5/i7/Xeon processors that otherwise produce DIFFERENT results. I don't see any evidence (at least so far) that using AVX2 on the AVX512-capable machines produces incorrect results.
To get it to work, start MATLAB via a batch file that sets the environment variable MKL_ENABLE_INSTRUCTIONS, or, set that environment variable directly via Windows settings, BEFORE starting MATLAB.
In the batch file:
set MKL_ENABLE_INSTRUCTIONS=AVX2
"C:\Program Files\MATLAB\R2021b\bin\matlab.exe" -singleCompThread
(or similar; note that the closing quote belongs after the .exe path, so that -singleCompThread is passed as an option rather than as part of the quoted path).
Then try version('-blas') at the MATLAB command prompt and check that all Xeon-type processors now say "AVX2" not "AVX512".
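The same idea can be sketched in Python (a hypothetical launcher script; the MATLAB path and arguments are illustrative). The key point is that the variable must be in the parent environment before the MATLAB process starts, since MKL reads it once when the library is loaded:

```python
import os
import subprocess
import sys

# set the MKL dispatch limit in the parent environment BEFORE launching;
# child processes inherit it, so MATLAB's MKL will see it at load time
os.environ['MKL_ENABLE_INSTRUCTIONS'] = 'AVX2'

# stand-in child process that just echoes the variable back; launching
# MATLAB itself would instead be something like
# subprocess.Popen([r'C:\Program Files\MATLAB\R2021b\bin\matlab.exe',
#                   '-singleCompThread'])
out = subprocess.run(
    [sys.executable, '-c',
     "import os; print(os.environ['MKL_ENABLE_INSTRUCTIONS'])"],
    capture_output=True, text=True)
print(out.stdout.strip())  # AVX2
```

Setting the variable inside an already-running MATLAB session (e.g. with setenv) comes too late, because MKL has already chosen its code path.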
I am finding that I also need to constrain all MATLAB sessions to the same number of threads across workstations, as well as the same "AVX2" setting, to guarantee numerical repeatability. Practically this is easiest by just using a single thread for MATLAB/Simulink. This can be achieved either by using the -singleCompThread option when starting MATLAB, or by executing
maxNumCompThreads(1);
early in the MATLAB script that configures a simulation.
I did experiment with the other environment variable setting:
set MKL_CBWR=AVX2,STRICT
This did not seem to have any effect on MATLAB when I checked with version('-blas'); I don't know why it doesn't work as described in the documentation.
Also I did experiment with the other environment variable setting:
set MKL_DEBUG_CPU_TYPE=5
This DID work, and seemed to be equivalent to
set MKL_ENABLE_INSTRUCTIONS=AVX2
but it isn't as well documented as
set MKL_ENABLE_INSTRUCTIONS=AVX2
so I chose the latter solution.
Thanks, @Andrew Roscoe that looks like it could be useful to someone.
A further finding is that to get the same Simulink results I need to bdclose() the relevant model and re-open it just prior to the simulation, OR, start a fresh MATLAB session. Otherwise, "something" in the cached memory of Simulink can cause a set of simulations to produce different numerical results if you re-run the set of simulations in a different order to the first time.
Deleting the slxc files, and/or the contents of the slprj directory, seems to have no effect. It seems to be something in the memory of the Simulink session, related to the open model, that is relevant.
This is rather annoying because, while the time penalty for constraining to AVX2 and a single thread is not too bad (a 10-20% simulation slowdown), the penalty of losing the reduced JIT acceleration times for simulations run in a sequence is very large. Often the JIT acceleration time is nearly 50% of the total time required. For a sequence of simulations using the same (or nearly the same) model, the JIT acceleration time can drop to almost zero, or even zero, if Simulink realises that the model is similar to the one it just simulated.
BUT, to get numerical reproducibility I seem to have to bdclose() the model between every simulation, which means that the JIT acceleration takes the full time, every time, even if I leave the slprj directory intact.
To check:
If you set the random number seed to a constant before each run, does the same problem happen? (I assume here that even if you do not knowingly use random numbers, something in your model might just be using them.)
Something else that can cause subtle differences is if somehow the rounding mode got set. The rounding mode at the MATLAB level is not documented; it is set via system_dependent() or feature(); see https://undocumentedmatlab.com/articles/undocumented-feature-function


More Answers (1)

Roman Foell on 1 Dec 2021
Edited: Roman Foell on 2 Dec 2021
I attached the example variables A,B in my first post.
@James Tursa: How can I check the BLAS setting? How can I check the floating-point rounding setting?
Edit: Following https://de.mathworks.com/matlabcentral/answers/223952-configuration-of-lapack-and-blas-in-matlab the BLAS setting depends on the MATLAB version; I used MATLAB R2021b on both machines.

6 comments

Can anybody tell me whether they could reproduce my results, and perhaps what the reason is? Thanks.
I can reproduce the results (R2021a+b, i7-3635QM)
>> whos A, whos B
  Name      Size            Bytes  Class     Attributes
  A         1x16               64  single
  Name      Size            Bytes  Class     Attributes
  B         16x3              192  single
>> dec2bin(typecast( (A*B) * [1;0;0], 'uint32'))
ans =
'10111101111101110110011011110110'
>> dec2bin(typecast( A*B(:,1), 'uint32'))
ans =
'10111101111101110110011011111000'
>> dec2bin(typecast( sum(A.*B(:,1)'), 'uint32'))
ans =
'10111101111101110110011011110110'
>> N1 = (A*B) * [1;0;0] - A*B(:,1)
N1 =
single
1.4901e-08
@Andres: OK, thanks. But since A*B produces different results on different computers, did you check that as well?
Does the BLAS library perhaps ask for specific hardware properties, even if it is Intel in all cases? Perhaps the RAM might be slightly different as well, for performance reasons (even though 16 multiplications and additions are fairly small).
I can also confirm different results on different computers.
Matlab Online gave the same results as in my previous comment, but
R2020a and R2021b on Intel(R) Core(TM) i5-7300U:
C1 = dec2bin(typecast( (A*B) * [1;0;0], 'uint32'))
C2 = dec2bin(typecast( A*B(:,1), 'uint32'))
C3 = dec2bin(typecast( sum(A.*B(:,1)'), 'uint32'))
N1 = (A*B) * [1;0;0] - A*B(:,1)
C1 =
'10111101111101110110011011111000'
C2 =
'10111101111101110110011011111000'
C3 =
'10111101111101110110011011110110'
N1 =
single
0
@Andres: Thanks, so actually the same as for me. Could you figure out where this difference comes from?


Version: R2021b
Asked: 1 Dec 2021
Commented: 2 Nov 2023
