Why do NaN values greatly increase the cost of sparse multiplication?
120 views (last 30 days)
Jan Neggers
on 22 Oct 2025 at 6:16
Commented: Matt J
about 7 hours ago
Consider this code:
clear; close all;
n = 1e6;
m = 1e3;
s = 1e-3;
p = round(s^2 * n * m);
I = randperm(n, p);
J = randperm(m, p);
phi = sparse(I, J, 1, n, m);
Gx = randn(n, 1);
for k = 1:3
tic
L = Gx .* phi;
toc
end
Gx(1) = NaN;
for k = 1:3
tic
L = Gx .* phi;
toc
end
Now consider the output on my MacBook Pro M1 running MATLAB R2024a (so through Rosetta).
Elapsed time is 0.001591 seconds.
Elapsed time is 0.000291 seconds.
Elapsed time is 0.000282 seconds.
Elapsed time is 0.573762 seconds.
Elapsed time is 0.548177 seconds.
Elapsed time is 0.560880 seconds.
Notice the very large difference in computation times between the no-NaN version of Gx and the version that contains a single NaN value.
The solution is quite obvious: just remove the NaN values with your favorite method (isnan, isfinite, etc.). So that is not my question.
What intrigues me is:
why does the computation time change so drastically?
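For completeness, the cleanup alluded to above can be sketched as follows (a minimal sketch, assuming the Gx and phi from the code above):

```matlab
% Hypothetical cleanup sketch: zero out non-finite entries of Gx before the
% sparse multiply, so no NaN (or Inf) can densify the result.
Gx(~isfinite(Gx)) = 0;   % Gx(isnan(Gx)) = 0 would catch NaN only
L = Gx .* phi;           % the result keeps its sparsity
```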
2 comments
Dyuman Joshi
on 22 Oct 2025 at 7:05
Edited: Dyuman Joshi
on 22 Oct 2025 at 7:20
n = 1e6;
m = 1e3;
s = 1e-3;
p = round(s^2 * n * m);
I = randperm(n, p);
J = randperm(m, p);
phi = sparse(I, J, 1, n, m);
Gx = randn(n, 1);
timeit(@() Gx.*phi)
Gx(1) = NaN;
timeit(@() Gx.*phi)
% The glitch is still present in this version:
version
The difference is of the same order of magnitude, about 1e3x.
Accepted Answer
Matt J
on 22 Oct 2025 at 11:51
Edited: Matt J
on 22 Oct 2025 at 12:10
I think you should report it as a bug. If you need a workaround, though, the following equivalent operation, which avoids implicit expansion, seems to work fine:
n = 1e6;
m = 1e3;
s = 1e-3;
p = round(s^2 * n * m);
I = randperm(n, p);
J = randperm(m, p);
phi = sparse(I, J, 1, n, m);
Gx = spdiags(randn(n, 1),0,n,n);
timeit(@() Gx*phi)
Gx(1) = NaN;
timeit(@() Gx*phi)
2 comments
Matt J
about 7 hours ago
Unfortunately, the workaround I've suggested is affected by a bug as well (which I am investigating here).
n = 1e6;
m = 1e3;
s = 1e-3;
p = round(s^2 * n * m);
I = randperm(n, p);
J = randperm(m, p);
phi = sparse(I, J, 1, n, m);
v=randi(100,n, 1);
v(1)=nan;
vDiag = spdiags(v,0,n,n); % i.e. vDiag = sparse(diag(v))
anynan(vDiag*phi) %incorrect
anynan(v.*phi) %correct
More Answers (1)
Steven Lord
on 22 Oct 2025 at 13:05
Let's look at a sample sparse matrix.
S = sprand(1e4, 1e4, 0.1);
status = @(M) fprintf("The matrix has %d non-zero elements, a density of %g.\n", nnz(M)./[1, numel(M)]);
status(S)
Multiplying S by four different values
Finite non-zero value
If we multiply S by a finite value, the result has the same number of non-zero elements as S.
tic
A1 = S.*2;
toc
status(A1)
2*0 is a 0, and that doesn't need to get explicitly stored in A1.
anynan(A1) % false
Non-finite values
What happens if we multiply S by a non-finite value? Let's first look at multiplying by NaN.
tic
A2 = S.*NaN;
toc
status(A2)
NaN*0 is NaN, and that does get explicitly stored in A2. In fact, all the elements in A2 are NaN.
anynan(A2)
nnz(isnan(A2))
What if we multiply by Inf instead?
tic
A3 = S.*Inf;
toc
status(A3)
Inf*0 is also NaN, and that gets explicitly stored in A3. Not all the elements in A3 are NaN, just the ones that were 0 in S since Inf times a positive finite value is Inf.
anynan(A3)
NaNInA3 = nnz(isnan(A3))
InfInA3 = nnz(isinf(A3))
NaNInA3 + InfInA3 == numel(A3)
We can see that all the locations where A3 is Inf rather than NaN are the locations corresponding to non-zero values in S.
WereAllInfInA3WhereSWasNonzero = all(isinf(A3) == (S~=0), "all")
Zero
And if we multiply S by 0:
tic
A4 = S.*0;
toc
status(A4)
The result has no non-zero elements.
0 times anything finite (0 or non-zero) is 0, and those don't need to get explicitly stored in A4.
anynan(A4) % false
Computing A4 is quicker than even computing A1.
Memory and performance comparison
You can see this by looking at how much memory each matrix consumes. The A2 and A3 matrices are not sparsely populated, but they are still held in sparse storage. One term for this is "big sparse". They actually consume more memory than they would if you'd stored them as full!
F2 = full(A2);
F3 = full(A3);
whos S A1 A2 A3 A4 F2 F3
Roughly, A2 and A3 take 10 times as much memory as either A1 or S. They also take roughly 10 times as long to compute as the computation of A1. And A4 has no non-zero elements so all the memory it consumes is from the stored index information.
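The memory ratio can be estimated with a back-of-the-envelope calculation (my assumption here: 64-bit MATLAB with compressed-sparse-column storage, where each stored nonzero costs roughly 16 bytes — 8 for the double value and 8 for its row index — plus 8*(cols+1) bytes of column pointers, versus 8 bytes per element for a full double):

```matlab
% Rough sparse-vs-full footprint estimate (assumes 64-bit CSC storage).
sparseBytes = @(M) 16*nnz(M) + 8*(size(M,2)+1);  % values + row indices + column pointers
fullBytes   = @(M) 8*numel(M);
% For the fully populated 1e4-by-1e4 "big sparse" A2 above, sparseBytes
% predicts ~1.6 GB versus ~0.8 GB full, and S (density 0.1) ~0.16 GB,
% consistent with the roughly 10x gap reported by whos.
```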
4 comments
Matt J
about 10 hours ago
It still feels to me that there is some bug / non-consistent behaviour somewhere
Of course there is. Did you reply to them with the evidence developed here that the slow performance is unrelated to memory?