Solving a system of linear equations where A is a 40x40 sparse matrix
Hi there,
I am solving a system of linear equations A*x = b, where A is a 40x40 sparse matrix. Should I use inv or \ to reach a more precise answer x?
Answers (2)
John D'Errico
on 11 Jan 2026
Edited: John D'Errico
on 11 Jan 2026
I'd suggest that sparse is almost meaningless when the matrix is only 40x40. At that size, the dense solvers are still extremely fast. Too many people seem to think that this is a large matrix, but that is not at all true. Worse, if your matrix is even slightly denser than it needs to be for sparseness to pay off, then you get large amounts of fill-in during factorization, any gain you would have gotten from the sparseness is wasted, and the sparse solve will actually be slower. Applied in the wrong case, sparse storage can even use MORE memory than dense storage.
For example, here is about the most extreme case I can think of where sparseness can gain on such a small matrix, a simple tridiagonal matrix.
A = triu(tril(randn(40),1),-1);
As = sparse(A);
spy(As)
b = rand(40,1);
timeit(@() A\b)
timeit(@() As\b)
timeit(@() inv(A)*b)
timeit(@() inv(As)*b)
As you can see, the sparse solve is barely faster than the dense solve, and that was for an extremely well-patterned tridiagonal matrix. And the inverse is always slower in either case. So inv is bad in terms of speed, and the sparseness is literally wasted unless you desperately needed that millionth of a second for your code to be successful.
In general, it is pretty safe to say ALWAYS use \, and never use inv. And reserve sparseness for matrices where you really do see a gain.
As far as precision goes, even then stick with backslash. You won't gain using inv. Again, a simple example will suffice:
x0 = ones(40,1);
b = As*x0;
xbs = As\b;
norm(x0 - xbs)
xinv = inv(As)*b;
norm(x0 - xinv)
So backslash was slightly better able to reconstruct the original vector x0. The difference is small, but if your matrix is close to singularity, it could be important.
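To see why conditioning matters, here is a quick sketch on a deliberately nasty matrix (the Hilbert matrix is just a stock example of a nearly singular matrix, not anything from the original question):

```matlab
% hilb(12) is notoriously ill-conditioned (cond on the order of 1e16),
% which exaggerates the difference between a direct solve and an
% explicit inverse.
n = 12;
A = hilb(n);
x0 = ones(n,1);
b = A*x0;
norm(x0 - A\b)        % error from backslash
norm(x0 - inv(A)*b)   % error from multiplying by the explicit inverse
```

Neither answer is accurate at this level of conditioning, but backslash is typically at least as good as the explicit inverse, and it never costs more to compute.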
There are few occasions where I will use inv. If, for example, I needed to know specific elements of the inverse matrix? Well, then inv can be useful, but even then, there are tricks if you understand the linear algebra.
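For example, column j of the inverse is exactly the solve A\e_j, so a single element of inv(A) never requires forming the whole inverse. A minimal sketch (the matrix here is made up purely for illustration):

```matlab
n = 40;
A = randn(n) + n*eye(n);     % an arbitrary, safely nonsingular test matrix
j = 7;
ej = zeros(n,1); ej(j) = 1;  % j-th unit vector
colj = A\ej;                 % this IS column j of inv(A), without forming inv(A)
% element (i,j) of the inverse is then simply colj(i)
```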
2 comments
Tony Cheng
on 12 Jan 2026
Lol. This response will probably get lengthy.
Large is in the eyes of the beholder I guess. And computers are constantly becoming more powerful. As you saw, there is a small gain in the 40x40 tridiagonal case, but the difference is pretty small for sparse versus full there. And, well, pattern matters! Density very much matters! For example...
As = sprand(500,500,0.02);
A = full(As);
A is 2% full, so roughly 10 non-zeros per row. Does that seem pretty sparse to you? (It doesn't to me.)
[Ls,Us,P] = lu(As);
whos Ls Us As A
So the factors of the sparse As are nearly as large as A itself. Almost all of those triangles are filled in.
spy(Ls)
The factor Ls of As is nearly 70% full in that lower triangle! So even though As was only 2% full to start, thus on average, only 10 non-zeros per row, the factors are pretty dense. We will see this reflected in poor performance for a sparse solve, where again, we don't gain much.
b = rand(500,1);
timeit(@() A\b)
timeit(@() As\b)
In fact here, the sparse solve was slower than the dense solve!
Most of the time, when I use a sparse solve, my matrices have 3 to 5, maybe 7 non-zeros per row, because matrices derived from things like finite difference approximations typically have that many non-zeros per row. Similarly, the matrix you would use to construct the coefficients for a cubic spline is typically tridiagonal. A penta-diagonal matrix is 1% full when it is 500x500. That is roughly where you start to see some gain from sparsity at that density, though I would personally not consider a matrix of that size "large". More medium sized. :) Anything that takes on the order of 0.003 seconds to solve is not large.
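For a concrete sketch of the kind of matrix I mean, spdiags builds the classic 1-D finite difference Laplacian, which has at most 3 non-zeros per row no matter how large n gets:

```matlab
n = 500;
e = ones(n,1);
% tridiagonal second-difference matrix: bands at offsets -1, 0, +1
A = spdiags([e, -2*e, e], -1:1, n, n);
nnz(A)/numel(A)   % density is (3*n - 2)/n^2, about 0.6% at n = 500
```

Because the non-zeros per row stay fixed while the matrix grows, the density keeps falling as n increases, which is exactly when sparse storage and solves start to win big.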
Again, pattern matters a LOT. Consider this next comparison for a 1% full random matrix, versus a 1% full penta-diagonal matrix.
% random pattern
As = sprand(500,500,0.01) + speye(500); % speye, not eye: adding a full matrix would make As full
timeit(@() As\b)
A = full(As);
timeit(@() A\b)
% penta-diagonal
A = tril(triu(rand(500),-2),2);
As = sparse(A);
spy(As)
timeit(@() A\b)
timeit(@() As\b)
So the penta-diagonal case shows a significant gain in solution speed, whereas the fully random 1% sparse matrix was roughly equal in time for sparse versus full.
What size do I think of as a large sparse matrix? Even at that, it depends on what you will do with it. Eigenvalues or an SVD are more nasty yet than a simple backslash. They take way more time. But in terms of backslash, in my eyes, "large" starts to happen around 10Kx10K. There are people solving sparse systems on the order of 1e6x1e6, and beyond.
Even the computer you have matters. If you are strapped for RAM on a highly limited computer, then anything you can gain from sparsity will be important. I can tell a story from my old APL days, when I had set a solve running for a (dense) 1000x1000 matrix, so 1000 unknowns. (Or maybe it was Fortran.) An hour later, I got a call from the console person on the IBM mainframe, telling me my job had been running for a solid hour now. Did I want them to cancel the process? NO! NO! NO! PLEASE NO! THIS IS NOT AN INFINITE LOOP!
You should use \.
It's not just about precision, though. It's also about speed:
A = sprand(400,400,0.2) + speye(400);
b = rand(400,1);
timeit(@() inv(A)*b)
timeit(@() A\b)