Does the MATLAB Python interface support parallel processing?
I am trying to access a simple Python function via MATLAB in R2022a:
function out = lognormcdf(x)
    % Evaluate the standard normal log-CDF via SciPy and convert back to a MATLAB double
    out = double(py.scipy.stats.norm().logcdf(x));
end
When I call this function repeatedly in a for loop, the output is as expected, but in a parfor loop I get the error attached at the end of this post. I guess that every worker is accessing the same Python session and they are interfering with one another. Is there any solution to this?
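For reference, the calling pattern looks roughly like this (the loop count n_mc and the size of x are illustrative placeholders, not the actual values):
n_mc = 100;              % illustrative Monte Carlo count
x = randn(1, 1000);      % illustrative input vector
% Serial loop: works as expected
out_serial = zeros(n_mc, numel(x));
for i = 1:n_mc
    out_serial(i, :) = lognormcdf(x);
end
% Parallel loop: this is where the workers crash
out_par = zeros(n_mc, numel(x));
parfor i = 1:n_mc
    out_par(i, :) = lognormcdf(x);
end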
This is my error trace:
Starting parallel pool (parpool) using the 'local' profile ...
Connected to the parallel pool (number of workers: 10).
Warning: A worker aborted during execution of the parfor loop. The parfor loop will now run again on the remaining workers.
> In distcomp/remoteparfor/handleIntervalErrorResult (line 245)
In distcomp/remoteparfor/getCompleteIntervals (line 395)
In parallel_function>distributed_execution (line 746)
In parallel_function (line 578)
In plot_mle_vs_d (line 26)
Error using distcomp.remoteparfor/rebuildParforController
All workers aborted during execution of the parfor loop.
Error in distcomp.remoteparfor/handleIntervalErrorResult (line 258)
obj.rebuildParforController();
Error in distcomp.remoteparfor/getCompleteIntervals (line 395)
[r, err] = obj.handleIntervalErrorResult(r);
Error in plot_mle_vs_d (line 26)
parfor i = 1:n_mc
Preserving jobs with IDs: 13 because they contain crash dump files.
You can use 'delete(myCluster.Jobs)' to remove all jobs created with profile local. To create 'myCluster' use 'myCluster = parcluster('local')'.
The client lost connection to worker 7. This might be due to network problems, or the interactive communicating job might have errored.
Warning: 10 worker(s) crashed while executing code in the current parallel pool. MATLAB may attempt to run the code again on the remaining workers of the pool, unless an spmd block has run. View the crash dump files to determine what caused the workers to crash.
Answers (1)
Raymond Norris
17 May 2023
I can't see a single Python session being the issue. I'm betting you ran out of memory. Try the following:
- I'm assuming you have (at least) 10 cores. Start a smaller number of workers.
- How much memory do you have? Aim for about 4 GB per worker. How big is x? The reason this is less of a concern serially is that you only have 1 MATLAB process running (versus 10).
- Create the Python session within the parfor (to remove this concern); see the sketch after this list.
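A minimal sketch of those suggestions, assuming a 4-worker pool and illustrative sizes for n_mc and x:
% Start a smaller pool explicitly instead of letting MATLAB open 10 workers
pool = parpool('local', 4);
n_mc = 100;              % illustrative values
x = randn(1, 1000);
out = zeros(n_mc, numel(x));
parfor i = 1:n_mc
    % Build the SciPy distribution object inside the loop body, so each
    % worker's own Python interpreter does the work rather than the client's
    dist = py.scipy.stats.norm();
    out(i, :) = double(dist.logcdf(x));
end
delete(pool);
Note that each parfor worker is a separate MATLAB process that loads its own Python interpreter the first time it hits a py.* call, so with 10 workers you have 10 MATLAB-plus-Python processes competing for RAM; fewer workers reduces that footprint.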