MATLAB Error: parallel:internal:DeserializationException

While processing 50K images using MATLAB + Hadoop integration, we encountered the error below. How can we solve this problem?
Matlab Exception : parallel:internal:DeserializationException
Error =
MException with properties:
identifier: 'parallel:internal:DeserializationException'
message: 'Deserialization threw an exception.'
cause: {0×1 cell}
stack: [3×1 struct]
Hadoop data node log file:
2017-05-01 11:02:07,299 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-05-01 11:02:07,416 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-05-01 11:02:07,416 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system started
2017-05-01 11:02:07,425 INFO [main] org.apache.hadoop.mapred.YarnChild: Executing with tokens:
2017-05-01 11:02:07,425 INFO [main] org.apache.hadoop.mapred.YarnChild: Kind: mapreduce.job, Service: job_1493614370302_0004, Ident: (org.apache.hadoop.mapreduce.security.token.JobTokenIdentifier@6fc2862b)
2017-05-01 11:02:07,587 INFO [main] org.apache.hadoop.mapred.YarnChild: Sleeping for 0ms before retrying again. Got null now.
2017-05-01 11:02:09,429 INFO [main] org.apache.hadoop.mapred.YarnChild: mapreduce.cluster.local.dir for child: /tmp/hadoop-nitw_viper_user/nm-local-dir/usercache/nitw_viper_user/appcache/application_14936$
2017-05-01 11:02:10,133 INFO [main] org.apache.hadoop.conf.Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
2017-05-01 11:02:10,610 INFO [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: File Output Committer Algorithm version is 1
2017-05-01 11:02:10,762 INFO [main] org.apache.hadoop.mapred.Task: Using ResourceCalculatorProcessTree : [ ]
2017-05-01 11:02:10,990 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: hdfs://master:9000/images/39/394286.jpg:0+280234
2017-05-01 11:02:11,014 INFO [main] org.apache.hadoop.mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
2017-05-01 11:02:11,014 INFO [main] org.apache.hadoop.mapred.MapTask: mapreduce.task.io.sort.mb: 100
2017-05-01 11:02:11,014 INFO [main] org.apache.hadoop.mapred.MapTask: soft limit at 83886080
2017-05-01 11:02:11,014 INFO [main] org.apache.hadoop.mapred.MapTask: bufstart = 0; bufvoid = 104857600
2017-05-01 11:02:11,014 INFO [main] org.apache.hadoop.mapred.MapTask: kvstart = 26214396; length = 6553600
2017-05-01 11:02:11,017 INFO [main] org.apache.hadoop.mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
2017-05-01 11:03:36,877 INFO [main] org.apache.hadoop.mapred.MapTask: Starting flush of map output
2017-05-01 11:03:46,807 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : com.mathworks.toolbox.parallel.hadoop.worker.RemoteFuture$CommunicationLostException
at com.mathworks.toolbox.parallel.hadoop.worker.RemoteFuture.get(Unknown Source)
at com.mathworks.toolbox.parallel.hadoop.worker.RemoteFuture.get(Unknown Source)
at com.mathworks.toolbox.parallel.hadoop.link.MatlabWorkerFevalFuture.get(Unknown Source)
at com.mathworks.toolbox.parallel.hadoop.link.MatlabMapper.map(Unknown Source)
at com.mathworks.toolbox.parallel.hadoop.link.MatlabMapper.map(Unknown Source)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at com.mathworks.toolbox.parallel.hadoop.MatlabReflectionMapper.run(Unknown Source)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
2017-05-01 11:03:49,307 INFO [main] org.apache.hadoop.mapred.Task: Runnning cleanup for the task
2017-05-01 11:03:55,474 WARN [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete hdfs://master:9000/corel_image_10L_seq_8/_temporary/1/_temporary/attempt_1493614370302_0004$
2017-05-01 11:03:56,752 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping MapTask metrics system...
2017-05-01 11:03:57,366 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system stopped.
2017-05-01 11:03:57,388 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MapTask metrics system shutdown complete.
Thanks
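For context, the job described above roughly corresponds to a MATLAB-on-Hadoop mapreduce setup like the following sketch. The install folder, HDFS path, and the `imageMapper`/`imageReducer` function names are assumptions for illustration, not the original code:

```matlab
% Sketch of a MATLAB mapreduce job running against a Hadoop cluster.
% Paths and function names are assumed, not taken from the failing job.
cluster = parallel.cluster.Hadoop('HadoopInstallFolder', '/usr/local/hadoop');
mr = mapreducer(cluster);

% Point a datastore at the images on HDFS (path taken from the log above).
ds = imageDatastore('hdfs://master:9000/images', ...
    'IncludeSubfolders', true, 'FileExtensions', '.jpg');

% imageMapper and imageReducer are hypothetical stand-ins for the job's
% real map and reduce functions.
result = mapreduce(ds, @imageMapper, @imageReducer, mr);
```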

Accepted Answer

Walter Roberson on 1 May 2017
Possibly you ran out of memory; that would prevent deserialization.
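If container memory exhaustion is indeed the cause, one place to look (a suggestion to investigate, not a confirmed fix for this cluster) is the per-mapper memory limits in Hadoop's `mapred-site.xml`; the values below are illustrative only:

```xml
<!-- mapred-site.xml: illustrative values, tune for your cluster -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3276m</value>
</property>
```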
2 comments
Pulkesh Haran on 1 May 2017 (edited 1 May 2017)
We have a cluster of 110 nodes; each system has 8 GB RAM, 16 GB swap, and an i7 processor. We are getting this exception while creating the sequence file. What should we do now?
The images are very small and we are using the map-reduce model. If possible, please propose a solution other than creating a sequence file; we want to process more than 10 lakh (1,000,000) images. We were able to process 40K images successfully, but now, perhaps because of the large number of mapper tasks, we fail to process further. Please provide us a solution.
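One possible workaround (an untested sketch, not a confirmed fix) is to partition the datastore and run the job in smaller batches, so fewer mapper tasks are in flight at once. The batch count, paths, and `imageMapper`/`imageReducer` names are assumptions:

```matlab
% Untested sketch: process the image set in smaller batches to limit
% the number of concurrent mapper tasks.
ds = imageDatastore('hdfs://master:9000/images', 'IncludeSubfolders', true);
numBatches = 10;                           % assumed batch count
for k = 1:numBatches
    subds = partition(ds, numBatches, k);  % k-th slice of the datastore
    % @imageMapper / @imageReducer are placeholders for the real functions.
    mapreduce(subds, @imageMapper, @imageReducer, ...
        'OutputFolder', sprintf('batch_%d', k));
end
```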
Walter Roberson on 1 May 2017
I would recommend talking to MathWorks about this. I do not have any experience with mapreduce.


More Answers (0)
