Test Result: DAGSchedulerSuite

0 failures
84 tests
Took 5.7 sec.

All Tests

Test name | Duration | Status
All shuffle files on the slave should be cleaned up when slave lost | 0.25 sec | Passed
Barrier task failures from a previous stage attempt don't trigger stage retry | 24 ms | Passed
Barrier task failures from the same stage attempt don't trigger multiple stage retries | 16 ms | Passed
Completions in zombie tasksets update status of non-zombie taskset | 25 ms | Passed
Fail the job if a barrier ResultTask failed | 18 ms | Passed
Failures in different stages should not trigger an overall abort | 76 ms | Passed
Multiple consecutive stage fetch failures should lead to job being aborted | 50 ms | Passed
Non-consecutive stage failures don't trigger abort | 72 ms | Passed
Retry all the tasks on a resubmitted attempt of a barrier stage caused by FetchFailure | 36 ms | Passed
Retry all the tasks on a resubmitted attempt of a barrier stage caused by TaskKilled | 34 ms | Passed
SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages still behave correctly on fetch failures | 1.4 sec | Passed
SPARK-23207: cannot rollback a result stage | 14 ms | Passed
SPARK-23207: local checkpoint fail to rollback (checkpointed before) | 32 ms | Passed
SPARK-23207: local checkpoint fail to rollback (checkpointing now) | 19 ms | Passed
SPARK-23207: reliable checkpoint can avoid rollback (checkpointed before) | 91 ms | Passed
SPARK-23207: reliable checkpoint fail to rollback (checkpointing now) | 26 ms | Passed
SPARK-25341: abort stage while using old fetch protocol | 0.13 sec | Passed
SPARK-25341: continuous indeterminate stage roll back | 49 ms | Passed
SPARK-25341: retry all the succeeding stages when the map stage is indeterminate | 47 ms | Passed
SPARK-28967 properties must be cloned before posting to listener bus for 0 partition | 13 ms | Passed
SPARK-29042: Sampled RDD with unordered input should be indeterminate | 2 ms | Passed
SPARK-30388: shuffle fetch failed on speculative task, but original task succeed | 0.42 sec | Passed
SPARK-32003: All shuffle files for executor should be cleaned up on fetch failure | 0.17 sec | Passed
Single stage fetch failure should not abort the stage | 60 ms | Passed
Spark exceptions should include call site in stack trace | 31 ms | Passed
Trigger mapstage's job listener in submitMissingTasks | 24 ms | Passed
[SPARK-13902] Ensure no duplicate stages are created | 45 ms | Passed
[SPARK-19263] DAGScheduler should not submit multiple active tasksets, even with late completions from earlier stage attempts | 29 ms | Passed
[SPARK-3353] parent stage should have lower stage id | 0.16 sec | Passed
accumulator not calculated for resubmitted result stage | 6 ms | Passed
accumulator not calculated for resubmitted task in result stage | 6 ms | Passed
accumulators are updated on exception failures and task killed | 6 ms | Passed
avoid exponential blowup when getting preferred locs list | 98 ms | Passed
cache location preferences w/ dependency | 13 ms | Passed
cached post-shuffle | 30 ms | Passed
catch errors in event loop | 12 ms | Passed
countApprox on empty RDDs schedules jobs which never complete | 11 ms | Passed
don't submit stage until its dependencies map outputs are registered (SPARK-5259) | 46 ms | Passed
equals and hashCode AccumulableInfo | 1 ms | Passed
extremely late fetch failures don't cause multiple concurrent attempts for the same stage | 26 ms | Passed
failure of stage used by two jobs | 11 ms | Passed
getMissingParentStages should consider all ancestor RDDs' cache statuses | 6 ms | Passed
getPartitions exceptions should not crash DAGScheduler and SparkContext (SPARK-8606) | 34 ms | Passed
getPreferredLocations errors should not crash DAGScheduler and SparkContext (SPARK-8606) | 22 ms | Passed
getShuffleDependencies correctly returns only direct shuffle parents | 2 ms | Passed
ignore late map task completions | 12 ms | Passed
interruptOnCancel should not crash DAGScheduler | 51 ms | Passed
job cancellation no-kill backend | 12 ms | Passed
late fetch failures don't cause multiple concurrent attempts for the same map stage | 18 ms | Passed
map stage submission with executor failure late map task completions | 19 ms | Passed
map stage submission with fetch failure | 28 ms | Passed
map stage submission with multiple shared stages and failures | 47 ms | Passed
map stage submission with reduce stage also depending on the data | 15 ms | Passed
misbehaved accumulator should not crash DAGScheduler and SparkContext | 56 ms | Passed
misbehaved accumulator should not impact other accumulators | 33 ms | Passed
misbehaved resultHandler should not crash DAGScheduler and SparkContext | 82 ms | Passed
recursive shuffle failures | 33 ms | Passed
reduce task locality preferences should only include machines with largest map outputs | 17 ms | Passed
reduce tasks should be placed locally with map output | 16 ms | Passed
register map outputs correctly after ExecutorLost and task Resubmitted | 21 ms | Passed
regression test for getCacheLocs | 3 ms | Passed
run shuffle with map stage failure | 15 ms | Passed
run trivial job | 5 ms | Passed
run trivial job w/ dependency | 7 ms | Passed
run trivial shuffle | 15 ms | Passed
run trivial shuffle with fetch failure | 20 ms | Passed
run trivial shuffle with out-of-band executor failure and retry | 15 ms | Passed
shuffle fetch failure in a reused shuffle dependency | 31 ms | Passed
shuffle files lost when executor failure without shuffle service | 0.28 sec | Passed
shuffle files lost when worker lost with shuffle service | 0.17 sec | Passed
shuffle files lost when worker lost without shuffle service | 0.16 sec | Passed
shuffle files not lost when executor failure with shuffle service | 0.15 sec | Passed
shuffle files not lost when slave lost with shuffle service | 0.16 sec | Passed
simple map stage submission | 18 ms | Passed
stage used by two jobs, some fetch failures, and the first job no longer active (SPARK-6880) | 23 ms | Passed
stage used by two jobs, the first no longer active (SPARK-6880) | 15 ms | Passed
stages with both narrow and shuffle dependencies use narrow ones for locality | 16 ms | Passed
task end event should have updated accumulators (SPARK-20342) | 0.33 sec | Passed
task events always posted in speculation / when stage is killed | 38 ms | Passed
trivial job cancellation | 4 ms | Passed
trivial job failure | 17 ms | Passed
trivial shuffle with multiple fetch failures | 13 ms | Passed
unserializable task | 16 ms | Passed
zero split job | 5 ms | Passed
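
For orientation, the sketch below is a rough, hypothetical illustration (not code taken from DAGSchedulerSuite) of the kind of workload a case such as "run trivial job" refers to: a single result stage run on a local SparkContext with no shuffle dependencies. The object name TrivialJobSketch is made up for this example; only the standard SparkConf/SparkContext API is used.

```scala
// Hypothetical sketch only: the shape of a "trivial job" as the test names
// above use the term. One map over a small parallelized collection, no
// shuffle dependency, so the scheduler plans a single result stage.
import org.apache.spark.{SparkConf, SparkContext}

object TrivialJobSketch {  // hypothetical name, for illustration only
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("trivial-job-sketch")
    val sc = new SparkContext(conf)
    try {
      // Submit the job and check the collected result.
      val result = sc.parallelize(1 to 4, numSlices = 4).map(_ * 2).collect()
      assert(result.sorted.sameElements(Array(2, 4, 6, 8)))
    } finally {
      sc.stop()
    }
  }
}
```

The suite itself exercises the DAGScheduler at a lower level than a full local job run, so treat the sketch purely as a mental model for terms such as "trivial job", "shuffle", and "stage" that appear in the test names above.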