Test Result: DAGSchedulerSuite

0 failures (±0)
104 tests (±0)
Took 15 sec.
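As a usage note (assuming a local Spark source checkout with the standard developer setup, not something recorded in this report), the suite can be rerun with sbt via: build/sbt "core/testOnly *DAGSchedulerSuite"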

All Tests

Test name | Duration | Status
All shuffle files on the storage endpoint should be cleaned up when it is lost | 0.11 sec | Passed
Barrier task failures from a previous stage attempt don't trigger stage retry | 81 ms | Passed
Barrier task failures from the same stage attempt don't trigger multiple stage retries | 0.1 sec | Passed
Completions in zombie tasksets update status of non-zombie taskset | 0.13 sec | Passed
Fail the job if a barrier ResultTask failed | 87 ms | Passed
Failures in different stages should not trigger an overall abort | 0.19 sec | Passed
Multiple consecutive stage fetch failures should lead to job being aborted. | 0.16 sec | Passed
Non-consecutive stage failures don't trigger abort | 0.26 sec | Passed
Retry all the tasks on a resubmitted attempt of a barrier stage caused by FetchFailure | 0.13 sec | Passed
Retry all the tasks on a resubmitted attempt of a barrier stage caused by TaskKilled | 0.11 sec | Passed
SPARK-17644: After one stage is aborted for too many failed attempts, subsequent stages still behave correctly on fetch failures | 1.6 sec | Passed
SPARK-23207: cannot rollback a result stage | 0.11 sec | Passed
SPARK-23207: local checkpoint fail to rollback (checkpointed before) | 0.16 sec | Passed
SPARK-23207: local checkpoint fail to rollback (checkpointing now) | 0.13 sec | Passed
SPARK-23207: reliable checkpoint can avoid rollback (checkpointed before) | 0.35 sec | Passed
SPARK-23207: reliable checkpoint fail to rollback (checkpointing now) | 0.14 sec | Passed
SPARK-25341: abort stage while using old fetch protocol | 93 ms | Passed
SPARK-25341: continuous indeterminate stage roll back | 0.15 sec | Passed
SPARK-25341: retry all the succeeding stages when the map stage is indeterminate | 0.14 sec | Passed
SPARK-27164: RDD.countApprox on empty RDDs schedules jobs which never complete | 66 ms | Passed
SPARK-28967 properties must be cloned before posting to listener bus for 0 partition | 59 ms | Passed
SPARK-29042: Sampled RDD with unordered input should be indeterminate | 0.21 sec | Passed
SPARK-30388: shuffle fetch failed on speculative task, but original task succeed | 0.49 sec | Passed
SPARK-32003: All shuffle files for executor should be cleaned up on fetch failure | 0.12 sec | Passed
SPARK-32920: Disable push based shuffle in the case of a barrier stage | 0.2 sec | Passed
SPARK-32920: Disable shuffle merge due to not enough mergers available | 0.12 sec | Passed
SPARK-32920: Empty RDD should not be computed | 0.18 sec | Passed
SPARK-32920: Ensure child stage should not start before all the parent stages are completed with shuffle merge finalized for all the parent stages | 0.1 sec | Passed
SPARK-32920: Merge results should be unregistered if the running stage is cancelled before shuffle merge is finalized | 90 ms | Passed
SPARK-32920: Reused ShuffleDependency with Shuffle Merge disabled for the corresponding ShuffleDependency should not cause DAGScheduler to hang | 0.18 sec | Passed
SPARK-32920: Reused ShuffleDependency with Shuffle Merge disabled for the corresponding ShuffleDependency with shuffle data loss should recompute missing partitions | 0.21 sec | Passed
SPARK-32920: SPARK-35549: Merge results should not get registered after shuffle merge finalization | 0.19 sec | Passed
SPARK-32920: merger locations not empty | 0.15 sec | Passed
SPARK-32920: merger locations reuse from shuffle dependency | 0.15 sec | Passed
SPARK-32920: metadata fetch failure should not unregister map status | 0.22 sec | Passed
SPARK-32920: shuffle merge finalization | 0.75 sec | Passed
Single stage fetch failure should not abort the stage. | 0.15 sec | Passed
Spark exceptions should include call site in stack trace | 0.15 sec | Passed
Trigger mapstage's job listener in submitMissingTasks | 0.19 sec | Passed
[SPARK-13902] Ensure no duplicate stages are created | 0.13 sec | Passed
[SPARK-19263] DAGScheduler should not submit multiple active tasksets, even with late completions from earlier stage attempts | 0.13 sec | Passed
[SPARK-3353] parent stage should have lower stage id | 0.37 sec | Passed
accumulator not calculated for resubmitted result stage | 58 ms | Passed
accumulator not calculated for resubmitted task in result stage | 64 ms | Passed
accumulators are updated on exception failures and task killed | 0.12 sec | Passed
avoid exponential blowup when getting preferred locs list | 0.13 sec | Passed
cache location preferences w/ dependency | 87 ms | Passed
cached post-shuffle | 0.13 sec | Passed
catch errors in event loop | 90 ms | Passed
don't submit stage until its dependencies map outputs are registered (SPARK-5259) | 0.16 sec | Passed
equals and hashCode AccumulableInfo | 1 ms | Passed
extremely late fetch failures don't cause multiple concurrent attempts for the same stage | 0.11 sec | Passed
failure of stage used by two jobs | 0.1 sec | Passed
getMissingParentStages should consider all ancestor RDDs' cache statuses | 68 ms | Passed
getPartitions exceptions should not crash DAGScheduler and SparkContext (SPARK-8606) | 99 ms | Passed
getPreferredLocations errors should not crash DAGScheduler and SparkContext (SPARK-8606) | 94 ms | Passed
getShuffleDependenciesAndResourceProfiles correctly returns only direct shuffle parents | 73 ms | Passed
getShuffleDependenciesAndResourceProfiles returns deps and profiles correctly | 0.2 sec | Passed
ignore late map task completions | 0.1 sec | Passed
invalid spark.job.interruptOnCancel should not crash DAGScheduler | 0.1 sec | Passed
job cancellation no-kill backend | 68 ms | Passed
late fetch failures don't cause multiple concurrent attempts for the same map stage | 0.1 sec | Passed
map stage submission with executor failure late map task completions | 0.15 sec | Passed
map stage submission with fetch failure | 0.15 sec | Passed
map stage submission with multiple shared stages and failures | 0.19 sec | Passed
map stage submission with reduce stage also depending on the data | 0.11 sec | Passed
misbehaved accumulator should not crash DAGScheduler and SparkContext | 0.11 sec | Passed
misbehaved accumulator should not impact other accumulators | 0.1 sec | Passed
misbehaved resultHandler should not crash DAGScheduler and SparkContext | 0.18 sec | Passed
recursive shuffle failures | 0.19 sec | Passed
reduce task locality preferences should only include machines with largest map outputs | 0.11 sec | Passed
reduce tasks should be placed locally with map output | 94 ms | Passed
register map outputs correctly after ExecutorLost and task Resubmitted | 0.12 sec | Passed
regression test for getCacheLocs | 59 ms | Passed
run shuffle with map stage failure | 82 ms | Passed
run trivial job | 68 ms | Passed
run trivial job w/ dependency | 73 ms | Passed
run trivial shuffle | 84 ms | Passed
run trivial shuffle with fetch failure | 0.11 sec | Passed
run trivial shuffle with out-of-band executor failure and retry | 0.17 sec | Passed
shuffle fetch failure in a reused shuffle dependency | 0.13 sec | Passed
shuffle files lost when executor failure without shuffle service | 73 ms | Passed
shuffle files lost when worker lost with shuffle service | 71 ms | Passed
shuffle files lost when worker lost without shuffle service | 69 ms | Passed
shuffle files not lost when executor failure with shuffle service | 73 ms | Passed
shuffle files not lost when executor process lost with shuffle service | 76 ms | Passed
simple map stage submission | 0.11 sec | Passed
stage used by two jobs, some fetch failures, and the first job no longer active (SPARK-6880) | 0.12 sec | Passed
stage used by two jobs, the first no longer active (SPARK-6880) | 96 ms | Passed
stages with both narrow and shuffle dependencies use narrow ones for locality | 94 ms | Passed
task end event should have updated accumulators (SPARK-20342) | 0.55 sec | Passed
task events always posted in speculation / when stage is killed | 0.11 sec | Passed
test 1 resource profile | 0.12 sec | Passed
test 2 resource profile with merge conflict config true | 0.1 sec | Passed
test 2 resource profiles errors by default | 0.1 sec | Passed
test default resource profile | 87 ms | Passed
test merge 2 resource profiles multiple configs | 5 ms | Passed
test merge 3 resource profiles | 0 ms | Passed
test multiple resource profiles created from merging use same rp | 0.11 sec | Passed
trivial job cancellation | 0.13 sec | Passed
trivial job failure | 83 ms | Passed
trivial shuffle with multiple fetch failures | 0.1 sec | Passed
unserializable task | 83 ms | Passed
zero split job | 64 ms | Passed
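
For context on the simplest cases above ("run trivial shuffle", "run trivial shuffle with fetch failure"), the sketch below is a minimal standalone Spark job, not code from the suite itself, and the object name TrivialShuffleDemo is made up for illustration. It produces the two-stage DAG those tests schedule: reduceByKey introduces a ShuffleDependency, so the DAGScheduler splits the job into a ShuffleMapStage and a ResultStage.

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical demo object; not part of DAGSchedulerSuite.
    object TrivialShuffleDemo {
      def main(args: Array[String]): Unit = {
        // Local two-thread master; no cluster required.
        val sc = new SparkContext(
          new SparkConf().setAppName("trivial-shuffle").setMaster("local[2]"))
        try {
          // reduceByKey adds a ShuffleDependency, so the DAGScheduler splits
          // this job into a ShuffleMapStage (map side, writes shuffle files)
          // and a ResultStage (reduce side, fetches those files).
          val counts = sc.parallelize(Seq("a", "b", "a"), numSlices = 2)
            .map(word => (word, 1))
            .reduceByKey(_ + _)
            .collect()
          println(counts.mkString(", "))
        } finally {
          sc.stop()
        }
      }
    }

The fetch-failure tests in this suite exercise what happens when the ResultStage of such a job fails to fetch the map outputs: the scheduler resubmits the parent ShuffleMapStage rather than aborting the job outright.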