Test Result: TaskSetManagerSuite

50 tests, 0 failures. Took 10 sec.

All Tests

Test name | Duration | Status
Ensure TaskSetManager is usable after addition of levels | 90 ms | Passed
Executors exit for reason unrelated to currently running tasks | 76 ms | Passed
Kill other task attempts when one attempt belonging to the same task succeeds | 97 ms | Passed
Killing speculative tasks does not count towards aborting the taskset | 0.11 sec | Passed
Not serializable exception thrown if the task cannot be serialized | 0.1 sec | Passed
SPARK-13343 speculative tasks that didn't commit shouldn't be marked as success | 0.18 sec | Passed
SPARK-13704 Rack Resolution is done with a batch of de-duped hosts | 0.11 sec | Passed
SPARK-17894: Verify TaskSetManagers for different stage attempts have unique names | 93 ms | Passed
SPARK-19868: DagScheduler only notified of taskEnd when state is ready | 0.17 sec | Passed
SPARK-21563 context's added jars shouldn't change mid-TaskSet | 81 ms | Passed
SPARK-24677: Avoid NoSuchElementException from MedianHeap | 0.11 sec | Passed
SPARK-24755 Executor loss can cause task to not be resubmitted | 0.13 sec | Passed
SPARK-26755 Ensure that a speculative task is submitted only once for execution | 76 ms | Passed
SPARK-26755 Ensure that a speculative task obeys original locality preferences | 85 ms | Passed
SPARK-29976 Regular speculation configs should still take effect even when a threshold is provided | 69 ms | Passed
SPARK-29976 when a speculation time threshold is provided, should not speculative if there are too many tasks in the stage even though time threshold is provided | 67 ms | Passed
SPARK-29976 when a speculation time threshold is provided, should speculative run the task even if there are not enough successful runs, total tasks: 1 | 0.1 sec | Passed
SPARK-29976 when a speculation time threshold is provided, should speculative run the task even if there are not enough successful runs, total tasks: 2 | 86 ms | Passed
SPARK-29976: when the speculation time threshold is not provided, don't speculative run if there are not enough successful runs, total tasks: 1 | 92 ms | Passed
SPARK-29976: when the speculation time threshold is not provided, don't speculative run if there are not enough successful runs, total tasks: 2 | 85 ms | Passed
SPARK-4939: no-pref tasks should be scheduled after process-local tasks finished | 86 ms | Passed
SPARK-4939: node-local tasks should be scheduled right after process-local tasks finished | 0.1 sec | Passed
TaskOutputFileAlreadyExistException lead to task set abortion | 70 ms | Passed
TaskSet with no preferences | 99 ms | Passed
TaskSetManager allocate resource addresses from available resources | 78 ms | Passed
Test TaskLocation for different host type | 1 ms | Passed
Test that locations with HDFSCacheTaskLocation are treated as PROCESS_LOCAL | 84 ms | Passed
[SPARK-13931] taskSetManager should not send Resubmitted tasks after being a zombie | 66 ms | Passed
[SPARK-22074] Task killed by other attempt task should not be resubmitted | 84 ms | Passed
abort the job if total size of results is too large | 1.5 sec | Passed
basic delay scheduling | 0.12 sec | Passed
cores due to standalone settings, speculate if there is only one task in the stage | 71 ms | Passed
delay scheduling with failed hosts | 99 ms | Passed
delay scheduling with fallback | 0.11 sec | Passed
do not emit warning when serialized task is small | 0.13 sec | Passed
don't update blacklist for shuffle-fetch failures, preemption, denied commits, or killed tasks | 93 ms | Passed
emit warning when serialized task is large | 0.12 sec | Passed
executors should be blacklisted after task failure, in spite of locality preferences | 93 ms | Passed
multiple offers with no preferences | 0.1 sec | Passed
new executors get added and lost | 91 ms | Passed
node-local tasks should be scheduled right away when there are only node-local and no-preference tasks | 88 ms | Passed
repeated failures lead to task set abortion | 92 ms | Passed
reset | 4.3 sec | Passed
skip unsatisfiable locality levels | 0.16 sec | Passed
speculative and noPref task should be scheduled after node-local | 94 ms | Passed
task result lost | 99 ms | Passed
test RACK_LOCAL tasks | 0.12 sec | Passed
update application blacklist for shuffle-fetch | 91 ms | Passed
update blacklist before adding pending task to avoid race condition | 0.1 sec | Passed
we do not need to delay scheduling when we only have noPref tasks in the queue | 0.1 sec | Passed
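To reproduce this run locally, the suite can be invoked on its own from a Spark checkout with sbt's test filter (a sketch, assuming the standard Spark build layout in which TaskSetManagerSuite lives in the core module under org.apache.spark.scheduler):

    build/sbt "core/testOnly *TaskSetManagerSuite"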