Test Result: TaskSetManagerSuite

0 failures (±0)
56 tests (±0)
Took 22 sec.

All Tests

Test name | Duration | Status
Ensure TaskSetManager is usable after addition of levels | 0.16 sec | Passed
Executors exit for reason unrelated to currently running tasks | 55 ms | Passed
Kill other task attempts when one attempt belonging to the same task succeeds | 82 ms | Passed
Killing speculative tasks does not count towards aborting the taskset | 81 ms | Passed
Not serializable exception thrown if the task cannot be serialized | 97 ms | Passed
SPARK-13343 speculative tasks that didn't commit shouldn't be marked as success | 65 ms | Passed
SPARK-13704 Rack Resolution is done with a batch of de-duped hosts | 84 ms | Passed
SPARK-17894: Verify TaskSetManagers for different stage attempts have unique names | 0.14 sec | Passed
SPARK-19868: DagScheduler only notified of taskEnd when state is ready | 0.28 sec | Passed
SPARK-21040: Check speculative tasks are launched when an executor is decommissioned and the tasks running on it cannot finish within EXECUTOR_DECOMMISSION_KILL_INTERVAL | 88 ms | Passed
SPARK-21563 context's added jars shouldn't change mid-TaskSet | 84 ms | Passed
SPARK-24677: Avoid NoSuchElementException from MedianHeap | 63 ms | Passed
SPARK-24755 Executor loss can cause task to not be resubmitted | 69 ms | Passed
SPARK-26755 Ensure that a speculative task is submitted only once for execution | 74 ms | Passed
SPARK-26755 Ensure that a speculative task obeys original locality preferences | 73 ms | Passed
SPARK-29976 Regular speculation configs should still take effect even when a threshold is provided | 95 ms | Passed
SPARK-29976 when a speculation time threshold is provided, should not speculative if there are too many tasks in the stage even though time threshold is provided | 92 ms | Passed
SPARK-29976 when a speculation time threshold is provided, should speculative run the task even if there are not enough successful runs, total tasks: 1 | 70 ms | Passed
SPARK-29976 when a speculation time threshold is provided, should speculative run the task even if there are not enough successful runs, total tasks: 2 | 49 ms | Passed
SPARK-29976: when the speculation time threshold is not provided,don't speculative run if there are not enough successful runs, total tasks: 1 | 99 ms | Passed
SPARK-29976: when the speculation time threshold is not provided,don't speculative run if there are not enough successful runs, total tasks: 2 | 81 ms | Passed
SPARK-30359: don't clean executorsPendingToRemove at the beginning of CoarseGrainedSchedulerBackend.reset | 4.9 sec | Passed
SPARK-30417 when spark.task.cpus is greater than spark.executor.cores due to standalone settings, speculate if there is only one task in the stage | 0.11 sec | Passed
SPARK-31837: Shift to the new highest locality level if there is when recomputeLocality | 59 ms | Passed
SPARK-32470: do not check total size of intermediate stages | 11 sec | Passed
SPARK-32653: Decommissioned executor should not be used to calculate locality levels | 69 ms | Passed
SPARK-32653: Decommissioned host should not be used to calculate locality levels | 76 ms | Passed
SPARK-33741 Test minimum amount of time a task runs before being considered for speculation | 65 ms | Passed
SPARK-4939: no-pref tasks should be scheduled after process-local tasks finished | 64 ms | Passed
SPARK-4939: node-local tasks should be scheduled right after process-local tasks finished | 64 ms | Passed
TaskOutputFileAlreadyExistException lead to task set abortion | 0.12 sec | Passed
TaskSet with no preferences | 81 ms | Passed
TaskSetManager passes task resource along | 54 ms | Passed
Test TaskLocation for different host type. | 2 ms | Passed
Test that locations with HDFSCacheTaskLocation are treated as PROCESS_LOCAL. | 68 ms | Passed
[SPARK-13931] taskSetManager should not send Resubmitted tasks after being a zombie | 71 ms | Passed
[SPARK-22074] Task killed by other attempt task should not be resubmitted | 76 ms | Passed
abort the job if total size of results is too large | 1.4 sec | Passed
basic delay scheduling | 90 ms | Passed
delay scheduling with failed hosts | 74 ms | Passed
delay scheduling with fallback | 0.24 sec | Passed
do not emit warning when serialized task is small | 0.14 sec | Passed
don't update excludelist for shuffle-fetch failures, preemption, denied commits, or killed tasks | 0.17 sec | Passed
emit warning when serialized task is large | 72 ms | Passed
executors should be excluded after task failure, in spite of locality preferences | 99 ms | Passed
multiple offers with no preferences | 66 ms | Passed
new executors get added and lost | 0.1 sec | Passed
node-local tasks should be scheduled right away when there are only node-local and no-preference tasks | 64 ms | Passed
repeated failures lead to task set abortion | 72 ms | Passed
skip unsatisfiable locality levels | 76 ms | Passed
speculative and noPref task should be scheduled after node-local | 67 ms | Passed
task result lost | 0.1 sec | Passed
test RACK_LOCAL tasks | 0.11 sec | Passed
update application healthTracker for shuffle-fetch | 72 ms | Passed
update healthTracker before adding pending task to avoid race condition | 60 ms | Passed
we do not need to delay scheduling when we only have noPref tasks in the queue | 58 ms | Passed
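Many of the tests above (the SPARK-29976, SPARK-33741, and "speculative tasks" cases) exercise Spark's speculative-execution settings. As a reference point, a minimal `spark-defaults.conf` fragment enabling speculation with a time threshold is sketched below; the property names are Spark's documented configuration keys, but the values are illustrative only, not what the suite itself uses:

```properties
# Enable re-launching of slow task attempts on other executors
spark.speculation                          true
# How often to check for tasks to speculate
spark.speculation.interval                 100ms
# A task is eligible if it runs longer than multiplier x median of successful runs
spark.speculation.multiplier               1.5
# Fraction of tasks that must finish before durations are considered
spark.speculation.quantile                 0.75
# SPARK-29976: also speculate any task exceeding this absolute duration,
# even when too few runs have completed to compute a median
spark.speculation.task.duration.threshold  60s
```

The last key is the one introduced by SPARK-29976; the other tests in that group verify that the quantile/multiplier path still applies when the threshold is set.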