Console Output

Skipping 2,820 KB..
2021-11-28 23:15:25.243 - stderr> 21/11/28 23:15:25 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20211128231525-0000/0 is now RUNNING
2021-11-28 23:15:25.706 - stderr> 21/11/28 23:15:25 INFO SparkContext: Starting job: collect at SparkSubmitSuite.scala:1555
2021-11-28 23:15:25.724 - stderr> 21/11/28 23:15:25 INFO DAGScheduler: Got job 0 (collect at SparkSubmitSuite.scala:1555) with 10 output partitions
2021-11-28 23:15:25.725 - stderr> 21/11/28 23:15:25 INFO DAGScheduler: Final stage: ResultStage 0 (collect at SparkSubmitSuite.scala:1555)
2021-11-28 23:15:25.725 - stderr> 21/11/28 23:15:25 INFO DAGScheduler: Parents of final stage: List()
2021-11-28 23:15:25.727 - stderr> 21/11/28 23:15:25 INFO DAGScheduler: Missing parents: List()
2021-11-28 23:15:25.733 - stderr> 21/11/28 23:15:25 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkSubmitSuite.scala:1555), which has no missing parents
2021-11-28 23:15:25.81 - stderr> 21/11/28 23:15:25 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 4.0 KiB, free 366.3 MiB)
2021-11-28 23:15:25.884 - stderr> 21/11/28 23:15:25 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 2.3 KiB, free 366.3 MiB)
2021-11-28 23:15:25.887 - stderr> 21/11/28 23:15:25 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.17.0.1:38845 (size: 2.3 KiB, free: 366.3 MiB)
2021-11-28 23:15:25.891 - stderr> 21/11/28 23:15:25 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1474
2021-11-28 23:15:25.907 - stderr> 21/11/28 23:15:25 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkSubmitSuite.scala:1555) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
2021-11-28 23:15:25.908 - stderr> 21/11/28 23:15:25 INFO TaskSchedulerImpl: Adding task set 0.0 with 10 tasks resource profile 0
2021-11-28 23:15:28.825 - stderr> 21/11/28 23:15:28 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (172.17.0.1:43832) with ID 0,  ResourceProfileId 0
2021-11-28 23:15:28.964 - stderr> 21/11/28 23:15:28 INFO BlockManagerMasterEndpoint: Registering block manager 172.17.0.1:39613 with 366.3 MiB RAM, BlockManagerId(0, 172.17.0.1, 39613, None)
2021-11-28 23:15:29.123 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0) (172.17.0.1, executor 0, partition 0, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:29.424 - stderr> 21/11/28 23:15:29 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.17.0.1:39613 (size: 2.3 KiB, free: 366.3 MiB)
2021-11-28 23:15:29.881 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1) (172.17.0.1, executor 0, partition 1, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:29.886 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 774 ms on 172.17.0.1 (executor 0) (1/10)
2021-11-28 23:15:29.897 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2) (172.17.0.1, executor 0, partition 2, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:29.898 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 19 ms on 172.17.0.1 (executor 0) (2/10)
2021-11-28 23:15:29.915 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3) (172.17.0.1, executor 0, partition 3, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:29.915 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 18 ms on 172.17.0.1 (executor 0) (3/10)
2021-11-28 23:15:29.929 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4) (172.17.0.1, executor 0, partition 4, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:29.929 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 15 ms on 172.17.0.1 (executor 0) (4/10)
2021-11-28 23:15:29.941 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5) (172.17.0.1, executor 0, partition 5, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:29.941 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 13 ms on 172.17.0.1 (executor 0) (5/10)
2021-11-28 23:15:29.959 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6) (172.17.0.1, executor 0, partition 6, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:29.96 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 19 ms on 172.17.0.1 (executor 0) (6/10)
2021-11-28 23:15:29.971 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7) (172.17.0.1, executor 0, partition 7, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:29.971 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 12 ms on 172.17.0.1 (executor 0) (7/10)
2021-11-28 23:15:29.986 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8) (172.17.0.1, executor 0, partition 8, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:29.986 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 16 ms on 172.17.0.1 (executor 0) (8/10)
2021-11-28 23:15:29.997 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9) (172.17.0.1, executor 0, partition 9, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:29.998 - stderr> 21/11/28 23:15:29 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 13 ms on 172.17.0.1 (executor 0) (9/10)
2021-11-28 23:15:30.011 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 13 ms on 172.17.0.1 (executor 0) (10/10)
2021-11-28 23:15:30.012 - stderr> 21/11/28 23:15:30 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
2021-11-28 23:15:30.013 - stderr> 21/11/28 23:15:30 INFO DAGScheduler: ResultStage 0 (collect at SparkSubmitSuite.scala:1555) finished in 4.257 s
2021-11-28 23:15:30.019 - stderr> 21/11/28 23:15:30 INFO DAGScheduler: Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
2021-11-28 23:15:30.02 - stderr> 21/11/28 23:15:30 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage finished
2021-11-28 23:15:30.023 - stderr> 21/11/28 23:15:30 INFO DAGScheduler: Job 0 finished: collect at SparkSubmitSuite.scala:1555, took 4.316457 s
2021-11-28 23:15:30.038 - stderr> 21/11/28 23:15:30 INFO SparkContext: Starting job: collect at SparkSubmitSuite.scala:1555
2021-11-28 23:15:30.039 - stderr> 21/11/28 23:15:30 INFO DAGScheduler: Got job 1 (collect at SparkSubmitSuite.scala:1555) with 10 output partitions
2021-11-28 23:15:30.039 - stderr> 21/11/28 23:15:30 INFO DAGScheduler: Final stage: ResultStage 1 (collect at SparkSubmitSuite.scala:1555)
2021-11-28 23:15:30.039 - stderr> 21/11/28 23:15:30 INFO DAGScheduler: Parents of final stage: List()
2021-11-28 23:15:30.04 - stderr> 21/11/28 23:15:30 INFO DAGScheduler: Missing parents: List()
2021-11-28 23:15:30.041 - stderr> 21/11/28 23:15:30 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[3] at map at SparkSubmitSuite.scala:1555), which has no missing parents
2021-11-28 23:15:30.045 - stderr> 21/11/28 23:15:30 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.0 KiB, free 366.3 MiB)
2021-11-28 23:15:30.047 - stderr> 21/11/28 23:15:30 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.3 KiB, free 366.3 MiB)
2021-11-28 23:15:30.048 - stderr> 21/11/28 23:15:30 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.17.0.1:38845 (size: 2.3 KiB, free: 366.3 MiB)
2021-11-28 23:15:30.048 - stderr> 21/11/28 23:15:30 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1474
2021-11-28 23:15:30.049 - stderr> 21/11/28 23:15:30 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 1 (MapPartitionsRDD[3] at map at SparkSubmitSuite.scala:1555) (first 15 tasks are for partitions Vector(0, 1, 2, 3, 4, 5, 6, 7, 8, 9))
2021-11-28 23:15:30.049 - stderr> 21/11/28 23:15:30 INFO TaskSchedulerImpl: Adding task set 1.0 with 10 tasks resource profile 0
2021-11-28 23:15:30.051 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 10) (172.17.0.1, executor 0, partition 0, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:30.069 - stderr> 21/11/28 23:15:30 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.17.0.1:39613 (size: 2.3 KiB, free: 366.3 MiB)
2021-11-28 23:15:30.081 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 11) (172.17.0.1, executor 0, partition 1, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:30.082 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 10) in 31 ms on 172.17.0.1 (executor 0) (1/10)
2021-11-28 23:15:30.091 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Starting task 2.0 in stage 1.0 (TID 12) (172.17.0.1, executor 0, partition 2, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:30.092 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 11) in 11 ms on 172.17.0.1 (executor 0) (2/10)
2021-11-28 23:15:30.101 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Starting task 3.0 in stage 1.0 (TID 13) (172.17.0.1, executor 0, partition 3, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:30.102 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Finished task 2.0 in stage 1.0 (TID 12) in 11 ms on 172.17.0.1 (executor 0) (3/10)
2021-11-28 23:15:30.112 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Starting task 4.0 in stage 1.0 (TID 14) (172.17.0.1, executor 0, partition 4, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:30.112 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Finished task 3.0 in stage 1.0 (TID 13) in 11 ms on 172.17.0.1 (executor 0) (4/10)
2021-11-28 23:15:30.247 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Starting task 5.0 in stage 1.0 (TID 15) (172.17.0.1, executor 0, partition 5, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:30.248 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Finished task 4.0 in stage 1.0 (TID 14) in 137 ms on 172.17.0.1 (executor 0) (5/10)
2021-11-28 23:15:30.258 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Starting task 6.0 in stage 1.0 (TID 16) (172.17.0.1, executor 0, partition 6, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:30.259 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Finished task 5.0 in stage 1.0 (TID 15) in 139 ms on 172.17.0.1 (executor 0) (6/10)
2021-11-28 23:15:30.27 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Starting task 7.0 in stage 1.0 (TID 17) (172.17.0.1, executor 0, partition 7, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:30.271 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Finished task 6.0 in stage 1.0 (TID 16) in 12 ms on 172.17.0.1 (executor 0) (7/10)
2021-11-28 23:15:30.273 - stderr> 21/11/28 23:15:30 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 172.17.0.1:38845 in memory (size: 2.3 KiB, free: 366.3 MiB)
2021-11-28 23:15:30.279 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Starting task 8.0 in stage 1.0 (TID 18) (172.17.0.1, executor 0, partition 8, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:30.28 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Finished task 7.0 in stage 1.0 (TID 17) in 10 ms on 172.17.0.1 (executor 0) (8/10)
2021-11-28 23:15:30.287 - stderr> 21/11/28 23:15:30 INFO BlockManagerInfo: Removed broadcast_0_piece0 on 172.17.0.1:39613 in memory (size: 2.3 KiB, free: 366.3 MiB)
2021-11-28 23:15:30.289 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Starting task 9.0 in stage 1.0 (TID 19) (172.17.0.1, executor 0, partition 9, PROCESS_LOCAL, 4582 bytes) taskResourceAssignments Map()
2021-11-28 23:15:30.29 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Finished task 8.0 in stage 1.0 (TID 18) in 11 ms on 172.17.0.1 (executor 0) (9/10)
2021-11-28 23:15:30.298 - stderr> 21/11/28 23:15:30 INFO TaskSetManager: Finished task 9.0 in stage 1.0 (TID 19) in 9 ms on 172.17.0.1 (executor 0) (10/10)
2021-11-28 23:15:30.298 - stderr> 21/11/28 23:15:30 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
2021-11-28 23:15:30.299 - stderr> 21/11/28 23:15:30 INFO DAGScheduler: ResultStage 1 (collect at SparkSubmitSuite.scala:1555) finished in 0.256 s
2021-11-28 23:15:30.299 - stderr> 21/11/28 23:15:30 INFO DAGScheduler: Job 1 is finished. Cancelling potential speculative or zombie tasks for this job
2021-11-28 23:15:30.299 - stderr> 21/11/28 23:15:30 INFO TaskSchedulerImpl: Killing all running tasks in stage 1: Stage finished
2021-11-28 23:15:30.3 - stderr> 21/11/28 23:15:30 INFO DAGScheduler: Job 1 finished: collect at SparkSubmitSuite.scala:1555, took 0.261553 s
2021-11-28 23:15:30.307 - stderr> 21/11/28 23:15:30 INFO StandaloneSchedulerBackend: Shutting down all executors
2021-11-28 23:15:30.307 - stderr> 21/11/28 23:15:30 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
2021-11-28 23:15:30.315 - stderr> 21/11/28 23:15:30 INFO LocalSparkCluster: Shutting down local Spark cluster.
2021-11-28 23:15:30.315 - stderr> 21/11/28 23:15:30 INFO Master: Received unregister request from application app-20211128231525-0000
2021-11-28 23:15:30.316 - stderr> 21/11/28 23:15:30 INFO Master: Removing app app-20211128231525-0000
2021-11-28 23:15:30.318 - stderr> 21/11/28 23:15:30 INFO ExecutorRunner: Runner thread for executor app-20211128231525-0000/0 interrupted
2021-11-28 23:15:30.318 - stderr> 21/11/28 23:15:30 INFO ExecutorRunner: Killing process!
2021-11-28 23:15:30.33 - stderr> 21/11/28 23:15:30 INFO AbstractConnector: Stopped Spark@26554a3b{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2021-11-28 23:15:30.335 - stderr> 21/11/28 23:15:30 INFO Master: 172.17.0.1:48428 got disassociated, removing it.
2021-11-28 23:15:30.336 - stderr> 21/11/28 23:15:30 INFO Master: 172.17.0.1:39315 got disassociated, removing it.
2021-11-28 23:15:30.336 - stderr> 21/11/28 23:15:30 INFO Master: Removing worker worker-20211128231524-172.17.0.1-39315 on 172.17.0.1:39315
2021-11-28 23:15:30.338 - stderr> 21/11/28 23:15:30 INFO Master: Telling app of lost worker: worker-20211128231524-172.17.0.1-39315
2021-11-28 23:15:30.342 - stderr> 21/11/28 23:15:30 INFO AbstractConnector: Stopped Spark@6ccb62f6{HTTP/1.1, (http/1.1)}{0.0.0.0:0}
2021-11-28 23:15:30.359 - stderr> 21/11/28 23:15:30 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
2021-11-28 23:15:30.37 - stderr> 21/11/28 23:15:30 INFO MemoryStore: MemoryStore cleared
2021-11-28 23:15:30.37 - stderr> 21/11/28 23:15:30 INFO BlockManager: BlockManager stopped
2021-11-28 23:15:30.374 - stderr> 21/11/28 23:15:30 INFO BlockManagerMaster: BlockManagerMaster stopped
2021-11-28 23:15:30.377 - stderr> 21/11/28 23:15:30 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
2021-11-28 23:15:30.387 - stderr> 21/11/28 23:15:30 INFO SparkContext: Successfully stopped SparkContext
2021-11-28 23:15:30.391 - stderr> 21/11/28 23:15:30 INFO ShutdownHookManager: Shutdown hook called
2021-11-28 23:15:30.391 - stderr> 21/11/28 23:15:30 INFO ShutdownHookManager: Deleting directory /tmp/spark-69b56716-5fd3-4b98-a8e3-8acc10e2cabe
2021-11-28 23:15:30.395 - stderr> 21/11/28 23:15:30 INFO ShutdownHookManager: Deleting directory /tmp/spark-a86e51f5-4d47-4bce-aeeb-25c048545480
2021-11-28 23:15:30.399 - stderr> 21/11/28 23:15:30 INFO ShutdownHookManager: Deleting directory /tmp/worker-b5688f0a-9f0f-40cb-9e00-531d6955016d
- SPARK-32119: Jars and files should be loaded when Executors launch for plugins
- start SparkApplication without modifying system properties
- support --py-files/spark.submit.pyFiles in non pyspark application
- handles natural line delimiters in --properties-file and --conf uniformly
- get a Spark configuration from arguments
DecommissionWorkerSuite:
- decommission workers should not result in job failure
- decommission workers ensure that shuffle output is regenerated even with shuffle service
- decommission stalled workers ensure that fetch failures lead to rerun
- decommission eager workers ensure that fetch failures lead to rerun
RPackageUtilsSuite:
- pick which jars to unpack using the manifest
- build an R package from a jar end to end
- jars that don't exist are skipped and print warning
- faulty R package shows documentation
- jars without manifest return false
- SparkR zipping works properly
TaskDescriptionSuite:
- encoding and then decoding a TaskDescription results in the same TaskDescription
BlockManagerDecommissionIntegrationSuite:
- SPARK-32850: BlockManager decommission should respect the configuration (enabled=false)
- SPARK-32850: BlockManager decommission should respect the configuration (enabled=true)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@5b30a47b rejected from java.util.concurrent.ThreadPoolExecutor@5b28726e[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at scala.concurrent.BatchingExecutor$Batch.processBatch$1(BatchingExecutor.scala:67)
	at scala.concurrent.BatchingExecutor$Batch.$anonfun$run$1(BatchingExecutor.scala:82)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:59)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:875)
	at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:110)
	at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:107)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:873)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
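The RejectedExecutionException above is the standard `ThreadPoolExecutor` behavior during teardown: once `shutdown()` has been called, any task subsequently handed to the pool is refused by the default `AbortPolicy`, which is why a callback completing during cluster shutdown surfaces as "rejected from ... [Shutting down, ...]". A minimal standalone sketch (not the actual Spark code path) reproducing the mechanism:

```scala
import java.util.concurrent.{Executors, RejectedExecutionException, TimeUnit}

/** Demonstrates that submitting to an executor after shutdown() is rejected.
  * Returns true when the expected RejectedExecutionException is thrown. */
def submitAfterShutdownIsRejected(): Boolean = {
  val pool = Executors.newSingleThreadExecutor()
  // Occupy the single worker so the pool is "Shutting down" with an active thread,
  // mirroring the state shown in the log message above.
  pool.submit(new Runnable { def run(): Unit = Thread.sleep(100) })
  pool.shutdown() // from here on, new tasks are refused by the default AbortPolicy
  try {
    pool.submit(new Runnable { def run(): Unit = () })
    false // unreachable: the submit above should throw
  } catch {
    case _: RejectedExecutionException =>
      pool.awaitTermination(1, TimeUnit.SECONDS)
      true
  }
}

println(submitAfterShutdownIsRejected()) // prints: true
```

In the log this is benign: a Future callback raced with executor-pool shutdown at the end of the decommission test, so the rejection is logged but the suite's tests still pass.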
- verify that an already running task which is going to cache data succeeds on a decommissioned executor after task start
- verify that an already running task which is going to cache data succeeds on a decommissioned executor after one task ends but before job ends
- verify that shuffle blocks are migrated
- verify that both migrations can work at the same time
- SPARK-36782 not deadlock if MapOutput uses broadcast
MeanEvaluatorSuite:
- test count 0
- test count 1
- test count > 1
TopologyMapperSuite:
- File based Topology Mapper
ShuffleNettySuite:
- groupByKey without compression
- shuffle non-zero block size
- shuffle serializer
- zero sized blocks
- zero sized blocks without kryo
- shuffle on mutable pairs
- sorting on mutable pairs
- cogroup using mutable pairs
- subtract mutable pairs
- sort with Java non serializable class - Kryo
- sort with Java non serializable class - Java
- shuffle with different compression settings (SPARK-3426)
- [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file
- cannot find its local shuffle file if no execution of the stage and rerun shuffle
- metrics for shuffle without aggregation
- metrics for shuffle with aggregation
- multiple simultaneous attempts for one task (SPARK-8029)
- SPARK-34541: shuffle can be removed
- SPARK-36206: shuffle checksum detect disk corruption
CountEvaluatorSuite:
- test count 0
- test count >= 1
SingleFileEventLogFileReaderSuite:
- Retrieve EventLogFileReader correctly
- get information, list event log files, zip log files - with codec None
- get information, list event log files, zip log files - with codec Some(lz4)
- get information, list event log files, zip log files - with codec Some(lzf)
- get information, list event log files, zip log files - with codec Some(snappy)
- get information, list event log files, zip log files - with codec Some(zstd)
ChromeUISeleniumSuite:
HostLocalShuffleReadingSuite:
- host local shuffle reading with external shuffle service enabled (SPARK-27651)
- host local shuffle reading with external shuffle service disabled (SPARK-32077)
- Enable host local shuffle reading when push based shuffle is enabled
KryoSerializerSuite:
- SPARK-7392 configuration limits
- basic types
- pairs
- Scala data structures
- Bug: SPARK-10251
- ranges
- asJavaIterable
- custom registrator
- kryo with collect
- kryo with parallelize
- kryo with parallelize for specialized tuples
- kryo with parallelize for primitive arrays
- kryo with collect for specialized tuples
- kryo with SerializableHyperLogLog
- kryo with reduce
- kryo with fold
- kryo with nonexistent custom registrator should fail
- default class loader can be set by a different thread
- registration of HighlyCompressedMapStatus
- registration of TaskCommitMessage
- serialization buffer overflow reporting
- KryoOutputObjectOutputBridge.writeObject and KryoInputObjectInputBridge.readObject
- getAutoReset
- SPARK-25176 ClassCastException when writing a Map after previously reading a Map with different generic type
- instance reuse with autoReset = true, referenceTracking = true, usePool = true
- instance reuse with autoReset = true, referenceTracking = true, usePool = false
- instance reuse with autoReset = false, referenceTracking = true, usePool = true
- instance reuse with autoReset = false, referenceTracking = true, usePool = false
- instance reuse with autoReset = true, referenceTracking = false, usePool = true
- instance reuse with autoReset = true, referenceTracking = false, usePool = false
- instance reuse with autoReset = false, referenceTracking = false, usePool = true
- instance reuse with autoReset = false, referenceTracking = false, usePool = false
- SPARK-25839 KryoPool implementation works correctly in multi-threaded environment
- SPARK-27216: test RoaringBitmap ser/dser with Kryo
- SPARK-37071: OpenHashMap serialize with reference tracking turned off
FailureSuite:
- failure in a single-stage job
- failure in a two-stage job
- failure in a map stage
- failure because task results are not serializable
- failure because task closure is not serializable
- managed memory leak error should not mask other failures (SPARK-9266)
- last failure cause is sent back to driver
- failure cause stacktrace is sent back to driver if exception is not serializable
- failure cause stacktrace is sent back to driver if exception is not deserializable
- failure in tasks in a submitMapStage
- failure because cached RDD partitions are missing from DiskStore (SPARK-15736)
- SPARK-16304: Link error should not crash executor
PartitionwiseSampledRDDSuite:
- seed distribution
- concurrency
HybridStoreSuite:
- test multiple objects write read delete
- test metadata
- test update
- test basic iteration
- test delete after switch
- test klassMap
JdbcRDDSuite:
- basic functionality
- large id overflow
FileSuite:
- text files
- text files (compressed)
- text files do not allow null rows
- SequenceFiles
- SequenceFile (compressed) - default
- SequenceFile (compressed) - bzip2
- SequenceFile with writable key
- SequenceFile with writable value
- SequenceFile with writable key and value
- implicit conversions in reading SequenceFiles
- object files of ints
- object files of complex types
- object files of classes from a JAR
- write SequenceFile using new Hadoop API
- read SequenceFile using new Hadoop API
- binary file input as byte array
- portabledatastream caching tests
- portabledatastream persist disk storage
- portabledatastream flatmap tests
- SPARK-22357 test binaryFiles minPartitions
- minimum split size per node and per rack should be less than or equal to maxSplitSize
- fixed record length binary file as byte array
- negative binary record length should raise an exception
- file caching
- prevent user from overwriting the empty directory (old Hadoop API)
- prevent user from overwriting the non-empty directory (old Hadoop API)
- allow user to disable the output directory existence checking (old Hadoop API)
- prevent user from overwriting the empty directory (new Hadoop API)
- prevent user from overwriting the non-empty directory (new Hadoop API)
- allow user to disable the output directory existence checking (new Hadoop API)
- save Hadoop Dataset through old Hadoop API
- save Hadoop Dataset through new Hadoop API
- Get input files via old Hadoop API
- Get input files via new Hadoop API
- spark.files.ignoreCorruptFiles should work both HadoopRDD and NewHadoopRDD
- spark.hadoopRDD.ignoreEmptySplits work correctly (old Hadoop API)
- spark.hadoopRDD.ignoreEmptySplits work correctly (new Hadoop API)
- spark.files.ignoreMissingFiles should work both HadoopRDD and NewHadoopRDD
- SPARK-25100: Support commit tasks when Kryo registration is required
ShuffleOldFetchProtocolSuite:
- groupByKey without compression
- shuffle non-zero block size
- shuffle serializer
- zero sized blocks
- zero sized blocks without kryo
- shuffle on mutable pairs
- sorting on mutable pairs
- cogroup using mutable pairs
- subtract mutable pairs
- sort with Java non serializable class - Kryo
- sort with Java non serializable class - Java
- shuffle with different compression settings (SPARK-3426)
- [SPARK-4085] rerun map stage if reduce stage cannot find its local shuffle file
- cannot find its local shuffle file if no execution of the stage and rerun shuffle
- metrics for shuffle without aggregation
- metrics for shuffle with aggregation
- multiple simultaneous attempts for one task (SPARK-8029)
- SPARK-34541: shuffle can be removed
- SPARK-36206: shuffle checksum detect disk corruption
SparkContextSuite:
- Only one SparkContext may be active at a time
- Can still construct a new SparkContext after failing to construct a previous one
- Test getOrCreate
- BytesWritable implicit conversion is correct
- basic case for addFile and listFiles
- SPARK-33530: basic case for addArchive and listArchives
- add and list jar files
- add FS jar files not exists
- SPARK-17650: malformed url's throw exceptions before bricking Executors
- addFile recursive works
- SPARK-30126: addFile when file path contains spaces with recursive works
- SPARK-30126: addFile when file path contains spaces without recursive works
- addFile recursive can't add directories by default
- cannot call addFile with different paths that have the same filename
- addJar can be called twice with same file in local-mode (SPARK-16787)
- addFile can be called twice with same file in local-mode (SPARK-16787)
- addJar can be called twice with same file in non-local-mode (SPARK-16787)
- addFile can be called twice with same file in non-local-mode (SPARK-16787)
- SPARK-30126: add jar when path contains spaces
- add jar with invalid path
- SPARK-22585 addJar argument without scheme is interpreted literally without url decoding
- Cancelling job group should not cause SparkContext to shutdown (SPARK-6414)
- Comma separated paths for newAPIHadoopFile/wholeTextFiles/binaryFiles (SPARK-7155)
- Default path for file based RDDs is properly set (SPARK-12517)
- calling multiple sc.stop() must not throw any exception
- No exception when both num-executors and dynamic allocation set.
- localProperties are inherited by spawned threads.
- localProperties do not cross-talk between threads.
- log level case-insensitive and reset log level
- register and deregister Spark listener from SparkContext
- Cancelling stages/jobs with custom reasons.
- client mode with a k8s master url
- Killing tasks that raise interrupted exception on cancel
- Killing tasks that raise runtime exception on cancel
java.lang.Throwable
	at org.apache.spark.DebugFilesystem$.addOpenStream(DebugFilesystem.scala:35)
	at org.apache.spark.DebugFilesystem.open(DebugFilesystem.scala:75)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
	at org.apache.spark.SparkContextSuite.$anonfun$new$77(SparkContextSuite.scala:764)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:226)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:190)
	at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:224)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:236)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:236)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:218)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:62)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:62)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:269)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:431)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:269)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:268)
	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1563)
	at org.scalatest.Suite.run(Suite.scala:1112)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1563)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:273)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:273)
	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:272)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:62)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:62)
	at org.scalatest.Suite.callExecuteOnSuite$1(Suite.scala:1175)
	at org.scalatest.Suite.$anonfun$runNestedSuites$1(Suite.scala:1222)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.scalatest.Suite.runNestedSuites(Suite.scala:1220)
	at org.scalatest.Suite.runNestedSuites$(Suite.scala:1154)
	at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:30)
	at org.scalatest.Suite.run(Suite.scala:1109)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:30)
	at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:45)
	at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13(Runner.scala:1322)
	at org.scalatest.tools.Runner$.$anonfun$doRunRunRunDaDoRunRun$13$adapted(Runner.scala:1316)
	at scala.collection.immutable.List.foreach(List.scala:431)
	at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1316)
	at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24(Runner.scala:993)
	at org.scalatest.tools.Runner$.$anonfun$runOptionallyWithPassFailReporter$24$adapted(Runner.scala:971)
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1482)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:971)
	at org.scalatest.tools.Runner$.main(Runner.scala:775)
	at org.scalatest.tools.Runner.main(Runner.scala)
- SPARK-19446: DebugFilesystem.assertNoOpenStreams should report open streams to help debugging
- support barrier execution mode under local mode
- support barrier execution mode under local-cluster mode
- cancel zombie tasks in a result stage when the job finishes
- Avoid setting spark.task.cpus unreasonably (SPARK-27192)
- test driver discovery under local-cluster mode
- test gpu driver resource files and discovery under local-cluster mode
- Test parsing resources task configs with missing executor config
- Test parsing resources executor config < task requirements
- Parse resources executor config not the same multiple numbers of the task requirements
- test resource scheduling under local-cluster mode
- SPARK-32160: Disallow to create SparkContext in executors
- SPARK-32160: Allow to create SparkContext in executors if the config is set
- SPARK-33084: Add jar support Ivy URI -- default transitive = true
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@64a0863a rejected from java.util.concurrent.ThreadPoolExecutor@21a204ca[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at scala.concurrent.BatchingExecutor$Batch.processBatch$1(BatchingExecutor.scala:67)
	at scala.concurrent.BatchingExecutor$Batch.$anonfun$run$1(BatchingExecutor.scala:82)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:59)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:875)
	at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:110)
	at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:107)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:873)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
- SPARK-33084: Add jar support Ivy URI -- invalid transitive use default false
- SPARK-33084: Add jar support Ivy URI -- transitive=true will download dependency jars
- SPARK-34506: Add jar support Ivy URI -- transitive=false will not download dependency jars
- SPARK-34506: Add jar support Ivy URI -- test exclude param when transitive unspecified
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@281d4b8e rejected from java.util.concurrent.ThreadPoolExecutor@6d1fd0e9[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at scala.concurrent.BatchingExecutor$Batch.processBatch$1(BatchingExecutor.scala:67)
	at scala.concurrent.BatchingExecutor$Batch.$anonfun$run$1(BatchingExecutor.scala:82)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:59)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:875)
	at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:110)
	at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:107)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:873)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
- SPARK-33084: Add jar support Ivy URI -- test exclude param when transitive=true
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@52d0f02b rejected from java.util.concurrent.ThreadPoolExecutor@17d27415[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at scala.concurrent.BatchingExecutor$Batch.processBatch$1(BatchingExecutor.scala:67)
	at scala.concurrent.BatchingExecutor$Batch.$anonfun$run$1(BatchingExecutor.scala:82)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:59)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:875)
	at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:110)
	at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:107)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:873)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
- SPARK-33084: Add jar support Ivy URI -- test different version
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@5059adc0 rejected from java.util.concurrent.ThreadPoolExecutor@155b39af[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at scala.concurrent.BatchingExecutor$Batch.processBatch$1(BatchingExecutor.scala:67)
	at scala.concurrent.BatchingExecutor$Batch.$anonfun$run$1(BatchingExecutor.scala:82)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:59)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:875)
	at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:110)
	at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:107)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:873)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@74a09384 rejected from java.util.concurrent.ThreadPoolExecutor@31a20374[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at scala.concurrent.BatchingExecutor$Batch.processBatch$1(BatchingExecutor.scala:67)
	at scala.concurrent.BatchingExecutor$Batch.$anonfun$run$1(BatchingExecutor.scala:82)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:59)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:875)
	at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:110)
	at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:107)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:873)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
- SPARK-33084: Add jar support Ivy URI -- test invalid param
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@7437537d rejected from java.util.concurrent.ThreadPoolExecutor@3135c39f[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at scala.concurrent.BatchingExecutor$Batch.processBatch$1(BatchingExecutor.scala:67)
	at scala.concurrent.BatchingExecutor$Batch.$anonfun$run$1(BatchingExecutor.scala:82)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:59)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:875)
	at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:110)
	at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:107)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:873)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
- SPARK-33084: Add jar support Ivy URI -- test multiple transitive params
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@6c4b83a7 rejected from java.util.concurrent.ThreadPoolExecutor@2949104a[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at scala.concurrent.BatchingExecutor$Batch.processBatch$1(BatchingExecutor.scala:67)
	at scala.concurrent.BatchingExecutor$Batch.$anonfun$run$1(BatchingExecutor.scala:82)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:59)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:875)
	at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:110)
	at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:107)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:873)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
- SPARK-33084: Add jar support Ivy URI -- test param key case sensitive
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@3815d716 rejected from java.util.concurrent.ThreadPoolExecutor@7d175e70[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at scala.concurrent.BatchingExecutor$Batch.processBatch$1(BatchingExecutor.scala:67)
	at scala.concurrent.BatchingExecutor$Batch.$anonfun$run$1(BatchingExecutor.scala:82)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:59)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:875)
	at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:110)
	at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:107)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:873)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
java.util.concurrent.RejectedExecutionException: Task scala.concurrent.impl.CallbackRunnable@47fbb6a4 rejected from java.util.concurrent.ThreadPoolExecutor@2f9203f[Shutting down, pool size = 1, active threads = 1, queued tasks = 0, completed tasks = 0]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
	at java.util.concurrent.Executors$DelegatedExecutorService.execute(Executors.java:668)
	at scala.concurrent.impl.ExecutionContextImpl$$anon$4.execute(ExecutionContextImpl.scala:138)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at scala.concurrent.BatchingExecutor$Batch.processBatch$1(BatchingExecutor.scala:67)
	at scala.concurrent.BatchingExecutor$Batch.$anonfun$run$1(BatchingExecutor.scala:82)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
	at scala.concurrent.BatchingExecutor$Batch.run(BatchingExecutor.scala:59)
	at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:875)
	at scala.concurrent.BatchingExecutor.execute(BatchingExecutor.scala:110)
	at scala.concurrent.BatchingExecutor.execute$(BatchingExecutor.scala:107)
	at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:873)
	at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
	at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
	at scala.concurrent.Promise.complete(Promise.scala:53)
	at scala.concurrent.Promise.complete$(Promise.scala:52)
	at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:187)
	at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
- SPARK-33084: Add jar support Ivy URI -- test transitive value case insensitive
- SPARK-34346: hadoop configuration priority for spark/hive/hadoop configs
- SPARK-34225: addFile/addJar shouldn't further encode URI if a URI form string is passed
- SPARK-35383: Fill missing S3A magic committer configs if needed
- SPARK-35691: addFile/addJar/addDirectory should put CanonicalFile
- SPARK-36772: Store application attemptId in BlockStoreClient for push based shuffle
SourceConfigSuite:
- Test configuration for adding static sources registration
- Test configuration for skipping static sources registration
- Test configuration for adding ExecutorMetrics source registration
- Test configuration for skipping ExecutorMetrics source registration
- SPARK-31711: Test executor source registration in local mode
DiskBlockObjectWriterSuite:
- verify write metrics
- verify write metrics on revert
- Reopening a closed block writer
- calling revertPartialWritesAndClose() on a partial write should truncate up to commit
- calling revertPartialWritesAndClose() after commit() should have no effect
- calling revertPartialWritesAndClose() on a closed block writer should have no effect
- commit() and close() should be idempotent
- revertPartialWritesAndClose() should be idempotent
- commit() and close() without ever opening or writing
ThreadingSuite:
- accessing SparkContext form a different thread
- accessing SparkContext form multiple threads
- accessing multi-threaded SparkContext form multiple threads
- parallel job execution
- set local properties in different thread
- set and get local properties in parent-children thread
- mutation in parent local property does not affect child (SPARK-10563)
PythonRDDSuite:
- Writing large strings to the worker
- Handle nulls gracefully
- python server error handling
- mapToConf should not load defaults
- SparkContext's hadoop configuration should be respected in PythonRDD
ShuffleDependencySuite:
- key, value, and combiner classes correct in shuffle dependency without aggregation
- key, value, and combiner classes available in shuffle dependency with aggregation
- combineByKey null combiner class tag handled correctly
HadoopFSUtilsSuite:
- HadoopFSUtils - file filtering
ResourceInformationSuite:
- ResourceInformation.parseJson for valid JSON
- ResourceInformation.equals/hashCode
JVMObjectTrackerSuite:
- JVMObjectId does not take null IDs
- JVMObjectTracker
ClosureCleanerSuite2:
- clean basic serializable closures
- clean basic non-serializable closures
- clean basic nested serializable closures
- clean basic nested non-serializable closures
- clean complicated nested serializable closures
- clean complicated nested non-serializable closures
PartitionPruningRDDSuite:
- Pruned Partitions inherit locality prefs correctly
- Pruned Partitions can be unioned 
SimpleDateParamSuite:
- date parsing
StorageSuite:
- storage status add non-RDD blocks
- storage status add RDD blocks
- storage status getBlock
- storage status memUsed, diskUsed, externalBlockStoreUsed
- storage memUsed, diskUsed with on-heap and off-heap blocks
- old SparkListenerBlockManagerAdded event compatible
CausedBySuite:
- For an error without a cause, should return the error
- For an error with a cause, should return the cause of the error
- For an error with a cause that itself has a cause, return the root cause
JavaUtilsSuite:
- containsKey implementation without iteratively entrySet call
EventLogFileCompactorSuite:
- No event log files
- No compact file, less origin files available than max files to retain
- No compact file, more origin files available than max files to retain
- compact file exists, less origin files available than max files to retain
- compact file exists, number of origin files are same as max files to retain
- compact file exists, more origin files available than max files to retain
- events for finished job are dropped in new compact file
- Don't compact file if score is lower than threshold
- rewrite files with test filters
ShuffleBlockPusherSuite:
- A batch of blocks is limited by maxBlocksBatchSize
- Large blocks are excluded in the preparation
- Number of blocks in a push request are limited by maxBlocksInFlightPerAddress 
- Basic block push
- Large blocks are skipped for push
- Number of blocks in flight per address are limited by maxBlocksInFlightPerAddress
- Hit maxBlocksInFlightPerAddress limit so that the blocks are deferred
- Number of shuffle blocks grouped in a single push request is limited by maxBlockBatchSize
- Error retries
- Error logging
- Blocks are continued to push even when a block push fails with collision exception
- More blocks are not pushed when a block push fails with too late exception
- Connect exceptions remove all the push requests for that host
- SPARK-36255: FileNotFoundException stops the push
FileAppenderSuite:
- basic file appender
- SPARK-35027: basic file appender - close stream
- rolling file appender - time-based rolling
- rolling file appender - time-based rolling (compressed)
- SPARK-35027: rolling file appender - time-based rolling close stream
- SPARK-35027: rolling file appender - size-based rolling close stream
- rolling file appender - size-based rolling
- rolling file appender - size-based rolling (compressed)
- rolling file appender - cleaning
- file appender selection
- file appender async close stream abruptly
- file appender async close stream gracefully
BypassMergeSortShuffleWriterSuite:
- write empty iterator
- write with some empty partitions - transferTo true
- write with some empty partitions - transferTo false
- only generate temp shuffle file for non-empty partition
- cleanup of intermediate files after errors
- write checksum file
DistributedSuite:
- task throws not serializable exception
- local-cluster format
- simple groupByKey
- groupByKey where map output sizes exceed maxMbInFlight
- accumulators
- broadcast variables
- repeatedly failing task
- repeatedly failing task that crashes JVM
- repeatedly failing task that crashes JVM with a zero exit code (SPARK-16925)
- caching (encryption = off)
- caching (encryption = on)
- caching on disk (encryption = off)
- caching on disk (encryption = on)
- caching in memory, replicated (encryption = off)
- caching in memory, replicated (encryption = off) (with replication as stream)
- caching in memory, replicated (encryption = on)
- caching in memory, replicated (encryption = on) (with replication as stream)
- caching in memory, serialized, replicated (encryption = off)
- caching in memory, serialized, replicated (encryption = off) (with replication as stream)
java.lang.NullPointerException
	at org.apache.spark.deploy.worker.Worker.$anonfun$syncExecutorStateWithMaster$1(Worker.scala:800)
	at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
- caching in memory, serialized, replicated (encryption = on)
- caching in memory, serialized, replicated (encryption = on) (with replication as stream)
- caching on disk, replicated 2 (encryption = off)
- caching on disk, replicated 2 (encryption = off) (with replication as stream)
- caching on disk, replicated 2 (encryption = on)
- caching on disk, replicated 2 (encryption = on) (with replication as stream)
- caching on disk, replicated 3 (encryption = off)
- caching on disk, replicated 3 (encryption = off) (with replication as stream)
- caching on disk, replicated 3 (encryption = on)
- caching on disk, replicated 3 (encryption = on) (with replication as stream)
- caching in memory and disk, replicated (encryption = off)
- caching in memory and disk, replicated (encryption = off) (with replication as stream)
- caching in memory and disk, replicated (encryption = on)
- caching in memory and disk, replicated (encryption = on) (with replication as stream)
- caching in memory and disk, serialized, replicated (encryption = off)
- caching in memory and disk, serialized, replicated (encryption = off) (with replication as stream)
- caching in memory and disk, serialized, replicated (encryption = on)
- caching in memory and disk, serialized, replicated (encryption = on) (with replication as stream)
- compute without caching when no partitions fit in memory
- compute when only some partitions fit in memory
- passing environment variables to cluster
- recover from node failures
- recover from repeated node failures during shuffle-map
- recover from repeated node failures during shuffle-reduce
- recover from node failures with replication
- unpersist RDDs
- reference partitions inside a task
FutureActionSuite:
- simple async action
- complex async action
LocalCheckpointSuite:
- transform storage level
- basic lineage truncation
- basic lineage truncation - caching before checkpointing
- basic lineage truncation - caching after checkpointing
- indirect lineage truncation
- indirect lineage truncation - caching before checkpointing
- indirect lineage truncation - caching after checkpointing
- checkpoint without draining iterator
- checkpoint without draining iterator - caching before checkpointing
- checkpoint without draining iterator - caching after checkpointing
- checkpoint blocks exist
- checkpoint blocks exist - caching before checkpointing
- checkpoint blocks exist - caching after checkpointing
- missing checkpoint block fails with informative message
SingleEventLogFileWriterSuite:
- create EventLogFileWriter with enable/disable rolling
- initialize, write, stop - with codec None
- initialize, write, stop - with codec Some(lz4)
- initialize, write, stop - with codec Some(lzf)
- initialize, write, stop - with codec Some(snappy)
- initialize, write, stop - with codec Some(zstd)
- Use the defalut value of spark.eventLog.compression.codec
- Log overwriting
- Event log name
WorkerWatcherSuite:
- WorkerWatcher shuts down on valid disassociation
- WorkerWatcher stays alive on invalid disassociation
ExternalShuffleServiceDbSuite:
- Recover shuffle data with spark.shuffle.service.db.enabled=true after shuffle service restart
- Can't recover shuffle data with spark.shuffle.service.db.enabled=false after shuffle service restart
CoarseGrainedExecutorBackendSuite:
- parsing no resources
- parsing one resource
- parsing multiple resources resource profile
- parsing multiple resources
- error checking parsing resources and executor and task configs
- executor resource found less than required resource profile
- executor resource found less than required
- use resource discovery
- use resource discovery and allocated file option with resource profile
- use resource discovery and allocated file option
- track allocated resources by taskId
- SPARK-24203 when bindAddress is not set, it defaults to hostname
- SPARK-24203 when bindAddress is different, it does not default to hostname
NettyRpcEnvSuite:
- send a message locally
- send a message remotely
- send a RpcEndpointRef
- ask a message locally
- ask a message remotely
- ask a message timeout
- ask a message abort
- onStart and onStop
- onError: error in onStart
- onError: error in onStop
- onError: error in receive
- self: call in onStart
- self: call in receive
- self: call in onStop
- call receive in sequence
- stop(RpcEndpointRef) reentrant
- sendWithReply
- sendWithReply: remotely
- sendWithReply: error
- sendWithReply: remotely error
- network events in sever RpcEnv when another RpcEnv is in server mode
- network events in sever RpcEnv when another RpcEnv is in client mode
- network events in client RpcEnv when another RpcEnv is in server mode
- sendWithReply: unserializable error
- port conflict
- send with authentication
- send with SASL encryption
- send with AES encryption
- ask with authentication
- ask with SASL encryption
- ask with AES encryption
- construct RpcTimeout with conf property
- ask a message timeout on Future using RpcTimeout
- file server
- SPARK-14699: RpcEnv.shutdown should not fire onDisconnected events
- isolated endpoints
- non-existent endpoint
- advertise address different from bind address
- RequestMessage serialization
Exception in thread "dispatcher-event-loop-1" java.lang.StackOverflowError
	at org.apache.spark.rpc.netty.NettyRpcEnvSuite$$anon$1$$anonfun$receiveAndReply$1.applyOrElse(NettyRpcEnvSuite.scala:114)
	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Exception in thread "dispatcher-event-loop-0" java.lang.StackOverflowError
	at org.apache.spark.rpc.netty.NettyRpcEnvSuite$$anon$1$$anonfun$receiveAndReply$1.applyOrElse(NettyRpcEnvSuite.scala:114)
	at org.apache.spark.rpc.netty.Inbox.$anonfun$process$1(Inbox.scala:103)
	at org.apache.spark.rpc.netty.Inbox.safelyCall(Inbox.scala:213)
	at org.apache.spark.rpc.netty.Inbox.process(Inbox.scala:100)
	at org.apache.spark.rpc.netty.MessageLoop.org$apache$spark$rpc$netty$MessageLoop$$receiveLoop(MessageLoop.scala:75)
	at org.apache.spark.rpc.netty.MessageLoop$$anon$1.run(MessageLoop.scala:41)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
- StackOverflowError should be sent back and Dispatcher should survive
- SPARK-31233: ask rpcEndpointRef in client mode timeout
ResourceDiscoveryPluginSuite:
- plugin initialization in non-local mode fpga and gpu
- single plugin gpu
- multiple plugins with one empty
- empty plugin fallback to discovery script
PagedTableSuite:
- pageNavigation
- pageNavigation with different id
ClientSuite:
- correctly validates driver jar URL's
BlockIdSuite:
- test-bad-deserialization
- rdd
- shuffle
- shuffle batch
- shuffle data
- shuffle index
- shuffle merged data
- shuffle merged index
- shuffle merged meta
- shuffle merged block
- broadcast
- taskresult
- stream
- temp local
- temp shuffle
- test
- merged shuffle id
- shuffle chunk
PrometheusServletSuite:
- register metrics
- normalize key
PartiallyUnrolledIteratorSuite:
- join two iterators
KryoSerializerResizableOutputSuite:
- kryo without resizable output buffer should fail on large array
- kryo with resizable output buffer should succeed on large array
BlockManagerReplicationSuite:
- get peers with addition and removal of block managers
- block replication - 2x replication
- block replication - 3x replication
- block replication - mixed between 1x to 5x
- block replication - off-heap
- block replication - 2x replication without peers
- block replication - replication failures
- test block replication failures when block is received by remote block manager but putBlock fails (stream = false)
- test block replication failures when block is received by remote block manager but putBlock fails (stream = true)
- block replication - addition and deletion of block managers
BarrierTaskContextSuite:
- global sync by barrier() call
- share messages with allGather() call
- throw exception if we attempt to synchronize with different blocking calls
- successively sync with allGather and barrier
- support multiple barrier() call within a single task
- throw exception on barrier() call timeout
- throw exception if barrier() call doesn't happen on every task
- throw exception if the number of barrier() calls are not the same on every task
- barrier task killed, no interrupt
- barrier task killed, interrupt
- SPARK-24818: disable legacy delay scheduling for barrier stage
- SPARK-34069: Kill barrier tasks should respect SPARK_JOB_INTERRUPT_ON_CANCEL
BlockStoreShuffleReaderSuite:
- read() releases resources on completion
WholeTextFileRecordReaderSuite:
- Correctness of WholeTextFileRecordReader.
- Correctness of WholeTextFileRecordReader with GzipCodec.
SubmitRestProtocolSuite:
- validate
- request to and from JSON
- response to and from JSON
- CreateSubmissionRequest
- CreateSubmissionResponse
- KillSubmissionResponse
- SubmissionStatusResponse
- ErrorResponse
ChromeUIHistoryServerSuite:
FlatmapIteratorSuite:
- Flatmap Iterator to Disk
- Flatmap Iterator to Memory
- Serializer Reset
SizeEstimatorSuite:
- simple classes
- primitive wrapper objects
- class field blocks rounding
- strings
- primitive arrays
- object arrays
- 32-bit arch
- 64-bit arch with no compressed oops
- class field blocks rounding on 64-bit VM without useCompressedOops
- check 64-bit detection for s390x arch
- SizeEstimation can provide the estimated size
DependencyUtilsSuite:
- SPARK-33084: Add jar support Ivy URI -- test invalid ivy uri
ElementTrackingStoreSuite:
- asynchronous tracking single-fire
- tracking for multiple types
FallbackStorageSuite:
- fallback storage APIs - copy/exists
- SPARK-34142: fallback storage API - cleanUp
- migrate shuffle data to fallback storage
- Upload from all decommissioned executors
- Upload multi stages
- lz4 - Newly added executors should access old data from remote storage
- lzf - Newly added executors should access old data from remote storage
- snappy - Newly added executors should access old data from remote storage
- zstd - Newly added executors should access old data from remote storage
WorkerDecommissionSuite:
- verify task with no decommissioning works as expected
- verify a running task with all workers decommissioned succeeds
PipedRDDSuite:
- basic pipe
- basic pipe with tokenization
- failure in iterating over pipe input
- stdin writer thread should be exited when task is finished
- advanced pipe
- pipe with empty partition
- pipe with env variable
- pipe with process which cannot be launched due to bad command
cat: nonexistent_file: No such file or directory
cat: nonexistent_file: No such file or directory
- pipe with process which is launched but fails with non-zero exit status
- basic pipe with separate working directory
- test pipe exports map_input_file
- test pipe exports mapreduce_map_input_file
AccumulatorV2Suite:
- LongAccumulator add/avg/sum/count/isZero
- DoubleAccumulator add/avg/sum/count/isZero
- ListAccumulator
InboxSuite:
- post
- post: with reply
- post: multiple threads
- post: Associated
- post: Disassociated
- post: AssociationError
- SPARK-32738: should reduce the number of active threads when fatal error happens
MasterWebUISuite:
- kill application
- kill driver
- Kill one host
- Kill multiple hosts
RadixSortSuite:
- radix support for unsigned binary data asc nulls first
- sort unsigned binary data asc nulls first
- sort key prefix unsigned binary data asc nulls first
- fuzz test unsigned binary data asc nulls first with random bitmasks
- fuzz test key prefix unsigned binary data asc nulls first with random bitmasks
- radix support for unsigned binary data asc nulls last
- sort unsigned binary data asc nulls last
- sort key prefix unsigned binary data asc nulls last
- fuzz test unsigned binary data asc nulls last with random bitmasks
- fuzz test key prefix unsigned binary data asc nulls last with random bitmasks
- radix support for unsigned binary data desc nulls last
- sort unsigned binary data desc nulls last
- sort key prefix unsigned binary data desc nulls last
- fuzz test unsigned binary data desc nulls last with random bitmasks
- fuzz test key prefix unsigned binary data desc nulls last with random bitmasks
- radix support for unsigned binary data desc nulls first
- sort unsigned binary data desc nulls first
- sort key prefix unsigned binary data desc nulls first
- fuzz test unsigned binary data desc nulls first with random bitmasks
- fuzz test key prefix unsigned binary data desc nulls first with random bitmasks
- radix support for twos complement asc nulls first
- sort twos complement asc nulls first
- sort key prefix twos complement asc nulls first
- fuzz test twos complement asc nulls first with random bitmasks
- fuzz test key prefix twos complement asc nulls first with random bitmasks
- radix support for twos complement asc nulls last
- sort twos complement asc nulls last
- sort key prefix twos complement asc nulls last
- fuzz test twos complement asc nulls last with random bitmasks
- fuzz test key prefix twos complement asc nulls last with random bitmasks
- radix support for twos complement desc nulls last
- sort twos complement desc nulls last
- sort key prefix twos complement desc nulls last
- fuzz test twos complement desc nulls last with random bitmasks
- fuzz test key prefix twos complement desc nulls last with random bitmasks
- radix support for twos complement desc nulls first
- sort twos complement desc nulls first
- sort key prefix twos complement desc nulls first
- fuzz test twos complement desc nulls first with random bitmasks
- fuzz test key prefix twos complement desc nulls first with random bitmasks
- radix support for binary data partial
- sort binary data partial
- sort key prefix binary data partial
- fuzz test binary data partial with random bitmasks
- fuzz test key prefix binary data partial with random bitmasks
DiskBlockManagerSuite:
- basic block creation
- enumerating blocks
- SPARK-22227: non-block files are skipped
- should still create merge directories if one already exists under a local dir
- Test dir creation with permission 770
- Encode merged directory name and attemptId in shuffleManager field
WorkerArgumentsTest:
- Memory can't be set to 0 when cmd line args leave off M or G
- Memory can't be set to 0 when SPARK_WORKER_MEMORY env property leaves off M or G
- Memory correctly set when SPARK_WORKER_MEMORY env property appends G
- Memory correctly set from args with M appended to memory value
StatusTrackerSuite:
- basic status API usage
- getJobIdsForGroup()
- getJobIdsForGroup() with takeAsync()
- getJobIdsForGroup() with takeAsync() across multiple partitions
PrimitiveKeyOpenHashMapSuite:
- size for specialized, primitive key, value (int, int)
- initialization
- basic operations
- null values
- changeValue
- inserting in capacity-1 map
- contains
HistoryServerMemoryManagerSuite:
- lease and release memory
ApplicationCacheSuite:
- Completed UI get
- Test that if an attempt ID is set, it must be used in lookups
- Incomplete apps refreshed
- Large Scale Application Eviction
- Attempts are Evicted
- redirect includes query params
StandaloneDynamicAllocationSuite:
- dynamic allocation default behavior
- dynamic allocation with max cores <= cores per worker
- dynamic allocation with max cores > cores per worker
- dynamic allocation with cores per executor
- dynamic allocation with cores per executor AND max cores
- kill the same executor twice (SPARK-9795)
- the pending replacement executors should not be lost (SPARK-10515)
- disable force kill for busy executors (SPARK-9552)
- initial executor limit
- kill all executors on localhost
- executor registration on a excluded host must fail
ResourceUtilsSuite:
- ResourceID
- Resource discoverer no addresses errors
- Resource discoverer amount 0
- Resource discoverer multiple resource types
- get from resources file and discover the remaining
- get from resources file and discover resource profile remaining
- list resource ids
- parse resource request
- Resource discoverer multiple gpus on driver
- Resource discoverer script returns mismatched name
- Resource discoverer with invalid class
- Resource discoverer script returns invalid format
- Resource discoverer script doesn't exist
- gpu's specified but not a discovery script
ExternalClusterManagerSuite:
- launch of backend and scheduler
LogUrlsStandaloneSuite:
- verify that correct log urls get propagated from workers
- verify that log urls reflect SPARK_PUBLIC_DNS (SPARK-6175)
AppClientSuite:
- interface methods of AppClient using local Master
- request from AppClient before initialized with master
InternalAccumulatorSuite:
- internal accumulators in TaskContext
- internal accumulators in a stage
- internal accumulators in multiple stages
- internal accumulators in resubmitted stages
- internal accumulators are registered for cleanups
JsonProtocolSuite:
- SparkListenerEvent
- Dependent Classes
- ExceptionFailure backward compatibility: full stack trace
- StageInfo backward compatibility (details, accumulables)
- StageInfo resourceProfileId
- InputMetrics backward compatibility
- Input/Output records backwards compatibility
- Shuffle Read/Write records backwards compatibility
- OutputMetrics backward compatibility
- BlockManager events backward compatibility
- FetchFailed backwards compatibility
- SPARK-32124: FetchFailed Map Index backwards compatibility
- ShuffleReadMetrics: Local bytes read backwards compatibility
- SparkListenerApplicationStart backwards compatibility
- ExecutorLostFailure backward compatibility
- SparkListenerJobStart backward compatibility
- SparkListenerJobStart and SparkListenerJobEnd backward compatibility
- RDDInfo backward compatibility (scope, parent IDs, callsite)
- StageInfo backward compatibility (parent IDs)
- TaskCommitDenied backward compatibility
- AccumulableInfo backward compatibility
- ExceptionFailure backward compatibility: accumulator updates
- ExecutorMetricsUpdate backward compatibility: executor metrics update
- executorMetricsFromJson backward compatibility: handle missing metrics
- AccumulableInfo value de/serialization
- SPARK-31923: unexpected value type of internal accumulator
- SPARK-30936: forwards compatibility - ignore unknown fields
- SPARK-30936: backwards compatibility - set default values for missing fields
BroadcastSuite:
- Using TorrentBroadcast locally
- Accessing TorrentBroadcast variables from multiple threads
- Accessing TorrentBroadcast variables in a local cluster (encryption = off)
- Accessing TorrentBroadcast variables in a local cluster (encryption = on)
- TorrentBroadcast's blockifyObject and unblockifyObject are inverses
- Test Lazy Broadcast variables with TorrentBroadcast
- Unpersisting TorrentBroadcast on executors only in local mode
- Unpersisting TorrentBroadcast on executors and driver in local mode
- Unpersisting TorrentBroadcast on executors only in distributed mode
- Unpersisting TorrentBroadcast on executors and driver in distributed mode
- Using broadcast after destroy prints callsite
- Broadcast variables cannot be created after SparkContext is stopped (SPARK-5065)
- Forbid broadcasting RDD directly
- Cache broadcast to disk (encryption = off)
- Cache broadcast to disk (encryption = on)
- One broadcast value instance per executor
- One broadcast value instance per executor when memory is constrained
SerializerPropertiesSuite:
- JavaSerializer does not support relocation
- KryoSerializer supports relocation when auto-reset is enabled
- KryoSerializer does not support relocation when auto-reset is disabled
EventLoopSuite:
- EventLoop
- EventLoop: start and stop
- EventLoop: onError
- EventLoop: error thrown from onError should not crash the event thread
- EventLoop: calling stop multiple times should only call onStop once
- EventLoop: post event in multiple threads
- EventLoop: onReceive swallows InterruptException
- EventLoop: stop in eventThread
- EventLoop: stop() in onStart should call onStop
- EventLoop: stop() in onReceive should call onStop
- EventLoop: stop() in onError should call onStop
SparkThrowableSuite:
- No duplicate error classes
- Error classes are correctly formatted
- SQLSTATE invariants
- Message format invariants
- Round trip
- Check if error class is missing
- Check if message parameters match message format
- Error message is formatted
- Try catching legacy SparkError
- Try catching SparkError with error class
- Try catching internal SparkError
ZippedPartitionsSuite:
- print sizes
DiskStoreSuite:
- reads of memory-mapped and non memory-mapped files are equivalent
- block size tracking
- blocks larger than 2gb
- block data encryption
LiveEntitySuite:
- partition seq
- Only show few elements of CollectionAccumulator when converting to v1.AccumulableInfo
ExecutorSummarySuite:
- Check ExecutorSummary serialize and deserialize with empty peakMemoryMetrics
DoubleRDDSuite:
- sum
- WorksOnEmpty
- WorksWithOutOfRangeWithOneBucket
- WorksInRangeWithOneBucket
- WorksInRangeWithOneBucketExactMatch
- WorksWithOutOfRangeWithTwoBuckets
- WorksWithOutOfRangeWithTwoUnEvenBuckets
- WorksInRangeWithTwoBuckets
- WorksInRangeWithTwoBucketsAndNaN
- WorksInRangeWithTwoUnevenBuckets
- WorksMixedRangeWithTwoUnevenBuckets
- WorksMixedRangeWithFourUnevenBuckets
- WorksMixedRangeWithUnevenBucketsAndNaN
- WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRange
- WorksMixedRangeWithUnevenBucketsAndNaNAndNaNRangeAndInfinity
- WorksWithOutOfRangeWithInfiniteBuckets
- ThrowsExceptionOnInvalidBucketArray
- WorksWithoutBucketsBasic
- WorksWithoutBucketsBasicSingleElement
- WorksWithoutBucketsBasicNoRange
- WorksWithoutBucketsBasicTwo
- WorksWithDoubleValuesAtMinMax
- WorksWithoutBucketsWithMoreRequestedThanElements
- WorksWithoutBucketsForLargerDatasets
- WorksWithoutBucketsWithNonIntegralBucketEdges
- WorksWithHugeRange
- ThrowsExceptionOnInvalidRDDs
AppStatusStoreSuite:
- quantile calculation: 1 task
- quantile calculation: few tasks
- quantile calculation: more tasks
- quantile calculation: lots of tasks
- quantile calculation: custom quantiles
- quantile cache
- SPARK-26260: summary should contain only successful tasks' metrics (store = disk)
- SPARK-26260: summary should contain only successful tasks' metrics (store = in memory)
- SPARK-26260: summary should contain only successful tasks' metrics (store = in memory live)
ExternalSorterSpillSuite:
- SPARK-36242 Spill File should not exists if writer close fails
SorterSuite:
- equivalent to Arrays.sort
- KVArraySorter
- SPARK-5984 TimSort bug
- java.lang.ArrayIndexOutOfBoundsException in TimSort
- Sorter benchmark for key-value pairs !!! IGNORED !!!
- Sorter benchmark for primitive int array !!! IGNORED !!!
MedianHeapSuite:
- If no numbers in MedianHeap, NoSuchElementException is thrown.
- Median should be correct when size of MedianHeap is even
- Median should be correct when size of MedianHeap is odd
- Median should be correct though there are duplicated numbers inside.
- Median should be correct when input data is skewed.
PoolSuite:
- FIFO Scheduler Test
- Fair Scheduler Test
- Nested Pool Test
- SPARK-17663: FairSchedulableBuilder sets default values for blank or invalid datas
- FIFO scheduler uses root pool and not spark.scheduler.pool property
- FAIR Scheduler uses default pool when spark.scheduler.pool property is not set
- FAIR Scheduler creates a new pool when spark.scheduler.pool property points to a non-existent pool
- Pool should throw IllegalArgumentException when schedulingMode is not supported
- Fair Scheduler should build fair scheduler when valid spark.scheduler.allocation.file property is set
- Fair Scheduler should use default file(fairscheduler.xml) if it exists in classpath and spark.scheduler.allocation.file property is not set
- Fair Scheduler should throw FileNotFoundException when invalid spark.scheduler.allocation.file property is set
- SPARK-35083: Support remote scheduler pool file !!! CANCELED !!!
  2 was not greater than or equal to 3, and 2 equaled 2, but 7 was not greater than or equal to 9 (PoolSuite.scala:349)
DistributionSuite:
- summary
ContextCleanerSuite:
- cleanup RDD
- cleanup shuffle
- cleanup broadcast
- automatically cleanup RDD
- automatically cleanup shuffle
- automatically cleanup broadcast
- automatically cleanup normal checkpoint
- automatically clean up local checkpoint
- automatically cleanup RDD + shuffle + broadcast
- automatically cleanup RDD + shuffle + broadcast in distributed mode
JsonProtocolSuite:
- writeApplicationInfo
- writeWorkerInfo
- writeApplicationDescription
- writeExecutorRunner
- writeDriverInfo
- writeMasterState
- writeWorkerState
HeartbeatReceiverSuite:
- task scheduler is set correctly
- normal heartbeat
- reregister if scheduler is not ready yet
- reregister if heartbeat from unregistered executor
- reregister if heartbeat from removed executor
- expire dead hosts
- expire dead hosts should kill executors with replacement (SPARK-8119)
- SPARK-34273: Do not reregister BlockManager when SparkContext is stopped
AccumulatorSourceSuite:
- that that accumulators register against the metric system's register
- the accumulators value property is checked when the gauge's value is requested
- the double accumulators value property is checked when the gauge's value is requested
UninterruptibleThreadRunnerSuite:
- runUninterruptibly should switch to UninterruptibleThread
- runUninterruptibly should not add new UninterruptibleThread
ExecutorResourceInfoSuite:
- Track Executor Resource information
- Don't allow acquire address that is not available
- Don't allow acquire address that doesn't exist
- Don't allow release address that is not assigned
- Don't allow release address that doesn't exist
- Ensure that we can acquire the same fractions of a resource from an executor
ReplayListenerSuite:
- Simple replay
- Replay compressed inprogress log file succeeding on partial read
- Replay incompatible event log
- End-to-end replay
- End-to-end replay with compression
UIUtilsSuite:
- makeDescription(plainText = false)
- makeDescription(plainText = true)
- SPARK-11906: Progress bar should not overflow because of speculative tasks
- decodeURLParameter (SPARK-12708: Sorting task error in Stages Page when yarn mode.)
- listingTable with tooltips
- listingTable without tooltips
MutableURLClassLoaderSuite:
- child first
- parent first
- child first can fall back
- child first can fail
- default JDK classloader get resources
- parent first get resources
- child first get resources
- driver sets context class loader in local mode
CheckpointSuite:
- basic checkpointing [reliable checkpoint]
- basic checkpointing [local checkpoint]
- checkpointing partitioners [reliable checkpoint]
- RDDs with one-to-one dependencies [reliable checkpoint]
- RDDs with one-to-one dependencies [local checkpoint]
- ParallelCollectionRDD [reliable checkpoint]
- ParallelCollectionRDD [local checkpoint]
- BlockRDD [reliable checkpoint]
- BlockRDD [local checkpoint]
- ShuffleRDD [reliable checkpoint]
- ShuffleRDD [local checkpoint]
- UnionRDD [reliable checkpoint]
- UnionRDD [local checkpoint]
- CartesianRDD [reliable checkpoint]
- CartesianRDD [local checkpoint]
- CoalescedRDD [reliable checkpoint]
- CoalescedRDD [local checkpoint]
- CoGroupedRDD [reliable checkpoint]
- CoGroupedRDD [local checkpoint]
- ZippedPartitionsRDD [reliable checkpoint]
- ZippedPartitionsRDD [local checkpoint]
- PartitionerAwareUnionRDD [reliable checkpoint]
- PartitionerAwareUnionRDD [local checkpoint]
- CheckpointRDD with zero partitions [reliable checkpoint]
- CheckpointRDD with zero partitions [local checkpoint]
- checkpointAllMarkedAncestors [reliable checkpoint]
- checkpointAllMarkedAncestors [local checkpoint]
HealthTrackerSuite:
- executors can be excluded with only a few failures per stage
- executors aren't excluded as a result of tasks in failed task sets
- stage exclude updates correctly on stage success
- stage exclude updates correctly on stage failure
- excluded executors and nodes get recovered with time
- exclude can handle lost executors
- task failures expire with time
- task failure timeout works as expected for long-running tasksets
- only exclude nodes for the application when enough executors have failed on that specific host
- exclude still respects legacy configs
- check exclude configuration invariants
- excluding kills executors, configured by EXCLUDE_ON_FAILURE_KILL_ENABLED
- excluding decommission and kills executors when enabled
- fetch failure excluding kills executors, configured by EXCLUDE_ON_FAILURE_KILL_ENABLED
AppStatusUtilsSuite:
- schedulerDelay
WorkerDecommissionExtendedSuite:
- Worker decommission and executor idle timeout
- Decommission 2 executors from 3 executors in total
IndexShuffleBlockResolverSuite:
- commit shuffle files multiple times
- SPARK-33198 getMigrationBlocks should not fail at missing files
- getMergedBlockData should return expected FileSegmentManagedBuffer list
- getMergedBlockMeta should return expected MergedBlockMeta
- write checksum file
TaskResultGetterSuite:
- handling results smaller than max RPC message size
- handling results larger than max RPC message size
- handling total size of results larger than maxResultSize
- task retried if result missing from block manager
- failed task deserialized with the correct classloader (SPARK-11195)
- task result size is set on the driver, not the executors
Exception in thread "task-result-getter-0" java.lang.NoClassDefFoundError
	at org.apache.spark.scheduler.UndeserializableException.readObject(TaskResultGetterSuite.scala:305)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2296)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
	at org.apache.spark.ThrowableSerializationWrapper.readObject(TaskEndReason.scala:202)
	at sun.reflect.GeneratedMethodAccessor214.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2296)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2405)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2329)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2187)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:503)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:461)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:87)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:129)
	at org.apache.spark.scheduler.TaskResultGetter.$anonfun$enqueueFailedTask$2(TaskResultGetter.scala:141)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:2030)
	at org.apache.spark.scheduler.TaskResultGetter.$anonfun$enqueueFailedTask$1(TaskResultGetter.scala:137)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
- failed task is handled when error occurs deserializing the reason
TopologyAwareBlockReplicationPolicyBehavior:
- block replication - random block replication policy
- All peers in the same rack
- Peers in 2 racks
PersistenceEngineSuite:
- FileSystemPersistenceEngine
- ZooKeeperPersistenceEngine
MasterSuite:
- can use a custom recovery mode factory
- master correctly recover the application
- master/worker web ui available
- master/worker web ui available with reverseProxy
- master/worker web ui available behind front-end reverseProxy
- basic scheduling - spread out
- basic scheduling - no spread out
- basic scheduling with more memory - spread out
- basic scheduling with more memory - no spread out
- scheduling with max cores - spread out
- scheduling with max cores - no spread out
- scheduling with cores per executor - spread out
- scheduling with cores per executor - no spread out
- scheduling with cores per executor AND max cores - spread out
- scheduling with cores per executor AND max cores - no spread out
- scheduling with executor limit - spread out
- scheduling with executor limit - no spread out
- scheduling with executor limit AND max cores - spread out
- scheduling with executor limit AND max cores - no spread out
- scheduling with executor limit AND cores per executor - spread out
- scheduling with executor limit AND cores per executor - no spread out
- scheduling with executor limit AND cores per executor AND max cores - spread out
- scheduling with executor limit AND cores per executor AND max cores - no spread out
- SPARK-13604: Master should ask Worker kill unknown executors and drivers
- SPARK-20529: Master should reply the address received from worker
- SPARK-27510: Master should avoid dead loop while launching executor failed in Worker
- All workers on a host should be decommissioned
- No workers should be decommissioned with invalid host
- Only worker on host should be decommissioned
- SPARK-19900: there should be a corresponding driver for the app after relaunching driver
- assign/recycle resources to/from driver
- assign/recycle resources to/from executor
ExternalAppendOnlyMapSuite:
- single insert
- multiple insert
- insert with collision
- ordering
- null keys and values
- simple aggregator
- simple cogroup
- spilling
- spilling with compression
- spilling with compression and encryption
- ExternalAppendOnlyMap shouldn't fail when forced to spill before calling its iterator
- spilling with hash collisions
- spilling with many hash collisions
- spilling with hash collisions using the Int.MaxValue key
- spilling with null keys and values
- SPARK-22713 spill during iteration leaks internal map
- drop all references to the underlying map once the iterator is exhausted
- SPARK-22713 external aggregation updates peak execution memory
- force to spill for external aggregation
AdaptiveSchedulingSuite:
- simple use of submitMapStage
- fetching multiple map output partitions per reduce
- fetching all map output partitions in one reduce
- more reduce tasks than map output partitions
GenericAvroSerializerSuite:
- schema compression and decompression
- uses schema fingerprint to decrease message size
- caches previously seen schemas
- SPARK-34477: GenericData.Record serialization and deserialization
- SPARK-34477: GenericData.Record serialization and deserialization through KryoSerializer 
- SPARK-34477: GenericData.Array serialization and deserialization
- SPARK-34477: GenericData.Array serialization and deserialization through KryoSerializer 
- SPARK-34477: GenericData.EnumSymbol serialization and deserialization
- SPARK-34477: GenericData.EnumSymbol serialization and deserialization through KryoSerializer 
- SPARK-34477: GenericData.Fixed serialization and deserialization
- SPARK-34477: GenericData.Fixed serialization and deserialization through KryoSerializer 
AppStatusListenerSuite:
- environment info
- scheduler events
- storage events
- eviction of old data
- eviction should respect job completion time
- eviction should respect stage completion time
- skipped stages should be evicted before completed stages
- eviction should respect task completion time
- lastStageAttempt should fail when the stage doesn't exist
- SPARK-24415: update metrics for tasks that finish late
- Total tasks in the executor summary should match total stage tasks (live = true)
- Total tasks in the executor summary should match total stage tasks (live = false)
- driver logs
- executor metrics updates
- stage executor metrics
- storage information on executor lost/down
- clean up used memory when BlockManager added
- SPARK-34877 - check YarnAmInfoEvent is populated correctly
ImmutableBitSetSuite:
- basic get
- nextSetBit
- xor len(bitsetX) < len(bitsetY)
- xor len(bitsetX) > len(bitsetY)
- andNot len(bitsetX) < len(bitsetY)
- andNot len(bitsetX) > len(bitsetY)
- immutability
BoundedPriorityQueueSuite:
- BoundedPriorityQueue poll test
ProactiveClosureSerializationSuite:
- throws expected serialization exceptions on actions
- mapPartitions transformations throw proactive serialization exceptions
- map transformations throw proactive serialization exceptions
- filter transformations throw proactive serialization exceptions
- flatMap transformations throw proactive serialization exceptions
- mapPartitionsWithIndex transformations throw proactive serialization exceptions
Run completed in 32 minutes, 16 seconds.
Total number of tests run: 2899
Suites: completed 277, aborted 0
Tests: succeeded 2897, failed 2, canceled 2, ignored 8, pending 0
*** 2 TESTS FAILED ***
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project ML Local Library
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project GraphX
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Streaming
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Catalyst
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project SQL
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project ML Library
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] -----------------< org.apache.spark:spark-tools_2.12 >------------------
[INFO] Building Spark Project Tools 3.3.0-SNAPSHOT                      [10/31]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-versions) @ spark-tools_2.12 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-duplicate-dependencies) @ spark-tools_2.12 ---
[INFO] 
[INFO] --- mvn-scalafmt_2.12:1.0.4:format (default) @ spark-tools_2.12 ---
[WARNING] format.skipSources set, ignoring main directories
[WARNING] format.skipTestSources set, ignoring validateOnly directories
[WARNING] No sources specified, skipping formatting
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:add-source (eclipse-add-source) @ spark-tools_2.12 ---
[INFO] Add Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/tools/src/main/scala
[INFO] Add Test Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/tools/src/test/scala
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (default-cli) @ spark-tools_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.15/scala-library-2.12.15.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm/7.1/asm-7.1.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-tree/7.1/asm-tree-7.1.jar:/home/jenkins/.m2/repository/org/clapper/grizzled-scala_2.12/4.9.3/grizzled-scala_2.12-4.9.3.jar:/home/jenkins/.m2/repository/org/clapper/classutil_2.12/1.5.1/classutil_2.12-1.5.1.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.12/1.2.0/scala-xml_2.12-1.2.0.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-compiler/2.12.15/scala-compiler-2.12.15.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-analysis/7.1/asm-analysis-7.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.15/scala-reflect-2.12.15.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-commons/7.1/asm-commons-7.1.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.12/2.0.0/scala-collection-compat_2.12-2.0.0.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-util/7.1/asm-util-7.1.jar
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-tools_2.12 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ spark-tools_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/tools/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ spark-tools_2.12 ---
[INFO] Not compiling main sources
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:compile (scala-compile-first) @ spark-tools_2.12 ---
[INFO] Using incremental compilation using Mixed compile order
[INFO] Compiler bridge file: /home/jenkins/.sbt/1.0/zinc/org.scala-sbt/org.scala-sbt-compiler-bridge_2.12-1.3.1-bin_2.12.15__52.0-1.3.1_20191012T045515.jar
[INFO] compiler plugin: BasicArtifact(com.github.ghik,silencer-plugin_2.12.15,1.7.6,null)
[INFO] compile in 0.1 s
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (create-tmp-dir) @ spark-tools_2.12 ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ spark-tools_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/tools/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ spark-tools_2.12 ---
[INFO] Not compiling test sources
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (generate-test-classpath) @ spark-tools_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scalatest/scalatest-matchers-core_2.12/3.2.9/scalatest-matchers-core_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/scalactic/scalactic_2.12/3.2.9/scalactic_2.12-3.2.9.jar:/home/jenkins/.m2/repository/com/novocode/junit-interface/0.11/junit-interface-0.11.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-tree/7.1/asm-tree-7.1.jar:/home/jenkins/.m2/repository/org/clapper/grizzled-scala_2.12/4.9.3/grizzled-scala_2.12-4.9.3.jar:/home/jenkins/.m2/repository/org/scalacheck/scalacheck_2.12/1.15.4/scalacheck_2.12-1.15.4.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-compatible/3.2.9/scalatest-compatible-3.2.9.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-core_2.12/3.2.9/scalatest-core_2.12-3.2.9.jar:/home/jenkins/.m2/repository/junit/junit/4.13.1/junit-4.13.1.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/websocket/websocket-common/9.4.40.v20210413/websocket-common-9.4.40.v20210413.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-propspec_2.12/3.2.9/scalatest-propspec_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.14/httpcore-4.4.14.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-chrome-driver/3.141.59/selenium-chrome-driver-3.141.59.jar:/home/jenkins/.m2/repository/com/shapesecurity/salvation2/3.0.0/salvation2-3.0.0.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/2.6/objenesis-2.6.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-opera-driver/3.141.59/selenium-opera-driver-3.141.59.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-lang3/3.12.0/commons-lang3-3.12.0.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-shouldmatchers_2.12/3.2.9/scalatest-shouldmatchers_2.12-3.2.9.jar:/home/jenkins/.m2/repository/xml-apis/xml-apis/1.4.01/xml-apis-1.4.01.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest_2.12/3.2.9/scalatest_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-text/1.6/commons-text-1.6.jar:/home/jenkins/.m2/repository/net/sourceforge/htmlunit/neko-htmlunit/2.50.0/neko-htmlunit-2.50.0.jar:/home/jenkins/.m2/repository/net/sourceforge/htmlunit/htmlunit/2.50.0/htmlunit-2.50.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-collection-compat_2.12/2.0.0/scala-collection-compat_2.12-2.0.0.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.10.13/byte-buddy-agent-1.10.13.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-flatspec_2.12/3.2.9/scalatest-flatspec_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-funsuite_2.12/3.2.9/scalatest-funsuite_2.12-3.2.9.jar:/home/jenkins/.m2/repository/commons-codec/commons-codec/1.15/commons-codec-1.15.jar:/home/jenkins/.m2/repository/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar:/home/jenkins/.m2/repository/com/squareup/okio/okio/1.14.0/okio-1.14.0.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm/7.1/asm-7.1.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-freespec_2.12/3.2.9/scalatest-freespec_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-support/3.141.59/selenium-support-3.141.59.jar:/home/jenkins/.m2/repository/xalan/serializer/2.7.2/serializer-2.7.2.jar:/home/jenkins/.m2/repository/org/clapper/classutil_2.12/1.5.1/classutil_2.12-1.5.1.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.12/1.2.0/scala-xml_2.12-1.2.0.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-http/9.4.43.v20210629/jetty-http-9.4.43.v20210629.jar:/home/jenkins/.m2/repository/xalan/xalan/2.7.2/xalan-2.7.2.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-client/9.4.43.v20210629/jetty-client-9.4.43.v20210629.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/websocket/websocket-api/9.4.40.v20210413/websocket-api-9.4.40.v20210413.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-funspec_2.12/3.2.9/scalatest-funspec_2.12-3.2.9.jar:/home/jenkins/.m2/repository/xerces/xercesImpl/2.12.0/xercesImpl-2.12.0.jar:/home/jenkins/.m2/repository/net/sourceforge/htmlunit/htmlunit-cssparser/1.7.0/htmlunit-cssparser-1.7.0.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.10.13/byte-buddy-1.10.13.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-diagrams_2.12/3.2.9/scalatest-diagrams_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.43.v20210629/jetty-util-9.4.43.v20210629.jar:/home/jenkins/.m2/repository/com/squareup/okhttp3/okhttp/3.11.0/okhttp-3.11.0.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-remote-driver/3.141.59/selenium-remote-driver-3.141.59.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpmime/4.5.13/httpmime-4.5.13.jar:/home/jenkins/.m2/repository/commons-net/commons-net/3.1/commons-net-3.1.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-ie-driver/3.141.59/selenium-ie-driver-3.141.59.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/home/jenkins/.m2/repository/org/brotli/dec/0.1.2/dec-0.1.2.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-featurespec_2.12/3.2.9/scalatest-featurespec_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-compiler/2.12.15/scala-compiler-2.12.15.jar:/home/jenkins/.m2/repository/org/scalatestplus/mockito-3-4_2.12/3.2.9.0/mockito-3-4_2.12-3.2.9.0.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-api/3.141.59/selenium-api-3.141.59.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-java/3.141.59/selenium-java-3.141.59.jar:/home/jenkins/.m2/repository/net/sourceforge/htmlunit/htmlunit-core-js/2.50.0/htmlunit-core-js-2.50.0.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.4.6/mockito-core-3.4.6.jar:/home/jenkins/.m2/repository/org/scalatestplus/selenium-3-141_2.12/3.2.9.0/selenium-3-141_2.12-3.2.9.0.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-mustmatchers_2.12/3.2.9/scalatest-mustmatchers_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-edge-driver/3.141.59/selenium-edge-driver-3.141.59.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-exec/1.3/commons-exec-1.3.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-commons/7.1/asm-commons-7.1.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-util/7.1/asm-util-7.1.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/websocket/websocket-client/9.4.40.v20210413/websocket-client-9.4.40.v20210413.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.15/scala-library-2.12.15.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-io/9.4.40.v20210413/jetty-io-9.4.40.v20210413.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/htmlunit-driver/2.50.0/htmlunit-driver-2.50.0.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-firefox-driver/3.141.59/selenium-firefox-driver-3.141.59.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.1.3/commons-logging-1.1.3.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-wordspec_2.12/3.2.9/scalatest-wordspec_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-safari-driver/3.141.59/selenium-safari-driver-3.141.59.jar:/home/jenkins/.m2/repository/org/ow2/asm/asm-analysis/7.1/asm-analysis-7.1.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.15/scala-reflect-2.12.15.jar:/home/jenkins/.m2/repository/org/scalatestplus/scalacheck-1-15_2.12/3.2.9.0/scalacheck-1-15_2.12-3.2.9.0.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-refspec_2.12/3.2.9/scalatest-refspec_2.12-3.2.9.jar
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:testCompile (scala-test-compile-first) @ spark-tools_2.12 ---
[INFO] compile in 0.0 s
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M5:test (default-test) @ spark-tools_2.12 ---
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M5:test (test) @ spark-tools_2.12 ---
[INFO] Skipping execution of surefire because it has already been run for this configuration
[INFO] 
[INFO] --- scalatest-maven-plugin:2.0.2:test (test) @ spark-tools_2.12 ---
Discovery starting.
Discovery completed in 115 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 149 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Hive
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project REPL
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --------------< org.apache.spark:spark-network-yarn_2.12 >--------------
[INFO] Building Spark Project YARN Shuffle Service 3.3.0-SNAPSHOT       [11/31]
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-versions) @ spark-network-yarn_2.12 ---
[INFO] 
[INFO] --- maven-enforcer-plugin:3.0.0-M2:enforce (enforce-no-duplicate-dependencies) @ spark-network-yarn_2.12 ---
[INFO] 
[INFO] --- build-helper-maven-plugin:3.2.0:timestamp-property (module-timestamp-property) @ spark-network-yarn_2.12 ---
[WARNING] Using platform locale (en_US actually) to format date/time, i.e. build is platform dependent!
[INFO] 
[INFO] --- build-helper-maven-plugin:3.2.0:timestamp-property (local-timestamp-property) @ spark-network-yarn_2.12 ---
[WARNING] Using platform locale (en_US actually) to format date/time, i.e. build is platform dependent!
[INFO] 
[INFO] --- build-helper-maven-plugin:3.2.0:regex-property (regex-property) @ spark-network-yarn_2.12 ---
[INFO] 
[INFO] --- mvn-scalafmt_2.12:1.0.4:format (default) @ spark-network-yarn_2.12 ---
[WARNING] format.skipSources set, ignoring main directories
[WARNING] format.skipTestSources set, ignoring validateOnly directories
[WARNING] No sources specified, skipping formatting
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:add-source (eclipse-add-source) @ spark-network-yarn_2.12 ---
[INFO] Add Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/common/network-yarn/src/main/scala
[INFO] Add Test Source directory: /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/common/network-yarn/src/test/scala
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (default-cli) @ spark-network-yarn_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.15/scala-library-2.12.15.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-crypto/1.1.0/commons-crypto-1.1.0.jar:/home/jenkins/.m2/repository/org/roaringbitmap/shims/0.9.22/shims-0.9.22.jar:/home/jenkins/.m2/repository/com/google/crypto/tink/tink/1.6.0/tink-1.6.0.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.13.0/jackson-annotations-2.13.0.jar:/home/jenkins/.m2/repository/org/roaringbitmap/RoaringBitmap/0.9.22/RoaringBitmap-0.9.22.jar:/home/jenkins/.m2/repository/io/netty/netty-all/4.1.68.Final/netty-all-4.1.68.Final.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.13.0/jackson-core-2.13.0.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/common/network-shuffle/target/scala-2.12/classes:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/common/network-common/target/scala-2.12/classes:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-lang3/3.12.0/commons-lang3-3.12.0.jar:/home/jenkins/.m2/repository/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.2.2/metrics-core-4.2.2.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.2.4/gson-2.2.4.jar
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-network-yarn_2.12 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ spark-network-yarn_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/common/network-yarn/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ spark-network-yarn_2.12 ---
[INFO] Not compiling main sources
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:compile (scala-compile-first) @ spark-network-yarn_2.12 ---
[INFO] Using incremental compilation using Mixed compile order
[INFO] Compiler bridge file: /home/jenkins/.sbt/1.0/zinc/org.scala-sbt/org.scala-sbt-compiler-bridge_2.12-1.3.1-bin_2.12.15__52.0-1.3.1_20191012T045515.jar
[INFO] compiler plugin: BasicArtifact(com.github.ghik,silencer-plugin_2.12.15,1.7.6,null)
[INFO] Compiling 3 Java sources to /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/common/network-yarn/target/scala-2.12/classes ...
[INFO] Done compiling.
[INFO] compile in 0.3 s
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (create-tmp-dir) @ spark-network-yarn_2.12 ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ spark-network-yarn_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/common/network-yarn/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ spark-network-yarn_2.12 ---
[INFO] Not compiling test sources
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (generate-test-classpath) @ spark-network-yarn_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scalatest/scalatest-matchers-core_2.12/3.2.9/scalatest-matchers-core_2.12-3.2.9.jar:/home/jenkins/.m2/repository/com/google/crypto/tink/tink/1.6.0/tink-1.6.0.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-common/2.7.4/hadoop-mapreduce-client-common-2.7.4.jar:/home/jenkins/.m2/repository/org/apache/htrace/htrace-core/3.1.0-incubating/htrace-core-3.1.0-incubating.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-log4j12/1.7.30/slf4j-log4j12-1.7.30.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-jobclient/2.7.4/hadoop-mapreduce-client-jobclient-2.7.4.jar:/home/jenkins/.m2/repository/com/google/protobuf/protobuf-java/2.5.0/protobuf-java-2.5.0.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-core_2.12/3.2.9/scalatest-core_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper-jute/3.6.2/zookeeper-jute-3.6.2.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/websocket/websocket-common/9.4.40.v20210413/websocket-common-9.4.40.v20210413.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-propspec_2.12/3.2.9/scalatest-propspec_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.14/httpcore-4.4.14.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-client/2.7.4/hadoop-client-2.7.4.jar:/home/jenkins/.m2/repository/com/shapesecurity/salvation2/3.0.0/salvation2-3.0.0.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-lang3/3.12.0/commons-lang3-3.12.0.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/2.6/objenesis-2.6.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-shouldmatchers_2.12/3.2.9/scalatest-shouldmatchers_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/home/jenkins/.m2/repository/xml-apis/xml-apis/1.4.01/xml-apis-1.4.01.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest_2.12/3.2.9/scalatest_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-text/1.6/commons-text-1.6.jar:/home/jenkins/.m2/repository/net/sourceforge/htmlunit/neko-htmlunit/2.50.0/neko-htmlunit-2.50.0.jar:/home/jenkins/.m2/repository/net/sourceforge/htmlunit/htmlunit/2.50.0/htmlunit-2.50.0.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.6.2/zookeeper-3.6.2.jar:/home/jenkins/.m2/repository/commons-codec/commons-codec/1.15/commons-codec-1.15.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-funsuite_2.12/3.2.9/scalatest-funsuite_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-common/2.7.4/hadoop-common-2.7.4.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.4/commons-io-2.4.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-freespec_2.12/3.2.9/scalatest-freespec_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-jaxrs/1.9.13/jackson-jaxrs-1.9.13.jar:/home/jenkins/.m2/repository/xalan/serializer/2.7.2/serializer-2.7.2.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-client/2.7.4/hadoop-yarn-client-2.7.4.jar:/home/jenkins/.m2/repository/xalan/xalan/2.7.2/xalan-2.7.2.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-client/9.4.43.v20210629/jetty-client-9.4.43.v20210629.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-core-asl/1.9.13/jackson-core-asl-1.9.13.jar:/home/jenkins/.m2/repository/xerces/xercesImpl/2.12.0/xercesImpl-2.12.0.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-funspec_2.12/3.2.9/scalatest-funspec_2.12-3.2.9.jar:/home/jenkins/.m2/repository/net/sourceforge/htmlunit/htmlunit-cssparser/1.7.0/htmlunit-cssparser-1.7.0.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.10.13/byte-buddy-1.10.13.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-diagrams_2.12/3.2.9/scalatest-diagrams_2.12-3.2.9.jar:/home/jenkins/.m2/repository/com/squareup/okhttp3/okhttp/3.11.0/okhttp-3.11.0.jar:/home/jenkins/.m2/repository/org/roaringbitmap/RoaringBitmap/0.9.22/RoaringBitmap-0.9.22.jar:/home/jenkins/.m2/repository/commons-net/commons-net/3.1/commons-net-3.1.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-ie-driver/3.141.59/selenium-ie-driver-3.141.59.jar:/home/jenkins/.m2/repository/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/home/jenkins/.m2/repository/org/mortbay/jetty/jetty-sslengine/6.1.26/jetty-sslengine-6.1.26.jar:/home/jenkins/.m2/repository/org/scalatestplus/selenium-3-141_2.12/3.2.9.0/selenium-3-141_2.12-3.2.9.0.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-mustmatchers_2.12/3.2.9/scalatest-mustmatchers_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-edge-driver/3.141.59/selenium-edge-driver-3.141.59.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-exec/1.3/commons-exec-1.3.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-auth/2.7.4/hadoop-auth-2.7.4.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-shuffle/2.7.4/hadoop-mapreduce-client-shuffle-2.7.4.jar:/home/jenkins/.m2/repository/com/google/guava/guava/14.0.1/guava-14.0.1.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/websocket/websocket-client/9.4.40.v20210413/websocket-client-9.4.40.v20210413.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.15/scala-library-2.12.15.jar:/home/jenkins/.m2/repository/commons-beanutils/commons-beanutils/1.9.4/commons-beanutils-1.9.4.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-io/9.4.40.v20210413/jetty-io-9.4.40.v20210413.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-core/2.7.4/hadoop-mapreduce-client-core-2.7.4.jar:/home/jenkins/.m2/repository/commons-cli/commons-cli/1.5.0/commons-cli-1.5.0.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-math3/3.4.1/commons-math3-3.4.1.jar:/home/jenkins/.m2/repository/org/codehaus/jackson/jackson-xc/1.9.13/jackson-xc-1.9.13.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/common/network-common/target/scala-2.12/classes:/home/jenkins/.m2/repository/org/scalatest/scalatest-wordspec_2.12/3.2.9/scalatest-wordspec_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.15/scala-reflect-2.12.15.jar:/home/jenkins/.m2/repository/org/scalactic/scalactic_2.12/3.2.9/scalactic_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/roaringbitmap/shims/0.9.22/shims-0.9.22.jar:/home/jenkins/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/jenkins/.m2/repository/com/novocode/junit-interface/0.11/junit-interface-0.11.jar:/home/jenkins/.m2/repository/io/netty/netty-all/4.1.68.Final/netty-all-4.1.68.Final.jar:/home/jenkins/.m2/repository/org/scalacheck/scalacheck_2.12/1.15.4/scalacheck_2.12-1.15.4.jar:/home/jenkins/.m2/repository/org/apache/avro/avro/1.11.0/avro-1.11.0.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-compatible/3.2.9/scalatest-compatible-3.2.9.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-framework/2.7.1/curator-framework-2.7.1.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-recipes/2.7.1/curator-recipes-2.7.1.jar:/home/jenkins/.m2/repository/junit/junit/4.13.1/junit-4.13.1.jar:/home/jenkins/.m2/repository/org/apache/directory/server/apacheds-kerberos-codec/2.0.0-M15/apacheds-kerberos-codec-2.0.0-M15.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-chrome-driver/3.141.59/selenium-chrome-driver-3.141.59.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-opera-driver/3.141.59/selenium-opera-driver-3.141.59.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.10.13/byte-buddy-agent-1.10.13.jar:/home/jenkins/.m2/repository/org/mortbay/jetty/jetty-util/6.1.26/jetty-util-6.1.26.jar:/home/jenkins/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-api/2.7.4/hadoop-yarn-api-2.7.4.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-flatspec_2.12/3.2.9/scalatest-flatspec_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar:/home/jenkins/.m2/repository/com/squareup/okio/okio/1.14.0/okio-1.14.0.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/jenkins/.m2/repository/javax/xml/bind/jaxb-api/2.2.11/jaxb-api-2.2.11.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-support/3.141.59/selenium-support-3.141.59.jar:/home/jenkins/.m2/repository/javax/servlet/jsp/jsp-api/2.1/jsp-api-2.1.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-server-common/2.7.4/hadoop-yarn-server-common-2.7.4.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.13.0/jackson-core-2.13.0.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.12/1.2.0/scala-xml_2.12-1.2.0.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-http/9.4.43.v20210629/jetty-http-9.4.43.v20210629.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-mapreduce-client-app/2.7.4/hadoop-mapreduce-client-app-2.7.4.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/common/tags/target/scala-2.12/test-classes:/home/jenkins/.m2/repository/xmlenc/xmlenc/0.52/xmlenc-0.52.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/websocket/websocket-api/9.4.40.v20210413/websocket-api-9.4.40.v20210413.jar:/home/jenkins/.m2/repository/org/apache/directory/api/api-util/1.0.0-M20/api-util-1.0.0-M20.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-yarn-common/2.7.4/hadoop-yarn-common-2.7.4.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.2.2/metrics-core-4.2.2.jar:/home/jenkins/.m2/repository/com/google/code/gson/gson/2.2.4/gson-2.2.4.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.43.v20210629/jetty-util-9.4.43.v20210629.jar:/home/jenkins/.m2/repository/org/apache/directory/api/api-asn1-api/1.0.0-M20/api-asn1-api-1.0.0-M20.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-annotations/2.7.4/hadoop-annotations-2.7.4.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-remote-driver/3.141.59/selenium-remote-driver-3.141.59.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpmime/4.5.13/httpmime-4.5.13.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.13.0/jackson-databind-2.13.0.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-crypto/1.1.0/commons-crypto-1.1.0.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.13.0/jackson-annotations-2.13.0.jar:/home/jenkins/.m2/repository/commons-configuration/commons-configuration/1.6/commons-configuration-1.6.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-compress/1.21/commons-compress-1.21.jar:/home/jenkins/.m2/repository/commons-digester/commons-digester/1.8/commons-digester-1.8.jar:/home/jenkins/.m2/repository/org/apache/directory/server/apacheds-i18n/2.0.0-M15/apacheds-i18n-2.0.0-M15.jar:/home/jenkins/.m2/repository/org/brotli/dec/0.1.2/dec-0.1.2.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-featurespec_2.12/3.2.9/scalatest-featurespec_2.12-3.2.9.jar:/home/jenkins/.m2/repository/org/scalatestplus/mockito-3-4_2.12/3.2.9.0/mockito-3-4_2.12-3.2.9.0.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-api/3.141.59/selenium-api-3.141.59.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-client/2.7.1/curator-client-2.7.1.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.7.4/hadoop-hdfs-2.7.4.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-java/3.141.59/selenium-java-3.141.59.jar:/home/jenkins/.m2/repository/net/sourceforge/htmlunit/htmlunit-core-js/2.50.0/htmlunit-core-js-2.50.0.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.4.6/mockito-core-3.4.6.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/common/tags/target/scala-2.12/classes:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/htmlunit-driver/2.50.0/htmlunit-driver-2.50.0.jar:/home/jenkins/.m2/repository/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/home/jenkins/workspace/spark-master-test-maven-hadoop-2.7/common/network-shuffle/target/scala-2.12/classes:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-firefox-driver/3.141.59/selenium-firefox-driver-3.141.59.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.1.3/commons-logging-1.1.3.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-safari-driver/3.141.59/selenium-safari-driver-3.141.59.jar:/home/jenkins/.m2/repository/org/scalatestplus/scalacheck-1-15_2.12/3.2.9.0/scalacheck-1-15_2.12-3.2.9.0.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-refspec_2.12/3.2.9/scalatest-refspec_2.12-3.2.9.jar:/home/jenkins/.m2/repository/commons-httpclient/commons-httpclient/3.1/commons-httpclient-3.1.jar
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:testCompile (scala-test-compile-first) @ spark-network-yarn_2.12 ---
[INFO] compile in 0.0 s
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M5:test (default-test) @ spark-network-yarn_2.12 ---
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M5:test (test) @ spark-network-yarn_2.12 ---
[INFO] Skipping execution of surefire because it has already been run for this configuration
[INFO] 
[INFO] --- scalatest-maven-plugin:2.0.2:test (test) @ spark-network-yarn_2.12 ---
Discovery starting.
Discovery completed in 137 milliseconds.
Run starting. Expected test count is: 0
DiscoverySuite:
Run completed in 180 milliseconds.
Total number of tests run: 0
Suites: completed 1, aborted 0
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
No tests were executed.
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project YARN
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Mesos
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Hive Thrift Server
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Assembly
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Kafka 0.10+ Token Provider for Streaming
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Integration for Kafka 0.10
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Kafka 0.10+ Source for Structured Streaming
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Kinesis Integration
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Examples
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Integration for Kafka 0.10 Assembly
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Avro
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Spark Project Kinesis Assembly
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Spark Project Parent POM 3.3.0-SNAPSHOT:
[INFO] 
[INFO] Spark Project Parent POM ........................... SUCCESS [  3.307 s]
[INFO] Spark Project Tags ................................. SUCCESS [  3.493 s]
[INFO] Spark Project Sketch ............................... SUCCESS [ 17.618 s]
[INFO] Spark Project Local DB ............................. SUCCESS [  3.912 s]
[INFO] Spark Project Networking ........................... SUCCESS [ 51.393 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 10.598 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [  3.273 s]
[INFO] Spark Project Launcher ............................. SUCCESS [  4.161 s]
[INFO] Spark Project Core ................................. FAILURE [35:03 min]
[INFO] Spark Project ML Local Library ..................... SKIPPED
[INFO] Spark Project GraphX ............................... SKIPPED
[INFO] Spark Project Streaming ............................ SKIPPED
[INFO] Spark Project Catalyst ............................. SKIPPED
[INFO] Spark Project SQL .................................. SKIPPED
[INFO] Spark Project ML Library ........................... SKIPPED
[INFO] Spark Project Tools ................................ SUCCESS [  1.569 s]
[INFO] Spark Project Hive ................................. SKIPPED
[INFO] Spark Project REPL ................................. SKIPPED
[INFO] Spark Project YARN Shuffle Service ................. SUCCESS [  2.216 s]
[INFO] Spark Project YARN ................................. SKIPPED
[INFO] Spark Project Mesos ................................ SKIPPED
[INFO] Spark Project Hive Thrift Server ................... SKIPPED
[INFO] Spark Project Assembly ............................. SKIPPED
[INFO] Kafka 0.10+ Token Provider for Streaming ........... SKIPPED
[INFO] Spark Integration for Kafka 0.10 ................... SKIPPED
[INFO] Kafka 0.10+ Source for Structured Streaming ........ SKIPPED
[INFO] Spark Kinesis Integration .......................... SKIPPED
[INFO] Spark Project Examples ............................. SKIPPED
[INFO] Spark Integration for Kafka 0.10 Assembly .......... SKIPPED
[INFO] Spark Avro ......................................... SKIPPED
[INFO] Spark Project Kinesis Assembly ..................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  36:46 min
[INFO] Finished at: 2021-11-28T23:34:53-08:00
[INFO] ------------------------------------------------------------------------
[WARNING] The requested profile "hive-2.3" could not be activated because it does not exist.
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:2.0.2:test (test) on project spark-core_2.12: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <args> -rf :spark-core_2.12
+ retcode2=1
+ [[ 0 -ne 0 ]]
+ [[ 1 -ne 0 ]]
+ [[ 0 -ne 0 ]]
+ [[ 1 -ne 0 ]]
+ echo 'Testing Spark with Maven failed'
Testing Spark with Maven failed
+ exit 1
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
[Checks API] No suitable checks publisher found.
Finished: FAILURE