Console Output

[log truncated by the viewer: skipping 2,118 KB]
[info] - Data source V2 relation resolution 'INSERT INTO TABLE testcat.tab VALUES (1)' (2 milliseconds)
[info] - Data source V2 relation resolution 'INSERT INTO TABLE spark_catalog.default.v2Table VALUES (1)' (2 milliseconds)
[info] - Data source V2 relation resolution 'DESC TABLE tab' (2 milliseconds)
[info] - Data source V2 relation resolution 'DESC TABLE testcat.tab' (2 milliseconds)
[info] - Data source V2 relation resolution 'DESC TABLE spark_catalog.default.v2Table' (1 millisecond)
[info] - Data source V2 relation resolution 'SHOW TBLPROPERTIES tab' (2 milliseconds)
[info] - Data source V2 relation resolution 'SHOW TBLPROPERTIES testcat.tab' (1 millisecond)
[info] - Data source V2 relation resolution 'SHOW TBLPROPERTIES spark_catalog.default.v2Table' (2 milliseconds)
[info] - Data source V2 relation resolution 'SELECT * from tab' (3 milliseconds)
[info] - Data source V2 relation resolution 'SELECT * from testcat.tab' (2 milliseconds)
[info] - Data source V2 relation resolution 'SELECT * from spark_catalog.default.v2Table' (2 milliseconds)
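The relation-resolution tests above exercise Spark's multi-part identifiers: a bare name such as tab resolves against the current catalog, while testcat.tab and spark_catalog.default.v2Table name a catalog explicitly. A minimal sketch of wiring up such a catalog via the standard spark.sql.catalog.<name> setting (the implementation class here is a hypothetical placeholder, not part of Spark):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("dsv2-catalog-demo")
      // assumption: com.example.MyTestCatalog is a user-provided TableCatalog implementation
      .config("spark.sql.catalog.testcat", "com.example.MyTestCatalog")
      .getOrCreate()

    // Unqualified names resolve in the current catalog; qualified names pick a catalog explicitly.
    spark.sql("SELECT * FROM testcat.tab")
    spark.sql("SELECT * FROM spark_catalog.default.v2Table")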
[info] - MERGE INTO TABLE (81 milliseconds)
[info] - MERGE INTO TABLE - skip resolution on v2 tables that accept any schema (3 milliseconds)
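MERGE INTO resolves only against Data Source V2 tables; a hedged sketch of the statement shape these tests resolve (table names are illustrative, assuming a SparkSession named spark):

    spark.sql("""
      MERGE INTO testcat.target AS t
      USING testcat.source AS s
      ON t.id = s.id
      WHEN MATCHED THEN UPDATE SET *
      WHEN NOT MATCHED THEN INSERT *
    """)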
[info] - SPARK-31147: forbid CHAR type in non-Hive tables (160 milliseconds)
[info] DatasetCacheSuite:
[info] - get storage level (105 milliseconds)
[info] - persist and unpersist (240 milliseconds)
[info] - persist and then rebind right encoder when join 2 datasets (246 milliseconds)
[info] - persist and then groupBy columns asKey, map (421 milliseconds)
[info] - persist and then withColumn (151 milliseconds)
00:02:04.179 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[info] - cache UDF result correctly (4 seconds, 374 milliseconds)
[info] - SPARK-24613 Cache with UDF could not be matched with subsequent dependent caches (238 milliseconds)
[info] - SPARK-24596 Non-cascading Cache Invalidation (556 milliseconds)
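The DatasetCacheSuite cases above check the basic persist/unpersist contract and that dependent plans keep matching the cache. A minimal sketch of that contract, assuming an existing SparkSession named spark:

    import org.apache.spark.storage.StorageLevel

    val ds = spark.range(100).toDF("id")
    ds.persist(StorageLevel.MEMORY_AND_DISK)  // mark for caching
    ds.count()                                // materialize the cache
    assert(ds.storageLevel == StorageLevel.MEMORY_AND_DISK)  // "get storage level"
    ds.unpersist()                            // drop the cached data
    assert(ds.storageLevel == StorageLevel.NONE)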
[info] - SPARK-30656: getOffsetRangesFromUnresolvedOffsets - multiple topic partitions (16 seconds, 195 milliseconds)
00:02:14.536 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 2.3.0
00:02:14.536 WARN org.apache.hadoop.hive.metastore.ObjectStore: setMetaStoreSchemaVersion called but recording version is disabled: version = 2.3.0, comment = Set by MetaStore jenkins@192.168.10.24
00:02:14.658 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
[info] - success sanity check (1 minute, 2 seconds)
[info] - hadoop configuration preserved (545 milliseconds)
00:02:17.163 WARN org.apache.spark.sql.hive.client.HiveClientImpl: Detected HiveConf hive.session.history.enabled is true and will be reset to false to disable useless hive logic
00:02:17.164 WARN org.apache.spark.sql.hive.client.HiveClientImpl: Detected HiveConf hive.execution.engine is 'tez' and will be reset to 'mr' to disable useless hive logic
[info] - override useless and side-effect hive configurations  (733 milliseconds)
[info] - failure sanity check !!! IGNORED !!!
[info] - SPARK-24596 Non-cascading Cache Invalidation - verify cached data reuse (17 seconds, 155 milliseconds)
[info] - SPARK-26708 Cache data and cached plan should stay consistent (140 milliseconds)
[info] - postgreSQL/union.sql (30 seconds, 952 milliseconds)
[info] - SPARK-27739 Save stats from optimized plan (1 second, 175 milliseconds)
00:02:24.839 WARN org.apache.spark.sql.DatasetCacheSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.DatasetCacheSuite, thread names: shuffle-boss-390-1, rpc-boss-387-1 =====

[info] JoinSuite:
[info] - equi-join is hash-join (23 milliseconds)
[info] - join operator selection (829 milliseconds)
[info] - broadcasted hash join operator selection (172 milliseconds)
[info] - broadcasted hash outer join operator selection (329 milliseconds)
[info] - multiple-key equi-join is hash-join (21 milliseconds)
[info] - inner join where, one match per row (635 milliseconds)
[info] - SPARK-30656: getOffsetRangesFromResolvedOffsets (21 seconds, 14 milliseconds)
[info] - inner join ON, one match per row (494 milliseconds)
[info] - inner join, where, multiple matches (302 milliseconds)
[info] - postgreSQL/timestamp.sql (3 seconds, 800 milliseconds)
[info] - inner join, no matches (270 milliseconds)
00:02:28.919 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
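This WindowExec warning appears whenever a window specification has no PARTITION BY clause, which forces Spark to shuffle every row into a single partition before evaluating the window. Partitioning the spec avoids it; a minimal sketch, assuming a DataFrame df with dept and salary columns:

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{col, row_number}

    // Unpartitioned spec: every row lands in one partition, hence the warning.
    val unpartitioned = Window.orderBy(col("salary"))

    // Partitioned spec: rows are distributed per dept and the warning disappears.
    val perDept = Window.partitionBy(col("dept")).orderBy(col("salary"))

    df.withColumn("rank", row_number().over(perDept))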
[info] - SPARK-22141: Propagate empty relation before checking Cartesian products (819 milliseconds)
00:02:29.231 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:29.409 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:30.458 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - big inner join, 4 matches per row (1 second, 593 milliseconds)
[info] - cartesian product join (392 milliseconds)
00:02:31.513 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:32.174 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] KafkaSinkBatchSuiteV2:
00:02:32.302 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:32.503 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:32.716 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:33.583 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:35.835 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:35.960 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:36.103 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:36.308 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - left outer join (5 seconds, 206 milliseconds)
00:02:37.291 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - postgreSQL/window_part3.sql (9 seconds, 125 milliseconds)
00:02:38.217 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:38.307 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:38.697 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - batch - write to kafka (1 second, 867 milliseconds)
[info] - 0.12: create client (21 seconds, 991 milliseconds)
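The tests prefixed "0.12:" come from VersionsSuite, which builds an isolated Hive client per supported metastore version. The application-level equivalent uses the standard metastore settings; a sketch (the version string is illustrative):

    import org.apache.spark.sql.SparkSession

    // Point Spark at a specific Hive metastore version; "maven" downloads matching jars.
    val spark = SparkSession.builder()
      .enableHiveSupport()
      .config("spark.sql.hive.metastore.version", "2.3.7")  // illustrative version
      .config("spark.sql.hive.metastore.jars", "maven")
      .getOrCreate()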
00:02:39.744 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:40.542 WARN com.jolbox.bonecp.BoneCPConfig: Max Connections < 1. Setting to 20
[info] - right outer join (4 seconds, 266 milliseconds)
00:02:41.351 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:43.786 WARN com.jolbox.bonecp.BoneCPConfig: Max Connections < 1. Setting to 20
00:02:43.833 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.12.0
00:02:44.211 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:44.402 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: createDatabase (4 seconds, 694 milliseconds)
[info] - 0.12: create/get/alter database should pick right user name as owner (1 millisecond)
[info] - 0.12: createDatabase with null description (75 milliseconds)
[info] - batch - partition column and partitioner priorities (5 seconds, 992 milliseconds)
[info] - batch - null topic field value, and no topic option (77 milliseconds)
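The Kafka batch-sink tests above cover topic resolution: the sink takes the topic from the "topic" option, falls back to a topic column in the data, and fails a row that has neither. A minimal write sketch, assuming a broker at localhost:9092 and the spark-sql-kafka-0-10 package on the classpath:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().getOrCreate()
    import spark.implicits._

    Seq(("k1", "v1"), ("k2", "v2")).toDF("key", "value")
      .write
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")  // assumed local broker
      .option("topic", "demo")  // alternatively, supply a `topic` column per row
      .save()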
[info] - 0.12: setCurrentDatabase (22 milliseconds)
[info] - 0.12: getDatabase (65 milliseconds)
[info] - 0.12: databaseExists (72 milliseconds)
[info] - 0.12: listDatabases (60 milliseconds)
[info] - 0.12: alterDatabase (236 milliseconds)
00:02:46.003 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:46.169 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:46.305 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - full outer join (5 seconds, 755 milliseconds)
[info] - SPARK-20496: batch - enforce analyzed plans (1 second, 511 milliseconds)
00:02:46.421 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:46.515 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - broadcasted existence join operator selection (172 milliseconds)
00:02:46.613 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:46.714 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: dropDatabase (795 milliseconds)
[info] - batch - unsupported save modes (403 milliseconds)
00:02:47.567 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:47.674 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - cross join with broadcast (1 second, 377 milliseconds)
00:02:48.015 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - left semi join (528 milliseconds)
[info] - 0.12: createTable (1 second, 611 milliseconds)
00:02:48.960 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: loadTable (378 milliseconds)
[info] - 0.12: tableExists (64 milliseconds)
[info] - 0.12: getTable (53 milliseconds)
[info] - 0.12: getTableOption (41 milliseconds)
[info] - 0.12: getTablesByName (140 milliseconds)
00:02:49.855 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: getTablesByName when multiple tables (73 milliseconds)
[info] - 0.12: getTablesByName when some tables do not exist (51 milliseconds)
[info] - 0.12: getTablesByName when contains invalid name (133 milliseconds)
[info] - 0.12: getTablesByName when empty (32 milliseconds)
[info] - cross join detection (2 seconds, 410 milliseconds)
[info] - 0.12: alterTable(table: CatalogTable) (320 milliseconds)
[info] - 0.12: alterTable - should respect the original catalog table's owner name (474 milliseconds)
[info] - 0.12: alterTable(dbName: String, tableName: String, table: CatalogTable) (182 milliseconds)
[info] - 0.12: alterTable - rename (684 milliseconds)
[info] - test SortMergeJoin (without spill) (1 second, 968 milliseconds)
[info] - 0.12: alterTable - change database (507 milliseconds)
[info] - 0.12: alterTable - change database and table names (335 milliseconds)
[info] - 0.12: listTables(database) (33 milliseconds)
[info] - 0.12: listTables(database, pattern) (144 milliseconds)
00:02:54.423 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: listTablesByType(database, pattern, tableType) (275 milliseconds)
00:02:54.828 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: dropTable (1 second, 299 milliseconds)
[info] - 0.12: sql create partitioned table (83 milliseconds)
00:02:56.482 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:57.486 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: createPartitions (1 second, 303 milliseconds)
00:02:57.719 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:58.122 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: getPartitionNames(catalogTable) (320 milliseconds)
00:02:58.225 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:58.502 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:02:58.594 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: getPartitions(catalogTable) (324 milliseconds)
[info] - test SortMergeJoin (with spill) (5 seconds, 817 milliseconds)
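The spilling variant of the SortMergeJoin test caps the join's in-memory row buffer so matched rows spill to disk. The thresholds involved are internal configurations that may change across versions; a sketch of reproducing the behavior, assuming a SparkSession named spark:

    // Disable broadcast so the planner picks sort-merge join, then force tiny buffers.
    spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")
    spark.conf.set("spark.sql.sortMergeJoinExec.buffer.in.memory.threshold", "2")  // internal config
    spark.conf.set("spark.sql.sortMergeJoinExec.buffer.spill.threshold", "2")      // internal config

    val left  = spark.range(0, 1000).toDF("k")
    val right = spark.range(0, 1000).toDF("k")
    left.join(right, "k").count()  // buffered match rows now spill to disk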
[info] - outer broadcast hash join should not throw NPE (230 milliseconds)
[info] - 0.12: getPartitionsByFilter (158 milliseconds)
[info] - test SortMergeJoin output ordering (131 milliseconds)
[info] - 0.12: getPartition (94 milliseconds)
[info] - 0.12: getPartitionOption(db: String, table: String, spec: TablePartitionSpec) (114 milliseconds)
00:02:59.428 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - SPARK-22445 Respect stream-side child's needCopyResult in BroadcastHashJoin (469 milliseconds)
00:02:59.545 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: getPartitionOption(table: CatalogTable, spec: TablePartitionSpec) (132 milliseconds)
00:02:59.950 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: getPartitions(db: String, table: String) (150 milliseconds)
[info] - SPARK-24495: Join may return wrong result when having duplicated equal-join keys (482 milliseconds)
[info] - 0.12: loadPartition (325 milliseconds)
[info] - SPARK-27485: EnsureRequirements should not fail join with duplicate keys (645 milliseconds)
[info] - 0.12: loadDynamicPartitions (73 milliseconds)
00:03:00.987 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: renamePartitions (519 milliseconds)
[info] - 0.12: alterPartitions (466 milliseconds)
[info] - SPARK-26352: join reordering should not change the order of columns (1 second, 529 milliseconds)
00:03:02.163 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:03:02.302 ERROR org.apache.spark.sql.hive.client.HiveClientImpl: 
======================
Attempt to drop the partition specs in table 'src_part' database 'default':
Map(key1 -> 1, key2 -> 3)
In this attempt, the following partitions have been dropped successfully:

The remaining partitions have not been dropped:
[1, 3]
======================
             
[info] - 0.12: dropPartitions (723 milliseconds)
[info] - 0.12: createFunction (39 milliseconds)
[info] - 0.12: functionExists (34 milliseconds)
[info] - 0.12: renameFunction (27 milliseconds)
[info] - 0.12: alterFunction (20 milliseconds)
[info] - 0.12: getFunction (21 milliseconds)
[info] - NaN and -0.0 in join keys (1 second, 656 milliseconds)
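That test pins down Spark's floating-point join semantics: join keys are normalized so that -0.0 compares equal to 0.0 and NaN compares equal to NaN. A minimal sketch, assuming a SparkSession named spark:

    import spark.implicits._

    val left  = Seq(0.0, Double.NaN).toDF("k")
    val right = Seq(-0.0, Double.NaN).toDF("k")

    // Both rows match: -0.0 joins 0.0 and NaN joins NaN under Spark's key normalization.
    left.join(right, "k").count()  // == 2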
[info] - 0.12: getFunctionOption (26 milliseconds)
[info] - 0.12: listFunctions (25 milliseconds)
[info] - 0.12: dropFunction (23 milliseconds)
[info] - 0.12: sql set command (50 milliseconds)
00:03:04.732 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:03:04.940 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: sql create index and reset (1 second, 314 milliseconds)
[info] - 0.12: sql read hive materialized view (1 millisecond)
[info] - 0.12: version (1 millisecond)
00:03:06.336 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: getConf (2 milliseconds)
00:03:06.514 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: setOut (33 milliseconds)
00:03:06.766 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - SPARK-28323: PythonUDF should be able to use in join condition (3 seconds, 12 milliseconds)
[info] - 0.12: setInfo (33 milliseconds)
00:03:07.035 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: setError (27 milliseconds)
00:03:07.196 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - SPARK-28345: PythonUDF predicate should be able to pushdown to join (554 milliseconds)
00:03:07.383 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - 0.12: newSession (163 milliseconds)
00:03:07.545 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - postgreSQL/window_part1.sql (30 seconds, 235 milliseconds)
[info] - 0.12: withHiveState and addJar (64 milliseconds)
[info] - SPARK-21492: cleanupResource without code generation (530 milliseconds)
[info] - SPARK-29850: sort-merge-join an empty table should not memory leak (763 milliseconds)
[info] - SPARK-32330: Preserve shuffled hash join build side partitioning (102 milliseconds)
[info] - 0.12: reset (1 second, 778 milliseconds)
[info] - SPARK-32383: Preserve hash join (BHJ and SHJ) stream side ordering (1 second, 36 milliseconds)
[info] - SPARK-32290: SingleColumn Null Aware Anti Join Optimize (232 milliseconds)
[info] - postgreSQL/insert.sql (2 seconds, 616 milliseconds)
00:03:12.969 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 14050.0 (TID 29655)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.multiplyExact(Math.java:867)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:12.971 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 14050.0 (TID 29654)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.multiplyExact(Math.java:867)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:12.972 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 14050.0 (TID 29655) (amp-jenkins-worker-04.amp executor driver): java.lang.ArithmeticException: integer overflow
	at java.lang.Math.multiplyExact(Math.java:867)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

00:03:12.972 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 14050.0 failed 1 times; aborting job
00:03:13.137 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 14052.0 (TID 29659)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.multiplyExact(Math.java:867)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:13.138 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 14052.0 (TID 29659) (amp-jenkins-worker-04.amp executor driver): java.lang.ArithmeticException: integer overflow
	at java.lang.Math.multiplyExact(Math.java:867)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

00:03:13.138 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 14052.0 failed 1 times; aborting job
00:03:13.144 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 14052.0 (TID 29658) (amp-jenkins-worker-04.amp executor driver): TaskKilled (Stage cancelled)
00:03:13.311 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 14054.0 (TID 29662)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.addExact(Math.java:790)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:13.312 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 14054.0 (TID 29662) (amp-jenkins-worker-04.amp executor driver): java.lang.ArithmeticException: integer overflow
	at java.lang.Math.addExact(Math.java:790)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

00:03:13.312 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 14054.0 failed 1 times; aborting job
00:03:13.525 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 14056.0 (TID 29666)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.addExact(Math.java:790)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:13.527 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 14056.0 (TID 29666) (amp-jenkins-worker-04.amp executor driver): java.lang.ArithmeticException: integer overflow
	at java.lang.Math.addExact(Math.java:790)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

00:03:13.527 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 14056.0 failed 1 times; aborting job
00:03:13.643 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 14058.0 (TID 29671)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.subtractExact(Math.java:829)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:13.644 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 14058.0 (TID 29671) (amp-jenkins-worker-04.amp executor driver): java.lang.ArithmeticException: integer overflow
	at java.lang.Math.subtractExact(Math.java:829)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

00:03:13.645 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 14058.0 failed 1 times; aborting job
00:03:13.649 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 14058.0 (TID 29670) (amp-jenkins-worker-04.amp executor driver): TaskKilled (Stage cancelled)
00:03:13.746 ERROR org.apache.spark.executor.Executor: Exception in task 1.0 in stage 14060.0 (TID 29675)
java.lang.ArithmeticException: integer overflow
	at java.lang.Math.subtractExact(Math.java:829)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:13.747 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 14060.0 (TID 29675) (amp-jenkins-worker-04.amp executor driver): java.lang.ArithmeticException: integer overflow
	at java.lang.Math.subtractExact(Math.java:829)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.project_doConsume_0$(Unknown Source)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)

00:03:13.747 ERROR org.apache.spark.scheduler.TaskSetManager: Task 1 in stage 14060.0 failed 1 times; aborting job
00:03:13.750 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 14060.0 (TID 29674) (amp-jenkins-worker-04.amp executor driver): TaskKilled (Stage cancelled)
00:03:14.340 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-eba5f3a6-1ba6-4bfa-8d1a-27406a47521d/tbl specified for non-external table:tbl
[info] - postgreSQL/int4.sql (4 seconds, 383 milliseconds)
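The integer-overflow stack traces above are expected output rather than failures: with ANSI semantics enabled, Spark's generated code routes int arithmetic through Math.addExact/subtractExact/multiplyExact, so overflow aborts the task, and golden-file tests such as postgreSQL/int4.sql assert exactly those errors. A minimal sketch, assuming a SparkSession named spark:

    // With ANSI semantics on, integer overflow throws instead of wrapping.
    spark.conf.set("spark.sql.ansi.enabled", "true")
    spark.sql("SELECT 2147483647 * 2").show()  // java.lang.ArithmeticException: integer overflow

    // With ANSI off (the default here), the same expression silently wraps.
    spark.conf.set("spark.sql.ansi.enabled", "false")
    spark.sql("SELECT 2147483647 * 2").show()  // -2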
[info] - SPARK-32399: Full outer shuffled hash join (5 seconds, 958 milliseconds)
[info] - SPARK-32649: Optimize BHJ/SHJ inner/semi join with empty hashed relation (2 seconds, 325 milliseconds)
00:03:18.427 WARN org.apache.spark.sql.JoinSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.JoinSuite, thread names: block-manager-storage-async-thread-pool-48, subquery-58, block-manager-storage-async-thread-pool-78, block-manager-storage-async-thread-pool-50, block-manager-storage-async-thread-pool-2, block-manager-storage-async-thread-pool-33, block-manager-storage-async-thread-pool-91, block-manager-storage-async-thread-pool-57, block-manager-storage-async-thread-pool-44, block-manager-storage-async-thread-pool-74, block-manager-storage-async-thread-pool-26, block-manager-storage-async-thread-pool-41, block-manager-storage-async-thread-pool-96, block-manager-storage-async-thread-pool-15, subquery-57, block-manager-storage-async-thread-pool-79, block-manager-storage-async-thread-pool-14, block-manager-storage-async-thread-pool-20, block-manager-storage-async-thread-pool-51, subquery-60, block-manager-storage-async-thread-pool-42, block-manager-storage-async-thread-pool-36, block-manager-storage-async-thread-pool-31, block-manager-storage-async-thread-pool-56, block-manager-storage-async-thread-pool-24, block-manager-storage-async-thread-pool-95, block-manager-storage-async-thread-pool-47, block-manager-storage-async-thread-pool-0, block-manager-storage-async-thread-pool-13, block-manager-storage-async-thread-pool-39, block-manager-storage-async-thread-pool-32, subquery-59, block-manager-storage-async-thread-pool-12, block-manager-storage-async-thread-pool-63, block-manager-storage-async-thread-pool-87, block-manager-storage-async-thread-pool-55, block-manager-storage-async-thread-pool-46, block-manager-storage-async-thread-pool-1, block-manager-storage-async-thread-pool-17, shuffle-boss-396-1, block-manager-storage-async-thread-pool-66, block-manager-storage-async-thread-pool-98, block-manager-storage-async-thread-pool-35, block-manager-storage-async-thread-pool-8, block-manager-storage-async-thread-pool-99, block-manager-storage-async-thread-pool-60, block-manager-storage-async-thread-pool-88, block-manager-storage-async-thread-pool-21, block-manager-storage-async-thread-pool-49, block-manager-storage-async-thread-pool-90, block-manager-storage-async-thread-pool-45, block-manager-storage-async-thread-pool-34, Idle Worker Monitor for python3, block-manager-storage-async-thread-pool-81, block-manager-storage-async-thread-pool-43, block-manager-storage-async-thread-pool-69, block-manager-storage-async-thread-pool-11, block-manager-storage-async-thread-pool-16, block-manager-storage-async-thread-pool-22, block-manager-storage-async-thread-pool-97, block-manager-storage-async-thread-pool-64, rpc-boss-393-1, block-manager-storage-async-thread-pool-75, block-manager-storage-async-thread-pool-53, block-manager-storage-async-thread-pool-27, block-manager-storage-async-thread-pool-58, block-manager-storage-async-thread-pool-70 =====

[info] BroadcastJoinSuite:
[info] - postgreSQL/select_having.sql (5 seconds, 789 milliseconds)
[info] - 0.12: CREATE TABLE AS SELECT (11 seconds, 183 milliseconds)
00:03:21.446 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-eba5f3a6-1ba6-4bfa-8d1a-27406a47521d/tbl specified for non-external table:tbl
[info] - postgreSQL/select.sql (6 seconds, 262 milliseconds)
[info] - unsafe broadcast hash join updates peak execution memory (7 seconds, 362 milliseconds)
[info] - 0.12: CREATE Partitioned TABLE AS SELECT (6 seconds, 158 milliseconds)
[info] - unsafe broadcast hash outer join updates peak execution memory (348 milliseconds)
[info] - unsafe broadcast left semi join updates peak execution memory (453 milliseconds)
[info] - broadcast hint isn't bothered by autoBroadcastJoinThreshold set to low values (92 milliseconds)
[info] - broadcast hint isn't bothered by a disabled autoBroadcastJoinThreshold (57 milliseconds)
[info] - SPARK-23192: broadcast hint should be retained after using the cached data (59 milliseconds)
00:03:28.010 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: A join hint (strategy=broadcast) is specified but it is not part of a join relation.
[info] - SPARK-23214: cached data should not carry extra hint info (61 milliseconds)
[info] - broadcast hint isn't propagated after a join (97 milliseconds)
[info] - broadcast hint programming API (359 milliseconds)
00:03:28.575 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Could not find relation 'v' specified in hint 'BROADCAST(v)'.
00:03:28.647 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Could not find relation 'v' specified in hint 'BROADCASTJOIN(v)'.
00:03:28.722 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Could not find relation 'v' specified in hint 'MAPJOIN(v)'.
[info] - broadcast hint in SQL (299 milliseconds)
[info] - join key rewritten (1 millisecond)
[info] - Shouldn't change broadcast join buildSide if user clearly specified (999 milliseconds)
[info] - Shouldn't bias towards build right if user didn't specify (300 milliseconds)
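The hint tests above cover both ways of requesting a broadcast join, with BROADCAST, BROADCASTJOIN and MAPJOIN accepted as synonyms in SQL (the HintErrorLogger warnings nearby show what happens when a hinted relation name does not resolve). A minimal sketch, assuming DataFrames large and small, both also registered as temp views:

    import org.apache.spark.sql.functions.broadcast

    // Programming API: ask the planner to broadcast `small` into the join.
    large.join(broadcast(small), "id")

    // SQL form: BROADCAST, BROADCASTJOIN and MAPJOIN are interchangeable.
    spark.sql("SELECT /*+ BROADCAST(s) */ * FROM large l JOIN small s ON l.id = s.id")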
[info] - 0.12: Delete the temporary staging directory and files after each insert (4 seconds, 575 milliseconds)
00:03:34.254 WARN org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils: Encountered AvroSerdeException determining schema. Returning signal schema to indicate problem
org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Neither avro.schema.literal nor avro.schema.url specified, can't determine table schema
	at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.determineSchemaOrThrowException(AvroSerdeUtils.java:66)
	at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.determineSchemaOrReturnErrorSchema(AvroSerdeUtils.java:87)
	at org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:60)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:254)
	at org.apache.hadoop.hive.ql.metadata.Partition.getDeserializer(Partition.java:252)
	at org.apache.hadoop.hive.ql.metadata.Partition.initialize(Partition.java:218)
	at org.apache.hadoop.hive.ql.metadata.Partition.<init>(Partition.java:166)
	at org.apache.hadoop.hive.ql.metadata.Hive.createPartition(Hive.java:1513)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_12.$anonfun$createPartitions$1(HiveShim.scala:341)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.sql.hive.client.Shim_v0_12.createPartitions(HiveShim.scala:316)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$createPartitions$1(HiveClientImpl.scala:602)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:290)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:223)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:222)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:272)
	at org.apache.spark.sql.hive.client.HiveClientImpl.createPartitions(HiveClientImpl.scala:602)
	at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$createPartitions$1(HiveExternalCatalog.scala:1006)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:103)
	at org.apache.spark.sql.hive.HiveExternalCatalog.createPartitions(HiveExternalCatalog.scala:989)
	at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.createPartitions(ExternalCatalogWithListener.scala:201)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createPartitions(SessionCatalog.scala:1000)
	at org.apache.spark.sql.execution.command.AlterTableAddPartitionCommand.$anonfun$run$15(ddl.scala:484)
	at org.apache.spark.sql.execution.command.AlterTableAddPartitionCommand.$anonfun$run$15$adapted(ddl.scala:483)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.sql.execution.command.AlterTableAddPartitionCommand.run(ddl.scala:483)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
	at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3681)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3679)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:92)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:89)
	at org.apache.spark.sql.hive.test.TestHiveSparkSession.$anonfun$sql$1(TestHive.scala:241)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.hive.test.TestHiveSparkSession.sql(TestHive.scala:239)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$113(VersionsSuite.scala:925)
	at org.apache.spark.sql.hive.client.VersionsSuite.withTable(VersionsSuite.scala:66)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$112(VersionsSuite.scala:894)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$112$adapted(VersionsSuite.scala:862)
	at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:188)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$111(VersionsSuite.scala:862)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:189)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
	at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:187)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:199)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:199)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:181)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:61)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:232)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:232)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:231)
	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1562)
	at org.scalatest.Suite.run(Suite.scala:1112)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1562)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:236)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:236)
	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:235)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
	at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:34.311 WARN org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils: Encountered AvroSerdeException determining schema. Returning signal schema to indicate problem
org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Neither avro.schema.literal nor avro.schema.url specified, can't determine table schema
	at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.determineSchemaOrThrowException(AvroSerdeUtils.java:66)
	at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.determineSchemaOrReturnErrorSchema(AvroSerdeUtils.java:87)
	at org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:60)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:254)
	at org.apache.hadoop.hive.ql.metadata.Partition.getDeserializer(Partition.java:252)
	at org.apache.hadoop.hive.ql.metadata.Partition.initialize(Partition.java:218)
	at org.apache.hadoop.hive.ql.metadata.Partition.<init>(Partition.java:108)
	at org.apache.hadoop.hive.ql.metadata.Hive.createPartition(Hive.java:1551)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.sql.hive.client.Shim_v0_12.$anonfun$createPartitions$1(HiveShim.scala:341)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.apache.spark.sql.hive.client.Shim_v0_12.createPartitions(HiveShim.scala:316)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$createPartitions$1(HiveClientImpl.scala:602)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:290)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:223)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:222)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:272)
	at org.apache.spark.sql.hive.client.HiveClientImpl.createPartitions(HiveClientImpl.scala:602)
	at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$createPartitions$1(HiveExternalCatalog.scala:1006)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:103)
	at org.apache.spark.sql.hive.HiveExternalCatalog.createPartitions(HiveExternalCatalog.scala:989)
	at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.createPartitions(ExternalCatalogWithListener.scala:201)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createPartitions(SessionCatalog.scala:1000)
	at org.apache.spark.sql.execution.command.AlterTableAddPartitionCommand.$anonfun$run$15(ddl.scala:484)
	at org.apache.spark.sql.execution.command.AlterTableAddPartitionCommand.$anonfun$run$15$adapted(ddl.scala:483)
	at scala.collection.Iterator.foreach(Iterator.scala:941)
	at scala.collection.Iterator.foreach$(Iterator.scala:941)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
	at org.apache.spark.sql.execution.command.AlterTableAddPartitionCommand.run(ddl.scala:483)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
	at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3681)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3679)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:92)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:89)
	at org.apache.spark.sql.hive.test.TestHiveSparkSession.$anonfun$sql$1(TestHive.scala:241)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.hive.test.TestHiveSparkSession.sql(TestHive.scala:239)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$113(VersionsSuite.scala:925)
	at org.apache.spark.sql.hive.client.VersionsSuite.withTable(VersionsSuite.scala:66)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$112(VersionsSuite.scala:894)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$112$adapted(VersionsSuite.scala:862)
	at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:188)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$111(VersionsSuite.scala:862)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:189)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
	at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:187)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:199)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:199)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:181)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:61)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:232)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:232)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:231)
	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1562)
	at org.scalatest.Suite.run(Suite.scala:1112)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1562)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:236)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:236)
	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:235)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
	at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:34.743 WARN org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils: Encountered AvroSerdeException determining schema. Returning signal schema to indicate problem
org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Neither avro.schema.literal nor avro.schema.url specified, can't determine table schema
	at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.determineSchemaOrThrowException(AvroSerdeUtils.java:66)
	at org.apache.hadoop.hive.serde2.avro.AvroSerdeUtils.determineSchemaOrReturnErrorSchema(AvroSerdeUtils.java:87)
	at org.apache.hadoop.hive.serde2.avro.AvroSerDe.initialize(AvroSerDe.java:60)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:254)
	at org.apache.hadoop.hive.ql.metadata.Partition.getDeserializer(Partition.java:252)
	at org.apache.hadoop.hive.ql.metadata.Partition.initialize(Partition.java:218)
	at org.apache.hadoop.hive.ql.metadata.Partition.<init>(Partition.java:108)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitions(Hive.java:1781)
	at org.apache.hadoop.hive.ql.metadata.Hive.getPartitions(Hive.java:1799)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$getPartitions$1(HiveClientImpl.scala:730)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:290)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:223)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:222)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:272)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitions(HiveClientImpl.scala:722)
	at org.apache.spark.sql.hive.client.HiveClient.getPartitions(HiveClient.scala:222)
	at org.apache.spark.sql.hive.client.HiveClient.getPartitions$(HiveClient.scala:218)
	at org.apache.spark.sql.hive.client.HiveClientImpl.getPartitions(HiveClientImpl.scala:90)
	at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$listPartitions$1(HiveExternalCatalog.scala:1246)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:103)
	at org.apache.spark.sql.hive.HiveExternalCatalog.listPartitions(HiveExternalCatalog.scala:1244)
	at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.listPartitions(ExternalCatalogWithListener.scala:254)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.listPartitions(SessionCatalog.scala:1116)
	at org.apache.spark.sql.hive.execution.HiveTableScanExec.rawPartitions$lzycompute(HiveTableScanExec.scala:196)
	at org.apache.spark.sql.hive.execution.HiveTableScanExec.rawPartitions(HiveTableScanExec.scala:185)
	at org.apache.spark.sql.hive.execution.HiveTableScanExec.prunedPartitions$lzycompute(HiveTableScanExec.scala:179)
	at org.apache.spark.sql.hive.execution.HiveTableScanExec.prunedPartitions(HiveTableScanExec.scala:165)
	at org.apache.spark.sql.hive.execution.HiveTableScanExec.$anonfun$doExecute$2(HiveTableScanExec.scala:210)
	at org.apache.spark.util.Utils$.withDummyCallSite(Utils.scala:2509)
	at org.apache.spark.sql.hive.execution.HiveTableScanExec.doExecute(HiveTableScanExec.scala:210)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171)
	at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:316)
	at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:382)
	at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3690)
	at org.apache.spark.sql.Dataset.$anonfun$collect$1(Dataset.scala:2959)
	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3681)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3679)
	at org.apache.spark.sql.Dataset.collect(Dataset.scala:2959)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$113(VersionsSuite.scala:932)
	at org.apache.spark.sql.hive.client.VersionsSuite.withTable(VersionsSuite.scala:66)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$112(VersionsSuite.scala:894)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$112$adapted(VersionsSuite.scala:862)
	at org.apache.spark.SparkFunSuite.withTempDir(SparkFunSuite.scala:188)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$111(VersionsSuite.scala:862)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:189)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
	at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:187)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:199)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:199)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:181)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:61)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:232)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:232)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:231)
	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1562)
	at org.scalatest.Suite.run(Suite.scala:1112)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1562)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:236)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:236)
	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:235)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
	at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
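The AvroSerdeException traces above are the Hive Avro SerDe's standard complaint when neither the table nor the partition properties carry an Avro schema. A minimal sketch of the usual remedy, declaring avro.schema.literal up front, assuming a Hive-enabled SparkSession named `spark` (the table, partition column, and record definition are illustrative, not taken from the suite):

// Sketch only: give the Avro SerDe a schema via TBLPROPERTIES so
// determineSchemaOrThrowException has something to work with.
spark.sql(
  """CREATE TABLE avro_tbl (id BIGINT)
    |PARTITIONED BY (ds STRING)
    |ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
    |STORED AS
    |  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
    |  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
    |TBLPROPERTIES ('avro.schema.literal' = '{"type":"record","name":"avro_tbl","namespace":"test","fields":[{"name":"id","type":"long"}]}')""".stripMargin)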
00:03:35.149 ERROR org.apache.spark.sql.execution.exchange.BroadcastExchangeExec: Could not execute broadcast in 5 secs.
java.util.concurrent.TimeoutException
	at java.util.concurrent.FutureTask.get(FutureTask.java:205)
	at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:194)
	at org.apache.spark.sql.execution.InputAdapter.doExecuteBroadcast(WholeStageCodegenExec.scala:516)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeBroadcast$1(SparkPlan.scala:188)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
	at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:184)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.prepareBroadcast(BroadcastHashJoinExec.scala:203)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.prepareRelation(BroadcastHashJoinExec.scala:217)
	at org.apache.spark.sql.execution.joins.HashJoin.codegenInner(HashJoin.scala:442)
	at org.apache.spark.sql.execution.joins.HashJoin.codegenInner$(HashJoin.scala:441)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.codegenInner(BroadcastHashJoinExec.scala:40)
	at org.apache.spark.sql.execution.joins.HashJoin.doConsume(HashJoin.scala:350)
	at org.apache.spark.sql.execution.joins.HashJoin.doConsume$(HashJoin.scala:348)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doConsume(BroadcastHashJoinExec.scala:40)
	at org.apache.spark.sql.execution.CodegenSupport.constructDoConsumeFunction(WholeStageCodegenExec.scala:222)
	at org.apache.spark.sql.execution.CodegenSupport.consume(WholeStageCodegenExec.scala:193)
	at org.apache.spark.sql.execution.CodegenSupport.consume$(WholeStageCodegenExec.scala:150)
	at org.apache.spark.sql.execution.ProjectExec.consume(basicPhysicalOperators.scala:41)
	at org.apache.spark.sql.execution.ProjectExec.doConsume(basicPhysicalOperators.scala:87)
	at org.apache.spark.sql.execution.CodegenSupport.consume(WholeStageCodegenExec.scala:195)
	at org.apache.spark.sql.execution.CodegenSupport.consume$(WholeStageCodegenExec.scala:150)
	at org.apache.spark.sql.execution.RangeExec.consume(basicPhysicalOperators.scala:379)
	at org.apache.spark.sql.execution.RangeExec.doProduce(basicPhysicalOperators.scala:566)
	at org.apache.spark.sql.execution.CodegenSupport.$anonfun$produce$1(WholeStageCodegenExec.scala:96)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
	at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:91)
	at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:91)
	at org.apache.spark.sql.execution.RangeExec.produce(basicPhysicalOperators.scala:379)
	at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:54)
	at org.apache.spark.sql.execution.CodegenSupport.$anonfun$produce$1(WholeStageCodegenExec.scala:96)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
	at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:91)
	at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:91)
	at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:41)
	at org.apache.spark.sql.execution.joins.HashJoin.doProduce(HashJoin.scala:345)
	at org.apache.spark.sql.execution.joins.HashJoin.doProduce$(HashJoin.scala:344)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doProduce(BroadcastHashJoinExec.scala:40)
	at org.apache.spark.sql.execution.CodegenSupport.$anonfun$produce$1(WholeStageCodegenExec.scala:96)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
	at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:91)
	at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:91)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.produce(BroadcastHashJoinExec.scala:40)
	at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:54)
	at org.apache.spark.sql.execution.CodegenSupport.$anonfun$produce$1(WholeStageCodegenExec.scala:96)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
	at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:91)
	at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:91)
	at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:41)
	at org.apache.spark.sql.execution.WholeStageCodegenExec.doCodeGen(WholeStageCodegenExec.scala:656)
	at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:719)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171)
	at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:316)
	at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:382)
	at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3690)
	at org.apache.spark.sql.Dataset.$anonfun$collect$1(Dataset.scala:2959)
	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3681)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3679)
	at org.apache.spark.sql.Dataset.collect(Dataset.scala:2959)
	at org.apache.spark.sql.execution.joins.BroadcastJoinSuiteBase.$anonfun$new$31(BroadcastJoinSuite.scala:415)
	at org.scalatest.Assertions.intercept(Assertions.scala:749)
	at org.scalatest.Assertions.intercept$(Assertions.scala:746)
	at org.scalatest.funsuite.AnyFunSuite.intercept(AnyFunSuite.scala:1562)
	at org.apache.spark.sql.execution.joins.BroadcastJoinSuiteBase.$anonfun$new$30(BroadcastJoinSuite.scala:414)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:54)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:38)
	at org.apache.spark.sql.execution.joins.BroadcastJoinSuiteBase.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(BroadcastJoinSuite.scala:45)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:246)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:244)
	at org.apache.spark.sql.execution.joins.BroadcastJoinSuiteBase.withSQLConf(BroadcastJoinSuite.scala:45)
	at org.apache.spark.sql.execution.joins.BroadcastJoinSuiteBase.$anonfun$new$28(BroadcastJoinSuite.scala:413)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.execution.adaptive.DisableAdaptiveExecutionSuite.$anonfun$test$5(AdaptiveTestUtils.scala:67)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf(SQLHelper.scala:54)
	at org.apache.spark.sql.catalyst.plans.SQLHelper.withSQLConf$(SQLHelper.scala:38)
	at org.apache.spark.sql.execution.joins.BroadcastJoinSuiteBase.org$apache$spark$sql$test$SQLTestUtilsBase$$super$withSQLConf(BroadcastJoinSuite.scala:45)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf(SQLTestUtils.scala:246)
	at org.apache.spark.sql.test.SQLTestUtilsBase.withSQLConf$(SQLTestUtils.scala:244)
	at org.apache.spark.sql.execution.joins.BroadcastJoinSuiteBase.withSQLConf(BroadcastJoinSuite.scala:45)
	at org.apache.spark.sql.execution.adaptive.DisableAdaptiveExecutionSuite.$anonfun$test$4(AdaptiveTestUtils.scala:67)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:189)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
	at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:187)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:199)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:199)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:181)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:61)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:232)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:232)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:231)
	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1562)
	at org.scalatest.Suite.run(Suite.scala:1112)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1562)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:236)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:236)
	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:235)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
	at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[info] - Broadcast timeout (5 seconds, 46 milliseconds)
00:03:35.176 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 6.0 (TID 12) (amp-jenkins-worker-04.amp executor 1): TaskKilled (Stage cancelled)
00:03:35.178 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 6.0 (TID 13) (amp-jenkins-worker-04.amp executor 0): TaskKilled (Stage cancelled)
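The TimeoutException above is the point of this test: BroadcastExchangeExec aborts once building the broadcast side outlives spark.sql.broadcastTimeout, and the TaskKilled warnings are just the cancelled stage draining. A hedged sketch of the knob and plan shape involved; this toy query itself broadcasts quickly, the suite stalls the build side deliberately, and the session and column names here are assumptions:

import org.apache.spark.sql.functions.broadcast

// Lower the timeout to mirror the test's five-second budget (default is 300s).
spark.conf.set("spark.sql.broadcastTimeout", "5")
val large = spark.range(1000000L).withColumnRenamed("id", "k")
val small = spark.range(100L).withColumnRenamed("id", "k")
// broadcast() forces BroadcastHashJoinExec regardless of size estimates, so a
// stalled build side surfaces as the java.util.concurrent.TimeoutException above.
large.join(broadcast(small), "k").collect()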
[info] - 0.12: SPARK-13709: reading partitioned Avro table with nested schema (3 seconds, 855 milliseconds)
00:03:37.876 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Could not persist `default`.`t` in a Hive compatible way. Persisting it into Hive metastore in Spark SQL specific format.
java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.spark.util.Utils$.classForName(Utils.scala:206)
	at org.apache.spark.sql.hive.client.HiveClientImpl$.toInputFormat(HiveClientImpl.scala:1023)
	at org.apache.spark.sql.hive.client.HiveClientImpl$.$anonfun$toHiveTable$8(HiveClientImpl.scala:1062)
	at scala.Option.map(Option.scala:230)
	at org.apache.spark.sql.hive.client.HiveClientImpl$.toHiveTable(HiveClientImpl.scala:1062)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$createTable$1(HiveClientImpl.scala:542)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:290)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:223)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:222)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:272)
	at org.apache.spark.sql.hive.client.HiveClientImpl.createTable(HiveClientImpl.scala:540)
	at org.apache.spark.sql.hive.HiveExternalCatalog.saveTableIntoHive(HiveExternalCatalog.scala:510)
	at org.apache.spark.sql.hive.HiveExternalCatalog.createDataSourceTable(HiveExternalCatalog.scala:398)
	at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$createTable$1(HiveExternalCatalog.scala:275)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:103)
	at org.apache.spark.sql.hive.HiveExternalCatalog.createTable(HiveExternalCatalog.scala:246)
	at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.createTable(ExternalCatalogWithListener.scala:94)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:345)
	at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:185)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:127)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:126)
	at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:985)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:985)
	at org.apache.spark.sql.DataFrameWriter.createTable(DataFrameWriter.scala:749)
	at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:727)
	at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:625)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$115(VersionsSuite.scala:940)
	at org.apache.spark.sql.hive.client.VersionsSuite.withTable(VersionsSuite.scala:66)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$114(VersionsSuite.scala:939)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:189)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
	at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:187)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:199)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:199)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:181)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:61)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:232)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:232)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:231)
	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1562)
	at org.scalatest.Suite.run(Suite.scala:1112)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1562)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:236)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:236)
	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:235)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
	at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
00:03:39.213 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: Could not persist `default`.`t1` in a Hive compatible way. Persisting it into Hive metastore in Spark SQL specific format.
java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
	at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.spark.util.Utils$.classForName(Utils.scala:206)
	at org.apache.spark.sql.hive.client.HiveClientImpl$.toInputFormat(HiveClientImpl.scala:1023)
	at org.apache.spark.sql.hive.client.HiveClientImpl$.$anonfun$toHiveTable$8(HiveClientImpl.scala:1062)
	at scala.Option.map(Option.scala:230)
	at org.apache.spark.sql.hive.client.HiveClientImpl$.toHiveTable(HiveClientImpl.scala:1062)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$createTable$1(HiveClientImpl.scala:542)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.client.HiveClientImpl.$anonfun$withHiveState$1(HiveClientImpl.scala:290)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:223)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:222)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:272)
	at org.apache.spark.sql.hive.client.HiveClientImpl.createTable(HiveClientImpl.scala:540)
	at org.apache.spark.sql.hive.HiveExternalCatalog.saveTableIntoHive(HiveExternalCatalog.scala:510)
	at org.apache.spark.sql.hive.HiveExternalCatalog.createDataSourceTable(HiveExternalCatalog.scala:398)
	at org.apache.spark.sql.hive.HiveExternalCatalog.$anonfun$createTable$1(HiveExternalCatalog.scala:275)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:103)
	at org.apache.spark.sql.hive.HiveExternalCatalog.createTable(HiveExternalCatalog.scala:246)
	at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.createTable(ExternalCatalogWithListener.scala:94)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:345)
	at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:185)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:120)
	at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
	at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3681)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3679)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:92)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:89)
	at org.apache.spark.sql.hive.test.TestHiveSparkSession.$anonfun$sql$1(TestHive.scala:241)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:769)
	at org.apache.spark.sql.hive.test.TestHiveSparkSession.sql(TestHive.scala:239)
	at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$115(VersionsSuite.scala:942)
	at org.apache.spark.sql.hive.client.VersionsSuite.withTable(VersionsSuite.scala:66)
	at org.apache.spark.sql.hive.client.VersionsSuite.$anonfun$new$114(VersionsSuite.scala:939)
	at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
	at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.funsuite.AnyFunSuiteLike$$anon$1.apply(AnyFunSuiteLike.scala:189)
	at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:176)
	at org.scalatest.funsuite.AnyFunSuiteLike.invokeWithFixture$1(AnyFunSuiteLike.scala:187)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTest$1(AnyFunSuiteLike.scala:199)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest(AnyFunSuiteLike.scala:199)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTest$(AnyFunSuiteLike.scala:181)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:234)
	at org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:227)
	at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:61)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$runTests$1(AnyFunSuiteLike.scala:232)
	at org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:413)
	at scala.collection.immutable.List.foreach(List.scala:392)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:475)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests(AnyFunSuiteLike.scala:232)
	at org.scalatest.funsuite.AnyFunSuiteLike.runTests$(AnyFunSuiteLike.scala:231)
	at org.scalatest.funsuite.AnyFunSuite.runTests(AnyFunSuite.scala:1562)
	at org.scalatest.Suite.run(Suite.scala:1112)
	at org.scalatest.Suite.run$(Suite.scala:1094)
	at org.scalatest.funsuite.AnyFunSuite.org$scalatest$funsuite$AnyFunSuiteLike$$super$run(AnyFunSuite.scala:1562)
	at org.scalatest.funsuite.AnyFunSuiteLike.$anonfun$run$1(AnyFunSuiteLike.scala:236)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:535)
	at org.scalatest.funsuite.AnyFunSuiteLike.run(AnyFunSuiteLike.scala:236)
	at org.scalatest.funsuite.AnyFunSuiteLike.run$(AnyFunSuiteLike.scala:235)
	at org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:61)
	at org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
	at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
	at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
	at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
	at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
	at sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
[info] - 0.12: CTAS for managed data source tables (4 seconds, 361 milliseconds)
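The ClassNotFoundException for MapredParquetInputFormat above is a classpath gap: Hive's Parquet input format only appeared in Hive 0.13, so the 0.12 client under test cannot load it and TestHiveExternalCatalog falls back to persisting the table in Spark's own metastore format. A sketch of the configuration that puts a session on such an old client, illustrative rather than the suite's actual wiring:

import org.apache.spark.sql.SparkSession

// Run the metastore client against Hive 0.12 and let Spark fetch matching jars.
val spark = SparkSession.builder()
  .master("local[2]")
  .appName("old-metastore-client")
  .config("spark.sql.hive.metastore.version", "0.12.0")
  .config("spark.sql.hive.metastore.jars", "maven")
  .enableHiveSupport()
  .getOrCreate()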
00:03:40.890 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-eba5f3a6-1ba6-4bfa-8d1a-27406a47521d/tab1 specified for non-external table:tab1
00:03:41.024 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: The table schema given by Hive metastore(struct<f0:binary,ds:string>) is different from the schema when this table was created by Spark SQL(struct<f0:decimal(38,2),ds:string>). We have to fall back to the table schema from Hive metastore which is not case preserving.
00:03:41.167 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: The table schema given by Hive metastore(struct<f0:binary,ds:string>) is different from the schema when this table was created by Spark SQL(struct<f0:decimal(38,2),ds:string>). We have to fall back to the table schema from Hive metastore which is not case preserving.
00:03:41.315 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: The table schema given by Hive metastore(struct<f0:binary,ds:string>) is different from the schema when this table was created by Spark SQL(struct<f0:decimal(38,2),ds:string>). We have to fall back to the table schema from Hive metastore which is not case preserving.
00:03:41.674 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-eba5f3a6-1ba6-4bfa-8d1a-27406a47521d/tab1 specified for non-external table:tab1
00:03:41.778 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: The table schema given by Hive metastore(struct<f0:binary>) is different from the schema when this table was created by Spark SQL(struct<f0:decimal(38,2)>). We have to fall back to the table schema from Hive metastore which is not case preserving.
00:03:41.916 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: The table schema given by Hive metastore(struct<f0:binary>) is different from the schema when this table was created by Spark SQL(struct<f0:decimal(38,2)>). We have to fall back to the table schema from Hive metastore which is not case preserving.
00:03:42.041 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: The table schema given by Hive metastore(struct<f0:binary>) is different from the schema when this table was created by Spark SQL(struct<f0:decimal(38,2)>). We have to fall back to the table schema from Hive metastore which is not case preserving.
[info] - 0.12: Decimal support of Avro Hive serde (1 second, 538 milliseconds)
[info] - 0.12: read avro file containing decimal (1 second, 193 milliseconds)
00:03:44.118 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-eba5f3a6-1ba6-4bfa-8d1a-27406a47521d/tab1 specified for non-external table:tab1
[info] - broadcast join where streamed side's output partitioning is HashPartitioning (10 seconds, 714 milliseconds)
[info] - 0.12: SPARK-17920: Insert into/overwrite avro table (3 seconds, 363 milliseconds)
[info] - broadcast join where streamed side's output partitioning is PartitioningCollection (7 seconds, 983 milliseconds)
[info] - BroadcastHashJoinExec output partitioning scenarios for inner join (3 milliseconds)
[info] - BroadcastHashJoinExec output partitioning size should be limited with a config (4 milliseconds)
00:03:53.929 WARN org.apache.spark.sql.execution.joins.BroadcastJoinSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.joins.BroadcastJoinSuite, thread names: ExecutorRunner for app-20201022000319-0000/1, File appending thread for /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/work/app-20201022000319-0000/1/stderr, rpc-boss-402-1, rpc-boss-409-1, rpc-boss-399-1, shuffle-boss-413-1, rpc-boss-405-1, ExecutorRunner for app-20201022000319-0000/0, File appending thread for /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/work/app-20201022000319-0000/0/stderr =====

[info] UDFSuite:
00:03:53.953 WARN org.apache.spark.sql.SparkSession: An existing Spark session exists as the active or default session.
This probably means another suite leaked it. Attempting to stop it before continuing.
This existing Spark session was created at:

org.apache.spark.sql.execution.joins.BroadcastJoinSuiteBase.beforeAll(BroadcastJoinSuite.scala:58)
org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:212)
org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:61)
org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:318)
org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:513)
sbt.ForkMain$Run.lambda$runTest$1(ForkMain.java:413)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
java.lang.Thread.run(Thread.java:748)

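The leaked-session warning above is SparkSession bookkeeping noticing that an earlier suite (here BroadcastJoinSuiteBase) never stopped its session before UDFSuite started. A sketch of the teardown hygiene it asks for, with an illustrative suite name; only the SparkSession calls matter:

import org.apache.spark.sql.SparkSession
import org.scalatest.BeforeAndAfterAll
import org.scalatest.funsuite.AnyFunSuite

class TidySuite extends AnyFunSuite with BeforeAndAfterAll {
  private var spark: SparkSession = _

  override def beforeAll(): Unit = {
    super.beforeAll()
    spark = SparkSession.builder().master("local[2]").appName("TidySuite").getOrCreate()
  }

  override def afterAll(): Unit = {
    try {
      if (spark != null) spark.stop()    // releases the session this suite created
      SparkSession.clearActiveSession()  // so the next suite starts clean
      SparkSession.clearDefaultSession()
    } finally {
      super.afterAll()
    }
  }
}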
[info] - built-in fixed arity expressions (25 milliseconds)
[info] - built-in vararg expressions (16 milliseconds)
[info] - built-in expressions with multiple constructors (74 milliseconds)
[info] - count (9 milliseconds)
[info] - count distinct (9 milliseconds)
[info] - SPARK-8003 spark_partition_id (93 milliseconds)
[info] - SPARK-8005 input_file_name (773 milliseconds)
[info] - error reporting for incorrect number of arguments - builtin function (4 milliseconds)
[info] - error reporting for incorrect number of arguments - udf (5 milliseconds)
[info] - error reporting for undefined functions (122 milliseconds)
[info] - Simple UDF (227 milliseconds)
00:03:55.529 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function foo replaced a previously registered function.
[info] - UDF defined using UserDefinedFunction (103 milliseconds)
[info] - ZeroArgument non-deterministic UDF (606 milliseconds)
00:03:56.238 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function strlenscala replaced a previously registered function.
[info] - TwoArgument UDF (50 milliseconds)
[info] - UDF in a WHERE (120 milliseconds)
[info] - UDF in a HAVING (278 milliseconds)
[info] - UDF in a GROUP BY (285 milliseconds)
00:03:56.972 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function groupfunction replaced a previously registered function.
00:03:56.973 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function havingfilter replaced a previously registered function.
[info] - UDFs everywhere (397 milliseconds)
[info] - struct UDF (295 milliseconds)
[info] - udf that is transformed (69 milliseconds)
[info] - type coercion for udf inputs (54 milliseconds)
[info] - udf in different types (2 seconds, 548 milliseconds)
00:04:00.339 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function testdatafunc replaced a previously registered function.
[info] - SPARK-11716 UDFRegistration does not include the input data type in returned UDF (599 milliseconds)
[info] - SPARK-19338 Provide identical names for UDFs in the EXPLAIN output (946 milliseconds)
[info] - SPARK-23666 Do not display exprId in argument names (91 milliseconds)
[info] - cached Data should be used in the write path (1 second, 483 milliseconds)
[info] - SPARK-24891 Fix HandleNullInputsForUDF rule (149 milliseconds)
00:04:04.352 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function f replaced a previously registered function.
[info] - SPARK-24891 Fix HandleNullInputsForUDF rule - with table (933 milliseconds)
[info] - SPARK-25044 Verify null input handling for primitive types - with udf() (277 milliseconds)
00:04:05.013 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function f replaced a previously registered function.
[info] - SPARK-25044 Verify null input handling for primitive types - with udf.register (631 milliseconds)
[info] - SPARK-25044 Verify null input handling for primitive types - with udf(Any, DataType) (229 milliseconds)
[info] - use untyped Scala UDF should fail by default (2 milliseconds)
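For context, the failure that test expects comes from the untyped udf(function, DataType) variant, which Spark 3.0+ rejects by default in favor of the typed form; a sketch, with spark assumed in scope:

    import org.apache.spark.sql.functions.udf
    import org.apache.spark.sql.types.IntegerType

    val typed = udf((x: Int) => x + 1)   // typed variant: input types are known, always allowed
    // The untyped variant below throws AnalysisException at construction unless the
    // legacy flag is enabled first:
    //   spark.conf.set("spark.sql.legacy.allowUntypedScalaUDF", "true")
    // val untyped = udf((x: Int) => x + 1, IntegerType)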
[info] - SPARK-26308: udf with decimal (125 milliseconds)
[info] - SPARK-26308: udf with complex types of decimal (456 milliseconds)
[info] - SPARK-26323 Verify input type check - with udf() (108 milliseconds)
00:04:06.561 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function f replaced a previously registered function.
[info] - SPARK-26323 Verify input type check - with udf.register (332 milliseconds)
[info] - Using java.time.Instant in UDF (166 milliseconds)
[info] - Using java.time.LocalDate in UDF (147 milliseconds)
00:04:07.189 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function buildlocaldateinstanttype replaced a previously registered function.
00:04:07.331 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function buildlocaldateinstanttype replaced a previously registered function.
[info] - Using combined types of Instant/LocalDate in UDF (418 milliseconds)
00:04:07.629 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function buildtimestampinstanttype replaced a previously registered function.
00:04:07.767 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function buildtimestampinstanttype replaced a previously registered function.
[info] - Using combined types of Instant/Timestamp in UDF (424 milliseconds)
[info] - SPARK-32154: return null with or without explicit type (439 milliseconds)
[info] - SPARK-28321 0-args Java UDF should not be called only once (349 milliseconds)
[info] - SPARK-28521 error message for CAST(parameter types contains DataType) (3 milliseconds)
[info] - only one case class parameter (229 milliseconds)
[info] - one case class with primitive parameter (158 milliseconds)
[info] - multiple case class parameters (155 milliseconds)
[info] - input case class parameter and return case class (169 milliseconds)
00:04:09.514 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.0
[info] - any and case class parameter (178 milliseconds)
[info] - nested case class parameter (173 milliseconds)
[info] - case class as element type of Seq/Array (346 milliseconds)
[info] - 0.13: create client (23 seconds, 465 milliseconds)
log4j:WARN No appenders could be found for logger (org.apache.hadoop.hive.metastore.RetryingHMSHandler).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
[info] - 0.13: createDatabase (43 milliseconds)
[info] - case class as key/value type of Map (684 milliseconds)
[info] - case class as element of tuple (163 milliseconds)
[info] - 0.13: create/get/alter database should pick right user name as owner (79 milliseconds)
[info] - 0.13: createDatabase with null description (11 milliseconds)
[info] - 0.13: setCurrentDatabase (1 millisecond)
[info] - case class as generic type of Option (312 milliseconds)
[info] - 0.13: getDatabase (5 milliseconds)
[info] - more input fields than expect for case class (140 milliseconds)
[info] - 0.13: databaseExists (7 milliseconds)
[info] - less input fields than expect for case class (17 milliseconds)
[info] - wrong order of input fields for case class (109 milliseconds)
[info] - 0.13: listDatabases (21 milliseconds)
[info] - top level Option primitive type (109 milliseconds)
[info] - 0.13: alterDatabase (35 milliseconds)
[info] - array Option (205 milliseconds)
00:04:11.865 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 147.0 (TID 236)
org.apache.spark.SparkException: Failed to execute user defined function(UDFSuite$MalformedClassObject$MalformedNonPrimitiveFunction: (string) => int)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArithmeticException: / by zero
	at org.apache.spark.sql.UDFSuite$MalformedClassObject$MalformedNonPrimitiveFunction.apply(UDFSuite.scala:757)
	at org.apache.spark.sql.UDFSuite$MalformedClassObject$MalformedNonPrimitiveFunction.apply(UDFSuite.scala:756)
	... 17 more
00:04:11.867 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 147.0 (TID 236) (amp-jenkins-worker-04.amp executor driver): org.apache.spark.SparkException: Failed to execute user defined function(UDFSuite$MalformedClassObject$MalformedNonPrimitiveFunction: (string) => int)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArithmeticException: / by zero
	at org.apache.spark.sql.UDFSuite$MalformedClassObject$MalformedNonPrimitiveFunction.apply(UDFSuite.scala:757)
	at org.apache.spark.sql.UDFSuite$MalformedClassObject$MalformedNonPrimitiveFunction.apply(UDFSuite.scala:756)
	... 17 more

00:04:11.867 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 147.0 failed 1 times; aborting job
00:04:11.935 ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 148.0 (TID 237)
org.apache.spark.SparkException: Failed to execute user defined function(UDFSuite$MalformedClassObject$MalformedPrimitiveFunction: (int) => int)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArithmeticException: / by zero
	at org.apache.spark.sql.UDFSuite$MalformedClassObject$MalformedPrimitiveFunction.apply$mcII$sp(UDFSuite.scala:761)
	at org.apache.spark.sql.UDFSuite$MalformedClassObject$MalformedPrimitiveFunction.apply(UDFSuite.scala:761)
	at org.apache.spark.sql.UDFSuite$MalformedClassObject$MalformedPrimitiveFunction.apply(UDFSuite.scala:760)
	... 17 more
00:04:11.937 WARN org.apache.spark.scheduler.TaskSetManager: Lost task 0.0 in stage 148.0 (TID 237) (amp-jenkins-worker-04.amp executor driver): org.apache.spark.SparkException: Failed to execute user defined function(UDFSuite$MalformedClassObject$MalformedPrimitiveFunction: (int) => int)
	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:756)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:340)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:898)
	at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:898)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:484)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1426)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:487)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ArithmeticException: / by zero
	at org.apache.spark.sql.UDFSuite$MalformedClassObject$MalformedPrimitiveFunction.apply$mcII$sp(UDFSuite.scala:761)
	at org.apache.spark.sql.UDFSuite$MalformedClassObject$MalformedPrimitiveFunction.apply(UDFSuite.scala:761)
	at org.apache.spark.sql.UDFSuite$MalformedClassObject$MalformedPrimitiveFunction.apply(UDFSuite.scala:760)
	... 17 more

00:04:11.938 ERROR org.apache.spark.scheduler.TaskSetManager: Task 0 in stage 148.0 failed 1 times; aborting job
[info] - SPARK-32238: Use Utils.getSimpleName to avoid hitting Malformed class name (139 milliseconds)
[info] - 0.13: dropDatabase (279 milliseconds)
[info] - SPARK-32307: Aggregation that uses map type input UDF as group expression (367 milliseconds)
00:04:12.310 WARN org.apache.spark.sql.catalyst.analysis.SimpleFunctionRegistry: The function key replaced a previously registered function.
[info] - 0.13: createTable (329 milliseconds)
[info] - SPARK-32307: Aggregation that uses array type input UDF as group expression (298 milliseconds)
[info] - SPARK-32459: UDF should not fail on WrappedArray (197 milliseconds)
00:04:12.828 WARN org.apache.spark.sql.UDFSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.UDFSuite, thread names: rpc-boss-416-1, shuffle-boss-419-1 =====

[info] CachedTableSuite:
[info] - 0.13: loadTable (299 milliseconds)
[info] - 0.13: tableExists (23 milliseconds)
[info] - 0.13: getTable (26 milliseconds)
[info] - cache temp table (32 milliseconds)
[info] - unpersist an uncached table will not raise exception (8 milliseconds)
[info] - 0.13: getTableOption (13 milliseconds)
[info] - cache table as select (164 milliseconds)
[info] - uncaching temp table (56 milliseconds)
[info] - 0.13: getTablesByName (92 milliseconds)
[info] - 0.13: getTablesByName when multiple tables (40 milliseconds)
[info] - 0.13: getTablesByName when some tables do not exist (23 milliseconds)
[info] - 0.13: getTablesByName when contains invalid name (26 milliseconds)
[info] - 0.13: getTablesByName when empty (11 milliseconds)
[info] - 0.13: alterTable(table: CatalogTable) (63 milliseconds)
[info] - 0.13: alterTable - should respect the original catalog table's owner name (102 milliseconds)
[info] - 0.13: alterTable(dbName: String, tableName: String, table: CatalogTable) (64 milliseconds)
[info] - 0.13: alterTable - rename (109 milliseconds)
[info] - too big for memory (1 second, 499 milliseconds)
[info] - calling .cache() should use in-memory columnar caching (12 milliseconds)
[info] - calling .unpersist() should drop in-memory columnar cache (87 milliseconds)
[info] - isCached (25 milliseconds)
[info] - 0.13: alterTable - change database (111 milliseconds)
00:04:15.199 WARN org.apache.spark.sql.execution.CacheManager: Asked to cache already cached data.
[info] - SPARK-1669: cacheTable should be idempotent (35 milliseconds)
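The CacheManager warning just above is the benign path this idempotence test exercises: a second cache call is a no-op. A sketch, assuming a registered temp view named testData:

    spark.catalog.cacheTable("testData")    // first call caches the plan
    spark.catalog.cacheTable("testData")    // no-op; logs "Asked to cache already cached data."
    spark.catalog.uncacheTable("testData")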
[info] - 0.13: alterTable - change database and table names (70 milliseconds)
[info] - read from cached table and uncache (270 milliseconds)
[info] - 0.13: listTables(database) (12 milliseconds)
[info] - SELECT star from cached table (156 milliseconds)
[info] - 0.13: listTables(database, pattern) (51 milliseconds)
[info] - 0.13: listTablesByType(database, pattern, tableType) (46 milliseconds)
[info] - Self-join cached (947 milliseconds)
[info] - 'CACHE TABLE' and 'UNCACHE TABLE' SQL statement (112 milliseconds)
[info] - 0.13: dropTable (763 milliseconds)
[info] - CACHE TABLE tableName AS SELECT * FROM anotherTable (123 milliseconds)
[info] - 0.13: sql create partitioned table (43 milliseconds)
[info] - CACHE TABLE tableName AS SELECT ... (134 milliseconds)
[info] - CACHE LAZY TABLE tableName (110 milliseconds)
[info] - SQL interface support storageLevel(DISK_ONLY) (114 milliseconds)
00:04:17.185 WARN org.apache.spark.sql.execution.command.CacheTableCommand: Invalid options: a -> 1, b -> 2
[info] - 0.13: createPartitions (145 milliseconds)
[info] - SQL interface support storageLevel(DISK_ONLY) with invalid options (102 milliseconds)
[info] - 0.13: getPartitionNames(catalogTable) (28 milliseconds)
[info] - SQL interface support storageLevel(MEMORY_ONLY) (92 milliseconds)
[info] - SQL interface cache SELECT ... support storageLevel(DISK_ONLY) (127 milliseconds)
[info] - SQL interface support storageLevel(Invalid StorageLevel) (7 milliseconds)
[info] - 0.13: getPartitions(catalogTable) (124 milliseconds)
[info] - SQL interface support storageLevel(with LAZY) (132 milliseconds)
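The storageLevel tests above drive the SQL caching syntax, and the "Invalid options: a -> 1, b -> 2" warning earlier in this suite comes from unrecognized option keys being ignored. A sketch, with src and cached_src as hypothetical table names:

    spark.sql("CACHE TABLE cached_src OPTIONS ('storageLevel' 'DISK_ONLY') AS SELECT * FROM src")
    spark.sql("CACHE LAZY TABLE src")      // LAZY defers materialization until first use
    spark.sql("UNCACHE TABLE cached_src")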
[info] - InMemoryRelation statistics (103 milliseconds)
[info] - Drops temporary table (19 milliseconds)
[info] - Drops cached temporary table (61 milliseconds)
[info] - 0.13: getPartitionsByFilter (212 milliseconds)
[info] - Clear all cache (137 milliseconds)
[info] - 0.13: getPartition (49 milliseconds)
[info] - 0.13: getPartitionOption(db: String, table: String, spec: TablePartitionSpec) (26 milliseconds)
[info] - 0.13: getPartitionOption(table: CatalogTable, spec: TablePartitionSpec) (32 milliseconds)
[info] - 0.13: getPartitions(db: String, table: String) (40 milliseconds)
[info] - 0.13: loadPartition (141 milliseconds)
[info] - 0.13: loadDynamicPartitions (15 milliseconds)
[info] - Ensure accumulators to be cleared after GC when uncacheTable (1 second, 159 milliseconds)
[info] - SPARK-10327 Cache Table is not working while subquery has alias in its project list (103 milliseconds)
[info] - 0.13: renamePartitions (113 milliseconds)
[info] - 0.13: alterPartitions (156 milliseconds)
00:04:19.770 ERROR org.apache.spark.sql.hive.client.HiveClientImpl: 
======================
Attempt to drop the partition specs in table 'src_part' database 'default':
Map(key1 -> 1, key2 -> 3)
In this attempt, the following partitions have been dropped successfully:

The remaining partitions have not been dropped:
[1, 3]
======================
[info] - 0.13: dropPartitions (293 milliseconds)
[info] - 0.13: createFunction (26 milliseconds)
[info] - 0.13: functionExists (39 milliseconds)
[info] - 0.13: renameFunction (21 milliseconds)
[info] - 0.13: alterFunction (26 milliseconds)
[info] - 0.13: getFunction (4 milliseconds)
[info] - 0.13: getFunctionOption (22 milliseconds)
[info] - 0.13: listFunctions (11 milliseconds)
[info] - 0.13: dropFunction (32 milliseconds)
[info] - 0.13: sql set command (13 milliseconds)
[info] - 0.13: sql create index and reset (869 milliseconds)
[info] - 0.13: sql read hive materialized view (0 milliseconds)
[info] - 0.13: version (1 millisecond)
[info] - 0.13: getConf (1 millisecond)
[info] - 0.13: setOut (2 milliseconds)
[info] - 0.13: setInfo (2 milliseconds)
[info] - 0.13: setError (3 milliseconds)
[info] - 0.13: newSession (70 milliseconds)
[info] - 0.13: withHiveState and addJar (5 milliseconds)
[info] - 0.13: reset (607 milliseconds)
00:04:24.272 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-8e87c2a7-8bff-4e62-823d-374fa570825a/tbl specified for non-external table:tbl
rmr: DEPRECATED: Please use '-rm -r' instead.
Deleted file:///home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-8e87c2a7-8bff-4e62-823d-374fa570825a/tbl
[info] - 0.13: CREATE TABLE AS SELECT (980 milliseconds)
00:04:25.394 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-8e87c2a7-8bff-4e62-823d-374fa570825a/tbl specified for non-external table:tbl
[info] - 0.13: CREATE Partitioned TABLE AS SELECT (1 second, 717 milliseconds)
rmr: DEPRECATED: Please use '-rm -r' instead.
Deleted file:///home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-941e43c9-dc79-4908-9a85-10aba116b78e
rmr: DEPRECATED: Please use '-rm -r' instead.
Deleted file:///home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-941e43c9-dc79-4908-9a85-10aba116b78e
rmr: DEPRECATED: Please use '-rm -r' instead.
Deleted file:///home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-941e43c9-dc79-4908-9a85-10aba116b78e
[info] - 0.13: Delete the temporary staging directory and files after each insert (2 seconds, 407 milliseconds)
rmr: DEPRECATED: Please use '-rm -r' instead.
Deleted file:///home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-141b5a6e-f97a-4c80-826e-11e33171fb56/spark_13709_temp
[info] - 0.13: SPARK-13709: reading partitioned Avro table with nested schema (2 seconds, 73 milliseconds)
[info] - A cached table preserves the partitioning and ordering of its cached SparkPlan (13 seconds, 656 milliseconds)
[info] - SPARK-15870 DataFrame can't execute after uncacheTable (131 milliseconds)
[info] - SPARK-15915 Logical plans should use canonicalized plan when override sameResult (22 milliseconds)
[info] - 0.13: CTAS for managed data source tables (1 second, 127 milliseconds)
[info] - SPARK-19093 Caching inside subquery (26 milliseconds)
00:04:33.131 WARN org.apache.spark.sql.execution.CacheManager: Asked to cache already cached data.
00:04:33.132 WARN org.apache.spark.sql.execution.CacheManager: Asked to cache already cached data.
[info] - SPARK-19093 scalar and nested predicate query (117 milliseconds)
00:04:33.226 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-8e87c2a7-8bff-4e62-823d-374fa570825a/tab1 specified for non-external table:tab1
00:04:33.259 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: The table schema given by Hive metastore(struct<f0:binary,ds:string>) is different from the schema when this table was created by Spark SQL(struct<f0:decimal(38,2),ds:string>). We have to fall back to the table schema from Hive metastore which is not case preserving.
00:04:34.570 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: The table schema given by Hive metastore(struct<f0:binary,ds:string>) is different from the schema when this table was created by Spark SQL(struct<f0:decimal(38,2),ds:string>). We have to fall back to the table schema from Hive metastore which is not case preserving.
00:04:34.588 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: The table schema given by Hive metastore(struct<f0:binary,ds:string>) is different from the schema when this table was created by Spark SQL(struct<f0:decimal(38,2),ds:string>). We have to fall back to the table schema from Hive metastore which is not case preserving.
00:04:34.637 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-8e87c2a7-8bff-4e62-823d-374fa570825a/tab1 specified for non-external table:tab1
00:04:34.667 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: The table schema given by Hive metastore(struct<f0:binary>) is different from the schema when this table was created by Spark SQL(struct<f0:decimal(38,2)>). We have to fall back to the table schema from Hive metastore which is not case preserving.
00:04:34.700 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: The table schema given by Hive metastore(struct<f0:binary>) is different from the schema when this table was created by Spark SQL(struct<f0:decimal(38,2)>). We have to fall back to the table schema from Hive metastore which is not case preserving.
00:04:34.792 WARN org.apache.spark.sql.hive.test.TestHiveExternalCatalog: The table schema given by Hive metastore(struct<f0:binary>) is different from the schema when this table was created by Spark SQL(struct<f0:decimal(38,2)>). We have to fall back to the table schema from Hive metastore which is not case preserving.
[info] - 0.13: Decimal support of Avro Hive serde (2 seconds, 54 milliseconds)
00:04:36.478 WARN org.apache.spark.util.HadoopFSUtils: The directory file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/target/tmp/spark-4d624dec-b73c-42da-876c-e7e5f9fbee39 was not found. Was it deleted very recently?
[info] - SPARK-19765: UNCACHE TABLE should un-cache all cached plans that refer to this table (3 seconds, 402 milliseconds)
[info] - 0.13: read avro file containing decimal (1 second, 459 milliseconds)
00:04:37.008 WARN org.apache.hadoop.hive.metastore.HiveMetaStore: Location: file:/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-8e87c2a7-8bff-4e62-823d-374fa570825a/tab1 specified for non-external table:tab1
rmr: DEPRECATED: Please use '-rm -r' instead.
Deleted file:///home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/sql/hive/target/tmp/org.apache.spark.sql.hive.client.VersionsSuite/spark-8e87c2a7-8bff-4e62-823d-374fa570825a/tab1
[info] - refreshByPath should refresh all cached plans with the specified path (1 second, 372 milliseconds)
[info] - SPARK-19993 simple subquery caching (155 milliseconds)
[info] - SPARK-19993 subquery caching with correlated predicates (151 milliseconds)
[info] - SPARK-19993 subquery with cached underlying relation (152 milliseconds)
[info] - SPARK-19993 nested subquery caching and scalar + predicate subqueries (655 milliseconds)
00:04:39.088 WARN org.apache.spark.sql.execution.CacheManager: Asked to cache already cached data.
[info] - SPARK-23312: vectorized cache reader can be disabled (25 milliseconds)
[info] - SPARK-23880 table cache should be lazy and not trigger any jobs (204 milliseconds)
[info] - 0.13: SPARK-17920: Insert into/overwrite avro table (2 seconds, 453 milliseconds)
[info] - SPARK-24596 Non-cascading Cache Invalidation - uncache temporary view (218 milliseconds)
[info] - SPARK-24596 Non-cascading Cache Invalidation - drop temporary view (283 milliseconds)
[info] - SPARK-24596 Non-cascading Cache Invalidation - drop persistent view (683 milliseconds)
[info] - SPARK-24596 Non-cascading Cache Invalidation - uncache table (518 milliseconds)
00:04:41.203 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: A join hint (strategy=broadcast) is specified but it is not part of a join relation.
00:04:41.210 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: A join hint (strategy=broadcast) is specified but it is not part of a join relation.
00:04:41.293 WARN org.apache.spark.sql.execution.CacheManager: Asked to cache already cached data.
00:04:41.375 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: A join hint (strategy=broadcast) is specified but it is not part of a join relation.
00:04:41.387 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: A join hint (strategy=broadcast) is specified but it is not part of a join relation.
00:04:41.472 WARN org.apache.spark.sql.execution.CacheManager: Asked to cache already cached data.
00:04:41.541 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: A join hint (strategy=broadcast) is specified but it is not part of a join relation.
00:04:41.554 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: A join hint (strategy=broadcast) is specified but it is not part of a join relation.
00:04:41.658 WARN org.apache.spark.sql.execution.CacheManager: Asked to cache already cached data.
00:04:41.710 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Hint (strategy=merge) is overridden by another hint and will not take effect.
00:04:41.723 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: A join hint (strategy=merge) is specified but it is not part of a join relation.
00:04:41.723 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: A join hint (strategy=shuffle_hash) is specified but it is not part of a join relation.
00:04:41.731 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: A join hint (strategy=merge) is specified but it is not part of a join relation.
00:04:41.731 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: A join hint (strategy=shuffle_hash) is specified but it is not part of a join relation.
00:04:41.820 WARN org.apache.spark.sql.catalyst.analysis.HintErrorLogger: Hint (strategy=merge) is overridden by another hint and will not take effect.
[info] - Cache should respect the hint (827 milliseconds)
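The HintErrorLogger warnings above fire when a hint ends up attached to something other than a join relation; placed correctly, the hint survives caching, which is what this test checks. A sketch of the hint forms involved, with t1 and t2 as hypothetical DataFrames also registered as temp views:

    import org.apache.spark.sql.functions.broadcast
    val fnHint  = t1.join(broadcast(t2), "id")         // function form of the broadcast hint
    val strHint = t1.join(t2.hint("broadcast"), "id")  // Dataset.hint form
    spark.sql("SELECT /*+ BROADCAST(t2) */ * FROM t1 JOIN t2 ON t1.id = t2.id")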
[info] - analyzes column statistics in cached query (1 second, 172 milliseconds)
[info] - SPARK-27248 refreshTable should recreate cache with same cache name and storage level (194 milliseconds)
[info] - cache supports for intervals (462 milliseconds)
[info] - SPARK-30494 Fix the leak of cached data when replace an existing view (741 milliseconds)
00:04:44.436 WARN org.apache.spark.sql.CachedTableSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.CachedTableSuite, thread names: shuffle-boss-425-1, rpc-boss-422-1 =====

[info] PathOptionSuite:
[info] - path option always exist (68 milliseconds)
[info] - path option also exist for write path (143 milliseconds)
[info] - path option always represent the value of table location (71 milliseconds)
00:04:44.791 WARN org.apache.spark.sql.sources.PathOptionSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.sources.PathOptionSuite, thread names: shuffle-boss-431-1, rpc-boss-428-1 =====

[info] ParquetSchemaInferenceSuite:
[info] - sql => parquet: basic types (4 milliseconds)
[info] - sql <= parquet: basic types (1 millisecond)
[info] - sql => parquet: logical integral types (1 millisecond)
[info] - sql <= parquet: logical integral types (0 milliseconds)
[info] - sql => parquet: string (1 millisecond)
[info] - sql <= parquet: string (0 milliseconds)
[info] - sql => parquet: binary enum as string (1 millisecond)
[info] - sql <= parquet: binary enum as string (0 milliseconds)
[info] - sql => parquet: non-nullable array - non-standard (1 millisecond)
[info] - sql <= parquet: non-nullable array - non-standard (2 milliseconds)
[info] - sql => parquet: non-nullable array - standard (1 millisecond)
[info] - sql <= parquet: non-nullable array - standard (0 milliseconds)
[info] - sql => parquet: nullable array - non-standard (0 milliseconds)
[info] - sql <= parquet: nullable array - non-standard (0 milliseconds)
[info] - sql => parquet: nullable array - standard (1 millisecond)
[info] - sql <= parquet: nullable array - standard (0 milliseconds)
[info] - sql => parquet: map - standard (0 milliseconds)
[info] - sql <= parquet: map - standard (2 milliseconds)
[info] - sql => parquet: map - non-standard (2 milliseconds)
[info] - sql <= parquet: map - non-standard (0 milliseconds)
[info] - sql => parquet: map - group type key (1 millisecond)
[info] - sql <= parquet: map - group type key (1 millisecond)
[info] - sql => parquet: struct (0 milliseconds)
[info] - sql <= parquet: struct (1 millisecond)
[info] - sql => parquet: deeply nested type - non-standard (0 milliseconds)
[info] - sql <= parquet: deeply nested type - non-standard (0 milliseconds)
[info] - sql => parquet: deeply nested type - standard (1 millisecond)
[info] - sql <= parquet: deeply nested type - standard (1 millisecond)
[info] - sql => parquet: optional types (0 milliseconds)
[info] - sql <= parquet: optional types (0 milliseconds)
00:04:44.994 WARN org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaInferenceSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.execution.datasources.parquet.ParquetSchemaInferenceSuite, thread names: shuffle-boss-437-1, rpc-boss-434-1 =====

[info] SerializationSuite:
[info] - [SPARK-5235] SQLContext should be serializable (8 milliseconds)
[info] - [SPARK-26409] SQLConf should be serializable (2 milliseconds)
00:04:45.136 WARN org.apache.spark.sql.SerializationSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.SerializationSuite, thread names: shuffle-boss-443-1, rpc-boss-440-1 =====

[info] DataFrameWindowFunctionsSuite:
[info] - reuse window partitionBy (456 milliseconds)
[info] - reuse window orderBy (240 milliseconds)
[info] - rank functions in unspecific window (491 milliseconds)
[info] - window function should fail if order by clause is not specified (23 milliseconds)
[info] - corr, covar_pop, stddev_pop functions in specific window (821 milliseconds)
[info] - SPARK-13860: corr, covar_pop, stddev_pop functions in specific window LEGACY_STATISTICAL_AGGREGATE off (619 milliseconds)
[info] - covar_samp, var_samp (variance), stddev_samp (stddev) functions in specific window (498 milliseconds)
[info] - SPARK-13860: covar_samp, var_samp (variance), stddev_samp (stddev) functions in specific window LEGACY_STATISTICAL_AGGREGATE off (428 milliseconds)
[info] - collect_list in ascending ordered window (409 milliseconds)
[info] - collect_list in descending ordered window (299 milliseconds)
[info] - collect_set in window (393 milliseconds)
[info] - skewness and kurtosis functions in window (535 milliseconds)
[info] - aggregation function on invalid column (11 milliseconds)
00:04:50.517 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:50.570 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - numerical aggregate functions on string column (516 milliseconds)
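The WindowExec warnings above mean the window spec had no partitionBy, so every row is shuffled into a single partition. A sketch of both forms, with df and the column names key/ts hypothetical:

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.row_number

    val unpartitioned = Window.orderBy("ts")                      // triggers the warning above
    val partitioned   = Window.partitionBy("key").orderBy("ts")   // spreads work across partitions
    df.withColumn("rn", row_number().over(partitioned))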
[info] - statistical functions (474 milliseconds)
00:04:51.473 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:51.499 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - window function with aggregates (565 milliseconds)
00:04:52.027 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:52.048 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:52.273 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:52.302 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - SPARK-16195 empty over spec (498 milliseconds)
[info] - window function with udaf (479 milliseconds)
[info] - window function with aggregator (608 milliseconds)
00:04:53.606 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
00:04:53.627 WARN org.apache.spark.sql.execution.window.WindowExec: No Partition Defined for Window operation! Moving all data to a single partition, this can cause serious performance degradation.
[info] - null inputs (180 milliseconds)
[info] - last/first with ignoreNulls (382 milliseconds)
[info] - last/first on descending ordered window (339 milliseconds)
[info] - nth_value with ignoreNulls (337 milliseconds)
[info] - nth_value on descending ordered window (339 milliseconds)
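The last/first/nth_value tests above use the null-skipping variants of the ordered-window functions; a sketch, with df and its columns hypothetical (nth_value's ignoreNulls overload is comparatively recent):

    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{col, first, last, nth_value}

    val w = Window.partitionBy("key").orderBy("ts")
    df.select(
      first("v", ignoreNulls = true).over(w),
      last("v", ignoreNulls = true).over(w),
      nth_value(col("v"), 2, ignoreNulls = true).over(w))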
[info] - SPARK-12989 ExtractWindowExpressions treats alias as regular attribute (375 milliseconds)
[info] - aggregation and rows between with unbounded + predicate pushdown (349 milliseconds)
[info] - aggregation and range between with unbounded + predicate pushdown (559 milliseconds)
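The two tests above exercise explicit frame boundaries; a sketch of row-based versus range-based frames, with column names hypothetical:

    import org.apache.spark.sql.expressions.Window

    val byRows  = Window.partitionBy("k").orderBy("v")
      .rowsBetween(Window.unboundedPreceding, Window.currentRow)   // frame counted in physical rows
    val byRange = Window.partitionBy("k").orderBy("v")
      .rangeBetween(Window.unboundedPreceding, Window.currentRow)  // frame defined on order-by values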
[info] - Window spill with less than the inMemoryThreshold (289 milliseconds)
[info] - Window spill with more than the inMemoryThreshold but less than the spillThreshold (174 milliseconds)
[info] - Window spill with more than the inMemoryThreshold and spillThreshold (197 milliseconds)
[info] - SPARK-21258: complex object in combination with spilling (514 milliseconds)
[info] - SPARK-24575: Window functions inside WHERE and HAVING clauses (99 milliseconds)
[info] - window functions in multiple selects (1 second, 25 milliseconds)
00:04:59.293 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.14.0
00:04:59.447 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
[info] - NaN and -0.0 in window partition keys (1 second, 810 milliseconds)
00:05:00.578 WARN org.apache.spark.sql.DataFrameWindowFunctionsSuite: 

===== POSSIBLE THREAD LEAK IN SUITE o.a.s.sql.DataFrameWindowFunctionsSuite, thread names: block-manager-ask-thread-pool-83, block-manager-ask-thread-pool-9, block-manager-ask-thread-pool-78, block-manager-ask-thread-pool-62, block-manager-ask-thread-pool-73, block-manager-ask-thread-pool-94, block-manager-ask-thread-pool-39, block-manager-ask-thread-pool-36, block-manager-ask-thread-pool-45, block-manager-ask-thread-pool-5, block-manager-ask-thread-pool-47, block-manager-ask-thread-pool-0, block-manager-ask-thread-pool-72, block-manager-ask-thread-pool-61, block-manager-ask-thread-pool-8, block-manager-ask-thread-pool-91, block-manager-ask-thread-pool-26, block-manager-ask-thread-pool-50, block-manager-ask-thread-pool-12, block-manager-ask-thread-pool-84, block-manager-ask-thread-pool-19, block-manager-ask-thread-pool-54, block-manager-ask-thread-pool-2, block-manager-ask-thread-pool-81, block-manager-ask-thread-pool-18, block-manager-ask-thread-pool-96, block-manager-ask-thread-pool-99, block-manager-ask-thread-pool-60, block-manager-ask-thread-pool-49, block-manager-ask-thread-pool-46, block-manager-ask-thread-pool-55, block-manager-ask-thread-pool-86, block-manager-ask-thread-pool-22, block-manager-ask-thread-pool-35, block-manager-ask-thread-pool-33, block-manager-ask-thread-pool-57, block-manager-ask-thread-pool-66, shuffle-boss-449-1, block-manager-ask-thread-pool-74, block-manager-ask-thread-pool-68, block-manager-ask-thread-pool-41, block-manager-ask-thread-pool-30, block-manager-ask-thread-pool-24, block-manager-ask-thread-pool-44, rpc-boss-446-1, block-manager-ask-thread-pool-87, block-manager-ask-thread-pool-6, block-manager-ask-thread-pool-93 =====

[info] BooleanBitSetSuite:
[info] - BooleanBitSet: empty (4 milliseconds)
[info] - BooleanBitSet: less than 1 word (2 milliseconds)
[info] - BooleanBitSet: exactly 1 word (1 millisecond)
[info] - BooleanBitSet: multiple whole words (2 milliseconds)
[info] - BooleanBitSet: multiple words and 1 more bit (1 millisecond)
[info] - BooleanBitSet: empty for decompression() (2 milliseconds)
[info] - BooleanBitSet: less than 1 word for decompression() (2 milliseconds)
[info] - BooleanBitSet: exactly 1 word for decompression() (2 milliseconds)
[info] - BooleanBitSet: multiple whole words for decompression() (1 millisecond)
[info] - BooleanBitSet: multiple words and 1 more bit for decompression() (2 milliseconds)
[info] - BooleanBitSet: Only nulls for decompression() (2 milliseconds)
[info] TextV1Suite:
[info] - Propagate Hadoop configs from text options to underlying file system (812 milliseconds)
[info] - reading text file (73 milliseconds)
[info] - SQLContext.read.text() API (51 milliseconds)
[error] running /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2-hive-2.3/build/sbt -Phadoop-3.2 -Phive-2.3 -Phive -Pmesos -Pspark-ganglia-lgpl -Phadoop-cloud -Phive-thriftserver -Pkubernetes -Pkinesis-asl -Pyarn test ; process was terminated by signal 9
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Finished: FAILURE