Console Output

Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
[EnvInject] - Preparing an environment for the build.
[EnvInject] - Keeping Jenkins system variables.
[EnvInject] - Keeping Jenkins build variables.
[EnvInject] - Injecting as environment variables the properties content 
PATH=/home/anaconda/envs/py36/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
AMPLAB_JENKINS="true"
SPARK_MASTER_SBT_HADOOP_2_7=1
JAVA_HOME=/usr/java/latest
AMPLAB_JENKINS_BUILD_HIVE_PROFILE=hive2.3
SPARK_TESTING=1
AMPLAB_JENKINS_BUILD_PROFILE=hadoop3.2
LANG=en_US.UTF-8
SPARK_BRANCH=master

[EnvInject] - Variables injected successfully.
[EnvInject] - Injecting contributions.
Building remotely on research-jenkins-worker-06 (ubuntu20 ubuntu) in workspace /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2
The recommended git tool is: NONE
No credentials specified
 > git rev-parse --resolve-git-dir /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/.git # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/spark.git # timeout=10
Fetching upstream changes from https://github.com/apache/spark.git
 > git --version # timeout=10
 > git --version # 'git version 2.25.1'
 > git fetch --tags --force --progress -- https://github.com/apache/spark.git +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git rev-parse origin/master^{commit} # timeout=10
Checking out Revision a1214a98f4b3f7715fb984ad3df514471b1e33c7 (origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f a1214a98f4b3f7715fb984ad3df514471b1e33c7 # timeout=10
Commit message: "[SPARK-37451][SQL] Fix cast string type to decimal type if spark.sql.legacy.allowNegativeScaleOfDecimal is enabled"
 > git rev-list --no-walk 8f6e439068281633acefb895f8c4bd9203868c24 # timeout=10
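
The git steps above pin the build to an exact commit rather than a branch tip: fetch all branch heads, resolve origin/master to a SHA, then force-checkout that SHA. A rough Python reconstruction of that sequence (illustrative only; the Jenkins Git plugin shells out to git exactly as traced above):

    import subprocess

    repo = "/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2"

    def git(*args):
        # Run git inside the workspace clone and return its stdout.
        out = subprocess.run(["git", *args], cwd=repo, check=True,
                             capture_output=True, text=True)
        return out.stdout.strip()

    git("fetch", "--tags", "--force", "--progress", "--",
        "https://github.com/apache/spark.git", "+refs/heads/*:refs/remotes/origin/*")
    sha = git("rev-parse", "origin/master^{commit}")
    git("checkout", "-f", sha)  # detached HEAD at the pinned revision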
[EnvInject] - Mask passwords that will be passed as build parameters.
[spark-master-test-sbt-hadoop-3.2] $ /bin/bash /tmp/jenkins1629907587841310541.sh
Removing R/SparkR.Rcheck/
Removing R/SparkR_3.3.0.tar.gz
Removing R/cran-check.out
Removing R/lib/
Removing R/pkg/man/
Removing R/pkg/tests/fulltests/Rplots.pdf
Removing R/pkg/tests/fulltests/_snaps/
Removing R/unit-tests.out
Removing append/
Removing assembly/target/
Removing build/sbt-launch-1.5.5.jar
Removing common/kvstore/target/
Removing common/network-common/target/
Removing common/network-shuffle/target/
Removing common/network-yarn/target/
Removing common/sketch/target/
Removing common/tags/target/
Removing common/unsafe/target/
Removing core/derby.log
Removing core/dummy/
Removing core/ignored/
Removing core/target/
Removing core/temp-secrets/
Removing derby.log
Removing dev/__pycache__/
Removing dev/ansible-for-test-node/roles/jenkins-worker/files/util_scripts/__pycache__/
Removing dev/create-release/__pycache__/
Removing dev/lint-r-report.log
Removing dev/sparktestsupport/__pycache__/
Removing dev/target/
Removing examples/src/main/python/__pycache__/
Removing examples/src/main/python/ml/__pycache__/
Removing examples/src/main/python/mllib/__pycache__/
Removing examples/src/main/python/sql/__pycache__/
Removing examples/src/main/python/sql/streaming/__pycache__/
Removing examples/src/main/python/streaming/__pycache__/
Removing examples/target/
Removing external/avro/spark-warehouse/
Removing external/avro/target/
Removing external/docker-integration-tests/target/
Removing external/kafka-0-10-assembly/target/
Removing external/kafka-0-10-sql/spark-warehouse/
Removing external/kafka-0-10-sql/target/
Removing external/kafka-0-10-token-provider/target/
Removing external/kafka-0-10/target/
Removing external/kinesis-asl-assembly/target/
Removing external/kinesis-asl/checkpoint/
Removing external/kinesis-asl/src/main/python/examples/streaming/__pycache__/
Removing external/kinesis-asl/target/
Removing external/spark-ganglia-lgpl/target/
Removing graphx/target/
Removing hadoop-cloud/target/
Removing launcher/target/
Removing lib/
Removing logs/
Removing metastore_db/
Removing mllib-local/target/
Removing mllib/checkpoint/
Removing mllib/spark-warehouse/
Removing mllib/target/
Removing project/project/
Removing project/target/
Removing python/__pycache__/
Removing python/dist/
Removing python/docs/source/__pycache__/
Removing python/lib/pyspark.zip
Removing python/pyspark.egg-info/
Removing python/pyspark/__pycache__/
Removing python/pyspark/cloudpickle/__pycache__/
Removing python/pyspark/ml/__pycache__/
Removing python/pyspark/ml/linalg/__pycache__/
Removing python/pyspark/ml/param/__pycache__/
Removing python/pyspark/ml/tests/__pycache__/
Removing python/pyspark/mllib/__pycache__/
Removing python/pyspark/mllib/linalg/__pycache__/
Removing python/pyspark/mllib/stat/__pycache__/
Removing python/pyspark/mllib/tests/__pycache__/
Removing python/pyspark/pandas/__pycache__/
Removing python/pyspark/pandas/data_type_ops/__pycache__/
Removing python/pyspark/pandas/indexes/__pycache__/
Removing python/pyspark/pandas/missing/__pycache__/
Removing python/pyspark/pandas/plot/__pycache__/
Removing python/pyspark/pandas/spark/__pycache__/
Removing python/pyspark/pandas/tests/__pycache__/
Removing python/pyspark/pandas/tests/data_type_ops/__pycache__/
Removing python/pyspark/pandas/tests/indexes/__pycache__/
Removing python/pyspark/pandas/tests/plot/__pycache__/
Removing python/pyspark/pandas/typedef/__pycache__/
Removing python/pyspark/pandas/usage_logging/__pycache__/
Removing python/pyspark/python/
Removing python/pyspark/resource/__pycache__/
Removing python/pyspark/resource/tests/__pycache__/
Removing python/pyspark/sql/__pycache__/
Removing python/pyspark/sql/avro/__pycache__/
Removing python/pyspark/sql/pandas/__pycache__/
Removing python/pyspark/sql/tests/__pycache__/
Removing python/pyspark/streaming/__pycache__/
Removing python/pyspark/streaming/tests/__pycache__/
Removing python/pyspark/testing/__pycache__/
Removing python/pyspark/tests/__pycache__/
Removing python/target/
Removing python/test_coverage/__pycache__/
Removing python/test_support/__pycache__/
Removing repl/derby.log
Removing repl/metastore_db/
Removing repl/spark-warehouse/
Removing repl/target/
Removing resource-managers/kubernetes/core/target/
Removing resource-managers/kubernetes/core/temp-secret/
Removing resource-managers/kubernetes/integration-tests/target/
Removing resource-managers/kubernetes/integration-tests/tests/__pycache__/
Removing resource-managers/mesos/target/
Removing resource-managers/yarn/target/
Removing scalastyle-on-compile.generated.xml
Removing spark-warehouse/
Removing sql/__pycache__/
Removing sql/catalyst/fake/
Removing sql/catalyst/spark-warehouse/
Removing sql/catalyst/target/
Removing sql/core/spark-warehouse/
Removing sql/core/src/test/resources/__pycache__/
Removing sql/core/target/
Removing sql/hive-thriftserver/derby.log
Removing sql/hive-thriftserver/metastore_db/
Removing sql/hive-thriftserver/spark-warehouse/
Removing sql/hive-thriftserver/spark_derby/
Removing sql/hive-thriftserver/target/
Removing sql/hive/derby.log
Removing sql/hive/metastore_db/
Removing sql/hive/src/test/resources/data/scripts/__pycache__/
Removing sql/hive/target/
Removing streaming/checkpoint/
Removing streaming/target/
Removing target/
Removing tools/target/
Removing work/
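
The "Removing ..." lines above match the output format of git clean deleting untracked and ignored build artifacts, so each run starts from a pristine checkout. A minimal sketch of an equivalent cleanup step (an assumption: the wrapper script /tmp/jenkins1629907587841310541.sh itself is not shown in the log):

    import subprocess

    # Delete untracked files (-f), untracked directories (-d), and ignored
    # paths such as target/ and __pycache__/ (-x); git prints one
    # "Removing <path>" line per deletion, matching the output above.
    subprocess.run(
        ["git", "clean", "-d", "-f", "-x"],
        cwd="/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2",
        check=True,
    )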
+++ dirname /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R/install-dev.sh
++ cd /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R
++ pwd
+ FWDIR=/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R
+ LIB_DIR=/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R/lib
+ mkdir -p /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R/lib
+ pushd /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R
+ . /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R/find-r.sh
++ '[' -z '' ']'
++ '[' '!' -z '' ']'
+++ command -v R
++ '[' '!' /usr/bin/R ']'
++++ which R
+++ dirname /usr/bin/R
++ R_SCRIPT_PATH=/usr/bin
++ echo 'Using R_SCRIPT_PATH = /usr/bin'
Using R_SCRIPT_PATH = /usr/bin
+ . /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R/create-rd.sh
++ set -o pipefail
++ set -e
++++ dirname /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R/create-rd.sh
+++ cd /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R
+++ pwd
++ FWDIR=/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R
++ pushd /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R
++ . /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R/find-r.sh
+++ '[' -z /usr/bin ']'
++ /usr/bin/Rscript -e ' if(requireNamespace("devtools", quietly=TRUE)) { setwd("/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R"); devtools::document(pkg="./pkg", roclets="rd") }'
Updating SparkR documentation
First time using roxygen2. Upgrading automatically...
Loading SparkR
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
Writing structType.Rd
Writing print.structType.Rd
Writing structField.Rd
Writing print.structField.Rd
Writing summarize.Rd
Writing alias.Rd
Writing arrange.Rd
Writing as.data.frame.Rd
Writing cache.Rd
Writing checkpoint.Rd
Writing coalesce.Rd
Writing collect.Rd
Writing columns.Rd
Writing coltypes.Rd
Writing count.Rd
Writing cov.Rd
Writing corr.Rd
Writing createOrReplaceTempView.Rd
Writing cube.Rd
Writing dapply.Rd
Writing dapplyCollect.Rd
Writing gapply.Rd
Writing gapplyCollect.Rd
Writing describe.Rd
Writing distinct.Rd
Writing drop.Rd
Writing dropDuplicates.Rd
Writing nafunctions.Rd
Writing dtypes.Rd
Writing explain.Rd
Writing except.Rd
Writing exceptAll.Rd
Writing filter.Rd
Writing first.Rd
Writing groupBy.Rd
Writing hint.Rd
Writing insertInto.Rd
Writing intersect.Rd
Writing intersectAll.Rd
Writing isLocal.Rd
Writing isStreaming.Rd
Writing limit.Rd
Writing localCheckpoint.Rd
Writing merge.Rd
Writing mutate.Rd
Writing orderBy.Rd
Writing persist.Rd
Writing printSchema.Rd
Writing registerTempTable-deprecated.Rd
Writing rename.Rd
Writing repartition.Rd
Writing repartitionByRange.Rd
Writing sample.Rd
Writing rollup.Rd
Writing sampleBy.Rd
Writing saveAsTable.Rd
Writing take.Rd
Writing write.df.Rd
Writing write.jdbc.Rd
Writing write.json.Rd
Writing write.orc.Rd
Writing write.parquet.Rd
Writing write.stream.Rd
Writing write.text.Rd
Writing schema.Rd
Writing select.Rd
Writing selectExpr.Rd
Writing showDF.Rd
Writing subset.Rd
Writing summary.Rd
Writing union.Rd
Writing unionAll.Rd
Writing unionByName.Rd
Writing unpersist.Rd
Writing with.Rd
Writing withColumn.Rd
Writing withWatermark.Rd
Writing randomSplit.Rd
Writing broadcast.Rd
Writing columnfunctions.Rd
Writing between.Rd
Writing cast.Rd
Writing endsWith.Rd
Writing startsWith.Rd
Writing column_nonaggregate_functions.Rd
Writing otherwise.Rd
Writing over.Rd
Writing eq_null_safe.Rd
Writing withField.Rd
Writing dropFields.Rd
Writing partitionBy.Rd
Writing rowsBetween.Rd
Writing rangeBetween.Rd
Writing windowPartitionBy.Rd
Writing windowOrderBy.Rd
Writing column_datetime_diff_functions.Rd
Writing column_aggregate_functions.Rd
Writing column_collection_functions.Rd
Writing column_ml_functions.Rd
Writing column_string_functions.Rd
Writing column_misc_functions.Rd
Writing avg.Rd
Writing column_math_functions.Rd
Writing column.Rd
Writing column_window_functions.Rd
Writing column_datetime_functions.Rd
Writing column_avro_functions.Rd
Writing last.Rd
Writing not.Rd
Writing fitted.Rd
Writing predict.Rd
Writing rbind.Rd
Writing spark.als.Rd
Writing spark.bisectingKmeans.Rd
Writing spark.fmClassifier.Rd
Writing spark.fmRegressor.Rd
Writing spark.gaussianMixture.Rd
Writing spark.gbt.Rd
Writing spark.glm.Rd
Writing spark.isoreg.Rd
Writing spark.kmeans.Rd
Writing spark.kstest.Rd
Writing spark.lda.Rd
Writing spark.logit.Rd
Writing spark.mlp.Rd
Writing spark.naiveBayes.Rd
Writing spark.decisionTree.Rd
Writing spark.randomForest.Rd
Writing spark.survreg.Rd
Writing spark.svmLinear.Rd
Writing spark.fpGrowth.Rd
Writing spark.prefixSpan.Rd
Writing spark.powerIterationClustering.Rd
Writing spark.lm.Rd
Writing write.ml.Rd
Writing awaitTermination.Rd
Writing isActive.Rd
Writing lastProgress.Rd
Writing queryName.Rd
Writing status.Rd
Writing stopQuery.Rd
Writing print.jobj.Rd
Writing show.Rd
Writing substr.Rd
Writing match.Rd
Writing GroupedData.Rd
Writing pivot.Rd
Writing SparkDataFrame.Rd
Writing storageLevel.Rd
Writing toJSON.Rd
Writing nrow.Rd
Writing ncol.Rd
Writing dim.Rd
Writing head.Rd
Writing join.Rd
Writing crossJoin.Rd
Writing attach.Rd
Writing str.Rd
Writing histogram.Rd
Writing getNumPartitions.Rd
Writing sparkR.conf.Rd
Writing sparkR.version.Rd
Writing createDataFrame.Rd
Writing read.json.Rd
Writing read.orc.Rd
Writing read.parquet.Rd
Writing read.text.Rd
Writing sql.Rd
Writing tableToDF.Rd
Writing read.df.Rd
Writing read.jdbc.Rd
Writing read.stream.Rd
Writing WindowSpec.Rd
Writing createExternalTable-deprecated.Rd
Writing createTable.Rd
Writing cacheTable.Rd
Writing uncacheTable.Rd
Writing clearCache.Rd
Writing dropTempTable-deprecated.Rd
Writing dropTempView.Rd
Writing tables.Rd
Writing tableNames.Rd
Writing currentDatabase.Rd
Writing setCurrentDatabase.Rd
Writing listDatabases.Rd
Writing listTables.Rd
Writing listColumns.Rd
Writing listFunctions.Rd
Writing recoverPartitions.Rd
Writing refreshTable.Rd
Writing refreshByPath.Rd
Writing spark.addFile.Rd
Writing spark.getSparkFilesRootDirectory.Rd
Writing spark.getSparkFiles.Rd
Writing spark.lapply.Rd
Writing setLogLevel.Rd
Writing setCheckpointDir.Rd
Writing unresolved_named_lambda_var.Rd
Writing create_lambda.Rd
Writing invoke_higher_order_function.Rd
Writing install.spark.Rd
Writing sparkR.callJMethod.Rd
Writing sparkR.callJStatic.Rd
Writing sparkR.newJObject.Rd
Writing LinearSVCModel-class.Rd
Writing LogisticRegressionModel-class.Rd
Writing MultilayerPerceptronClassificationModel-class.Rd
Writing NaiveBayesModel-class.Rd
Writing FMClassificationModel-class.Rd
Writing BisectingKMeansModel-class.Rd
Writing GaussianMixtureModel-class.Rd
Writing KMeansModel-class.Rd
Writing LDAModel-class.Rd
Writing PowerIterationClustering-class.Rd
Writing FPGrowthModel-class.Rd
Writing PrefixSpan-class.Rd
Writing ALSModel-class.Rd
Writing AFTSurvivalRegressionModel-class.Rd
Writing GeneralizedLinearRegressionModel-class.Rd
Writing IsotonicRegressionModel-class.Rd
Writing LinearRegressionModel-class.Rd
Writing FMRegressionModel-class.Rd
Writing glm.Rd
Writing KSTest-class.Rd
Writing GBTRegressionModel-class.Rd
Writing GBTClassificationModel-class.Rd
Writing RandomForestRegressionModel-class.Rd
Writing RandomForestClassificationModel-class.Rd
Writing DecisionTreeRegressionModel-class.Rd
Writing DecisionTreeClassificationModel-class.Rd
Writing read.ml.Rd
Writing sparkR.session.stop.Rd
Writing sparkR.init-deprecated.Rd
Writing sparkRSQL.init-deprecated.Rd
Writing sparkRHive.init-deprecated.Rd
Writing sparkR.session.Rd
Writing sparkR.uiWebUrl.Rd
Writing setJobGroup.Rd
Writing clearJobGroup.Rd
Writing cancelJobGroup.Rd
Writing setJobDescription.Rd
Writing setLocalProperty.Rd
Writing getLocalProperty.Rd
Writing crosstab.Rd
Writing freqItems.Rd
Writing approxQuantile.Rd
Writing StreamingQuery.Rd
Writing hashCode.Rd
+ /usr/bin/R CMD INSTALL --library=/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R/lib /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R/pkg/
* installing *source* package ‘SparkR’ ...
** using staged installation
** R
** inst
** byte-compile and prepare package for lazy loading
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (SparkR)
+ cd /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R/lib
+ jar cfM /home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R/lib/sparkr.zip SparkR
+ popd
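
The jar cfM step creates an archive with no manifest, so sparkr.zip is just a plain zip of the installed SparkR library, ready to be shipped to executors. A rough Python equivalent (illustrative sketch, not the build's actual mechanism):

    import os
    import zipfile

    lib_dir = "/home/jenkins/workspace/spark-master-test-sbt-hadoop-3.2/R/lib"

    # Archive the SparkR directory with entries relative to lib/, mirroring
    # `jar cfM lib/sparkr.zip SparkR` run from inside lib/.
    with zipfile.ZipFile(os.path.join(lib_dir, "sparkr.zip"), "w",
                         zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(os.path.join(lib_dir, "SparkR")):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, lib_dir))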
[error] Could not find hadoop3.2 in the list. Valid options  are dict_keys(['hadoop2', 'hadoop3'])
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files were found. Configuration error?
Finished: FAILURE
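
Why it failed: the job injects AMPLAB_JENKINS_BUILD_PROFILE=hadoop3.2 (see the variables at the top of the log), but Spark's Python test driver (dev/run-tests.py) only recognizes the keys hadoop2 and hadoop3, so the profile lookup aborts before any tests run; that is also why the JUnit step finds no report files afterward. A sketch of the likely lookup, reconstructed from the error text (assumed shape, not the verbatim source):

    import sys

    def get_hadoop_profiles(hadoop_version):
        # Map a Jenkins profile tag to build flags; the flag values here are
        # assumptions for illustration.
        sbt_maven_hadoop_profiles = {
            "hadoop2": ["-Phadoop-2.7"],
            "hadoop3": ["-Phadoop-3.2"],
        }
        if hadoop_version in sbt_maven_hadoop_profiles:
            return sbt_maven_hadoop_profiles[hadoop_version]
        # Passing dict_keys(...) straight to print, with " are" as a separate
        # argument, reproduces the exact message seen above, double space
        # included.
        print("[error] Could not find", hadoop_version, "in the list. Valid options",
              " are", sbt_maven_hadoop_profiles.keys())
        sys.exit(255)

    get_hadoop_profiles("hadoop3.2")  # reproduces this build's failure

The fix is on the job side: export AMPLAB_JENKINS_BUILD_PROFILE=hadoop3 (or teach the driver a hadoop3.2 alias) so the lookup succeeds.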