Console Output

Skipping 1,013 KB of earlier console output.
[INFO] 
[INFO] --- scala-maven-plugin:4.5.3:testCompile (scala-test-compile-first) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] compile in 0.0 s
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M5:test (default-test) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M5:test (test) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- scalatest-maven-plugin:2.0.0:test (test) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-jar-plugin:3.1.2:test-jar (prepare-test-jar) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Building jar: /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/target/spark-streaming-kinesis-asl-assembly_2.12-3.3.0-SNAPSHOT-tests.jar
[INFO] 
[INFO] --- maven-jar-plugin:3.1.2:jar (default-jar) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Building jar: /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/target/spark-streaming-kinesis-asl-assembly_2.12-3.3.0-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-site-plugin:3.5.1:attach-descriptor (attach-descriptor) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] 
[INFO] --- maven-shade-plugin:3.2.4:shade (default) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Including org.apache.spark:spark-streaming-kinesis-asl_2.12:jar:3.3.0-SNAPSHOT in the shaded jar.
[INFO] Including com.amazonaws:amazon-kinesis-client:jar:1.12.0 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-dynamodb:jar:1.11.655 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-s3:jar:1.11.655 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-kms:jar:1.11.655 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-kinesis:jar:1.11.655 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-cloudwatch:jar:1.11.655 in the shaded jar.
[INFO] Including org.apache.commons:commons-lang3:jar:3.12.0 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-sts:jar:1.11.655 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-core:jar:1.11.655 in the shaded jar.
[INFO] Including org.apache.httpcomponents:httpclient:jar:4.5.13 in the shaded jar.
[INFO] Including org.apache.httpcomponents:httpcore:jar:4.4.14 in the shaded jar.
[INFO] Including software.amazon.ion:ion-java:jar:1.0.2 in the shaded jar.
[INFO] Including joda-time:joda-time:jar:2.10.10 in the shaded jar.
[INFO] Including com.amazonaws:jmespath-java:jar:1.11.655 in the shaded jar.
[INFO] Including com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:jar:2.12.3 in the shaded jar.
[INFO] Including org.apache.spark:spark-tags_2.12:jar:3.3.0-SNAPSHOT in the shaded jar.
[INFO] Including javax.activation:activation:jar:1.1.1 in the shaded jar.
[INFO] Including commons-codec:commons-codec:jar:1.15 in the shaded jar.
[INFO] Including org.scala-lang:scala-library:jar:2.12.14 in the shaded jar.
[INFO] Including com.fasterxml.jackson.core:jackson-core:jar:2.12.3 in the shaded jar.
[INFO] Including com.google.protobuf:protobuf-java:jar:2.6.1 in the shaded jar.
[INFO] Including org.apache.hadoop:hadoop-client-runtime:jar:3.3.1 in the shaded jar.
[INFO] Including org.apache.htrace:htrace-core4:jar:4.1.0-incubating in the shaded jar.
[INFO] Including commons-logging:commons-logging:jar:1.1.3 in the shaded jar.
[INFO] Including com.google.code.findbugs:jsr305:jar:3.0.0 in the shaded jar.
[INFO] Including org.spark-project.spark:unused:jar:1.0.0 in the shaded jar.
[WARNING] Discovered module-info.class. Shading will break its strong encapsulation.
[WARNING] Discovered module-info.class. Shading will break its strong encapsulation.
[WARNING] activation-1.1.1.jar, amazon-kinesis-client-1.12.0.jar, aws-java-sdk-cloudwatch-1.11.655.jar, aws-java-sdk-core-1.11.655.jar, aws-java-sdk-dynamodb-1.11.655.jar, aws-java-sdk-kinesis-1.11.655.jar, aws-java-sdk-kms-1.11.655.jar, aws-java-sdk-s3-1.11.655.jar, aws-java-sdk-sts-1.11.655.jar, commons-codec-1.15.jar, commons-lang3-3.12.0.jar, commons-logging-1.1.3.jar, hadoop-client-runtime-3.3.1.jar, htrace-core4-4.1.0-incubating.jar, httpclient-4.5.13.jar, httpcore-4.4.14.jar, ion-java-1.0.2.jar, jackson-core-2.12.3.jar, jackson-dataformat-cbor-2.12.3.jar, jmespath-java-1.11.655.jar, joda-time-2.10.10.jar, jsr305-3.0.0.jar, protobuf-java-2.6.1.jar, scala-library-2.12.14.jar, spark-streaming-kinesis-asl-assembly_2.12-3.3.0-SNAPSHOT.jar, spark-streaming-kinesis-asl_2.12-3.3.0-SNAPSHOT.jar, spark-tags_2.12-3.3.0-SNAPSHOT.jar, unused-1.0.0.jar define 1 overlapping resource: 
[WARNING]   - META-INF/MANIFEST.MF
[WARNING] hadoop-client-runtime-3.3.1.jar, htrace-core4-4.1.0-incubating.jar, httpclient-4.5.13.jar, httpcore-4.4.14.jar, spark-streaming-kinesis-asl-assembly_2.12-3.3.0-SNAPSHOT.jar, spark-streaming-kinesis-asl_2.12-3.3.0-SNAPSHOT.jar, spark-tags_2.12-3.3.0-SNAPSHOT.jar define 1 overlapping resource: 
[WARNING]   - META-INF/DEPENDENCIES
[WARNING] spark-streaming-kinesis-asl_2.12-3.3.0-SNAPSHOT.jar, spark-tags_2.12-3.3.0-SNAPSHOT.jar, unused-1.0.0.jar define 3 overlapping classes and resources: 
[WARNING]   - META-INF/maven/org.spark-project.spark/unused/pom.properties
[WARNING]   - META-INF/maven/org.spark-project.spark/unused/pom.xml
[WARNING]   - org.apache.spark.unused.UnusedStubClass
[WARNING] commons-lang3-3.12.0.jar, hadoop-client-runtime-3.3.1.jar define 2 overlapping resources: 
[WARNING]   - META-INF/maven/org.apache.commons/commons-lang3/pom.properties
[WARNING]   - META-INF/maven/org.apache.commons/commons-lang3/pom.xml
[WARNING] hadoop-client-runtime-3.3.1.jar, httpclient-4.5.13.jar define 3 overlapping resources: 
[WARNING]   - META-INF/maven/org.apache.httpcomponents/httpclient/pom.properties
[WARNING]   - META-INF/maven/org.apache.httpcomponents/httpclient/pom.xml
[WARNING]   - mozilla/public-suffix-list.txt
[WARNING] hadoop-client-runtime-3.3.1.jar, httpcore-4.4.14.jar define 2 overlapping resources: 
[WARNING]   - META-INF/maven/org.apache.httpcomponents/httpcore/pom.properties
[WARNING]   - META-INF/maven/org.apache.httpcomponents/httpcore/pom.xml
[WARNING] commons-codec-1.15.jar, hadoop-client-runtime-3.3.1.jar define 2 overlapping resources: 
[WARNING]   - META-INF/maven/commons-codec/commons-codec/pom.properties
[WARNING]   - META-INF/maven/commons-codec/commons-codec/pom.xml
[WARNING] hadoop-client-runtime-3.3.1.jar, htrace-core4-4.1.0-incubating.jar, jackson-core-2.12.3.jar define 2 overlapping resources: 
[WARNING]   - META-INF/maven/com.fasterxml.jackson.core/jackson-core/pom.properties
[WARNING]   - META-INF/maven/com.fasterxml.jackson.core/jackson-core/pom.xml
[WARNING] hadoop-client-runtime-3.3.1.jar, protobuf-java-2.6.1.jar define 2 overlapping resources: 
[WARNING]   - META-INF/maven/com.google.protobuf/protobuf-java/pom.properties
[WARNING]   - META-INF/maven/com.google.protobuf/protobuf-java/pom.xml
[WARNING] hadoop-client-runtime-3.3.1.jar, htrace-core4-4.1.0-incubating.jar define 4 overlapping resources: 
[WARNING]   - META-INF/maven/com.fasterxml.jackson.core/jackson-annotations/pom.properties
[WARNING]   - META-INF/maven/com.fasterxml.jackson.core/jackson-annotations/pom.xml
[WARNING]   - META-INF/maven/com.fasterxml.jackson.core/jackson-databind/pom.properties
[WARNING]   - META-INF/maven/com.fasterxml.jackson.core/jackson-databind/pom.xml
[WARNING] commons-logging-1.1.3.jar, htrace-core4-4.1.0-incubating.jar define 2 overlapping resources: 
[WARNING]   - META-INF/maven/commons-logging/commons-logging/pom.properties
[WARNING]   - META-INF/maven/commons-logging/commons-logging/pom.xml
[WARNING] maven-shade-plugin has detected that some class files are
[WARNING] present in two or more JARs. When this happens, only one
[WARNING] single version of the class is copied to the uber jar.
[WARNING] Usually this is not harmful and you can skip these warnings,
[WARNING] otherwise try to manually exclude artifacts based on
[WARNING] mvn dependency:tree -Ddetail=true and the above output.
[WARNING] See http://maven.apache.org/plugins/maven-shade-plugin/
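
A minimal sketch of the diagnostic the warning points at, scoped to the module being shaded here (the -pl selector is an assumption, not taken from this log):

  # list where each overlapping class/resource comes from
  mvn dependency:tree -Ddetail=true -pl external/kinesis-asl-assembly
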
[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/target/spark-streaming-kinesis-asl-assembly_2.12-3.3.0-SNAPSHOT.jar with /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/target/spark-streaming-kinesis-asl-assembly_2.12-3.3.0-SNAPSHOT-shaded.jar
[INFO] 
[INFO] --- maven-source-plugin:3.1.0:jar-no-fork (create-source-jar) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Building jar: /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/target/spark-streaming-kinesis-asl-assembly_2.12-3.3.0-SNAPSHOT-sources.jar
[INFO] 
[INFO] --- maven-source-plugin:3.1.0:test-jar-no-fork (create-source-jar) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Building jar: /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/target/spark-streaming-kinesis-asl-assembly_2.12-3.3.0-SNAPSHOT-test-sources.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Spark Project Parent POM 3.3.0-SNAPSHOT:
[INFO] 
[INFO] Spark Project Parent POM ........................... SUCCESS [  3.040 s]
[INFO] Spark Project Tags ................................. SUCCESS [  6.507 s]
[INFO] Spark Project Sketch ............................... SUCCESS [  7.021 s]
[INFO] Spark Project Local DB ............................. SUCCESS [  3.301 s]
[INFO] Spark Project Networking ........................... SUCCESS [  6.129 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  4.220 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [ 10.054 s]
[INFO] Spark Project Launcher ............................. SUCCESS [  2.944 s]
[INFO] Spark Project Core ................................. SUCCESS [02:09 min]
[INFO] Spark Project ML Local Library ..................... SUCCESS [ 25.771 s]
[INFO] Spark Project GraphX ............................... SUCCESS [ 32.130 s]
[INFO] Spark Project Streaming ............................ SUCCESS [ 55.996 s]
[INFO] Spark Project Catalyst ............................. SUCCESS [02:46 min]
[INFO] Spark Project SQL .................................. SUCCESS [04:23 min]
[INFO] Spark Project ML Library ........................... SUCCESS [02:49 min]
[INFO] Spark Project Tools ................................ SUCCESS [  8.548 s]
[INFO] Spark Project Hive ................................. SUCCESS [01:39 min]
[INFO] Spark Project REPL ................................. SUCCESS [ 25.258 s]
[INFO] Spark Project Kubernetes ........................... SUCCESS [ 51.508 s]
[INFO] Spark Project Hive Thrift Server ................... SUCCESS [ 52.954 s]
[INFO] Spark Project Assembly ............................. SUCCESS [  3.950 s]
[INFO] Kafka 0.10+ Token Provider for Streaming ........... SUCCESS [ 27.413 s]
[INFO] Spark Integration for Kafka 0.10 ................... SUCCESS [ 43.244 s]
[INFO] Kafka 0.10+ Source for Structured Streaming ........ SUCCESS [05:05 min]
[INFO] Spark Kinesis Integration .......................... SUCCESS [01:32 min]
[INFO] Spark Project Examples ............................. SUCCESS [04:29 min]
[INFO] Spark Integration for Kafka 0.10 Assembly .......... SUCCESS [ 16.306 s]
[INFO] Spark Avro ......................................... SUCCESS [03:14 min]
[INFO] Spark Project Kinesis Assembly ..................... SUCCESS [ 20.368 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  34:57 min
[INFO] Finished at: 2021-07-17T06:23:21-07:00
[INFO] ------------------------------------------------------------------------
+ rm -rf /home/jenkins/workspace/spark-master-test-k8s/dist
+ mkdir -p /home/jenkins/workspace/spark-master-test-k8s/dist/jars
+ echo 'Spark 3.3.0-SNAPSHOT (git revision 71ea25d4f5) built for Hadoop 3.3.1'
+ echo 'Build flags: -DzincPort=3611' -Pkubernetes -Pkinesis-asl -Phive -Phive-thriftserver
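
The trace from here on matches Spark's dev/make-distribution.sh; given the echoed build flags, a roughly equivalent invocation would be (an assumption — the actual command sits above the truncated portion of this log):

  ./dev/make-distribution.sh -DzincPort=3611 -Pkubernetes -Pkinesis-asl -Phive -Phive-thriftserver
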
+ cp /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/activation-1.1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/aircompressor-0.19.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/algebra_2.12-2.0.0-M2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/annotations-17.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/antlr4-runtime-4.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/antlr-runtime-3.5.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/aopalliance-repackaged-2.6.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/arpack-2.2.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/arpack_combined_all-0.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/arrow-format-2.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/arrow-memory-core-2.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/arrow-memory-netty-2.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/arrow-vector-2.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/audience-annotations-0.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/automaton-1.11-8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/avro-1.10.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/avro-ipc-1.10.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/avro-mapred-1.10.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/blas-2.2.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/bonecp-0.8.0.RELEASE.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/breeze_2.12-1.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/breeze-macros_2.12-1.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/cats-kernel_2.12-2.0.0-M4.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/chill_2.12-0.10.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/chill-java-0.10.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-cli-1.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-codec-1.15.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-collections-3.2.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-compiler-3.0.16.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-compress-1.21.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-crypto-1.1.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-dbcp-1.4.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-io-2.8.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-lang-2.6.jar 
/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-lang3-3.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-logging-1.1.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-math3-3.4.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-net-3.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-pool-1.5.4.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-text-1.6.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/compress-lzf-1.0.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/core-1.1.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/curator-client-2.13.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/curator-framework-2.13.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/curator-recipes-2.13.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/datanucleus-api-jdo-4.2.4.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/datanucleus-core-4.1.17.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/datanucleus-rdbms-4.1.19.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/derby-10.14.2.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/flatbuffers-java-1.9.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/generex-1.0.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/gson-2.2.4.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/guava-14.0.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hadoop-client-api-3.3.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hadoop-client-runtime-3.3.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/HikariCP-2.5.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-beeline-2.3.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-cli-2.3.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-common-2.3.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-exec-2.3.9-core.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-jdbc-2.3.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-llap-common-2.3.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-metastore-2.3.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-serde-2.3.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-service-rpc-3.1.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-shims-0.23-2.3.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-shims-2.3.9.jar 
/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-shims-common-2.3.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-shims-scheduler-2.3.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-storage-api-2.7.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-vector-code-gen-2.3.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hk2-api-2.6.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hk2-locator-2.6.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hk2-utils-2.6.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/htrace-core4-4.1.0-incubating.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/httpclient-4.5.13.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/httpcore-4.4.14.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/istack-commons-runtime-3.0.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/ivy-2.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-annotations-2.12.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-core-2.12.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-core-asl-1.9.13.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-databind-2.12.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-dataformat-yaml-2.12.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-datatype-jsr310-2.11.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-mapper-asl-1.9.13.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-module-scala_2.12-2.12.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jakarta.annotation-api-1.3.5.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jakarta.inject-2.6.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jakarta.servlet-api-4.0.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jakarta.validation-api-2.0.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jakarta.ws.rs-api-2.1.6.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jakarta.xml.bind-api-2.3.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/janino-3.0.16.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/javassist-3.25.0-GA.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/javax.jdo-3.2.0-m3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/javolution-5.5.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jaxb-api-2.2.11.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jaxb-runtime-2.3.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jcl-over-slf4j-1.7.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jdo-api-3.0.1.jar 
/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jersey-client-2.34.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jersey-common-2.34.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jersey-container-servlet-2.34.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jersey-container-servlet-core-2.34.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jersey-hk2-2.34.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jersey-server-2.34.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/JLargeArrays-1.5.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jline-2.14.6.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/joda-time-2.10.10.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jodd-core-3.5.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jpam-1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/json-1.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/json4s-ast_2.12-3.7.0-M11.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/json4s-core_2.12-3.7.0-M11.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/json4s-jackson_2.12-3.7.0-M11.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/json4s-scalap_2.12-3.7.0-M11.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jsr305-3.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jta-1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/JTransforms-3.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jul-to-slf4j-1.7.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kryo-shaded-4.0.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-client-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-admissionregistration-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-apiextensions-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-apps-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-autoscaling-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-batch-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-certificates-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-common-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-coordination-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-core-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-discovery-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-events-5.5.0.jar 
/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-extensions-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-flowcontrol-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-metrics-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-networking-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-node-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-policy-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-rbac-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-scheduling-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-storageclass-5.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/lapack-2.2.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/leveldbjni-all-1.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/libfb303-0.9.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/libthrift-0.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/log4j-1.2.17.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/logging-interceptor-3.12.12.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/lz4-java-1.7.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/machinist_2.12-0.6.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/macro-compat_2.12-1.1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/metrics-core-4.2.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/metrics-graphite-4.2.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/metrics-jmx-4.2.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/metrics-json-4.2.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/metrics-jvm-4.2.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/minlog-1.3.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/netty-all-4.1.63.Final.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/objenesis-2.6.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/okhttp-3.12.12.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/okio-1.14.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/opencsv-2.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/orc-core-1.6.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/orc-mapreduce-1.6.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/orc-shims-1.6.9.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/oro-2.0.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/osgi-resource-locator-1.0.3.jar 
/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/paranamer-2.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/parquet-column-1.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/parquet-common-1.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/parquet-encoding-1.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/parquet-format-structures-1.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/parquet-hadoop-1.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/parquet-jackson-1.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/protobuf-java-2.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/py4j-0.10.9.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/pyrolite-4.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/RoaringBitmap-0.9.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/rocksdbjni-6.2.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/scala-collection-compat_2.12-2.1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/scala-compiler-2.12.14.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/scala-library-2.12.14.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/scala-parser-combinators_2.12-1.1.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/scala-reflect-2.12.14.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/scala-xml_2.12-1.2.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/shapeless_2.12-2.3.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/shims-0.9.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/slf4j-api-1.7.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/slf4j-log4j12-1.7.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/snakeyaml-1.27.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/snappy-java-1.1.8.4.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-catalyst_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-core_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-graphx_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-hive_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-hive-thriftserver_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-kubernetes_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-kvstore_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-launcher_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-mllib_2.12-3.3.0-SNAPSHOT.jar 
/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-mllib-local_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-network-common_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-network-shuffle_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-repl_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-sketch_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-sql_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-streaming_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-tags_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-tags_2.12-3.3.0-SNAPSHOT-tests.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-unsafe_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spire_2.12-0.17.0-M1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spire-macros_2.12-0.17.0-M1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spire-platform_2.12-0.17.0-M1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spire-util_2.12-0.17.0-M1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/ST4-4.0.4.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/stax-api-1.0.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/stream-2.9.6.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/super-csv-2.2.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/threeten-extra-1.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/transaction-api-1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/univocity-parsers-2.9.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/velocity-1.5.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/xbean-asm9-shaded-4.20.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/xz-1.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/zjsonpatch-0.3.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/zookeeper-3.6.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/zookeeper-jute-3.6.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/zstd-jni-1.5.0-2.jar /home/jenkins/workspace/spark-master-test-k8s/dist/jars/
+ '[' -f '/home/jenkins/workspace/spark-master-test-k8s/common/network-yarn/target/scala*/spark-*-yarn-shuffle.jar' ']'
+ '[' -d /home/jenkins/workspace/spark-master-test-k8s/resource-managers/kubernetes/core/target/ ']'
+ mkdir -p /home/jenkins/workspace/spark-master-test-k8s/dist/kubernetes/
+ cp -a /home/jenkins/workspace/spark-master-test-k8s/resource-managers/kubernetes/docker/src/main/dockerfiles /home/jenkins/workspace/spark-master-test-k8s/dist/kubernetes/
+ cp -a /home/jenkins/workspace/spark-master-test-k8s/resource-managers/kubernetes/integration-tests/tests /home/jenkins/workspace/spark-master-test-k8s/dist/kubernetes/
+ mkdir -p /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars
+ cp /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/aircompressor-0.19.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/annotations-17.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/commons-codec-1.15.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/commons-lang3-3.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/hive-storage-api-2.7.2.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/istack-commons-runtime-3.0.8.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/jakarta.xml.bind-api-2.3.2.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/jaxb-runtime-2.3.2.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/orc-core-1.6.9.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/orc-mapreduce-1.6.9.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/orc-shims-1.6.9.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/scopt_2.12-3.7.1.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/spark-examples_2.12-3.3.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/threeten-extra-1.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/aircompressor-0.19.jar
+ name=aircompressor-0.19.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/aircompressor-0.19.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/aircompressor-0.19.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/annotations-17.0.0.jar
+ name=annotations-17.0.0.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/annotations-17.0.0.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/annotations-17.0.0.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/commons-codec-1.15.jar
+ name=commons-codec-1.15.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/commons-codec-1.15.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/commons-codec-1.15.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/commons-lang3-3.12.0.jar
+ name=commons-lang3-3.12.0.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/commons-lang3-3.12.0.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/commons-lang3-3.12.0.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/hive-storage-api-2.7.2.jar
+ name=hive-storage-api-2.7.2.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/hive-storage-api-2.7.2.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/hive-storage-api-2.7.2.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/istack-commons-runtime-3.0.8.jar
+ name=istack-commons-runtime-3.0.8.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/istack-commons-runtime-3.0.8.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/istack-commons-runtime-3.0.8.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/jakarta.xml.bind-api-2.3.2.jar
+ name=jakarta.xml.bind-api-2.3.2.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/jakarta.xml.bind-api-2.3.2.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/jakarta.xml.bind-api-2.3.2.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/jaxb-runtime-2.3.2.jar
+ name=jaxb-runtime-2.3.2.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/jaxb-runtime-2.3.2.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/jaxb-runtime-2.3.2.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/orc-core-1.6.9.jar
+ name=orc-core-1.6.9.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/orc-core-1.6.9.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/orc-core-1.6.9.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/orc-mapreduce-1.6.9.jar
+ name=orc-mapreduce-1.6.9.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/orc-mapreduce-1.6.9.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/orc-mapreduce-1.6.9.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/orc-shims-1.6.9.jar
+ name=orc-shims-1.6.9.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/orc-shims-1.6.9.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/orc-shims-1.6.9.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/scopt_2.12-3.7.1.jar
+ name=scopt_2.12-3.7.1.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/scopt_2.12-3.7.1.jar ']'
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/spark-examples_2.12-3.3.0-SNAPSHOT.jar
+ name=spark-examples_2.12-3.3.0-SNAPSHOT.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/spark-examples_2.12-3.3.0-SNAPSHOT.jar ']'
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/threeten-extra-1.5.0.jar
+ name=threeten-extra-1.5.0.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/threeten-extra-1.5.0.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/threeten-extra-1.5.0.jar
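
The repeated for/basename/rm trace above is a de-duplication pass over the examples jars; reconstructed from the trace (a sketch, not the script verbatim), it is equivalent to:

  DISTDIR=/home/jenkins/workspace/spark-master-test-k8s/dist
  for f in "$DISTDIR"/examples/jars/*; do
    name=$(basename "$f")
    # drop example jars that already ship in dist/jars
    if [ -f "$DISTDIR/jars/$name" ]; then
      rm "$DISTDIR/examples/jars/$name"
    fi
  done
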
+ mkdir -p /home/jenkins/workspace/spark-master-test-k8s/dist/examples/src/main
+ cp -r /home/jenkins/workspace/spark-master-test-k8s/examples/src/main /home/jenkins/workspace/spark-master-test-k8s/dist/examples/src/
+ '[' -e /home/jenkins/workspace/spark-master-test-k8s/LICENSE-binary ']'
+ cp /home/jenkins/workspace/spark-master-test-k8s/LICENSE-binary /home/jenkins/workspace/spark-master-test-k8s/dist/LICENSE
+ cp -r /home/jenkins/workspace/spark-master-test-k8s/licenses-binary /home/jenkins/workspace/spark-master-test-k8s/dist/licenses
+ cp /home/jenkins/workspace/spark-master-test-k8s/NOTICE-binary /home/jenkins/workspace/spark-master-test-k8s/dist/NOTICE
+ '[' -e /home/jenkins/workspace/spark-master-test-k8s/CHANGES.txt ']'
+ cp -r /home/jenkins/workspace/spark-master-test-k8s/data /home/jenkins/workspace/spark-master-test-k8s/dist
+ '[' false == true ']'
+ echo 'Skipping building python distribution package'
Skipping building python distribution package
+ '[' true == true ']'
+ echo 'Building R source package'
Building R source package
++ grep Version /home/jenkins/workspace/spark-master-test-k8s/R/pkg/DESCRIPTION
++ awk '{print $NF}'
+ R_PACKAGE_VERSION=3.3.0
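
The two ++ lines above are halves of one pipeline; as a single line (reconstructed from the trace):

  R_PACKAGE_VERSION=$(grep Version /home/jenkins/workspace/spark-master-test-k8s/R/pkg/DESCRIPTION | awk '{print $NF}')
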
+ pushd /home/jenkins/workspace/spark-master-test-k8s/R
+ NO_TESTS=1
+ /home/jenkins/workspace/spark-master-test-k8s/R/check-cran.sh
Using R_SCRIPT_PATH = /usr/bin
++++ dirname /home/jenkins/workspace/spark-master-test-k8s/R/install-dev.sh
+++ cd /home/jenkins/workspace/spark-master-test-k8s/R
+++ pwd
++ FWDIR=/home/jenkins/workspace/spark-master-test-k8s/R
++ LIB_DIR=/home/jenkins/workspace/spark-master-test-k8s/R/lib
++ mkdir -p /home/jenkins/workspace/spark-master-test-k8s/R/lib
++ pushd /home/jenkins/workspace/spark-master-test-k8s/R
++ . /home/jenkins/workspace/spark-master-test-k8s/R/find-r.sh
+++ '[' -z /usr/bin ']'
++ . /home/jenkins/workspace/spark-master-test-k8s/R/create-rd.sh
+++ set -o pipefail
+++ set -e
+++++ dirname /home/jenkins/workspace/spark-master-test-k8s/R/create-rd.sh
++++ cd /home/jenkins/workspace/spark-master-test-k8s/R
++++ pwd
+++ FWDIR=/home/jenkins/workspace/spark-master-test-k8s/R
+++ pushd /home/jenkins/workspace/spark-master-test-k8s/R
+++ . /home/jenkins/workspace/spark-master-test-k8s/R/find-r.sh
++++ '[' -z /usr/bin ']'
+++ /usr/bin/Rscript -e ' if(requireNamespace("devtools", quietly=TRUE)) { setwd("/home/jenkins/workspace/spark-master-test-k8s/R"); devtools::document(pkg="./pkg", roclets="rd") }'
Updating SparkR documentation
First time using roxygen2. Upgrading automatically...
Loading SparkR
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
Writing structType.Rd
Writing print.structType.Rd
Writing structField.Rd
Writing print.structField.Rd
Writing summarize.Rd
Writing alias.Rd
Writing arrange.Rd
Writing as.data.frame.Rd
Writing cache.Rd
Writing checkpoint.Rd
Writing coalesce.Rd
Writing collect.Rd
Writing columns.Rd
Writing coltypes.Rd
Writing count.Rd
Writing cov.Rd
Writing corr.Rd
Writing createOrReplaceTempView.Rd
Writing cube.Rd
Writing dapply.Rd
Writing dapplyCollect.Rd
Writing gapply.Rd
Writing gapplyCollect.Rd
Writing describe.Rd
Writing distinct.Rd
Writing drop.Rd
Writing dropDuplicates.Rd
Writing nafunctions.Rd
Writing dtypes.Rd
Writing explain.Rd
Writing except.Rd
Writing exceptAll.Rd
Writing filter.Rd
Writing first.Rd
Writing groupBy.Rd
Writing hint.Rd
Writing insertInto.Rd
Writing intersect.Rd
Writing intersectAll.Rd
Writing isLocal.Rd
Writing isStreaming.Rd
Writing limit.Rd
Writing localCheckpoint.Rd
Writing merge.Rd
Writing mutate.Rd
Writing orderBy.Rd
Writing persist.Rd
Writing printSchema.Rd
Writing registerTempTable-deprecated.Rd
Writing rename.Rd
Writing repartition.Rd
Writing repartitionByRange.Rd
Writing sample.Rd
Writing rollup.Rd
Writing sampleBy.Rd
Writing saveAsTable.Rd
Writing take.Rd
Writing write.df.Rd
Writing write.jdbc.Rd
Writing write.json.Rd
Writing write.orc.Rd
Writing write.parquet.Rd
Writing write.stream.Rd
Writing write.text.Rd
Writing schema.Rd
Writing select.Rd
Writing selectExpr.Rd
Writing showDF.Rd
Writing subset.Rd
Writing summary.Rd
Writing union.Rd
Writing unionAll.Rd
Writing unionByName.Rd
Writing unpersist.Rd
Writing with.Rd
Writing withColumn.Rd
Writing withWatermark.Rd
Writing randomSplit.Rd
Writing broadcast.Rd
Writing columnfunctions.Rd
Writing between.Rd
Writing cast.Rd
Writing endsWith.Rd
Writing startsWith.Rd
Writing column_nonaggregate_functions.Rd
Writing otherwise.Rd
Writing over.Rd
Writing eq_null_safe.Rd
Writing withField.Rd
Writing dropFields.Rd
Writing partitionBy.Rd
Writing rowsBetween.Rd
Writing rangeBetween.Rd
Writing windowPartitionBy.Rd
Writing windowOrderBy.Rd
Writing column_datetime_diff_functions.Rd
Writing column_aggregate_functions.Rd
Writing column_collection_functions.Rd
Writing column_ml_functions.Rd
Writing column_string_functions.Rd
Writing column_misc_functions.Rd
Writing avg.Rd
Writing column_math_functions.Rd
Writing column.Rd
Writing column_window_functions.Rd
Writing column_datetime_functions.Rd
Writing column_avro_functions.Rd
Writing last.Rd
Writing not.Rd
Writing fitted.Rd
Writing predict.Rd
Writing rbind.Rd
Writing spark.als.Rd
Writing spark.bisectingKmeans.Rd
Writing spark.fmClassifier.Rd
Writing spark.fmRegressor.Rd
Writing spark.gaussianMixture.Rd
Writing spark.gbt.Rd
Writing spark.glm.Rd
Writing spark.isoreg.Rd
Writing spark.kmeans.Rd
Writing spark.kstest.Rd
Writing spark.lda.Rd
Writing spark.logit.Rd
Writing spark.mlp.Rd
Writing spark.naiveBayes.Rd
Writing spark.decisionTree.Rd
Writing spark.randomForest.Rd
Writing spark.survreg.Rd
Writing spark.svmLinear.Rd
Writing spark.fpGrowth.Rd
Writing spark.prefixSpan.Rd
Writing spark.powerIterationClustering.Rd
Writing spark.lm.Rd
Writing write.ml.Rd
Writing awaitTermination.Rd
Writing isActive.Rd
Writing lastProgress.Rd
Writing queryName.Rd
Writing status.Rd
Writing stopQuery.Rd
Writing print.jobj.Rd
Writing show.Rd
Writing substr.Rd
Writing match.Rd
Writing GroupedData.Rd
Writing pivot.Rd
Writing SparkDataFrame.Rd
Writing storageLevel.Rd
Writing toJSON.Rd
Writing nrow.Rd
Writing ncol.Rd
Writing dim.Rd
Writing head.Rd
Writing join.Rd
Writing crossJoin.Rd
Writing attach.Rd
Writing str.Rd
Writing histogram.Rd
Writing getNumPartitions.Rd
Writing sparkR.conf.Rd
Writing sparkR.version.Rd
Writing createDataFrame.Rd
Writing read.json.Rd
Writing read.orc.Rd
Writing read.parquet.Rd
Writing read.text.Rd
Writing sql.Rd
Writing tableToDF.Rd
Writing read.df.Rd
Writing read.jdbc.Rd
Writing read.stream.Rd
Writing WindowSpec.Rd
Writing createExternalTable-deprecated.Rd
Writing createTable.Rd
Writing cacheTable.Rd
Writing uncacheTable.Rd
Writing clearCache.Rd
Writing dropTempTable-deprecated.Rd
Writing dropTempView.Rd
Writing tables.Rd
Writing tableNames.Rd
Writing currentDatabase.Rd
Writing setCurrentDatabase.Rd
Writing listDatabases.Rd
Writing listTables.Rd
Writing listColumns.Rd
Writing listFunctions.Rd
Writing recoverPartitions.Rd
Writing refreshTable.Rd
Writing refreshByPath.Rd
Writing spark.addFile.Rd
Writing spark.getSparkFilesRootDirectory.Rd
Writing spark.getSparkFiles.Rd
Writing spark.lapply.Rd
Writing setLogLevel.Rd
Writing setCheckpointDir.Rd
Writing unresolved_named_lambda_var.Rd
Writing create_lambda.Rd
Writing invoke_higher_order_function.Rd
Writing install.spark.Rd
Writing sparkR.callJMethod.Rd
Writing sparkR.callJStatic.Rd
Writing sparkR.newJObject.Rd
Writing LinearSVCModel-class.Rd
Writing LogisticRegressionModel-class.Rd
Writing MultilayerPerceptronClassificationModel-class.Rd
Writing NaiveBayesModel-class.Rd
Writing FMClassificationModel-class.Rd
Writing BisectingKMeansModel-class.Rd
Writing GaussianMixtureModel-class.Rd
Writing KMeansModel-class.Rd
Writing LDAModel-class.Rd
Writing PowerIterationClustering-class.Rd
Writing FPGrowthModel-class.Rd
Writing PrefixSpan-class.Rd
Writing ALSModel-class.Rd
Writing AFTSurvivalRegressionModel-class.Rd
Writing GeneralizedLinearRegressionModel-class.Rd
Writing IsotonicRegressionModel-class.Rd
Writing LinearRegressionModel-class.Rd
Writing FMRegressionModel-class.Rd
Writing glm.Rd
Writing KSTest-class.Rd
Writing GBTRegressionModel-class.Rd
Writing GBTClassificationModel-class.Rd
Writing RandomForestRegressionModel-class.Rd
Writing RandomForestClassificationModel-class.Rd
Writing DecisionTreeRegressionModel-class.Rd
Writing DecisionTreeClassificationModel-class.Rd
Writing read.ml.Rd
Writing sparkR.session.stop.Rd
Writing sparkR.init-deprecated.Rd
Writing sparkRSQL.init-deprecated.Rd
Writing sparkRHive.init-deprecated.Rd
Writing sparkR.session.Rd
Writing sparkR.uiWebUrl.Rd
Writing setJobGroup.Rd
Writing clearJobGroup.Rd
Writing cancelJobGroup.Rd
Writing setJobDescription.Rd
Writing setLocalProperty.Rd
Writing getLocalProperty.Rd
Writing crosstab.Rd
Writing freqItems.Rd
Writing approxQuantile.Rd
Writing StreamingQuery.Rd
Writing hashCode.Rd
++ /usr/bin/R CMD INSTALL --library=/home/jenkins/workspace/spark-master-test-k8s/R/lib /home/jenkins/workspace/spark-master-test-k8s/R/pkg/
* installing *source* package ‘SparkR’ ...
** using staged installation
** R
** inst
** byte-compile and prepare package for lazy loading
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (SparkR)
++ cd /home/jenkins/workspace/spark-master-test-k8s/R/lib
++ jar cfM /home/jenkins/workspace/spark-master-test-k8s/R/lib/sparkr.zip SparkR
++ popd
++ cd /home/jenkins/workspace/spark-master-test-k8s/R/..
++ pwd
+ SPARK_HOME=/home/jenkins/workspace/spark-master-test-k8s
+ . /home/jenkins/workspace/spark-master-test-k8s/bin/load-spark-env.sh
++ '[' -z /home/jenkins/workspace/spark-master-test-k8s ']'
++ SPARK_ENV_SH=spark-env.sh
++ '[' -z '' ']'
++ export SPARK_ENV_LOADED=1
++ SPARK_ENV_LOADED=1
++ export SPARK_CONF_DIR=/home/jenkins/workspace/spark-master-test-k8s/conf
++ SPARK_CONF_DIR=/home/jenkins/workspace/spark-master-test-k8s/conf
++ SPARK_ENV_SH=/home/jenkins/workspace/spark-master-test-k8s/conf/spark-env.sh
++ [[ -f /home/jenkins/workspace/spark-master-test-k8s/conf/spark-env.sh ]]
++ '[' -z '' ']'
++ SCALA_VERSION_1=2.13
++ SCALA_VERSION_2=2.12
++ ASSEMBLY_DIR_1=/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.13
++ ASSEMBLY_DIR_2=/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12
++ ENV_VARIABLE_DOC=https://spark.apache.org/docs/latest/configuration.html#environment-variables
++ [[ -d /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.13 ]]
++ [[ -d /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.13 ]]
++ export SPARK_SCALA_VERSION=2.12
++ SPARK_SCALA_VERSION=2.12
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/RELEASE ']'
+ SPARK_JARS_DIR=/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars
+ '[' -d /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars ']'
+ SPARK_HOME=/home/jenkins/workspace/spark-master-test-k8s
+ /usr/bin/R CMD build /home/jenkins/workspace/spark-master-test-k8s/R/pkg
* checking for file ‘/home/jenkins/workspace/spark-master-test-k8s/R/pkg/DESCRIPTION’ ... OK
* preparing ‘SparkR’:
* checking DESCRIPTION meta-information ... OK
* installing the package to build vignettes
* creating vignettes ... ERROR
--- re-building ‘sparkr-vignettes.Rmd’ using rmarkdown

Attaching package: 'SparkR'

The following objects are masked from 'package:stats':

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from 'package:base':

    as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
    rank, rbind, sample, startsWith, subset, summary, transform, union

Picked up _JAVA_OPTIONS: -XX:-UsePerfData 
Picked up _JAVA_OPTIONS: -XX:-UsePerfData 
21/07/17 06:24:09 WARN Utils: Your hostname, research-jenkins-worker-06 resolves to a loopback address: 127.0.1.1; using 172.17.0.1 instead (on interface docker0)
21/07/17 06:24:09 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
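The two warnings above are harmless here, but when the chosen address matters the fallback can be avoided by pinning it. A minimal sketch of the programmatic equivalent of exporting SPARK_LOCAL_IP, assuming a driver-side SparkConf (172.17.0.1 is simply the address this log fell back to):

    import org.apache.spark.SparkConf

    // Hedged sketch: pin the driver bind/advertise address instead of letting
    // Spark guess past the loopback entry. Substitute a real address.
    val conf = new SparkConf()
      .set("spark.driver.bindAddress", "172.17.0.1")
      .set("spark.driver.host", "172.17.0.1")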
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
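The adjustment described above is a one-liner on the driver; a minimal sketch, assuming an active SparkContext named sc:

    // Hedged sketch: valid levels include ALL, DEBUG, INFO, WARN, ERROR, OFF.
    sc.setLogLevel("ERROR")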
21/07/17 06:24:10 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

21/07/17 06:24:19 WARN Instrumentation: [1fe88d5e] regParam is zero, which might cause numerical instability and overfitting.
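This warning (it recurs below for each model the vignettes fit) means the estimator ran with regParam = 0, i.e. an unpenalized fit. A minimal sketch of the remedy, using a LinearRegression stage purely for illustration (0.01 is an arbitrary value):

    import org.apache.spark.ml.regression.LinearRegression

    // Hedged sketch: a small positive regParam adds the L2 penalty whose
    // absence the Instrumentation warning is flagging.
    val lr = new LinearRegression()
      .setRegParam(0.01)
      .setElasticNetParam(0.0) // 0.0 keeps it a pure L2 (ridge) penalty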
21/07/17 06:24:24 WARN package: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.
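The truncation above is cosmetic; if full plan strings are wanted, the named setting can be raised at session construction. A minimal sketch (2000 is an arbitrary value; the default is 25):

    import org.apache.spark.sql.SparkSession

    // Hedged sketch: lift the plan-string field limit named in the warning.
    val spark = SparkSession.builder()
      .config("spark.sql.debug.maxToStringFields", "2000")
      .getOrCreate()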
Warning in FUN(X[[i]], ...) :
  Use resid_ds instead of resid.ds as column name
Warning in FUN(X[[i]], ...) :
  Use ecog_ps instead of ecog.ps as column name
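Both warnings record an automatic dot-to-underscore rename, since dots collide with Spark SQL's struct-field access syntax. A minimal sketch of the explicit equivalent, assuming a hypothetical DataFrame df that carries those columns:

    // Hedged sketch: df is hypothetical; this is the rename the warnings
    // above report having applied automatically.
    val renamed = df
      .withColumnRenamed("resid.ds", "resid_ds")
      .withColumnRenamed("ecog.ps", "ecog_ps")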
21/07/17 06:24:43 ERROR StrongWolfeLineSearch: Encountered bad values in function evaluation. Decreasing step size to 0.5
21/07/17 06:24:44 WARN Instrumentation: [d06a44bf] regParam is zero, which might cause numerical instability and overfitting.
21/07/17 06:24:45 WARN Instrumentation: [5f9240f8] regParam is zero, which might cause numerical instability and overfitting.
21/07/17 06:24:45 WARN Instrumentation: [c450b575] regParam is zero, which might cause numerical instability and overfitting.
21/07/17 06:24:47 WARN Instrumentation: [1ec11e69] regParam is zero, which might cause numerical instability and overfitting.
21/07/17 06:24:57 WARN LAPACK: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
21/07/17 06:24:57 WARN LAPACK: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
21/07/17 06:24:57 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
21/07/17 06:24:57 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
21/07/17 06:25:06 WARN PrefixSpan: Input data is not cached.
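PrefixSpan makes several passes over its input, so an uncached Dataset gets recomputed on each pass. A minimal sketch of the fix, assuming a hypothetical Dataset seqDF with a "sequence" column (the minSupport value is illustrative):

    import org.apache.spark.ml.fpm.PrefixSpan

    // Hedged sketch: cache the input once before the iterative mining passes.
    val patterns = new PrefixSpan()
      .setSequenceCol("sequence")
      .setMinSupport(0.5)
      .findFrequentSequentialPatterns(seqDF.cache())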
21/07/17 06:25:07 WARN Instrumentation: [6f848c10] regParam is zero, which might cause numerical instability and overfitting.
21/07/17 06:25:08 ERROR Utils: Aborting task
java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseCurrentWriter(FileFormatDataWriter.scala:62)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:73)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:94)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$2(FileFormatWriter.scala:308)
	at org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:611)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:305)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1502)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:317)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$16(FileFormatWriter.scala:229)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:499)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1468)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:502)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
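The NoSuchMethodError above is the classic JDK 8 vs JDK 9+ covariant-return mismatch: since JDK 9, java.nio.ByteBuffer overrides rewind() (and flip(), clear(), ...) with a ByteBuffer return type, whereas on JDK 8 only java.nio.Buffer.rewind() exists. Bytecode compiled by a JDK 9+ compiler without --release 8 therefore records the descriptor ()Ljava/nio/ByteBuffer;, which a JDK 8 runtime cannot resolve when parquet's SnappyCompressor calls it. A minimal sketch of the failure mode (compile on JDK 9+, run on JDK 8; names are illustrative):

    import java.nio.ByteBuffer

    object RewindRepro {
      def main(args: Array[String]): Unit = {
        val buf = ByteBuffer.allocate(16)
        buf.put(1.toByte)
        // Against a JDK 9+ class library this call links to
        // ByteBuffer.rewind()Ljava/nio/ByteBuffer; run on JDK 8 it throws
        // exactly the NoSuchMethodError logged above.
        buf.rewind()
        println(buf.get())
      }
    }

The usual remedies are to keep the compile-time and run-time JVMs on the same major version, or to compile for the older target (javac --release 8, scalac -release 8, or maven.compiler.release=8) so that only JDK 8 signatures get linked.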
21/07/17 06:25:08 ERROR FileFormatWriter: Job job_20210717062508107483431094267321_3880 aborted.
21/07/17 06:25:08 ERROR Executor: Exception in task 0.0 in stage 3880.0 (TID 877)
org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:500)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:327)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$16(FileFormatWriter.scala:229)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:499)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1468)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:502)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseCurrentWriter(FileFormatDataWriter.scala:62)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:73)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:94)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$2(FileFormatWriter.scala:308)
	at org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:611)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:305)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1502)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:317)
	... 9 more
21/07/17 06:25:08 WARN TaskSetManager: Lost task 0.0 in stage 3880.0 (TID 877) (172.17.0.1 executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:500)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:327)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$16(FileFormatWriter.scala:229)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:499)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1468)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:502)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseCurrentWriter(FileFormatDataWriter.scala:62)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:73)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:94)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$2(FileFormatWriter.scala:308)
	at org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:611)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:305)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1502)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:317)
	... 9 more

21/07/17 06:25:08 ERROR TaskSetManager: Task 0 in stage 3880.0 failed 1 times; aborting job
21/07/17 06:25:09 ERROR FileFormatWriter: Aborting job bb6f72b2-22b0-4b2d-a73e-2752d782c647.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3880.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3880.0 (TID 877) (172.17.0.1 executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:500)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:327)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$16(FileFormatWriter.scala:229)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:499)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1468)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:502)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseCurrentWriter(FileFormatDataWriter.scala:62)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:73)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:94)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$2(FileFormatWriter.scala:308)
	at org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:611)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:305)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1502)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:317)
	... 9 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2404)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2353)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2352)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2352)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1109)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1109)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1109)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2591)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2533)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2522)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:898)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2210)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:218)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:186)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:113)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:111)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:125)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:97)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:97)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:93)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:481)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:82)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:481)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:457)
	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:93)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:80)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:78)
	at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:115)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:848)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:382)
	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:355)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
	at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:781)
	at org.apache.spark.ml.feature.RFormulaModel$RFormulaModelWriter.saveImpl(RFormula.scala:434)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$5(Pipeline.scala:257)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4(Pipeline.scala:257)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4$adapted(Pipeline.scala:254)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1(Pipeline.scala:254)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1$adapted(Pipeline.scala:247)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.saveImpl(Pipeline.scala:247)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.saveImpl(Pipeline.scala:346)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.super$save(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$4(Pipeline.scala:344)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3$adapted(Pipeline.scala:344)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.save(Pipeline.scala:344)
	at org.apache.spark.ml.util.MLWritable.save(ReadWrite.scala:287)
	at org.apache.spark.ml.util.MLWritable.save$(ReadWrite.scala:287)
	at org.apache.spark.ml.PipelineModel.save(Pipeline.scala:296)
	at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$GeneralizedLinearRegressionWrapperWriter.saveImpl(GeneralizedLinearRegressionWrapper.scala:174)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:164)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:105)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:39)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:500)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:327)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$16(FileFormatWriter.scala:229)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:499)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1468)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:502)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseCurrentWriter(FileFormatDataWriter.scala:62)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:73)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:94)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$2(FileFormatWriter.scala:308)
	at org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:611)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:305)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1502)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:317)
	... 9 more
21/07/17 06:25:09 ERROR Instrumentation: org.apache.spark.SparkException: Job aborted.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.jobAbortedError(QueryExecutionErrors.scala:496)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:251)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:186)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:113)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:111)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:125)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:97)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:97)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:93)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:481)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:82)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:481)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:457)
	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:93)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:80)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:78)
	at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:115)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:848)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:382)
	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:355)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
	at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:781)
	at org.apache.spark.ml.feature.RFormulaModel$RFormulaModelWriter.saveImpl(RFormula.scala:434)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$5(Pipeline.scala:257)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4(Pipeline.scala:257)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4$adapted(Pipeline.scala:254)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1(Pipeline.scala:254)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1$adapted(Pipeline.scala:247)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.saveImpl(Pipeline.scala:247)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.saveImpl(Pipeline.scala:346)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.super$save(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$4(Pipeline.scala:344)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3$adapted(Pipeline.scala:344)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.save(Pipeline.scala:344)
	at org.apache.spark.ml.util.MLWritable.save(ReadWrite.scala:287)
	at org.apache.spark.ml.util.MLWritable.save$(ReadWrite.scala:287)
	at org.apache.spark.ml.PipelineModel.save(Pipeline.scala:296)
	at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$GeneralizedLinearRegressionWrapperWriter.saveImpl(GeneralizedLinearRegressionWrapper.scala:174)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:164)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:105)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:39)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3880.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3880.0 (TID 877) (172.17.0.1 executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:500)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:327)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$16(FileFormatWriter.scala:229)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:499)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1468)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:502)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseCurrentWriter(FileFormatDataWriter.scala:62)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:73)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:94)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$2(FileFormatWriter.scala:308)
	at org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:611)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:305)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1502)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:317)
	... 9 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2404)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2353)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2352)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2352)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1109)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1109)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1109)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2591)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2533)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2522)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:898)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2210)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:218)
	... 102 more
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:500)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:327)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$16(FileFormatWriter.scala:229)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:499)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1468)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:502)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseCurrentWriter(FileFormatDataWriter.scala:62)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:73)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:94)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$2(FileFormatWriter.scala:308)
	at org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:611)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:305)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1502)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:317)
	... 9 more

	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseCurrentWriter(FileFormatDataWriter.scala:62)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:73)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:94)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$2(FileFormatWriter.scala:308)
	at org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:611)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:305)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1502)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:317)
	... 9 more
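
A note on the root cause above: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer; is the well-known JDK 9 covariant-return-type mismatch. Since Java 9, ByteBuffer overrides rewind() to return ByteBuffer rather than java.nio.Buffer, so classes compiled on JDK 9+ without javac --release 8 embed the descriptor rewind()Ljava/nio/ByteBuffer;, which does not exist on a JDK 8 runtime. The throwing frame is Parquet's SnappyCompressor.reset, which suggests the classes on this classpath were compiled with a newer JDK than the Java 8 runtime executing the SparkR vignettes. A minimal sketch of the pitfall and the usual source-level workaround, in plain Java and independent of this build:

import java.nio.Buffer;
import java.nio.ByteBuffer;

// Hypothetical demo class, not part of Spark or Parquet.
public class RewindCompatDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.putInt(42);

        // Compiled on JDK 9+ without --release 8, javac resolves rewind()
        // against the covariant override added in Java 9 and emits the
        // descriptor ByteBuffer.rewind()Ljava/nio/ByteBuffer;. A JDK 8 JVM
        // has no method with that descriptor and throws NoSuchMethodError
        // at this call site.
        buf.rewind();

        // Portable form: call through the Buffer supertype so the emitted
        // descriptor is Buffer.rewind()Ljava/nio/Buffer;, which exists on
        // every JDK.
        ((Buffer) buf).rewind();

        System.out.println(buf.getInt()); // prints 42
    }
}

Building with javac --release 8 (rather than -source/-target 8 alone) also avoids the problem, because --release compiles against the JDK 8 platform API and keeps the older descriptor.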

21/07/17 06:25:09 ERROR RBackendHandler: save on 1204 failed
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:164)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:105)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:39)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.jobAbortedError(QueryExecutionErrors.scala:496)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:251)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:186)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:113)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:111)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:125)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:97)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:97)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:93)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:481)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:82)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:481)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:457)
	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:93)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:80)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:78)
	at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:115)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:848)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:382)
	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:355)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
	at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:781)
	at org.apache.spark.ml.feature.RFormulaModel$RFormulaModelWriter.saveImpl(RFormula.scala:434)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$5(Pipeline.scala:257)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4(Pipeline.scala:257)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4$adapted(Pipeline.scala:254)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1(Pipeline.scala:254)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1$adapted(Pipeline.scala:247)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.saveImpl(Pipeline.scala:247)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.saveImpl(Pipeline.scala:346)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.super$save(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$4(Pipeline.scala:344)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3$adapted(Pipeline.scala:344)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.save(Pipeline.scala:344)
	at org.apache.spark.ml.util.MLWritable.save(ReadWrite.scala:287)
	at org.apache.spark.ml.util.MLWritable.save$(ReadWrite.scala:287)
	at org.apache.spark.ml.PipelineModel.save(Pipeline.scala:296)
	at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$GeneralizedLinearRegressionWrapperWriter.saveImpl(GeneralizedLinearRegressionWrapper.scala:174)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	... 37 more
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3880.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3880.0 (TID 877) (172.17.0.1 executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:500)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:327)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$16(FileFormatWriter.scala:229)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:499)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1468)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:502)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseCurrentWriter(FileFormatDataWriter.scala:62)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:73)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:94)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$2(FileFormatWriter.scala:308)
	at org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:611)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:305)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1502)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:317)
	... 9 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2404)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2353)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2352)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2352)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1109)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1109)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1109)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2591)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2533)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2522)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:898)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2210)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:218)
	... 102 more
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:500)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:327)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$16(FileFormatWriter.scala:229)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:499)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1468)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:502)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseCurrentWriter(FileFormatDataWriter.scala:62)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:73)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:94)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$2(FileFormatWriter.scala:308)
	at org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:611)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:305)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1502)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:317)
	... 9 more
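
For context on what the job was doing when this fired: the ML frames in the trace above (GeneralizedLinearRegressionWrapper.saveImpl -> PipelineModel.save -> RFormulaModelWriter.saveImpl -> DataFrameWriter.parquet) show the vignette persisting a fitted GLM pipeline, whose model data is written as Snappy-compressed Parquet. Any write that reaches DataFrameWriter.parquet on this classpath should reproduce the same error, so a much smaller Java-API sketch suffices; the session settings and output path below are illustrative, not taken from this job:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SnappyParquetCheck {
    public static void main(String[] args) {
        // A local single-threaded session is enough; the failure is in the
        // Parquet codec path of the write task, not in cluster plumbing.
        SparkSession spark = SparkSession.builder()
            .master("local[1]")
            .appName("snappy-parquet-check")
            .getOrCreate();

        Dataset<Row> df = spark.range(100).toDF();

        // Snappy is Spark's default Parquet codec; spelling it out makes the
        // failing component explicit. The write bottoms out in Parquet's
        // SnappyCompressor, the frame that throws above.
        df.write()
          .mode("overwrite")
          .option("compression", "snappy")
          .parquet("/tmp/snappy-parquet-check");

        spark.stop();
    }
}

On a JDK 8 runtime with JDK 9+-compiled Parquet/Hadoop classes on the classpath, this small write should fail with the same NoSuchMethodError, which makes it a convenient bisection tool: if it passes, the problem is elsewhere.
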
Quitting from lines 1120-1137 (sparkr-vignettes.Rmd) 
Error: processing vignette 'sparkr-vignettes.Rmd' failed with diagnostics:
org.apache.spark.SparkException: Job aborted.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.jobAbortedError(QueryExecutionErrors.scala:496)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:251)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:186)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:113)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:111)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:125)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:97)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:97)
	at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:93)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:481)
	at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:82)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:481)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
	at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:457)
	at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:93)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:80)
	at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:78)
	at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:115)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:848)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:382)
	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:355)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
	at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:781)
	at org.apache.spark.ml.feature.RFormulaModel$RFormulaModelWriter.saveImpl(RFormula.scala:434)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$5(Pipeline.scala:257)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4(Pipeline.scala:257)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4$adapted(Pipeline.scala:254)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1(Pipeline.scala:254)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1$adapted(Pipeline.scala:247)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.saveImpl(Pipeline.scala:247)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.saveImpl(Pipeline.scala:346)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.super$save(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$4(Pipeline.scala:344)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3$adapted(Pipeline.scala:344)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.save(Pipeline.scala:344)
	at org.apache.spark.ml.util.MLWritable.save(ReadWrite.scala:287)
	at org.apache.spark.ml.util.MLWritable.save$(ReadWrite.scala:287)
	at org.apache.spark.ml.PipelineModel.save(Pipeline.scala:296)
	at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$GeneralizedLinearRegressionWrapperWriter.saveImpl(GeneralizedLinearRegressionWrapper.scala:174)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:164)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:105)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:39)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
	at io.netty.chann
--- failed re-building ‘sparkr-vignettes.Rmd’

SUMMARY: processing the following file failed:
  ‘sparkr-vignettes.Rmd’

Error: Vignette re-building failed.
Execution halted
+ retcode=1
+ PATH=/usr/java/latest/bin:/home/anaconda/envs/py36/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/bin:/usr/sbin
+ /home/jenkins/bin/kill_zinc_nailgun.py --zinc-port 3611
+ ((  1 == 0  ))
+ rm -rf
+ exit
Archiving artifacts
‘spark-*.tgz’ doesn’t match anything
ERROR: Step ‘Archive the artifacts’ failed: No artifacts found that match the file pattern "spark-*.tgz". Configuration error?
Finished: FAILURE
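
Reading the tail of the log as a chain: the lines prefixed with "+" are the shell's set -x execution trace. The vignette abort makes the R rebuild exit nonzero ("Execution halted"), the script records retcode=1, cleans up, and exits; no spark-*.tgz was ever produced, so the artifact-archiving step finds nothing and the final "No artifacts found" error is a downstream symptom of the same failure rather than a second problem. When triaging, a quick way to confirm which rewind() the vignette-running JVM actually has is a reflective probe; the helper below is hypothetical, not part of this job:

public class JdkRewindProbe {
    public static void main(String[] args) throws NoSuchMethodException {
        System.out.println("java.version = " + System.getProperty("java.version"));

        // On JDK 8, ByteBuffer declares no rewind() of its own, so this
        // resolves to the inherited java.nio.Buffer.rewind() returning
        // Buffer. On JDK 9+, it resolves to the covariant override in
        // ByteBuffer that returns ByteBuffer.
        java.lang.reflect.Method m = java.nio.ByteBuffer.class.getMethod("rewind");
        System.out.println("rewind() declared by " + m.getDeclaringClass().getName()
            + ", returns " + m.getReturnType().getName());
    }
}

If it prints java.nio.Buffer, the runtime is JDK 8, and any class compiled on JDK 9+ without --release 8 that calls ByteBuffer.rewind() will fail exactly as above. The usual fixes are to run the SparkR step on the same JDK the classes were compiled with, or to rebuild with javac --release 8.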