Console Output

Skipping 941 KB..
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (process-resource-bundles) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/src/main/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Not compiling main sources
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:compile (scala-compile-first) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] compile in 0.0 s
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-antrun-plugin:1.8:run (create-tmp-dir) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/target/tmp
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/src/test/resources
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Not compiling test sources
[INFO] 
[INFO] --- maven-dependency-plugin:3.1.1:build-classpath (generate-test-classpath) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Dependencies classpath:
/home/jenkins/.m2/repository/org/scalatestplus/scalatestplus-mockito_2.12/1.0.0-SNAP5/scalatestplus-mockito_2.12-1.0.0-SNAP5.jar:/home/jenkins/.m2/repository/com/clearspring/analytics/stream/2.9.6/stream-2.9.6.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-core/2.12.2/jackson-core-2.12.2.jar:/home/jenkins/.m2/repository/org/apache/avro/avro-mapred/1.10.2/avro-mapred-1.10.2.jar:/home/jenkins/.m2/repository/org/glassfish/hk2/hk2-locator/2.6.1/hk2-locator-2.6.1.jar:/home/jenkins/.m2/repository/org/glassfish/hk2/external/jakarta.inject/2.6.1/jakarta.inject-2.6.1.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-log4j12/1.7.30/slf4j-log4j12-1.7.30.jar:/home/jenkins/.m2/repository/org/glassfish/hk2/hk2-utils/2.6.1/hk2-utils-2.6.1.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-freespec_2.12/3.2.3/scalatest-freespec_2.12-3.2.3.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-funsuite_2.12/3.2.3/scalatest-funsuite_2.12-3.2.3.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-jndi/9.4.37.v20210219/jetty-jndi-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper-jute/3.6.2/zookeeper-jute-3.6.2.jar:/home/jenkins/.m2/repository/jakarta/servlet/jakarta.servlet-api/4.0.3/jakarta.servlet-api-4.0.3.jar:/home/jenkins/.m2/repository/io/netty/netty-all/4.1.51.Final/netty-all-4.1.51.Final.jar:/home/jenkins/.m2/repository/com/thoughtworks/paranamer/paranamer/2.8/paranamer-2.8.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-funspec_2.12/3.2.3/scalatest-funspec_2.12-3.2.3.jar:/home/jenkins/.m2/repository/com/amazonaws/jmespath-java/1.11.844/jmespath-java-1.11.844.jar:/home/jenkins/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.jar:/home/jenkins/.m2/repository/org/objenesis/objenesis/2.6/objenesis-2.6.jar:/home/jenkins/.m2/repository/org/fusesource/leveldbjni/leveldbjni-all/1.8/leveldbjni-all-1.8.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-text/1.6/commons-text-1.6.jar:/home/jenkins/.m2/repository/org/glassfish/jersey/media/jersey-media-jaxb/2.30/jersey-media-jaxb-2.30.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-diagrams_2.12/3.2.3/scalatest-diagrams_2.12-3.2.3.jar:/home/jenkins/.m2/repository/jakarta/ws/rs/jakarta.ws.rs-api/2.1.6/jakarta.ws.rs-api-2.1.6.jar:/home/jenkins/.m2/repository/org/glassfish/hk2/external/aopalliance-repackaged/2.6.1/aopalliance-repackaged-2.6.1.jar:/home/jenkins/.m2/repository/org/json4s/json4s-core_2.12/3.7.0-M5/json4s-core_2.12-3.7.0-M5.jar:/home/jenkins/.m2/repository/com/google/protobuf/protobuf-java/2.6.1/protobuf-java-2.6.1.jar:/home/jenkins/.m2/repository/org/apache/zookeeper/zookeeper/3.6.2/zookeeper-3.6.2.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.37.v20210219/jetty-webapp-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/org/apache/htrace/htrace-core4/4.1.0-incubating/htrace-core4-4.1.0-incubating.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/module/jackson-module-scala_2.12/2.12.2/jackson-module-scala_2.12-2.12.2.jar:/home/jenkins/.m2/repository/commons-codec/commons-codec/1.15/commons-codec-1.15.jar:/home/jenkins/.m2/repository/commons-io/commons-io/2.8.0/commons-io-2.8.0.jar:/home/jenkins/.m2/repository/com/amazonaws/aws-java-sdk-kms/1.11.844/aws-java-sdk-kms-1.11.844.jar:/home/jenkins/workspace/spark-master-test-k8s/common/network-shuffle/target/spark-network-shuffle_2.12-3.2.0-SNAPSHOT.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-proxy/9.4.37.v20210219/jetty-proxy-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-library/2.12.10/scala-library-2.12.10.jar:/home/jenkins/.m2/repository/com/esotericsoftware/kryo-shaded/4.0.2/kryo-shaded-4.0.2.jar:/home/jenkins/.m2/repository/com/esotericsoftware/minlog/1.3.0/minlog-1.3.0.jar:/home/jenkins/.m2/repository/com/amazonaws/amazon-kinesis-client/1.14.0/amazon-kinesis-client-1.14.0.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest_2.12/3.2.3/scalatest_2.12-3.2.3.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy/1.10.13/byte-buddy-1.10.13.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-mustmatchers_2.12/3.2.3/scalatest-mustmatchers_2.12-3.2.3.jar:/home/jenkins/.m2/repository/com/squareup/okhttp3/okhttp/3.11.0/okhttp-3.11.0.jar:/home/jenkins/.m2/repository/com/twitter/chill-java/0.9.5/chill-java-0.9.5.jar:/home/jenkins/.m2/repository/org/slf4j/jul-to-slf4j/1.7.30/jul-to-slf4j-1.7.30.jar:/home/jenkins/.m2/repository/org/json4s/json4s-scalap_2.12/3.7.0-M5/json4s-scalap_2.12-3.7.0-M5.jar:/home/jenkins/.m2/repository/commons-net/commons-net/3.1/commons-net-3.1.jar:/home/jenkins/workspace/spark-master-test-k8s/common/unsafe/target/spark-unsafe_2.12-3.2.0-SNAPSHOT.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpclient/4.5.13/httpclient-4.5.13.jar:/home/jenkins/workspace/spark-master-test-k8s/common/tags/target/spark-tags_2.12-3.2.0-SNAPSHOT.jar:/home/jenkins/workspace/spark-master-test-k8s/streaming/target/spark-streaming_2.12-3.2.0-SNAPSHOT.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-ie-driver/3.141.59/selenium-ie-driver-3.141.59.jar:/home/jenkins/.m2/repository/org/lz4/lz4-java/1.7.1/lz4-java-1.7.1.jar:/home/jenkins/.m2/repository/org/json4s/json4s-jackson_2.12/3.7.0-M5/json4s-jackson_2.12-3.7.0-M5.jar:/home/jenkins/.m2/repository/org/scalatestplus/scalatestplus-scalacheck_2.12/3.1.0.0-RC2/scalatestplus-scalacheck_2.12-3.1.0.0-RC2.jar:/home/jenkins/.m2/repository/commons-lang/commons-lang/2.6/commons-lang-2.6.jar:/home/jenkins/.m2/repository/joda-time/joda-time/2.10.5/joda-time-2.10.5.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-wordspec_2.12/3.2.3/scalatest-wordspec_2.12-3.2.3.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-graphite/4.1.1/metrics-graphite-4.1.1.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-edge-driver/3.141.59/selenium-edge-driver-3.141.59.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-exec/1.3/commons-exec-1.3.jar:/home/jenkins/.m2/repository/javax/activation/activation/1.1.1/activation-1.1.1.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.12.2/jackson-databind-2.12.2.jar:/home/jenkins/.m2/repository/com/google/guava/guava/14.0.1/guava-14.0.1.jar:/home/jenkins/.m2/repository/org/apache/httpcomponents/httpcore/4.4.12/httpcore-4.4.12.jar:/home/jenkins/.m2/repository/org/scalactic/scalactic_2.12/3.2.3/scalactic_2.12-3.2.3.jar:/home/jenkins/.m2/repository/org/apache/avro/avro/1.10.2/avro-1.10.2.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util-ajax/9.4.37.v20210219/jetty-util-ajax-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/org/apache/ivy/ivy/2.4.0/ivy-2.4.0.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/core/jackson-annotations/2.12.2/jackson-annotations-2.12.2.jar:/home/jenkins/.m2/repository/com/fasterxml/jackson/dataformat/jackson-dataformat-cbor/2.12.2/jackson-dataformat-cbor-2.12.2.jar:/home/jenkins/.m2/repository/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar:/home/jenkins/.m2/repository/com/amazonaws/aws-java-sdk-core/1.11.844/aws-java-sdk-core-1.11.844.jar:/home/jenkins/.m2/repository/com/twitter/chill_2.12/0.9.5/chill_2.12-0.9.5.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-math3/3.4.1/commons-math3-3.4.1.jar:/home/jenkins/.m2/repository/net/razorvine/pyrolite/4.30/pyrolite-4.30.jar:/home/jenkins/.m2/repository/jakarta/validation/jakarta.validation-api/2.0.2/jakarta.validation-api-2.0.2.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-core_2.12/3.2.3/scalatest-core_2.12-3.2.3.jar:/home/jenkins/.m2/repository/org/tukaani/xz/1.8/xz-1.8.jar:/home/jenkins/.m2/repository/org/glassfish/jersey/inject/jersey-hk2/2.30/jersey-hk2-2.30.jar:/home/jenkins/workspace/spark-master-test-k8s/launcher/target/spark-launcher_2.12-3.2.0-SNAPSHOT.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-plus/9.4.37.v20210219/jetty-plus-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-servlets/9.4.37.v20210219/jetty-servlets-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/org/json4s/json4s-ast_2.12/3.7.0-M5/json4s-ast_2.12-3.7.0-M5.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-client-api/3.2.2/hadoop-client-api-3.2.2.jar:/home/jenkins/.m2/repository/com/amazonaws/aws-java-sdk-s3/1.11.844/aws-java-sdk-s3-1.11.844.jar:/home/jenkins/.m2/repository/org/glassfish/jersey/core/jersey-common/2.30/jersey-common-2.30.jar:/home/jenkins/.m2/repository/oro/oro/2.0.8/oro-2.0.8.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-http/9.4.37.v20210219/jetty-http-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/org/scala-lang/scala-reflect/2.12.10/scala-reflect-2.12.10.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-continuation/9.4.37.v20210219/jetty-continuation-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/log4j/log4j/1.2.17/log4j-1.2.17.jar:/home/jenkins/.m2/repository/com/novocode/junit-interface/0.11/junit-interface-0.11.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-framework/2.13.0/curator-framework-2.13.0.jar:/home/jenkins/.m2/repository/org/apache/xbean/xbean-asm7-shaded/4.15/xbean-asm7-shaded-4.15.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-client/9.4.37.v20210219/jetty-client-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-core/4.1.1/metrics-core-4.1.1.jar:/home/jenkins/.m2/repository/junit/junit/4.13.1/junit-4.13.1.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-lang3/3.11/commons-lang3-3.11.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-json/4.1.1/metrics-json-4.1.1.jar:/home/jenkins/.m2/repository/org/glassfish/hk2/hk2-api/2.6.1/hk2-api-2.6.1.jar:/home/jenkins/.m2/repository/org/roaringbitmap/shims/0.9.0/shims-0.9.0.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-chrome-driver/3.141.59/selenium-chrome-driver-3.141.59.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-opera-driver/3.141.59/selenium-opera-driver-3.141.59.jar:/home/jenkins/.m2/repository/jakarta/annotation/jakarta.annotation-api/1.3.5/jakarta.annotation-api-1.3.5.jar:/home/jenkins/workspace/spark-master-test-k8s/core/target/spark-core_2.12-3.2.0-SNAPSHOT.jar:/home/jenkins/.m2/repository/org/glassfish/hk2/osgi-resource-locator/1.0.3/osgi-resource-locator-1.0.3.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-servlet/9.4.37.v20210219/jetty-servlet-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/net/bytebuddy/byte-buddy-agent/1.10.13/byte-buddy-agent-1.10.13.jar:/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-client-runtime/3.2.2/hadoop-client-runtime-3.2.2.jar:/home/jenkins/.m2/repository/org/apache/yetus/audience-annotations/0.5.0/audience-annotations-0.5.0.jar:/home/jenkins/.m2/repository/org/apache/avro/avro-ipc/1.10.2/avro-ipc-1.10.2.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-jvm/4.1.1/metrics-jvm-4.1.1.jar:/home/jenkins/.m2/repository/org/scalatestplus/scalatestplus-selenium_2.12/1.0.0-SNAP5/scalatestplus-selenium_2.12-1.0.0-SNAP5.jar:/home/jenkins/.m2/repository/org/scala-sbt/test-interface/1.0/test-interface-1.0.jar:/home/jenkins/.m2/repository/com/squareup/okio/okio/1.14.0/okio-1.14.0.jar:/home/jenkins/.m2/repository/com/github/luben/zstd-jni/1.4.9-1/zstd-jni-1.4.9-1.jar:/home/jenkins/.m2/repository/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar:/home/jenkins/.m2/repository/com/amazonaws/aws-java-sdk-dynamodb/1.11.844/aws-java-sdk-dynamodb-1.11.844.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-support/3.141.59/selenium-support-3.141.59.jar:/home/jenkins/.m2/repository/org/scala-lang/modules/scala-xml_2.12/1.2.0/scala-xml_2.12-1.2.0.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-recipes/2.13.0/curator-recipes-2.13.0.jar:/home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl/target/spark-streaming-kinesis-asl_2.12-3.2.0-SNAPSHOT.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-security/9.4.37.v20210219/jetty-security-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/com/ning/compress-lzf/1.0.3/compress-lzf-1.0.3.jar:/home/jenkins/.m2/repository/org/slf4j/slf4j-api/1.7.30/slf4j-api-1.7.30.jar:/home/jenkins/.m2/repository/org/glassfish/jersey/containers/jersey-container-servlet/2.30/jersey-container-servlet-2.30.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-remote-driver/3.141.59/selenium-remote-driver-3.141.59.jar:/home/jenkins/.m2/repository/org/apache/curator/curator-client/2.13.0/curator-client-2.13.0.jar:/home/jenkins/workspace/spark-master-test-k8s/common/network-common/target/spark-network-common_2.12-3.2.0-SNAPSHOT.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-crypto/1.1.0/commons-crypto-1.1.0.jar:/home/jenkins/.m2/repository/org/javassist/javassist/3.25.0-GA/javassist-3.25.0-GA.jar:/home/jenkins/.m2/repository/org/glassfish/jersey/containers/jersey-container-servlet-core/2.30/jersey-container-servlet-core-2.30.jar:/home/jenkins/.m2/repository/com/amazonaws/aws-java-sdk-kinesis/1.11.844/aws-java-sdk-kinesis-1.11.844.jar:/home/jenkins/workspace/spark-master-test-k8s/common/kvstore/target/spark-kvstore_2.12-3.2.0-SNAPSHOT.jar:/home/jenkins/.m2/repository/com/amazonaws/aws-java-sdk-cloudwatch/1.11.844/aws-java-sdk-cloudwatch-1.11.844.jar:/home/jenkins/.m2/repository/com/amazonaws/aws-java-sdk-sts/1.11.844/aws-java-sdk-sts-1.11.844.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-server/9.4.37.v20210219/jetty-server-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-flatspec_2.12/3.2.3/scalatest-flatspec_2.12-3.2.3.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-refspec_2.12/3.2.3/scalatest-refspec_2.12-3.2.3.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-io/9.4.37.v20210219/jetty-io-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-api/3.141.59/selenium-api-3.141.59.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-java/3.141.59/selenium-java-3.141.59.jar:/home/jenkins/.m2/repository/org/mockito/mockito-core/3.4.6/mockito-core-3.4.6.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-xml/9.4.37.v20210219/jetty-xml-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/org/xerial/snappy/snappy-java/1.1.8.2/snappy-java-1.1.8.2.jar:/home/jenkins/.m2/repository/org/scalacheck/scalacheck_2.12/1.14.2/scalacheck_2.12-1.14.2.jar:/home/jenkins/.m2/repository/org/roaringbitmap/RoaringBitmap/0.9.0/RoaringBitmap-0.9.0.jar:/home/jenkins/.m2/repository/org/slf4j/jcl-over-slf4j/1.7.30/jcl-over-slf4j-1.7.30.jar:/home/jenkins/.m2/repository/org/spark-project/spark/unused/1.0.0/unused-1.0.0.jar:/home/jenkins/.m2/repository/io/dropwizard/metrics/metrics-jmx/4.1.1/metrics-jmx-4.1.1.jar:/home/jenkins/.m2/repository/org/glassfish/jersey/core/jersey-server/2.30/jersey-server-2.30.jar:/home/jenkins/.m2/repository/org/glassfish/jersey/core/jersey-client/2.30/jersey-client-2.30.jar:/home/jenkins/.m2/repository/commons-collections/commons-collections/3.2.2/commons-collections-3.2.2.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-compatible/3.2.3/scalatest-compatible-3.2.3.jar:/home/jenkins/.m2/repository/org/eclipse/jetty/jetty-util/9.4.37.v20210219/jetty-util-9.4.37.v20210219.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-propspec_2.12/3.2.3/scalatest-propspec_2.12-3.2.3.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-featurespec_2.12/3.2.3/scalatest-featurespec_2.12-3.2.3.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-firefox-driver/3.141.59/selenium-firefox-driver-3.141.59.jar:/home/jenkins/.m2/repository/commons-logging/commons-logging/1.1.3/commons-logging-1.1.3.jar:/home/jenkins/.m2/repository/net/sf/py4j/py4j/0.10.9.2/py4j-0.10.9.2.jar:/home/jenkins/.m2/repository/org/seleniumhq/selenium/selenium-safari-driver/3.141.59/selenium-safari-driver-3.141.59.jar:/home/jenkins/.m2/repository/software/amazon/ion/ion-java/1.0.2/ion-java-1.0.2.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-shouldmatchers_2.12/3.2.3/scalatest-shouldmatchers_2.12-3.2.3.jar:/home/jenkins/.m2/repository/org/apache/commons/commons-compress/1.20/commons-compress-1.20.jar:/home/jenkins/.m2/repository/org/scalatest/scalatest-matchers-core_2.12/3.2.3/scalatest-matchers-core_2.12-3.2.3.jar
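Editor's note: the block above is a single classpath line emitted by the build-classpath goal. A minimal sketch for regenerating it, assuming Spark's build/mvn wrapper and the module path shown in this log:

    # Regenerate this module's test classpath into a file;
    # -Dmdep.outputFile is a standard maven-dependency-plugin parameter.
    ./build/mvn -pl external/kinesis-asl-assembly -am \
      dependency:build-classpath -Dmdep.outputFile=classpath.txt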
[INFO] 
[INFO] --- scala-maven-plugin:4.3.0:testCompile (scala-test-compile-first) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] compile in 0.0 s
[INFO] No sources to compile
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M5:test (default-test) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- maven-surefire-plugin:3.0.0-M5:test (test) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Tests are skipped.
[INFO] 
[INFO] --- scalatest-maven-plugin:2.0.0:test (test) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Tests are skipped.
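Editor's note: the repeated "Tests are skipped." lines show that the surefire and scalatest executions were disabled for this packaging run. A minimal sketch of such an invocation, assuming the standard Maven property:

    # Compile and package without executing Java or Scala tests.
    ./build/mvn -DskipTests package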
[INFO] 
[INFO] --- maven-jar-plugin:3.1.2:test-jar (prepare-test-jar) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Building jar: /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/target/spark-streaming-kinesis-asl-assembly_2.12-3.2.0-SNAPSHOT-tests.jar
[INFO] 
[INFO] --- maven-jar-plugin:3.1.2:jar (default-jar) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Building jar: /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/target/spark-streaming-kinesis-asl-assembly_2.12-3.2.0-SNAPSHOT.jar
[INFO] 
[INFO] --- maven-site-plugin:3.5.1:attach-descriptor (attach-descriptor) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] 
[INFO] --- maven-shade-plugin:3.2.1:shade (default) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Including org.apache.spark:spark-streaming-kinesis-asl_2.12:jar:3.2.0-SNAPSHOT in the shaded jar.
[INFO] Including com.amazonaws:amazon-kinesis-client:jar:1.14.0 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-dynamodb:jar:1.11.844 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-s3:jar:1.11.844 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-kms:jar:1.11.844 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-kinesis:jar:1.11.844 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-cloudwatch:jar:1.11.844 in the shaded jar.
[INFO] Including org.apache.commons:commons-lang3:jar:3.11 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-sts:jar:1.11.844 in the shaded jar.
[INFO] Including com.amazonaws:aws-java-sdk-core:jar:1.11.844 in the shaded jar.
[INFO] Including org.apache.httpcomponents:httpclient:jar:4.5.13 in the shaded jar.
[INFO] Including org.apache.httpcomponents:httpcore:jar:4.4.12 in the shaded jar.
[INFO] Including software.amazon.ion:ion-java:jar:1.0.2 in the shaded jar.
[INFO] Including joda-time:joda-time:jar:2.10.5 in the shaded jar.
[INFO] Including com.amazonaws:jmespath-java:jar:1.11.844 in the shaded jar.
[INFO] Including com.fasterxml.jackson.dataformat:jackson-dataformat-cbor:jar:2.12.2 in the shaded jar.
[INFO] Including org.apache.spark:spark-tags_2.12:jar:3.2.0-SNAPSHOT in the shaded jar.
[INFO] Including javax.activation:activation:jar:1.1.1 in the shaded jar.
[INFO] Including commons-codec:commons-codec:jar:1.15 in the shaded jar.
[INFO] Including org.scala-lang:scala-library:jar:2.12.10 in the shaded jar.
[INFO] Including com.fasterxml.jackson.core:jackson-core:jar:2.12.2 in the shaded jar.
[INFO] Including com.google.protobuf:protobuf-java:jar:2.6.1 in the shaded jar.
[INFO] Including org.apache.hadoop:hadoop-client-runtime:jar:3.2.2 in the shaded jar.
[INFO] Including org.apache.htrace:htrace-core4:jar:4.1.0-incubating in the shaded jar.
[INFO] Including commons-logging:commons-logging:jar:1.1.3 in the shaded jar.
[INFO] Including com.google.code.findbugs:jsr305:jar:3.0.0 in the shaded jar.
[INFO] Including org.spark-project.spark:unused:jar:1.0.0 in the shaded jar.
[WARNING] Discovered module-info.class. Shading will break its strong encapsulation.
[WARNING] Discovered module-info.class. Shading will break its strong encapsulation.
[WARNING] unused-1.0.0.jar, spark-tags_2.12-3.2.0-SNAPSHOT.jar, spark-streaming-kinesis-asl_2.12-3.2.0-SNAPSHOT.jar define 1 overlapping classes: 
[WARNING]   - org.apache.spark.unused.UnusedStubClass
[WARNING] maven-shade-plugin has detected that some class files are
[WARNING] present in two or more JARs. When this happens, only one
[WARNING] single version of the class is copied to the uber jar.
[WARNING] Usually this is not harmful and you can skip these warnings,
[WARNING] otherwise try to manually exclude artifacts based on
[WARNING] mvn dependency:tree -Ddetail=true and the above output.
[WARNING] See http://maven.apache.org/plugins/maven-shade-plugin/
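Editor's note: the only overlap reported above is org.apache.spark.unused.UnusedStubClass, which is harmless by design. To chase down a real overlap, the plugin's message suggests inspecting the dependency tree; quoting its suggested flag verbatim (newer maven-dependency-plugin releases spell this option -Dverbose):

    # Show which artifacts contribute duplicate or conflicting classes.
    ./build/mvn -pl external/kinesis-asl-assembly dependency:tree -Ddetail=true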
[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/target/spark-streaming-kinesis-asl-assembly_2.12-3.2.0-SNAPSHOT.jar with /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/target/spark-streaming-kinesis-asl-assembly_2.12-3.2.0-SNAPSHOT-shaded.jar
[INFO] 
[INFO] --- maven-source-plugin:3.1.0:jar-no-fork (create-source-jar) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Building jar: /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/target/spark-streaming-kinesis-asl-assembly_2.12-3.2.0-SNAPSHOT-sources.jar
[INFO] 
[INFO] --- maven-source-plugin:3.1.0:test-jar-no-fork (create-source-jar) @ spark-streaming-kinesis-asl-assembly_2.12 ---
[INFO] Building jar: /home/jenkins/workspace/spark-master-test-k8s/external/kinesis-asl-assembly/target/spark-streaming-kinesis-asl-assembly_2.12-3.2.0-SNAPSHOT-test-sources.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Spark Project Parent POM 3.2.0-SNAPSHOT:
[INFO] 
[INFO] Spark Project Parent POM ........................... SUCCESS [  3.309 s]
[INFO] Spark Project Tags ................................. SUCCESS [  7.407 s]
[INFO] Spark Project Sketch ............................... SUCCESS [  7.439 s]
[INFO] Spark Project Local DB ............................. SUCCESS [  2.225 s]
[INFO] Spark Project Networking ........................... SUCCESS [  3.804 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [  1.675 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [  8.992 s]
[INFO] Spark Project Launcher ............................. SUCCESS [  1.327 s]
[INFO] Spark Project Core ................................. SUCCESS [02:27 min]
[INFO] Spark Project ML Local Library ..................... SUCCESS [ 28.407 s]
[INFO] Spark Project GraphX ............................... SUCCESS [ 36.529 s]
[INFO] Spark Project Streaming ............................ SUCCESS [01:00 min]
[INFO] Spark Project Catalyst ............................. SUCCESS [02:43 min]
[INFO] Spark Project SQL .................................. SUCCESS [05:51 min]
[INFO] Spark Project ML Library ........................... SUCCESS [03:35 min]
[INFO] Spark Project Tools ................................ SUCCESS [ 10.927 s]
[INFO] Spark Project Hive ................................. SUCCESS [02:06 min]
[INFO] Spark Project REPL ................................. SUCCESS [ 29.490 s]
[INFO] Spark Project Kubernetes ........................... SUCCESS [ 54.628 s]
[INFO] Spark Project Hive Thrift Server ................... SUCCESS [ 59.932 s]
[INFO] Spark Project Assembly ............................. SUCCESS [  5.639 s]
[INFO] Kafka 0.10+ Token Provider for Streaming ........... SUCCESS [ 38.310 s]
[INFO] Spark Integration for Kafka 0.10 ................... SUCCESS [01:04 min]
[INFO] Kafka 0.10+ Source for Structured Streaming ........ SUCCESS [06:03 min]
[INFO] Spark Kinesis Integration .......................... SUCCESS [01:37 min]
[INFO] Spark Project Examples ............................. SUCCESS [05:06 min]
[INFO] Spark Integration for Kafka 0.10 Assembly .......... SUCCESS [ 12.851 s]
[INFO] Spark Avro ......................................... SUCCESS [03:19 min]
[INFO] Spark Project Kinesis Assembly ..................... SUCCESS [ 17.144 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  40:28 min
[INFO] Finished at: 2021-04-06T12:00:55-07:00
[INFO] ------------------------------------------------------------------------
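Editor's note: the xtrace output that follows is consistent with Spark's dev/make-distribution.sh staging a binary distribution under dist/. A hypothetical invocation matching the profile flags echoed below, and the R-but-not-Python packaging seen later in this log:

    # Sketch only: profiles copied from the 'Build flags:' echo below;
    # --r builds the SparkR source package, and omitting --pip matches
    # the "Skipping building python distribution package" line.
    ./dev/make-distribution.sh --r -Pkubernetes -Pkinesis-asl -Phive -Phive-thriftserver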
+ rm -rf /home/jenkins/workspace/spark-master-test-k8s/dist
+ mkdir -p /home/jenkins/workspace/spark-master-test-k8s/dist/jars
+ echo 'Spark 3.2.0-SNAPSHOT (git revision 4b5fc1da75) built for Hadoop 3.2.2'
+ echo 'Build flags: -DzincPort=3519' -Pkubernetes -Pkinesis-asl -Phive -Phive-thriftserver
+ cp /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/activation-1.1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/aircompressor-0.16.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/algebra_2.12-2.0.0-M2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/annotations-17.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/antlr4-runtime-4.8-1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/antlr-runtime-3.5.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/aopalliance-repackaged-2.6.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/arpack_combined_all-0.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/arrow-format-2.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/arrow-memory-core-2.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/arrow-memory-netty-2.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/arrow-vector-2.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/audience-annotations-0.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/automaton-1.11-8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/avro-1.10.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/avro-ipc-1.10.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/avro-mapred-1.10.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/bonecp-0.8.0.RELEASE.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/breeze_2.12-1.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/breeze-macros_2.12-1.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/cats-kernel_2.12-2.0.0-M4.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/chill_2.12-0.9.5.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/chill-java-0.9.5.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-cli-1.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-codec-1.15.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-collections-3.2.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-compiler-3.0.16.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-compress-1.20.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-crypto-1.1.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-dbcp-1.4.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-httpclient-3.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-io-2.8.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-lang-2.6.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-lang3-3.11.jar 
/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-logging-1.1.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-math3-3.4.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-net-3.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-pool-1.5.4.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/commons-text-1.6.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/compress-lzf-1.0.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/core-1.1.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/curator-client-2.13.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/curator-framework-2.13.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/curator-recipes-2.13.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/datanucleus-api-jdo-4.2.4.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/datanucleus-core-4.1.17.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/datanucleus-rdbms-4.1.19.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/derby-10.14.2.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/dropwizard-metrics-hadoop-metrics2-reporter-0.1.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/flatbuffers-java-1.9.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/generex-1.0.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/gson-2.2.4.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/guava-14.0.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hadoop-client-api-3.2.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hadoop-client-runtime-3.2.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/HikariCP-2.5.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-beeline-2.3.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-cli-2.3.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-common-2.3.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-exec-2.3.8-core.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-jdbc-2.3.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-llap-common-2.3.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-metastore-2.3.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-serde-2.3.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-service-rpc-3.1.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-shims-0.23-2.3.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-shims-2.3.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-shims-common-2.3.8.jar 
/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-shims-scheduler-2.3.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-storage-api-2.7.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hive-vector-code-gen-2.3.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hk2-api-2.6.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hk2-locator-2.6.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/hk2-utils-2.6.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/htrace-core4-4.1.0-incubating.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/httpclient-4.5.13.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/httpcore-4.4.12.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/istack-commons-runtime-3.0.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/ivy-2.4.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-annotations-2.12.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-core-2.12.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-core-asl-1.9.13.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-databind-2.12.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-dataformat-yaml-2.12.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-datatype-jsr310-2.11.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-mapper-asl-1.9.13.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-module-jaxb-annotations-2.12.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jackson-module-scala_2.12-2.12.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jakarta.activation-api-1.2.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jakarta.annotation-api-1.3.5.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jakarta.inject-2.6.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jakarta.servlet-api-4.0.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jakarta.validation-api-2.0.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jakarta.ws.rs-api-2.1.6.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jakarta.xml.bind-api-2.3.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/janino-3.0.16.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/javassist-3.25.0-GA.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/javax.jdo-3.2.0-m3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/javolution-5.5.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jaxb-api-2.2.11.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jaxb-runtime-2.3.2.jar 
/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jcl-over-slf4j-1.7.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jdo-api-3.0.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jersey-client-2.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jersey-common-2.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jersey-container-servlet-2.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jersey-container-servlet-core-2.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jersey-hk2-2.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jersey-media-jaxb-2.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jersey-server-2.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/JLargeArrays-1.5.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jline-2.14.6.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/joda-time-2.10.5.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jodd-core-3.5.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jpam-1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/json-1.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/json4s-ast_2.12-3.7.0-M5.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/json4s-core_2.12-3.7.0-M5.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/json4s-jackson_2.12-3.7.0-M5.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/json4s-scalap_2.12-3.7.0-M5.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jsr305-3.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jta-1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/JTransforms-3.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/jul-to-slf4j-1.7.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kryo-shaded-4.0.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-client-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-admissionregistration-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-apiextensions-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-apps-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-autoscaling-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-batch-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-certificates-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-common-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-coordination-4.13.2.jar 
/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-core-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-discovery-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-events-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-extensions-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-metrics-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-networking-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-node-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-policy-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-rbac-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-scheduling-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-settings-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/kubernetes-model-storageclass-4.13.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/leveldbjni-all-1.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/libfb303-0.9.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/libthrift-0.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/log4j-1.2.17.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/logging-interceptor-3.12.12.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/lz4-java-1.7.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/machinist_2.12-0.6.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/macro-compat_2.12-1.1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/metrics-core-4.1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/metrics-graphite-4.1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/metrics-jmx-4.1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/metrics-json-4.1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/metrics-jvm-4.1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/minlog-1.3.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/netty-all-4.1.51.Final.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/objenesis-2.6.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/okhttp-3.12.12.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/okio-1.14.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/opencsv-2.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/orc-core-1.6.7.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/orc-mapreduce-1.6.7.jar 
/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/orc-shims-1.6.7.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/oro-2.0.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/osgi-resource-locator-1.0.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/paranamer-2.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/parquet-column-1.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/parquet-common-1.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/parquet-encoding-1.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/parquet-format-structures-1.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/parquet-hadoop-1.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/parquet-jackson-1.12.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/protobuf-java-2.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/py4j-0.10.9.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/pyrolite-4.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/RoaringBitmap-0.9.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/scala-collection-compat_2.12-2.1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/scala-compiler-2.12.10.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/scala-library-2.12.10.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/scala-parser-combinators_2.12-1.1.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/scala-reflect-2.12.10.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/scala-xml_2.12-1.2.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/shapeless_2.12-2.3.3.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/shims-0.9.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/slf4j-api-1.7.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/slf4j-log4j12-1.7.30.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/snakeyaml-1.27.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/snappy-java-1.1.8.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-catalyst_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-core_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-graphx_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-hive_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-hive-thriftserver_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-kubernetes_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-kvstore_2.12-3.2.0-SNAPSHOT.jar 
/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-launcher_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-mllib_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-mllib-local_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-network-common_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-network-shuffle_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-repl_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-sketch_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-sql_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-streaming_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-tags_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-tags_2.12-3.2.0-SNAPSHOT-tests.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spark-unsafe_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spire_2.12-0.17.0-M1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spire-macros_2.12-0.17.0-M1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spire-platform_2.12-0.17.0-M1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/spire-util_2.12-0.17.0-M1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/ST4-4.0.4.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/stax-api-1.0.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/stream-2.9.6.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/super-csv-2.2.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/threeten-extra-1.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/transaction-api-1.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/univocity-parsers-2.9.1.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/velocity-1.5.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/xbean-asm7-shaded-4.15.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/xz-1.8.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/zjsonpatch-0.3.0.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/zookeeper-3.6.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/zookeeper-jute-3.6.2.jar /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars/zstd-jni-1.4.9-1.jar /home/jenkins/workspace/spark-master-test-k8s/dist/jars/
+ '[' -f '/home/jenkins/workspace/spark-master-test-k8s/common/network-yarn/target/scala*/spark-*-yarn-shuffle.jar' ']'
+ '[' -d /home/jenkins/workspace/spark-master-test-k8s/resource-managers/kubernetes/core/target/ ']'
+ mkdir -p /home/jenkins/workspace/spark-master-test-k8s/dist/kubernetes/
+ cp -a /home/jenkins/workspace/spark-master-test-k8s/resource-managers/kubernetes/docker/src/main/dockerfiles /home/jenkins/workspace/spark-master-test-k8s/dist/kubernetes/
+ cp -a /home/jenkins/workspace/spark-master-test-k8s/resource-managers/kubernetes/integration-tests/tests /home/jenkins/workspace/spark-master-test-k8s/dist/kubernetes/
+ mkdir -p /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars
+ cp /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/aircompressor-0.16.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/annotations-17.0.0.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/commons-codec-1.15.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/commons-lang3-3.11.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/hive-storage-api-2.7.2.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/istack-commons-runtime-3.0.8.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/jakarta.xml.bind-api-2.3.2.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/jaxb-runtime-2.3.2.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/orc-core-1.6.7.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/orc-mapreduce-1.6.7.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/orc-shims-1.6.7.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/scopt_2.12-3.7.1.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/spark-examples_2.12-3.2.0-SNAPSHOT.jar /home/jenkins/workspace/spark-master-test-k8s/examples/target/scala-2.12/jars/threeten-extra-1.5.0.jar /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/aircompressor-0.16.jar
+ name=aircompressor-0.16.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/aircompressor-0.16.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/aircompressor-0.16.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/annotations-17.0.0.jar
+ name=annotations-17.0.0.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/annotations-17.0.0.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/annotations-17.0.0.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/commons-codec-1.15.jar
+ name=commons-codec-1.15.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/commons-codec-1.15.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/commons-codec-1.15.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/commons-lang3-3.11.jar
+ name=commons-lang3-3.11.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/commons-lang3-3.11.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/commons-lang3-3.11.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/hive-storage-api-2.7.2.jar
+ name=hive-storage-api-2.7.2.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/hive-storage-api-2.7.2.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/hive-storage-api-2.7.2.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/istack-commons-runtime-3.0.8.jar
+ name=istack-commons-runtime-3.0.8.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/istack-commons-runtime-3.0.8.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/istack-commons-runtime-3.0.8.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/jakarta.xml.bind-api-2.3.2.jar
+ name=jakarta.xml.bind-api-2.3.2.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/jakarta.xml.bind-api-2.3.2.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/jakarta.xml.bind-api-2.3.2.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/jaxb-runtime-2.3.2.jar
+ name=jaxb-runtime-2.3.2.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/jaxb-runtime-2.3.2.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/jaxb-runtime-2.3.2.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/orc-core-1.6.7.jar
+ name=orc-core-1.6.7.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/orc-core-1.6.7.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/orc-core-1.6.7.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/orc-mapreduce-1.6.7.jar
+ name=orc-mapreduce-1.6.7.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/orc-mapreduce-1.6.7.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/orc-mapreduce-1.6.7.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/orc-shims-1.6.7.jar
+ name=orc-shims-1.6.7.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/orc-shims-1.6.7.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/orc-shims-1.6.7.jar
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/scopt_2.12-3.7.1.jar
+ name=scopt_2.12-3.7.1.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/scopt_2.12-3.7.1.jar ']'
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/spark-examples_2.12-3.2.0-SNAPSHOT.jar
+ name=spark-examples_2.12-3.2.0-SNAPSHOT.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/spark-examples_2.12-3.2.0-SNAPSHOT.jar ']'
+ for f in "$DISTDIR"/examples/jars/*
++ basename /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/threeten-extra-1.5.0.jar
+ name=threeten-extra-1.5.0.jar
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/dist/jars/threeten-extra-1.5.0.jar ']'
+ rm /home/jenkins/workspace/spark-master-test-k8s/dist/examples/jars/threeten-extra-1.5.0.jar
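The loop traced above is make-distribution.sh de-duplicating the examples jars: each file under dist/examples/jars is deleted whenever a jar of the same name already exists under dist/jars, which is why scopt_2.12-3.7.1.jar and spark-examples_2.12-3.2.0-SNAPSHOT.jar are kept (their '[ -f ... ]' tests fail, so no rm follows). A minimal Java sketch of the same de-duplication, with a placeholder dist/ path standing in for the job's $DISTDIR:

import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the jar de-duplication in the shell trace above: delete
// dist/examples/jars/<name> whenever dist/jars/<name> exists.
// "dist" is a placeholder for the $DISTDIR the script uses.
public class DedupExampleJars {
    public static void main(String[] args) throws IOException {
        Path dist = Paths.get("dist");
        Path jars = dist.resolve("jars");
        try (DirectoryStream<Path> examples =
                 Files.newDirectoryStream(dist.resolve("examples").resolve("jars"))) {
            for (Path f : examples) {
                // Same check as '[ -f "$DISTDIR"/jars/$name ]' in the trace.
                if (Files.isRegularFile(jars.resolve(f.getFileName()))) {
                    Files.delete(f); // mirrors 'rm "$f"'
                }
            }
        }
    }
}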
+ mkdir -p /home/jenkins/workspace/spark-master-test-k8s/dist/examples/src/main
+ cp -r /home/jenkins/workspace/spark-master-test-k8s/examples/src/main /home/jenkins/workspace/spark-master-test-k8s/dist/examples/src/
+ '[' -e /home/jenkins/workspace/spark-master-test-k8s/LICENSE-binary ']'
+ cp /home/jenkins/workspace/spark-master-test-k8s/LICENSE-binary /home/jenkins/workspace/spark-master-test-k8s/dist/LICENSE
+ cp -r /home/jenkins/workspace/spark-master-test-k8s/licenses-binary /home/jenkins/workspace/spark-master-test-k8s/dist/licenses
+ cp /home/jenkins/workspace/spark-master-test-k8s/NOTICE-binary /home/jenkins/workspace/spark-master-test-k8s/dist/NOTICE
+ '[' -e /home/jenkins/workspace/spark-master-test-k8s/CHANGES.txt ']'
+ cp -r /home/jenkins/workspace/spark-master-test-k8s/data /home/jenkins/workspace/spark-master-test-k8s/dist
+ '[' false == true ']'
+ echo 'Skipping building python distribution package'
Skipping building python distribution package
+ '[' true == true ']'
+ echo 'Building R source package'
Building R source package
++ grep Version /home/jenkins/workspace/spark-master-test-k8s/R/pkg/DESCRIPTION
++ awk '{print $NF}'
+ R_PACKAGE_VERSION=3.2.0
+ pushd /home/jenkins/workspace/spark-master-test-k8s/R
+ NO_TESTS=1
+ /home/jenkins/workspace/spark-master-test-k8s/R/check-cran.sh
Using R_SCRIPT_PATH = /usr/bin
++++ dirname /home/jenkins/workspace/spark-master-test-k8s/R/install-dev.sh
+++ cd /home/jenkins/workspace/spark-master-test-k8s/R
+++ pwd
++ FWDIR=/home/jenkins/workspace/spark-master-test-k8s/R
++ LIB_DIR=/home/jenkins/workspace/spark-master-test-k8s/R/lib
++ mkdir -p /home/jenkins/workspace/spark-master-test-k8s/R/lib
++ pushd /home/jenkins/workspace/spark-master-test-k8s/R
++ . /home/jenkins/workspace/spark-master-test-k8s/R/find-r.sh
+++ '[' -z /usr/bin ']'
++ . /home/jenkins/workspace/spark-master-test-k8s/R/create-rd.sh
+++ set -o pipefail
+++ set -e
+++++ dirname /home/jenkins/workspace/spark-master-test-k8s/R/create-rd.sh
++++ cd /home/jenkins/workspace/spark-master-test-k8s/R
++++ pwd
+++ FWDIR=/home/jenkins/workspace/spark-master-test-k8s/R
+++ pushd /home/jenkins/workspace/spark-master-test-k8s/R
+++ . /home/jenkins/workspace/spark-master-test-k8s/R/find-r.sh
++++ '[' -z /usr/bin ']'
+++ /usr/bin/Rscript -e ' if(requireNamespace("devtools", quietly=TRUE)) { setwd("/home/jenkins/workspace/spark-master-test-k8s/R"); devtools::document(pkg="./pkg", roclets="rd") }'
Updating SparkR documentation
First time using roxygen2. Upgrading automatically...
Loading SparkR
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
Writing structType.Rd
Writing print.structType.Rd
Writing structField.Rd
Writing print.structField.Rd
Writing summarize.Rd
Writing alias.Rd
Writing arrange.Rd
Writing as.data.frame.Rd
Writing cache.Rd
Writing checkpoint.Rd
Writing coalesce.Rd
Writing collect.Rd
Writing columns.Rd
Writing coltypes.Rd
Writing count.Rd
Writing cov.Rd
Writing corr.Rd
Writing createOrReplaceTempView.Rd
Writing cube.Rd
Writing dapply.Rd
Writing dapplyCollect.Rd
Writing gapply.Rd
Writing gapplyCollect.Rd
Writing describe.Rd
Writing distinct.Rd
Writing drop.Rd
Writing dropDuplicates.Rd
Writing nafunctions.Rd
Writing dtypes.Rd
Writing explain.Rd
Writing except.Rd
Writing exceptAll.Rd
Writing filter.Rd
Writing first.Rd
Writing groupBy.Rd
Writing hint.Rd
Writing insertInto.Rd
Writing intersect.Rd
Writing intersectAll.Rd
Writing isLocal.Rd
Writing isStreaming.Rd
Writing limit.Rd
Writing localCheckpoint.Rd
Writing merge.Rd
Writing mutate.Rd
Writing orderBy.Rd
Writing persist.Rd
Writing printSchema.Rd
Writing registerTempTable-deprecated.Rd
Writing rename.Rd
Writing repartition.Rd
Writing repartitionByRange.Rd
Writing sample.Rd
Writing rollup.Rd
Writing sampleBy.Rd
Writing saveAsTable.Rd
Writing take.Rd
Writing write.df.Rd
Writing write.jdbc.Rd
Writing write.json.Rd
Writing write.orc.Rd
Writing write.parquet.Rd
Writing write.stream.Rd
Writing write.text.Rd
Writing schema.Rd
Writing select.Rd
Writing selectExpr.Rd
Writing showDF.Rd
Writing subset.Rd
Writing summary.Rd
Writing union.Rd
Writing unionAll.Rd
Writing unionByName.Rd
Writing unpersist.Rd
Writing with.Rd
Writing withColumn.Rd
Writing withWatermark.Rd
Writing randomSplit.Rd
Writing broadcast.Rd
Writing columnfunctions.Rd
Writing between.Rd
Writing cast.Rd
Writing endsWith.Rd
Writing startsWith.Rd
Writing column_nonaggregate_functions.Rd
Writing otherwise.Rd
Writing over.Rd
Writing eq_null_safe.Rd
Writing withField.Rd
Writing dropFields.Rd
Writing partitionBy.Rd
Writing rowsBetween.Rd
Writing rangeBetween.Rd
Writing windowPartitionBy.Rd
Writing windowOrderBy.Rd
Writing column_datetime_diff_functions.Rd
Writing column_aggregate_functions.Rd
Writing column_collection_functions.Rd
Writing column_ml_functions.Rd
Writing column_string_functions.Rd
Writing column_misc_functions.Rd
Writing avg.Rd
Writing column_math_functions.Rd
Writing column.Rd
Writing column_window_functions.Rd
Writing column_datetime_functions.Rd
Writing column_avro_functions.Rd
Writing last.Rd
Writing not.Rd
Writing fitted.Rd
Writing predict.Rd
Writing rbind.Rd
Writing spark.als.Rd
Writing spark.bisectingKmeans.Rd
Writing spark.fmClassifier.Rd
Writing spark.fmRegressor.Rd
Writing spark.gaussianMixture.Rd
Writing spark.gbt.Rd
Writing spark.glm.Rd
Writing spark.isoreg.Rd
Writing spark.kmeans.Rd
Writing spark.kstest.Rd
Writing spark.lda.Rd
Writing spark.logit.Rd
Writing spark.mlp.Rd
Writing spark.naiveBayes.Rd
Writing spark.decisionTree.Rd
Writing spark.randomForest.Rd
Writing spark.survreg.Rd
Writing spark.svmLinear.Rd
Writing spark.fpGrowth.Rd
Writing spark.prefixSpan.Rd
Writing spark.powerIterationClustering.Rd
Writing spark.lm.Rd
Writing write.ml.Rd
Writing awaitTermination.Rd
Writing isActive.Rd
Writing lastProgress.Rd
Writing queryName.Rd
Writing status.Rd
Writing stopQuery.Rd
Writing print.jobj.Rd
Writing show.Rd
Writing substr.Rd
Writing match.Rd
Writing GroupedData.Rd
Writing pivot.Rd
Writing SparkDataFrame.Rd
Writing storageLevel.Rd
Writing toJSON.Rd
Writing nrow.Rd
Writing ncol.Rd
Writing dim.Rd
Writing head.Rd
Writing join.Rd
Writing crossJoin.Rd
Writing attach.Rd
Writing str.Rd
Writing histogram.Rd
Writing getNumPartitions.Rd
Writing sparkR.conf.Rd
Writing sparkR.version.Rd
Writing createDataFrame.Rd
Writing read.json.Rd
Writing read.orc.Rd
Writing read.parquet.Rd
Writing read.text.Rd
Writing sql.Rd
Writing tableToDF.Rd
Writing read.df.Rd
Writing read.jdbc.Rd
Writing read.stream.Rd
Writing WindowSpec.Rd
Writing createExternalTable-deprecated.Rd
Writing createTable.Rd
Writing cacheTable.Rd
Writing uncacheTable.Rd
Writing clearCache.Rd
Writing dropTempTable-deprecated.Rd
Writing dropTempView.Rd
Writing tables.Rd
Writing tableNames.Rd
Writing currentDatabase.Rd
Writing setCurrentDatabase.Rd
Writing listDatabases.Rd
Writing listTables.Rd
Writing listColumns.Rd
Writing listFunctions.Rd
Writing recoverPartitions.Rd
Writing refreshTable.Rd
Writing refreshByPath.Rd
Writing spark.addFile.Rd
Writing spark.getSparkFilesRootDirectory.Rd
Writing spark.getSparkFiles.Rd
Writing spark.lapply.Rd
Writing setLogLevel.Rd
Writing setCheckpointDir.Rd
Writing unresolved_named_lambda_var.Rd
Writing create_lambda.Rd
Writing invoke_higher_order_function.Rd
Writing install.spark.Rd
Writing sparkR.callJMethod.Rd
Writing sparkR.callJStatic.Rd
Writing sparkR.newJObject.Rd
Writing LinearSVCModel-class.Rd
Writing LogisticRegressionModel-class.Rd
Writing MultilayerPerceptronClassificationModel-class.Rd
Writing NaiveBayesModel-class.Rd
Writing FMClassificationModel-class.Rd
Writing BisectingKMeansModel-class.Rd
Writing GaussianMixtureModel-class.Rd
Writing KMeansModel-class.Rd
Writing LDAModel-class.Rd
Writing PowerIterationClustering-class.Rd
Writing FPGrowthModel-class.Rd
Writing PrefixSpan-class.Rd
Writing ALSModel-class.Rd
Writing AFTSurvivalRegressionModel-class.Rd
Writing GeneralizedLinearRegressionModel-class.Rd
Writing IsotonicRegressionModel-class.Rd
Writing LinearRegressionModel-class.Rd
Writing FMRegressionModel-class.Rd
Writing glm.Rd
Writing KSTest-class.Rd
Writing GBTRegressionModel-class.Rd
Writing GBTClassificationModel-class.Rd
Writing RandomForestRegressionModel-class.Rd
Writing RandomForestClassificationModel-class.Rd
Writing DecisionTreeRegressionModel-class.Rd
Writing DecisionTreeClassificationModel-class.Rd
Writing read.ml.Rd
Writing sparkR.session.stop.Rd
Writing sparkR.init-deprecated.Rd
Writing sparkRSQL.init-deprecated.Rd
Writing sparkRHive.init-deprecated.Rd
Writing sparkR.session.Rd
Writing sparkR.uiWebUrl.Rd
Writing setJobGroup.Rd
Writing clearJobGroup.Rd
Writing cancelJobGroup.Rd
Writing setJobDescription.Rd
Writing setLocalProperty.Rd
Writing getLocalProperty.Rd
Writing crosstab.Rd
Writing freqItems.Rd
Writing approxQuantile.Rd
Writing StreamingQuery.Rd
Writing hashCode.Rd
++ /usr/bin/R CMD INSTALL --library=/home/jenkins/workspace/spark-master-test-k8s/R/lib /home/jenkins/workspace/spark-master-test-k8s/R/pkg/
* installing *source* package ‘SparkR’ ...
** using staged installation
** R
** inst
** byte-compile and prepare package for lazy loading
Creating a new generic function for ‘as.data.frame’ in package ‘SparkR’
Creating a new generic function for ‘colnames’ in package ‘SparkR’
Creating a new generic function for ‘colnames<-’ in package ‘SparkR’
Creating a new generic function for ‘cov’ in package ‘SparkR’
Creating a new generic function for ‘drop’ in package ‘SparkR’
Creating a new generic function for ‘na.omit’ in package ‘SparkR’
Creating a new generic function for ‘filter’ in package ‘SparkR’
Creating a new generic function for ‘intersect’ in package ‘SparkR’
Creating a new generic function for ‘sample’ in package ‘SparkR’
Creating a new generic function for ‘transform’ in package ‘SparkR’
Creating a new generic function for ‘subset’ in package ‘SparkR’
Creating a new generic function for ‘summary’ in package ‘SparkR’
Creating a new generic function for ‘union’ in package ‘SparkR’
Creating a new generic function for ‘endsWith’ in package ‘SparkR’
Creating a new generic function for ‘startsWith’ in package ‘SparkR’
Creating a new generic function for ‘lag’ in package ‘SparkR’
Creating a new generic function for ‘rank’ in package ‘SparkR’
Creating a new generic function for ‘sd’ in package ‘SparkR’
Creating a new generic function for ‘var’ in package ‘SparkR’
Creating a new generic function for ‘window’ in package ‘SparkR’
Creating a new generic function for ‘predict’ in package ‘SparkR’
Creating a new generic function for ‘rbind’ in package ‘SparkR’
Creating a generic function for ‘substr’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘%in%’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘lapply’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘Filter’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘nrow’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ncol’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘factorial’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘atan2’ from package ‘base’ in package ‘SparkR’
Creating a generic function for ‘ifelse’ from package ‘base’ in package ‘SparkR’
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (SparkR)
++ cd /home/jenkins/workspace/spark-master-test-k8s/R/lib
++ jar cfM /home/jenkins/workspace/spark-master-test-k8s/R/lib/sparkr.zip SparkR
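The 'jar cfM' step above packs the freshly installed SparkR library into sparkr.zip; the M flag tells jar not to add a META-INF/MANIFEST.MF, so the output is a plain zip of the SparkR directory. A hedged Java equivalent, assuming it is run from R/lib as the trace is:

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Sketch of 'jar cfM sparkr.zip SparkR': zip the SparkR tree with no manifest.
public class ZipSparkR {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get("SparkR"); // assumption: current dir is R/lib
        try (ZipOutputStream zip = new ZipOutputStream(
                 Files.newOutputStream(Paths.get("sparkr.zip")));
             Stream<Path> files = Files.walk(root)) {
            files.filter(Files::isRegularFile).forEach(p -> {
                try {
                    // Forward slashes keep entry names portable, as jar does.
                    zip.putNextEntry(new ZipEntry(p.toString().replace('\\', '/')));
                    Files.copy(p, zip);
                    zip.closeEntry();
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        }
    }
}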
++ popd
++ cd /home/jenkins/workspace/spark-master-test-k8s/R/..
++ pwd
+ SPARK_HOME=/home/jenkins/workspace/spark-master-test-k8s
+ . /home/jenkins/workspace/spark-master-test-k8s/bin/load-spark-env.sh
++ '[' -z /home/jenkins/workspace/spark-master-test-k8s ']'
++ SPARK_ENV_SH=spark-env.sh
++ '[' -z '' ']'
++ export SPARK_ENV_LOADED=1
++ SPARK_ENV_LOADED=1
++ export SPARK_CONF_DIR=/home/jenkins/workspace/spark-master-test-k8s/conf
++ SPARK_CONF_DIR=/home/jenkins/workspace/spark-master-test-k8s/conf
++ SPARK_ENV_SH=/home/jenkins/workspace/spark-master-test-k8s/conf/spark-env.sh
++ [[ -f /home/jenkins/workspace/spark-master-test-k8s/conf/spark-env.sh ]]
++ '[' -z '' ']'
++ SCALA_VERSION_1=2.13
++ SCALA_VERSION_2=2.12
++ ASSEMBLY_DIR_1=/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.13
++ ASSEMBLY_DIR_2=/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12
++ ENV_VARIABLE_DOC=https://spark.apache.org/docs/latest/configuration.html#environment-variables
++ [[ -d /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.13 ]]
++ [[ -d /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.13 ]]
++ export SPARK_SCALA_VERSION=2.12
++ SPARK_SCALA_VERSION=2.12
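load-spark-env.sh, traced above, chooses the Scala version by probing which assembly output directory exists; here only scala-2.12 is present, so SPARK_SCALA_VERSION=2.12 is exported. A small sketch of that probe, under the assumption (matching the script's behavior) that it refuses to guess when builds for both versions exist:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch of the SPARK_SCALA_VERSION probe: prefer whichever
// assembly/target/scala-* directory actually exists.
public class PickScalaVersion {
    public static void main(String[] args) {
        Path assembly = Paths.get("assembly", "target"); // placeholder for $SPARK_HOME
        boolean has213 = Files.isDirectory(assembly.resolve("scala-2.13"));
        boolean has212 = Files.isDirectory(assembly.resolve("scala-2.12"));
        if (has213 && has212) {
            // The real script asks the user to set SPARK_SCALA_VERSION instead.
            throw new IllegalStateException("builds for multiple Scala versions detected");
        }
        // In this log only scala-2.12 exists, hence SPARK_SCALA_VERSION=2.12.
        System.out.println("SPARK_SCALA_VERSION=" + (has213 ? "2.13" : "2.12"));
    }
}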
+ '[' -f /home/jenkins/workspace/spark-master-test-k8s/RELEASE ']'
+ SPARK_JARS_DIR=/home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars
+ '[' -d /home/jenkins/workspace/spark-master-test-k8s/assembly/target/scala-2.12/jars ']'
+ SPARK_HOME=/home/jenkins/workspace/spark-master-test-k8s
+ /usr/bin/R CMD build /home/jenkins/workspace/spark-master-test-k8s/R/pkg
* checking for file ‘/home/jenkins/workspace/spark-master-test-k8s/R/pkg/DESCRIPTION’ ... OK
* preparing ‘SparkR’:
* checking DESCRIPTION meta-information ... OK
* installing the package to build vignettes
* creating vignettes ... ERROR
--- re-building ‘sparkr-vignettes.Rmd’ using rmarkdown

Attaching package: 'SparkR'

The following objects are masked from 'package:stats':

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from 'package:base':

    as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
    rank, rbind, sample, startsWith, subset, summary, transform, union

Picked up _JAVA_OPTIONS: -XX:-UsePerfData 
Picked up _JAVA_OPTIONS: -XX:-UsePerfData 
21/04/06 12:01:46 WARN Utils: Your hostname, research-jenkins-worker-06 resolves to a loopback address: 127.0.1.1; using 172.17.0.1 instead (on interface docker0)
21/04/06 12:01:46 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
21/04/06 12:01:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

21/04/06 12:01:57 WARN Instrumentation: [2584848d] regParam is zero, which might cause numerical instability and overfitting.
21/04/06 12:01:57 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
21/04/06 12:01:57 WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
21/04/06 12:01:57 WARN LAPACK: Failed to load implementation from: com.github.fommil.netlib.NativeSystemLAPACK
21/04/06 12:01:57 WARN LAPACK: Failed to load implementation from: com.github.fommil.netlib.NativeRefLAPACK
21/04/06 12:02:02 WARN package: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.
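The truncation warning above names its own remedy, and the earlier log-level hint can be applied the same way. A hedged Java sketch of both settings on a throwaway local session (the app name and the 200-field value are illustrative, not from this job):

import org.apache.spark.sql.SparkSession;

// Sketch: raise spark.sql.debug.maxToStringFields (default 25) so plan
// strings are not truncated, and set the log level as the startup hint says.
public class MaxToStringFields {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .master("local[*]")
                .appName("max-to-string-fields-demo") // illustrative name
                .config("spark.sql.debug.maxToStringFields", "200")
                .getOrCreate();
        spark.sparkContext().setLogLevel("WARN"); // per the sc.setLogLevel hint
        spark.range(5).explain(); // plan no longer truncated at 25 fields
        spark.stop();
    }
}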
Warning in FUN(X[[i]], ...) :
  Use resid_ds instead of resid.ds as column name
Warning in FUN(X[[i]], ...) :
  Use ecog_ps instead of ecog.ps as column name
21/04/06 12:02:21 WARN Instrumentation: [45f2bfbd] regParam is zero, which might cause numerical instability and overfitting.
21/04/06 12:02:22 WARN Instrumentation: [b7fc9378] regParam is zero, which might cause numerical instability and overfitting.
21/04/06 12:02:23 WARN Instrumentation: [d0393769] regParam is zero, which might cause numerical instability and overfitting.
21/04/06 12:02:23 WARN Instrumentation: [d0393769] regParam is zero, which might cause numerical instability and overfitting.
21/04/06 12:02:23 WARN Instrumentation: [d0393769] regParam is zero, which might cause numerical instability and overfitting.
21/04/06 12:02:23 WARN Instrumentation: [d0393769] regParam is zero, which might cause numerical instability and overfitting.
21/04/06 12:02:23 WARN Instrumentation: [d0393769] regParam is zero, which might cause numerical instability and overfitting.
21/04/06 12:02:24 WARN Instrumentation: [c5a1b555] regParam is zero, which might cause numerical instability and overfitting.
21/04/06 12:02:43 WARN PrefixSpan: Input data is not cached.
21/04/06 12:02:44 WARN Instrumentation: [ee5f0e86] regParam is zero, which might cause numerical instability and overfitting.
21/04/06 12:02:45 ERROR Utils: Aborting task
java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:58)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:75)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:282)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:288)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:211)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
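This NoSuchMethodError is the well-known JDK 9+/JDK 8 ByteBuffer mismatch: since Java 9, ByteBuffer overrides rewind() with a covariant ByteBuffer return type, so classes compiled on a newer JDK without '--release 8' record the descriptor ByteBuffer.rewind()Ljava/nio/ByteBuffer; in their bytecode, which does not exist on the Java 8 runtime this worker runs (note the sun.reflect frames and Thread.java:748 above, both Java 8 signatures). A minimal reproduction sketch, not taken from this build:

import java.nio.ByteBuffer;

// Compiled on JDK 9+ without "--release 8", the rewind() call below is
// recorded as ByteBuffer.rewind()Ljava/nio/ByteBuffer;. Running that class
// file on a Java 8 JVM then throws java.lang.NoSuchMethodError, exactly as
// parquet's SnappyCompressor.reset does in the trace above.
public class RewindRepro {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8);
        buf.putInt(42);
        buf.rewind(); // JDK 8 only has the Buffer-returning rewind()
        System.out.println(buf.getInt()); // prints 42 when it runs at all
    }
}

Compiling with a JDK 11 'javac RewindRepro.java' and running under Java 8 reproduces the error; compiling with 'javac --release 8' (or running on JDK 9+) avoids it.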
21/04/06 12:02:45 ERROR FileFormatWriter: Job job_202104061202452615640632200498384_3881 aborted.
21/04/06 12:02:45 ERROR Executor: Exception in task 0.0 in stage 3881.0 (TID 878)
org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:431)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:298)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:211)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:58)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:75)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:282)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:288)
	... 9 more
21/04/06 12:02:45 WARN TaskSetManager: Lost task 0.0 in stage 3881.0 (TID 878) (172.17.0.1 executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:431)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:298)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:211)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:58)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:75)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:282)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:288)
	... 9 more

21/04/06 12:02:45 ERROR TaskSetManager: Task 0 in stage 3881.0 failed 1 times; aborting job
21/04/06 12:02:45 ERROR FileFormatWriter: Aborting job 5ddccd1f-17de-49c9-a9df-afcd2227e3e2.
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3881.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3881.0 (TID 878) (172.17.0.1 executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:431)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:298)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:211)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:58)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:75)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:282)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:288)
	... 9 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2262)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2211)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2210)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2210)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1083)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1083)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1083)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2449)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2391)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2380)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:872)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2220)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:201)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:186)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:179)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:217)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:214)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:175)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:133)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:132)
	at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:774)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:411)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293)
	at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:874)
	at org.apache.spark.ml.feature.RFormulaModel$RFormulaModelWriter.saveImpl(RFormula.scala:434)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$5(Pipeline.scala:257)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4(Pipeline.scala:257)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4$adapted(Pipeline.scala:254)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1(Pipeline.scala:254)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1$adapted(Pipeline.scala:247)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.saveImpl(Pipeline.scala:247)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.saveImpl(Pipeline.scala:346)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.super$save(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$4(Pipeline.scala:344)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3$adapted(Pipeline.scala:344)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.save(Pipeline.scala:344)
	at org.apache.spark.ml.util.MLWritable.save(ReadWrite.scala:287)
	at org.apache.spark.ml.util.MLWritable.save$(ReadWrite.scala:287)
	at org.apache.spark.ml.PipelineModel.save(Pipeline.scala:296)
	at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$GeneralizedLinearRegressionWrapperWriter.saveImpl(GeneralizedLinearRegressionWrapper.scala:174)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:164)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:105)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:39)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:431)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:298)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:211)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:58)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:75)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:282)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:288)
	... 9 more
21/04/06 12:02:45 ERROR Instrumentation: org.apache.spark.SparkException: Job aborted.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.jobAbortedError(QueryExecutionErrors.scala:427)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:233)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:186)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:179)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:217)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:214)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:175)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:133)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:132)
	at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:774)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:411)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293)
	at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:874)
	at org.apache.spark.ml.feature.RFormulaModel$RFormulaModelWriter.saveImpl(RFormula.scala:434)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$5(Pipeline.scala:257)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4(Pipeline.scala:257)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4$adapted(Pipeline.scala:254)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1(Pipeline.scala:254)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1$adapted(Pipeline.scala:247)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.saveImpl(Pipeline.scala:247)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.saveImpl(Pipeline.scala:346)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.super$save(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$4(Pipeline.scala:344)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3$adapted(Pipeline.scala:344)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.save(Pipeline.scala:344)
	at org.apache.spark.ml.util.MLWritable.save(ReadWrite.scala:287)
	at org.apache.spark.ml.util.MLWritable.save$(ReadWrite.scala:287)
	at org.apache.spark.ml.PipelineModel.save(Pipeline.scala:296)
	at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$GeneralizedLinearRegressionWrapperWriter.saveImpl(GeneralizedLinearRegressionWrapper.scala:174)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:164)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:105)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:39)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3881.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3881.0 (TID 878) (172.17.0.1 executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:431)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:298)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:211)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:58)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:75)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:282)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:288)
	... 9 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2262)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2211)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2210)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2210)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1083)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1083)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1083)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2449)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2391)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2380)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:872)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2220)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:201)
	... 94 more
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:431)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:298)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:211)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:58)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:75)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:282)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:288)
	... 9 more

21/04/06 12:02:45 ERROR Instrumentation: org.apache.spark.SparkException: Job aborted.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.jobAbortedError(QueryExecutionErrors.scala:427)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:233)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:186)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:131)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:179)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:217)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:214)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:175)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:133)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:132)
	at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
	at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:774)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
	at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
	at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:411)
	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293)
	at org.apache.spark.sql.DataFrameWriter.parquet(DataFrameWriter.scala:874)
	at org.apache.spark.ml.feature.RFormulaModel$RFormulaModelWriter.saveImpl(RFormula.scala:434)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$5(Pipeline.scala:257)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4(Pipeline.scala:257)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$4$adapted(Pipeline.scala:254)
	at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:36)
	at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:198)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1(Pipeline.scala:254)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.$anonfun$saveImpl$1$adapted(Pipeline.scala:247)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.Pipeline$SharedReadWrite$.saveImpl(Pipeline.scala:247)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.saveImpl(Pipeline.scala:346)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.super$save(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$4(Pipeline.scala:344)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent(events.scala:174)
	at org.apache.spark.ml.MLEvents.withSaveInstanceEvent$(events.scala:169)
	at org.apache.spark.ml.util.Instrumentation.withSaveInstanceEvent(Instrumentation.scala:42)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3(Pipeline.scala:344)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.$anonfun$save$3$adapted(Pipeline.scala:344)
	at org.apache.spark.ml.util.Instrumentation$.$anonfun$instrumented$1(Instrumentation.scala:191)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.ml.util.Instrumentation$.instrumented(Instrumentation.scala:191)
	at org.apache.spark.ml.PipelineModel$PipelineModelWriter.save(Pipeline.scala:344)
	at org.apache.spark.ml.util.MLWritable.save(ReadWrite.scala:287)
	at org.apache.spark.ml.util.MLWritable.save$(ReadWrite.scala:287)
	at org.apache.spark.ml.PipelineModel.save(Pipeline.scala:296)
	at org.apache.spark.ml.r.GeneralizedLinearRegressionWrapper$GeneralizedLinearRegressionWrapperWriter.saveImpl(GeneralizedLinearRegressionWrapper.scala:174)
	at org.apache.spark.ml.util.MLWriter.save(ReadWrite.scala:168)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:164)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:105)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:39)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3881.0 failed 1 times, most recent failure: Lost task 0.0 in stage 3881.0 (TID 878) (172.17.0.1 executor driver): org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:431)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:298)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:211)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:58)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:75)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:282)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:288)
	... 9 more

Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2262)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2211)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2210)
	at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
	at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2210)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1083)
	at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1083)
	at scala.Option.foreach(Option.scala:407)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1083)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2449)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2391)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2380)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:872)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:2220)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:201)
	... 94 more
Caused by: org.apache.spark.SparkException: Task failed while writing rows.
	at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:431)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:298)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$write$15(FileFormatWriter.scala:211)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
	at org.apache.spark.scheduler.Task.run(Task.scala:131)
	at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1437)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more
Caused by: java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer;
	at org.apache.parquet.hadoop.codec.SnappyCompressor.reset(SnappyCompressor.java:156)
	at org.apache.hadoop.io.compress.CodecPool.returnCompressor(CodecPool.java:210)
	at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.release(CodecFactory.java:177)
	at org.apache.parquet.hadoop.CodecFactory.release(CodecFactory.java:250)
	at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:168)
	at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:58)
	at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:75)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:282)
	at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1471)
	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:288)
	... 9 more

21/04/06 12:02:45 ERROR RBackendHandler: save on 1204 failed
java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:164)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:105)
	at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:39)
	... [Netty channel-pipeline and event-loop frames elided; identical to the ERROR Instrumentation trace above] ...
Caused by: org.apache.spark.SparkException: Job aborted.
	... [cause chain elided; identical to the ERROR Instrumentation trace above, rooted in java.lang.NoSuchMethodError: java.nio.ByteBuffer.rewind()Ljava/nio/ByteBuffer; at SnappyCompressor.reset(SnappyCompressor.java:156)] ...
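
The RBackendHandler entry above records the same failure as seen from the R side: SparkR dispatches the save call to the JVM by reflection over a Netty channel, and Method.invoke wraps whatever the target throws in InvocationTargetException, which is why that wrapper heads the trace before the underlying SparkException. The vignette chunk reported next is the R call that triggered it. A self-contained illustration of the wrapping (hypothetical class, not Spark code):

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

// Hypothetical demo class showing why reflective dispatch reports
// InvocationTargetException first and the real error only as its cause.
public class InvokeWrapDemo {
    public static void fail() {
        throw new IllegalStateException("Job aborted.");
    }

    public static void main(String[] args) throws Exception {
        Method m = InvokeWrapDemo.class.getMethod("fail");
        try {
            m.invoke(null); // static method, so no receiver object
        } catch (InvocationTargetException e) {
            System.out.println("wrapper: " + e);            // the exception the handler logs
            System.out.println("cause:   " + e.getCause()); // the IllegalStateException
        }
    }
}
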
Quitting from lines 1120-1137 (sparkr-vignettes.Rmd) 
Error: processing vignette 'sparkr-vignettes.Rmd' failed with diagnostics:
org.apache.spark.SparkException: Job aborted.
	... [stack trace elided; identical to the ERROR Instrumentation trace above, and truncated mid-frame in the original log output] ...
--- failed re-building ‘sparkr-vignettes.Rmd’

SUMMARY: processing the following file failed:
  ‘sparkr-vignettes.Rmd’

Error: Vignette re-building failed.
Execution halted
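
Because the vignette build aborts, the job never produces a distribution tarball, which is presumably why the artifact-archiving step at the end finds nothing matching spark-*.tgz. To confirm the compatibility diagnosis on a machine like this, one can check which class-file version the offending Parquet class was compiled to. A diagnostic sketch, assuming the parquet-hadoop jar is on the classpath (the resource path comes from the trace; the parsing follows the standard class-file header, where major version 52 is Java 8, 53 is Java 9, and 55 is Java 11):

import java.io.DataInputStream;
import java.io.InputStream;
import java.net.URL;

// Prints the class-file major/minor version of the class named in the trace.
public class ClassVersionCheck {
    public static void main(String[] args) throws Exception {
        String resource = "org/apache/parquet/hadoop/codec/SnappyCompressor.class";
        URL url = ClassVersionCheck.class.getClassLoader().getResource(resource);
        if (url == null) {
            System.err.println("class not on classpath: " + resource);
            return;
        }
        try (InputStream in = url.openStream();
             DataInputStream data = new DataInputStream(in)) {
            data.readInt();                       // magic number 0xCAFEBABE
            int minor = data.readUnsignedShort(); // minor_version
            int major = data.readUnsignedShort(); // major_version
            System.out.println(resource + " -> class-file version " + major + "." + minor);
        }
    }
}

A major version above 52 in a jar that must run on Java 8 would confirm the mismatch; rebuilding the dependency with --release 8, or running the build on a matching JDK, avoids the NoSuchMethodError.
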
+ retcode=1
+ PATH=/usr/java/latest/bin:/home/anaconda/envs/py36/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.6.3/bin/:/home/jenkins/gems/bin:/usr/local/go/bin:/home/jenkins/go-projects/bin:/home/jenkins/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/bin:/usr/sbin
+ /home/jenkins/bin/kill_zinc_nailgun.py --zinc-port 3519
+ ((  1 == 0  ))
+ rm -rf
+ exit
Archiving artifacts
‘spark-*.tgz’ doesn’t match anything
ERROR: Step ‘Archive the artifacts’ failed: No artifacts found that match the file pattern "spark-*.tgz". Configuration error?
Finished: FAILURE