Test Result: FileSuite

0 failures (±0)
38 tests (±0)
Took 18 sec.

All Tests

Test name | Duration | Status
Get input files via new Hadoop API | 0.35 sec | Passed
Get input files via old Hadoop API | 0.55 sec | Passed
SPARK-22357 test binaryFiles minPartitions | 0.94 sec | Passed
SPARK-25100: Support commit tasks when Kyro registration is required | 1 sec | Passed
SequenceFile (compressed) | 0.59 sec | Passed
SequenceFile with writable key | 0.33 sec | Passed
SequenceFile with writable key and value | 0.3 sec | Passed
SequenceFile with writable value | 0.36 sec | Passed
SequenceFiles | 0.72 sec | Passed
allow user to disable the output directory existence checking (new Hadoop API | 0.46 sec | Passed
allow user to disable the output directory existence checking (old Hadoop API) | 0.46 sec | Passed
binary file input as byte array | 0.3 sec | Passed
file caching | 0.36 sec | Passed
fixed record length binary file as byte array | 0.21 sec | Passed
implicit conversions in reading SequenceFiles | 0.45 sec | Passed
minimum split size per node and per rack should be less than or equal to maxSplitSize | 0.29 sec | Passed
negative binary record length should raise an exception | 0.32 sec | Passed
object files of classes from a JAR | 0.89 sec | Passed
object files of complex types | 0.33 sec | Passed
object files of ints | 0.28 sec | Passed
portabledatastream caching tests | 0.38 sec | Passed
portabledatastream flatmap tests | 0.25 sec | Passed
portabledatastream persist disk storage | 0.55 sec | Passed
prevent user from overwriting the empty directory (new Hadoop API) | 0.28 sec | Passed
prevent user from overwriting the empty directory (old Hadoop API) | 0.28 sec | Passed
prevent user from overwriting the non-empty directory (new Hadoop API) | 0.45 sec | Passed
prevent user from overwriting the non-empty directory (old Hadoop API) | 0.38 sec | Passed
read SequenceFile using new Hadoop API | 0.38 sec | Passed
save Hadoop Dataset through new Hadoop API | 0.37 sec | Passed
save Hadoop Dataset through old Hadoop API | 0.51 sec | Passed
spark.files.ignoreCorruptFiles should work both HadoopRDD and NewHadoopRDD | 0.67 sec | Passed
spark.files.ignoreMissingFiles should work both HadoopRDD and NewHadoopRDD | 0.54 sec | Passed
spark.hadoopRDD.ignoreEmptySplits work correctly (new Hadoop API) | 0.69 sec | Passed
spark.hadoopRDD.ignoreEmptySplits work correctly (old Hadoop API) | 0.7 sec | Passed
text files | 0.32 sec | Passed
text files (compressed) | 0.94 sec | Passed
text files do not allow null rows | 0.39 sec | Passed
write SequenceFile using new Hadoop API | 0.4 sec | Passed