Test Result : FileBasedDataSourceSuite

0 failures (±0)
54 tests (±0)
Took 35 sec.

All Tests

Test name | Duration | Status
Do not use cache on append | 1.1 sec | Passed
Do not use cache on overwrite | 0.97 sec | Passed
Enabling/disabling ignoreMissingFiles using csv | 2.8 sec | Passed
Enabling/disabling ignoreMissingFiles using json | 2.4 sec | Passed
Enabling/disabling ignoreMissingFiles using orc | 2.5 sec | Passed
Enabling/disabling ignoreMissingFiles using parquet | 2.7 sec | Passed
Enabling/disabling ignoreMissingFiles using text | 2.3 sec | Passed
File source v2: support partition pruning | 1.5 sec | Passed
File source v2: support passing data filters to FileScan without partitionFilters | 1.4 sec | Passed
Option recursiveFileLookup: disable partition inferring | 33 ms | Passed
Option recursiveFileLookup: recursive loading correctly | 0.14 sec | Passed
Return correct results when data columns overlap with partition columns | 0.84 sec | Passed
Return correct results when data columns overlap with partition columns (nested data) | 0.8 sec | Passed
SPARK-15474 Write and read back non-empty schema with empty dataframe - orc | 0.21 sec | Passed
SPARK-15474 Write and read back non-empty schema with empty dataframe - parquet | 0.23 sec | Passed
SPARK-22146 read files containing special characters using csv | 0.26 sec | Passed
SPARK-22146 read files containing special characters using json | 0.22 sec | Passed
SPARK-22146 read files containing special characters using orc | 0.18 sec | Passed
SPARK-22146 read files containing special characters using parquet | 0.24 sec | Passed
SPARK-22146 read files containing special characters using text | 0.21 sec | Passed
SPARK-22790,SPARK-27668: spark.sql.sources.compressionFactor takes effect | 0.47 sec | Passed
SPARK-23072 Write and read back unicode column names - csv | 0.28 sec | Passed
SPARK-23072 Write and read back unicode column names - json | 0.23 sec | Passed
SPARK-23072 Write and read back unicode column names - orc | 0.19 sec | Passed
SPARK-23072 Write and read back unicode column names - parquet | 0.22 sec | Passed
SPARK-23148 read files containing special characters using csv with multiline enabled | 0.23 sec | Passed
SPARK-23148 read files containing special characters using json with multiline enabled | 0.28 sec | Passed
SPARK-23271 empty RDD when saved should write a metadata only file - orc | 0.19 sec | Passed
SPARK-23271 empty RDD when saved should write a metadata only file - parquet | 0.23 sec | Passed
SPARK-23372 error while writing empty schema files using csv | 11 ms | Passed
SPARK-23372 error while writing empty schema files using json | 10 ms | Passed
SPARK-23372 error while writing empty schema files using orc | 14 ms | Passed
SPARK-23372 error while writing empty schema files using parquet | 10 ms | Passed
SPARK-23372 error while writing empty schema files using text | 11 ms | Passed
SPARK-24204 error handling for unsupported Array/Map/Struct types - csv | 0.65 sec | Passed
SPARK-24204 error handling for unsupported Interval data types - csv, json, parquet, orc | 0.62 sec | Passed
SPARK-24204 error handling for unsupported Null data types - csv, parquet, orc | 0.96 sec | Passed
SPARK-24691 error handling for unsupported types - text | 0.36 sec | Passed
SPARK-25237 compute correct input metrics in FileScanRDD | 0.32 sec | Passed
SPARK-31116: Select nested schema with case insensitive mode | 1.3 sec | Passed
SPARK-32827: Set max metadata string length | 50 ms | Passed
SPARK-32889: column name supports special characters using json | 1.8 sec | Passed
SPARK-32889: column name supports special characters using orc | 1.5 sec | Passed
SPARK-35669: special char in CSV header with filter pushdown | 0.37 sec | Passed
Spark native readers should respect spark.sql.caseSensitive - orc | 0.54 sec | Passed
Spark native readers should respect spark.sql.caseSensitive - parquet | 0.65 sec | Passed
UDF input_file_name() | 0.38 sec | Passed
Writing empty datasets should not fail - csv | 0.1 sec | Passed
Writing empty datasets should not fail - json | 98 ms | Passed
Writing empty datasets should not fail - orc | 0.11 sec | Passed
Writing empty datasets should not fail - parquet | 95 ms | Passed
Writing empty datasets should not fail - text | 0.1 sec | Passed
sizeInBytes should be the total size of all files | 0.24 sec | Passed
test casts pushdown on orc/parquet for integral types | 1 sec | Passed