Test Result: FileBasedDataSourceSuite

0 failures (±0)
53 tests (±0)
Took 24 sec.

All Tests

Test name | Duration | Status
Do not use cache on append | 0.64 sec | Passed
Do not use cache on overwrite | 0.7 sec | Passed
Enabling/disabling ignoreMissingFiles using csv | 2.4 sec | Passed
Enabling/disabling ignoreMissingFiles using json | 1.8 sec | Passed
Enabling/disabling ignoreMissingFiles using orc | 1.8 sec | Passed
Enabling/disabling ignoreMissingFiles using parquet | 1.8 sec | Passed
Enabling/disabling ignoreMissingFiles using text | 1.6 sec | Passed
File source v2: support partition pruning | 1.3 sec | Passed
File source v2: support passing data filters to FileScan without partitionFilters | 1.1 sec | Passed
File table location should include both values of option `path` and `paths` | 0.37 sec | Passed
Option pathGlobFilter: filter files correctly | 0.34 sec | Passed
Option pathGlobFilter: simple extension filtering should contains partition info | 0.41 sec | Passed
Option recursiveFileLookup: disable partition inferring | 31 ms | Passed
Option recursiveFileLookup: recursive loading correctly | 0.11 sec | Passed
Return correct results when data columns overlap with partition columns | 0.67 sec | Passed
Return correct results when data columns overlap with partition columns (nested data) | 0.57 sec | Passed
SPARK-15474 Write and read back non-empty schema with empty dataframe - orc | 0.16 sec | Passed
SPARK-15474 Write and read back non-empty schema with empty dataframe - parquet | 0.16 sec | Passed
SPARK-22146 read files containing special characters using csv | 0.17 sec | Passed
SPARK-22146 read files containing special characters using json | 0.14 sec | Passed
SPARK-22146 read files containing special characters using orc | 0.13 sec | Passed
SPARK-22146 read files containing special characters using parquet | 0.19 sec | Passed
SPARK-22146 read files containing special characters using text | 0.11 sec | Passed
SPARK-22790,SPARK-27668: spark.sql.sources.compressionFactor takes effect | 0.36 sec | Passed
SPARK-23072 Write and read back unicode column names - csv | 0.17 sec | Passed
SPARK-23072 Write and read back unicode column names - json | 0.14 sec | Passed
SPARK-23072 Write and read back unicode column names - orc | 0.14 sec | Passed
SPARK-23072 Write and read back unicode column names - parquet | 0.19 sec | Passed
SPARK-23148 read files containing special characters using csv with multiline enabled | 0.19 sec | Passed
SPARK-23148 read files containing special characters using json with multiline enabled | 0.17 sec | Passed
SPARK-23271 empty RDD when saved should write a metadata only file - orc | 0.14 sec | Passed
SPARK-23271 empty RDD when saved should write a metadata only file - parquet | 0.18 sec | Passed
SPARK-23372 error while writing empty schema files using csv | 10 ms | Passed
SPARK-23372 error while writing empty schema files using json | 8 ms | Passed
SPARK-23372 error while writing empty schema files using orc | 13 ms | Passed
SPARK-23372 error while writing empty schema files using parquet | 9 ms | Passed
SPARK-23372 error while writing empty schema files using text | 8 ms | Passed
SPARK-24204 error handling for unsupported Array/Map/Struct types - csv | 0.48 sec | Passed
SPARK-24204 error handling for unsupported Interval data types - csv, json, parquet, orc | 0.34 sec | Passed
SPARK-24204 error handling for unsupported Null data types - csv, parquet, orc | 0.65 sec | Passed
SPARK-24691 error handling for unsupported types - text | 0.27 sec | Passed
SPARK-25237 compute correct input metrics in FileScanRDD | 0.19 sec | Passed
SPARK-31116: Select nested schema with case insensitive mode | 1.1 sec | Passed
SPARK-31935: Hadoop file system config should be effective in data source options | 0.47 sec | Passed
Spark native readers should respect spark.sql.caseSensitive - orc | 0.53 sec | Passed
Spark native readers should respect spark.sql.caseSensitive - parquet | 0.64 sec | Passed
UDF input_file_name() | 0.27 sec | Passed
Writing empty datasets should not fail - csv | 73 ms | Passed
Writing empty datasets should not fail - json | 72 ms | Passed
Writing empty datasets should not fail - orc | 91 ms | Passed
Writing empty datasets should not fail - parquet | 69 ms | Passed
Writing empty datasets should not fail - text | 75 ms | Passed
sizeInBytes should be the total size of all files | 0.16 sec | Passed