Changes

Summary

  1. [SPARK-33599][SQL][FOLLOWUP] Fix GitHub Action with unidoc (commit: c17c76d) (details)
  2. [SPARK-33841][CORE] Fix issue with jobs disappearing intermittently from (commit: 554600c) (details)
  3. [SPARK-33812][SQL] Split the histogram column stats when saving to hive (commit: de234ee) (details)
  4. [SPARK-33518][ML] Improve performance of ML ALS recommendForAll by GEMV (commit: 44563a0) (details)
  5. [SPARK-33843][BUILD] Upgrade to Zstd 1.4.8 (commit: 00642ee) (details)
  6. [SPARK-32976][SQL][FOLLOWUP] SET and RESTORE (commit: dd44ba5) (details)
  7. [SPARK-33829][SQL] Renaming v2 tables should recreate the cache (commit: 06075d8) (details)
  8. [MINOR][DOCS] Fix typos in ScalaDocs for DataStreamWriter#foreachBatch (commit: 37c4cd8) (details)
  9. [SPARK-33850][SQL] EXPLAIN FORMATTED doesn't show the plan for (commit: 70da86a) (details)
  10. [SPARK-33854][BUILD] Use ListBuffer instead of Stack in SparkBuild.scala (commit: 2b6ef56) (details)
  11. [SPARK-33852][SQL][TESTS] Use assertAnalysisError in HiveDDLSuite.scala (commit: df2314b) (details)
  12. [SPARK-33756][SQL] Make BytesToBytesMap's MapIterator idempotent (commit: 1339168) (details)
Commit c17c76dd1647953f9bdb7135ba08a9b9f25460c9 by dongjoon
[SPARK-33599][SQL][FOLLOWUP] Fix GitHub Action with unidoc

### What changes were proposed in this pull request?

Fix the GitHub Actions build with unidoc.

### Why are the changes needed?

Fix the GitHub Actions build with unidoc.

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

Pass GitHub Actions (GA).

Closes #30846 from yaooqinn/SPARK-33599.

Authored-by: Kent Yao <yaooqinn@hotmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(commit: c17c76d)
The file was modified sql/catalyst/src/main/scala/org/apache/spark/sql/QueryExecutionErrors.scala (diff)
The file was modified sql/catalyst/src/main/scala/org/apache/spark/sql/QueryCompilationErrors.scala (diff)
Commit 554600c2af0dbc8979955807658fafef5dc66c08 by dongjoon
[SPARK-33841][CORE] Fix issue with jobs disappearing intermittently from the SHS under high load

### What changes were proposed in this pull request?

Mark SHS event log entries that were `processing` at the beginning of the `checkForLogs` run as not stale, and check for this mark before deleting an event log. This fixes the issue where a particular job was displayed in the SHS, disappeared after some time, and then showed up again several minutes later.
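
To make the intent concrete, here is a self-contained sketch of the idea (names are illustrative, not the actual `FsHistoryProvider` code): snapshot which logs are mid-processing before the scan starts, and skip them when deleting "stale" entries.

```scala
// Simplified model of the guard: an entry is stale only if it was last processed
// before the scan started AND it is not currently being processed.
final case class LogEntry(path: String, lastProcessed: Long, processing: Boolean)

def staleEntries(entries: Seq[LogEntry], scanStart: Long): Seq[LogEntry] =
  entries.filter(e => e.lastProcessed < scanStart && !e.processing)

val scanStart = System.currentTimeMillis()
val entries = Seq(
  LogEntry("app-1", scanStart - 1000, processing = false), // truly stale: delete
  LogEntry("app-2", scanStart - 1000, processing = true)   // mid-processing: keep
)
assert(staleEntries(entries, scanStart).map(_.path) == Seq("app-1"))
```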

### Why are the changes needed?

The issue is caused by [SPARK-29043](https://issues.apache.org/jira/browse/SPARK-29043), which was intended to improve the concurrent performance of the History Server. The [change](https://github.com/apache/spark/pull/25797/files#) breaks the ["app deletion" logic](https://github.com/apache/spark/pull/25797/files#diff-128a6af0d78f4a6180774faedb335d6168dfc4defff58f5aa3021fc1bd767bc0R563) because it lacks proper synchronization for `processing` event log entries. Since the SHS now [filters out](https://github.com/apache/spark/pull/25797/files#diff-128a6af0d78f4a6180774faedb335d6168dfc4defff58f5aa3021fc1bd767bc0R462) all `processing` event log entries, such entries have no chance to be [updated with the new `lastProcessed`](https://github.com/apache/spark/pull/25797/files#diff-128a6af0d78f4a6180774faedb335d6168dfc4defff58f5aa3021fc1bd767bc0R472) time, so any entry that finishes processing right after the [filtering](https://github.com/apache/spark/pull/25797/files#diff-128a6af0d78f4a6180774faedb335d6168dfc4defff58f5aa3021fc1bd767bc0R462) and before [the check for stale entries](https://github.com/apache/spark/pull/25797/files#diff-128a6af0d78f4a6180774faedb335d6168dfc4defff58f5aa3021fc1bd767bc0R560) is identified as stale and is deleted from the UI until the next `checkForLogs` run. This happens because the [updated `lastProcessed` time is used as the staleness criterion](https://github.com/apache/spark/pull/25797/files#diff-128a6af0d78f4a6180774faedb335d6168dfc4defff58f5aa3021fc1bd767bc0R557), and event log entries that missed the update match that criterion.

The issue can be reproduced by generating a large number of event logs and uploading them to the SHS event log directory on S3. Essentially, around 236 (26.7 MB) copies of an event log directory were created using the [shs-monitor](https://github.com/vladhlinsky/shs-monitor/tree/spark-master) script. Strange behavior was observed in how the SHS counted the total number of applications: at first the number increased as expected, but with the next page refresh the total decreased. No errors were logged by the SHS.

58 entities are displayed at `17:35:35`:
![1-58-entries-at-17-35](https://user-images.githubusercontent.com/61428392/102648949-1129e400-4171-11eb-9463-ed1454a8f6b2.png)
25 entities are displayed at `17:36:40`:
![2-25-entries-at-17-36](https://user-images.githubusercontent.com/61428392/102648974-1c7d0f80-4171-11eb-95d8-78c2bb37a168.png)

### Does this PR introduce _any_ user-facing change?

Yes. SHS users will no longer see the number of displayed applications decrease periodically.

### How was this patch tested?

Tested using [shs-monitor](https://github.com/vladhlinsky/shs-monitor/tree/spark-master) script:
* Build SHS with the proposed change
* Download Hadoop AWS and AWS Java SDK
* Prepare S3 bucket and user for programmatic access, grant required roles to the user. Get access key and secret key
* Configure SHS to read event logs from S3
* Start [monitor](https://github.com/vladhlinsky/shs-monitor/blob/spark-master/monitor.sh) script to query SHS API
* Run 5 [producers](https://github.com/vladhlinsky/shs-monitor/blob/spark-master/producer.sh) for ~5 mins, creating 125 (14.2 MB) event log directory copies
* Wait for SHS to load all the applications
* Verify that the number of loaded applications increases continuously over time

For more details, please refer to the [shs-monitor](https://github.com/vladhlinsky/shs-monitor/tree/spark-master) repository.
> This version of the reproduction uses event log directories instead of single files, since the recent optimization
> [SPARK-33790](https://issues.apache.org/jira/browse/SPARK-33790) makes it hard to reproduce the issue with single event log files.

Closes #30845 from vladhlinsky/SPARK-33841.

Authored-by: Vlad Glinsky <vladhlinsky@gmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(commit: 554600c)
The file was modified core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala (diff)
Commit de234eec8febce99ede5ef9ae2301e36739a0f85 by gurwls223
[SPARK-33812][SQL] Split the histogram column stats when saving to hive metastore as table property

### What changes were proposed in this pull request?

The Hive metastore has a limitation on table property length. To work around it, Spark splits the schema JSON string into several parts when saving it to the Hive metastore as table properties. We need to do the same for histogram column stats, as they can grow very large.

This PR refactors the table property splitting code, so that we can share it between the schema json string and histogram column stats.
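
As a rough illustration of the splitting scheme (helper names and the length limit are assumptions, not Spark's actual API), a long value can be stored as numbered part properties plus a part count, and reassembled by reading the parts in order:

```scala
val maxLen = 4000 // illustrative limit; the real limit depends on the metastore backend

// Break one long property value into numbered chunks plus a part count.
def splitProps(key: String, value: String): Map[String, String] = {
  val parts = value.grouped(maxLen).toSeq
  val numbered = parts.zipWithIndex.map { case (p, i) => s"$key.part.$i" -> p }
  (numbered :+ (s"$key.numParts" -> parts.size.toString)).toMap
}

// Reassemble the original value by concatenating the parts in order.
def restoreProps(key: String, props: Map[String, String]): String = {
  val numParts = props(s"$key.numParts").toInt
  (0 until numParts).map(i => props(s"$key.part.$i")).mkString
}
```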

### Why are the changes needed?

To be able to analyze table when histogram data is big.

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

existing and new tests

Closes #30809 from cloud-fan/cbo.

Authored-by: Wenchen Fan <wenchen@databricks.com>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
(commit: de234ee)
The file was modified sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala (diff)
The file was modified sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala (diff)
The file was modified sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala (diff)
The file was modified sql/core/src/test/scala/org/apache/spark/sql/StatisticsCollectionSuite.scala (diff)
The file was modified sql/hive/src/test/scala/org/apache/spark/sql/hive/MetastoreDataSourcesSuite.scala (diff)
The file was modified sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala (diff)
The file was modified sql/core/src/test/scala/org/apache/spark/sql/RuntimeConfigSuite.scala (diff)
The file was modified sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala (diff)
Commit 44563a0412257645e0053ee2c44d6eb3447e9d4f by srowen
[SPARK-33518][ML] Improve performance of ML ALS recommendForAll by GEMV

### What changes were proposed in this pull request?
There has been a lot of work on improving ALS's `recommendForAll`.

For now, I found that it may be further optimized by:

1. using GEMV and sharing a pre-allocated buffer per task (see the sketch after this list);

2. using guava `Ordering` instead of `BoundedPriorityQueue`.
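
The following is a minimal sketch of the GEMV idea under stated assumptions (item factors laid out column-major with one factor vector per column; this is not the PR's actual code): all item scores for one user come from a single `sgemv` call into a buffer that can be reused across users within a task.

```scala
import com.github.fommil.netlib.F2jBLAS

val blas = new F2jBLAS
val rank = 2
val numItems = 3
// 3 item factor vectors of rank 2, stored column-major: items are the columns.
val itemFactors = Array(1f, 0f, 0f, 1f, 1f, 1f)
val userFactor  = Array(0.5f, 2.0f)
val scores      = new Array[Float](numItems) // pre-allocated once, reused per user

// y := 1.0 * A^T * x + 0.0 * y, with A a rank x numItems matrix (lda = rank),
// so scores(i) is the dot product of item i's factors with the user's factors.
blas.sgemv("T", rank, numItems, 1.0f, itemFactors, rank, userFactor, 1, 0.0f, scores, 1)
assert(scores.sameElements(Array(0.5f, 2.0f, 2.5f)))
```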

### Why are the changes needed?
In my test, using `f2jBLAS.sgemv` is about 2.3x faster than the existing implementation.

|Impl| Master | GEMM | GEMV | GEMV + array aggregator | GEMV + guava ordering + array aggregator  | GEMV + guava ordering|
|------|----------|------------|----------|------------|------------|------------|
|Duration|341229|363741|191201|189790|148417|147222|

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
existing test suites

Closes #30468 from zhengruifeng/als_rec_opt.

Authored-by: zhengruifeng <ruifengz@foxmail.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
(commit: 44563a0)
The file was modified mllib/src/main/scala/org/apache/spark/ml/recommendation/ALS.scala (diff)
Commit 00642ee19e6969ca7996fb44d16d001fcf17b407 by dongjoon
[SPARK-33843][BUILD] Upgrade to Zstd 1.4.8

### What changes were proposed in this pull request?

This PR aims to upgrade Zstd library to 1.4.8.

### Why are the changes needed?

This will bring the Zstd 1.4.7 and 1.4.8 improvements and bug fixes, and the following from `zstd-jni`.
- https://github.com/facebook/zstd/releases/tag/v1.4.7
- https://github.com/facebook/zstd/releases/tag/v1.4.8
- https://github.com/luben/zstd-jni/issues/153 (Apple M1 architecture)

### Does this PR introduce _any_ user-facing change?

This will unblock Apple Silicon usage.

### How was this patch tested?

Pass the CIs.

Closes #30848 from dongjoon-hyun/SPARK-33843.

Authored-by: Dongjoon Hyun <dongjoon@apache.org>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(commit: 00642ee)
The file was modified pom.xml (diff)
The file was modified dev/deps/spark-deps-hadoop-2.7-hive-2.3 (diff)
The file was modified dev/deps/spark-deps-hadoop-3.2-hive-2.3 (diff)
Commit dd44ba5460c3850c87e93c2c126d980cb1b3a8b4 by dongjoon
[SPARK-32976][SQL][FOLLOWUP] SET and RESTORE hive.exec.dynamic.partition.mode for HiveSQLInsertTestSuite to avoid flakiness

### What changes were proposed in this pull request?

As https://github.com/apache/spark/pull/29893#discussion_r545303780 mentioned:

> We need to set spark.conf.set("hive.exec.dynamic.partition.mode", "nonstrict") before executing this suite; otherwise, test("insert with column list - follow table output order + partitioned table") will fail.
> The reason it does not always fail is that some test cases [running before this suite] do not change the default value of hive.exec.dynamic.partition.mode back to strict. However, the order of test suite execution is not deterministic.
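
A minimal sketch of the SET-and-RESTORE pattern (the helper name is an assumption, not the suite's actual code): force a conf value for the duration of a block and put the original value back afterwards, so test ordering cannot leak state.

```scala
import org.apache.spark.sql.SparkSession

def withConf[T](spark: SparkSession)(key: String, value: String)(body: => T): T = {
  val original = spark.conf.getOption(key) // snapshot before overriding
  spark.conf.set(key, value)
  try body
  finally original match {
    case Some(v) => spark.conf.set(key, v) // restore the previous value
    case None    => spark.conf.unset(key)  // or remove the key entirely
  }
}
```
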
### Why are the changes needed?

Avoid flakiness in tests.

### Does this PR introduce _any_ user-facing change?

no

### How was this patch tested?

existing tests

Closes #30843 from yaooqinn/SPARK-32976-F.

Authored-by: Kent Yao <yaooqinn@hotmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(commit: dd44ba5)
The file was modified sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveSQLInsertTestSuite.scala (diff)
Commit 06075d849e07a97f7aba0dceece57ed45cbae040 by dongjoon
[SPARK-33829][SQL] Renaming v2 tables should recreate the cache

### What changes were proposed in this pull request?

Currently, renaming a v2 table does not invalidate/recreate the cache, leading to incorrect behavior (the cache not being used) when v2 tables are renamed. This PR fixes the behavior.
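
For illustration, a session might look like this after the fix (the `testcat` catalog and table names are assumptions for the example):

```scala
// Assumes a v2 catalog registered under spark.sql.catalog.testcat.
spark.sql("CREATE TABLE testcat.ns.old_t (id BIGINT) USING foo")
spark.sql("CACHE TABLE testcat.ns.old_t")
spark.sql("ALTER TABLE testcat.ns.old_t RENAME TO ns.new_t")
// Reads of the new name should now be served from the recreated cache.
spark.table("testcat.ns.new_t").show()
```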

### Why are the changes needed?

Fixing a bug since the cache associated with the renamed table is not being cleaned up/recreated.

### Does this PR introduce _any_ user-facing change?

Yes. Now when a v2 table is renamed, the cache is correctly updated.

### How was this patch tested?

Added a new test

Closes #30825 from imback82/rename_recreate_cache_v2.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(commit: 06075d8)
The file was modified sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/DataSourceV2Strategy.scala (diff)
The file was modified sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2SQLSuite.scala (diff)
The file was modified sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/RenameTableExec.scala (diff)
Commit 37c4cd8f05316227465ff9cccbba063779827660 by srowen
[MINOR][DOCS] Fix typos in ScalaDocs for DataStreamWriter#foreachBatch

The title is pretty self-explanatory.

### What changes were proposed in this pull request?

Fixing typos in the docs for `foreachBatch` functions.

### Why are the changes needed?

To fix typos in JavaDoc/ScalaDoc.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

This is a documentation-only change.

Closes #30782 from ammar1x/patch-1.

Lead-authored-by: Ammar Al-Batool <ammar.albatool@gmail.com>
Co-authored-by: Ammar Al-Batool <ammar.al-batool@disneystreaming.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
(commit: 37c4cd8)
The file was modified sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala (diff)
Commit 70da86a085b61a0981c3f9fc6dbd897716472642 by dongjoon
[SPARK-33850][SQL] EXPLAIN FORMATTED doesn't show the plan for subqueries if AQE is enabled

### What changes were proposed in this pull request?

This PR fixes an issue that when AQE is enabled, EXPLAIN FORMATTED doesn't show the plan for subqueries.

```scala
val df = spark.range(1, 100)
df.createTempView("df")
spark.sql("SELECT (SELECT min(id) AS v FROM df)").explain("FORMATTED")

== Physical Plan ==
AdaptiveSparkPlan (3)
+- Project (2)
+- Scan OneRowRelation (1)

(1) Scan OneRowRelation
Output: []
Arguments: ParallelCollectionRDD[0] at explain at <console>:24, OneRowRelation, UnknownPartitioning(0)

(2) Project
Output [1]: [Subquery subquery#3, [id=#20] AS scalarsubquery()#5L]
Input: []

(3) AdaptiveSparkPlan
Output [1]: [scalarsubquery()#5L]
Arguments: isFinalPlan=false
```

After this change, the plan for the subquery is shown.
```scala
== Physical Plan ==
* Project (2)
+- * Scan OneRowRelation (1)

(1) Scan OneRowRelation [codegen id : 1]
Output: []
Arguments: ParallelCollectionRDD[0] at explain at <console>:24, OneRowRelation, UnknownPartitioning(0)

(2) Project [codegen id : 1]
Output [1]: [Subquery scalar-subquery#3, [id=#24] AS scalarsubquery()#5L]
Input: []

===== Subqueries =====

Subquery:1 Hosting operator id = 2 Hosting Expression = Subquery scalar-subquery#3, [id=#24]
* HashAggregate (6)
+- Exchange (5)
   +- * HashAggregate (4)
      +- * Range (3)

(3) Range [codegen id : 1]
Output [1]: [id#0L]
Arguments: Range (1, 100, step=1, splits=Some(12))

(4) HashAggregate [codegen id : 1]
Input [1]: [id#0L]
Keys: []
Functions [1]: [partial_min(id#0L)]
Aggregate Attributes [1]: [min#7L]
Results [1]: [min#8L]

(5) Exchange
Input [1]: [min#8L]
Arguments: SinglePartition, ENSURE_REQUIREMENTS, [id=#20]

(6) HashAggregate [codegen id : 2]
Input [1]: [min#8L]
Keys: []
Functions [1]: [min(id#0L)]
Aggregate Attributes [1]: [min(id#0L)#4L]
Results [1]: [min(id#0L)#4L AS v#2L]
```

### Why are the changes needed?

For better debuggability.

### Does this PR introduce _any_ user-facing change?

Yes. Users can see the formatted plan for subqueries.

### How was this patch tested?

New test.

Closes #30855 from sarutak/fix-aqe-explain.

Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(commit: 70da86a)
The file was modified sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala (diff)
The file was modified sql/core/src/test/resources/sql-tests/results/explain-aqe.sql.out (diff)
The file was modified sql/core/src/main/scala/org/apache/spark/sql/execution/ExplainUtils.scala (diff)
Commit 2b6ef5606bec1a4547c8e850440bf12cc3422e1d by dongjoon
[SPARK-33854][BUILD] Use ListBuffer instead of Stack in SparkBuild.scala

### What changes were proposed in this pull request?
This PR aims to use `ListBuffer` instead of `Stack` in SparkBuild.scala to remove a deprecation warning.

### Why are the changes needed?

`Stack` has been deprecated since Scala 2.12.0.

```scala
% build/sbt compile
...
[warn] /Users/william/spark/project/SparkBuild.scala:1112:25:
class Stack in package mutable is deprecated (since 2.12.0):
Stack is an inelegant and potentially poorly-performing wrapper around List.
Use a List assigned to a var instead.
[warn]         val stack = new Stack[File]()
```
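
A hypothetical sketch of the replacement pattern (not the actual SparkBuild.scala change): a `ListBuffer` used as a LIFO work list for directory traversal, instead of the deprecated `mutable.Stack`.

```scala
import java.io.File
import scala.collection.mutable.ListBuffer

val stack = new ListBuffer[File]()
stack.prepend(new File("."))
while (stack.nonEmpty) {
  val dir = stack.remove(0) // pop the most recently prepended entry
  val children = dir.listFiles()
  if (children != null) {
    // push subdirectories to visit them depth-first
    children.filter(_.isDirectory).foreach(stack.prepend(_))
  }
}
```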

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manual.

Closes #30860 from williamhyun/SPARK-33854.

Authored-by: William Hyun <williamhyun3@gmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(commit: 2b6ef56)
The file was modified project/SparkBuild.scala (diff)
Commit df2314b63aaf4992ac86ea0b68dae8554b066828 by dongjoon
[SPARK-33852][SQL][TESTS] Use assertAnalysisError in HiveDDLSuite.scala

### What changes were proposed in this pull request?

`HiveDDLSuite` contains many instances of the following pattern:
```scala
val e = intercept[AnalysisException] {
  sql(sqlString)
}
assert(e.message.contains(exceptionMessage))
```

However, there is already an `assertAnalysisError` helper function that does exactly the same thing.
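
For reference, such a helper typically looks like the following sketch (the exact signature in `HiveDDLSuite` may differ; `runSql` stands in for the suite's `sql` method):

```scala
import org.apache.spark.sql.AnalysisException
import org.scalatest.Assertions.intercept

def assertAnalysisError(runSql: String => Any)(sqlText: String, expectedMessage: String): Unit = {
  val e = intercept[AnalysisException] { runSql(sqlText) }
  assert(e.getMessage.contains(expectedMessage))
}
```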

### Why are the changes needed?

To simplify the code by refactoring.

### Does this PR introduce _any_ user-facing change?

No, just refactoring the test code.

### How was this patch tested?

Existing tests

Closes #30857 from imback82/hive_ddl_suite_use_assertAnalysisError.

Authored-by: Terry Kim <yuminkim@gmail.com>
Signed-off-by: Dongjoon Hyun <dongjoon@apache.org>
(commit: df2314b)
The file was modified sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala (diff)
Commit 13391683e7a863671d3d719dc81e20ec2a870725 by srowen
[SPARK-33756][SQL] Make BytesToBytesMap's MapIterator idempotent

### What changes were proposed in this pull request?
Make the `hasNext` method of `BytesToBytesMap`'s `MapIterator` idempotent.

### Why are the changes needed?
`hasNext` may be called multiple times. If not guarded, a second call to `hasNext` after reaching the end of the iterator will throw a `NoSuchElementException`.
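
To illustrate the property in isolation (this sketch is not the actual Java code in `BytesToBytesMap`): `hasNext` should only inspect state, never advance it, so calling it again after exhaustion keeps returning `false` instead of throwing.

```scala
class GuardedIterator[A](values: Array[A]) extends Iterator[A] {
  private var pos = 0
  // No side effects: safe to call any number of times.
  override def hasNext: Boolean = pos < values.length
  override def next(): A = {
    if (!hasNext) throw new NoSuchElementException("iterator exhausted")
    val v = values(pos)
    pos += 1
    v
  }
}

val it = new GuardedIterator(Array(1, 2))
it.next(); it.next()
assert(!it.hasNext && !it.hasNext) // repeated calls after the end are harmless
```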

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Update a unit test to cover this case.

Closes #30728 from advancedxy/SPARK-33756.

Authored-by: Xianjin YE <advancedxy@gmail.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
(commit: 1339168)
The file was modified core/src/main/java/org/apache/spark/unsafe/map/BytesToBytesMap.java (diff)
The file was modified core/src/test/java/org/apache/spark/unsafe/map/AbstractBytesToBytesMapSuite.java (diff)