Changes

Summary

  1. [SPARK-30821][K8S] Handle executor failure with multiple containers (commit: f659527) (details)
  2. [SPARK-29438][SS][FOLLOWUP] Add regression tests for Streaming Aggregation and flatMapGroupsWithState (commit: 0c66a88) (details)
Commit f65952772702f0a8772c93b79f562f35c337f5a5 by hkarau
[SPARK-30821][K8S] Handle executor failure with multiple containers
Added a Spark property, spark.kubernetes.executor.checkAllContainers,
which defaults to false. When it is true and the pod restart policy is
"Never", the executor snapshot takes all containers in the executor pod
into consideration when deciding whether the executor is in the
"Running" state. The new property is also documented.
### What changes were proposed in this pull request?
This PR checks all containers in the executor pod when reporting executor
status, if the `spark.kubernetes.executor.checkAllContainers` property
is set to `true`.
### Why are the changes needed?
Currently, a pod remains "running" as long as at least one container is
running. This prevents Spark from noticing when a container has failed
in an executor pod with multiple containers. With this change, the user
can configure the behavior differently: if any container in the executor
pod has failed, whether the executor process or one of its sidecars, the
pod is considered failed, and the executor will be rescheduled.
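The sketch below illustrates the kind of check described here; it is not the actual Spark implementation, and the helper name and use of the fabric8 Kubernetes client model are assumptions.

```scala
import io.fabric8.kubernetes.api.model.Pod
import scala.collection.JavaConverters._

// Sketch: with checkAllContainers enabled and a "Never" restart policy,
// a pod counts as failed if any of its containers (the executor process
// itself or a sidecar) has terminated with a non-zero exit code.
def anyContainerFailed(pod: Pod): Boolean = {
  pod.getStatus.getContainerStatuses.asScala.exists { status =>
    Option(status.getState)
      .flatMap(state => Option(state.getTerminated))
      .exists(_.getExitCode.intValue != 0)
  }
}
```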
### Does this PR introduce _any_ user-facing change?
Yes, a new Spark property is added. Users can now choose whether to
turn this feature on via the
`spark.kubernetes.executor.checkAllContainers` property.
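For illustration, one way to opt in from application code is shown below; the property name comes from this commit, while the session setup is a hypothetical sketch.

```scala
import org.apache.spark.sql.SparkSession

// Enable the all-containers check introduced by this commit (default: false).
val spark = SparkSession.builder()
  .appName("executor-sidecar-aware")
  .config("spark.kubernetes.executor.checkAllContainers", "true")
  .getOrCreate()
```

The same setting can also be passed on the command line with `--conf spark.kubernetes.executor.checkAllContainers=true`.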
### How was this patch tested?
A unit test was added and all tests passed. I tried to run the
integration tests by following the instructions
[here](https://spark.apache.org/developer-tools.html) (section "Testing
K8S") and also
[here](https://github.com/apache/spark/blob/master/resource-managers/kubernetes/integration-tests/README.md),
but I wasn't able to run them smoothly because they fail to talk to the
minikube cluster. Maybe my minikube version is too new (I'm using
v1.13.1)? Since I've been trying for two days and still can't make
it work, I decided to submit this PR in the hope that the Jenkins tests
will pass.
Closes #29924 from huskysun/exec-sidecar-failure.
Authored-by: Shiqi Sun <s.sun@salesforce.com>
Signed-off-by: Holden Karau <hkarau@apple.com>
(commit: f659527)
The file was modified resource-managers/kubernetes/core/src/test/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorLifecycleTestUtils.scala (diff)
The file was modified resource-managers/kubernetes/core/src/test/scala/org/apache/spark/scheduler/cluster/k8s/DeterministicExecutorPodsSnapshotsStore.scala (diff)
The file was modified docs/running-on-kubernetes.md (diff)
The file was modified resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterManager.scala (diff)
The file was modified resource-managers/kubernetes/core/src/test/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsSnapshotSuite.scala (diff)
The file was modified resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala (diff)
The file was modified resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsSnapshot.scala (diff)
The file was modified resource-managers/kubernetes/core/src/test/scala/org/apache/spark/scheduler/cluster/k8s/ExecutorPodsSnapshotsStoreSuite.scala (diff)
Commit 0c66a88d1d1336c9f3b474622315254952cbd56e by viirya
[SPARK-29438][SS][FOLLOWUP] Add regression tests for Streaming
Aggregation and flatMapGroupsWithState
### What changes were proposed in this pull request?
This patch adds new UTs to prevent SPARK-29438 from recurring for
streaming aggregation as well as flatMapGroupsWithState, as agreed in
the review comment quoted here:
https://github.com/apache/spark/pull/26162#issuecomment-576929692
> LGTM for this PR. But on an additional note, this is a very subtle and
easy-to-make bug with TaskContext.getPartitionId. I wonder if this bug
is present in any other stateful operation. Can you please verify how
partitionId is used in the other stateful operations?
For now they're not broken, but it's even better to have UTs that
prevent this case in the future.
### Why are the changes needed?
The new UTs will prevent streaming aggregation and flatMapGroupsWithState
from being broken in the future when the stateful operator is placed on
the right side of a UNION and the number of partitions changes on the
left side of the UNION. Please refer to SPARK-29438 for more details.
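To make the failure mode concrete, here is a minimal, hypothetical sketch (not one of the added UTs) of why TaskContext.getPartitionId() can diverge from the partition index of the branch that owns the state store when that branch sits on the right side of a union; the names and partition counts are illustrative.

```scala
import org.apache.spark.{SparkConf, SparkContext, TaskContext}

object PartitionIdUnionDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setMaster("local[2]").setAppName("partition-id-union-demo"))

    val left  = sc.parallelize(Seq(0), numSlices = 1)     // 1 partition on the left
    val right = sc.parallelize(Seq(1, 2), numSlices = 2)  // 2 partitions on the right

    // Tag each right-side record with the partition index of the right RDD
    // (the index a stateful operator should key its state store by) and with
    // TaskContext.getPartitionId() as observed at execution time.
    val tagged = right.mapPartitionsWithIndex { (rightIndex, iter) =>
      iter.map(v => (v, rightIndex, TaskContext.getPartitionId()))
    }

    left.map(v => (v, -1, -1)).union(tagged).collect().foreach(println)
    // Right-side rows report rightIndex 0 and 1 but task partition ids 1 and 2,
    // because the union offsets task partition ids by the left side's partition count.
    sc.stop()
  }
}
```

In a streaming query of the same shape, keying the state store by the task partition id instead of the operator's own partition index would read or write the wrong state partitions whenever the left side's partition count changes.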
### Does this PR introduce any user-facing change?
No.
### How was this patch tested?
Added UTs.
Closes #27333 from HeartSaVioR/SPARK-29438-add-regression-test.
Authored-by: Jungtaek Lim (HeartSaVioR) <kabhwan.opensource@gmail.com>
Signed-off-by: Liang-Chi Hsieh <viirya@gmail.com>
(commit: 0c66a88)
The file was modified sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingDeduplicationSuite.scala (diff)
The file was modified sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationSuite.scala (diff)
The file was modified sql/core/src/test/scala/org/apache/spark/sql/streaming/FlatMapGroupsWithStateSuite.scala (diff)