Changes

Summary

  1. [SPARK-35027][CORE] Close the inputStream in FileAppender when writin… (details)
Commit 25caec4a3b069b75c95468c60a6d27c72354ad72 by srowen
[SPARK-35027][CORE] Close the inputStream in FileAppender when writin…

### What changes were proposed in this pull request?

1. Add a "closeStreams" flag to FileAppender and RollingFileAppender
2. Set "closeStreams" to "true" in ExecutorRunner

### Why are the changes needed?

The executor can hang when the disk is full or another exception occurs while writing to the outputStream. The root cause is that the inputStream is not closed after the error happens:
1. ExecutorRunner creates two file appenders for the pipe: one for stdout, one for stderr
2. FileAppender.appendStreamToFile exits its loop when writing to the outputStream fails
3. FileAppender closes the outputStream, but leaves the inputStream, which refers to the pipe's stdout and stderr, open
4. The executor then hangs when printing a log message once the pipe is full (nothing consumes its output)
5. From the driver side, the task can never complete

With this fix, step 4 throws an exception instead; the driver can catch the exception and reschedule the failed task to another executor.
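The fix described above can be sketched as follows. This is an illustrative simplification, not Spark's actual implementation: the names `appendStreamToFile` and `closeStreams` come from the PR description, but the method body and signature here are assumptions.

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, InputStream, OutputStream}

object FileAppenderSketch {
  // Hedged sketch: copy `in` to `out`; when `closeStreams` is true, close the
  // input stream as well, so the pipe's writer is not left blocked on a full
  // pipe after a write failure (e.g. disk full).
  def appendStreamToFile(in: InputStream, out: OutputStream, closeStreams: Boolean): Unit = {
    try {
      val buf = new Array[Byte](8192)
      var n = in.read(buf)
      while (n != -1) {
        out.write(buf, 0, n) // an IOException here (disk full) aborts the loop
        n = in.read(buf)
      }
    } finally {
      // Before the fix, only the outputStream was closed here; leaving `in`
      // (the pipe's stdout/stderr) open is what caused the executor to hang.
      if (closeStreams) in.close()
      out.close()
    }
  }

  def main(args: Array[String]): Unit = {
    val in = new ByteArrayInputStream("log line".getBytes("UTF-8"))
    val out = new ByteArrayOutputStream()
    appendStreamToFile(in, out, closeStreams = true)
    println(out.toString("UTF-8"))
  }
}
```

With `closeStreams = true`, a write failure now also closes the pipe's read end, so the child process writing to the pipe receives an error instead of blocking forever.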

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Added new tests for "closeStreams" in FileAppenderSuite.

Closes #33263 from jhu-chang/SPARK-35027.

Authored-by: Jie <gt.hu.chang@gmail.com>
Signed-off-by: Sean Owen <srowen@gmail.com>
(cherry picked from commit 1a8c6755a1802afdb9a73793e9348d322176125a)
Signed-off-by: Sean Owen <srowen@gmail.com>
The file was modified: core/src/main/scala/org/apache/spark/util/logging/RollingFileAppender.scala (diff)
The file was modified: core/src/test/scala/org/apache/spark/util/FileAppenderSuite.scala (diff)
The file was modified: core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala (diff)
The file was modified: core/src/main/scala/org/apache/spark/util/logging/FileAppender.scala (diff)