How do you handle JDK/JRE patch updates for Java apps on K8s?
I’m curious how people running Java workloads on Kubernetes handle JDK/JRE updates and security patches without rebuilding every app image.

Background: in Mesos (https://ift.tt/V21CSsj) times, we used to keep the JDK on the runner nodes. When a CVE or patch came out, we updated the host JDK and all apps picked it up. That was convenient for fast security rollouts. On k8s, almost everyone I see bakes the JDK into the container image, which means: new JDK → rebuild base image → rebuild app images (or at least rebuild on top of the new base) → push → roll out. That is reliable and reproducible, but it makes it effectively impossible to roll a JDK update out to, say, 2000 apps quickly.

Questions for people who run Java on k8s at scale: do you rebuild images for every JDK patch? If so, how do you keep the pipeline fast and automated?

Approaches we have discussed (still looking for something better):

- Rebuild images on every JDK patch (a CI pipeline that automatically bumps the base image and rebuilds): reproducible but heavy and slow.

- Host-provided JDK (like Mesos) via hostPath or a shared volume (every patch version must be available): fast patches, but brittle (node drift, version chaos between k8s nodes, less reproducible, potential security/permission problems).

- A standard base image for all Java apps (Alpine + Java) that our platform updates, with an init container that downloads the user app on startup, so that we can update the base in the background.

- A sidecar or init container that places a JDK into a shared volume, with the app container using that volume: a mutable runtime without rebuilding images — how well does this work in practice?
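For the last approach, a minimal sketch of what the Pod spec could look like — all image names and paths here are hypothetical placeholders, and this assumes the platform team maintains a slim JDK-only image that gets rebuilt on every patch:

```yaml
# Sketch: an init container copies a platform-managed JDK into an
# emptyDir, and the app container runs against that copy. The app
# image no longer bundles a JDK; a JDK patch becomes a single image
# tag bump (plus a pod restart), not a rebuild of every app image.
apiVersion: v1
kind: Pod
metadata:
  name: java-app
spec:
  volumes:
    - name: jdk
      emptyDir: {}
  initContainers:
    - name: provide-jdk
      # Placeholder image the platform team rebuilds on each JDK patch.
      image: registry.example.com/platform/jdk:21
      # Copy the JDK from the image into the shared volume.
      command: ["sh", "-c", "cp -a /opt/jdk/. /shared-jdk/"]
      volumeMounts:
        - name: jdk
          mountPath: /shared-jdk
  containers:
    - name: app
      # Placeholder app image containing only the application jar.
      image: registry.example.com/apps/my-service:1.4.2
      env:
        - name: JAVA_HOME
          value: /opt/jdk
      command: ["/opt/jdk/bin/java", "-jar", "/app/app.jar"]
      volumeMounts:
        - name: jdk
          mountPath: /opt/jdk
```

Note that pods still have to restart to pick up a new JDK image, so this trades image rebuilds for a rolling restart, and the init-container copy adds a few seconds of startup latency plus the JDK's size in ephemeral storage per pod.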
Hacker News story: How do you handle JDK/JRE patch updates for Java apps on K8s?
Reviewed by Tha Kur on September 02, 2025