Each version of PCF ships with a set of buildpacks, and each buildpack ships with a set of supported binaries (these are listed in the release notes; for example, Ruby). Because the binaries that ship with the buildpacks iterate often, typically to patch bugs and security issues, the buildpacks iterate often as well. This article explains common strategies for keeping buildpacks up-to-date.
There are three general strategies for keeping buildpacks up-to-date, and each has its pros and cons, discussed below.
1. Upgrade Pivotal Application Service
The first option is to simply keep Pivotal Application Service up-to-date. New versions of Pivotal Application Service pull in newer buildpacks (typically the latest versions available at the time of the release).
Pros
Cons
This is a solid option and can be considered the default: if you do nothing beyond upgrading Pivotal Application Service, you are using this approach.
2. Upgrade buildpacks
The second option is to manually upgrade buildpacks in-place. The general process is to download new buildpacks as they are released to Pivotal Network, then use the cf update-buildpack command to update your existing buildpacks to the latest version. This ensures you have the latest binaries in your environment (Java runtime, Tomcat, Ruby, HTTPD, etc.).
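The in-place upgrade can be sketched as a short cf CLI session. The buildpack and file names below are examples; substitute the names in your environment and the zip you downloaded from Pivotal Network. This requires a targeted, logged-in foundation and admin privileges.

```shell
# List current buildpacks to find the exact name of the one to update
cf buildpacks

# Upload the new bits over the existing buildpack entry
# (example names; use your buildpack name and downloaded zip)
cf update-buildpack ruby_buildpack -p ruby-buildpack-v1.7.8.zip
```

Because the buildpack name is unchanged, applications pick up the new binaries automatically the next time they stage.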
Pros
Cons
This is likely the best option for security-conscious users: it rolls out buildpack updates quickly and moves applications onto the new binaries the next time they stage. It does have some challenges, though; refer to the "Impact" section for suggestions on how to work through them.
3. Create new buildpacks
The third option is to upgrade by versioning and creating new buildpacks. The general process is to download new buildpacks as they are released to Pivotal Network. Then, instead of updating a buildpack in-place, you use the cf create-buildpack command to create a new buildpack with a unique name, typically one that includes the version number.
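A minimal sketch of this workflow, assuming a versioned naming convention (all names here are examples; this requires a targeted foundation and admin privileges):

```shell
# Create a new, uniquely named buildpack alongside the existing ones.
# The final argument is the detection-order position; 1 places it first.
cf create-buildpack ruby_buildpack_v1_7_8 ruby-buildpack-v1.7.8.zip 1

# Once applications have migrated, retire an older version
cf delete-buildpack ruby_buildpack_v1_7_0 -f
```

Developers then opt in to a specific version by name (for example, `cf push -b ruby_buildpack_v1_7_8`), which is what gives this approach its flexibility.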
Pros
Cons
This option works well for operators whose developers naturally gravitate toward the latest software: those developers will update willingly, and old buildpack versions can be retired efficiently. If your developers are reluctant to upgrade, you will eventually have to force them to as you retire buildpacks, and the forced upgrades can in turn cause user apps to fail (see the "Impact" section for more details on this problem).
This option also offers the most flexibility for binaries, as it allows an operator to support more than just the current and previous versions of a dependency. If developers need very specific version support, this is likely the best route.
Impact
Security and Bug Fixes
Buildpacks are packaged with software binaries. Using older versions of buildpacks means you have older binaries of things like the Java Runtime, Apache Tomcat, Ruby, PHP, Python, and other dependencies. Using older versions may leave you susceptible to bugs that have not been patched or to security vulnerabilities.
Staging Failures due to buildpack upgrades
When upgrading buildpacks in-place, you may see users complain that an application that pushed successfully yesterday fails today, despite no changes to the application.
This can happen for a few reasons, but the main one is that the supported versions of binaries evolve over time. Each buildpack supports only the current and previous version of a dependency: for example, a buildpack might support v2.0.0 and v2.0.1 today, while the next buildpack release supports v2.0.1 and v2.0.2. If an application is pinned to a specific runtime version, it can fail when that version is no longer available. Continuing the example, an app pinned to v2.0.0 of a dependency will break when you upgrade the buildpack, because that version is no longer available post-upgrade.
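One way to check which dependency versions a new buildpack will (and will not) provide, before uploading it, is to inspect the manifest.yml packaged inside the buildpack zip, which lists the bundled dependencies and their exact versions. The file name below is an example:

```shell
# Print the dependency list from a downloaded (offline) buildpack zip
# so pinned applications can be checked against it before upgrading
unzip -p ruby-buildpack-v1.7.8.zip manifest.yml | grep -E 'name:|version:'
```

Comparing this output against the versions your applications pin lets you warn affected teams ahead of the upgrade.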
One strategy for working around this is by communicating with your developers. Let them know in advance of buildpack changes and let them know what binary changes are coming up (Pivotal publishes the changes in the release notes for the buildpacks). Developers can then plan and know when changes will happen.
System application failures
This problem is similar to "Staging Failures due to buildpack upgrades" above, but it affects applications deployed by Pivotal Application Service and other Pivotal tiles. The apps that ship as part of the platform, Apps Manager, for example, face the same version challenges. If you upgrade the buildpacks repeatedly without upgrading Pivotal Application Service, the dependency versions available through your buildpacks may no longer meet the needs of the platform apps, which will then fail to stage.
Fortunately, you can work around this by keeping your Pivotal Application Service environments up-to-date. You don't have to deploy every maintenance release (although you certainly could), you just don't want to fall behind by many versions. There is no hard rule as to how many versions you can safely fall behind before seeing problems, as it depends on how quickly binaries update. That said, the further you fall behind, the more likely you'll encounter this problem.
Staging failures due to buildpack downloads
This problem happens when there are a large number of buildpacks, or when one buildpack in particular is very large. Before an application stages on a Diego Cell, the Cell must download the buildpacks (assuming they have not already been cached on the Cell). There is a finite amount of time for staging to complete, and time spent downloading buildpacks counts against it; if the buildpacks cannot be downloaded to the Cell within that limit, staging fails. This usually occurs when an application is deployed to a new Cell, where no buildpacks are cached, and the application does not specify a buildpack, which causes the Cell to download all buildpacks. As such, the easiest solution is to specify a buildpack when deploying an application.
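Specifying a buildpack can be done on the command line or in the application manifest (names below are examples):

```shell
# Pin the buildpack at push time so the Cell downloads only what it needs
cf push my-app -b ruby_buildpack

# Or pin it in manifest.yml instead:
#   applications:
#   - name: my-app
#     buildpacks:
#     - ruby_buildpack
```

Either form skips buildpack detection entirely, so only the named buildpack is fetched.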
The other issue that can occur with a large number of buildpacks is that the Cell can run out of disk space. As previously mentioned, the Cell caches buildpacks locally so that each is downloaded only once. With the default values there should be plenty of ephemeral disk space for this cache. However, if many applications are deployed to the Cell, if those applications use large amounts of disk, if the Cell is configured with a small ephemeral disk, or if the Cell is configured with a very large amount of memory (which causes a large swap disk, leaving less ephemeral disk), then the Cell may be unable to download your buildpacks. The workaround is to increase the size of the ephemeral disk.
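Before resizing, it can help to confirm that ephemeral disk pressure is actually the cause. A quick check, assuming BOSH CLI access (the deployment and instance names below are examples; list yours with `bosh deployments` and `bosh instances`):

```shell
# Check free space on a Diego Cell's ephemeral disk, which is
# mounted at /var/vcap/data on BOSH-deployed VMs
bosh -d cf-0123456789abcdef ssh diego_cell/0 -c 'df -h /var/vcap/data'
```

If the ephemeral disk is near capacity across Cells, increase its size in the Ops Manager resource configuration for the Diego Cell job.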