Last week, the infrastructure team identified the potential compromise of a key
infrastructure machine. This compromise appears to have been part of what
could be categorized as an attempt to target contributors with elevated
access. When facing the uncertainty of a potential compromise, the safest
option is to treat it as an actual incident and respond accordingly.
accordingly. The machine in question had access to binaries published to our
primary and secondary mirrors, and to contributor account information.
Since this machine is not the source of truth for Jenkins binaries, we verified
that the files distributed to Jenkins users (plugins, packages, etc.) were not
tampered with. We cannot, however, verify that contributor account information
was not accessed or tampered with and, as a proactive measure, we are issuing a
password reset for all contributor accounts. We have also spent significant
effort migrating all key services off the potentially compromised machine to
new (virtual) hardware so the machine can be re-imaged or decommissioned
entirely.
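Verification of this sort generally comes down to comparing cryptographic checksums of the distributed files against those recorded by the source of truth. A minimal sketch in Python, with placeholder file contents rather than the project's actual artifacts or procedure:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

# Placeholder artifact standing in for a published plugin or package.
artifact = b"example plugin contents"

# Checksum recorded from the source of truth at publication time.
known_good = sha256_of(artifact)

# Later, recompute the checksum of the mirrored copy and compare;
# a mismatch would indicate tampering or corruption.
mirrored_copy = b"example plugin contents"
assert sha256_of(mirrored_copy) == known_good, "mirrored copy does not match"
```

In practice the known-good digests must come from a system the attacker could not have reached, which is why the comparison is only meaningful when the machine in question is not the source of truth.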
What you should do now
If you have ever filed an issue in JIRA,
edited a wiki page, released a plugin, or
otherwise created an account via the Jenkins
website, you have a Jenkins community account. You should be receiving a
password reset email shortly, but if you have re-used your Jenkins account
password with other services, we strongly encourage you to update your passwords
with those other services. If you’re not already using one, we also encourage
the use of a password manager for generating and managing service-specific
passwords.
The generated password sent out is temporary and will expire if you do not
use it to update your account. Once it expires, you will need to recover your
account with the password reset
in the accounts app.
This does not apply to your own Jenkins installation or to any account you
may use to log into it. If you do not have a Jenkins community account, there is
no action you need to take.
What we’re doing to prevent events like this in the future
As stated above, the potentially compromised machine is being removed from our
infrastructure. That addresses the immediate problem but does not, by itself,
guard against recurrence. To help prevent similar issues, we're taking the
following actions:
Incorporating more security policy enforcement into our
Puppet-driven infrastructure. Without a
configuration management tool enforcing a given state for some legacy services,
user error and manual misconfigurations can adversely affect project security.
As of now, all key services are managed by Puppet.
Further balkanizing our machine and permissions model. The machine affected was
literally the first independent (outside of Sun) piece of project
infrastructure and, like many legacy systems, it grew to host a multitude of
services. We are rapidly evolving away from that model with increasing levels
of user and host separation for project services.
In a similar vein, we have also introduced a trusted zone in our
infrastructure that is not routable from the public internet, where sensitive
operations, such as generating update center information, can be managed and
secured more effectively.
We are performing an infrastructure permissions audit. Some portions of our
infrastructure are 6+ years old and have had contributors come and go. Any
inactive users with unnecessarily elevated permissions in the project
infrastructure will have those permissions revoked.
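The Puppet-driven enforcement mentioned above can be sketched as a resource declaration; the paths and names below are hypothetical illustrations, not the project's actual manifests:

```puppet
# Hypothetical example: declare the desired state of a sensitive file so
# that Puppet reverts any manual drift on its next run.
file { '/etc/example-service/credentials.conf':
  ensure => file,
  owner  => 'example-service',
  group  => 'example-service',
  mode   => '0600',
}
```

Because Puppet continuously converges machines toward the declared state, a manual misconfiguration like world-readable credentials would be corrected automatically rather than lingering unnoticed.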
I would like to extend thanks, on behalf of the Jenkins project, to
CloudBees for their help in funding and
migrating this infrastructure.
If you have further questions about the Jenkins project infrastructure, you can
join us in the #jenkins-infra channel on Freenode
or in an Infrastructure Q&A session I’ve scheduled for next Wednesday (April
27) at 20:00 UTC (13:00 PDT).