The Anatomy of an AWS Key Leak to a Public Code Repository


Many of us working with any cloud provider know that you should never, ever commit access keys to a public GitHub repo. Some really bad things can happen if you do.

AWS publishes its own best practices (and I assume all the cloud providers have an equivalent) about how you should manage access keys.

One of the items mentioned there is to never embed your credentials in your source code.

Let me show you a real case that happened last week. 

(Of course, all identifiable information has been redacted, except for the specific access key that was used, which has of course been disabled.)

Someone committed an access key to a public GitHub repository.


Here is the commit:

commit xxxxxxxx26ff48a83d1154xxxxxxxxxxxxa802

Author: SomePerson <someone@some_email.com>

Date:   Mon Mar 4 10:31:04 2019 +0200


--- (All events will be counted from this point) ---


55 seconds later (T+55s), I received an email from AWS:

From: "Amazon Web Services, Inc." <no-reply-aws@amazon.com>

To: john@doe.com

Subject: Action Required: Your AWS account xxxxxxxxxxxx is compromised

Date: Mon, 4 Mar 2019 08:31:59 +0000

1 second later (T+56s), AWS had already opened a support ticket about the incident.

Just over a minute after that (T+2:02m), someone tried to use the key. But since the policies attached to the IAM user (whose key was exposed) did not grant the required permissions, the attempt failed!

(This is why you should make sure you grant only the minimum permissions required for a specific task, and not the kitchen sink.)
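As an illustration of what "minimum required permissions" can look like, here is a sketch of an IAM policy for a user that only needs to read objects from a single S3 bucket (the bucket name is hypothetical). Note that it grants no IAM actions at all, so a leaked key could not even enumerate users:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyFromOneBucket",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-app-bucket/*"
    }
  ]
}
```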

Here is the access attempt that was logged in CloudTrail:
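The actual log entry is redacted, but for readers who have not seen one, a denied API call in CloudTrail looks roughly like the abridged, hypothetical record below (field names follow the CloudTrail event schema; all values are made up). The `errorCode` field is what tells you the attempt failed:

```python
import json

# Abridged, hypothetical CloudTrail record for a denied IAM call;
# field names follow the CloudTrail event schema, values are invented.
event = json.loads("""
{
  "eventSource": "iam.amazonaws.com",
  "eventName": "ListUsers",
  "errorCode": "AccessDenied",
  "userIdentity": {"type": "IAMUser", "userName": "exposed-user"}
}
""")

# A denied attempt carries an errorCode; a successful call omits the field.
if event.get("errorCode") == "AccessDenied":
    print(f"Blocked: {event['userIdentity']['userName']} tried {event['eventName']}")
```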


Here is where I went in and disabled the access key (T+5:58m)
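I used the console, but disabling a key is a single API call, so it is worth scripting ahead of time. A minimal sketch with boto3 (the user name and key ID are hypothetical):

```python
def disable_access_key(iam_client, user_name: str, access_key_id: str) -> None:
    """Mark the key Inactive so it can no longer sign requests.

    The key is deactivated, not deleted, so it can still be inspected
    (or re-enabled) later during the investigation.
    """
    iam_client.update_access_key(
        UserName=user_name,
        AccessKeyId=access_key_id,
        Status="Inactive",
    )

# Usage (not run here) -- requires boto3 and credentials that are
# allowed to call iam:UpdateAccessKey:
#   import boto3
#   disable_access_key(boto3.client("iam"), "exposed-user", "AKIAEXAMPLEKEYID")
```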

Here is the notification message I received from GuardDuty, which was enabled on the account (T+24:58m):

Date: Mon, 4 Mar 2019 08:56:02 +0000

From: AWS Notifications <no-reply@sns.amazonaws.com>

To: john@doe.com

Message-ID: <0100016947eac6b1-7b5de111-502d-4988-8077-ae4fe58a87c9-000000@email.amazonses.com>

Subject: AWS Notification Message

There are a few things I would like to point out regarding the incident above (which we categorized as low severity).

  1. As you can see above, the first thing the attacker tried to do was list keys. That is usually the first thing someone will try, in order to understand which users are available in the system (assuming the user has the permission to perform that action).

    You can read more about how a potential hacker would exploit this in this series of posts.

  2. I assume that since the attacker saw they did not have enough permissions, they decided this was not a worthy enough target to continue exploiting. Why waste the time if you are going to have to work really hard to get what you want? That is why we saw only a single attempt to use the key. If I were the attacker, I would just wait for the next compromised key and try again.
  3. The reason this attack was not successful was that the policies attached to the user (and its access keys) were built in such a way that they did not grant permission to do anything in IAM.

    This was by design. The concept of least privilege is important everywhere, and ten times more so when you are working in the cloud. You should implement it in every part of your design and infrastructure.

  4. AWS responded extremely fast. That is (I assume) because they scrape the feed of all public GitHub commits, for example. It could be that I was just in time for a cycle, but based on my past experience, the response time is usually within a minute. It would be great if they could share how they do this and how they handle the huge number of events that flow through these feeds. They still have to match the compromised key to the account and kick off the automatic process (email + ticket). All of this was done in less than 60 seconds. I am impressed (as we all should be).
  5. One thing I do not understand is why AWS does not immediately disable the key. The business implications of having a key out in a public repo are severe, and I cannot fathom a valid use case that requires a key to sit in the open. If AWS already finds a compromised key, knows which account it belongs to, and kicks off a process, then why not disable the key as part of that process? The time and work AWS has to invest (in support tickets and calls) working with a customer to clean up the account and forfeit the charges incurred because of the leak are above and beyond anything they would incur by automatically disabling the key in the first place. AWS has started to take a stance on some security features by disabling things by default (public S3 buckets, for example) to protect customers from causing harm to themselves. I for one would welcome this change with open arms!
  6. It took me over 5 minutes to actually act on the exposed credential. In 5 minutes, a malicious actor can do real and serious damage to your AWS account.
  7. GuardDuty was slow, but it is obvious why. It takes about 15 minutes for an event to be delivered to CloudTrail, and GuardDuty then has to analyze it against previous behavior. So this product should not be used for prevention, but rather for forensic analysis after the fact. Since there is no real way to collect this data on your own and analyze it against a behavioral baseline, the product is, in my honest opinion, still very valuable.
  8. How does one stop this from happening? There are a number of ways to tackle this question.

    In my honest opinion, it is mainly about raising awareness, from the bottom all the way to the top. It is the same way people know that if you leave your credit card on the floor, there is a very good chance it will be abused. Drill this into people from day 1, and hopefully it will not happen again.

    There are tools out there that you can use as part of your workflow, such as git-secrets, that prevent such incidents from ever happening. But you would have to ensure that every single person, on every single computer they ever work on, has the tool installed, which is a much bigger problem to solve.

    Install your own tools to monitor your repositories, or use a service such as GitGuardian that does this for you (not only for AWS keys, but for other credentials as well).
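To give a sense of what these scanners do under the hood, here is a minimal sketch that matches the access key ID format AWS uses (a 4-letter prefix plus 16 characters). Real tools also look for secret keys, use entropy checks, and cover other providers' formats; the snippet below uses AWS's documented example key ID:

```python
import re

# AWS access key IDs are 20 characters: a 4-letter prefix (AKIA for
# long-term user keys, ASIA for temporary credentials) plus 16 more
# uppercase letters or digits.
AWS_KEY_ID = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_key_ids(text: str) -> list:
    """Return any substrings of `text` that look like AWS access key IDs."""
    return AWS_KEY_ID.findall(text)

# Hypothetical file contents about to be committed (AWS's documented
# example key ID, not a real credential):
snippet = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_key_ids(snippet))  # ['AKIAIOSFODNN7EXAMPLE']
```

Wired into a pre-commit hook, a match like this would block the commit before it ever reaches a public repo.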

As always, please feel free to share this post and leave your feedback on Twitter @maishsk