Part 2: How to stop me harvesting credit card numbers and passwords from your site
By David Gilbertson
I wrote a post recently describing how I distributed malicious code that gathers credit card numbers and passwords from thousands of sites in a way that’s quite difficult to detect.
The comments this post received filled me with joy, expressing such sentiments as “chilling”, “disturbing”, and “utterly terrifying”. (Much like the compliments I receive on the dance floor.)
In this follow-up post I’d like to put down the megaphone and put forward some practical advice.
There’s no need to try and avoid third-party code (huzzah!). Instead, the plan is to:
Capture any sensitive data in a bare-bones HTML file that contains no third-party code
Display this file in an iframe
Serve the file from a static file server on a different domain
You might also consider avoiding sensitive data entirely by using third-party sign-in and a third-party to collect and handle credit card information.
The things I suggest in this post only really work for sites where sensitive information is quite limited and can be cordoned off (passwords, credit card numbers, etc). If you work on a chat app or an email client or a database GUI, where everything is potentially sensitive, I’ve got nuthin’.
I think a healthy dose of fear is a good place to start.
I suggest pondering how you would feel making an announcement like OnePlus had to recently:
… a malicious script was injected into the payment page code to sniff out credit card info while it was being entered… The malicious script operated intermittently, capturing and sending data directly from the user’s browser … up to 40k users at oneplus.net may be affected by the incident
Now let’s sharpen that vague sense of dread into something more specific.
Perhaps an animalogy will prove useful…
I imagine third-party code as a big ol’ doberman. He looks calm; gentle, even. But there are flickers of an unknown potential in his dark, unblinking eyes. Let’s just say I’m not putting anything I hold dear near his pointy end.
I picture my users’ sensitive information as a cute, defenceless hamster. I watch as it innocently licks its little front feet, grooming its dumb little face, frolicking without a care at the base of the doberman.
Now, if you’ve ever been friends with a doberman (I highly recommend it), you probably know that they are wonderful, gentle creatures and don’t deserve their reputation for being vicious. But still, I’m sure you’ll agree it’s a bad idea to leave one alone with a hamster that bears a striking resemblance to a chew toy.
Sure, maybe you’ll come home from work to the adorable scene of Professor Baggy Pants asleep on the back of Sergeant Chompers. Or maybe you’ll come home to witness only air where the hamster used to be, and a dog with his head cocked to one side, like “may I see the dessert menu?”
I don’t think code that comes from npm, GTM or DFP or anywhere else should have a reputation for being necessarily dangerous. But I’d suggest that unless you can guarantee the good behaviour of this code, it’s irresponsible to leave it alone with your users’ sensitive information.
So … that’s the mindset that I suggest we all adopt: sensitive information and third-party code should not be left alone together.
The site in this example has a credit card form that’s vulnerable to malicious third-party code, just like the ones on several very large ecommerce sites that you probably thought were better at security.
This page is teeming with third-party code. It uses React, and was created with Create React App, so it had 886 npm packages before I even got started (seriously).
(Side rant: I am disappointed in Google for this. Their developer advocates spend a lot of time teaching us how to make the web fast; shaving off a few dozen kilobytes here and some milliseconds there — this is awesome stuff. But at the same time they allow their DFP ad network to send megabytes to a user’s device, making hundreds of network requests and sitting on the CPU for entire seconds. Google, I know you have the right brains to come up with a smarter, faster way to deliver ads. Why are you not?)
OK, getting back to the topic at hand… Obviously, what I need to do is prise my users’ sensitive information from the grubby hands of all that third-party code; I want that form to be on its own little island.
Now that we’re, like, two fifths of the way through this post, I’ll start to actually describe some approaches.
Option 1: capture sensitive data on a separate, bare-bones HTML page
Option 2: same as option 1, but the page is served in an iframe
Option 3: same as option 2, but the parent page and the iframe communicate with each other via postMessage
Unfortunately, because the header, footer and navigation of my site are all React components, I can’t use them on this very vanilla page. So the ‘header’ you see is a manual replication of my full header without all the usual functionality. It’s a blue rectangle.
When the user has filled in that form (filled out that form? — why are opposites the same!?), they will click submit, and be redirected to the next step in the checkout flow. This might require some back-end changes to keep track of the user and the data they’ve submitted as they move across pages.
Here’s a pen with some no-js regex validation and conditional styling if you want to see it in action. (The limitations are small but glaring.)
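That style of no-js validation boils down to the pattern attribute plus the :valid and :invalid pseudo-classes. A minimal sketch (the regex is a deliberate simplification of real card-number formats):

```html
<style>
  input:invalid { border-color: red; }
  input:valid { border-color: green; }
</style>

<!-- A simplification: 16 digits, optionally grouped by spaces -->
<input name="card-number" required
       pattern="(\d{4} ?){3}\d{4}"
       placeholder="1234 5678 9012 3456">
```

The glaring limitation mentioned above: without JavaScript you only get pass/fail styling, not helpful error messages.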
I would suggest that if you’re going to do this, keep it all in a single file.
Complexity is the enemy here (more so than ever). The HTML file for the above example — with CSS embedded in a <style> tag — is about 100 lines all up; since it’s so small and makes no network requests, it is near-impossible to meddle with undetected.
Unfortunately, this approach requires duplicating CSS. I have thought about this a great deal and looked at several approaches. All of them required more code than the amount of duplicated code they aimed to prevent.
So, I would suggest that while the mantra of “Don’t Repeat Yourself” is excellent guidance, it should not be seen as an absolute, unbreakable rule that must be adhered to at all costs. In some rare cases, like the one described here, repetition of code is the lesser of two evils.
The most useful rules are those you know when to break.
(My new year’s resolution is to try and sound more profound without actually saying anything of substance.)
The first option is OK, but it’s a step down from a UI and UX perspective, and the point at which you’re taking someone’s money is about the last place you want to introduce journey-friction.
Option 2 fixes this by taking the form and serving it in an iframe.
You might be tempted to do something like this:
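For instance, pointing an iframe at a form page that lives on the same origin as the parent (the file name is illustrative):

```html
<!-- Tempting, but wrong: the frame is on the same origin as the
     parent page, so the parent's scripts can reach straight into
     the frame's DOM -->
<iframe src="/secure-form.html" title="Payment details"></iframe>
```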
In that example, the parent page and the contents of the iframe can still see and interact with each other freely. This would be like leaving a doberman in one room, hamster in another, with a door between them that the doberman can simply push open when it gets peckish.
What I need to do is ‘sandbox’ that iframe. Which (I just learned) has nothing to do with the sandbox attribute of an iframe, since that’s about protecting the parent page from the iframe. I want to protect the contents of the iframe from the parent page.
As luck would have it, browsers have a built-in distrust of things that come from different origins. It’s called the same-origin policy [insert edgy political commentary here].
Because of this, simply loading the frame from a different domain is enough to prevent communication between the two.
If you’re wondering about the accessibility of content in an iframe, a) good for you, and b) wonder no longer. According to WebAIM: “There are no distinct accessibility issues with inline frames. The content of the inline frame is read at the point it is encountered (based on markup order) as if it were content within the parent page.”
Let’s think about what happens once the form is filled in. The user will hit the submit button in the form in the iframe, and I want that to navigate the parent page. But if they’re on different origins, is this even possible?
Ya, that’s what the target attribute of a form is for:
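Sketched with placeholder names and URLs, the form inside the iframe targets the top-level browsing context:

```html
<!-- Inside the iframe: target="_top" makes the submission navigate
     the top-level page, which works even across origins -->
<form action="https://my-site.example/checkout/shipping" method="POST" target="_top">
  <input name="card-number" autocomplete="cc-number" required>
  <button type="submit">Submit</button>
</form>
```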
So, the user can type their sensitive information into a form that fits in seamlessly with the surrounding page. Then, when they submit, the top level page is redirected in response to the form submission.
Option 2 is a huge increase in security — I no longer have a sitting-duck credit card form. But it’s still a step back in usability.
The ideal solution wouldn’t require any full page redirects…
In my example site I actually want to keep the credit card data in state, along with the details of the product being purchased, and submit all that info in one AJAX-style request.
This is blindingly easy. I’ll use postMessage to send the data from the form up to the parent page.
This is the page being served in the iframe…
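A sketch of the script in that page (the parent origin is a placeholder):

```javascript
// In the iframe's page: intercept submit and post the form's values
// up to the parent instead of navigating anywhere
document.querySelector('form').addEventListener('submit', (event) => {
  event.preventDefault();

  const data = {};
  new FormData(event.target).forEach((value, name) => {
    data[name] = value;
  });

  // An explicit targetOrigin means only the expected parent page
  // can receive this message
  window.parent.postMessage(data, 'https://my-site.example');
});
```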
…and in the parent page (or more specifically, in the React component that requested the iframe in the first place), I just listen for messages from the iframe and update the state accordingly:
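A minimal sketch of that component, assuming placeholder origins and file names:

```javascript
import React from 'react';

class CheckoutForm extends React.Component {
  state = { cardDetails: null };

  handleMessage = (event) => {
    // Ignore messages that don't come from the secure form's origin
    if (event.origin !== 'https://secure-forms.example') return;
    this.setState({ cardDetails: event.data });
  };

  componentDidMount() {
    window.addEventListener('message', this.handleMessage);
  }

  componentWillUnmount() {
    window.removeEventListener('message', this.handleMessage);
  }

  render() {
    return (
      <iframe
        src="https://secure-forms.example/card-form.html"
        title="Card details"
      />
    );
  }
}
```

Checking event.origin matters here: any window can post a message, so the component should only trust messages from the origin serving the secure form.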
If I was feeling frisky, I could instead send data up from the form to the parent in an onchange event for each input individually.
While I’m frisking, there’s nothing stopping the parent page from doing some validation and sending the validity state back down to the plain-Jane form. This allows me to reuse any validation logic that I may have elsewhere in my site.
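Roughly like so, with a hypothetical luhnCheck helper standing in for whatever validation logic the site already has:

```javascript
// In the parent page: validate what came up from the form, then send
// the result back down into the iframe (luhnCheck is hypothetical)
window.addEventListener('message', (event) => {
  if (event.origin !== 'https://secure-forms.example') return;

  const errors = luhnCheck(event.data['card-number'])
    ? []
    : ['That card number does not look right'];

  document
    .querySelector('iframe')
    .contentWindow.postMessage({ errors }, 'https://secure-forms.example');
});
```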
[Edit: two clever people in the comments have suggested that the iframe could submit the data, without redirecting the parent page, then communicate the success/failure state back to the parent page using postMessage. This way, no data is ever sent to the parent page.]
So, that’s it! Your users’ sensitive information is safely entered into an iframe on a different origin, hidden from the parent page, but the data captured can still be part of the state of your app, meaning no changes are required to the user experience.
At this point, you might be thinking that sending the credit card data up into the parent page defeats the whole purpose. Isn’t it then accessible to any malicious code?
There are two parts to this answer, and I can’t think of a simple way to explain it. Sorry.
The reason I think this is a reasonable risk to take is easier to understand from the perspective of the hacker. Imagine it’s your job to come up with some malicious code that can run on any website, seeking out sensitive information and sending it off to a server somewhere. Every time you send something, you run the risk of being caught. So it’s in your best interest to only send data that you are certain is valuable.
If this was my job I would not be indiscriminately listening to message events and sending off the data I find in them. Not when thousands of sites have perfectly vulnerable credit card forms with neatly labelled inputs.
The second part to the answer is that if the malicious code you’re worried about isn’t just some generic code, it might know to listen to that message event on your site and pluck the credit card numbers out. This idea of protecting against code that was written specifically for your site deserves its own section…
So far I have described attacks using generic malicious code. That is, code that doesn’t know what website it’s running on, it just looks for, gathers and sends sensitive information to the villain’s evil lair in the basement of a volcano.
Targeted malicious code, on the other hand, is code written to tango with your site specifically. It is crafted by a skilled developer who has spent weeks familiarising themselves with every nook and cranny of your DOM.
If your site has been infected with targeted malicious code, you’re screwed. No two ways about it. You might have put everything in a perfectly secure iframe, but the malicious code will just remove the iframe and replace it with a form. An attacker could even change the prices displayed on your site, maybe offer 50% off and tell users they need to re-enter their credit card details if they want the goods. You are well and truly owned.
If you’ve got targeted malicious code on your site, you might as well bend over and pick up a flower and smell it — you know, focus on the positive things in life.
This is why it’s so insanely important to have a content security policy. Otherwise an attacker can mass-distribute generic malicious code (say, via an npm package) that can ‘upgrade’ to targeted code by sending a request to an evil server that returns a payload tailored to your site.
The attacker is free to update and add to their targeted code at their leisure.
You really must get yourself a CSP.
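As a sketch (the domains are placeholders), a policy like the following tells the browser to refuse scripts and outbound requests to anywhere not explicitly whitelisted, so the "upgrade me to a targeted payload" request to the evil server simply fails:

```
Content-Security-Policy: default-src 'self'; script-src 'self' https://trusted-cdn.example; connect-src 'self'; frame-src https://secure-forms.example
```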
OK that was the long way of saying: using postMessage to send sensitive data from an iframe up to the parent only slightly increases your risk. Generic malicious code is not likely to see this, and targeted code will get your users’ credit card data no matter what you do.
(For the record, I wouldn’t use option 1, 2, or 3 on my own small site. I’d let the professionals handle my credit card data, and offer only sign-in with Google/Facebook/Twitter. Of course don’t follow this advice unless you’ve done the sums of revenue lost from users that won’t sign up with social vs the cost/risk of capturing and storing passwords securely.)
You might think that if you follow the advice above you’re safe and sound. Nope. I can think of four more places you could get into trouble, and I vow to keep this updated with the wisdom of the crowd.
I’ve now got a super-lightweight HTML file, ready to capture user input without being spied on. I just need to stick it somewhere so that it can be served from a separate domain.
Maybe I’ll just fire up a simple Node server somewhere. I’ll just add one little logging package…
OK, 204 packages for one little logging library is a lot. But you might be wondering how code running on a server that only serves files can endanger user data typed in the browser?
Well, the problem is that any code, from any npm package, that’s running on your server can do whatever it wants to any other code, including code handling network traffic.
Now, I’m just an impostor developer who is easily confused by four-letter words like this and call, but even I could work out how to inject a script into an outbound response and allow it to make requests to my evil domain by editing the CSP header.
The gist above is not actually useful on its own (as eagle-eyed readers will have noticed), and a real hacker probably wouldn’t go after Express like this. I’m just illustrating the point that your server is the wild wild west and anything that’s running down there has the potential to expose data that a user enters in their browser.
(If you’re a package author, you might consider using Object.freeze or Object.defineProperty with writable: false to lock down your stuff.)
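For example, freezing an exported object means reassigning one of its methods throws in strict mode rather than silently succeeding (the API shown is made up):

```javascript
'use strict';

// A package author can make an exported API tamper-resistant, so other
// code sharing the process can't quietly swap out its methods
const myApi = Object.freeze({
  sendPayment(amount) {
    return `charging ${amount}`;
  },
});

try {
  myApi.sendPayment = () => 'sending your card to evil.example';
} catch (e) {
  // e is a TypeError in strict mode; in sloppy mode the assignment
  // would fail silently instead
}
```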
In reality, it’s probably a bit far-fetched to think there are Node modules doing something this egregious with outbound requests — to me it seems like this would be too easy to detect.
But do you really want to go to all the trouble of creating a form that doesn’t contain any third-party code only to give third-party code the ability to modify it right before sending it to the user? That’s your call.
My suggestion is to serve these ‘secure’ files from a static file server, or don’t bother doing any of this.
Yes that heading is both the step we’re up to and the name of a vulnerability.
Just install the firebase-tools from npm and… oh no, I’m using an npm package to avoid npm packages.
OK, deep breath David, maybe it’s one of those beautiful zero-dependency packages.
Installing … installing …
Jezus Kanye, 640 packages!
OK I give up on making recommendations, you’re on your own. Just get your HTML files onto a server somehow. At some point we all need to trust code written by strangers.
Fun fact: it’s taken me a few weeks to write this post. I’m in a final draft and I just installed the Firebase tools again to check I got that number right: it’s now 647.
I wonder what those seven new packages do? I wonder if the people that manage the Firebase tools wonder what those seven new packages do? I wonder if anyone knows what all the packages their package requires do?
You may have noticed that I haven’t suggested that you incorporate your ‘secure’ HTML files in your build pipeline (for example, to share CSS), even though that would solve the duplication-of-code problem.
This is because any of the hundreds of packages involved in even the simplest Webpack build can potentially modify the output of the build process. Webpack on its own requires 367 packages. Something benign like a css-loader will add 246 more. The excellent html-webpack-plugin you might use to put the right CSS file name in your index file will add 156 packages on top of that.
Again, I think it’s highly unlikely that any of these will be injecting scripts into your minified output. But still, it seems wrong to go to so much effort to produce a pristine, tiny, hand-written, human-readable hamster-friendly HTML file only to process it with several hundred dobermans right before bedtime.
The last thing to protect against is the most dangerous of all. Something that has access to modify any code you’ve written and take down any security barriers you have put up: the new kid who starts 6 months from now and doesn’t know what they’re doing.
This is actually one of the trickiest things to protect against. The only solution I can think of is a ‘unit test’ of sorts that ensures there are no external scripts in any of these ‘secure’ files.
I’m allowing <script> tags with no source (so, inline code), but blocking script tags with a src attribute. I set jsdom to execute scripts so I can catch anyone creating a new script element with document.createElement().
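A dependency-free simplification of that check might look like this; the version described above used jsdom (with its runScripts option) so inline scripts are actually executed and a dynamically created script element gets caught too:

```javascript
// A simplified, static stand-in for the jsdom-based check: fail if the
// 'secure' HTML contains any script tag with a src attribute.
// Inline <script> blocks are allowed; external scripts are not.
function hasExternalScript(html) {
  const scriptTags = html.match(/<script\b[^>]*>/gi) || [];
  return scriptTags.some((tag) => /\bsrc\s*=/i.test(tag));
}

hasExternalScript('<script>validate();</script>');              // → false
hasExternalScript('<script src="https://evil.example/x.js">');  // → true
```

In a real test runner you would read each ‘secure’ HTML file from disk and assert that hasExternalScript returns false for all of them.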
At least this way, the new kid would actually need to modify a unit test to add a script, and with any luck that would wake up a code reviewer enough to question the move.
It’s also a good idea to run checks of this nature on the published secure HTML file. You could then be more comfortable using things like Firebase tools and Webpack, knowing that alarm bells will sound in the extremely unlikely event that one of those 1,200 packages edits your output.
Before I go, I want to address a sentiment I’ve heard quite a lot over the past few weeks — the suggestion that developers should use fewer npm packages.
I understand the emotional drive behind this: packages can be bad, so fewer packages must be less bad.
But it’s a bad suggestion; if the security of your users’ data relies on you using fewer npm packages, your security isn’t any good.
It’s like leaving your hamster alone with fewer dobermans.
If I was starting a new project tomorrow, creating a site that handled highly sensitive information, I would use my preferred tools of React and Webpack and Babel and friends, just like I would have a month ago.
I don’t care if there’s a thousand packages, or that they will constantly be changing, or that I will never know for sure if one of them contains malicious code.
None of that matters to me because I’m not going to leave any of them alone in a room with Professor Baggy Pants.
Hey, thanks for reading! As always, security is a team sport; if I’ve said something dumb or given bad advice, let me know and I’ll fix it. If you’ve got a nice idea, let me know and I’ll add it and pretend it was mine.