Application Security by Obscurity
“Security by obscurity” is a pejorative term to most in the security industry, and with good reason. Typically, it’s just a matter of time before light shines into the dark recesses of an obscure application, illuminating its shabby security controls. That’s not to say “security by obscurity” hasn’t worked in the past. It does work, at least until it doesn’t.
In his book Beyond Fear (pp. 211-12), security guru Bruce Schneier gave an example of how security by obscurity once successfully transported a priceless diamond:
At 3,106 carats, a little under a pound and a half, the Cullinan Diamond was the largest uncut diamond ever discovered. It was extracted from the earth at the Premier Mine, near Pretoria, South Africa, in 1905. Appreciating the literal enormity of the find, the Transvaal government bought the diamond as a gift for King Edward VII. Transporting the stone to England was a huge security problem, of course, and there was much debate on how best to do it. Detectives were sent from London to guard it on its journey. News leaked that a certain steamer was carrying it, and the presence of the detectives confirmed this. But the diamond on that steamer was a fake. Only a few people knew of the real plan; they packed the Cullinan in a small box, stuck a three-shilling stamp on it, and sent it to England anonymously by unregistered parcel post.
The success of the Cullinan Diamond story is based largely on the fact that it was a one-time event. If the diamond had to be transported back and forth as frequently as your organization’s electronic transactions, history certainly would have demonstrated the foolishness of the humble box and three-shilling stamp by the second or third trip.
I recently thought of the diamond story during an Application Security Assessment with a large retail client. This retailer, like most these days, had a deadline to complete annual penetration tests on its cardholder applications for PCI compliance. I got the assignment to scrutinize the application responsible for backend payment processing, which had been assessed by competitors for each of the past three years. It turned out the “application” was really a “legacy” custom service listening on an obscure TCP port on a mainframe.
Since the retailer was kind enough to make the previous years’ reports on the application by “the other guys” available to me, I noticed they had simply pointed nmap at the mainframe IP address, noted the open ports, and run a commercial vulnerability scanner against it. That was an exercise in futility, since a generic scanner knows nothing about a custom application’s flaws.
The “app” (they didn’t use that word back then) appears to have been written (and probably last modified) back when kids walked uphill in snow to school (both ways!). Even a state-of-the-art scanner that can enumerate defects in web applications is useless on an app that was written back before HTTP was invented.
Suffice it to say, the previous three years of application penetration test reports were grasping at straws for findings. “Um, we think your mother’s sister’s brother-in-law saw FTP running somewhere nearby on the network, and, um, while it’s not in scope for this application … exactly … um … you should probably turn that off.” The app is obscure, and they never put in the time to actually learn and understand it.
After lots of questions and minor coercion through red tape, the client produced a PDF document over 100 pages long (and last modified when Clinton was president!) describing the custom protocol for this service and the unique techniques it used to represent data in its message parameters. There, staring at me in black and white, were the humble parcel packaging and the three-shilling stamp. I could not believe what I was reading.
No transport encryption on the protocol whatsoever…no authentication to validate that a transaction was legitimate…no integrity controls to prove a message hadn’t been tampered with in flight…just a thin veneer of brown butcher paper!
The credit card numbers were encrypted statically, much like a credit card tokenization solution: the same card number always produced the same ciphertext, so a captured encrypted number could simply be re-sent in a replay attack.
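Deterministic (“static”) encryption is exactly what makes such a replay trivial: with no IV or nonce, identical plaintexts produce identical ciphertexts, so a captured ciphertext is as good as the card number itself. A minimal Python sketch illustrates the property; the key, the toy XOR scheme, and the test card number are all hypothetical stand-ins, not the client’s actual scheme:

```python
import hashlib

SECRET_KEY = b"merchant-secret"  # hypothetical static key


def static_encrypt(pan: str) -> bytes:
    """Toy deterministic cipher: XOR with a key-derived stream.

    Stands in for any scheme where the same plaintext always yields
    the same ciphertext (no IV, no nonce, no randomness).
    """
    stream = hashlib.sha256(SECRET_KEY).digest()
    data = pan.encode("ascii")
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))


# The same card number always encrypts to the same bytes...
c1 = static_encrypt("4111111111111111")  # well-known test PAN, not a real card
c2 = static_encrypt("4111111111111111")
assert c1 == c2

# ...so an eavesdropper who captures c1 can replay it verbatim inside a
# new transaction message, without ever learning the key or the PAN.
replayed = c1  # no decryption required
```

The attacker never decrypts anything; possession of the ciphertext alone is enough to authorize a fresh transaction.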
The rest of the application penetration test felt much more like my days as a software developer mandated by management to integrate with a business partner’s app that is less than technically pretty (i.e. no syntactic sugar). I fired up Visual Studio, dusted off TCP socket programming in C# (do people still do that?) and started thinking in hexadecimal and EBCDIC.
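For readers curious what talking to such a service involves, here is a rough sketch in Python (standing in for the C# tooling mentioned above). The host, port, message text, and two-byte length prefix are all assumptions for illustration, not the client’s actual protocol; the one solid fact is that Python’s standard library ships EBCDIC codecs such as cp037:

```python
import socket

HOST, PORT = "mainframe.example", 9999  # hypothetical endpoint


def frame(message: str) -> bytes:
    """Encode a message to EBCDIC and prefix a 2-byte big-endian length,
    a common framing convention for mainframe socket services (assumed here)."""
    body = message.encode("cp037")  # cp037 = EBCDIC US/Canada, in the stdlib
    return len(body).to_bytes(2, "big") + body


payload = frame("SALE 0001")  # hypothetical message text
# payload[2:].hex() -> 'e2c1d3c540f0f0f0f1'  (EBCDIC, not ASCII)


def send(payload: bytes) -> bytes:
    """Fire the framed message at the service and read one response."""
    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(payload)
        return s.recv(4096)
```

Nothing exotic: raw bytes on a socket, with the only “obscurity” being the character set and the framing.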
Several paleo-friendly, blended frozen banana, coconut oil and French-pressed coffee drinks later, I had a working exploit tool, complete with obligatory ASCII art. It wasn’t elite hacking - no remote code execution, no injection, no session hijacking and certainly no zero-day exploits. Just understanding an application’s design and demonstrating its weaknesses by sending a message to it. Any capable software developer could have exploited it, and that was the point.
It took a week to get the client to provide the information I needed to understand the application. It took another couple of days to write code that would properly talk to this obscure, custom TCP socket service. And it took 30 seconds to send any fraudulent credit card transaction I wanted through the payment processor.
I could create new transactions with existing encrypted credit card numbers, reverse transactions I had just seen floating across the wire, issue random refunds, or make somebody else’s credit card pay for my transaction. Whatever I wanted. I had total control of the central system for retail and e-commerce transactions. Anybody willing to spend the time watching the network traffic could eventually have learned what I learned.
And this application had been reviewed by “experts” for the past three years?
Ironically, a couple weeks later I received a different assignment from the same client: another Application Security Assessment, but this time for a web service. It took about ten minutes before I realized what I was looking at: a simple HTTP wrapper around the same legacy TCP socket on that mainframe. It didn’t have the same name, but it was talking to that mainframe under the hood.
As it turns out, humble parcels with three-shilling stamps can be sent over HTTP these days, via GET or POST! The “modern” web service simply repackaged the same flaws I had seen only days before.
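A hypothetical sketch of what such a wrapper invites, assuming an endpoint that simply decodes the POST body and forwards it to the mainframe (the URL, the base64 encoding, and the captured bytes are all invented for illustration):

```python
import base64
import urllib.request

# Bytes of a legacy transaction message captured off the wire (hypothetical).
legacy_message = bytes.fromhex("e2c1d3c540f0f0f0f1")

# The assumed wrapper base64-decodes the POST body and relays it to the
# mainframe verbatim, inheriting every flaw of the old protocol.
req = urllib.request.Request(
    "https://payments.example/legacy-bridge",  # invented endpoint
    data=base64.b64encode(legacy_message),
    method="POST",
)
# urllib.request.urlopen(req)  # would replay the unauthenticated message as-is
```

Wrapping an unauthenticated protocol in HTTPS encrypts the outer hop, but the inner message still carries no authentication or integrity controls of its own.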
Drop us a line if you think you might have a humble brown box with a three-shilling stamp protecting your diamonds.