The State of Security

July 06, 2016
#security #offense #pentesting #penetration testing #cybersecurity #red team #blue team #defense #infosec

Here we'll look at what Google's Project Zero font bug research can teach us about engineering workflow, the nature of exploits, and legacy issues, and then we'll examine red/blue team dynamics in recent years.

The Engineering Process

Engineering, whether it be computer related or a more traditional variant, typically follows a simple workflow:

  1. Make it work
  2. Improve it - make it pretty, make it function better/faster/more efficiently, etc.
  3. Secure it

Take a look at any invention when it first came out - cars are a simple way to illustrate this. The 1908 Ford Model T is often cited as the first mass-market car, and nearly everything on it was an afterthought beyond the basics of getting you from point A to point B: the engine generated only 20 HP with a top speed of 40-45 MPH, there were no seatbelts, air bags, or AC, no locks, it came in only one color for a while (black), and it started with a hand crank... the list goes on. The hand crank could injure the operator, yet it wasn't phased out until the electric starter arrived in 1919. Compared to today's cars with satellite radio, navigation systems, heating and air conditioning, ABS, and four-wheel drive, this model is an abomination. The internet and technology world is unfortunately fairly similar, and this has been well known since at least 1998, when the hacker collective L0pht warned the US government about numerous security issues.

Your computers, they told the panel of senators in May 1998, are not safe — not the software, not the hardware, not the networks that link them together. The companies that build these things don’t care, the hackers continued, and they have no reason to care because failure costs them nothing. And the federal government has neither the skill nor the will to do anything about it.

What happened instead was a tragedy of missed opportunity, and 17 years later the world is still paying the price in rampant insecurity.

One big difference with technology in general is the speed of change - it's unprecedented, and we're having a hard time keeping up in both the technology (legacy/backwards compatibility) and business (products quickly outdated, investment costs/ROI) realms. Our tech engineering workflow typically looks more like:

  1. Make it work
  2. Try making it a bit prettier, possibly easier to use... maybe improve how well it works.
  3. SELL SELL SELL - make money, meet deadlines, and beat other vendors to market, etc.

Real World Example - The Nature of Exploits and Legacy Issues

Bas Alberts said it best in his Daily Dave rant on the Wassenaar Arrangement and Google's Project Zero font bugs (Project Zero has just recently posted its first year of results).

Say e.g. the NSA or whoever actually cared about someone fixing "hundreds!" of bugs in desktop software and the real Internet wasn't a facsimile of an early 90ies LAN party. Say that was the case.

"Facsimile of an early 90ies LAN party" sums up the idea pretty well despite the advances we've made. Bas goes on to talk about the nature of exploitation and how the industry currently has a perverse focus on vulnerabilities:

At the end of the day my team, Google's team, and lots of people's teams are rooted in a culture of vulnerability masturbation. We fawn over "beautiful" bugs and OMGWOW primitives... And it's all bullshit. If you care about security that is.

"But to stop exploitation you have to understand it!". Sure. But here's an inconvenient truth. You are not going to stop exploitation. Ever.

You might stop my exploitation. You might stop my entire generation's exploitation. But somewhere the dark is seeding away methodologies you don't know about, and will never know about. Because somewhere hackers are hacking, and they've got shit to do. None of which includes telling you about it at blackhat or anywhere else.

That is empirically the truth.

I say this as someone who's made a career out of exploit development. It's been my life for 20 years. But I make no mistake about it being a labor of love... I still love me a good 30 page dissertation on world shattering font bugs ... even though, and trust me if I tell you most of team Google damn well knows this, many people have sat on the exact same dissertation for many years.

This stance on exploitation also shows up in The Unfalsifiability of Security Claims, which boils down to this: it is essentially impossible to certify a product as secure against an exploit you are unaware of. Funnily enough, his rant seems more relevant than ever with Project Zero's new findings - look at the corresponding reddit post in netsec for an idea of why.

Some highlights:

"Why is so much (G)UI crap operating at such a low level and at such high privileges in Windows?If KDE decides to suddenly strip naked and start spinning in circles while screaming and ejaculating, the worst it will likely manage to do is crash itself, and, maybe X11."

The responses, summed up:

It's because Windows user mode in... 3.1 and NT 4 couldn't handle it, and it's been legacy ever since. They are finally starting to bring some GUI stuff back to user mode, but there is still a lot left to go.

Legacy reasons. Microsoft has been trying for a while to move more of this into userspace, but the pushback from vendors and users is fierce because it obsoletes so many bits of software (and drivers) that rely on the old framework. They had a new font/window rendering system ready to go in Vista, but at the time the Intel GPUs could not support it, so Intel leaned on Microsoft and it became non-mandatory - and, by extension, pretty much nonexistent. (No vendor, ever, will rewrite their code if they don't have to.)

So why the commotion? Legacy - one example of many illustrating just how dated parts of our technology infrastructure can be (both hardware and software). Another good example is the SS7 exploit in cellular networks.
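
To see why the font bugs were such a big deal, here's a minimal Windows-only sketch using ctypes and GDI's AddFontResourceW. On releases before Windows 10 moved font parsing into a user-mode driver, the parsing this call kicks off ultimately ran in kernel mode (win32k.sys / ATMFD.dll), so a malformed font file was kernel attack surface:

```python
# Minimal Windows-only sketch: registering a font file with GDI via the
# real AddFontResourceW call. On Windows releases before the user-mode
# font driver introduced in Windows 10, parsing of font data ultimately
# happened in kernel mode (win32k.sys / ATMFD.dll), so a malformed font
# meant kernel-level attack surface.
import ctypes

def load_font(path: str) -> bool:
    """Register a font file with GDI; True if at least one font loaded."""
    # AddFontResourceW returns the number of fonts added (0 on failure).
    return ctypes.windll.gdi32.AddFontResourceW(path) > 0

if __name__ == "__main__":
    # Any .ttf/.otf exercises the same parsing path the Project Zero
    # font bugs targeted; the path below is just an example.
    print(load_font("C:\\Windows\\Fonts\\arial.ttf"))
```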

Pentesting Industry Changes

So - how can team defense step up? Bas goes on to talk about proper defense.

But if you care about systemic security. The kind where you don't give two flying fucks if Bob's desktop gets clientsided or Jane's Android doesn't know how big an mpeg4 frame oughta be, then you will stop circle jerking over individual vulnerabilities and listen to what Spender has been saying for years.

Which is: you don't chase and fix vulnerabilities, you design a system around fundamentally stopping routes of impact. For spender it is eradicating entire bug classes in his grsecurity project. For network engineers it is understanding each and every exfiltration path on your network and segmenting accordingly.

Containment is the name of the game. Not prevention. The compromise is inevitable and the routes are legion. It is going to happen.
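
To make "understanding each and every exfiltration path" slightly more concrete, here's a minimal egress-mapping sketch. It simply attempts outbound TCP connections on common exfil ports to see what your segmentation actually permits; the target hostname is hypothetical, and you'd aim it at a listener you control:

```python
# Egress-mapping sketch in the spirit of the quote above: before you can
# segment exfiltration paths you have to know which ones exist. This
# tries outbound TCP connections on common exfil ports and reports what
# the network actually lets out. EGRESS_TARGET is hypothetical - point
# it at a listener you control, never at someone else's host.
import socket

EGRESS_TARGET = "probe.example.com"           # hypothetical test host
PORTS = [22, 25, 53, 80, 123, 443, 8080]      # common outbound channels

def egress_allowed(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for port in PORTS:
        state = "OPEN" if egress_allowed(EGRESS_TARGET, port) else "blocked"
        print(f"tcp/{port}: {state}")
```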

This is similar to a post Dave Aitel made about the changing nature of Pentesting itself.

At Infiltrate the Microsoft penetration testing team did the final presentation. First of all, their goal is to move FASTER than log replication. I know a lot of modern players are pretending to be able to do their intrusion analysis in real time. REAL TIME IS NOT POSSIBLE. Not even your brain works in "real time".

The basic theme of the talk was simple: Hit any one host in a large domain. Grab all the LDAP data you can (Groups/Machines/Users) and then sweep as much as you can across the domain to find out LastLoggedIn data. Then exfiltrate it as fast as possible. It'll be "moderately large" (4GB) but you can download it reliably over DNS or ICMP even with a modern system like INNUENDO. You can then remove yourself from the network before the IR team has a chance to do anything.

With the data you retrieved, you can do all sorts of cool analysis that will enable lateral movement or follow up attacks.
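
That "4GB over DNS" claim is easy to sanity-check with arithmetic. Below is a back-of-the-envelope sketch - every parameter (base32 encoding, three 63-byte labels per query, 500 queries/sec) is my assumption for illustration, not a detail of INNUENDO:

```python
# Back-of-the-envelope sketch of the "4GB over DNS" claim. Data is
# base32-encoded (DNS names are case-insensitive) and packed into query
# labels, so each lookup smuggles out ~100+ bytes. Every parameter here
# is an illustrative assumption, not a measurement of any real implant.
import base64

def dns_chunks(data: bytes, payload_labels: int = 3, label_len: int = 63):
    """Yield DNS-safe name prefixes carrying `data` in base32 labels."""
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    per_query = payload_labels * label_len
    for i in range(0, len(encoded), per_query):
        chunk = encoded[i:i + per_query]
        labels = [chunk[j:j + label_len] for j in range(0, len(chunk), label_len)]
        yield ".".join(labels)   # caller appends ".exfil.example.com"

if __name__ == "__main__":
    sample = next(dns_chunks(b"stolen LDAP dump goes here..."))
    print(sample[:60] + "...")

    total = 4 * 2**30                     # 4 GB of loot
    raw_per_query = (3 * 63) * 5 // 8     # base32: 5 raw bytes per 8 chars
    queries = total // raw_per_query      # ~36 million lookups
    hours = queries / 500 / 3600          # assume 500 queries/sec sustained
    print(f"{raw_per_query} bytes/query, ~{queries:,} queries, ~{hours:.0f}h")
    # Slow, but low-and-slow DNS traffic is exactly what rarely gets noticed.
```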

Really there are several things changing:

  1. Top level methodology is changing. The Microsoft team emphasized that once they go in, and gather the right data, they can use advanced machine learning and data analysis to show them exactly which users to phish next, and how. They know once they get back in exactly which machines they need to go onto to control the network. It's no longer a guessing game. It's more deterministic. Looking at some of these methodologies means how you buy penetration testing has to change. Once you realize "The attacker at some point is going to get on one of the boxes on my domain" you have to start testing lateral movement, data exfiltration, and incident response from that perspective.

  2. Advanced low level techniques are being commoditized, partially because Kaspersky and co. are doing a good job writing giant white papers on the things they catch in the wild. In INNUENDO's case this means the public penetration testing community can get an advanced implant including the in-memory loader, high-level language VM and API, multiple channels, built in sniffer and debugger, and OPSEC workflow.
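
The "no longer a guessing game" line in point 1 is worth making concrete. Here's a hedged sketch in the spirit of graph tools like BloodHound (not Microsoft's actual tooling): treat the harvested logon data as a user-machine graph, and lateral movement becomes a shortest-path search.

```python
# Sketch of the "no longer a guessing game" point in item 1: with
# who-logged-in-where data for the whole domain, lateral movement is a
# shortest-path problem over a user<->machine graph (the same idea
# behind tools like BloodHound). The logon data below is made up.
from collections import deque

logons = [  # (user, machine) pairs harvested from LastLoggedIn-style data
    ("alice", "WS01"), ("alice", "WS02"),
    ("bob",   "WS02"), ("bob",   "SRV01"),
    ("carol", "SRV01"), ("carol", "DC01"),
]

def attack_path(start: str, target: str):
    """BFS from a phished foothold to a target over the logon graph."""
    graph = {}
    for user, machine in logons:
        graph.setdefault(user, set()).add(machine)
        graph.setdefault(machine, set()).add(user)
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

if __name__ == "__main__":
    # Phish alice, harvest creds on the machines she shares with bob,
    # and walk the graph straight to the domain controller.
    print(" -> ".join(attack_path("alice", "DC01")))
```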

Funny he should mention Kaspersky... you know, from the SS7 thing above? Yeah.

Related post: Subverting Security
