
The bug hunt


Facebook and other companies have turned to the research community for help finding flaws, reports Angela Moscaritolo.

Standing on stage at the Facebook F8 developer conference in September, founder and CEO Mark Zuckerberg boasted that the social media site he invented in his Harvard dorm room back in 2004 – the same site that now has more than 800 million users – recently hit a milestone: Half a billion people used Facebook in a single day. 

There is no denying that the behemoth that is Facebook has become ingrained into users' everyday lives. But even giants can fall. If members believe the information they post on Facebook is unsafe, they will move on – plain and simple. 

This reality is not lost on those who work for the company. In fact, it's quite the opposite. Within the walls of Facebook's headquarters in Palo Alto exists a culture dedicated to providing users with a secure experience, says Joe Sullivan, the company's chief security officer. 

“Trust is fundamental,” Sullivan says. “That's something we think about every day. There is never a situation where the company trades off security for something else. If there is a security issue, we drop everything and deal with it.” 

One of the necessities in running a web presence used by hundreds of millions of people each day is ensuring its code is free of errors – security vulnerabilities – that could allow an attacker to gain access to private accounts. By any measure, coding errors are extremely prevalent, not just in websites spanning the internet, but also in commercial computing products and custom-developed systems. 

“Vulnerabilities are dangerous, and people outside of the [computer security] industry aren't aware of how many latent vulnerabilities there are in products they use every day,” says Dino Dai Zovi, an independent security consultant who started bug hunting to find such issues in 1999, and who has disclosed flaws in products made by Apple and Sun Microsystems (now owned by Oracle).

While Sullivan estimates that hundreds of employees across Facebook work on security issues, there are two primary groups dedicated to preventing, finding and fixing vulnerabilities. The platform integrity team, within the software engineering department, works to ensure that every single engineer in the company follows secure-coding practices. Then, the six-person product security team, which is part of the security department Sullivan manages, works to “poke holes” in the code that has been created, scouring it for vulnerabilities. 

In addition to the internal bug finders, the company also calls on external auditors to review code for weaknesses before it is released online. 

And, to ramp up its efforts to find holes that could be abused by attackers, Facebook recently followed the lead of several other major web companies – including Google and Mozilla – to launch a so-called “bug bounty” program. Such initiatives offer independent researchers monetary incentives for the private disclosure of vulnerabilities and exploits. 

Since rolling out the program in July, Facebook has already doled out $70,000 to researchers around the world for the discreet disclosure of 72 vulnerabilities, all of which have since been fixed, Sullivan says. 

“I think it is a good thing to have more people testing our site, and I believe that because we launched the program we have encouraged more people with expertise in security issues to help us,” he says.

Landscape shifts 

The bug bounty programs of today represent a significant evolution in the historically fragile relationship between researchers who find security issues and companies whose products are affected. In the late 1990s and early 2000s, most large companies didn't have a defined process for dealing with reports of vulnerabilities coming in from the research community, Dai Zovi says. 

“At best, they would ignore you,” he recalls. “At times, they were hostile and threatened researchers with lawsuits.” 

The idea to begin paying researchers for vulnerabilities initially came from the vendor community. The first such initiatives were the Vulnerability Contributor Program (VCP), launched in 2002 by security firm iDefense (now owned by VeriSign), and the Zero Day Initiative (ZDI), founded in 2005 by TippingPoint (now owned by HP). These programs remain the top players in the commercial bug market today.

The most important shift in the vulnerability disclosure model occurred when software makers themselves started offering bug bounties, Dai Zovi says. “Vendors are switching from passively receiving reports to actively soliciting them,” he says. 

Mozilla, maker of the popular Firefox web browser, began such a program in 2004. The company provides monetary rewards for the private disclosure of bugs classified as “critical” or “high” – its most severe ratings, designated for flaws that could allow an attacker to install malware without user interaction, obtain confidential data from a user's machine, or cause a denial of service requiring extensive cleanup or reinstallation of the operating system. Since launching the program, Mozilla has received somewhere between 150 and 160 bounty-eligible bugs, and thousands of others that are lower in severity, says Brandon Sterne, a Mozilla security engineer. 

Considering some companies still try to deal with security flaws internally and don't welcome bug reports from the research community, Mozilla, along with a number of other companies with such programs, is undoubtedly ahead of the curve.

Facebook, too, has traditionally encouraged researchers to notify the company directly about security problems.

“We haven't sued anyone or reported anyone to law enforcement who has reported a vulnerability to us, nor do we intend to,” Sullivan says. 

In fact, the social networking site advanced its bug solicitation efforts after Sullivan's team spoke with professionals at other companies with established bug bounty programs and received positive feedback. Facebook now offers at least $500 for privately disclosed flaws that may “compromise the integrity or privacy of Facebook user data.” For a particularly bad flaw, the company has given $5,000. 

Just two months after the project was launched, Sullivan says he is “astonished” by how impactful it has been. It has enabled Facebook to build relationships with researchers from all over the world. The top two bug finders so far have been a college student from the United States and an individual in Turkey, both of whom have already been paid at least five different times, totaling between $5,000 and $10,000 each, Sullivan says. 

Further, the bugs that are being disclosed are flaws for which the company wouldn't normally have been looking. And, the initiative has proven to be an invaluable recruitment tool. 

“We had one person who asked us if they could have admission to the F8 conference instead of receiving the bounty,” Sullivan says. “We flew them out to San Francisco and scheduled them for a series of engineering interviews the next day.” 

The flip side 

Though Sullivan says Facebook's bug bounty program is only making the company better, such initiatives have been the source of controversy among some members of the security community. 

Adobe, the San Jose, Calif.-based firm that makes the popular Reader and Acrobat software, does not believe providing cash rewards in return for vulnerability details is the best way to protect its customers, says Brad Arkin, the company's senior director of product security and privacy. Doing so could cause firms to focus too much attention on offensive research – finding flaws – and, as a result, neglect investments in exploit mitigation techniques. 

This imbalance “can lead to an unhealthy ecosystem where there are too many people looking for problems with too few people looking for ways to solve or defend against those problems,” Arkin says.

Tim Stanley, director of information and infrastructure security for Waste Management, a North American provider of trash removal and recycling services, says he is “on the fence” about bug bounty programs. While he doesn't totally oppose them, he says such endeavors may cause companies to expend too much of their resources fixing old software, rather than innovating for the next versions. 

“Every company has a finite set of resources,” Stanley says. Time and money may be better invested in efforts to ramp up secure coding efforts, which are more proactive, he adds. 

Instead of providing cash rewards for product deficiencies, Adobe invests resources to test its products for flaws, both internally and externally, through consulting engagements with the security research community, Arkin says. But even so, the company “greatly values” the help of individuals who disclose issues in its products and credits bug finders in its security bulletins. 

Mozilla's Sterne says name recognition alone is highly valued by security researchers, since being credited with finding a flaw in a major site or product is invaluable on a résumé. 

Still, a simple “thank you” can only go so far. Researchers have increasingly become fed up with vendors who expect them to disclose flaws for free. In 2009, a group of researchers, including Dai Zovi, started a movement called “No More Free Bugs,” arguing that researchers should no longer give away their findings at no cost. 

Mozilla, for one, last July upped the price of its bug bounties from $500 to $3,000. 

“We realized there is a big business here,” Sterne says. “There's a lot of money being made on the black market for this type of research.”

Google quickly followed suit, raising the top reward for holes in its Chrome browser from $1,337 to $3,133.70. Since launching its vulnerability rewards program last year, Google has paid out a total of a half-million dollars in prizes. 

But even with the price increases, it is often hard for researchers to make a good income from bug disclosures alone, Dai Zovi says. Such a conundrum may lead these individuals to the cybercriminal underground, where highly exploitable vulnerabilities have been sold at auction for upward of $100,000, experts say. 

Selling a bug on the cybercriminal underground, however, is a whole different ballgame. 

“If you were a black market security researcher discovering exploitable security bugs in software products, you would have to weaponize the exploit, which means setting it up so it is easily deployable and ready to use against consumers on the internet,” he says. That requires a significant amount of work beyond simply finding the bug, he adds. 

Companies that offer bug bounties only require submitters to find a flaw and demonstrate that it is exploitable. In the end, the question of what to do with a previously undisclosed vulnerability really comes down to the individual's motivation and moral standards. 

But knowing about the active black market economy for bugs, software vendors themselves commonly troll cybercriminal forums looking for discussions about vulnerabilities in their products, experts say. 

Patching the holes

While vendors are, more than ever before, actively looking for and soliciting bugs, some say they should be more transparent about the issues that are discovered, and provide fixes in a more timely manner. Affected vendors still sometimes wait months – even years – to remediate security issues that are privately disclosed to them, says Dan Holden, director of security research at the Digital Vaccine Laboratories at HP TippingPoint.

“There is no good reason why a vulnerability should take two years to patch,” Holden says. “If you know about a vulnerability, there is a high likelihood that others do as well.” 

To encourage vendors to provide patches in a more timely manner, HP TippingPoint last August changed its ZDI bug bounty program, giving affected companies a deadline to provide a fix. If a flaw isn't remediated six months after being disclosed to the vendor, ZDI will now publish limited information about it, as well as mitigation information. 

In addition, last year Google called on companies to fix flaws within 60 days, and announced it would publicly disclose issues its researchers discovered if the affected company does not provide a fix within that timeframe. 

“Private vulnerability disclosure was, at times, allowing bugs to remain for long periods of time, even when under active exploitation,” Adam Mein, security program manager at Google, says of the company's rationale to impose a deadline. 

Such delays have, understandably, been the source of incredible frustration for researchers and security professionals at end-user companies, such as Waste Management's Stanley. Users expect safe, functional products and it's up to the vendors to provide that, Stanley says. 

“I suspect some companies fear that if customers know there is a problem with their product, that people won't buy it,” he says. “I'm more likely to buy a product from a company that I know is open and transparent about the issues because it's somebody I can trust.”

But while vendors, researchers and end-users still commonly butt heads about vulnerability issues, most can agree that the existence of flaws will be an issue for the foreseeable future, and working together to combat them is imperative. 

Building on the early success of its bug bounty program, Facebook's Sullivan says the company is planning to expand the initiative to pre-production code. Currently, Facebook only provides bounties for vulnerabilities discovered in code that is already in production. At some point in the future, it plans to begin asking researchers to review code that has not yet been released, Sullivan says. Google also says it plans to expand its list of products eligible for bug bounties. 

“The security community is a really powerful, amazing group of people that is bigger than we thought,” Sullivan says. “There are lots and lots of people who care about this, if you're willing to talk about flaws. We need to have an environment where people are really open to getting better.” 


[sidebar]

Vulnerability trends: Disclosures declining

During the first six months of 2011, there was a “distinct and significant decrease” in the disclosure of new vulnerabilities compared to previous years, according to HP's “2011 Mid-Year Top Cybersecurity Risks Report.” As of June 30, the Open Source Vulnerability Database had recorded 3,087 flaws in internet-based systems, applications and other computing tools – about 25 percent fewer than the 4,091 catalogued during the same period in 2010. 

In fact, the reporting of new vulnerabilities has been slowly declining since 2006. On a positive note, the drop can be partly attributed to increased efforts from software makers and system developers to reduce such flaws prior to releasing their products. But even so, data collected by HP from scans of actual customer web application deployments shows that the actual number of flaws is not decreasing – only the number being reported is. 

“Production websites for some of the world's leading organizations are still bursting with vulnerabilities that leave the websites open to devastating attacks,” the report states. 

The most common types of vulnerabilities being exploited in websites today are those classified as SQL injection and cross-site request forgery, both of which are web application flaws, says Dino Dai Zovi, an independent security consultant. In desktop software and mobile devices, however, memory corruption flaws are most prevalent. – AM
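
For readers outside the security field, the following minimal sketch – written for illustration only, not code taken from any company mentioned in this article – shows how the first of those classes, SQL injection, works: a query assembled from raw user input can be subverted by an attacker, while a parameterized query treats the same input strictly as data.

import sqlite3

# Toy login check for illustration only; table and account names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # Vulnerable: user input is concatenated directly into the SQL statement.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name, password):
    # Safer: placeholders make the database driver treat the input as plain data.
    return conn.execute(
        "SELECT * FROM users WHERE name = ? AND password = ?", (name, password)
    ).fetchone() is not None

# An attacker who controls the password field can bypass the vulnerable check.
print(login_vulnerable("alice", "' OR '1'='1"))  # True - injected clause always matches
print(login_safe("alice", "' OR '1'='1"))        # False - treated as a literal password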

