
The 2019 TechBeacon Buyer’s Guide for Application Security

www.microfocus.com


Table of Contents

The Purpose of This Buyer's Guide
AppSec 101: Common Security Mistakes Made by Developers
Three States of Vulnerability
Discovering Vulnerabilities
Disclosing Vulnerabilities
Developing an AppSec Strategy
The Tools for Application Security
How AppSec Tools Are Used by Developers, Testers, and Security Teams: What Should Be Your Priority?
The Buyer Profile: What Are Your Unique Business Needs for Application Security?
Are You Ready for an Effective AppSec Culture?
Resources


Software development organizations are struggling to secure the applications they develop and deploy, for several reasons. First, the growing dependency on open source code as a cost-cutting, time-saving foundation for application construction requires development teams to carefully check that code for vulnerabilities.

Despite their rapid-fire, DevOps-driven production schedules, dev teams have to face a critical fact: open source and other third-party component providers have not always done a good job of application security testing prior to making their code available. Someone has to test that code, and if it isn't being done by your providers, then it's up to you.

Second, when a vulnerability is detected, the time it takes the responsible party to repair and patch the affected code presents multiple problems for software developers and consumers: 1) the functionality of the unsafe code goes unused by the dev team's customers; 2) that can have revenue implications if customers decide to look elsewhere for the functionality; and 3) in the meantime, attackers are potentially able to exploit the vulnerability in other implementations of the same code.

The time and cost involved in remediating unsecure code is motivating the savviest development teams to build security into their applications as early as possible within the secure development life cycle (SDLC). They recognize that the easiest way to avoid application vulnerabilities is to not create them in the first place as they design and write code.

Unfortunately, teams who exercise that level of awareness and capability are rare in today's highly competitive market for software applications. That needs to improve. If software development organizations are going to satisfy the customers who are delivering—and utilizing—the billions of endpoints across mobile, enterprise, and IoT devices, application security will have to become a top priority.

The Purpose of This Buyer's Guide

The critical need for application security raises three questions that this guide will address:

1. What behaviors in the software community necessitate application security practices?

2. What processes and products enable application security risks to be mitigated?

3. What questions do application development teams need to answer in order to determine their next steps in application security risk mitigation?

While the answers to those questions are not trivial, there are things that every dev team can do to greatly improve their application security capabilities.

■ If you're designing and delivering applications for yourself or your customers, you can build in security prior to deployment.

■ As you implement and configure your applications, you can learn how to do that securely.

■ And after your applications ship, you can maintain them securely—i.e., protect them against attacks by finding and eliminating vulnerabilities that are inserted or emerge over time.

Those steps are, of course, related. Development teams who use security best practices during the coding phases are less likely to deploy vulnerable applications. But what about the other dev teams? Let's consider what teams frequently get wrong, and the consequences of those mistakes.

Note: If you are well-versed in application security issues and want to skip to the more advanced discussion, move ahead to the section "How AppSec Tools Are Used by Developers, Testers, and Security Teams: What Should Be Your Priority?" in this guide.

AppSec 101: Common Security Mistakes Made by Developers

One of the biggest problems in application security is the lack of discipline around patch management, which is discussed in more detail below. Essentially, you need a program that 1) receives the updates from your open source providers as well as your proprietary vendors, and 2) applies the patches promptly, according to the vendor's instructions. Yet, as obvious as this advice may sound, the lack of attention to security patches has been the cause of numerous breaches.

There is also a tremendous amount of legacy code that hasn't been analyzed for security. This includes software that's been around for years, with dependencies that may or may not have been investigated. "Legacy code never seems to go away," says Luther Martin, Distinguished Engineer and data security specialist for Micro Focus. "Corporate IT grows the way the Winchester Mystery House was constructed... never ending, day in and day out. You end up with some staircases that go to nowhere, but other stairways that are valid. It isn't always clear. You can't tell what's supporting what, and it's too painful to make a change. The pointless staircase that goes nowhere may also be supporting a wall of some room on the other side which is perfectly useful."

New technologies pose a different set of issues. When these are introduced, dev teams are often willing to adopt them before conducting the AppSec analysis required to ensure they don't introduce new vulnerabilities into their development and/or deployment stack.

In general, developers and dev managers tend to lack sufficient education in the areas of threat modeling and risk reduction. The AppSec maturity models strongly suggest regular code reviews and software analysis. But as far as dev is concerned, delivering the functionality within tight deadlines outweighs the security risks in most cases.

With the proliferation of programming languages, related frameworks and libraries, architectures, and all the legacy code out there, attack surfaces are expanding. Mobile phone apps and apps based on IoT implementations represent one of the biggest areas of IT growth, but also a huge potential for new attacks. If developers write new code for these devices and environments without knowledge of what vulnerabilities can be introduced, then they are compounding the world's AppSec complexity.

So, what kinds of vulnerabilities can lead to attacks?


Three States of Vulnerability

Let's define a few common terms used in most AppSec discussions. A vulnerability is caused by one or more weaknesses in software, or its configuration, that may be exploited by an attacker. An attack itself can come in a variety of classes, including distributed denial of service (DDoS), cross-site scripting (XSS), SQL injection, and many others. Well-known attacks are often given common names, such as "WannaCry." A threat is the potential for one or more of these attacks to occur. Threats include possible attacks by ransomware, viruses, and other types of exploits.

To understand the types of vulnerabilities and threat levels you need to guard against, it helps to use the following structure: "known knowns, known unknowns, and unknown unknowns." While US Secretary of Defense Donald Rumsfeld popularized these descriptors in 2002, they have been used in many fields of study for years. In terms of cybersecurity, here's how you can understand them:

1. Known knowns: These are previously identified vulnerabilities that are, usually, listed in one or more vulnerability databases. They can be guarded against by patch management and software composition analysis (SCA)—i.e., by installing the fixes sent by vendors after some vulnerability in their software has been detected and, usually, corrected. Of course, it is the responsibility of individual customers to install the patch properly and in a timely way. Regarding SCA, Sonatype, Black Duck (Synopsys), and others were created in this space.

2. Known unknowns: These are vulnerabilities based on known types of weaknesses, but you don't know if instances exist in your own systems. To guard against this potential you need to constantly scan your code and its dependencies for instances of vulnerabilities or active malicious code. Let's say you know about a specific cross-site scripting (XSS) vulnerability in a consumed software component, in which malicious scripts are injected into otherwise benign and trusted websites. The vulnerability has a name and an assigned CVE, but you don't know if that vulnerability resides in your code. Ideally, scanning for the signature of that particular XSS will discover any threat.

3. Unknown unknowns: In cybersecurity terms, these are zero-day vulnerabilities that represent the discovery—and often the exploit—of a fundamentally new attack vector (involving a new weakness type). No tool can find them, because no signature has ever been recorded, no ID has been assigned to instances of them, and there are "zero" days between the discovery and the attack. In other words, the first symptom of the vulnerability is the attack, or exploit, itself.
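The "known knowns" category can be made concrete with a small sketch. Everything below is invented for illustration: the component names, versions, and advisory strings are hypothetical, and a real SCA tool would consult the CVE/NVD feeds or a commercial vulnerability database rather than a hard-coded table.

```python
# Toy illustration of guarding against "known knowns": compare an
# application's dependency list against a database of known-vulnerable
# component versions, as an SCA tool does at much larger scale.
# All component names, versions, and advisories here are invented;
# real tools consume the CVE/NVD feeds or a commercial database.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "CVE-2019-0001 (hypothetical XSS in template rendering)",
    ("parserkit", "0.9.1"): "CVE-2019-0002 (hypothetical SQL injection in query builder)",
}

def check_dependencies(deps):
    """Return advisories for any dependency pinned to a known-bad version."""
    findings = []
    for name, version in deps:
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append(f"{name} {version}: {advisory}")
    return findings

app_deps = [("examplelib", "1.2.0"), ("parserkit", "1.0.0")]
for finding in check_dependencies(app_deps):
    print(finding)
```

Known unknowns, by contrast, require scanning the code itself, and unknown unknowns by definition have no database entry to look up.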

Discovering Vulnerabilities

Anyone, especially anyone with a background in software security and software vulnerabilities, can discover a weakness in software code. People looking for vulnerabilities are often referred to as researchers. They may be contributors to a "bug bounty" program as explained below, or they may work directly with a software vendor to discover and report weaknesses in code before and after software release.

Bug Bounty Programs

Bug bounty programs offer a structured method for reporting software vulnerabilities, and they typically offer some sort of financial incentive to the reporter. One well-known bug bounty program is the Zero Day Initiative (ZDI) sponsored by Trend Micro. ZDI makes it easy for researchers to report unpatched vulnerabilities. When vulnerabilities are reported, ZDI passes them to the party responsible for the software and follows a documented protocol that allows the vendor time to address the issue.

The protocol also includes the steps ZDI will take if the vendor fails to provide a patch for the vulnerability, which may include publishing workarounds for the problem, informed by the communication record ZDI has developed over time with the vendor.

"Our goal with ZDI is to connect researchers [vulnerability hunters] to the vendors who have created the vulnerable software," says Brian Gorenc, director at Trend Micro and leader of the Zero Day Initiative. "After researchers submit the vulnerabilities to us, we validate that they are in fact unpatched and are what we call a zero-day vulnerability."

"Then we offer to purchase the vulnerability from the researcher, and we work with the affected vendor to make the fix, get the vulnerability assigned a CVE number (see below), and ensure the database is populated correctly. We need to ensure the vendor has enough detail to create the patch, or mitigate the vulnerability. We also do root cause analysis, working within multiple test cases, and we work with the researcher to make sure they get the proper attribution for their discovery."

"We also have something like a 'frequent flyer' program: the more vulnerabilities a researcher submits to us, the higher their reward levels. Our highest platinum level brings with it additional monetary reward; those who reach that level describe vulnerabilities in excellent detail, which in turn gives the vendor a way to make their patches or mitigations all the more reliable."

"We buy vulnerabilities from all over the world. Our best researchers tend to be individuals, working on the side because research is their passion. They use us as an outlet for that work and are often working in parts of the world where salaries are depressed; the rewards we offer represent a great way for them to supplement their paychecks."

Individual Reporters: Hackers and Other Security Enthusiasts

Bug bounty programs are not the only way vulnerabilities get discovered and reported. "People tend to fall into one of two different groups: tech enthusiasts who happen to be security aficionados, and amateurs in the space, who certainly are not making a living at it, but do vulnerability reporting as a side interest," says Erdem Menges, a security product manager at Micro Focus® Fortify. "When the former group finds a vulnerability, they usually want to be known for the discovery." Which means, sometimes, the vulnerability receives a name of the discoverer's choosing.

Others who discover vulnerabilities are pros, frequently hired by, and paid for by, vendors of AppSec products who are doing their own research. They look for patterns and find vulnerabilities that need to be disclosed to the industry. As with bug bounty programs, their goal is to alert a vendor before the vulnerability is widely known, i.e., to mitigate the threat before there is wide impact. Most of these reporters don't seek extra payment for finding and disclosing vulnerabilities, but see it as part of their responsibility to the software community.

It's best to assume that all software has vulnerabilities. The reality is, most vulnerabilities have not been discovered yet; it's just a matter of time. "Back in 1999, I remember reading about this new vulnerability called 'SQL injection,'" says Luther Martin, referring to what, 20 years later, is one of the best-known modes of attack. "Until that was announced, no one knew this might be a vulnerability. It had been sitting there for years, but no one had exploited it in an attack." A recent example of a vulnerability that had been lurking for years is Shellshock.

"Attack surfaces shift over time," says Brian Gorenc. "As people do research, and a new attack surface is exposed, many researchers begin doing what we call 'variant hunting,' looking at that attack surface from different angles to find different ways in. Eventually, with enough eyes on the problem, the patches make the vulnerability harder to penetrate. The hope is that the vulnerability will go away."

Disclosing Vulnerabilities

When a vulnerability is discovered in released code, the most obvious course of action is to fix it, patch it, and make the patch available to customers. But this is not always what happens. If your organization writes software and also runs software written by other organizations, then there are several models for vulnerability disclosure that can apply to you, depending on whether the code is yours ("my code") or another party's ("not my code"), as follows:

■ Non-disclosure: My code, and my analysis of it leads to the finding of a vulnerability; it is my choice not to disclose it publicly. (Non-disclosure can also apply to code that you do not own.)

■ Responsible disclosure: In general, the identified vulnerability will not be publicly disclosed until a suitable amount of time has been provided to the responsible party for creating and distributing a patch.

■ Self-disclosure: My code, and my analysis of it leads to the finding of a vulnerability. I disclose the vulnerability publicly, often along with making a patch available.

■ Third-party disclosure: Not my code, and my analysis of it led to the discovery of a vulnerability. I disclose the vulnerability to the owner of the code/system so they can develop a patch and release it.

■ Vendor disclosure: Not my code, and my analysis of it led to the discovery of a vulnerability. I disclose the vulnerability to the owner of the system/code, which leads them to make a disclosure (often along with making a patch available). Sometimes referred to as "full vendor disclosure."

■ Full disclosure: Not my code, and my analysis of the system led to the discovery of a vulnerability; the details are released publicly, in full, without involvement by the vendor (the owner of the system/code). Sometimes referred to as "immediate public disclosure."


■ Hybrid disclosure: ZDI's bug bounty program offers what might be called "hybrid disclosure": it provides a grace period for a vendor to create a patch before ZDI makes a public disclosure.

The CVE and Vulnerability Databases

Known vulnerabilities are collected and maintained in widely available databases, the best known being the Common Vulnerabilities and Exposures (CVE) list maintained by the MITRE Corporation and the National Vulnerability Database (NVD) maintained by the US federal government.

When a vulnerability is discovered and reported by individuals or bug bounty programs, the CVE and NVD teams coordinate to ensure it is cataloged; given an identifying number; and described according to history, level of threat, and patch availability. According to the NVD website, "The NVD is the CVE dictionary augmented with additional analysis, a database, and a fine-grained search engine. The NVD is a superset of CVE. The NVD is synchronized with CVE such that any updates to CVE appear immediately on the NVD."
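To give a feel for how these records are consumed programmatically, here is a simplified sketch. The field layout is heavily abbreviated for illustration (real NVD entries carry CPE applicability data, CVSS vectors, references, and modification history), though CVE-2017-0144 is the real identifier for the SMBv1 flaw exploited by WannaCry.

```python
import json

# A heavily abbreviated CVE-style record, for illustration only; real
# NVD entries include CPE matches, CVSS vectors, references, and more.
# CVE-2017-0144 is the real ID for the SMBv1 flaw behind WannaCry.
record = json.loads("""
{
  "id": "CVE-2017-0144",
  "description": "SMBv1 remote code execution (exploited by WannaCry)",
  "severity": "CRITICAL",
  "patch_available": true
}
""")

def triage(entry):
    """Decide patching urgency from severity and patch availability."""
    if entry["severity"] == "CRITICAL" and entry["patch_available"]:
        return f"{entry['id']}: patch immediately"
    return f"{entry['id']}: schedule review"

print(triage(record))  # → CVE-2017-0144: patch immediately
```

A real patch-management program would automate exactly this kind of triage across the full vulnerability feed rather than one record at a time.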

A common concern regarding these databases is that they provide an easy hunting ground for would-be attackers. As noted on the CVE website, "sharing information within the cybersecurity community [i.e., organizations seeking protection] is more difficult than it is for hackers," who use the dark web for nefarious purposes, including the marketing of "Ransomware as a Service" (RaaS) kits.

But the CVE insists "the benefits ... outweigh the risks." Most security experts recognize that the CVE and the NVD provide valuable information to serve as guidelines or bare minimums for organizations. The important thing for application security teams to do, in return, is to stay up to date on reported vulnerabilities, apply security patches immediately as they become available from their software providers, and implement a patch management program as a routine part of IT operations.

It's also important to know that these databases represent only a small percentage of the actual vulnerabilities that get discovered on a daily basis.

Developing an AppSec Strategy

If you're a chief information security officer (CISO), how do you improve security for your organization and your business? Your goal is to mitigate the actual risk of any security breach—to minimize its impact and understand the likelihood of it occurring. Application security is about proactively preventing, identifying, protecting against, and defending against risk in applications caused by security vulnerabilities.


Of course, you need people with the right skills—your developers need to understand how to "build in" security as early as possible in the SDLC (i.e., they need to know what tools to use and when). Plus, your AppSec tools need to blend well with your DevOps pipeline if you're practicing continuous integration and delivery.

Implementing a Patch Management Program

In an ideal world, software is created without any vulnerabilities. In reality, the weaknesses which give rise to vulnerabilities are essentially flaws in design, implementation, or configuration. When these are introduced, the best solution is for the vendor to create a patch and distribute it to their customers with instructions. But it's up to you, the consumer of the vulnerable software, to apply the patches.

Given developers' heavy reliance on open source code and third-party components, and, as noted earlier, the relatively high incidence of vulnerabilities in these components, patch management is especially critical—not only to development teams themselves but also to IT Ops teams who receive code for deployment within their business environment.

Responsible code providers—both open source providers and proprietary vendors—routinely find and repair vulnerabilities in the code your team may rely on. Software vendors are always more capable of understanding their product than consumers, including security specialists, says Luther Martin.

"I remember a recent case where a vulnerability was only a problem if a specific flag had been enabled," says Martin. "When that vulnerability was reported to the vendor, they were able to examine the report and determine what to do within a few minutes, which led to a quick patch. If this problem had been left to outside teams to explore and correct, it might have taken months to reach the same fix. They simply wouldn't know the internal workings of the software."

Understanding the Cultural Shift

An effective application security process requires developers to be wary of basing their code on untested components. There is no way to know what security risks lurk inside untested code. "It's kind of like going grocery shopping without paying any attention to food labels," says Derek Weeks, vice president at Sonatype, a company that helps developers manage open source components across different applications. "If you're a developer and you're picking a component for a heavily governed environment, you might have to submit that component to an open-source governance group for approval."

Approval processes like that can dramatically slow down a team working on two-week sprints. "You either work around the newer components that might require six weeks for approval, or you fall back on the old components" that are safe, tried, and true, but often don't provide the functionality you're looking for. The good news, in some sense, is that "there are 10,000 open source 'parts' that arrive on the market every single day," says Weeks. "Developers need to choose parts on the basis of the attributes that will satisfy the app they're developing."

"Some problems should be solved on the network, some in the software," says Alex Hoole, Principal Researcher for Software Security Research at Micro Focus. "Some have to be fixed in the hardware, and yet other problems exist within people and culture."

"As far as people are concerned, there's nothing you can do in software that will prevent perpetrators from launching some social engineering exploit—phishing, for instance. And there's nothing that could have been done within culture that would have prevented zero-day attacks like Spectre and Meltdown, which were previously unknown. Systems development culture should evolve in the light of such vulnerabilities."

The Tools for Application Security

As for the AppSec tools themselves, they cover a number of different testing styles and purposes that will certainly influence your purchase decision. Bear in mind that security tools, in general, span a wide variety of needs within an enterprise. For example:

■ Security information and event management (SIEM) systems detect intrusions via log and alert data and correlate it with data from other systems to identify real threats.

■ Data security tools typically focus on data encryption and user access to sensitive material, which requires a company-wide program for user identity and authentication. Data security requires strong application security to ensure that the implementation and configuration of data security are not undermined. The two are complementary, but this guide is not focused on data security.

Instead, this guide is focused on application security and its tools, which come in many flavors:

■ Static application security testing (SAST) tools are the front line of secure code development. SAST tools scan source code to find known patterns of weaknesses which lead to vulnerabilities. Think of these tools as the first step in weeding out most vulnerabilities. SAST tools are ideally integrated with the developer IDE to detect known vulnerabilities prior to code commit. Most SAST tools support the major web languages, Java and .NET. Static tools generally also support some form of C, C++, or C#. In addition to integrating with the developers' IDE, the tools should be able to support your DevOps team's software delivery and integration pipeline. Static tools should be run as often as practical to provide feedback directly to the developer, allowing managers and the security team to monitor the progress of developers in eliminating security defects.
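The pattern-scanning idea behind SAST can be sketched in a few lines. This is a deliberate toy with two invented rules; real SAST engines parse the code and track tainted data flows rather than matching regular expressions line by line.

```python
import re

# Deliberately tiny sketch of SAST-style pattern matching: scan source
# lines for two invented weakness signatures. Real SAST engines parse
# the code and track tainted data flows; naive regexes like these
# produce many more false positives and false negatives.
WEAKNESS_PATTERNS = [
    (re.compile(r"\beval\s*\("), "CWE-95: eval() on dynamic input"),
    (re.compile(r"execute\s*\(\s*['\"].*%s"), "CWE-89: SQL built by string formatting"),
]

def scan(source):
    """Return (line number, finding label) pairs for matched patterns."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, label in WEAKNESS_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

sample = (
    'user = request.args["u"]\n'
    "cursor.execute(\"SELECT * FROM users WHERE name = '%s'\" % user)\n"
)
for lineno, label in scan(sample):
    print(f"line {lineno}: {label}")
```

Running a check like this on every commit, and surfacing the findings in the developer's IDE or CI log, is the workflow the paragraph above describes, only at vastly greater depth and accuracy in a real tool.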

■ Interactive application security testing (IAST) tools perform what is often called "glass-box" testing. IAST is designed to catch attacks that the other approaches cannot by running an agent that collects event data from running applications. Either by installing software agents on an application server, or by instrumenting the application at development time, interactive analysis techniques allow the collection of data on application and security events in pre-production environments.

■ Dynamic application security testing (DAST) tools run against compiled, or production, code to test for known vulnerabilities in the runtime environment. Also known as "black-box testing," dynamic analysis can locate various types of vulnerabilities in running applications. In most cases, organizations should run both dynamic and static testing: static analysis tools give developers feedback and educate them at the same time, while dynamic analysis tools can give security teams a quick win by immediately pinpointing exploitable vulnerabilities in either production or pre-production environments.

■ Runtime application self-protection (RASP) tools work "inside" an application's runtime environment to detect changes that may indicate an attack is under way. RASP may be effective for legacy applications, for which modifying the source code to fix vulnerabilities may not be an option. RASP tools are often used in combination with a web application firewall (WAF) guarding the perimeter of the runtime environment, while the RASP tool runs inside the runtime production environment itself. For more modern applications, that environment can include the VM or the container in which an application and its various components and APIs reside.

■ Web application firewalls (WAF) detect intrusions at the perimeter of an application server's network. WAFs are signature-based devices, meaning that they apply "a set of rules to an HTTP conversation," as described on the OWASP website. "Generally, these rules cover common attacks such as cross-site scripting (XSS) and SQL injection."

"WAFs protect servers. A WAF is deployed to protect a specific web application or set of web applications" and may "come in the form of an appliance, server plugin, or filter, and may be customized to an application. The effort to perform this customization can be significant and needs to be maintained as the application is modified."
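The "set of rules applied to an HTTP conversation" can be illustrated with a minimal sketch. The two signatures below are toy examples; a production WAF uses far larger curated rule sets (such as the OWASP ModSecurity Core Rule Set) and inspects much more than the query string.

```python
import re
from urllib.parse import parse_qs

# Minimal sketch of signature-based WAF filtering: apply a tiny rule
# set to request parameters. These two rules are toy examples only;
# production rule sets are far more extensive and cover headers,
# bodies, cookies, and encodings, not just the query string.
RULES = [
    (re.compile(r"<script\b", re.IGNORECASE), "XSS signature"),
    (re.compile(r"('|\")\s*or\s+1\s*=\s*1", re.IGNORECASE), "SQL injection signature"),
]

def inspect_query(query_string):
    """Return the rule labels triggered by any parameter value."""
    triggered = []
    for values in parse_qs(query_string).values():
        for value in values:
            for pattern, label in RULES:
                if pattern.search(value):
                    triggered.append(label)
    return triggered

print(inspect_query("q=<script>alert(1)</script>&page=2"))  # flags XSS
print(inspect_query("name=alice' OR 1=1 --"))               # flags SQL injection
```

The maintenance burden OWASP describes follows directly from this design: every new attack variant needs a new or adjusted signature, customized to the application behind the firewall.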

■ Software composition analysis (SCA) dissects all of the foundational units of code that comprise a shippable application. SCA is growing increasingly popular, given the wide use of open source components as part of an app's composition. "The pressure companies are putting on developers to create and deliver changes faster and faster is a big reason for this pattern," says Amy DeMartine, Principal Analyst, Forrester Research. "All they have to develop is a thin slice of proprietary code, with 80-90% of the rest consisting of open source components that serve as building blocks—functions that can build a graph or call a form, for example—then you can be first to deliver that new chunk of functionality through your DevOps pipeline."

But using open source components with some of the new, popular management tools and frameworks—like Maven, Node.js, PyPI, and RubyGems—creates considerable vulnerabilities. "It's not that we're writing worse code these days," says DeMartine. "It's just that the people writing open source components are not always scanning before they release, and it's critical that the business consuming these components look at these new versions and understand where the vulnerabilities lie."

■ Penetration testing tools assess app vulnerability by mimicking the hacks that attackers would attempt on a live application. Which means, essentially, that "pen testing" is a form of dynamic analysis. Pen testing is often considered more thorough than DAST, because it is not just an automatic test. It benefits from being a combination of automated tests, customized scripts, and manual tests run by humans. It's able to find business logic flaws which automatic tools are normally not designed to find. Penetration testing will always have a place in the secure development life cycle.

■ SaaS-based application security testing services make up one of the fastest-growing markets for AppSec testing. As described on Veracode’s website, “SaaS application security services are constantly improving as the threat landscape evolves, helping to keep your defenses up to date without needing to constantly upgrade on-premises technology.” However, Jason Bloomberg, president of Intellyx, questions how teams adopting SaaS-based AppSec will handle the governance complexities: “How do you establish the policies for who can use the cloud resources as well as how to use them?” Plus, he notes that a third-party SaaS application is “not going to be aware of all your internal users’ identities and permissions.”

■ Mobile security testing: Mobile testing consists of static code analysis for mobile application source code, customized dynamic testing methods for compiled mobile apps (such as fuzzing), and testing of the server backend/services. The Mobile Application Security Verification Standard (MASVS) from OWASP provides a good model to govern the verification needs for mobile apps.


“The ideal tools cover a variety of languages,” says Luther Martin. “You can then use that one tool to scan a lot of things. If you can’t scan certain chunks of code because of a language barrier, I can guarantee that there are vulnerabilities you’re not seeing.”

As a customer, you can run DAST and IAST against software you have purchased. “But it won’t be as good as scanning the source code,” says Alvaro Munoz, a researcher for Software Security Research at Micro Focus. “You want to ensure that your suppliers have good practices in place for SAST and penetration testing before you install the software.”

Beware of vendor claims: If you’re about to make a tool purchase, beware that many vendors only offer a single category, such as SAST, DAST, IAST, or RASP. “They may tell you that everything else offers the wrong approach, and this simply isn’t true,” says Munoz.

“In fact, anything you can bring to the SDLC to make it more secure is valid… all of those approaches have their place, although within any given organization a team may prefer one type of tool over another.”

Beware of language dependencies: “You’ll find that specific categories of tools, from different security tool vendors, work only with specific languages,” Munoz says. “For example, if you’re a bank using Java, then static analysis may work well for you, simply because static analysis tends to work with a strongly typed language like Java. But if you have certain banking applications written in JavaScript, you may need additional approaches, such as dynamic analysis.”

Also, libraries and frameworks: And language isn’t the only thing to consider. You need to think about your libraries and frameworks as well. “If you’re expanding to Python and PHP, you won’t be able to use all approaches. You’ll want to ensure that your vendor supports the technology stack that you’ve invested in.”

From a buyer’s point of view, Munoz strongly recommends that you ensure the vendor supports the languages, frameworks, and libraries you are using or plan to use. “Note that the language matters with SAST tools. Static analysis is easier for developers, in general. It does not involve a full runtime environment, with everything synchronized in order to get an app up and running.”

How AppSec Tools Are Used by Developers, Testers, and Security Teams: What Should Be Your Priority?

Figure 1. Sample Continuous Security Flow for the Software Lifecycle


Sequencing your security test types: With these kinds of tests, you need to have the frontend and the backend of your application available. “Which probably means you have to get permission from the operations team to set up the testing,” says Munoz, “and you may need permission from the QA team in order to perform the test. After all that, you can start your dynamic test. Dynamic testing involves a much larger team effort to work effectively.”

By contrast, with static AppSec testing, a small team can scan the frontend source code, scan the source code for the backend, and link these scans to the continuous integration and deployment pipeline. You can often have a scan running as you’re doing the integration between frontends and backends. Static testing allows you to test earlier in the development life cycle, right in the IDE (integrated development environment) being used by the development team, through build integration, or even through a SaaS offering.
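A common way to link a static scan to the pipeline is a gate step that fails the build when findings cross a severity bar. A minimal sketch, assuming a simple findings format (the field names and severity labels are illustrative, not any particular tool’s output):

```python
def ci_gate(findings, fail_on=frozenset({"critical", "high"})):
    """Return a nonzero exit code if any finding meets the failure bar,
    so the CI job fails and the merge is blocked."""
    blocking = [f for f in findings if f["severity"] in fail_on]
    for f in blocking:
        print(f"BLOCKING: {f['rule']} in {f['file']} ({f['severity']})")
    return 1 if blocking else 0

# Illustrative scan output, as a list of dicts.
findings = [
    {"rule": "sql-injection", "file": "api/orders.py", "severity": "high"},
    {"rule": "debug-enabled", "file": "settings.py", "severity": "low"},
]
print(ci_gate(findings))  # 1 (the high-severity finding blocks the build)
```

In a real pipeline this exit code would come from parsing the SAST tool’s report and would be returned via `sys.exit()` so the CI runner marks the job failed.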

Start small with static testing: Munoz warns against giving developers who are new to AppSec testing a code analyzer with all its many different checks enabled. “This will result in so many found issues that it will overwhelm the developer. He or she simply won’t want to use it. And most of the issues found by the tool will not be understandable to a developer unfamiliar with AppSec, in terms of what to do, how to remediate,” he adds.

“If you have half a million lines of code, and you’re running a static application security testing (SAST) tool, you can overwhelm your test team with thousands of false positive results from a test that looks for all kinds of vulnerabilities.”

“Don’t look for all of those instances at once,” says Munoz. “Look for, say, cross-site scripting vulnerabilities, then for DDoS, then for SQL injection. Don’t try to cover all these vulnerabilities at once, because your developers or test teams will get discouraged. It’s best to start small.”
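This “start small” advice can be sketched as a simple category filter over scan results, widening the enabled set one iteration at a time. The category names and findings shape below are illustrative assumptions, not any particular tool’s schema:

```python
def filter_by_category(findings, enabled_categories):
    """Keep only the findings in the categories currently being worked on,
    so developers see a manageable slice of the full scan output."""
    return [f for f in findings if f["category"] in enabled_categories]

all_findings = [
    {"category": "xss", "file": "views.py"},
    {"category": "sql-injection", "file": "db.py"},
    {"category": "weak-crypto", "file": "auth.py"},
]

# Iteration 1: only cross-site scripting.
print(filter_by_category(all_findings, {"xss"}))
# Iteration 2: broaden once the first category is under control.
print(filter_by_category(all_findings, {"xss", "sql-injection"}))
```

Many SAST tools support this natively through rule packs or suppression filters; the point is to ratchet the enabled set up gradually rather than dump every finding on the team at once.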

Create an AppSec program one step at a time: If possible, as you launch your AppSec program, train the development team by first using a centralized security team to run the scans. This team can run the scans, triage them, remove false positives, and focus on the limited categories of issues that developers can fix. Once developers get these results and begin fixing the issues, they will become familiar with threats and vulnerabilities. They’ll soon be in a better position to run these tests themselves.

The next step is to identify a “security champion” from each of your development teams, someone who can perform the same steps that the centralized security team was doing at the beginning of the AppSec initiative. This person can then teach other developers about application security and how to write secure code.

The final step is to allow each developer to run the static test scans themselves, so they can each understand how to fix issues and, most importantly, understand how to build in security prior to the testing phase.

The Buyer Profile: What Are Your Unique Business Needs for Application Security?

Application security requirements differ from one business to the next, depending on business sector and other factors. A company in the credit card industry, for example, will have different needs from a manufacturer in the pharmaceutical supply chain. Some of these differences have to do with compliance mandates, such as PCI DSS (the Payment Card Industry Data Security Standard) or the FDA’s DSCSA (the Drug Supply Chain Security Act). At the same time, note that these two examples are from highly regulated industries, so there will be some common security requirements for both of these enterprises.

Aside from industry-related requirements, every business will have unique security requirements based on IT operations configuration, languages used, modes of deployment, service level agreements, and, potentially, factors related to competitive differentiation.

The point is, before you can make a wise choice for application security products, you should consider the factors that will determine your organization’s AppSec profile. Here are some variables:

■ Do we have any compliance obligations (DISA STIG, FISMA, PCI DSS, MISRA, HIPAA, etc.)?

■ Which programming languages do we use to develop our software?

■ What operating systems are used for development and for deployment?

■ What is the makeup of our applications that need analysis (e.g., standalone, daemon server, web service, microservices, mobile app, library, etc.)?

■ What sort of software dependencies do we have (e.g., open source and commercial third-party dependencies)?

■ Which AppSec tools are we already using?

■ What build systems and IDEs are used by our developers?

■ What is our attack surface? What can an attacker actually reach, whether the perpetrator is inside or outside the organization?

■ Have we conducted any threat modeling? If so, what rankings and categories did we assign to the threats identified?


■ Will we have dedicated application security experts, developers, or contractors/SaaS performing AppSec evaluations?

■ How will the answers to these questions be different five years from now?

Different business units within the same organization may have different AppSec needs, as well. Derek Weeks offers a common scenario for large enterprises: “In Business Unit A, you have few compliance rules; in Business Unit B, you have HIPAA compliance rules, so their policies are going to be more stringent; in Business Unit C you have GDPR or PCI compliance rules. Given the vulnerability scoring provided by the CVE, developers for Business Unit A may be able to use an open source component in your applications if, say, it has a score of 7 or less. But for Business Unit C, you may not be able to use a component that has any level of vulnerability.”

Depending on the scope of your AppSec program, you may need to support wide variances in requirements across your company.

Are You Ready for an Effective AppSec Culture?

Here are some questions to ask yourself and your team leads. If you’re considering a formal AppSec program, one that enforces behavior and skills across your entire software development and delivery lifecycle, are you ready to enable that with tools and processes? Or are you fighting an uphill battle? Do you have secure coding guidelines and standards today?

For example, the SEI CERT C and CERT Java coding standards offer advice on how to avoid introducing vulnerabilities into your code. And there are tools that can help you find vulnerabilities as you type. In many cases, these may be vulnerabilities you didn’t know about as you were coding.

As code becomes more complex, vulnerabilities are harder to find, given the many interdependencies that occur throughout today’s software programs. Ideally, tools can be run against larger sets of code to discover more complicated vulnerabilities. But in terms of application development culture, when vulnerabilities are found, are your teams prepared to fix them? And how quickly can they address the risks?

As people become familiar with secure coding practices, ideally they become less likely to repeat the same mistakes over time.

AppSec Takes a Village

Reaching a mature level of application security requires collaboration across multiple business units, including security, development, testing, and operations. There is no silver bullet that can secure all applications at once, but with an application security program and incremental improvements, application risks can be managed in a predictable way.

Vendor Responsibility

When it comes to vendor vs. customer responsibility, over the past several years every vendor has come to be assumed to be using some tool to scan their software for vulnerabilities. If you’re a consumer, it’s reasonable to expect that your own scan of that product should not raise any severity flags. “But if it does,” says Martin, “then your vendor isn’t doing their job. They shouldn’t be shipping the software before these vulnerabilities are mitigated. Bottom line, if you’re a consumer, you HAVE to check. Because not all vendors are as careful as they should be.”

“Some of our own customers actually send people onsite to do manual code reviews of every release,” says Martin. “From their point of view, the tools are good, but they will never catch everything. For example, a tool cannot discern ‘intent’ in a chunk of code. But a human can look at the code and quickly determine whether, say, a key term flagged by a tool as a potential vulnerability actually represents a problem or not. An automated tool can have a hard time recognizing what is safe vs. what is not in these situations.”

AppSec and the Ambiguity of Programming Languages

“Software is written in programming languages, and we communicate in a space of ambiguity,” says Luther Martin. “Our communication is not precise, not even close. We have to know what each other mean when we communicate. And it’s inherently challenging to take that ambiguous model and make a computer language that is any less ambiguous.

“To make a language more precise might make it more secure. But the way people think is incompatible with precision, the way an ideal computer language might be.”

In other words: developer, heed this fact. Don’t take chances. Test your code.

Resources

1. https://techbeacon.com/how-deal-vulnerabilities-you-didnt-know-about
2. www.tripwire.com/state-of-security/security-data-protection/cyber-security/10-essential-bug-bounty-programs-2017/
3. www.bsimm.com/
4. A Developer’s Guide to the OWASP Top 10 2017
5. https://zerodayexploit.weebly.com/types-of-disclosure.html
6. www.sans.org/reading-room/whitepapers/threats/define-responsible-disclosure-932
7. https://courses.cs.washington.edu/courses/csep590/05au/whitepaper_turnin/software_vulnerabilities_by_cencini_yu_chan.pdf
8. https://techbeacon.com/bug-bounties-pay-are-they-right-your-company
9. https://techbeacon.com/developer-secure-code-starter-kit-resources
10. https://techbeacon.com/how-developers-can-take-lead-security

Fun Fact

“In the real world, half of your development time happens before release, and half afterward,” says Luther Martin. “This is why vendors have to charge for support and maintenance. After software ships, they have bug fixes, incremental changes, and new features, and any change they make introduces potential vulnerabilities.”

Learn more at www.microfocus.com/appsecurity

“We accept today that we need to do sophisticated analysis. To not be scanning your software is simply negligent.”

LUTHER MARTIN

Distinguished Engineer and Data Security Specialist
Micro Focus


164-000022-002  |  M  |  07/19  |  © 2019 Micro Focus or one of its affiliates. Micro Focus and the Micro Focus logo, among others, are trademarks or registered trademarks of Micro Focus or its subsidiaries or affiliated companies in the United Kingdom, United States and other countries. All other  marks are the property of their respective owners.

Contact us at: www.microfocus.com

Like what you read? Share it.