Parasoft Copyright © 2016
Your Presenters
Arthur Hicken is Chief Evangelist at Parasoft where
he has been involved in automating various software
development and testing practices for over 20 years.
He has worked on projects including cybersecurity,
database development, the software development
lifecycle, web publishing and monitoring, and
integration with legacy systems.
Follow him @codecurmudgeon
Alan Zeichick is Principal Analyst at Camden Associates,
where he focuses on networking technologies, cloud
computing, software development and telecom. In the software
development industry, Alan is well known as the founding
Editor-in-Chief of SD Times and of Software Test &
Performance Magazine. A former mainframe jockey, Alan has
been researching, writing, speaking and consulting on cutting-
edge information technology for more than three decades.
Follow him @zeichick
Agenda
Policy enforcement
Reducing defects during coding
Effective techniques for acceptance testing
Using metrics and analytics to measure risk
Using SDLC analytics
Open and hide your control panel
Join audio:
• Choose “Mic & Speakers” to use
VoIP
• Choose “Telephone” and dial
using the information provided
Submit questions and comments via
the Questions panel
Note: Today’s presentation is being
recorded and will be provided within
48 hours.
Your Participation
GoToWebinar Housekeeping
Yet security is only one kind of defect
▪ Software everywhere
▪ Many cars have more than 100 ECUs
▪ Tens of millions of lines of code
▪ No visibility into embedded software from vendors
▪ More software than a fighter jet!
It’s a rolling data center
“A typical luxury sedan now contains about 100 megabytes of code that controls 50 to 70 computers inside the car, most of which communicate over a shared internal network.”
- MIT Technology Review
Complexity breeds errors, bugs, recalls
• Software errors cost about $350 per car in 2005 (IEEE)
• Auto problems cost $,$$$,$$$,$$$
• NHTSA estimates $100 per vehicle
• $3 billion annually for recalls/fixes
• Many recent recalls > $1 billion
• Liability above and beyond repairs
• Software is an ever growing percent
• Consequences are serious
We’re not talking about Pokemon Go
▪ Software quality in automotive is life-safety critical
▪ Added complexity introduces new vulnerabilities and safety concerns
▪ Car makers can’t audit or review every line of code
▪ The threat of failure involves lives
▪ It’s clear: We can’t simply treat QA as in the past — or as primarily a hardware issue
Software recalls are a growing problem
▪ With the proliferation of ECUs, the problem is exploding
▪ From 2005-2012, 32 software recalls affected 3.6 million vehicles
▪ From 2012-June 2015, 63 software recalls affected 6.4 million vehicles
▪ Software-related recalls rose from less than 5% of all recalls in 2011 to almost 15% in 2015
▪ In 2011, only 3 software components were recalled; in 2015, it rose to 20 components
It’s getting worse, and will keep doing so
▪ Defects? Design flaws? Security vulnerabilities?
▪ Does it matter?
August 2016: VW Car-Net shown to be capturing and uploading vehicle sensor data
July 2016: Several incidents with Tesla may be attributable to faulty software
June 2016: Apple patents iPhone Bluetooth method to unlock/start cars remotely
June 2016: Hackers demo remote takeover of Mitsubishi Outlander Plug-In Hybrid
Poll: Code quality driver
What is the biggest driver of automotive code quality?
Adherence to industry standards like MISRA
Peer review of all source code
Quality contracts with supply chain
A solid process for the SDLC
Context-rich metrics and metadata analysis
Digging deep into policy, practices, metrics
Policies cover the entire SDLC, not just coding and testing
Policies should be defined by an architect or group of architects
Architects and managers define the practices required to meet those policies
The organization needs to find tools to automate practices and policy enforcement
Tools like Policy Center can help by syncing what we are doing with what we are measuring with external standards, like ISO 26262 or MISRA
Why? In order to set goals and measure results
Software verification in many stages
Coding standards
Unit testing
Integration testing
Functional testing
Memory error detection
Coverage analysis
Regression testing
Workflow automation
Peer code review & document inspections
ISO 26262 Software Tools Map
Number    Description                                                      Tool Functionality
5.4.6     Correctness of software design and implementation                Static analysis
8.4.4     Design principles for software unit design and implementation    Static analysis
8.4.5     Verification methods                                             Static analysis, flow analysis, peer review
9.4.1-2   Unit test execution                                              Unit testing
9.4.3-4   Unit test creation                                               Unit testing, coverage
9.4.5     Test requirements                                                Requirements management
10.4.2    Integration tests                                                Test execution, coverage
10.4.5    Completeness of integration testing                              Coverage
10.4.7    Requirements for integration test environment                    Stubs, virtualization/emulation
Where to start
1. Measure software test coverage
2. Improve test coverage
3. Static analysis is an essential tactic
4. Implement preventative coding standards
5. Use runtime memory detection in the test lab
6. Link requirements to code and tests and results
7. Get results data back from supply chain vendors
8. Analyze data collected during development
Poll: When to catch defects
When is the best time to catch a safety-critical software defect?
During the design/architecture phase
While writing the code
When code is checked-in to the repository
During peer review
During testing
Managing all that complexity
Prevention of software flaws
Coding standards
• MISRA – software coding standards
• ISO 16949 – automotive quality
• ISO 26262 – functional safety
• ISO 33001 – process assessment for software development
Runtime error detection
Integrated development testing results
• Coding standards, unit testing, coverage, requirements, static analysis…
Visibility & traceability
Value of coding standards
The MISRA Guidelines were written specifically for use in systems that
contain a safety aspect to them. The guidelines address potentially unsafe
C language features, and provide programming rules to avoid those
pitfalls.
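The kind of pitfall MISRA targets can be made concrete with a small C sketch. The function names and values below are invented for illustration; the fall-through example corresponds to the MISRA C:2012 requirement that every switch clause end in an unconditional break:

```c
/* Non-compliant sketch: a missing break silently merges two cases,
   so mode 0 returns the limit intended for mode 1. */
static int gear_limit_buggy(int mode) {
    int limit = 0;
    switch (mode) {
        case 0:
            limit = 3;      /* BUG: missing break -- falls through */
        case 1:
            limit = 5;
            break;
        default:
            limit = 6;
            break;
    }
    return limit;
}

/* Compliant form: every clause ends with an unconditional break. */
static int gear_limit(int mode) {
    int limit;
    switch (mode) {
        case 0:
            limit = 3;
            break;
        case 1:
            limit = 5;
            break;
        default:
            limit = 6;
            break;
    }
    return limit;
}
```

A static analysis rule flags the fall-through at the desk, before any test ever executes the merged path.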
Static analysis means prevention
Relationship of automated analysis:
• Preventative static analysis
• Flow analysis
• Runtime error detection
Uninitialized memory example:
• Runtime detection will find it IF the test suite is thorough
• Flow analysis may find it, depending on complexity
• Pattern to prevent it: initialize variables upon declaration
Much of MISRA is designed to prevent rather than detect
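The prevention pattern above can be sketched in C (function and data invented for illustration). A version that assigned `total` only inside the loop would return an indeterminate value when `n == 0`, a bug runtime detection catches only if a test exercises the empty-input path; initializing at declaration removes the defect class outright:

```c
/* Preventive pattern: initialize at declaration.
   Without '= 0', the n == 0 case would read an uninitialized
   variable -- undefined behavior that runtime memory detection
   finds only when some test actually hits the empty-input path. */
static int sum_readings(const int *readings, int n) {
    int total = 0;              /* rule: initialize on declaration */
    for (int i = 0; i < n; i++) {
        total += readings[i];
    }
    return total;               /* well-defined even when n == 0 */
}
```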
Standards reduce supply chain risks
Standards and static analysis applied properly prevent errors, reduce risk
Integrated results provide control, measurement, traceability, accountability
Cost of good software is less than the harm caused by a recall or other failure
What’s Needed to Control Risk
A clear sense of ownership and responsibility for quality
Policies that define quality, such as test coverage, conformance with standards, open source licenses
Practices that enforce those policies – like automated static tests or code reviews
Metrics that show compliance – like 90% code coverage in current tests, or 28% failure rate
Definition of “done” – like zero tasks/user stories incomplete, unit test failures within 5%, etc.
Definition of “done” will change at different times/phases
Validating Requirements
Plan testing for defined Requirements
Test scenario definitions to document use-cases
Confirm Requirement/Test associations in the Requirement Test Matrix report
Automated tests for Requirement Definitions
Code level unit tests
Functional tests
Associate tests with Requirements using annotations
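At the code level, the test-to-requirement association might look like the sketch below. The requirement ID, the `@req` comment format, and the function are hypothetical; commercial tools provide their own annotation mechanisms:

```c
#include <assert.h>

/* Unit under test: clamp a commanded speed to the allowed range. */
static int clamp_speed(int kmh) {
    if (kmh < 0) {
        return 0;
    }
    if (kmh > 130) {
        return 130;
    }
    return kmh;
}

/* @req REQ-SPEED-001 "Commanded speed shall stay within 0..130 km/h"
   (the @req tag is an assumption for this sketch; a reporting tool
   could scan such tags to build the Requirement/Test Matrix) */
static void test_clamp_speed(void) {
    assert(clamp_speed(-5)  == 0);
    assert(clamp_speed(80)  == 80);
    assert(clamp_speed(200) == 130);
}
```

Once every test carries a requirement tag, untagged tests and untested requirements both become reportable gaps.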
Acceptance Testing
Verify that software development policy is working
Verify that requirements are met
Check for possible failures / flaws
Tests from the outside in
Variety of real-world scenarios
Include security “penetration” testing
Contractual Software Verification
Coding standards
Unit testing
Memory error detection
Coverage analysis
Regression testing
Peer code review & document inspections
What Can You Measure
Code churn
Field bugs
Static analysis findings
Test failures
Coverage
Performance
Counts (lines, files, …)
Bug Arrival Rates
Example: Code Metrics
Do problems indicate simple coder error, or something more?
Prioritize by understanding impacts of policy violations
Understand where complexity means poor design, hard to validate performance or cover with static/dynamic tests
Identify areas with good (or bad) ROI for refactoring
After all, refactoring is not only costly (time and money) but also introduces new risk
Metrics with a Capital M
Some metrics have taken on a life of their own
Complexity
Cohesion
Coupling
Maintainability
KLOC
There are no silver bullets, no single metric that defines “good” vs. “bad” software
Context comes from metadata
Understand the meaning of the code, modules, tests
Functional vs non-functional requirements
And externalities, such as licenses, budgets, industry standards, as well as audits
For every task there is an assignment (ownership)
As well as budgets, priorities and risk assessments
There may also be specific security rules
Checking Your Work
Did you get the right numbers?
Are they going in the right direction?
Are you measuring enough?
Are unexpected things happening?
Are the measurements automatic?
Manual estimates are inconsistent
Multiple layers of manual collection yield compound rounding errors
Poll: What kind of testing?
Which testing is most valuable in software verification?
Unit testing
Functional testing
Static code analysis
Memory leak detection
Penetration testing
Dashboards Vs Process Intelligence
2001 Ford Explorer
• Isolated data points
• No priority
• Binary warnings (Check Engine)
• Nothing
• All

2016 Chevy Volt
• Multi-variate analysis
• Customizable
• Engine efficiency
• Valuable data (Range)
• Real-time feedback
Harnessing “Big” Data
Aggregate data
Correlate data
Mine data
Create
Reports
Dashboards
Tasks
Alerts
Continuous testing/delivery/release
Decisions Based on Metrics and Metadata
What’s the priority and cost/benefit of fixing now vs. fixing later vs. letting it slide?
Are there bigger problems with code complexity?
What about integration with external systems (like TFS)?
Architects and top managers configure parameters, to determine how rules are enforced
After all, it’s only partly about functional requirements for the code; non-functional requirements are equally important – or perhaps even more so.
Policies, and thus practices, must be configured to enforce both functional and non-functional requirements
Parasoft Support for Automotive
Policy definition management
Requirement definition and tracking
Static analysis
Unit test
Peer review
Runtime error detection
Coverage
Conclusions
Safety-critical automotive software issues are going to get worse
• Due to ever-increasing requirements, faster processors, and greater connectivity
This affects software from in-house and the vast supply chain
• The right tests can help make informed decisions faster
Coding standards and practices can drive policies
• And lead to definitions of “done”
Code metrics can help manage risk…
• If they have the right metadata context
• And if integrated across SDLC practices and systems
The result: transparency, feedback, supervision — and managed quality
Blog: http://alm.parasoft.com
Web: http://www.parasoft.com/jsp/resources
Facebook: https://www.facebook.com/parasoftcorporation
Twitter: @Parasoft @CodeCurmudgeon @Zeichick
LinkedIn: http://www.linkedin.com/company/parasoft
Google+ Community: Static Analysis for Fun and Profit
Webinar: RX for FDA Software Compliance – Aug 25th
IoT APIs session – API World – San Jose, CA – Sep 13th
Managing Auto Supply Chain Risk – Automotive Software Kongress - Germany Sep 21-22