Automating good coding practices

Silicon Valley Code Camp 2010
Kevin Peterson (@kdpeterson), kaChing.com

Working Code vs. Good Code

Working code:

  • Looks good to me
  • Solves today's problem
  • Easiest to measure from the outside

Good code:

  • Looks good to you
  • Solves tomorrow's problem
  • Evaluating it is just as hard as writing it
  • Falls to the tragedy of the commons

Policies must encourage good code.

Types of Automation

  • Unit tests
  • Integration tests
  • Style and illegal-calls checks
  • Static analysis
  • Automated monitoring

Some context: kaChing

  • Financial services
  • Continuous deployment
  • Trunk stable
  • DevOps: no QA, no Ops
  • Swiss compiler geeks

Types of non-automation

  • Culture
  • Buy-in
  • Fix-it days

Unit Tests

  • JUnit
  • jMock
  • DBUnit
  • Domain-specific helpers
  • Framework-specific helpers
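Here is a minimal sketch of the stack just listed: JUnit 4 driving jMock 2. The domain names (PortfolioValuer, QuoteSource) are invented for illustration; the talk's actual domain-specific helpers appear only as screenshots on the next slides.

    import static org.junit.Assert.assertEquals;

    import org.jmock.Expectations;
    import org.jmock.Mockery;
    import org.junit.Test;

    public class PortfolioValuerTest {

      // Invented collaborator: something that knows current prices.
      public interface QuoteSource {
        double lastPrice(String symbol);
      }

      // Invented class under test: values a single position.
      static class PortfolioValuer {
        private final QuoteSource quotes;
        PortfolioValuer(QuoteSource quotes) { this.quotes = quotes; }
        double value(String symbol, int shares) {
          return quotes.lastPrice(symbol) * shares;
        }
      }

      private final Mockery context = new Mockery();

      @Test
      public void valuesAPositionAtTheLastPrice() {
        final QuoteSource quotes = context.mock(QuoteSource.class);
        context.checking(new Expectations() {{
          oneOf(quotes).lastPrice("AAPL"); will(returnValue(10.0));
        }});

        assertEquals(30.0, new PortfolioValuer(quotes).value("AAPL", 3), 0.001);
        context.assertIsSatisfied();
      }
    }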

Helpers

Global Tests

  • Using the Kawala declarative test runner
  • Style
  • Dangerous methods
  • Minimal coverage tests
  • Dependencies and visibility
  • Easy to add exceptions
  • Any failure breaks the build

Dependency Test

Java Bad Code Snippets Test

Forbidden Calls Test

Lib Dir Well Formed Test
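Those four test slides are screenshots. As an illustration only (this is not the actual Kawala API), a forbidden-calls check can be an ordinary JUnit test that greps the source tree; the scanned path and the banned calls here are assumptions:

    import static org.junit.Assert.fail;

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;
    import java.util.stream.Stream;

    import org.junit.Test;

    public class ForbiddenCallsTest {

      // Illustrative blacklist; each entry should say what to use instead.
      private static final List<String> FORBIDDEN = List.of(
          "System.out.println",  // use the logging framework
          ".printStackTrace(");  // swallowed stack traces hide failures

      @Test
      public void sourceTreeContainsNoForbiddenCalls() throws IOException {
        try (Stream<Path> files = Files.walk(Paths.get("src/main/java"))) {
          for (Path p : files.filter(f -> f.toString().endsWith(".java")).toList()) {
            String source = Files.readString(p);
            for (String call : FORBIDDEN) {
              if (source.contains(call)) {
                fail(p + " contains forbidden call " + call);
              }
            }
          }
        }
      }
    }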

Open Source

  • code.google.com/p/kawala
  • AbstractDeclarativeTestRunner
  • BadCodeSnippetsRunner
  • DependencyTestRunner
  • Assert, helpers, test suite builders

Precommit Tests

  • Dependencies
  • Forbidden Calls
  • Visibility
  • Java Code Snippets
  • Scala Code Snippets
  • Json Entities
  • Queries Are Tested
  • Front end calls
  • Injection
  • Username-keyed tests
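The dependency check can be sketched in the same grep-over-sources style (again illustrative, not Kawala's real DependencyTestRunner; the package names and the layering rule are invented):

    import static org.junit.Assert.assertFalse;

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.stream.Stream;

    import org.junit.Test;

    public class DependencyTest {

      @Test
      public void frontendNeverImportsPersistence() throws IOException {
        // Hypothetical layering rule: UI code may not reach into storage code.
        try (Stream<Path> files =
            Files.walk(Paths.get("src/main/java/com/example/frontend"))) {
          for (Path p : files.filter(f -> f.toString().endsWith(".java")).toList()) {
            String source = Files.readString(p);
            assertFalse(p + " depends on the persistence layer",
                source.contains("import com.example.persistence."));
          }
        }
      }
    }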

Static Analysis

  • FindBugs: all clear
  • PMD: 175 warnings right now
  • Easy to add exceptions
  • Runs as a separate build (we're working on it)
  • Adding a warning breaks the build (Hudson)

Hudson Analysis Plugin

FindBugs Example

PMD Example

PMD Fix

How to add exceptions

Step two: Give each member of the team two cards. Go over the list of rules with the team and have them vote on them. Voting is done using the cards, where:

  • No cards: I think the rule is stupid and we should filter it out in findbugsExclude.xml.
  • One card: The rule is important but not critical.
  • Two cards: The rule is super important and we should fix it right away.

(David from testdriven.com, via Eishay.)

How to add exceptions II

  • Add the exceptions you want
  • No oversight, no questions asked
  • Lets you be strict with rules
  • Makes you consider whether adding the exception is right

Where do exceptions go?

  • Annotate in-place with @SuppressWarnings: PMD, VisibilityTest
  • All in one place: most of our tests, FindBugs
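For the annotate-in-place bucket, PMD honors Java's standard @SuppressWarnings annotation with "PMD."-prefixed rule names. The class and field below are invented for illustration:

    public class ReportJob {

      // This field is only read reflectively by the report framework, so
      // PMD's UnusedPrivateField rule fires; suppress it exactly here and
      // leave the rule strict everywhere else.
      @SuppressWarnings("PMD.UnusedPrivateField")
      private String reflectivelyBoundName = "daily-signoff";
    }

FindBugs exceptions, by contrast, live in one central exclude file, which is why they land in the "all in one place" bucket.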

Monitoring

  • It doesn't end with the build
  • Self tests on startup
  • Nagios
  • ESP
  • Daily report signoff (business rules)
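The deck doesn't show what a startup self test looks like, so here is a hedged sketch of the idea, with invented names: a service checks its own dependencies before it starts serving traffic, and dies loudly if any check fails.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class StartupSelfTest {

      private final String jdbcUrl;

      public StartupSelfTest(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
      }

      /** Throws if any check fails, so the process exits instead of serving. */
      public void runOrDie() {
        checkDatabaseReachable();
        // ... further checks: message queue, downstream services, config sanity
      }

      private void checkDatabaseReachable() {
        try (Connection c = DriverManager.getConnection(jdbcUrl)) {
          if (!c.isValid(5)) {
            throw new IllegalStateException("database connection is not valid");
          }
        } catch (SQLException e) {
          throw new IllegalStateException("self test failed: cannot reach database", e);
        }
      }
    }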

Monitoring Example: ESP

Dev-Ops

An engineer with a pager is an engineer passionate about code quality.

Summary

  • Unit tests: Does my code do what I intended it to do?
  • Global tests: Is my code likely to break?
  • Monitoring: Is my code actually working right now?

What we don't do (much)

Code Coverage

  • QueriesAreTestedTest
  • No tests email
  • No Emma or Cobertura in the build process
  • Ad hoc use of Emma or MoreUnit
  • Are coverage numbers useful?

Integration Tests

  • Mostly manual at this time
  • If we can't run it every commit, does it have value?

Formal Code Reviews

  • Are humans any better than automation?
  • Enough better that it's worth the time cost?
  • Post-commit, pre-deployment SQL review
  • Informal "hey, this is hairy" reviews
  • Pair on difficult components

Writing Comments

We do not write a lot of comments. Since comments are not executable, they tend to get out of date. Well-written tests are live specs that explain what, how, and why. Get used to reading tests like you read English, and you'll be just fine. There are two major exceptions to the sparse-comments rule:

  • open-sourced code
  • algorithmic details

See the kaChing Java Coding Style.

What we don't do (and never will)

QA

  • No QA department
  • "Throw it over the wall" leads to a false sense of accomplishment
  • Fixing QA-found bugs seems productive
  • But it's actually a sign you screwed up

What we might do soon

Build Queue

  • Commit, see if tests pass
  • Require #rollback or #buildfix to commit (see the sketch after this list)
  • Strong social pressure to not break the build
  • Trade-off: breaking the build vs. holding up other engineers
  • Considering moving to a build queue
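As referenced above, the #rollback/#buildfix rule boils down to a small predicate. This sketch (class and method names invented here) captures the intended behavior: when the build is red, only commits tagged as fixes or rollbacks get through.

    // Hypothetical sketch of the commit gate implied by the rule above:
    // when the build is broken, only commits explicitly tagged #rollback
    // or #buildfix are allowed.
    public final class CommitGate {

      private CommitGate() {}

      public static boolean isCommitAllowed(boolean buildIsGreen, String commitMessage) {
        if (buildIsGreen) {
          return true;  // green build: anyone may commit
        }
        return commitMessage.contains("#rollback")
            || commitMessage.contains("#buildfix");
      }
    }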

By-request Code Reviews

  • Probably depends on the build queue
  • Gerrit?
  • Dependent on a switch to git?

Better staging environment

  • Hard for us to test data migration issues
  • Hard to test inter-service calls
  • Errors don't occur on dev boxes
  • Front end has it, with Selenium testing

One more thing

Standardize tools

  • Check in your Eclipse project
  • Configure code formatting
  • Share your templates
  • Organize imports on save
  • Keeps your history clean

Kevin Peterson
kaChing.com
@kdpeterson
http://eng.kaching.com