FLOSS2009 Øyvind Hauge


Presentation of the paper "An Empirical Study on Selection of Open Source Software - Preliminary Results" from the ICSE workshop FLOSS2009 in Vancouver, Canada


  • 1. An Empirical Study on Selection of Open Source Software. Øyvind Hauge, Thomas Østerlie, Carl-Fredrik Sørensen, Marinela Gerea [email_address]. Citation: Øyvind Hauge, Thomas Østerlie, Carl-Fredrik Sørensen and Marinela Gerea, "An Empirical Study on Selection of Open Source Software - Preliminary Results", in: Proceedings of the 2009 ICSE Workshop on Emerging Trends in Free/Libre/Open Source Software Research and Development (FLOSS 2009), May 18th, Vancouver, Canada, pages 42-47, IEEE Computer Society, 2009. DOI: http://dx.doi.org/10.1109/FLOSS.2009.5071359

2. Main findings

  • Situation-specific constraints are far more important than general product-specific evaluation criteria

3. Software is selected based on 'first fit' rather than 'best fit'

4. This motivates a shift of focus from normative evaluation methods to the situation where the selection is performed

5. CBSD and Software selection

6.

7. Normative selection efforts

  • Many COTS and OSS selection methods and evaluation schemas

8. Engineering approach

  • Expects predefined requirements

9. Evaluation of several alternatives to find the best match

10. Weighted scoring calculates the best match

11. A few challenges
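The weighted scoring mentioned on slide 10 can be sketched as follows. This is an illustrative sketch, not taken from the paper: the criteria names, weights, and scores are hypothetical.

```python
# Illustrative sketch of normative weighted scoring: every candidate
# component is scored on each criterion, scores are combined with
# weights, and the highest total wins ('best fit').

def weighted_score(scores, weights):
    """Weighted sum of per-criterion scores (0-10 scale assumed)."""
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical criteria and weights (weights sum to 1.0).
weights = {"functionality": 0.5, "documentation": 0.3, "community": 0.2}

# Hypothetical candidate components with per-criterion scores.
candidates = {
    "component_a": {"functionality": 8, "documentation": 5, "community": 7},
    "component_b": {"functionality": 6, "documentation": 9, "community": 8},
}

# 'Best fit': evaluate every alternative, then take the best match.
best = max(candidates, key=lambda name: weighted_score(candidates[name], weights))
```

Note that this kind of ranking requires reliable scores for every alternative up front, which is exactly what the challenges listed on the surrounding slides make difficult in practice.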

  • Lack of/too many/the wrong/overlapping metrics

12. Information not always available or reliable

13. Lack of user guides

14. Time consuming

15. Few candidate components

16. Strong ties to a provider

17. Do not reflect the context

18. The study

  • How do developers select OSS components?

19. Semi-structured interviews

20. Developers in 16 companies

21. Overrepresentation of web applications

22. Identification

  • Experience

23. Recommendations

24. Monitoring familiar communities

25. Unstructured searches

26. Selection

  • Internal experience

27. External feedback/reputation 28. Prototyping 29. Situational selection

  • A few situation-specific constraints are very decisive

30. Developer dependent

  • High focus on experience

31. Rely on recommendations, familiar communities and user experiences; technology dependent

32.

33. 'First fit' rather than 'best fit'

  • Evaluation starts during the identification

34. If a likely component is found

    • Evaluated and tested

35. Adopted if it is 'good enough' or rejected if it is not

  • Knowledge from one evaluation is used in the search for and evaluation of the next component
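The 'first fit' process described on slides 33-35 can be sketched as a loop that stops at the first 'good enough' candidate instead of ranking all of them. This is an illustrative sketch, not the authors' method: the component data and the acceptance check are hypothetical.

```python
# Illustrative sketch of 'first fit' selection: candidates are evaluated
# in the order they are identified; the first one passing a 'good enough'
# check is adopted, and knowledge from rejected evaluations is carried
# into later ones.

def first_fit(candidates, good_enough):
    """Return the first candidate passing good_enough, else None."""
    lessons = []                      # knowledge carried between evaluations
    for component in candidates:      # evaluation starts during identification
        if good_enough(component, lessons):
            return component          # adopted: no further alternatives tried
        lessons.append(component)     # rejected: insight reused in next search
    return None

# Hypothetical 'good enough' check: an active community and a
# compatible license (the lessons list is available but unused here).
def good_enough(component, lessons):
    return component["active"] and component["license"] in {"MIT", "BSD"}

found = first_fit(
    [
        {"name": "lib_x", "active": False, "license": "MIT"},
        {"name": "lib_y", "active": True, "license": "MIT"},
        {"name": "lib_z", "active": True, "license": "BSD"},  # never evaluated
    ],
    good_enough,
)
```

Unlike the weighted-scoring 'best fit' of slide 10, the evaluation effort here depends on how quickly an acceptable component turns up: lib_y is adopted and lib_z is never examined.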

36.

37. Implications: Shift of focus

  • The most important constraints in each situation and their impact on candidate components

38. Evaluating whether a component is 'good enough' rather than 'the best'

39. Informal experiences from other people

40. Make the experiences of others more available