
Rating Scales for Collective Intelligence in Innovation Communities

Why Quick and Easy Decision Making Does Not Get it Right

Christoph Riedl, Ivo Blohm, Jan Marco Leimeister, Helmut Krcmar

1. Problem Setting

So, innovation communities produce large pools of ideas… how do you select the best ones?

2. Theory Background

Research Questions

Which rating mechanisms perform best for selecting innovation ideas?

Dimensions of Idea Quality

Idea quality comprises four dimensions:

• Novelty: an idea's originality and innovativeness

• Feasibility: the ease of transforming an idea into a new product

• Relevance: an idea's value for the organization

• Elaboration: an idea's concretization and maturity

Source: [1, 2, 3]

3. Research Model

Research Model

H1: The granularity of the rating scale positively influences its rating accuracy.

H2: The granularity of the rating scale positively influences the users' satisfaction with their ratings.

[Research model: Rating Scale → Judgment Accuracy (H1+); Rating Scale → Rating Satisfaction (H2+)]

Research Model

H3a: User expertise moderates the relationship between rating scale granularity and rating accuracy such that the positive relationship will be weakened for high levels of user expertise and strengthened for low levels of user expertise.

H3b: User expertise moderates the relationship between rating scale granularity and rating satisfaction such that the positive relationship will be strengthened for high levels of user expertise and weakened for low levels of user expertise.

[Research model: Rating Scale → Judgment Accuracy (H1+); Rating Scale → Rating Satisfaction (H2+); User Expertise moderates both paths (H3a, H3b)]

Research Methodology

• Pool of 24 ideas from real-world idea competition

• Multi-method study

• Web-based experiment

• Survey measuring rating satisfaction of participants

• Independent expert ratings (N = 7 experts) of idea quality as the baseline, based on the Consensual Assessment Technique [1, 2] (see the aggregation sketch below)
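The expert baseline rests on aggregating the seven independent expert ratings per idea. As a minimal illustration only (the slides do not spell out the aggregation or the reliability check), the following Python sketch averages hypothetical expert scores per idea and reports inter-rater consistency via Cronbach's alpha; all data, set sizes, and variable names are assumptions, not the authors' exact procedure.

```python
import numpy as np

# Hypothetical expert ratings: rows = 24 ideas, columns = 7 independent judges,
# values on some quality scale (e.g., 1-7). Real data would come from the CAT rating round.
rng = np.random.default_rng(0)
expert_ratings = rng.integers(1, 8, size=(24, 7)).astype(float)

# Baseline idea quality: mean expert score per idea (simple CAT-style aggregation).
baseline_quality = expert_ratings.mean(axis=1)

# Cronbach's alpha across judges as a rough inter-rater consistency check
# (assumption: the paper may report a different reliability statistic).
k = expert_ratings.shape[1]
item_variances = expert_ratings.var(axis=0, ddof=1).sum()
total_variance = expert_ratings.sum(axis=1).var(ddof=1)
cronbach_alpha = (k / (k - 1)) * (1 - item_variances / total_variance)

# Top and bottom ideas according to the expert baseline (illustrative cut of 6 each).
order = np.argsort(baseline_quality)
bottom_ideas, top_ideas = order[:6], order[-6:]
print(f"Cronbach's alpha = {cronbach_alpha:.2f}")
print("Expert top ideas:", sorted(top_ideas.tolist()))
print("Expert bottom ideas:", sorted(bottom_ideas.tolist()))
```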

4. Experiment

Participant Demographics: N = 313

[Screenshot of the rating system]

Research Design

Three experimental conditions (rating scales):

• Promote/Demote Rating

• Complex Rating

• 5-Star Rating


5. Results

Correct Identification of Good and Bad Ideas

[Bar chart: mean number of correctly identified ideas per rating scale: 5.1, 3.9, and 2.5]

Error Identifying Top Ideas as Good and Bottom Ideas as Bad

[Bar chart: mean number of wrongly identified ideas per rating scale: 4.9, 3.6, and 1.0]

Rating Accuracy (Fit-Score)

[Bar chart: mean adjusted fit score per rating scale: 0.2, 0.3, and 1.5]
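The slides report an "adjusted fit score" as the accuracy measure but do not define it. The sketch below shows one plausible scoring scheme against the expert baseline: rewarding selected ideas that experts placed in the top set and penalising selections from the expert bottom set. The set sizes, weighting, and function name are illustrative assumptions, not the paper's formula.

```python
from typing import Sequence

def fit_score(user_top: Sequence[int], expert_top: set, expert_bottom: set) -> int:
    """Illustrative accuracy score: +1 for each selected idea in the expert top set,
    -1 for each selected idea in the expert bottom set (assumed scheme, not the
    paper's exact 'adjusted fit score')."""
    hits = sum(1 for idea in user_top if idea in expert_top)
    misses = sum(1 for idea in user_top if idea in expert_bottom)
    return hits - misses

# Example: experts rank 24 ideas; the top and bottom 6 form the reference sets.
expert_top = {2, 5, 7, 11, 14, 20}
expert_bottom = {0, 3, 9, 13, 18, 22}
user_selection = [2, 5, 9, 11, 17, 20]   # six ideas a participant rated highest
print(fit_score(user_selection, expert_top, expert_bottom))  # 4 hits - 1 miss = 3
```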

Participants’ Rating Satisfaction

[Bar chart: mean rating satisfaction per rating scale: 3.2, 3.9, and 3.7]

ANOVA Results

Panel A. Effect of Rating Scale on Rating Accuracy

Source          df   Sum of Squares  Mean Squares  F        Hypothesis  Supported
Between Groups  2    121.23          60.61         9.05***  H1          Yes
Within Groups   310  2075.77         6.70
Total           312  2196.99

Panel B. Effect of Rating Scale on Rating Satisfaction

Source          df   Sum of Squares  Mean Squares  F        Hypothesis  Supported
Between Groups  2    7.44            3.72          4.52***  H2          Yes
Within Groups   310  253.36          0.82
Total           312  270.80

N = 313, *** significant with p < 0.001, ** significant with p < 0.01, * significant with p < 0.05

ANOVA Results

Post-hoc comparisons: the complex rating scale leads to significantly higher rating accuracy than the promote/demote rating and the 5-star rating (p < 0.001).
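For readers who want to run this kind of analysis on their own rating data, here is a minimal sketch of a one-way ANOVA across the three scale conditions followed by Tukey HSD post-hoc comparisons. The group data are placeholders (means and spread loosely matched to the reported figures), not the study's data, and the group sizes are only assumed to split 313 participants roughly evenly.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder accuracy scores for the three conditions (not the study's data):
promote_demote = rng.normal(0.2, 2.5, 104)
five_star      = rng.normal(0.3, 2.5, 105)
complex_scale  = rng.normal(1.5, 2.5, 104)

# One-way ANOVA: does rating accuracy differ between rating-scale conditions?
f_stat, p_value = stats.f_oneway(promote_demote, five_star, complex_scale)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey HSD post-hoc comparisons between all pairs of conditions
# (requires a recent SciPy version).
posthoc = stats.tukey_hsd(promote_demote, five_star, complex_scale)
print(posthoc)
```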

Regression Results

Panel A. Moderating Effect of User Expertise on Rating Scale and Rating Accuracy

Step  Independent Variables                      R²      ΔR²      Hypothesis  Supported
1     Expertise                                  0.02    -
2     Dummy 1, Dummy 2                           0.11**  0.09***
3     Expertise x Dummy 1, Expertise x Dummy 2   0.12**  0.01     H3a         No

Panel B. Moderating Effect of User Expertise on Rating Scale and Rating Satisfaction

Step  Independent Variables                      R²      ΔR²      Hypothesis  Supported
1     Expertise                                  0.03    -
2     Dummy 1, Dummy 2                           0.08**  0.05**
3     Expertise x Dummy 1, Expertise x Dummy 2   0.10*   0.02     H3b         No

N = 313, *** significant with p < 0.001, ** significant with p < 0.01, * significant with p < 0.05
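The moderation test is a three-step hierarchical regression: expertise first, then two dummy variables coding the three scale conditions, then the expertise-by-dummy interactions, with ΔR² tracked per step. A minimal sketch of that procedure is shown below; the simulated data, reference category, and variable names are assumptions for illustration, not the authors' dataset or coding scheme.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 313
# Placeholder data: three scale conditions, a continuous expertise measure,
# and a rating-accuracy outcome (not the study's data).
df = pd.DataFrame({
    "scale": rng.choice(["promote_demote", "five_star", "complex"], size=n),
    "expertise": rng.normal(0, 1, n),
})
# Dummy-code the scale conditions against promote/demote as the reference group.
df["dummy1"] = (df["scale"] == "five_star").astype(int)
df["dummy2"] = (df["scale"] == "complex").astype(int)
df["accuracy"] = 0.2 + 1.3 * df["dummy2"] + rng.normal(0, 2.5, n)

# Step 1: expertise only; Step 2: add scale dummies; Step 3: add interaction terms.
steps = [
    "accuracy ~ expertise",
    "accuracy ~ expertise + dummy1 + dummy2",
    "accuracy ~ expertise + dummy1 + dummy2 + expertise:dummy1 + expertise:dummy2",
]
r2_prev = 0.0
for i, formula in enumerate(steps, start=1):
    model = smf.ols(formula, data=df).fit()
    print(f"Step {i}: R² = {model.rsquared:.3f}, ΔR² = {model.rsquared - r2_prev:.3f}")
    r2_prev = model.rsquared
```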


Regression Results

There is neither a direct nor a moderating effect of user expertise.

The scale with the highest rating accuracy and rating satisfaction should therefore be used for all user groups.

6. Contribution

Limitations

• Experts as baseline

• Forced choice

Theory

• Theory Building – Collective Intelligence

• Theory Extension – Creativity Research

Practice

• Design recommendation

Rating Scales for Collective Intelligence in Innovation Communities

Christoph Riedl, Ivo Blohm, Jan Marco Leimeister, Helmut Krcmar

[email protected]

twitter: @criedl

Image credits:

Title background: Author collection

Starbucks Idea: http://mystarbucksidea.force.com/

The Thinker: http://www.flickr.com/photos/tmartin/32010732/

Information Overload: http://www.flickr.com/photos/verbeeldingskr8/3638834128/#/

Scientists: http://www.flickr.com/photos/marsdd/2986989396/

Reading girl: http://www.flickr.com/photos/12392252@N03/2482835894/

User: http://blog.mozilla.com/metrics/files/2009/07/voice_of_user2.jpg

Male Icon: http://icons.mysitemyway.com/wp-content/gallery/whitewashed-star-patterned-icons-symbols-shapes/131821-whitewashed-star-patterned-icon-symbols-shapes-male-symbol1-sc48.png

Harvard University: http://gallery.hd.org/_exhibits/places-and-sights/_more1999/_more05/US-MA-Cambridge-Harvard-University-red-brick-building-sunshine-grass-lawn-students-1-AJHD.jpg

Notebook scribbles: http://www.flickr.com/photos/cherryboppy/4812211497/

La Cuidad: http://www.flickr.com/photos/37645476@N05/3488148351/

Theory and Practice: http://www.flickr.com/photos/arenamontanus/2766579982

Papers:

[1] Amabile, T. M. (1996). Creativity in Context: Update to the Social Psychology of Creativity. 1st edition, Westview Press, Oxford, UK.

[2] Blohm, I., Bretschneider, U., Leimeister, J. M. and Krcmar, H. (2010). Does collaboration among participants lead to better ideas in IT-based idea competitions? An empirical investigation. In Proceedings of the 43rd Hawaii International Conference on System Sciences, Kauai, Hawaii.

[3] Dean, D. L., Hender, J. M., Rodgers, T. L. and Santanen, E. L. (2006). Identifying quality, novel, and creative ideas: Constructs and scales for idea evaluation. Journal of the Association for Information Systems, 7 (10), 646-698.