Folksonomies: In General and in Libraries


Workshop on "Folksonomies" held at the National Library Board in Singapore, 2009.


1

Workshop on

FOLKSONOMIES

Singapore, February 9, 2009

2

We are very proud to hold a workshop in the "informational city" of Singapore.

3

Isabella Peters

Researcher, Dept. of Information Science, Heinrich-Heine-University Düsseldorf, Germany

Lectures on Web 2.0 Services and Information Retrieval

Main research area: Folksonomies in Knowledge Representation and Information Retrieval

Wolfgang G. Stock

Professor, Head of Dept. of Information Science, Heinrich-Heine-University Düsseldorf, Germany

Lectures on Information Retrieval, Knowledge Representation, Informetrics and Information Market

Main research areas: Folksonomies, Emotional Information Retrieval and Informetrics of LIS Journals

4

Social Network for our Workshop

• literature, slides, links
• Start discussions! Start a forum! Invite more people! Blog! etc.

http://taggingworkshop.ning.com/

5

Agenda

1. Folksonomies – Indexing without rules

2. Folksonomies in information services and library catalogues

Short Break

3. Folksonomies and knowledge representation

4. Folksonomies and information retrieval

Short Break

5. Tag gardening for folksonomy enrichment and maintenance

6. Find "more like me!" The social function of a folksonomy

6

Lesson 1

Folksonomies – Indexing without Rules

7

Indexing without Rules

“Anything goes”

“Against method”, 1975 (Paul K. Feyerabend, Austro-American philosopher)

Tagging
• no rules
• no methods – or even against methods
• indexing a single document
  – synonyms – why not? (New York – NY – Big Apple – …)
  – homonyms – never heard of them! (not: Java [programming language] vs. Java [island], but just Java)
  – translations – why not? (Singapore – Singapur – …)
  – typing errors – nobody is perfect (Syngapur)
  – hierarchical relations (hyponymy) – why not? (Düsseldorf – North Rhine-Westphalia – Germany)
  – hierarchical relations (meronymy) – why not? (tree – branch – leaf)

8

Indexing

9

Prosumer

“Prosumer”

• introduced in 1980 by Alvin Toffler (American writer and futurist) in "The Third Wave"

• prosumerism: a characteristic property of the knowledge society

Producer + Consumer → Prosumer

10

Tri-partite System

• document (resource)
• prosumer (user)
• tag

11

Cognitive Indexing Processes

Source: Sinha (2005)

12

Library 2.0

13

14

15

Network Economy: Positive Feedback Loop

[Figure: number of active (tagging) users over time, with a critical mass threshold]

Feedback loop: new users come along → the value of the network increases → the number of users of the network increases → new users come along → …

"success breeds success"

only one standard (in a technological area)

16

[Figure: number of active (tagging) users over time – red: competitor, yellow: your Library 2.0 service; phases: "entry", "take off", "combat area", "saturation"; positive feedback after the critical mass is reached]

17

How to Become a Standard (Part 1)

• marketing
  – product
    • invite the catalogue's users to tag
    • make it easy!
    • no additional password
  – price
    • "price" = the user's time
    • Save the time of the user! (Ranganathan): time for tagging – time for searching
  – place
    • add the folksonomy to a well-known service (e.g., your library catalogue)

18

How to Become a Standard (Part 2)

• marketing (continued)
  – promotion
    • advertising / public relations
    • communicate the benefits: "Search with your own tags!" – "Create your personomy!" – "Share your knowledge!" – "Find more – and better – resources!" – "Find other users with interests similar to yours!" – …
    • awards: "Tagger of the Week" – "Super-Poster" – "Best Tagger Award" / prizes (e.g., books)
  – personnel
    • especially in the entry phase: your staff has to tag
    • for promotion
    • always: software specialists
  – processes
    • process management: knowledge representation tasks (e.g., tag gardening)
    • process management: information retrieval tasks (e.g., relevance ranking)

19

Lesson 2

Folksonomies in Information Services and Library Catalogues

20

Narrow Folksonomies

• only one tagger (the content creator)

• no multiple tagging

• example: YouTube

Tags

21

Extended Narrow Folksonomies

• more than one tagger

• no multiple tagging

• example: Flickr

Source: Vander Wal (2005)

Tags

Add Tags Option

22

Broad Folksonomies

• more than one tagger

• multiple tagging

• example: Delicious

Source: Vander Wal (2005)

Tags

23

Tagging of OPACs

2 possibilities:

• 1) tagging of resources within the library’s website

• 2) tagging of resources outside the library’s firewall

24

Tagging of OPACs within the Library's Website: PennTags

http://tags.library.upenn.edu/

25

Tagging of OPACs within the Library's Website: Ann Arbor District Library

http://www.aadl.org/catalog

26

Tagging of OPACs within the Library's Website: University Library Hildesheim

http://www.uni-hildesheim.de/mybib/all_tags

27

Tagging of OPACs within the Library's Website

• advantages:

– user behaviour can be directly observed and exploited for the library's own applications

– the knowledge organization system (KOS) in use can profit from user behaviour and user language

– users will be "attracted" to the library

– the library will appear "trendy"

28

Tagging of OPACs within the Library's Website

• disadvantages:

– development and implementation (costs and manpower) of the tagging service have to be borne by the library

– if only users may tag: librarians may lose their work motivation or may feel useless

– "lock-in" effect for users

29

Tagging of Resources Outside the Library's Firewall: HEIDI

http://katalog.ub.uni-heidelberg.de/cgi-bin/search.cgi

30

Tagging of Resources Outside the Library's Firewall: LibraryThing

http://www.librarything.com/search

31

Tagging of Resources Outside the Library's Firewall: BibSonomy

http://www.bibsonomy.org/

32

Tagging of Resources Outside the Library's Firewall

• advantages:

– development and implementation (costs and manpower) of the tagging service do not have to be borne by the library

– the library may profit from the "know-how" of the provider of the tagging system

– users may profit from the tagging activities of hundreds of other users → no lock-in

– the library appears "trendy"

33

Tagging of Resources Outside the Library's Firewall

• disadvantages:

– user behaviour cannot be observed or exploited

– your users support another provider's tagging service

– the KOS in use cannot profit from user behaviour

34

Social OPAC – Considerations to be made in advance

according to Furner (2007)

during indexing

• the degree of restriction (if any) placed on the number and/or combination of tags that a tagger may assign to a given resource;

• the degree of restriction (if any) placed on the tagger’s choice and form of tags;

• the provision (if any) of context-sensitive suggestions for tags, or for facets that the tagger may wish to consider;

35

Social OPAC – Considerations to be made in advance

according to Furner (2007)

during indexing

• the provision of access (if any) to structured vocabularies of tags;

• the provision of access (if any) to lists or clouds of most frequently- or recently-assigned tags;

• the provision of online access to the full content of resources.

36

Social OPAC – Considerations to be made in advance

according to Furner (2007) during retrieval

• the degree of restriction (if any) placed on the number and/or combination of tags that a searcher may use in a given query;

• the degree of restriction (if any) placed on the searcher’s choice and form of tags;

• the provision (if any) of context-sensitive suggestions for tags, or for facets that the searcher may wish to consider;

37

Social OPAC – Considerations to be made in advance

according to Furner (2007) during retrieval

• the provision of access (if any) to structured vocabularies of tags;

• the provision of access (if any) to lists or clouds of most frequently- or recently-searched tags;

• the extent to which tag search is integrated into the existing OPAC search.

38

Social OPAC – Considerations to be made in advance

according to Furner (2007) for system design and user environment

• to engender a sense of community among library users in separate and remote locations;

• to allow library users to identify other individuals with whom they share interests;

• to engender a sense of empowerment among library users who may not otherwise participate in or contribute to library activities;

39

Social OPAC – Considerations to be made in advance

according to Furner (2007) for system design and user environment

• to encourage library users to engage with the resources that they tag, and thereby to allow users to come to a deeper understanding of those resources and of the contexts in which they were produced;

• to improve the effectiveness of retrieval of records and discovery of resources;

• to improve the effectiveness of personal rediscovery of resources;

40

Social OPAC – Considerations to be made in advance

according to Furner (2007) for system design and user environment

• to allow library users to determine which kinds of resources and/or topics are currently popular, newsworthy, or receiving attention;

• to improve the entertainment value of, and thereby the level of user satisfaction with, the search experience;

• to reduce the costs normally incurred in manually cataloging, indexing, or classifying the resources in a collection;

41

Social OPAC – Considerations to be made in advance

according to Arch (2007) for system design and user environment

• how to handle spam or "spagging" (spam tagging);

• how to handle linguistic variations, synonyms, homonyms, etc.;

• the "ramp-up problem": who will provide the first content?
  – subject specialists at your library
  – forwarding links to users who are interested in the topic
  – ...

42

Social OPAC: Benefits

according to Furner (2007)

How do the users benefit?

they participate in the activities of a community of like-minded people;

they identify other individuals with whom they share interests;

they contribute to the activities of the library;

43

Social OPAC: Benefits

according to Furner (2007)

How do the users benefit?

they engage with the resources being tagged and/ or with the records that describe them;

they contribute to improvements in the effectiveness of other users’ searches;

they bookmark resources to which repeated personal access is foreseen;

44

Social OPAC: Benefits

according to Furner (2007)

How do the users benefit?

they determine which kinds of resources and/or topics are currently receiving attention;

they pass the time in a manner that provides entertainment;

they share their knowledge of the content of resources with others;

45

Social OPAC: Benefits

according to Furner (2007)

How do the users benefit?

they demonstrate the extent of their knowledge of the content of resources;

they instantly recognize the "aboutness" of the resource via the tags;

they benefit from the receipt of any concrete incentives supplied by the implementing institution in return for tagging efforts.

46

Short Break

47

Lesson 3

Folksonomies and Knowledge Representation

48

Collective Intelligence

Collective Intelligence
• "Wisdom of Crowds" (Surowiecki)
• "Hive Mind" (Kroski) – "Vox populi" (Galton) – "Crowdsourcing"
• no discussions, diversity of opinions, decentralisation
• users tag a document independently of each other
• statistical aggregation of data

Collaborative Intelligence
• discussions and consensus
• prototype service: Wikipedia (but: the 90-9-1 rule)

"Madness of Crowds"
• e.g., soccer fans – hooligans
• no diversity of opinion – no independence – no decentralisation – no (statistical) aggregation

49

Power Law Tag Distribution

f(x) = C / x^a

[Figure: tags for www.visitlondon.com – x-axis: tags, y-axis: users (0–70); annotations: 80/20 rule, Power Tags, Long Tail]

Source: http://del.icio.us

50

Inverse-logistic Tag Distribution

f(x) = e^(-C'(x-1)^b)

[Figure: tags for www.asis.org – x-axis: tags, y-axis: users (0–35); annotations: Long Trunk, Power Tags, Long Tail]

Source: http://del.icio.us
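The two distribution formulas above can be illustrated with a minimal, self-contained Python sketch. All parameter values (C, a, C', b and the scaling of the inverse-logistic curve) are purely illustrative assumptions, not values estimated from the del.icio.us data; the point is only to show the characteristic shapes: the power law drops immediately, while the inverse-logistic curve keeps a "long trunk" of frequent tags before falling into the long tail.

```python
import math

def power_law(rank, C=70.0, a=1.0):
    """Power-law tag distribution: f(x) = C / x^a (illustrative parameters)."""
    return C / rank ** a

def inverse_logistic(rank, C_prime=0.02, b=3.0):
    """Inverse-logistic tag distribution: f(x) = exp(-C'(x-1)^b),
    scaled to a maximum of 30 users (again purely illustrative)."""
    return 30.0 * math.exp(-C_prime * (rank - 1) ** b)

if __name__ == "__main__":
    print(f"{'rank':>4} {'power law':>10} {'inv. logistic':>14}")
    for rank in range(1, 11):
        print(f"{rank:>4} {power_law(rank):>10.1f} {inverse_logistic(rank):>14.1f}")
    # The power law falls off immediately (few Power Tags, long tail),
    # while the inverse-logistic curve keeps a "long trunk" of several
    # frequent tags before dropping into the long tail.
```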

51

Document-specific Tag Distributions

distributions of the top 10 tags in a broad folksonomy (sample: Delicious)

N = 650 bookmarks (minimum of 100 different tagging users)

Source: Reher (2008); unpublished

52

Power Tags

[Figure: Power Tags marked in a power law distribution (left) and in an inverse-logistic distribution (right)]

53

Tagging Behaviour

• 1 … 3 tags per document and user
• motivations for tagging
  – future (own) retrieval
  – contribution and sharing
  – attract attention
  – play and competition
  – self-presentation
  – opinion expression
• factors which influence tagging
  – conformity
  – the role of recommendation

Source: Sen et al. (2006)

Source: Rader & Wash (2008)

54

Sentiment Tags
• negative tags: "awful" – "foolish", …
• positive tags: "amazing" – "useful", …
• applicable for sentiment analysis of documents

Source: Yanbe et al. (2007); service: Hatena Bookmarks

55

Documents which Provoke Emotions

Tagging using scroll-bars

Source: Schmidt and Stock (2009)

56

Tag Types

57

Discrimination Power of Tags

Tags in a concrete document vs. tags in the whole folksonomy:

• frequent in the document, frequent in the folksonomy: "Power Tags" – low discrimination
• frequent in the document, rare in the folksonomy: strong discrimination
• rare in the document ("Long Tail"), frequent in the folksonomy: very low discrimination
• rare in the document ("Long Tail"), rare in the folksonomy: low discrimination
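As a toy illustration of this 2×2 scheme, a minimal Python sketch that classifies a tag's discrimination power from its frequency in the concrete document and in the whole folksonomy; the frequency thresholds are arbitrary assumptions, not values from the workshop.

```python
def discrimination_power(freq_in_document: int, freq_in_folksonomy: int,
                         doc_threshold: int = 10, folk_threshold: int = 1000) -> str:
    """Classify a tag according to the 2x2 scheme above.
    The thresholds are arbitrary illustrative assumptions."""
    frequent_in_doc = freq_in_document >= doc_threshold
    frequent_in_folk = freq_in_folksonomy >= folk_threshold
    if frequent_in_doc and frequent_in_folk:
        return "Power Tag - low discrimination"
    if frequent_in_doc and not frequent_in_folk:
        return "strong discrimination"
    if not frequent_in_doc and frequent_in_folk:
        return "Long Tail - very low discrimination"
    return "Long Tail - low discrimination"

print(discrimination_power(freq_in_document=42, freq_in_folksonomy=120))
# -> "strong discrimination"
```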

58

Benefits of Indexing with Folksonomies

• authentic user language – solution of the "vocabulary problem"
• up-to-dateness
• multiple interpretations – many perspectives – bridging the semantic gap
• broader access to information resources
• follow the "desire lines" of users
• cheap indexing method – shared indexing
• the more taggers, the better the system becomes – network effects
• capable of indexing mass information on the Web
• resources for the development of knowledge organization systems
• mass quality "control"
• searching – browsing – serendipity
• neologisms
• identify communities and "small worlds"
• collaborative recommender system
• make people sensitive to information indexing

59

Disadvantages of Indexing with Folksonomies
• absence of controlled vocabulary
• different basic levels (in the sense of Eleanor Rosch)
• different interests – loss of context information
• language merging
• hidden paradigmatic relations
• merging of formal (bibliographical) and aboutness tags
• no specific fields
• tags make evaluations ("stupid")
• spam tags
• syncategoremata (user-specific tags, "me")
• performative tags ("to do", "to read")
• other misleading keywords

60

Lesson 4

Folksonomies and Information Retrieval

61

Knowledge Representation and Information Retrieval
• two sides of the same coin

• Immanuel Kant (German philosopher): Thoughts without content are empty, intuitions without concepts are blind. ...

Knowledge Representation without Information Retrieval is empty.

Information Retrieval without Knowledge Representation is blind.

Feedback Loop

62

Information Linguistics
• "cleaning tags up"
• but: only in addition to the raw tags
• important basic tasks:
  – language identification
  – word identification (problems: "informationscience", "information_science", …)
  – detection and correction of typing errors
  – context-specific tags ("me")
  – identification of named entities
  – word form conflation (using, e.g., the Porter stemmer)
  – decompounding, phrases
  – homonymy – synonymy
• "higher" tasks:
  – semantic relations
  – translation
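A minimal sketch of a few of the basic tasks (word identification for underscore- or camelCase-joined tags, lowercasing, whitespace normalisation), using only the Python standard library. It deliberately keeps the raw tag next to the cleaned form, since cleaned tags should only be used in addition to the raw tags; stemming, decompounding and typo correction are left out.

```python
import re

def clean_tag(raw_tag: str) -> tuple[str, str]:
    """Return (raw tag, cleaned tag). Cleaning is deliberately simple:
    camelCase splitting, underscore/hyphen splitting, lowercasing."""
    tag = raw_tag.strip()
    # split camelCase ("InformationRetrieval" -> "Information Retrieval")
    tag = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", tag)
    # treat underscores and hyphens as word separators ("information_science")
    tag = re.sub(r"[_\-]+", " ", tag)
    # normalise whitespace and case
    tag = re.sub(r"\s+", " ", tag).lower()
    return raw_tag, tag

for raw in ["information_science", "InformationRetrieval", "  Folksonomy "]:
    print(clean_tag(raw))
# ('information_science', 'information science')
# ('InformationRetrieval', 'information retrieval')
# ('  Folksonomy ', 'folksonomy')
```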

63

Relevance Ranking: State of the Art

Interestingness Ranking (Yahoo! / Flickr)
• number of tags assigned to the document
• number of users who tagged the document
• number of users who retrieved the document
• time (the older the document, the less relevant)
• relevance of metadata

Personalized Interestingness Ranking
• user preferences (e.g., favorites)
• user's residence
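Flickr's actual interestingness algorithm is not public, so the following is only a hypothetical weighted-sum sketch in Python that combines the kinds of signals listed above; all weights and the time-decay function are assumptions for illustration, not the real ranking formula.

```python
from dataclasses import dataclass

@dataclass
class DocumentStats:
    num_tags: int              # number of tags assigned to the document
    num_taggers: int           # number of users who tagged it
    num_retrievals: int        # number of users who retrieved it
    age_in_days: float         # document age
    metadata_relevance: float  # 0..1 score from metadata matching

def interestingness(stats: DocumentStats) -> float:
    """Hypothetical weighted sum over the signals named on the slide."""
    recency = 1.0 / (1.0 + stats.age_in_days / 365.0)  # older -> less relevant
    return (0.2 * stats.num_tags
            + 0.3 * stats.num_taggers
            + 0.2 * stats.num_retrievals
            + 0.2 * recency
            + 0.1 * stats.metadata_relevance)

print(interestingness(DocumentStats(12, 40, 300, 730.0, 0.8)))
```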

64

Relevance Ranking of Tagged Documents

Source: Peters and Stock (2007)

65

Retrieval Status Value – Factor 1: Tags

• tag frequency or TF*IDF or TF*ITF
  – index tags (only in broad folksonomies)
  – search tags
• tag evaluation (user feedback: "Is this tag useful for finding this document?")
• more than one search argument: vector space model
• time (new tags on the platform: higher weight)
• Super Poster (a term tagged by a super poster: higher weight)
• Power Tag (higher weight)
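A minimal sketch of the first factor: a TF*IDF-style weight for an index tag in a broad folksonomy. The toy corpus and the exact weighting variant are assumptions (the slide only names "tag frequency or TF*IDF or TF*ITF").

```python
import math
from collections import Counter

# toy broad folksonomy: document -> list of tags assigned by all taggers
corpus = {
    "doc1": ["folksonomy", "tagging", "folksonomy", "web2.0"],
    "doc2": ["tagging", "library", "opac"],
    "doc3": ["folksonomy", "retrieval", "ranking"],
}

def tf_idf(tag: str, doc_id: str) -> float:
    """TF = relative frequency of the tag among the document's tag assignments;
    IDF = log(N / df), where df is the number of documents carrying the tag."""
    tags = corpus[doc_id]
    tf = Counter(tags)[tag] / len(tags)
    df = sum(1 for doc_tags in corpus.values() if tag in doc_tags)
    idf = math.log(len(corpus) / df)
    return tf * idf

print(round(tf_idf("folksonomy", "doc1"), 3))  # frequent in doc1, but in 2 of 3 docs
print(round(tf_idf("web2.0", "doc1"), 3))      # rare in doc1, but unique to it
```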

66

Retrieval Status Value – Factor 2: Collaboration

• click rates of a document
• number of tagging users
• number of comments
• linked documents: PageRank

67

Retrieval Status Value – Factor 3: Prosumer

• performative document weight
• sentiment weight
• rating weight

68

Short Break

69

Lesson 5

Tag Gardening for Folksonomy Enrichment and Maintenance

70

The Folksonomy Tag Garden

71

Goal of Tag Gardening: Emergent Semantics

72

Weeding

• removing "bad tags" – spelling variants (plural vs. singular, conflation of multi-word tags) and spam – through "pesticides"

• achieved by type-ahead functionality during indexing, editing functionalities for tags after they have been applied (remove, change, etc.), natural language processing of index tags and search tags, indexing and retrieval tutorials or guidelines for users, and authorised users acting as pesticides

• in order to enhance recall and achieve a consistent indexing vocabulary

• the simplest form of tag gardening, because it neglects the semantics of tags

73

Seeding

• extending the folksonomy with rarely used "baby tags", as high-frequency tags do not sufficiently discriminate between resources

• achieved by displaying an inverse tag cloud during indexing, by particular "greenhouse" areas where the seedlings may develop and grow, or by discrete tag suggestions during indexing

• in order to enhance precision and the expressiveness of the folksonomy

74

Landscape Architecture

• shaping the folksonomy into "flower beds": distinguishing similar-looking "plants", assigning their "species", branding each species with labels and giving additional information regarding their application (e.g., cooking, healing, etc.)

• achieved by conflation of multi-language tags, summarization of synonyms, division of homonyms, and establishment of semantic relations by comparison with a KOS (after indexing)

• in order to enhance precision and the expressiveness of the folksonomy by adding semantics, for query expansion during retrieval via semantic relations, for enhanced navigation within the folksonomy, and as a basis for semantic-oriented displays

75

Fertilizing

• combination of folksonomies and KOS during indexing and retrieval

• achieved by semantic-oriented tag suggestions during indexing and retrieval (tag suggestions not based on tag popularity, to avoid a self-fulfilling "success breeds success" effect) or by field-based tagging, which stimulates semantically richer index tags and search tags

• in order to enhance precision, recall and the expressiveness of the folksonomy by adding semantics, for query expansion during retrieval via semantic relations, for enhanced indexing functionalities, for enhanced navigation within the folksonomy, and as a basis for semantic-oriented displays

76

Emergent Semantics

• folksonomies have no explicit structure; there are no visible paradigmatic semantic relations

• document-specific co-occurring tags are linked by syntagmatic relations

• task: to identify paradigmatic relations and to use them in a controlled vocabulary

[Diagram: example tag network with synonym and is_a relations]

77

Hidden Paradigmatic Relations

[Figure: Flickr tag examples showing (geographical) meronymy, synonymy, and hyponymy]

Source: http://www.flickr.com

78

Hidden Paradigmatic Relations. Flickr Landscape Photos

Flickr landscape photos: N = 491; analysable tags: 3,618; tags per photo: 7.4
Possible document-specific relations (tag pairs, co-occurrences): 16,098

Document-specific relations:
  Synonymy 0.56%
  Abbreviation 0.12%
  Quasi-synonymy 0.21%
  Translation 2.65%
Equivalence (sum) 3.54%
  Taxonomy 0.23%
  Simple hyponymy 0.06%
Hyponymy (IS-A relation) (sum) 0.29%
  Geographical meronymy (administrative) 4.94%
  Geographical meronymy (not administrative) 3.91%
  Element-collection relation 0.21%
  Component-complex relation 0.84%
  Segment-time-bond event relation 0.11%
  Other meronymy 0.01%
Meronymy (IS-PART-OF relation) (sum) 10.02%
Instance 0.23%
All relations 14.08%

Source: own research project

79

From Tag Gardening to Collaborative KOS Development

community members as gardeners

• tagging

• evaluation of tags

• field-specific tagging

additional: professional chief-gardener

• KOS development

• new concepts / new words for known concepts

• relations between concepts

80

Maintenance of KOS and Folksonomy

Source: Christiaens (2006)

Folksonomy → tag gardening (new terms – new relations) → KOS

81

From Tag Clouds to Tag Clusters

tag cloud
• alphabetical arrangement
• font size = "importance" (but mostly no concrete data)
• no relations between tags

tag cluster
• tags located in a network
• tuneable granularity (threshold value of similarity)
• relations between tags
• processes:
  – calculation of similarity (Jaccard-Sneath, …)
  – cluster algorithms

Source: Knautz (2008)
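A minimal sketch of the two processes: Jaccard-Sneath similarity between tags via shared documents, and a naive threshold-based grouping that stands in for a full cluster algorithm. The toy data and the threshold value are assumptions; tuning the threshold changes the granularity of the resulting tag clusters.

```python
# toy data: tag -> set of documents in which the tag occurs
tag_documents = {
    "folksonomy": {"d1", "d2", "d3", "d4"},
    "tagging":    {"d1", "d2", "d3"},
    "opac":       {"d5", "d6"},
    "library":    {"d5", "d6", "d7"},
}

def jaccard_sneath(tag_a: str, tag_b: str) -> float:
    """Jaccard-Sneath similarity of two tags via shared documents."""
    docs_a, docs_b = tag_documents[tag_a], tag_documents[tag_b]
    common = len(docs_a & docs_b)
    return common / (len(docs_a) + len(docs_b) - common)

def threshold_clusters(threshold: float) -> list[set[str]]:
    """Group tags whose pairwise similarity reaches the threshold
    (naive merging of overlapping pairs, single-linkage style)."""
    clusters = [{tag} for tag in tag_documents]
    for a in tag_documents:
        for b in tag_documents:
            if a < b and jaccard_sneath(a, b) >= threshold:
                cluster_a = next(c for c in clusters if a in c)
                cluster_b = next(c for c in clusters if b in c)
                if cluster_a is not cluster_b:
                    cluster_a |= cluster_b
                    clusters.remove(cluster_b)
    return clusters

print(threshold_clusters(0.5))
# e.g. [{'folksonomy', 'tagging'}, {'opac', 'library'}]
```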

82

Lesson 6

Find "More like me!" – The Social Function of a Folksonomy

83

Users – Tags – Documents

[Diagram: documents are thematically linked via shared users; users are thematically linked via shared documents]

84

Shared Documents & Thematically Linked Users

• more like this …: similar documents – detection of documents
• more like me …: similar users – detection of communities

(users are thematically linked via shared documents)

85

More like me! Or: More like This User!

• starting point: a single user (ego)
• processing
  – (1) tag-specific similarity
    • all tags of ego: a(t)
    • all tags of another user B: b(t)
    • common tags of ego and user B: g(t)
  – (2) document-specific similarity
    • all tagged documents of ego: a(d)
    • all tagged documents of another user B: b(d)
    • common tagged documents of ego and user B: g(d)
  – calculation of similarity
    • tag-specific (Jaccard-Sneath): Sim(tag; ego, B) = g(t) / [a(t) + b(t) – g(t)]
    • document-specific (Jaccard-Sneath): Sim(doc; ego, B) = g(d) / [a(d) + b(d) – g(d)]
    • ranking of the users Bi by similarity to ego (say, the top 10 tag-specific and the top 10 document-specific users)
    • merging of both lists (exclusion of duplicates)
    • cluster analysis (k-nearest neighbours, single linkage, complete linkage, group average linkage)
  – result presentation: social network with ego in the centre
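The processing steps above translate almost directly into code. The sketch below uses fictitious personomies and implements only the two Jaccard-Sneath formulas plus the top-k ranking and merging steps; the concluding cluster analysis and the network visualisation are left out.

```python
# fictitious personomies: user -> (set of tags, set of tagged documents)
users = {
    "ego":   ({"folksonomy", "tagging", "opac"},      {"d1", "d2", "d3"}),
    "userA": ({"folksonomy", "tagging", "retrieval"}, {"d1", "d2", "d9"}),
    "userB": ({"cooking", "travel"},                  {"d7", "d8"}),
    "userC": ({"opac", "library", "tagging"},         {"d2", "d3", "d4"}),
}

def jaccard_sneath(set_a: set, set_b: set) -> float:
    """Sim = g / (a + b - g), with g = size of the intersection."""
    common = len(set_a & set_b)
    return common / (len(set_a) + len(set_b) - common) if common else 0.0

def more_like_me(ego: str, top_k: int = 10) -> list[str]:
    """Rank other users by tag- and document-specific similarity to ego,
    then merge both top-k lists without duplicates (as on the slide)."""
    ego_tags, ego_docs = users[ego]
    others = [u for u in users if u != ego]
    by_tags = sorted(others, key=lambda u: jaccard_sneath(ego_tags, users[u][0]), reverse=True)
    by_docs = sorted(others, key=lambda u: jaccard_sneath(ego_docs, users[u][1]), reverse=True)
    merged = []
    for user in by_tags[:top_k] + by_docs[:top_k]:
        if user not in merged:
            merged.append(user)
    return merged

print(more_like_me("ego"))  # e.g. ['userA', 'userC', 'userB']
```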

86

More like me! Or: More like This User!

single linkage clustering (fictitious example)

[Figure: ego-centred network of users with fictitious pairwise similarities; edge weights Sim(tag) / Sim(doc): 0.21/0.25, 0.65/0.55, 0.33/0.29, 0.17/0.23, 0.08/0.11, 0.15/0.17, 0.45/0.36]

87

The Social Function of a Folksonomy

objectives:
• recommendation of other users with similar interests
• hints for forming a virtual community
• and – perhaps – for forming a (real) social group

88

Final Discussion

Folksonomies in Library Catalogues – Lessons Learned

89

Lessons Learned

Folksonomies – Indexing without rules

• tagging: anything goes – against methods

• actor: prosumer

• tri-partite system: document – prosumer – tag

• folksonomy behaves like a network good
  – only one standard
  – "success breeds success"

• essential: marketing

90

Lessons Learned

Folksonomies in information services and library catalogues

• folksonomy types
  – narrow folksonomy (only one tagger per document – no multiple tagging)
  – extended narrow folksonomy (more than one tagger per document – no multiple tagging)
  – broad folksonomy (more than one tagger per document – multiple tagging)

• "best" solution for library catalogues
  – broad folksonomy, or
  – extended narrow folksonomy (only usable if search tags can be processed)

• platform
  – own platform (example: PennTags)
  – third-party platform (example: LibraryThing)

91

Lessons Learned

Folksonomies and knowledge representation

• collective intelligence (diversity of opinions, independence of taggers, decentralisation, statistical aggregation of data)

• document-specific tag distributions
  – power law
  – inverse-logistic distribution
  – Power Tags

• tagging behaviour of the users
  – 1 … 3 tags per document and user
  – conformity
  – recommendation (very problematic)

• sentiment tags (positive – negative)

• documents which provoke emotions

92

Lessons Learned

Folksonomies and information retrieval

• information linguistics (natural language processing)
  – in addition to the raw tags: "cleaning" of tags
  – important tasks: language identification, error detection, word form conflation

• relevance ranking criteria (calculation of retrieval status values)
  – tags (TF*IDF, tag evaluation, super posters, time, power tags)
  – "collaboration" (click rates, number of tagging users, number of comments, PageRank)
  – prosumer (performative tags, sentiment tags, rating)

93

Lessons Learned

Tag gardening for folksonomy enrichment and maintenance

• “weeding”: information linguistics (NLP): core tasks

• “seeding”: baby tags, inverse tag cloud

• "landscape architecture": combination of folksonomy and KOS (after indexing)

• “fertilizing”: using KOS during indexing and retrieval

• emergent semantics: identification of (hidden) paradigmatic relations (e.g., synonymy and hierarchy)

94

Lessons Learned

Find "more like me!" – the social function of a folksonomy

• a new function: “More like me!”

• recommendation of other users with similar interests

• helpful for community building

95

We would like to thank you very much for attending this Workshop.

Greetings from Düsseldorf, Germany!
