
Page 1

Data Cleaning and Transformation – Record Linkage

Helena Galhardas, DEI IST

(based on the slides: “A Survey of Data Quality Issues in Cooperative Information Systems”, Carlo Batini, Tiziana Catarci, Monica Scannapieco, 23rd International Conference on Conceptual Modelling (ER 2004) and “Searching and Integrating Information on the Web, Seminar 3”, Chen Li, Univ. Irvine)

Page 2

Agenda

- Motivation
- Introduction
- Details on specific steps:
  - Pre-processing
  - Comparison functions
  - Types of blocking methods
- Sorted Neighborhood Method
- Approximate String Joins over RDBMSs

Page 3

Motivation

- We often need to correlate data from different data sources
- Data is often dirty and needs to be cleansed before being used
- Example: a hospital needs to merge patient records from different data sources; the records have different formats, typos, and abbreviations

Page 4

Example

Table R
  Name           SSN           Addr
  Jack Lemmon    430-871-8294  Maple St
  Harrison Ford  292-918-2913  Culver Blvd
  Tom Hanks      234-762-1234  Main St
  ...            ...           ...

Table S
  Name           SSN           Addr
  Ton Hanks      234-162-1234  Main Street
  Kevin Spacey   928-184-2813  Frost Blvd
  Jack Lemon     430-817-8294  Maple Street
  ...            ...           ...

Goal: find records from different datasets that could be the same entity.

Page 5

Another Example

The same paper cited in two different formats:

- P. Bernstein, D. Chiu: Using Semi-Joins to Solve Relational Queries. JACM 28(1): 25-40 (1981)
- Philip A. Bernstein, Dah-Ming W. Chiu: Using Semi-Joins to Solve Relational Queries. Journal of the ACM (JACM), v.28 n.1, p.25-40, Jan. 1981

Page 6

Type of data considered

- (Two) formatted tables
  - Homogeneous in common attributes
  - Heterogeneous in:
    - Format (full name vs acronym)
    - Semantics (e.g., Age vs Date of Birth)
    - Errors
- (Two) groups of tables (dimensional hierarchy)
- (Two) XML documents

Page 7

Record linkage and its evolution towards object identification in databases

Record linkage: do the first record and the second record represent the same aspect of reality?

  Crlo Batini   55
  Btini Carlo   54

Object identification in databases: do the first group of records and the second group of records represent the same aspect of reality?

  Crlo Batini   Pescara  Pscara  Itly   Europe
  Carlo Batini  Pscara   Pscara  Italy  Europe

(The misspellings are intentional: they illustrate the kind of dirty data these techniques must handle.)

Page 8

...and object identification in semistructured documents

Are the following two documents the same object?

<country>
  <name> United States of America </name>
  <cities> New York, Los Angeles, Chicago </cities>
  <lakes>
    <name> Lake Michigan </name>
  </lakes>
</country>

and

<country> United States
  <city> New York </city>
  <city> Los Angeles </city>
  <lakes>
    <lake> Lake Michigan </lake>
  </lakes>
</country>

Page 9

Record linkage

Problem statement: “Given two relations, identify the potentially matched records efficiently and effectively.”

Also known as: approximate duplicate detection, entity resolution/identification, etc.

Page 10

Challenges

- How to define good similarity functions?
  - Many functions have been proposed (edit distance, cosine similarity, ...)
  - Domain knowledge is critical
    - Names: “Wall Street Journal” and “LA Times”
    - Addresses: “Main Street” versus “Main St”
  - Record-oriented: a pair of records with different fields must be compared
- How to do the matching efficiently?
  - Offline join version
  - Online (interactive) search
    - Nearest-neighbor search
    - Range search
  - Record-set oriented: a potentially large set (or two sets) of records needs to be compared

Page 11

Introduction

Page 12

Relevant steps of record linkage techniques

[Diagram] The input files A and B define the initial search space A x B. A search space reduction step produces a reduced search space S' contained in A x B. A decision model is then applied to S', assigning each pair to Match, Unmatch, or Possible match.

Page 13: Token

General strategy 0. Preprocessing

Standardize fields to compare and correct simple errors1. Estabilish a search space reduction method (also called

blocking method) Given the search space S = A x B of the two files, find a new

search space S’ contained in S, to apply further steps.2. Choose comparison function

Choose the function/set of rules that express the distance between pairs of records in S’

3. Choose decision model Choose the method for assigning pairs in S’ to M, the set of

matching records, U the set of unmatching records, and P the set of possible matches

4. Check effectiveness of method

Page 14

Phases

- [Preprocessing]
- Establish a blocking method
- Compute the comparison function
- Apply the decision model
- [Check the effectiveness of the method]

Page 15

Details on specific steps

Page 16

Data preprocessing (preparation)

- Parsing: locates, identifies, and isolates individual fields in the source files. Ex: parsing of name and address components into consistent packets of information
- Data transformation: simple conversions applied to the data so that it conforms to the data types of the corresponding domains. Ex: data type conversions, range checking
- Data standardization: normalization of the information represented in certain fields to a specific content format. Ex: address information, date and time formatting, name and title formatting

Page 17

Comparison functions

- It is not easy to match duplicate records even after parsing, data standardization, and identification of similar fields
- Misspellings and different conventions for recording the same information still result in different, multiple representations of a unique object in the database
- Field matching techniques: measure the similarity of individual fields
- Detection of duplicate records: measures the similarity of entire records

Page 18

(String) field matching functions

- Character-based similarity metrics: handle typographical errors well
- Token-based similarity metrics: handle typographical conventions that lead to the rearrangement of words. Ex: “John Smith” vs “Smith, John”
- Phonetic similarity measures: handle similarity between strings that are not similar at the character or token level. Ex: “Kageonne” and “Cajun”

Page 19

Character-based similarity metrics (1)

- Hamming distance: counts the number of mismatches between two numbers or text strings (fixed-length strings only). Ex: the Hamming distance between “00185” and “00155” is 1, because there is one mismatch
- Edit distance: the minimum cost to convert one string into the other by a sequence of character insertions, deletions, and replacements, where each modification is assigned a cost value. Ex: the edit distance between “Smith” and “Sitch” is 2, since “Smith” is obtained by adding “m” to and deleting “c” from “Sitch”
- Several variations/improvements exist: Affine Gap distance, Smith-Waterman distance, etc.
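To make the definition concrete, here is a minimal dynamic-programming sketch of unit-cost edit distance in Python (the function name is ours):

import itertools  # not required; standard library only

def edit_distance(s1: str, s2: str) -> int:
    """Unit-cost edit (Levenshtein) distance via dynamic programming."""
    # prev[j] holds the distance between s1[:i-1] and s2[:j] (previous row).
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        curr = [i]
        for j, c2 in enumerate(s2, 1):
            cost = 0 if c1 == c2 else 1          # replacement cost
            curr.append(min(prev[j] + 1,         # deletion
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # replacement or match
        prev = curr
    return prev[-1]

print(edit_distance("Smith", "Sitch"))  # 2, as in the slide's example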

Page 20

Character-based similarity metrics (2)

- Jaro's algorithm (or distance) finds the number of common characters and the number of transposed characters in the two strings. The Jaro distance accounts for insertions, deletions, and transpositions.
- Common characters are all characters s1[i] and s2[j] for which s1[i] = s2[j] and |i - j| <= 1/2 min(|s1|, |s2|), that is, characters that appear in both strings within a distance of half the length of the shorter string. A transposed character is a common character that appears at different positions.
- Ex: comparing “Smith” and “Simth”, there are five common characters, two of which are transposed

  Jaro(s1, s2) = 1/3 * (Nc/|s1| + Nc/|s2| + (Nc - Nt/2)/Nc)

  where Nc is the number of common characters between the two strings and Nt is the number of transpositions
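A sketch of the Jaro score following the slide's definition (matching window of half the shorter string's length, rounded down; all names are ours):

def jaro(s1: str, s2: str) -> float:
    """Jaro score using the slide's definition of common characters."""
    if not s1 or not s2:
        return 0.0
    half = max(min(len(s1), len(s2)) // 2, 1)   # half the shorter string's length
    used = [False] * len(s2)
    m1 = []                                     # common characters of s1, in order
    for i, c in enumerate(s1):
        for j in range(max(0, i - half), min(len(s2), i + half + 1)):
            if not used[j] and s2[j] == c:
                used[j] = True
                m1.append(c)
                break
    m2 = [c for c, u in zip(s2, used) if u]     # common characters of s2, in order
    nc = len(m1)
    if nc == 0:
        return 0.0
    nt = sum(a != b for a, b in zip(m1, m2))    # transposed characters
    return (nc / len(s1) + nc / len(s2) + (nc - nt / 2) / nc) / 3

print(round(jaro("Smith", "Simth"), 3))  # 0.933: 5 common characters, 2 transposed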

Page 21

Character-based similarity metrics (3)

- The n-grams comparison function forms, for each string, a vector over the set of all its substrings of length n. The n-gram representation of a string has a non-zero vector component for each n-letter substring it contains, with magnitude equal to the number of times the substring occurs. The distance between the two strings is defined as the Euclidean distance (square root of the squared component-wise differences) between the two vectors.
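A small sketch of that vector distance, using count vectors and no padding (the function name is ours):

from collections import Counter

def ngram_distance(s1: str, s2: str, n: int = 2) -> float:
    """Euclidean distance between the n-gram count vectors of two strings."""
    v1 = Counter(s1[i:i + n] for i in range(len(s1) - n + 1))
    v2 = Counter(s2[i:i + n] for i in range(len(s2) - n + 1))
    grams = set(v1) | set(v2)               # union of observed n-grams
    return sum((v1[g] - v2[g]) ** 2 for g in grams) ** 0.5

print(ngram_distance("Smith", "Smyth"))  # small distance: most bigrams coincide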

Page 22: Token

Token-based similarity metrics (1)TF-IDF (Token Frequency-Inverse Document Frequency) or cosine

similarity assigns higher weights to tokens appearing frequently in a document (TF weight) and lower weights to tokens that appear frequently in the whole set of documents (IDF weight).

TFw is the number of times that w appears in the fieldIDFw = |D|/nwwhere nw is the number of records in the database D that contain the word wSeparates each string s into words and assign a weight to each word w:vs(w) = log(TFw+1)*log(IDFw)

The cosine similarity of s1 and s2 is:sim(s1, s2) = j=1

|D| vs1(j)*vs2(j)/||vs1||*||vs2||

Widely used for matching strings in documents
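A hedged sketch of these weights over a small corpus of records; whitespace tokenization and the function name are our simplifications:

import math
from collections import Counter

def tfidf_cosine(s1: str, s2: str, records: list) -> float:
    """Cosine similarity with the slide's weights: v_s(w) = log(TF+1)*log(IDF)."""
    n = len(records)
    # Document frequency of each word over the record set D.
    df = Counter(w for rec in records for w in set(rec.lower().split()))

    def weights(s):
        tf = Counter(s.lower().split())
        return {w: math.log(c + 1) * math.log(n / df[w])
                for w, c in tf.items() if df.get(w)}

    w1, w2 = weights(s1), weights(s2)
    dot = sum(w1[w] * w2[w] for w in w1.keys() & w2.keys())
    norm = (math.sqrt(sum(v * v for v in w1.values()))
            * math.sqrt(sum(v * v for v in w2.values())))
    return dot / norm if norm else 0.0

Note how a word occurring in every record gets IDF = 1 and weight log(1) = 0, which is exactly why frequent words such as “Mr” barely affect the similarity.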

Page 23

Token-based similarity metrics (2)

- Widely used for matching strings in documents
- The cosine similarity works well for a large variety of entries and is insensitive to the location of words. Ex: “John Smith” is equivalent to “Smith, John”
- The introduction of frequent words only minimally affects the similarity of two strings. Ex: “John Smith” and “Mr John Smith” have similarity close to one
- It does not capture word spelling errors. Ex: “Compter Science Departament” and “Deprtment of Computer Scence” will have similarity equal to zero

Page 24

Phonetic similarity metrics

- Soundex: the most common phonetic coding scheme. It is based on the assignment of identical code digits to phonetically similar groups of consonants, and it is used mainly to match surnames.
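A simplified sketch of the classic first-letter-plus-three-digits Soundex code; some edge cases of the official algorithm are glossed over:

def soundex(name: str) -> str:
    """Simplified Soundex sketch: first letter plus three digits."""
    mapping = {}
    for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in letters:
            mapping[ch] = digit
    s = name.lower()
    code = s[0].upper()
    prev = mapping.get(s[0], "")
    for ch in s[1:]:
        digit = mapping.get(ch, "")
        if digit and digit != prev:      # skip vowels and repeated codes
            code += digit
        if ch not in "hw":               # 'h'/'w' do not separate equal codes
            prev = digit
    return (code + "000")[:4]

print(soundex("Kathi"), soundex("Kathy"))  # K300 K300: same phonetic code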

Page 25

Detection of duplicate records

Two categories of methods for matching records with multiple fields:

- Approaches that rely on training data to learn how to match the records
  - Some probabilistic approaches. Ex: [Fellegi and Sunter 69]
  - Supervised machine learning techniques. Ex: [Bilenko et al 03]
- Approaches that rely on domain knowledge or on generic distance metrics to match records
  - Approaches that use declarative languages for matching. Ex: AJAX
  - Approaches that devise distance metrics suitable for duplicate detection. Ex: [Monge and Elkan 96]

Page 26: Token

General strategy 0. Preprocessing

Standardize fields to compare and correct simple errors 1. Estabilish a search space reduction method (also

called blocking method) Given the search space S = A x B of the two files, find a new

search space S’ contained in S, to apply further steps.2. Choose comparison function

Choose the function/set of rules that express the distance between pairs of records in S’

3. Choose decision model Choose the method for assigning pairs in S’ to M, the set of

matching records, U the set of unmatching records, and P the set of possible matches

4. Check effectiveness of method

Page 27

Types of Blocking/Searching methods (1)

- Blocking: partition the file into mutually exclusive blocks and restrict comparisons to records within each block
- Blocking can be implemented by:
  - Sorting the file according to a block key (chosen by an expert)
  - Hashing: a record is hashed according to its block key into a hash block; only records in the same hash block are considered for comparison

Page 28: Token

Types of Blocking/Searching methods (2) Sorted neighbour or Windowing – moving a

window of a specific size w over the data file, comparing only the records that belong to this window

Multiple windowing - Several scans, each of which uses a different sorting key may be applied to increase the possibility of combining matched records

Windowing with choice of key based on the quality of the key

Page 29

Sorted Neighborhood searching method

1. Create key: compute a key K for each record in the list by extracting relevant fields or portions of fields. Relevance is decided by experts.
2. Sort data: sort the records in the data list using K.
3. Merge: move a fixed-size window through the sequential list of records, limiting the comparisons for matching records to those records inside the window. If the size of the window is w records, then every new record entering the window is compared with the previous w - 1 records to find “matching” records.
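A minimal sketch of the three steps in Python; make_key and match are placeholders for the expert-supplied key extractor and matching rule (the equational theory discussed later):

def sorted_neighborhood(records, make_key, match, w):
    """Sorted Neighborhood Method: create key, sort, merge within a window."""
    srt = sorted(records, key=make_key)            # steps 1 and 2
    pairs = []
    for i, rec in enumerate(srt):                  # step 3: merge
        # Compare each record entering the window with the previous w - 1.
        for j in range(max(0, i - (w - 1)), i):
            if match(srt[j], rec):
                pairs.append((srt[j], rec))
    return pairs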

Page 30

SNM: Create key

Compute a key for each record by extracting relevant fields or portions of fields.

Example:

  First  Last    Address           ID        Key
  Sal    Stolfo  123 First Street  45678987  STLSAL123FRST456
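One way to read the example key: last-name consonants, first-name prefix, house number, street-name consonants, ID prefix. A sketch that reproduces it under that guessed layout (the layout is inferred from the single example, not stated on the slide):

def consonants(s: str, n: int) -> str:
    """First n consonants of s, upper-cased."""
    return "".join(c for c in s.upper() if c.isalpha() and c not in "AEIOU")[:n]

def make_key(first: str, last: str, address: str, id_: str) -> str:
    # Guessed layout of STLSAL123FRST456: 3 last-name consonants +
    # 3 first-name letters + house number + 4 street consonants + 3 ID digits.
    number, street = address.split(" ", 1)
    return (consonants(last, 3) + first.upper()[:3] + number
            + consonants(street.split()[0], 4) + id_[:3])

print(make_key("Sal", "Stolfo", "123 First Street", "45678987"))
# STLSAL123FRST456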

Page 31

SNM: Sort Data

- Sort the records in the data list using the key from step 1
- This can be very time consuming: O(N log N) for a good algorithm, O(N^2) for a bad one

Page 32

SNM: Merge records

- Move a fixed-size window through the sequential list of records
- This limits the comparisons to the records inside the window

Page 33

SNM: Considerations

- What is the optimal window size that maximizes accuracy while minimizing computational cost?
- Execution time for a large DB is bound by:
  - Disk I/O
  - The number of passes over the data set

Page 34

Selection of Keys

- The effectiveness of the SNM highly depends on the key selected to sort the records
- A key is defined as a sequence of a subset of attributes
- Keys must provide sufficient discriminating power

Page 35: Token

Example of Records and Keys

First Last Address ID Key

Sal Stolfo 123 First Street 45678987 STLSAL123FRST456

Sal Stolfo 123 First Street 45678987 STLSAL123FRST456

Sal Stolpho 123 First Street 45678987 STLSAL123FRST456

Sal Stiles 123 Forest Street 45654321 STLSAL123FRST456

Page 36

Equational Theory

- The comparison during the merge phase is an inferential process
- It compares much more information than simply the key
- The more information there is, the better the inferences that can be made

Page 37

Equational Theory - Example

- Two names are spelled nearly identically and have the same address:
  - It may be inferred that they are the same person
- Two social security numbers are the same but the names and addresses are totally different:
  - It could be the same person, who moved
  - It could be two different people, with an error in the social security number

Page 38: Token

A simplified rule in English

Given two records, r1 and r2IF the last name of r1 equals the last name of r2,

AND the first names differ slightly,AND the address of r1 equals the address of r2

THENr1 is equivalent to r2
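The same rule as a Python sketch; the field names, the use of a string-similarity function such as the Jaro score for the “differ slightly” test, and the 0.9 threshold are illustrative assumptions, not prescribed by the slides:

def equivalent(r1: dict, r2: dict, similarity, threshold: float = 0.9) -> bool:
    """The slide's rule: equal last names, slightly differing first names,
    equal addresses. similarity and threshold are illustrative choices."""
    return (r1["last"] == r2["last"]
            and similarity(r1["first"], r2["first"]) >= threshold
            and r1["addr"] == r2["addr"])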

Page 39

The distance function

- A “distance function” is used to compare pieces of data (usually text)
- Apply the “distance function” to data that may “differ slightly”
- Select a threshold that captures obvious typographical errors
  - The threshold impacts the number of successful matches and the number of false positives

Page 40

Examples of matched records

  SSN                    Name (First, Initial, Last)               Address
  334600443 / 334600443  Lisa Boardman / Lisa Brown                144 Wars St. / 144 Ward St.
  525520001 / 525520001  Ramon Bonilla / Raymond Bonilla           38 Ward St. / 38 Ward St.
  0 / 0                  Diana D. Ambrosion / Diana A. Dambrosion  40 Brik Church Av. / 40 Brick Church Av.
  789912345 / 879912345  Kathi Kason / Kathy Kason                 48 North St. / 48 North St.
  879912345 / 879912345  Kathy Kason / Kathy Smith                 48 North St. / 48 North St.

Page 41: Token

Building an equational theory The process of creating a good equational

theory is similar to the process of creating a good knowledge-base for an expert system

In complex problems, an expert’s assistance is needed to write the equational theory

Page 42

Transitive Closure

- In general, no single pass (i.e., no single key) will be sufficient to catch all matching records
- An attribute that appears first in the key has higher discriminating power than those appearing after it
  - If an employee has two records in a DB with SSNs 193456782 and 913456782, it is unlikely that they will fall inside the same window

Page 43: Token

Transitive Closure

To increase the number of similar records merged Widen the scanning window size, w Execute several independent runs of the SNM

Use a different key each time Use a relatively small window Call this the Multi-Pass approach

Page 44

Multi-pass approach

- Each independent run of the Multi-Pass approach produces a set of pairs of records
- Although one field of a record may be in error, another field may not be
- Transitive closure can be applied to those pairs to decide which records to merge

Page 45: Token

Multi-pass Matches

Pass 1 (Lastname discriminates)KSNKAT48NRTH789 (Kathi Kason 789912345 )KSNKAT48NRTH879 (Kathy Kason 879912345 )

Pass 2 (Firstname discriminates)KATKSN48NRTH789 (Kathi Kason 789912345 )KATKSN48NRTH879 (Kathy Kason 879912345 )

Pass 3 (Address discriminates)48NRTH879KSNKAT (Kathy Kason 879912345 )48NRTH879SMTKAT (Kathy Smith 879912345 )

Page 46: Token

Transitive Equality ExampleIF A implies B

AND B implies CTHEN A implies C

From example:789912345 Kathi Kason 48 North St. (A)879912345 Kathy Kason 48 North St. (B)879912345 Kathy Smith 48 North St. (C)
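Applying transitive closure to the matched pairs amounts to grouping records into connected components; a minimal union-find sketch (all names ours):

def transitive_closure(pairs):
    """Group records into clusters: if (A, B) and (B, C) matched,
    then A, B, and C end up in the same cluster."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)          # union the two components

    clusters = {}
    for x in list(parent):
        clusters.setdefault(find(x), []).append(x)
    return list(clusters.values())

print(transitive_closure([("A", "B"), ("B", "C")]))  # [['A', 'B', 'C']]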

Page 47: Token

Another approach: Approximate String Joins [Gravano 2001]

Service A

Jenny Stamatopoulou

John Paul McDougal

Aldridge Rodriguez

Panos Ipeirotis

John Smith

Service B

Panos Ipirotis

Jonh Smith

Jenny Stamatopulou

John P. McDougal

Al Dridge Rodriguez

We want to join tuples with “similar” string fields Similarity measure: Edit Distance Each Insertion, Deletion, Replacement increases distance by

one

K=1K=2

K=1K=3

K=1

Page 48: Token

Focus: Approximate String Joins over Relational DBMSs

Join two tables on string attributes and keep all pairs of strings with Edit Distance ≤ K

Solve the problem in a database-friendly way(if possible with an existing RDBMS)

Page 49: Token

Current Approaches for Processing Approximate String Joins

No native support for approximate joins in RDBMSs

Two existing (straightforward) solutions: Join data outside of DBMS Join data via user-defined functions (UDFs) inside

the DBMS

Page 50: Token

Approximate String Joins outside of a DBMS

1. Export data2. Join outside of DBMS3. Import the result

Main advantage: We can exploit any state-of-the-art string-matching algorithm, without restrictions from DBMS functionality

Disadvantages: Substantial amounts of data to be exported/imported Cannot be easily integrated with further processing

steps in the DBMS

Page 51: Token

Approximate String Joins with UDFs1. Write a UDF to check if two strings match

within distance K2. Write an SQL statement that applies the UDF

to the string pairs

SELECT R.stringAttr, S.stringAttrFROM R, SWHERE edit_distance(R.stringAttr,

S.stringAttr, K)

Main advantage: Ease of implementation

Main disadvantage: UDF applied to entire cross-product of relations

Page 52: Token

Our Approach: Approximate String Joins over an Unmodified RDBMS

1. Preprocess data and generate auxiliary tables2. Perform join exploiting standard RDBMS

capabilities

Advantages No modification of underlying RDBMS needed. Can leverage the RDBMS query optimizer. Much more efficient than the approach based on

naive UDFs

Page 53: Token

Intuition and Roadmap Intuition:

Similar strings have many common substrings Use exact joins to perform approximate joins

(current DBMSs are good for exact joins) A good candidate set can be verified for false

positives Roadmap:

Break strings into substrings of length q (q-grams) Perform an exact join on the q-grams Find candidate string pairs based on the results Check only candidate pairs with a UDF to obtain

final answer

Page 54: Token

What is a “Q-gram”? Q-gram: A sequence of q characters of the original

string

Example for q=3vacations

{##v, #va, vac, aca, cat, ati, tio, ion, ons, ns$, s$$}

String with length L → L + q - 1 q-grams

Similar strings have a many common q-grams
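A minimal sketch of q-gram extraction with the slide's padding convention ('#' prefix, '$' suffix); the function name is ours:

def qgrams(s: str, q: int = 3) -> list:
    """All q-grams of s, padded with q-1 '#' at the start and q-1 '$'
    at the end, matching the slide's example."""
    padded = "#" * (q - 1) + s + "$" * (q - 1)
    return [padded[i:i + q] for i in range(len(padded) - q + 1)]

print(qgrams("vacations"))  # 11 q-grams (L + q - 1 = 9 + 3 - 1), '##v' ... 's$$'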

Page 55: Token

Q-grams and Edit Distance Operations With no edits: L + q - 1 common q-grams

Replacement: (L + q – 1) - q common q-gramsVacations: {##v, #va, vac, aca, cat, ati, tio, ion, ons, ns#, s$$}Vacalions: {##v, #va, vac, aca, cal, ali, lio, ion, ons, ns#, s$$}

Insertion: (Lmax + q – 1) - q common q-gramsVacations: {##v, #va, vac, aca, cat, ati, tio, ion, ons, ns#, s$$}Vacatlions: {##v, #va, vac, aca, cat, atl, tli, lio, ion, ons, ns#, s$$}

Deletion: (Lmax + q – 1) - q common q-gramsVacations: {##v, #va, vac, aca, cat, ati, tio, ion, ons, ns#, s$$}Vacaions: {##v, #va, vac, aca, cai, aio, ion, ons, ns#, s$$}

Page 56: Token

Number of Common Q-grams and Edit Distance For Edit Distance = K, there could be at most K

replacements, insertions, deletions

Two strings S1 and S2 with Edit Distance ≤ K have at least [max(S1.len, S2.len) + q - 1] – Kq q-grams in common

Useful filter: eliminate all string pairs without "enough" common q-grams (no false dismissals)
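A quick numeric check of this bound on the page 55 replacement example, reusing the qgrams sketch from page 54 (here a plain set suffices because no q-gram repeats):

s1, s2 = "vacations", "vacalions"                 # one replacement, so K = 1
q, K = 3, 1
common = len(set(qgrams(s1)) & set(qgrams(s2)))   # 8 shared q-grams
bound = max(len(s1), len(s2)) + q - 1 - K * q     # 9 + 2 - 3 = 8
print(common >= bound)                            # True: the pair survives the filter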

Page 57: Token

Using a DBMS for Q-gram Joins

If we have the q-grams in the DBMS, we can perform this counting efficiently.

Create auxiliary tables with tuples of the form:<sid, qgram>

and join these tables

A GROUP BY – HAVING COUNT clause can perform the counting / filtering
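A sketch of how the auxiliary rows might be generated before bulk-loading them, reusing the qgrams sketch from page 54; the extended <sid, strlen, pos, qgram> shape is an assumption here that anticipates the length and position filters of pages 59 and 62:

def qgram_rows(sid: int, s: str, q: int = 3) -> list:
    """Rows <sid, strlen, pos, qgram> for one string of the base table."""
    return [(sid, len(s), pos, g) for pos, g in enumerate(qgrams(s, q), start=1)]

# qgram_rows(1, "john_smith") -> [(1, 10, 1, '##j'), (1, 10, 2, '#jo'), ...]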

Page 58: Token

Eliminating Candidate Pairs: COUNT FILTERING

SQL for this filter: (parts omitted for clarity)

SELECT R.sid, S.sidFROM R, SWHERE R.qgram=S.qgramGROUP BY R.sid, S.sidHAVING COUNT(*) >= (max(R.strlen, S.strlen) + q - 1) –

K*q

The result is the pair of strings with sufficiently enough common q-grams to ensure that we will not have false negatives.

Page 59: Token

Eliminating Candidate Pairs Further: LENGTH FILTERINGStrings with length difference larger than K cannot

be within Edit Distance K

SELECT R.sid, S.sidFROM R, SWHERE R.qgram=S.qgram AND abs(R.strlen - S.strlen)<=KGROUP BY R.sid, S.sidHAVING COUNT(*) >= (max(R.strlen, S.strlen) + q – 1) –

K*q

We refer to this filter as LENGTH FILTERING

Page 60: Token

Exploiting Q-gram Positions for Filtering

Consider strings aabbzzaacczz and aacczzaabbzz Strings are at edit distance 4 Strings have identical q-grams for q=3

Problem: Matching q-grams that are at different positions in both strings Either q-grams do not "originate" from same q-gram,

or Too many edit operations "caused" spurious q-grams

at various parts of strings to match

Page 61: Token

Example of positional q-gramsPositional q-grams of length q=3 for

john_smith{(1,##j), (2,#jo), (3,joh), (4,ohn), (5,hn_), (6,n_s), (7,_sm), (8,smi),

(9,mit), (10, ith), (11,th$), (12, h$$)}

Positional q-grams of length q=3 forjohn_a_smith

{(1,##j), (2,#jo), (3,joh), (4,ohn), (5,hn_), (6,n_a), (7,_a_), (8,a_s), (9,_sm), (10,smi), (11,mit), (12,ith), (13,th$), (14,h$$)}

Ignoring positional information, the two strings have 11 n-grams in common

Only the five first positional q-grams are common in both strings

An additional six positional q-grams in the two strings differ in their position by just two positions

Page 62: Token

POSITION FILTERING - Filtering using positions

Keep the position of the q-grams <sid, strlen, pos, qgram>

Do not match q-grams that are more than K positions away

SELECT R.sid, S.sidFROM R, SWHERE R.qgram=S.qgram

AND abs(R.strlen - S.strlen)<=KAND abs(R.pos - S.pos)<=K

GROUP BY R.sid, S.sidHAVING COUNT(*) >= (max(R.strlen, S.strlen) + q – 1) – K*q

We refer to this filter as POSITION FILTERING

Page 63: Token

The Actual, Complete SQL Statement

SELECT R1.string, S1.string, R1.sid, S1.sidFROM R1, S1, R, S, WHERE R1.sid=R.sid

AND S1.sid=S.sidAND R.qgram=S.qgram AND abs(strlen(R1.string)–strlen(S1.string))<=KAND abs(R.pos - S.pos)<=K

GROUP BY R1.sid, S1.sid, R1.string, S1.stringHAVING COUNT(*) >=

(max(strlen(R1.string),strlen(S1.string))+ q-1)–K*q

Page 64: Token

References “Data Quality: Concepts, Methodologies and

Techniques”, C. Batini and M. Scannapieco, Springer-Verlag, 2006.

“Duplicate Record Detection: A Survey”, A. Elmagarmid, P. Ipeirotis, V. Verykios, IEEE Transactions on Knowledge and Data Engineering, Vol. 19, no. 1, Jan. 2007.

Approximate String Joins in a Database (Almost) for Free, Gravano et al, VLDB 2001,

Efficient merge and purge: Hernandez and Stolfo, SIGMOD 1995