- Using odds for weighted probabilities
- Weighting efficiency
- Selecting a comparison weight as threshold

Blocking efficiency is measured by recall and precision, and the same holds true of weighting efficiency. Before measuring its efficiency, however, we must describe how to derive the weights themselves. Weighting is the process by which we determine whether a comparison would be indicative of a match. The more reliable a field is, the more likely that a disagreement in the data will indicate that the comparison is a non-match. The higher the coincidence value (agrees non-coincidentally), the more likely that agreement in the data will indicate that the comparison is a match. The record comparison weight calculations allow us to express this relationship more precisely.

Depending on the agreement or disagreement of each field, we assign the appropriate weight to the comparison.
A sufficiently high weight should indicate a high probability that the comparison represents a matched record.
Ideally, an unmatched pair should be assigned a rather low weight.
The threshold is the comparison weight above which the pair is classed as linked and below which the pair is classed as unlinked.
The proportion of matched pairs that we thereby class as linked is the weighting recall.
The proportion of linked pairs that are actually matched is the weighting precision.
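As a minimal sketch of this classification rule (the threshold and weight values below are illustrative, not taken from the text):

```python
def classify(weight, threshold):
    """Class a record comparison as linked or unlinked by its weight."""
    return "linked" if weight > threshold else "unlinked"

# Illustrative comparison weights; a threshold of 0 splits the two classes.
weights = [12.5, 7.0, -3.2, -9.8]
labels = [classify(w, threshold=0.0) for w in weights]
```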

Using odds for weighted probabilities

Record comparison weights are based on odds for the kind of comparison being made between the fields of the record.
Odds is an expression related to probability (π) as follows:

odds = π ÷ (1 − π)    (4.1)
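Equation 4.1 can be sketched directly in code, assuming the standard definition of odds as π ÷ (1 − π), together with its inverse:

```python
def odds(pi):
    """Odds corresponding to a probability pi (equation 4.1)."""
    return pi / (1.0 - pi)

def probability(odds_value):
    """Invert equation 4.1: recover the probability pi from the odds."""
    return odds_value / (1.0 + odds_value)
```

For example, a probability of 0.8 corresponds to odds of 4 (often read as "4 to 1").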

It is possible to measure the conditional probability that a field agrees, where the condition is that
the comparison is matched and data is present.
We do this by isolating a set of records that we know
to belong to duplicate groups. We have been calling this measure reliability.
We do not need the duplicate groups to measure the probability of agreement when the comparison is not matched though present.
This is what we have called general coincidence.
We use the duplicate groups if the additional accuracy of an entity coincidence is desired.

It will be important, therefore, to compute the odds of field value agreement, disagreement, and missing value. These odds are then expressed as weights for each field. The weights of the fields chosen as important are then combined to derive a record comparison weight.

- Odds of field value agreement
- Odds of field value disagreement
- Odds of field value missing
- Changing odds to weights
- Calculating record comparison weights
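The steps above can be sketched as follows, assuming the weights are base-2 logarithms of the odds ratios (consistent with the "fractional powers of two" remark later in this section). The reliability and coincidence values, and the choice to give a missing field a weight of zero, are illustrative assumptions, not values from the text:

```python
import math

def agreement_weight(reliability, coincidence):
    """Log2 odds that an agreeing field indicates a match."""
    return math.log2(reliability / coincidence)

def disagreement_weight(reliability, coincidence):
    """Log2 odds that a disagreeing field indicates a match (negative)."""
    return math.log2((1.0 - reliability) / (1.0 - coincidence))

def record_weight(field_outcomes):
    """Sum the field weights into a record comparison weight."""
    total = 0.0
    for reliability, coincidence, outcome in field_outcomes:
        if outcome == "agree":
            total += agreement_weight(reliability, coincidence)
        elif outcome == "disagree":
            total += disagreement_weight(reliability, coincidence)
        # "missing": treated here as contributing no weight either way
    return total

# Illustrative fields: (reliability, coincidence, outcome)
fields = [
    (0.95, 0.01, "agree"),     # surname agrees: strong positive weight
    (0.90, 0.10, "disagree"),  # birth year disagrees: negative weight
    (0.85, 0.05, "missing"),   # town missing: no contribution
]
```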

Weighting efficiency

Weighting recall is the probability that a matched weight will be above threshold, and
weighting precision is the probability that a particular weight above threshold is matched.
We are relating four classes of weights:

1) all those that are accepted above threshold, i.e., linked (L_t),

2) all those that are matched (M or W_M),

3) those that are matched above threshold (M_t), and

4) those that are unmatched (W_U) above threshold (U_t).

recall = P(L_t | M) = M_t ÷ M    (4.11)

precision = P(M | L_t) = M_t ÷ (M_t + U_t)    (4.12)
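Equations 4.11 and 4.12 translate directly into code; the counts below are illustrative:

```python
def weighting_recall(matched_above, matched_total):
    """Equation 4.11: P(L_t | M) = M_t / M."""
    return matched_above / matched_total

def weighting_precision(matched_above, unmatched_above):
    """Equation 4.12: P(M | L_t) = M_t / (M_t + U_t)."""
    return matched_above / (matched_above + unmatched_above)

# E.g., 80 of 100 matched pairs above threshold, alongside 20 unmatched pairs:
# recall = 80/100 = 0.8, precision = 80/(80 + 20) = 0.8
```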

Selecting a comparison weight as threshold

The best choice of threshold depends on our ability to maximize both weighting precision and weighting recall.
Normally, if we try to increase recall, the precision is reduced, and if we take steps to increase precision, the recall is reduced.

We now discuss how to plot the distribution of comparison weights with an eye to setting the most appropriate threshold. Once this is done, the parameters for the matching algorithm will be complete. Refer to figure 2 below. This figure is an idealized version of a frequency polygon based on the histogram constructed by charting the weights for all the comparisons made in blocks. The negative weights, which represent fractional powers of two, tend to be those for unmatched comparisons; the positive weights represent, in large part, the matched comparisons. Thus the curve is bimodal.

In this case we do not have a model for the frequency density distribution, which means that we can only estimate the two curves underlying the two modes. A conservative estimate is outlined on the figure with the dotted lines: the green line is a roughly minimal estimate of the intrusion of the unmatched on the matched, and the orange line, of the matched on the unmatched. An important observation aids this estimate: the proportion of linked to unlinked pairs at weight zero should be about the same as the proportion of the two totals.
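One way to explore the recall/precision trade-off described above, given weights for comparisons whose matched status is known (e.g., from the duplicate groups), is to sweep candidate thresholds and compute the weighting recall and precision of equations 4.11 and 4.12 at each. The weight values here are invented for illustration of the bimodal shape:

```python
def sweep_thresholds(matched_weights, unmatched_weights, thresholds):
    """Weighting recall and precision (eqs. 4.11, 4.12) per candidate threshold."""
    results = []
    for t in thresholds:
        m_t = sum(1 for w in matched_weights if w > t)    # matched above threshold
        u_t = sum(1 for w in unmatched_weights if w > t)  # unmatched above threshold
        recall = m_t / len(matched_weights)
        precision = m_t / (m_t + u_t) if (m_t + u_t) else 1.0
        results.append((t, recall, precision))
    return results

# Illustrative weights: negatives mostly unmatched, positives mostly matched.
matched = [9.1, 7.4, 6.0, 4.2, 1.5, -0.5]
unmatched = [-8.3, -6.1, -4.0, -1.2, 0.8, 2.1]
for t, r, p in sweep_thresholds(matched, unmatched, thresholds=[-2, 0, 2, 4]):
    print(f"threshold {t:>2}: recall {r:.2f}, precision {p:.2f}")
```

Raising the threshold trims unmatched pairs from the linked set (raising precision) while losing low-weight matched pairs (lowering recall), which is the tension a good threshold must balance.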