Discussion – 2 pages
Discussion – Intro to Data Mining
After completing the reading this week, select two questions from each chapter to answer. Be sure to follow the requirements below:
Chapter 2:
1. What is an attribute, and why is it important?
2. What are the different types of attributes?
3. What is the difference between discrete and continuous data?
4. Why is data quality important?
5. What occurs in data preprocessing?
6. In Section 2.4, review the measures of similarity and dissimilarity; select one topic and note the key factors.
In an APA 7 formatted essay, answer all of the questions above.
Include a heading for each of the questions above.
Ensure there are at least two peer-reviewed sources to support your work.
The paper should be at least two pages of content (this does not include the cover page or reference page).
Text Book
Title: Introduction to Data Mining
ISBN: 9780133128901
Authors: Pang-Ning Tan, Michael Steinbach, Anuj Karpatne, Vipin Kumar
Publisher: Addison-Wesley
Publication Date: 2013-01-01
Edition: 2nd Edition.
Data Mining: Data
Lecture Notes for Chapter 2
Introduction to Data Mining, 2nd Edition
by
Tan, Steinbach, Kumar
Outline
Attributes and Objects
Types of Data
Data Quality
Similarity and Distance
Data Preprocessing
What is Data?
Collection of data objects and their attributes
An attribute is a property or characteristic of an object
Examples: eye color of a person, temperature, etc.
Attribute is also known as variable, field, characteristic, dimension, or feature
A collection of attributes describe an object
Object is also known as record, point, case, sample, entity, or instance
Example: a table in which the rows are data objects and the columns are their attributes.
Attribute Values
Attribute values are numbers or symbols assigned to an attribute for a particular object
Distinction between attributes and attribute values
Same attribute can be mapped to different attribute values
Example: height can be measured in feet or meters
Different attributes can be mapped to the same set of values
Example: Attribute values for ID and age are integers
But properties of attribute can be different than the properties of the values used to represent the attribute
Measurement of Length
The way you measure an attribute may not match the attribute's properties.
This scale preserves the ordering and additivity properties of length.
This scale preserves only the ordering property of length.
Types of Attributes
There are different types of attributes
Nominal
Examples: ID numbers, eye color, zip codes
Ordinal
Examples: rankings (e.g., taste of potato chips on a scale from 1-10), grades, height {tall, medium, short}
Interval
Examples: calendar dates, temperatures in Celsius or Fahrenheit.
Ratio
Examples: temperature in Kelvin, length, counts, elapsed time (e.g., time to run a race)
Properties of Attribute Values
The type of an attribute depends on which of the following properties/operations it possesses:
Distinctness: =, ≠
Order: <, >
Differences are meaningful: +, −
Ratios are meaningful: *, /
Nominal attribute: distinctness
Ordinal attribute: distinctness & order
Interval attribute: distinctness, order & meaningful differences
Ratio attribute: all 4 properties/operations
Difference Between Ratio and Interval
Is it physically meaningful to say that a temperature of 10° is twice that of 5° on
the Celsius scale?
the Fahrenheit scale?
the Kelvin scale?
Consider measuring the height above average
If Bill’s height is three inches above average and Bob’s height is six inches above average, then would we say that Bob is twice as tall as Bill?
Is this situation analogous to that of temperature?
This categorization of attributes is due to S. S. Stevens
Discrete and Continuous Attributes
Discrete Attribute
Has only a finite or countably infinite set of values
Examples: zip codes, counts, or the set of words in a collection of documents
Often represented as integer variables.
Note: binary attributes are a special case of discrete attributes
Continuous Attribute
Has real numbers as attribute values
Examples: temperature, height, or weight.
Practically, real values can only be measured and represented using a finite number of digits.
Continuous attributes are typically represented as floating-point variables.
Asymmetric Attributes
Only presence (a non-zero attribute value) is regarded as important
Words present in documents
Items present in customer transactions
If we met a friend in the grocery store, would we ever say the following?
“I see our purchases are very similar since we didn’t buy most of the same things.”
Critiques of the attribute categorization
Incomplete
Asymmetric binary
Cyclical
Multivariate
Partially ordered
Partial membership
Relationships between the data
Real data is approximate and noisy
This can complicate recognition of the proper attribute type
Treating one attribute type as another may be approximately correct
Key Messages for Attribute Types
The types of operations you choose should be “meaningful” for the type of data you have
Distinctness, order, meaningful intervals, and meaningful ratios are only four (among many possible) properties of data
The data type you see – often numbers or strings – may not capture all the properties or may suggest properties that are not present
Analysis may depend on these other properties of the data
Many statistical analyses depend only on the distribution
In the end, what is meaningful can be specific to the domain
Important Characteristics of Data
Dimensionality (number of attributes)
High dimensional data brings a number of challenges
Sparsity
Only presence counts
Resolution
Patterns depend on the scale
Size
Type of analysis may depend on size of data
Types of data sets
Record
Data Matrix
Document Data
Transaction Data
Graph
World Wide Web
Molecular Structures
Ordered
Spatial Data
Temporal Data
Sequential Data
Genetic Sequence Data
Record Data
Data that consists of a collection of records, each of which consists of a fixed set of attributes
Data Matrix
If data objects have the same fixed set of numeric attributes, then the data objects can be thought of as points in a multi-dimensional space, where each dimension represents a distinct attribute
Such a data set can be represented by an m by n matrix, where there are m rows, one for each object, and n columns, one for each attribute
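A minimal sketch of the m-by-n view described above, assuming numpy and using made-up attribute values:

```python
# Four data objects (rows), each with two numeric attributes (columns).
import numpy as np

data = np.array([
    [1.5, 68.0],
    [1.7, 72.5],
    [1.6, 70.1],
    [1.8, 80.3],
])
print(data.shape)  # (4, 2): m = 4 objects, n = 2 attributes
```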
Document Data
Each document becomes a ‘term’ vector
Each term is a component (attribute) of the vector
The value of each component is the number of times the corresponding term occurs in the document.
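A small sketch of building term vectors for a tiny, hypothetical corpus, using only the Python standard library:

```python
# Each document becomes a vector of term counts over a shared vocabulary.
from collections import Counter

docs = ["the cat sat on the mat", "the dog chased the cat"]
vocab = sorted({word for doc in docs for word in doc.split()})
term_vectors = [[Counter(doc.split())[term] for term in vocab] for doc in docs]

print(vocab)
for vec in term_vectors:
    print(vec)  # component k counts how often vocab[k] occurs in the document
```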
Transaction Data
A special type of data, where
Each transaction involves a set of items.
For example, consider a grocery store. The set of products purchased by a customer during one shopping trip constitute a transaction, while the individual products that were purchased are the items.
Can represent transaction data as record data
Graph Data
Examples: Generic graph, a molecule, and webpages
Benzene Molecule: C6H6
Ordered Data
Sequences of transactions, where each element of the sequence is a set of items/events
Ordered Data
Genomic sequence data
Ordered Data
Spatio-Temporal Data
Average Monthly Temperature of land and ocean
Data Quality
Poor data quality negatively affects many data processing efforts
Data mining example: a classification model for detecting people who are loan risks is built using poor data
Some credit-worthy candidates are denied loans
More loans are given to individuals that default
Data Quality …
What kinds of data quality problems?
How can we detect problems with the data?
What can we do about these problems?
Examples of data quality problems:
Noise and outliers
Wrong data
Fake data
Missing values
Duplicate data
Noise
For objects, noise is an extraneous object
For attributes, noise refers to modification of original values
Examples: distortion of a person's voice when talking on a poor phone connection and "snow" on a television screen
The figures below show two sine waves of the same magnitude and different frequencies, the waves combined, and the two sine waves with random noise
The magnitude and shape of the original signal is distorted
Outliers
Outliers are data objects with characteristics that are considerably different than most of the other data objects in the data set
Case 1: Outliers are noise that interferes with data analysis
Case 2: Outliers are the goal of our analysis
Credit card fraud
Intrusion detection
Causes?
Missing Values
Reasons for missing values
Information is not collected
(e.g., people decline to give their age and weight)
Attributes may not be applicable to all cases
(e.g., annual income is not applicable to children)
Handling missing values
Eliminate data objects or variables
Estimate missing values
Example: time series of temperature
Example: census results
Ignore the missing value during analysis
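A sketch of the handling strategies listed above on a hypothetical table with one missing Age value, assuming pandas and numpy are available:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Age": [23.0, np.nan, 31.0], "Income": [40, 52, 48]})

dropped = df.dropna()                                        # eliminate data objects
filled = df.assign(Age=df["Age"].fillna(df["Age"].mean()))   # estimate the missing value
mean_ignoring_nan = df["Age"].mean()                         # NaN is ignored in the analysis

print(dropped.shape)           # (2, 2)
print(filled["Age"].tolist())  # [23.0, 27.0, 31.0]
print(mean_ignoring_nan)       # 27.0
```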
Duplicate Data
Data set may include data objects that are duplicates, or almost duplicates of one another
Major issue when merging data from heterogeneous sources
Examples:
Same person with multiple email addresses
Data cleaning
Process of dealing with duplicate data issues
When should duplicate data not be removed?
Similarity and Dissimilarity Measures
Similarity measure
Numerical measure of how alike two data objects are.
Is higher when objects are more alike.
Often falls in the range [0,1]
Dissimilarity measure
Numerical measure of how different two data objects are
Lower when objects are more alike
Minimum dissimilarity is often 0
Upper limit varies
Proximity refers to a similarity or dissimilarity
Similarity/Dissimilarity for Simple Attributes
The following table shows the similarity and dissimilarity between two objects, x and y, with respect to a single, simple attribute.

Attribute Type      Dissimilarity                              Similarity
Nominal             d = 0 if x = y, d = 1 if x ≠ y             s = 1 if x = y, s = 0 if x ≠ y
Ordinal             d = |x − y| / (n − 1)                      s = 1 − d
                    (values mapped to integers 0 to n − 1,
                    where n is the number of values)
Interval or Ratio   d = |x − y|                                s = −d, s = 1 / (1 + d), or
                                                               s = 1 − (d − min_d) / (max_d − min_d)
Euclidean Distance
d(x, y) = ( Σ_{k=1}^{n} (xk − yk)^2 )^(1/2)
where n is the number of dimensions (attributes) and xk and yk are, respectively, the kth attributes (components) of data objects x and y.
Standardization is necessary, if scales differ.
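A minimal sketch of this formula, using only the standard library and made-up points:

```python
import math

def euclidean_distance(x, y):
    # Square root of the sum of squared attribute differences.
    return math.sqrt(sum((xk - yk) ** 2 for xk, yk in zip(x, y)))

print(euclidean_distance((0, 2), (2, 0)))  # 2.828...
```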
Euclidean Distance
Example: a set of points and the corresponding Euclidean distance matrix.
Minkowski Distance
Minkowski Distance is a generalization of Euclidean Distance:
d(x, y) = ( Σ_{k=1}^{n} |xk − yk|^r )^(1/r)
where r is a parameter, n is the number of dimensions (attributes), and xk and yk are, respectively, the kth attributes (components) of data objects x and y.
Minkowski Distance: Examples
r = 1. City block (Manhattan, taxicab, L1 norm) distance.
A common example of this for binary vectors is the Hamming distance, which is just the number of bits that are different between two binary vectors
r = 2. Euclidean distance
r → ∞. "supremum" (Lmax norm, L∞ norm) distance.
This is the maximum difference between any component of the vectors
Do not confuse r with n, i.e., all these distances are defined for all numbers of dimensions.
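A sketch showing how the parameter r covers the three cases above, on hypothetical points with only the standard library:

```python
def minkowski_distance(x, y, r):
    # r = 1: city block, r = 2: Euclidean, r = infinity: supremum.
    if r == float("inf"):
        return max(abs(xk - yk) for xk, yk in zip(x, y))
    return sum(abs(xk - yk) ** r for xk, yk in zip(x, y)) ** (1 / r)

p1, p2 = (0, 2), (3, 1)
print(minkowski_distance(p1, p2, 1))             # 4.0   (city block)
print(minkowski_distance(p1, p2, 2))             # 3.162 (Euclidean)
print(minkowski_distance(p1, p2, float("inf")))  # 3     (supremum)
```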
Minkowski Distance
Example: distance matrices for several values of r on the same set of points.
Mahalanobis Distance
mahalanobis(x, y) = (x − y) Σ^(−1) (x − y)^T, where Σ is the covariance matrix of the input data
For the red points shown in the original figure, the Euclidean distance is 14.7 and the Mahalanobis distance is 6.
Mahalanobis Distance
Covariance Matrix: Σ = [ 0.3  0.2 ; 0.2  0.3 ]
A: (0.5, 0.5)
B: (0, 1)
C: (1.5, 1.5)
Mahal(A,B) = 5
Mahal(A,C) = 4
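A sketch reproducing the example above, assuming numpy and the covariance matrix shown; the distance is computed in the same (squared) form the slide uses:

```python
import numpy as np

def mahalanobis(x, y, cov):
    # (x - y) * inverse(covariance) * (x - y)^T
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(diff @ np.linalg.inv(cov) @ diff)

cov = np.array([[0.3, 0.2], [0.2, 0.3]])
A, B, C = (0.5, 0.5), (0.0, 1.0), (1.5, 1.5)
print(mahalanobis(A, B, cov))  # ~5.0
print(mahalanobis(A, C, cov))  # ~4.0
```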
Common Properties of a Distance
Distances, such as the Euclidean distance, have some well known properties.
d(x, y) ≥ 0 for all x and y, and d(x, y) = 0 if and only if x = y. (Positivity)
d(x, y) = d(y, x) for all x and y. (Symmetry)
d(x, z) ≤ d(x, y) + d(y, z) for all points x, y, and z. (Triangle Inequality)
where d(x, y) is the distance (dissimilarity) between points (data objects), x and y.
A distance that satisfies these properties is a metric
Common Properties of a Similarity
Similarities, also have some well known properties.
s(x, y) = 1 (or maximum similarity) only if x = y.
(does not always hold, e.g., cosine)
s(x, y) = s(y, x) for all x and y. (Symmetry)
where s(x, y) is the similarity between points (data objects), x and y.
Similarity Between Binary Vectors
Common situation is that objects, x and y, have only binary attributes
Compute similarities using the following quantities
f01 = the number of attributes where x was 0 and y was 1
f10 = the number of attributes where x was 1 and y was 0
f00 = the number of attributes where x was 0 and y was 0
f11 = the number of attributes where x was 1 and y was 1
Simple Matching and Jaccard Coefficients
SMC = number of matches / number of attributes
= (f11 + f00) / (f01 + f10 + f11 + f00)
J = number of 11 matches / number of non-zero attributes
= (f11) / (f01 + f10 + f11)
SMC versus Jaccard: Example
x = 1 0 0 0 0 0 0 0 0 0
y = 0 0 0 0 0 0 1 0 0 1
f01 = 2 (the number of attributes where x was 0 and y was 1)
f10 = 1 (the number of attributes where x was 1 and y was 0)
f00 = 7 (the number of attributes where x was 0 and y was 0)
f11 = 0 (the number of attributes where x was 1 and y was 1)
SMC = (f11 + f00) / (f01 + f10 + f11 + f00)
= (0+7) / (2+1+0+7) = 0.7
J = (f11) / (f01 + f10 + f11) = 0 / (2 + 1 + 0) = 0
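A small sketch that recomputes SMC and Jaccard for the x and y vectors above, using only the standard library:

```python
def binary_counts(x, y):
    f01 = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    f10 = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    f00 = sum(1 for a, b in zip(x, y) if a == 0 and b == 0)
    f11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    return f01, f10, f00, f11

x = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
y = [0, 0, 0, 0, 0, 0, 1, 0, 0, 1]
f01, f10, f00, f11 = binary_counts(x, y)
smc = (f11 + f00) / (f01 + f10 + f11 + f00)  # 0.7
jaccard = f11 / (f01 + f10 + f11)            # 0.0
print(smc, jaccard)
```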
Cosine Similarity
If d1 and d2 are two document vectors, then
cos(d1, d2) = <d1, d2> / (||d1|| ||d2||)
where <d1, d2> indicates the vector dot (inner) product and ||d|| is the length of vector d
Example:
d1 = 3 2 0 5 0 0 0 2 0 0
d2 = 1 0 0 0 0 0 0 1 0 2
<d1, d2> = 3*1 + 2*0 + 0*0 + 5*0 + 0*0 + 0*0 + 0*0 + 2*1 + 0*0 + 0*2 = 5
||d1|| = (3*3 + 2*2 + 0*0 + 5*5 + 0*0 + 0*0 + 0*0 + 2*2 + 0*0 + 0*0)^0.5 = (42)^0.5 = 6.481
||d2|| = (1*1 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 1*1 + 0*0 + 2*2)^0.5 = (6)^0.5 = 2.449
cos(d1, d2) = 5 / (6.481 * 2.449) = 0.3150
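A minimal sketch of cosine similarity applied to the document vectors d1 and d2 above, standard library only:

```python
import math

def cosine(d1, d2):
    dot = sum(a * b for a, b in zip(d1, d2))
    norm1 = math.sqrt(sum(a * a for a in d1))
    norm2 = math.sqrt(sum(b * b for b in d2))
    return dot / (norm1 * norm2)

d1 = [3, 2, 0, 5, 0, 0, 0, 2, 0, 0]
d2 = [1, 0, 0, 0, 0, 0, 0, 1, 0, 2]
print(round(cosine(d1, d2), 4))  # 0.315
```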
Correlation
Correlation measures the linear relationship between objects
corr(x, y) = covariance(x, y) / ( standard_deviation(x) * standard_deviation(y) )
Visually Evaluating Correlation
Scatter plots showing the similarity from –1 to 1.
Drawback of Correlation
x = (-3, -2, -1, 0, 1, 2, 3)
y = (9, 4, 1, 0, 1, 4, 9)
yi = xi^2
mean(x) = 0, mean(y) = 4
std(x) = 2.16, std(y) = 3.74
corr(x, y) = ( (-3)(5) + (-2)(0) + (-1)(-3) + (0)(-4) + (1)(-3) + (2)(0) + (3)(5) ) / ( 6 * 2.16 * 3.74 ) = 0
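A sketch confirming the example: a perfect quadratic relationship can have zero linear correlation (numpy assumed available):

```python
import numpy as np

x = np.array([-3, -2, -1, 0, 1, 2, 3], dtype=float)
y = x ** 2
print(np.corrcoef(x, y)[0, 1])  # 0.0, up to floating-point rounding
```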
Correlation vs Cosine vs Euclidean Distance
Compare the three proximity measures according to their behavior under variable transformation
scaling: multiplication by a value
translation: adding a constant
Consider the example
x = (1, 2, 4, 3, 0, 0, 0), y = (1, 2, 3, 4, 0, 0, 0)
ys = y * 2 (scaled version of y), yt = y + 5 (translated version)
Property                                  Cosine   Correlation   Euclidean Distance
Invariant to scaling (multiplication)     Yes      Yes           No
Invariant to translation (addition)       No       Yes           No

Measure              (x, y)    (x, ys)   (x, yt)
Cosine               0.9667    0.9667    0.7940
Correlation          0.9429    0.9429    0.9429
Euclidean Distance   1.4142    5.8310    14.2127
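A sketch that reproduces the qualitative pattern in the tables above (cosine is invariant to scaling, correlation to both scaling and translation, Euclidean distance to neither), assuming numpy; exact values depend on the conventions used:

```python
import numpy as np
from numpy.linalg import norm

def cosine(a, b):
    return a @ b / (norm(a) * norm(b))

def correlation(a, b):
    return np.corrcoef(a, b)[0, 1]

x = np.array([1, 2, 4, 3, 0, 0, 0], dtype=float)
y = np.array([1, 2, 3, 4, 0, 0, 0], dtype=float)
ys, yt = y * 2, y + 5  # scaled and translated versions of y

for label, other in (("y", y), ("ys", ys), ("yt", yt)):
    print(label, cosine(x, other), correlation(x, other), norm(x - other))
```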
Correlation vs cosine vs Euclidean distance
Choice of the right proximity measure depends on the domain
What is the correct choice of proximity measure for the following situations?
Comparing documents using the frequencies of words
Documents are considered similar if the word frequencies are similar
Comparing the temperature in Celsius of two locations
Two locations are considered similar if the temperatures are similar in magnitude
Comparing two time series of temperature measured in Celsius
Two time series are considered similar if their “shape” is similar, i.e., they vary in the same way over time, achieving minimums and maximums at similar times, etc.
Comparison of Proximity Measures
Domain of application
Similarity measures tend to be specific to the type of attribute and data
Record data, images, graphs, sequences, 3D-protein structure, etc. tend to have different measures
However, one can talk about various properties that you would like a proximity measure to have
Symmetry is a common one
Tolerance to noise and outliers is another
Ability to find more types of patterns?
Many others possible
The measure must be applicable to the data and produce results that agree with domain knowledge
Information Based Measures
Information theory is a well-developed and fundamental discipline with broad applications
Some similarity measures are based on information theory
Mutual information in various versions
Maximal Information Coefficient (MIC) and related measures
General and can handle non-linear relationships
Can be complicated and time intensive to compute
Information and Probability
Information relates to possible outcomes of an event
transmission of a message, flip of a coin, or measurement of a piece of data
The more certain an outcome, the less information that it contains and vice-versa
For example, if a coin has two heads, then an outcome of heads provides no information
More quantitatively, the information is related to the probability of an outcome
The smaller the probability of an outcome, the more information it provides and vice-versa
Entropy is the commonly used measure
Entropy
For a variable (event) X with n possible values (outcomes) x1, x2, ..., xn, each outcome having probability p1, p2, ..., pn, the entropy of X, H(X), is given by
H(X) = − Σ_{i=1}^{n} pi log2(pi)
Entropy is between 0 and log2(n) and is measured in bits
Thus, entropy is a measure of how many bits it takes to represent an observation of X on average
Entropy Examples
For a coin with probability p of heads and probability q = 1 − p of tails:
H = − p log2(p) − q log2(q)
For p = 0.5, q = 0.5 (fair coin), H = 1
For p = 1 or q = 1, H = 0
What is the entropy of a fair four-sided die?
Entropy for Sample Data: Example
Maximum entropy is log2(5) = 2.3219

Hair Color   Count   p      -p log2 p
Black        75      0.75   0.3113
Brown        15      0.15   0.4105
Blond        5       0.05   0.2161
Red          0       0.00   0
Other        5       0.05   0.2161
Total        100     1.0    1.1540
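A small sketch recomputing the entropy of the hair-color sample above, using only the standard library:

```python
import math

def entropy(counts):
    # H = -sum(p * log2(p)) over the non-empty categories.
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

hair_counts = [75, 15, 5, 0, 5]  # Black, Brown, Blond, Red, Other
print(round(entropy(hair_counts), 4))  # 1.154
```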
Entropy for Sample Data
Suppose we have
a number of observations (m) of some attribute, X, e.g., the hair color of students in the class,
where there are n different possible values,
and the number of observations in the ith category is mi
Then, for this sample,
H(X) = − Σ_{i=1}^{n} (mi / m) log2(mi / m)
For continuous data, the calculation is harder
Mutual Information
Information one variable provides about another
Formally, I(X, Y) = H(X) + H(Y) − H(X, Y), where
H(X, Y) is the joint entropy of X and Y:
H(X, Y) = − Σ_i Σ_j pij log2(pij)
where pij is the probability that the ith value of X and the jth value of Y occur together
For discrete variables, this is easy to compute
Maximum mutual information for discrete variables is log2(min(nX, nY)), where nX (nY) is the number of values of X (Y)
Mutual Information Example
Student Status   Count   p      -p log2 p
Undergrad        45      0.45   0.5184
Grad             55      0.55   0.4744
Total            100     1.00   0.9928

Grade   Count   p      -p log2 p
A       35      0.35   0.5301
B       50      0.50   0.5000
C       15      0.15   0.4105
Total   100     1.00   1.4406

Student Status   Grade   Count   p      -p log2 p
Undergrad        A       5       0.05   0.2161
Undergrad        B       30      0.30   0.5211
Undergrad        C       10      0.10   0.3322
Grad             A       30      0.30   0.5211
Grad             B       20      0.20   0.4644
Grad             C       5       0.05   0.2161
Total            100     1.00    2.2710
Mutual information of Student Status and Grade = 0.9928 + 1.4406 – 2.2710 = 0.1624
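A sketch recomputing the mutual information above from the three count tables, using only the standard library:

```python
import math

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

status_counts = [45, 55]               # Undergrad, Grad
grade_counts = [35, 50, 15]            # A, B, C
joint_counts = [5, 30, 10, 30, 20, 5]  # the six (status, grade) combinations

mi = entropy(status_counts) + entropy(grade_counts) - entropy(joint_counts)
print(round(mi, 4))  # 0.1624
```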
Maximal Information Coefficient
Reshef, David N., Yakir A. Reshef, Hilary K. Finucane, Sharon R. Grossman, Gilean McVean, Peter J. Turnbaugh, Eric S. Lander, Michael Mitzenmacher, and Pardis C. Sabeti. "Detecting novel associations in large data sets." Science 334, no. 6062 (2011): 1518–1524.
Applies mutual information to two continuous variables
Consider the possible binnings of the variables into discrete categories
nX × nY ≤ N^0.6, where
nX is the number of values of X
nY is the number of values of Y
N is the number of samples (observations, data objects)
Compute the mutual information
Normalized by log2(min(nX, nY))
Take the highest value
General Approach for Combining Similarities
Sometimes attributes are of many different types, but an overall similarity is needed.
1: For the kth attribute, compute a similarity, sk(x, y), in the range [0, 1].
2: Define an indicator variable, δk, for the kth attribute as follows:
δk = 0 if the kth attribute is an asymmetric attribute and both objects have a value of 0, or if one of the objects has a missing value for the kth attribute
δk = 1 otherwise
3: Compute the overall similarity between x and y as
similarity(x, y) = ( Σ_{k=1}^{n} δk sk(x, y) ) / ( Σ_{k=1}^{n} δk )
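A minimal sketch of step 3, assuming the per-attribute similarities sk(x, y) and the indicators δk have already been computed (the numbers are hypothetical):

```python
def combined_similarity(similarities, deltas):
    # Average only the applicable per-attribute similarities (delta = 1).
    numerator = sum(d * s for s, d in zip(similarities, deltas))
    denominator = sum(deltas)
    return numerator / denominator if denominator else 0.0

# Three attributes; the second is skipped (asymmetric, both values 0).
print(combined_similarity([0.8, 0.0, 0.5], [1, 0, 1]))  # 0.65
```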
Using Weights to Combine Similarities
May not want to treat all attributes the same.
Use non-negative weights ωk to give some attributes more influence than others when combining the per-attribute similarities
Can also define a weighted form of distance, e.g., a weighted Minkowski distance:
d(x, y) = ( Σ_{k=1}^{n} ωk |xk − yk|^r )^(1/r)
Data Preprocessing
Aggregation
Sampling
Discretization and Binarization
Attribute Transformation
Dimensionality Reduction
Feature subset selection
Feature creation
Aggregation
Combining two or more attributes (or objects) into a single attribute (or object)
Purpose
Data reduction – reduce the number of attributes or objects
Change of scale
Cities aggregated into regions, states, countries, etc.
Days aggregated into weeks, months, or years
More “stable” data – aggregated data tends to have less variability
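A sketch of change-of-scale aggregation, rolling hypothetical daily values up to monthly means, assuming pandas and numpy are available:

```python
import numpy as np
import pandas as pd

days = pd.date_range("2020-01-01", "2020-12-31", freq="D")
daily = pd.Series(np.random.default_rng(1).normal(10, 3, len(days)), index=days)

monthly = daily.resample("MS").mean()  # aggregate days into months
print(daily.std(), monthly.std())      # the aggregated series varies less
```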
Example: Precipitation in Australia
This example is based on precipitation in Australia from the period 1982 to 1993.
The next slide shows
A histogram for the standard deviation of average monthly precipitation for 3,030 0.5° by 0.5° grid cells in Australia, and
A histogram for the standard deviation of the average yearly precipitation for the same locations.
The average yearly precipitation has less variability than the average monthly precipitation.
All precipitation measurements (and their standard deviations) are in centimeters.
Example: Precipitation in Australia …
Standard Deviation of Average Monthly Precipitation
Standard Deviation of Average Yearly Precipitation
Variation of Precipitation in Australia
Sampling
Sampling is the main technique employed for data reduction.
It is often used for both the preliminary investigation of the data and the final data analysis.
Statisticians often sample because obtaining the entire set of data of interest is too expensive or time consuming.
Sampling is typically used in data mining because processing the entire set of data of interest is too expensive or time consuming.
Sampling …
The key principle for effective sampling is the following:
Using a sample will work almost as well as using the entire data set, if the sample is representative
A sample is representative if it has approximately the same properties (of interest) as the original set of data
Sample Size
Figures: the same data set sampled with 8000 points, 2000 points, and 500 points.
Types of Sampling
Simple Random Sampling
There is an equal probability of selecting any particular item
Sampling without replacement
As each item is selected, it is removed from the population
Sampling with replacement
Objects are not removed from the population as they are selected for the sample.
In sampling with replacement, the same object can be picked up more than once
Stratified sampling
Split the data into several partitions; then draw random samples from each partition
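A sketch of the three sampling styles above on a hypothetical population, using the standard library's random module:

```python
import random

data = list(range(100))  # hypothetical data objects

without_replacement = random.sample(data, 10)
with_replacement = [random.choice(data) for _ in range(10)]

# Stratified sampling: draw from each partition (stratum) separately.
strata = {"low": data[:50], "high": data[50:]}
stratified = [obj for part in strata.values() for obj in random.sample(part, 5)]
print(without_replacement, with_replacement, stratified, sep="\n")
```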
Sample Size
What sample size is necessary to get at least one object from each of 10 equal-sized groups?
Discretization
Discretization is the process of converting a continuous attribute into an ordinal attribute
A potentially infinite number of values are mapped into a small number of categories
Discretization is used in both unsupervised and supervised settings
Unsupervised Discretization
Data consists of four groups of points and two outliers. Data is one-dimensional, but a random y component is added to reduce overlap.
Unsupervised Discretization
Equal interval width approach used to obtain 4 values.
Unsupervised Discretization
Equal frequency approach used to obtain 4 values.
Unsupervised Discretization
K-means approach to obtain 4 values.
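A sketch of the equal-width and equal-frequency approaches on hypothetical one-dimensional data, assuming numpy (a k-means-based binning would additionally need a clustering routine and is omitted):

```python
import numpy as np

values = np.random.default_rng(0).normal(size=200)

# Equal interval width: split the observed range into 4 same-width bins.
width_edges = np.linspace(values.min(), values.max(), 5)
width_bins = np.digitize(values, width_edges[1:-1])

# Equal frequency: split at the quartiles so each bin holds ~50 points.
freq_edges = np.quantile(values, [0.25, 0.5, 0.75])
freq_bins = np.digitize(values, freq_edges)

print(np.bincount(width_bins), np.bincount(freq_bins))
```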
Discretization in Supervised Settings
Many classification algorithms work best if both the independent and dependent variables have only a few values
We give an illustration of the usefulness of discretization using the following example.
Binarization
Binarization maps a continuous or categorical attribute into one or more binary variables
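A minimal sketch of one-hot binarization of a hypothetical categorical attribute, standard library only:

```python
colors = ["red", "green", "blue", "green"]
categories = sorted(set(colors))  # ['blue', 'green', 'red']
one_hot = [[int(c == cat) for cat in categories] for c in colors]
print(one_hot)  # e.g. 'red' -> [0, 0, 1]
```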
Attribute Transformation
An attribute transform is a function that maps the entire set of values of a given attribute to a new set of replacement values such that each old value can be identified with one of the new values
Simple functions: x^k, log(x), e^x, |x|
Normalization
Refers to various techniques to adjust to differences among attributes in terms of frequency of occurrence, mean, variance, range
Take out unwanted, common signal, e.g., seasonality
In statistics, standardization refers to subtracting off the means and dividing by the standard deviation
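A sketch of a simple functional transform and of statistical standardization (subtract the mean, divide by the standard deviation) on made-up values, assuming numpy:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

log_x = np.log(x)                        # simple functional transform
standardized = (x - x.mean()) / x.std()  # zero mean, unit standard deviation
print(standardized.mean(), standardized.std())  # ~0.0 and 1.0
```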
Example: Sample Time Series of Plant Growth
Figure: correlations between the NPP time series for Minneapolis and other locations.
Net Primary Production (NPP) is a measure of plant growth used by ecosystem scientists.
Seasonality Accounts for Much Correlation
Normalized using monthly Z score: subtract off the monthly mean and divide by the monthly standard deviation
Figure: correlations between the time series before and after this normalization.
Curse of Dimensionality
When dimensionality increases, data becomes increasingly sparse in the space that it occupies
Definitions of density and distance between points, which are critical for clustering and outlier detection, become less meaningful
Randomly generate 500 points
Compute difference between max and min distance between any pair of points
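A sketch of that experiment, assuming numpy: as the number of dimensions grows, the gap between the largest and smallest pairwise distances shrinks relative to the smallest distance:

```python
import numpy as np

rng = np.random.default_rng(0)
for dims in (2, 10, 50, 200):
    pts = rng.random((500, dims))                                # 500 random points
    sq = (pts ** 2).sum(axis=1)
    d2 = np.clip(sq[:, None] + sq[None, :] - 2 * pts @ pts.T, 0, None)
    iu = np.triu_indices(len(pts), k=1)                          # all distinct pairs
    d = np.sqrt(d2[iu])
    print(dims, (d.max() - d.min()) / d.min())                   # relative spread
```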
Dimensionality Reduction
Purpose:
Avoid curse of dimensionality
Reduce amount of time and memory required by data mining algorithms
Allow data to be more easily visualized
May help to eliminate irrelevant features or reduce noise
Techniques
Principal Components Analysis (PCA)
Singular Value Decomposition
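A sketch of PCA on hypothetical data via the eigenvectors of the covariance matrix, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # hypothetical m x n data matrix
Xc = X - X.mean(axis=0)                  # center each attribute

cov = np.cov(Xc, rowvar=False)           # n x n covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
top2 = eigvecs[:, ::-1][:, :2]           # directions of largest variance
reduced = Xc @ top2                      # project onto 2 principal components
print(reduced.shape)                     # (100, 2)
```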