from dsc80_utils import *
Announcements 📣¶
- Lab 6 is due tomorrow.
- The Project 3 Checkpoint is due on Friday.
Agenda 📆¶
- Text features.
- Bag of words.
- Cosine similarity.
- TF-IDF.
- Example: State of the Union addresses 🎤.
Question 🤔 (Answer at q.dsc80.com)
^\w{2,5}.\d*\/[^A-Z5]{1,}
Select all strings below that contain any match with the regular expression above.
"billy4/Za"
"billy4/za"
"DAI_s2154/pacific"
"daisy/ZZZZZ"
"bi_/_lly98"
"!@__!14/atlantic"
(This problem was from Sp22 Final question 7.2.)
Text features¶
Review: Regression and features¶
- In DSC 40A, our running example was to use regression to predict a data scientist's salary, given their GPA, years of experience, and years of education.
- After minimizing empirical risk to determine optimal parameters, $w_0^*, \dots, w_3^*$, we made predictions using: $$\text{predicted salary} = w_0^* + w_1^* \cdot \text{GPA} + w_2^* \cdot \text{years of experience} + w_3^* \cdot \text{years of education}$$ (see the small numeric sketch after this list).
- GPA, years of experience, and years of education are features – they represent a data scientist as a vector of numbers.
- e.g. Your feature vector may be [3.5, 1, 7].
- This approach requires features to be numeric.
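To make this concrete, here's a minimal sketch of such a prediction with made-up parameter values (the real $w^*$ values would come from minimizing empirical risk on actual salary data, which we don't have here):
import numpy as np
# Hypothetical optimal parameters [w0*, w1*, w2*, w3*] -- for illustration only.
w_star = np.array([20_000, 15_000, 8_000, 5_000])
# Feature vector: [GPA, years of experience, years of education].
x = np.array([3.5, 1, 7])
# Prediction: w0* + w1* * GPA + w2* * experience + w3* * education.
w_star[0] + w_star[1:] @ x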
Moving forward¶
Suppose we'd like to predict the sentiment of a piece of text from 1 to 10.
- 10: Very positive (happy).
- 1: Very negative (sad, angry).
Example:
Input: "DSC 80 is a pretty good class."
Output: 7.
We can frame this as a regression problem, but we can't directly use what we learned in 40A, because here our inputs are text, not numbers.
Text features¶
- Big question: How do we represent a text document as a feature vector of numbers?
- If we can do this, we can:
- use a text document as input in a regression or classification model (in a few lectures).
- quantify the similarity of two text documents (today).
Example: San Diego employee salaries¶
- Transparent California publishes the salaries of all City of San Diego employees.
- Let's look at the 2022 data.
salaries = pd.read_csv('https://transcal.s3.amazonaws.com/public/export/san-diego-2022.csv')
# Keep only each employee's first name and replace the rest with 'Xxxx', to partially anonymize the data.
salaries['Employee Name'] = salaries['Employee Name'].str.split().str[0] + ' Xxxx'
salaries.head()
Aside on privacy and ethics¶
- Even though the data we downloaded is publicly available, employee names still correspond to real people.
- Be careful when dealing with PII (personally identifiable information).
- Only work with the data that is needed for your analysis.
- Even when data is public, people have a reasonable right to privacy.
- Remember to think about the impacts of your work outside of your Jupyter Notebook.
Goal: Quantifying similarity¶
- Our goal is to describe, numerically, how similar two job titles are.
- For instance, our similarity metric should tell us that 'Deputy Fire Chief' and 'Fire Battalion Chief' are more similar than 'Deputy Fire Chief' and 'City Attorney'.
- Idea: Two job titles are similar if they contain shared words, regardless of order. So, to measure the similarity between two job titles, we could count the number of words they share in common (see the sketch after this list).
- Before we do this, we need to be confident that the job titles are clean and consistent – let's explore.
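As a rough sketch of the word-counting idea above, applied to two raw (uncleaned) titles:
def count_shared_words(title_1, title_2):
    # Number of distinct words that appear in both titles, ignoring case and order.
    return len(set(title_1.lower().split()) & set(title_2.lower().split()))

# 'Deputy Fire Chief' and 'Fire Battalion Chief' share 2 words; 'Deputy Fire Chief' and 'City Attorney' share 0.
count_shared_words('Deputy Fire Chief', 'Fire Battalion Chief'), count_shared_words('Deputy Fire Chief', 'City Attorney')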
Exploring job titles¶
jobtitles = salaries['Job Title']
jobtitles.head()
How many employees are in the dataset? How many unique job titles are there?
jobtitles.shape[0], jobtitles.nunique()
What are the most common job titles?
jobtitles.value_counts().iloc[:100]
jobtitles.value_counts().iloc[:10].sort_values().plot(kind='barh')
Are there any missing job titles?
jobtitles.isna().sum()
Fortunately, no.
Canonicalization¶
Remember, our goal is ultimately to count the number of shared words between job titles. But before we start counting the number of shared words, we need to consider the following:
- Some job titles may have punctuation, like '-' and '&', which may count as words when they shouldn't.
  - 'Assistant - Manager' and 'Assistant Manager' should count as the same job title.
- Some job titles may have "glue" words, like 'to' and 'the', which (we can argue) also shouldn't count as words.
  - 'Assistant To The Manager' and 'Assistant Manager' should count as the same job title.
- If we just want to focus on the titles themselves, then perhaps roman numerals should be removed: that is, 'Police Officer Ii' and 'Police Officer I' should count as the same job title.
Let's address the above issues. The process of converting job titles so that they are always represented the same way is called canonicalization.
Punctuation¶
Are there job titles with unnecessary punctuation that we can remove?
To find out, we can write a regular expression that looks for characters other than letters, numbers, and spaces.
We can use regular expressions with the .str methods we learned earlier in the quarter just by using regex=True.
jobtitles.str.contains(r'[^A-Za-z0-9 ]', regex=True).sum()
jobtitles[jobtitles.str.contains(r'[^A-Za-z0-9 ]', regex=True)].head()
It seems like we should replace these pieces of punctuation with a single space.
"Glue" words¶
Are there job titles with "glue" words in the middle, such as 'Assistant to the Manager'?
To figure out if any titles contain the word 'to', we can't just do the following, because it will evaluate to True for job titles that have 'to' anywhere in them, even if not as a standalone word.
# Why are we converting to lowercase?
jobtitles.str.lower().str.contains('to').sum()
jobtitles[jobtitles.str.lower().str.contains('to')]
Instead, we need to look for 'to' separated by word boundaries.
jobtitles.str.lower().str.contains(r'\bto\b', regex=True).sum()
jobtitles[jobtitles.str.lower().str.contains(r'\bto\b', regex=True)]
We can look for other filler words too, like 'the' and 'for'.
jobtitles[jobtitles.str.lower().str.contains(r'\bthe\b', regex=True)]
jobtitles[jobtitles.str.lower().str.contains(r'\bfor\b', regex=True)]
We should probably remove these "glue" words.
Roman numerals (e.g. "Ii")¶
Lastly, let's try and identify job titles that have roman numerals at the end, like 'i' (1), 'ii' (2), 'iii' (3), or 'iv' (4). As before, we'll convert to lowercase first.
jobtitles[jobtitles.str.lower().str.contains(r'\bi+v?\b', regex=True)]
Let's get rid of those numbers, too.
Fixing punctuation and removing "glue" words and roman numerals¶
Let's put the preceding three steps together and canonicalize job titles by:
- converting to lowercase,
- removing each occurrence of 'to', 'the', and 'for',
- replacing each non-letter/digit/space character with a space,
- replacing each roman numeral – 'i', 'ii', 'iii', or 'iv' – with nothing, and
- replacing each sequence of multiple spaces with a single space.
jobtitles = (
jobtitles
.str.lower()
.str.replace(r'\bto\b|\bthe\b|\bfor\b', '', regex=True)
.str.replace(r'[^A-Za-z0-9 ]', ' ', regex=True)
.str.replace(r'\bi+v?\b', '', regex=True)
.str.replace(r' +', ' ', regex=True) # ' +' matches 1 or more occurrences of a space.
.str.strip() # Removes leading/trailing spaces if present.
)
jobtitles.sample(5)
(jobtitles == 'police officer').sum()
Possible issue: inconsistent representations¶
Another possible issue is that some job titles may have inconsistent representations of the same word (e.g. 'Asst.' vs 'Assistant').
jobtitles[jobtitles.str.contains('asst')].value_counts()
jobtitles[jobtitles.str.contains('assistant')].value_counts().head()
The 2020 salaries dataset had several of these issues, but fortunately they appear to be fixed for us in the 2022 dataset (thanks, Transparent California).
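If we did need to fix abbreviations like these, a minimal sketch might look like the following (the abbreviation-to-full-word mapping here is hypothetical, not taken from the dataset):
# Hypothetical mapping from abbreviations to their full forms.
abbreviations = {r'\basst\b': 'assistant', r'\bdept\b': 'department'}
expanded = jobtitles
for pat, full in abbreviations.items():
    expanded = expanded.str.replace(pat, full, regex=True)
expanded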
Bag of words 💰¶
Text similarity¶
Recall, our idea is to measure the similarity of two job titles by counting the number of shared words between the job titles. How do we actually do that, for all of the job titles we have?
A counts matrix¶
Let's create a "counts" matrix, such that:
- there is 1 row per job title,
- there is 1 column per unique word that is used in job titles, and
- the value in row title and column word is the number of occurrences of word in title.
Such a matrix might look like:
| | senior | lecturer | teaching | professor | assistant | associate |
|---|---|---|---|---|---|---|
| senior lecturer | 1 | 1 | 0 | 0 | 0 | 0 |
| assistant teaching professor | 0 | 0 | 1 | 1 | 1 | 0 |
| associate professor | 0 | 0 | 0 | 1 | 0 | 1 |
| senior assistant to the assistant professor | 1 | 0 | 0 | 1 | 2 | 0 |
Then, we can make statements like: "assistant teaching professor" is more similar to "associate professor" than to "senior lecturer".
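As a quick check of that claim, here's a minimal sketch that rebuilds the small table above as a DataFrame and counts shared words directly:
import pandas as pd
# Mini counts matrix matching the table above.
mini_counts = pd.DataFrame(
    [[1, 1, 0, 0, 0, 0],
     [0, 0, 1, 1, 1, 0],
     [0, 0, 0, 1, 0, 1]],
    index=['senior lecturer', 'assistant teaching professor', 'associate professor'],
    columns=['senior', 'lecturer', 'teaching', 'professor', 'assistant', 'associate'],
)
atp = mini_counts.loc['assistant teaching professor']
# Number of words shared with each of the other two titles: evaluates to (1, 0).
(((atp > 0) & (mini_counts.loc['associate professor'] > 0)).sum(),
 ((atp > 0) & (mini_counts.loc['senior lecturer'] > 0)).sum())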
Creating a counts matrix¶
First, we need to determine all words that are used across all job titles.
jobtitles.str.split()
# The .explode method concatenates all of the lists into a single Series of words.
all_words = jobtitles.str.split().explode()
all_words
Next, to determine the columns of our matrix, we need to find a list of all unique words used in titles. We can do this with np.unique, but value_counts shows us the distribution, which is interesting.
unique_words = all_words.value_counts()
unique_words
Note that in unique_words.index, words are sorted by number of occurrences!
For each of the 330 unique words that are used in job titles, we can count the number of occurrences of the word in each job title.
- 'deputy fire chief' contains the word 'deputy' once, the word 'fire' once, and the word 'chief' once.
- 'assistant managers assistant' contains the word 'assistant' twice and the word 'managers' once.
# Created using a dictionary to avoid a "DataFrame is highly fragmented" warning.
counts_dict = {}
for word in unique_words.index:
re_pat = fr'\b{word}\b'
counts_dict[word] = jobtitles.str.count(re_pat).astype(int).tolist()
counts_df = pd.DataFrame(counts_dict).set_index(jobtitles)
counts_df.head()
counts_df.shape
counts_df has one row for each of the 12831 employees, and one column for each unique word that is used in a job title.
Its third row, for example, tells us that the third job title contains 'police' once and 'assistant' once.
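To verify a claim like that, here's a quick sketch (assuming counts_df as built above):
# Words (columns) with nonzero counts in the third row of counts_df.
third_row = counts_df.iloc[2]
third_row[third_row > 0]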
Bag of words¶
- The bag of words model represents texts (e.g. job titles, sentences, documents) as vectors of word counts.
- The "counts" matrices we have worked with so far were created using the bag of words model.
- The bag of words model defines a vector space in $\mathbb{R}^{\text{number of unique words}}$.
- In the matrix on the previous slide, each row was a vector corresponding to a specific job title.
- It is called "bag of words" because it doesn't consider order.
Cosine similarity¶
Question: What job titles are most similar to 'deputy fire chief'?¶
- Remember, our idea was to count the number of shared words between two job titles.
- We now have access to counts_df, which contains a row vector for each job title.
- How can we use it to count the number of shared words between two job titles?
Counting shared words¶
To start, let's compare the row vectors for 'deputy fire chief' and 'fire battalion chief'.
dfc = counts_df.loc['deputy fire chief'].iloc[0]
dfc
fbc = counts_df.loc['fire battalion chief'].iloc[0]
fbc
We can stack these two vectors horizontally.
pair_counts = (
pd.concat([dfc, fbc], axis=1)
.sort_values(by=['deputy fire chief', 'fire battalion chief'], ascending=False)
.head(10)
.T
)
pair_counts
'deputy fire chief' and 'fire battalion chief' have 2 words in common. One way to arrive at this result mathematically is by taking their dot product:
np.dot(pair_counts.iloc[0], pair_counts.iloc[1])
Recall: The dot product¶
- Recall, if $\vec{a} = \begin{bmatrix} a_1 & a_2 & ... & a_n \end{bmatrix}^T$ and $\vec{b} = \begin{bmatrix} b_1 & b_2 & ... & b_n \end{bmatrix}^T$ are two vectors, then their dot product $\vec{a} \cdot \vec{b}$ is defined as: $$\vec{a} \cdot \vec{b} = a_1 b_1 + a_2 b_2 + ... + a_n b_n$$
- The dot product also has a geometric interpretation. If $|\vec{a}|$ and $|\vec{b}|$ are the $L_2$ norms (lengths) of $\vec{a}$ and $\vec{b}$, and $\theta$ is the angle between $\vec{a}$ and $\vec{b}$, then: $$\vec{a} \cdot \vec{b} = |\vec{a}| |\vec{b}| \cos \theta$$
- $\cos \theta$ is equal to its maximum value (1) when $\theta = 0$, i.e. when $\vec{a}$ and $\vec{b}$ point in the same direction.
Cosine similarity and bag of words¶
To measure the similarity between two word vectors, instead of just counting the number of shared words, we should compute their normalized dot product, also known as their cosine similarity.
$$\cos \theta = \boxed{\frac{\vec{a} \cdot \vec{b}}{|\vec{a}| | \vec{b}|}}$$
- If all elements in $\vec{a}$ and $\vec{b}$ are non-negative, then $\cos \theta$ ranges from 0 to 1.
- 🚨 Key idea: The larger $\cos \theta$ is, the more similar the two vectors are!
- It is important to normalize by the lengths of the vectors, otherwise texts with more words will have artificially high similarities with other texts.
Normalizing¶
$$\cos \theta = \boxed{\frac{\vec{a} \cdot \vec{b}}{|\vec{a}| | \vec{b}|}}$$
- Why can't we just use the dot product – that is, why must we divide by $|\vec{a}| | \vec{b}|$?
- Consider the following example:
| | big | data | science |
|---|---|---|---|
| big big big big data | 4 | 1 | 0 |
| big data science | 1 | 1 | 1 |
| science big data | 1 | 1 | 1 |

| Pair | Dot Product | Cosine Similarity |
|---|---|---|
| big data science and big big big big data | 5 | 0.7001 |
| big data science and science big data | 3 | 1 |
- 'big big big big data' has a large dot product with 'big data science' just because it has the word 'big' four times. But intuitively, 'big data science' and 'science big data' should be as similar as possible, since they're permutations of the same sentence.
- So, make sure to compute the cosine similarity – don't just use the dot product!
Note: Sometimes, you will see the cosine distance being used. It is the complement of cosine similarity:
$$\text{dist}(\vec{a}, \vec{b}) = 1 - \cos \theta$$
If $\text{dist}(\vec{a}, \vec{b})$ is small, the two word vectors are similar.
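As a minimal sketch of both quantities, here are the cosine similarity and cosine distance of the 'deputy fire chief' and 'fire battalion chief' word vectors from earlier, written out by hand over the words [deputy, fire, chief, battalion]:
import numpy as np

def cosine_similarity(a, b):
    # Normalized dot product of two word-count vectors.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

dfc_vec = np.array([1, 1, 1, 0])   # 'deputy fire chief'
fbc_vec = np.array([0, 1, 1, 1])   # 'fire battalion chief'

sim = cosine_similarity(dfc_vec, fbc_vec)   # 2 / 3
sim, 1 - sim                                # cosine similarity, cosine distance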
A recipe for computing similarities¶
Given a set of documents, to find the most similar text to one document $d$ in particular:
- Use the bag of words model to create a counts matrix, in which:
- there is 1 row per document,
- there is 1 column per unique word that is used across documents, and
- the value in row doc and column word is the number of occurrences of word in doc.
- Compute the cosine similarity between $d$'s row vector and all other documents' row vectors.
- The other document with the greatest cosine similarity is the most similar, under the bag of words model.
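Following this recipe, here's a minimal sketch of answering the earlier question – finding the job titles most similar to 'deputy fire chief' – assuming the counts_df matrix built above is still in scope:
import numpy as np

# Row vector for 'deputy fire chief' (many employees share this title; take the first row).
d = counts_df.loc['deputy fire chief'].iloc[0]

# Cosine similarity between d and every row of counts_df.
# (Any row whose canonicalized title is empty has norm 0 and produces NaN/inf; we ignore that here.)
dots = counts_df @ d
norms = np.linalg.norm(counts_df, axis=1) * np.linalg.norm(d)
sims = dots / norms

# Job titles with the highest cosine similarity to 'deputy fire chief'.
sims.sort_values(ascending=False).head(10)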
Example: Global warming 🌎¶
Consider the following three documents.
sentences = pd.Series([
'I really really want global peace',
'I must enjoy global warming',
'We must solve climate change'
])
sentences
Let's represent each document using the bag of words model.
unique_words = sentences.str.split().explode().value_counts()
unique_words
counts_dict = {}
for word in unique_words.index:
re_pat = fr'\b{word}\b'
counts_dict[word] = sentences.str.count(re_pat).astype(int).tolist()
counts_df = pd.DataFrame(counts_dict).set_index(sentences)
counts_df
Let's now find the cosine similarity between each document.
counts_df
def sim_pair(s1, s2):
return np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
# Look at the documentation of the .corr method to see how this works!
counts_df.T.corr(sim_pair)
Issue: Bag of words only encodes the words that each document uses, not their meanings.
- "I really really want global peace" and "We must solve climate change" have similar meanings, but have no shared words, and thus a low cosine similarity.
- "I really really want global peace" and "I must enjoy global warming" have very different meanings, but a relatively high cosine similarity.
Pitfalls of the bag of words model¶
Remember, the key assumption underlying the bag of words model is that two documents are similar if they share many words in common.
- The bag of words model doesn't consider order.
  - The job titles 'deputy fire chief' and 'chief fire deputy' are treated as the same.
- The bag of words model doesn't consider the meaning of words.
  - 'I love data science' and 'I hate data science' share 75% of their words, but have very different meanings.
- The bag of words model treats all words as being equally important.
  - 'deputy' and 'fire' have the same importance, even though 'fire' is probably more important in describing someone's job title.
  - Let's address this point.
Question 🤔 (Answer at q.dsc80.com)
From the Fa23 final:
Consider the following corpus:
| Document number | Content |
|---|---|
| 1 | yesterday rainy today sunny |
| 2 | yesterday sunny today sunny |
| 3 | today rainy yesterday today |
| 4 | yesterday yesterday today today |
Using a bag-of-words representation, which two documents have the largest dot product?
Using a bag-of-words representation, what is the cosine similarity between documents 2 and 3?
TF-IDF¶
The importance of words¶
Issue: The bag of words model doesn't know which words are "important" in a document. Consider the following document:
How do we determine which words are important?
- The most common words ("the", "has") often don't have much meaning!
- The very rare words are also less important!
Goal: Find a way of quantifying the importance of a word in a document by balancing the above two factors, i.e. find the word that best summarizes a document.
Term frequency¶
- The term frequency of a word (term) $t$ in a document $d$, denoted $\text{tf}(t, d)$, is the proportion of words in document $d$ that are equal to $t$.
- Example: What is the term frequency of "billy" in the document "my brother has a friend named billy who has an uncle named billy"?
- Answer: $\frac{2}{13}$.
- Intuition: Words that occur often within a document are important to the document's meaning.
- If $\text{tf}(t, d)$ is large, then word $t$ occurs often in $d$.
- If $\text{tf}(t, d)$ is small, then word $t$ does not occur often in $d$.
- Issue: "has" also has a TF of $\frac{2}{13}$, but it seems less important than "billy".
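Here's a minimal code check of those two term frequencies, using the "billy" document from the example above:
doc = 'my brother has a friend named billy who has an uncle named billy'
words = doc.split()

# Both 'billy' and 'has' appear 2 times out of 13 words, so both have a TF of 2/13.
words.count('billy') / len(words), words.count('has') / len(words)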
Inverse document frequency¶
- The inverse document frequency of a word $t$ in a set of documents $d_1, d_2, ...$ is $$\text{idf}(t) = \log \left(\frac{\text{total # of documents}}{\text{# of documents in which $t$ appears}} \right)$$
- Example: What is the inverse document frequency of "billy" in the following three documents?
- "my brother has a friend named billy who has an uncle named billy"
- "my favorite artist is named jilly boel"
- "why does he talk about someone named billy so often"
- Answer: $\log \left(\frac{3}{2}\right) \approx 0.4055$.
- Intuition: If a word appears in every document (like "the" or "has"), it is probably not a good summary of any one document.
- If $\text{idf}(t)$ is large, then $t$ is rarely found in documents.
- If $\text{idf}(t)$ is small, then $t$ is commonly found in documents.
- Think of $\text{idf}(t)$ as the "rarity factor" of $t$ across documents – the larger $\text{idf}(t)$ is, the more rare $t$ is.
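And here's a minimal code check of that answer, using the three documents listed above:
import numpy as np

docs = [
    'my brother has a friend named billy who has an uncle named billy',
    'my favorite artist is named jilly boel',
    'why does he talk about someone named billy so often',
]

# 'billy' appears in 2 of the 3 documents, so idf('billy') = log(3 / 2) ≈ 0.4055.
num_containing = sum('billy' in doc.split() for doc in docs)
np.log(len(docs) / num_containing)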
Intuition¶
$$\text{tf}(t, d) = \frac{\text{# of occurrences of $t$ in $d$}}{\text{total # of words in $d$}}$$
$$\text{idf}(t) = \log \left(\frac{\text{total # of documents}}{\text{# of documents in which $t$ appears}} \right)$$
Goal: Quantify how well word $t$ summarizes document $d$.
- If $\text{tf}(t, d)$ is small, then $t$ doesn't occur very often in $d$, so $t$ can't be a good summary of $d$.
- If $\text{idf}(t)$ is small, then $t$ occurs often amongst all documents, and so it is not a good summary of any one document.
- If $\text{tf}(t, d)$ and $\text{idf}(t)$ are both large, then $t$ occurs often in $d$ but rarely overall. This makes $t$ a good summary of document $d$.
Term frequency-inverse document frequency¶
The term frequency-inverse document frequency (TF-IDF) of word $t$ in document $d$ is the product:
$$ \begin{align*}\text{tfidf}(t, d) &= \text{tf}(t, d) \cdot \text{idf}(t) \\ &= \frac{\text{# of occurrences of $t$ in $d$}}{\text{total # of words in $d$}} \cdot \log \left(\frac{\text{total # of documents}}{\text{# of documents in which $t$ appears}} \right) \end{align*} $$
- If $\text{tfidf}(t, d)$ is large, then $t$ is a good summary of $d$, because $t$ occurs often in $d$ but rarely across all documents.
- TF-IDF is a heuristic – it has no probabilistic justification.
- To know if $\text{tfidf}(t, d)$ is large for one particular word $t$, we need to compare it to $\text{tfidf}(t_i, d)$, for several different words $t_i$.
Computing TF-IDF¶
Question: What is the TF-IDF of "global" in the second sentence?
sentences
Answer:
tf = sentences.iloc[1].count('global') / len(sentences.iloc[1].split())
tf
idf = np.log(len(sentences) / sentences.str.contains('global').sum())
idf
tf * idf
Question: Is this big or small? Is "global" the best summary of the second sentence?
TF-IDF of all words in all documents¶
On its own, the TF-IDF of a word in a document doesn't really tell us anything; we must compare it to TF-IDFs of other words in that same document.
sentences
unique_words = np.unique(sentences.str.split().explode())
unique_words
tfidf_dict = {}
for word in unique_words:
re_pat = fr'\b{word}\b'
tf = sentences.str.count(re_pat) / sentences.str.split().str.len()
idf = np.log(len(sentences) / sentences.str.contains(re_pat).sum())
tfidf_dict[word] = tf * idf
tfidf = pd.DataFrame(tfidf_dict).set_index(sentences)
tfidf
Interpreting TF-IDFs¶
display_df(tfidf, cols=12)
The above DataFrame tells us that:
- the TF-IDF of 'really' in the first sentence is $\approx$ 0.37, and
- the TF-IDF of 'climate' in the second sentence is 0.
Note that there are two ways that $\text{tfidf}(t, d) = \text{tf}(t, d) \cdot \text{idf}(t)$ can be 0:
- If $t$ appears in every document, because then $\text{idf}(t) = \log (\frac{\text{# documents}}{\text{# documents}}) = \log(1) = 0$.
- If $t$ does not appear in document $d$, because then $\text{tf}(t, d) = \frac{0}{\text{len}(d)} = 0$.
The word that best summarizes a document is the word with the highest TF-IDF for that document:
display_df(tfidf, cols=12)
tfidf.idxmax(axis=1)
Look closely at the rows of tfidf – in documents 2 and 3, the max TF-IDF is not unique!
Question 🤔 (Answer at q.dsc80.com)
What questions do you have?
Example: State of the Union addresses 🎤¶
State of the Union addresses¶
The 2024 State of the Union address was on March 7th, 2024; last year's was on February 7th, 2023.
from IPython.display import YouTubeVideo
YouTubeVideo('gzcBTUvVp7M')
The data¶
from pathlib import Path
sotu_txt = Path('data') / 'stateoftheunion1790-2023.txt'
sotu = sotu_txt.read_text()
len(sotu)
The entire corpus (another word for "set of documents") is over 10 million characters long... let's not display it in our notebook.
print(sotu[:1600])
Each speech is separated by '***'.
speeches = sotu.split('\n***\n')[1:]
len(speeches)
Note that each "speech" currently contains other information, like the name of the president and the date of the address.
print(speeches[-1][:1000])
Let's extract just the speech text.
import re
def extract_struct(speech):
    # The first three lines contain metadata (speech title, president, date);
    # everything after that is the contents of the speech itself.
    L = speech.strip().split('\n', maxsplit=3)
    # Keep only letters, apostrophes, and spaces in the contents, then lowercase.
    L[3] = re.sub(r"[^A-Za-z' ]", ' ', L[3]).lower()
    return dict(zip(['speech', 'president', 'date', 'contents'], L))
speeches_df = pd.DataFrame(list(map(extract_struct, speeches)))
speeches_df
Finding the most important words in each speech¶
Here, a "document" is a speech. We have 233 documents.
speeches_df
A rough sketch of what we'll compute:
for each word t:
for each speech d:
compute tfidf(t, d)
unique_words = speeches_df['contents'].str.split().explode().value_counts()
# Take the top 500 most common words for speed
unique_words = unique_words.iloc[:500].index
unique_words
💡 Pro-Tip: Using tqdm¶
This code takes a while to run, so we'll use the tqdm package to track its progress. (Install it with mamba install tqdm if needed.)
from tqdm.notebook import tqdm
tfidf_dict = {}
tf_denom = speeches_df['contents'].str.split().str.len()
# Wrap the sequence with `tqdm()` to display a progress bar
for word in tqdm(unique_words):
re_pat = fr' {word} ' # Imperfect pattern for speed.
tf = speeches_df['contents'].str.count(re_pat) / tf_denom
idf = np.log(len(speeches_df) / speeches_df['contents'].str.contains(re_pat).sum())
tfidf_dict[word] = tf * idf
tfidf = pd.DataFrame(tfidf_dict)
tfidf.head()
Note that the TF-IDFs of many common words are all 0!
Summarizing speeches¶
By using idxmax, we can find the word with the highest TF-IDF in each speech.
summaries = tfidf.idxmax(axis=1)
summaries
What if we want to see the 5 words with the highest TF-IDFs, for each speech?
def five_largest(row):
    # Returns the 5 words with the highest TF-IDF in the row, as a comma-separated string.
    return ', '.join(row.index[row.argsort()][-5:])
keywords = tfidf.apply(five_largest, axis=1)
keywords_df = pd.concat([
speeches_df['president'],
speeches_df['date'],
keywords
], axis=1)
keywords_df
Uncomment the cell below to see every single row of keywords_df.
# display_df(keywords_df, rows=233)
Aside: What if we remove the $\log$ from $\text{idf}(t)$?¶
Let's try it and see what happens.
tfidf_nl_dict = {}
tf_denom = speeches_df['contents'].str.split().str.len()
for word in tqdm(unique_words):
re_pat = fr' {word} ' # Imperfect pattern for speed.
tf = speeches_df['contents'].str.count(re_pat) / tf_denom
idf_nl = len(speeches_df) / speeches_df['contents'].str.contains(re_pat).sum()
tfidf_nl_dict[word] = tf * idf_nl
tfidf_nl = pd.DataFrame(tfidf_nl_dict)
tfidf_nl.head()
keywords_nl = tfidf_nl.apply(five_largest, axis=1)
keywords_nl_df = pd.concat([
speeches_df['president'],
speeches_df['date'],
keywords_nl
], axis=1)
keywords_nl_df
The role of $\log$ in $\text{idf}(t)$¶
$$ \begin{align*}\text{tfidf}(t, d) &= \text{tf}(t, d) \cdot \text{idf}(t) \\ &= \frac{\text{# of occurrences of $t$ in $d$}}{\text{total # of words in $d$}} \cdot \log \left(\frac{\text{total # of documents}}{\text{# of documents in which $t$ appears}} \right) \end{align*} $$
- Remember, for inputs $x \geq 1$, $\log(x)$ is (much) smaller than $x$.
- In $\text{idf}(t)$, the $\log$ "dampens" the impact of the ratio $\frac{\text{# documents}}{\text{# documents with $t$}}$.
- If a word is very common, the ratio will be close to 1. The log of the ratio will be close to 0.
(1000 / 999)
np.log(1000 / 999)
- If a word is very common (e.g. 'the'), removing the log multiplies the statistic by a large factor.
- If a word is very rare, the ratio will be very large. However, for instance, a word being seen in 2 out of 50 documents is not very different than being seen in 2 out of 500 documents (it is very rare in both cases), and so $\text{idf}(t)$ should be similar in both cases.
(50 / 2)
(500 / 2)
np.log(50 / 2)
np.log(500 / 2)
Question 🤔 (Answer at q.dsc80.com)
From the Fa23 final:
Consider the following corpus:
| Document number | Content |
|---|---|
| 1 | yesterday rainy today sunny |
| 2 | yesterday sunny today sunny |
| 3 | today rainy yesterday today |
| 4 | yesterday yesterday today today |
Which words have a TF-IDF score of 0 for all four documents?
Summary, next time¶
Summary¶
- One way to turn documents, like 'deputy fire chief', into feature vectors is to count the number of occurrences of each word in the document, ignoring order. This is done using the bag of words model.
- To measure the similarity of two documents under the bag of words model, compute the cosine similarity of their two word vectors.
- Term frequency-inverse document frequency (TF-IDF) is a statistic that tries to quantify how important a word (term) is to a document. It balances:
- how often a word appears in a particular document, $\text{tf}(t, d)$, with
- how often a word appears across documents, $\text{idf}(t)$.
- For a given document, the word with the highest TF-IDF is thought to "best summarize" that document.
Next time¶
Modeling and feature engineering.