Course Description
A typical organization loses an estimated 5% of its yearly revenue to fraud. In this course, learn to fight fraud by using data. Apply supervised learning algorithms to detect fraudulent behavior based upon past fraud, and use unsupervised learning methods to discover new types of fraud activities.
Fraudulent transactions are rare compared to the norm. As such, learn to properly classify imbalanced datasets.
The course provides technical and theoretical insights and demonstrates how to implement fraud detection models. Finally, get tips and advice from real-life experience to help prevent common mistakes in fraud analytics.
Imports
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
import numpy as np
from pprint import pprint as pp
import csv
from pathlib import Path
import seaborn as sns
from itertools import product
import string
import nltk
from nltk.corpus import stopwords
from nltk.stem.wordnet import WordNetLemmatizer
from imblearn.over_sampling import SMOTE
from imblearn.over_sampling import BorderlineSMOTE
from imblearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import r2_score, classification_report, confusion_matrix, accuracy_score, roc_auc_score, roc_curve, precision_recall_curve, average_precision_score
from sklearn.metrics import homogeneity_score, silhouette_score
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import MiniBatchKMeans, DBSCAN
import gensim
from gensim import corpora
Pandas Configuration Options
pd.set_option('display.max_columns', 700)
pd.set_option('display.max_rows', 400)
pd.set_option('display.min_rows', 10)
pd.set_option('display.expand_frame_repr', True)
Data Files Location
Data File Objects
data = Path.cwd() / 'data' / 'fraud_detection'
ch1 = data / 'chapter_1'
cc1_file = ch1 / 'creditcard_sampledata.csv'
cc3_file = ch1 / 'creditcard_sampledata_3.csv'
ch2 = data / 'chapter_2'
cc2_file = ch2 / 'creditcard_sampledata_2.csv'
ch3 = data / 'chapter_3'
banksim_file = ch3 / 'banksim.csv'
banksim_adj_file = ch3 / 'banksim_adj.csv'
db_full_file = ch3 / 'db_full.pickle'
labels_file = ch3 / 'labels.pickle'
labels_full_file = ch3 / 'labels_full.pickle'
x_scaled_file = ch3 / 'x_scaled.pickle'
x_scaled_full_file = ch3 / 'x_scaled_full.pickle'
ch4 = data / 'chapter_4'
enron_emails_clean_file = ch4 / 'enron_emails_clean.csv'
cleantext_file = ch4 / 'cleantext.pickle'
corpus_file = ch4 / 'corpus.pickle'
dict_file = ch4 / 'dict.pickle'
ldamodel_file = ch4 / 'ldamodel.pickle'
Learn about the typical challenges associated with fraud detection. Learn how to resample data in a smart way, and tackle problems with imbalanced data.
In this chapter, you will work on creditcard_sampledata.csv, a dataset containing credit card transaction data. Fraud occurrences are fortunately an extreme minority in these transactions.
However, Machine Learning algorithms usually work best when the different classes contained in the dataset are more or less equally present. If there are few cases of fraud, then there's little data to learn how to identify them. This is known as class imbalance, and it's one of the main challenges of fraud detection.
Let's explore this dataset, and observe this class imbalance problem.
Instructions
- Import pandas as pd, read the credit card data in and assign it to df. This has been done for you.
- Use .info() to print information about df.
- Use .value_counts() to get the count of fraudulent and non-fraudulent transactions in the 'Class' column. Assign the result to occ.
df = pd.read_csv(cc3_file)
df.info()
df.head()
# Count the occurrences of fraud and no fraud and print them
occ = df['Class'].value_counts()
occ
# Print the ratio of fraud cases
ratio_cases = occ/len(df.index)
print(f'Ratio of fraudulent cases: {ratio_cases[1]}\nRatio of non-fraudulent cases: {ratio_cases[0]}')
The ratio of fraudulent transactions is very low. This is a case of class imbalance problem, and you're going to learn how to deal with this in the next exercises.
From the previous exercise we know that the ratio of fraud to non-fraud observations is very low. You can do something about that, for example by re-sampling our data, which is explained in the next video.
In this exercise, you'll look at the data and visualize the fraud to non-fraud ratio. It is always a good starting point in your fraud analysis, to look at your data first, before you make any changes to it.
Moreover, when talking to your colleagues, a picture often makes it very clear that we're dealing with heavily imbalanced data. Let's create a plot to visualize the ratio of fraud to non-fraud data points in the dataset df.
The function prep_data() is already loaded in your workspace, as well as matplotlib.pyplot as plt.
Instructions
- Define the plot_data(X, y) function, which will nicely plot the given feature set X with labels y in a scatter plot. This has been done for you.
- Use the function prep_data() on your dataset df to create the feature set X and labels y.
- Run the function plot_data() on your newly obtained X and y to visualize the results.
def prep_data(df: pd.DataFrame) -> (np.ndarray, np.ndarray):
"""
Convert the DataFrame into two variables
X: data columns (V1 - V28)
y: label column
"""
X = df.iloc[:, 2:30].values
y = df.Class.values
return X, y
# Define a function to create a scatter plot of our data and labels
def plot_data(X: np.ndarray, y: np.ndarray):
plt.scatter(X[y == 0, 0], X[y == 0, 1], label="Class #0", alpha=0.5, linewidth=0.15)
plt.scatter(X[y == 1, 0], X[y == 1, 1], label="Class #1", alpha=0.5, linewidth=0.15, c='r')
plt.legend()
return plt.show()
# Create X and y from the prep_data function
X, y = prep_data(df)
# Plot our data by running our plot data function on X and y
plot_data(X, y)
By visualizing the data, you can immediately see how our fraud cases are scattered over our data, and how few cases we have. A picture often makes the imbalance problem clear. In the next exercises we'll visually explore how to improve our fraud to non-fraud balance.
plt.scatter(df.V2[df.Class == 0], df.V3[df.Class == 0], label="Class #0", alpha=0.5, linewidth=0.15)
plt.scatter(df.V2[df.Class == 1], df.V3[df.Class == 1], label="Class #1", alpha=0.5, linewidth=0.15, c='r')
plt.legend()
plt.show()
from imblearn.over_sampling import RandomOverSampler
method = RandomOverSampler()
# Note: fit_sample() was renamed to fit_resample() in newer versions of imblearn
X_resampled, y_resampled = method.fit_resample(X, y)
# compare_plot() is defined further below and expects (X, y, X_resampled, y_resampled, method)
compare_plot(X, y, X_resampled, y_resampled, method='RandomOverSampler')
# Define resampling method and split into train and test
# SMOTE(kind='borderline1') in older imblearn has been replaced by BorderlineSMOTE
method = BorderlineSMOTE(kind='borderline-1')
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=0)
# Apply resampling to the training data only
X_resampled, y_resampled = method.fit_resample(X_train, y_train)
# Continue fitting the model and obtain predictions
model = LogisticRegression()
model.fit(X_resampled, y_resampled)
# Get model performance metrics
predicted = model.predict(X_test)
print(classification_report(y_test, predicted))
Which of these methods takes a random subsample of your majority class to account for class "imbalancedness"?
Possible Answers
By using ROS and SMOTE you add more examples to the minority class. RUS adjusts the balance of your data by reducing the majority class.
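Since RUS is only described here and not shown in code, here is a minimal sketch of random undersampling (an addition, not course code), assuming the same feature set X and labels y used above:
from imblearn.under_sampling import RandomUnderSampler
# RUS randomly drops majority-class observations until both classes are equally represented
rus = RandomUnderSampler(random_state=0)
X_rus, y_rus = rus.fit_resample(X, y)
print(pd.Series(y_rus).value_counts())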
In this exercise, you're going to re-balance our data using the Synthetic Minority Over-sampling Technique (SMOTE). Unlike ROS, SMOTE does not create exact copies of observations, but creates new, synthetic, samples that are quite similar to the existing observations in the minority class. SMOTE is therefore slightly more sophisticated than just copying observations, so let's apply SMOTE to our credit card data. The dataset df is available and the packages you need for SMOTE are imported. In the following exercise, you'll visualize the result and compare it to the original data, such that you can see the effect of applying SMOTE very clearly.
Instructions
- Use the prep_data function on df to create features X and labels y.
- Define the resampling method as SMOTE and assign it to method.
- Use .fit_resample() (originally .fit_sample()) on the original X and y to obtain newly resampled data.
- Plot the resampled data using the plot_data() function.
# Run the prep_data function
X, y = prep_data(df)
print(f'X shape: {X.shape}\ny shape: {y.shape}')
# Define the resampling method
method = SMOTE()
# Create the resampled feature set
X_resampled, y_resampled = method.fit_resample(X, y)
# Plot the resampled data
plot_data(X_resampled, y_resampled)
The minority class is now much more prominently visible in our data. To see the results of SMOTE even better, we'll compare it to the original data in the next exercise.
In the last exercise, you saw that using SMOTE suddenly gives us more observations of the minority class. Let's compare those results to our original data, to get a good feeling for what has actually happened. Let's have a look at the value counts again of our old and new data, and let's plot the two scatter plots of the data side by side. You'll use the function compare_plot() for that, which takes the following arguments: X, y, X_resampled, y_resampled, method=''. The function plots your original data in a scatter plot, along with the resampled data side by side.
Instructions
- Print the value counts of our original labels, y. Be mindful that y is currently a NumPy array, so in order to use value counts, we'll assign y back as a pandas Series object.
- Print the value counts of y_resampled. This shows you how the balance between the two classes has changed with SMOTE.
- Run the compare_plot() function on our original data as well as our resampled data to see the scatterplots side by side.
pd.Series(y).value_counts()
pd.Series(y_resampled).value_counts()
def compare_plot(X: np.ndarray, y: np.ndarray, X_resampled: np.ndarray, y_resampled: np.ndarray, method: str):
plt.subplot(1, 2, 1)
plt.scatter(X[y == 0, 0], X[y == 0, 1], label="Class #0", alpha=0.5, linewidth=0.15)
plt.scatter(X[y == 1, 0], X[y == 1, 1], label="Class #1", alpha=0.5, linewidth=0.15, c='r')
plt.title('Original Set')
plt.subplot(1, 2, 2)
plt.scatter(X_resampled[y_resampled == 0, 0], X_resampled[y_resampled == 0, 1], label="Class #0", alpha=0.5, linewidth=0.15)
plt.scatter(X_resampled[y_resampled == 1, 0], X_resampled[y_resampled == 1, 1], label="Class #1", alpha=0.5, linewidth=0.15, c='r')
plt.title(method)
plt.legend()
plt.show()
compare_plot(X, y, X_resampled, y_resampled, method='SMOTE')
It should by now be clear that SMOTE has balanced our data completely, and that the minority class is now equal in size to the majority class. Visualizing the data shows the effect on the data very clearly. The next exercise will demonstrate multiple ways to implement SMOTE and that each method will have a slightly different effect.
# Step 1: split the features and labels into train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Step 2: Define which model to use
model = LinearRegression()
# Step 3: Fit the model to the training data
model.fit(X_train, y_train)
# Step 4: Obtain model predictions from the test data
y_predicted = model.predict(X_test)
# Step 5: Compare y_test to predictions and obtain performance metrics (r^2 score)
r2_score(y_test, y_predicted)
In this exercise you're going to try finding fraud cases in our credit card dataset the "old way". First you'll define threshold values using common statistics, to split fraud and non-fraud. Then, use those thresholds on your features to detect fraud. This is common practice within fraud analytics teams.
Statistical thresholds are often determined by looking at the mean values of observations. Let's start this exercise by checking whether feature means differ between fraud and non-fraud cases. Then, you'll use that information to create common sense thresholds. Finally, you'll check how well this performs in fraud detection.
pandas has already been imported as pd.
Instructions
- Use groupby() to group df on Class and obtain the mean of the features.
- Create the condition V1 smaller than -3, and V3 smaller than -5 as a condition to flag fraud cases.
- Use the crosstab function from pandas to compare our flagged fraud cases to actual fraud cases.
df.drop(['Unnamed: 0'], axis=1, inplace=True)
df.groupby('Class').mean()
df['flag_as_fraud'] = np.where(np.logical_and(df.V1 < -3, df.V3 < -5), 1, 0)
pd.crosstab(df.Class, df.flag_as_fraud, rownames=['Actual Fraud'], colnames=['Flagged Fraud'])
With this rule, 22 out of 50 fraud cases are detected, 28 are not detected, and 16 false positives are identified.
In this exercise you'll see what happens when you use a simple machine learning model on our credit card data instead.
Do you think you can beat those results? Remember, you've predicted 22 out of 50 fraud cases, and had 16 false positives.
So with that in mind, let's implement a Logistic Regression model. If you have taken the class on supervised learning in Python, you should be familiar with this model. If not, you might want to refresh that at this point. But don't worry, you'll be guided through the structure of the machine learning model.
The X and y variables are available in your workspace.
Instructions
- Split X and y into training and test data, keeping 30% of the data for testing.
- Fit your model to the training data, and obtain predictions by running model.predict on X_test.
- Obtain a classification report comparing y_test with predicted, and use the given confusion matrix to check your results.
# Create the training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
# Fit a logistic regression model to our data
model = LogisticRegression(solver='liblinear')
model.fit(X_train, y_train)
# Obtain model predictions
predicted = model.predict(X_test)
# Print the classification report and confusion matrix
print('Classification report:\n', classification_report(y_test, predicted))
conf_mat = confusion_matrix(y_true=y_test, y_pred=predicted)
print('Confusion matrix:\n', conf_mat)
Do you think these results are better than the rules based model? We are getting far fewer false positives, so that's an improvement. Also, we're catching a higher percentage of fraud cases, so that is also better than before. Do you understand why we have fewer observations to look at in the confusion matrix? Remember we are using only our test data to calculate the model results on. We're comparing the crosstab on the full dataset from the last exercise, with a confusion matrix of only 30% of the total dataset, so that's where that difference comes from. In the next chapter, we'll dive deeper into understanding these model performance metrics. Let's now explore whether we can improve the prediction results even further with resampling methods.
In this exercise, you're going to take the Logistic Regression model from the previous exercise, and combine that with a SMOTE resampling method. We'll show you how to do that efficiently by using a pipeline that combines the resampling method with the model in one go. First, you need to define the pipeline that you're going to use.
Instructions
- Import the Pipeline module from imblearn; this has been done for you.
- Define the SMOTE method with borderline2 and assign it to resampling, and assign LogisticRegression() to the model.
- Combine the two in the Pipeline() function. You need to state that you want to combine resampling with the model in the respective place in the argument. I show you how to do this.
# Define which resampling method and which ML model to use in the pipeline
# resampling = SMOTE(kind='borderline2') # has been changed to BorderlineSMOTE
resampling = BorderlineSMOTE(kind='borderline-2')
model = LogisticRegression(solver='liblinear')
pipeline = Pipeline([('SMOTE', resampling), ('Logistic Regression', model)])
Now that you have our pipeline defined, aka combining a logistic regression with a SMOTE method, let's run it on the data. You can treat the pipeline as if it were a single machine learning model. Our data X and y are already defined, and the pipeline is defined in the previous exercise. Are you curious to find out what the model results are? Let's give it a try!
Instructions
- Split the data X and y into a training and a test set, and set the random_state to zero.
- Fit your pipeline onto your training data and obtain predictions by running the pipeline.predict() function on our X_test dataset.
# Split your data X and y, into a training and a test set and fit the pipeline onto the training data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
pipeline.fit(X_train, y_train)
predicted = pipeline.predict(X_test)
# Obtain the results from the classification report and confusion matrix
print('Classification report:\n', classification_report(y_test, predicted))
conf_mat = confusion_matrix(y_true=y_test, y_pred=predicted)
print('Confusion matrix:\n', conf_mat)
As you can see, the SMOTE slightly improves our results. We now manage to find all cases of fraud, but we have a slightly higher number of false positives, albeit only 7 cases. Remember, resampling doesn't necessarily lead to better results. When the fraud cases are very spread and scattered over the data, using SMOTE can introduce a bit of bias. Nearest neighbors aren't necessarily also fraud cases, so the synthetic samples might 'confuse' the model slightly. In the next chapters, we'll learn how to also adjust our machine learning models to better detect the minority fraud cases.
Learn how to flag fraudulent transactions with supervised learning. Use classifiers, adjust and compare them to find the most efficient fraud detection model.
Implementation:
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier()
model.fit(X_train, y_train)
predicted = model.predict(X_test)
print(f'Accuracy Score:\n{accuracy_score(y_test, predicted)}')
In this exercise, you'll again use credit card transaction data. The features and labels are similar to the data in the previous chapter, and the data is heavily imbalanced. We've given you features X and labels y to work with already, which are both numpy arrays.
First you need to explore how prevalent fraud is in the dataset, to understand what the "natural accuracy" is if we were to predict everything as non-fraud. It is important to understand which level of "accuracy" you need to "beat" in order to get a better prediction than by doing nothing. In the following exercises, you'll create your first random forest classifier for fraud detection. That will serve as the "baseline" model that you're going to try to improve in the upcoming exercises.
Instructions
- Count the total number of observations by taking the length of your labels y.
- Count the non-fraud cases in our data by using list comprehension on y; remember y is a NumPy array so .value_counts() cannot be used in this case.
df2 = pd.read_csv(cc2_file)
df2.head()
X, y = prep_data(df2)
print(f'X shape: {X.shape}\ny shape: {y.shape}')
X[0, :]
df2.Class.value_counts()
# Count the total number of observations from the length of y
total_obs = len(y)
total_obs
# Count the total number of non-fraudulent observations
non_fraud = [i for i in y if i == 0]
count_non_fraud = non_fraud.count(0)
count_non_fraud
percentage = count_non_fraud/total_obs * 100
print(f'{percentage:0.2f}%')
This tells us that by doing nothing, we would be correct in 95.9% of the cases. So now you understand that if we get an accuracy of less than this number, our model does not actually add any value in predicting how many cases are correct. Let's see how a random forest does in predicting fraud in our data.
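To make that baseline explicit in code, here is a minimal sketch (an addition, not part of the original exercise) that computes the "natural accuracy" directly from y and cross-checks it with scikit-learn's DummyClassifier; the variable names X_tr, X_te, y_tr, y_te exist only in this sketch:
# Share of non-fraud labels = accuracy of always predicting "non-fraud"
print(f'Baseline accuracy: {np.mean(y == 0):0.4f}')
from sklearn.dummy import DummyClassifier
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
dummy = DummyClassifier(strategy='most_frequent')  # always predicts the majority class
dummy.fit(X_tr, y_tr)
print(f'DummyClassifier accuracy: {dummy.score(X_te, y_te):0.4f}')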
Let's now create a first random forest classifier for fraud detection. Hopefully you can do better than the baseline accuracy you've just calculated, which was roughly 96%. This model will serve as the "baseline" model that you're going to try to improve in the upcoming exercises. Let's start first with splitting the data into a test and training set, and defining the Random Forest model. The data available are features X and labels y.
Instructions
- Import the random forest classifier from sklearn.
- Split your features X and labels y into a training and a test set. Set aside a test set of 30%.
- Assign the random forest classifier to model and keep random_state at 5. We need to set a random state here in order to be able to compare results across different models.
# Split your data into training and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
# Define the model as the random forest
model = RandomForestClassifier(random_state=5, n_estimators=20)
Let's see how our Random Forest model performs without doing anything special to it. The model from the previous exercise is available, and you've already split your data in X_train, y_train, X_test, y_test.
Instructions 1/3
- Fit the model to our training data and obtain predictions by getting the model predictions on X_test.
# Fit the model to our training set
model.fit(X_train, y_train)
# Obtain predictions from the test data
predicted = model.predict(X_test)
Instructions 2/3
- Obtain and print the accuracy score by comparing the actual labels y_test with our predicted labels predicted.
print(f'Accuracy Score:\n{accuracy_score(y_test, predicted):0.3f}')
Instructions 3/3
What is a benefit of using Random Forests versus Decision Trees?
Possible Answers
Random Forest prevents overfitting most of the time by creating random subsets of the features and building smaller trees using these subsets. Afterwards, it combines the subtrees of subsamples of features, so it does not tend to overfit to your entire feature set the way "deep" Decision Trees do.
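To illustrate that point, here is a minimal sketch (an addition, not a course exercise) comparing a single fully grown decision tree with a random forest on the same training data via cross-validated ROC AUC; it assumes the X_train and y_train split defined above:
from sklearn.model_selection import cross_val_score
# A single unconstrained tree tends to overfit; the forest averages many de-correlated trees
tree = DecisionTreeClassifier(random_state=5)
forest = RandomForestClassifier(n_estimators=20, random_state=5)
print(f'Decision tree CV AUC: {cross_val_score(tree, X_train, y_train, cv=5, scoring="roc_auc").mean():0.3f}')
print(f'Random forest CV AUC: {cross_val_score(forest, X_train, y_train, cv=5, scoring="roc_auc").mean():0.3f}')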
False Positives (FP) / False Negatives (FN) are the cases predicted incorrectly; True Positives (TP) / True Negatives (TN) are the cases predicted correctly (e.g. fraud / non-fraud).
# import the methods
from sklearn.metrics import precision_recall_curve, average_precision_score, roc_auc_score
# Calculate average precision and the PR curve
average_precision = average_precision_score(y_test, predicted)
# Obtain precision and recall (precision_recall_curve also returns the thresholds)
precision, recall, thresholds = precision_recall_curve(y_test, predicted)
# Obtain model probabilities
probs = model.predict_proba(X_test)
# Print ROC_AUC score using probabilities
print(roc_auc_score(y_test, probs[:, 1]))
from sklearn.metrics import classification_report, confusion_matrix
# Obtain predictions
predicted = model.predict(X_test)
# Print classification report using predictions
print(classification_report(y_test, predicted))
# Print confusion matrix using predictions
print(confusion_matrix(y_test, predicted))
In the previous exercises you obtained an accuracy score for your random forest model. This time, we know accuracy can be misleading in the case of fraud detection. With highly imbalanced fraud data, the AUROC curve is a more reliable performance metric, used to compare different classifiers. Moreover, the classification report tells you about the precision and recall of your model, whilst the confusion matrix actually shows how many fraud cases you can predict correctly. So let's get these performance metrics.
You'll continue working on the same random forest model from the previous exercise. Your model, defined as model = RandomForestClassifier(random_state=5), has been fitted to your training data already, and X_train, y_train, X_test, y_test are available.
Instructions
- Import the classification report, confusion matrix and roc_auc_score from sklearn.metrics.
- Get the predicted probabilities from the model by running the .predict_proba() function.
- Obtain the classification report and confusion matrix by comparing y_test with predicted.
# Obtain the predictions from our random forest model
predicted = model.predict(X_test)
# Predict probabilities
probs = model.predict_proba(X_test)
# Print the ROC curve, classification report and confusion matrix
print('ROC Score:')
print(roc_auc_score(y_test, probs[:,1]))
print('\nClassification Report:')
print(classification_report(y_test, predicted))
print('\nConfusion Matrix:')
print(confusion_matrix(y_test, predicted))
You have now obtained more meaningful performance metrics that tell us how well the model performs, given the highly imbalanced data that you're working with. The model predicts 76 cases of fraud, out of which 73 are actual fraud. You have only 3 false positives. This is really good, and as a result you have a very high precision score. You do, however, miss 18 cases of actual fraud. Recall is therefore not as good as precision.
You can also plot a Precision-Recall curve, to investigate the trade-off between the two in your model. In this curve Precision and Recall are inversely related; as Precision increases, Recall falls and vice-versa. A balance between these two needs to be achieved in your model, otherwise you might end up with many false positives, or not enough actual fraud cases caught. To achieve this and to compare performance, the precision-recall curves come in handy.
Your Random Forest Classifier is available as model, and the predictions as predicted. You can simply obtain the average precision score and the PR curve from the sklearn package. The function plot_pr_curve() plots the results for you. Let's give it a try.
Instructions 1/3
- Calculate the average precision by running the function on the actual labels y_test and your predicted labels predicted.
# Calculate average precision and the PR curve
average_precision = average_precision_score(y_test, predicted)
average_precision
Instructions 2/3
- Run the precision_recall_curve() function on the same arguments y_test and predicted and plot the curve (this last thing has been done for you).
# Obtain precision and recall
precision, recall, _ = precision_recall_curve(y_test, predicted)
print(f'Precision: {precision}\nRecall: {recall}')
def plot_pr_curve(recall, precision, average_precision):
"""
https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
"""
from inspect import signature
plt.figure()
step_kwargs = ({'step': 'post'}
if 'step' in signature(plt.fill_between).parameters
else {})
plt.step(recall, precision, color='b', alpha=0.2, where='post')
plt.fill_between(recall, precision, alpha=0.2, color='b', **step_kwargs)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.0])
plt.xlim([0.0, 1.0])
plt.title(f'2-class Precision-Recall curve: AP={average_precision:0.2f}')
return plt.show()
# Plot the recall precision tradeoff
plot_pr_curve(recall, precision, average_precision)
Instructions 3/3
What's the benefit of the performance metric ROC curve (AUROC) versus Precision and Recall?
Possible Answers
The ROC curve plots the true positive rate vs. the false positive rate for a classifier as its discrimination threshold is varied. A random classifier corresponds to the diagonal of the unit square and therefore has an AUC of 0.5; classifiers should, at a minimum, perform better than this, and the larger the area under the ROC curve, the better the expected performance.
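To see that baseline visually, here is a minimal sketch (an addition, not part of the course) that plots the ROC curve of the fitted random forest against the diagonal of a random classifier, reusing roc_curve (already imported above) and the probabilities probs from the previous exercise:
# Compute the ROC curve from the fraud-class probabilities
fpr, tpr, thresholds = roc_curve(y_test, probs[:, 1])
plt.plot(fpr, tpr, label=f'Random Forest (AUC = {roc_auc_score(y_test, probs[:, 1]):0.3f})')
plt.plot([0, 1], [0, 1], linestyle='--', label='Random classifier (AUC = 0.5)')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()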
class_weight:
- balanced mode:
model = RandomForestClassifier(class_weight='balanced')
model = LogisticRegression(class_weight='balanced')
model = SVC(kernel='linear', class_weight='balanced', probability=True)
- balanced_subsample mode:
model = RandomForestClassifier(class_weight='balanced_subsample')
This is the same as the balanced option, except the weights are calculated again at each iteration of growing a tree in the random forest.
- Manual input, e.g. adjust weights to any ratio: class_weight={0:1, 1:4}
A sketch of what the balanced mode actually computes follows below.
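If you are curious what the balanced mode computes, here is a minimal sketch (an addition; it assumes the labels y from the earlier exercises) using sklearn's compute_class_weight helper:
from sklearn.utils.class_weight import compute_class_weight
# 'balanced' weights each class by n_samples / (n_classes * count of that class)
classes = np.unique(y)
weights = compute_class_weight(class_weight='balanced', classes=classes, y=y)
print(dict(zip(classes, weights)))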
model = RandomForestClassifier(n_estimators=10,
criterion='gini',
max_depth=None,
min_samples_split=2,
min_samples_leaf=1,
max_features='auto',
n_jobs=-1, class_weight=None)
- n_estimators: one of the most important settings is the number of trees in the forest
- max_features: the number of features considered for splitting at each leaf node
- criterion: change the way the data is split at each node (default is the gini coefficient)
from sklearn.model_selection import GridSearchCV
# Create the parameter grid
param_grid = {'max_depth': [80, 90, 100, 110],
'max_features': [2, 3],
'min_samples_leaf': [3, 4, 5],
'min_samples_split': [8, 10, 12],
'n_estimators': [100, 200, 300, 1000]}
# Define which model to use
model = RandomForestClassifier()
# Instantiate the grid search model
grid_search_model = GridSearchCV(estimator = model,
param_grid = param_grid,
cv = 5,
n_jobs = -1,
scoring='f1')
- GridSearchCV evaluates all combinations of the parameters defined in param_grid
- Define a scoring metric to evaluate the models on, e.g. precision, recall or f1
# Fit the grid search to the data
grid_search_model.fit(X_train, y_train)
# Get the optimal parameters
grid_search_model.best_params_
{'bootstrap': True,
'max_depth': 80,
'max_features': 3,
'min_samples_leaf': 5,
'min_samples_split': 12,
'n_estimators': 100}
- Once GridSearchCV and the model are fit to the data, obtain the parameters belonging to the optimal model by using the best_params_ attribute
- GridSearchCV is computationally heavy
# Get the best_estimator results
grid_search_model.best_estimator_
grid_search_model.best_score_
- best_score_: mean cross-validated score of the best_estimator_, which depends on the scoring option
A simple way to adjust the random forest model to deal with highly imbalanced fraud data is to use the class_weight option when defining the sklearn model. However, as you will see, it is a bit of a blunt force mechanism and might not work for your very special case.
In this exercise you'll explore the class_weight="balanced_subsample" mode of the Random Forest model from the earlier exercise. You have already split your data into a training and test set, i.e. X_train, X_test, y_train, y_test are available. The metrics functions have already been imported.
Instructions
- Set the class_weight argument of your classifier to balanced_subsample.
- Obtain the roc_auc_score, the classification report and confusion matrix.
# Define the model with balanced subsample
model = RandomForestClassifier(class_weight='balanced_subsample', random_state=5, n_estimators=100)
# Fit your training model to your training set
model.fit(X_train, y_train)
# Obtain the predicted values and probabilities from the model
predicted = model.predict(X_test)
probs = model.predict_proba(X_test)
# Print the ROC curve, classification report and confusion matrix
print('ROC Score:')
print(roc_auc_score(y_test, probs[:,1]))
print('\nClassification Report:')
print(classification_report(y_test, predicted))
print('\nConfusion Matrix:')
print(confusion_matrix(y_test, predicted))
You can see that the model results don't improve drastically. We now have three fewer false positives, but 19 instead of 18 false negatives, i.e. cases of fraud we are not catching. If we mostly care about catching fraud, and not so much about the false positives, this does not actually improve our model at all, albeit being a simple option to try. In the next exercises you'll see how to more smartly tweak your model to focus on reducing false negatives and catch more fraud.
In this exercise you're going to dive into the options for the random forest classifier, as we'll assign weights and tweak the shape of the decision trees in the forest. You'll define weights manually, to be able to off-set that imbalance slightly. In our case we have 300 fraud to 7000 non-fraud cases, so by setting the weight ratio to 1:12, we get to a 1/3 fraud to 2/3 non-fraud ratio, which is good enough for training the model on.
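As a quick sanity check on that 1:12 ratio, here is a small addition using the approximate counts mentioned above:
# 300 fraud cases weighted 12x against 7000 non-fraud cases
n_fraud, n_non_fraud = 300, 7000
weighted_fraud = n_fraud * 12
print(f'Effective fraud share: {weighted_fraud / (weighted_fraud + n_non_fraud):0.2f}')  # roughly 1/3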
The data in this exercise has already been split into training and test set, so you just need to focus on defining your model. You can then use the function get_model_results()
as a short cut. This function fits the model to your training data, predicts and obtains performance metrics similar to the steps you did in the previous exercises.
Instructions
- Set the class_weight option to a ratio of 1 to 12 for the non-fraud and fraud cases, and set the split criterion to 'entropy'.
def get_model_results(X_train: np.ndarray, y_train: np.ndarray,
X_test: np.ndarray, y_test: np.ndarray, model):
"""
model: sklearn model (e.g. RandomForestClassifier)
"""
# Fit your training model to your training set
model.fit(X_train, y_train)
# Obtain the predicted values and probabilities from the model
predicted = model.predict(X_test)
try:
probs = model.predict_proba(X_test)
print('ROC Score:')
print(roc_auc_score(y_test, probs[:,1]))
except AttributeError:
pass
# Print the ROC curve, classification report and confusion matrix
print('\nClassification Report:')
print(classification_report(y_test, predicted))
print('\nConfusion Matrix:')
print(confusion_matrix(y_test, predicted))
# Change the model options
model = RandomForestClassifier(bootstrap=True,
class_weight={0:1, 1:12},
criterion='entropy',
# Change depth of model
max_depth=10,
# Change the number of samples in leaf nodes
min_samples_leaf=10,
# Change the number of trees to use
n_estimators=20,
n_jobs=-1,
random_state=5)
# Run the function get_model_results
get_model_results(X_train, y_train, X_test, y_test, model)
By smartly defining more options in the model, you can obtain better predictions. You have effectively reduced the number of false negatives, i.e. you are catching more cases of fraud, whilst keeping the number of false positives low. In this exercise you've manually changed the options of the model. There is a smarter way of doing it, by using GridSearchCV, which you'll see in the next exercise!
In this exercise you're going to tweak our model in a less "random" way, by using GridSearchCV to do the work for you.
With GridSearchCV you can define which performance metric to score the options on. Since for fraud detection we are mostly interested in catching as many fraud cases as possible, you can optimize your model settings to get the best possible Recall score. If you also cared about reducing the number of false positives, you could optimize on F1-score; this gives you that nice Precision-Recall trade-off.
GridSearchCV has already been imported from sklearn.model_selection, so let's give it a try!
Instructions
- Define in the parameter grid that you want to try both the gini and entropy split criterion.
- Define the model to be a RandomForestClassifier; you want to keep the random_state at 5 to be able to compare models.
- Set the scoring option such that it optimizes for recall.
- Fit the model to X_train and y_train and obtain the best parameters for the model.
# Define the parameter sets to test
param_grid = {'n_estimators': [1, 30],
'max_features': ['sqrt', 'log2'],  # 'auto' (equivalent to 'sqrt') was removed in newer sklearn
'max_depth': [4, 8, 10, 12],
'criterion': ['gini', 'entropy']}
# Define the model to use
model = RandomForestClassifier(random_state=5)
# Combine the parameter sets with the defined model
CV_model = GridSearchCV(estimator=model, param_grid=param_grid, cv=5, scoring='recall', n_jobs=-1)
# Fit the model to our training data and obtain best parameters
CV_model.fit(X_train, y_train)
CV_model.best_params_
You discovered that the best parameters for your model are that the split criterion should be set to 'gini', the number of estimators (trees) should be 30, the maximum depth of the model should be 8 and the maximum features should be set to "log2".
Let's give this a try and see how well our model performs. You can use the get_model_results() function again to save time.
Instructions
- Input the optimal settings into the model and obtain the results with get_model_results().
# Input the optimal parameters in the model
model = RandomForestClassifier(class_weight={0:1,1:12},
criterion='gini',
max_depth=8,
max_features='log2',
min_samples_leaf=10,
n_estimators=30,
n_jobs=-1,
random_state=5)
# Get results from your model
get_model_results(X_train, y_train, X_test, y_test, model)
The model has been improved even further. The number of false negatives has now been slightly reduced even further, which means we are catching more cases of fraud. However, you see that the number of false positives actually went up. That is the Precision-Recall trade-off in action. To decide which final model is best, you need to take into account how bad it is not to catch fraudsters, versus how many false positives the fraud analytics team can deal with. Ultimately, this final decision should be made by you and the fraud team together.
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
# Define Models
clf1 = LogisticRegression(random_state=1)
clf2 = RandomForestClassifier(random_state=1)
clf3 = GaussianNB()
# Combine models into ensemble
ensemble_model = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)], voting='hard')
# Fit and predict as with other models
ensemble_model.fit(X_train, y_train)
ensemble_model.predict(X_test)
- The voting='hard' option uses the predicted class labels and takes the majority vote.
- The voting='soft' option takes the average probability by combining the predicted probabilities of the individual models.
- Weights can be assigned to the VotingClassifier with weights=[2, 1, 1].
In this last lesson you'll combine three algorithms into one model with the VotingClassifier. This allows us to benefit from the different aspects from all models, and hopefully improve overall performance and detect more fraud. The first model, the Logistic Regression, has a slightly higher recall score than our optimal Random Forest model, but gives a lot more false positives. You'll also add a Decision Tree with balanced weights to it. The data is already split into a training and test set, i.e. X_train, y_train, X_test, y_test are available.
In order to understand how the Voting Classifier can potentially improve your original model, you should check the standalone results of the Logistic Regression model first.
Instructions
# Define the Logistic Regression model with weights
model = LogisticRegression(class_weight={0:1, 1:15}, random_state=5, solver='liblinear')
# Get the model results
get_model_results(X_train, y_train, X_test, y_test, model)
As you can see, the Logistic Regression has quite a different performance from the Random Forest: more false positives, but also a better Recall. It will therefore be a useful addition to the Random Forest in an ensemble model.
Let's now combine three machine learning models into one, to improve our Random Forest fraud detection model from before. You'll combine our usual Random Forest model, with the Logistic Regression from the previous exercise, with a simple Decision Tree. You can use the short cut get_model_results() to see the immediate result of the ensemble model.
Instructions
# Define the three classifiers to use in the ensemble
clf1 = LogisticRegression(class_weight={0:1, 1:15},
random_state=5,
solver='liblinear')
clf2 = RandomForestClassifier(class_weight={0:1, 1:12},
criterion='gini',
max_depth=8,
max_features='log2',
min_samples_leaf=10,
n_estimators=30,
n_jobs=-1,
random_state=5)
clf3 = DecisionTreeClassifier(random_state=5,
class_weight="balanced")
# Combine the classifiers in the ensemble model
ensemble_model = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('dt', clf3)], voting='hard')
# Get the results
get_model_results(X_train, y_train, X_test, y_test, ensemble_model)
By combining the classifiers, you can take the best of multiple models. You've increased the cases of fraud you are catching from 76 to 78, and you only have 5 extra false positives in return. If you do care about catching as many fraud cases as you can, whilst keeping the false positives low, this is a pretty good trade-off. The Logistic Regression as a standalone was quite bad in terms of false positives, and the Random Forest was worse in terms of false negatives. By combining these together you indeed managed to improve performance.
You've just seen that the Voting Classifier allows you to improve your fraud detection performance, by combining good aspects from multiple models. Now let's try to adjust the weights we give to these models. By increasing or decreasing weights you can play with how much emphasis you give to a particular model relative to the rest. This comes in handy when a certain model has overall better performance than the rest, but you still want to combine aspects of the others to further improve your results.
For this exercise the data is already split into a training and test set, and clf1, clf2 and clf3 are available and defined as before, i.e. they are the Logistic Regression, the Random Forest model and the Decision Tree respectively.
Instructions
- Define an ensemble method where you over-weigh the second classifier (clf2) with 4 to 1 to the rest of the classifiers.
- Fit the model to the training and test set, and obtain the predictions predicted from the ensemble model.
# Define the ensemble model
ensemble_model = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('dt', clf3)], voting='soft', weights=[1, 4, 1], flatten_transform=True)
# Get results
get_model_results(X_train, y_train, X_test, y_test, ensemble_model)
The weight option allows you to play with the individual models to get the best final mix for your fraud detection model. Now that you have finalized fraud detection with supervised learning, let's have a look at how fraud detection can be done when you don't have any labels to train on.
Use unsupervised learning techniques to detect fraud. Segment customers, use K-means clustering and other clustering algorithms to find suspicious occurrences in your data.
In the next exercises, you will be looking at bank payment transaction data. The financial transactions are categorized by type of expense, as well as the amount spent. Moreover, you have some client characteristics available such as age group and gender. Some of the transactions are labeled as fraud; you'll treat these labels as given and will use those to validate the results.
When using unsupervised learning techniques for fraud detection, you want to distinguish normal from abnormal (and thus potentially fraudulent) behavior. As a fraud analyst, to understand what is "normal" you need to have a good understanding of the data and its characteristics. Let's explore the data in this first exercise.
Instructions 1/3
- Obtain the shape of the dataframe df to inspect the size of our data and display the first rows to see which features are available.
banksim_df = pd.read_csv(banksim_file)
banksim_df.drop(['Unnamed: 0'], axis=1, inplace=True)
banksim_adj_df = pd.read_csv(banksim_adj_file)
banksim_adj_df.drop(['Unnamed: 0'], axis=1, inplace=True)
banksim_df.shape
banksim_df.head()
banksim_adj_df.shape
banksim_adj_df.head()
Instructions 2/3
banksim_df.groupby(['category']).mean()
Instructions 3/3
Based on these results, can you already say something about fraud in our data?
Possible Answers
In this exercise you're going to check whether there are any obvious patterns for the clients in this data, thus whether you need to segment your data into groups, or whether the data is rather homogeneous.
You unfortunately don't have a lot of client information available; you can't, for example, distinguish between the wealth levels of different clients. However, there is data on age available, so let's see whether there is any significant difference between the behavior of age groups.
Instructions 1/3
- Group df by the category age and get the means for each age group.
banksim_df.groupby(['age']).mean()
Instructions 2/3
banksim_df.age.value_counts()
Instructions 3/3
Based on the results you see, does it make sense to divide your data into age segments before running a fraud detection algorithm?
Possible Answers
The average amount spent as well as fraud occurrence is rather similar across groups. Age group '0' stands out but since there are only 40 cases, it does not make sense to split these out in a separate group and run a separate model on them.
In the previous exercises we saw that fraud is more prevalent in certain transaction categories, but that there is no obvious way to segment our data into for example age groups. This time, let's investigate the average amounts spent in normal transactions versus fraud transactions. This gives you an idea of how fraudulent transactions differ structurally from normal transactions.
Instructions
- Create two new dataframes from fraud and non-fraud observations. Use df with .loc and assign the condition "where fraud is 1" and "where fraud is 0" for creation of the new dataframes.
- Plot the amount column of the newly created dataframes in the histogram plot functions and assign the labels fraud and nonfraud respectively to the plots.
# Create two dataframes with fraud and non-fraud data
df_fraud = banksim_df[banksim_df.fraud == 1]
df_non_fraud = banksim_df[banksim_df.fraud == 0]
# Plot histograms of the amounts in fraud and non-fraud data
plt.hist(df_fraud.amount, alpha=0.5, label='fraud')
plt.hist(df_non_fraud.amount, alpha=0.5, label='nonfraud')
plt.xlabel('amount')
plt.legend()
plt.show()
As the number of fraud observations is much smaller, it is difficult to see the full distribution. Nonetheless, you can see that the fraudulent transactions tend to be on the larger side relative to normal observations. This is good news, as it helps us later in detecting fraud from non-fraud. In the next chapter you're going to implement a clustering model to distinguish between normal and abnormal transactions, when the fraud labels are no longer available.
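To back up that visual impression with numbers, here is a small addition comparing the amount distributions of the two groups, using df_fraud and df_non_fraud as defined above:
# Compare summary statistics of the transaction amounts for fraud vs non-fraud
print(pd.concat([df_fraud.amount.describe().rename('fraud'),
                 df_non_fraud.amount.describe().rename('non_fraud')], axis=1))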
- Set a random_state so models can be compared
# Import the packages
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans
# Transform and scale your data
# np.float was removed from NumPy; use np.float64 (or float) instead
X = np.array(df).astype(np.float64)
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)
# Define the k-means model and fit to the data
kmeans = KMeans(n_clusters=6, random_state=42).fit(X_scaled)
clust = range(1, 10)
kmeans = [KMeans(n_clusters=i) for i in clust]
score = [kmeans[i].fit(X_scaled).score(X_scaled) for i in range(len(kmeans))]
plt.plot(clust,score)
plt.xlabel('Number of Clusters')
plt.ylabel('Score')
plt.title('Elbow Curve')
plt.show()
For ML algorithms using distance based metrics, it is crucial to always scale your data, as features using different scales will distort your results. K-means uses the Euclidean distance to assess distance to cluster centroids, therefore you first need to scale your data before continuing to implement the algorithm. Let's do that first.
Available is the dataframe df from the previous exercise, with some minor data preparation done so it is ready for you to use with sklearn. The fraud labels are separately stored under labels; you can use those to check the results later.
Instructions
- Import the MinMaxScaler.
- Transform your dataframe df into a numpy array X by taking only the values of df and make sure you have all float values.
- Apply the defined scaler onto X to obtain scaled values of X_scaled to force all your features to a 0-1 scale.
labels = banksim_adj_df.fraud
cols = ['age', 'amount', 'M', 'es_barsandrestaurants', 'es_contents',
'es_fashion', 'es_food', 'es_health', 'es_home', 'es_hotelservices',
'es_hyper', 'es_leisure', 'es_otherservices', 'es_sportsandtoys',
'es_tech', 'es_transportation', 'es_travel']
# Take the float values of df for X
X = banksim_adj_df[cols].values.astype(np.float64)
X.shape
# Define the scaler and apply to the data
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)
A very commonly used clustering algorithm is K-means clustering. For fraud detection, K-means clustering is straightforward to implement and relatively powerful in predicting suspicious cases. It is a good algorithm to start with when working on fraud detection problems. However, fraud data is oftentimes very large, especially when you are working with transaction data. MiniBatch K-means is an efficient way to implement K-means on a large dataset, which you will use in this exercise.
The scaled data from the previous exercise, X_scaled, is available. Let's give it a try.
Instructions
- Import MiniBatchKMeans from sklearn.
# Define the model
kmeans = MiniBatchKMeans(n_clusters=8, random_state=0)
# Fit the model to the scaled data
kmeans.fit(X_scaled)
You have now fitted your MiniBatch K-means model to the data. In the upcoming exercises you're going to explore whether this model is any good at flagging fraud. But before doing that, you still need to figure out what the right number of clusters to use is. Let's do that in the next exercise.
In the previous exercise you've implemented MiniBatch K-means with 8 clusters, without actually checking what the right number of clusters should be. For our first fraud detection approach, it is important to get the number of clusters right, especially when you want to use the outliers of those clusters as fraud predictions. To decide how many clusters you're going to use, let's apply the Elbow method and see what the optimal number of clusters should be based on this method.
X_scaled is again available for you to use and MiniBatchKMeans has been imported from sklearn.
Instructions
# Define the range of clusters to try
clustno = range(1, 10)
# Run MiniBatch Kmeans over the number of clusters
kmeans = [MiniBatchKMeans(n_clusters=i) for i in clustno]
# Obtain the score for each model
score = [kmeans[i].fit(X_scaled).score(X_scaled) for i in range(len(kmeans))]
# Plot the models and their respective score
plt.plot(clustno, score)
plt.xlabel('Number of Clusters')
plt.ylabel('Score')
plt.title('Elbow Curve')
plt.show()
Now you can see that the optimal number of clusters should probably be at around 3 clusters, as that is where the elbow is in the curve. We'll use this in the next exercise as our baseline model, and see how well this does in detecting fraud.
# Run the kmeans model on scaled data
# n_jobs was removed from KMeans in newer versions of sklearn
kmeans = KMeans(n_clusters=6, random_state=42).fit(X_scaled)
# Get the cluster number for each datapoint
X_clusters = kmeans.predict(X_scaled)
# Save the cluster centroids
X_clusters_centers = kmeans.cluster_centers_
# Calculate the distance to the cluster centroid for each point
dist = [np.linalg.norm(x-y) for x,y in zip(X_scaled, X_clusters_centers[X_clusters])]
# Create predictions based on distance
km_y_pred = np.array(dist)
km_y_pred[dist>=np.percentile(dist, 93)] = 1
km_y_pred[dist<np.percentile(dist, 93)] = 0
np.linalg.norm
: returns the vector norm, the vector of distance for each datapoint to their assigned clusterIn the next exercises, you're going to use the K-means algorithm to predict fraud, and compare those predictions to the actual labels that are saved, to sense check our results.
The fraudulent transactions are typically flagged as the observations that are furthest away from the cluster centroid. You'll learn how to do this and how to determine the cut-off in this exercise. In the next one, you'll check the results.
Available are the scaled observations X_scaled, as well as the fraud labels stored under the variable labels.
Instructions
# Split the data into training and test set
X_train, X_test, y_train, y_test = train_test_split(X_scaled, labels, test_size=0.3, random_state=0)
# Define K-means model
kmeans = MiniBatchKMeans(n_clusters=3, random_state=42).fit(X_train)
# Obtain predictions and calculate distance from cluster centroid
X_test_clusters = kmeans.predict(X_test)
X_test_clusters_centers = kmeans.cluster_centers_
dist = np.array([np.linalg.norm(x - y) for x, y in zip(X_test, X_test_clusters_centers[X_test_clusters])])
# Create fraud predictions based on outliers on clusters
km_y_pred = np.array(dist)
km_y_pred[dist >= np.percentile(dist, 95)] = 1
km_y_pred[dist < np.percentile(dist, 95)] = 0
In the previous exercise you've flagged all observations to be fraud, if they are in the top 5th percentile in distance from the cluster centroid. I.e. these are the very outliers of the three clusters. For this exercise you have the scaled data and labels already split into training and test set, so y_test is available. The predictions from the previous exercise, km_y_pred, are also available. Let's create some performance metrics and see how well you did.
Instructions 1/3
def plot_confusion_matrix(cm, classes=['Not Fraud', 'Fraud'],
normalize=False,
title='Fraud Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
From:
http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
# print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
# Obtain the ROC score
roc_auc_score(y_test, km_y_pred)
Instructions 2/3
# Create a confusion matrix
km_cm = confusion_matrix(y_test, km_y_pred)
# Plot the confusion matrix in a figure to visualize results
plot_confusion_matrix(km_cm)
Instructions 3/3
If you were to decrease the percentile used as a cutoff point in the previous exercise to 93% instead of 95%, what would that do to your prediction results?
Possible Answers
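One way to explore this question yourself is to compare the confusion matrices at both cutoffs; this is a small addition reusing dist and y_test from the previous exercise:
# A lower percentile cutoff flags more observations as fraud
for pct in (95, 93):
    pred = (dist >= np.percentile(dist, pct)).astype(int)
    print(f'Cutoff at the {pct}th percentile:')
    print(confusion_matrix(y_test, pred))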
from sklearn.cluster import DBSCAN
db = DBSCAN(eps=0.5, min_samples=10, n_jobs=-1).fit(X_scaled)
# Get the cluster labels (aka numbers)
pred_labels = db.labels_
# Count the total number of clusters
n_clusters_ = len(set(pred_labels)) - (1 if -1 in pred_labels else 0)
# Print model results
print(f'Estimated number of clusters: {n_clusters_}')
>>> Estimated number of clusters: 31
# Print model results
print(f'Silhouette Coefficient: {silhouette_score(X_scaled, pred_labels):0.3f}')
>>> Silhouette Coefficient: 0.359
# Get sample counts in each cluster
counts = np.bincount(pred_labels[pred_labels>=0])
print(counts)
>>> [ 763, 496, 840, 355, 1086, 676, 63, 306, 560, 134, 28, 18, 262, 128,
332, 22, 22, 13, 31, 38, 36, 28, 14, 12, 30, 10, 11, 10, 21, 10, 5]
- eps: sets the maximum allowed distance between points within a cluster
- Use the labels_ attribute to get the assigned cluster label for each data point
- Count the number of samples in each cluster by running np.bincount on the label_ predictions (a numpy array)
- Sort the counts and decide how many of the smaller clusters to flag as fraud
In this exercise you're going to explore using a density based clustering method (DBSCAN) to detect fraud. The advantage of DBSCAN is that you do not need to define the number of clusters beforehand. Also, DBSCAN can handle weirdly shaped data (i.e. non-convex) much better than K-means can. This time, you are not going to take the outliers of the clusters and use that for fraud, but take the smallest clusters in the data and label those as fraud. You again have the scaled dataset, i.e. X_scaled available. Let's give it a try!
Instructions
- Import DBSCAN.
# Initialize and fit the DBscan model
db = DBSCAN(eps=0.9, min_samples=10, n_jobs=-1).fit(X_scaled)
# Obtain the predicted labels and calculate number of clusters
pred_labels = db.labels_
n_clusters = len(set(pred_labels)) - (1 if -1 in pred_labels else 0)
# Print performance metrics for DBscan
print(f'Estimated number of clusters: {n_clusters}')
print(f'Homogeneity: {homogeneity_score(labels, pred_labels):0.3f}')
print(f'Silhouette Coefficient: {silhouette_score(X_scaled, pred_labels):0.3f}')
The number of clusters is much higher than with K-means. For fraud detection this is for now OK, as we are only interested in the smallest clusters, since those are considered as abnormal. Now have a look at those clusters and decide which one to flag as fraud.
In this exercise you're going to have a look at the clusters that came out of DBscan, and flag certain clusters as fraud:
Available are the DBscan model predictions, so n_clusters is available as well as the cluster labels, which are saved under pred_labels. Let's give it a try!
Instructions 1/3
- Count the samples within each cluster by running a bincount on pred_labels and print the results.
# Count observations in each cluster number
counts = np.bincount(pred_labels[pred_labels >= 0])
# Print the result
print(counts)
Instructions 2/3
- Sort the counts and take the top 3 smallest clusters, and print the results.
# Sort the sample counts of the clusters and take the top 3 smallest clusters
smallest_clusters = np.argsort(counts)[:3]
# Print the results
print(f'The smallest clusters are clusters: {smallest_clusters}')
Instructions 3/3
- Within counts, select the smallest clusters only, to print the number of samples in the three smallest clusters.
# Print the counts of the smallest clusters only
print(f'Their counts are: {counts[smallest_clusters]}')
So now we know which smallest clusters you could flag as fraud. If you were to take more of the smallest clusters, you cast your net wider and catch more fraud, but most likely also more false positives. It is up to the fraud analyst to find the right amount of cases to flag and to investigate. In the next exercise you'll check the results with the actual labels.
In this exercise you're going to check the results of your DBscan fraud detection model. In reality, you often don't have reliable labels and this is where a fraud analyst can help you validate the results. He/she can check your results and see whether the cases you flagged are indeed suspicious. You can also check historically known cases of fraud and see whether your model flags them.
In this case, you'll use the fraud labels to check your model results. The predicted cluster numbers are available under pred_labels as well as the original fraud labels.
Instructions
# Create a dataframe of the predicted cluster numbers and fraud labels
df = pd.DataFrame({'clusternr':pred_labels,'fraud':labels})
# Create a condition flagging fraud for the smallest clusters
df['predicted_fraud'] = np.where((df['clusternr'].isin([21, 17, 9])), 1 , 0)
# Run a crosstab on the results
print(pd.crosstab(df['fraud'], df['predicted_fraud'], rownames=['Actual Fraud'], colnames=['Flagged Fraud']))
How does this compare to the K-means model? The good thing is: out of all flagged cases, roughly 2/3 are actually fraud! Since you only take the three smallest clusters, by definition you flag fewer cases of fraud, so you catch less fraud but also have fewer false positives. However, you are missing quite a lot of fraud cases. Increasing the number of smallest clusters you flag could improve that, at the cost of more false positives of course. In the next chapter you'll learn how to further improve fraud detection models by including text analysis.
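To quantify that comparison, here is a small addition computing precision and recall of the flagged clusters from the df built above (with its fraud and predicted_fraud columns):
from sklearn.metrics import precision_score, recall_score
# Precision: share of flagged cases that are actual fraud; recall: share of all fraud that was flagged
print(f"Precision: {precision_score(df['fraud'], df['predicted_fraud']):0.3f}")
print(f"Recall: {recall_score(df['fraud'], df['predicted_fraud']):0.3f}")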
Use text data, text mining and topic modeling to detect fraudulent behavior.
# Using a string operator to find words
df['email_body'].str.contains('money laundering')
# Select data that matches
df.loc[df['email_body'].str.contains('money laundering', na=False)]
# Create a list of words to search for
list_of_words = ['police', 'money laundering']
df.loc[df['email_body'].str.contains('|'.join(list_of_words), na=False)]
# Create a fraud flag
df['flag'] = np.where((df['email_body'].str.contains('|'.join(list_of_words)) == True), 1, 0)
In this exercise you're going to work with text data, containing emails from Enron employees. The Enron scandal is a famous fraud case. Enron employees covered up the bad financial position of the company, thereby keeping the stock price artificially high. Enron employees sold their own stock options, and when the truth came out, Enron investors were left with nothing. The goal is to find all emails that mention specific words, such as "sell enron stock".
By using string operations on dataframes, you can easily sift through messy email data and create flags based on word-hits. The Enron email data has been put into a dataframe called df
so let's search for suspicious terms. Feel free to explore df
in the Console before getting started.
Instructions 1/2
Explore df in the console and look for any emails mentioning 'sell enron stock'.
# Load the cleaned Enron emails and build a mask for the search term
df = pd.read_csv(enron_emails_clean_file)
mask = df['clean_content'].str.contains('sell enron stock', na=False)
Instructions 2/2
Select the data from df that meets the condition we created earlier.
# Select the data from df using the mask
df[mask]
You see that searching for particular string values in a dataframe can be relatively easy, and allows you to include textual data into your model or analysis. You can use this word search as an additional flag, or as a feature in your fraud detection model. Let's look at how to filter the data using multiple search terms.
Oftentimes you don't want to search on just one term. You can probably create a full "fraud dictionary" of terms that could potentially flag fraudulent clients and/or transactions. Fraud analysts often have an idea of what should be in such a dictionary. In this exercise you're going to flag a multitude of terms, and in the next exercise you'll create a new flag variable out of it. The flag can be used either directly in a machine learning model as a feature, or as an additional filter on top of your machine learning model results. Let's first use a list of terms to filter our data on. The dataframe containing the cleaned emails is again available as df.
Instructions
Create a list of terms to search for and assign it to searchfor.
# Create a list of terms to search for
searchfor = ['enron stock', 'sell stock', 'stock bonus', 'sell enron stock']
# Filter cleaned emails on searchfor list and select from df
filtered_emails = df[df.clean_content.str.contains('|'.join(searchfor), na=False)]
filtered_emails.head()
By joining the search terms with the 'or' sign, i.e. |, you can search on a multitude of terms in your dataset very easily. Let's now create a flag from this which you can use as a feature in a machine learning model.
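One caveat with this approach: str.contains interprets the joined pattern as a regular expression and matches case-sensitively by default. A minimal, slightly more defensive variant (my own sketch, assuming df and searchfor as above):
import re

# Escape any regex metacharacters in the terms and match case-insensitively
pattern = '|'.join(re.escape(term) for term in searchfor)
robust_matches = df[df['clean_content'].str.contains(pattern, case=False, na=False)]
print(len(robust_matches), 'emails matched')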
This time you are going to create an actual flag variable that gives a 1 when the emails get a hit on the search terms of interest, and 0 otherwise. This is the last step you need to make in order to actually use the text data content as a feature in a machine learning model, or as an actual flag on top of model results. You can continue working with the dataframe df
containing the emails, and the searchfor
list is the one defined in the last exercise.
Instructions
Create a flag variable that is 1 when the emails match terms from the searchfor list and 0 otherwise, joining the terms in the searchfor list with an "or" indicator.
# Create flag variable where the emails match the searchfor terms
df['flag'] = np.where((df['clean_content'].str.contains('|'.join(searchfor)) == True), 1, 0)
# Count the values of the flag variable
count = df['flag'].value_counts()
print(count)
You have now managed to search for a list of strings in several lines of text data. These skills come in handy when you want to flag certain words based on what you discovered in your topic model, or when you know beforehand what you want to search for. In the next exercises you're going to learn how to clean text data and to create your own topic model to further look for indications of fraud in your text data.
Must-dos when working with textual data: tokenize the text, remove stopwords and punctuation, lemmatize, and stem.
from nltk import word_tokenize
from nltk.corpus import stopwords
import string
# 1. Tokenization: strip whitespace, lowercase and split an email into word tokens
text = df['email_body'].iloc[0]
text = text.rstrip().lower()
# optionally keep letters only: text = re.sub(r'[^a-zA-Z]', ' ', text)
tokens = word_tokenize(text)
# 2. Remove all stopwords, digits and punctuation
exclude = set(string.punctuation)
stop = set(stopwords.words('english'))
stop_free = " ".join([word for word in tokens if (word not in stop) and (not word.isdigit())])
punc_free = ''.join(char for char in stop_free if char not in exclude)
from nltk.stem.wordnet import WordNetLemmatizer
from nltk.stem.porter import PorterStemmer
# 3. Lemmatize words
lemma = WordNetLemmatizer()
normalized = " ".join(lemma.lemmatize(word) for word in punc_free.split())
# 4. Stem words
porter = PorterStemmer()
cleaned_text = " ".join(porter.stem(token) for token in normalized.split())
print(cleaned_text)
Example output for a cleaned email:
['philip','going','street','curious','hear','perspective','may','wish',
'offer','trading','floor','enron','stock','lower','joined','company',
'business','school','imagine','quite','happy','people','day','relate',
'somewhat','stock','around','fact','broke','day','ago','knowing',
'imagine','letting','event','get','much','taken','similar',
'problem','hope','everything','else','going','well','family','knee',
'surgery','yet','give','call','chance','later']
In the following exercises you're going to clean the Enron emails, in order to be able to use the data in a topic model. Text cleaning can be challenging, so you'll learn some steps to do this well. The dataframe containing the emails, df, is available. As a first step you need to define the list of stopwords and punctuation that will be removed from the text data in the next exercise. Let's give it a try.
Instructions
Define stop as the set of English stopwords from nltk, and add extra terms that are common in email headers. Get the punctuation characters from the string package and assign them to exclude.
# Define stopwords to exclude
stop = set(stopwords.words('english'))
stop.update(("to", "cc", "subject", "http", "from", "sent", "ect", "u", "fwd", "www", "com", 'html'))
# Define punctuations to exclude and lemmatizer
exclude = set(string.punctuation)
Now that you've defined the stopwords and punctuation, let's use these to clean our Enron emails in the dataframe df further. The sets containing stopwords and punctuation are available under stop and exclude. There are a few more steps to take before you have cleaned data, such as lemmatization of words and stemming of the verbs. The verbs in the email data are already stemmed, and the lemmatization is already done for you in this exercise.
Instructions 1/2
Use stop and exclude to finish off the function: strip the words of whitespace using rstrip, exclude stopwords and punctuation, and finally lemmatize the words and assign the result to normalized.
# Instantiate the lemmatizer from nltk
lemma = WordNetLemmatizer()
def clean(text, stop):
    text = str(text).rstrip()
    stop_free = " ".join([i for i in text.lower().split() if ((i not in stop) and (not i.isdigit()))])
    punc_free = ''.join(i for i in stop_free if i not in exclude)
    normalized = " ".join(lemma.lemmatize(i) for i in punc_free.split())
    return normalized
Instructions 2/2
Apply the function clean(text, stop) to each line of text data in our dataframe, taking the column df['clean_content'] for this.
# Clean the emails in df and print results
text_clean = []
for text in df['clean_content']:
    text_clean.append(clean(text, stop).split())
text_clean[0][:10]
You have now cleaned your data entirely with the necessary steps, including splitting the text into words, removing stopwords and punctuation, and lemmatizing your words. You are ready to run a topic model on this data. In the following exercises you're going to explore how to do that.
Use the Dictionary function in corpora to create a dict from the text data. Then create a corpus by applying doc2bow (bag of words) to each piece of text.
from gensim import corpora
# Create dictionary number of times a word appears
dictionary = corpora.Dictionary(cleaned_emails)
# Filter out (non)frequent words
dictionary.filter_extremes(no_below=5, keep_n=50000)
# Create corpus
corpus = [dictionary.doc2bow(text) for text in cleaned_emails]
Use print_topics to obtain the top words from the topics.
import gensim
# Define the LDA model
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics = 3,
id2word=dictionary, passes=15)
# Print the three topics from the model with top words
topics = ldamodel.print_topics(num_words=4)
for topic in topics:
    print(topic)
>>> (0, '0.029*"email" + 0.016*"send" + 0.016*"results" + 0.016*"invoice"')
>>> (1, '0.026*"price" + 0.026*"work" + 0.026*"management" + 0.026*"sell"')
>>> (2, '0.029*"distribute" + 0.029*"contact" + 0.016*"supply" + 0.016*"fast"')
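The choice of three topics in this slide code is somewhat arbitrary. One common way to compare different numbers of topics is gensim's topic coherence score (higher is generally better); a minimal sketch, assuming the cleaned_emails, dictionary and corpus defined above:
from gensim.models import CoherenceModel

for k in [3, 5, 7]:
    # Fit an LDA model with k topics and score its coherence on the cleaned texts
    lda_k = gensim.models.ldamodel.LdaModel(corpus, num_topics=k, id2word=dictionary, passes=5)
    cm = CoherenceModel(model=lda_k, texts=cleaned_emails, dictionary=dictionary, coherence='c_v')
    print(f'{k} topics: coherence {cm.get_coherence():0.3f}')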
In order to run an LDA topic model, you first need to define your dictionary and corpus, as those need to go into the model. You're going to continue working on the cleaned text data from the previous exercises. That means text_clean is already available, and you'll use it to create your dictionary and corpus.
This exercise will take a little longer to execute than usual.
Instructions
Define the dictionary by running corpora.Dictionary() on text_clean. Define the corpus by running doc2bow on each piece of text in text_clean. Print both to see what the dictionary and corpus look like.
# Define the dictionary
dictionary = corpora.Dictionary(text_clean)
# Define the corpus
corpus = [dictionary.doc2bow(text) for text in text_clean]
print(dictionary)
corpus[0][:10]
These are the two ingredients you need to run your topic model on the Enron emails. You are now ready for the final step: creating your first fraud detection topic model.
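To make the corpus representation a bit more tangible, here is a small sketch (my own addition) that maps the (token_id, count) pairs of the first document back to readable words, assuming the dictionary and corpus defined above:
# Translate the first few (token_id, count) pairs of document 0 back to words
for token_id, count in corpus[0][:10]:
    print(f'{dictionary[token_id]}: {count}')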
Now it's time to build the LDA model. Using the dictionary
and corpus
, you are ready to discover which topics are present in the Enron emails. With a quick print of the words assigned to the topics, you can do a first exploration of whether there are any obvious topics that jump out. Be mindful that the topic model is computationally heavy, so it will take a while to run. Let's give it a try!
Instructions
Build the LDA model from gensim using the corpus and dictionary. Save the top 5 words of each topic by running print_topics on the model results.
# Define the LDA model
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=5, id2word=dictionary, passes=5)
# Save the topics and top 5 words
topics = ldamodel.print_topics(num_words=5)
# Print the results
for topic in topics:
    print(topic)
You have now successfully created your first topic model on the Enron email data. However, the print of words doesn't really give you enough information to find a topic that might lead you to signs of fraud. You'll therefore need to closely inspect the model results in order to be able to detect anything that can be related to fraud in your data. You'll learn more about this in the next video.
# Note: with IPython > 7.16.1, this may emit a DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future
import pyLDAvis.gensim
lda_display = pyLDAvis.gensim.prepare(ldamodel, corpus, dictionary, sort_topics=False)
pyLDAvis.display(lda_display)
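If you are not working in a notebook, the interactive visualization can also be written to a standalone HTML file; a one-line sketch, assuming lda_display from above (the file name is just an example):
# Save the interactive topic visualization to disk
pyLDAvis.save_html(lda_display, 'lda_topics.html')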
The get_topic_details function shown here nicely aggregates this information in a presentable table.
get_topic_details function
def get_topic_details(ldamodel, corpus):
    topic_details_df = pd.DataFrame()
    for i, row in enumerate(ldamodel[corpus]):
        row = sorted(row, key=lambda x: (x[1]), reverse=True)
        for j, (topic_num, prop_topic) in enumerate(row):
            if j == 0:  # => dominant topic
                wp = ldamodel.show_topic(topic_num)
                topic_details_df = topic_details_df.append(pd.Series([topic_num, prop_topic]), ignore_index=True)
    topic_details_df.columns = ['Dominant_Topic', '% Score']
    return topic_details_df
contents = pd.DataFrame({'Original text':text_clean})
topic_details = pd.concat([get_topic_details(ldamodel,
corpus), contents], axis=1)
topic_details.head()
Dominant_Topic % Score Original text
0 0.0 0.989108 [investools, advisory, free, ...
1 0.0 0.993513 [forwarded, richard, b, ...
2 1.0 0.964858 [hey, wearing, target, purple, ...
3 0.0 0.989241 [leslie, milosevich, santa, clara, ...
Possible Answers
Topic 1 seems to discuss the employee share option program, and seems to point to internal conversation (with "please, may, know" etc), so this is more likely to be related to the internal accounting fraud and trading stock with insider knowledge. Topic 3 seems to be more related to general news around Enron.
In this exercise you're going to link the results from the topic model back to your original data. You now learned that you want to flag everything related to topic 3. As you will see, this is actually not that straightforward. You'll be given the function get_topic_details()
which takes the arguments ldamodel
and corpus
. It retrieves the details of the topics for each line of text. With that function, you can append the results back to your original data. If you want to learn in more detail how to work with the model results, which is beyond the scope of this course, you're highly encouraged to read this article.
Available for you are the dictionary
and corpus
, the text data text_clean
as well as your model results ldamodel
. Also defined is get_topic_details()
.
Instructions 1/3
Run the get_topic_details() function by inserting your LDA model results and corpus.
def get_topic_details(ldamodel, corpus):
    topic_details_df = pd.DataFrame()
    for i, row in enumerate(ldamodel[corpus]):
        row = sorted(row, key=lambda x: (x[1]), reverse=True)
        for j, (topic_num, prop_topic) in enumerate(row):
            if j == 0:  # => dominant topic
                wp = ldamodel.show_topic(topic_num)
                topic_details_df = topic_details_df.append(pd.Series([topic_num, prop_topic]), ignore_index=True)
    topic_details_df.columns = ['Dominant_Topic', '% Score']
    return topic_details_df
# Run get_topic_details function and check the results
topic_details_df = get_topic_details(ldamodel, corpus)
topic_details_df.head()
topic_details_df.tail()
Instructions 2/3
Concatenate the results of get_topic_details() to the original text data contained under contents and inspect the results.
# Add original text to topic details in a dataframe
contents = pd.DataFrame({'Original text': text_clean})
topic_details = pd.concat([get_topic_details(ldamodel, corpus), contents], axis=1)
topic_details.sort_values(by=['% Score'], ascending=False).head(10).head()
topic_details.sort_values(by=['% Score'], ascending=False).head(10).tail()
Instructions 3/3
Use the np.where() function to flag all content that has topic 3 as its dominant topic with a 1, and 0 otherwise.
# Create flag for text highest associated with topic 3
topic_details['flag'] = np.where((topic_details['Dominant_Topic'] == 3.0), 1, 0)
topic_details_1 = topic_details[topic_details.flag == 1]
topic_details_1.sort_values(by=['% Score'], ascending=False).head(10)
You have now flagged all data that is most strongly associated with topic 3, which seems to cover internal conversation about Enron stock options. You are a true detective. With these exercises you have demonstrated that text mining and topic modeling can be a powerful tool for fraud detection.
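As a closing sketch of how these pieces could be combined (my own illustration, under the assumption that the word-search flag df['flag'] and topic_details keep the same row order as the original emails), the two signals can be merged into a single fraud indicator:
# Combine the word-search flag and the topic-model flag into one indicator
combined = pd.DataFrame({'word_flag': df['flag'].values,
                         'topic_flag': topic_details['flag'].values})
combined['fraud_indicator'] = ((combined['word_flag'] == 1) | (combined['topic_flag'] == 1)).astype(int)
print(combined['fraud_indicator'].value_counts())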