
Pandas DataFrames - An In-depth Guide

Introduction

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from numpy import NaN
from glob import glob
import re
pd.set_option('display.max_columns', 200)
pd.set_option('display.max_rows', 300)
pd.set_option('display.expand_frame_repr', True)

Course Description

Pandas DataFrames are the most widely used in-memory representation of complex data collections within Python. Whether in finance, scientific fields, or data science, a familiarity with Pandas is essential. This course teaches you to work with real-world data sets containing both string and numeric data, often structured around time series. You will learn powerful analysis, selection, and visualization techniques in this course.

Synopsis

Data Ingestion & Inspection

  • Data loaded into a DataFrame from a CSV file.
  • First few rows of the DataFrame displayed using head().
  • Columns renamed to be more descriptive.
  • Data types of each column inspected using dtypes.
  • Rows with missing data identified using isna().any().
  • Summary statistics generated with describe().
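A minimal sketch of this ingestion-and-inspection workflow (the file path and the renamed column are placeholders, not part of the course data):

# Load a CSV into a DataFrame (hypothetical path)
df = pd.read_csv('data/my_dataset.csv')

df.head()                                  # first few rows
df = df.rename(columns={'val': 'price'})   # more descriptive column name (illustrative)
df.dtypes                                  # data type of each column
df.isna().any()                            # which columns contain missing values
df.describe()                              # summary statistics for numeric columns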

Exploratory Data Analysis

  • Histograms created to visualize distributions of numerical columns.
  • Box plots generated to identify outliers.
  • Scatter plots used to examine relationships between variables.
  • Grouping operations performed using groupby().
  • Pivot tables created for summary statistics.
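As a rough sketch of the grouping and pivoting steps (the column names are borrowed from DataFrames that appear later in this guide, purely for illustration):

# Mean of a numeric column within each group
iris.groupby('species')['petal length (cm)'].mean()

# Pivot table of mean visitor counts, one row per weekday and one column per city
users.pivot_table(values='visitors', index='weekday', columns='city', aggfunc='mean')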

Time Series in pandas

  • Date and time information set as the index of the DataFrame.
  • Data resampled to different frequencies (e.g., daily, monthly) using resample().
  • Rolling averages calculated to smooth time series data.
  • Time series plotted to visualize trends over time.
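A brief sketch of those operations, assuming a Series or DataFrame ts with a DatetimeIndex (the name ts and the window sizes are illustrative):

daily = ts.resample('D').mean()          # resample to daily frequency
monthly = ts.resample('M').mean()        # resample to monthly frequency
smoothed = ts.rolling(window=7).mean()   # 7-period rolling average
smoothed.plot()                          # visualize the smoothed trend
plt.show()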

Case Study - Sunlight in Austin

  • Data filtered to specific time periods.
  • Specific conditions (e.g., overcast days) identified using string operations.
  • Daily maximum temperatures calculated for filtered data.
  • Cumulative distribution function (CDF) constructed for specific subsets of data.
  • Findings summarized and visualized with appropriate plots.
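A compact sketch of that case-study workflow, assuming an hourly DataFrame df_clim indexed by date with 'dry_bulb_faren' and 'sky_condition' columns (these names are assumptions, not taken from this guide's data files):

summer = df_clim.loc['2011-06':'2011-08']                            # filter to a time period
overcast = summer[summer['sky_condition'].str.contains('OVC')]       # string-based condition
daily_max = overcast['dry_bulb_faren'].resample('D').max()           # daily maximum temperatures
daily_max.plot(kind='hist', bins=25, density=True, cumulative=True)  # empirical CDF
plt.show()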


Data ingestion & inspection

In this chapter, you will be introduced to pandas DataFrames. You will use pandas to import and inspect a variety of datasets, ranging from population data obtained from The World Bank to monthly stock data obtained via Yahoo! Finance. You will also practice building DataFrames from scratch and become familiar with pandas’ intrinsic data visualization capabilities.

Review pandas DataFrames

  • Example: DataFrame of Apple Stock data
AAPL = pd.read_csv(r'DataCamp-master/11-pandas-foundations/_datasets/AAPL.csv',
                   index_col='Date', parse_dates=True)
AAPL.head()

                Open      High       Low     Close  Adj Close       Volume
Date
1980-12-12  0.513393  0.515625  0.513393  0.513393   0.023106  117258400.0
1980-12-15  0.488839  0.488839  0.486607  0.486607   0.021900   43971200.0
1980-12-16  0.453125  0.453125  0.450893  0.450893   0.020293   26432000.0
1980-12-17  0.462054  0.464286  0.462054  0.462054   0.020795   21610400.0
1980-12-18  0.475446  0.477679  0.475446  0.475446   0.021398   18362400.0
  • The rows are labeled by a special data structure called an Index.
    • Indexes in Pandas are tailored lists of labels that permit fast look-up and some powerful relational operations.
  • The index labels in the AAPL DataFrame are dates in chronological order (earliest first).
  • Labeled rows & columns improve the clarity and intuition of many data analysis tasks.
type(AAPL)

pandas.core.frame.DataFrame

AAPL.shape

(9611, 6)

AAPL.columns

Index(['Open', 'High', 'Low', 'Close', 'Adj Close', 'Volume'], dtype='object')

type(AAPL.columns)

pandas.core.indexes.base.Index

AAPL.index

DatetimeIndex(['1980-12-12', '1980-12-15', '1980-12-16', '1980-12-17',
               '1980-12-18', '1980-12-19', '1980-12-22', '1980-12-23',
               '1980-12-24', '1980-12-26',
               ...
               '2019-01-10', '2019-01-11', '2019-01-14', '2019-01-15',
               '2019-01-16', '2019-01-17', '2019-01-18', '2019-01-22',
               '2019-01-23', '2019-01-24'],
              dtype='datetime64[ns]', name='Date', length=9611, freq=None)

type(AAPL.index)

pandas.core.indexes.datetimes.DatetimeIndex
  • DataFrames can be sliced like NumPy arrays or Python lists using colons to specify the start, end and stride of a slice.
# Start of the DataFrame to the 5th row, inclusive of all columns
AAPL.iloc[:5,:]

                Open      High       Low     Close  Adj Close       Volume
Date
1980-12-12  0.513393  0.515625  0.513393  0.513393   0.023106  117258400.0
1980-12-15  0.488839  0.488839  0.486607  0.486607   0.021900   43971200.0
1980-12-16  0.453125  0.453125  0.450893  0.450893   0.020293   26432000.0
1980-12-17  0.462054  0.464286  0.462054  0.462054   0.020795   21610400.0
1980-12-18  0.475446  0.477679  0.475446  0.475446   0.021398   18362400.0
# Start at the 5th last row to the end of the DataFrame using a negative index
AAPL.iloc[-5:,:]

                  Open        High         Low       Close   Adj Close      Volume
Date
2019-01-17  154.199997  157.660004  153.259995  155.860001  155.860001  29821200.0
2019-01-18  157.500000  157.880005  155.979996  156.820007  156.820007  33751000.0
2019-01-22  156.410004  156.729996  152.619995  153.300003  153.300003  30394000.0
2019-01-23  154.149994  155.139999  151.699997  153.919998  153.919998  23130600.0
2019-01-24  154.110001  154.479996  151.740005  152.699997  152.699997  25421800.0
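The third slice component (the stride) works here as well; for example, keeping every 21st row (roughly one row per trading month, an arbitrary illustrative choice):

AAPL.iloc[::21, :].head()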
AAPL.head()

                Open      High       Low     Close  Adj Close       Volume
Date
1980-12-12  0.513393  0.515625  0.513393  0.513393   0.023106  117258400.0
1980-12-15  0.488839  0.488839  0.486607  0.486607   0.021900   43971200.0
1980-12-16  0.453125  0.453125  0.450893  0.450893   0.020293   26432000.0
1980-12-17  0.462054  0.464286  0.462054  0.462054   0.020795   21610400.0
1980-12-18  0.475446  0.477679  0.475446  0.475446   0.021398   18362400.0
AAPL.tail()

                  Open        High         Low       Close   Adj Close      Volume
Date
2019-01-17  154.199997  157.660004  153.259995  155.860001  155.860001  29821200.0
2019-01-18  157.500000  157.880005  155.979996  156.820007  156.820007  33751000.0
2019-01-22  156.410004  156.729996  152.619995  153.300003  153.300003  30394000.0
2019-01-23  154.149994  155.139999  151.699997  153.919998  153.919998  23130600.0
2019-01-24  154.110001  154.479996  151.740005  152.699997  152.699997  25421800.0
AAPL.info()

<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 9611 entries, 1980-12-12 to 2019-01-24
Data columns (total 6 columns):
 #   Column     Non-Null Count  Dtype  
---  ------     --------------  -----  
 0   Open       9610 non-null   float64
 1   High       9610 non-null   float64
 2   Low        9610 non-null   float64
 3   Close      9610 non-null   float64
 4   Adj Close  9610 non-null   float64
 5   Volume     9610 non-null   float64
dtypes: float64(6)
memory usage: 525.6 KB

# Add first subplot
plt.subplot(2, 1, 1)
AAPL.Close.plot(kind='line')

# Add title and specify axis labels
plt.title('Close')
plt.ylabel('Value - $')
plt.xlabel('Year')

# Add second subplot
plt.subplot(2, 1, 2)
AAPL.Volume.plot(kind='line')

# Add title and specify axis labels
plt.title('Volume')
plt.ylabel('Number of Shares')
plt.xlabel('Year')

# Display the plots
plt.tight_layout()
plt.show()

[figure: AAPL Close price and Volume line subplots]

Broadcasting

  • Assigning scalar value to column slice broadcasts value to each row
AAPL.iloc[::3, -1] = np.nan  # every 3rd row of Volume is now NaN

AAPL.head(7)

                Open      High       Low     Close  Adj Close      Volume
Date
1980-12-12  0.513393  0.515625  0.513393  0.513393   0.023106         NaN
1980-12-15  0.488839  0.488839  0.486607  0.486607   0.021900  43971200.0
1980-12-16  0.453125  0.453125  0.450893  0.450893   0.020293  26432000.0
1980-12-17  0.462054  0.464286  0.462054  0.462054   0.020795         NaN
1980-12-18  0.475446  0.477679  0.475446  0.475446   0.021398  18362400.0
1980-12-19  0.504464  0.506696  0.504464  0.504464   0.022704  12157600.0
1980-12-22  0.529018  0.531250  0.529018  0.529018   0.023809         NaN
AAPL.info()

<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 9611 entries, 1980-12-12 to 2019-01-24
Data columns (total 6 columns):
 #   Column     Non-Null Count  Dtype  
---  ------     --------------  -----  
 0   Open       9610 non-null   float64
 1   High       9610 non-null   float64
 2   Low        9610 non-null   float64
 3   Close      9610 non-null   float64
 4   Adj Close  9610 non-null   float64
 5   Volume     6407 non-null   float64
dtypes: float64(6)
memory usage: 525.6 KB
  • Note that Volume now has fewer non-null values.

Series

low = AAPL.Low

type(low)

pandas.core.series.Series

low.head()

Date
1980-12-12    0.513393
1980-12-15    0.486607
1980-12-16    0.450893
1980-12-17    0.462054
1980-12-18    0.475446
Name: Low, dtype: float64

lows = low.values

type(lows)

numpy.ndarray

lows[0:5]

array([0.513393, 0.486607, 0.450893, 0.462054, 0.475446])
  • A pandas Series, then, is a 1D labeled NumPy array, and a DataFrame is a 2D labeled array whose columns are Series.

Exercises

Inspecting your data

You can use the DataFrame methods .head() and .tail() to view the first few and last few rows of a DataFrame. In this exercise, we have imported pandas as pd and loaded population data from 1960 to 2014 as a DataFrame df. This dataset was obtained from the World Bank.

Your job is to use df.head() and df.tail() to verify that the first and last rows match a file on disk. In later exercises, you will see how to extract values from DataFrames with indexing, but for now, manually copy/paste or type values into assignment statements where needed. Select the correct answer for the first and last values in the 'Year' and 'Total Population' columns.

Instructions

Possible Answers

  • First: 1980, 26183676.0; Last: 2000, 35.
  • First: 1960, 92495902.0; Last: 2014, 15245855.0.
  • First: 40.472, 2001; Last: 44.5, 1880.
  • First: CSS, 104170.0; Last: USA, 95.203.
wb_df = pd.read_csv(r'DataCamp-master/11-pandas-foundations/_datasets/world_ind_pop_data.csv')

wb_df.head()

                               CountryName CountryCode  Year  Total Population  Urban population (% of total)
0                               Arab World         ARB  1960      9.249590e+07                      31.285384
1                   Caribbean small states         CSS  1960      4.190810e+06                      31.597490
2           Central Europe and the Baltics         CEB  1960      9.140158e+07                      44.507921
3  East Asia & Pacific (all income levels)         EAS  1960      1.042475e+09                      22.471132
4    East Asia & Pacific (developing only)         EAP  1960      8.964930e+08                      16.917679
wb_df.tail()

                 CountryName CountryCode  Year  Total Population  Urban population (% of total)
13369  Virgin Islands (U.S.)         VIR  2014          104170.0                         95.203
13370     West Bank and Gaza         WBG  2014         4294682.0                         75.026
13371            Yemen, Rep.         YEM  2014        26183676.0                         34.027
13372                 Zambia         ZMB  2014        15721343.0                         40.472
13373               Zimbabwe         ZWE  2014        15245855.0                         32.501
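To read those answers off programmatically rather than by eye, positional indexing works; a small sketch using the DataFrame loaded above:

wb_df.iloc[0][['Year', 'Total Population']]    # first row: 1960, 92495902.0
wb_df.iloc[-1][['Year', 'Total Population']]   # last row: 2014, 15245855.0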

DataFrame data types

Pandas is aware of the data types in the columns of your DataFrame. It is also aware of null and NaN (‘Not-a-Number’) types which often indicate missing data. In this exercise, we have imported pandas as pd and read in the world population data which contains some NaN values, a value often used as a place-holder for missing or otherwise invalid data entries. Your job is to use df.info() to determine information about the total count of non-null entries and infer the total count of 'null' entries, which likely indicates missing data. Select the best description of this data set from the following:

Instructions

Possible Answers

  • The data is all of type float64 and none of it is missing.
  • The data is of mixed type, and 9914 of it is missing.
  • The data is of mixed type, and 3460 float64s are missing.
  • The data is all of type float64, and 3460 float64s are missing.
Reference output from the exercise (the DataCamp copy of the data, which contains missing Total Population values):

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 13374 entries, 0 to 13373
Data columns (total 5 columns):
CountryName                      13374 non-null object
CountryCode                      13374 non-null object
Year                             13374 non-null int64
Total Population                 9914 non-null float64
Urban population (% of total)    13374 non-null float64
dtypes: float64(2), int64(1), object(2)
memory usage: 522.5+ KB
wb_df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 13374 entries, 0 to 13373
Data columns (total 5 columns):
 #   Column                         Non-Null Count  Dtype  
---  ------                         --------------  -----  
 0   CountryName                    13374 non-null  object 
 1   CountryCode                    13374 non-null  object 
 2   Year                           13374 non-null  int64  
 3   Total Population               13374 non-null  float64
 4   Urban population (% of total)  13374 non-null  float64
dtypes: float64(2), int64(1), object(2)
memory usage: 522.6+ KB
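With the DataCamp copy of this file (which, as shown in the reference .info() output above, has only 9914 non-null population entries), the gap can be computed directly rather than inferred; a small sketch:

wb_df.isnull().sum()   # missing entries per column
13374 - 9914           # = 3460 missing 'Total Population' values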

NumPy and pandas working together

Pandas depends upon and interoperates with NumPy, the Python library for fast numeric array computations. For example, you can use the DataFrame attribute .values to represent a DataFrame df as a NumPy array. You can also pass pandas data structures to NumPy methods. In this exercise, we have imported pandas as pd and loaded world population data every 10 years since 1960 into the DataFrame df. This dataset was derived from the one used in the previous exercise.

Your job is to extract the values and store them in an array using the attribute .values. You’ll then use those values as input into the NumPy np.log10() method to compute the base 10 logarithm of the population values. Finally, you will pass the entire pandas DataFrame into the same NumPy np.log10() method and compare the results.

Instructions

  • Import numpy using the standard alias np.
  • Assign the numerical values in the DataFrame df to an array np_vals using the attribute values.
  • Pass np_vals into the NumPy method log10() and store the results in np_vals_log10.
  • Pass the entire df DataFrame into the NumPy method log10() and store the results in df_log10.
  • Inspect the output of the print() code to see the type() of the variables that you created.
pop_df = pd.read_csv(r'DataCamp-master/11-pandas-foundations/_datasets/world_population.csv')

pop_df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 6 entries, 0 to 5
Data columns (total 2 columns):
 #   Column            Non-Null Count  Dtype
---  ------            --------------  -----
 0   Date              6 non-null      int64
 1   Total Population  6 non-null      int64
dtypes: int64(2)
memory usage: 228.0 bytes
# Create array of DataFrame values: np_vals
np_vals = pop_df.values

np_vals

array([[      1960, 3034970564],
       [      1970, 3684822701],
       [      1980, 4436590356],
       [      1990, 5282715991],
       [      2000, 6115974486],
       [      2010, 6924282937]], dtype=int64)
# Create new array of base 10 logarithm values: np_vals_log10
np_vals_log10 = np.log10(np_vals)

np_vals_log10

array([[3.29225607, 9.48215448],
       [3.29446623, 9.5664166 ],
       [3.29666519, 9.64704933],
       [3.29885308, 9.72285726],
       [3.30103   , 9.78646566],
       [3.30319606, 9.84037481]])
# Create array of new DataFrame by passing df to np.log10(): df_log10
pop_df_log10 = np.log10(pop_df)

pop_df_log10

       Date  Total Population
0  3.292256          9.482154
1  3.294466          9.566417
2  3.296665          9.647049
3  3.298853          9.722857
4  3.301030          9.786466
5  3.303196          9.840375
# Print original and new data containers
[print(x, 'has type', type(eval(x))) for x in ['np_vals', 'np_vals_log10', 'pop_df', 'pop_df_log10']]

np_vals has type <class 'numpy.ndarray'>
np_vals_log10 has type <class 'numpy.ndarray'>
pop_df has type <class 'pandas.core.frame.DataFrame'>
pop_df_log10 has type <class 'pandas.core.frame.DataFrame'>





[None, None, None, None]

As a data scientist, you’ll frequently interact with NumPy arrays, pandas Series, and pandas DataFrames, and you’ll leverage a variety of NumPy and pandas methods to perform your desired computations. Understanding how NumPy and pandas work together will prove to be very useful.

Building DataFrames from Scratch

  • DataFrames read in from CSV
    
    pd.read_csv()
    
  • DataFrames from dict (1)
data = {'weekday': ['Sun', 'Sun', 'Mon', 'Mon'],
        'city': ['Austin', 'Dallas', 'Austin', 'Dallas'],
        'visitors': [139, 237, 326, 456],
        'signups': [7, 12, 3, 5]}

users = pd.DataFrame(data)

users

  weekday    city  visitors  signups
0     Sun  Austin       139        7
1     Sun  Dallas       237       12
2     Mon  Austin       326        3
3     Mon  Dallas       456        5
  • DataFrames from dict (2)
    • lists
cities = ['Austin', 'Dallas', 'Austin', 'Dallas']
signups = [7, 12, 3, 5]
weekdays = ['Sun', 'Sun', 'Mon', 'Mon']
visitors = [139, 237, 326, 456]

list_labels = ['city', 'signups', 'visitors', 'weekday']
list_cols = [cities, signups, visitors, weekdays]  # list of lists

zipped = list(zip(list_labels, list_cols))  # tuples
zipped
[('city', ['Austin', 'Dallas', 'Austin', 'Dallas']),
 ('signups', [7, 12, 3, 5]),
 ('visitors', [139, 237, 326, 456]),
 ('weekday', ['Sun', 'Sun', 'Mon', 'Mon'])]
  • DataFrames from dict (3)
data2 = dict(zipped)

users2 = pd.DataFrame(data2)

users2

     city  signups  visitors weekday
0  Austin        7       139     Sun
1  Dallas       12       237     Sun
2  Austin        3       326     Mon
3  Dallas        5       456     Mon

Broadcasting

  • Saves time by generating long lists, arrays or columns without loops
users['fees'] = 0  # Broadcasts value to entire column

users

  weekday    city  visitors  signups  fees
0     Sun  Austin       139        7     0
1     Sun  Dallas       237       12     0
2     Mon  Austin       326        3     0
3     Mon  Dallas       456        5     0

Broadcasting with a dict

heights = [59.0, 65.2, 62.9, 65.4, 63.7, 65.7, 64.1]

data = {'height': heights, 'sex': 'M'}  # M is broadcast to the entire column

results = pd.DataFrame(data)

results

   height sex
0    59.0   M
1    65.2   M
2    62.9   M
3    65.4   M
4    63.7   M
5    65.7   M
6    64.1   M

Index and columns

  • We can assign lists of strings to the columns and index attributes, as long as they are of suitable length.
results.columns = ['height (in)', 'sex']

results.index = ['A', 'B', 'C', 'D', 'E', 'F', 'G']

results

   height (in) sex
A         59.0   M
B         65.2   M
C         62.9   M
D         65.4   M
E         63.7   M
F         65.7   M
G         64.1   M

Exercises

Zip lists to build a DataFrame

In this exercise, you’re going to make a pandas DataFrame of the top three countries to win gold medals since 1896 by first building a dictionary. list_keys contains the column names 'Country' and 'Total'. list_values contains the full names of each country and the number of gold medals awarded. The values have been taken from Wikipedia.

Your job is to use these lists to construct a list of tuples, use the list of tuples to construct a dictionary, and then use that dictionary to construct a DataFrame. In doing so, you’ll make use of the list(), zip(), dict() and pd.DataFrame() functions. Pandas has already been imported as pd.

Note: The zip() function in Python 3 and above returns a special zip object, which is essentially a generator. To convert this zip object into a list, you’ll need to use list(). You can learn more about the zip() function as well as generators in Python Data Science Toolbox (Part 2).

Instructions

  • Zip the 2 lists list_keys and list_values together into one list of (key, value) tuples. Be sure to convert the zip object into a list, and store the result in zipped.
  • Inspect the contents of zipped using print(). This has been done for you.
  • Construct a dictionary using zipped. Store the result as data.
  • Construct a DataFrame using the dictionary. Store the result as df.
list_keys = ['Country', 'Total']
list_values = [['United States', 'Soviet Union', 'United Kingdom'], [1118, 473, 273]]

zipped = list(zip(list_keys, list_values))  # tuples
zipped

[('Country', ['United States', 'Soviet Union', 'United Kingdom']),
 ('Total', [1118, 473, 273])]

data = dict(zipped)

data

{'Country': ['United States', 'Soviet Union', 'United Kingdom'],
 'Total': [1118, 473, 273]}

data_df = pd.DataFrame.from_dict(data)

data_df

          Country  Total
0   United States   1118
1    Soviet Union    473
2  United Kingdom    273

Labeling your data

You can use the DataFrame attribute df.columns to view and assign new string labels to columns in a pandas DataFrame.

In this exercise, we have imported pandas as pd and defined a DataFrame df containing top Billboard hits from the 1980s (from Wikipedia). Each row has the year, artist, song name and the number of weeks at the top. However, this DataFrame has the column labels a, b, c, d. Your job is to use the df.columns attribute to re-assign descriptive column labels.

Instructions

  • Create a list of new column labels with 'year', 'artist', 'song', 'chart weeks', and assign it to list_labels.
  • Assign your list of labels to df.columns.
billboard_values = np.array([['1980', 'Blondie', 'Call Me', '6'],
                             ['1981', 'Chistorpher Cross', 'Arthurs Theme', '3'],
                             ['1982', 'Joan Jett', 'I Love Rock and Roll', '7']]).transpose()
billboard_keys = ['a', 'b', 'c', 'd']

billboard_zipped = list(zip(billboard_keys, billboard_values))
billboard_zipped
[('a', array(['1980', '1981', '1982'], dtype='<U20')),
 ('b', array(['Blondie', 'Chistorpher Cross', 'Joan Jett'], dtype='<U20')),
 ('c',
  array(['Call Me', 'Arthurs Theme', 'I Love Rock and Roll'], dtype='<U20')),
 ('d', array(['6', '3', '7'], dtype='<U20'))]
billboard_dict = dict(billboard_zipped)

billboard_dict

{'a': array(['1980', '1981', '1982'], dtype='<U20'),
 'b': array(['Blondie', 'Chistorpher Cross', 'Joan Jett'], dtype='<U20'),
 'c': array(['Call Me', 'Arthurs Theme', 'I Love Rock and Roll'], dtype='<U20'),
 'd': array(['6', '3', '7'], dtype='<U20')}

billboard = pd.DataFrame.from_dict(billboard_dict)

billboard

      a                  b                     c  d
0  1980            Blondie               Call Me  6
1  1981  Chistorpher Cross         Arthurs Theme  3
2  1982          Joan Jett  I Love Rock and Roll  7
# Build a list of labels: list_labels
list_labels = ['year', 'artist', 'song', 'chart weeks']

# Assign the list of labels to the columns attribute: df.columns
billboard.columns = list_labels

billboard

   year             artist                  song chart weeks
0  1980            Blondie               Call Me           6
1  1981  Chistorpher Cross         Arthurs Theme           3
2  1982          Joan Jett  I Love Rock and Roll           7

Building DataFrames with broadcasting

You can implicitly use ‘broadcasting’, a feature of NumPy, when creating pandas DataFrames. In this exercise, you’re going to create a DataFrame of cities in Pennsylvania that contains the city name in one column and the state name in the second. We have imported the names of 15 cities as the list cities.

Your job is to construct a DataFrame from the list of cities and the string 'PA'.

Instructions

  • Make a string object with the value ‘PA’ and assign it to state.
  • Construct a dictionary with 2 key:value pairs: ‘state’:state and ‘city’:cities.
  • Construct a pandas DataFrame from the dictionary you created and assign it to df
cities = ['Manheim', 'Preston park', 'Biglerville',
          'Indiana', 'Curwensville', 'Crown',
          'Harveys lake', 'Mineral springs', 'Cassville',
          'Hannastown', 'Saltsburg', 'Tunkhannock',
          'Pittsburgh', 'Lemasters', 'Great bend']
# Make a string with the value 'PA': state
state = 'PA'

# Construct a dictionary: data
data = {'state': state, 'city': cities}

# Construct a DataFrame from dictionary data: df
pa_df = pd.DataFrame.from_dict(data)

# Print the DataFrame
print(pa_df)

   state             city
0     PA          Manheim
1     PA     Preston park
2     PA      Biglerville
3     PA          Indiana
4     PA     Curwensville
5     PA            Crown
6     PA     Harveys lake
7     PA  Mineral springs
8     PA        Cassville
9     PA       Hannastown
10    PA        Saltsburg
11    PA      Tunkhannock
12    PA       Pittsburgh
13    PA        Lemasters
14    PA       Great bend

Importing & Exporting Data

  • Dataset: Sunspot observations collected from SILSO
Format: Comma Separated values (adapted for import in spreadsheets)
The separator is the semicolon ';'.

Contents:
Column 1-3: Gregorian calendar date
- Year
- Month
- Day
Column 4: Date in fraction of year.
Column 5: Daily total sunspot number. A value of -1 indicates that no number is available for that day (missing value).
Column 6: Daily standard deviation of the input sunspot numbers from individual stations.
Column 7: Number of observations used to compute the daily value.
Column 8: Definitive/provisional indicator. '1' indicates that the value is definitive. '0' indicates that the value is still provisional.
filepath = r'data/silso_sunspot_data_1818-2019.csv'

sunspots = pd.read_csv(filepath, sep=';')
sunspots.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 73413 entries, 0 to 73412
Data columns (total 8 columns):
 #   Column    Non-Null Count  Dtype  
---  ------    --------------  -----  
 0   1818      73413 non-null  int64  
 1   01        73413 non-null  int64  
 2   01.1      73413 non-null  int64  
 3   1818.001  73413 non-null  float64
 4     -1      73413 non-null  int64  
 5    -1.0     73413 non-null  float64
 6      0      73413 non-null  int64  
 7   1         73413 non-null  int64  
dtypes: float64(2), int64(6)
memory usage: 4.5 MB
sunspots.iloc[10:20, :]

    1818  01  01.1  1818.001   -1  -1.0  0  1
10  1818   1    12  1818.032   -1  -1.0  0  1
11  1818   1    13  1818.034   37   7.7  1  1
12  1818   1    14  1818.037   -1  -1.0  0  1
13  1818   1    15  1818.040   -1  -1.0  0  1
14  1818   1    16  1818.042   -1  -1.0  0  1
15  1818   1    17  1818.045   77  11.1  1  1
16  1818   1    18  1818.048   98  12.6  1  1
17  1818   1    19  1818.051  105  13.0  1  1
18  1818   1    20  1818.053   -1  -1.0  0  1
19  1818   1    21  1818.056   -1  -1.0  0  1

Problems

  • CSV file has no column headers
    • Columns 0-2: Gregorian date (year, month, day)
    • Column 3: Date as fraction of year
    • Column 4: Daily total sunspot number
    • Column 5: Daily standard deviation
    • Column 6: Number of observations
    • Column 7: Definitive / provisional indicator (1 or 0)
  • Missing values in column 4: indicated by -1
  • Date representation inconvenient
sunspots = pd.read_csv(filepath, sep=';', header=None)
sunspots.iloc[10:20, :]

       0  1   2         3    4     5  6  7
10  1818  1  11  1818.029   -1  -1.0  0  1
11  1818  1  12  1818.032   -1  -1.0  0  1
12  1818  1  13  1818.034   37   7.7  1  1
13  1818  1  14  1818.037   -1  -1.0  0  1
14  1818  1  15  1818.040   -1  -1.0  0  1
15  1818  1  16  1818.042   -1  -1.0  0  1
16  1818  1  17  1818.045   77  11.1  1  1
17  1818  1  18  1818.048   98  12.6  1  1
18  1818  1  19  1818.051  105  13.0  1  1
19  1818  1  20  1818.053   -1  -1.0  0  1

Using names keyword

col_names = ['year', 'month', 'day', 'dec_date',
             'tot_sunspots', 'daily_std', 'observations', 'definite']

sunspots = pd.read_csv(filepath, sep=';', header=None, names=col_names)
sunspots.iloc[10:20, :]

    year  month  day  dec_date  tot_sunspots  daily_std  observations  definite
10  1818      1   11  1818.029            -1       -1.0             0         1
11  1818      1   12  1818.032            -1       -1.0             0         1
12  1818      1   13  1818.034            37        7.7             1         1
13  1818      1   14  1818.037            -1       -1.0             0         1
14  1818      1   15  1818.040            -1       -1.0             0         1
15  1818      1   16  1818.042            -1       -1.0             0         1
16  1818      1   17  1818.045            77       11.1             1         1
17  1818      1   18  1818.048            98       12.6             1         1
18  1818      1   19  1818.051           105       13.0             1         1
19  1818      1   20  1818.053            -1       -1.0             0         1

Using na_values keyword (1)

sunspots = pd.read_csv(filepath, sep=';',
                       header=None,
                       names=col_names,
                       na_values='-1')
sunspots.iloc[10:20, :]

    year  month  day  dec_date  tot_sunspots  daily_std  observations  definite
10  1818      1   11  1818.029            -1        NaN             0         1
11  1818      1   12  1818.032            -1        NaN             0         1
12  1818      1   13  1818.034            37        7.7             1         1
13  1818      1   14  1818.037            -1        NaN             0         1
14  1818      1   15  1818.040            -1        NaN             0         1
15  1818      1   16  1818.042            -1        NaN             0         1
16  1818      1   17  1818.045            77       11.1             1         1
17  1818      1   18  1818.048            98       12.6             1         1
18  1818      1   19  1818.051           105       13.0             1         1
19  1818      1   20  1818.053            -1        NaN             0         1

Using na_values keyword (2)

sunspots = pd.read_csv(filepath, sep=';',
                       header=None,
                       names=col_names,
                       na_values='  -1')
sunspots.iloc[10:20, :]

    year  month  day  dec_date  tot_sunspots  daily_std  observations  definite
10  1818      1   11  1818.029           NaN        NaN             0         1
11  1818      1   12  1818.032           NaN        NaN             0         1
12  1818      1   13  1818.034          37.0        7.7             1         1
13  1818      1   14  1818.037           NaN        NaN             0         1
14  1818      1   15  1818.040           NaN        NaN             0         1
15  1818      1   16  1818.042           NaN        NaN             0         1
16  1818      1   17  1818.045          77.0       11.1             1         1
17  1818      1   18  1818.048          98.0       12.6             1         1
18  1818      1   19  1818.051         105.0       13.0             1         1
19  1818      1   20  1818.053           NaN        NaN             0         1
sunspots.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 73414 entries, 0 to 73413
Data columns (total 8 columns):
 #   Column        Non-Null Count  Dtype  
---  ------        --------------  -----  
 0   year          73414 non-null  int64  
 1   month         73414 non-null  int64  
 2   day           73414 non-null  int64  
 3   dec_date      73414 non-null  float64
 4   tot_sunspots  70167 non-null  float64
 5   daily_std     70167 non-null  float64
 6   observations  73414 non-null  int64  
 7   definite      73414 non-null  int64  
dtypes: float64(3), int64(5)
memory usage: 4.5 MB

Using na_values keyword (3)

sunspots = pd.read_csv(filepath, sep=';',
                       header=None,
                       names=col_names,
                       na_values={'tot_sunspots':['  -1'],
                                  'daily_std':['-1']})
sunspots.iloc[10:20, :]

    year  month  day  dec_date  tot_sunspots  daily_std  observations  definite
10  1818      1   11  1818.029           NaN        NaN             0         1
11  1818      1   12  1818.032           NaN        NaN             0         1
12  1818      1   13  1818.034          37.0        7.7             1         1
13  1818      1   14  1818.037           NaN        NaN             0         1
14  1818      1   15  1818.040           NaN        NaN             0         1
15  1818      1   16  1818.042           NaN        NaN             0         1
16  1818      1   17  1818.045          77.0       11.1             1         1
17  1818      1   18  1818.048          98.0       12.6             1         1
18  1818      1   19  1818.051         105.0       13.0             1         1
19  1818      1   20  1818.053           NaN        NaN             0         1

Using parse_dates keyword

  • FutureWarning: Support for nested sequences for parse_dates in pd.read_csv is deprecated. Combine the desired columns with pd.to_datetime after parsing instead.
sunspots = pd.read_csv(filepath, sep=';',
                       header=None,
                       names=col_names,
                       na_values={'tot_sunspots':['  -1'],
                                  'daily_std':['-1']},
                       parse_dates=[[0, 1, 2]])
sunspots = pd.read_csv(filepath, sep=';',
                       header=None,
                       names=col_names,
                       na_values={'tot_sunspots':['  -1'],
                                  'daily_std':['-1']})
sunspots['year_month_day'] = pd.to_datetime(sunspots['year'].astype(str) + '-' + sunspots['month'].astype(str) + '-' + sunspots['day'].astype(str))
sunspots.drop(['year', 'month', 'day'], axis=1, inplace=True)
sunspots.iloc[10:20, :]

    dec_date  tot_sunspots  daily_std  observations  definite year_month_day
10  1818.029           NaN        NaN             0         1     1818-01-11
11  1818.032           NaN        NaN             0         1     1818-01-12
12  1818.034          37.0        7.7             1         1     1818-01-13
13  1818.037           NaN        NaN             0         1     1818-01-14
14  1818.040           NaN        NaN             0         1     1818-01-15
15  1818.042           NaN        NaN             0         1     1818-01-16
16  1818.045          77.0       11.1             1         1     1818-01-17
17  1818.048          98.0       12.6             1         1     1818-01-18
18  1818.051         105.0       13.0             1         1     1818-01-19
19  1818.053           NaN        NaN             0         1     1818-01-20

Inspecting DataFrame

sunspots.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 73414 entries, 0 to 73413
Data columns (total 6 columns):
 #   Column          Non-Null Count  Dtype         
---  ------          --------------  -----         
 0   dec_date        73414 non-null  float64       
 1   tot_sunspots    70167 non-null  float64       
 2   daily_std       70167 non-null  float64       
 3   observations    73414 non-null  int64         
 4   definite        73414 non-null  int64         
 5   year_month_day  73414 non-null  datetime64[ns]
dtypes: datetime64[ns](1), float64(3), int64(2)
memory usage: 3.4 MB

Using dates as index

sunspots.index = sunspots['year_month_day']
sunspots.index.name = 'date'
sunspots.iloc[10:20, :]

            dec_date  tot_sunspots  daily_std  observations  definite year_month_day
date
1818-01-11  1818.029           NaN        NaN             0         1     1818-01-11
1818-01-12  1818.032           NaN        NaN             0         1     1818-01-12
1818-01-13  1818.034          37.0        7.7             1         1     1818-01-13
1818-01-14  1818.037           NaN        NaN             0         1     1818-01-14
1818-01-15  1818.040           NaN        NaN             0         1     1818-01-15
1818-01-16  1818.042           NaN        NaN             0         1     1818-01-16
1818-01-17  1818.045          77.0       11.1             1         1     1818-01-17
1818-01-18  1818.048          98.0       12.6             1         1     1818-01-18
1818-01-19  1818.051         105.0       13.0             1         1     1818-01-19
1818-01-20  1818.053           NaN        NaN             0         1     1818-01-20
sunspots.info()

<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 73414 entries, 1818-01-01 to 2018-12-31
Data columns (total 6 columns):
 #   Column          Non-Null Count  Dtype         
---  ------          --------------  -----         
 0   dec_date        73414 non-null  float64       
 1   tot_sunspots    70167 non-null  float64       
 2   daily_std       70167 non-null  float64       
 3   observations    73414 non-null  int64         
 4   definite        73414 non-null  int64         
 5   year_month_day  73414 non-null  datetime64[ns]
dtypes: datetime64[ns](1), float64(3), int64(2)
memory usage: 3.9 MB

Trimming redundant columns

cols = ['tot_sunspots', 'daily_std', 'observations', 'definite']
sunspots = sunspots[cols]
sunspots.iloc[10:20, :]

            tot_sunspots  daily_std  observations  definite
date
1818-01-11           NaN        NaN             0         1
1818-01-12           NaN        NaN             0         1
1818-01-13          37.0        7.7             1         1
1818-01-14           NaN        NaN             0         1
1818-01-15           NaN        NaN             0         1
1818-01-16           NaN        NaN             0         1
1818-01-17          77.0       11.1             1         1
1818-01-18          98.0       12.6             1         1
1818-01-19         105.0       13.0             1         1
1818-01-20           NaN        NaN             0         1

Writing files

out_csv = 'sunspots.csv'
sunspots.to_csv(out_csv)
out_tsv = 'sunspots.tsv'
sunspots.to_csv(out_tsv, sep='\t')
out_xlsx = 'sunspots.xlsx'
sunspots.to_excel(out_xlsx)
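As a quick sanity check, the CSV can be read straight back (a sketch; note that .to_excel() also needs an Excel engine such as openpyxl installed):

roundtrip = pd.read_csv(out_csv, index_col='date', parse_dates=True)
roundtrip.head()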

Exercises

Reading a flat file

In previous exercises, we have preloaded the data for you using the pandas function read_csv(). Now, it’s your turn! Your job is to read the World Bank population data you saw earlier into a DataFrame using read_csv(). The file is available in the variable data_file.

The next step is to reread the same file, but simultaneously rename the columns using the names keyword input parameter, set equal to a list of new column labels. You will also need to set header=0 to rename the column labels.

Finish up by inspecting the result with df.head() and df.info() in the IPython Shell (changing df to the name of your DataFrame variable).

pandas has already been imported and is available in the workspace as pd.

Instructions

  • Use pd.read_csv() with the string data_file to read the CSV file into a DataFrame and assign it to df1.
  • Create a list of new column labels - ‘year’, ‘population’ - and assign it to the variable new_labels.
  • Reread the same file, again using pd.read_csv(), but this time, add the keyword arguments header=0 and names=new_labels. Assign the resulting DataFrame to df2.
  • Print both the df1 and df2 DataFrames to see the change in column names. This has already been done for you.
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/world_population.csv'

# Read in the file: df1
df1 = pd.read_csv(data_file)

# Create a list of the new column labels: new_labels
new_labels = ['year', 'population']

# Read in the file, specifying the header and names parameters: df2
df2 = pd.read_csv(data_file, header=0, names=new_labels)

# Print both the DataFrames
df1.head()

   Date  Total Population
0  1960        3034970564
1  1970        3684822701
2  1980        4436590356
3  1990        5282715991
4  2000        6115974486
df2.head()

   year  population
0  1960  3034970564
1  1970  3684822701
2  1980  4436590356
3  1990  5282715991
4  2000  6115974486

Delimiters, headers, and extensions

Not all data files are clean and tidy. Pandas provides methods for reading those not-so-perfect data files that you encounter far too often.

In this exercise, you have monthly stock data for four companies downloaded from Yahoo Finance. The data is stored as one row for each company and each column is the end-of-month closing price. The file name is given to you in the variable file_messy.

In addition, this file has three aspects that may cause trouble for lesser tools: multiple header lines, comment records (rows) interleaved throughout the data rows, and space delimiters instead of commas.

Your job is to use pandas to read the data from this problematic file_messy using non-default input options with read_csv() so as to tidy up the mess at read time. Then, write the cleaned up data to a CSV file with the variable file_clean that has been prepared for you, as you might do in a real data workflow.

You can learn about the option input parameters needed by using help() on the pandas function pd.read_csv().

Instructions

  • Use pd.read_csv() without using any keyword arguments to read file_messy into a pandas DataFrame df1.
  • Use .head() to print the first 5 rows of df1 and see how messy it is. Do this in the IPython Shell first so you can see how modifying read_csv() can clean up this mess.
  • Using the keyword arguments delimiter=’ ‘, header=3 and comment=’#’, use pd.read_csv() again to read file_messy into a new DataFrame df2.
  • Print the output of df2.head() to verify the file was read correctly.
  • Use the DataFrame method .to_csv() to save the DataFrame df2 to the variable file_clean. Be sure to specify index=False.
  • Use the DataFrame method .to_excel() to save the DataFrame df2 to the file ‘file_clean.xlsx’. Again, remember to specify index=False
# Read the raw file as-is: df1
file_messy = 'DataCamp-master/11-pandas-foundations/_datasets/messy_stock_data.tsv'
df1 = pd.read_csv(file_messy)

# Print the output of df1.head()
df1.head()

The following stock data was collect on 2016-AUG-25 from an unknown source
These kind of ocmments are not very usefulare they?
probably should just throw this line away toobut not the next since those are column labels
name Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov DecNaN
# So that line you just read has all the column headers labelsNaN
IBM 156.08 160.01 159.81 165.22 172.25 167.15 164.75 152.77 145.36 146.11 137.21 137.96NaN
# Read in the file with the correct parameters: df2
df2 = pd.read_csv(file_messy, delimiter=' ', header=3, comment='#')

# Print the output of df2.head()
df2.head()

     name     Jan     Feb     Mar     Apr     May     Jun     Jul     Aug     Sep     Oct     Nov     Dec
0     IBM  156.08  160.01  159.81  165.22  172.25  167.15  164.75  152.77  145.36  146.11  137.21  137.96
1    MSFT   45.51   43.08   42.13   43.47   47.53   45.96   45.61   45.51   43.56   48.70   53.88   55.40
2  GOOGLE  512.42  537.99  559.72  540.50  535.24  532.92  590.09  636.84  617.93  663.59  735.39  755.35
3   APPLE  110.64  125.43  125.97  127.29  128.76  127.81  125.34  113.39  112.80  113.36  118.16  111.73

Save files

# The exercise environment provides file_clean; it is defined here so the snippet runs stand-alone (assumed name)
file_clean = 'file_clean.csv'

# Save the cleaned up DataFrame to a CSV file without the index
df2.to_csv(file_clean, index=False)
# Save the cleaned up DataFrame to an excel file without the index
df2.to_excel('file_clean.xlsx', index=False)

Plotting with Pandas

cols = ['date', 'open', 'high', 'low', 'close', 'adj_close', 'volume']
aapl = pd.read_csv(r'DataCamp-master/11-pandas-foundations/_datasets/AAPL.csv',
                   names=cols,
                   index_col='date',
                   parse_dates=True,
                   header=0,
                   na_values='null')
aapl.head()

                open      high       low     close  adj_close       volume
date
1980-12-12  0.513393  0.515625  0.513393  0.513393   0.023106  117258400.0
1980-12-15  0.488839  0.488839  0.486607  0.486607   0.021900   43971200.0
1980-12-16  0.453125  0.453125  0.450893  0.450893   0.020293   26432000.0
1980-12-17  0.462054  0.464286  0.462054  0.462054   0.020795   21610400.0
1980-12-18  0.475446  0.477679  0.475446  0.475446   0.021398   18362400.0
aapl.info()

<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 9611 entries, 1980-12-12 to 2019-01-24
Data columns (total 6 columns):
 #   Column     Non-Null Count  Dtype  
---  ------     --------------  -----  
 0   open       9610 non-null   float64
 1   high       9610 non-null   float64
 2   low        9610 non-null   float64
 3   close      9610 non-null   float64
 4   adj_close  9610 non-null   float64
 5   volume     9610 non-null   float64
dtypes: float64(6)
memory usage: 525.6 KB
aapl.tail()

                  open        high         low       close   adj_close      volume
date
2019-01-17  154.199997  157.660004  153.259995  155.860001  155.860001  29821200.0
2019-01-18  157.500000  157.880005  155.979996  156.820007  156.820007  33751000.0
2019-01-22  156.410004  156.729996  152.619995  153.300003  153.300003  30394000.0
2019-01-23  154.149994  155.139999  151.699997  153.919998  153.919998  23130600.0
2019-01-24  154.110001  154.479996  151.740005  152.699997  152.699997  25421800.0

Plotting arrays (matplotlib)

close_arr = aapl['close'].values

type(close_arr)

numpy.ndarray

plt.plot(close_arr)

[<matplotlib.lines.Line2D at 0x19978c3eed0>]

[figure: AAPL close prices plotted from a NumPy array]

Plotting Series (matplotlib)

close_series = aapl['close']

type(close_series)

pandas.core.series.Series

plt.plot(close_series)

[<matplotlib.lines.Line2D at 0x199798f5f40>]

[figure: AAPL close prices plotted from a pandas Series]

Plotting Series (pandas)

close_series.plot()

<Axes: xlabel='date'>

[figure: AAPL close prices via the Series .plot() method]

Plotting DataFrames (pandas)

aapl.plot()

<Axes: xlabel='date'>

[figure: all AAPL columns plotted with DataFrame .plot()]

Plotting DataFrames (matplotlib)

plt.plot(aapl)

[<matplotlib.lines.Line2D at 0x19978d07350>,
 <matplotlib.lines.Line2D at 0x19978bf7dd0>,
 <matplotlib.lines.Line2D at 0x19978c8d760>,
 <matplotlib.lines.Line2D at 0x199791eea20>,
 <matplotlib.lines.Line2D at 0x199791ee900>,
 <matplotlib.lines.Line2D at 0x19978c85910>]

[figure: all AAPL columns plotted with plt.plot()]

Fixing Scales

aapl.plot()
plt.yscale('log')
plt.show()

[figure: AAPL columns with a log-scaled y-axis]

Customizing plots

aapl['open'].plot(color='b', style='.-', legend=True)
aapl['close'].plot(color='r', style='.', legend=True)
plt.axis(('2000', '2001', 0, 10))
plt.show()

[figure: AAPL open and close prices, 2000-2001]

Saving Plots

aapl.loc['2001':'2004', ['open', 'close', 'high', 'low']].plot()

plt.savefig('aapl.png')
plt.savefig('aapl.jpg')
plt.savefig('aapl.pdf')

plt.show()

[figure: AAPL open, close, high, and low prices, 2001-2004]

Exercises

Plotting series using pandas

Data visualization is often a very effective first step in gaining a rough understanding of a data set to be analyzed. Pandas provides data visualization by both depending upon and interoperating with the matplotlib library. You will now explore some of the basic plotting mechanics with pandas as well as related matplotlib options. We have pre-loaded a pandas DataFrame df which contains the data you need. Your job is to use the DataFrame method df.plot() to visualize the data, and then explore the optional matplotlib input parameters that this .plot() method accepts.

The pandas .plot() method makes calls to matplotlib to construct the plots. This means that you can use the skills you’ve learned in previous visualization courses to customize the plot. In this exercise, you’ll add a custom title and axis labels to the figure.

Before plotting, inspect the DataFrame in the IPython Shell using df.head(). Also, use type(df) and note that it is a single column DataFrame.

Instructions

  • Create the plot with the DataFrame method df.plot(). Specify a color of ‘red’.
    • Note: c and color are interchangeable as parameters here, but we ask you to be explicit and specify color.
  • Use plt.title() to give the plot a title of ‘Temperature in Austin’.
  • Use plt.xlabel() to give the plot an x-axis label of ‘Hours since midnight August 1, 2010’.
  • Use plt.ylabel() to give the plot a y-axis label of ‘Temperature (degrees F)’.
  • Finally, display the plot using plt.show()
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/weather_data_austin_2010.csv'
df = pd.read_csv(data_file, usecols=['Temperature'])

df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 8759 entries, 0 to 8758
Data columns (total 1 columns):
 #   Column       Non-Null Count  Dtype  
---  ------       --------------  -----  
 0   Temperature  8759 non-null   float64
dtypes: float64(1)
memory usage: 68.6 KB
df.head()

   Temperature
0         46.2
1         44.6
2         44.1
3         43.8
4         43.5
# Create a plot with color='red'
df.plot(color='r')

# Add a title
plt.title('Temperature in Austin')

# Specify the x-axis label
plt.xlabel('Hours since midnight August 1, 2010')

# Specify the y-axis label
plt.ylabel('Temperature (degrees F)')

# Display the plot
plt.show()

[figure: Temperature in Austin line plot]

Plotting DataFrames

Comparing data from several columns can be very illuminating. Pandas makes doing so easy with multi-column DataFrames. By default, calling df.plot() will cause pandas to over-plot all column data, with each column as a single line. In this exercise, we have pre-loaded three columns of data from a weather data set - temperature, dew point, and pressure - but the problem is that pressure has different units of measure. The pressure data, measured in Atmospheres, has a different vertical scaling than that of the other two data columns, which are both measured in degrees Fahrenheit.

Your job is to plot all columns as a multi-line plot, to see the nature of the vertical scaling problem. Then, use a list of column names passed into the DataFrame df[column_list] to limit plotting to just one column, and then just 2 columns of data. When you are finished, you will have created 4 plots. You can cycle through them by clicking on the ‘Previous Plot’ and ‘Next Plot’ buttons.

As in the previous exercise, inspect the DataFrame df in the IPython Shell using the .head() and .info() methods.

Instructions

  • Plot all columns together on one figure by calling df.plot(), and noting the vertical scaling problem.
  • Plot all columns as subplots. To do so, you need to specify subplots=True inside .plot().
  • Plot a single column of dew point data. To do this, define a column list containing a single column name ‘Dew Point (deg F)’, and call df[column_list1].plot().
  • Plot two columns of data, ‘Temperature (deg F)’ and ‘Dew Point (deg F)’. To do this, define a list containing those column names and pass it into df[], as df[column_list2].plot().
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/weather_data_austin_2010.csv'
df = pd.read_csv(data_file, parse_dates=[3], index_col='Date')
df.head()

                     Temperature  DewPoint  Pressure
Date
2010-01-01 00:00:00         46.2      37.5       1.0
2010-01-01 01:00:00         44.6      37.1       1.0
2010-01-01 02:00:00         44.1      36.9       1.0
2010-01-01 03:00:00         43.8      36.9       1.0
2010-01-01 04:00:00         43.5      36.8       1.0
# Plot all columns (default)
df.plot()
plt.show()

[figure: all weather columns on a single axis]

# Plot all columns as subplots
df.plot(subplots=True)
plt.show()

[figure: weather columns as separate subplots]

# Plot just the Dew Point data
column_list1 = ['DewPoint']
df[column_list1].plot()
plt.show()

[figure: DewPoint line plot]

# Plot the Dew Point and Temperature data, but not the Pressure data
column_list2 = ['Temperature','DewPoint']
df[column_list2].plot()
plt.show()

[figure: Temperature and DewPoint line plot]

Exploratory Data Analysis

Having learned how to ingest and inspect your data, you will next explore it visually as well as quantitatively. This process, known as exploratory data analysis (EDA), is a crucial component of any data science project. Pandas has powerful methods that help with statistical and visual EDA. In this chapter, you will learn how and when to apply these techniques.

Visual exploratory data analysis

The Iris Dataset

  • Famous dataset in pattern recognition
  • 150 observations, 4 features each
    • Sepal length
    • Sepal width
    • Petal length
    • Petal width
  • 3 species:
    • setosa
    • versicolor
    • virginica
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/iris.csv'
iris = pd.read_csv(data_file)

iris.shape

(150, 5)

iris.head()

   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm) species
0                5.1               3.5                1.4               0.2  setosa
1                4.9               3.0                1.4               0.2  setosa
2                4.7               3.2                1.3               0.2  setosa
3                4.6               3.1                1.5               0.2  setosa
4                5.0               3.6                1.4               0.2  setosa

Line plot

iris.plot(x='sepal length (cm)', y='sepal width (cm)')

<Axes: xlabel='sepal length (cm)'>

[figure: sepal width vs. sepal length line plot]

Scatter Plot

iris.plot(x='sepal length (cm)', y='sepal width (cm)',
          kind='scatter')
plt.xlabel('sepal length (cm)')
plt.ylabel('sepal width (cm)')

Text(0, 0.5, 'sepal width (cm)')

[figure: sepal width vs. sepal length scatter plot]

Box Plot

iris.plot(y='sepal length (cm)',
          kind='box')
plt.ylabel('sepal length (cm)')

Text(0, 0.5, 'sepal length (cm)')

[figure: sepal length box plot]

Histogram

iris.plot(y='sepal length (cm)',
          kind='hist')
plt.xlabel('sepal length (cm)')

Text(0.5, 0, 'sepal length (cm)')

[figure: sepal length histogram]

Histogram Options

  • bins (integer): number of intervals or bins
  • range (tuple): extrema of bins (minimum, maximum)
  • density (boolean): whether to normalize to one (formerly this was normed)
  • cumulative (boolean): compute the Cumulative Distribution Function (CDF)
  • … more matplotlib customizations

Customizing Histogram

iris.plot(y='sepal length (cm)',
          kind='hist',
          bins=30,
          range=(4, 8),
          density=True)
plt.xlabel('sepal length (cm)')

Text(0.5, 0, 'sepal length (cm)')

[figure: normalized sepal length histogram, 30 bins]

Cumulative Distribution

iris.plot(y='sepal length (cm)',
          kind='hist',
          bins=30,
          range=(4, 8),
          density=True,
          cumulative=True)
plt.xlabel('sepal length (cm)')
plt.title('Cumulative Distribution Function (CDF)')

Text(0.5, 1.0, 'Cumulative Distribution Function (CDF)')

[figure: cumulative distribution of sepal length]

Word of Warning

  • Three different DataFrame plot idioms
    • iris.plot(kind='hist')
    • iris.plot.hist()
    • iris.hist()
  • Syntax / Results differ!
  • Pandas API still evolving: check the documentation
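A quick sketch of the three idioms side by side, drawn on the iris data loaded above (the layout of the resulting figures differs between them):

iris.plot(kind='hist', y='sepal length (cm)')   # histogram via the kind= keyword
iris.plot.hist(y='sepal length (cm)')           # the same plot through the .plot accessor
iris.hist(column='sepal length (cm)')           # DataFrame.hist: one subplot per column
plt.show()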

Exercises

pandas line plots

In the previous chapter, you saw that the .plot() method will place the Index values on the x-axis by default. In this exercise, you’ll practice making line plots with specific columns on the x and y axes.

You will work with a dataset consisting of monthly stock prices in 2015 for AAPL, GOOG, and IBM. The stock prices were obtained from Yahoo Finance. Your job is to plot the ‘Month’ column on the x-axis and the AAPL and IBM prices on the y-axis using a list of column names.

All necessary modules have been imported for you, and the DataFrame is available in the workspace as df. Explore it using methods such as .head(), .info(), and .describe() to see the column names.

Instructions

  • Create a list of y-axis column names called y_columns consisting of ‘AAPL’ and ‘IBM’.
  • Generate a line plot with x=’Month’ and y=y_columns as inputs.
  • Give the plot a title of ‘Monthly stock prices’.
  • Specify the y-axis label.
  • Display the plot.
values = [['Jan', 117.160004, 534.5224450000002, 153.309998],
          ['Feb', 128.46000700000002, 558.402511, 161.940002],
          ['Mar', 124.43, 548.002468, 160.5],
          ['Apr', 125.150002, 537.340027, 171.28999299999995],
          ['May', 130.279999, 532.1099849999998, 169.649994],
          ['Jun', 125.43, 520.51001, 162.660004],
          ['Jul', 121.300003, 625.6099849999998, 161.990005],
          ['Aug', 112.760002, 618.25, 147.889999],
          ['Sep', 110.300003, 608.419983, 144.970001],
          ['Oct', 119.5, 710.8099980000002, 140.080002],
          ['Nov', 118.300003, 742.599976, 139.419998],
          ['Dec', 105.260002, 758.880005, 137.619995]]

values = np.array(values).transpose()
cols = ['Month', 'AAPL', 'GOOG', 'IBM']

data_zipped = list(zip(cols, values))

data_dict = dict(data_zipped)

dtype = dict(zip(data_dict.keys(), ['str', 'float64', 'float64', 'float64']))

dtype

{'Month': 'str', 'AAPL': 'float64', 'GOOG': 'float64', 'IBM': 'float64'}

df = pd.DataFrame.from_dict(data_dict).astype(dtype)

df

   Month        AAPL        GOOG         IBM
0    Jan  117.160004  534.522445  153.309998
1    Feb  128.460007  558.402511  161.940002
2    Mar  124.430000  548.002468  160.500000
3    Apr  125.150002  537.340027  171.289993
4    May  130.279999  532.109985  169.649994
5    Jun  125.430000  520.510010  162.660004
6    Jul  121.300003  625.609985  161.990005
7    Aug  112.760002  618.250000  147.889999
8    Sep  110.300003  608.419983  144.970001
9    Oct  119.500000  710.809998  140.080002
10   Nov  118.300003  742.599976  139.419998
11   Dec  105.260002  758.880005  137.619995
df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 12 entries, 0 to 11
Data columns (total 4 columns):
 #   Column  Non-Null Count  Dtype  
---  ------  --------------  -----  
 0   Month   12 non-null     object 
 1   AAPL    12 non-null     float64
 2   GOOG    12 non-null     float64
 3   IBM     12 non-null     float64
dtypes: float64(3), object(1)
memory usage: 516.0+ bytes
# Create a list of y-axis column names: y_columns
y_columns = ['AAPL', 'IBM']

# Generate a line plot
df.plot(x='Month', y=y_columns)

# Add the title
plt.title('Monthly stock prices')

# Add the y-axis label
plt.ylabel('Price ($US)')

# Display the plot
plt.show()

[figure: monthly AAPL and IBM stock prices]

pandas scatter plots

Pandas scatter plots are generated using the kind='scatter' keyword argument. Scatter plots require that the x and y columns be chosen by specifying the x and y parameters inside .plot(). Scatter plots also take an s keyword argument to set the size of each plotted circle (matplotlib interprets s as the marker area).

In this exercise, you’re going to plot fuel efficiency (miles-per-gallon) versus horse-power for 392 automobiles manufactured from 1970 to 1982 from the UCI Machine Learning Repository.

The size of each circle is provided as a NumPy array called sizes. This array contains the normalized 'weight' of each automobile in the dataset.

All necessary modules have been imported and the DataFrame is available in the workspace as df.

Instructions

  • Generate a scatter plot with ‘hp’ on the x-axis and ‘mpg’ on the y-axis. Specify s=sizes.
  • Add a title to the plot.
  • Specify the x-axis and y-axis labels.
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/auto-mpg.csv'
df = pd.read_csv(data_file)
df.head()

    mpg  cyl  displ   hp  weight  accel  yr origin                       name
0  18.0    8  307.0  130    3504   12.0  70     US  chevrolet chevelle malibu
1  15.0    8  350.0  165    3693   11.5  70     US          buick skylark 320
2  18.0    8  318.0  150    3436   11.0  70     US         plymouth satellite
3  16.0    8  304.0  150    3433   12.0  70     US              amc rebel sst
4  17.0    8  302.0  140    3449   10.5  70     US                ford torino
df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 392 entries, 0 to 391
Data columns (total 9 columns):
 #   Column  Non-Null Count  Dtype  
---  ------  --------------  -----  
 0   mpg     392 non-null    float64
 1   cyl     392 non-null    int64  
 2   displ   392 non-null    float64
 3   hp      392 non-null    int64  
 4   weight  392 non-null    int64  
 5   accel   392 non-null    float64
 6   yr      392 non-null    int64  
 7   origin  392 non-null    object 
 8   name    392 non-null    object 
dtypes: float64(3), int64(4), object(2)
memory usage: 27.7+ KB
sizes = np.array([ 51.12044694,  56.78387977,  49.15557238,  49.06977358,
        49.52823321,  78.4595872 ,  78.93021696,  77.41479205,
        81.52541106,  61.71459825,  52.85646225,  54.23007578,
        58.89427963,  39.65137852,  23.42587473,  33.41639502,
        32.03903011,  27.8650165 ,  18.88972581,  14.0196956 ,
        29.72619722,  24.58549713,  23.48516821,  20.77938954,
        29.19459189,  88.67676838,  79.72987328,  79.94866084,
        93.23005042,  18.88972581,  21.34122243,  20.6679223 ,
        28.88670381,  49.24144612,  46.14174741,  45.39631334,
        45.01218186,  73.76057586,  82.96880195,  71.84547684,
        69.85320595, 102.22421043,  93.78252358, 110.        ,
        36.52889673,  24.14234281,  44.84805372,  41.02504618,
        20.51976563,  18.765772  ,  17.9095202 ,  17.75442285,
        13.08832041,  10.83266174,  14.00441945,  15.91328975,
        21.60597587,  18.8188451 ,  21.15311208,  24.14234281,
        20.63083317,  76.05635059,  80.05816704,  71.18975117,
        70.98330444,  56.13992036,  89.36985382,  84.38736544,
        82.6716892 ,  81.4149056 ,  22.60363518,  63.06844313,
        69.92143863,  76.76982089,  69.2066568 ,  35.81711267,
        26.25184749,  36.94940537,  19.95069229,  23.88237331,
        21.79608472,  26.1474042 ,  19.49759118,  18.36136808,
        69.98970461,  56.13992036,  66.21810474,  68.02351436,
        59.39644014, 102.10046481,  82.96880195,  79.25686195,
        74.74521151,  93.34830013, 102.05923292,  60.7883734 ,
        40.55589449,  44.7388015 ,  36.11079464,  37.9986264 ,
        35.11233175,  15.83199594, 103.96451839, 100.21241654,
        90.18186347,  84.27493641,  32.38645967,  21.62494928,
        24.00218436,  23.56434276,  18.78345471,  22.21725537,
        25.44271071,  21.36007926,  69.37650986,  76.19877818,
        14.51292942,  19.38962134,  27.75740889,  34.24717407,
        48.10262495,  29.459795  ,  32.80584831,  55.89556844,
        40.06360581,  35.03982309,  46.33599903,  15.83199594,
        25.01226779,  14.03498009,  26.90404245,  59.52231336,
        54.92349014,  54.35035315,  71.39649768,  91.93424995,
        82.70879915,  89.56285636,  75.45251972,  20.50128352,
        16.04379287,  22.02531454,  11.32159874,  16.70430249,
        18.80114574,  18.50153068,  21.00322336,  25.79385418,
        23.80266582,  16.65430211,  44.35746794,  49.815853  ,
        49.04119063,  41.52318884,  90.72524338,  82.07906251,
        84.23747672,  90.29816462,  63.55551901,  63.23059357,
        57.92740995,  59.64831981,  38.45278922,  43.19643409,
        41.81296121,  19.62393488,  28.99647648,  35.35456858,
        27.97283229,  30.39744886,  20.57526193,  26.96758278,
        37.07354237,  15.62160631,  42.92863291,  30.21771564,
        36.40567571,  36.11079464,  29.70395123,  13.41514444,
        25.27829944,  20.51976563,  27.54281821,  21.17188565,
        20.18836167,  73.97101962,  73.09614831,  65.35749368,
        73.97101962,  43.51889468,  46.80945169,  37.77255674,
        39.6256851 ,  17.24230306,  19.49759118,  15.62160631,
        13.41514444,  55.49963323,  53.18333207,  55.31736854,
        42.44868923,  13.86730874,  16.48817545,  19.33574884,
        27.3931002 ,  41.31307817,  64.63368105,  44.52069676,
        35.74387954,  60.75655952,  79.87569835,  68.46177648,
        62.35745431,  58.70651902,  17.41217694,  19.33574884,
        13.86730874,  22.02531454,  15.75091031,  62.68013142,
        68.63071356,  71.36201911,  76.80558184,  51.58836621,
        48.84134317,  54.86301837,  51.73502816,  74.14661842,
        72.22648148,  77.88228247,  78.24284811,  15.67003285,
        31.25845963,  21.36007926,  31.60164234,  17.51450098,
        17.92679488,  16.40542438,  19.96892459,  32.99310928,
        28.14577056,  30.80379718,  16.40542438,  13.48998471,
        16.40542438,  17.84050478,  13.48998471,  47.1451025 ,
        58.08281541,  53.06435374,  52.02897659,  41.44433489,
        36.60292926,  30.80379718,  48.98404972,  42.90189859,
        47.56635225,  39.24128299,  54.56115914,  48.41447259,
        48.84134317,  49.41341845,  42.76835191,  69.30854366,
        19.33574884,  27.28640858,  22.02531454,  20.70504474,
        26.33555201,  31.37264569,  33.93740821,  24.08222494,
        33.34566004,  41.05118927,  32.52595611,  48.41447259,
        16.48817545,  18.97851406,  43.84255439,  37.22278157,
        34.77459916,  44.38465193,  47.00510227,  61.39441929,
        57.77221268,  65.12675249,  61.07507305,  79.14790534,
        68.42801405,  54.10993164,  64.63368105,  15.42864956,
        16.24054679,  15.26876826,  29.68171358,  51.88189829,
        63.32798377,  42.36896092,  48.6988448 ,  20.15170555,
        19.24612787,  16.98905358,  18.88972581,  29.68171358,
        28.03762169,  30.35246559,  27.20120517,  19.13885751,
        16.12562794,  18.71277385,  16.9722369 ,  29.85984799,
        34.29495526,  37.54716158,  47.59450219,  19.93246832,
        30.60028577,  26.90404245,  24.66650366,  21.36007926,
        18.5366546 ,  32.64243213,  18.5366546 ,  18.09999962,
        22.70075058,  36.23351603,  43.97776651,  14.24983724,
        19.15671509,  14.17291518,  35.25757392,  24.38356372,
        26.02234705,  21.83420642,  25.81458463,  28.90864169,
        28.58044785,  30.91715052,  23.6833544 ,  12.82391671,
        14.63757021,  12.89709155,  17.75442285,  16.24054679,
        17.49742615,  16.40542438,  20.42743834,  17.41217694,
        23.58415722,  19.96892459,  20.33531923,  22.99334585,
        28.47146626,  28.90864169,  43.43816712,  41.57579979,
        35.01567018,  35.74387954,  48.5565546 ,  57.77221268,
        38.98605581,  49.98882458,  28.25412762,  29.01845599,
        23.88237331,  27.60710798,  26.54539622,  31.14448175,
        34.17556473,  16.3228815 ,  17.0732619 ,  16.15842026,
        18.80114574,  18.80114574,  19.42557798,  20.2434083 ,
        20.98452475,  16.07650192,  16.07650192,  16.57113469,
        36.11079464,  37.84783835,  27.82194848,  33.46359332,
        29.5706502 ,  23.38638738,  36.23351603,  32.40968826,
        18.88972581,  21.92965639,  28.68963762,  30.80379718])
1
2
3
4
5
6
7
8
9
10
11
12
13
14
# Generate a scatter plot
df.plot(kind='scatter', x='hp', y='mpg', s=sizes)

# Add the title
plt.title('Fuel efficiency vs Horse-power')

# Add the x-axis label
plt.xlabel('Horse-power')

# Add the y-axis label
plt.ylabel('Fuel efficiency (mpg)')

# Display the plot
plt.show()

png

pandas box plots

While pandas can plot multiple columns of data in a single figure, with all plots sharing the same x and y axes, there are cases where two columns should not be plotted together because their units do not match. The .plot() method can generate subplots for each column being plotted; each subplot is then scaled independently.

In this exercise your job is to generate box plots for fuel efficiency (mpg) and weight from the automobiles data set. To do this in a single figure, you’ll specify subplots=True inside .plot() to generate two separate plots.

All necessary modules have been imported and the automobiles dataset is available in the workspace as df.

Instructions

  • Make a list called cols of the column names to be plotted: ‘weight’ and ‘mpg’.
  • Call plot on df[cols] to generate a box plot of the two columns in a single figure. To do this, specify subplots=True.
1
2
3
4
5
6
7
8
# Make a list of the column names to be plotted: cols
cols = ['weight', 'mpg']

# Generate the box plots
df[cols].plot(kind='box', subplots=True)

# Display the plot
plt.show()

png

pandas hist, PDF and CDF

Pandas relies on the .hist() method not only to generate histograms, but also to plot probability density functions (PDFs) and cumulative distribution functions (CDFs).

In this exercise, you will work with a dataset consisting of restaurant bills that includes the amount customers tipped.

The original dataset is provided by the Seaborn package.

Your job is to plot a PDF and CDF for the fraction column of the tips dataset. This column contains information about what fraction of the total bill is comprised of the tip.

Remember, when plotting the PDF, you need to specify density=True in your call to .hist() (older versions of the exercise used normed=True, which has since been removed from matplotlib), and when plotting the CDF, you need to specify cumulative=True in addition to density=True.

All necessary modules have been imported and the tips dataset is available in the workspace as df. Also, some formatting code has been written so that the plots you generate will appear on separate rows.

Instructions

  • Plot a PDF for the values in fraction with 30 bins between 0 and 30%. The range has been taken care of for you. ax=axes[0] means that this plot will appear in the first row.
  • Plot a CDF for the values in fraction with 30 bins between 0 and 30%. Again, the range has been specified for you. To make the CDF appear on the second row, you need to specify ax=axes[1].
1
2
3
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/tips.csv'
df = pd.read_csv(data_file)
df.head()
   total_bill   tip     sex smoker  day    time  size  fraction
0       16.99  1.01  Female     No  Sun  Dinner     2  0.059447
1       10.34  1.66    Male     No  Sun  Dinner     3  0.160542
2       21.01  3.50    Male     No  Sun  Dinner     3  0.166587
3       23.68  3.31    Male     No  Sun  Dinner     2  0.139780
4       24.59  3.61  Female     No  Sun  Dinner     4  0.146808
1
2
3
4
5
6
7
8
# This formats the plots such that they appear on separate rows
fig, axes = plt.subplots(nrows=2, ncols=1)

# Plot the PDF
df.fraction.plot(ax=axes[0], kind='hist', bins=30, density=True, range=(0,.3))

# Plot the CDF
df.fraction.plot(ax=axes[1], kind='hist', bins=30, density=True, cumulative=True, range=(0,.3))
1
<Axes: ylabel='Frequency'>

png

Statistical Exploratory Data Analysis

Summarizing with describe()

Describe

  • count: number of entries
  • mean: average of entries
  • std: standard deviation
  • min: minimum entry
  • 25%: first quartile
  • 50%: median or second quartile
  • 75%: third quartile
  • max: maximum entry
1
iris.describe()  # summary statistics
       sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
count         150.000000        150.000000         150.000000        150.000000
mean            5.843333          3.054000           3.758667          1.198667
std             0.828066          0.433594           1.764420          0.763161
min             4.300000          2.000000           1.000000          0.100000
25%             5.100000          2.800000           1.600000          0.300000
50%             5.800000          3.000000           4.350000          1.300000
75%             6.400000          3.300000           5.100000          1.800000
max             7.900000          4.400000           6.900000          2.500000

Counts

1
iris['sepal length (cm)'].count()  # Applied to Series
1
150
1
iris['sepal width (cm)'].count()  # Applied to Series
1
150
1
iris[['petal length (cm)', 'petal width (cm)']].count()  # Applied to DataFrame
1
2
3
petal length (cm)    150
petal width (cm)     150
dtype: int64
1
type(iris[['petal length (cm)', 'petal width (cm)']].count())  # Returns series
1
pandas.core.series.Series

Averages

  • measures the central tendency of a measurement
1
iris['sepal length (cm)'].mean(numeric_only=True)  # Applied to Series
1
5.843333333333334
1
iris.mean(numeric_only=True)  # Applied to entire DataFrame
1
2
3
4
5
sepal length (cm)    5.843333
sepal width (cm)     3.054000
petal length (cm)    3.758667
petal width (cm)     1.198667
dtype: float64

Standard Deviations (std)

  • measures spread of a measurement
1
iris.std(numeric_only=True)
1
2
3
4
5
sepal length (cm)    0.828066
sepal width (cm)     0.433594
petal length (cm)    1.764420
petal width (cm)     0.763161
dtype: float64

Mean and Standard Deviation on a Bell Curve

1
iris['sepal width (cm)'].plot(kind='hist', bins=30)
1
<Axes: ylabel='Frequency'>

png

Medians

  • middle number of the measurements
  • special example of a quantile
1
iris.median(numeric_only=True)
1
2
3
4
5
sepal length (cm)    5.80
sepal width (cm)     3.00
petal length (cm)    4.35
petal width (cm)     1.30
dtype: float64

Quantile

  • If q is between 0 and 1, the qth quantile of a dataset is a numerical value that splits the data into two sets
    • one with the fraction q of smaller observations
    • one with the fraction q of larger observations
  • Quantiles expressed as percentages are called percentiles
  • Median is the 0.5 quantile or the 50th percentile of a dataset
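
As a minimal sketch (synthetic numbers, not from the course data), the 0.5 quantile reproduces the median, and 0.25/0.75 mark the quartile boundaries:

# Sketch: quantiles of a small Series
s = pd.Series([1, 2, 3, 4, 100])
print(s.quantile(0.5), s.median())  # 3.0 3.0
print(s.quantile([0.25, 0.75]))     # 0.25 -> 2.0, 0.75 -> 4.0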
1
2
q = 0.5
iris.quantile(q, numeric_only=True)
1
2
3
4
5
sepal length (cm)    5.80
sepal width (cm)     3.00
petal length (cm)    4.35
petal width (cm)     1.30
Name: 0.5, dtype: float64

Inter-quartile range (IQR)

1
2
q = [0.25, 0.75]
iris.quantile(q, numeric_only=True)
      sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
0.25                5.1               2.8                1.6               0.3
0.75                6.4               3.3                5.1               1.8

Range

  • interval between the smallest and largest observations
  • given by the min and max methods
1
iris.min(numeric_only=True)
1
2
3
4
5
sepal length (cm)    4.3
sepal width (cm)     2.0
petal length (cm)    1.0
petal width (cm)     0.1
dtype: float64
1
iris.max(numeric_only=True)
1
2
3
4
5
sepal length (cm)    7.9
sepal width (cm)     4.4
petal length (cm)    6.9
petal width (cm)     2.5
dtype: float64
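
Subtracting the two Series gives the per-column range directly; a small sketch relying only on elementwise arithmetic:

# Sketch: per-column range = max - min (no loops needed)
iris.max(numeric_only=True) - iris.min(numeric_only=True)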

Box Plots

1
2
iris.plot(kind='box')
plt.ylabel('[cm]')
1
Text(0, 0.5, '[cm]')

png

Exercises

Fuel efficiency

From the automobiles data set, which value corresponds to the median value of the 'mpg' column? Your job is to select the 'mpg' column and call the .median(numeric_only=True) method on it. The automobile DataFrame has been provided as df.

1
2
3
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/auto-mpg.csv'
df = pd.read_csv(data_file)
df.head()
    mpg  cyl  displ   hp  weight  accel  yr origin                       name
0  18.0    8  307.0  130    3504   12.0  70     US  chevrolet chevelle malibu
1  15.0    8  350.0  165    3693   11.5  70     US          buick skylark 320
2  18.0    8  318.0  150    3436   11.0  70     US         plymouth satellite
3  16.0    8  304.0  150    3433   12.0  70     US              amc rebel sst
4  17.0    8  302.0  140    3449   10.5  70     US                ford torino
1
df.median(numeric_only=True)
1
2
3
4
5
6
7
8
mpg         22.75
cyl          4.00
displ      151.00
hp          93.50
weight    2803.50
accel       15.50
yr          76.00
dtype: float64

Bachelor’s degrees awarded to women

In this exercise, you will investigate statistics of the percentage of Bachelor’s degrees awarded to women from 1970 to 2011. Data is recorded every year for 17 different fields. This data set was obtained from the Digest of Education Statistics.

Your job is to compute the minimum and maximum values of the 'Engineering' column and generate a line plot of the mean value of all 17 academic fields per year. To perform this step, you’ll use the .mean(numeric_only=True) method with the keyword argument axis='columns'. This computes the mean across all columns per row.

The DataFrame has been pre-loaded for you as df with the index set to 'Year'.

Instructions

  • Print the minimum value of the ‘Engineering’ column.
  • Print the maximum value of the ‘Engineering’ column.
  • Construct the mean percentage per year with .mean(axis=’columns’). Assign the result to mean.
  • Plot the average percentage per year. Since ‘Year’ is the index of df, it will appear on the x-axis of the plot. No keyword arguments are needed in your call to .plot().
1
2
3
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/percent-bachelors-degrees-women-usa.csv'
df = pd.read_csv(data_file, index_col='Year')
df.head()
AgricultureArchitectureArt and PerformanceBiologyBusinessCommunications and JournalismComputer ScienceEducationEngineeringEnglishForeign LanguagesHealth ProfessionsMath and StatisticsPhysical SciencesPsychologyPublic AdministrationSocial Sciences and History
Year
19704.22979811.92100559.729.0883639.06443935.313.674.5353280.865.57092373.877.138.013.844.468.436.8
19715.45279712.00310659.929.3944039.50318735.513.674.1492041.064.55648573.975.539.014.946.265.536.2
19727.42071013.21459460.429.81022110.55896236.614.973.5545201.263.66426374.676.940.214.847.662.636.1
19739.65360214.79161360.231.14791512.80460238.416.473.5018141.662.94150274.977.440.916.550.464.336.4
197414.07462317.44468861.932.99618316.20485040.518.973.3368112.262.41341275.377.941.818.252.666.137.3
1
2
# Print the minimum value of the Engineering column
df.Engineering.min(numeric_only=True)
1
0.8
1
2
# Print the maximum value of the Engineering column
df.Engineering.max(numeric_only=True)
1
19.0
1
2
3
# Construct the mean percentage per year: mean
mean = df.mean(axis='columns')
mean.head()
1
2
3
4
5
6
7
Year
1970    38.594697
1971    38.603481
1972    39.066075
1973    40.131826
1974    41.715916
dtype: float64
1
2
# Plot the average percentage per year
mean.plot()
1
<Axes: xlabel='Year'>

png

Median vs mean

In many data sets, there can be large differences in the mean and median value due to the presence of outliers.
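
As a minimal sketch (synthetic numbers, not the Titanic data), a single large outlier pulls the mean far more than the median:

# Sketch: one outlier shifts the mean but barely moves the median
s = pd.Series([10, 11, 12, 13, 500])
print(s.mean())    # 109.2
print(s.median())  # 12.0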

In this exercise, you’ll investigate the mean, median, and max fare prices paid by passengers on the Titanic and generate a box plot of the fare prices. This data set was obtained from Vanderbilt University.

All necessary modules have been imported and the DataFrame is available in the workspace as df.

Instructions

  • Print summary statistics of the ‘fare’ column of df with .describe() and print(). Note: df.fare and df[‘fare’] are equivalent.
  • Generate a box plot of the ‘fare’ column.
1
2
3
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/titanic.csv'
df = pd.read_csv(data_file)
df.head(3)
pclasssurvivednamesexagesibspparchticketfarecabinembarkedboatbodyhome.dest
011Allen, Miss. Elisabeth Waltonfemale29.000024160211.3375B5S2NaNSt Louis, MO
111Allison, Master. Hudson Trevormale0.9212113781151.5500C22 C26S11NaNMontreal, PQ / Chesterville, ON
210Allison, Miss. Helen Lorainefemale2.0012113781151.5500C22 C26SNaNNaNMontreal, PQ / Chesterville, ON
1
df.fare.describe()
1
2
3
4
5
6
7
8
9
count    1308.000000
mean       33.295479
std        51.758668
min         0.000000
25%         7.895800
50%        14.454200
75%        31.275000
max       512.329200
Name: fare, dtype: float64
1
df.fare.plot(kind='box')
1
<Axes: >

png

Quantiles

In this exercise, you’ll investigate the probabilities of life expectancy in countries around the world. This dataset contains life expectancy for persons born each year from 1800 to 2015. Since country names change or results are not reported, not every country has values. This dataset was obtained from Gapminder.

First, you will determine the number of countries reported in 2015. There are a total of 260 unique countries in the entire dataset. Then, you will compute the 5th and 95th percentiles of life expectancy over the entire dataset. Finally, you will make a box plot of life expectancy every 50 years from 1800 to 2000. Notice the large change in the distributions over this period.

The dataset has been pre-loaded into a DataFrame called df.

Instructions

  • Print the number of countries reported in 2015. To do this, use the .count() method on the ‘2015’ column of df.
  • Print the 5th and 95th percentiles of df. To do this, use the .quantile() method with the list [0.05, 0.95].
  • Generate a box plot using the list of columns provided in years. This has already been done for you, so click on ‘Submit Answer’ to view the result!
1
2
3
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/life_expectancy_at_birth.csv'
df = pd.read_csv(data_file)
df.head(3)
Unnamed: 0Life expectancy18001801180218031804180518061807180818091810181118121813181418151816181718181819182018211822182318241825182618271828182918301831183218331834183518361837183818391840184118421843184418451846184718481849185018511852185318541855185618571858185918601861186218631864186518661867186818691870187118721873187418751876187718781879188018811882188318841885188618871888188918901891189218931894189518961897...1917191819191920192119221923192419251926192719281929193019311932193319341935193619371938193919401941194219431944194519461947194819491950195119521953195419551956195719581959196019611962196319641965196619671968196919701971197219731974197519761977197819791980198119821983198419851986198719881989199019911992199319941995199619971998199920002001200220032004200520062007200820092010201120122013201420152016
00AbkhaziaNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN...NaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN
11Afghanistan28.2128.228.1928.1828.1728.1628.1528.1428.1328.1228.1128.128.0928.0828.0728.0628.0528.0428.0328.0228.0128.027.9927.9827.9727.9527.9427.9327.9227.9127.927.8927.8827.8727.8627.8527.8427.8327.8227.8127.827.7927.7827.7727.7627.7527.7427.7327.7227.7127.727.6927.6827.6727.6627.6527.6427.6327.6227.6127.627.5927.5827.5727.5627.5427.5327.5227.5127.527.4927.4827.4727.4627.4527.4427.4327.4227.4127.427.3927.3827.3727.3627.3527.3427.3327.3227.3127.327.2927.2827.2727.2627.2527.2427.2327.22...27.017.0526.9926.9826.9726.9626.9526.9426.9326.9226.9126.926.8926.8826.8726.8626.8526.8426.8326.8226.8126.826.7926.7826.7926.826.826.8126.8226.8226.8326.8326.8426.8527.1327.6728.1928.7329.2729.830.3430.8631.431.9432.4733.0133.5334.0734.635.1335.6636.1736.6937.237.738.1938.6739.1439.6140.0740.5340.9841.4641.9642.5143.1143.7544.4545.2146.0246.8747.7448.6249.549.349.449.548.949.449.749.548.650.050.150.451.051.451.852.052.152.452.853.353.654.054.454.854.953.852.72
22Akrotiri and DhekeliaNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN...NaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN

3 rows × 219 columns

1
2
# Print the number of countries reported in 2015
df['2015'].count()
1
208
1
2
# Print the 5th and 95th percentiles
df.quantile([0.05, 0.95], numeric_only=True)
Unnamed: 0180018011802180318041805180618071808180918101811181218131814181518161817181818191820182118221823182418251826182718281829183018311832183318341835183618371838183918401841184218431844184518461847184818491850185118521853185418551856185718581859186018611862186318641865186618671868186918701871187218731874187518761877187818791880188118821883188418851886188718881889189018911892189318941895189618971898...1917191819191920192119221923192419251926192719281929193019311932193319341935193619371938193919401941194219431944194519461947194819491950195119521953195419551956195719581959196019611962196319641965196619671968196919701971197219731974197519761977197819791980198119821983198419851986198719881989199019911992199319941995199619971998199920002001200220032004200520062007200820092010201120122013201420152016
0.0512.9525.4025.3025.2025.225.225.4025.4025.4025.325.325.425.4025.2025.425.425.4025.4025.4025.3025.325.325.425.425.4025.425.4025.425.325.425.425.4025.4025.325.225.325.4025.4025.3025.325.325.4025.2025.3025.3925.3025.3025.2025.3025.225.125.2025.325.4025.425.2025.425.325.4025.4025.425.2025.3025.3025.4025.2525.1525.1025.124.3125.125.1025.225.1525.2025.2025.1025.1325.2025.1025.2025.2025.3025.2025.2125.2025.225.2025.2025.1025.1025.225.1525.1025.1025.2025.2025.1025.2025.3...25.18.6724.7724.7224.3524.8825.4225.6025.4025.425.6025.8125.6025.8125.4025.1524.5925.8026.4126.6726.9826.9827.0028.0027.0325.7923.9826.0127.0830.0129.7830.7931.6731.726531.857532.340532.843533.716534.245534.655034.606535.08635.634535.984536.532037.081037.57338.123038.681039.28039.845040.394540.826541.553542.061542.008542.249542.93543.685544.173544.621545.253045.612045.940546.230046.614047.741048.124548.54248.64049.341549.235549.49950.17550.17549.60049.57049.3150.25050.27050.6050.950.4850.3450.9451.351.5852.2052.7852.83553.0753.6054.23554.93555.9756.33556.70556.8757.85559.2555
0.95246.0537.9237.3538.3738.038.338.3738.3738.3738.038.038.038.0537.6137.038.338.3738.6939.0338.5338.338.538.538.538.9138.538.3738.338.538.038.038.3738.3738.338.338.338.3738.3738.3738.538.538.5139.4138.6638.5139.4139.4138.5138.3738.838.038.5138.838.6638.638.5138.338.838.5138.5138.538.5638.4838.5138.5138.6238.3738.5138.838.7939.239.2938.839.4139.4139.6239.4140.3541.4440.9941.8140.3940.3340.5742.3542.1943.342.8943.7544.2244.9344.444.2543.9544.8745.7946.5647.4447.9347.6...54.047.1454.9754.6256.8156.9557.0257.5158.2357.958.2458.5757.9959.4660.1160.6360.6861.0960.9461.7961.3562.4363.1261.9561.3763.9864.0062.6463.3966.0866.2767.3067.6368.253068.258068.706069.084569.713069.953069.985569.899570.50670.422570.879571.066570.906571.02771.615571.397571.65871.948571.680071.725572.044572.488572.577072.775573.11673.396073.760074.056574.338574.477574.807075.176575.364575.576575.969075.94376.11876.358576.587576.83977.00077.16577.46577.66577.8077.96578.26578.6678.878.8079.2079.4679.679.7280.0680.2680.61080.7380.9381.20081.36581.6081.66581.83082.0082.10082.1650

2 rows × 218 columns

1
2
3
# Generate a box plot
years = ['1800','1850','1900','1950','2000']
df[years].plot(kind='box')
1
<Axes: >

png

Standard deviation of temperature

Let’s use the mean and standard deviation to explore differences in temperature distributions in Pittsburgh in 2013. The data has been obtained from Weather Underground.

In this exercise, you’re going to compare the distribution of daily temperatures in January and March. You’ll compute the mean and standard deviation for these two months. You will notice that while the mean values are similar, the standard deviations are quite different, meaning that one month had a larger fluctuation in temperature than the other.

The DataFrames have been pre-loaded for you as january, which contains the January data, and march, which contains the March data.

Instructions

  • Compute and print the means of the January and March data using the .mean(numeric_only=True) method.
  • Compute and print the standard deviations of the January and March data using the .std(numeric_only=True) method.
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
jan_values = np.array([['2013-01-01', 28],
                       ['2013-01-02', 21],
                       ['2013-01-03', 24],
                       ['2013-01-04', 28],
                       ['2013-01-05', 30],
                       ['2013-01-06', 34],
                       ['2013-01-07', 29],
                       ['2013-01-08', 31],
                       ['2013-01-09', 36],
                       ['2013-01-10', 34],
                       ['2013-01-11', 47],
                       ['2013-01-12', 55],
                       ['2013-01-13', 62],
                       ['2013-01-14', 44],
                       ['2013-01-15', 30],
                       ['2013-01-16', 32],
                       ['2013-01-17', 32],
                       ['2013-01-18', 24],
                       ['2013-01-19', 42],
                       ['2013-01-20', 35],
                       ['2013-01-21', 18],
                       ['2013-01-22', 9],
                       ['2013-01-23', 11],
                       ['2013-01-24', 16],
                       ['2013-01-25', 16],
                       ['2013-01-26', 23],
                       ['2013-01-27', 23],
                       ['2013-01-28', 40],
                       ['2013-01-29', 59],
                       ['2013-01-30', 58],
                       ['2013-01-31', 32]]).transpose()
cols = ['Date', 'Temperature']
jan_zip = list(zip(cols, jan_values))
jan_dict = dict(jan_zip)
january = pd.DataFrame.from_dict(jan_dict).astype({'Temperature': np.int64})
january.head()
         Date  Temperature
0  2013-01-01           28
1  2013-01-02           21
2  2013-01-03           24
3  2013-01-04           28
4  2013-01-05           30
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
mar_values = np.array([['2013-03-01', 28],
                       ['2013-03-02', 26],
                       ['2013-03-03', 24],
                       ['2013-03-04', 28],
                       ['2013-03-05', 32],
                       ['2013-03-06', 34],
                       ['2013-03-07', 36],
                       ['2013-03-08', 32],
                       ['2013-03-09', 40],
                       ['2013-03-10', 55],
                       ['2013-03-11', 55],
                       ['2013-03-12', 40],
                       ['2013-03-13', 32],
                       ['2013-03-14', 30],
                       ['2013-03-15', 38],
                       ['2013-03-16', 36],
                       ['2013-03-17', 32],
                       ['2013-03-18', 34],
                       ['2013-03-19', 36],
                       ['2013-03-20', 32],
                       ['2013-03-21', 22],
                       ['2013-03-22', 28],
                       ['2013-03-23', 34],
                       ['2013-03-24', 34],
                       ['2013-03-25', 32],
                       ['2013-03-26', 34],
                       ['2013-03-27', 34],
                       ['2013-03-28', 37],
                       ['2013-03-29', 43],
                       ['2013-03-30', 43],
                       ['2013-03-31', 44]]).transpose()
mar_zip = list(zip(cols, mar_values))
mar_dict = dict(mar_zip)
march = pd.DataFrame.from_dict(mar_dict).astype({'Temperature': np.int64})
march.head()
         Date  Temperature
0  2013-03-01           28
1  2013-03-02           26
2  2013-03-03           24
3  2013-03-04           28
4  2013-03-05           32
1
2
# Print the mean of the January and March data
january.mean(numeric_only=True)
1
2
Temperature    32.354839
dtype: float64
1
march.mean(numeric_only=True)
1
2
Temperature    35.0
dtype: float64
1
2
# Print the standard deviation of the January and March data
january.std(numeric_only=True)
1
2
Temperature    13.583196
dtype: float64
1
march.std(numeric_only=True)
1
2
Temperature    7.478859
dtype: float64

Separating Populations with Boolean Indexing

Describe species column

  • contains categorical data
  • count: number of non-null entries
  • unique: number of distinct values
  • top: most frequent category
  • freq: number of occurrences of the top value
1
iris.species.describe()
1
2
3
4
5
count        150
unique         3
top       setosa
freq          50
Name: species, dtype: object

Unique and Factors

1
iris.species.unique()
1
array(['setosa', 'versicolor', 'virginica'], dtype=object)
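
A related helper (not covered above) is .value_counts(), which tallies how many rows fall in each category:

# Sketch: count the rows belonging to each species
iris['species'].value_counts()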

Filtering by species

1
2
3
4
5
6
7
8
indices = iris['species'] == 'setosa'
setosa = iris.loc[indices,:]  # extract new DataFrame

indices = iris['species'] == 'versicolor'
versicolor = iris.loc[indices,:]  # extract new DataFrame

indices = iris['species'] == 'virginica'
virginica = iris.loc[indices,:]  # extract new DataFrame

Checking species

1
setosa['species'].unique()
1
array(['setosa'], dtype=object)
1
versicolor['species'].unique()
1
array(['versicolor'], dtype=object)
1
virginica['species'].unique()
1
array(['virginica'], dtype=object)
1
setosa.head(2)
   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm) species
0                5.1               3.5                1.4               0.2  setosa
1                4.9               3.0                1.4               0.2  setosa
1
versicolor.head(2)
    sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)     species
50                7.0               3.2                4.7               1.4  versicolor
51                6.4               3.2                4.5               1.5  versicolor
1
virginica.head(2)
     sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)    species
100                6.3               3.3                6.0               2.5  virginica
101                5.8               2.7                5.1               1.9  virginica

Visual EDA: All Data

1
2
3
4
5
6
iris.plot(kind='hist',
          bins=50,
          range=(0, 8),
          alpha=0.3)
plt.title('Entire Iris Dataset')
plt.xlabel('[cm]')
1
Text(0.5, 0, '[cm]')

png

Visual EDA: Individual Factors

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
setosa.plot(kind='hist',
          bins=50,
          range=(0, 8),
          alpha=0.3)
plt.title('Setosa Dataset')
plt.xlabel('[cm]')

versicolor.plot(kind='hist',
          bins=50,
          range=(0, 8),
          alpha=0.3)
plt.title('Versicolor Dataset')
plt.xlabel('[cm]')

virginica.plot(kind='hist',
          bins=50,
          range=(0, 8),
          alpha=0.3)
plt.title('Virginica Dataset')
plt.xlabel('[cm]')
1
Text(0.5, 0, '[cm]')

png

png

png

Statistical EDA: describe()

1
2
describe_all = iris.describe()
describe_all
       sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
count         150.000000        150.000000         150.000000        150.000000
mean            5.843333          3.054000           3.758667          1.198667
std             0.828066          0.433594           1.764420          0.763161
min             4.300000          2.000000           1.000000          0.100000
25%             5.100000          2.800000           1.600000          0.300000
50%             5.800000          3.000000           4.350000          1.300000
75%             6.400000          3.300000           5.100000          1.800000
max             7.900000          4.400000           6.900000          2.500000
1
2
describe_setosa = setosa.describe()
describe_setosa
       sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
count           50.00000         50.000000          50.000000          50.00000
mean             5.00600          3.418000           1.464000           0.24400
std              0.35249          0.381024           0.173511           0.10721
min              4.30000          2.300000           1.000000           0.10000
25%              4.80000          3.125000           1.400000           0.20000
50%              5.00000          3.400000           1.500000           0.20000
75%              5.20000          3.675000           1.575000           0.30000
max              5.80000          4.400000           1.900000           0.60000
1
2
describe_versicolor = versicolor.describe()
describe_versicolor
       sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
count          50.000000         50.000000          50.000000         50.000000
mean            5.936000          2.770000           4.260000          1.326000
std             0.516171          0.313798           0.469911          0.197753
min             4.900000          2.000000           3.000000          1.000000
25%             5.600000          2.525000           4.000000          1.200000
50%             5.900000          2.800000           4.350000          1.300000
75%             6.300000          3.000000           4.600000          1.500000
max             7.000000          3.400000           5.100000          1.800000
1
2
describe_virginica = virginica.describe()
describe_virginica
       sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
count           50.00000         50.000000          50.000000          50.00000
mean             6.58800          2.974000           5.552000           2.02600
std              0.63588          0.322497           0.551895           0.27465
min              4.90000          2.200000           4.500000           1.40000
25%              6.22500          2.800000           5.100000           1.80000
50%              6.50000          3.000000           5.550000           2.00000
75%              6.90000          3.175000           5.875000           2.30000
max              7.90000          3.800000           6.900000           2.50000

Computing Errors

  • The error is the absolute difference between the statistic computed within each group and the statistic computed on the whole population, divided by the group statistic and expressed as a percentage
  • Elementwise arithmetic so no need for loops
1
2
3
error_setosa = 100 * np.abs(describe_setosa - describe_all)
error_setosa = error_setosa / describe_setosa
error_setosa
       sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
count         200.000000        200.000000         200.000000        200.000000
mean           16.726595         10.649503         156.739526        391.256831
std           134.919250         13.796994         916.891608        611.840574
min             0.000000         13.043478           0.000000          0.000000
25%             6.250000         10.400000          14.285714         50.000000
50%            16.000000         11.764706         190.000000        550.000000
75%            23.076923         10.204082         223.809524        500.000000
max            36.206897          0.000000         263.157895        316.666667
1
2
3
error_versicolor = 100 * np.abs(describe_versicolor - describe_all)
error_versicolor = error_versicolor / describe_versicolor
error_versicolor
       sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
count         200.000000        200.000000         200.000000        200.000000
mean            1.561096         10.252708          11.768388          9.602815
std            60.424722         38.176108         275.479720        285.916763
min            12.244898          0.000000          66.666667         90.000000
25%             8.928571         10.891089          60.000000         75.000000
50%             1.694915          7.142857           0.000000          0.000000
75%             1.587302         10.000000          10.869565         20.000000
max            12.857143         29.411765          35.294118         38.888889
1
2
3
error_virginica = 100 * np.abs(describe_virginica - describe_all)
error_virginica = error_virginica / describe_virginica
error_virginica
       sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
count         200.000000        200.000000         200.000000        200.000000
mean           11.303380          2.689980          32.300672         40.835801
std            30.223731         34.449250         219.702370        177.866589
min            12.244898          9.090909          77.777778         92.857143
25%            18.072289          0.000000          68.627451         83.333333
50%            10.769231          0.000000          21.621622         35.000000
75%             7.246377          3.937008          13.191489         21.739130
max             0.000000         15.789474           0.000000          0.000000

Exercises

Filtering and counting

How many automobiles were manufactured in Asia in the automobile dataset? The DataFrame has been provided for you as df. Use filtering and the .count() member method to determine the number of rows where the 'origin' column has the value 'Asia'.

As an example, you can extract the rows that contain 'US' as the country of origin using df[df['origin'] == 'US'].

1
2
3
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/auto-mpg.csv'
df = pd.read_csv(data_file)
df.head(3)
    mpg  cyl  displ   hp  weight  accel  yr origin                       name
0  18.0    8  307.0  130    3504   12.0  70     US  chevrolet chevelle malibu
1  15.0    8  350.0  165    3693   11.5  70     US          buick skylark 320
2  18.0    8  318.0  150    3436   11.0  70     US         plymouth satellite
1
df[df['origin'] == 'Asia'].origin.count()
1
79

Separate and summarize

Let’s use population filtering to explore how automobiles manufactured in the US differ from the global population. How does the distribution of fuel efficiency (MPG) for the US compare to the global average and standard deviation?

In this exercise, you’ll compute the means and standard deviations of all columns in the full automobile dataset. Next, you’ll compute the same quantities for just the US population and subtract the global values from the US values.

All necessary modules have been imported and the DataFrame has been pre-loaded as df.

Instructions

  • Compute the global mean and global standard deviations of df using the .mean(numeric_only=True) and .std(numeric_only=True) methods. Assign the results to global_mean and global_std.
  • Filter the ‘US’ population from the ‘origin’ column and assign the result to us.
  • Compute the US mean and US standard deviations of us using the .mean(numeric_only=True) and .std(numeric_only=True) methods. Assign the results to us_mean and us_std.
  • Print the differences between us_mean and global_mean and us_std and global_std. This has already been done for you.
1
2
3
# Compute the global mean and global standard deviation: global_mean, global_std
global_mean = df.mean(numeric_only=True)
global_std = df.std(numeric_only=True)
1
2
# Filter the US population from the origin column: us
us = df[df['origin'] == 'US']
1
2
3
# Compute the US mean and US standard deviation: us_mean, us_std
us_mean = us.mean(numeric_only=True)
us_std = us.std(numeric_only=True)
1
2
3
# Print the differences
print(us_mean - global_mean)
print(us_std - global_std)
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
mpg        -3.412449
cyl         0.805612
displ      53.100255
hp         14.579592
weight    394.905612
accel      -0.551122
yr         -0.387755
dtype: float64
mpg       -1.364623
cyl       -0.049788
displ     -6.267657
hp         1.406630
weight   -54.055870
accel     -0.022844
yr        -0.023369
dtype: float64

Separate and plot

Population filtering can be used alongside plotting to quickly determine differences in distributions between the sub-populations. You’ll work with the Titanic dataset.

There were three passenger classes on the Titanic, and passengers in each class paid a different fare price. In this exercise, you’ll investigate the differences in these fare prices.

Your job is to use Boolean filtering and generate box plots of the fare prices for each of the three passenger classes. The fare prices are contained in the 'fare' column and passenger class information is contained in the 'pclass' column.

When you’re done, notice the portions of the box plots that differ and those that are similar.

The DataFrame has been pre-loaded for you as titanic.

Instructions

  • Inside plt.subplots(), specify the nrows and ncols parameters so that there are 3 rows and 1 column.
  • Filter the rows where the ‘pclass’ column has the values 1 and generate a box plot of the ‘fare’ column.
  • Filter the rows where the ‘pclass’ column has the values 2 and generate a box plot of the ‘fare’ column.
  • Filter the rows where the ‘pclass’ column has the values 3 and generate a box plot of the ‘fare’ column.
1
2
3
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/titanic.csv'
titanic = pd.read_csv(data_file)
titanic.head(3)
pclasssurvivednamesexagesibspparchticketfarecabinembarkedboatbodyhome.dest
011Allen, Miss. Elisabeth Waltonfemale29.000024160211.3375B5S2NaNSt Louis, MO
111Allison, Master. Hudson Trevormale0.9212113781151.5500C22 C26S11NaNMontreal, PQ / Chesterville, ON
210Allison, Miss. Helen Lorainefemale2.0012113781151.5500C22 C26SNaNNaNMontreal, PQ / Chesterville, ON
1
2
3
4
5
6
7
8
9
10
11
12
13
# Display the box plots on 3 separate rows and 1 column
fig, axes = plt.subplots(nrows=3, ncols=1)

# Generate a box plot of the fare prices for the First passenger class
titanic.loc[titanic['pclass'] == 1].plot(ax=axes[0], y='fare', kind='box')

# Generate a box plot of the fare prices for the Second passenger class
titanic.loc[titanic['pclass'] == 2].plot(ax=axes[1], y='fare', kind='box')

# Generate a box plot of the fare prices for the Third passenger class
titanic.loc[titanic['pclass'] == 3].plot(ax=axes[2], y='fare', kind='box')

plt.tight_layout()

png

Time Series in pandas

In this chapter, you will learn how to manipulate and visualize time series data using Pandas. You will become familiar with concepts such as upsampling, downsampling, and interpolation. You will practice using Pandas’ method chaining to efficiently filter your data and perform time series analyses. From stock prices to flight timings, time series data are found in a wide variety of domains and being able to effectively work with such data can be an invaluable skill.

Indexing pandas time series

Using pandas to read datetime objects

  • read_csv() function
    • Can read strings into datetime objects
    • Need to specify parse_dates=True
  • ISO 8601 format
    • yyyy-mm-dd hh:mm:ss
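
For example, an ISO 8601 string parses directly into a pandas Timestamp (a minimal sketch):

# Sketch: a single ISO 8601 string becomes a Timestamp
pd.Timestamp('2015-02-03 14:14:00')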

Product Sales CSV - Parse dates

1
2
3
4
sales = pd.read_csv('data/sales_data/sales-feb-2015.csv',
                    parse_dates=True,
                    index_col='Date')
sales.head()
                             Company   Product  Units
Date
2015-02-02 08:33:00            Hooli  Software      3
2015-02-02 20:54:00        Mediacore  Hardware      9
2015-02-03 14:14:00          Initech  Software     13
2015-02-04 15:36:00        Streeplex  Software     13
2015-02-04 21:52:00  Acme Coporation  Hardware     14
1
sales.info()
1
2
3
4
5
6
7
8
9
10
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 20 entries, 2015-02-02 08:33:00 to 2015-02-26 08:58:00
Data columns (total 3 columns):
 #   Column   Non-Null Count  Dtype 
---  ------   --------------  ----- 
 0   Company  20 non-null     object
 1   Product  20 non-null     object
 2   Units    20 non-null     int64 
dtypes: int64(1), object(2)
memory usage: 640.0+ bytes

Selecting single datetime

1
sales.loc['2015-02-19 10:59:00', 'Company']
1
'Mediacore'

Selecting whole day

1
sales.loc['2015-02-05']
                             Company   Product  Units
Date
2015-02-05 01:53:00  Acme Coporation  Software     19
2015-02-05 22:05:00            Hooli   Service     10

Partial datetime string selection

  • Alternative formats:
    • sales.loc[‘February 5, 2015’]
    • sales.loc[‘2015-Feb-5’]
  • Whole month: sales.loc[‘2015-02’]
  • Whole year: sales.loc[‘2015’]
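
As a quick sketch, each of these spellings selects the same day from the sales DataFrame loaded above:

# Equivalent partial-string selections of February 5, 2015
sales.loc['February 5, 2015']
sales.loc['2015-Feb-5']
sales.loc['2015-02-05']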

Selecting whole month

1
sales.loc['2015-02'].head()
                             Company   Product  Units
Date
2015-02-02 08:33:00            Hooli  Software      3
2015-02-02 20:54:00        Mediacore  Hardware      9
2015-02-03 14:14:00          Initech  Software     13
2015-02-04 15:36:00        Streeplex  Software     13
2015-02-04 21:52:00  Acme Coporation  Hardware     14

Slicing using dates/times

1
sales.loc['2015-2-16':'2015-2-20']
                       Company   Product  Units
Date
2015-02-16 12:09:00      Hooli  Software     10
2015-02-19 10:59:00  Mediacore  Hardware     16
2015-02-19 16:02:00  Mediacore   Service     10

Convert strings to datetime

1
2
3
4
5
evening_2_11 = pd.to_datetime(['2015-2-11 20:03',
                               '2015-2-11 21:00',
                               '2015-2-11 22:50',
                               '2015-2-11 23:00'])
evening_2_11
1
2
3
DatetimeIndex(['2015-02-11 20:03:00', '2015-02-11 21:00:00',
               '2015-02-11 22:50:00', '2015-02-11 23:00:00'],
              dtype='datetime64[ns]', freq=None)

Reindexing DataFrame

1
sales.reindex(evening_2_11)
                     Company   Product  Units
2015-02-11 20:03:00  Initech  Software    7.0
2015-02-11 21:00:00      NaN       NaN    NaN
2015-02-11 22:50:00    Hooli  Software    4.0
2015-02-11 23:00:00      NaN       NaN    NaN

Filling missing values

1
sales.reindex(evening_2_11, method='ffill')
                     Company   Product  Units
2015-02-11 20:03:00  Initech  Software      7
2015-02-11 21:00:00  Initech  Software      7
2015-02-11 22:50:00    Hooli  Software      4
2015-02-11 23:00:00    Hooli  Software      4
1
sales.reindex(evening_2_11, method='bfill')
                     Company   Product  Units
2015-02-11 20:03:00  Initech  Software      7
2015-02-11 21:00:00    Hooli  Software      4
2015-02-11 22:50:00    Hooli  Software      4
2015-02-11 23:00:00    Hooli  Software     10

Exercises

Reading and slicing times

For this exercise, we have read in the same data file using three different approaches:

1
2
3
df1 = pd.read_csv(filename)
df2 = pd.read_csv(filename, parse_dates=['Date'])
df3 = pd.read_csv(filename, index_col='Date', parse_dates=True)

Use the .head() and .info() methods in the IPython Shell to inspect the DataFrames. Then, try to index each DataFrame with a datetime string. Which of the resulting DataFrames allows you to easily index and slice data by dates using, for example, df1.loc['2010-Aug-01']?

1
2
3
4
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/weather_data_austin_2010.csv'
df1 = pd.read_csv(data_file)
df2 = pd.read_csv(data_file, parse_dates=['Date'])
df3 = pd.read_csv(data_file, index_col='Date', parse_dates=True)
1
df1.head()
   Temperature  DewPoint  Pressure            Date
0         46.2      37.5       1.0  20100101 00:00
1         44.6      37.1       1.0  20100101 01:00
2         44.1      36.9       1.0  20100101 02:00
3         43.8      36.9       1.0  20100101 03:00
4         43.5      36.8       1.0  20100101 04:00
1
df2.head()
   Temperature  DewPoint  Pressure                 Date
0         46.2      37.5       1.0  2010-01-01 00:00:00
1         44.6      37.1       1.0  2010-01-01 01:00:00
2         44.1      36.9       1.0  2010-01-01 02:00:00
3         43.8      36.9       1.0  2010-01-01 03:00:00
4         43.5      36.8       1.0  2010-01-01 04:00:00
1
df3.head()
                     Temperature  DewPoint  Pressure
Date
2010-01-01 00:00:00         46.2      37.5       1.0
2010-01-01 01:00:00         44.6      37.1       1.0
2010-01-01 02:00:00         44.1      36.9       1.0
2010-01-01 03:00:00         43.8      36.9       1.0
2010-01-01 04:00:00         43.5      36.8       1.0

datetime slicing is allowed when the index is a DatetimeIndex

  • doesn’t work with df1 or df2, where the dates are stored as a regular column rather than as the index:
    1
    2
    
    df1.loc['2010-Aug-01']
    df2.loc['2010-Aug-01']
    
1
df3.loc['2010-Aug-01'].head()
                     Temperature  DewPoint  Pressure
Date
2010-08-01 00:00:00         79.0      70.8       1.0
2010-08-01 01:00:00         77.4      71.2       1.0
2010-08-01 02:00:00         76.4      71.3       1.0
2010-08-01 03:00:00         75.7      71.4       1.0
2010-08-01 04:00:00         75.1      71.4       1.0

Creating and using a DatetimeIndex

The pandas Index is a powerful way to handle time series data, so it is valuable to know how to build one yourself. Pandas provides the pd.to_datetime() function for just this task. For example, if passed the list of strings ['2015-01-01 091234','2015-01-01 091234'] and a format specification variable, such as format='%Y-%m-%d %H%M%S', pandas will parse the string into the proper datetime elements and build the datetime objects.
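
A minimal sketch of that call, using made-up timestamps rather than the exercise data:

# Parse strings with an explicit format specification
pd.to_datetime(['2015-01-01 091234', '2015-01-02 131415'],
               format='%Y-%m-%d %H%M%S')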

In this exercise, a list of temperature data and a list of date strings has been pre-loaded for you as temperature_list and date_list respectively. Your job is to use the .to_datetime() method to build a DatetimeIndex out of the list of date strings, and to then use it along with the list of temperature data to build a pandas Series.

Instructions

  • Prepare a format string, time_format, using ‘%Y-%m-%d %H:%M’ as the desired format.
  • Convert date_list into a datetime object by using the pd.to_datetime() function. Specify the format string you defined above and assign the result to my_datetimes.
  • Construct a pandas Series called time_series using pd.Series() with temperature_list and my_datetimes. Set the index of the Series to be my_datetimes.
1
2
3
4
5
6
date_file = 'data/date_list.csv'
date_df = pd.read_csv(date_file, header=None)

date_df[0] = date_df[0].map(lambda x: x.lstrip(" '").rstrip("',"))

date_df.head()
                0
0  20100101 00:00
1  20100101 01:00
2  20100101 02:00
3  20100101 03:00
4  20100101 04:00
1
2
date_list = list(date_df[0])
date_list[:10]
1
2
3
4
5
6
7
8
9
10
['20100101 00:00',
 '20100101 01:00',
 '20100101 02:00',
 '20100101 03:00',
 '20100101 04:00',
 '20100101 05:00',
 '20100101 06:00',
 '20100101 07:00',
 '20100101 08:00',
 '20100101 09:00']
1
2
temp_list = np.random.uniform(low=41.8, high=95.3, size=8759)
temp_list
1
2
array([70.96637444, 75.83797753, 60.69687542, ..., 92.7728768 ,
       47.54699967, 55.66061534])
1
2
# Prepare a format string: time_format
time_format = '%Y%m%d %H:%M'
1
2
3
# Convert date_list into a datetime object: my_datetimes
my_datetimes = pd.to_datetime(date_list, format=time_format) 
my_datetimes
1
2
3
4
5
6
7
8
9
10
11
12
DatetimeIndex(['2010-01-01 00:00:00', '2010-01-01 01:00:00',
               '2010-01-01 02:00:00', '2010-01-01 03:00:00',
               '2010-01-01 04:00:00', '2010-01-01 05:00:00',
               '2010-01-01 06:00:00', '2010-01-01 07:00:00',
               '2010-01-01 08:00:00', '2010-01-01 09:00:00',
               ...
               '2010-12-31 14:00:00', '2010-12-31 15:00:00',
               '2010-12-31 16:00:00', '2010-12-31 17:00:00',
               '2010-12-31 18:00:00', '2010-12-31 19:00:00',
               '2010-12-31 20:00:00', '2010-12-31 21:00:00',
               '2010-12-31 22:00:00', '2010-12-31 23:00:00'],
              dtype='datetime64[ns]', length=8759, freq=None)
1
2
# Construct a pandas Series using temperature_list and my_datetimes: time_series
time_series = pd.Series(temp_list, index=my_datetimes)
1
time_series.head()
1
2
3
4
5
6
2010-01-01 00:00:00    70.966374
2010-01-01 01:00:00    75.837978
2010-01-01 02:00:00    60.696875
2010-01-01 03:00:00    69.374115
2010-01-01 04:00:00    83.669478
dtype: float64

Partial string indexing and slicing

Pandas time series support “partial string” indexing. What this means is that even when passed only a portion of the datetime, such as the date but not the time, pandas is remarkably good at doing what one would expect. Pandas datetime indexing also supports a wide variety of commonly used datetime string formats, even when mixed.

In this exercise, a time series that contains hourly weather data has been pre-loaded for you. This data was read using the parse_dates=True option in read_csv() with index_col="Date" so that the Index is indeed a DatetimeIndex.

All data from the 'Temperature' column has been extracted into the variable ts0. Your job is to use a variety of natural date strings to extract one or more values from ts0.

After you are done, you will have three new variables - ts1, ts2, and ts3. You can slice these further to extract only the first and last entries of each. Try doing this after your submission for more practice.

Instructions

  • Extract data from ts0 for a single hour - the hour from 9pm to 10pm on 2010-10-11. Assign it to ts1.
  • Extract data from ts0 for a single day - July 4th, 2010 - and assign it to ts2.
  • Extract data from ts0 for the second half of December 2010 - 12/15/2010 to 12/31/2010. Assign it to ts3.
1
2
3
# Extract the hour from 9pm to 10pm on '2010-10-11': ts1
ts1 = time_series.loc['2010-10-11 21:00:00':'2010-10-11 22:00:00']
ts1.head()
1
2
3
2010-10-11 21:00:00    63.842622
2010-10-11 22:00:00    83.688689
dtype: float64
1
2
3
# Extract '2010-07-04' from ts0: ts2
ts2 = time_series.loc['2010-07-04']
ts2.head()
1
2
3
4
5
6
2010-07-04 00:00:00    63.819533
2010-07-04 01:00:00    80.039948
2010-07-04 02:00:00    53.012081
2010-07-04 03:00:00    54.490182
2010-07-04 04:00:00    66.200439
dtype: float64
1
2
3
# Extract data from '2010-12-15' to '2010-12-31': ts3
ts3 = time_series.loc['2010-12-15':'2010-12-31']
ts3.head()
1
2
3
4
5
6
2010-12-15 00:00:00    52.328605
2010-12-15 01:00:00    62.788830
2010-12-15 02:00:00    56.618796
2010-12-15 03:00:00    50.490368
2010-12-15 04:00:00    55.340804
dtype: float64

Reindexing the Index

Reindexing is useful in preparation for adding or otherwise combining two time series data sets. To reindex the data, we provide a new index and ask pandas to try and match the old data to the new index. If data is unavailable for one of the new index dates or times, you can tell pandas how to fill it in; otherwise, pandas fills the missing entries with NaN by default.

In this exercise, two time series data sets containing daily data have been pre-loaded for you, each indexed by dates. The first, ts1, includes weekends, but the second, ts2, does not. The goal is to combine the two data sets in a sensible way. Your job is to reindex the second data set so that it has weekends as well, and then add it to the first. When you are done, it would be informative to inspect your results.

Instructions

  • Create a new time series ts3 by reindexing ts2 with the index of ts1. To do this, call .reindex() on ts2 and pass in the index of ts1 (ts1.index).
  • Create another new time series, ts4, by calling the same .reindex() as above, but also specifying a fill method, using the keyword argument method=”ffill” to forward-fill values.
  • Add ts1 + ts2. Assign the result to sum12.
  • Add ts1 + ts3. Assign the result to sum13.
  • Add ts1 + ts4. Assign the result to sum14.
1
2
3
4
5
6
ts1_index = pd.DatetimeIndex(['2016-07-01', '2016-07-02', '2016-07-03', '2016-07-04',
                              '2016-07-05', '2016-07-06', '2016-07-07', '2016-07-08',
                              '2016-07-09', '2016-07-10', '2016-07-11', '2016-07-12',
                              '2016-07-13', '2016-07-14', '2016-07-15', '2016-07-16',
                              '2016-07-17'])
ts1_index
1
2
3
4
5
6
DatetimeIndex(['2016-07-01', '2016-07-02', '2016-07-03', '2016-07-04',
               '2016-07-05', '2016-07-06', '2016-07-07', '2016-07-08',
               '2016-07-09', '2016-07-10', '2016-07-11', '2016-07-12',
               '2016-07-13', '2016-07-14', '2016-07-15', '2016-07-16',
               '2016-07-17'],
              dtype='datetime64[ns]', freq=None)
1
2
ts1_values = np.array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16])
ts1_values
1
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16])
1
2
ts1 = pd.Series(ts1_values, index=ts1_index)
ts1.head()
1
2
3
4
5
6
2016-07-01    0
2016-07-02    1
2016-07-03    2
2016-07-04    3
2016-07-05    4
dtype: int32
1
2
3
4
5
6
ts2_index = pd.DatetimeIndex(['2016-07-01', '2016-07-04', '2016-07-05', '2016-07-06',
                              '2016-07-07', '2016-07-08', '2016-07-11', '2016-07-12',
                              '2016-07-13', '2016-07-14', '2016-07-15'])
ts2_values = np.array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10])
ts2 = pd.Series(ts2_values, index=ts2_index)
ts2.head()
1
2
3
4
5
6
2016-07-01    0
2016-07-04    1
2016-07-05    2
2016-07-06    3
2016-07-07    4
dtype: int32
1
2
3
# Reindex without fill method: ts3
ts3 = ts2.reindex(ts1.index)
ts3
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
2016-07-01     0.0
2016-07-02     NaN
2016-07-03     NaN
2016-07-04     1.0
2016-07-05     2.0
2016-07-06     3.0
2016-07-07     4.0
2016-07-08     5.0
2016-07-09     NaN
2016-07-10     NaN
2016-07-11     6.0
2016-07-12     7.0
2016-07-13     8.0
2016-07-14     9.0
2016-07-15    10.0
2016-07-16     NaN
2016-07-17     NaN
dtype: float64
1
2
3
# Reindex with fill method, using forward fill: ts4
ts4 = ts2.reindex(ts1.index, method='ffill')
ts4
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
2016-07-01     0
2016-07-02     0
2016-07-03     0
2016-07-04     1
2016-07-05     2
2016-07-06     3
2016-07-07     4
2016-07-08     5
2016-07-09     5
2016-07-10     5
2016-07-11     6
2016-07-12     7
2016-07-13     8
2016-07-14     9
2016-07-15    10
2016-07-16    10
2016-07-17    10
dtype: int32
1
2
3
# Combine ts1 + ts2: sum12
sum12 = ts1 + ts2
sum12
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
2016-07-01     0.0
2016-07-02     NaN
2016-07-03     NaN
2016-07-04     4.0
2016-07-05     6.0
2016-07-06     8.0
2016-07-07    10.0
2016-07-08    12.0
2016-07-09     NaN
2016-07-10     NaN
2016-07-11    16.0
2016-07-12    18.0
2016-07-13    20.0
2016-07-14    22.0
2016-07-15    24.0
2016-07-16     NaN
2016-07-17     NaN
dtype: float64
1
2
3
# Combine ts1 + ts3: sum13
sum13 = ts1 + ts3
sum13
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
2016-07-01     0.0
2016-07-02     NaN
2016-07-03     NaN
2016-07-04     4.0
2016-07-05     6.0
2016-07-06     8.0
2016-07-07    10.0
2016-07-08    12.0
2016-07-09     NaN
2016-07-10     NaN
2016-07-11    16.0
2016-07-12    18.0
2016-07-13    20.0
2016-07-14    22.0
2016-07-15    24.0
2016-07-16     NaN
2016-07-17     NaN
dtype: float64
1
2
3
# Combine ts1 + ts4: sum14
sum14 = ts1 + ts4
sum14
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
2016-07-01     0
2016-07-02     1
2016-07-03     2
2016-07-04     4
2016-07-05     6
2016-07-06     8
2016-07-07    10
2016-07-08    12
2016-07-09    13
2016-07-10    14
2016-07-11    16
2016-07-12    18
2016-07-13    20
2016-07-14    22
2016-07-15    24
2016-07-16    25
2016-07-17    26
dtype: int32

Resampling pandas time series

Sales Data

1
2
3
4
sales = pd.read_csv('data/sales_data/sales-feb-2015.csv',
                    parse_dates=True,
                    index_col='Date')
sales.head()
                             Company   Product  Units
Date
2015-02-02 08:33:00            Hooli  Software      3
2015-02-02 20:54:00        Mediacore  Hardware      9
2015-02-03 14:14:00          Initech  Software     13
2015-02-04 15:36:00        Streeplex  Software     13
2015-02-04 21:52:00  Acme Coporation  Hardware     14

Resampling

  • Statistical methods over different time intervals
    1
    2
    3
    4
    
    mean()
    sum()
    count()
    # etc.
    
  • Down-sampling
    • reduce datetime rows to slower frequency
  • Up-sampling
    • increase datetime rows to faster frequency
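
A minimal sketch (synthetic hourly data, not the sales set) contrasting the two directions:

# Down- vs up-sampling a small hourly Series
hourly = pd.Series(range(6),
                   index=pd.date_range('2015-02-02', periods=6, freq='h'))
hourly.resample('D').sum()        # down-sample: 6 hourly rows -> 1 daily row
hourly.resample('30min').ffill()  # up-sample: new 30-minute rows forward-filled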

Aggregating means

1
2
daily_mean = sales.resample('D').mean(numeric_only=True)
daily_mean.head()
            Units
Date
2015-02-02    6.0
2015-02-03   13.0
2015-02-04   13.5
2015-02-05   14.5
2015-02-06    NaN

Verifying

1
daily_mean.loc['2015-2-2']
1
2
Units    6.0
Name: 2015-02-02 00:00:00, dtype: float64
1
sales.loc['2015-2-2', 'Units']
1
2
3
4
Date
2015-02-02 08:33:00    3
2015-02-02 20:54:00    9
Name: Units, dtype: int64
1
sales.loc['2015-2-2', 'Units'].mean(numeric_only=True)
1
6.0

Method chaining

1
sales.resample('D').sum().head()
                                     Company           Product  Units
Date
2015-02-02                    HooliMediacore  SoftwareHardware     12
2015-02-03                           Initech          Software     13
2015-02-04          StreeplexAcme Coporation  SoftwareHardware     27
2015-02-05              Acme CoporationHooli   SoftwareService     29
2015-02-06                                 0                 0      0
1
sales.resample('D').sum().max(numeric_only=True)
1
2
Units    29
dtype: int64

Resampling strings

1
sales.resample('W').count()
            Company  Product  Units
Date
2015-02-08        8        8      8
2015-02-15        4        4      4
2015-02-22        5        5      5
2015-03-01        3        3      3

Resampling frequencies

Input         Description
‘min’, ‘T’    minute
‘H’           hour
‘D’           day
‘B’           business day
‘W’           week
‘M’           month
‘Q’           quarter
‘A’           year

Multiplying frequencies

1
sales.loc[:, 'Units'].resample('2W').sum()
1
2
3
4
5
Date
2015-02-08    82
2015-02-22    79
2015-03-08    15
Freq: 2W-SUN, Name: Units, dtype: int64

Upsampling

1
2
two_days = sales.loc['2015-2-4':'2015-2-5', 'Units']
two_days
1
2
3
4
5
6
Date
2015-02-04 15:36:00    13
2015-02-04 21:52:00    14
2015-02-05 01:53:00    19
2015-02-05 22:05:00    10
Name: Units, dtype: int64

Upsampling and filling

1
two_days.resample('4H').ffill()
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
C:\Users\trent\AppData\Local\Temp\ipykernel_30012\1008547592.py:1: FutureWarning: 'H' is deprecated and will be removed in a future version, please use 'h' instead.
  two_days.resample('4H').ffill()





Date
2015-02-04 12:00:00     NaN
2015-02-04 16:00:00    13.0
2015-02-04 20:00:00    13.0
2015-02-05 00:00:00    14.0
2015-02-05 04:00:00    19.0
2015-02-05 08:00:00    19.0
2015-02-05 12:00:00    19.0
2015-02-05 16:00:00    19.0
2015-02-05 20:00:00    19.0
Freq: 4h, Name: Units, dtype: float64
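
Other fill strategies work the same way; a short sketch on the same two_days Series (outputs not shown here):

two_days.resample('4h').bfill()        # backward fill from the next observation
two_days.resample('4h').interpolate()  # linear interpolation between observations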

Exercises

Resampling and frequency

Pandas provides methods for resampling time series data. When downsampling or upsampling, the syntax is similar, but the methods called are different. Both use the concept of ‘method chaining’ - df.method1().method2().method3() - to direct the output from one method call to the input of the next, and so on, as a sequence of operations, one feeding into the next.

For example, if you have hourly data, and just need daily data, pandas will not guess how to throw out the 23 of 24 points. You must specify this in the method. One approach, for instance, could be to take the mean, as in df.resample('D').mean(numeric_only=True).

In this exercise, a data set containing hourly temperature data has been pre-loaded for you. Your job is to resample the data using a variety of aggregation methods to answer a few questions.

Instructions

  • Downsample the ‘Temperature’ column of df to 6 hour data using .resample(‘6h’) and .mean(numeric_only=True). Assign the result to df1.
  • Downsample the ‘Temperature’ column of df to daily data using .resample(‘D’) and then count the number of data points in each day with .count(). Assign the result to df2.
1
2
3
4
df = pd.read_csv('DataCamp-master/11-pandas-foundations/_datasets/weather_data_austin_2010.csv',
                 parse_dates=True,
                 index_col='Date')
df.head()
TemperatureDewPointPressure
Date
2010-01-01 00:00:0046.237.51.0
2010-01-01 01:00:0044.637.11.0
2010-01-01 02:00:0044.136.91.0
2010-01-01 03:00:0043.836.91.0
2010-01-01 04:00:0043.536.81.0
1
2
3
# Downsample to 6 hour data and aggregate by mean: df1
df1 = df.Temperature.resample('6H').mean(numeric_only=True)
df1.head()
1
2
3
4
5
6
7
8
9
10
11
12
13
14
C:\Users\trent\AppData\Local\Temp\ipykernel_30012\1956605538.py:2: FutureWarning: 'H' is deprecated and will be removed in a future version, please use 'h' instead.
  df1 = df.Temperature.resample('6H').mean(numeric_only=True)





Date
2010-01-01 00:00:00    44.200000
2010-01-01 06:00:00    45.933333
2010-01-01 12:00:00    57.766667
2010-01-01 18:00:00    49.450000
2010-01-02 00:00:00    44.516667
Freq: 6h, Name: Temperature, dtype: float64
1
2
3
# Downsample to daily data and count the number of data points: df2
df2 = df.Temperature.resample('D').count()
df2.head()
1
2
3
4
5
6
7
Date
2010-01-01    24
2010-01-02    24
2010-01-03    24
2010-01-04    24
2010-01-05    24
Freq: D, Name: Temperature, dtype: int64

Separating and resampling

With pandas, you can resample in different ways on different subsets of your data. For example, resampling different months of data with different aggregations. In this exercise, the data set containing hourly temperature data from the last exercise has been pre-loaded.

Your job is to resample the data using a variety of aggregation methods. The DataFrame is available in the workspace as df. You will be working with the 'Temperature' column.

Instructions

  • Use partial string indexing to extract temperature data for August 2010 into august.
  • Use the temperature data for August and downsample to find the daily maximum temperatures. Store the result in august_highs.
  • Use partial string indexing to extract temperature data for February 2010 into february.
  • Use the temperature data for February and downsample to find the daily minimum temperatures. Store the result in february_lows.
1
2
3
# Extract temperature data for August: august
august = df.loc['2010-08', 'Temperature']
august.head()
1
2
3
4
5
6
7
Date
2010-08-01 00:00:00    79.0
2010-08-01 01:00:00    77.4
2010-08-01 02:00:00    76.4
2010-08-01 03:00:00    75.7
2010-08-01 04:00:00    75.1
Name: Temperature, dtype: float64
1
2
3
# Downsample to obtain only the daily highest temperatures in August: august_highs
august_highs = august.resample('D').max(numeric_only=True)
august_highs.head()
1
2
3
4
5
6
7
Date
2010-08-01    95.0
2010-08-02    95.0
2010-08-03    95.1
2010-08-04    95.1
2010-08-05    95.1
Freq: D, Name: Temperature, dtype: float64
1
2
3
# Extract temperature data for February: february
february = df.loc['2010-02', 'Temperature']
february.head()
1
2
3
4
5
6
7
Date
2010-02-01 00:00:00    47.8
2010-02-01 01:00:00    46.8
2010-02-01 02:00:00    46.1
2010-02-01 03:00:00    45.5
2010-02-01 04:00:00    44.9
Name: Temperature, dtype: float64
1
2
3
# Downsample to obtain the daily lowest temperatures in February: february_lows
february_lows = february.resample('D').min(numeric_only=True)
february_lows.head()
1
2
3
4
5
6
7
Date
2010-02-01    43.8
2010-02-02    44.3
2010-02-03    44.6
2010-02-04    44.5
2010-02-05    44.3
Freq: D, Name: Temperature, dtype: float64

Rolling mean and frequency

In this exercise, some hourly weather data is pre-loaded for you. You will continue to practice resampling, this time using rolling means.

Rolling means (or moving averages) are generally used to smooth out short-term fluctuations in time series data and highlight long-term trends.

To use the .rolling() method, you must always use method chaining, first calling .rolling() and then chaining an aggregation method after it. For example, with a Series hourly_data, hourly_data.rolling(window=24).mean(numeric_only=True) would compute new values for each hourly point, based on a 24-hour window stretching out behind each point. The frequency of the output data is the same: it is still hourly. Such an operation is useful for smoothing time series data.

Your job is to resample the data using the combination of .rolling() and .mean(numeric_only=True). You will work with the same DataFrame df from the previous exercise.

Instructions

  • Use partial string indexing to extract temperature data from August 1 2010 to August 15 2010. Assign to unsmoothed.
  • Use .rolling() with a 24 hour window to smooth the mean temperature data. Assign the result to smoothed.
  • Use a dictionary to create a new DataFrame august with the time series smoothed and unsmoothed as columns.
  • Plot both the columns of august as line plots using the .plot() method.
1
2
3
# Extract data from 2010-Aug-01 to 2010-Aug-15: unsmoothed
unsmoothed = df['Temperature']['2010-Aug-01':'2010-Aug-15']
unsmoothed.head()
1
2
3
4
5
6
7
Date
2010-08-01 00:00:00    79.0
2010-08-01 01:00:00    77.4
2010-08-01 02:00:00    76.4
2010-08-01 03:00:00    75.7
2010-08-01 04:00:00    75.1
Name: Temperature, dtype: float64
1
2
3
# Apply a rolling mean with a 24 hour window: smoothed
smoothed = df['Temperature']['2010-Aug-01':'2010-Aug-15'].rolling(window=24).mean(numeric_only=True)
smoothed.iloc[20:30]
1
2
3
4
5
6
7
8
9
10
11
12
Date
2010-08-01 20:00:00          NaN
2010-08-01 21:00:00          NaN
2010-08-01 22:00:00          NaN
2010-08-01 23:00:00    84.350000
2010-08-02 00:00:00    84.354167
2010-08-02 01:00:00    84.354167
2010-08-02 02:00:00    84.358333
2010-08-02 03:00:00    84.362500
2010-08-02 04:00:00    84.366667
2010-08-02 05:00:00    84.366667
Name: Temperature, dtype: float64
1
2
3
# Create a new DataFrame with columns smoothed and unsmoothed: august
august = pd.DataFrame({'smoothed':smoothed, 'unsmoothed':unsmoothed})
august.head()
smoothedunsmoothed
Date
2010-08-01 00:00:00NaN79.0
2010-08-01 01:00:00NaN77.4
2010-08-01 02:00:00NaN76.4
2010-08-01 03:00:00NaN75.7
2010-08-01 04:00:00NaN75.1
1
2
# Plot both smoothed and unsmoothed data using august.plot().
august.plot()
1
<Axes: xlabel='Date'>

png

Resample and roll with it

As of pandas version 0.18.0, the interface for applying rolling transformations to time series has become more consistent and flexible, and feels somewhat like a groupby (If you do not know what a groupby is, don’t worry, you will learn about it in the next course!).

You can now flexibly chain together resampling and rolling operations. In this exercise, the same weather data from the previous exercises has been pre-loaded for you. Your job is to extract one month of data, resample to find the daily high temperatures, and then use a rolling and aggregation operation to smooth the data.

Instructions

  • Use partial string indexing to extract August 2010 temperature data, and assign to august.
  • Resample to daily frequency, saving the maximum daily temperatures, and assign the result to daily_highs.
  • As part of one long method chain, repeat the above resampling (or you can re-use daily_highs) and then combine it with .rolling() to apply a 7 day .mean(numeric_only=True) (with window=7 inside .rolling()) so as to smooth the daily highs. Assign the result to daily_highs_smoothed and print the result.
1
2
3
# Extract the August 2010 data: august
august = df['Temperature']['2010-08']
august.head()
1
2
3
4
5
6
7
Date
2010-08-01 00:00:00    79.0
2010-08-01 01:00:00    77.4
2010-08-01 02:00:00    76.4
2010-08-01 03:00:00    75.7
2010-08-01 04:00:00    75.1
Name: Temperature, dtype: float64
1
2
3
# Resample to daily data, aggregating by max: daily_highs
daily_highs = august.resample('D').max(numeric_only=True)
daily_highs.head()
1
2
3
4
5
6
7
Date
2010-08-01    95.0
2010-08-02    95.0
2010-08-03    95.1
2010-08-04    95.1
2010-08-05    95.1
Freq: D, Name: Temperature, dtype: float64
1
2
3
# Use a rolling 7-day window with method chaining to smooth the daily high temperatures in August
daily_highs_smoothed = daily_highs.rolling(window=7).mean(numeric_only=True)
daily_highs_smoothed.head(10)
1
2
3
4
5
6
7
8
9
10
11
12
Date
2010-08-01          NaN
2010-08-02          NaN
2010-08-03          NaN
2010-08-04          NaN
2010-08-05          NaN
2010-08-06          NaN
2010-08-07    95.114286
2010-08-08    95.142857
2010-08-09    95.171429
2010-08-10    95.171429
Freq: D, Name: Temperature, dtype: float64
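The instructions above mention doing this as a single chain; here is a minimal sketch of that one-liner on hypothetical hourly temperatures (not the pre-loaded df):

import numpy as np
import pandas as pd

# Hypothetical hourly August temperatures, only to illustrate the chain
idx = pd.date_range('2010-08-01', '2010-08-31 23:00', freq='h')
august_demo = pd.Series(85 + 10 * np.sin(np.arange(len(idx)) / 24), index=idx)

# Resample to daily highs, then smooth with a 7-day rolling mean, in one method chain
daily_highs_smoothed_demo = august_demo.resample('D').max().rolling(window=7).mean()
print(daily_highs_smoothed_demo.head(10))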

Manipulating pandas time series

Sales data

1
2
3
sales = pd.read_csv('data/sales_data/sales-feb-2015.csv',
                    parse_dates=['Date'])
sales.head()
DateCompanyProductUnits
02015-02-02 08:33:00HooliSoftware3
12015-02-02 20:54:00MediacoreHardware9
22015-02-03 14:14:00InitechSoftware13
32015-02-04 15:36:00StreeplexSoftware13
42015-02-04 21:52:00Acme CoporationHardware14

String methods

1
sales['Company'].str.upper().head()
1
2
3
4
5
6
0              HOOLI
1          MEDIACORE
2            INITECH
3          STREEPLEX
4    ACME COPORATION
Name: Company, dtype: object

Substring matching

1
sales['Product'].str.contains('ware').head()
1
2
3
4
5
6
0    True
1    True
2    True
3    True
4    True
Name: Product, dtype: bool

Boolean arithmetic

1
2
3
print(True + False)
print(True + True)
print(False + False)
1
2
3
1
2
0

Boolean reductions

1
sales['Product'].str.contains('ware').sum()
1
14
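Because True counts as 1 and False as 0, the mean of the same kind of Boolean Series gives the fraction of matching rows; a minimal sketch on made-up data:

import pandas as pd

products = pd.Series(['Software', 'Hardware', 'Service'])
is_ware = products.str.contains('ware')

print(is_ware.sum())   # 2 -> number of matching rows
print(is_ware.mean())  # 0.666... -> fraction of rows that match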

Datetime methods

1
sales['Date'].dt.hour.head()
1
2
3
4
5
6
0     8
1    20
2    14
3    15
4    21
Name: Date, dtype: int32

Set timezone

1
2
central = sales['Date'].dt.tz_localize('US/Central')
central.head()
1
2
3
4
5
6
0   2015-02-02 08:33:00-06:00
1   2015-02-02 20:54:00-06:00
2   2015-02-03 14:14:00-06:00
3   2015-02-04 15:36:00-06:00
4   2015-02-04 21:52:00-06:00
Name: Date, dtype: datetime64[ns, US/Central]

Convert timezone

1
central.dt.tz_convert('US/Eastern').head()
1
2
3
4
5
6
0   2015-02-02 09:33:00-05:00
1   2015-02-02 21:54:00-05:00
2   2015-02-03 15:14:00-05:00
3   2015-02-04 16:36:00-05:00
4   2015-02-04 22:52:00-05:00
Name: Date, dtype: datetime64[ns, US/Eastern]

Method chaining

1
sales['Date'].dt.tz_localize('US/Central').dt.tz_convert('US/Eastern').head()
1
2
3
4
5
6
0   2015-02-02 09:33:00-05:00
1   2015-02-02 21:54:00-05:00
2   2015-02-03 15:14:00-05:00
3   2015-02-04 16:36:00-05:00
4   2015-02-04 22:52:00-05:00
Name: Date, dtype: datetime64[ns, US/Eastern]

World Population

1
2
3
4
population = pd.read_csv('DataCamp-master/11-pandas-foundations/_datasets/world_population.csv',
                         parse_dates=True,
                         index_col= 'Date')
population
Total Population
Date
1960-01-013034970564
1970-01-013684822701
1980-01-014436590356
1990-01-015282715991
2000-01-016115974486
2010-01-016924282937

Upsample population

1
population.resample('A').first().head(11)
1
2
C:\Users\trent\AppData\Local\Temp\ipykernel_30012\445972408.py:1: FutureWarning: 'A' is deprecated and will be removed in a future version, please use 'YE' instead.
  population.resample('A').first().head(11)
Total Population
Date
1960-12-313.034971e+09
1961-12-31NaN
1962-12-31NaN
1963-12-31NaN
1964-12-31NaN
1965-12-31NaN
1966-12-31NaN
1967-12-31NaN
1968-12-31NaN
1969-12-31NaN
1970-12-313.684823e+09

Interpolate missing data

1
population.resample('A').first().interpolate('linear').head(11)
1
2
C:\Users\trent\AppData\Local\Temp\ipykernel_30012\157923906.py:1: FutureWarning: 'A' is deprecated and will be removed in a future version, please use 'YE' instead.
  population.resample('A').first().interpolate('linear').head(11)
Total Population
Date
1960-12-313.034971e+09
1961-12-313.099956e+09
1962-12-313.164941e+09
1963-12-313.229926e+09
1964-12-313.294911e+09
1965-12-313.359897e+09
1966-12-313.424882e+09
1967-12-313.489867e+09
1968-12-313.554852e+09
1969-12-313.619837e+09
1970-12-313.684823e+09
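Per the FutureWarnings above, recent pandas releases (2.2+) prefer the 'YE' (year-end) alias over 'A'; a minimal sketch on a small copy of the population values, assuming such a pandas version:

import pandas as pd

# A few of the population values from above, just to demonstrate the alias
population_demo = pd.DataFrame(
    {'Total Population': [3034970564, 3684822701, 4436590356]},
    index=pd.to_datetime(['1960-01-01', '1970-01-01', '1980-01-01']))

# 'YE' (year-end) replaces the deprecated 'A' alias in pandas >= 2.2
yearly = population_demo.resample('YE').first().interpolate('linear')
print(yearly.head())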

Exercises

Method chaining and filtering

We’ve seen that pandas supports method chaining. This technique can be very powerful when cleaning and filtering data.

In this exercise, a DataFrame containing flight departure data for a single airline and a single airport for the month of July 2015 has been pre-loaded. Your job is to use .str() filtering and method chaining to generate summary statistics on flight delays each day to Dallas.

Instructions

  • Use .str.strip() to strip extra whitespace from df.columns. Assign the result back to df.columns.
  • In the ‘Destination Airport’ column, extract all entries where Dallas (‘DAL’) is the destination airport. Use .str.contains(‘DAL’) for this and store the result in dallas.
  • Resample dallas such that you get the total number of departures each day. Store the result in daily_departures.
  • Generate summary statistics for daily Dallas departures using .describe(). Store the result in stats.
1
2
3
4
5
df = pd.read_csv('DataCamp-master/11-pandas-foundations/_datasets/austin_airport_departure_data_2015_july.csv',
                 skiprows=15,
                 parse_dates=True,
                 index_col='Date (MM/DD/YYYY)')
df.head()
Carrier CodeFlight NumberTail NumberDestination AirportScheduled Departure TimeActual Departure TimeScheduled Elapsed Time(Minutes)Actual Elapsed Time(Minutes)Departure Delay(Minutes)Wheels-off TimeTaxi-out Time(Minutes)DelayCarrier(Minutes)DelayWeather(Minutes)DelayNational Aviation System(Minutes)DelaySecurity(Minutes)DelayLate Aircraft Arrival(Minutes)Unnamed: 17
Date (MM/DD/YYYY)
2015-07-01WN103.0N8607MMDW06:3006:52165.0147.022.007:019.00.00.00.00.00.0NaN
2015-07-01WN144.0N8609ASAN20:5520:50170.0158.0-5.021:0313.00.00.00.00.00.0NaN
2015-07-01WN178.0N646SWELP20:3020:4590.080.015.020:5510.00.00.00.00.00.0NaN
2015-07-01WN232.0N204WNATL05:4505:49135.0137.04.006:0112.00.00.00.00.00.0NaN
2015-07-01WN238.0N233LVDAL12:3012:3455.048.04.012:417.00.00.00.00.00.0NaN
1
2
3
4
# Strip extra whitespace from the column names: df.columns
print(f'Before: \n {df.columns}')
df.columns = df.columns.str.strip()
print(f'After: \n {df.columns}')
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
Before: 
 Index(['  Carrier Code', 'Flight Number', 'Tail Number',
       'Destination Airport ', 'Scheduled Departure Time',
       'Actual Departure Time', 'Scheduled Elapsed Time(Minutes)',
       'Actual Elapsed Time(Minutes)', 'Departure Delay(Minutes)',
       'Wheels-off Time', 'Taxi-out Time(Minutes)', 'DelayCarrier(Minutes)',
       'DelayWeather(Minutes)', 'DelayNational Aviation System(Minutes)',
       'DelaySecurity(Minutes)', 'DelayLate Aircraft Arrival(Minutes)',
       'Unnamed: 17'],
      dtype='object')
After: 
 Index(['Carrier Code', 'Flight Number', 'Tail Number', 'Destination Airport',
       'Scheduled Departure Time', 'Actual Departure Time',
       'Scheduled Elapsed Time(Minutes)', 'Actual Elapsed Time(Minutes)',
       'Departure Delay(Minutes)', 'Wheels-off Time', 'Taxi-out Time(Minutes)',
       'DelayCarrier(Minutes)', 'DelayWeather(Minutes)',
       'DelayNational Aviation System(Minutes)', 'DelaySecurity(Minutes)',
       'DelayLate Aircraft Arrival(Minutes)', 'Unnamed: 17'],
      dtype='object')
1
2
3
# Extract data for which the destination airport is Dallas: dallas
dallas = df['Destination Airport'].str.contains('DAL')
dallas.head()
1
2
3
4
5
6
7
Date (MM/DD/YYYY)
2015-07-01    False
2015-07-01    False
2015-07-01    False
2015-07-01    False
2015-07-01     True
Name: Destination Airport, dtype: object
1
2
3
# Compute the total number of Dallas departures each day: daily_departures
daily_departures = dallas.resample('D').sum()
daily_departures.head()
1
2
3
4
5
6
7
Date (MM/DD/YYYY)
2015-07-01    10
2015-07-02    10
2015-07-03    11
2015-07-04     3
2015-07-05     9
Name: Destination Airport, dtype: object
1
2
3
# Generate the summary statistics for daily Dallas departures: stats
stats = daily_departures.describe()
stats
1
2
3
4
5
count     31
unique     5
top       10
freq      18
Name: Destination Airport, dtype: int64

Missing values and interpolation

One common application of interpolation in data analysis is to fill in missing data.

In this exercise, noisy measured data that has some dropped or otherwise missing values has been loaded. The goal is to compare two time series, and then look at summary statistics of the differences. The problem is that one of the data sets is missing data at some of the times. The pre-loaded data ts1 has values for all times, yet the data set ts2 does not: it is missing data for the weekends.

Your job is to first interpolate to fill in the data for all days. Then, compute the differences between the two data sets, now that they both have full support for all times. Finally, generate the summary statistics that describe the distribution of differences.

Instructions

  • Replace the index of ts2 with that of ts1, and then fill in the missing values of ts2 by using .interpolate(method=’linear’). Save the result as ts2_interp.
  • Compute the difference between ts1 and ts2_interp. Take the absolute value of the difference with np.abs(), and assign the result to differences.
  • Generate and print summary statistics of the differences with .describe() and print().
1
2
3
4
5
6
ts1_index = pd.DatetimeIndex(['2016-07-01', '2016-07-02', '2016-07-03', '2016-07-04',
                              '2016-07-05', '2016-07-06', '2016-07-07', '2016-07-08',
                              '2016-07-09', '2016-07-10', '2016-07-11', '2016-07-12',
                              '2016-07-13', '2016-07-14', '2016-07-15', '2016-07-16',
                              '2016-07-17'])
ts1_index
1
2
3
4
5
6
DatetimeIndex(['2016-07-01', '2016-07-02', '2016-07-03', '2016-07-04',
               '2016-07-05', '2016-07-06', '2016-07-07', '2016-07-08',
               '2016-07-09', '2016-07-10', '2016-07-11', '2016-07-12',
               '2016-07-13', '2016-07-14', '2016-07-15', '2016-07-16',
               '2016-07-17'],
              dtype='datetime64[ns]', freq=None)
1
2
ts1_values = np.array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16])
ts1_values
1
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16])
1
2
ts1 = pd.Series(ts1_values, index=ts1_index)
ts1.head()
1
2
3
4
5
6
2016-07-01    0
2016-07-02    1
2016-07-03    2
2016-07-04    3
2016-07-05    4
dtype: int32
1
2
3
4
5
6
ts2_index = pd.DatetimeIndex(['2016-07-01', '2016-07-04', '2016-07-05', '2016-07-06',
                              '2016-07-07', '2016-07-08', '2016-07-11', '2016-07-12',
                              '2016-07-13', '2016-07-14', '2016-07-15'])
ts2_values = np.array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10])
ts2 = pd.Series(ts2_values, index=ts2_index)
ts2.head()
1
2
3
4
5
6
2016-07-01    0
2016-07-04    1
2016-07-05    2
2016-07-06    3
2016-07-07    4
dtype: int32
1
2
3
# Reindex ts2 to ts1's index, then use linear interpolation to fill in the NaNs: ts2_interp
ts2_interp = ts2.reindex(ts1.index).interpolate(method='linear')
ts2_interp
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
2016-07-01     0.000000
2016-07-02     0.333333
2016-07-03     0.666667
2016-07-04     1.000000
2016-07-05     2.000000
2016-07-06     3.000000
2016-07-07     4.000000
2016-07-08     5.000000
2016-07-09     5.333333
2016-07-10     5.666667
2016-07-11     6.000000
2016-07-12     7.000000
2016-07-13     8.000000
2016-07-14     9.000000
2016-07-15    10.000000
2016-07-16    10.000000
2016-07-17    10.000000
dtype: float64
1
2
3
# Compute the absolute difference of ts1 and ts2_interp: differences 
differences = np.abs(ts1 - ts2_interp)
differences
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
2016-07-01    0.000000
2016-07-02    0.666667
2016-07-03    1.333333
2016-07-04    2.000000
2016-07-05    2.000000
2016-07-06    2.000000
2016-07-07    2.000000
2016-07-08    2.000000
2016-07-09    2.666667
2016-07-10    3.333333
2016-07-11    4.000000
2016-07-12    4.000000
2016-07-13    4.000000
2016-07-14    4.000000
2016-07-15    4.000000
2016-07-16    5.000000
2016-07-17    6.000000
dtype: float64
1
2
# Generate and print summary statistics of the differences
differences.describe()
1
2
3
4
5
6
7
8
9
count    17.000000
mean      2.882353
std       1.585267
min       0.000000
25%       2.000000
50%       2.666667
75%       4.000000
max       6.000000
dtype: float64

Time zones and conversion

Time zone handling with pandas typically assumes that you are handling the Index of the Series. In this exercise, you will learn how to handle timezones that are associated with datetimes in the column data, and not just the Index.

You will work with the flight departure dataset again, and this time you will select Los Angeles ('LAX') as the destination airport.

Here we will use a mask to ensure that we only compute on data we actually want.

Instructions

  • Create a Boolean mask, mask, such that if the ‘Destination Airport’ column of df equals ‘LAX’, the result is True, and otherwise, it is False.
  • Use the mask to extract only the LAX rows. Assign the result to la.
  • Concatenate the two columns la[‘Date (MM/DD/YYYY)’] and la[‘Wheels-off Time’] with a ’ ‘ space in between. Pass this to pd.to_datetime() to create a datetime array of all the times the LAX-bound flights left the ground.
  • Use Series.dt.tz_localize() to localize the time to ‘US/Central’.
  • Use the .dt.tz_convert() method to convert datetimes from ‘US/Central’ to ‘US/Pacific’.
1
2
3
4
5
df = pd.read_csv('DataCamp-master/11-pandas-foundations/_datasets/austin_airport_departure_data_2015_july.csv',
                 skiprows=15,
                 parse_dates=True)
df.columns = df.columns.str.strip()
df.head()
Carrier CodeDate (MM/DD/YYYY)Flight NumberTail NumberDestination AirportScheduled Departure TimeActual Departure TimeScheduled Elapsed Time(Minutes)Actual Elapsed Time(Minutes)Departure Delay(Minutes)Wheels-off TimeTaxi-out Time(Minutes)DelayCarrier(Minutes)DelayWeather(Minutes)DelayNational Aviation System(Minutes)DelaySecurity(Minutes)DelayLate Aircraft Arrival(Minutes)Unnamed: 17
0WN07/01/2015103.0N8607MMDW06:3006:52165.0147.022.007:019.00.00.00.00.00.0NaN
1WN07/01/2015144.0N8609ASAN20:5520:50170.0158.0-5.021:0313.00.00.00.00.00.0NaN
2WN07/01/2015178.0N646SWELP20:3020:4590.080.015.020:5510.00.00.00.00.00.0NaN
3WN07/01/2015232.0N204WNATL05:4505:49135.0137.04.006:0112.00.00.00.00.00.0NaN
4WN07/01/2015238.0N233LVDAL12:3012:3455.048.04.012:417.00.00.00.00.00.0NaN
1
2
# Build a Boolean mask to filter out all the 'LAX' departure flights: mask
mask = df['Destination Airport'] == 'LAX'
1
2
3
# Use the mask to subset the data: la
la = df[mask]
la.head()
Carrier CodeDate (MM/DD/YYYY)Flight NumberTail NumberDestination AirportScheduled Departure TimeActual Departure TimeScheduled Elapsed Time(Minutes)Actual Elapsed Time(Minutes)Departure Delay(Minutes)Wheels-off TimeTaxi-out Time(Minutes)DelayCarrier(Minutes)DelayWeather(Minutes)DelayNational Aviation System(Minutes)DelaySecurity(Minutes)DelayLate Aircraft Arrival(Minutes)Unnamed: 17
33WN07/01/20151249.0N430WNLAX05:3005:29185.0173.0-1.005:4314.00.00.00.00.00.0NaN
55WN07/01/20154924.0N757LVLAX16:0016:15185.0169.015.016:2712.00.00.00.00.00.0NaN
91WN07/02/20151249.0N570WNLAX05:3005:38185.0171.08.005:479.00.00.00.00.00.0NaN
113WN07/02/20154924.0N379SWLAX16:0016:07185.0173.07.016:2316.00.00.00.00.00.0NaN
134WN07/03/20151249.0N487WNLAX05:1005:16185.0174.06.005:3014.00.00.00.00.00.0NaN
1
2
3
# Combine two columns of data to create a datetime series: times_tz_none 
times_tz_none = pd.to_datetime(la['Date (MM/DD/YYYY)'] + ' ' + la['Wheels-off Time'])
times_tz_none.head()
1
2
3
4
5
6
33    2015-07-01 05:43:00
55    2015-07-01 16:27:00
91    2015-07-02 05:47:00
113   2015-07-02 16:23:00
134   2015-07-03 05:30:00
dtype: datetime64[ns]
1
2
3
# Localize the time to US/Central: times_tz_central
times_tz_central = times_tz_none.dt.tz_localize('US/Central')
times_tz_central.head()
1
2
3
4
5
6
33    2015-07-01 05:43:00-05:00
55    2015-07-01 16:27:00-05:00
91    2015-07-02 05:47:00-05:00
113   2015-07-02 16:23:00-05:00
134   2015-07-03 05:30:00-05:00
dtype: datetime64[ns, US/Central]
1
2
3
# Convert the datetimes from US/Central to US/Pacific
times_tz_pacific = times_tz_central.dt.tz_convert('US/Pacific')
times_tz_pacific.head()
1
2
3
4
5
6
33    2015-07-01 03:43:00-07:00
55    2015-07-01 14:27:00-07:00
91    2015-07-02 03:47:00-07:00
113   2015-07-02 14:23:00-07:00
134   2015-07-03 03:30:00-07:00
dtype: datetime64[ns, US/Pacific]

Visualizing pandas time series

Topics

  • Line types
  • Plot types
  • Subplots
1
2
3
4
sp500 = pd.read_csv('data/sp500_2010-01-01_-_2015-12-31.csv',
                    parse_dates=True,
                    index_col= 'Date')
sp500.head()
OpenHighLowCloseAdj CloseVolume
Date
2010-01-041116.5600591133.8699951116.5600591132.9899901132.9899903991400000
2010-01-051132.6600341136.6300051129.6600341136.5200201136.5200202491020000
2010-01-061135.7099611139.1899411133.9499511137.1400151137.1400154972660000
2010-01-071136.2700201142.4599611131.3199461141.6899411141.6899415270680000
2010-01-081140.5200201145.3900151136.2199711144.9799801144.9799804389590000

Pandas plot

1
sp500['Close'].plot()
1
<Axes: xlabel='Date'>

png

Labels and title

1
2
sp500['Close'].plot(title='S&P 500')
plt.ylabel('Closing Price (US Dollars)')
1
Text(0, 0.5, 'Closing Price (US Dollars)')

png

One week

1
2
sp500.loc['2012-4-1':'2012-4-7', 'Close'].plot(title='S&P 500')
plt.ylabel('Closing Price (US Dollars)')
1
Text(0, 0.5, 'Closing Price (US Dollars)')

png

Plot styles

1
2
sp500.loc['2012-4', 'Close'].plot(style='k.-', title='S&P500')
plt.ylabel('Closing Price (US Dollars)')
1
Text(0, 0.5, 'Closing Price (US Dollars)')

png

More plot styles

  • Style format string
    • color (k: black)
    • marker (.: dot)
    • line type (-: solid)
Color      Marker       Line
b: blue    o: circle    :  (dotted)
g: green   *: star      -- (dashed)
r: red     s: square
c: cyan    +: plus
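Putting the pieces from the table above together, a style string combines color, marker, and line type in one argument; a minimal sketch with made-up values:

import pandas as pd
import matplotlib.pyplot as plt

demo = pd.Series([1, 3, 2, 4], index=pd.date_range('2012-04-01', periods=4))
demo.plot(style='g*--', title='green stars, dashed line')  # color 'g' + marker '*' + line '--'
plt.show()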

Area plot

1
2
sp500['Close'].plot(kind='area', title='S&P 500')
plt.ylabel('Closing Price (US Dollars)')
1
Text(0, 0.5, 'Closing Price (US Dollars)')

png

Multiple columns

1
sp500.loc['2012', ['Close','Volume']].plot(title='S&P 500')
1
<Axes: title={'center': 'S&P 500'}, xlabel='Date'>

png

Subplots

1
sp500.loc['2012', ['Close','Volume']].plot(subplots=True)
1
array([<Axes: xlabel='Date'>, <Axes: xlabel='Date'>], dtype=object)

png

Exercises

Plotting time series, datetime indexing

Pandas handles datetimes not only in your data, but also in your plotting.

In this exercise, some time series data has been pre-loaded. However, we have not parsed the date-like columns nor set the index, as we have done for you in the past!

The plot displayed is how pandas renders data with the default integer/positional index. Your job is to convert the 'Date' column from a collection of strings into a collection of datetime objects. Then, you will use this converted 'Date' column as your new index, and re-plot the data, noting the improved datetime awareness. After you are done, you can cycle between the two plots you generated by clicking on the ‘Previous Plot’ and ‘Next Plot’ buttons.

Before proceeding, look at the plot shown and observe how pandas handles data with the default integer index. Then, inspect the DataFrame df using the .head() method in the IPython Shell to get a feel for its structure.

Instructions

  • Use pd.to_datetime() to convert the ‘Date’ column to a collection of datetime objects, and assign back to df.Date.
  • Set the index to this updated ‘Date’ column, using df.set_index() with the optional keyword argument inplace=True, so that you don’t have to assign the result back to df.
  • Re-plot the DataFrame to see that the axis is now datetime aware. This code has been written for you.
1
2
3
df = pd.read_csv('DataCamp-master/11-pandas-foundations/_datasets/weather_data_austin_2010.csv',
                 usecols=[0, 3])
df.head()
TemperatureDate
046.220100101 00:00
144.620100101 01:00
244.120100101 02:00
343.820100101 03:00
443.520100101 04:00
1
2
# Plot the raw data before setting the datetime index
df.plot()
1
<Axes: >

png

1
2
3
# Convert the 'Date' column into a collection of datetime objects: df.Date
df.Date = pd.to_datetime(df.Date)
df.Date.head()
1
2
3
4
5
6
0   2010-01-01 00:00:00
1   2010-01-01 01:00:00
2   2010-01-01 02:00:00
3   2010-01-01 03:00:00
4   2010-01-01 04:00:00
Name: Date, dtype: datetime64[ns]
1
2
3
# Set the index to be the converted 'Date' column
df.set_index('Date', inplace=True)
df.head()
Temperature
Date
2010-01-01 00:00:0046.2
2010-01-01 01:00:0044.6
2010-01-01 02:00:0044.1
2010-01-01 03:00:0043.8
2010-01-01 04:00:0043.5
1
2
# Re-plot the DataFrame to see that the axis is now datetime aware!
df.plot()
1
<Axes: xlabel='Date'>

png

Plotting date ranges, partial indexing

Now that you have set the DatetimeIndex in your DataFrame, you have a much more powerful and flexible set of tools to use when plotting your time series data. Of these, one of the most convenient is partial string indexing and slicing. In this exercise, we’ve pre-loaded a full year of Austin 2010 weather data, with the index set to be the datetime parsed 'Date' column as shown in the previous exercise.

Your job is to use partial string indexing of the dates, in a variety of datetime string formats, to plot all the summer data and just one week of data together. After you are done, you can cycle between the two plots by clicking on the ‘Previous Plot’ and ‘Next Plot’ buttons.

First, remind yourself how to extract one month of temperature data using 'May 2010' as a key into df.Temperature[], and call head() to inspect the result: df.Temperature['May 2010'].head().

Instructions

  • Plot the summer temperatures using method chaining. The summer ranges from the months ‘2010-Jun’ to ‘2010-Aug’.
  • Plot the temperatures for one week in June using the same method chaining, but this time indexing with ‘2010-06-10’:’2010-06-17’ before you follow up with .plot().
1
2
3
4
df = pd.read_csv('DataCamp-master/11-pandas-foundations/_datasets/weather_data_austin_2010.csv',
                 parse_dates=True,
                 index_col='Date')
df.head()
TemperatureDewPointPressure
Date
2010-01-01 00:00:0046.237.51.0
2010-01-01 01:00:0044.637.11.0
2010-01-01 02:00:0044.136.91.0
2010-01-01 03:00:0043.836.91.0
2010-01-01 04:00:0043.536.81.0
1
2
# Plot the summer data
df.Temperature['2010-Jun':'2010-Aug'].plot()
1
<Axes: xlabel='Date'>

png

1
2
# Plot the one week data
df.Temperature['2010-06-10':'2010-06-17'].plot()
1
<Axes: xlabel='Date'>

png

Case Study - Sunlight in Austin

Working with real-world weather and climate data, in this chapter you will bring together and apply all of the skills you have acquired in this course. You will use Pandas to manipulate the data into a form usable for analysis, and then systematically explore it using the techniques you learned in the prior chapters. Enjoy!

Reading and Cleaning the Data

Case study

  • Comparing observed weather data from two sources

Climate normals of Austin, TX

1
2
3
4
df_climate = pd.read_csv('DataCamp-master/11-pandas-foundations/_datasets/weather_data_austin_2010.csv',
                         parse_dates=True,
                         index_col='Date')
df_climate.head()
TemperatureDewPointPressure
Date
2010-01-01 00:00:0046.237.51.0
2010-01-01 01:00:0044.637.11.0
2010-01-01 02:00:0044.136.91.0
2010-01-01 03:00:0043.836.91.0
2010-01-01 04:00:0043.536.81.0

Weather data of Austin, TX

1
2
3
df = pd.read_csv('DataCamp-master/11-pandas-foundations/_datasets/NOAA_QCLCD_2011_hourly_13904.txt',
                 header=None)
df.head()
012345678910111213141516171819202122232425262728293031323334353637383940414243
013904201101015312OVC04510.005110.6383.115-9.4241536029.4229.95AA29.95
1139042011010115312OVC04910.005110.6373.014-10.0231034029.4930.01AA30.02
2139042011010125312OVC06010.005110.6372.913-10.6221501029.49103030.01AA30.02
3139042011010135312OVC06510.005010.0383.117-8.327735029.5130.03AA30.04
4139042011010145312BKN07010.005010.0372.815-9.4251102029.5130.04AA30.04

Reminder: read_csv()

  • Useful keyword options (see the sketch after this list)
    • names: assigning column labels
    • index_col: assigning index
    • parse_dates: parsing datetimes
    • na_values: parsing NaNs
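A minimal sketch of those four options together, using a small in-memory CSV rather than the NOAA file (the column names here are made up):

import pandas as pd
from io import StringIO

# Two made-up rows standing in for a file with no header and 'M' marking missing values
raw = StringIO("2011-01-01 01:53,51,M\n"
               "2011-01-01 02:53,50,29.49\n")

demo = pd.read_csv(raw,
                   names=['date', 'temp_f', 'pressure'],  # assign column labels
                   index_col='date',                       # use 'date' as the index
                   parse_dates=['date'],                   # parse it into datetimes
                   na_values='M')                          # interpret 'M' as NaN
print(demo)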

Exercises

Reading in a data file

Now that you have identified the method to use to read the data, let’s try to read one file. The problem with real data such as this is that the files are almost never formatted in a convenient way. In this exercise, there are several problems to overcome in reading the file. First, there is no header, and thus the columns don’t have labels. There is also no obvious index column, since none of the data columns contain a full date or time.

Your job is to read the file into a DataFrame using the default arguments. After inspecting it, you will re-read the file specifying that there are no headers supplied.

The CSV file has been provided for you as the variable data_file.

Instructions

  • Import pandas as pd.
  • Read the file data_file into a DataFrame called df.
  • Print the output of df.head(). This has been done for you. Notice the formatting problems in df.
  • Re-read the data, specifying the keyword argument header=None, and assign it to df_headers.
  • Print the output of df_headers.head(). This has already been done for you. Hit ‘Submit Answer’ and see how this resolves the formatting issues.
1
data_file = 'DataCamp-master/11-pandas-foundations/_datasets/NOAA_QCLCD_2011_hourly_13904.txt'
1
2
3
# Read in the data file: df
df = pd.read_csv(data_file)
df.head()
1390420110101005312OVC04510.00.1.2.351.410.6.538.63.1.715.8-9.4.924.1015.1.11360.12.13.1429.42.15.16.17.18.1929.95.20AA.21.22.2329.95.1.24
0139042011010115312OVC04910.005110.6373.014-10.0231034029.4930.01AA30.02
1139042011010125312OVC06010.005110.6372.913-10.6221501029.49103030.01AA30.02
2139042011010135312OVC06510.005010.0383.117-8.327735029.5130.03AA30.04
3139042011010145312BKN07010.005010.0372.815-9.4251102029.5130.04AA30.04
4139042011010155312BKN06510.00499.4372.817-8.328601029.53101530.06AA30.06
1
2
3
4
# Read in the data file with header=None: df_headers
df_headers = pd.read_csv(data_file,
                         header=None)
df_headers.head()
012345678910111213141516171819202122232425262728293031323334353637383940414243
013904201101015312OVC04510.005110.6383.115-9.4241536029.4229.95AA29.95
1139042011010115312OVC04910.005110.6373.014-10.0231034029.4930.01AA30.02
2139042011010125312OVC06010.005110.6372.913-10.6221501029.49103030.01AA30.02
3139042011010135312OVC06510.005010.0383.117-8.327735029.5130.03AA30.04
4139042011010145312BKN07010.005010.0372.815-9.4251102029.5130.04AA30.04

Re-assigning column names

After the initial step of reading in the data, the next step is to clean and tidy it so that it is easier to work with.

In this exercise, you will begin this cleaning process by re-assigning column names and dropping unnecessary columns.

pandas has been imported in the workspace as pd, and the file NOAA_QCLCD_2011_hourly_13904.txt has been parsed and loaded into a DataFrame df. The comma separated string of column names, column_labels, and list of columns to drop, list_to_drop, have also been loaded for you.

Instructions

  • Convert the comma separated string column_labels to a list of strings using .split(‘,’). Assign the result to column_labels_list.
  • Reassign df.columns using the list of strings column_labels_list.
  • Call df.drop() with list_to_drop and axis=’columns’. Assign the result to df_dropped.
  • Print df_dropped.head() to examine the result. This has already been done for you.
1
column_labels = 'Wban,date,Time,StationType,sky_condition,sky_conditionFlag,visibility,visibilityFlag,wx_and_obst_to_vision,wx_and_obst_to_visionFlag,dry_bulb_faren,dry_bulb_farenFlag,dry_bulb_cel,dry_bulb_celFlag,wet_bulb_faren,wet_bulb_farenFlag,wet_bulb_cel,wet_bulb_celFlag,dew_point_faren,dew_point_farenFlag,dew_point_cel,dew_point_celFlag,relative_humidity,relative_humidityFlag,wind_speed,wind_speedFlag,wind_direction,wind_directionFlag,value_for_wind_character,value_for_wind_characterFlag,station_pressure,station_pressureFlag,pressure_tendency,pressure_tendencyFlag,presschange,presschangeFlag,sea_level_pressure,sea_level_pressureFlag,record_type,hourly_precip,hourly_precipFlag,altimeter,altimeterFlag,junk'
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
list_to_drop = ['sky_conditionFlag',
                'visibilityFlag',
                'wx_and_obst_to_vision',
                'wx_and_obst_to_visionFlag',
                'dry_bulb_farenFlag',
                'dry_bulb_celFlag',
                'wet_bulb_farenFlag',
                'wet_bulb_celFlag',
                'dew_point_farenFlag',
                'dew_point_celFlag',
                'relative_humidityFlag',
                'wind_speedFlag',
                'wind_directionFlag',
                'value_for_wind_character',
                'value_for_wind_characterFlag',
                'station_pressureFlag',
                'pressure_tendencyFlag',
                'pressure_tendency',
                'presschange',
                'presschangeFlag',
                'sea_level_pressureFlag',
                'hourly_precip',
                'hourly_precipFlag',
                'altimeter',
                'record_type',
                'altimeterFlag',
                'junk']
1
2
3
# Split on the comma to create a list: column_labels_list
column_labels_list = column_labels.split(',')
column_labels_list
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
['Wban',
 'date',
 'Time',
 'StationType',
 'sky_condition',
 'sky_conditionFlag',
 'visibility',
 'visibilityFlag',
 'wx_and_obst_to_vision',
 'wx_and_obst_to_visionFlag',
 'dry_bulb_faren',
 'dry_bulb_farenFlag',
 'dry_bulb_cel',
 'dry_bulb_celFlag',
 'wet_bulb_faren',
 'wet_bulb_farenFlag',
 'wet_bulb_cel',
 'wet_bulb_celFlag',
 'dew_point_faren',
 'dew_point_farenFlag',
 'dew_point_cel',
 'dew_point_celFlag',
 'relative_humidity',
 'relative_humidityFlag',
 'wind_speed',
 'wind_speedFlag',
 'wind_direction',
 'wind_directionFlag',
 'value_for_wind_character',
 'value_for_wind_characterFlag',
 'station_pressure',
 'station_pressureFlag',
 'pressure_tendency',
 'pressure_tendencyFlag',
 'presschange',
 'presschangeFlag',
 'sea_level_pressure',
 'sea_level_pressureFlag',
 'record_type',
 'hourly_precip',
 'hourly_precipFlag',
 'altimeter',
 'altimeterFlag',
 'junk']
1
2
# Assign the new column labels to the DataFrame: df.columns
df.columns = column_labels_list
1
2
3
# Remove the appropriate columns: df_dropped
df_dropped = df.drop(list_to_drop, axis='columns')
df_dropped.head()
WbandateTimeStationTypesky_conditionvisibilitydry_bulb_farendry_bulb_celwet_bulb_farenwet_bulb_celdew_point_farendew_point_celrelative_humiditywind_speedwind_directionstation_pressuresea_level_pressure
0139042011010115312OVC04910.005110.6373.014-10.0231034029.4930.01
1139042011010125312OVC06010.005110.6372.913-10.6221501029.4930.01
2139042011010135312OVC06510.005010.0383.117-8.327735029.5130.03
3139042011010145312BKN07010.005010.0372.815-9.4251102029.5130.04
4139042011010155312BKN06510.00499.4372.817-8.328601029.5330.06

Cleaning and tidying datetime data

In order to use the full power of pandas time series, you must construct a DatetimeIndex. To do so, it is necessary to clean and transform the date and time columns.

The DataFrame df_dropped you created in the last exercise is provided for you and pandas has been imported as pd.

Your job is to clean up the date and Time columns and combine them into a datetime collection to be used as the Index.
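A minimal sketch of the padding-and-parsing idea on two made-up rows (the exercise below applies the same steps to df_dropped):

import pandas as pd

# Two made-up rows: date as a YYYYMMDD string, Time as an integer like 153 (meaning 01:53)
date = pd.Series(['20110101', '20110101'])
time = pd.Series([153, 1453])

date_string = date + time.astype(str).str.zfill(4)        # zero-pad Time to 4 digits, then concatenate
print(pd.to_datetime(date_string, format='%Y%m%d%H%M'))   # -> 2011-01-01 01:53:00, 2011-01-01 14:53:00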

Instructions

  • Convert the ‘date’ column to a string with .astype(str) and assign to df_dropped[‘date’].
  • Add leading zeros to the ‘Time’ column. This has been done for you.
  • Concatenate the new ‘date’ and ‘Time’ columns together. Assign to date_string.
  • Convert the date_string Series to datetime values with pd.to_datetime(). Specify the format parameter.
  • Set the index of the df_dropped DataFrame to be date_times. Assign the result to df_clean.
1
2
# Convert the date column to string: df_dropped['date']
df_dropped['date'] = df_dropped.date.astype(str)
1
2
# Pad leading zeros to the Time column: df_dropped['Time']
df_dropped['Time'] = df_dropped['Time'].apply(lambda x:'{:0>4}'.format(x))
1
2
3
# Concatenate the new date and Time columns: date_string
date_string = df_dropped['date'] + df_dropped['Time']
date_string.head()
1
2
3
4
5
6
0    201101010153
1    201101010253
2    201101010353
3    201101010453
4    201101010553
dtype: object
1
2
3
# Convert the date_string Series to datetime: date_times
date_times = pd.to_datetime(date_string, format='%Y%m%d%H%M')
date_times.head()
1
2
3
4
5
6
0   2011-01-01 01:53:00
1   2011-01-01 02:53:00
2   2011-01-01 03:53:00
3   2011-01-01 04:53:00
4   2011-01-01 05:53:00
dtype: datetime64[ns]
1
2
3
# Set the index to be the new date_times container: df_clean
df_clean = df_dropped.set_index(date_times)
df_clean.head()
WbandateTimeStationTypesky_conditionvisibilitydry_bulb_farendry_bulb_celwet_bulb_farenwet_bulb_celdew_point_farendew_point_celrelative_humiditywind_speedwind_directionstation_pressuresea_level_pressure
2011-01-01 01:53:001390420110101015312OVC04910.005110.6373.014-10.0231034029.4930.01
2011-01-01 02:53:001390420110101025312OVC06010.005110.6372.913-10.6221501029.4930.01
2011-01-01 03:53:001390420110101035312OVC06510.005010.0383.117-8.327735029.5130.03
2011-01-01 04:53:001390420110101045312BKN07010.005010.0372.815-9.4251102029.5130.04
2011-01-01 05:53:001390420110101055312BKN06510.00499.4372.817-8.328601029.5330.06

Cleaning the numeric columns

The numeric columns contain missing values labeled as ‘M’. In this exercise, your job is to transform these columns such that they contain only numeric values and interpret missing data as NaN.

The pandas function pd.to_numeric() is ideal for this purpose: It converts a Series of values to floating-point values. Furthermore, by specifying the keyword argument errors=’coerce’, you can force strings like ‘M’ to be interpreted as NaN.

A DataFrame df_clean is provided for you at the start of the exercise, and as usual, pandas has been imported as pd.
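A minimal sketch of that behaviour on a few made-up readings:

import pandas as pd

raw = pd.Series(['51', 'M', '49'])           # 'M' marks a missing measurement
print(pd.to_numeric(raw, errors='coerce'))   # -> 51.0, NaN, 49.0 (float64)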

Instructions

  • Print the ‘dry_bulb_faren’ temperature between 8 AM and 9 AM on June 20, 2011.
  • Convert the ‘dry_bulb_faren’ column to numeric values with pd.to_numeric(). Specify errors=’coerce’.
  • Print the transformed dry_bulb_faren temperature between 8 AM and 9 AM on June 20, 2011.
  • Convert the ‘wind_speed’ and ‘dew_point_faren’ columns to numeric values with pd.to_numeric(). Again, specify errors=’coerce’.
1
2
# Print the dry_bulb_faren temperature between 8 AM and 9 AM on June 20, 2011
df_clean.loc['2011-6-20 08:00:00':'2011-6-20 09:00:00', 'dry_bulb_faren']
1
2
3
4
5
6
7
8
9
10
11
2011-06-20 08:27:00     M
2011-06-20 08:28:00     M
2011-06-20 08:29:00     M
2011-06-20 08:30:00     M
2011-06-20 08:31:00     M
2011-06-20 08:32:00     M
2011-06-20 08:33:00     M
2011-06-20 08:34:00     M
2011-06-20 08:35:00     M
2011-06-20 08:53:00    83
Name: dry_bulb_faren, dtype: object
1
2
3
# Convert the dry_bulb_faren column to numeric values: df_clean['dry_bulb_faren']
df_clean['dry_bulb_faren'] = pd.to_numeric(df_clean['dry_bulb_faren'], errors='coerce')
df_clean.dry_bulb_faren.head()
1
2
3
4
5
6
2011-01-01 01:53:00    51.0
2011-01-01 02:53:00    51.0
2011-01-01 03:53:00    50.0
2011-01-01 04:53:00    50.0
2011-01-01 05:53:00    49.0
Name: dry_bulb_faren, dtype: float64
1
2
# Print the transformed dry_bulb_faren temperature between 8 AM and 9 AM on June 20, 2011
df_clean.loc['2011-6-20 08:00:00':'2011-6-20 09:00:00', 'dry_bulb_faren']
1
2
3
4
5
6
7
8
9
10
11
2011-06-20 08:27:00     NaN
2011-06-20 08:28:00     NaN
2011-06-20 08:29:00     NaN
2011-06-20 08:30:00     NaN
2011-06-20 08:31:00     NaN
2011-06-20 08:32:00     NaN
2011-06-20 08:33:00     NaN
2011-06-20 08:34:00     NaN
2011-06-20 08:35:00     NaN
2011-06-20 08:53:00    83.0
Name: dry_bulb_faren, dtype: float64
1
2
3
4
5
# Convert the wind_speed and dew_point_faren columns to numeric values
df_clean['wind_speed'] = pd.to_numeric(df_clean['wind_speed'], errors='coerce')
df_clean['dew_point_faren'] = pd.to_numeric(df_clean['dew_point_faren'], errors='coerce')

df_clean[['wind_speed', 'dew_point_faren']].head()
wind_speeddew_point_faren
2011-01-01 01:53:0010.014.0
2011-01-01 02:53:0015.013.0
2011-01-01 03:53:007.017.0
2011-01-01 04:53:0011.015.0
2011-01-01 05:53:006.017.0

Statistical exploratory data analysis

Reminder: time series

  • Index selection by date time
  • Partial datetime selection
  • Slicing ranges of datetimes
1
2
3
4
climate2010['2010-05-31 22:00:00'] # datetime
climate2010['2010-06-01'] # Entire day
climate2010['2010-04'] # Entire month
climate2010['2010-09':'2010-10'] # 2 months

Reminder: statistics methods

  • Methods for computing statistics (see the sketch after this list):
    • describe(): summary
    • mean(): average
    • count(): counting entries
    • median(): median
    • std(): standard deviation
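A minimal sketch of those methods on a handful of made-up temperatures:

import pandas as pd

temps = pd.Series([46.2, 44.6, 44.1, 43.8, 43.5])

print(temps.describe())                            # count, mean, std, min, quartiles, max in one call
print(temps.mean(), temps.median(), temps.std())   # individual statistics
print(temps.count())                               # number of non-missing entries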

Exercises

Signal min, max, median

Now that you have the data read and cleaned, you can begin with statistical EDA. First, you will analyze the 2011 Austin weather data.

Your job in this exercise is to analyze the ‘dry_bulb_faren’ column and print the median temperatures for specific time ranges. You can do this using partial datetime string selection.

The cleaned dataframe is provided in the workspace as df_clean.

Instructions

  • Select the ‘dry_bulb_faren’ column and print the output of .median(numeric_only=True).
  • Use .loc[] to select the range ‘2011-Apr’:’2011-Jun’ from ‘dry_bulb_faren’ and print the output of .median(numeric_only=True).
  • Use .loc[] to select the month ‘2011-Jan’ from ‘dry_bulb_faren’ and print the output of .median(numeric_only=True).
1
2
# Print the median of the dry_bulb_faren column
df_clean.dry_bulb_faren.median(numeric_only=True)
1
72.0
1
2
# Print the median of the dry_bulb_faren column for the time range '2011-Apr':'2011-Jun'
df_clean.loc['2011-Apr':'2011-Jun', 'dry_bulb_faren'].median(numeric_only=True)
1
78.0
1
2
# Print the median of the dry_bulb_faren column for the month of January
df_clean.loc['2011-Jan', 'dry_bulb_faren'].median(numeric_only=True)
1
48.0

Signal variance

You’re now ready to compare the 2011 weather data with the 30-year normals reported in 2010. You can ask questions such as, on average, how much hotter was every day in 2011 than expected from the 30-year average?

The DataFrames df_clean and df_climate from previous exercises are available in the workspace.

Your job is to first resample df_clean and df_climate by day and aggregate the mean temperatures. You will then extract the temperature related columns from each - 'dry_bulb_faren' in df_clean, and 'Temperature' in df_climate - as NumPy arrays and compute the difference.

Notice that the indexes of df_clean and df_climate are not aligned - df_clean has dates in 2011, while df_climate has dates in 2010. This is why you extract the temperature columns as NumPy arrays. An alternative approach is to use the pandas .reset_index() method to make sure the Series align properly. You will practice this approach as well.

Instructions

  • Downsample df_clean with daily frequency and aggregate by the mean. Store the result as daily_mean_2011.
  • Extract the ‘dry_bulb_faren’ column from daily_mean_2011 as a NumPy array using .values. Store the result as daily_temp_2011. Note: .values is an attribute, not a method, so you don’t have to use ().
  • Downsample df_climate with daily frequency and aggregate by the mean. Store the result as daily_climate.
  • Extract the ‘Temperature’ column from daily_climate using the .reset_index() method. To do this, first reset the index of daily_climate, and then use bracket slicing to access ‘Temperature’. Store the result as daily_temp_climate.
1
2
3
# Downsample df_clean by day and aggregate by mean: daily_mean_2011
daily_mean_2011 = df_clean.resample('D').mean(numeric_only=True)
daily_mean_2011.head()
WbanStationTypedry_bulb_farendew_point_farenwind_speed
2011-01-0113904.012.050.13043520.73913010.913043
2011-01-0213904.012.039.41666719.7083334.166667
2011-01-0313904.012.046.84615435.5000002.653846
2011-01-0413904.012.053.36734750.4081632.510204
2011-01-0513904.012.057.96551740.0689664.689655
1
2
3
# Extract the dry_bulb_faren column from daily_mean_2011 using .values: daily_temp_2011
daily_temp_2011 = daily_mean_2011.dry_bulb_faren.values
daily_temp_2011[0:10]
1
2
array([50.13043478, 39.41666667, 46.84615385, 53.36734694, 57.96551724,
       46.95833333, 51.91666667, 51.81481481, 43.61363636, 38.27777778])
1
2
3
# Downsample df_climate by day and aggregate by mean: daily_climate
daily_climate = df_climate.resample('D').mean(numeric_only=True)
daily_climate.head()
TemperatureDewPointPressure
Date
2010-01-0149.33750037.7166671.0
2010-01-0249.79583338.3708331.0
2010-01-0349.90000038.2791671.0
2010-01-0449.72916738.0083331.0
2010-01-0549.84166738.0875001.0
1
2
3
# Extract the Temperature column from daily_climate using .reset_index(): daily_temp_climate
daily_temp_climate = daily_climate.reset_index()['Temperature']
daily_temp_climate.head()
1
2
3
4
5
6
0    49.337500
1    49.795833
2    49.900000
3    49.729167
4    49.841667
Name: Temperature, dtype: float64
1
2
3
# Compute the difference between the two arrays and print the mean difference
difference = daily_temp_2011 - daily_temp_climate
difference.mean(numeric_only=True)
1
1.3300839215698717

Sunny or cloudy

On average, how much hotter is it when the sun is shining? In this exercise, you will compare temperatures on sunny days against temperatures on overcast days.

Your job is to use Boolean selection to filter out sunny and overcast days, and then compute the difference of the mean daily maximum temperatures between each type of day.

The DataFrame df_clean from previous exercises has been provided for you. The column 'sky_condition' provides information about whether the day was sunny ('CLR') or overcast ('OVC').

Instructions 1/3

  • Get the cases in df_clean where the sky is clear. That is, when ‘sky_condition’ equals ‘CLR’, assigning to is_sky_clear.
  • Use .loc[] to filter df_clean by is_sky_clear, assigning to sunny.
  • Resample sunny by day (‘D’), and take the max to find the maximum daily temperature.
1
df_clean.head(3)
WbandateTimeStationTypesky_conditionvisibilitydry_bulb_farendry_bulb_celwet_bulb_farenwet_bulb_celdew_point_farendew_point_celrelative_humiditywind_speedwind_directionstation_pressuresea_level_pressure
2011-01-01 01:53:001390420110101015312OVC04910.0051.010.6373.014.0-10.02310.034029.4930.01
2011-01-01 02:53:001390420110101025312OVC06010.0051.010.6372.913.0-10.62215.001029.4930.01
2011-01-01 03:53:001390420110101035312OVC06510.0050.010.0383.117.0-8.3277.035029.5130.03
1
2
3
# Using df_clean, when is sky_condition 'CLR'?
is_sky_clear = df_clean['sky_condition']=='CLR'
is_sky_clear.head()
1
2
3
4
5
6
2011-01-01 01:53:00    False
2011-01-01 02:53:00    False
2011-01-01 03:53:00    False
2011-01-01 04:53:00    False
2011-01-01 05:53:00    False
Name: sky_condition, dtype: bool
1
2
3
4
# Filter df_clean using is_sky_clear
sunny = df_clean[is_sky_clear].copy()
sunny = sunny.apply(lambda col: pd.to_numeric(col, errors='coerce')).dropna(how='all', axis=1)
sunny.head(3)
WbandateTimeStationTypevisibilitydry_bulb_farendry_bulb_celwet_bulb_farenwet_bulb_celdew_point_farendew_point_celrelative_humiditywind_speedwind_directionstation_pressuresea_level_pressure
2011-01-01 13:53:00139042011010113531210.059.015.0457.026.0-3.32814.010.029.6330.16
2011-01-01 14:53:00139042011010114531210.059.015.0457.227.0-2.82916.0360.029.6330.16
2011-01-01 15:53:00139042011010115531210.057.013.9446.627.0-2.83211.0350.029.6330.17
1
2
3
4
5
# Resample sunny by day then calculate the max
# Additional cleaning was done in the previous cell, to remove all non-numeric values
# Resample doesn't work if the column has strings
sunny_daily_max = sunny.resample('1D').max(numeric_only=True)
sunny_daily_max.head(3)
WbandateTimeStationTypevisibilitydry_bulb_farendry_bulb_celwet_bulb_farenwet_bulb_celdew_point_farendew_point_celrelative_humiditywind_speedwind_directionstation_pressuresea_level_pressure
2011-01-0113904.020110101.02353.012.010.059.015.045.07.228.0-2.253.016.0360.029.7830.33
2011-01-0213904.020110102.02253.012.010.035.01.732.00.128.0-2.276.08.0360.029.8230.38
2011-01-0313904.020110103.0453.012.010.032.00.029.0-1.426.0-3.385.00.00.029.7130.27

Instructions 2/3

  • Get the cases in df_clean where the sky is overcast. Using .str.contains(), find when ‘sky_condition’ contains ‘OVC’, assigning to is_sky_overcast.
  • Use .loc[] to filter df_clean by is_sky_overcast, assigning to overcast.
  • Resample overcast by day (‘D’), and take the max to find the maximum daily temperature.
1
2
# Using df_clean, when does sky_condition contain 'OVC'?
is_sky_overcast = df_clean['sky_condition'].str.contains('OVC')
1
2
3
# Filter df_clean using is_sky_overcast
overcast = df_clean[is_sky_overcast].copy()
overcast = overcast.apply(lambda col: pd.to_numeric(col, errors='coerce')).dropna(how='all', axis=1)
1
2
3
# Resample overcast by day then calculate the max
overcast_daily_max = overcast.resample('D').max(numeric_only=True)
overcast_daily_max.head(3)
WbandateTimeStationTypevisibilitydry_bulb_farendry_bulb_celwet_bulb_farenwet_bulb_celdew_point_farendew_point_celrelative_humiditywind_speedwind_directionstation_pressuresea_level_pressure
2011-01-0113904.020110101.0353.012.010.051.010.638.03.117.0-8.327.015.0350.029.5130.03
2011-01-02NaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaNNaN
2011-01-0313904.020110103.02353.012.010.058.014.449.09.745.07.079.010.0200.029.7030.26

Instructions 3/3

  • Calculate the mean of sunny_daily_max, assigning to sunny_daily_max_mean.
  • Calculate the mean of overcast_daily_max, assigning to overcast_daily_max_mean.
  • Print sunny_daily_max_mean minus overcast_daily_max_mean. How much hotter are sunny days?
1
2
3
# Calculate the mean of sunny_daily_max
sunny_daily_max_mean = sunny_daily_max.mean(numeric_only=True)
sunny_daily_max_mean
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
Wban                  1.390400e+04
date                  2.011066e+07
Time                  2.036054e+03
StationType           1.200000e+01
visibility            9.950000e+00
dry_bulb_faren        7.556071e+01
dry_bulb_cel          2.420214e+01
wet_bulb_faren        6.224286e+01
wet_bulb_cel          1.674821e+01
dew_point_faren       5.586071e+01
dew_point_cel         1.326036e+01
relative_humidity     8.126071e+01
wind_speed            1.228214e+01
wind_direction        2.300000e+02
station_pressure      2.951011e+01
sea_level_pressure    3.002405e+01
dtype: float64
1
2
3
# Calculate the mean of overcast_daily_max
overcast_daily_max_mean = overcast_daily_max.mean(numeric_only=True)
overcast_daily_max_mean
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
Wban                  1.390400e+04
date                  2.011060e+07
Time                  1.635559e+03
StationType           1.200000e+01
visibility            9.775641e+00
dry_bulb_faren        6.905641e+01
dry_bulb_cel          2.058308e+01
wet_bulb_faren        6.303077e+01
wet_bulb_cel          1.719436e+01
dew_point_faren       6.020000e+01
dew_point_cel         1.565949e+01
relative_humidity     8.717436e+01
wind_speed            1.552821e+01
wind_direction        2.417436e+02
station_pressure      2.950703e+01
sea_level_pressure    3.002580e+01
dtype: float64
1
2
# Print the difference (sunny minus overcast)
sunny_daily_max_mean - overcast_daily_max_mean
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
Wban                    0.000000
date                   64.843223
Time                  400.494597
StationType             0.000000
visibility              0.174359
dry_bulb_faren          6.504304
dry_bulb_cel            3.619066
wet_bulb_faren         -0.787912
wet_bulb_cel           -0.446145
dew_point_faren        -4.339286
dew_point_cel          -2.399130
relative_humidity      -5.913645
wind_speed             -3.246062
wind_direction        -11.743590
station_pressure        0.003082
sea_level_pressure     -0.001753
dtype: float64

The average daily maximum dry bulb temperature was 6.5 degrees Fahrenheit higher on sunny days compared to overcast days.

Visual exploratory data analysis

Line plots in pandas

1
df_climate.Temperature.loc['2010-07'].plot(title='Temperature (July 2010)')
1
<Axes: title={'center': 'Temperature (July 2010)'}, xlabel='Date'>

png

Histograms in pandas

1
2
3
4
df_climate.DewPoint.plot(kind='hist', bins=30,
                         title='Dew Point Distribution (2010)',
                         ec='black',
                         color='g')
<Axes: title={'center': 'Dew Point Distribution (2010)'}, ylabel='Frequency'>

[Figure: histogram of df_climate DewPoint (2010), 30 bins]

Box plots in pandas

df_climate.DewPoint.plot(kind='box',
                         title='Dew Point Distribution (2010)')
<Axes: title={'center': 'Dew Point Distribution (2010)'}>

[Figure: box plot of df_climate DewPoint (2010)]

Subplots in pandas

df_climate.plot(kind='hist',
                density=True,
                subplots=True,
                ec='Black')
array([<Axes: ylabel='Frequency'>, <Axes: ylabel='Frequency'>,
       <Axes: ylabel='Frequency'>], dtype=object)

[Figure: density histogram subplots, one per df_climate column]

Weekly average temperature and visibility

Is there a correlation between temperature and visibility? Let’s find out.

In this exercise, your job is to plot the weekly average temperature and visibility as subplots. To do this, you need to first select the appropriate columns and then resample by week, aggregating the mean.

In addition to creating the subplots, you will compute the Pearson correlation coefficient using .corr(). The Pearson correlation coefficient, also known as Pearson's r, ranges from -1 (total negative linear correlation) to 1 (total positive linear correlation). A value close to 1 here would indicate a strong positive linear relationship between temperature and visibility.

The DataFrame df_clean has been pre-loaded for you.

Instructions

  • Import matplotlib.pyplot as plt.
  • Select the ‘visibility’ and ‘dry_bulb_faren’ columns and resample them by week, aggregating the mean. Assign the result to weekly_mean.
  • Print the output of weekly_mean.corr().
  • Plot the weekly_mean dataframe with .plot(), specifying subplots=True.
# Coerce visibility (read in as strings) to numeric so it can be averaged
df_clean['visibility'] = pd.to_numeric(df_clean['visibility'], errors='coerce')
# Select the visibility and dry_bulb_faren columns and resample them: weekly_mean
weekly_mean = df_clean[['visibility', 'dry_bulb_faren']].resample('W').mean(numeric_only=True)
weekly_mean.head()
|  | visibility | dry_bulb_faren |
| --- | --- | --- |
| 2011-01-02 | 10.000000 | 44.659574 |
| 2011-01-09 | 8.275785 | 50.246637 |
| 2011-01-16 | 6.451651 | 41.103774 |
| 2011-01-23 | 8.370853 | 47.194313 |
| 2011-01-30 | 9.966851 | 53.486188 |
# Print the output of weekly_mean.corr()
weekly_mean.corr()
|  | visibility | dry_bulb_faren |
| --- | --- | --- |
| visibility | 1.00000 | 0.49004 |
| dry_bulb_faren | 0.49004 | 1.00000 |
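As a quick sanity check (not part of the original exercise), the same Pearson coefficient can be recomputed directly with NumPy. This is a sketch; np.corrcoef does not handle missing values, so drop them first.

import numpy as np

# Pearson's r from the weekly means, ignoring any incomplete weeks
vals = weekly_mean.dropna()
r = np.corrcoef(vals['visibility'], vals['dry_bulb_faren'])[0, 1]
print(r)  # should agree with weekly_mean.corr() up to floating-point precision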
# Plot weekly_mean with subplots=True
weekly_mean.plot(subplots=True)
array([<Axes: >, <Axes: >], dtype=object)

[Figure: line-plot subplots of weekly mean visibility and dry_bulb_faren]

Daily hours of clear sky

In a previous exercise, you analyzed the 'sky_condition' column to explore the difference in temperature on sunny days compared to overcast days. Recall that a 'sky_condition' of 'CLR' represents clear, sunny skies. In this exercise, you will explore sunny days in greater detail. Specifically, you will use a box plot to visualize the fraction of hours per day that are sunny.

The 'sky_condition' column is recorded hourly. Your job is to resample this column appropriately such that you can extract the number of sunny hours in a day and the number of total hours. Then, you can divide the number of sunny hours by the number of total hours, and generate a box plot of the resulting fraction.

As before, df_clean is available for you in the workspace.

Instructions 1/3

  • Get the cases in df_clean where the sky is clear. That is, when ‘sky_condition’ equals ‘CLR’, assigning to is_sky_clear.
  • Resample is_sky_clear by day, assigning to resampled.
# Using df_clean, when is sky_condition 'CLR'?
is_sky_clear = df_clean.sky_condition == 'CLR'
# Resample is_sky_clear by day
resampled = is_sky_clear.resample('D')

Instructions 2/3

  • Calculate the number of measured sunny hours per day as the sum of resampled, assigning to sunny_hours.
  • Calculate the total number of measured hours per day as the count of resampled, assigning to total_hours.
  • Calculate the fraction of hours per day that were sunny as the ratio of sunny hours to total hours.
# Calculate the number of sunny hours per day
sunny_hours = resampled.sum()
sunny_hours.head()
2011-01-01    11
2011-01-02     7
2011-01-03     3
2011-01-04     0
2011-01-05     1
Freq: D, Name: sky_condition, dtype: int64
# Calculate the number of measured hours per day
total_hours = resampled.count()
total_hours.head()
2011-01-01    23
2011-01-02    24
2011-01-03    26
2011-01-04    49
2011-01-05    29
Freq: D, Name: sky_condition, dtype: int64
# Calculate the fraction of hours per day that were sunny
sunny_fraction = sunny_hours/total_hours
sunny_fraction.head()
2011-01-01    0.478261
2011-01-02    0.291667
2011-01-03    0.115385
2011-01-04    0.000000
2011-01-05    0.034483
Freq: D, Name: sky_condition, dtype: float64

Instructions 3/3

  • Draw a box plot of sunny_fraction using .plot() with kind set to ‘box’.
sunny_fraction.plot(kind='box')
<Axes: >

[Figure: box plot of the daily sunny-hour fraction]

The weather in the dataset is typically sunny less than 40% of the time.
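Because is_sky_clear is a boolean Series, the sum-over-count ratio above is simply its daily mean, so the same fraction can be computed in one step. This shortcut is equivalent here because the boolean mask has no missing values; it is not part of the graded exercise.

# Mean of a boolean Series per day = fraction of hourly observations that were 'CLR'
sunny_fraction_alt = is_sky_clear.resample('D').mean()
sunny_fraction_alt.plot(kind='box')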

Heat or humidity

Dew point is a measure of relative humidity based on pressure and temperature. A dew point above 65°F is considered uncomfortable, as is a temperature above 90°F.

In this exercise, you will explore the maximum temperature and dew point of each month. The columns of interest are 'dew_point_faren' and 'dry_bulb_faren'. After resampling them appropriately to get the maximum temperature and dew point in each month, generate a histogram of these values as subplots. Uncomfortably, you will notice that the maximum dew point is above 65 every month!

df_clean has been pre-loaded for you.

Instructions

  • Select the ‘dew_point_faren’ and ‘dry_bulb_faren’ columns (in that order). Resample by month and aggregate the maximum monthly temperatures. Assign the result to monthly_max.
  • Plot a histogram of the resampled data with bins=8, alpha=0.5, and subplots=True.
# Resample dew_point_faren and dry_bulb_faren by month ('ME' = month end), aggregating the maximum values: monthly_max
monthly_max = df_clean[['dew_point_faren', 'dry_bulb_faren']].resample('ME').max(numeric_only=True)
monthly_max.head()
|  | dew_point_faren | dry_bulb_faren |
| --- | --- | --- |
| 2011-01-31 | 63.0 | 80.0 |
| 2011-02-28 | 70.0 | 85.0 |
| 2011-03-31 | 68.0 | 87.0 |
| 2011-04-30 | 73.0 | 93.0 |
| 2011-05-31 | 76.0 | 100.0 |
# Generate a histogram with bins=8, alpha=0.5, subplots=True
monthly_max.plot(kind='hist',
                 ec='black',
                 bins=8,
                 alpha=0.5,
                 subplots=True)
array([<Axes: ylabel='Frequency'>, <Axes: ylabel='Frequency'>],
      dtype=object)

[Figure: histogram subplots of monthly maximum dew_point_faren and dry_bulb_faren]
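To make the comfort thresholds from the introduction explicit, the months that cross them can be flagged directly. This is a small sketch using the 65°F dew-point and 90°F temperature cutoffs mentioned above; it is not part of the graded exercise.

# Months whose maximum dew point or maximum temperature crosses the discomfort thresholds
uncomfortable = (monthly_max['dew_point_faren'] > 65) | (monthly_max['dry_bulb_faren'] > 90)
monthly_max[uncomfortable]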

Probability of high temperatures

We already know that 2011 was hotter than the climate normals for the previous thirty years. In this final exercise, you will compare the maximum temperature in August 2011 against that of the August 2010 climate normals. More specifically, you will use a CDF plot to determine the probability of the 2011 daily maximum temperature in August being above the 2010 climate normal value. To do this, you will leverage the data manipulation, filtering, resampling, and visualization skills you have acquired throughout this course.

The two DataFrames df_clean and df_climate are available in the workspace. Your job is to select the maximum temperature in August in df_climate, and then maximum daily temperatures in August 2011. You will then filter out the days in August 2011 that were above the August 2010 maximum, and use this to construct a CDF plot.

Once you’ve generated the CDF, notice how it shows that there was a 50% probability of the 2011 daily maximum temperature in August being 5 degrees above the 2010 climate normal value!

Instructions

  • From df_climate, extract the maximum temperature observed in August 2010. The relevant column here is ‘Temperature’. You can select the rows corresponding to August 2010 in multiple ways. For example, df_climate.loc[‘2011-Feb’] selects all rows corresponding to February 2011, while df_climate.loc[‘2009-09’, ‘Pressure’] selects the rows corresponding to September 2009 from the ‘Pressure’ column.
  • From df_clean, select the August 2011 temperature data from the ‘dry_bulb_faren’. Resample this data by day and aggregate the maximum value. Store the result in august_2011.
  • Filter rows of august_2011 to keep days where the value exceeded august_max. Store the result in august_2011_high.
  • Construct a CDF of august_2011_high using 25 bins. Remember to specify the kind, density, and cumulative parameters in addition to bins.
# Extract the maximum temperature in August 2010 from df_climate: august_max
august_max = df_climate.Temperature.loc['2010-08'].max(numeric_only=True)
august_max
95.3
# Resample August 2011 temps in df_clean by day & aggregate the max value: august_2011
august_2011 = df_clean.dry_bulb_faren.loc['2011-08'].resample('D').max(numeric_only=True)
august_2011.head()
2011-08-01    103.0
2011-08-02    103.0
2011-08-03    103.0
2011-08-04    104.0
2011-08-05    103.0
Freq: D, Name: dry_bulb_faren, dtype: float64
# Filter for days in august_2011 where the value exceeds august_max: august_2011_high
august_2011_high = august_2011.loc[august_2011 > august_max]
august_2011_high.head()
2011-08-01    103.0
2011-08-02    103.0
2011-08-03    103.0
2011-08-04    104.0
2011-08-05    103.0
Name: dry_bulb_faren, dtype: float64
# Construct a CDF of august_2011_high
august_2011_high.plot(kind='hist', bins=25, density=True, cumulative=True, ec='black')
<Axes: ylabel='Frequency'>

[Figure: cumulative density histogram (CDF) of August 2011 daily maximum temperatures above the 2010 climate normal]
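A histogram-based CDF depends on the chosen bin edges. As a cross-check (a sketch, assuming august_2011_high from above), the empirical CDF can be built by sorting the values directly:

import numpy as np
import matplotlib.pyplot as plt

# Sort the daily maxima and pair each with its cumulative fraction of days
temps = np.sort(august_2011_high.to_numpy())
ecdf = np.arange(1, len(temps) + 1) / len(temps)

plt.step(temps, ecdf, where='post')
plt.xlabel('Daily maximum temperature (deg F)')
plt.ylabel('Cumulative fraction of days')
plt.show()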

Final Thoughts

You can now…

  • Import many types of datasets and deal with import issues
  • Export data to facilitate collaborative data science
  • Perform statistical and visual EDA natively in pandas

Certificate

This post is licensed under CC BY 4.0 by the author.