Laboratory 7: Pandas for Butter!

In [1]:
# Preamble script block to identify host, user, and kernel
import sys
! hostname
! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
DESKTOP-EH6HD63
desktop-eh6hd63\farha
C:\Users\Farha\Anaconda3\python.exe
3.7.4 (default, Aug  9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)

Full name:

R#:

Title of the notebook:

Date:



Pandas

A data table is called a DataFrame in pandas (and other programming environments too).

The figure below from https://pandas.pydata.org/docs/getting_started/index.html illustrates a dataframe model:

Each column (and each row) in a dataframe is a Series; the header row and the index column are special.

To use pandas, we need to import the module. Pandas has NumPy as a dependency, and since we use NumPy directly below, we import it as well.

In [1]:
import numpy as np #Importing NumPy library as "np"
import pandas as pd #Importing Pandas library as "pd"
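
As a quick illustration of the Series idea above, here is a minimal sketch (the values and labels are made up for illustration): a Series is a labeled one-dimensional array, and it is what pandas builds dataframes out of.

s = pd.Series([10, 20, 30], index=['a', 'b', 'c'])  # a labeled 1-D array
print(type(s))  # <class 'pandas.core.series.Series'>
print(s['b'])   # elements are addressed by label; prints 20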

Dataframe structure using primitive Python

First let's construct a dataframe-like object using Python primitives. We will construct a NumPy array for the content, a list of row names, a list of column names, and a nested list to serve as the destination table.

In [2]:
mytabular = np.random.randint(1,100,(5,4))
myrowname = ['A','B','C','D','E']
mycolname = ['W','X','Y','Z']
mytable = [['' for jcol in range(len(mycolname)+1)] for irow in range(len(myrowname)+1)] # empty destination matrix, built with a nested list comprehension
In [4]:
print(mytabular)
[[95 54 77 70]
 [29  7 59 19]
 [99 76 85 40]
 [72 87 97 21]
 [52 85 45 83]]

The above builds a placeholder named mytable for the pseudo-dataframe. Next we populate the table, using for loops to write the column names in the first row, the row names in the first column, and the table fill for the rest of the table.

In [3]:
for irow in range(1,len(myrowname)+1): # write the row names
    mytable[irow][0]=myrowname[irow-1]
for jcol in range(1,len(mycolname)+1): # write the column names
    mytable[0][jcol]=mycolname[jcol-1]  
for irow in range(1,len(myrowname)+1): # fill the table (note the nested loop)
    for jcol in range(1,len(mycolname)+1):
        mytable[irow][jcol]=mytabular[irow-1][jcol-1]

Now let's print the table out by row, and we see we have a very dataframe-like structure:

In [5]:
for irow in range(0,len(myrowname)+1):
    print(mytable[irow][0:len(mycolname)+1])
['', 'W', 'X', 'Y', 'Z']
['A', 95, 54, 77, 70]
['B', 29, 7, 59, 19]
['C', 99, 76, 85, 40]
['D', 72, 87, 97, 21]
['E', 52, 85, 45, 83]

We can also query by row

In [6]:
print(mytable[3][0:len(mycolname)+1])
['C', 99, 76, 85, 40]

Or by column

In [7]:
for irow in range(0,len(myrowname)+1):  #cannot use implied loop in a column slice
    print(mytable[irow][2])
X
54
7
76
87
85

Or by row+column index; this sort of looks like spreadsheet syntax.

In [8]:
print(' ',mytable[0][3])
print(mytable[3][0],mytable[3][3])
  Y
C 85

Create a proper dataframe

We will now do the same using pandas

In [9]:
df = pd.DataFrame(np.random.randint(1,100,(5,4)), ['A','B','C','D','E'], ['W','X','Y','Z'])
df
Out[9]:
W X Y Z
A 33 89 32 63
B 1 43 87 70
C 86 66 49 94
D 70 47 79 52
E 89 48 88 70
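
The constructor arguments above are positional (data, index, columns); the equivalent keyword form is more readable. A sketch, using a throwaway name so the running example df is unchanged:

df_kw = pd.DataFrame(data=np.random.randint(1,100,(5,4)),
                     index=['A','B','C','D','E'],
                     columns=['W','X','Y','Z'])  # same structure as df above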

We can also turn our table into a dataframe; notice how the constructor adds a default integer header row and index column.

In [10]:
df1 = pd.DataFrame(mytable)
df1
Out[10]:
   0   1   2   3   4
0      W   X   Y   Z
1  A  95  54  77  70
2  B  29   7  59  19
3  C  99  76  85  40
4  D  72  87  97  21
5  E  52  85  45  83

To get proper behavior, we can just reuse our original objects

In [11]:
df2 = pd.DataFrame(mytabular,myrowname,mycolname)
df2
Out[11]:
W X Y Z
A 95 54 77 70
B 29 7 59 19
C 99 76 85 40
D 72 87 97 21
E 52 85 45 83

Getting the shape of dataframes

The shape attribute returns the row and column counts of a dataframe.

In [13]:
df.shape
Out[13]:
(5, 4)
In [14]:
df1.shape
Out[14]:
(6, 5)
In [15]:
df2.shape
Out[15]:
(5, 4)

Appending new columns

To append a column, simply assign values to a new column name on the dataframe.

In [14]:
df['new']= 'NA'
df
Out[14]:
W X Y Z new
A 33 89 32 63 NA
B 1 43 87 70 NA
C 86 66 49 94 NA
D 70 47 79 52 NA
E 89 48 88 70 NA
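
A new column can also be computed from existing ones. A sketch, done on a copy so the running example is unchanged (the column name total is made up):

df_sum = df.copy()  # work on a copy, leaving df intact
df_sum['total'] = df_sum['W'] + df_sum['X'] + df_sum['Y'] + df_sum['Z']  # element-wise row sums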

Appending new rows

A bit trickier, but we can create a copy of a row, rename its index, and concatenate it back onto the dataframe.

In [15]:
newrow = df.loc[['E']].rename(index={"E": "X"}) # create a single row, rename the index
newtable = pd.concat([df,newrow]) # concatenate the row to bottom of df - note the syntax
In [16]:
newtable
Out[16]:
W X Y Z new
A 33 89 32 63 NA
B 1 43 87 70 NA
C 86 66 49 94 NA
D 70 47 79 52 NA
E 89 48 88 70 NA
X 89 48 88 70 NA

Removing Rows and Columns

Removing a column is straightforward: we use the drop method.

In [17]:
newtable.drop('new', axis=1, inplace = True)
newtable
Out[17]:
W X Y Z
A 33 89 32 63
B 1 43 87 70
C 86 66 49 94
D 70 47 79 52
E 89 48 88 70
X 89 48 88 70

To remove a row you really have to want to; easiest is probably to create a new dataframe with only the rows you want to keep.

In [18]:
newtable = newtable.loc[['A','B','D','E','X']] # select all rows except C
newtable
Out[18]:
W X Y Z
A 33 89 32 63
B 1 43 87 70
D 70 47 79 52
E 89 48 88 70
X 89 48 88 70
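
Alternatively, drop works on rows as well (axis=0, rows, is the default). A sketch; without inplace=True it returns a new dataframe and leaves newtable unchanged:

trimmed = newtable.drop('X')  # new dataframe without row 'X'; newtable is untouched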

Indexing

We have already been indexing, but a few examples follow:

In [19]:
newtable['X'] #Selecting a single column
Out[19]:
A    89
B    43
D    47
E    48
X    48
Name: X, dtype: int32
In [20]:
newtable[['X','W']] #Selecting multiple columns
Out[20]:
X W
A 89 33
B 43 1
D 47 70
E 48 89
X 48 89
In [21]:
newtable.loc['E'] #Selecting rows based on label via the loc[ ] indexer
Out[21]:
W    89
X    48
Y    88
Z    70
Name: E, dtype: int32
In [22]:
newtable.loc[['E','X','B']] #Selecting multiple rows based on label via the loc[ ] indexer
Out[22]:
W X Y Z
E 89 48 88 70
X 89 48 88 70
B 1 43 87 70
In [23]:
newtable.loc[['B','E','D'],['X','Y']] #Selecting elements by both row and column via the loc[ ] indexer
Out[23]:
X Y
B 43 87
E 48 88
D 47 79

Conditional Selection

In [24]:
df = pd.DataFrame({'col1':[1,2,3,4,5,6,7,8],
                   'col2':[444,555,666,444,666,111,222,222],
                   'col3':['orange','apple','grape','mango','jackfruit','watermelon','banana','peach']})
df
Out[24]:
col1 col2 col3
0 1 444 orange
1 2 555 apple
2 3 666 grape
3 4 444 mango
4 5 666 jackfruit
5 6 111 watermelon
6 7 222 banana
7 8 222 peach
In [25]:
#What fruit corresponds to the number 555 in ‘col2’?

df[df['col2']==555]['col3']
Out[25]:
1    apple
Name: col3, dtype: object
In [26]:
#What fruit corresponds to the minimum number in ‘col2’?

df[df['col2']==df['col2'].min()]['col3']
Out[26]:
5    watermelon
Name: col3, dtype: object
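
Conditions can be combined with & (and) and | (or); each clause must be wrapped in parentheses. A sketch:

df[(df['col2'] > 200) & (df['col1'] < 5)]  # rows where both conditions hold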

Descriptor Functions

In [27]:
#Creating a dataframe from a dictionary

df = pd.DataFrame({'col1':[1,2,3,4,5,6,7,8],
                   'col2':[444,555,666,444,666,111,222,222],
                   'col3':['orange','apple','grape','mango','jackfruit','watermelon','banana','peach']})
df
Out[27]:
col1 col2 col3
0 1 444 orange
1 2 555 apple
2 3 666 grape
3 4 444 mango
4 5 666 jackfruit
5 6 111 watermelon
6 7 222 banana
7 8 222 peach

`head` method

Returns the first few rows, useful to infer structure

In [28]:
#Returns only the first five rows

df.head()
Out[28]:
col1 col2 col3
0 1 444 orange
1 2 555 apple
2 3 666 grape
3 4 444 mango
4 5 666 jackfruit

`info` method

Returns the data model (data column count, names, data types)

In [29]:
#Info about the dataframe

df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 8 entries, 0 to 7
Data columns (total 3 columns):
col1    8 non-null int64
col2    8 non-null int64
col3    8 non-null object
dtypes: int64(2), object(1)
memory usage: 320.0+ bytes

`describe` method

Returns summary statistics for each numeric column: the count, mean, and standard deviation, the minimum and maximum values, and the quartiles (25%, 50%, 75%).
Again, useful for understanding the structure of the columns.

In [32]:
#Statistics of the dataframe

df.describe()
Out[32]:
col1 col2
count 8.00000 8.0000
mean 4.50000 416.2500
std 2.44949 211.8576
min 1.00000 111.0000
25% 2.75000 222.0000
50% 4.50000 444.0000
75% 6.25000 582.7500
max 8.00000 666.0000
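
By default describe summarizes only the numeric columns; passing include='all' adds count, unique-value, and frequency statistics for the object column as well. A sketch:

df.describe(include='all')  # summary for numeric and non-numeric columns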

Counting and Sum methods

There are also methods for counts and sums by specific columns

In [30]:
df['col2'].sum() #Sum of a specified column
Out[30]:
3330

The unique method returns an array of unique values (duplicates are filtered out; the underlying dataframe is preserved)

In [31]:
df['col2'].unique() #Returns the list of unique values along the indexed column 
Out[31]:
array([444, 555, 666, 111, 222], dtype=int64)

The nunique method returns a count of unique values

In [32]:
df['col2'].nunique() #Returns the total number of unique values along the indexed column 
Out[32]:
5

The value_counts() method returns the count of each unique value (kind of like a histogram, where each value is a bin)

In [33]:
df['col2'].value_counts()  #Returns the number of occurrences of each unique value
Out[33]:
222    2
444    2
666    2
111    1
555    1
Name: col2, dtype: int64

Using functions in dataframes - symbolic apply

The power of pandas is the ability to apply a function to each element of a dataframe series (or a whole frame) by a technique called symbolic (or synthetic programming) application of the function.

It's pretty complicated but quite handy, and best shown by an example:

In [37]:
def times2(x):  # A prototype function to scalar multiply an object x by 2
    return(x*2)

print(df)
print('Apply the times2 function to col2')
df['col2'].apply(times2) #Symbolically apply the function to each element of column col2; the result is a new Series
   col1  col2        col3
0     1   444      orange
1     2   555       apple
2     3   666       grape
3     4   444       mango
4     5   666   jackfruit
5     6   111  watermelon
6     7   222      banana
7     8   222       peach
Apply the times2 function to col2
Out[37]:
0     888
1    1110
2    1332
3     888
4    1332
5     222
6     444
7     444
Name: col2, dtype: int64
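
Anonymous (lambda) functions work the same way, so a one-off transformation does not need a named helper. A sketch:

df['col2'].apply(lambda x: x*2)  # equivalent to the times2 example above
df['col3'].apply(len)            # built-ins work too; here, string lengths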

Sorts

In [34]:
df.sort_values('col2', ascending = True) #Sorting based on columns 
Out[34]:
col1 col2 col3
5 6 111 watermelon
6 7 222 banana
7 8 222 peach
0 1 444 orange
3 4 444 mango
1 2 555 apple
2 3 666 grape
4 5 666 jackfruit
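
Descending and multi-column sorts use the same method. A sketch:

df.sort_values('col2', ascending=False)                   # descending sort
df.sort_values(['col2','col1'], ascending=[True, False])  # sort by col2, break ties by col1 descending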

Aggregating (Grouping Values) dataframe contents

In [35]:
#Creating a dataframe from a dictionary

data = {
    'key' : ['A', 'B', 'C', 'A', 'B', 'C'],
    'data1' : [1, 2, 3, 4, 5, 6],
    'data2' : [10, 11, 12, 13, 14, 15],
    'data3' : [20, 21, 22, 13, 24, 25]
}

df1 = pd.DataFrame(data)
df1
Out[35]:
key data1 data2 data3
0 A 1 10 20
1 B 2 11 21
2 C 3 12 22
3 A 4 13 13
4 B 5 14 24
5 C 6 15 25
In [36]:
# Grouping and summing values in all the columns based on the column 'key'

df1.groupby('key').sum()
Out[36]:
data1 data2 data3
key
A 5 23 33
B 7 25 45
C 9 27 47
In [37]:
# Grouping and summing values in the selected columns based on the column 'key'

df1.groupby('key')[['data1', 'data2']].sum()
Out[37]:
data1 data2
key
A 5 23
B 7 25
C 9 27
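
Other aggregations follow the same pattern, and agg computes several statistics at once. A sketch:

df1.groupby('key').mean()                               # group means
df1.groupby('key')['data1'].agg(['min', 'max', 'sum'])  # several statistics for one column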

Filtering out missing values

In [42]:
#Creating a dataframe from a dictionary

df = pd.DataFrame({'col1':[1,2,3,4,None,6,7,None],
                   'col2':[444,555,None,444,666,111,None,222],
                   'col3':['orange','apple','grape','mango','jackfruit','watermelon','banana','peach']})
df
Out[42]:
col1 col2 col3
0 1.0 444.0 orange
1 2.0 555.0 apple
2 3.0 NaN grape
3 4.0 444.0 mango
4 NaN 666.0 jackfruit
5 6.0 111.0 watermelon
6 7.0 NaN banana
7 NaN 222.0 peach
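
Before filtering, it is useful to count the missing values in each column. A sketch:

print(df.isnull().sum())  # number of NaN entries per column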

Below we drop any row that contains a NaN code.

In [38]:
df_dropped = df.dropna()
df_dropped
Out[38]:
col1 col2 col3
0 1.0 444.0 orange
1 2.0 555.0 apple
3 4.0 444.0 mango
5 6.0 111.0 watermelon

Below we replace NaN codes with some value, in this case 0

In [39]:
df_filled1 = df.fillna(0)
df_filled1
Out[39]:
col1 col2 col3
0 1.0 444.0 orange
1 2.0 555.0 apple
2 3.0 0.0 grape
3 4.0 444.0 mango
4 0.0 666.0 jackfruit
5 6.0 111.0 watermelon
6 7.0 0.0 banana
7 0.0 222.0 peach

Below we replace NaN codes with some value, in this case the mean value of the column in which the missing value code resides.

In [40]:
df_filled2 = df.fillna(df.mean())
df_filled2
Out[40]:
col1 col2 col3
0 1.000000 444.0 orange
1 2.000000 555.0 apple
2 3.000000 407.0 grape
3 4.000000 444.0 mango
4 3.833333 666.0 jackfruit
5 6.000000 111.0 watermelon
6 7.000000 407.0 banana
7 3.833333 222.0 peach

Reading a File into a Dataframe

Pandas has methods to read common file types, such as .csv, .xlsx, and .json. Ordinary text files are also quite manageable.

On a machine you control, you can write a script to retrieve files from the internet and process them.

In [42]:
readfilecsv = pd.read_csv('CSV_ReadingFile.csv')  #Reading a .csv file
print(readfilecsv)
    a   b   c   d
0   0   1   2   3
1   4   5   6   7
2   8   9  10  11
3  12  13  14  15

Similar to reading and writing .csv files, you can also read and write .xlsx files, as below (useful to know this).

In [43]:
readfileexcel = pd.read_excel('Excel_ReadingFile.xlsx', sheet_name='Sheet1') #Reading a .xlsx file
print(readfileexcel)
   Unnamed: 0   a   b   c   d
0           0   0   1   2   3
1           1   4   5   6   7
2           2   8   9  10  11
3           3  12  13  14  15
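
The Unnamed: 0 column above is the row index that was stored in the file; passing index_col=0 tells read_excel (and read_csv) to use that first column as the index instead. A sketch:

readfileexcel = pd.read_excel('Excel_ReadingFile.xlsx', sheet_name='Sheet1', index_col=0)  # first column becomes the index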

Writing a dataframe to file

In [44]:
#Creating and writing to a .csv file
readfilecsv = pd.read_csv('CSV_ReadingFile.csv')
readfilecsv.to_csv('CSV_WritingFile1.csv')
readfilecsv = pd.read_csv('CSV_WritingFile1.csv')
print(readfilecsv)
   Unnamed: 0   a   b   c   d
0           0   0   1   2   3
1           1   4   5   6   7
2           2   8   9  10  11
3           3  12  13  14  15
In [45]:
#Creating and writing to a .csv file by excluding row labels 
readfilecsv = pd.read_csv('CSV_ReadingFile.csv')
readfilecsv.to_csv('CSV_WritingFile2.csv', index = False)
readfilecsv = pd.read_csv('CSV_WritingFile2.csv')
print(readfilecsv)
    a   b   c   d
0   0   1   2   3
1   4   5   6   7
2   8   9  10  11
3  12  13  14  15
In [46]:
#Creating and writing to a .xlsx file
readfileexcel = pd.read_excel('Excel_ReadingFile.xlsx', sheet_name='Sheet1') # read_excel takes no index argument
readfileexcel.to_excel('Excel_WritingFile.xlsx', sheet_name='MySheet', index = False)
readfileexcel = pd.read_excel('Excel_WritingFile.xlsx', sheet_name='MySheet')
print(readfileexcel)
   Unnamed: 0   a   b   c   d
0           0   0   1   2   3
1           1   4   5   6   7
2           2   8   9  10  11
3           3  12  13  14  15




Exercise: Pandas of Data

The pandas library supports three major types of data structures: Series, DataFrames, and Panels. What are some differences between the three structures?

* Make sure to cite any resources that you may use.

In [ ]: