Count It Up! Mastering Groupby to Analyze Two Columns in Pandas DataFrames
Import pandas library:
import pandas as pd
Create a sample DataFrame:
df = pd.DataFrame({'col1': ['A', 'A', 'B', 'B', 'A', 'C', 'C', 'C'],
                   'col2': ['X', 'Y', 'X', 'Y', 'X', 'Y', 'X', 'Y']})
Group by two columns and get counts:
Use the .groupby() method on the DataFrame, passing a list of the column names you want to group by. Here, we are grouping by 'col1' and 'col2'. Then, call the .size() method to get the number of occurrences in each group. Finally, .to_frame(name='count') converts the resulting Series of counts into a DataFrame and names the count column 'count'.
result = df.groupby(['col1', 'col2']).size().to_frame(name='count')
print(result)
Explanation of the code:
.groupby(['col1', 'col2']): This groups the DataFrame by the values in the 'col1' and 'col2' columns.
.size(): This calculates the number of rows in each group.
.to_frame(name='count'): This converts the resulting Series (containing the counts) into a DataFrame and renames the column containing the counts to 'count'.
Output:
           count
col1 col2
A    X         2
     Y         1
B    X         1
     Y         1
C    X         1
     Y         2
This DataFrame shows the count of occurrences for each unique combination of values in 'col1' and 'col2'.
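If you prefer the grouping columns to remain regular columns rather than become the index, here is a minimal sketch (assuming pandas 1.1 or later, where calling .size() on an as_index=False groupby returns a DataFrame):

```python
import pandas as pd

df = pd.DataFrame({'col1': ['A', 'A', 'B', 'B', 'A', 'C', 'C', 'C'],
                   'col2': ['X', 'Y', 'X', 'Y', 'X', 'Y', 'X', 'Y']})

# With as_index=False, 'col1' and 'col2' remain ordinary columns and
# .size() returns a DataFrame whose count column is named 'size'
flat = df.groupby(['col1', 'col2'], as_index=False).size()
print(flat)
```

This avoids the .to_frame()/.reset_index() round trip, at the cost of the count column being named 'size' instead of 'count'.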
Example 1: Simple Count
import pandas as pd
# Sample data
data = {'city': ['New York', 'Los Angeles', 'Chicago', 'New York', 'Miami'],
        'size': ['Large', 'Small', 'Medium', 'Large', 'Small']}
# Create DataFrame
df = pd.DataFrame(data)
# Group by city and get count
city_counts = df.groupby('city').size().to_frame(name='count')
print(city_counts)
This code creates a DataFrame with 'city' and 'size' columns. It then groups the DataFrame by the 'city' column and uses .size() to count the occurrences of each city. Finally, it converts the resulting Series to a DataFrame stored in city_counts.
             count
city
Chicago          1
Los Angeles      1
Miami            1
New York         2
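Building on Example 1, counts are often more useful sorted. Here is a small sketch using the same sample data (note that the relative order of tied cities is not guaranteed):

```python
import pandas as pd

data = {'city': ['New York', 'Los Angeles', 'Chicago', 'New York', 'Miami'],
        'size': ['Large', 'Small', 'Medium', 'Large', 'Small']}
df = pd.DataFrame(data)

# Sort the counts so the most frequent city comes first
city_counts = (df.groupby('city').size()
                 .to_frame(name='count')
                 .sort_values('count', ascending=False))
print(city_counts)
```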
Example 2: Count with additional column
import pandas as pd
# Sample data (includes a price column)
data = {'product': ['Shirt', 'Shirt', 'Hat', 'Shirt', 'Pants'],
        'color': ['Red', 'Blue', 'Black', 'Red', 'Blue'],
        'price': [20, 15, 25, 20, 30]}
# Create DataFrame
df = pd.DataFrame(data)
# Group by product and color, then get count and calculate total price
product_color_counts = df.groupby(['product', 'color']) \
                         .size().to_frame(name='count') \
                         .reset_index()  # Reset index to include product and color columns
# Use .values so the summed prices align by group order; plain assignment
# would align on the grouped MultiIndex and fill the column with NaN
product_color_counts['total_price'] = df.groupby(['product', 'color'])['price'].sum().values
print(product_color_counts)
This code builds upon the previous example. It includes a 'price' column and demonstrates how to perform additional calculations within the grouped data. Here, it calculates the total price for each combination of product and color.
  product  color  count  total_price
0     Hat  Black      1           25
1   Pants   Blue      1           30
2   Shirt   Blue      1           15
3   Shirt    Red      2           40
These examples showcase the flexibility of groupby for counting occurrences and performing further analysis on grouped data in Pandas DataFrames.
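An alternative worth knowing for Example 2: named aggregation (available since pandas 0.25) can produce the count and the total price in a single groupby pass, avoiding the separate assignment step. A sketch using the same sample data:

```python
import pandas as pd

data = {'product': ['Shirt', 'Shirt', 'Hat', 'Shirt', 'Pants'],
        'color': ['Red', 'Blue', 'Black', 'Red', 'Blue'],
        'price': [20, 15, 25, 20, 30]}
df = pd.DataFrame(data)

# Named aggregation: each keyword becomes an output column, built from a
# (source column, aggregation function) pair
summary = (df.groupby(['product', 'color'])
             .agg(count=('price', 'size'), total_price=('price', 'sum'))
             .reset_index())
print(summary)
```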
Method 1: Using .value_counts()
This method is particularly useful when you only need the counts and don't require additional transformations on the grouped data.
import pandas as pd
# Sample data
data = {'customer': ['A', 'A', 'B', 'B', 'A', 'C', 'C', 'C'],
        'product': ['X', 'Y', 'X', 'Y', 'X', 'Y', 'X', 'Y']}
# Create DataFrame
df = pd.DataFrame(data)
# Count each unique (customer, product) combination
customer_product_counts = df.value_counts(subset=['customer', 'product'])
print(customer_product_counts)
Explanation:
.value_counts(subset=['customer', 'product']): Counts each unique combination of values in the 'customer' and 'product' columns, returning the results sorted by count in descending order.
Note: This approach returns a Series containing the counts for each unique combination of customer and product. If you need a DataFrame, chain .to_frame(name='count') or .reset_index(name='count') onto the result.
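As a sketch of flattening the result (assuming pandas 1.1 or later, where DataFrame.value_counts was introduced), the counts Series converts cleanly into a DataFrame by chaining .reset_index():

```python
import pandas as pd

data = {'customer': ['A', 'A', 'B', 'B', 'A', 'C', 'C', 'C'],
        'product': ['X', 'Y', 'X', 'Y', 'X', 'Y', 'X', 'Y']}
df = pd.DataFrame(data)

# value_counts returns a Series sorted by count (descending);
# reset_index(name='count') turns it into a flat DataFrame
counts = df.value_counts(subset=['customer', 'product']).reset_index(name='count')
print(counts)
```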
Method 2: Using a dictionary with .agg()
This method allows you to perform different aggregations on multiple columns within the grouped data.
import pandas as pd
# Sample data (includes a price column)
data = {'category': ['Electronics', 'Electronics', 'Clothing', 'Clothing', 'Electronics'],
        'brand': ['Sony', 'LG', 'Nike', 'Adidas', 'Samsung'],
        'price': [500, 700, 100, 150, 800]}
# Create DataFrame
df = pd.DataFrame(data)
# Define a dictionary for aggregation functions
aggregation_dict = {'price': ['sum', 'mean']}
# Group by category and brand and calculate price statistics
category_brand_counts = df.groupby(['category', 'brand']) \
                          .agg(aggregation_dict) \
                          .reset_index()  # Reset index to include category and brand columns
print(category_brand_counts)
.agg(aggregation_dict): Applies the aggregation functions specified in the dictionary (aggregation_dict) to each group.
aggregation_dict: This dictionary maps column names to a list of aggregation functions. Here, we calculate the sum and mean of the 'price' column.
.reset_index(): Converts the multi-level index back to regular 'category' and 'brand' columns.
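One caveat with the dictionary form: passing a list of functions produces MultiIndex column labels such as ('price', 'sum'). Here is a sketch of flattening those labels, also adding 'count' to the function list so the group sizes are kept:

```python
import pandas as pd

data = {'category': ['Electronics', 'Electronics', 'Clothing', 'Clothing', 'Electronics'],
        'brand': ['Sony', 'LG', 'Nike', 'Adidas', 'Samsung'],
        'price': [500, 700, 100, 150, 800]}
df = pd.DataFrame(data)

stats = df.groupby(['category', 'brand']).agg({'price': ['count', 'sum', 'mean']})

# Join the two column levels, e.g. ('price', 'sum') -> 'price_sum'
stats.columns = ['_'.join(col) for col in stats.columns]
stats = stats.reset_index()
print(stats)
```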
These alternate methods provide different ways to achieve group-wise counting within your Pandas DataFrame. Choose the method that best suits your specific needs and the complexity of the analysis you intend to perform.