📊 1. Filter rows with the query() method

The traditional approach:

df[(df['age'] > 18) & (df['city'] == '上海') & (df['salary'] > 10000)]

With query(), the same filter reads far more clearly:

df.query('age > 18 and city == "上海" and salary > 10000')
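As a quick self-contained sketch (the DataFrame values here are made up for illustration), query() also accepts local variables through the @ prefix:

```python
import pandas as pd

# Toy data, for illustration only
df = pd.DataFrame({
    'age': [25, 17, 30],
    'city': ['上海', '上海', '北京'],
    'salary': [12000, 8000, 15000],
})

min_salary = 10000
# query() can reference local Python variables with the @ prefix
result = df.query('age > 18 and city == "上海" and salary > @min_salary')
```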

🔗 2. Create new columns in a chain with assign()

The traditional approach:

df['total'] = df['price'] * df['quantity']
df['discount'] = df['total'] * 0.1
df['final'] = df['total'] - df['discount']

The chained approach:

df = df.assign(
    total=df['price'] * df['quantity'],
    discount=lambda x: x['total'] * 0.1,
    final=lambda x: x['total'] - x['discount']
)
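A runnable sketch (toy numbers) showing why the later columns use lambdas: each lambda receives the DataFrame as built so far, so discount can reference the total column created one step earlier:

```python
import pandas as pd

df = pd.DataFrame({'price': [100.0, 200.0], 'quantity': [2, 3]})

# Each lambda sees the columns created by earlier keyword arguments
result = df.assign(
    total=lambda x: x['price'] * x['quantity'],
    discount=lambda x: x['total'] * 0.1,
    final=lambda x: x['total'] - x['discount'],
)
```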

📐 3. Reshape wide to long with melt()

The original wide table:

| name | math | english | science |
|------|------|---------|---------|
| 张三 | 90   | 85      | 88      |
| 李四 | 78   | 92      | 84      |

The conversion:

df_melted = df.melt(
    id_vars=['name'],
    value_vars=['math', 'english', 'science'],
    var_name='subject',
    value_name='score'
)
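Putting the wide table above into a runnable sketch, melt() stacks the three subject columns into (subject, score) pairs, one row per student per subject:

```python
import pandas as pd

# The wide table from above
df = pd.DataFrame({
    'name': ['张三', '李四'],
    'math': [90, 78],
    'english': [85, 92],
    'science': [88, 84],
})

# One row per (name, subject) pair after melting
df_melted = df.melt(
    id_vars=['name'],
    value_vars=['math', 'english', 'science'],
    var_name='subject',
    value_name='score',
)
```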

⚡ 4. apply() vs. vectorized operations

❌ Slow: row-by-row loop:

df['new_col'] = df.apply(lambda x: x['a'] + x['b'], axis=1)

✅ Fast: vectorized (10-100x faster):

df['new_col'] = df['a'] + df['b']

For conditional logic, use np.where:

df['category'] = np.where(df['score'] >= 60, '及格', '不及格')
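A minimal self-contained check (toy data) that the two forms agree; the vectorized version and np.where avoid the per-row Python function calls that make apply() slow on large frames:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [10, 20, 30]})

# Identical results; the vectorized form skips per-row Python calls
via_apply = df.apply(lambda row: row['a'] + row['b'], axis=1)
vectorized = df['a'] + df['b']

# Vectorized conditional labeling
scores = pd.Series([55, 60, 75])
category = np.where(scores >= 60, '及格', '不及格')
```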

🔧 5. A complete strategy for missing values

Inspect the missing values first:

df.isnull().sum()                  # missing count per column
df.isnull().sum() / len(df) * 100  # missing percentage per column

Fill numeric columns (assign the result back instead of using inplace=True, which pandas 2.x discourages for chained column access):

df['age'] = df['age'].fillna(df['age'].median())

Fill categorical columns:

df['city'] = df['city'].fillna('未知')
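The full strategy in one runnable sketch (toy data); note that each fill is assigned back to the column rather than done in place:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'age': [20.0, np.nan, 40.0],
    'city': ['上海', None, '北京'],
})

df['age'] = df['age'].fillna(df['age'].median())  # numeric: fill with median
df['city'] = df['city'].fillna('未知')             # categorical: fill with a sentinel
```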

🚰 6. Pipeline operations with pipe()

def remove_outliers(df, column, threshold=3):
    z_score = (df[column] - df[column].mean()) / df[column].std()
    return df[abs(z_score) < threshold]

def fill_missing(df, column, method='median'):
    if method == 'median':
        df = df.copy()  # work on a copy instead of mutating the caller's frame
        df[column] = df[column].fillna(df[column].median())
    return df

# Chain the cleaning steps
df_clean = (df
    .pipe(remove_outliers, 'price', threshold=2)
    .pipe(fill_missing, 'quantity', method='median')
)

📅 7. Date and time handling

# Convert strings to datetime
df['date'] = pd.to_datetime(df['date'])

# Extract date features
df['year'] = df['date'].dt.year
df['month'] = df['date'].dt.month
df['day'] = df['date'].dt.day
df['weekday'] = df['date'].dt.dayofweek

# Resample to monthly totals ('ME' is the month-end alias; use 'M' on pandas < 2.2)
df.set_index('date', inplace=True)
monthly_sales = df['sales'].resample('ME').sum()

🔗 8. Efficient ways to combine DataFrames

# merge: SQL-style JOIN
df_merged = pd.merge(df1, df2, on='key_column', how='left')

# concat: stack vertically
df_vertical = pd.concat([df1, df2, df3], axis=0, ignore_index=True)

# concat: join horizontally
df_horizontal = pd.concat([df1, df2], axis=1)
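A runnable sketch with toy frames; a left join keeps every row of the left frame and fills NaN where the right frame has no match:

```python
import pandas as pd

orders = pd.DataFrame({'key_column': [1, 2, 3], 'amount': [10, 20, 30]})
users = pd.DataFrame({'key_column': [1, 2], 'name': ['A', 'B']})

# Left join: all order rows survive; key 3 gets NaN for name
merged = pd.merge(orders, users, on='key_column', how='left')

# Vertical concat with a fresh 0..n-1 index
stacked = pd.concat([orders, orders], axis=0, ignore_index=True)
```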

💾 9. Memory optimization

# Inspect memory usage
df.info(memory_usage='deep')

# Downcast to smaller dtypes (int16 is only safe if values fit and the column has no NaN)
df['age'] = df['age'].astype('int16')
df['price'] = df['price'].astype('float32')
df['city'] = df['city'].astype('category')
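A quick sketch demonstrating the category win on a low-cardinality string column (toy data; the exact byte counts depend on the pandas version, but the direction holds):

```python
import pandas as pd

# 2000 rows but only 2 distinct values: ideal for the category dtype
df = pd.DataFrame({'city': ['上海', '北京'] * 1000})

before = df['city'].memory_usage(deep=True)
df['city'] = df['city'].astype('category')  # stores small integer codes + 2 labels
after = df['city'].memory_usage(deep=True)
```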

🎯 10. Worked example: processing e-commerce order data

import pandas as pd
import numpy as np

# Load the data, parsing the order date on read
df = pd.read_csv('orders.csv', parse_dates=['order_date'])

# Clean and enrich the data
df_clean = (df
    .query('amount > 0 and status != "cancelled"')
    .assign(
        order_year=lambda x: x['order_date'].dt.year,
        discount_amount=lambda x: x['amount'] * x['discount_rate'],
        final_amount=lambda x: x['amount'] - x['discount_amount']
    )
    .dropna(subset=['customer_id'])
)

# Segment customers (np.select checks conditions in order; the first match wins)
conditions = [
    (df_clean['final_amount'] > 10000),
    (df_clean['final_amount'] > 5000),
    (df_clean['final_amount'] <= 1000)
]
choices = ['高价值客户', '中等客户', '低价值客户']
df_clean['customer_segment'] = np.select(conditions, choices, default='普通客户')

print(df_clean['customer_segment'].value_counts())
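The segmentation logic isolated into a self-contained sketch: np.select evaluates the conditions top to bottom, and rows matching none of them (here, amounts between 1000 and 5000) fall back to the default label:

```python
import numpy as np
import pandas as pd

amounts = pd.Series([15000, 7000, 3000, 500])

# Evaluated in order: first matching condition determines the label
conditions = [amounts > 10000, amounts > 5000, amounts <= 1000]
choices = ['高价值客户', '中等客户', '低价值客户']
segments = np.select(conditions, choices, default='普通客户')
```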

📋 Summary of techniques

| Technique | Core function | Typical use |
|-----------|---------------|-------------|
| Conditional filtering | query() | Complex row filters |
| Chained column creation | assign() | Multi-step feature engineering |
| Wide-to-long reshaping | melt() | Data normalization |
| Group aggregation | groupby().agg() | Summary statistics |
| Missing-value handling | fillna() | Data cleaning |
| Pipeline operations | pipe() | Streamlined workflows |
| Date handling | dt accessor | Time-feature extraction |
| Memory optimization | astype() | Large datasets |

💡 Tip: wrap the functions you use most often into your own utility library so you can reuse them across projects.