Different groupers for each column with pandas GroupBy

Try using apply to apply a lambda function to each column of your dataframe, then use the name attribute of that pd.Series (the column label) to select the matching grouper column from the second dataframe:
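The question's original frames aren't shown here, so as an assumption on my part, here is a minimal setup that reproduces the output below (the actual data may differ):

```python
import pandas as pd

# Hypothetical data reconstructed to match the output shown below;
# the original question's frames may differ.
df1 = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [5, 6, 7, 8]})  # values to sum
df2 = pd.DataFrame({'a': [0, 1, 0, 1], 'b': [0, 0, 1, 1]})  # per-column group keys
```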

df1.apply(lambda x: x.groupby(df2[x.name]).transform('sum'))

Output:

   a   b
0  4  11
1  6  11
2  4  15
3  6  15

Using stack and unstack

(df1.stack()
    .groupby([df2.stack().index.get_level_values(level=1), df2.stack()])
    .transform('sum')
    .unstack())

Output:

   a   b
0  4  11
1  6  11
2  4  15
3  6  15
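To see why this works, it can help to inspect the intermediate objects. This sketch assumes the same hypothetical df1/df2 as above (my assumption, not the original question's data):

```python
import pandas as pd

# Hypothetical sample data (assumed, not from the original question)
df1 = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [5, 6, 7, 8]})
df2 = pd.DataFrame({'a': [0, 1, 0, 1], 'b': [0, 0, 1, 1]})

s = df1.stack()                               # long Series with (row, column) MultiIndex
cols = df2.stack().index.get_level_values(1)  # the column label of each stacked entry
keys = df2.stack()                            # the per-entry group key from df2

# Group by (column label, key) so each column is grouped by its own scheme,
# then unstack to pivot back to the original wide shape.
out = s.groupby([cols, keys]).transform('sum').unstack()
```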

You will have to group each column individually since each column uses a different grouping scheme.

If you want a cleaner version, I would recommend a list comprehension over the column names, and call pd.concat on the resultant series:

pd.concat([df1[c].groupby(df2[c]).transform('sum') for c in df1.columns], axis=1)

   a   b
0  4  11
1  6  11
2  4  15
3  6  15

Not that there's anything wrong with using apply as in the other answer; I just don't like apply, so this is my suggestion :-)


Here are some timeits for your perusal. Even on just your sample data, the difference in timings is noticeable.

%%timeit 
(df1.stack()
    .groupby([df2.stack().index.get_level_values(level=1), df2.stack()])
    .transform('sum').unstack())
%%timeit 
df1.apply(lambda x: x.groupby(df2[x.name]).transform('sum'))
%%timeit 
pd.concat([df1[c].groupby(df2[c]).transform('sum') for c in df1.columns], axis=1)

8.99 ms ± 4.55 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
8.35 ms ± 859 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
6.13 ms ± 279 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Not to say apply is slow, but explicit iteration is faster in this case. Additionally, the second and third timed solutions will scale better with length than with breadth, since the number of iterations depends on the number of columns rather than the number of rows.
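If you want to check the scaling claim yourself on longer data, here is a rough sketch (the shapes and value ranges are arbitrary choices of mine); wrap each expression in %timeit in IPython to compare:

```python
import numpy as np
import pandas as pd

# Arbitrary larger example: long and narrow (my choice of shape)
rng = np.random.default_rng(0)
df1 = pd.DataFrame(rng.integers(0, 100, size=(10_000, 2)), columns=['a', 'b'])
df2 = pd.DataFrame(rng.integers(0, 5, size=(10_000, 2)), columns=['a', 'b'])

via_apply = df1.apply(lambda x: x.groupby(df2[x.name]).transform('sum'))
via_concat = pd.concat(
    [df1[c].groupby(df2[c]).transform('sum') for c in df1.columns], axis=1
)

# Both approaches produce the same result; only the timing differs.
assert via_apply.equals(via_concat)
```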