Functions of the form $f: G \to \mathbb{R}$ or $f: G \to \mathbb{C}$, where $G$ is a group, arise in lots of contexts.
One very natural way this can happen is to have a probability distribution on a group, $G$. The probability density of group elements is a function $p: G \to \mathbb{R}$.
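To make this concrete, here is a minimal sketch (not from the original post) of a probability distribution on a group, picking $\mathbb{Z}_6$ modeled as the integers $\{0, \dots, 5\}$ under addition mod 6; the names `p` and `samples` are just illustrative:

```python
import random

# A toy probability distribution on the cyclic group Z_6, modeled as
# the integers {0, ..., 5} under addition mod 6. The density p maps
# each group element to a probability.
p = {0: 0.4, 1: 0.2, 2: 0.1, 3: 0.1, 4: 0.1, 5: 0.1}

assert abs(sum(p.values()) - 1.0) < 1e-9  # a density sums to one

# Draw ten group elements according to p.
samples = random.choices(list(p.keys()), weights=list(p.values()), k=10)
print(samples)
```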
Another way this can happen is if you have some function $f: X \to \mathbb{R}$ and a group $G$ has a natural action on $f$’s domain – if you care about the values $f$ takes at a particular point $x_0$, you are led to consider functions of the form $g \mapsto f(g \cdot x_0)$. For a specific example, the intensity of a particular pixel, $x_0$, in a square gray-scale image, $I$, subject to flips and rotations, can be considered as a function $D_4 \to \mathbb{R}$.
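As a hedged illustration of this example, the following sketch encodes the eight symmetries of a square (the dihedral group $D_4$) as `(quarter_turns, flip)` pairs acting on a NumPy array; the encoding and the helper `act` are my own choices, not notation from the post:

```python
import numpy as np

# The eight symmetries of a square (the dihedral group D_4), encoded
# as (quarter_turns, flip) pairs: optionally flip left-right, then
# rotate counterclockwise by 90 degrees `quarter_turns` times.
D4 = [(k, flip) for flip in (False, True) for k in range(4)]

def act(g, image):
    """Apply the symmetry g to a square image."""
    quarter_turns, flip = g
    if flip:
        image = np.fliplr(image)
    return np.rot90(image, quarter_turns)

image = np.random.rand(4, 4)  # a toy 4x4 gray-scale image
x0 = (0, 0)                   # the pixel whose intensity we care about

# The intensity of pixel x0 in each transformed image: a function D_4 -> R.
f = {g: act(g, image)[x0] for g in D4}
for g, value in f.items():
    print(g, round(value, 3))
```

Note that this only uses $D_4$ as a set of transformations; composing two `(quarter_turns, flip)` pairs into a third is the group law, which we don’t need here.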
Recall that we can visualize finitely generated groups by drawing Cayley Diagrams. (There’s a nice book, Visual Group Theory by Nathan Carter, that teaches a lot of basic group theory from the perspective of Cayley Diagrams.)
The natural way to visualize functions on groups is to picture them as taking values on the nodes of the Cayley Diagrams. One way to do this is by coloring the nodes. In the following visualization of a real-valued function on a group, dark colors represent values close to zero and light colors values close to one.
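One plausible way to draw such a picture (a sketch assuming `networkx` and `matplotlib`, and picking $\mathbb{Z}_6$ with generator $1$ as the group) is:

```python
import matplotlib.pyplot as plt
import networkx as nx

# Cayley diagram of Z_6 with generator 1: each element x gets an
# arrow to x + 1 (mod 6), so the diagram is a directed 6-cycle.
n = 6
G = nx.DiGraph([(x, (x + 1) % n) for x in range(n)])

# A function Z_6 -> [0, 1]; each node's brightness encodes its value.
f = {0: 0.9, 1: 0.7, 2: 0.2, 3: 0.05, 4: 0.3, 5: 0.6}

pos = nx.circular_layout(G)
nx.draw(G, pos, node_color=[f[x] for x in G.nodes()],
        cmap=plt.cm.gray, vmin=0.0, vmax=1.0,
        with_labels=True, font_color="red")
plt.show()
```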
Just as we can with functions $\mathbb{R} \to \mathbb{R}$, we can do pointwise addition of two functions $f_1, f_2: G \to \mathbb{R}$:

$$(f_1 + f_2)(x) = f_1(x) + f_2(x)$$

Or do pointwise multiplication:

$$(f_1 \cdot f_2)(x) = f_1(x) \cdot f_2(x)$$
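In code, if we represent a function on a group as a dict from elements to values (an illustrative choice, using $\mathbb{Z}_6$ again), both operations are one-liners:

```python
# Functions on a group, represented as dicts from elements to values.
def pointwise_add(f1, f2):
    return {x: f1[x] + f2[x] for x in f1}

def pointwise_mul(f1, f2):
    return {x: f1[x] * f2[x] for x in f1}

# Two functions on Z_6, i.e. on the elements {0, ..., 5}.
f1 = {x: x / 5 for x in range(6)}
f2 = {x: 1.0 if x % 2 == 0 else 0.0 for x in range(6)}

print(pointwise_add(f1, f2))
print(pointwise_mul(f1, f2))
```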
In addition to doing various pointwise operations on functions, we can also do an operation analogous to “translating” a function. This is just permuting the domain by multiplying the input by a group element $g$ before applying the function:

$$(g \cdot f)(x) = f(gx)$$
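A minimal sketch of translation under the same dict representation, again on $\mathbb{Z}_6$, where the group operation is addition mod 6:

```python
# Translating a function on Z_6 by a group element g: permute the
# domain by adding g (the group operation, mod 6) to the input
# before applying f.
def translate(f, g, n=6):
    return {x: f[(g + x) % n] for x in range(n)}

f = {x: x / 5 for x in range(6)}
print(translate(f, 2))  # the translate of f by the element 2
```

(Multiplying the input on the left or on the right gives two different notions of translation in general; for the abelian group $\mathbb{Z}_6$ the two coincide.)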
Studying functions with interesting structures for domains (in particular: groups, monoids, graphs and categories) feels like a rich area to me, and I haven’t been able to find much work focused on it. I don’t know what area of math it should fall in.
(Character theory studies functions of the form $\chi: G \to \mathbb{C}$, but they are functions with very specific properties, not the general case. Harmonic analysis considers such functions, but only in narrow contexts.)
It’s well known that one can generalize convolutions and the Fourier transform to groups. However, it seems like many people don’t find them intuitive. It turns out that from the right perspective they are quite natural, and we’ll address them in a future blog post, now that we’ve built up some basic tools for reasoning about them. We’ll also look at generalizations to other structures.
Finally, and perhaps most interestingly to me, these notions appear to provide us with a nice framework for generalizing deep convolutional neural networks…