Many AI inference problems arising in fields as diverse as verification, network communication, computer vision, machine learning, and robotics can be solved using message-passing algorithms that operate on factor graphs. Often, however, we face inference problems with many symmetries that are not reflected in the factor graph structure and, hence, cannot be exploited by standard message-passing algorithms. In this talk, I will survey lifted message-passing algorithms, which exploit these additional symmetries. Starting from a given factor graph, they first construct a lifted factor graph of supernodes and superfactors, corresponding to sets of nodes and factors that send and receive the same messages, i.e., that are indistinguishable given the evidence. They then run a modified message-passing algorithm on this lifted factor graph. In particular, I will present lifted variants of several message-passing algorithms, including loopy belief propagation as well as warning and survey propagation, and demonstrate that significant efficiency gains are obtainable, often by orders of magnitude. This talk is based on collaborations with Babak Ahmadi, Youssef El Massaoudi, Fabian Hadiji, Sriraam Natarajan, and Scott Sanner.
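To make the lifting step concrete: one standard way to find sets of nodes and factors that would send and receive the same messages is iterative color refinement, where variables start with colors determined by the evidence, factors start with colors determined by their potentials, and colors are repeatedly refined by the multiset of neighboring colors until the partition stabilizes. The following is a minimal illustrative sketch of that idea; the function and variable names, the data layout, and the tiny example graph are my own choices, not material from the talk.

```python
from collections import defaultdict

def _groups(colors):
    """Partition node ids into sets sharing the same color."""
    buckets = defaultdict(set)
    for node, color in colors.items():
        buckets[color].add(node)
    return frozenset(frozenset(s) for s in buckets.values())

def color_passing(var_colors, fac_colors, edges, max_iters=100):
    """Refine colors until indistinguishable nodes share a color.

    var_colors: {variable: initial color}, e.g. derived from evidence.
    fac_colors: {factor: initial color}, e.g. an id of its potential.
    edges: iterable of (variable, factor) pairs of the factor graph.
    Returns final (var_colors, fac_colors); nodes with equal colors
    belong to one supernode, factors with equal colors to one superfactor.
    """
    vc, fc = dict(var_colors), dict(fac_colors)
    for _ in range(max_iters):
        # Each factor absorbs the sorted multiset of its variables' colors.
        new_fc = {f: hash((fc[f], tuple(sorted(vc[v] for v, g in edges if g == f))))
                  for f in fc}
        # Each variable absorbs the sorted multiset of its factors' refined colors.
        new_vc = {v: hash((vc[v], tuple(sorted(new_fc[f] for u, f in edges if u == v))))
                  for v in vc}
        stable = (_groups(new_vc) == _groups(vc) and
                  _groups(new_fc) == _groups(fc))
        vc, fc = new_vc, new_fc
        if stable:  # the partition no longer refines
            break
    return vc, fc

# Tiny chain A - f1 - B - f2 - C with identical potentials and no evidence.
vc, fc = color_passing({"A": 0, "B": 0, "C": 0},
                       {"f1": "p", "f2": "p"},
                       [("A", "f1"), ("B", "f1"), ("B", "f2"), ("C", "f2")])
```

In this example A and C end up with the same color (one supernode), B stands alone, and f1 and f2 form a single superfactor; message passing then needs to be run on only two variable nodes and one factor node instead of three and two.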