Poster in Workshop: I Can't Believe It's Not Better: Challenges in Applied Deep Learning
On the Role of Structure in Hierarchical Graph Neural Networks
Luca Sbicego · Sevda Öğüt · Manuel Madeira · Yiming Qin · Dorina Thanou · Pascal Frossard
Hierarchical Graph Neural Networks (GNNs) integrate pooling layers that progressively coarsen graphs to generate graph representations, and they are provably more expressive than traditional GNNs that rely solely on message passing. However, prior work showing that hierarchical architectures yield no empirical performance gains is based on small datasets where structure-unaware baselines often perform well, which limits the generalizability of those findings. In this work, we comprehensively investigate the role of graph structure in pooling-based GNNs. Our analysis includes: (1) reproducing previous studies on larger, more diverse datasets, (2) assessing the robustness of different architectures to structural perturbations of the graphs applied at varying depths of the network, and (3) comparing against structure-agnostic baselines. Our results confirm previous findings and show that they hold on the newly tested datasets, even when graph structure is meaningful for the task. Interestingly, we observe that hierarchical GNNs recover performance under structural perturbations better than their flat counterparts. These findings highlight both the potential and the limitations of pooling-based GNNs, motivating more structure-sensitive benchmarks and evaluation frameworks.
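To make the setup concrete, here is a minimal sketch of the kind of pooling-based hierarchical GNN and structural perturbation the abstract refers to, written with PyTorch Geometric. The specific layer choices (GCNConv, TopKPooling), hyperparameters, and the edge-dropping helper are illustrative assumptions for exposition, not the architectures or perturbations evaluated in the paper.

```python
# Illustrative sketch only: alternates message passing and pooling so the graph
# is coarsened at each hierarchy level, then classifies from per-level readouts.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, TopKPooling, global_mean_pool


class HierarchicalGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes, pool_ratio=0.5, num_levels=2):
        super().__init__()
        self.convs = torch.nn.ModuleList()
        self.pools = torch.nn.ModuleList()
        for level in range(num_levels):
            self.convs.append(GCNConv(in_dim if level == 0 else hidden_dim, hidden_dim))
            self.pools.append(TopKPooling(hidden_dim, ratio=pool_ratio))
        self.classifier = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        readouts = []
        for conv, pool in zip(self.convs, self.pools):
            x = F.relu(conv(x, edge_index))                    # message passing on current graph
            x, edge_index, _, batch, _, _ = pool(x, edge_index, batch=batch)  # coarsen: keep top-k nodes
            readouts.append(global_mean_pool(x, batch))        # readout at each hierarchy level
        return self.classifier(sum(readouts))


def perturb_edges(edge_index, drop_prob=0.2):
    """Randomly drop a fraction of edges to probe structural robustness (simplified:
    directions of undirected edges are dropped independently)."""
    keep = torch.rand(edge_index.size(1)) > drop_prob
    return edge_index[:, keep]
```

A "flat" counterpart in this sketch would simply omit the TopKPooling layers and pool once at the end, while a structure-agnostic baseline would ignore edge_index entirely and operate on node features alone.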