Multilayer networks describe the rich ways in which a set of nodes is related by expressing their connections through different kinds of relationships in separate layers. These multiple relationships are naturally represented by an adjacency tensor that can be parsed and interpreted using techniques from multilinear algebra. In this work we propose the use of the nonnegative Tucker decomposition (NNTuck) with KL-divergence as an expressive factor model for adjacency tensors. Applying a full Tucker decomposition naturally generalizes existing methods for Poisson tensor decomposition and stochastic blockmodels of multilayer networks, which correspond to special cases of the decomposition in which certain factor matrices are fixed. We show how the full nonnegative Tucker decomposition provides an intuitive factor-based perspective on layer interdependence and enables linear-algebraic techniques for analyzing interdependence with reference to specific layers.
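As a rough illustration of the factor model described above, the following sketch builds a Tucker-structured rate tensor from nonnegative factors and samples a multilayer network from it. All dimensions and variable names (`U`, `V`, `W`, `G`) are illustrative assumptions, not the paper's notation or implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: N nodes, L layers, K node groups, C layer factors.
N, L, K, C = 6, 3, 2, 2

# Nonnegative factor matrices and core tensor (assumed names, for illustration):
# U, V hold node-group memberships, W holds layer factors, and the core
# tensor G couples node groups across layer factors.
U = rng.random((N, K))
V = rng.random((N, K))
W = rng.random((L, C))
G = rng.random((K, K, C))

# Expected adjacency tensor under a Tucker-structured Poisson model:
# Lam[i, j, l] = sum_{a,b,c} G[a, b, c] * U[i, a] * V[j, b] * W[l, c]
Lam = np.einsum('abc,ia,jb,lc->ijl', G, U, V, W)

# A multilayer network sampled from this nonnegative rate tensor.
A = rng.poisson(Lam)
print(A.shape)  # (6, 6, 3): one N-by-N adjacency matrix per layer
```

Because the layer factors in `W` are shared across all layers, structure in one layer can inform the rates in another, which is what makes the factor matrices useful for reasoning about layer interdependence.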
Algorithmically, we find that using expectation maximization to maximize the log-likelihood under the NNTuck model is step-by-step equivalent to tensorial multiplicative updates for the nonnegative Tucker decomposition under a KL loss, extending a previously known equivalence from nonnegative matrices to nonnegative tensors. Using both synthetic and real-world data, we evaluate the use and interpretation of the NNTuck as a model of multilayer networks. In doing so, we detail a formal treatment of cross-validation for link-prediction tasks in multilayer settings, emphasizing significant performance differences between tubular and independent link prediction. Furthermore, we propose a definition of layer dependence based on a likelihood ratio test that evaluates the NNTuck model under differing specifications. We show how this definition, paired with analysis of the factor matrices in the NNTuck, can be used to understand and interpret layer dependence in different contexts.
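The multiplicative-update view mentioned above can be sketched on a single factor: with the other factors held fixed, updating one factor matrix of a nonnegative Tucker model under KL loss reduces to a KL-NMF subproblem on a tensor unfolding, solvable with the classic Lee-Seung multiplicative update. The sizes, variable names, and unfolding convention below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 1e-12  # guards against division by zero and log(0)

# Hypothetical sizes; A plays the role of a nonnegative multilayer
# adjacency tensor with N nodes and L layers.
N, L, K, C = 8, 4, 3, 2
A = rng.poisson(1.0, size=(N, N, L)).astype(float)

# Randomly initialized nonnegative factors (assumed names, for illustration).
U = rng.random((N, K)) + eps
V = rng.random((N, K)) + eps
W = rng.random((L, C)) + eps
G = rng.random((K, K, C)) + eps

def kl_div(X, Y):
    """Generalized KL divergence D(X || Y) for nonnegative arrays."""
    return np.sum(X * np.log((X + eps) / (Y + eps)) - X + Y)

# With V, W, and G held fixed, updating U is a KL-NMF subproblem on the
# mode-1 unfolding: A_(1) ~= U @ M, where M collapses the other factors.
A1 = A.reshape(N, N * L)
M = np.einsum('abc,jb,lc->ajl', G, V, W).reshape(K, N * L)

before = kl_div(A1, U @ M)
# Classic Lee-Seung multiplicative update for the left factor under KL loss;
# it is guaranteed not to increase the divergence.
U *= ((A1 / (U @ M + eps)) @ M.T) / (M.sum(axis=1) + eps)
after = kl_div(A1, U @ M)

print(after <= before + 1e-9)  # the KL loss does not increase
```

Cycling such updates over all factor matrices and the core tensor is what the tensorial multiplicative-update scheme amounts to; the EM equivalence says each such update matches an E-step/M-step pair for the corresponding Poisson model.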