Many datasets and networks contain dominant global patterns that sometimes represent artifactual, trivial, or irrelevant structure. Correspondingly, analyses often seek to remove these patterns to uncover or accentuate interesting underlying structure. We use the term global residualization to describe this removal.
Here, we show the approximate equivalence of three variants of global residualization across unsupervised learning, network science, and imaging neuroscience; each variant is sketched in code after this list:
First-component removal: Subtraction of the rank-one approximation of the data (common in unsupervised learning).
Degree correction: Subtraction of the normalized outer product of the node-degree vectors (common in network science).
Global signal regression: Regression of the mean time series (the global signal) out of each node's time series (common in imaging neuroscience).
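A minimal sketch of the three variants, assuming a symmetric weighted network `W` and a node-by-time array `ts` as NumPy arrays; the function names here are illustrative, not the toolbox's API:

```python
import numpy as np

def remove_first_component(W):
    """Subtract the best rank-one approximation (leading singular component)."""
    U, s, Vt = np.linalg.svd(W)
    return W - s[0] * np.outer(U[:, 0], Vt[0])

def degree_correct(W):
    """Subtract the outer product of the degree vector, normalized by total degree."""
    k = W.sum(axis=1)                     # (weighted) node degrees
    return W - np.outer(k, k) / k.sum()   # k k^T / 2m for a symmetric network

def global_signal_regress(ts):
    """Regress the mean time series (global signal) out of each node's time series."""
    ts = ts - ts.mean(axis=1, keepdims=True)  # demean each node's time series
    g = ts.mean(axis=0)                       # global signal
    beta = ts @ g / (g @ g)                   # per-node regression coefficients
    return ts - np.outer(beta, g)
```

Note the structural parallel that underlies the claimed equivalence: each variant subtracts a rank-one term, either from the network itself or, via the global signal, from the time series.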
Show original structural and correlation networks
To show these relationships, we first consider the original structural and correlation networks in our example brain-imaging data. Each network contains 360 nodes (rows and columns) that denote cortical (brain) regions.
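A sketch of this step, assuming the structural network `sc` (360 × 360) and the node-by-time array `ts` are already loaded as NumPy arrays (both names are hypothetical; how they are loaded depends on the dataset at hand):

```python
import numpy as np
import matplotlib.pyplot as plt

fc = np.corrcoef(ts)  # 360 x 360 correlation (functional) network

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
for ax, net, title in zip(axes, [sc, fc], ["Structural", "Correlation"]):
    im = ax.imshow(net, cmap="coolwarm")
    ax.set_title(f"{title} network")
    fig.colorbar(im, ax=ax, shrink=0.8)
plt.show()
```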
We now apply the three variants of global residualization to these networks, as in the sketch below. Note that while the degree term and first component are subtracted directly from the networks, the global signal is regressed out of the time series, after which the correlation network is recomputed.
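Continuing the sketch above, again with the hypothetical `sc`, `fc`, and `ts` arrays:

```python
# Degree correction and first-component removal act on the networks directly.
sc_deg, fc_deg = degree_correct(sc), degree_correct(fc)
sc_r1, fc_r1 = remove_first_component(sc), remove_first_component(fc)

# Global signal regression acts on the time series; the correlation
# network is then recomputed from the residual time series.
fc_gsr = np.corrcoef(global_signal_regress(ts))
```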