Loyvain Clustering


The Loyvain method is a fusion of the Lloyd and Louvain methods, two classic algorithms for k-means clustering and modularity maximization, respectively. By interpolating between Lloyd and Louvain, it provides a single unified algorithm for both problems.

Here, we compare the Loyvain method against classic k-means and spectral clustering methods on our example brain-imaging correlation networks.

Set up and load data

# Install abct and download abct_utils.py
base = "https://github.com/mikarubi/abct/raw/refs/heads/main"
!wget --no-clobber {base}/docs-code/examples/abct_utils.py
%pip install --quiet abct nilearn

# Import modules
import abct
import time
import numpy as np
from sklearn import cluster
from abct_utils import X, C, fig_scatter, fig_surf

# Get residual correlation networks
Xg = abct.residualn(X.T, "global").T
Xg = Xg / np.linalg.norm(Xg, axis=0, keepdims=True)
Cg = Xg.T @ Xg
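For intuition, "global" residualization removes the network's global signal. The sketch below illustrates the idea in plain NumPy by regressing the mean time series out of every node; this is an illustrative assumption about what `residualn(X.T, "global")` computes, not a reproduction of the abct implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X_demo = rng.standard_normal((100, 30))     # time points x nodes

# Global signal: mean time series across all nodes
g = X_demo.mean(axis=1, keepdims=True)

# Regress the global signal out of each node via least squares
beta = np.linalg.lstsq(g, X_demo, rcond=None)[0]
X_resid = X_demo - g @ beta

# Least-squares residuals are orthogonal to the global signal
assert np.allclose(g.T @ X_resid, 0)
```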

Set up data and clustering similarity functions

We compare the performance of Loyvain across a range of cluster numbers. To do this, we first define the similarity (objective) functions that each method maximizes.

# Define clustering hyperparameters
K = np.arange(5, 30, 5)     # number of clusters
repl = 10                   # number of replicates

def kmeans_similarity(X, M):
    # Within-cluster similarity: sum of pairwise inner products
    # in each cluster, normalized by cluster size
    similarity = 0.0
    for u in np.unique(M):
        I = (M == u)                        # cluster indices
        ni = I.sum()                        # cluster size
        Xi = X[:, I]                        # cluster time series
        similarity += (Xi.T @ Xi).sum() / ni

    return similarity

def spectral_similarity(C, M):
    # Normalized association: within-cluster connectivity of each
    # cluster, normalized by the cluster's total connectivity
    similarity = 0.0
    for u in np.unique(M):
        I = (M == u)                        # cluster indices
        similarity += C[np.ix_(I, I)].sum() / C[I].sum()

    return similarity
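As a sanity check on `kmeans_similarity`: each cluster contributes its size times the squared norm of its centroid, so the similarity and the within-cluster sum of squared distances always add up to the total squared norm of the data. Maximizing one is therefore equivalent to minimizing the other. A quick numerical check on synthetic data (the labels here are arbitrary, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
X_toy = rng.standard_normal((20, 50))       # time points x nodes
M_toy = rng.integers(0, 4, size=50)         # arbitrary cluster labels

similarity = 0.0
distance = 0.0
for u in np.unique(M_toy):
    Xi = X_toy[:, M_toy == u]               # cluster time series
    ci = Xi.mean(axis=1, keepdims=True)     # cluster centroid
    similarity += (Xi.T @ Xi).sum() / Xi.shape[1]
    distance += ((Xi - ci) ** 2).sum()

# Similarity and within-cluster distance sum to the total squared norm
assert np.isclose(similarity + distance, (X_toy ** 2).sum())
```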

Run standard k-means and spectral clustering

We first run standard k-means and spectral clustering algorithms. Note that Loyvain maximizes the k-means objective on residual networks, and the spectral clustering objective (normalized cut) on the original networks. Accordingly, here we compare it to k-means clustering of time series after global signal regression, and spectral clustering of the original correlation networks.

np.random.seed(1)
start_time = time.time()
print("Running standard k-means and spectral clustering.")

kmeans_lloyd_similarity = np.zeros(len(K))
spectral_shimalik_similarity = np.zeros(len(K))
for i, k in enumerate(K):
    print(f"Number of clusters: {k}")
    M_kmeans_sklearn = cluster.KMeans(n_clusters=k, n_init=repl).fit(Xg.T).labels_
    M_spectral_sklearn = cluster.SpectralClustering(
        n_clusters=k, affinity='precomputed', n_init=repl).fit(C).labels_
    kmeans_lloyd_similarity[i] = kmeans_similarity(Xg, M_kmeans_sklearn)
    spectral_shimalik_similarity[i] = spectral_similarity(C, M_spectral_sklearn)

print(f"Time elapsed: {time.time() - start_time} seconds.")
Running standard k-means and spectral clustering.
Number of clusters: 5
Number of clusters: 10
Number of clusters: 15
Number of clusters: 20
Number of clusters: 25
Time elapsed: 87.64774417877197 seconds.

Run Loyvain k-means and spectral clustering

We now use the Loyvain method for direct k-means clustering and spectral clustering of correlation networks.

np.random.seed(1)
start_time = time.time()
print("Running Loyvain k-means and spectral clustering.")

MKmod = [None] * len(K)
kmeans_loyvain_similarity = np.zeros(len(K))
spectral_loyvain_similarity = np.zeros(len(K))
for i, k in enumerate(K):
    print(f"Number of clusters: {k}")
    M_kmodularity_loyvain = abct.loyvain(C, k, "kmodularity", replicates=repl)[0]
    M_spectral_loyvain = abct.loyvain(C, k, "spectral", replicates=repl)[0]
    kmeans_loyvain_similarity[i] = kmeans_similarity(Xg, M_kmodularity_loyvain)
    spectral_loyvain_similarity[i] = spectral_similarity(C, M_spectral_loyvain)
    MKmod[i] = M_kmodularity_loyvain

print(f"Time elapsed: {time.time() - start_time} seconds.")
Running Loyvain k-means and spectral clustering.
Number of clusters: 5
Number of clusters: 10
Number of clusters: 15
Number of clusters: 20
Number of clusters: 25
Time elapsed: 3.911929130554199 seconds.

Visualize clustering results

We now show the maps of k-modularity clusters detected with the Loyvain method.

for i, Mi in enumerate(MKmod):
    fig_surf(Mi, f"Loyvain on correlation network: {K[i]} modules",
             "tab20b", pmin=0, pmax=100)

Compare clustering similarities

Finally, we directly compare the clustering similarities of the Loyvain method to standard k-means and spectral clustering algorithms.

# Scatter plots of k-means clustering similarities
x, y = kmeans_lloyd_similarity, kmeans_loyvain_similarity
fig = fig_scatter(x, y,
                  "Lloyd k-means",
                  "Loyvain k-means",
                  "K-means similarity")
fig.add_shape(type="line",
              x0=np.min([x, y]),
              y0=np.min([x, y]),
              x1=np.max([x, y]),
              y1=np.max([x, y]))
fig.show()

# Scatter plots of spectral clustering similarities
x, y = spectral_shimalik_similarity, spectral_loyvain_similarity
fig = fig_scatter(x, y,
                  "Shi-Malik spectral",
                  "Loyvain spectral",
                  "Spectral similarity")
fig.add_shape(type="line",
              x0=np.min([x, y]),
              y0=np.min([x, y]),
              x1=np.max([x, y]),
              y1=np.max([x, y]))
fig.show()