benedekrozemberczki / karateclub

Karate Club: An API Oriented Open-source Python Framework for Unsupervised Learning on Graphs (CIKM 2020)

Home Page: https://karateclub.readthedocs.io

Randomness in Laplacian Eigenmaps Embeddings

wendywangwwt opened this issue · comments

Hi! I'm using Laplacian Eigenmaps and noticed that the resulting embeddings are not always the same, even though I have explicitly set the seed:

model = LaplacianEigenmaps(dimensions=3, seed=0)

Running the same algorithm multiple times in the same Python session yields different embeddings each time. Here is a minimal reproducible example:

import networkx as nx
import numpy as np
from karateclub.node_embedding.neighbourhood import LaplacianEigenmaps

# Build a reproducible random graph.
g_undirected = nx.newman_watts_strogatz_graph(1000, 20, 0.05, seed=1)

# Fit the same seeded model five times and compare the resulting embeddings.
for _ in range(5):
    model = LaplacianEigenmaps(dimensions=3, seed=0)
    model.fit(g_undirected)
    node_emb_le = model.get_embedding()
    print(np.sum(node_emb_le))

It yields the following summed values of the embeddings for me:

31.647046936812927
-31.647046936812888
31.64704693681287
-31.690999529775908
-31.581837545720354

How can I control the randomness so that the resulting embeddings are exactly the same every time, even if I run the algorithm arbitrarily many times in the same Python session?

Can you also seed numpy?
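
For example, something along these lines may make the runs reproducible, assuming the remaining variation comes from NumPy's global random state (a sketch adapted from the example above, not a confirmed fix):

import networkx as nx
import numpy as np
from karateclub.node_embedding.neighbourhood import LaplacianEigenmaps

g_undirected = nx.newman_watts_strogatz_graph(1000, 20, 0.05, seed=1)

for _ in range(5):
    # Re-seed NumPy's global RNG before every fit, in addition to the model's seed argument.
    np.random.seed(0)
    model = LaplacianEigenmaps(dimensions=3, seed=0)
    model.fit(g_undirected)
    print(np.sum(model.get_embedding()))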

Sorry, could you please reply, @wendywangwwt? Closing for now.