About the PCA in section 3.1 of the paper.
biwanqing opened this issue · comments
Hi, thank you for releasing the code. I have a question I'm looking forward to your answer on:
In my opinion, this PCA code reduces the dimensionality of the features (K×K) and shows that the features within each kernel are redundant, but how are the intra-kernel correlations derived from this?
```python
import numpy as np
import sklearn.decomposition

# step 1: split the 3D kernel F into 2D kernels (assuming F is of size C x H x W)
xs = [F[nChannel, :, :].flatten() for nChannel in range(F.shape[0])]
X = np.array(xs)

# step 2: perform PCA
pca = sklearn.decomposition.PCA(n_components=None)
pca.fit(X)

# step 3: the fraction of the variance of F explained by the first principal component (PC1)
v = pca.explained_variance_ratio_[0]
```
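To make the steps above runnable end to end, here is a minimal self-contained sketch. The kernel `F` here is randomly initialized, and the sizes `C`, `H`, `W` are illustrative assumptions, not values from the paper:

```python
import numpy as np
import sklearn.decomposition

rng = np.random.default_rng(0)

# Hypothetical 3D convolution kernel: C channels, each an H x W 2D kernel.
C, H, W = 64, 3, 3
F = rng.standard_normal((C, H, W))

# Step 1: flatten each 2D kernel into a row vector -> X has shape (C, H*W).
X = np.array([F[c, :, :].flatten() for c in range(C)])

# Step 2: fit PCA across the C flattened kernels.
pca = sklearn.decomposition.PCA(n_components=None)
pca.fit(X)

# Step 3: fraction of the variance explained by the first principal component.
v = pca.explained_variance_ratio_[0]
print(f"PC1 explains {v:.2%} of the variance across {C} 2D kernels")
```

With random (uncorrelated) kernels, `v` stays close to `1 / (H*W)`; a trained network's kernels would give a noticeably larger value if they are redundant.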