Deep Learning & AdS/CFT

Implementation of the DL method from Hashimoto et al. (2018) [1]. The descriptions below summarize that paper; the figures are generated using the code in this repo.

Deep Neural Network Representation of the Scalar Field in AdS Spacetime

For a scalar field theory in a (d+1)-dimensional curved spacetime, the action is written as

S = \int d^{d+1}x \, \sqrt{-g} \left[ -\frac{1}{2} \left(\partial \varphi\right)^2 - \frac{1}{2} m^2 \varphi^2 - V(\varphi) \right]\,.

Suppose the field configuration depends only on η, which is the holographic direction. Then, the generic metric is given by

ds^2 = -f(\eta)\, dt^2 + d\eta^2 + g(\eta) \sum_{i=1}^{d-1} \left(dx^i\right)^2\,,

with the asymptotic AdS boundary condition f(η), g(η) ≃ e^{2η/L} (η → ∞) with the AdS radius L, and another boundary condition at the black hole horizon, η = 0.

The classical equation of motion for the scalar field is

\partial_\eta \pi + h(\eta)\, \pi - m^2 \varphi - \frac{\delta V(\varphi)}{\delta \varphi} = 0\,,

where π ≡ ∂_η φ and h(η) ≡ ∂_η log √(f(η) g(η)^{d-1}).
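For example, for the pure AdS metric f(η) = g(η) = e^{2η/L}, this gives the constant

h(\eta) = \partial_\eta \log \sqrt{e^{2\eta/L} \left(e^{2\eta/L}\right)^{d-1}} = \frac{d}{L}\,,

so deviations of h(η) from d/L encode the black hole geometry.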

To represent this equation of motion as a deep neural network, it can be discretized in the radial direction as follows:

\varphi(\eta + \Delta\eta) = \varphi(\eta) + \Delta\eta\, \pi(\eta)\,,

\pi(\eta + \Delta\eta) = \pi(\eta) - \Delta\eta \left( h(\eta)\, \pi(\eta) - m^2 \varphi(\eta) - \frac{\delta V(\varphi(\eta))}{\delta \varphi(\eta)} \right)\,.

The input vector of the neural network is (φ(η), π(η)) given near the asymptotic AdS boundary, and it propagates through the network up to the black hole horizon at η = 0. Each layer is a fully connected layer with 2 input features and 2 output features. The weight matrix of the n-th layer, corresponding to the discretized equation of motion, is

W^{(n)} = \left[ \begin{matrix}
1 & \Delta \eta \\
\Delta \eta m^2 & 1 - \Delta \eta h(\eta^{(n)})
\end{matrix} \right]
,

and the activation function for each layer is

\begin{align*}
\varphi(x_1) &= x_1\,,\\
\varphi(x_2) &= x_2 + \Delta \eta \frac{\delta V(x_1)}{\delta x_1}\,.
\end{align*}

The output layer has 2 input features and 1 output feature. The exact form of the output layer will be explained in the next section.
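To make this structure concrete, here is a minimal sketch of a single layer in PyTorch, assuming the quartic potential V(φ) = λφ⁴/4 for the activation (the potential, parameter values, and function names are illustrative, not necessarily those used in this repo):

```python
import torch

def layer(phi, pi, h_n, d_eta=0.1, m2=-1.0, lam=1.0):
    """One network layer = one discretized radial step of the EOM.

    Linear part: (phi, pi) -> W^(n) @ (phi, pi), with
        W^(n) = [[1, d_eta], [d_eta * m2, 1 - d_eta * h_n]].
    Activation: identity on the first feature; the second feature
    receives + d_eta * dV(x1)/dx1, here with V(phi) = lam * phi^4 / 4.
    """
    x1 = phi + d_eta * pi
    x2 = d_eta * m2 * phi + (1.0 - d_eta * h_n) * pi
    return x1, x2 + d_eta * lam * x1 ** 3

# example: push a batch of boundary values (phi, pi) through one layer
phi, pi = torch.randn(8, 1), torch.randn(8, 1)
phi, pi = layer(phi, pi, h_n=3.0)
```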

Modified Neural Network Structure (Added on 2021.05.12.)

The aforementioned neural network with 2x2 linear layers could not reduce the loss below around 980.

nn2.py is based on the same neural network representation, but it consists of 1x1 linear layers, and the matrix multiplication is done manually.

Also, a pi-only option is added to the output layer.

Now, our modified output layer (with the pi-only option) is

F = \pi(\eta_\text{fin})\,,

i.e., only the π component at the horizon cutoff is kept.

This neural network could decrease the loss to around 3.2, but the learned metric is not accurate enough.
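For illustration, such a layer might look like the following PyTorch sketch, where the only trainable parameter per layer is the metric value h(η^(n)) held in a 1x1 linear layer (the class and variable names are hypothetical, not those of nn2.py):

```python
import torch
import torch.nn as nn

class RadialLayer(nn.Module):
    """One radial step; the trainable weight is the 1x1 layer holding h(eta^(n))."""

    def __init__(self, d_eta=0.1, m2=-1.0, lam=1.0):
        super().__init__()
        self.h = nn.Linear(1, 1, bias=False)   # trainable metric value h(eta^(n))
        self.d_eta, self.m2, self.lam = d_eta, m2, lam

    def forward(self, phi, pi):
        # manual 2x2 matrix multiplication W^(n) @ (phi, pi)
        x1 = phi + self.d_eta * pi
        x2 = self.d_eta * self.m2 * phi + pi - self.d_eta * self.h(pi)
        # activation: add d_eta * dV(x1)/dx1 to the second feature
        return x1, x2 + self.d_eta * self.lam * x1 ** 3
```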

Application to the AdS Schwarzschild Black Hole

The neural network is tested by checking whether training can reproduce the AdS Schwarzschild metric.

The boundary condition at the black hole horizon is given as

\left[ \frac{2}{\eta}\, \pi - m^2 \varphi - \frac{\delta V(\varphi)}{\delta \varphi} \right]_{\eta = \eta_\text{fin}} = 0\,,

where η_fin is the horizon cutoff, which is set to a small but finite value for the neural network.
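This condition follows from regularity at the horizon: since h(η) ≈ 1/η near η = 0, a regular solution of the equation of motion behaves as

\pi(\eta) \simeq \frac{\eta}{2} \left( m^2 \varphi + \frac{\delta V(\varphi)}{\delta \varphi} \right) \qquad (\eta \to 0)\,.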

Data points for training are generated using the aforementioned neural network with the exact AdS Schwarzschild metric,

h(\eta) = \frac{d}{L} \coth\left(\frac{d\,\eta}{L}\right)\,;

let us call this the data generator. The output layer of the data generator is defined as

F = \left[ \frac{2}{\eta}\, \pi - m^2 \varphi - \frac{\delta V(\varphi)}{\delta \varphi} \right]_{\eta = \eta_\text{fin}}\,.

The η direction is discretized into 10 layers with fixed Δη and η_fin, and m² and V(φ) are fixed for simplicity.

Data points (φ, π) are randomly generated. They are fed to the data generator and labeled as 'Positive' if |F| < ε, and 'Negative' if |F| > ε, where ε is a small threshold. In total, 2000 data points are generated, 1000 for each class. A plot of the generated data points is shown in the figure below.

[figure]
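A minimal sketch of this labeling procedure, assuming a PyTorch data-generator module; the sampling range and the threshold ε are illustrative, not the repo's actual settings:

```python
import torch

def make_dataset(generator, n_per_class=1000, eps=0.1):
    """Sample boundary data (phi, pi) and label them by the horizon condition.

    `generator` maps (phi, pi) to the output F of the network built with
    the exact AdS-Schwarzschild h(eta).
    """
    pos, neg = [], []
    while len(pos) < n_per_class or len(neg) < n_per_class:
        phi = torch.empty(1, 1).uniform_(-1.5, 1.5)   # illustrative range
        pi = torch.empty(1, 1).uniform_(-1.5, 1.5)
        F = generator(phi, pi)
        if F.abs().item() < eps and len(pos) < n_per_class:
            pos.append((phi, pi))   # 'Positive': horizon condition satisfied
        elif F.abs().item() >= eps and len(neg) < n_per_class:
            neg.append((phi, pi))   # 'Negative': horizon condition violated
    return pos, neg
```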

The loss function for the model is given as

E = \frac{1}{N_\text{data}} \sum_\text{data} \left( F - \bar{F} \right)^2 + c_\text{reg} \sum_{n=2}^{N-1} \left[ h(\eta^{(n+1)}) - 2\, h(\eta^{(n)}) + h(\eta^{(n-1)}) \right]^2\,,

where N is the number of layers and \bar{F} is the label. The regularization term is modified from the original paper so as to suppress rapid changes of the derivative of the metric: the original regularizer tends to make the metric constant rather than smooth, which does not seem to follow the purpose explained in the paper.
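For illustration, the second-difference regularizer could be implemented as follows (the tensor name h and the coefficient c_reg are assumptions of this sketch):

```python
import torch

def reg_loss(h, c_reg=0.01):
    """Penalize the discrete second derivative of h(eta) across the layers,
    suppressing rapid changes of the metric's derivative rather than its slope."""
    second_diff = h[2:] - 2.0 * h[1:-1] + h[:-2]
    return c_reg * (second_diff ** 2).sum()
```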

The model was trained using the Adam optimizer for 10000 epochs.
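Schematically, the training loop might look like the following sketch (the data-fit term is a dummy stand-in, and reg_loss is from the sketch above):

```python
import torch

h = torch.nn.Parameter(torch.ones(10))      # stand-in trainable metric values
optimizer = torch.optim.Adam([h], lr=1e-3)  # learning rate is illustrative

for epoch in range(10000):
    optimizer.zero_grad()
    data_fit = (h.mean() - 3.0) ** 2        # dummy stand-in for the data term
    loss = data_fit + reg_loss(h)
    loss.backward()
    optimizer.step()
```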

Latest training loss

[figure]

Metric before training

[figure]

Metric after training

[figure]

The loss decreased to around 3.2, which is large progress. However, the learned metric is still not accurate enough.

References

[1] K. Hashimoto, S. Sugishita, A. Tanaka, and A. Tomiya, Deep Learning and the AdS/CFT Correspondence, Phys. Rev. D 98 (2018) 046019.
