# Schedule

Donostia - San Sebastian, 26-28 August 2019

|               | Monday 26th  | Tuesday 27th | Wednesday 28th  |
|---------------|--------------|--------------|-----------------|
| 9:00 - 10:00  | Wang         | Melko        | Melko           |
| 10:00 - 11:30 | Wang         | Kim          | Ringel          |
| 11:30 - 12:00 | Coffee break | Coffee break | Coffee break    |
| 12:00 - 13:30 | Kim          | Wang         | Ringel          |
| 13:30 - 15:00 | Lunch        | Lunch        | Lunch           |
| 15:00 - 16:00 | Wang         | Melko        | Melko           |
| 16:00 - 16:30 | Coffee break | Coffee break | Coffee break    |
| 16:30 - 17:30 | Melko        | Kim          | Ringel          |
| 17:30 - 18:30 | Kim          | Ringel       | Wrap-up session |

# Materials Covered

## Lei Wang

- Intro + minimal deep learning theory (1h)
- Generative models for physicists (1.5h)
- Differentiable programming (1.5h)
- Representation learning (1h)

I will show code snippets and quizzes during the lectures. There are also a few full pedagogical code examples with questions for hands-on work. Examples include computation graphs, automatic differentiation, variational inference, inverse Hamiltonian design, and some other fun topics.

I will distribute applications of deep learning to statistical and quantum many-body physics throughout the lectures.
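To give a flavor of the computation-graph and automatic-differentiation examples mentioned above, here is a minimal sketch of reverse-mode automatic differentiation on a scalar computation graph. The `Var` class and `backward` function are hypothetical names for illustration only, not the actual lecture code:

```python
class Var:
    """A node in the computation graph holding a value and a gradient."""
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        # parents: list of (parent node, local partial derivative)
        self.parents = parents

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])


def backward(output):
    """Propagate gradients from the output back through the graph.

    A plain stack traversal is enough for this small expression;
    general graphs would need a reverse topological ordering.
    """
    output.grad = 1.0
    stack = [output]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += local * node.grad
            stack.append(parent)


# f(x, y) = x * y + x, so df/dx = y + 1 and df/dy = x
x, y = Var(3.0), Var(4.0)
f = x * y + x
backward(f)
print(f.value, x.grad, y.grad)  # 15.0 5.0 3.0
```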

## Eun-Ah Kim

- Topological Phases
- Out of equilibrium phases
- STM data of charge order
- X-ray data of charge order

For each of these topics, the choice of ML algorithm and preprocessing scheme depends on the structure of the available data and on the physics of interest. Hence I will devote some time to making sure students understand the key theoretical and experimental insights that guided our ML strategies, as well as the principles of the ML algorithms themselves.

In terms of algorithms, I will use simple fully connected feedforward neural networks for supervised learning and Gaussian mixture models for unsupervised learning.
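As a rough illustration of the unsupervised side, a two-component Gaussian mixture can be fit with expectation-maximization in a few lines of NumPy. The synthetic 1-D data and all parameter choices below are illustrative, not the actual data or code used in the lectures:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated 1-D clusters as toy "data".
data = np.concatenate([rng.normal(-3.0, 0.5, 200), rng.normal(3.0, 0.5, 200)])

# Initial parameters of the two-component mixture.
mu = np.array([-1.0, 1.0])     # component means
sigma = np.array([1.0, 1.0])   # component standard deviations
pi = np.array([0.5, 0.5])      # mixing weights

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for _ in range(50):
    # E-step: responsibilities r[n, k] of component k for data point n.
    r = pi * gaussian(data[:, None], mu, sigma)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from responsibility-weighted data.
    nk = r.sum(axis=0)
    mu = (r * data[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (data[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(data)

print(mu)  # the means converge toward the two cluster centers
```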

## Roger Melko

- Lattice models for statistical physics, Monte Carlo method
- Supervised and unsupervised learning, generative modeling
- Restricted Boltzmann machines
- Quantum state reconstruction

In these lectures we will focus on tying together the formalism of Markov chain Monte Carlo with stochastic neural networks used for generative modeling. The goal will be a pedagogical development of a strategy to reconstruct the state of a quantum system given projective measurements on individual qubits. Applications to current cold atom and trapped ion experiments will be discussed.
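To hint at how Markov chain Monte Carlo connects to stochastic neural networks, here is a sketch of block Gibbs sampling in a restricted Boltzmann machine with randomly initialized weights. The layer sizes and parameter values are illustrative choices, not material from the lectures:

```python
import numpy as np

rng = np.random.default_rng(42)

# Random RBM parameters: 4 visible and 3 hidden binary units.
n_v, n_h = 4, 3
W = rng.normal(0, 0.1, (n_v, n_h))  # couplings
b_v = np.zeros(n_v)                 # visible biases
b_h = np.zeros(n_h)                 # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One block-Gibbs sweep: sample hidden given visible, then visible given hidden."""
    p_h = sigmoid(v @ W + b_h)
    h = (rng.random(n_h) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b_v)
    return (rng.random(n_v) < p_v).astype(float)

# Run a short Markov chain from a random binary configuration.
v = (rng.random(n_v) < 0.5).astype(float)
samples = []
for _ in range(1000):
    v = gibbs_step(v)
    samples.append(v)

mean_activity = np.mean(samples)
print(mean_activity)  # near 0.5 for these small random couplings
```

The Gibbs sweep plays the same role as a Metropolis update in a lattice Monte Carlo simulation: repeated sampling drives the chain toward the model's equilibrium (Boltzmann) distribution.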

## Zohar Ringel

- Bayesian inference
- Gaussian processes, RKHS regression, the equivalent kernel, learning curves
- New results in ML: mapping over-parameterized fixed-depth deep neural networks to Gaussian processes; the neural tangent kernel
- Practical example: denoising NV-center readouts
- Field-theory description of over-parameterized networks; expansions in the inverse dataset size and the inverse number of parameters
- The information bottleneck principle, internal representations in deep neural networks, layer-wise training, transfer learning
- Applications of DNNs and the information bottleneck: identifying slow degrees of freedom, learning exact holomorphic mappings in AdS/CFT
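
A minimal sketch of Gaussian-process regression with an RBF kernel, along the lines of the topics above. The training data, noise level, and length scale are illustrative choices, not material from the lectures:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential (RBF) kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

# Toy training data: noisy-free samples of sin(x).
x_train = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y_train = np.sin(x_train)
noise = 1e-4  # small jitter for numerical stability

K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
x_test = np.array([0.5])
k_star = rbf(x_test, x_train)

# GP posterior mean and variance at the test point.
alpha = np.linalg.solve(K, y_train)
mean = k_star @ alpha
var = rbf(x_test, x_test) - k_star @ np.linalg.solve(K, k_star.T)
print(mean, var)  # posterior mean close to sin(0.5)
```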