Variational Monte Carlo with Large Patched Transformers

Date

Friday, November 29, 2024
1:30 pm - 2:30 pm

Location

STI A
Speaker

Prof. Stefanie Czischek,
University of Ottawa


Abstract:

Large language models, like transformers, have recently demonstrated impressive capabilities in text and image generation. This success is driven by their ability to capture long-range correlations between elements in a sequence. The same feature makes the transformer a powerful wavefunction ansatz that addresses the challenge of describing correlations in simulations of qubit systems. In this talk, I consider two-dimensional Rydberg atom arrays to demonstrate that transformers reach higher accuracies than conventional recurrent neural networks in variational ground state searches. I further introduce large, patched transformer models, which take a sequence of large atom patches as input, and show that this architecture significantly accelerates the simulations.
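
As a rough illustration of the patching idea described above (a sketch for orientation, not code from the talk), the snippet below groups a two-dimensional array of Rydberg occupation numbers into p x p blocks, so that an autoregressive transformer would treat each patch, rather than each individual atom, as one sequence element. All names and sizes are illustrative assumptions.

import numpy as np

def patch_tokens(occupations, p):
    """Turn an (L, L) array of 0/1 Rydberg occupations into a sequence of
    (L // p)**2 patch tokens, each a flattened p x p block of atoms."""
    L = occupations.shape[0]
    assert occupations.shape == (L, L) and L % p == 0
    blocks = occupations.reshape(L // p, p, L // p, p)  # split both axes into p-sized blocks
    blocks = blocks.transpose(0, 2, 1, 3)               # order indices as (patch row, patch col, p, p)
    return blocks.reshape((L // p) ** 2, p * p)         # one flattened token per patch

# Example: a 16 x 16 atom array with 4 x 4 patches gives 16 patch tokens
# of 16 atoms each, instead of a 256-element sequence of single atoms.
occupations = np.random.randint(0, 2, size=(16, 16))
tokens = patch_tokens(occupations, p=4)
print(tokens.shape)  # (16, 16)

Roughly speaking, shrinking the input from L**2 single-atom elements to (L // p)**2 patch tokens means far fewer autoregressive steps per sampled configuration, which is consistent with the acceleration highlighted in the abstract.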

Timbits, coffee, and tea will be served in STI A before the colloquium.

Upcoming Events

Towards Coherent Control in Patterned Graphene (Departmental)
Friday, November 22, 1:30 pm - 2:30 pm
STI A

The secret life of dark compact objects (Departmental)
Friday, November 22, 1:30 pm - 2:30 pm
STI A