Schrödinger Sandbox:

Algorithm Details



Our initial state is always a 2-dimensional Gaussian wavepacket centered at some point $\mathbf{r}_0$, with width $\sigma$ and average momentum $\mathbf{k}$: $${ \psi(\mathbf{r},0) = N \, e^{-(\mathbf{r}-\mathbf{r_0})^2 / 2\sigma^2} \; e^{i\mathbf{k}\cdot\mathbf{r}} \; , \tag{1} }$$ where $N$ is a normalization constant.
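For concreteness, here is how the initial state (1) might be sampled onto a grid. This is a minimal NumPy sketch, not the project's shader code; the grid size, spacing, and packet parameters are made-up examples.

```python
import numpy as np

# Sample the Gaussian wavepacket of equation (1) onto an n x n grid with spacing a.
# All parameter values below are illustrative.
n, a = 256, 0.1
x = np.arange(n) * a
X, Y = np.meshgrid(x, x, indexing="ij")

r0 = (12.8, 12.8)    # packet center
sigma = 1.0          # packet width
k = (5.0, 0.0)       # average momentum

envelope = np.exp(-((X - r0[0])**2 + (Y - r0[1])**2) / (2 * sigma**2))
phase = np.exp(1j * (k[0] * X + k[1] * Y))
psi = envelope * phase

# Normalize so that sum |psi|^2 * a^2 = 1 (the discrete analogue of unit norm).
psi /= np.sqrt(np.sum(np.abs(psi)**2) * a**2)
```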

This state evolves in time according to the Schrödinger equation, $${ i {\partial\psi(\mathbf{r},t)\over \partial t} = H \psi(\mathbf{r},t) = \left( -{\nabla^2 \over 2m} + V(\mathbf{r})\right) {\psi(\mathbf{r},t) \; .} \tag{2} }$$ Here, $H$ is the Hamiltonian, $V$ is the potential energy function, $m$ is the particle mass, and we are using units in which the reduced Planck constant ℏ is equal to 1.

Equation (2) can be solved numerically by discretizing the wavefunction in both space and time: We suppose that $\psi$ lives on a rectangular grid with spacing $a$, so that the Laplacian operator gets replaced by a discrete approximation, $${\begin{aligned} \nabla^2\psi \equiv \left({\partial^2 \over \partial x^2} + {\partial^2 \over \partial y^2}\right)\psi \rightarrow {1 \over a^2} \Bigl( &\psi(\mathbf{r}+a\hat{\mathbf{x}}, t) -2\psi(\mathbf{r}, t) + \psi(\mathbf{r}-a\hat{\mathbf{x}}, t) \; + \\ &\psi(\mathbf{r}+a\hat{\mathbf{y}}, t) -2\psi(\mathbf{r}, t) + \psi(\mathbf{r}-a\hat{\mathbf{y}}, t) \Bigr) \; , \end{aligned}\tag{3} }$$ and we also suppose that time advances in discrete steps of size $\Delta t$, so that the time derivative in (2) becomes $${ {\psi(\mathbf{r}, t + \Delta t) - \psi(\mathbf{r}, t) \over \Delta t } \;. \tag{4} }$$ If we sample the initial wavefunction (1) onto the grid, we can use these equations to iteratively compute  $\psi(\mathbf{r}, \Delta t)$,  $\psi(\mathbf{r}, 2\Delta t)$,  $\psi(\mathbf{r}, 3\Delta t),$ and so on.
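As a concrete (if naive) illustration, here is one forward-Euler time step built from (3) and (4). It is a minimal NumPy sketch rather than the project's shader code; the function names and the implicit choice of $\psi = 0$ outside the grid are our own.

```python
import numpy as np

def laplacian(psi, a):
    """Discrete Laplacian of equation (3), treating psi as zero outside the grid."""
    lap = -4.0 * psi
    lap[1:, :]  += psi[:-1, :]   # neighbor at r - a*x_hat
    lap[:-1, :] += psi[1:, :]    # neighbor at r + a*x_hat
    lap[:, 1:]  += psi[:, :-1]   # neighbor at r - a*y_hat
    lap[:, :-1] += psi[:, 1:]    # neighbor at r + a*y_hat
    return lap / a**2

def euler_step(psi, V, m, a, dt):
    """Advance psi by dt using equation (2) with the forward difference (4)."""
    H_psi = -laplacian(psi, a) / (2.0 * m) + V * psi
    return psi - 1j * dt * H_psi
```

Calling euler_step repeatedly advances the wavefunction by $\Delta t$ at a time; as discussed below, though, this naive iteration turns out to be unstable.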

This procedure works and is accurate so long as $\Delta t$ and $a$ are chosen small enough. (In the limit as they approach zero, the discrete results approach the continuum solution arbitrarily closely.) However, there are refinements that produce more accurate results when $\Delta t$ and $a$ are finite. One such refinement is Visscher's algorithm [1]: If the wavefunction is split into its real and imaginary parts, $\psi = R + iI$, the Schrödinger equation is equivalent to $${ {\partial R \over \partial t} = H I \, , \quad \text{and} \quad {\partial I \over \partial t} = -H R \, . \tag{5} }$$ And if $R$ and $I$ are defined at staggered times, $R$ at times $\{0,\,$ $\Delta t,\,$ $2\Delta t,\,...\,\}$, and $I$ at times $\{\Delta t/2,\,$ $3\Delta t/2,\,...\,\}$, then the discretized version of (5) is $${\begin{aligned} R(t + \Delta t/2) &= R(t - \Delta t/2) + \Delta t \, H\,I(t) \; ,\\ I(t + \Delta t/2) &= I(t - \Delta t/2) - \Delta t \, H\,R(t) \,. \end{aligned}\tag{6} }$$ Why go to all this trouble? Because it can be shown that the system (6) is
     (i) accurate to second order in $\Delta t$ (whereas equation (4) is only accurate to first order),
     (ii) unitary, i.e., probability is conserved at each step, and
     (iii) stable for a wide range of $\Delta t$'s.
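To make the staggered bookkeeping concrete, here is one leapfrog step of (6) as a minimal NumPy sketch. The project's actual implementation lives in its compute shaders; the names below are ours, and the potential is assumed real here.

```python
import numpy as np

def apply_H(f, V, m, a):
    """Apply the Hamiltonian of equation (2) to a real-valued field,
    using the discrete Laplacian (3) with f = 0 outside the grid."""
    lap = -4.0 * f
    lap[1:, :]  += f[:-1, :]
    lap[:-1, :] += f[1:, :]
    lap[:, 1:]  += f[:, :-1]
    lap[:, :-1] += f[:, 1:]
    return -lap / (2.0 * m * a**2) + V * f

def visscher_step(R, I, V, m, a, dt):
    """One step of equation (6): R and I live on staggered time grids."""
    R = R + dt * apply_H(I, V, m, a)   # advance R across I's time point
    I = I - dt * apply_H(R, V, m, a)   # then advance I using the freshly updated R
    return R, I
```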

Regarding point (iii): many numerical algorithms, including forward iteration based on equation (4), are unstable. That is, the errors grow exponentially with the number of time steps until they completely swamp the solution. The video below shows what this can look like in practice.


Figure 1. The dire consequences of computing wave function evolution with an unstable algorithm.


A second refinement to the basic algorithm is the use of a higher-order formula for the discrete second derivative: $${ { \partial^2\psi \over \partial x^2} \rightarrow {1 \over a^2}\left( -{1 \over 12} \psi(\mathbf{r} + 2a\hat{\mathbf{x}}) + {4 \over 3} \psi(\mathbf{r} + a\hat{\mathbf{x}}) - {5 \over 2}\psi(\mathbf{r}) + {4 \over 3} \psi(\mathbf{r} - a\hat{\mathbf{x}}) - {1 \over 12} \psi(\mathbf{r} - 2a\hat{\mathbf{x}}) \right) \; , \tag{7} }$$ and similarly for $\partial^2\psi / \partial y^2$. This is accurate to fourth order in $a$, versus second order for equation (3). (See reference [2].) Thus it lets us create simulations with larger grid spacings (and hence fewer grid points) while maintaining accuracy.
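A sketch of the corresponding two-dimensional Laplacian follows. Since the stencil (7) reaches two grid points in each direction, the array is padded with two layers of zeros (again the implicit boundary condition); the code is illustrative, not the project's shader implementation.

```python
import numpy as np

def laplacian4(psi, a):
    """Fourth-order-accurate Laplacian built from stencil (7) along x and y,
    with psi taken to be zero outside the grid (zero padding)."""
    p = np.pad(psi, 2)                          # two layers of zeros around the grid
    c2, c1, c0 = -1.0 / 12.0, 4.0 / 3.0, -5.0 / 2.0
    d2x = (c2 * p[4:, 2:-2] + c1 * p[3:-1, 2:-2] + c0 * p[2:-2, 2:-2]
           + c1 * p[1:-3, 2:-2] + c2 * p[:-4, 2:-2])
    d2y = (c2 * p[2:-2, 4:] + c1 * p[2:-2, 3:-1] + c0 * p[2:-2, 2:-2]
           + c1 * p[2:-2, 1:-3] + c2 * p[2:-2, :-4])
    return (d2x + d2y) / a**2
```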


Up till now, we have avoided the question of what happens to the wavefunction at the grid boundaries. Evolving the wavefunction with no special treatment at the boundaries is equivalent to placing infinitely high potential barriers there, because the wavefunction is implicitly zero just outside the grid. This results in unwanted reflections, which interfere with the scattering phenomena we're trying to visualize. Figure 2 illustrates the problem.

Figure 2. Unwanted reflections from a grid boundary.


To fight this problem, we can introduce an imaginary-valued potential in the border region of the grid. This potential effectively "absorbs" the wavefunction rather than reflecting it: where $V = -i\Gamma$ with $\Gamma > 0$, the local time-evolution factor $e^{-iVt} = e^{-\Gamma t}$ damps the wavefunction exponentially. The results are illustrated in Figures 3 and 4.
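The width and profile of the absorbing layer are implementation choices. The sketch below uses a quadratic ramp across a border of fixed width, which is one common option; the project's actual profile and strength may differ.

```python
import numpy as np

def absorbing_potential(n, border=24, strength=1.0):
    """Purely imaginary potential -i*Gamma on an n x n grid: zero in the interior,
    ramping up quadratically over `border` grid points near each edge.
    The ramp shape and strength are illustrative, not the project's exact values."""
    idx = np.arange(n)
    dist = np.minimum(idx, n - 1 - idx)                  # distance to the nearest edge
    ramp = np.clip((border - dist) / border, 0.0, 1.0)   # 1 at the edge, 0 in the interior
    gamma = strength * np.maximum.outer(ramp, ramp)**2   # larger of the x and y ramps
    return -1j * gamma
```

Adding this term to the real potential makes the wavefunction decay like $e^{-\Gamma t}$ inside the border, so very little amplitude survives to reflect back into the interior.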

Figure 3. An imaginary-valued potential in the gray border region damps the wavefunction there.


Figure 4. When the gray border region is not rendered, the wavefunction appears to just travel off the grid edges without reflection, as desired.


The code implementing all of the above ingredients is available in the project repository. Most of the computational work is done in the compute shaders. (The file names are computeShader.wgsl for WebGPU-enabled devices and computeShader.glsl for WebGL devices.)




References

[1] Visscher, P. B. (1991). A Fast Explicit Algorithm for the Time-Dependent Schrödinger Equation. Computers in Physics, 5, 596–598.

[2] Finite difference coefficient (2023). Wikipedia. Available at: https://en.wikipedia.org/wiki/Finite_difference_coefficient