7.1. msepy#

Note

msepy stands for mimetic spectral elements in Python. It is an implementation of the mimetic spectral element methods in pure Python. In other words, even the most computationally intensive part of the simulation, the linear system solving, is done within Python.

The advantages of this are clear. No external packages (or APIs to other kernels) are needed to use the msepy implementation; only common Python packages such as scipy, numpy and matplotlib are required. Users can quickly set up phyem on their machines, no matter whether they run Windows, Linux or Mac.

The most obvious disadvantage is that, since Python is relatively slow, this implementation is not suitable for large problems. What counts as large is hard to say in general and also depends on the machine. Normally, msepy is very handy for 1- or 2-dimensional problems and small 3-dimensional problems. Users can explore the limits by trial.

Caution

msepy is not parallelizable; do not execute it with, for example, mpiexec.

To invoke the msepy implementation, use the indicator 'msepy' as the first argument of apply of the fem module, i.e.,

>>> implementation, objects = ph.fem.apply('msepy', locals())

We get the implementation body, implementation, and the dictionary of all implemented counterparts, objects.

7.1.1. Implemented counterparts#

To pick up the implemented counterparts of abstract instances, we can just use their variable names, for example,

>>> manifold = objects['manifold']
>>> mesh = objects['mesh']

If some instances have no explicit variables, you can possibly pick them up using their symbolic representations. For example, the boundary manifolds used for defining the boundary conditions have no explicit variables. We can pick them up using their symbolic representations through the dictionary implementation.base,

>>> Gamma_alpha = implementation.base['manifolds'][r"\Gamma_{\alpha}"]
>>> Gamma_beta = implementation.base['manifolds'][r"\Gamma_{\beta}"]

The instances that can be accessed through implementation.base are of just four types: manifolds, meshes, spaces and forms. They can be accessed with the keys 'manifolds', 'meshes', 'spaces' and 'forms', respectively. For example,

>>> mesh is implementation.base['meshes'][r'\mathfrak{M}']
True

We see that the mesh accessed from implementation.base['meshes'] is the same mesh we obtained from objects.

7.1.2. Configuration#

❇️ Manifold & Mesh

There are two ways to configure the abstract manifold and mesh to be exact ones:

1) Use a predefined setting:

msepy has predefined manifolds (domains). To set a manifold to a predefined one, we can call config of the implementation body, implementation, for example,

>>> implementation.config(manifold)(
...     'crazy', c=0., bounds=([0, 1], [0, 1]), periodic=False,
... )

where the first argument, 'crazy', is the indicator and the remaining arguments are parameters of the predefined manifold.

Predefined msepy manifolds

indicator             description
'crazy'               See Crazy domain and mesh.
'crazy_multi'         See Multi-crazy domain and mesh.
'backward_step'       See Backward step.
'cylinder_channel'    See Cylinder channel.

All predefined msepy manifolds are available at Gallery, see msepy domains and meshes.

Once we have specified the manifold, we should also configure its boundary sections, for example,

>>> implementation.config(Gamma_alpha)(
...     manifold, {0: [1, 1, 0, 0]}   # the natural boundary.
... )

This specifies the boundary section Gamma_alpha, \(\Gamma_{\alpha}\), to be two faces (boundary units) of the msepy manifold manifold. These faces (boundary units) are specified by the dictionary {0: [1, 1, 0, 0]}. For the exact meaning of this dictionary, we refer to the introduction page of the 'crazy' domain, Crazy domain and mesh. In short, the manifold has one region (no. 0) and {0: [1, 1, 0, 0]} indicates the topological faces (boundary units) facing the \(x-\) and \(y+\) directions of the region.

Important

Indicators of boundary units of a predefined msepy manifold are explained at its corresponding gallery page. Find it at msepy domains and meshes.

Note

Since Gamma_alpha and Gamma_beta form a partition of the boundary of manifold (see Boundary conditions), once Gamma_alpha is configured, Gamma_beta is determined; there is no need to configure it.

Once the manifold and its boundary sections are configured, we can configure the mesh on it by, for example,

>>> implementation.config(mesh)([12, 12])

This will generate a mesh of \(12 \times 12\) elements in the manifold. Both manifold and mesh can be visualized by calling their visualize property. See examples at msepy domains and meshes.
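For instance, a minimal sketch of the visualization calls (assuming the visualize property is directly callable, as in the gallery examples; the exact signature may differ):

>>> manifold.visualize()   # plot the configured manifold
>>> mesh.visualize()       # plot the 12 x 12 mesh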

2) User-customized meshes:

Todo

Users should also be able to input their own meshes (which then determine the manifolds) generated with common mesh generators, for instance Gmsh. To this end, an interface from a generic mesh format, for example the VTK format, to the msepy mesh and manifold modules should be implemented.

So far, this interface is missing. If your domain is not covered by the predefined manifolds, please send us a message (see Contact) with your domain details. We will implement it as a predefined manifold as soon as possible.


❇️ Time sequence

We also need to specify the abstract time sequence to be an exact one. Since the time sequence instance, ts, is generic and independent of the particular implementation, we do not need to pick it up from the implementation.

>>> ts.specify('constant', [0, 1, 100], 2)

This command specifies the time sequence ts to be one of constant time intervals (indicated by 'constant'), i.e.,

\[\Delta t = t^{k} - t^{k-1} = C\quad \forall k\in\left\lbrace 1, 2, 3,\cdots\right\rbrace.\]

The time sequence starts at \(t=0\), ends at \(t=1\), and places 100 equal smallest intervals in between; each time step covers two smallest intervals. Thus, for example, the time step from \(t^{k-1}\) to \(t^{k}\) is split into two equal smallest intervals, one from \(t^{k-1}\) to \(t^{k-\frac{1}{2}}\) and one from \(t^{k-\frac{1}{2}}\) to \(t^{k}\). In other words, the valid time instances of this time sequence are

\[t^0,\ t^{\frac{1}{2}},\ t^1,\ t^{1+\frac{1}{2}},\ \cdots,\ t^{49+\frac{1}{2}},\ t^{50}.\]

And in total, we have 50 time steps, \(k\in\left\lbrace1,2,3,\cdots,50\right\rbrace\).
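The numbers above follow from simple arithmetic; the following plain-Python sketch (independent of phyem) makes them explicit:

>>> t_start, t_end, n_smallest = 0.0, 1.0, 100
>>> dt_smallest = (t_end - t_start) / n_smallest   # smallest interval: 0.01
>>> dt_step = 2 * dt_smallest                      # each time step covers two smallest intervals
>>> n_steps = n_smallest // 2                      # total number of time steps
>>> dt_step, n_steps
(0.02, 50)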


❇️ Initial condition

Suppose the linear port-Hamiltonian problem we aim to solve has the analytic solution

\[\begin{split}\left\lbrace \begin{aligned} & \tilde{\alpha}_{\mathrm{analytic}}(x, y, t) = - g(x,y)\dfrac{\mathrm{d}f(t)}{\mathrm{d}t} \\ & \tilde{\beta}_{\mathrm{analytic}}(x, y, t) = \begin{bmatrix} \dfrac{\partial g}{\partial x}(x, y) f(t) \\ \dfrac{\partial g}{\partial y}(x, y) f(t) \end{bmatrix} \end{aligned} \right.,\end{split}\]

where

\[g(x, y) = \cos(2\pi x)\cos(2\pi y),\]

and

\[f(t) = 2\sin(2\sqrt{2}\pi t) + 3\cos(2\sqrt{2}\pi t).\]
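As a side illustration (independent of phyem), these definitions can be written out with sympy, which makes explicit how \(\tilde{\alpha}_{\mathrm{analytic}}\) and \(\tilde{\beta}_{\mathrm{analytic}}\) are built from \(g\) and \(f\); the names alpha and beta below are just local symbols:

>>> import sympy as sp
>>> x, y, t = sp.symbols('x y t')
>>> g = sp.cos(2 * sp.pi * x) * sp.cos(2 * sp.pi * y)
>>> f = 2 * sp.sin(2 * sp.sqrt(2) * sp.pi * t) + 3 * sp.cos(2 * sp.sqrt(2) * sp.pi * t)
>>> alpha = - g * sp.diff(f, t)                               # the scalar field
>>> beta = sp.Matrix([sp.diff(g, x) * f, sp.diff(g, y) * f])  # the vector field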

This analytic solution is a so-called 2-dimensional eigen solution, and it is pre-coded in phyem. We can call it by

>>> eigen2 = ph.samples.Eigen2()

The analytic solutions, \(\tilde{\alpha}_{\mathrm{analytic}}\) and \(\tilde{\beta}_{\mathrm{analytic}}\), are the attributes scalar and vector of eigen2. We can set the continuous forms of a and b to \(\tilde{\alpha}_{\mathrm{analytic}}\) and \(\tilde{\beta}_{\mathrm{analytic}}\) by

>>> a = objects['a']
>>> b = objects['b']
>>> a.cf = eigen2.scalar
>>> b.cf = eigen2.vector

where the first two lines pick up the implemented counterparts of the forms a and b, and the last two lines set their continuous forms, cf, to the analytic solutions. If a form has a continuous form, we can measure the error between its numerical solution and its continuous form (analytic solution), which indicates the accuracy of the simulation. For example,

>>> a[0].reduce()                      # reduce the analytic solution at t=0 to discrete space
>>> b[0].reduce()                      # reduce the analytic solution at t=0 to discrete space
>>> a_L2_error_t0 = a[0].error()       # compute the L2 error at t=0
>>> b_L2_error_t0 = b[0].error()       # compute the L2 error at t=0
>>> a_L2_error_t0
0.0056...
>>> b_L2_error_t0
0.0060...

Note

The brackets, [], of a form return a static copy of the form at a particular time instant. For example, a[0] gives the static copy of a at \(t=0\).

So, the first two lines of the above code discretize the analytic solution at \(t=0\) to the discrete forms a and b, which configures the initial condition of the simulation. The next two lines then compute the error between the discrete initial condition and the analytic initial condition. The error is very low, implying that the finite-dimensional spaces on this mesh (of \(12\times12\) elements) are sufficiently fine.

Note that a generic simulation usually does not possess an analytical solution for all time, but only an initial condition. In that case, we can initialize the discrete forms with the initial condition in the same way, as sketched below.
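A minimal sketch of such an initialization (the names initial_scalar and initial_vector are hypothetical stand-ins for user-provided initial-condition fields):

>>> a.cf = initial_scalar   # hypothetical scalar field describing the initial condition of a
>>> b.cf = initial_vector   # hypothetical vector field describing the initial condition of b
>>> a[0].reduce()           # discretize the initial condition of a at t=0
>>> b[0].reduce()           # discretize the initial condition of b at t=0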


❇️ Boundary conditions

To impose the specified boundary conditions, we need to work with the counterpart of the linear system. Thus, we first pick up the implemented linear system by its local variable name, i.e.,

>>> ls = objects['ls']

And to let it take effect, we need to call its apply method,

>>> ls = ls.apply()

Then we can configure the boundary conditions of this particular linear system by using config of its property bc,

>>> ls.bc.config(Gamma_alpha)(eigen2.scalar)   # natural boundary condition
>>> ls.bc.config(Gamma_beta)(b.cf)             # essential boundary condition

where the first command sets the natural boundary condition on \(\Gamma_\alpha\) to the analytical solution \(\tilde{\alpha}_{\mathrm{analytic}}\), and the second sets the essential boundary condition on \(\Gamma_\beta\) to the analytical solution \(\tilde{\beta}_{\mathrm{analytic}}\) (recall that we have set b.cf to eigen2.vector).

7.1.3. Solving#

We first define two lists to store the errors,

>>> a_errors = [a_L2_error_t0, ]
>>> b_errors = [b_L2_error_t0, ]

We then go through all time steps by iterating over all \(k\in\left\lbrace1,2,3,\cdots,50\right\rbrace\),

>>> for k in range(1, 51):
...     static_ls = ls(k=k)                      # get the static linear system for k=...
...     assembled_ls = static_ls.assemble()      # assemble the static linear system into a global system
...     x, message, info = assembled_ls.solve()  # solve the global system
...     static_ls.x.update(x)                    # use the solution to update the discrete forms
...     a_L2_error = a[None].error()             # compute the error of the discrete forms at the most recent time
...     b_L2_error = b[None].error()             # compute the error of the discrete forms at the most recent time
...     a_errors.append(a_L2_error)              # append the error to the list
...     b_errors.append(b_L2_error)              # append the error to the list

Note that a[None] automatically gives a static copy of a at its most recent time. For example, when k=10, i.e., \(t=t^{10}=0.2\) (recall that the overall computation time is 1 and it is divided into 50 time steps), a[None] is equivalent to a[0.2]. The method error then computes its \(L^2\)-error. To check this, do

>>> a[0.2].error() == a_errors[10]
True
>>> len(a_errors)
51
>>> len(b_errors)
51

7.1.4. Post-processing#

You can save the solutions to VTK files by calling

>>> a[1].visualize.vtk(saveto='a1')
>>> b[1].visualize.vtk(saveto='b1')

The first line saves the static copy of a at \(t=1\) to ./a1.vtu and the second line saves the static copy of b at \(t=1\) to ./b1.vtu.

You can also save them into one file by

>>> a[1].visualize.vtk(b[1], saveto='a1_b1')

Static copies of a and b at \(t=1\) are saved to ./a1_b1.vtu.

Visualization tools can then be used to visualize and analyze the solutions; we recommend, for example, the open-source visualization software ParaView.

You can also try

>>> a[1].visualize.matplot()

This will give a 2-dimensional plot of a[1] using the matplotlib package. Since matplotlib is not very handy for 3-dimensional plots, the matplot method is only implemented for 2-dimensional forms; we again recommend interactive VTK visualization tools for 3-dimensional visualization.
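Besides field plots, the error histories collected during the solving loop can be plotted directly with matplotlib; a minimal sketch (not a phyem feature; it uses the a_errors and b_errors lists built above, which hold the errors at \(t^0, t^1, \cdots, t^{50}\)):

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> times = np.linspace(0, 1, 51)                                      # t^0, t^1, ..., t^50
>>> _ = plt.semilogy(times, a_errors, label=r'$L^2$ error of $a$')
>>> _ = plt.semilogy(times, b_errors, label=r'$L^2$ error of $b$')
>>> _ = plt.xlabel('t')
>>> _ = plt.legend()
>>> plt.show()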


↩️ Back to Implementations.