MPh¶
Pythonic scripting interface for Comsol Multiphysics
Comsol is a commercial software application that is widely used in science and industry for research and development. It excels at modeling almost any (multi-)physics problem by solving the governing set of partial differential equations via the finite-element method. It comes with a modern graphical user interface to set up simulation models and can be scripted from Matlab or its native Java API.
MPh brings the dearly missing power of Python to the world of Comsol. It leverages the Java bridge provided by JPype to access the Comsol API and wraps it in a layer of pythonic ease-of-use. The Python wrapper covers common scripting tasks, such as loading a model from a file, modifying parameters, importing data, running the simulation, evaluating the results, and exporting them.
Comsol models are marked by their .mph
file extension, which stands
for multi-physics. Hence the name of this library. It is open-source
and in no way affiliated with Comsol Inc., the company that develops
and sells the simulation software.
Installation¶
MPh is available on PyPI and can be readily installed via
pip install MPh
Run pip uninstall MPh
in order to remove the package from your system.
MPh requires JPype for the bridge from Python to Comsol’s Java API and NumPy for returning (fast) numerical arrays. pip makes sure these two Python dependencies are installed and adds them if missing.
Comsol, obviously, you need to license and install yourself. Versions from Comsol 5.1 onward are expected to work. A separate Java run-time environment is not required as Comsol ships with one already built in.
On Linux and macOS, Comsol is expected to be found in its respective default location. On Windows, any custom install location is supported, as the installer stores that information in the central registry.
Tutorial¶
To follow along with this tutorial in an interactive Python session,
if you wish to do so, make sure you have downloaded the demonstration
model capacitor.mph
from MPh’s source-code repository. Save
it in the same folder from which you run Python.
It is a model of a non-ideal, inhomogeneous, parallel-plate capacitor, in that its electrodes are of finite extent, the edges are rounded to avoid excessive electric-field strengths, and two media of different dielectric permittivity fill the separate halves of the electrode gap. Running the model only requires a license for the core Comsol platform, but not for any add-on module beyond that.

Starting Comsol¶
In the beginning was the client. And the client was with Comsol. And the client was Comsol. So let there be a Comsol client.
>>> import mph
>>> client = mph.start(cores=1)
The start()
function returns a client object, i.e.
an instance of the Client
class. It takes roughly
ten seconds for the client to spin up.
In this example, the Comsol back-end is instructed to use but one
processor core. If the optional parameter is omitted, it will use all
cores available on the machine. Restricting this resource is useful
when other simulations are running in parallel. Note, however, that
within the same Java and therefore Python session, only one Comsol
client can run at a time. So the Client
class cannot be instantiated
more than once. If you wish to work around this limitation imposed by
Comsol, and realize the full parallelization potential of your
simulation hardware, you will need to run multiple Python
processes, one for each client.
Managing models¶
Now that we have the client up and running, we can tell it to load a model file.
>>> model = client.load('capacitor.mph')
It returns a model object, i.e. an instance of the
Model
class. We will learn what to do with it
further down. For now, it was simply loaded into memory. We can
list the names of all models the client currently manages.
>>> client.names()
['capacitor']
If we were to load more models, that list would be longer. Note that the above simply displays the names of the models. The actual model objects can be recalled as follows:
>>> client.models()
[Model('capacitor')]
We will generally not need to bother with these lists, as we would
rather hold on to the model
reference we received from the client.
But to free up memory, we could remove a specific model.
>>> client.remove(model)
Or we could remove all models at once — restart from a clean slate.
>>> client.clear()
>>> client.names()
[]
Inspecting models¶
Let’s have a look at the parameters defined in the model:
>>> model.parameters()
{'U': '1[V]', 'd': '2[mm]', 'l': '10[mm]', 'w': '2[mm]'}
With a little more typing, we can include the parameter descriptions:
>>> for (name, value) in model.parameters().items():
...     description = model.description(name)
...     print(f'{description:20} {name} = {value}')
...
applied voltage U = 1[V]
electrode spacing d = 2[mm]
plate length l = 10[mm]
plate width w = 2[mm]
Two custom materials are defined:
>>> model.materials()
['medium 1', 'medium 2']
They will be used by these physics interfaces:
>>> model.physics()
['electrostatic', 'electric currents']
To solve the model, we will run these studies:
>>> model.studies()
['static', 'relaxation', 'sweep']
Notice something? All features are referred to by their names, also known as labels, such as medium 1. But not by their tags, such as mat1, which litter not just the Comsol programming interface, but, depending on display settings, its graphical user interface as well.
Tags are an implementation detail. An unnecessary annoyance to anyone who has ever scripted a Comsol model from either Matlab or Java. Unnecessary because names/labels are equally enforced to be unique, so tags are not needed for disambiguation. And annoying because we cannot freely change a tag. Say, we remove a feature, but then realize we need it after all, and thus recreate it. It may now have a different tag. And any code that references it has to be adapted.
This is Python though. We hide implementation details as much as we can. Abstract them out. So refer to things in the model tree by what you name them in the model tree. If you remove a feature and then put it back in, just give it the same name, and nothing has changed. You may also set up different models to be automated by the same script. No problem, as long as your naming scheme is consistent throughout.
Modifying parameters¶
As we have learned from the list above, the model defines a parameter
named d
that denotes the electrode spacing. If we know a parameter’s
name, we can access its value directly.
>>> model.parameter('d')
'2[mm]'
If we pass in not just the name, but also a value, that same method modifies it.
>>> model.parameter('d', '1[mm]')
>>> model.parameter('d')
'1[mm]'
This particular model’s only geometry sequence
>>> model.geometries()
['geometry']
is set up to depend on that very value. So it will effectively change the next time it is rebuilt. This will happen automatically once we solve the model. But we may also trigger the geometry rebuild right away.
>>> model.build()
Running simulations¶
To solve the model, we need to create a mesh. This would also be taken care of automatically, but let’s make sure this critical step passes without a hitch.
>>> model.mesh()
Now run the first study, the one set up to compute the electrostatic solution, i.e. the instantaneous and purely capacitive response to the applied voltage, before leakage currents have any time to set in.
>>> model.solve('static')
This modest simulation should not take longer than a few seconds. While we are at it, we may as well solve the remaining two studies, one time-dependent, the other a parameter sweep.
>>> model.solve('relaxation')
>>> model.solve('sweep')
They take a little longer, but not much. We could have solved all three studies at once, or rather, all of the studies defined in the model.
>>> model.solve()
Evaluating results¶
Let’s see what we found out and evaluate the electrostatic capacitance, i.e. at zero time or infinite frequency.
>>> model.evaluate('2*es.intWe/U^2', 'pF')
array(1.31948342)
All results are returned as NumPy arrays. Though “global” evaluations such as this one could be readily cast to a regular Python float.
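For instance, wrapping the call in Python’s built-in float() yields a plain number; a minimal sketch using the same expression as above:
>>> C_static = float(model.evaluate('2*es.intWe/U^2', 'pF'))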
We might also ask where the electric field is strongest and have
evaluate()
perform a “local” evaluation.
>>> (x, y, E) = model.evaluate(['x', 'y', 'es.normE'])
>>> E.max()
1480.2743893783063
>>> imax = E.argmax()
>>> x[imax], y[imax]
(-0.000503768636204733, -0.004088126064370979)
Note how this time we did not specify any units. When left out, values are returned in default units. Here specifically, the field strength in V/m and its coordinates in meters.
We also did not specify the dataset, even though the three studies have separate solutions and datasets associated with them. When not named specifically, the default dataset is used. That is generally the dataset of the study defined first, here “static”. Its dataset is, somewhat inconsistently, named “electrostatic”.
>>> model.datasets()
['electrostatic', 'time-dependent', 'parametric sweep', 'sweep//solution']
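If we want to be explicit, we can pass the dataset name (and a unit) ourselves. A sketch, assuming the same positional order of arguments (expression, unit, dataset) used further below:
>>> E = model.evaluate('es.normE', 'V/m', 'electrostatic')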
Now let’s look at the time dependence. The two media in this model have a small, but finite conductivity, leading to leakage currents in the long run. As the two conductivities also differ in value, charges will accumulate at the interface between the media. This interface charge leads to a gradual relaxation of the electric field over time, and thus to a change of the capacitance as well. We can tell that from its value at the first and last time step.
>>> C = '2*ec.intWe/U^2'
>>> model.evaluate(C, 'pF', 'time-dependent', 'first')
array(1.31948342)
>>> model.evaluate(C, 'pF', 'time-dependent', 'last')
array(1.48410283)
The 'first' and 'last' time steps defined in that study are 0 and 1 second, respectively.
>>> (indices, values) = model.inner('time-dependent')
>>> values[0]
0.0
>>> values[-1]
1.0
Obviously, the capacitance also varies if we change the distance between the electrodes. In the model, a parameter sweep was used to study that. These “outer” solutions, just like the time-dependent “inner” solutions, are referenced by indices, i.e. integer numbers, each of which corresponds to a particular parameter value.
>>> (indices, values) = model.outer('parametric sweep')
>>> indices
array([1, 2, 3], dtype=int32)
>>> values
array([1., 2., 3.])
>>> model.evaluate(C, 'pF', 'parametric sweep', 'first', 1)
array(1.31948342)
>>> model.evaluate(C, 'pF', 'parametric sweep', 'first', 2)
array(0.73678535)
>>> model.evaluate(C, 'pF', 'parametric sweep', 'first', 3)
array(0.52865775)
Then again, with a scripting interface such as this one, we may as well run the time-dependent study a number of times and change the parameter value from one run to the next. General parameter sweeps can get quite complicated in terms of how they map to indices as soon as combinations of parameters are allowed. Support for this may therefore be limited.
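A minimal sketch of such a hand-rolled sweep, reusing only methods shown above (the distance values are arbitrary):
>>> capacitance = {}
>>> for d in (1, 2, 3):
...     model.parameter('d', f'{d}[mm]')
...     model.solve('relaxation')
...     capacitance[d] = float(model.evaluate(C, 'pF', 'time-dependent', 'last'))
...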
Exporting results¶
Two exports are defined in the demonstration model:
>>> model.exports()
['data', 'image']
The first exports the solution of the electrostatic field as text data. The second renders an image of the plot featured in the screen-shot at the top of the page.
We can trigger all exports at once by calling model.export(). Or we can be more selective and just export one: model.export('image').
The exported files will end up in the same folder as the model file itself and have the names that were assigned in the model’s export nodes, unless we supply custom file names or paths as the second argument.
>>> model.export('image', 'static field.png')
The idea here is to first set up sensible exports in the GUI, such as images that illustrate the simulation results, and then trigger them from a script for a particular simulation run, the results of which may depend on parameter values.
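For example, a sketch that re-solves the electrostatic study for a few electrode spacings (values chosen arbitrarily) and exports an image of each field distribution:
>>> for d in (1, 2, 3):
...     model.parameter('d', f'{d}[mm]')
...     model.solve('static')
...     model.export('image', f'static field {d}mm.png')
...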
Saving results¶
To save the model we just solved, along with its solution, just do:
>>> model.save()
This would overwrite the existing file we loaded the model from. To avoid this, we could specify a different file name.
>>> model.save('capacitor_solved')
The .mph
extension will be added automatically if it is not included
in the first place.
Maybe we don’t actually need to keep the solution and mesh data around. The model was quick enough to solve, and we do like free disk space. We would just like to be able to look up modeling details somewhere down the line. Comsol also keeps track of the modeling history: a log of which features were created, deleted, modified, and in which order. Typically, these details are irrelevant. We can prune them by resetting that record.
>>> model.clear()
>>> model.reset()
>>> model.save('capacitor_compacted')
Most functionality that the library offers is covered in this tutorial. The few things that were left out can be gleaned from the API documentation. A number of use-case examples are showcased in chapter Demonstrations.
Limitations¶
Java bridge¶
MPh is built on top of the Python-to-Java bridge JPype. It is JPype that allows us to look at Comsol’s Java API and run the same commands from Python. All credit to the JPype developers for making this possible.
The Comsol API does not support running more than one client at a time, i.e. within the same Java program. Meanwhile, JPype cannot manage more than one Java virtual machine within the same Python process. If it could, it would be easy to work around Comsol’s limitation. (There is an alternative Java bridge, pyJNIus, which is not limited to one virtual machine, but then fails in another regard: A number of Java methods exposed by Comsol are inexplicably missing from the Python encapsulation.)
Therefore, if several simulations are to be run in parallel, distributed over independent processor cores in an effort to achieve maximum speed-up of a parameter sweep, they have to be started as separate Python subprocesses. Refer to section “Multiple processes” for a demonstration.
Additionally, there are some known, but unresolved issues with JPype’s
shutdown of the Java virtual machine. Notably, pressing Ctrl+C
to interrupt an ongoing operation will usually crash the Python session.
So do not rely on catching KeyboardInterrupt
exceptions in
application code.
Platform differences¶
The Comsol API offers two distinct ways to run a simulation
session on the local machine. One may either start a “stand-alone”
client, which does not require a Comsol server. Or one may start a
server separately and have a “thin” client connect to it via a
loop-back network socket. The first approach is more lightweight and
more reliable, as it keeps everything inside the same process. The
second approach is slower to start up and relies on the inter-process
communication to be robust, but would also work across the network,
i.e., for remote sessions where the client runs locally and delegates
the heavy lifting to a server running on another machine. If we
instantiate the Client
class without providing a
value for the host address and network port, it will create a
stand-alone client. Otherwise it will run in client–server mode.
On Linux and macOS however, the stand-alone mode does not work out of
the box. This is due to a limitation of Unix-like operating systems
and explained in more detail in GitHub issue #8. On these
platforms, if all you did was install MPh, starting the client in
stand-alone mode will raise a java.lang.UnsatisfiedLinkError
because required external libraries cannot be found. You would have
to add the full paths of shared-library folders to an environment
variable named LD_LIBRARY_PATH
on Linux and DYLD_LIBRARY_PATH
on
macOS.
For example, for an installation of Comsol 5.6 on Ubuntu Linux, you
would add the following lines at the end of the shell configuration
file .bashrc
.
# Help MPh find Comsol's shared libraries.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH\
:/usr/local/comsol56/multiphysics/lib/glnxa64\
:/usr/local/comsol56/multiphysics/lib/glnxa64/gcc\
:/usr/local/comsol56/multiphysics/ext/graphicsmagick/glnxa64\
:/usr/local/comsol56/multiphysics/ext/cadimport/glnxa64
On macOS, the root folder is /Applications/COMSOL56/Multiphysics
.
The folder names in this example depend on the installed Comsol version
and will have to be adapted accordingly.
Requiring this variable to be set correctly limits the possibility of selecting a specific Comsol version from within MPh, as adding multiple installations to that search path will lead to name collisions. One could work around the issue by wrapping a Python program using MPh in a shell script that sets the environment variable only for that one process. Or have the Python program run the Comsol session in another Python subprocess. However, none of this is ideal. Starting the client should work without any of these detours.
The function mph.start()
exists to navigate these
platform differences. On Windows, it starts a stand-alone client in
order to profit from the better start-up performance. On Linux and
macOS, it creates a local session in client–server mode so that no
shell configuration is required up front. This behavior is reflected
in the configuration option 'session'
, accessible via
mph.option()
, which is set to 'platform-dependent'
by default. It could also be set to 'stand-alone'
or 'client-server'
before calling start()
in order to override the
default behavior.
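For example, to force client–server mode regardless of platform:
import mph
mph.option('session', 'client-server')
client = mph.start()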
Performance in client–server mode is noticeably worse in certain
scenarios, not just at start-up. If functions access the Java API
frequently, such as when navigating the model tree, perhaps even
recursively as mph.tree()
does, then client–server
mode can be slower by a large factor compared to a stand-alone client.
Rest assured however that simulation run-times are not affected.
Conversely, setting up stand-alone mode on Linux or macOS is also not a robust solution. Image exports, for example, are known to crash due to some conflict with external libraries. As opposed to Windows, where this works reliably.
Demonstrations¶
Busbar¶
“Electrical Heating in a Busbar” is an example model featured in the tutorial of “Introduction to Comsol Multiphysics” and explained there in great detail. The section “Getting the Maximum and Minimum Temperature” demonstrates how to obtain the two temperature extremes within the Comsol GUI.
The following Python code does the same thing programmatically:
import mph
client = mph.start()
model = client.load('busbar.mph')
model.solve()
(x, y, z, T) = model.evaluate(['x', 'y', 'z', 'T'])
(Tmax, Tmin) = (T.max(), T.min())
(imax, imin) = (T.argmax(), T.argmin())
print(f'Tmax = {Tmax:.2f} K at ({x[imax]:5f}, {y[imax]:5f}, {z[imax]:5f})')
print(f'Tmin = {Tmin:.2f} K at ({x[imin]:5f}, {y[imin]:5f}, {z[imin]:5f})')
This outputs the exact same numbers that appear in the table of the GUI:
Tmax = 330.42 K at (0.105000, -0.024899, 0.053425)
Tmin = 322.41 K at (0.063272, 0.000000, 0.000000)
You could now sweep the model’s parameters, for example the length L
or width wbb
of the busbar.
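A sketch of such a sweep; the parameter name wbb comes from the busbar model, while the values and unit used here are assumptions for illustration:
for width in ('5[cm]', '10[cm]'):
    model.parameter('wbb', width)
    model.solve()
    (x, y, z, T) = model.evaluate(['x', 'y', 'z', 'T'])
    print(f'wbb = {width}: Tmax = {T.max():.2f} K')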
Compacting models¶
We usually save models to disk after we have solved them, which includes the solution and mesh data in the file. This is convenient so that we can come back to the model later, but don’t have to run the simulation again, which may take a long time. However, the files then require a lot of disk space. After a while, we may want to archive the models, but trim the fat before we do that.
To compact all model files in the current working directory, we can do this:
import mph
from pathlib import Path
client = mph.start()
for file in Path.cwd().glob('*.mph'):
    print(f'{file}:')
    model = client.load(file)
    model.clear()
    model.save()
The script compact_models.py
in the demos
folder
of the source-code repository is a refined version of the above
code. It displays more status information and also resets the modeling
history.
Note that we could easily go through all sub-directories recursively
by replacing glob
with rglob
. However, this should
be used with caution so as to not accidentally modify models in folders
that were not meant to be included.
Multiple processes¶
As explained in Limitations, we cannot run more than
one Comsol session inside the same Python process. But we can start
multiple Python processes in parallel if we leverage the
multiprocessing
module from the standard library.
import mph
import multiprocessing
import queue
Additionally, we have imported the queue
module, also from
the standard library, though only for the queue.Empty
exception
type that it provides.
In this demonstration, we will solve the model capacitor.mph
from the Tutorial. We want to sweep the electrode distance
d and calculate the capacitance C for each value of the distance,
ranging from 0.5 to 5 mm.
values = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]
Next, we define the function that we intend to run in every process,
i.e. the “worker”. The function sets up the Comsol session when the
process starts, then keeps solving the model for every distance value
that it receives via a jobs
queue. Each time, it evaluates the
solution and returns the capacitance via a results
queue. It does
so until the jobs
queue is exhausted, upon which the function
terminates, and with it Comsol session and Python process.
def worker(jobs, results):
    client = mph.start(cores=1)
    model = client.load('capacitor.mph')
    while True:
        try:
            d = jobs.get(block=False)
        except queue.Empty:
            break
        model.parameter('d', f'{d} [mm]')
        model.solve('static')
        C = model.evaluate('2*es.intWe/U^2', 'pF')
        results.put((d, C))
Each worker will only use one of the processor cores available on the machine, as that’s the whole point: We want to achieve maximum speed-up of, say, a parameter sweep, by having each core work on a job corresponding to one of the many parameter values, which it can do independently of work being done for any other value.
We could also solve this sequentially, one parameter value at a time. Comsol’s solver could then make use of all cores and would also employ some parallelization strategy in its internal computation. But the speed-up would not scale linearly with the number of cores, especially for large numbers of them.
We might use a “parametric sweep”, a feature that Comsol does offer. But by doing this in Python we retain full programmatic control of which parameter is solved for and when. The parameter values don’t have to be hard-coded, they could come from user input or be generated depending on the outcome of previous simulations. For example, this approach lends itself to iterative optimization schemes such as the “genetic algorithm”, where a batch of simulations would be run for each new “generation”.
Note how the returned results also contain the input parameter. As the worker processes will run asynchronously in parallel, we cannot take for granted that output is returned in input order.
Before we start the computation, we add all parameter values to the
jobs
queue:
jobs = multiprocessing.Queue()
for d in values:
    jobs.put(d)
We also have to provide the results
queue, which is of course empty
at first.
results = multiprocessing.Queue()
Then we can start a number of workers, say four:
for _ in range(4):
    process = multiprocessing.Process(target=worker, args=(jobs, results))
    process.start()
It may be a good idea to hold on to the process
objects and add them
to a list processes
, just so that Python’s garbage collection won’t
accidentally delete them while the external processes are running.
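That variant could look like this:
processes = []
for _ in range(4):
    process = multiprocessing.Process(target=worker, args=(jobs, results))
    process.start()
    processes.append(process)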
Finally, still in the main process that starts all the workers, we can
collect the results. We use a for
loop and exploit the fact that
there will be as many results as there were jobs to begin with.
for _ in values:
    (d, C) = results.get()
We would then display them, plot them, save them to a file, or whatever it is we do with simulation results.
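For example, a minimal sketch that extends the loop above to collect the pairs into a dictionary and then print them sorted by distance:
capacitance = {}
for _ in values:
    (d, C) = results.get()
    capacitance[d] = float(C)
for d in sorted(capacitance):
    print(f'd = {d} mm: C = {capacitance[d]:.3f} pF')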
The complete script worker_pool.py
, which implements all of
the above and also irons out some wrinkles not covered here for the
sake of brevity, can be found in the demos
folder of the
source-code repository. As it runs, it displays a live plot
such as the one that follows. It is reproduced here preserving the real
time from a run with two workers. Observe how the first two data points
do in fact come in out of order.

A more advanced implementation may use a class derived from
multiprocessing.Process
instead of a mere function, just to
be able to save state. For long-running simulations it would make sense
to store jobs and results on disk, rather than in memory, so that the
execution of the queue may be resumed after a possible interruption.
In that case one may, or may not, find the subprocess
module from the standard library more convenient for starting the
external processes. The worker implementation would then be in a
separate module that is run as a script.
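A rough sketch of that approach, assuming a hypothetical script worker.py that takes the distance value as its command-line argument:
import subprocess

# One external worker per parameter value. The script worker.py is hypothetical.
processes = [subprocess.Popen(['python', 'worker.py', str(d)]) for d in values]

# Wait for all of them to finish.
for process in processes:
    process.wait()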
Creating models: Java style¶
The primary focus of MPh is to automate the simulation workflow, like running parameter sweeps or optimization routines with customized, Python-powered post-processing. Creating and altering models is possible, see next section, but has some limitations.
However, any and all functionality offered by the Comsol Java API
is accessible via the “pythonized” Java layer provided by JPype,
which is exposed as the .java
attribute of Client
instances, mapping to Comsol’s ModelUtil
, as well as of
Model
instances, mapping to Comsol’s model
.
Let’s take this Comsol blog post as an example: “Automate your modeling tasks with the Comsol API for use with Java”. It starts with the following Java code:
import com.comsol.model.*;
import com.comsol.model.util.*;

public class HelloWorld {

    public static void main(String[] args) {
        run();
    }

    public static Model run() {
        Model model = ModelUtil.create("Model");
        model.modelNode().create("comp1");
        model.geom().create("geom1", 3);
        model.geom("geom1").feature().create("blk1", "Block");
        model.geom("geom1").feature("blk1").set("size", new String[]{"0.1", "0.2", "0.5"});
        model.geom("geom1").run("fin");
        return model;
    }
}
What the code does is create a model containing a 3D geometry component: a block 0.1 by 0.2 by 0.5 meters in size.
In Python, we would achieve the same like so:
import mph
client = mph.start()
pymodel = client.create('Model')
model = pymodel.java
model.modelNode().create("comp1");
model.geom().create("geom1", 3);
model.geom("geom1").feature().create("blk1", "Block");
model.geom("geom1").feature("blk1").set("size", ["0.1", "0.2", "0.5"]);
model.geom("geom1").run("fin");
Note how the functional Java code (excluding Java-specific syntax
elements) was essentially copied and pasted, even the semicolons,
which Python simply ignores. We named the Python wrapper pymodel
and assigned model
to the underlying Java object just so we could
do this. We had to replace new String[]{"0.1", "0.2", "0.5"}
because
Python does not know what new
means. There, Java expects a
list of three strings. So we replaced the expression with
["0.1", "0.2", "0.5"]
, the Python equivalent of just that: a list
of these three strings.
Occasionally when translating Java (or Matlab) code you find in the documentation, or a blog post as the case was here, or which Comsol generated from your model when you saved it as a Java/Matlab file, you will have to amend code lines such as the one above. But they are few and far between. The error messages you might receive should point you in the right direction and the JPype documentation would offer help on issues with type conversion.
The advantages of using Python over Java are:
You don’t really need to know Java. Just a little, to understand that occasionally we have to take care of type conversions that JPype cannot handle all by itself. Which is rare.
You don’t need to install Java. It just ships with Comsol. You also don’t need to bother with compiling Java source code to Java classes via comsolcompile.
You can use Python introspection to understand how Comsol models are “created in code”. The Comsol documentation explains a lot of things, but not every little detail. Either use Python’s built-in dir() or call mph.inspect() to see a pretty-fied representation of a Java object in the model tree.
To save the model created in the above example, we do:
pymodel.save('model')
This stores a file named model.mph
in the working directory, which
may then be opened in the Comsol GUI or be used in any other Python,
Java, or Matlab project.
Creating models: Python style¶
The example from the previous section can be expressed in much more
idiomatic Python syntax if we ignore the Java layer and only use
methods from the Model
class.
import mph
client = mph.start()
model = client.create()
model.create('geometries', 3)
model.create('geometries/Geometry 1', 'Block')
model.property('geometries/Geometry 1/Block 1', 'size', ('0.1', '0.2', '0.5'))
model.build('Geometry 1')
This, again, hides all tags in application code. Instead, we refer to nodes in the model tree by name. In the example, these names were generated automatically, in the same way the Comsol GUI does it. We could also supply names of our choice.
import mph
client = mph.start()
model = client.create('block of ice')
model.create('geometries/geometry', 3)
model.create('geometries/geometry/ice block', 'Block')
model.property('geometries/geometry/ice block', 'size', ('0.1', '0.2', '0.5'))
model.build('geometry')
If model.create() receives a reference to a node that does not exist yet, such as geometries/geometry in the example, it creates that node in its parent group, here the built-in group geometries, and gives it the name we supplied, here geometry.
So far, we have used strings to refer to nodes. We could also use the
Node
class, which offers more flexibility and extra
functionality. Instances of that class are returned by
model.create()
for convenience. But they can be
generated from scratch by string concatenation with the division
operator — much like pathlib.Path
objects from Python’s
standard library.
import mph
client = mph.start()
model = client.create('block of ice')
geometries = model/'geometries'
geometry = geometries.create(3, name='geometry')
block = geometry.create('Block', name='ice block')
block.property('size', ('0.1', '0.2', '0.5'))
model.build(geometry)
The division operator is the Swiss army knife for accessing nodes in
the model tree. It even works with client
as root. Within that last
example, the following notations
client/'block of ice'/'geometries'/'geometry'/'ice block'
model/'geometries'/'geometry'/'ice block'
geometries/'geometry'/'ice block'
geometry/'ice block'
block
all refer to the same geometry element in the model. We could also include the forward slash in a string expression instead of using it as an operator, just like we did in the first and second example.
model/'geometries/geometry/ice block'
The model’s root node can be referenced with either model/'' or model/None. If any of the node names in the hierarchy contain a forward slash themselves, that forward slash can be escaped (i.e., marked to be interpreted literally) by doubling it, for instance: geometry/'ice//frozen water'.
The example model discussed here produces the following model tree:
>>> mph.tree(model)
block of ice
├─ parameters
│ └─ Parameters 1
├─ functions
├─ components
│ └─ Component 1
├─ geometries
│ └─ geometry
│ ├─ ice block
│ └─ Form Union
├─ views
│ └─ View 1
├─ selections
├─ coordinates
│ └─ Boundary System 1
├─ variables
├─ couplings
├─ physics
├─ multiphysics
├─ materials
├─ meshes
├─ studies
├─ solutions
├─ batches
├─ datasets
├─ evaluations
├─ tables
├─ plots
└─ exports
The parameter group, model component, default view and coordinate system were created by Comsol automatically. We could rename these nodes if we wanted to. Most built-in groups are still empty, waiting for features to be created.
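For instance, assuming the Node class provides a rename() method, the automatically created component could be renamed like so:
(model/'components'/'Component 1').rename('component')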
The demo script create_capacitor.py
shows how to create
more advanced features than in the simple example here: It generates
the demonstration model used in the Tutorial.
API¶
Code documentation of the public application programming interface provided by this library.
mph.start(): Starts a local Comsol session.
mph.option(): Manages configuration options.
mph.Client: Manages the Comsol client instance.
mph.Server: Manages a Comsol server instance.
mph.Model: Represents a Comsol model.
mph.Node: Represents a model node.
mph.tree(): Displays the model tree.
mph.inspect(): Inspects a Java node object.
Discovers Comsol installations.
Releases¶
1.0.4¶
Added missing support for installations with classkit license. (#40)
Added name validation of available configuration options. (#40)
Added support for persistent configuration storage to mph.config.
Fixed: No preference checkforrecoveries on certain installations. (#39)
Fixed: Unclear error message when requested version is not installed. (#42)
1.0.3¶
Published on May 5, 2021.
Fixes: Client.remove() did not accept model by name.
Fixes: Node names were not escaped when creating new features.
Fixes: Model.save() failed when format given, but not path.
Fixes: Model.save() without path given did not save new models.
Fixes: Model.parameters(evaluate=True) returned strings, not numbers.
Fixes off-by-one error when passing inner indices to Model.evaluate(). Comsol expects 1-based indices, as opposed to Python’s 0-based indexing.
Adds missing built-in group 'couplings'.
mph.start() now returns existing client instance on subsequent calls.
1.0.2¶
Published on April 28, 2021.
Assigns more typical tag names when creating new model features. In most cases, tags are now named like they are in the Comsol GUI.
Node.retag() allows post-hoc modification of a node’s tag.
Adds missing built-in groups, e.g. evaluations and tables.
Improves performance of node navigation in client–server mode.
The internal type-casting converts Node instances to their tags.
The internal type-casting handles lists of numbers. Before, property() and create() would only accept lists of strings.
Node.type() now returns nothing if node has no feature type.
Moved tutorial model to demos folder.
Added demo script create_capacitor.py that generates the tutorial model.
1.0.1¶
1.0.0¶
Published on April 13, 2021.
We now offer you the best API Comsol has ever seen! 🎉
See “Creating models: Python style” for a feature demonstration.
A new Node class allows easy navigation of the model tree.
The Model class relies internally on Node for most functionality.
Feature nodes can be created with Model.create().
Node properties can be read and written via Model.property().
Feature nodes can be removed with Model.remove().
The Node class has additional functionality for modifying the model.
All feature nodes can now be toggled, not just physics features.
Model.features() and Model.toggle() have been deprecated. Use the Node class instead to access that functionality.
Model.import_() was introduced to supersede Model.load().
Arguments unit and description to Model.parameter() are deprecated. Parameter descriptions should now be accessed via Model.description().
Model.parameters() now returns a dictionary instead of named tuples. This is a breaking change, but in line with other parts of the API.
mph.start() now picks a random free server port in client–server mode. This avoids collisions when starting multiple processes on Linux and macOS.
Models may be saved as Java, Matlab, or VBA source files.
mph.tree() helps developers inspect the model tree in the console.
Known issue: Navigating the model tree is slow in client–server mode. It is much faster in stand-alone mode, the default on Windows.
Made folder search case-insensitive on Linux/macOS, as requested in #31.
Documentation builds now use the MyST parser and the Furo theme.
0.9.1¶
Published on March 24, 2021.
Added documentation chapter “Demonstrations”.
Added demo script that runs parallel Comsol sessions.
Amended mph.start() to allow hand-selecting the server port. This makes the demo script work reliably on Linux and macOS.
Improved error handling at server start-up.
Relaxed log levels during discovery of Comsol installations.
This suppresses possibly confusing log messages as described in #28.
0.9.0¶
Published on March 10, 2021.
mph.start() is now the preferred way to start a local Comsol session.
On Windows, it starts a lightweight, stand-alone client. On Linux and macOS, it starts a thin client and local server. This is due to limitations on these platforms described in issue #8.
Configuration options are exposed by mph.option().
An in-memory cache for previously loaded model files may be activated.
Selection names are returned by model.selections().
Feature names in physics interfaces are returned by model.features().
Feature nodes in physics interfaces can be toggled on or off.
Parameter descriptions can be modified.
Parameter values may be returned as evaluated numbers instead of string expressions.
Custom classes derived from Model can now be more easily type-cast to.
Users are warned if log-in details for the Comsol server have not been set up.
Fixes issue #23 regarding discovery with older Python versions on Windows.
Fixes issue #24 regarding localized server output messages.
0.8.2¶
Published on February 13, 2021.
Works around issue of incorrect exit behavior.
Fixes: Exit code was always 0, even when terminating with sys.exit(2).
Fixes: Exit code was 0, not 1, when exiting due to unhandled exception.
0.8.1¶
Published on February 9, 2021.
Applies fixes for macOS from pull request #11.
macOS support has now actually been tested according to issue #13.
0.8.0¶
Published on February 7, 2021.
Adds support for Linux and macOS.
Caveats apply. See documentation chapter “Limitations” as well as issues #8 and #9.
Refactored discovery mechanism for Comsol installations.
0.7.6¶
Published on November 29, 2020.
Unpins JPype and Python version.
Works around issue #1 by brute-forcing shutdown of Java VM.
Client instances now report the Comsol version actually used.
Updates the documentation regarding limitations.
Resolves issue #4 regarding compatibility with 32-bit Python.
Possibly resolves issue #5 regarding spaces in path names.
0.7.5¶
Published on July 30, 2020.
First release used extensively “in production”.
Last release based on JPype 0.7.5.
Performs a regular shutdown of the Java VM, as opposed to releases to follow.
Respects user-set Comsol preferences when starting Client.
Adds screen-shot of Comsol demonstration model to Tutorial.
Adds deployment instructions for developers.
0.7.4¶
Published on July 17, 2020.
Pins JPype dependency to version 0.7.5.
Works around shutdown delays of the Java VM, see issue #1.
Requires Python version to be 3.8.3 or below.
Minor improvements to wording of documentation.
0.7.3¶
Published on June 15, 2020.
Suppresses console pop-up during client initialization.
Ignores empty units in parameter assignments.
0.7.2¶
Published on May 18, 2020.
Makes dataset argument to Model.outer() optional.
Minor tweaks to project’s meta information.
0.7.1¶
0.7.0¶
Published on May 17, 2020.
First open-source release published on PyPI.