MPI-AMRVAC 3.1: The MPI - Adaptive Mesh Refinement - Versatile Advection Code (development version)
Below you can find a list of frequently asked questions and answers.
Yes, we have a mailing list that you can send questions to: amrvacusers@ls.kuleuven.be
You can subscribe to the mailing list, so that you will be informed about important changes and (bug)fixes. You can also search the mailing list archive.
Indeed, there is a quick and time-saving way to read your own parameters without having to give them an explicit value in the usr file and recompile each time. Instead, add a routine like the following in your usr file:
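A minimal sketch of such a routine (the namelist name my_list and the read pattern shown here are illustrative and should be adapted to your own setup; unitpar and par_files come from the AMRVAC modules your usr file already uses):

```fortran
  ! call this from usr_init(), e.g.:  call usr_params_read(par_files)
  subroutine usr_params_read(files)
    character(len=*), intent(in) :: files(:)
    integer                      :: n

    ! my_parameter_1 and my_parameter_2 are module variables of the usr file
    namelist /my_list/ my_parameter_1, my_parameter_2

    do n = 1, size(files)
       open(unitpar, file=trim(files(n)), status="old")
       read(unitpar, my_list, end=111)
111    close(unitpar)
    end do
  end subroutine usr_params_read
```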
Here, my_parameter_1 and my_parameter_2 are to be declared at the very beginning of the user file, just before the “contains” statement. Doing so, you can use them anywhere in the usr file, and they will have the values you defined in your par file by adding the following lines:
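Using the assumed namelist name my_list from the sketch above, the corresponding par-file entry would look like:

```
 &my_list
  my_parameter_1 = 1.0d0
  my_parameter_2 = 2.0d0
 /
```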
The idea of the onegrid option for conversion is that a hierarchical block-AMR grid (as stored in the .dat files) can be saved to an equivalent uniform grid representation at a user-chosen level (combine level_io with onegrid). You can then use any software you like to handle uniform grid data. Converting back to the .dat format is then impossible. However, you can write a user routine to read in the uniform data, and then use it to restart a similar uniform-grid (no AMR, just domain decomposed) simulation. The .dat files are all that is needed to do restarts.
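As an illustration only (the exact convert_type value for a one-grid conversion is an assumption here; check the filelist documentation of your version), combining level_io with the onegrid conversion would be set in the filelist namelist along these lines:

```
 &filelist
  convert_type = 'onegridmpi'   ! assumed value for the onegrid conversion
  level_io     = 3              ! resample everything to this grid level
 /
```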
If you want to change the data during the simulation, in principle you do not need another program to do that. For that purpose, we provide generic process subroutines that can be called every iteration, immediately after the advance step, or just once at restart (after reading in a .dat file). These allow you to modify things during runtime (of course, it should make physical sense). See their interfaces in mod_usr_methods.t.
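A minimal sketch of one such hook (the name usr_process_grid and its argument list should be verified against mod_usr_methods.t in your version; floor_density is a hypothetical user routine):

```fortran
  ! in usr_init():
  !   usr_process_grid => floor_density

  subroutine floor_density(igrid,level,ixI^L,ixO^L,qt,w,x)
    ! called on every grid block: here it simply enforces a lower
    ! bound on the density (illustrative only)
    integer, intent(in)             :: igrid, level, ixI^L, ixO^L
    double precision, intent(in)    :: qt, x(ixI^S,1:ndim)
    double precision, intent(inout) :: w(ixI^S,1:nw)

    where (w(ixO^S,rho_) < 1.0d-12) w(ixO^S,rho_) = 1.0d-12
  end subroutine floor_density
```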
The logical variable save_physical_boundary can be set to true, which enforces the .dat files to also contain the ghost cell information for physical boundaries (i.e., those beyond the left or right edge of the domain, in each direction). You can use such a file (like any other .dat file) to restart, and this helps if you want to use saved boundary info in your boundary value handling. However, all our present conversion options (e.g. to .vtu files) do not store this extra info, so you cannot use them for visualizing the ghost cell info. For that, you will need to handle the .dat files directly, e.g. using python.
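In the par file this amounts to something like the following (it is assumed here that save_physical_boundary belongs to the filelist namelist; check the par-file documentation for your version):

```
 &filelist
  save_physical_boundary = .true.
 /
```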
This means that your MPI library was compiled with a different version of gfortran (and GCC) than the one you are using now. If you run on a cluster, contact their support and ask them to fix it. If you encounter this problem on your own machine, you can try to rebuild your MPI library with your current compiler, or to switch to the gfortran version your MPI library was built with.
The MPI-AMRVAC code has Hall-MHD included, as detailed in some of our publications, e.g. in the method paper or in the Kelvin-Helmholtz related application paper. The implementation details are given in the first reference (Porth et al.), and although it works properly on several tests and applications, we note that the time step constraint of our explicit implementation may become prohibitive for particular applications. We simply limit the CFL condition according to \(\Delta t < \Delta x / c_w\), with time step and spatial step linked by the speed \(c_w\), in which we set \(c_w = |v| + \max(c_{\mathrm{fast}}, \eta_h B k_{\max}/\rho)\), where \(k_{\max}\) is the maximal wavenumber we can represent (i.e. linked to \(\Delta x\)). The dispersive nature of the Hall-MHD system may then make \(\Delta t\) scale like \(\Delta x^2\), and this limits the current implementation.
In MPI-AMRVAC 3.0, the Hall effect is included when setting the following in the mhd_list namelist part:
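A hedged sketch of such a setting (the parameter names mhd_Hall and mhd_etah are assumed here, and the coefficient value is purely illustrative; check the mhd_list documentation for the exact names):

```
 &mhd_list
  mhd_Hall = .true.
  mhd_etah = 1.0d0   ! assumed Hall coefficient parameter; value is illustrative
 /
```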
In the same namelist, the optional logical mhd_4th_order=.true. implies a 4th order evaluation for the currents; by default, it is only second order. In any case, you may also need to activate an additional ghost cell layer (or 2 for 4th order evaluations), by setting the parameter nghostcells appropriately.
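Put together, a hedged par-file sketch (the placement of nghostcells in the meshlist namelist and the number of ghost layers shown are assumptions to adapt to your discretization):

```
 &mhd_list
  mhd_4th_order = .true.
 /

 &meshlist
  nghostcells = 4   ! illustrative: 4th order current evaluation needs extra ghost layers
 /
```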