Posted: 25 Sep 2012, 11:28 GMT-4
I prefer direct solvers; they are usually way faster.
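To illustrate the trade-off behind that preference (a generic SciPy/SuperLU sketch with an assumed 2D test matrix, nothing COMSOL-specific): the direct solve itself is trivial to invoke; what costs RAM is the fill-in of the factorization.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

m = 100
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
# 2D 5-point Laplacian, 10 000 unknowns (assumed stand-in for an FEM matrix)
A = (sp.kron(sp.identity(m), T) + sp.kron(T, sp.identity(m))).tocsc()
b = np.ones(A.shape[0])

lu = spla.splu(A)          # direct sparse LU factorization (SuperLU)
x = lu.solve(b)

print("nonzeros in A:         ", A.nnz)
print("nonzeros in L+U (fill):", lu.L.nnz + lu.U.nnz)
```

On a 3D vector-wave model the same fill-in grows much faster with problem size, which is exactly where the RAM runs out.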
Posted: 25 Sep 2012, 14:00 GMT-4
Hehe, Alexander sir, I appreciate the direct solver suggestion, but alas, RAM is quite limited on my end.
I've been reading everywhere that geometric multigrid combined with GMRES is very powerful (sometimes only slightly slower than direct solvers), but I haven't been able to make it work in COMSOL. Do you have any experience with it?
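Just so it's clear what I mean by that combination, here is a toy sketch in SciPy of GMRES preconditioned by a two-grid cycle on a 1D Poisson problem (the grid sizes and smoother settings are arbitrary assumptions, and my actual model is a 3D RF problem, which is far harder):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 255                                        # fine-grid interior points
h = 1.0 / (n + 1)
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr") / h**2
b = np.ones(n)

def smooth(x, rhs, sweeps=2, omega=0.8):
    """Weighted-Jacobi sweeps: the smoother of the two-grid cycle."""
    d = A.diagonal()
    for _ in range(sweeps):
        x = x + omega * (rhs - A @ x) / d
    return x

nc = (n - 1) // 2                              # coarse-grid size
hc = 1.0 / (nc + 1)
Ac = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(nc, nc), format="csc") / hc**2

def two_grid(rhs):
    """One V-cycle: pre-smooth, coarse-grid correction, post-smooth."""
    x = smooth(np.zeros_like(rhs), rhs)
    r = rhs - A @ x
    rc = (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2]) / 4   # full-weighting restriction
    ec = spla.spsolve(Ac, rc)                        # exact coarse solve
    e = np.zeros(n)                                  # linear interpolation back up
    e[1:-1:2] = ec
    e[0:-2:2] += 0.5 * ec
    e[2::2] += 0.5 * ec
    return smooth(x + e, rhs)

M = spla.LinearOperator((n, n), matvec=two_grid)
x, info = spla.gmres(A, b, M=M, rtol=1e-8)           # rtol needs SciPy >= 1.12
print("converged" if info == 0 else f"info = {info}")
```

With the two-grid preconditioner this converges in a handful of iterations, while plain GMRES stagnates. The part I can't reproduce is getting the same behavior from the COMSOL vector-wave system.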
Many thanks anyway for replying,
Aimi
Robert Koslover
Certified Consultant
Posted: 24 Oct 2012, 18:21 GMT-4
For cases of limited RAM, consider using GMRES with the SOR vector preconditioner. (But I encourage other readers here to comment with their agreement or disagreement.)
You may also find that setting the discretization to linear (in contrast to the default value of quadratic) can dramatically reduce the memory requirements and also speed up the computation, while in many cases still providing sufficient accuracy. In fact, this one trick may let you get away with using a direct solver (such as PARDISO).
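To make the first suggestion concrete, here is a rough scalar analogue in SciPy (an assumed diagonally dominant random test matrix; COMSOL's SOR vector preconditioner is a specialized variant for the vector/edge elements used in RF): one forward SOR sweep is just a lower-triangular solve with D/omega + L, so it costs almost no memory beyond the matrix itself.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 2000
A = sp.random(n, n, density=5.0 / n, random_state=rng, format="csr")
A = (A + sp.diags(abs(A) @ np.ones(n) + 1.0)).tocsr()  # make it diagonally dominant
b = rng.standard_normal(n)

omega = 1.0                                            # SOR relaxation factor
M_sor = sp.csr_matrix(sp.diags(A.diagonal() / omega) + sp.tril(A, k=-1))

def apply_sor(r):
    # One forward SOR sweep == forward substitution with (D/omega + L).
    return spla.spsolve_triangular(M_sor, r, lower=True)

M = spla.LinearOperator((n, n), matvec=apply_sor)
x, info = spla.gmres(A, b, M=M, rtol=1e-8, restart=100)  # rtol needs SciPy >= 1.12
print("converged" if info == 0 else f"info = {info}")
```

The linear-vs-quadratic point is separate from the preconditioner: dropping to linear elements cuts the number of degrees of freedom several-fold in 3D, which shrinks the matrix itself rather than the solver workspace.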
Posted: 25 Oct 2012, 04:52 GMT-4
Thank you for your reply, Robert.
I was running out of ideas, so I'll try playing with the discretization and solver settings you suggested. So far I've been experimenting with the GMRES solver and a multigrid preconditioner (SOR vector for the pre- and post-smoother), and I thought I'd share what I have found. As many of you probably know already, simulating normally incident waves with perfect electric conductors on the side boundaries is relatively easy: the meshes on opposite side boundaries don't need to match (they do need to match when applying periodic Floquet conditions). GMRES with the multigrid preconditioner (SOR vector smoother) at default settings seems to be the right way to go for me, with relatively fast convergence and modest memory use.
Problems started to appear when using a PML, since, as mentioned in the user guide, a swept mesh has to be applied in the PML region. I played around with the PML scaling factor, but so far I have only managed to make the convergence of the iterative method worse (longer solution times, or no convergence at all).
The interesting thing is that even with perfect conductors on the side boundaries, I can still get terrible convergence with GMRES and multigrid if I force the side boundaries to have identical meshes using the copy-mesh function. So I'm beginning to think the problem lies mainly in the volume mesh generated in between. I inspected the mesh quality, but I see little difference between the fast-converging case and the slow-converging one (the one with identical meshes on the side walls).
So far I find it really difficult when there are Floquet conditions on the side walls with a PML on top as well; adding the PML slows convergence down drastically. I tried playing with the relaxation factor of the SOR vector pre- and post-smoothers, but so far it either has no effect or makes things worse. The default relaxation factor of 1 seems close to optimal already.
Well, that's pretty much it from me. Hopefully somebody finds it a bit useful, or can add something to it.
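In case anyone wants to poke at the relaxation-factor question on a toy problem (an assumed 1D Poisson matrix in SciPy, not my actual COMSOL model), this is the kind of scan I mean. Keep in mind that the optimal omega for SOR as a standalone solver is not necessarily optimal for SOR as a multigrid smoother, which may be why the default of 1 looked best in my runs.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50
h = 1.0 / (n + 1)
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr") / h**2
b = np.ones(n)

def sor_residual(omega, sweeps=100):
    """Relative residual after a fixed number of forward SOR sweeps from x = 0."""
    M = sp.csr_matrix(sp.diags(A.diagonal() / omega) + sp.tril(A, k=-1))
    x = np.zeros(n)
    for _ in range(sweeps):
        x = x + spla.spsolve_triangular(M, b - A @ x, lower=True)
    return np.linalg.norm(b - A @ x) / np.linalg.norm(b)

for omega in (0.8, 1.0, 1.2, 1.5, 1.9):
    print(f"omega = {omega:3.1f}   rel. residual = {sor_residual(omega):.2e}")
```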
Aimi
Posted: 9 Jan 2013, 18:32 GMT-5
Hmm,
I was just wondering whether it's worthwhile to switch to an iterative solver at all.
I have limited experience with 3D RF supercells; I only managed to solve the problem at a very coarse resolution, so maybe the issue is that the quadratic discretization gives a kind of ill-conditioned, non-symmetric matrix problem.
What about just using a better surface mesh at the periodic boundary?
Posted: 10 Jan 2013, 05:04 GMT-5
The system matrix produced by FEM is highly sparse,
and GMRES is arguably the best solver for large sparse non-symmetric systems like that.
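For concreteness, a generic SciPy sketch (assumed 2D Laplacian test matrix, not an actual COMSOL export): GMRES touches the matrix only through matrix-vector products, so sparse storage is all it needs, though in practice it still wants a preconditioner, here an incomplete LU.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

m = 300
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(m, m))
A = (sp.kron(sp.identity(m), T) + sp.kron(T, sp.identity(m))).tocsr()
n = A.shape[0]                                   # 90 000 unknowns
b = np.ones(n)

dense_gb = n * n * 8 / 1e9                       # what a dense copy would need
sparse_mb = (A.data.nbytes + A.indices.nbytes + A.indptr.nbytes) / 1e6
print(f"dense storage: {dense_gb:.0f} GB, sparse storage: {sparse_mb:.1f} MB")

ilu = spla.spilu(A.tocsc(), drop_tol=1e-4, fill_factor=10)  # incomplete-LU preconditioner
M = spla.LinearOperator((n, n), matvec=ilu.solve)
x, info = spla.gmres(A, b, M=M, rtol=1e-8, restart=50)      # rtol needs SciPy >= 1.12
print("converged" if info == 0 else f"info = {info}")
```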