JuAFEM + ForwardDiff + NLsolve = Finite elements for nonlinear problems?

Ok, I’ve been playing around for a while using this method and example. Everything seems to work if I only have one field (either displacement or electric potential) and no terms that couple the two fields. However, I can’t quite figure out how to work with both fields together. Is the method of accessing them that I use above correct? That is: get the globaldofs(cell), extract those entries from the up vector, and assume that the first 4x3 components (since the cell is tetrahedral) are u and the next 4x3 are P. Does the ordering matter for the calculation of fe and Ke? And will AD be able to handle a function like

function element_potential(lengthdofs, cvP, cvu, u::Vector{T}, α, C, q, G) where T
    mid = div(lengthdofs, 2)
    uel = reinterpret(Vec{3, T}, u[1:mid])
    Pel = reinterpret(Vec{3, T}, u[mid+1:lengthdofs])
    out = zero(T)
    for qp in 1:getnquadpoints(cvu)
        ∇P = function_gradient(cvP, qp, Pel)
        δP = function_value(cvP, qp, Pel)
        δε = function_symmetric_gradient(cvu, qp, uel)

        out += Fc(δε, C) + Fg(∇P, G) + Fl(δP, α) + Fq(δP, δε, q)
    end
    return out
end

It would be awesome if there was an example with 2 or more fields, with a potential that connects them, and using the above method of solving.
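As a self-contained check of the question above (not the element code itself), here is a toy scalar potential with the same split-vector structure; the quadratic terms merely stand in for Fc, Fg, Fl and Fq. ForwardDiff differentiates through the split without trouble:

```julia
using ForwardDiff, LinearAlgebra

# Toy two-field "potential": split the dof vector in half, treat the first
# half as the u-dofs and the second as the P-dofs, return a scalar.
function toy_potential(x::AbstractVector)
    mid = div(length(x), 2)
    uel = view(x, 1:mid)             # "u" half of the element dofs
    Pel = view(x, mid+1:length(x))   # "P" half
    # includes a term coupling the two fields, like Fq(δP, δε, q)
    return 0.5 * dot(uel, uel) + 0.5 * dot(Pel, Pel) + dot(uel, Pel)
end

x  = rand(6)
fe = ForwardDiff.gradient(toy_potential, x)  # plays the role of fe
Ke = ForwardDiff.hessian(toy_potential, x)   # plays the role of Ke
```

For this homogeneous quadratic, the gradient equals the Hessian times x, which makes the AD result easy to verify by hand.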

Remember that you also have to multiply by the quadrature weight dΩ = getdetJdV(cvu, qp) at each quadrature point.
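The pattern, stripped of any JuAFEM machinery, is that every quadrature-point contribution gets scaled by its weight times the Jacobian of the mapping. A minimal self-contained illustration (2-point Gauss–Legendre for ∫₀¹ x² dx):

```julia
# Every quadrature-point contribution must be scaled by dΩ; dropping it
# here would give ≈ 2/3 instead of the correct 1/3.
function integrate_x2()
    ξ = (-1 / sqrt(3), 1 / sqrt(3))  # points on the reference interval [-1, 1]
    w = (1.0, 1.0)                   # reference weights
    out = 0.0
    for qp in 1:2
        x  = 0.5 * (ξ[qp] + 1)       # map to [0, 1]
        dΩ = 0.5 * w[qp]             # weight × Jacobian of the map,
        out += x^2 * dΩ              # analogous to getdetJdV(cvu, qp)
    end
    return out
end

integrate_x2()  # ≈ 1/3
```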

We should make it more clear how to extract the dofs from u for the different fields but what you do here is correct (assuming that you added the fields in the order u and P).

Other than the missing quadrature weight in the integration, I don’t see anything obviously wrong and AD should work well for this. What is the problem you encounter?

Basically, before the linear solver, we need to modify the tangent matrix K and the residual f so that u = K \ f will result in zeros on those degrees of freedom that are prescribed with Dirichlet boundary conditions. We could, of course, compute u_free = K[free, free] \ f[free] where free are the indices of the non constrained dofs, and then set u[free] = u_free but forming K[free, free] can be expensive if K is very large and this has to be done in every Newton step.
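The in-place alternative can be sketched on a small dense system (assumed helper name; for zero-valued Dirichlet conditions the symmetric elimination below is exact):

```julia
using LinearAlgebra

# Zero the row and column of each constrained dof, put 1 on the diagonal,
# and zero the rhs entry. Then u = K \ f has exact zeros at the constrained
# dofs, without ever forming K[free, free].
function apply_zero_dirichlet!(K::Matrix, f::Vector, constrained)
    for d in constrained
        K[d, :] .= 0.0
        K[:, d] .= 0.0
        K[d, d]  = 1.0
        f[d]     = 0.0
    end
    return K, f
end

K = [4.0 1 0; 1 4 1; 0 1 4]
f = [1.0, 2.0, 3.0]
apply_zero_dirichlet!(K, f, (1,))
u = K \ f   # u[1] == 0; u[2:3] solve the reduced 2×2 system
```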

When it comes to the residual and the convergence check, we only want norm(f[free]) to be small. The entries of f that correspond to constrained dofs contain the “residual forces” (reaction forces) and will generally not be zero at the point where the system is solved.

Can you do this by forming a custom Jacobian type and overloading \? For the residual check, maybe allowing vectors of reltol and abstol and setting some of their entries to Inf would be enough?
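A hypothetical sketch of that suggestion (none of these names are JuAFEM or NLsolve API): wrap the tangent together with the free-dof indices and overload `\` to solve only on the free block. For brevity this body forms K[free, free]; a real implementation would use the in-place row/column modification instead.

```julia
using LinearAlgebra

# Hypothetical wrapper type: tangent matrix plus the indices of the
# unconstrained (free) dofs.
struct ConstrainedTangent{M<:AbstractMatrix}
    K::M
    free::Vector{Int}
end

# Overload \ so that constrained dofs come out exactly zero.
function Base.:\(A::ConstrainedTangent, f::AbstractVector)
    u = zeros(eltype(f), length(f))            # constrained dofs stay zero
    u[A.free] = A.K[A.free, A.free] \ f[A.free]
    return u
end

K = [4.0 1 0; 1 4 1; 0 1 4]
f = [1.0, 2.0, 3.0]
A = ConstrainedTangent(K, [2, 3])
u = A \ f   # u[1] == 0
```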

Right, so this seems like being able to pass a function that does the linear solve and being able to pass a callback should cover these things, right?

Yes, definitely.

Okay, that was definitely an issue. I seem to be getting some weird results, basically not what I would expect from the formulas and boundary conditions. It also seems that the method drops instantly into some kind of minimum, after which nothing much happens anymore. I tried making a simpler model and just moving the input vector along the negative of the derivative f that gets constructed inside the assembly step (to hopefully mimic some steepest descent method). This goes horribly wrong, however.

With regards to the hyperelasticity example, what I don’t get is that you call un the previous solution vector, but it seems un never gets updated; only u gets incremented by the update, which is recomputed at each step. I’m a bit confused as to how this works.

There is no time stepping in that example, but if there were, we would set un = u after the Newton iteration, before starting the next time step.
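In skeleton form, under the (toy) assumption of implicit Euler for the scalar ODE u' = -u, the `un = u` update would sit here:

```julia
# Toy time-stepping loop: implicit Euler for u' = -u, with a Newton solve
# per step on the residual r(u) = u - un + dt*u (tangent 1 + dt).
function run_time_steps(; dt = 0.1, nsteps = 10)
    un = 1.0                       # previous solution
    for _ in 1:nsteps              # time loop
        u = un                     # initial guess for this step
        for _ in 1:10              # Newton iterations (linear residual,
            r = u - un + dt * u    #  so one iteration already converges)
            u -= r / (1 + dt)
        end
        un = u                     # <- set un = u before the next time step
    end
    return un
end

run_time_steps()  # ≈ (1/1.1)^10
```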

Ah that makes sense.

Everything works now! I changed the Newton steps slightly, not using the cg function because it kept complaining. The bug was a rogue + sitting at the start of the next line instead of at the end of the previous one. Accessing all the elements inside the vector for the starting conditions is still something I couldn’t quite figure out, but that’s something I should be able to fix. One last question: is it possible to simulate a 3D field (Pxyz) on a 2D geometry?

Ah, that’s a classic.
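For reference, the pitfall looks like this: a `+` at the start of a line instead of the end of the previous one silently splits the expression in two.

```julia
a = 1 + 2
  + 3       # a complete statement on its own; `+3` is evaluated and discarded
b = 1 + 2 +
    3       # trailing `+` tells the parser the expression continues
# a == 3, b == 6
```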

Currently no, although it shouldn’t be too hard to support.

Also note that

 uel = reinterpret(Vec{3, T}, u[1:mid])
 Pel = reinterpret(Vec{3, T}, u[mid+1:lengthdofs])

is actually not needed. It is totally fine to just use

 uel = u[1:mid]
 Pel = u[mid+1:lengthdofs]

Yes, I found that out recently! I really like the JuAFEM package, by the way; it’s really clear what is doing what and how it works, even for a complete novice to FEM like me. It also seems really versatile: being able to hijack it to minimize something in real space (since I was too lazy to implement derivatives on different meshes myself) is really cool! Once the Optim.jl push happens so we can supply the Hessian and gradient ourselves, it will be very good. Also, thanks a lot for your help!

That’s great to hear! Also, please open issues for things you find awkward or lacking. The development of the package has so far mostly been driven by my and Kristoffer’s research, so we have only implemented the things that we need ourselves.

For example:

uel = u[1:mid]
Pel = u[mid+1:lengthdofs]

from above is rather awkward, we should do something better for getting the dofs for a specific field.
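One possible shape for such a helper (hypothetical, not existing JuAFEM API): given the element dof vector and the per-field dof counts in the order the fields were added, return the slice belonging to one field.

```julia
# Hypothetical helper: slice out the dofs of field `i` from the element dof
# vector, assuming fields are stored contiguously in the order they were added.
function field_dofs(u::AbstractVector, fielddims, i::Int)
    offset = sum(fielddims[1:i-1]; init = 0)
    return view(u, offset+1:offset+fielddims[i])
end

u   = collect(1.0:24.0)           # e.g. 4 nodes × 3 components × 2 fields
uel = field_dofs(u, (12, 12), 1)  # first field (u)
Pel = field_dofs(u, (12, 12), 2)  # second field (P)
```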