Coupled PINN

newf(θ,p)=loss_function1(θ.prob1u0,p)+loss_function2(θ.prob2u0,p)
f_ = OptimizationFunction(newf, Optimization.AutoZygote())
u0 = ComponentArrays(prob1u0 = prob1.u0, prob2u0 = prob2.u0)
prob = Optimization.OptimizationProblem(f_, u0)

is a nice way to do it.
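For completeness, a rough sketch of the remaining step, i.e. actually solving the coupled problem and pulling the two weight blocks back out (the optimizer choice and iteration count are just placeholders):

using Optimization, OptimizationOptimisers

res = Optimization.solve(prob, Adam(0.01); maxiters = 1000)

res.u.prob1u0   # trained weights of the first network
res.u.prob2u0   # trained weights of the second network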


What would the output of this line be? Is it the same as Array[prob1u0, prob2u0], which gives a vector with two elements?
For ComponentArray I get the error “objects of type Module are not callable”.

u0 = ComponentArray(prob1u0 = prob1.u0, prob2u0 = prob2.u0)
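It is not the same as a plain two-element vector of arrays: a ComponentArray stores everything as one flat vector of weights (which is what the optimizer needs) while still letting you pull out the named blocks. A small illustration with made-up sizes:

using ComponentArrays

u0 = ComponentArray(prob1u0 = rand(3), prob2u0 = rand(4))

length(u0)     # 7: one flat vector, which is what the optimizer sees
u0.prob1u0     # view of the first 3 entries (the first network's weights)
u0.prob2u0     # view of the last 4 entries (the second network's weights)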


Thanks. Do you know what this error means?

 Uniform: the condition a < b is not satisfied.

I got this when running the res line.

That means you placed box constraints but the initial condition wasn’t in the box.
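A minimal sketch of what to check (lb, ub, and u0 are just illustrative names here; the Uniform(a, b) message comes from whatever is sampling inside the box, so which check matters can depend on the optimizer):

# box constraints are passed as lb/ub when building the problem
prob = Optimization.OptimizationProblem(f_, u0; lb = lb, ub = ub)

# worth checking both of these before solving
all(lb .< ub)           # every lower bound strictly below its upper bound
all(lb .<= u0 .<= ub)   # the initial guess lies inside the box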


I think at the point where you are asking about things like .- it might be time to learn a bit about how Julia works, rather than trying to write a Master’s thesis with line-by-line debugging by Chris on Discourse…

(The . operator is for broadcasting, see Functions · The Julia Language)
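For example (just to illustrate the broadcast, not the PINN loss itself):

x = [1.0, 2.0, 3.0]

x .- 1              # elementwise: [0.0, 1.0, 2.0]
# x - 1             # MethodError: there is no Vector-minus-scalar method

# in a loss, a broadcasted residual like this is usually squared and summed,
# so minimizing the loss pushes every entry toward the "equality" it encodes
sum(abs2, x .- 1)   # 5 here; it is 0 only when every element of x equals 1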


I know. I am sorry. I know the basics, but I couldn’t find anything about “.-”, and there is some code with no description at all, especially for PDE problems with PINNs. What should I do when I have no clue and there is no guidance, after spending four months on this problem? BTW, I know what .- means, but I want to understand why it is used this way in the loss function when it could have been written differently. FYI, I don’t think it is acting as an elementwise minus here. For example, dx*phi(x, θ) .- 1 is for the normalization constraint (Δt p(x) = 1). Somehow .- is being used like an equals sign, and I don’t know why.
I am going to stop asking him questions from now on.
Peace

Sorry that I have asked you so many questions, and I appreciate your help and time. Do you know of any course that introduces deep learning or PINNs with Julia? I have found some videos, but those are mostly basic and start from the beginning. I need a more advanced course that will help me understand my problem thoroughly.
Peace.

https://book.sciml.ai/notes/03-Introduction_to_Scientific_Machine_Learning_through_Physics-Informed_Neural_Networks/

function loss_function1(θ, p)
    # total loss: evaluate each loss term at the current weights θ and sum them
    sum(map(l -> l(θ), loss_functions1))
end

I have been studying the PINN docs. In some places theta is described as the weights, and in other places p is the weights. I am confused. In the definition of the loss function, what are theta and p?
Thanks.

It’s a function, you define the argument names.
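Concretely, with Optimization.jl only the argument positions matter: the first argument is whatever is being optimized (the network weights, usually written θ) and the second is the fixed parameters passed through the OptimizationProblem (usually written p). A tiny illustration (the names and the toy loss are made up):

using Optimization, OptimizationOptimisers, Zygote

# first argument = variables being optimized, second = fixed parameters
myloss(weights, hyper) = sum(abs2, weights) + hyper.penalty

f_ = OptimizationFunction(myloss, Optimization.AutoZygote())
prob = Optimization.OptimizationProblem(f_, rand(5), (penalty = 0.1,))
res = Optimization.solve(prob, Adam(0.1); maxiters = 200)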

In the PhysicsInformedNN documentation there is a statement about phi which is confusing.

  • phi: a trial solution, specified as phi(x,p) where x is the coordinates vector for the dependent variable and p are the weights of the phi function (generally the weights of the neural network defining phi). By default this is generated from the chain. This should only be used to more directly impose functional information in the training problem, for example imposing the boundary condition by the test function formulation.

Since the example is solved for only x, how can I handle the dimensions in a multi-dimensional problem, especially when defining an additional loss on a single point, line, or surface? Here it says x is a vector.
For example, if I want a specific line and my dimensions are t, x, y: should I pass them as a vector in place of x, or will x automatically be treated as the vector [t, x, y]?

This — i.e., the coordinates argument is the vector of all the independent variables, [t, x, y] in your case.
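Assuming phi follows the convention quoted above (coordinates first, weights second) and the independent variables are declared in the order t, x, y, a single point is the vector [t, x, y] and a batch of points is usually stacked as the columns of a 3×N matrix. A rough sketch of an extra loss term on a specific line (the line, the number of samples, the target value 1.0, and the name line_loss are all placeholders; in practice it would go into whatever additional_loss you pass to PhysicsInformedNN):

ts  = range(0.0, 1.0; length = 50)
# points along the line x = 0.5, y = 0.5; rows are t, x, y
pts = vcat(ts', fill(0.5, 1, 50), fill(0.5, 1, 50))

# penalize deviation from the target value along that line
line_loss(phi, θ) = sum(abs2, phi(pts, θ) .- 1.0)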
