The lower-bound problem may depend on how we define the copula function. The following is my understanding. Consider a bivariate joint distribution constructed by a copula function:

H_\rho(x, y) = C_\rho(\Phi_x(x), \Phi_y(y)),

where H_\rho(\cdot, \cdot) is the joint distribution function, C_\rho(\cdot, \cdot) is the copula, and \Phi_x(\cdot) is the CDF of the marginal distribution of x; similarly for \Phi_y(\cdot).
In the case where both x and y have normal marginal distributions, we have x \in (-\infty, \infty) and y \in (-\infty, \infty), while \Phi_x(x) and \Phi_y(y) are each uniformly distributed on (0, 1).
So whether the lower bounds in the code should be -\infty or 0 may depend on whether the model is written in terms of (x, y) or in terms of (\Phi_x(x), \Phi_y(y)).
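To make the two parametrizations concrete, here is a minimal sketch in Julia using Copulas.jl and Distributions.jl. The Gaussian copula parameters and the standard-normal marginals are placeholders of my own, not taken from your model; the sketch builds H_\rho via `SklarDist` and checks that \Phi_x(x) (the probability integral transform) lands in (0, 1):

```julia
using Copulas, Distributions, Random

# Placeholder Gaussian copula (rho = 0.5) and standard-normal marginals
C1 = GaussianCopula([1.0 0.5; 0.5 1.0])
D1 = SklarDist(C1, (Normal(0, 1), Normal(0, 1)))   # plays the role of H_rho(x, y)

# Probability integral transform: Phi_x(x) is uniform on (0, 1)
xs = rand(Normal(0, 1), 10_000)
us = cdf.(Normal(0, 1), xs)
println(extrema(us))   # values lie strictly inside (0, 1)
```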
Using the code example in my previous post, the joint distribution is constructed by `D1 = SklarDist(C1, (x, y))`, which, I believe, takes x and y as its inputs (i.e., `D1` is akin to H_\rho(x, y)). Therefore, when integrating over `D1`, I think the lower bounds should be those of (x, y), i.e., -\infty.
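As a sanity check on the bounds, one could numerically integrate both densities. The sketch below uses HCubature.jl (my choice, not from your code) and truncates the (x, y) integral to a large finite box, since the quadrature needs finite limits; both integrals should come out close to 1:

```julia
using Copulas, Distributions, HCubature

C1 = GaussianCopula([1.0 0.5; 0.5 1.0])             # placeholder parameters
D1 = SklarDist(C1, (Normal(0, 1), Normal(0, 1)))

# (x, y) parametrization: bounds are (-inf, inf), truncated to [-10, 10]^2 here
mass_xy, _ = hcubature(v -> pdf(D1, v), [-10.0, -10.0], [10.0, 10.0]; rtol = 1e-6)

# (Phi_x(x), Phi_y(y)) parametrization: copula density on (0, 1)^2
mass_uv, _ = hcubature(u -> pdf(C1, u), [0.0, 0.0], [1.0, 1.0]; rtol = 1e-6)

println((mass_xy, mass_uv))   # both approximately 1.0
```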
Does this make sense?
P.S. Now I think I understand your comment: you want the joint distribution to be represented by C_\rho(\Phi_x(x), \Phi_y(y)), where the arguments are (\Phi_x(x), \Phi_y(y)) on (0, 1). I am not sure this is a good idea for many empirical researchers. Often our data come in the form of x (e.g., the log of class scores), not \Phi_x(x). We want to pass in (x, y) and get the pdf, CDF, and likelihood value of the joint distribution (a rough sketch of what I mean is at the end of this post). This may be a case where good notation and practical use are in conflict.
(edit to add the last paragraph)
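For completeness, the (x, y)-facing workflow I have in mind looks roughly like this, assuming Copulas.jl exposes `pdf`, `cdf`, and `loglikelihood` on a `SklarDist` through the usual Distributions.jl interface (the parameters and data below are placeholders):

```julia
using Copulas, Distributions

C1 = GaussianCopula([1.0 0.5; 0.5 1.0])              # placeholder parameters
D1 = SklarDist(C1, (Normal(0, 1), Normal(0, 1)))

# Placeholder data already on the (x, y) scale, e.g. log class scores;
# each column is one observation
data = rand(D1, 500)

pdf(D1, data[:, 1])            # joint density at the first observation
cdf(D1, data[:, 1])            # joint CDF H_rho(x, y) at the first observation
loglikelihood(D1, data)        # log-likelihood of the whole sample
```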