I’m trying to write some code to “show how it works” when it comes to imposing high-quality prior information on a Bayesian regression problem. I’m running into implementation issues and could use some thoughts. @stevengj in particular might have some ideas because of ApproxFun, but any of the Turing crew or anyone else, please feel free to chip in.

Suppose you want to fit a function to a particular data set, and you have some theoretical reason to believe the function should have certain properties. For the moment, assume we’re fitting a function on the standard Chebyshev domain [-1,1], because it avoids some complications.

Here are some examples of the kinds of information we might have:

1. The function should be not far from, say, 0.0 near the left endpoint, and not far from 1.0 near the right endpoint.
2. The function is strictly positive.
3. The function has a non-negative derivative.
4. The function should be near 0.3 at 0.0.

I’m representing this function as a degree-5 Chebyshev series (six coefficients) using ApproxFun, and I can impose conditions (1) and (4) no problem with `Turing.@addlogprob!`:

```
coefs ~ MvNormal(zeros(6), 1000.0 * I(6))
f = Fun(Chebyshev(), coefs)
Turing.@addlogprob! logpdf(Gamma(3, 0.01/2), f(-1.0))   # function near -1 is near 0
Turing.@addlogprob! logpdf(Gamma(20, 1.0/19), f(1.0))   # function near 1 is near 1.0
Turing.@addlogprob! logpdf(Gamma(20, 0.3/19), f(0.0))   # function near 0 is near 0.3
```
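For context, here’s the shape of the whole model these snippets sit in. This is just a sketch: the data arguments `x` and `y` and the Normal noise model with scale 0.1 are my assumptions, not part of the original code. Note also that `Gamma` has support on the positive reals, so its `logpdf` is `-Inf` for negative arguments, which already (harshly) enforces positivity at those three points.

```
# Hypothetical full-model sketch; `x`, `y`, and the noise scale are assumptions.
using Turing, ApproxFun, Distributions, LinearAlgebra

@model function constrained_fit(x, y)
    coefs ~ MvNormal(zeros(6), 1000.0 * I(6))
    f = Fun(Chebyshev(), coefs)
    # Conditions (1) and (4): soft constraints at -1, 1, and 0
    Turing.@addlogprob! logpdf(Gamma(3, 0.01/2), f(-1.0))
    Turing.@addlogprob! logpdf(Gamma(20, 1.0/19), f(1.0))
    Turing.@addlogprob! logpdf(Gamma(20, 0.3/19), f(0.0))
    for i in eachindex(x)
        y[i] ~ Normal(f(x[i]), 0.1)  # assumed likelihood
    end
end
```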

Ideally, what I’d like to do for conditions (2) and (3) is to integrate a penalty function like:

```
lpp(x) = 10 * (logistic((x - 0.01) / 0.01) - 1)
```

This is a smooth step function: it goes very negative for negative arguments and is near 0 for positive ones. I’d then add its integral to the log probability, down-weighting any function that spends time below zero:

```
val, err = quadgk(x -> lpp(f(x)), -1, 1)  # quadgk returns (estimate, error estimate)
Turing.@addlogprob! val  # imposing condition (2) approximately
```
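Condition (3) could in principle be handled the same way by applying the penalty to the derivative instead: in ApproxFun, `f'` differentiates a `Fun` and returns another `Fun`. A sketch, with the same caveats as above (I haven’t verified this runs cleanly under Turing’s AD):

```
using QuadGK
using LogExpFunctions: logistic  # assumed source of `logistic`

lpp(x) = 10 * (logistic((x - 0.01) / 0.01) - 1)

fprime = f'                                # ApproxFun derivative of the series
val3, err3 = quadgk(x -> lpp(fprime(x)), -1, 1)
Turing.@addlogprob! val3                   # imposing condition (3) approximately
```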

But so far that doesn’t work right.

What’s the best way to impose these kinds of integral-based conditions in Turing? There are a number of issues:

- Interaction with Turing’s gradients (automatic differentiation)?
- Numerical stability?
- Type compatibility?
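One workaround I’d consider for the AD and type issues (an assumption on my part, not something I’ve confirmed against Turing’s backends): replace the adaptive `quadgk` call with a fixed quadrature rule whose nodes and weights are computed once, outside the model. The integral then becomes a plain weighted sum of penalty evaluations, which ForwardDiff/ReverseDiff should differentiate without the adaptive machinery getting in the way. The `FastGaussQuadrature` dependency is an assumed choice here:

```
using FastGaussQuadrature  # assumed dependency for nodes/weights

# Fixed 50-point Gauss-Legendre rule on [-1, 1], computed once outside the model
const nodes, weights = gausslegendre(50)

# Inside the model body, the penalty integral is just a dot product in disguise:
penalty = sum(w * lpp(f(t)) for (t, w) in zip(nodes, weights))
Turing.@addlogprob! penalty  # condition (2), approximated on a fixed grid
```

The trade-off is that a fixed rule gives no error estimate, so the number of nodes has to be chosen generously relative to how sharply `lpp` varies.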

I’ve tried a number of ideas and would be happy to give details of how they failed, but first I thought I’d see whether anyone knows the “canonical” way to use numerical integration results inside a Turing model?