That there are effects? For sure. Still very early days, but there’s evidence in both mice and humans.
Microbes consume and produce neurotransmitters that can act on the gut nervous system and can alter signaling in the brain through the vagus nerve.
In mouse models of ASD, there are clear causal connections to the gut microbiome (e.g., transferring the microbiomes of healthy mice can alleviate some of the ASD phenotype).
Many studies (including one I'm close to publishing) demonstrate that there are associations between gut microbes and neurocognitive development in humans (the causal direction of the effects is, of course, hard to nail down).
And a lot of other stuff. DM me if you want some references.
Now, how critical or important any of this is remains an open question. Mice without microbiomes still develop functioning brains (though they're not totally normal, and the differences could be due to nutritional and immune effects).
The EHTC is quite a loose collaboration and people are using their tools of choice. I am currently doing some diagnostic plots of the raw data with FITSIO.jl, DimensionalData.jl, and PlotlyJS.jl. All development is of course done using Pluto.jl. Some dynamic imaging experiments will follow later.
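For a sense of what those diagnostic plots involve, here is a minimal sketch of the read-and-plot loop; the file name and HDU layout are placeholders, not actual EHT data:

using FITSIO, DimensionalData, PlotlyJS

# Open a FITS file and read the primary image HDU
# ("obs.fits" is a made-up name; any FITS image works).
f = FITS("obs.fits")
img = read(f[1])          # 2-D array of pixel values
close(f)

# Wrap the raw array in a DimArray so the axes carry names.
da = DimArray(img, (X(1:size(img, 1)), Y(1:size(img, 2))))

# Quick-look heatmap of the raw data.
plot(heatmap(z = parent(da)))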
Just ramping up on Julia for now, but planning to use it for a CRM project.
As a side note, by day I work on cybersecurity products, and Julia would be an amazing fit for that space.
You could essentially use it for the server, the endpoint agent, and the ML models.
Optimal experimental design of mixed networks containing both distributed acoustic sensing (fiber-optic-cable-based seismometers) and traditional point sensors
Level set tomography of structures in the LA basin and underneath Yellowstone using a Kalman-filter-based inversion scheme
My current grand goal, though, is to set up a framework for deep-Gaussian-process-based regularization of geophysical inverse problems using the SPDE approach (like R-INLA) and use it to invert for fluid injection into faults by predicting the resulting seismicity. That requires GPs, adjoint-based derivatives of PDEs, function-space corrections for Hamiltonian Monte Carlo… hopefully I can get 20% of my goal done before my thesis deadline!
Sounds super interesting! Your work is related to some things I have been reading about recently, but I haven't had time to code them up. It would be nice if you could update us on your progress through blogs or posts here. If you want, I am also more than happy to help you review some code. I think you will need a PDE package (e.g. JuAFEM, JuliaFEM, FinEtools, Gridap, Trixi, etc.), a Bayesian inference package (e.g. Turing), a GP package (e.g. Stheno), and an AD package (e.g. Zygote) to do what you want; a rough sketch of how some of those pieces fit together is below.
Edit: I know a thing or two about JuAFEM, Turing, Stheno, and Zygote.
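To give a flavor of how the inference pieces combine, here is a minimal sketch of GP regression with hyperparameter inference in Turing. Note I'm using the AbstractGPs-style API rather than Stheno proper, and the kernel and priors are arbitrary choices, so treat it as a starting point, not a recipe:

using Turing, AbstractGPs

@model function gp_regression(x, y)
    # Arbitrary hyperparameter priors; tune for your problem.
    ℓ  ~ LogNormal(0.0, 1.0)   # kernel lengthscale
    σf ~ LogNormal(0.0, 1.0)   # signal scale
    σn ~ LogNormal(0.0, 1.0)   # observation noise
    k = σf^2 * with_lengthscale(SEKernel(), ℓ)
    f = GP(k)
    # Observe y under the GP's finite-dimensional marginal at x.
    y ~ f(x, σn^2 + 1e-6)
end

# Toy data and a short NUTS run, just to show the moving parts.
x = collect(range(0, 1; length = 20))
y = sin.(2π .* x) .+ 0.1 .* randn(20)
chain = sample(gp_regression(x, y), NUTS(), 200)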
I'm currently working on similar problems, but trying to count all the fish in the sea using sonar and trawl catches (really). I've done a little hacking trying to implement the GMRF/SPDE approach in Julia, but haven't got it working yet… I'd definitely be interested to hear updates as well.
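For what it's worth, here's the flavor of what I've been hacking on: a toy 1-D version of the GMRF/SPDE construction (à la Lindgren et al.), where a Matérn-like field gets a sparse precision matrix built from a finite-difference Laplacian. Grid size, κ, and the boundary handling are placeholder choices; a real implementation would use an FEM mesh:

using SparseArrays, LinearAlgebra

# 1-D finite-difference Laplacian on a regular grid (Dirichlet ends).
n = 200
h = 1.0 / n
Δ = spdiagm(-1 => ones(n - 1), 0 => -2 .* ones(n), 1 => ones(n - 1)) ./ h^2

# SPDE operator (κ² - Δ); κ sets the correlation range.
κ = 20.0
A = κ^2 * I - Δ

# Matérn-like GMRF precision Q = AᵀA (the α = 2 case), kept sparse.
Q = sparse(A' * A)

# Draw a sample x ~ N(0, Q⁻¹): if Q = LLᵀ then x = L⁻ᵀ z.
# (Dense factorization here for simplicity; real use keeps Q sparse.)
L = cholesky(Symmetric(Matrix(Q))).L
x = L' \ randn(n)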
@jbmuir I don't know if you saw this thread, but I just managed to get a (slightly hacky) sparse Stheno GP to work inside a Turing model, which may be of use to you… if you've got thoughts about how this should work in a less hacky way, there's an open issue where it would be great to hear your input!
I am manually pulling data using v2 of the Twitter API. I could post code if you're interested. The reason I moved over to Julia was that they just deprecated the API I was using in Python, and I wanted to try it in Julia.
module TwitTwo

using HTTP, JSON, Base64

# Holds the OAuth2 bearer token returned by Twitter.
struct Client
    token::String
end

# Parse an HTTP response body as JSON.
function jparse(resp::HTTP.Messages.Response)
    return JSON.parse(String(resp.body))
end

# Exchange the API key and secret for a bearer token and stash it in a
# module-level client (global, so the other functions can find it).
function createClient(api::String = "Your api key", secret::String = "Your api secret")
    headers = Dict(
        "User-Agent" => "Rod Biren",
        "Authorization" => "Basic $(base64encode("$api:$secret"))",
        "Content-Type" => "application/x-www-form-urlencoded;charset=UTF-8",
    )
    # HTTP.jl computes Content-Length from the body, so no need to hardcode it.
    r = HTTP.request("POST",
        "https://api.twitter.com/oauth2/token",
        headers,
        "grant_type=client_credentials",
        verbose = 1,
        cookies = true,
    )
    global client = Client(jparse(r)["access_token"])
end

# v1.1 endpoint: look up a user by screen name.
function getUserInfo(name::String)
    headers = Dict(
        "User-Agent" => "Rod Biren",
        "Authorization" => "Bearer $(client.token)",
    )
    r = HTTP.request("GET",
        "https://api.twitter.com/1.1/users/show.json?screen_name=$name",
        headers)
    return jparse(r)
end

function getUserCount(name::String)
    user = getUserInfo(name)
    return user["followers_count"]
end

# v2 endpoint: recent tweets from a user, with public engagement metrics.
function getTweets(name::String)
    headers = Dict("Authorization" => "Bearer $(client.token)")
    r = HTTP.request("GET",
        "https://api.twitter.com/2/tweets/search/recent?query=from:$name&tweet.fields=public_metrics",
        headers)
    return jparse(r)
end

# Sum replies, likes, quotes, and retweets over a search result.
function getEngagement(tweets)
    engagement = 0
    haskey(tweets, "data") || return engagement  # no tweets in the window
    for tweet in tweets["data"]
        m = tweet["public_metrics"]
        engagement += m["reply_count"] + m["like_count"] + m["quote_count"] + m["retweet_count"]
    end
    return engagement
end

end # module
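Usage looks something like this (with your own credentials; the screen name is a placeholder):

TwitTwo.createClient("YOUR_API_KEY", "YOUR_API_SECRET")
TwitTwo.getUserCount("some_handle")                       # follower count
TwitTwo.getEngagement(TwitTwo.getTweets("some_handle"))   # recent engagement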
Also @mohamed82008 - I've got some prototypes working using JuAFEM, but for my first problem I'm mainly interested in a time-series application, so a simple finite-difference stencil works well. My main bottleneck at the moment is making the forward PDE solve of the diffusion problem fast enough for sampling; I had it discretised with JuAFEM and solved via method of lines with DifferentialEquations, but putting it on the GPU caused some problems when I last tried a couple of months ago.
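For context, the finite-difference version of that forward solve is roughly this shape; everything here (grid size, diffusivity, initial condition, solver choice) is a placeholder, not the actual model:

using DifferentialEquations, LinearAlgebra, SparseArrays

# 1-D diffusion ∂u/∂t = D ∂²u/∂x², discretized by method of lines.
n = 100
h = 1.0 / (n + 1)
Lap = spdiagm(-1 => ones(n - 1), 0 => -2 .* ones(n), 1 => ones(n - 1)) ./ h^2

D = 1e-2
f!(du, u, p, t) = mul!(du, Lap, u, D, 0.0)  # du .= D * Lap * u, in place

x  = range(h, 1 - h; length = n)
u0 = exp.(-100 .* (x .- 0.5) .^ 2)          # Gaussian bump initial condition
prob = ODEProblem(f!, u0, (0.0, 1.0))
sol  = solve(prob, Rodas5())                # stiff solver suits diffusion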
@ElOceanografo - I did see that thread, and I am very interested in that going forward. There are heaps of problems in geophysics that have nice solutions when posed as some kind of GP regression, but weirdly, awareness of GPs is pretty low even though they originated in a closely allied field. Potentially that is due to the difficulty of scaling to big data, so it is very cool to see a sparse model working in a fun language like Julia.
I use it mostly for climate data processing as needed, mainly reanalysis and satellite data. Right now I only process a limited set of datasets, but that is likely to change.
I also use it to process climate model output from Isca (Uni Exeter) and SAM (Stony Brook). I eventually intend to use it to analyze data from CESM, E3SM, and CMIP as well.
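Most of that boils down to slicing NetCDF files; the core pattern looks something like this (I'm assuming NCDatasets.jl here, and the file and variable names are placeholders):

using NCDatasets, Statistics

# Open a (placeholder) model-output file and pull one variable.
ds = NCDataset("output.nc")
T  = ds["temp"][:, :, :]        # e.g. dimensions (lon, lat, time)

# Time-mean field and an area-mean time series.
Tmean   = dropdims(mean(T; dims = 3); dims = 3)
Tseries = vec(mean(T; dims = (1, 2)))

close(ds)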
And also, I use Julia for all my homework now hehe.