My research centres on applying techniques from machine learning and computational statistics to data from animal behaviour experiments. Examples of such techniques include Artificial Neural Networks, Gaussian Processes and Hidden Markov Models. These might be used to understand how fish follow each other, how pigeons navigate, or how prawns interact with their neighbours.
Typically, when we come to write up such analyses in a paper, we refer the reader to one to three standard texts where they could, in principle, find out all they need to know to reproduce our work. The specific details that distinguish our use of whichever tool we employ are noted only in a short paragraph in the small print.
When our readers (gambling on the plural), or even my colleagues, refer to my work, they usually say something along the lines of 'that Bayesian stuff you do', with the suggestion that such things are practically incomprehensible, perhaps somehow almost like witchcraft.
In fact, far from being a maze of complex mathematics that only committed disciples can penetrate, my use of these methods is predicated most importantly on knowing what I can afford not to understand in detail. I am very much an applications man, taking established tools off the shelf, tweaking them a little and employing them for new purposes.
In this blog I'd like to demystify this process a little: explaining what I know about different methods, building up from some very basic ideas, and above all showing how powerful a little knowledge can be. Over a series of posts I will try to show how analyses we have published actually work, explaining with a minimum of mathematics what is really going on. So when I next say of some analysis 'this bit is fairly trivial', maybe someone will agree.