# Capture-Recapture Models in Python with PyMC3

Recently, a project at work has led me to learn a bit about capture-recapture models for estimating the size and dynamics of populations that cannot be fully enumerated. These models often arise in ecological statistics when estimating the size, birth, and survival rates of animal populations in the wild. This post gives examples of implementing three capture-recapture models in Python with PyMC3 and is intended primarily as a reference for my future self, though I hope it may serve as a useful introduction for others as well.

We will implement three Bayesian capture-recapture models:

- the Lincoln-Petersen model of abundance,
- the Cormack-Jolly-Seber model of survival, and
- the Jolly-Seber model of abundance and survival.

`%matplotlib inline`

`import logging`

```
from matplotlib import pyplot as plt
from matplotlib.ticker import StrMethodFormatter
import numpy as np
import pymc3 as pm
from pymc3.distributions.dist_math import binomln, bound, factln
import scipy as sp
import seaborn as sns
import theano
from theano import tensor as tt
```

```
sns.set()

PCT_FORMATTER = StrMethodFormatter('{x:.1%}')
```

```
SEED = 518302 # from random.org, for reproducibility

np.random.seed(SEED)
```

```
# keep theano from complaining about compile locks
(logging.getLogger('theano.gof.compilelock')
        .setLevel(logging.CRITICAL))

# keep theano from warning about default rounding mode changes
theano.config.warn.round = False
```

## The Lincoln-Petersen model

The simplest model of abundance, that is, the size of a population, is the Lincoln-Petersen model. While this model is a bit simple for most practical applications, it will introduce some useful modeling concepts and computational techniques.

The idea of the Lincoln-Petersen model is to visit the observation site twice to capture individuals from the population of interest. The individuals captured during the first visit are marked (often with tags, radio collars, microchips, etc.) and then released. The number of individuals captured, marked, and released is recorded as \(n_1\). On the second visit, the number of captured individuals is recorded as \(n_2\). If enough individuals are captured on the second visit, chances are quite high that several of them will have been marked on the first visit. The number of marked individuals recaptured on the second visit is recorded as \(n_{1, 2}\).

The Lincoln-Petersen model assumes that:

1. each individual has an equal probability of being captured on both visits (regardless of whether or not it was marked),
2. no marks fall off or become illegible, and
3. the population is closed, that is, no individuals are born, die, enter, or leave the site between visits.

The third assumption is quite restrictive, and will be relaxed in the two subsequent models. The first two assumptions can be relaxed in various ways, but we will not do so in this post. First we derive a simple analytic estimator for the total population size given \(n_1\), \(n_2\), and \(n_{1, 2}\), then we fit a Bayesian Lincoln-Petersen model using PyMC3 to set the stage for the (Cormack-)Jolly-Seber models.

Let \(N\) denote the size of the unknown total population, and let \(p\) denote the capture probability. We have that

\[ \begin{align*} n_1, n_2\ |\ N, p & \sim \textrm{Bin}(N, p) \\ n_{1, 2}\ |\ n_1, p & \sim \textrm{Bin}(n_1, p). \end{align*} \]

Therefore \(\frac{n_2}{N}\) and \(\frac{n_{1, 2}}{n_1}\) are unbiased estimates of \(p\). The Lincoln-Petersen estimator is derived by equating these estimators

\[\frac{n_2}{\hat{N}} = \frac{n_{1, 2}}{n_1}\]

and solving for \(\hat{N}\):

\[\hat{N} = \frac{n_1 n_2}{n_{1, 2}}.\]
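The estimator is simple enough to compute directly. As a quick illustration with hypothetical counts (the function name and numbers here are ours, not from the data below):

```
def lincoln_petersen(n1, n2, n12):
    """Lincoln-Petersen estimate of the total population size."""
    return n1 * n2 / n12
```

For example, if 100 individuals are marked on the first visit, 90 are captured on the second, and 9 of those carry marks, the estimate is `lincoln_petersen(100, 90, 9) == 1000.0`.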

We now simulate a data set where \(N = 1000\) and the capture probability is \(p = 0.1\).

```
N_LP = 1000
P_LP = 0.1

x_lp = sp.stats.bernoulli.rvs(P_LP, size=(2, N_LP))
```

The rows of `x_lp` correspond to site visits and the columns to individuals. The entry `x_lp[i, j]` is one if the \(j\)-th individual was captured on the \(i\)-th site visit, and zero otherwise.

`x_lp`

```
array([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 1, 0]])
```

We construct \(n_1\), \(n_2\), and \(n_{1, 2}\) from `x_lp`.

```
n1, n2 = x_lp.sum(axis=1)
n12 = x_lp.prod(axis=0).sum()
```

`n1, n2, n12`

`(109, 95, 10)`

The Lincoln-Petersen estimate of \(N\) is therefore

```
N_lp = n1 * n2 / n12

N_lp
```

`1035.5`

We now give a Bayesian formulation of the Lincoln-Petersen model. We use the priors

\[ \begin{align*} p & \sim U(0, 1) \\ \pi(N) & = 1 \textrm{ for } N \geq n_1 + n_2 - n_{1, 2}. \end{align*} \]

Note that the prior on \(N\) is improper.

```
with pm.Model() as lp_model:
    p = pm.Uniform('p', 0., 1.)
    N_ = pm.Bound(pm.Flat, lower=n1 + n2 - n12)('N')
```

We now implement the likelihoods of the data given above during the derivation of the Lincoln-Petersen estimator.

```
with lp_model:
    n1_obs = pm.Binomial('n1_obs', N_, p, observed=n1)
    n2_obs = pm.Binomial('n2_obs', N_, p, observed=n2)
    n12_obs = pm.Binomial('n12_obs', n1, p, observed=n12)
```

Now that the model is fully specified, we sample from its posterior distribution.

```
NJOBS = 3

SAMPLE_KWARGS = {
    'draws': 1000,
    'njobs': NJOBS,
    'random_seed': [SEED + i for i in range(NJOBS)],
    'nuts_kwargs': {'target_accept': 0.9}
}
```

```
with lp_model:
    lp_trace = pm.sample(**SAMPLE_KWARGS)
```

```
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (3 chains in 3 jobs)
NUTS: [N_lowerbound__, p_interval__]
100%|██████████| 1500/1500 [00:06<00:00, 230.30it/s]
The number of effective samples is smaller than 25% for some parameters.
```

First we examine a few sampling diagnostics. The Bayesian fraction of missing information (BFMI) and energy plot give no cause for concern.

`pm.bfmi(lp_trace)`

`1.0584389661341544`

`pm.energyplot(lp_trace);`

The Gelman-Rubin statistics are close to one, indicating convergence.

`max(np.max(gr_stats) for gr_stats in pm.gelman_rubin(lp_trace).values())`

`1.0040982739011743`

Since there are no apparent sampling problems, we examine the estimate of the population size.

```
pm.plot_posterior(
    lp_trace, varnames=['N'],
    ref_val=N_LP,
    lw=0., alpha=0.75
);
```

The true population size is well within the 95% credible interval. The posterior estimate of \(N\) could be made more precise by replacing the improper uninformative prior on \(N\), which we have used here out of convenience, with a prior that imposes a reasonable upper bound.

## The Cormack-Jolly-Seber model

The Cormack-Jolly-Seber model estimates the survival dynamics of individuals in the population by relaxing the third assumption of the Lincoln-Petersen model, that the population is closed. Note that here “death” does not necessarily correspond to the individual’s actual death, as it also includes individuals that leave the observation area during the study. Despite this subtlety, we will use the convenient terminology of “alive” and “dead” throughout.

For the Cormack-Jolly-Seber and Jolly-Seber models, we follow the notation of *Analysis of Capture-Recapture Data* by McCrea and Morgan. Additionally, we will use a cormorant data set from the book to illustrate these two models. This data set involves eleven site visits where individuals were given individualized identifying marks. The data are defined below in the form of an \(M\)-array.

```
T = 10

R = np.array([30, 157, 174, 298, 470, 421, 413, 514, 430, 181])

M = np.zeros((T, T + 1))
M[0, 1:] = [10, 4, 2, 2, 0, 0, 0, 0, 0, 0]
M[1, 2:] = [42, 12, 16, 1, 0, 1, 1, 1, 0]
M[2, 3:] = [85, 22, 5, 5, 2, 1, 0, 1]
M[3, 4:] = [139, 39, 10, 10, 4, 2, 0]
M[4, 5:] = [175, 60, 22, 8, 4, 2]
M[5, 6:] = [159, 46, 16, 5, 2]
M[6, 7:] = [191, 39, 4, 8]
M[7, 8:] = [188, 19, 23]
M[8, 9:] = [101, 55]
M[9, 10] = 84
```

Here `T` indicates the number of revisits to the site, so there were \(T + 1 = 11\) total visits. The entries of `R` indicate the number of animals captured and released on each visit.

`R`

`array([ 30, 157, 174, 298, 470, 421, 413, 514, 430, 181])`

The entry `M[i, j]` indicates how many individuals captured on visit \(i\) were first recaptured on visit \(j\). The diagonal of `M` is entirely zero, since it is not possible to recapture an individual on the same visit as it was released.

`M[:4, :4]`

```
array([[ 0., 10., 4., 2.],
[ 0., 0., 42., 12.],
[ 0., 0., 0., 85.],
[ 0., 0., 0., 0.]])
```

Of the thirty individuals marked and released on the first visit, ten were recaptured on the second visit, four were recaptured on the third visit, and so on.

Capture-recapture data is often given in the form of encounter histories, for example

\[1001000100,\]

which indicates that the individual was first captured on the first visit and recaptured on the fourth and eighth visits. It is straightforward to convert a series of encounter histories to an \(M\)-array. We will discuss encounter histories again at the end of this post.
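As a sketch of that conversion (the function below is ours, not from McCrea and Morgan), each consecutive pair of captures in a history contributes one release and one first recapture:

```
import numpy as np

def histories_to_m_array(histories):
    """Convert binary encounter histories (rows are individuals,
    columns are visits) to an M-array, where entry [i, j] counts the
    individuals released on visit i and first recaptured on visit j."""
    histories = np.asarray(histories)
    n_visits = histories.shape[1]
    M = np.zeros((n_visits - 1, n_visits), dtype=int)

    for h in histories:
        captures = np.flatnonzero(h)
        # each consecutive pair of captures is a release
        # followed by a first recapture
        for i, j in zip(captures, captures[1:]):
            M[i, j] += 1

    return M
```

Applied to the single history above, this yields an \(M\)-array with ones at `[0, 3]` and `[3, 7]` and zeros elsewhere.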

The parameters of the Cormack-Jolly-Seber model are \(p\), the capture probability, and \(\phi_i\), the probability that an individual that was alive during the \(i\)-th visit is still alive during the \((i + 1)\)-th visit. The capture probability can vary over time in the Cormack-Jolly-Seber model, but we use a constant capture probability here for simplicity.

We again place a uniform prior on \(p\).

```
with pm.Model() as cjs_model:
    p = pm.Uniform('p', 0., 1.)
```

We also place a uniform prior on \(\phi_i\).

```
with cjs_model:
    ϕ = pm.Uniform('ϕ', 0., 1., shape=T)
```

If \(\nu_{i, j}\) is the probability associated with `M[i, j]`, then

\[ \begin{align*} \nu_{i, j} & = P(\textrm{individual that was alive at visit } i \textrm{ is alive at visit } j) \\ & \times P(\textrm{individual was not captured on visits } i + 1, \ldots, j - 1) \\ & \times P(\textrm{individual is captured on visit } j). \end{align*} \]

From our parameter definitions,

\[P(\textrm{individual that was alive at visit } i \textrm{ is alive at visit } j) = \prod_{k = i}^{j - 1} \phi_k,\]

```
def fill_lower_diag_ones(x):
    return tt.triu(x) + tt.tril(tt.ones_like(x), k=-1)
```

```
with cjs_model:
    p_alive = tt.triu(
        tt.cumprod(
            fill_lower_diag_ones(np.ones_like(M[:, 1:]) * ϕ),
            axis=1
        )
    )
```
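The effect of this `cumprod`/`triu` trick is easier to see in plain numpy. A minimal check with three made-up survival probabilities (not the cormorant estimates):

```
import numpy as np

ϕ_demo = np.array([0.9, 0.8, 0.7])

A = np.ones((3, 3)) * ϕ_demo                     # each row repeats ϕ
A = np.triu(A) + np.tril(np.ones_like(A), k=-1)  # ones below the diagonal
p_alive_demo = np.triu(np.cumprod(A, axis=1))
# row i now holds the running products of survival probabilities
# starting from release occasion i, e.g. p_alive_demo[1, 2] = ϕ_2 ϕ_3
```

The ones filled in below the diagonal keep the cumulative products from being zeroed out before they reach the upper triangle.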

\[P(\textrm{individual was not captured at visits } i + 1, \ldots, j - 1) = (1 - p)^{j - i - 1},\]

```
i = np.arange(T)[:, np.newaxis]
j = np.arange(T + 1)[np.newaxis]

not_cap_visits = np.clip(j - i - 1, 0, np.inf)[:, 1:]
```
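On a small example, say three revisits, this construction gives the matrix of exponents directly:

```
import numpy as np

T_demo = 3  # a toy value, not the cormorant T

i_demo = np.arange(T_demo)[:, np.newaxis]
j_demo = np.arange(T_demo + 1)[np.newaxis]

# entry [i, j] counts the visits between release i and recapture j + 1
# on which an individual was alive but went uncaptured
not_cap_demo = np.clip(j_demo - i_demo - 1, 0, np.inf)[:, 1:]
```

So, for instance, an individual released on the first visit and recaptured on the fourth went uncaptured on the two visits in between.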

```
with cjs_model:
    p_not_cap = tt.triu((1 - p)**not_cap_visits)
```

and

\[P(\textrm{individual is captured on visit } j) = p.\]

```
with cjs_model:
    ν = p_alive * p_not_cap * p
```

The likelihood of the observed recaptures is then

`triu_i, triu_j = np.triu_indices_from(M[:, 1:])`

```
with cjs_model:
    recap_obs = pm.Binomial(
        'recap_obs',
        M[:, 1:][triu_i, triu_j],
        ν[triu_i, triu_j],
        observed=M[:, 1:][triu_i, triu_j]
    )
```

Finally, some individuals released on each occasion are not recaptured again,

`R - M.sum(axis=1)`

`array([ 12., 83., 53., 94., 199., 193., 171., 284., 274., 97.])`

The probability of this event is

\[\chi_i = P(\textrm{released on visit } i \textrm{ and not recaptured again}) = 1 - \sum_{j = i + 1}^T \nu_{i, j}.\]

```
with cjs_model:
    χ = 1 - ν.sum(axis=1)
```

The likelihood of the individuals that were not recaptured again is

```
with cjs_model:
    no_recap_obs = pm.Binomial(
        'no_recap_obs',
        R - M.sum(axis=1), χ,
        observed=R - M.sum(axis=1)
    )
```

Now that the model is fully specified, we sample from its posterior distribution.

```
with cjs_model:
    cjs_trace = pm.sample(**SAMPLE_KWARGS)
```

```
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (3 chains in 3 jobs)
NUTS: [ϕ_interval__, p_interval__]
100%|██████████| 1500/1500 [00:05<00:00, 272.61it/s]
```

Again, the BFMI and energy plot are reasonable.

`pm.bfmi(cjs_trace)`

`0.97973424667749842`

`pm.energyplot(cjs_trace);`

The Gelman-Rubin statistics also indicate convergence.

`max(np.max(gr_stats) for gr_stats in pm.gelman_rubin(cjs_trace).values())`

`1.0009026695355843`

McCrea and Morgan’s book includes a table of maximum likelihood estimates for \(\phi_i\) and \(p\). We verify that our Bayesian estimates are close to these.

`ϕ_mle = np.array([0.8, 0.56, 0.83, 0.86, 0.73, 0.69, 0.81, 0.64, 0.46, 0.99])`

```
fig, ax = plt.subplots(figsize=(8, 6))

t_plot = np.arange(T) + 1
low, high = np.percentile(cjs_trace['ϕ'], [5, 95], axis=0)

ax.fill_between(
    t_plot, low, high,
    alpha=0.5, label="90% interval"
);
ax.plot(
    t_plot, cjs_trace['ϕ'].mean(axis=0),
    label="Posterior expected value"
);
ax.scatter(
    t_plot, ϕ_mle,
    zorder=5,
    c='k', label="Maximum likelihood estimate"
);

ax.set_xlim(1, T);
ax.set_xlabel("$t$");

ax.yaxis.set_major_formatter(PCT_FORMATTER);
ax.set_ylabel(r"$\phi_t$");

ax.legend(loc=2);
```

`p_mle = 0.51`

```
ax = pm.plot_posterior(
    cjs_trace, varnames=['p'],
    ref_val=p_mle,
    lw=0., alpha=0.75
)

ax.set_title("$p$");
```

## The Jolly-Seber Model

The Jolly-Seber model is an extension of the Cormack-Jolly-Seber model (the fact that the extension is named after fewer people is a bit counterintuitive) that estimates abundance and birth dynamics, in addition to the survival dynamics estimated by the Cormack-Jolly-Seber model. As with the Cormack-Jolly-Seber model where “death” included leaving the site, “birth” includes not just the actual birth of new individuals, but individuals that arrive at the site during the study from elsewhere. Again, despite this subtlety, we will use the convenient terminology of “birth” and “born” throughout.

In order to estimate abundance and birth dynamics, the Jolly-Seber model adds likelihood terms for the first time an individual is captured to the recapture likelihood of the Cormack-Jolly-Seber model. We use the same uniform priors on \(p\) and \(\phi_i\) as in the Cormack-Jolly-Seber model.

```
with pm.Model() as js_model:
    p = pm.Uniform('p', 0., 1.)
    ϕ = pm.Uniform('ϕ', 0., 1., shape=T)
```

As with the Lincoln-Petersen model, the Jolly-Seber model estimates the size of the population, including all individuals ever alive during the study period, \(N\). We use the Schwarz-Arnason formulation of the Jolly-Seber model, where each individual has probability \(\beta_i\) of being born into the population between visits \(i\) and \(i + 1\). We place a \(\operatorname{Dirichlet}(1, \ldots, 1)\) prior on these parameters.

```
with js_model:
    β = pm.Dirichlet('β', np.ones(T), shape=T)
```

Let \(\Psi_i\) denote the probability that a given individual is alive on visit \(i\) and has not yet been captured before visit \(i\). Then \(\Psi_1 = \beta_0\), since no individuals can have been captured before the first visit, and

\[ \begin{align*} \Psi_{i + 1} & = P(\textrm{the individual was alive and unmarked during visit } i \textrm{ and survived to visit } i + 1) \\ & + P(\textrm{the individual was born between visits } i \textrm{ and } i + 1) \\ & = \Psi_i (1 - p) \phi_i + \beta_i. \end{align*} \]
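A direct implementation of this recursion is a useful sanity check (a sketch with zero-based arrays, so `phi[0]` stands for \(\phi_1\) and `beta[0]` for \(\beta_0\); the values in the example are made up):

```
import numpy as np

def psi_from_recursion(beta, phi, p):
    """Ψ_1 = β_0; Ψ_{i+1} = Ψ_i (1 - p) ϕ_i + β_i."""
    psi = [beta[0]]
    for phi_i, beta_i in zip(phi, beta[1:]):
        psi.append(psi[-1] * (1 - p) * phi_i + beta_i)
    return np.array(psi)
```

For example, `psi_from_recursion([0.5, 0.3, 0.2], [0.9, 0.8], 0.4)` gives \(\Psi_2 = 0.5 \cdot 0.6 \cdot 0.9 + 0.3 = 0.57\) as its second entry.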

After writing out the first few terms, we see that this recursion has the closed-form solution

\[\Psi_{i + 1} = \sum_{k = 0}^i \left(\beta_k (1 - p)^{i - k} \prod_{\ell = 1}^{i - k} \phi_{\ell} \right).\]

```
never_cap_surv_ix = sp.linalg.circulant(np.arange(T))

with js_model:
    p_never_cap_surv = tt.concatenate((
        [1], tt.cumprod((1 - p) * ϕ)[:-1]
    ))

    Ψ = tt.tril(
        β * p_never_cap_surv[never_cap_surv_ix]
    ).sum(axis=1)
```

The probability that an unmarked individual that is alive at visit \(i\) is captured on visit \(i\) is then \(\Psi_i p\). The probability that an individual is alive at the end of the study period and never captured is \[1 - \sum_{i = 1}^T \Psi_i p.\]

Therefore, the likelihood of the observed first captures is a \((T + 1)\)-dimensional multinomial, where the first \(T\) probabilities are \(\Psi_1 p, \ldots, \Psi_T p\), and the corresponding first \(T\) counts are the observed number of unmarked individuals captured at each visit, \(u_i\). The final probability is

\[1 - \sum_{i = 1}^T \Psi_i p\]

and corresponds to the unobserved number of individuals never captured. Since PyMC3 does not implement such an “incomplete multinomial” distribution, we give a minimal implementation here.

```
class IncompleteMultinomial(pm.Discrete):
    def __init__(self, n, p, *args, **kwargs):
        """
        n is the total frequency
        p is the vector of probabilities of the observed components
        """
        super(IncompleteMultinomial, self).__init__(*args, **kwargs)

        self.n = n
        self.p = p

        self.mean = n * p
        self.mode = tt.cast(tt.round(n * p), 'int32')

    def logp(self, x):
        """
        x is the vector of frequencies of all but the last component
        """
        n = self.n
        p = self.p

        x_last = n - x.sum()

        return bound(
            factln(n) + tt.sum(x * tt.log(p) - factln(x)) \
                + x_last * tt.log(1 - p.sum()) - factln(x_last),
            tt.all(x >= 0), tt.all(x <= n), tt.sum(x) <= n,
            n >= 0)
```
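This log-probability is just that of a complete multinomial whose final cell is filled in implicitly. A numpy/scipy check of that identity with arbitrary made-up counts (PyMC3's `factln(x)` corresponds to `gammaln(x + 1)`):

```
import numpy as np
from scipy import stats
from scipy.special import gammaln

def incomplete_multinomial_logp_np(n, p, x):
    """numpy version of the density above: the unobserved remainder
    n - x.sum() falls in a final cell with probability 1 - p.sum()."""
    x_last = n - x.sum()
    return (gammaln(n + 1)
            + np.sum(x * np.log(p) - gammaln(x + 1))
            + x_last * np.log(1 - p.sum()) - gammaln(x_last + 1))

n_demo = 10
p_demo = np.array([0.2, 0.3])
x_demo = np.array([2, 3])

# the same density, written as a complete multinomial over three cells
full_logp = stats.multinomial.logpmf([2, 3, 5], n_demo, [0.2, 0.3, 0.5])
```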

As in the Lincoln-Petersen model, we place an improper flat prior (with the appropriate lower bound) on \(N\).

`u = np.concatenate(([R[0]], R[1:] - M[:, 1:].sum(axis=0)[:-1]))`

```
with js_model:
    N = pm.Bound(pm.Flat, lower=u.sum())('N')
```

The likelihood of the observed first captures is therefore

```
with js_model:
    unmarked_obs = IncompleteMultinomial(
        'unmarked_obs', N, Ψ * p,
        observed=u
    )
```

The recapture likelihood for the Jolly-Seber model is the same as for the Cormack-Jolly-Seber model.

```
with js_model:
    p_alive = tt.triu(
        tt.cumprod(
            fill_lower_diag_ones(np.ones_like(M[:, 1:]) * ϕ),
            axis=1
        )
    )
    p_not_cap = tt.triu((1 - p)**not_cap_visits)
    ν = p_alive * p_not_cap * p

    recap_obs = pm.Binomial(
        'recap_obs',
        M[:, 1:][triu_i, triu_j], ν[triu_i, triu_j],
        observed=M[:, 1:][triu_i, triu_j]
    )
```

```
with js_model:
    χ = 1 - ν.sum(axis=1)

    no_recap_obs = pm.Binomial(
        'no_recap_obs',
        R - M.sum(axis=1), χ,
        observed=R - M.sum(axis=1)
    )
```

Again we sample from the posterior distribution of this model.

```
with js_model:
    js_trace = pm.sample(**SAMPLE_KWARGS)
```

```
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Multiprocess sampling (3 chains in 3 jobs)
NUTS: [N_lowerbound__, β_stickbreaking__, ϕ_interval__, p_interval__]
100%|██████████| 1500/1500 [00:29<00:00, 50.24it/s]
```

Again, the BFMI and energy plot are reasonable.

`pm.bfmi(js_trace)`

`0.93076073875335852`

`pm.energyplot(js_trace);`

The Gelman-Rubin statistics also indicate convergence.

`max(np.max(gr_stats) for gr_stats in pm.gelman_rubin(js_trace).values())`

`1.0051087141503718`

The posterior expected survival rates are, somewhat surprisingly, still quite similar to the maximum likelihood estimates under the Cormack-Jolly-Seber model.

```
fig, ax = plt.subplots(figsize=(8, 6))

low, high = np.percentile(js_trace['ϕ'], [5, 95], axis=0)

ax.fill_between(
    t_plot, low, high,
    alpha=0.5, label="90% interval"
);
ax.plot(
    t_plot, js_trace['ϕ'].mean(axis=0),
    label="Posterior expected value"
);
ax.scatter(
    t_plot, ϕ_mle,
    zorder=5,
    c='k', label="Maximum likelihood estimate (CJS)"
);

ax.set_xlim(1, T);
ax.set_xlabel("$t$");

ax.yaxis.set_major_formatter(PCT_FORMATTER);
ax.set_ylabel(r"$\phi_t$");

ax.legend(loc="upper center");
```

The following plot shows the estimated birth dynamics.

```
fig, ax = plt.subplots(figsize=(8, 6))

low, high = np.percentile(js_trace['β'], [5, 95], axis=0)

ax.fill_between(
    t_plot - 1, low, high,
    alpha=0.5, label="90% interval"
);
ax.plot(
    t_plot - 1, js_trace['β'].mean(axis=0),
    label="Posterior expected value"
);

ax.set_xlim(0, T - 1);
ax.set_xlabel("$t$");

ax.yaxis.set_major_formatter(PCT_FORMATTER);
ax.set_ylabel(r"$\beta_t$");

ax.legend(loc=2);
```

The posterior expected population size is about 30% larger than the number of distinct individuals marked.

`js_trace['N'].mean() / u.sum()`

`1.2951923918464066`

```
pm.plot_posterior(
    js_trace, varnames=['N'],
    lw=0., alpha=0.75
);
```

Now that we have estimated these three models, we return briefly to the topic of \(M\)-arrays versus encounter histories. While \(M\)-arrays are a convenient summary of encounter histories, they do not lend themselves as readily as encounter histories to common extensions of these models that include individual random effects, trap-dependent recapture, etc. Two possible approaches to including such effects are:

- Use likelihoods for the (Cormack-)Jolly-Seber models based on encounter histories, which are a bit more complex than those based on \(M\)-arrays.
- Individual \(M\)-arrays: transform each individual’s encounter history into an \(M\)-array and stack these into a three-dimensional array of \(M\)-arrays.

We may explore one (or both) of these approaches in a future post.

Thanks to Eric Heydenberk for his feedback on an early draft of this post.

This post is available as a Jupyter notebook here.