Solution: Score functions and observed information matrix

Example

Here we solve problem 4.1 in ABG, using the expression for the partial likelihood specified in \((4.7)\).

Solution

Assuming that the counting processes \(N_i(t)\), for \(i = 1,2,\ldots,n\), have intensity processes of the form \(\lambda_i(t) = Y_i(t)\alpha_0(t)\exp(\beta^T x_i)\), where the \(Y_i(t)\) are at risk indicators and the \(x_i = (x_{i1},...,x_{ip})^T\) are fixed covariates, and with \(r(\beta ,x_i) = \exp(\beta^T x_i)\) we get that the partial likelihood in \((4.7)\) becomes: \[L(\beta) = \prod_{T_j} \frac{\exp(\beta^T x_{i_j})}{\sum_{l \in R_j}\exp(\beta^T x_l)},\]

where \(i_j\) is the index of the individual with event time \(T_j\) and \(R_j = \{l: T_l \geq T_j\}\) are the indices of individuals still at risk right before \(T_j\). Note that \(\beta^T x_l = \beta_1 x_{l1} + \ldots + \beta_k x_{lk} + \ldots + \beta_p x_{lp}\), so that \(\frac{\partial \beta^T x_l}{\partial \beta_k} = x_{lk}\).

To simplify some later calculations we denote \(\theta_l := \exp(\beta^T x_l)\), and have \(\frac{\partial \theta_l}{\partial \beta_k} = \frac{\partial \exp(\beta^T x_l)}{\partial \beta_k} = \exp(\beta^T x_l)\cdot\frac{\partial \beta^T x_l}{\partial \beta_k} = \theta_l x_{lk}\).
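To make these objects concrete, here is a small numerical sketch (Python with NumPy; the toy data and function name are my own, not from ABG) that evaluates \(\log L(\beta)\) for right-censored data, using the risk sets \(R_j = \{l: T_l \geq T_j\}\) directly:

```python
import numpy as np

def log_partial_likelihood(beta, x, times, events):
    """Log of the Cox partial likelihood L(beta) from (4.7).

    x      : (n, p) covariate matrix, rows x_i
    times  : (n,) observation times T_i
    events : (n,) 1 if an event was observed at times[i], 0 if censored
    """
    eta = x @ beta                      # beta^T x_l for every l
    logL = 0.0
    for j in np.flatnonzero(events):    # sum over event times T_j
        at_risk = times >= times[j]     # risk set R_j = {l : T_l >= T_j}
        logL += eta[j] - np.log(np.sum(np.exp(eta[at_risk])))
    return logL

# Toy data (hypothetical): n = 4 individuals, p = 2 covariates
x = np.array([[1.0, 0.2], [0.0, 1.5], [1.0, -0.3], [0.0, 0.7]])
times = np.array([2.0, 5.0, 1.0, 4.0])
events = np.array([1, 0, 1, 1])
beta = np.array([0.5, -0.2])
print(log_partial_likelihood(beta, x, times, events))
```

At \(\beta = 0\) every \(\theta_l = 1\), so each factor of \(L(\beta)\) reduces to one over the size of the risk set, which gives a quick way to check the implementation by hand.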

Solution to (a)

We will now derive an expression for the score vector, \(U(\beta) = \frac{\partial \log L(\beta)}{\partial \beta}\), for the partial likelihood. The \(k\)th element of this vector, with \(k \in \{1,2,\ldots,p\}\), is:

\begin{align*} U(\beta)_k &= \frac{\partial \log L(\beta)}{\partial \beta_k} \\ &= \frac{\partial}{\partial \beta_k} \log \left[ \prod_{T_j} \frac{\exp(\beta^T x_{i_j})}{\sum_{l \in R_j}\exp(\beta^T x_l)} \right] \\ &= \sum_{T_j} \left\{\frac{\partial}{\partial \beta_k}\beta^T x_{i_j} - \frac{\partial}{\partial \beta_k}\log \left[ \sum_{l \in R_j}\exp(\beta^T x_l)\right] \right\} \\ &= \sum_{T_j} \left\{x_{i_j k} - \frac{\partial}{\partial \beta_k}\log \left[ \sum_{l \in R_j}\theta_l\right] \right\} \\ &= \sum_{T_j} \left\{x_{i_j k} - \frac{\frac{\partial}{\partial \beta_k}\left[ \sum_{l \in R_j}\theta_l\right]}{\sum_{l \in R_j}\theta_l} \right\} \\ &= \sum_{T_j} \left\{x_{i_j k} - \frac{\sum_{l \in R_j}\frac{\partial}{\partial \beta_k}\theta_l}{\sum_{l \in R_j}\theta_l} \right\} \\ &= \sum_{T_j} \left\{x_{i_j k} - \frac{\sum_{l \in R_j}\theta_l x_{lk}}{\sum_{l \in R_j}\theta_l } \right\}.\end{align*}

The first term can be simplified slightly: summing \(x_{i_j k}\) over all event times is the same as summing over the individuals who experience an event. Writing \(D_i\) for the indicator that individual \(i\) has an observed event, we get

\[\underline{\underline{U(\beta)_k = \sum_{i=1}^n D_i x_{i k} - \sum_{T_j} \left\{\frac{\sum_{l \in R_j}\theta_l x_{lk}}{\sum_{l \in R_j}\theta_l } \right\}}}.\]
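As a sanity check on the boxed expression (toy data and helper names are my own, not from the text), the score can be compared against a central finite difference of \(\log L(\beta)\):

```python
import numpy as np

def log_pl(beta, x, times, events):
    # Log partial likelihood, summing over observed event times
    eta = x @ beta
    return sum(eta[j] - np.log(np.exp(eta[times >= times[j]]).sum())
               for j in np.flatnonzero(events))

def score(beta, x, times, events):
    # U(beta)_k = sum_{T_j} { x_{i_j k} - sum_{l in R_j} theta_l x_{lk} / sum_{l in R_j} theta_l }
    theta = np.exp(x @ beta)            # theta_l = exp(beta^T x_l)
    U = np.zeros_like(beta, dtype=float)
    for j in np.flatnonzero(events):
        r = times >= times[j]           # risk set R_j
        U += x[j] - (theta[r] @ x[r]) / theta[r].sum()
    return U

# Compare against a central finite difference of log L (toy data, hypothetical)
rng = np.random.default_rng(0)
x = rng.normal(size=(6, 2))
times = rng.exponential(size=6)
events = np.array([1, 1, 0, 1, 0, 1])
beta = np.array([0.3, -0.4])
h = 1e-6
num = np.array([(log_pl(beta + h * e, x, times, events)
                 - log_pl(beta - h * e, x, times, events)) / (2 * h)
                for e in np.eye(2)])
print(np.allclose(score(beta, x, times, events), num, atol=1e-5))
```

The weighted average \(\sum_{l \in R_j}\theta_l x_{lk} / \sum_{l \in R_j}\theta_l\) in each summand is the \(\theta\)-weighted mean of the \(k\)th covariate over the risk set, which is how the score is usually read: observed covariate minus its risk-set-weighted expectation.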

Solution to (b)

The observed information matrix \(I(\beta)\) is defined as minus the Hessian of \(\log L(\beta)\), meaning it is a symmetric \(p \times p\) matrix whose entry in the \(k\)th row and \(m\)th column is

\begin{align*} I(\beta)_{km} &= -\frac{\partial^2 \log L(\beta)}{\partial \beta_k \partial \beta_m}\\ &= -\frac{\partial U(\beta)_k}{\partial \beta_m} \\ &= \frac{\partial}{\partial \beta_m} \sum_{T_j} \left\{\frac{\sum_{l \in R_j}\theta_l x_{lk}}{\sum_{l \in R_j}\theta_l } \right\} \\ &= \sum_{T_j} \left\{\frac{\partial}{\partial \beta_m} \frac{\sum_{l \in R_j}\theta_l x_{lk}}{\sum_{l \in R_j}\theta_l } \right\} \\ &= \sum_{T_j} \left\{\frac{\left[\frac{\partial}{\partial \beta_m} \sum_{l \in R_j}\theta_l x_{lk}\right]\cdot \left[\sum_{l \in R_j}\theta_l \right] - \left[ \sum_{l \in R_j}\theta_l x_{lk}\right]\cdot\left[\frac{\partial}{\partial \beta_m}\sum_{l \in R_j}\theta_l \right]}{\left[\sum_{l \in R_j}\theta_l \right]^2} \right\} \\ &= \sum_{T_j} \left\{\frac{\left[\frac{\partial}{\partial \beta_m} \sum_{l \in R_j}\theta_l x_{lk}\right]}{\left[\sum_{l \in R_j}\theta_l \right]} - \frac{\left[ \sum_{l \in R_j}\theta_l x_{lk}\right]\cdot\left[\frac{\partial}{\partial \beta_m}\sum_{l \in R_j}\theta_l \right]}{\left[\sum_{l \in R_j}\theta_l \right]^2} \right\} \\ &= \sum_{T_j} \left\{\frac{\left[\sum_{l \in R_j}\theta_l x_{lk} x_{lm}\right]}{\left[\sum_{l \in R_j}\theta_l \right]} - \frac{\left[ \sum_{l \in R_j}\theta_l x_{lk}\right]\cdot\left[\sum_{l \in R_j}\theta_l x_{lm}\right]}{\left[\sum_{l \in R_j}\theta_l \right]^2} \right\} .\end{align*}
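Continuing the numerical sketch (Python/NumPy, with toy data of my own), the final expression can be checked against minus a finite-difference Jacobian of the score. Note that each summand has the form of a weighted covariance matrix of the covariates over the risk set (weights \(\theta_l / \sum_{l \in R_j}\theta_l\)), so \(I(\beta)\) is positive semidefinite and \(\log L(\beta)\) is concave:

```python
import numpy as np

def score(beta, x, times, events):
    # U(beta) derived in part (a)
    theta = np.exp(x @ beta)
    U = np.zeros(x.shape[1])
    for j in np.flatnonzero(events):
        r = times >= times[j]
        U += x[j] - (theta[r] @ x[r]) / theta[r].sum()
    return U

def information(beta, x, times, events):
    # I(beta)_{km} = sum_{T_j} { S2/S0 - (S1/S0)(S1/S0)^T }_{km}, where over R_j:
    # S0 = sum theta_l,  S1 = sum theta_l x_l,  S2 = sum theta_l x_l x_l^T
    theta = np.exp(x @ beta)
    p = x.shape[1]
    I = np.zeros((p, p))
    for j in np.flatnonzero(events):
        r = times >= times[j]
        s0 = theta[r].sum()
        s1 = theta[r] @ x[r]
        s2 = (theta[r][:, None] * x[r]).T @ x[r]
        I += s2 / s0 - np.outer(s1, s1) / s0**2
    return I

# Compare I(beta) with minus a central-difference Jacobian of the score
# (toy data, hypothetical)
rng = np.random.default_rng(1)
x = rng.normal(size=(5, 2))
times = rng.exponential(size=5)
events = np.array([1, 0, 1, 1, 0])
beta = np.array([0.2, 0.1])
h = 1e-6
numH = -np.array([(score(beta + h * e, x, times, events)
                   - score(beta - h * e, x, times, events)) / (2 * h)
                  for e in np.eye(2)])
print(np.allclose(information(beta, x, times, events), numH, atol=1e-5))
```

The symmetry of \(I(\beta)\) in \(k\) and \(m\) is visible directly in the final expression, since \(x_{lk}\) and \(x_{lm}\) enter interchangeably.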