Symmetric functions arising from continuous-time Markov chains

Let the random variable $t_{k-1}$ be the sum of the holding times in states $1$ through $k-1$; note that $t_{k-1}$ is a hypoexponential random variable. Let $f_{k-1}$ denote its PDF. From the probabilistic interpretation given by the OP, the quantity of interest can be written as: \begin{align*} s_k &= \frac{1}{\lambda_1 \cdots \lambda_{k-1}} \int_0^1 f_{k-1}(z) \int_{1-z}^{\infty} \lambda_k e^{-\lambda_k s} \, ds \, dz \\ &=\frac{e^{-\lambda_k}}{\lambda_1 \cdots \lambda_{k-1}} \int_0^1 f_{k-1}(z) e^{z \lambda_k} \, dz \end{align*} where the second line uses $\int_{1-z}^{\infty} \lambda_k e^{-\lambda_k s} \, ds = e^{-\lambda_k (1-z)}$. In words, this is the probability that $t_{k-1} < 1$ and that the holding time in state $k$ exceeds $1 - t_{k-1}$.
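This interpretation is easy to sanity-check by simulation before doing anything clever. Below is a minimal Monte Carlo sketch in Python (the rates in `lam` are an arbitrary example; the normalization follows the display above):

```python
import numpy as np

# Monte Carlo check of the probabilistic interpretation:
# s_k = P(t_{k-1} < 1 and the holding time in state k exceeds 1 - t_{k-1})
#       / (lambda_1 ... lambda_{k-1}).
rng = np.random.default_rng(0)
lam = np.array([1.0, 2.0, 3.0, 4.0])   # example rates lambda_1..lambda_k (k = 4)
n = 10**6

H = rng.exponential(1.0 / lam, size=(n, lam.size))  # holding times H_i ~ Exp(lambda_i)
t_km1 = H[:, :-1].sum(axis=1)                       # t_{k-1}: time of entry into state k
t_k = t_km1 + H[:, -1]                              # time of departure from state k

event = (t_km1 < 1.0) & (t_k > 1.0)                 # chain sits in state k at time 1
print(event.mean() / np.prod(lam[:-1]))             # Monte Carlo estimate of s_k
```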

To avoid assuming that the jump rates are distinct, we will use the following matrix representation of the PDF $f_{k-1}$: $$ f_{k-1}(z) = - \boldsymbol{\alpha}_{k-1} e^{z \Theta_{k-1}} \Theta_{k-1} \mathbf{1}_{k-1} $$ where I borrow the notation from here, except that I have added subscripts to $\boldsymbol{\alpha}$, $\Theta$ and $\mathbf{1}$ to indicate the number of jump rates in the hypoexponential distribution; here $\boldsymbol{\alpha}_{k-1} = (1, 0, \dots, 0)$ and $\mathbf{1}_{k-1}$ is the all-ones vector. The matrix $\Theta_{k-1}$ is upper bidiagonal and, as long as all of the rates are positive, invertible. In short, we are using the fact that the hypoexponential distribution is a phase-type distribution.

Let $\mathbf{I}_{k-1}$ be the $(k-1) \times (k-1)$ identity matrix. Since $\lambda_k \mathbf{I}_{k-1}$ commutes with $\Theta_{k-1}$, we have $e^{z \Theta_{k-1}} e^{z \lambda_k} = e^{z (\Theta_{k-1} + \lambda_k \mathbf{I}_{k-1})}$, and integrating (by parts, or directly) gives: $$ \int_0^1 e^{z \Theta_{k-1}} e^{z \lambda_k} dz = (\Theta_{k-1} + \lambda_k \mathbf{I}_{k-1})^{-1} (e^{\Theta_{k-1}} e^{\lambda_k} - \mathbf{I}_{k-1} ) $$ Hence, $$ s_k = \frac{-e^{-\lambda_k}}{\lambda_1 \cdots \lambda_{k-1}} \boldsymbol{\alpha}_{k-1}(\Theta_{k-1} + \lambda_k \mathbf{I}_{k-1})^{-1} (e^{\Theta_{k-1}} e^{\lambda_k} - \mathbf{I}_{k-1} ) \Theta_{k-1} \mathbf{1}_{k-1} $$ Note that nowhere in this derivation did we assume that the rates $\lambda_1, \dots, \lambda_{k-1}$ are distinct from one another; the only caveat is that the inverse exists precisely when $\lambda_k$ differs from each of $\lambda_1, \dots, \lambda_{k-1}$, since the eigenvalues of $\Theta_{k-1} + \lambda_k \mathbf{I}_{k-1}$ are $\lambda_k - \lambda_i$. The formula is straightforward to implement numerically.
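For concreteness, here is a minimal NumPy/SciPy sketch of this formula; the bidiagonal $\Theta_{k-1}$ and $\boldsymbol{\alpha}_{k-1} = (1, 0, \dots, 0)$ follow the phase-type setup above, and the example rates are the same arbitrary ones as in the simulation:

```python
import numpy as np
from scipy.linalg import expm

def s_k(lam):
    """Matrix-exponential formula for s_k, with lam = [lambda_1, ..., lambda_k].

    Requires lambda_k to differ from lambda_1..lambda_{k-1} so that
    Theta_{k-1} + lambda_k * I is invertible; the first k-1 rates may repeat.
    """
    lam = np.asarray(lam, dtype=float)
    d, lk = lam[:-1], lam[-1]
    m = d.size
    # Upper-bidiagonal phase-type generator: -lambda_i on the diagonal,
    # lambda_i on the superdiagonal (jump from phase i to phase i+1).
    Theta = -np.diag(d) + np.diag(d[:-1], k=1)
    alpha = np.zeros(m); alpha[0] = 1.0      # start in phase 1
    one, I = np.ones(m), np.eye(m)
    M = np.linalg.solve(Theta + lk * I, expm(Theta) * np.exp(lk) - I)
    return -np.exp(-lk) / np.prod(d) * (alpha @ M @ Theta @ one)

print(s_k([1.0, 2.0, 3.0, 4.0]))   # matches the Monte Carlo estimate above
```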


Dear James, you can write your function as an (infinite) sum of complete homogeneous symmetric functions \begin{equation*} h_\ell(x_1, \dots, x_k) := \sum_{1 \leqslant i_1 \leqslant \cdots \leqslant i_\ell \leqslant k} x_{i_1} \cdots x_{i_\ell} = [t^\ell]\prod_{j = 1}^k \frac{1}{1 - t x_j} \end{equation*} where $ [t^\ell]f(t) $ denotes the coefficient of $t^\ell$ in the power-series expansion of $f$ (for this last equality, see Wikipedia or the book by Macdonald). Now, decomposing the rational fraction (assuming the $x_j$ are distinct), we get \begin{equation*} \prod_{j = 1}^k \frac{1}{1 - t x_j} = \sum_{j = 1}^k \frac{A_j(X)}{1 - t x_j}, \quad A_j(X) = \prod_{i \neq j} \frac{1}{1 - x_j^{-1} x_i} \end{equation*} thus, extracting the coefficient of $ t^\ell $, one gets \begin{equation*} h_\ell(x_1, \dots, x_k) = \sum_{j = 1}^k A_j(X)\, x_j^\ell = \sum_{j = 1}^k \frac{x_j^{\ell + k - 1} }{ \prod_{i \neq j} (x_j - x_i) } \end{equation*}
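This identity is easy to verify numerically; here is a small Python check (the values in `x` are arbitrary, distinct as the partial-fraction decomposition requires):

```python
import numpy as np
from itertools import combinations_with_replacement

x = np.array([0.7, 1.3, 2.1])   # arbitrary distinct example values, k = 3
k, l = x.size, 4

# h_l from its definition: sum over weakly increasing index tuples.
h_direct = sum(np.prod(x[list(idx)])
               for idx in combinations_with_replacement(range(k), l))

# h_l from the partial-fraction formula.
h_pf = sum(x[j]**(l + k - 1) / np.prod([x[j] - x[i] for i in range(k) if i != j])
           for j in range(k))

print(h_direct, h_pf)   # the two values agree
```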

Your function can thus be written \begin{equation*} s_k(x_1, \dots, x_k) = \sum_{\ell \geqslant 0} \frac{(-1)^\ell}{(\ell + k - 1)!} h_{\ell}(x_1, \dots, x_k) \end{equation*} and it is hence projective, by projectivity of the $ h_i $'s. As a result, setting two variables equal amounts to evaluating (for instance) $ h_\ell(x_1, \dots, x_{k - 1}, x_{k - 1}) $, which does not have such a simple expression. But we can continue the analysis, following Tao's blog post (https://terrytao.wordpress.com/2017/08/06/schur-convexity-and-positive-definiteness-of-the-even-degree-complete-homogeneous-symmetric-polynomials/). One gets a really nice representation of the complete homogeneous symmetric functions by writing $ \frac{1}{1 - t x} = \mathbb{E} \left( e^{ tx \mathbb{e} } \right) $ where $ \mathbb{e} \sim \mathrm{Exp}(1) $ (valid for $ tx < 1 $). Taking a sequence of independent such exponential random variables $ (\mathbb{e}_i)_{1 \leqslant i \leqslant k} $, one thus gets \begin{equation*} h_\ell(x_1, \dots, x_k) = [t^\ell]\prod_{j = 1}^k \frac{1}{1 - t x_j} = [t^\ell]\,\mathbb{E}\left( e^{ t \sum_{i = 1}^k x_i \mathbb{e}_i } \right) = \frac{1}{\ell !}\, \mathbb{E}\left( \left( \sum_{i = 1}^k x_i \mathbb{e}_i \right)^\ell \right) \end{equation*}
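The last equality is again easy to test by Monte Carlo; a short sketch (same arbitrary example values as above):

```python
import numpy as np
from math import factorial
from itertools import combinations_with_replacement

# Check h_l(x) = E[(sum_i x_i e_i)^l] / l!  with e_i ~ Exp(1), by simulation.
rng = np.random.default_rng(1)
x = np.array([0.7, 1.3, 2.1])
k, l = x.size, 4

S = rng.exponential(1.0, size=(10**6, k)) @ x   # samples of sum_i x_i e_i
h_mc = (S**l).mean() / factorial(l)

h_direct = sum(np.prod(x[list(idx)])
               for idx in combinations_with_replacement(range(k), l))
print(h_mc, h_direct)   # agree up to Monte Carlo error
```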

Since $ x_i \in \mathbb{R}_+ $, your function admits a representation as the Bessel expectation \begin{equation*} s_k(x_1, \dots, x_k) = \mathbb{E}\left( \sum_{\ell \geqslant 0} \frac{(-1)^\ell}{ \ell ! \, (\ell + k - 1)! } \left( \sum_{i = 1}^k x_i \mathbb{e}_i \right)^{\ell} \right) = \mathbb{E}\left( S^{-(k-1)/2} \, J_{k - 1} \left(2 \sqrt{ S } \right)\right), \qquad S := \sum_{i = 1}^k x_i \mathbb{e}_i \end{equation*} using the series $ J_\nu(z) = \sum_{\ell \geqslant 0} \frac{(-1)^\ell}{\ell ! \, (\ell + \nu)!} (z/2)^{2\ell + \nu} $.
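As a sanity check, the truncated $h$-series and a Monte Carlo estimate of the Bessel expectation can be compared on example points (the values in `x` are arbitrary and distinct, since the partial-fraction formula for $h_\ell$ is used to evaluate the series):

```python
import numpy as np
from math import factorial
from scipy.special import jv

rng = np.random.default_rng(3)
x = np.array([0.7, 1.3, 2.1, 0.4])   # arbitrary distinct example values
k = x.size

def h(l):
    # Partial-fraction formula for h_l (requires distinct x_j).
    return sum(x[j]**(l + k - 1) / np.prod([x[j] - x[i] for i in range(k) if i != j])
               for j in range(k))

# (a) Truncated series  sum_l (-1)^l h_l / (l + k - 1)!
series = sum((-1)**l * h(l) / factorial(l + k - 1) for l in range(40))

# (b) Monte Carlo estimate of  E[ S^{-(k-1)/2} J_{k-1}(2 sqrt(S)) ].
S = rng.exponential(1.0, size=(10**6, k)) @ x
mc = np.mean(S**(-(k - 1) / 2) * jv(k - 1, 2.0 * np.sqrt(S)))

print(series, mc)   # agree up to Monte Carlo and truncation error
```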

You can transform this expression further using the generating series of the Bessel functions, writing $ J_k(x) $ as the Fourier (Laurent) coefficient $ [t^k] e^{ x (t - t^{-1})/2 } $, or as a Laplace transform. Lastly, you can even use a stable-$1/2$ random variable to write the final result as $ \mathbb{E}\left( \exp\left( Z \sum_{i = 1}^k x_i \mathbb{e}_i \right)\right) $ where $ Z $ is a random variable independent of the $ \mathbb{e}_i $'s.
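The Fourier-coefficient form is also easy to check against SciPy's Bessel function: on the unit circle $t = e^{i\theta}$, the coefficient becomes the integral $\frac{1}{2\pi}\int_{-\pi}^{\pi} e^{i x \sin\theta - i k\theta} \, d\theta$. A small sketch (the values of $k$, $x$ and the grid size are arbitrary):

```python
import numpy as np
from scipy.special import jv

# J_k(x) = [t^k] exp(x (t - 1/t) / 2): with t = e^{i theta} the exponent is
# i x sin(theta), and the coefficient is a Fourier integral over the circle.
k, x = 3, 2.5
theta = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
coeff = np.mean(np.exp(1j * (x * np.sin(theta) - k * theta)))
print(coeff.real, jv(k, x))   # agree; the imaginary part is ~ 0
```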

In this form, setting two variables equal amounts to replacing one exponential random variable by a sum of two independent ones, that is, by a $\mathrm{Gamma}(2, 1)$ random variable.
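A quick numerical illustration of that replacement (the weight $x$ is arbitrary, and only a few moments are compared):

```python
import numpy as np

# With x_{k-1} = x_k = x, the two terms x*e_{k-1} + x*e_k collapse to
# x * (e_{k-1} + e_k), and a sum of two independent Exp(1) variables is Gamma(2, 1).
rng = np.random.default_rng(2)
x, n = 1.5, 10**6
two_exp = x * (rng.exponential(1.0, n) + rng.exponential(1.0, n))
gamma2 = x * rng.gamma(2.0, 1.0, n)
for p in (1, 2, 3):   # moments agree up to Monte Carlo error
    print(p, (two_exp**p).mean(), (gamma2**p).mean())
```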

Best, Y.