I've been stuck on this problem for days. Any help, solutions or just pointers would be very welcome.

A squash match is composed of a number of games, denoted as g. The distribution of g is known: typically 5, but it could be anything from 1 to 10+.
Each game is composed of a number of rallies, denoted as r. The distribution of r is known: typically 26, but it could be anything from 1 to 40+.
Between consecutive rallies there is a time gap, denoted as t (during which the players rest and retrieve the ball).
Gaps between rallies within a single game are inter-rally gaps, denoted irg. The distribution p(t|irg) is known: typically 18 seconds.
Gaps between the last rally of one game and the first rally of the next game are inter-game gaps, denoted igg. The distribution p(t|igg) is known: typically 88 seconds.
Note that there is a considerable overlap between these two distributions.

The observable system output is simply a series of time gaps t1, t2, t3... between rallies, and it can be generated as follows. Generate a value for g (this choice is independent of the values of the other random variables). For each game, generate a value for r. Then, for each rally except the last rally of the match, generate a gap t: drawn from p(t|irg) if the next rally belongs to the same game, or from p(t|igg) if the next rally starts a new game.

To illustrate the point (using small values of r for the sake of clarity): if we chose g=1, r=4, we might generate three gaps (between the 4 rallies) such as 18, 17, 19 seconds.
If we chose g=2, r=3,7 (two games, the first with 3 rallies and the second with 7 rallies) we might generate the following: 18, 17, 88, 17, 16, 19, 18, 18, 20 seconds.
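For concreteness, here is a minimal sketch of the generative process in Python. The distribution names and parameters are placeholders of my own: the actual distributions are only described above by their typical values, so I use Gaussians (clipped to sensible minima) purely for illustration.

```python
import random

# Placeholder distributions -- the real ones are only known by their
# typical values (g ~ 5, r ~ 26, irg ~ 18 s, igg ~ 88 s).
def draw_g():            # number of games in the match
    return max(1, round(random.gauss(5, 1)))

def draw_r():            # number of rallies in one game
    return max(1, round(random.gauss(26, 6)))

def draw_irg():          # inter-rally gap, in seconds
    return max(1.0, random.gauss(18, 3))

def draw_igg():          # inter-game gap, in seconds
    return max(1.0, random.gauss(88, 20))

def simulate_match():
    """Return the observable gap sequence t1, t2, ... for one match."""
    g = draw_g()
    gaps = []
    for game in range(g):
        r = draw_r()
        gaps.extend(draw_irg() for _ in range(r - 1))  # gaps within this game
        if game < g - 1:
            gaps.append(draw_igg())                    # gap before the next game
    return gaps
```

Note that a match with games of r1, ..., rg rallies produces sum(ri) - 1 gaps in total: ri - 1 within each game, plus g - 1 between games.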

The problem is: given a sequence of observed gaps t1, t2, t3..., find the most likely value of g and the most likely rally count r for each game (equivalently, the most likely positions of the game boundaries in the sequence).
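To make the difficulty concrete, here is a naive per-gap maximum-likelihood classifier (again using Gaussian placeholders of my own for p(t|irg) and p(t|igg), and a rough prior on a gap being inter-game). It labels each gap independently and then reads off g and the rally counts; because the two distributions overlap, a rule like this will mislabel borderline gaps, which is exactly what makes the problem hard.

```python
import math

def gauss_logpdf(x, mu, sigma):
    """Log-density of a Gaussian -- placeholder for the known distributions."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def classify_gaps(gaps, p_igg=None):
    """Label each gap 'irg' or 'igg' by prior-weighted likelihood."""
    if p_igg is None:
        # Rough prior that a gap is inter-game: with typical g=5, r=26,
        # a match has 5*25 = 125 irg gaps and 4 igg gaps.
        p_igg = 4 / (5 * 25 + 4)
    labels = []
    for t in gaps:
        ll_irg = gauss_logpdf(t, 18, 3) + math.log(1 - p_igg)
        ll_igg = gauss_logpdf(t, 88, 20) + math.log(p_igg)
        labels.append('igg' if ll_igg > ll_irg else 'irg')
    return labels

def decode(labels):
    """Recover (g, list of per-game rally counts) from the gap labels."""
    g = labels.count('igg') + 1
    rally_counts, run = [], 0
    for lab in labels:
        if lab == 'igg':
            rally_counts.append(run + 1)  # a game with k irg gaps has k+1 rallies
            run = 0
        else:
            run += 1
    rally_counts.append(run + 1)
    return g, rally_counts
```

On the clean g=2, r=3,7 example above this recovers the truth, but any jointly-optimal method would need to score whole segmentations (e.g. Viterbi-style dynamic programming over game boundaries) rather than gaps in isolation.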