We consider a two-player zero-sum game in a bounded open domain Ω described as follows: at a point x ∈ Ω, Players I and II play an ε-step tug-of-war game with probability α, and with probability β (α + β = 1), a random point in the ball of radius ε centered at x is chosen. Once the game position reaches the boundary, Player II pays Player I the amount given by a fixed payoff function F. We give a detailed proof of the fact that the value functions of this game satisfy the Dynamic Programming Principle

u(x) = (α/2) { sup_{B_ε(x)} u + inf_{B_ε(x)} u } + β ⨏_{B_ε(x)} u(y) dy
for x ∈ Ω with u(y) = F(y) when y ∉ Ω. This principle implies the existence of quasioptimal Markovian strategies.
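As an informal illustration (not part of the paper), the sketch below iterates a discretized version of this Dynamic Programming Principle to an approximate fixed point on a one-dimensional grid; the domain Ω = (0, 1), the payoff F, the grid resolution, and the values of ε, α, β are hypothetical choices made here only for concreteness.

```python
import numpy as np

def dpp_iterate(u, x, eps, alpha, beta, F, in_domain):
    """Apply the discretized DPP operator once at every grid point."""
    u_new = u.copy()
    for i, xi in enumerate(x):
        if not in_domain(xi):
            # Outside Omega the game has stopped: Player II pays F(xi).
            u_new[i] = F(xi)
            continue
        ball = np.abs(x - xi) <= eps          # grid points in the eps-ball B_eps(xi)
        sup_u, inf_u = u[ball].max(), u[ball].min()
        mean_u = u[ball].mean()               # uniform average over the ball
        # With prob. alpha: tug-of-war step (fair coin between sup and inf);
        # with prob. beta: uniformly random point in the ball.
        u_new[i] = 0.5 * alpha * (sup_u + inf_u) + beta * mean_u
    return u_new

# Hypothetical example: Omega = (0, 1), payoff F(y) = y outside Omega.
eps, alpha, beta = 0.1, 0.6, 0.4
x = np.linspace(-eps, 1 + eps, 201)
F = lambda y: y
in_domain = lambda y: 0.0 < y < 1.0
u = np.array([F(xi) for xi in x])             # initial guess: extend F inside Omega
for _ in range(500):                          # iterate toward a fixed point of the DPP
    u = dpp_iterate(u, x, eps, alpha, beta, F, in_domain)
```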