1. Introduction
We call a sequence $(a_n)_n$ of real numbers increasing if, for all $n\in \mathbb {N}$, $a_n < a_{n+1}$, and nondecreasing if, for all $n\in \mathbb {N}$, $a_n \leq a_{n+1}$, and we define the terms decreasing and nonincreasing analogously. A real number is called left-computable if there exists a computable nondecreasing sequence of rational numbers converging to it; note that some textbooks [Reference Downey and Hirschfeldt4, Reference Nies11] call such numbers left-c.e. A real number $\alpha $ is called computable if there exists a computable sequence $(a_n)_n$ of rational numbers satisfying $|\alpha - a_n| < 2^{-n}$, for all $n\in \mathbb {N}$. It is easy to see that any computable real number is left-computable. In this article, we study real numbers that are limits of computable, nondecreasing, converging sequences $(a_n)_n$ of rational numbers which are not required to satisfy the condition $|\alpha - a_n| < 2^{-n}$ for all but only for infinitely many $n\in \mathbb {N}$.
Definition 1.1. We call a real number $\alpha $ regainingly approximable if there exists a computable nondecreasing sequence of rational numbers $(a_n)_n$ converging to $\alpha $ such that ${\alpha - a_n < 2^{-n}}$ holds for infinitely many ${n \in \mathbb {N}}$.
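To make the definition concrete, the following minimal Python sketch (our own toy illustration, not part of the formal development) builds, for the computable number $\alpha = 2/3$, a nondecreasing rational approximation that stays constant on long stretches and satisfies $\alpha - a_n < 2^{-n}$ exactly at the perfect squares, of which there are infinitely many.

```python
from fractions import Fraction

# Illustrative only: alpha = 2/3 and a nondecreasing rational approximation
# that "dawdles" between perfect squares and jumps close to alpha at them.
alpha = Fraction(2, 3)

def a(n: int) -> Fraction:
    k = int(n ** 0.5)                  # floor of sqrt(n), fine for these small n
    return alpha - Fraction(1, 2) ** (k * k + 1)

regaining = [n for n in range(50) if alpha - a(n) < Fraction(1, 2) ** n]
print(regaining)                       # [0, 1, 4, 9, 16, 25, 36, 49]
```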
Intuitively speaking, regainingly approximable numbers are not required to possess approximations that converge as speedily as those to computable numbers, but they must possess approximations that “catch up” with that speed infinitely often while being allowed to “dawdle” arbitrarily in between. Trivially, every regainingly approximable number is left-computable, and every computable number is regainingly approximable. In fact, even the following stronger observation holds.
Example 1.2. Every non-high left-computable number is regainingly approximable. Indeed, let $\alpha $ be a left-computable number that is not high. Fix a computable nondecreasing sequence of rational numbers $(a_n)_n$ converging to $\alpha $. An increasing function $s \colon \mathbb {N} \to \mathbb {N}$ with ${\alpha - a_{s(n)} < 2^{-n}}$ for all $n \in \mathbb {N}$ can be computed from oracle $\alpha $. Since $\alpha $ is not high, according to a characterization of highness by Martin [Reference Martin8] (see Soare [Reference Soare15, Theorem XI.1.3]), there exists a computable, w.l.o.g. increasing function $r \colon \mathbb {N}\to \mathbb {N}$ with $r(n) \geq s(n)$ for infinitely many ${n \in \mathbb {N}}$. Then for any $n \in \mathbb {N}$ with $r(n) \geq s(n)$ we obtain
$$ \begin{align*} \alpha - a_{r(n)} \leq \alpha - a_{s(n)} < 2^{-n}. \end{align*} $$
Therefore, $\alpha $ is regainingly approximable.
The remainder of the article is structured as follows:
• In Section 3 we begin by showing that Definition 1.1 is robust under minor modifications, with the equivalences between the different formulations turning out to be effectively uniform. We also state some computability-theoretic properties of the set $\{n \in \mathbb {N} \colon \alpha - a_n < 2^{-n}\}$, that is, the set of points where a regaining approximation $(a_n)_n$ “catches up.”
• We begin Section 4 by giving several characterizations of the sets $A\subseteq \mathbb {N}$ that have the property that the real number
$$\begin{align*}2^{-A} := \sum_{a \in A} 2^{-(a+1)} \end{align*}$$
is regainingly approximable. For the rest of the section we then focus on c.e. sets $A\subseteq \mathbb {N}$: A real number $x\in [0,1]$ is called strongly left-computable if there exists a computably enumerable set $A\subseteq \mathbb {N}$ with $x=2^{-A}$. It is well-known that the set of strongly left-computable numbers is a proper superset of the set of computable numbers in $[0,1]$ and a proper subset of the set of left-computable numbers in $[0,1]$. We consider c.e. sets A such that $2^{-A}$ is regainingly approximable and call such sets regainingly approximable. We give different characterizations of this class of sets, and note that, unlike for regainingly approximable numbers, not all arguments here can be fully effectively uniform.
• In Section 5, we state further computability-theoretic properties of regainingly approximable sets. First, we observe an easy splitting result, namely that every c.e. set $C\subseteq \mathbb {N}$ is the union of two disjoint regainingly approximable sets $A,B\subseteq \mathbb {N}$. Next, we prove that there is a c.e. set that is not regainingly approximable. Finally, we show that every c.e. Turing degree contains a regainingly approximable set.
• In Sections 6 and 7 we look at the Kolmogorov complexity of regainingly approximable numbers and sets. On the one hand, we observe that every regainingly approximable number is i.o. K-trivial and, hence, not Martin-Löf random. On the other hand, we show that there exists a regainingly approximable number such that the prefix-free Kolmogorov complexity of infinitely many of its initial segments (in a sense to be explained in Section 2) exceeds their length, and that there exist regainingly approximable sets whose initial segments infinitely often have the maximal Kolmogorov complexity that is possible for a c.e. set.
• In Section 8 we observe that regainingly approximable sets and numbers behave badly with respect to arithmetical operations: The set of regainingly approximable sets is closed neither under union nor under intersection, and while the set of regainingly approximable numbers is closed downwards under Solovay reducibility it is not closed under addition.
• Finally, in Section 9 we formulate a uniformly effective algorithm for splitting c.e. sets into two regainingly approximable sets, which strengthens the splitting result of Section 5. In fact, this algorithm proves a stronger result, namely that every real that is regular in the sense of Wu [Reference Wu18] can be split into two regular reals that are additionally regainingly approximable.
We close this introduction by mentioning that isolating the notion of regaining approximability was a key step needed for Hölzl and Janicki’s [Reference Hölzl and Janicki6] negative answer to the question posed by Merkle and Titov [Reference Merkle and Titov10] whether among the left-computable numbers, being Martin-Löf random is equivalent to being non-speedable.
2. Notation
For general background on computability theory and algorithmic randomness, we refer the reader to the usual textbooks [Reference Downey and Hirschfeldt4, Reference Nies11, Reference Soare15]. We will use Cantor’s pairing function $\langle \cdot ,\cdot \rangle \colon \mathbb {N}^2\to \mathbb {N}$ defined by
$$\begin{align*}\langle m,n\rangle := \frac{1}{2}\left(m + n\right)\left(m + n + 1\right) + n , \end{align*}$$
for all $m,n\in \mathbb {N}$. This is a computable bijection, so let $\pi _1\colon \mathbb {N}\to \mathbb {N}$ and $\pi _2\colon \mathbb {N}\to \mathbb {N}$ denote the two components of its inverse function, that is, $\langle \pi _1(n),\pi _2(n)\rangle = n$ for all ${n\in \mathbb {N}}$. For $n\in \mathbb {N}$, we write $\log (n)$ for the length of the binary representation of n, thus $\log (n)=\lfloor \log _2(n) \rfloor + 1$ if $n>0$ and $\log (0)=1$. Hence, $\log (n)$ is identical to the usual binary logarithm up to a small additive constant.
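The pairing function, its inverse projections, and the binary length function are straightforward to compute; the following Python sketch (ours) implements them and checks the inversion property on a finite range.

```python
def pair(m, n):
    """Cantor pairing <m, n> = (m + n)(m + n + 1)/2 + n."""
    return (m + n) * (m + n + 1) // 2 + n

def unpair(k):
    """Return (pi_1(k), pi_2(k)), the unique (m, n) with pair(m, n) == k."""
    d = 0                                   # find the diagonal d = m + n of k
    while (d + 1) * (d + 2) // 2 <= k:
        d += 1
    n = k - d * (d + 1) // 2
    return d - n, n

def binlog(n):
    """Length of the binary representation of n, with binlog(0) = 1."""
    return max(n.bit_length(), 1)

assert all(unpair(pair(m, n)) == (m, n) for m in range(50) for n in range(50))
print(pair(3, 5), unpair(pair(3, 5)), binlog(0), binlog(8))   # 41 (3, 5) 1 4
```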
For functions $f,g\colon \mathbb {N} \to \mathbb {N}$ we write $f \leq ^+ g$ if there exists a constant ${c \in \mathbb {N}}$ such that $f(n) \leq g(n) + c$ for all n. A set $A\subseteq \mathbb {N}$ will often be identified with its characteristic sequence, that is, we define $A(n):=1$ if $n\in A$, and $A(n):=0$ if $n\not \in A$.
We end this section by laying out the following conventions about how, in the remainder of the article, we will (often tacitly) identify real numbers with infinite binary sequences and subsets of the natural numbers:
• If we ascribe to some $x\in \mathbb {R}$ a property that is formally defined only for infinite binary sequences, then we mean that a specific, uniquely determined infinite binary sequence has that property; namely, the sequence containing infinitely many ones and that is a binary representation of the unique real number $y\in (0,1]$ with $x-y\in \mathbb {Z}$.
• If we ascribe to some $x\in \mathbb {R}$ a property that is formally defined only for subsets of $\mathbb {N}$, then we mean that the uniquely determined infinite set $A\subseteq \mathbb {N}$, whose characteristic sequence is the infinite binary sequence defined in the previous item, has that property.
• When we talk about “initial segments” of some $x\in \mathbb {R}$, then we are again referring to the initial segments of the infinite binary sequence defined in the first item. This will be used for instance when we talk about the plain or prefix-free Kolmogorov complexity of initial segments of some such x.
• Similarly, when we talk about the initial segments of an $A\subseteq \mathbb {N}$, then we are referring to its characteristic sequence.
3. Robustness
In this section, we first show that slight changes to the definition of regainingly approximable numbers do not lead to a different notion. The following lemma will be useful; note that no computability assumptions are made.
Lemma 3.1. Let $(a_n)_n$ be a nondecreasing sequence of real numbers converging to some real number $\alpha $ such that, for infinitely many $n \in \mathbb {N}$, $\alpha - a_n < 2^{-n}$. Then, for every unbounded function $f\colon \mathbb {N}\to \mathbb {N}$ there exist infinitely many m with $\alpha - a_{f(m+1)} < 2^{-f(m)}$.
Proof By assumption, the set
$$\begin{align*}A := \{n\in\mathbb{N} \colon n \geq f(0) \text{ and }\alpha - a_n < 2^{-n}\} \end{align*}$$
is infinite. We define a function $g\colon A\to \mathbb {N}$ by
$$\begin{align*}g(n):=\min\{m\in\mathbb{N} \colon f(m+1)> n \} , \end{align*}$$
for $n\in A$. The function g is well-defined because f is unbounded. For every $n\in A$ we have $f(g(n)) \leq n < f(g(n)+1)$. The set
$$\begin{align*}g(A) := \{g(n) \colon n \in A\}\end{align*}$$
is infinite because A is infinite and, for each fixed m, the preimage $g^{-1}(m)$ is contained in the finite set $\{n\in \mathbb {N} \colon n < f(m+1)\}$. Let $m\in g(A)$ be arbitrary and let $n\in A$ be such that $m=g(n)$. Then
$$\begin{align*}\alpha - a_{f(m+1)} = \alpha - a_{f(g(n)+1)} \leq \alpha - a_n < 2^{-n} \leq 2^{-f(g(n))} = 2^{-f(m)}. \end{align*}$$
There are several obvious modifications of Definition 1.1 that one might want to consider. First, instead of allowing nondecreasing $(a_n)_n$, we might require $(a_n)_n$ to be increasing. Secondly, one might replace the condition $\alpha - a_n < 2^{-n}$ by the condition ${\alpha - a_n < 2^{-f(n)}}$ where ${f\colon \mathbb {N}\to \mathbb {N}}$ is an arbitrary computable, unbounded function of one’s choice; or, one might ask for this to hold only for some fixed computable, nondecreasing, unbounded function $f\colon \mathbb {N}\to \mathbb {N}$, a seemingly weaker requirement. However, it turns out that none of these modifications make any difference.
Proposition 3.2. For a real number $\alpha \in \mathbb {R}$ the following statements are equivalent:
(1) $\alpha $ is regainingly approximable.
(2) There exists a computable, increasing sequence of rational numbers $(a_n)_n$ converging to $\alpha $ such that, for infinitely many $n \in \mathbb {N}$, $\alpha - a_n < 2^{-n}$.
(3) For every computable, unbounded function $f\colon \mathbb {N}\to \mathbb {N}$ there exists a computable increasing sequence of rational numbers $(a_n)_n$ converging to $\alpha $ such that, for infinitely many $n \in \mathbb {N}$,
$$\begin{align*}\alpha - a_n < 2^{-f(n)}.\end{align*}$$
(4) There exist a computable, nondecreasing, and unbounded function ${f\colon \mathbb {N}\to \mathbb {N}}$ and a computable nondecreasing sequence of rational numbers $(a_n)_n$ converging to $\alpha $ such that, for infinitely many $n \in \mathbb {N}$, $\alpha - a_n < 2^{-f(n)}$.
Note that this implies in particular that it makes no difference whether we use “$<$” or “$\leq $” in the definition of regaining approximability. We would also like to point out that all implications in the following proof are uniformly effective.
Proof $(2) \Rightarrow (1)$: Trivial.

$(3) \Rightarrow (2)$: Trivial.
$(1) \Rightarrow (3)$: Let $\alpha $ be a regainingly approximable number. Let $(b_n)_n$ be a computable nondecreasing sequence of rational numbers converging to $\alpha $ with $\alpha - b_n < 2^{-n}$ for infinitely many $n \in \mathbb {N}$. Let $f\colon \mathbb {N}\to \mathbb {N}$ be a computable, unbounded function. Then the function $g\colon \mathbb {N}\to \mathbb {N}$ defined by
$$\begin{align*}g(n) := 1+n+\max \{ f(m) \colon m \leq n \} \end{align*}$$
is computable, increasing, and satisfies $g(n)\geq f(n)+1$, for all $n\in \mathbb {N}$. In particular, g is unbounded. The sequence $(a_n)_n$ of rational numbers defined by
$$\begin{align*}a_n:=b_{g(n+1)} - 2^{-g(n)}\end{align*}$$
is computable and increasing and converges to $\alpha $. By Lemma 3.1 there exist infinitely many n with $\alpha - b_{g(n+1)} < 2^{-g(n)}$. For all such n we obtain
$$\begin{align*}\alpha - a_n = \alpha - b_{g(n+1)} + 2^{-g(n)} < 2^{-g(n)+1} \leq 2^{-f(n)}. \end{align*}$$
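This direction of the proof is an explicit transformation of approximations. The following Python sketch (ours, with toy choices of $\alpha $, $(b_n)_n$ and f that are not from the text) computes g and the sequence $a_n = b_{g(n+1)} - 2^{-g(n)}$ with exact rationals.

```python
from fractions import Fraction

# Toy instance (our choice): alpha = 1, the regaining approximation
# b_n = 1 - 2^{-(n+1)}, and the computable unbounded function f(n) = 2n.
alpha = Fraction(1)
b = lambda n: 1 - Fraction(1, 2) ** (n + 1)
f = lambda n: 2 * n

def g(n):
    # g(n) = 1 + n + max{f(m) : m <= n}; computable, increasing, g(n) >= f(n) + 1.
    return 1 + n + max(f(m) for m in range(n + 1))

def a(n):
    # The sped-up sequence from the proof: a_n = b_{g(n+1)} - 2^{-g(n)}.
    return b(g(n + 1)) - Fraction(1, 2) ** g(n)

print([n for n in range(15) if alpha - a(n) < Fraction(1, 2) ** f(n)])
# Here every n qualifies; in general the proof only guarantees infinitely many.
```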
 
$(1) \Rightarrow (4)$: Trivial.
 
$(4) \Rightarrow (1)$: Assume that $f\colon \mathbb {N}\to \mathbb {N}$ is a computable, nondecreasing, and unbounded function and $(b_n)_n$ is a computable nondecreasing sequence of rational numbers converging to $\alpha $ such that, for infinitely many $n \in \mathbb {N}$, $\alpha - b_n < 2^{-f(n)}$. Define a function $g\colon \mathbb {N}\to \mathbb {N}$ via
$$\begin{align*}g(0):= \max\{m\in\mathbb{N} \colon f(m)=f(0)\} \end{align*}$$
and
$$\begin{align*}g(n+1) := \max\{m\in\mathbb{N} \colon f(m)=f(g(n)+1)\} , \end{align*}$$
for $n\in \mathbb {N}$; informally speaking, the graph of f is an infinite increasing sequence of plateaus of finite length and $g(n)$ gives the x-coordinate of the right-most point in the $(n+1)$th plateau.

Clearly, g is computable and increasing and satisfies $f(g(n))\geq n$ for all $n\in \mathbb {N}$. Furthermore, for every $k\in \mathbb {N}$ there exists exactly one $n\in \mathbb {N}$ with $f(k)=f(g(n))$, and this n satisfies $k \leq g(n)$. The sequence $(a_n)_n$ of rational numbers defined by $a_n := b_{g(n)}$, for all $n\in \mathbb {N}$, is computable and nondecreasing and converges to $\alpha $. By assumption, the set
$$\begin{align*}B := \{k\in\mathbb{N} \colon \alpha - b_k < 2^{-f(k)} \} \end{align*}$$
is infinite. Hence, the set
$$\begin{align*}A := \{n \in \mathbb{N} \colon (\exists k \in B)\; f(k) = f(g(n)) \} \end{align*}$$
is infinite as well. Consider a number $n\in A$, and let $k\in B$ be a number with $f(k)=f(g(n))$. Then $k\leq g(n)$ and
$$\begin{align*}\alpha - a_n = \alpha - b_{g(n)} \leq \alpha - b_k < 2^{-f(k)} = 2^{-f(g(n))} \leq 2^{-n}. \end{align*}$$
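The only computational ingredient in this direction is the plateau function g, which can be found by an unbounded search because f is nondecreasing and unbounded. A Python sketch (ours, with a toy choice of f):

```python
import math

def plateau_ends(f, count):
    """g(0), ..., g(count-1) for a computable, nondecreasing, unbounded f:
    g(n) is the right-most argument of the (n+1)-th plateau of f."""
    ends, start = [], 0
    for _ in range(count):
        m = start
        while f(m + 1) == f(start):    # search terminates because f is unbounded
            m += 1
        ends.append(m)
        start = m + 1
    return ends

# Toy choice (ours): f(n) = floor(sqrt(n)) is nondecreasing and unbounded.
f = lambda n: math.isqrt(n)
g = plateau_ends(f, 6)
print(g)                               # [0, 3, 8, 15, 24, 35]
print([f(m) for m in g])               # [0, 1, 2, 3, 4, 5]: indeed f(g(n)) >= n
```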
Next, we observe that if a left-computable number $\alpha $ is regainingly approximable then this will be apparent no matter which of its effective approximations we look at.
Proposition 3.3. Let $\alpha $ be a left-computable number, and let $(a_n)_n$ be a computable, nondecreasing sequence of rational numbers converging to $\alpha $. Then the following conditions are equivalent.
(1) $\alpha $ is a regainingly approximable number.
(2) There exists a computable, increasing function $r\colon \mathbb {N}\to \mathbb {N}$ such that, for infinitely many n, $\alpha - a_{r(n)} < 2^{-n}$.
Note that the proof is effectively uniform in both directions.
Proof $(2) \Rightarrow (1)$: Assume that there exists a computable, increasing function $r\colon \mathbb {N}\to \mathbb {N}$ such that we have ${\alpha - a_{r(n)} < 2^{-n}}$ for infinitely many n. Then the sequence $(b_n)_n$ of rational numbers defined by ${b_n:=a_{r(n)}}$ is computable, nondecreasing, converges to $\alpha $, and satisfies, for infinitely many n, $\alpha - b_n < 2^{-n}$. Hence, $\alpha $ is regainingly approximable.
 
$(1) \Rightarrow (2)$: Assume that $\alpha $ is regainingly approximable. By Proposition 3.2 there exists a computable, increasing sequence $(b_n)_n$ of rational numbers converging to $\alpha $ such that there exist infinitely many n with $\alpha - b_n < 2^{-n}$. We define a computable, increasing function ${r\colon \mathbb {N}\to \mathbb {N}}$ by $r(0):=\min \{m\in \mathbb {N} \colon a_m\geq b_0 \}$, and
$$\begin{align*}r(n+1) := \min\{m\in\mathbb{N} \colon m> r(n) \text{ and } a_m\geq b_{n+1} \} , \end{align*}$$
for $n\in \mathbb {N}$. For all $n\in \mathbb {N}$ we have $a_{r(n)} \geq b_n$. Thus, for the infinitely many $n\in \mathbb {N}$ with $\alpha - b_n < 2^{-n}$, we obtain
$$\begin{align*}\alpha - a_{r(n)} \leq \alpha - b_n < 2^{-n}. \end{align*}$$
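The function r in this direction is obtained by a simple unbounded search, which terminates because $(a_m)_m$ converges to $\alpha $ from below while every $b_n$ is strictly below $\alpha $. A Python sketch with toy approximations (our choice):

```python
from fractions import Fraction

def catch_up(a, b, count):
    """The function r from the proof: r(0) = min{m : a_m >= b_0} and
    r(n+1) = min{m > r(n) : a_m >= b_{n+1}}."""
    r, prev = [], -1
    for n in range(count):
        m = prev + 1
        while a(m) < b(n):             # terminates since sup_m a_m = alpha > b_n
            m += 1
        r.append(m)
        prev = m
    return r

# Toy approximations of alpha = 1 (our choice): a converges slowly, b quickly.
a = lambda m: 1 - Fraction(1, m + 2)
b = lambda n: 1 - Fraction(1, 2) ** (n + 1)
print(catch_up(a, b, 6))               # [0, 2, 6, 14, 30, 62]; a(r(n)) >= b(n)
```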
We close this section by investigating the computability-theoretic properties of the set of points where a regaining approximation “catches up.”
Proposition 3.4. Let $\alpha $ be a regainingly approximable number, and let $(a_n)_n$ be a computable nondecreasing sequence of rational numbers converging to $\alpha $ with $\alpha - a_n < 2^{-n}$ for infinitely many $n \in \mathbb {N}$. Then
$$ \begin{align*} A:= \{n \in \mathbb{N} \colon \alpha - a_n < 2^{-n}\} \end{align*} $$
has the following properties:
(1) $\mathbb {N} \setminus A$ is computably enumerable.
(2) The following are equivalent:
(a) $\alpha $ is computable.
(b) A is decidable.
(c) A is not hyperimmune.
Proof
(1) If $\alpha $ is rational or if $\mathbb {N}\setminus A$ is empty then A and $\mathbb {N}\setminus A$ are decidable; thus assume w.l.o.g. that $\alpha $ is irrational and that $\mathbb {N} \setminus A$ is non-empty. Then $\alpha - a_n\geq 2^{-n}$ if and only if $\alpha - a_n> 2^{-n}$. Fix any ${e \in \mathbb {N} \setminus A}$, and define a computable function $f \colon \mathbb {N} \to \mathbb {N}$ via
$$ \begin{align*} f(\left\langle m,n\right\rangle ) := \begin{cases} n, &\text{if }a_m - a_n \geq 2^{-n}, \\ e, &\text{otherwise.} \end{cases} \end{align*} $$
Clearly, f computably enumerates $\mathbb {N} \setminus A$.
(2) (a) $\Rightarrow $ (b): Suppose that $\alpha $ is computable. Then there exists a computable sequence $(b_m)_m$ of rational numbers with $\left |\alpha - b_m\right | < 2^{-m}$ for all $m \in \mathbb {N}$. Due to (1) it suffices to show that A is computably enumerable. Fix some $d \in A$ and define a computable function $g \colon \mathbb {N} \to \mathbb {N}$ via
$$ \begin{align*} g(\left\langle m,n\right\rangle ) := \begin{cases} n, &\text{if }b_m - a_n + 2^{-m} < 2^{-n}, \\ d, &\text{otherwise.} \end{cases} \end{align*} $$
Then g computably enumerates A; indeed, if for some m it holds that ${b_m - a_n + 2^{-m} < 2^{-n}}$ then ${\alpha - a_n < 2^{-n}}$; and for the other direction, if ${\alpha - a_n < 2^{-n}}$ holds, then for all m satisfying ${\alpha - a_n + 2\cdot 2^{-m} < 2^{-n}}$ we have ${b_m - a_n + 2^{-m} < 2^{-n}}$.
(b) $\Rightarrow $ (c): Trivial.
(c) $\Rightarrow $ (a): By assumption A is infinite. Define ${p_A \colon \mathbb {N} \to \mathbb {N}}$ as the uniquely determined increasing function with ${p_A(\mathbb {N}) = A}$. Suppose that A is not hyperimmune. Then there exists a computable function $r \colon \mathbb {N} \to \mathbb {N}$ with $r(n) \geq p_A(n)$ for all $n \in \mathbb {N}$ (see, for instance, Soare [Reference Soare15, Theorem V.2.3]). Then we have, for all $n \in \mathbb {N}$,
$$ \begin{align*} \alpha - a_{r(n)} \leq \alpha - a_{p_A(n)} < 2^{-p_A(n)} \leq 2^{-n}. \end{align*} $$
Hence, $\alpha $ is computable.
4. Regainingly approximable sets
In this section, we study sets $A\subseteq \mathbb {N}$ such that $2^{-A}$ is regainingly approximable. We start by characterizing these sets in ways analogous to Propositions 3.2 and 3.3, before then focusing on the particularly interesting case where A is c.e.
We need the following terminology and easy lemma: Following Soare [Reference Soare15] we call a sequence $(A_n)_n$ of finite sets $A_n\subseteq \mathbb {N}$ a strong array if the function $n\mapsto \sum _{i\in A_n} 2^i$ is computable, and a strong array $(A_n)_n$ a uniformly computable approximation from the left of a set $A\subseteq \mathbb {N}$ if:
(1) for all $n\in \mathbb {N}$, $A_n\subseteq \{0,\ldots ,n-1\}$,
(2) for all $n\in \mathbb {N}$, $2^{-A_n} \leq 2^{-A_{n+1}}$, and
(3) for all $i\in \mathbb {N}$, $\lim _{n\to \infty } A_n(i)=A(i)$.
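Finite sets occur here in two encodings: the canonical index $\sum _{i\in A_n} 2^i$ used in the definition of a strong array, and the dyadic value $2^{-A_n}$. The following small Python helpers (ours) make both encodings explicit with exact rational arithmetic.

```python
from fractions import Fraction

def canonical_index(finite_set):
    """Encode a finite subset of N by the number sum_{i in A} 2^i."""
    return sum(2 ** i for i in finite_set)

def from_canonical_index(k):
    """Recover the finite set from its canonical index."""
    return {i for i in range(k.bit_length()) if (k >> i) & 1}

def dyadic_value(finite_set):
    """2^{-F} = sum_{i in F} 2^{-(i+1)} as an exact rational."""
    return sum((Fraction(1, 2) ** (i + 1) for i in finite_set), Fraction(0))

A3 = {0, 2}                                  # say, a member A_3 of some strong array
assert from_canonical_index(canonical_index(A3)) == A3
print(dyadic_value(A3))                      # 5/8 = 1/2 + 1/8
```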
Lemma 4.1 (Soare [Reference Soare14]). For a set $A\subseteq \mathbb {N}$ the following conditions are equivalent.
(1) The number $2^{-A}$ is left-computable.
(2) There exists a uniformly computable approximation from the left of the set $A\subseteq \mathbb {N}$.
With this we are ready to establish the following characterization.
Theorem 4.2. Let $A\subseteq \mathbb {N}$ be a set such that the number $2^{-A}$ is left-computable. Then the following conditions are equivalent.
(1) The number $2^{-A}$ is regainingly approximable.
(2) There exists a uniformly computable approximation $(A_n)_n$ of A from the left such that, for infinitely many n,
$$\begin{align*}A\cap\{0,\ldots,n-1\} = A_n.\end{align*}$$
(3) For every uniformly computable approximation $(B_n)_n$ of A from the left there exists a computable, increasing function ${s\colon \mathbb {N}\to \mathbb {N}}$ such that, for infinitely many n,
$$\begin{align*}A\cap\{0,\ldots,n-1\} = B_{s(n)}\cap\{0,\ldots,n-1\}.\end{align*}$$
Proof $(3) \Rightarrow (2)$: By Lemma 4.1 some uniformly computable approximation $(B_n)_n$ of A from the left exists. Then by assumption a function s as in (3) must exist as well. Thus, (2) follows immediately by letting $A_n:=B_{s(n)} \cap \{0,\ldots ,n-1\}$ for all $n\in \mathbb {N}$.
 
$(2) \Rightarrow (1)$: Let $(A_n)_n$ be a uniformly computable approximation of A from the left such that ${A\cap \{0,\ldots ,n-1\} = A_n}$ for infinitely many n. The sequence $(2^{-A_n})_n$ is a nondecreasing computable sequence of rational numbers converging to $2^{-A}$. For the infinitely many n with ${A\cap \{0,\ldots ,n-1\} = A_n}$ we have
$$\begin{align*}2^{-A} - 2^{-A_n} \leq \sum\nolimits _{k=n+1}^\infty 2^{-k} = 2^{-n}.\end{align*}$$
Thus, $2^{-A}$ is regainingly approximable.
 
$(1) \Rightarrow (3)$: If A is cofinite then it is easy to see that for any computable approximation of A from the left there is a function s as required by (3); thus w.l.o.g. let A be coinfinite.
Let $(B_n)_n$ be an arbitrary uniformly computable approximation of A from the left. Then $(2^{-B_n})_n$ is a computable, nondecreasing sequence of rational numbers converging to $2^{-A}$. By Proposition 3.3 there exists a computable increasing function $r\colon \mathbb {N}\to \mathbb {N}$ such that, for infinitely many n, $2^{-A} - 2^{-B_{r(n)}} < 2^{-n}$. If there are infinitely many n with
$$\begin{align*}A\cap\{0,\ldots,n-1\} = B_{r(n)}\cap\{0,\ldots,n-1\},\end{align*}$$
then (3) holds with $s:=r$. Otherwise there is an $N>0$ such that for all $n\geq N$ we have $A\cap \{0,\ldots ,n-1\} \neq B_{r(n)}\cap \{0,\ldots ,n-1\}$; that is, one of:
(i) $2^{-A\cap \{0,\ldots ,n-1\}} + 2^{-n} \leq 2^{-B_{r(n)}\cap \{0,\ldots ,n-1\}}$
(ii) $2^{-B_{r(n)}\cap \{0,\ldots ,n-1\}} + 2^{-n} \leq 2^{-A\cap \{0,\ldots ,n-1\}}$
must hold. But (i), together with A’s coinfiniteness, would imply
$$\begin{align*}2^{-A} < 2^{-A\cap\{0,\ldots,n-1\}} + 2^{-n} \leq 2^{-B_{r(n)}\cap\{0,\ldots,n-1\}} \leq 2^{-B_{r(n)}} \leq 2^{-A}, \end{align*}$$
a contradiction. Thus, for $n\geq N$, (ii) must hold. Define $s\colon \mathbb {N}\to \mathbb {N}$ via

$$\begin{align*}s(n):=\begin{cases} n, & \text{if } n<N, \\ \min\{m\in\mathbb{N} \colon m>s(n-1) \text{ and } 2^{-B_m\cap\{0,\ldots,n-1\}} \geq 2^{-B_{r(n)}\cap\{0,\ldots,n-1\}} + 2^{-n}\}, & \text{if } n\geq N, \end{cases}\end{align*}$$
and note that this is well-defined as $\lim _{n\to \infty } B_n(i)=A(i)$ for all $i\in \mathbb {N}$ and as (ii) holds for all $n\geq N$. It is clear that s is increasing and computable. Fix any of the infinitely many $n\geq N$ with $2^{-A} - 2^{-B_{r(n)}} < 2^{-n}$.
We claim that $A\cap \{0,\ldots ,n-1\} = B_{s(n)}\cap \{0,\ldots ,n-1\}$. For the sake of a contradiction, assume otherwise; then one of
(iii) $2^{-A\cap \{0,\ldots ,n-1\}} + 2^{-n} \leq 2^{-B_{s(n)}\cap \{0,\ldots ,n-1\}}$
(iv) $2^{-B_{s(n)}\cap \{0,\ldots ,n-1\}} + 2^{-n} \leq 2^{-A\cap \{0,\ldots ,n-1\}}$
must hold. From (iii) we can deduce
$$\begin{align*}2^{-A} < 2^{-A\cap\{0,\ldots,n-1\}} + 2^{-n} \leq 2^{-B_{s(n)}\cap\{0,\ldots,n-1\}} \leq 2^{-B_{s(n)}} \leq 2^{-A}, \end{align*}$$
a contradiction; from (iv) we obtain
$$\begin{align*}\begin{array}{r@{\;}c@{\;}l} 2^{-A} &<& 2^{-B_{r(n)}} + 2^{-n} < 2^{-B_{r(n)}\cap\{0,\ldots,n-1\}} + 2^{-n} + 2^{-n} \\ &\leq & 2^{-B_{s(n)}\cap\{0,\ldots,n-1\}} + 2^{-n} \leq 2^{-A\cap\{0,\ldots,n-1\}} \leq 2^{-A} , \end{array}\end{align*}$$
another contradiction.
For the remainder of this section we will focus on c.e. sets.
Definition 4.3. We call a set $A\subseteq \mathbb {N}$ regainingly approximable if it is computably enumerable and the real number $2^{-A}$ is regainingly approximable.
That we resort to the number $2^{-A}$ in this definition concerning sets may seem unnatural and raises the question whether regainingly approximable sets could alternatively be defined using enumerations of their elements. This is indeed possible, as we will show now.
We call a total function $f\colon \mathbb {N}\to \mathbb {N}$ an enumeration of a set $A\subseteq \mathbb {N}$ if
$$\begin{align*}A = \{n\in\mathbb{N} \colon (\exists k \in\mathbb{N})\; f(k)=n+1\}. \end{align*}$$
If $f(k)=n+1$ then we say that $f$ enumerates $n$ into $A$ at stage $k$. Note that here $f(k)=0$ encodes that the function f does not enumerate anything into A at stage k. It is clear that a set $A\subseteq \mathbb {N}$ is computably enumerable if and only if there exists a computable enumeration of A. If $f\colon \mathbb {N}\to \mathbb {N}$ is an enumeration of a subset of $\mathbb {N}$ then, for $t\in \mathbb {N}$, we write
$$\begin{align*}\mathrm{Enum}(f)[t] :=\{n \in\mathbb{N} \colon (\exists k \in \mathbb{N})\; (k<t \text{ and } f(k)=n+1)\}. \end{align*}$$
Definition 4.4. Let $r\colon \mathbb {N}\to \mathbb {N}$ be a nondecreasing, unbounded function. We call an enumeration $f\colon \mathbb {N}\to \mathbb {N}$ of a set $A\subseteq \mathbb {N}$ r-good if there exist infinitely many n such that
$$\begin{align*}\{0,\ldots,n-1\} \cap A \subseteq \mathrm{Enum}(f)[r(n)]. \end{align*}$$
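The set $\mathrm {Enum}(f)[t]$ and the condition in Definition 4.4 are directly computable from f and r; the following Python sketch (ours) checks the condition on a finite prefix, keeping in mind that r-goodness itself is a statement about infinitely many n. The toy enumeration below is our own choice.

```python
def enum_up_to(f, t):
    """Enum(f)[t]: elements enumerated before stage t (f(k) = n+1 enumerates n)."""
    return {f(k) - 1 for k in range(t) if f(k) > 0}

def good_stages(f, r, A, bound):
    """The stages n < bound with {0,...,n-1} intersected with A contained in
    Enum(f)[r(n)].  Only a finite prefix can be inspected here."""
    return [n for n in range(bound) if set(range(n)) & A <= enum_up_to(f, r(n))]

# Toy instance (ours): A is the set of even numbers, f enumerates the n-th even
# number at stage n, and r is the identity; every stage turns out to be good.
A = set(range(0, 100, 2))
f = lambda k: 2 * k + 1
print(good_stages(f, lambda n: n, A, 20))
```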
Remark 4.5. We call an enumeration of some A an enumeration without repetitions if for every $n\in A$ there exists exactly one $k\in \mathbb {N}$ with $f(k)=n+1$. From a given enumeration f of A one can easily compute one without repetitions by defining, for all $k\in \mathbb {N}$,
$$\begin{align*}\widetilde{f}(k):= \begin{cases} f(k), & \text{if } f(k)>0 \text{ and } (f(k)-1) \not\in \mathrm{Enum}(f)[k], \\ 0, & \text{otherwise}. \end{cases} \end{align*}$$
Note that if f was r-good for some nondecreasing, unbounded function r then $\widetilde {f}$ is r-good as well.
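The passage from f to $\widetilde {f}$ is a stage-by-stage filter; here is a direct Python sketch (ours), applied to a toy enumeration that repeats a single element.

```python
def without_repetitions(f):
    """The enumeration f~ of Remark 4.5: keep f(k) only if it is nonzero and the
    element f(k)-1 has not already been enumerated at an earlier stage."""
    def f_tilde(k):
        if f(k) == 0:
            return 0
        seen_before = any(f(j) == f(k) for j in range(k))
        return 0 if seen_before else f(k)
    return f_tilde

# Toy enumeration with repetitions (ours): it announces the element 5 forever.
f = lambda k: 6
f_tilde = without_repetitions(f)
print([f_tilde(k) for k in range(5)])   # [6, 0, 0, 0, 0]
```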
Example 4.6. Let $A\subseteq \mathbb {N}$ be a decidable set. Then the function ${f\colon \mathbb {N}\to \mathbb {N}}$ defined by $f(n):=n+1$ if $n\in A$, and $f(n):=0$ if $n\not \in A$, is a computable and $\mathrm {id}_{\mathbb {N}}$-good enumeration without repetitions of A.
Theorem 4.7. For a c.e. $A\subseteq \mathbb {N}$ the following conditions are equivalent.
(1) The number $2^{-A}$ is regainingly approximable.
(2) There exists a computable $\mathrm {id}_{\mathbb {N}}$-good enumeration of A.
(3) There exists a computable, nondecreasing, unbounded function $r\colon \mathbb {N}\to \mathbb {N}$ such that there exists a computable r-good enumeration of A.
(4) For every computable enumeration f of A there exists a computable, increasing function $r\colon \mathbb {N}\to \mathbb {N}$ such that f is r-good.
In particular, in analogy to Proposition 3.3, if a set A is regainingly approximable then this will be apparent no matter which of its effective enumerations we look at.
Proof of Theorem 4.7
 
$(3) \Rightarrow (1)$: Let $r\colon \mathbb {N}\to \mathbb {N}$ be a computable, nondecreasing, unbounded function, and let $f\colon \mathbb {N}\to \mathbb {N}$ be a computable r-good enumeration of A. Then $(a_n)_n$ defined for all $n\in \mathbb {N}$ via
$$\begin{align*}a_n:= 2^{-\mathrm{Enum}(f)[r(n)]} \end{align*}$$
is a computable, nondecreasing sequence of rational numbers converging to $2^{-A}$; and since we have for infinitely many n that
$$\begin{align*}\{0,\ldots,n-1\} \cap A \subseteq \mathrm{Enum}(f)[r(n)] \end{align*}$$
it follows that $2^{-A} - a_n \leq \sum _{k=n}^\infty 2^{-k-1} = 2^{-n}$. Thus $2^{-A}$ is regainingly approximable.
 
$(1) \Rightarrow (4)$: Let $A\subseteq \mathbb {N}$ be a c.e. set such that $2^{-A}$ is regainingly approximable, and let $f\colon \mathbb {N}\to \mathbb {N}$ be an arbitrary computable enumeration of A. Then $(a_n)_n$ defined by $a_n:=2^{-\mathrm {Enum}(f)[n]}$, for all $n\in \mathbb {N}$, is a computable nondecreasing sequence of rational numbers converging to $2^{-A}$. By Proposition 3.3 there exists a computable, increasing function $r\colon \mathbb {N}\to \mathbb {N}$ such that, for infinitely many n, $2^{-A} - a_{r(n)} < 2^{-n}$. We obtain $\{0,\ldots ,n-1\} \cap A \subseteq \mathrm {Enum}(f)[r(n)]$ for infinitely many n. Hence, f is r-good.
 
$(4) \Rightarrow (3)$: Trivial, as every c.e. set has a computable enumeration.

$(2) \Rightarrow (3)$: Trivial.
 
$(3) \Rightarrow (2)$: Assume that ${r\colon \mathbb {N}\to \mathbb {N}}$ is a computable, nondecreasing, unbounded function and that $f\colon \mathbb {N}\to \mathbb {N}$ is a computable r-good enumeration of A. If A is finite, then f itself is trivially an $\mathrm {id}_{\mathbb {N}}$-good enumeration of A; so assume that A is infinite. Let $L_0, M_0:=\emptyset $ and for $n \geq 1$ define $L_n := \{0,\ldots ,n-1\}\cap \mathrm {Enum}(f)[r(n)]$ as well as $M_n := L_n \setminus L_{n-1}$; that is, $M_1,\ldots ,M_n$ forms a disjoint partition of $L_n$ for every n.

We let $g\colon \mathbb {N}\to \mathbb {N}$ be the enumeration that first enumerates all elements of $M_1$ in increasing order, then those of $M_2$ in increasing order, and so on. More formally speaking, $g\colon \mathbb {N}\to \mathbb {N}$ is defined as follows: For every $n\in \mathbb {N}$, let $m_n$ be the cardinality of $M_n$, let $k_0^{(n)},\ldots ,k_{m_n-1}^{(n)}$ be its elements in increasing order and, for t with $0\leq t \leq m_n-1$, define
$$\begin{align*}g\left( t + \sum_{j<n} m_j\right) := 1+k_t^{(n)}. \end{align*}$$
This is well-defined due to A’s infinity. Clearly, g is a computable enumeration (in fact, without repetitions) of A with $g(n)\neq 0$ for all n. We claim that it is in fact an $\mathrm {id}_{\mathbb {N}}$-good enumeration of A. To see this, fix any of the, by assumption, infinitely many n with
$$\begin{align*}\{0,\ldots,n-1\} \cap A \subseteq \mathrm{Enum}(f)[r(n)]. \end{align*}$$
By construction, g enumerates exactly the elements of
$$\begin{align*}L_{n-1} = M_0\cup \dots \cup M_{n-1}\end{align*}$$
during the first $\sum _{j<n} m_j$ stages and, by choice of n, all elements of $\{0,\ldots ,n-1\}\cap A$ that are not enumerated during these stages must be in $M_n$. In fact, they are exactly all elements of $M_n$, and thus will be enumerated by g in the immediately following stages, starting with stage $\sum _{j<n} m_j$. As there cannot be more than ${n-\sum _{j<n} m_j}$ such numbers, this process will be completed before stage n.
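The reordering used in this implication is entirely mechanical once f and r are given. The following Python sketch (ours) computes the blocks $M_1, M_2, \ldots $ and the first stages of the resulting enumeration g for a toy r-good enumeration; real inputs are infinite, so only a finite prefix is produced.

```python
def enum_up_to(f, t):
    """Enum(f)[t]: elements enumerated before stage t (f(k) = n+1 enumerates n)."""
    return {f(k) - 1 for k in range(t) if f(k) > 0}

def reorder_blocks(f, r, blocks):
    """First stages of the enumeration g from the proof of (3) => (2): list the
    elements of M_1 in increasing order, then those of M_2, and so on."""
    g_values, L_prev = [], set()
    for n in range(1, blocks + 1):
        L_n = {m for m in enum_up_to(f, r(n)) if m < n}   # L_n
        for k in sorted(L_n - L_prev):                    # the block M_n
            g_values.append(k + 1)
        L_prev = L_n
    return g_values

# Toy r-good enumeration (ours): A = N, the element n appears at stage 2n,
# and r(n) = 3n.  The reordered g enumerates n at stage n, so it is id-good.
f = lambda k: k // 2 + 1
print(reorder_blocks(f, lambda n: 3 * n, 12))             # [1, 2, ..., 12]
```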
Remark 4.8. Note that “$(3) \Rightarrow (2)$” is the only implication in Theorem 4.7 whose proof is not fully uniformly effective; in its proof we non-uniformly distinguished whether A is finite or not. A simple topological argument shows that this is unavoidable; in fact, there does not even exist a continuous function $F\colon \mathbb {N}^{\mathbb {N}}\to \mathbb {N}^{\mathbb {N}}$ mapping every $2\mathrm {id}_{\mathbb {N}}$-good enumeration without repetitions of an arbitrary $A \subseteq \mathbb {N}$ to an $\mathrm {id}_{\mathbb {N}}$-good enumeration of A. To see this, we use the following two facts:
(i) There exists only one enumeration of the empty set, namely the constant zero function $\mathbf {0}\colon \mathbb {N}\to \mathbb {N}$, which is an $\mathrm {id}_{\mathbb {N}}$-good enumeration.
(ii) For every function $f \colon \mathbb {N} \to \mathbb {N}$ that is an $\mathrm {id}_{\mathbb {N}}$-good enumeration of $\mathbb {N}$ we have $f(n) \neq 0$ for all $n \in \mathbb {N}$.
Now assume, for the sake of a contradiction, that there is an F as described. By (i), we must have $F(\mathbf {0})=\mathbf {0}$, and as F was assumed to be continuous, there exists an $n_0\in \mathbb {N}$ with $F(0^{n_0}\mathbb {N}^{\mathbb {N}})\subseteq 0\mathbb {N}^{\mathbb {N}}$; that is, F must map all inputs starting with sufficiently many $0$’s to a sequence starting with at least one $0$. Define $g\colon \mathbb {N}\to \mathbb {N}$ as an enumeration of $\mathbb {N}$ “delayed by $n_0$ stages”, that is, let
$$\begin{align*}g(n):=\begin{cases} 0, & \text{if } n<n_0, \\ 1+n-n_0, & \text{if } n\geq n_0, \end{cases} \end{align*}$$
for $n\in \mathbb {N}$. Then while g is a $2\mathrm {id}_{\mathbb {N}}$-good enumeration without repetitions of $\mathbb {N}$, we have $F(g)\in 0\mathbb {N}^{\mathbb {N}}$, and no such enumeration can be an $\mathrm {id}_{\mathbb {N}}$-good enumeration of $\mathbb {N}$, contradiction.
5. Computability-theoretic properties
We begin this section by observing an easy splitting theorem that we will need later. Next, we will show that there are c.e. sets that are not regainingly approximable. Finally, we will show that every c.e. Turing degree contains a regainingly approximable set.
Theorem 5.1. For any c.e. set $C\subseteq \mathbb {N}$ there exist two disjoint, regainingly approximable sets $A,B\subseteq \mathbb {N}$ with $C=A\cup B$.
Proof By Sacks’ Splitting Theorem [Reference Sacks13] (see, for instance, Soare [Reference Soare15, Theorem VII.3.2]) there exist two disjoint, c.e., low subsets $A,B\subseteq \mathbb {N}$ with $C=A\cup B$. Low sets are not high. Hence, by Example 1.2 and Theorem 4.7, A and B are regainingly approximable.
The theorem leaves open whether the splitting can be done effectively. The answer is yes: in Section 9 we will present a uniformly effective algorithm that even works for a larger class of objects than just the c.e. sets.
As corollaries to Theorem 5.1 we obtain two easy ways to see that the regainingly approximable sets are a strict superclass of the decidable ones.
Corollary 5.2. There exists a regainingly approximable set $A\subseteq \mathbb {N}$ that is not decidable.
First proof
By Example 1.2, any c.e. set that is neither decidable nor high is regainingly approximable.
Second proof
Let $C\subseteq \mathbb {N}$ be an undecidable c.e. set. By Theorem 5.1 there are two disjoint regainingly approximable sets $A,B$ such that ${C=A\cup B}$. At least one of A or B must be undecidable.
Next, we separate regaining approximability from computable enumerability.
Theorem 5.3. There exists a c.e. set $A\subseteq \mathbb {N}$ that is not regainingly approximable.
Proof Let $\varphi _0,\varphi _1,\varphi _2,\ldots $ be a standard enumeration of all possibly partial computable functions with domain and range in $\mathbb {N}$. As usual, we write $\varphi _e(n)[t]{\downarrow }$ to express that the eth Turing machine (which computes $\varphi _e$) stops after at most t steps on input n.
We shall construct a computable enumeration $g\colon \mathbb {N}\to \mathbb {N}$ of a set $A \subseteq \mathbb {N}$ such that the following requirements $(\mathcal {R}_{e})$ will be satisfied for all $e\in \mathbb {N}$:
$$\begin{align*}\begin{array}{r@{\;}l} (\mathcal{R}_{e})\colon & \text{if } \varphi_e \text{ is total and increasing then } \\ & (\exists n_e\in\mathbb{N}) (\forall n> n_e) (\{0,\ldots,n-1\} \cap A \not\subseteq \mathrm{Enum}(g)[\varphi_e(n)]). \end{array}\end{align*}$$
According to Theorem 4.7 this is sufficient.
We construct g in stages; in stage t we proceed as follows: Define $e:=\pi _1(\pi _1(t))$ and $k:=\pi _2(\pi _1(t))$, hence, $\langle e,k \rangle = \pi _1(t)$. Check whether the following conditions are satisfied:
$$ \begin{align*} & (\forall n\leq \langle e,k+1\rangle) \ \varphi_e(n)[t]{\downarrow} \\\text{and}\ &(\forall n < \langle e,k+1\rangle) \ \varphi_e(n) < \varphi_e(n+1) \\\text{and}\ &t \geq \varphi_e(\langle e,k+1\rangle ) \\\text{and}\ &\langle e,k \rangle \not\in \mathrm{Enum}(g)[t]. \end{align*} $$
If they are, set $g(t):=1+ \langle e,k \rangle $, otherwise $g(t):=0$.
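The following Python sketch (not the paper's code) spells out the check performed at a single stage t. The helpers pi1, unpair, pair, phi_halts, and phi_value are hypothetical stand-ins for the pairing function $\langle \cdot ,\cdot \rangle $, its projections, and the step-bounded evaluation of the functions $\varphi _e$.

```python
# Illustrative sketch only: the decision made at stage t of the construction.
def stage_value(t, enum_so_far, pi1, unpair, pair, phi_halts, phi_value):
    """Return g(t): 1 + <e,k> if all four conditions hold, and 0 otherwise.
    enum_so_far is the finite set Enum(g)[t] of numbers enumerated before stage t;
    phi_halts(e, n, t) means "phi_e(n)[t] halts", phi_value(e, n) is phi_e(n)."""
    m = pi1(t)                      # <e, k> = pi_1(t)
    e, k = unpair(m)                # e = pi_1(pi_1(t)), k = pi_2(pi_1(t))
    bound = pair(e, k + 1)          # <e, k+1>
    ok = (all(phi_halts(e, n, t) for n in range(bound + 1))
          and all(phi_value(e, n) < phi_value(e, n + 1) for n in range(bound))
          and t >= phi_value(e, bound)
          and m not in enum_so_far)
    return 1 + m if ok else 0
```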
This completes the construction; we proceed with the verification. It is clear that g is computable and an enumeration without repetitions of some c.e. set $A \subseteq \mathbb {N}$. We wish to show that $\mathcal {R}_e$ is satisfied for all $e\in \mathbb {N}$. Consider an $e\in \mathbb {N}$ such that $\varphi _e$ is total and increasing, as well as a number ${n> \langle e,0 \rangle }$. There exists a unique $k\in \mathbb {N}$ with ${\langle e,k \rangle < n \leq \langle e,k+1\rangle }$. The function g enumerates $\langle e,k\rangle $ into A at some uniquely determined stage t, that is, there exists exactly one $t \in \mathbb {N}$ with ${g(t)=1+\langle e,k\rangle }$. Then
$$\begin{align*}\langle e,k \rangle \in \mathrm{Enum}(g)[t+1] \setminus \mathrm{Enum}(g)[t].\end{align*}$$
Since $n \leq \langle e,k+1\rangle $, we have $\varphi _e(n) \leq \varphi _e( \langle e,k+1 \rangle ) \leq t$, and therefore
$$\begin{align*}\langle e,k \rangle \not\in\mathrm{Enum}(g)[t] \supseteq \mathrm{Enum}(g)[\varphi_e(n)]. \end{align*}$$
Thus $\langle e,k \rangle \in \{0,\ldots ,n-1\}\cap A$ witnesses that $\mathcal {R}_e$ is satisfied with $n_e=\langle e,0\rangle $.
Finally, we show that the regainingly approximable Turing degrees are exactly the c.e. Turing degrees.
Theorem 5.4. For every computably enumerable set $A \subseteq \mathbb {N}$ there exists a regainingly approximable set $B \subseteq \mathbb {N}$ with $A \equiv _T B$.
Proof W.l.o.g. we may assume that A is infinite. Thus fix some computable injective function $f \colon \mathbb {N} \to \mathbb {N}$ with $f(\mathbb {N}) = A$.
We will build a c.e. B by defining a computable injective function ${g\colon \mathbb {N}\to \mathbb {N}}$ and letting $B:=g(\mathbb {N})$. In parallel, we will define an increasing sequence $(S_i)_i$ such that:
- (i) for all i we have $\{0,\ldots ,S_i-1\} \cap B\subseteq \{g(0),\ldots ,g(S_i-1)\}$, which implies that B is regainingly approximable, and such that
- (ii) $A \equiv _T (S_i)_i \equiv _T B$.
To ensure these properties, we will define $(S_i)_i$ in such a way that, for all i and for all $t\geq S_i$, we have on the one hand that $f(t)\geq i$ and on the other hand that $g(t)\geq S_i$. This last property will imply (i); and concerning (ii), informally speaking, our choice of $(S_i)_i$ means that f and g can enumerate “small” numbers only for arguments smaller than $S_i$. This will allow us to show $A \leq _T (S_i)_i$ and $B \leq _T (S_i)_i$. Finally, the statements $(S_i)_i \leq _T A$ and $(S_i)_i \leq _T B$ will follow from the way we define $(S_i)_i$ alongside g.
After these informal remarks, we proceed with the full proof. The following algorithm works recursively in infinitely many stages to compute two functions $g \colon \mathbb {N} \to \mathbb {N}$ and ${s\colon \mathbb {N}^2\to \mathbb {N}}$; we write $s_i[t]$ for $s(i,t)$. Since it will turn out below that $(s_i[t])_t$ is eventually constant for every i, allowing us to define $S_i:=\lim _{t\to \infty } s_i[t]$, it is suggestive to think of $s_i[t]$ as our preliminary guess for $S_i$ at stage t.
At stage $0$ we let
$$ \begin{align*} s_i[0] := i \end{align*} $$
for all $i\in \mathbb {N}$. And for every $t\in \mathbb {N}$, at stage $t+1$ we define
$$ \begin{align*} g(t) := s_{f(t)}[t] \end{align*} $$
and set
$$ \begin{align*} s_i[t+1] := \begin{cases} s_i[t], &\text{if }i \leq f(t), \\ s_i[t] + \max\{t, g(0), \dots, g(t)\}+1, &\text{if }i> f(t), \\ \end{cases} \end{align*} $$
for all $i \in \mathbb {N}$. Finally, we define $B := g(\mathbb {N})$.
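As an illustration, the following Python sketch (not part of the proof) simulates finitely many stages of this construction for a hypothetical enumeration f; it only makes the bookkeeping of the values $g(t)$ and $s_i[t]$ explicit.

```python
# Illustrative sketch only: simulate stages 1, ..., T of the construction.
def simulate(f, T):
    fs = [f(t) for t in range(T)]
    s = {i: i for i in range(max(fs) + 2)}     # s_i[0] = i for all indices needed here
    g = []
    for t in range(T):                         # stage t+1:
        g.append(s[fs[t]])                     #   g(t) := s_{f(t)}[t]
        bump = max([t] + g) + 1                #   max{t, g(0), ..., g(t)} + 1
        for i in s:
            if i > fs[t]:                      #   s_i[t+1] := s_i[t] + bump if i > f(t)
                s[i] += bump
    return g, s                                # g(0..T-1) and the values s_i[T]

# Hypothetical injective enumeration of an infinite set A:
f_example = lambda n: [3, 0, 7, 5, 12, 9][n] if n < 6 else n + 10
g_vals, s_vals = simulate(f_example, 6)
```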
This ends the construction, and we proceed with the verification. First note that B is clearly computably enumerable.
Claim 1. For every $t\in \mathbb {N}$, the sequence $(s_i[t])_i$ is increasing.
Proof This is clear for $t=0$, and can easily be seen to hold for all other t by induction.
Claim 2. For every $i \in \mathbb {N}$, the sequence $(s_i[t])_{t}$ is nondecreasing and eventually constant.
Proof By construction $(s_i[t])_t$ is nondecreasing for every $i \in \mathbb {N}$. Fix some $i\in \mathbb {N}$; to see that $(s_i[t])_t$ is eventually constant, choose $t_i$ such that $f(t) \geq i$ for all $t\geq t_i$. Note that such a $t_i$ exists because f is injective and hence $(f(t))_t$ tends to infinity. Then, for every $t\geq t_i$, we have $s_{i}[t+1]=s_{i}[t]$, and thus $s_{i}[t]=s_{i}[t_i]$ for all $t\geq t_i$.
Define the sequence $(S_i)_i$ by $S_i:=\lim _{t\to \infty } s_i[t]$. By Claim 1, $(S_i)_i$ is increasing.
Claim 3. For every $i\in \mathbb {N}$ and every $t \geq S_i$ we have $s_i[t]=S_i$.
Proof Assume otherwise and choose $t\geq S_i$ with $s_i[t+1]\neq s_i[t]$. Then
$$ \begin{align*} S_i\geq s_i[t+1]=s_i[t]+\max\{t, g(0), \dots, g(t)\}+1> t \geq S_i, \end{align*} $$
a contradiction.
Claim 4. For every $i \in \mathbb {N}$ and every $t \geq S_i$ we have $f(t) \geq i$.
Proof Consider any $i \in \mathbb {N}$ and some stage $t \geq S_i$. If we had $f(t) < i$, then we would obtain
$$ \begin{align*} s_i[t+1] = s_i[t]+\max\{t, g(0), \dots, g(t)\}+1> s_i[t]=S_i, \end{align*} $$
contradicting Claim 3.
Claim 5. For every $i \in \mathbb {N}$ and every $t \geq S_i$ we have $g(t) \geq S_i$.
Proof Consider any $i \in \mathbb {N}$ and some stage $t \geq S_i$. By Claim 4, $f(t) \geq i$, and by Claim 1, $(s_j[t])_j$ is increasing. Together with Claim 3, we obtain $g(t) = s_{f(t)}[t] \geq s_i[t] = S_i$.
Claim 6. B is regainingly approximable.
Proof According to Claim 5 the function $n \mapsto g(n)+1$ is an $\mathrm {id}_{\mathbb {N}}$-good, computable enumeration of B, and hence B is regainingly approximable.
Claim 7. The function g is injective.
Proof Consider two stages $t_1 < t_2$; we need to show $g(t_1) \neq g(t_2)$. Since f is injective, we have $f(t_1) \neq f(t_2)$.
- • If $f(t_1) < f(t_2)$, then due to Claims 1 and 2 we have
$$ \begin{align*} g(t_1) = s_{f(t_1)}[t_1] < s_{f(t_2)}[t_1] \leq s_{f(t_2)}[t_2] = g(t_2). \end{align*} $$
- • If $f(t_1)> f(t_2)$, then $s_{f(t_1)}[t_1]> s_{f(t_2)}[t_1]$ holds due to Claim 1. If $s_{f(t_2)}[t_2] = s_{f(t_2)}[t_1]$ then
$$ \begin{align*} g(t_2)= s_{f(t_2)}[t_2] = s_{f(t_2)}[t_1] < s_{f(t_1)}[t_1] = g(t_1). \end{align*} $$
Otherwise $s_{f(t_2)}[t_2]> s_{f(t_2)}[t_1]$ due to Claim 2 and we obtain
$$ \begin{align*} g(t_2) = s_{f(t_2)}[t_2] \geq s_{f(t_2)}[t_1] + g(t_1) + 1> g(t_1). \end{align*} $$
Thus, g is injective.
Claim 8a. A is computable in $(S_i)_i$.
Proof Given oracle $(S_i)_i$, in order to decide whether $n\in A$ for arbitrary ${n \in \mathbb {N}}$, it suffices to check whether n is enumerated by f before stage ${S_{n+1}}$. If not, then since by Claim 4 we have $f(t) \geq n+1$ for all ${t \geq S_{n+1}}$, we must have $n \notin A$.
Claim 8b. B is computable in $(S_i)_i$.
Proof Given oracle $(S_i)_i$, in order to decide whether $n\in B$ for arbitrary $n \in \mathbb {N}$ we proceed as follows: Compute the smallest $i \in \mathbb {N}$ with $S_i> n$ and check whether n is enumerated by g before stage $S_{i}$. If not, then since by Claim 5 we have $g(t) \geq S_i> n$ for all $t \geq S_{i}$, we must have $n \notin B$.
Claim 9a. The sequence $(S_i)_i$ is computable in A.
Proof To determine $S_i$ for any $i \in \mathbb {N}$ it suffices to use oracle A to compute the smallest t such that
$$\begin{align*}A \cap \{0,\dots,i-1\} \subseteq \{f(0),\ldots,f(t-1)\} \end{align*}$$
and to output $S_i = s_i[t]$.
Claim 9b. The sequence $(S_i)_i$ is computable in B.
Proof We claim that there is a recursive algorithm using oracle B that computes $(S_i)_i$. By construction, we have $S_0 = 0$. So suppose that for ${i \in \mathbb {N}}$ the numbers $S_0, \dots , S_i$ are known. We claim that:
- (i) if $S_i\in B$ and if the uniquely determined (cf. Claim 7) number t with $g(t)=S_i$ satisfies $f(t) = i$ and $t \geq S_i$, then ${S_{i+1}=s_{i+1}[t+1]}$; and that
- (ii) $S_{i+1}=s_{i+1}[S_i]$ holds otherwise.
Assuming this claim, oracle B clearly computes $S_{i+1}$.
To see (i), assume that $S_i\in B$ and that the uniquely determined t with $g(t)=S_i$ satisfies $f(t) = i$ and $t \geq S_i$. By Claim 4 and because f is injective, for all $t'\geq t+1$ we obtain $f(t')\geq i+1$. This implies $S_{i+1}=s_{i+1}[t+1]$.
For (ii), if $S_{i+1}\neq s_{i+1}[S_i]$, then there must exist a number $t\geq S_i$ with ${s_{i+1}[t+1]\neq s_{i+1}[t]}$ and hence with $i+1>f(t)$. Then, by Claim 4, $f(t)=i$, and by Claim 3, $S_i=s_i[t]=s_{f(t)}[t]=g(t)$, hence, $S_i\in B$.
This concludes the proof that for every c.e. set $A \subseteq \mathbb {N}$ there exists a regainingly approximable set $B \subseteq \mathbb {N}$ with $A \equiv _T B$.
Corollary 5.5. There is a high regainingly approximable set.
6. Complexity of regainingly approximable numbers
Let $\Sigma :=\{0,1\}$, let $\Sigma ^n$ denote the set of finite binary strings of length n, and write $\Sigma ^*$ for $\bigcup _n \Sigma ^n$. For a binary string v, let $|v|$ denote its length. By $0^n$ we denote the string of length n that consists of n zeros, and for any infinite binary sequence A and any number $n\in \mathbb {N}$, by $A \restriction n$ we denote the string $A(0)\ldots A(n-1)$ of length n. For a function $f\colon \mathrm {dom}(f)\to \Sigma ^*$ with $\mathrm {dom}(f) \subseteq \Sigma ^*$ write
$$\begin{align*}C_f(w):=\min(\{|v|\colon f(v)=w\}\cup \{\infty\})\end{align*}$$
for every ${w\in \Sigma ^*}$. If $\mathrm {dom}(f)$ has the property that no two different elements in it can be prefixes of each other then we say that f has prefix-free domain; and in this case it is customary to write $K_f$ for $C_f$. Let C denote $C_{U_1}$ for some optimally universal Turing machine $U_1$ and let K denote $K_{U_2}$ for some optimally universal prefix-free Turing machine $U_2$ (see, for instance, Downey and Hirschfeldt [Reference Downey and Hirschfeldt4, Sections 3.1 and 3.5] for a discussion of such Turing machines). In the remainder of this section we will write U for $U_2$. The functions C and K are called the plain and the prefix-free Kolmogorov complexity, respectively.
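As a toy illustration of this definition (not taken from the paper), the following Python snippet computes $C_f$ for a finite partial function f given as a dictionary; for the actual functions C and K one would of course take f to be an optimally universal, respectively optimally universal prefix-free, Turing machine.

```python
# Illustrative sketch only: C_f(w) = min{|v| : f(v) = w}, or infinity if no
# v with f(v) = w exists.
def C_f(f, w):
    lengths = [len(v) for v, out in f.items() if out == w]
    return min(lengths) if lengths else float("inf")

f_toy = {"0": "1111", "10": "0000", "110": "1111"}   # hypothetical finite f with prefix-free domain
assert C_f(f_toy, "1111") == 1                        # the shortest description "0" has length 1
assert C_f(f_toy, "0101") == float("inf")             # "0101" has no description under f_toy
```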
An infinite binary sequence A is called i.o. K-trivial if there exists a constant $c\in \mathbb {N}$ such that, for infinitely many n,
$$\begin{align*}K(A \restriction n) \leq K(0^n) + c.\end{align*}$$
Recalling the conventions laid out in Section 2, we can make the following observation.
Proposition 6.1. Every regainingly approximable number $\alpha $ is i.o. K-trivial, and hence not Martin-Löf random.
Proof Assume w.l.o.g. that $\alpha $ is not computable and $\alpha \in (0,1)$. Let $(a_n)_n$ be an increasing, computable sequence of w.l.o.g. strictly positive rational numbers converging to $\alpha $ such that, for infinitely many n, $\alpha -a_n<2^{-n}$.
For every $n\in \mathbb {N}$, let $u_n$ be the binary string of length n with
$$\begin{align*}0.u_n \leq a_n < 0.u_n+2^{-n}.\end{align*}$$
If $u_n\neq 1^n$ let $v_n\in \Sigma ^n$ be such that $0.v_n = 0.u_n+2^{-n}$; otherwise let ${v_n:=u_n}$. Clearly, $(u_n)_n$ and $(v_n)_n$ are computable.
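For illustration (this is not part of the proof), the strings $u_n$ and $v_n$ can be computed from $a_n$ as in the following Python sketch; the sample value of $a_3$ is hypothetical.

```python
# Illustrative sketch only: u_n is the length-n string with
# 0.u_n <= a_n < 0.u_n + 2^{-n}; v_n is the next string of length n
# (or u_n itself if u_n = 1^n).
from fractions import Fraction

def u_and_v(a, n):
    k = int(a * 2**n)                          # floor(a * 2^n), so 0.u_n = k / 2^n
    u = format(k, "b").zfill(n)
    v = u if u == "1" * n else format(k + 1, "b").zfill(n)
    return u, v

print(u_and_v(Fraction(5, 8), 3))              # hypothetical a_3 = 0.101 in binary: ('101', '110')
```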
Define a computable function $f\colon \mathrm {dom}(f)\to \Sigma ^*$ with prefix-free domain $\mathrm {dom}(f)\subseteq \Sigma ^*$ as follows: If $w\in \Sigma ^*$ satisfies $U(w)=0^n$, for some $n\in \mathbb {N}$, then let $f(w0) := u_n$ and $f(w1) := v_n$. Otherwise we leave $f(wa)$ undefined for ${a\in \Sigma }$. Note that the domain of f is prefix-free.
By construction, for infinitely many n, we have $\alpha - 0.u_n < 2 \cdot 2^{-n}$. Then, for these n, we have $\alpha \restriction n \in \{u_n,v_n\}$; thus
$$\begin{align*}K(\alpha\restriction n) \leq^+ K_f(\alpha\restriction n) = K(0^n)+1,\end{align*}$$
and $\alpha $ is i.o. K-trivial.
In view of Proposition 6.1, it is natural to wonder whether regainingly approximable numbers can at the same time have infinitely many initial segments of high Kolmogorov complexity. The answer is yes, as demonstrated by the next theorem.
Theorem 6.2. There is a regainingly approximable number ${\alpha \in (0,1)}$ such that $K(\alpha \restriction n)> n$ for infinitely many n.
We point out that an analogous result cannot hold for plain Kolmogorov complexity; this is due to a result of Martin-Löf [Reference Martin-Löf9] (see Nies, Stephan, and Terwijn [Reference Nies, Stephan and Terwijn12, Proposition 2.4]) who proved that any infinite binary sequence A for which ${C(A\restriction n)>n}$ holds for infinitely many n is Martin-Löf random; thus such an A cannot be regainingly approximable.
Proof of Theorem 6.2
Fix a computable injective function $h\colon \mathbb {N}\to \Sigma ^*$ with ${h(\mathbb {N})=\mathrm {dom}(U)}$. For $t\in \mathbb {N}$ and $v\in \Sigma ^*$ write
$$\begin{align*}U^{-1}\{v\}[t] &:= \{u \in\Sigma^* \colon U(u)=v \text{ and } (\exists s < t) (h(s)=u) \},\\K(v)[t] &:= \begin{cases} \infty, & \text{if } U^{-1}\{v\}[t]=\emptyset, \\ \min\{|u|\colon u \in U^{-1}\{v\}[t]\}, & \text{otherwise}. \end{cases} \end{align*}$$
Then, for every $v\in \Sigma ^*$, the function $t\mapsto K(v)[t]$ is nonincreasing and eventually constant with limit $K(v)$.
We sketch the underlying idea before giving the formal proof. In order to ensure that $\alpha $ has initial segments of high prefix-free Kolmogorov complexity, we want to mimic Chaitin’s $\Omega $, that is, the sum $\sum _n 2^{-|h(n)|}$. Of course, overdoing this would lead to a Martin-Löf random number in the limit, and such numbers cannot be regainingly approximable. Instead, $\alpha $ is obtained from the series $\sum _n 2^{-|h(n)|}$ by multiplying its elements by certain weights. More precisely, at any given time some weight is fixed, and we keep adding elements to the series scaled by this weight. As continuing to do this forever would lead to a scaled copy of $\Omega $, that is, to some Martin-Löf random number, that process must eventually produce a rational number with an initial segment of high Kolmogorov complexity.
If at a stage t it looks as if that has happened, we change the weight to a new, smaller value that is at most $2^{-t}$. As all new terms that are added to the series after t will now be scaled by this new weight, their total sum cannot exceed $2^{-t}$. Doing this for infinitely many t then ensures regaining approximability of $\alpha $.
One issue with this approach is that we can never be sure about the stages when we change weights. This is because prefix-free Kolmogorov complexity is uncomputable, and we have to work with its approximations. Thus, it may turn out that what looked like an initial segment of high complexity at stage t really has much lower complexity, and as a result we may have dropped the weight too early. However, since $\alpha $ only needs to be left-computable, it is easy to deal with this: we simply retroactively scale up the weights of all terms that had been given too small a weight.
We now come to the formal construction, which will proceed in stages to define computable functions ${\ell ,r\colon \mathbb {N}^2\to \mathbb {N}}$ and $w\colon \mathbb {N}^2\to \mathbb {N}\cup \{\infty \}$ as well as a computable sequence $(a_t)_t$ of dyadic rational numbers in the interval $[0,1)$. We shall write $\ell (n)[t]$ for $\ell (n,t)$, $r(n)[t]$ for $r(n,t)$, and $w(n)[t]$ for $w(n,t)$.
At stage $0$ define
$$\begin{align*}\ell(n)[0]:=0 \text{ and } r(n)[0]:=n \text{ and } w(n)[0]:=\infty, \end{align*}$$
for all $n\in \mathbb {N}$, as well as $a_0:=0$.
At a stage t with $t>0$ first define $\ell (0)[t]:=0$ and, for $n>0$,
$$\begin{align*}\ell(n)[t] := \min\{m \in \mathbb{N} \colon m> \ell(n-1)[t] \text{ and } K(a_{t-1} \restriction m)[t]>m\}. \end{align*}$$
Note that this is well-defined because for a fixed t we have $K(v)[t]=\infty $ for almost all $v\in \Sigma ^*$. Intuitively speaking, $\ell (n)[t]$ is our guess at stage t about the length of what we hope will be the nth witness for the existence of infinitely many initial segments of $\alpha $ that have high complexity.
If there is no $i<t$ with $\ell (i)[t]\neq \ell (i)[t-1]$, then define
$$\begin{align*}r(n)[t] := r(n)[t-1] \text{ and } w(n)[t]:=w(n)[t-1], \end{align*}$$
for every $n\in \mathbb {N}$. Otherwise let $i_t$ denote the minimal such i and define
$$\begin{align*}r(n)[t] &:= \begin{cases} r(n)[t-1], & \text{if } n \leq i_t, \\ r(n)[t-1]+t, & \text{if } i_t < n, \end{cases} \\w(n)[t] &:= \begin{cases} \min(w(n)[t-1],r(i_t)[t]), & \text{if } n < t,\\ \infty, & \text{otherwise,} \end{cases}\end{align*}$$
for every $n\in \mathbb {N}$. In either case, using the convention $2^{-\infty }:=0$, define
$$\begin{align*}a_t := \sum_{n=0}^{t-1} 2^{-w(n)[t]} \cdot 2^{-|h(n)|}. \end{align*}$$
Using the terminology from the informal proof sketch above, r is employed to reduce weights when we believe that we discovered an initial segment of high complexity, with the intent of ensuring regaining approximability of $\alpha $. This is then used to define the function w determining the scaling factors that will be applied to the elements of the series $\sum _n 2^{-|h(n)|}$; note how the appearance of $i_t$ in the definition of w enables retroactively increasing these factors later if required.
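The following Python sketch (again not the paper's code) spells out the bookkeeping of a single stage $t>0$, restricted to indices $n<N$ for a fixed bound N. The stand-ins K_approx(v, t) for $K(v)[t]$ (which, as noted above, must be infinite for all but finitely many strings v at each stage) and h_len(n) for $|h(n)|$ are hypothetical, and the lists are assumed to be initialized at stage 0 as ell = [0]*N, r = list(range(N)), w = [INF]*N.

```python
# Illustrative sketch only: one stage t > 0 of the construction, for indices n < N.
from fractions import Fraction

INF = float("inf")

def stage(t, a_prev_bits, ell, r, w, K_approx, h_len, N):
    """Update ell, r, w in place from stage t-1 to stage t and return a_t;
    a_prev_bits(m) is the length-m prefix of the binary expansion of a_{t-1}."""
    ell_old = ell[:]
    ell[0] = 0                                   # ell(0)[t] := 0
    for n in range(1, N):                        # ell(n)[t]: least m > ell(n-1)[t]
        m = ell[n - 1] + 1                       # with K(a_{t-1}|m)[t] > m
        while K_approx(a_prev_bits(m), t) <= m:
            m += 1
        ell[n] = m
    changed = [i for i in range(min(t, N)) if ell[i] != ell_old[i]]
    if changed:                                  # some ell(i) with i < t has moved
        i_t = min(changed)
        for n in range(N):
            if n > i_t:
                r[n] += t                        # r(n)[t] := r(n)[t-1] + t for n > i_t
        for n in range(N):
            w[n] = min(w[n], r[i_t]) if n < t else INF
    # a_t := sum_{n < t} 2^{-w(n)[t]} * 2^{-|h(n)|}, with 2^{-infinity} = 0
    return sum(Fraction(1, 2 ** (w[n] + h_len(n)))
               for n in range(min(t, N)) if w[n] != INF)
```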
This ends the description of the construction; we proceed with the verification. It is easy to see that $\ell ,r, w$, and $(a_t)_t$ are computable; we need to argue that $\alpha :=\lim _{t\to \infty } a_t$ exists with the desired properties. The following claims are immediate.
Claim 1. For every $t\in \mathbb {N}$ the function $n\mapsto r(n)[t]$ is increasing.
Claim 2. For every $n\in \mathbb {N}$ the function $t\mapsto r(n)[t]$ is nondecreasing.
Claim 3. For every $t\in \mathbb {N}$ the function $n\mapsto w(n)[t]$ is nondecreasing and satisfies $w(n)[t]=\infty $ for all $n\geq t$.
Claim 4. For every $n\in \mathbb {N}$ the function $t\mapsto w(n)[t]$ is nonincreasing.
Due to Claim 3, using the convention $2^{-\infty }=0$, we can write
$$\begin{align*}a_t = \sum_{n=0}^\infty 2^{-w(n)[t]} \cdot 2^{-|h(n)|} , \end{align*}$$
for all $t\in \mathbb {N}$. By Claim 4, $(a_t)_t$ is nondecreasing, and since
$$\begin{align*}a_t \leq \sum_{n=0}^{t-1} 2^{-|h(n)|} < 1 , \end{align*}$$
for all $t\in \mathbb {N}$, its limit $\alpha $ exists and is a left-computable number in $(0,1]$.
Claim 5. Let $n_0,t_0\in \mathbb {N}$ be numbers such that:
- (i) $n_0<t_0$,
- (ii) $\ell (n_0)[t_0]\neq \ell (n_0)[t_0-1]$,
- (iii) for every $t\geq t_0$ and every $i<n_0$, $\ell (i)[t]=\ell (i)[t-1]$.
Then $w(n)[t]=w(n)[t_0]$ for all $n<t_0$ and all $t\geq t_0$.
Proof Fix any $n<t_0$ and $t>t_0$; in order to show ${w(n)[t]=w(n)[t_0]}$ we may inductively assume that the statement has already been proven for $t-1$, that is, that ${w(n)[t-1]=w(n)[t_0]}$ holds; thus it suffices to prove $w(n)[t]=w(n)[t-1]$.
This is clear by construction if there is no $i<t$ with ${\ell (i)[t]\neq \ell (i)[t-1]}$; thus let us assume that such an i does exist. In this case, by construction,
$$\begin{align*}w(n)[t]=\min (w(n)[t-1],r(i_t)[t])\end{align*}$$
and hence it is enough to show that $w(n)[t-1] \leq r(i_t)[t]$.
By definition we have $\ell (i_t)[t]\neq \ell (i_t)[t-1]$; thus (iii) implies $i_t\geq n_0$. Similarly, (ii) and (iii) imply $i_{t_0}=n_0$. Then using Claims 1 and 2 as well as the induction hypothesis we obtain
$$ \begin{align*} w(n)[t-1] &= w(n)[t_0] = \min(w(n)[t_0-1],r(i_{t_0})[t_0]) \\ &\leq r(i_{t_0})[t_0] = r(n_0)[t_0] \leq r(i_t)[t_0] \leq r(i_t)[t], \end{align*} $$
which concludes the proof.
Claim 6. For every $n\in \mathbb {N}$, the function $t\mapsto \ell (n)[t]$ is eventually constant.
Proof For the sake of a contradiction, assume that there is a smallest $n_0\in \mathbb {N}$ such that $t\mapsto \ell (n_0)[t]$ is not eventually constant. By the definitions above we must have $n_0>0$. Define
$$\begin{align*}t_0 := \min\left\{t> n_0 \colon \begin{array}{c} \ell(n_0)[t]\neq \ell(n_0)[t-1] \text{ and } \\ (\forall i< n_0) (\forall s \geq t) ( \ell(i)[s]=\ell(i)[s-1] ) \end{array} \right\} \end{align*}$$
and, for $j>0$,
$$\begin{align*}t_j := \min\{t> t_{j-1} \colon \ell(n_0)[t]\neq \ell(n_0)[t-1] \}. \end{align*}$$
The sequence $(t_j)_j$ is trivially increasing and $w(n)[t_j]=w(n)[t_0]$ for all $n<t_0$ and all $j\in \mathbb {N}$, by Claim 5.
We show by induction that $w(n)[t]\geq r(n_0)[t_0]$ for all $n\geq t_0$ and $t\geq t_0$:
- • For $t=t_0$ this is clear by Claim 3.
- • For $t>t_0$, if there is no $i<t$ with $\ell (i)[t]\neq \ell (i)[t-1]$, then the assertion is clear inductively by definition of w.
- • If $t>t_0$ and there is an $i<t$ with $\ell (i)[t]\neq \ell (i)[t-1]$ then,
- – if $n\geq t$ we have $w(n)[t]=\infty> r(n_0)[t_0]$, and
- – if $n<t$ then $w(n)[t] = \min (w(n)[t-1],r(i_t)[t]) \geq r(n_0)[t_0]$. This is because, on the one hand, $w(n)[t-1] \geq r(n_0)[t_0]$ can be assumed to hold inductively and, on the other hand, $r(i_t)[t] \geq r(n_0)[t] \geq r(n_0)[t_0]$ by Claims 1 and 2 together with the fact that $i_t\geq n_0$ by choice of $t_0$.
By choice of $t_0$ we have for every $j\in \mathbb {N}$ that $i_{t_j}=n_0$ and thus
$$\begin{align*}r(i_{t_j})[t_j]=r(n_0)[t_j]=r(n_0)[t_0]. \end{align*}$$
Combining this with the previous results and using the definition of w, for $j>0$ we see that
$$\begin{align*}w(n)[t_j]= r(n_0)[t_0] \text{ for } t_0 \leq n < t_j. \end{align*}$$
Hence, for all $j>0$ we obtain
$$ \begin{align*} a_{t_j} &= \sum_{n=0}^{t_j-1} 2^{-w(n)[t_j]} \cdot 2^{-|h(n)|} \\ &= \sum_{n=0}^{t_0-1} 2^{-w(n)[t_0]} \cdot 2^{-|h(n)|} + \sum_{n=t_0}^{t_j-1} 2^{-r(n_0)[t_0]} \cdot 2^{-|h(n)|} \\ &= b_0 + 2^{-r(n_0)[t_0]} \cdot \sum_{n=0}^{t_j-1} 2^{-|h(n)|} , \end{align*} $$
where $b_0$ is the rational number given by
$$\begin{align*}b_0 := \sum_{n=0}^{t_0-1} \left( 2^{-w(n)[t_0]} - 2^{-r(n_0)[t_0]} \right) \cdot 2^{-|h(n)|}. \end{align*}$$
Writing $\Omega :=\sum _{u\in \mathrm {dom}(U)} 2^{-|u|}=\sum _{n=0}^\infty 2^{-|h(n)|}$ we conclude that
$$\begin{align*}\alpha = b_0 + 2^{-r(n_0)[t_0]} \cdot \Omega .\end{align*}$$
Thus we see that $\alpha $ is Martin-Löf random, and by a result of Chaitin [Reference Chaitin3] (see, for instance, Downey and Hirschfeldt [Reference Downey and Hirschfeldt4, Corollary 6.6.2]) we have for all sufficiently large m that $K(\alpha \restriction m)> m$. Fix an $m_0>\ell (n_0-1)[t_0]$ with $K(\alpha \restriction m_0)> m_0$, and a $j_0$ such that, for all $t\geq t_{j_0}$, we have $a_{t-1} \restriction m_0 = \alpha \restriction m_0$. Then, for $t\geq t_{j_0}$ we obtain
$$\begin{align*}K(a_{t-1} \restriction m_0)[t] = K(\alpha \restriction m_0)[t] \geq K(\alpha \restriction m_0)> m_0, \end{align*}$$
hence, by definition, $\ell (n_0)[t]\leq m_0$ for any such t. Combining this with the fact that the map $t \mapsto \ell (n_0)[t]$ is by definition nondecreasing for $t\geq t_{j_0}$ shows that it must be eventually constant after all, a contradiction.
Define $L_n:=\lim _{t\to \infty } \ell (n)[t]$, for $n\in \mathbb {N}$. By the definition of $\ell $ we have that $(L_n)_n$ is increasing and that
$$ \begin{align} K(\alpha\restriction L_n)> L_n \text{ for all } n\in\mathbb{N}. \tag{$*$} \end{align} $$
It remains to show that $\alpha $ is regainingly approximable. To that end, define a sequence $(s_m)_m$ by
$$\begin{align*}s_m := \max\left(\{0\}\cup \{s \in\mathbb{N} \colon (\exists i<m)\; (i<s \text{ and } \ell(i)[s]\neq \ell(i)[s-1]) \}\right), \end{align*}$$
for all $m\in \mathbb {N}$; this maximum exists because, by Claim 6, each function $t\mapsto \ell (i)[t]$ changes its value only finitely often. Clearly, $(s_m)_m$ is nondecreasing.
Claim 7. The sequence $(s_m)_m$ is unbounded.
Proof For the sake of a contradiction, let us assume that $(s_m)_m$ is bounded and that $S\in \mathbb {N}$ is an upper bound. Then, for all $t>S$ and all ${m\in \mathbb {N}}$ there is no $i<m$ with $i<t$ and $\ell (i)[t]\neq \ell (i)[t-1]$. Hence, for all $t>S$ and $i<t$ we have $\ell (i)[t]=\ell (i)[t-1]$. This implies $w(n)[t]=w(n)[t-1]$ for all $n\in \mathbb {N}$ and all $t>S$. We obtain $a_t=a_{t-1}$ for all $t>S$, hence, $\alpha =a_S$. Thus $\alpha $ would be a rational number, which contradicts (*).
Consider an $m\in \mathbb {N}$ with $s_m>0$. We claim that
$$\begin{align*}\alpha - a_{s_m} < 2^{-s_m}. \end{align*}$$
By Claim 4, for every $n\in \mathbb {N}$, the function $t\mapsto w(n)[t]$ is eventually constant. Thus we can define $W(n):=\lim _{t\to \infty } w(n)[t]$ for every $n\in \mathbb {N}$ and write
$$\begin{align*}\alpha = \sum_{n=0}^\infty 2^{-W(n)} \cdot 2^{-|h(n)|}. \end{align*}$$
Claim 8. For $n < s_m$ we have $W(n)= w(n)[s_m]$.
Proof By definition of $s_m$, we have $\ell (i)[t] = \ell (i)[t-1]$ for all $t\geq s_m$ and all $i<i_{s_m}$. Thus, the claim follows from Claim 5 with $s_m$ in place of $t_0$ and $i_{s_m}$ in place of $n_0$.
Claim 9. For $n \geq s_m$ we have $W(n) \geq s_m$.
Proof It suffices to prove that $w(n)[t]\geq s_m$ for all $t\geq s_m$ and $n\geq s_m$. We proceed by induction over t. By Claim 3 the assertion is clear for $t=s_m$. Thus consider an arbitrary $t>s_m$. If there is no $i<t$ with $\ell (i)[t]\neq \ell (i)[t-1]$, then the assertion is clear inductively by definition of $w$.
So let us assume that there exists such an i. In case that $n\geq t$ we obtain $w(n)[t]=\infty>s_m$, and we are done. Otherwise, if $n<t$, then by definition $w(n)[t]=\min (w(n)[t-1],r(i_t)[t])$. Assuming inductively that the assertion is already proven for $t-1$, it is thus enough to show $r(i_t)[t]\geq s_m$. By the definition of $s_m$ we have $i_t>i_{s_m}$, thus by Claim 2,
$$\begin{align*}r(i_t)[t] \geq r(i_t)[s_m] = r(i_t)[s_m-1]+s_m \geq s_m, \end{align*}$$
completing the proof of Claim 9.
Using Claim 8 it follows that
$$\begin{align*}a_{s_m} = \sum_{n=0}^{s_m-1} 2^{-w(n)[s_m]} \cdot 2^{-|h(n)|} = \sum_{n=0}^{s_m-1} 2^{-W(n)} \cdot 2^{-|h(n)|},\end{align*}$$
and thus, using Claim 9, we can conclude that
$$ \begin{align*} \alpha - a_{s_m} &= \sum_{n=s_m}^\infty 2^{-W(n)} \cdot 2^{-|h(n)|}\\ &\leq \sum_{n=s_m}^\infty 2^{-s_m} \cdot 2^{-|h(n)|}\\ &< 2^{-s_m} \cdot \Omega \\ &< 2^{-s_m}. \end{align*} $$
7. Complexity of regainingly approximable sets
As every c.e. set is i.o. K-trivial by a result of Barmpalias and Vlek [Reference Barmpalias and Vlek2, Proposition 2.2], this holds in particular for every regainingly approximable set. Thus, in analogy to the results in the last section, it is natural to wonder if a regainingly approximable set may at the same time have infinitely many initial segments of high Kolmogorov complexity.
Of course, in the setting of c.e. sets, there is no hope of achieving complexities as high as in the previous section; this is because for any such set A we trivially have $C(A \restriction n ) \leq ^+ 2 \log n$ for all n. Kummer [Reference Kummer7, Theorem 3.1] showed that there exist c.e. sets that achieve this trivial upper bound infinitely often; to be precise, that there exists a c.e. set A and a constant $d\in \mathbb {N}$ such that, for infinitely many n,
$$ \begin{align} C(A \restriction n ) \geq 2 \log n - d. \end{align} $$
Such sets have been named Kummer complex (see, for instance, Downey and Hirschfeldt [Reference Downey and Hirschfeldt4, Section 16.1]). We will show that there are Kummer complex regainingly approximable sets.
For prefix-free complexity, analogous observations can be made: in this setting it is easy to see that for every computably enumerable set A we have
 $$\begin{align*}K(A \restriction n) \leq^+ 2\log (n) + 2\log \log (n)\end{align*}$$
for all $n \in \mathbb {N}$. Barmpalias and Downey [Reference Barmpalias and Downey1, Theorem 1.10] constructed a c.e. set A and a number $d\in \mathbb {N}$ with
 $$\begin{align*}K(A \restriction n) \geq 2\log(n) + \log\log(n) - d\end{align*}$$
for infinitely many $n \in \mathbb {N}$; for the purposes of this article we will call such sets Kummer K-complex. Again, we will show that there are Kummer K-complex regainingly approximable sets.
Thus, in this sense, regainingly approximable sets can achieve both maximal plain and maximal prefix-free complexity infinitely often.
Theorem 7.1. There exists a regainingly approximable set that is Kummer complex.
We will use the following lemma in the proof.
Lemma 7.2. For any c.e. sets $X, Y \subseteq \mathbb {N}$ there exists some $d \in \mathbb {N}$ with
 $$ \begin{align} C((X\cup Y)\restriction n) \leq \max\{C(X\restriction n), C(Y\restriction n)\} + d \tag{$\ddagger$} \end{align} $$
for all $n \in \mathbb {N}$.
 In the remainder of this section we write U for the Turing machine $U_1$ that was used to define plain Kolmogorov complexity.
Proof of Lemma 7.2
 Let $f_X\colon \mathbb {N}\to \mathbb {N}$ and $f_Y\colon \mathbb {N}\to \mathbb {N}$ be computable enumerations (in the sense defined in Section 4) of X and Y, respectively. Then $f_Z\colon \mathbb {N}\to \mathbb {N}$ defined by
 $$\begin{align*}f_Z(n):=\begin{cases} f_X(n/2), & \text{if } n \text{ is even}, \\ f_Y((n-1)/2), & \text{if } n \text{ is odd}, \end{cases} \end{align*}$$
is a computable enumeration of $Z:=X\cup Y$. Define a computable function $g\colon \mathrm {dom}(g)\to \Sigma ^*$ with $\mathrm {dom}(g)\subseteq \Sigma ^*$ as follows for every $v\in \Sigma ^*$:
- 
• If $U(v)$ is defined and if there is a $t\in \mathbb {N}$ such that
 $$\begin{align*}U(v) = \mathrm{Enum}(f_X)[t] \restriction |U(v)|, \end{align*}$$
then let t be the smallest such number and set
 $$\begin{align*}g(v0):= \mathrm{Enum}(f_Z)[2t] \restriction |U(v)|.\end{align*}$$
Otherwise we leave $g(v0)$ undefined.
- 
• If $U(v)$ is defined and if there is a $t\in \mathbb {N}$ such that
 $$\begin{align*}U(v) = \mathrm{Enum}(f_Y)[t] \restriction |U(v)|,\end{align*}$$
then let t be the smallest such number and set
 $$\begin{align*}g(v1):= \mathrm{Enum}(f_Z)[2t] \restriction |U(v)|.\end{align*}$$
Otherwise we leave $g(v1)$ undefined.
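To make the computable part of this construction concrete, here is a minimal Python sketch (not taken from the article). It assumes the coding convention for enumerations used throughout the article, namely that the value $0$ means that no element is enumerated at a stage and the value $n+1$ means that the element n is enumerated; all function names are ours.

```python
def interleave(f_X, f_Y):
    """The enumeration f_Z of Z = X ∪ Y obtained by interleaving f_X and f_Y:
    even stages copy f_X, odd stages copy f_Y."""
    def f_Z(n):
        return f_X(n // 2) if n % 2 == 0 else f_Y((n - 1) // 2)
    return f_Z


def enum(f, t):
    """Enum(f)[t]: the finite set enumerated by f within the first t stages
    (value 0 codes 'no element', value n+1 codes the element n)."""
    return {f(k) - 1 for k in range(t) if f(k) > 0}
```

With this convention, $\mathrm{Enum}(f_Z)[2t]$ contains both $\mathrm{Enum}(f_X)[t]$ and $\mathrm{Enum}(f_Y)[t]$, which is exactly the fact that the definition of g exploits.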
 For the verification, consider any $n \in \mathbb {N}$. Let $s_X\in \mathbb {N}$ be the smallest number with $\mathrm {Enum}(f_X)[s_X]\restriction n = X\restriction n$, and let $s_Y\in \mathbb {N}$ be the smallest number with $\mathrm {Enum}(f_Y)[s_Y]\restriction n = Y\restriction n$. Then $s:=\max \{s_X,s_Y\}$ satisfies $\mathrm {Enum}(f_Z)[2s]\restriction n = Z\restriction n$.
 If $s=s_X$, then for every $v\in \Sigma ^*$ with $U(v)=X\restriction n$ we obtain $g(v0)=Z\restriction n$. Similarly, if $s=s_Y$, then for every $v\in \Sigma ^*$ with $U(v)=Y\restriction n$ we obtain $g(v1)=Z\restriction n$. Thus,
 $$\begin{align*}C(Z\restriction n) \leq^+ C_g(Z\restriction n) \leq \max\{C(X\restriction n), C(Y\restriction n)\} + 1 . \end{align*}$$
Note that the proof works equally well with prefix-free complexity in place of plain complexity, leading to the following corollary.
Corollary 7.3. For any c.e. sets $A, B \subseteq \mathbb {N}$ there exists some $d \in \mathbb {N}$ with
 $$\begin{align*}K((A\cup B)\restriction n) \leq \max\{K(A\restriction n), K(B\restriction n)\} + d \end{align*}$$
for all $n \in \mathbb {N}$.
Proof of Theorem 7.1
 Let $Z \subseteq \mathbb {N}$ be a c.e. Kummer complex set. By Theorem 5.1 there exist regainingly approximable sets ${X, Y \subseteq Z}$ with $X \cup Y = Z$. Fix a constant $d \in \mathbb {N}$ that is large enough to witness the Kummer complexity (†) of Z and such that (‡) is true for all n. If we let $c:=2d$, then we have, for the infinitely many n satisfying (†), that
 $$\begin{align*}2\log(n) - d \leq C(Z \restriction n) \leq \max\{C(X \restriction n), C(Y \restriction n) \} + d, \end{align*}$$
hence,
 $$\begin{align*}\max\{C(X \restriction n), C(Y \restriction n) \} \geq 2\log(n) - c, \end{align*}$$
and at least one of X and Y has to be Kummer complex.
By applying the same proof to a set Z that is Kummer K-complex instead of Kummer complex and by replacing the use of Lemma 7.2 by that of Corollary 7.3, we can also obtain the following result.
Theorem 7.4. There exists a regainingly approximable set that is Kummer K-complex.
8. Arithmetical properties
The class of regainingly approximable sets is not closed under union, according to Theorems 5.1 and 5.3. The following limited closure properties do hold, however, and will be useful in the proof of the next theorem.
Lemma 8.1.
- 
(1) The union of a regainingly approximable set and a decidable set is regainingly approximable. 
- 
(2) If A is a regainingly approximable set and $f\colon \mathbb {N}\to \mathbb {N}$ is a computable, nondecreasing function, then the set
 $$\begin{align*}f(A):=\{n\in\mathbb{N} \colon (\exists k\in A)\; n=f(k)\}\end{align*}$$
is regainingly approximable.
Proof Let $A\subseteq \mathbb {N}$ be a regainingly approximable set. By Theorem 4.7 there exists a computable $\mathrm {id}_{\mathbb {N}}$-good enumeration $g\colon \mathbb {N}\to \mathbb {N}$ of A. For the first assertion, let $B\subseteq \mathbb {N}$ be a decidable set. Then the function $h\colon \mathbb {N}\to \mathbb {N}$ defined by $h(2n):=g(n)$ and $h(2n+1):=n+1$ if $n\in B$, $h(2n+1):=0$ if $n\not \in B$, is a computable and $(n\mapsto 2n)$-good enumeration of ${A\cup B}$.
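A minimal Python sketch of this enumeration h, again under the convention that the value 0 means "no element" and $n+1$ means "the element n"; the decision procedure in_B for B is our own name.

```python
def union_with_decidable(g, in_B):
    """Enumeration h of A ∪ B built from an enumeration g of A and a
    decision procedure in_B for the decidable set B (Lemma 8.1(1)):
    even stages copy g, odd stages enumerate the elements of B."""
    def h(m):
        if m % 2 == 0:                       # h(2n) := g(n)
            return g(m // 2)
        n = (m - 1) // 2
        return n + 1 if in_B(n) else 0       # h(2n+1) := n+1 if n ∈ B, else 0
    return h
```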
 For the second assertion, let $f\colon \mathbb {N}\to \mathbb {N}$ be a computable, nondecreasing function. Then the function $h\colon \mathbb {N}\to \mathbb {N}$ defined by
 $$\begin{align*}h(n):=\begin{cases} 0, & \text{if } g(n)=0, \\ f(g(n)-1)+1, & \text{if } g(n)>0, \end{cases} \end{align*}$$
for $n\in \mathbb {N}$, is a computable enumeration of $f(A)$. If f is bounded then $f(A)$ is finite and h is trivially an $\mathrm {id}_{\mathbb {N}}$-good enumeration of $f(A)$. Thus assume that f is unbounded. Then the function $r\colon \mathbb {N}\to \mathbb {N}$ defined by
 $$\begin{align*}r(n) := \max\{m\in\mathbb{N} \colon f(m)\leq n\} , \end{align*}$$
for $n\in \mathbb {N}$, is computable, nondecreasing, and unbounded. We claim that h is an r-good enumeration of $f(A)$. By assumption, the set
 $$\begin{align*}B := \{m\in\mathbb{N} \colon \{0,\ldots,m-1\}\cap A \subseteq \mathrm{Enum}(g)[m]\} \end{align*}$$
is infinite. So is the set $C:=f(B)$. Consider numbers $n\in C$ and $m\in B$ with $f(m)=n$. Then $m \leq r(n)$ and we obtain
 $$ \begin{align*} \{0,\ldots,n-1\}\cap f(A) &= \{0,\ldots,f(m)-1\}\cap f(A) \\ &\subseteq f(\{0,\ldots,m-1\}\cap A) \\ &\subseteq f(\mathrm{Enum}(g)[m]) \\ &= \mathrm{Enum}(h)[m] \\ &\subseteq \mathrm{Enum}(h)[r(n)]. \end{align*} $$
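The enumeration h of $f(A)$ and the function r used above admit an equally short sketch; it is offered only as an illustration, with our own names, and it assumes for r that $f(0)\leq n$ so that the maximum exists.

```python
def image_enumeration(g, f):
    """Enumeration h of f(A) obtained from an enumeration g of A and a
    computable nondecreasing f (Lemma 8.1(2)): whenever g enumerates k,
    h enumerates f(k)."""
    def h(n):
        return 0 if g(n) == 0 else f(g(n) - 1) + 1
    return h


def r(f, n):
    """r(n) = max{m : f(m) <= n} for an unbounded, nondecreasing, computable f
    (assumed here to satisfy f(0) <= n); the search terminates because f is
    unbounded."""
    m = 0
    while f(m + 1) <= n:
        m += 1
    return m
```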
The class of regainingly approximable sets is also not closed under intersection, as the following theorem establishes.
Theorem 8.2. There exist regainingly approximable sets $A,B\subseteq \mathbb {N}$ such that $A\cap B$ is not regainingly approximable.
Proof For natural numbers $a,b$ and a set $D\subseteq \mathbb {N}$ we write $(a\cdot D + b)$ for the set
 $$\begin{align*}(a\cdot D + b) :=\{ n\in\mathbb{N} \colon (\exists d\in D)\; n=a\cdot d + b \} \end{align*}$$
and $(a\cdot D)$ for the set $(a\cdot D + 0)$. By Theorem 5.3 there exists a c.e. set $\widetilde {C}\subseteq \mathbb {N}$ that is not regainingly approximable. By Theorem 5.1 there exist two disjoint, regainingly approximable sets $\widetilde {A},\widetilde {B}\subseteq \mathbb {N}$ with $\widetilde {A} \cup \widetilde {B} = \widetilde {C}$. By Lemma 8.1 the sets
 $$\begin{align*}A:=(2\cdot \widetilde{A}) \cup (2\cdot\mathbb{N}+1) \ \text{ and } \ B:=(2\cdot\widetilde{B}+1) \cup (2\cdot \mathbb{N}) \end{align*}$$
are regainingly approximable. We claim that their intersection
 $$\begin{align*}A\cap B = (2\cdot \widetilde{A}) \cup (2\cdot \widetilde{B}+1)\end{align*}$$
is not regainingly approximable. To see this, let $g\colon \mathbb {N}\to \mathbb {N}$ be defined by $g(n)= \lfloor n/2 \rfloor $ for all $n\in \mathbb {N}$. We observe ${\widetilde {C}=g(A\cap B)}$. Thus, if $A\cap B$ were a regainingly approximable set, then so would be $\widetilde {C}$ according to Lemma 8.1(2), a contradiction.
We return to the study of regainingly approximable numbers.
Corollary 8.3.
- 
(1) There exists a strongly left-computable number that is not regainingly approximable. 
- 
(2) There exists a strongly left-computable number that is regainingly approximable but not computable. 
Proof The first assertion follows from Theorems 4.7 and 5.3. The second assertion follows from Corollary 5.2 and Theorem 4.7 and from the well-known fact that, for any $A\subseteq \mathbb {N}$, the number $2^{-A}$ is computable if and only if A is decidable.
 From now on, let $\leq _{\mathrm {S}}$ denote Solovay [Reference Solovay16] reducibility (see, for instance, Downey and Hirschfeldt [Reference Downey and Hirschfeldt4, Section 9.1]) between left-computable numbers. The following result states that the regainingly approximable numbers are closed downwards with respect to this reducibility.
Proposition 8.4. Let $\beta $ be a regainingly approximable number, and let $\alpha $ be a left-computable number with $\alpha \leq _{\mathrm {S}} \beta $. Then $\alpha $ is regainingly approximable as well.
Proof Let $f\colon \{q\in \mathbb {Q} \colon q < \beta \}\to \mathbb {Q}$ be a computable function and ${c\in \mathbb {N}}$ be a number such that, for all $q\in \{q\in \mathbb {Q} \colon q < \beta \}$, we have $f(q)<\alpha $ and ${\alpha - f(q) < 2^c \cdot (\beta - q)}$. By Proposition 3.2 there exists a computable and increasing sequence $(b_n)_n$ of rational numbers converging to $\beta $ such that $\beta - b_n < 2^{-n-c}$ for infinitely many ${n\in \mathbb {N}}$. The sequence $(a_n)_n$ defined by
 $$\begin{align*}a_n := \max\{f(b_i) \colon 0 \leq i \leq n\} \end{align*}$$
is a nondecreasing, computable sequence of rational numbers converging to $\alpha $. For the infinitely many n with $\beta - b_n < 2^{-n-c}$ we obtain
 $$\begin{align*}\alpha - a_n \leq \alpha - f(b_n) < 2^c \cdot(\beta - b_n) < 2^{-n}. \end{align*}$$
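The passage from $(b_n)_n$ to $(a_n)_n$ is simply a running maximum; the following hedged Python sketch (names are ours) computes a finite initial segment of it.

```python
def transfer_approximation(f, b):
    """Given the function f from the Solovay reduction and a finite initial
    segment b = [b_0, ..., b_N] of an approximation to beta, return the
    corresponding segment a_n = max{f(b_i) : 0 <= i <= n} of the
    approximation to alpha used in the proof of Proposition 8.4."""
    a, current_max = [], None
    for b_n in b:
        value = f(b_n)
        current_max = value if current_max is None else max(current_max, value)
        a.append(current_max)
    return a
```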
Corollary 8.5.
- 
(1) If the sum of two left-computable numbers is regainingly approximable, then both of them are regainingly approximable. 
- 
(2) The sum of a regainingly approximable number and a computable number is again a regainingly approximable number. 
Proof The first assertion follows from Proposition 8.4 and from the fact that for any two left-computable numbers $\alpha $, $\beta $ one has $\alpha \leq _{\mathrm {S}} \alpha +\beta $ and $\beta \leq _{\mathrm {S}} \alpha +\beta $.
Since adding a computable number to a left-computable number does not change its Solovay degree, the second assertion follows from Proposition 8.4 as well.
Corollary 8.6. Every strongly left-computable number can be written as the sum of two strongly left-computable numbers that are regainingly approximable.
Corollary 8.7. There exist two strongly left-computable and regainingly approximable numbers whose sum is not regainingly approximable.
Proof According to Corollary 8.3(1), there exists a strongly left-computable number $\gamma $ that is not regainingly approximable. Then, according to Corollary 8.6, there exist two strongly left-computable and regainingly approximable numbers $\alpha $, $\beta $ with ${\alpha +\beta =\gamma }$ that witness the truth of the assertion.
Corollary 8.6 raises the question whether every left-computable number can be written as the sum of two regainingly approximable numbers; the answer is no. This follows from Proposition 6.1, from the fact that there exist Martin-Löf random left-computable numbers, and from the result of Downey, Hirschfeldt, and Nies [Reference Downey, Hirschfeldt and Nies5, Corollary 3.6] that the sum of two left-computable numbers that are not Martin-Löf random is again not Martin-Löf random.
9. Splitting regular reals
 In Theorem 5.1 we have seen that for any c.e. set $C\subseteq \mathbb {N}$ there exist two disjoint, regainingly approximable sets $A,B\subseteq \mathbb {N}$ with $C=A\cup B$. That result left open whether this splitting can be obtained effectively. We now give a positive answer by presenting a corresponding algorithm in the following lemma. We will formulate this splitting algorithm in a more general form so that we may apply it not only to the class of c.e. sets, but in fact to a larger class of objects, namely to the regular reals as defined by Wu [Reference Wu18].
 For a function $f\colon \mathbb {N}\to \mathbb {N}$ and a set $S\subseteq \mathbb {N}$ from now on we will write
 $$\begin{align*}f^{-1}S:=\{i\in\mathbb{N}\colon f(i)\in S\}. \end{align*}$$
Lemma 9.1. There is an algorithm which, given a function $f\colon \mathbb {N}\to \mathbb {N}$ with the property that $f^{-1}\{n+1\}$ is finite for every n, computes two functions $g,h\colon \mathbb {N}\to \mathbb {N}$ such that for every n
 $$ \begin{align} |g^{-1}\{n+1\}| + |h^{-1}\{n+1\}| = |f^{-1}\{n+1\}| \tag{$\ast$} \end{align} $$
and such that there exists an increasing sequence $(S_i)_i$ with
 $$ \begin{align*} & g^{-1}\{1,\ldots,S_i\}\subseteq \{0,\ldots,S_i-1\} \text{ for all even } i \text{ and } \\ & h^{-1}\{1,\ldots,S_i\}\subseteq \{0,\ldots,S_i-1\} \text{ for all odd } i. \end{align*} $$
Proof Let a function f as in the statement be given. We will describe an algorithm that works in stages to define the desired functions g and h, as well as a function $s\colon \mathbb {N}\times \mathbb {N}\to \mathbb {N}$. We write $s_i[t]$ for $s(i,t)$ to express the intuition that $s_i[t]$ is our current guess for $S_i$ at stage t.
 We begin with a high-level overview of the proof strategy: Whenever f enumerates a number n at stage t, we need to let either g or h enumerate it. To decide between the two, we follow a greedy strategy in the following sense: for every t the sequence $(s_i[t])_i$ will be increasing; if at stage t the largest number i with $s_i[t]\leq n$ is even then we let g enumerate n, otherwise h. As we will show, this is sufficient to prove the statement of the lemma; in particular, for every i, $(s_i[t])_t$ will turn out to be nondecreasing and eventually constant, allowing us to define ${S_i:=\lim _{t\to \infty } s_i[t]}$.
With this we are ready to give the formal proof.
 At stage $0$ we define $s_i[0]:=i$ for all $i\in \mathbb {N}$.
 At a stage $t+1$ with $t\in \mathbb {N}$ we proceed as follows:
- 
• If $f(t)=0$ then we set $g(t):=0$ and $h(t):=0$. Intuitively speaking this means that f does not enumerate any new element at stage t, and that we therefore do not let g and h enumerate any new elements either. We also set $s_i[t+1]:=s_i[t]$ for all $i\in \mathbb {N}$.
- 
• If $f(t)>0$ then this means that $n:=f(t)-1$ is enumerated by f at stage t. We want to let n be enumerated by either g or by h, as follows: If
 $$\begin{align*}k_t := \min\{j\in\mathbb{N} \colon s_j[t]> n\} \end{align*}$$
is even then we set $g(t):=0$ and $h(t):=n+1$ (which means that n is enumerated by h); otherwise, if $k_t$ is odd, then we set $g(t):=n+1$ and $h(t):=0$ (which means that n is enumerated by g). Furthermore, we define $s_i[t+1]$ for all $i\in \mathbb {N}$ by
 $$\begin{align*}s_i[t+1] := \begin{cases} s_i[t], & \text{if } i \leq k_t, \\ s_i[t] + t + 1, & \text{if } k_t < i. \end{cases} \end{align*}$$
This ends the description of stage $t+1$ and of the algorithm; a compact Python sketch of the procedure is given below, after which we proceed with the verification.
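The sketch is purely illustrative and not part of the formal proof. It runs the algorithm for finitely many stages; since only the values $s_0[t],\ldots ,s_{k_t}[t]$ are ever inspected at a stage, it stores only finitely many of them explicitly and treats every index that has never been inspected as carrying the common offset recorded in total_bump. All names are ours.

```python
def split(f, T):
    """Run the splitting algorithm of Lemma 9.1 for the stages 0, ..., T-1.
    f(t) = n+1 means that n is enumerated at stage t; f(t) = 0 means that
    no element is enumerated.  Returns the values g(0..T-1) and h(0..T-1)."""
    g, h = [], []
    s = {}                # explicitly stored values s_i[t] (finitely many)
    total_bump = 0        # sum of all increments applied to untouched indices

    def s_val(i):
        # an index that has never been frozen currently has value i + total_bump
        return s.get(i, i + total_bump)

    for t in range(T):
        if f(t) == 0:                 # nothing enumerated: g, h, s unchanged
            g.append(0)
            h.append(0)
            continue
        n = f(t) - 1                  # n is enumerated by f at stage t
        k = 0                         # k_t := min{ j : s_j[t] > n }
        while s_val(k) <= n:          # terminates since s_j[t] >= j
            k += 1
        if k % 2 == 0:                # k_t even: h enumerates n
            g.append(0)
            h.append(n + 1)
        else:                         # k_t odd: g enumerates n
            g.append(n + 1)
            h.append(0)
        for i in list(s):             # s_i[t+1] := s_i[t] + t + 1 for i > k_t
            if i > k:
                s[i] += t + 1
        for i in range(k + 1):        # freeze s_0, ..., s_k at their values
            s.setdefault(i, s_val(i))
        total_bump += t + 1           # bump all still-untouched indices i > k_t
    return g, h
```

Storing the untouched indices implicitly is only a matter of convenience; it reflects the fact that at every stage the algorithm consults the values $s_j[t]$ only for $j\leq k_t$.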
Claim 1. For every $t\in \mathbb {N}$, the sequence $(s_i[t])_i$ is increasing.
Proof We prove the claim by induction over t; it clearly holds for $t=0$. Fix any $t\in \mathbb {N}$ and assume that $(s_i[t])_i$ is increasing. If $f(t)=0$ then $(s_i[t+1])_i$ is identical to $(s_i[t])_i$ and hence it is increasing as well. Thus assume that $f(t)>0$; then $k_t$ is defined by the induction hypothesis and we can observe that $(s_i[t+1])_i$ is increasing:
- 
• For any $i<j\leq k_t$ we have
 $$\begin{align*}s_i[t+1] = s_i[t] < s_j[t] = s_j[t+1].\end{align*}$$
- 
• For any $i\leq k_t < j$ we have
 $$\begin{align*}s_i[t+1] = s_i[t] < s_j[t] < s_j[t] + t + 1 = s_j[t+1].\end{align*}$$
- 
• For any $k_t< i<j$ we have
 $$\begin{align*}s_i[t+1] = s_i[t] + t + 1 < s_j[t] + t + 1 = s_j[t+1].\end{align*}$$
This proves the claim.
 By Claim 1, for every $t\in \mathbb {N}$ with $f(t)>0$, the number $k_t$ is well defined; thus it is clear that g and h as defined by the algorithm satisfy ($\ast $) for every $n\in \mathbb {N}$. It remains to prove the second condition in the statement of the lemma.
Claim 2. For every i, the sequence $(s_i[t])_t$ is nondecreasing and eventually constant.
Proof That $(s_i[t])_t$ is nondecreasing for every i is clear by definition. We show by induction over i that it is eventually constant as well.
 It is easy to see that $s_0[t]=0$ for all t. Thus consider an arbitrary number $i>0$. By the induction hypothesis there exists a number $t_1$ such that, for all $t\geq t_1$, $s_{i-1}[t]=s_{i-1}[t_1]$. Let $t_2$ be large enough so that $t_2> t_1$ and
 $$\begin{align*}f^{-1}\{1,\ldots, s_{i-1}[t_1]\} \subseteq \{0,\ldots,t_2-1\} \end{align*}$$
(meaning that numbers smaller than $s_{i-1}[t_1]$ are enumerated by f only in stages before $t_2$). Then, for every $t\geq t_2$ with $f(t)>0$ and for every $j<i$ we have
 $$\begin{align*}s_j[t] \leq s_{i-1}[t] = s_{i-1}[t_1] \leq f(t)-1 , \end{align*}$$
hence, $k_t\geq i$ and consequently $s_i[t+1]=s_i[t]$. By induction we obtain $s_i[t]=s_i[t_2]$, for all $t\geq t_2$. Thus, $(s_i[t])_t$ is eventually constant, and Claim 2 is proven.
 Let the sequence $(S_i)_i$ be defined by $S_i:=\lim _{t\to \infty } s_i[t]$. Due to Claim 1, $(S_i)_i$ is increasing.
Claim 3. For every $i\in \mathbb {N}$ and every $t\geq S_i$, $s_i[t]=S_i$.
Proof If this were not true then there would be some $t \geq S_i$ with $s_i[t+1]\neq s_i[t]$, hence, with $S_i\geq s_i[t+1]=s_i[t]+t+1 \geq t+1> S_i$, a contradiction.
Claim 4. For every even i, $g^{-1}\{1,\ldots ,S_i\}\subseteq \{0,\ldots ,S_i-1\}$.
Proof Consider an even number i as well as some $t\geq S_i$ with $g(t)>0$; we need to show that $g(t)>S_i$. To see this, first observe that the assumption that $g(t)>0$ implies that we must have $f(t)=g(t)$ and that $k_t$ must be odd. Hence $k_t\neq i$. If $k_t$ were smaller than i then we would obtain $s_i[t+1] = s_i[t]+t+1> s_i[t]=S_i$ in contradiction to Claim 3. We conclude $i<k_t$. This implies $s_i[t]\leq f(t)-1$ by the definition of $k_t$. As $t\geq S_i$, using Claim 3 again, we obtain
 $$\begin{align*}S_i=s_i[t] \leq f(t)-1 <f(t) = g(t),\end{align*}$$
which proves Claim 4.
Claim 5. For every odd i, $h^{-1}\{1,\ldots ,S_i\}\subseteq \{0,\ldots ,S_i-1\}$.
Proof The proof is symmetric to that of Claim 4; it is enough to interchange “even” and “odd” and to replace “g” by “h”.
 As a corollary we immediately obtain the following uniformly effective version of Theorem 5.1. Note that the algorithm even provides $\mathrm {id}_{\mathbb {N}}$-good enumerations of A and B. While we already showed in Theorem 4.7 that $\mathrm {id}_{\mathbb {N}}$-good computable enumerations exist for all regainingly approximable sets, its proof was not fully uniform.
Theorem 9.2. Given an enumeration $f_C\colon \mathbb {N}\to \mathbb {N}$ of a set $C\subseteq \mathbb {N}$ one can compute enumerations without repetitions $f_A\colon \mathbb {N}\to \mathbb {N}$ of a set $A\subseteq \mathbb {N}$ and $f_B\colon \mathbb {N}\to \mathbb {N}$ of a set $B\subseteq \mathbb {N}$ such that:
- 
(1) C is the disjoint union of A and B, and 
- 
(2) there exist infinitely many t with
 $$\begin{align*}A \cap \{0,\ldots,t-1\} \subseteq \mathrm{Enum}(f_A)[t]\end{align*}$$
and infinitely many t with
 $$\begin{align*}B \cap \{0,\ldots,t-1\} \subseteq \mathrm{Enum}(f_B)[t].\end{align*}$$
Proof First transform $f_C$ into an enumeration without repetitions $\widetilde {f_C}$ as in Remark 4.5; then apply the algorithm described in the proof of Lemma 9.1 to $\widetilde {f_C}$.
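Remark 4.5 itself is not reproduced in this section; the following sketch shows one standard way of removing repetitions and how the pieces could be combined, offered purely as a hedged illustration. It is not claimed to be the construction of Remark 4.5, and split refers to the sketch given after the description of the algorithm of Lemma 9.1.

```python
def without_repetitions(f):
    """Turn an enumeration f into an enumeration of the same set in which
    every positive value occurs at most once (repetitions are replaced by 0).
    Offered as an illustration only; not claimed to be Remark 4.5 verbatim."""
    def f_tilde(t):
        earlier = {f(k) for k in range(t)}
        return f(t) if f(t) > 0 and f(t) not in earlier else 0
    return f_tilde


# Hypothetical usage: the first T values of the enumerations f_A and f_B of
# Theorem 9.2, obtained from an enumeration f_C of C.
# f_A_values, f_B_values = split(without_repetitions(f_C), T)
```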
Lemma 9.1 can also be applied to regular reals as defined by Wu [Reference Wu18].
Definition 9.3 (Wu [Reference Wu18])
- 
(1) For $n\in \mathbb {N}$, a real number is called n-strongly computably enumerable (n-strongly c.e.) if it can be written as the sum of n strongly left-computable numbers.
- 
(2) If a real number is n-strongly c.e. for some $n\in \mathbb {N}$, then it is called regular.
 Such numbers can be characterized conveniently by using the following representation $\delta $ (in the sense of Weihrauch [Reference Weihrauch17]) of non-negative real numbers: Call $f\colon \mathbb {N}\to \mathbb {N}$ a $\delta $-name of a non-negative real number $\alpha $ if the series $\sum _k a_k$ defined by
 $$\begin{align*}a_k := \begin{cases} 2^{-f(k)}, & \text{if } f(k)>0, \\ 0, & \text{if } f(k)=0, \end{cases} \end{align*}$$
converges to $\alpha $. Note that, if some $f\colon \mathbb {N}\to \mathbb {N}$ is a $\delta $-name of a non-negative real number, then $f^{-1}\{i+1\}$ must be finite for every $i\in \mathbb {N}$; if we even have $|f^{-1}\{i+1\}| \leq n$ for some $n\in \mathbb {N}$ and all $i\in \mathbb {N}$, then we say that f is a $(\delta ,n)$-name of $\alpha $. Then the following lemma is obvious.
Lemma 9.4. For $n\in \mathbb {N}$, a non-negative real number $\alpha $ is n-strongly c.e. if and only if there exists a computable $(\delta ,n)$-name f of $\alpha $.
Applying our previous results to the regular reals, we obtain the following final theorem.
Theorem 9.5.
- 
(1) Given a $\delta $-name $($or a $(\delta ,n)$-name$)$ $f_\gamma $ of a non-negative real number $\gamma $, one can compute $\delta $-names $($or $(\delta ,n)$-names$)$ $f_\alpha $ of a non-negative real number $\alpha $ and $f_\beta $ of a non-negative real number $\beta $ with $\alpha +\beta =\gamma $ and such that there exist infinitely many t with
 $$\begin{align*}f_\alpha^{-1}\{1,\ldots,t\}\subseteq \{0,\ldots,t-1\} \end{align*}$$
and infinitely many t with
 $$\begin{align*}f_\beta^{-1}\{1,\ldots,t\}\subseteq \{0,\ldots,t-1\}. \end{align*}$$
- 
(2) For every n and every n-strongly c.e. real $\gamma $ there exist n-strongly c.e. reals $\alpha $ and $\beta $ with $\alpha +\beta =\gamma $ that are additionally regainingly approximable.
- 
(3) For every regular real $\gamma $ there exist regular reals $\alpha $ and $\beta $ with $\alpha +\beta =\gamma $ that are additionally regainingly approximable.
Proof Applying the algorithm in the proof of Lemma 9.1 to $f_\gamma $ proves the first assertion. For the second assertion, use the first assertion and the following claim.
Claim. Let f be a computable $(\delta ,n)$-name of a non-negative real number $\alpha $ such that there exist infinitely many t with $f^{-1}\{1,\ldots ,t\}\subseteq \{0,\ldots ,t-1\}$. Then $\alpha $ is regainingly approximable.
Proof As above, define
 $$\begin{align*}a_k := \begin{cases} 2^{-f(k)}, & \text{if } f(k)>0, \\ 0, & \text{if } f(k)=0, \end{cases} \end{align*}$$
and $A_t:=\sum _{k<t} a_k$. Then the sequence $(A_t)_t$ is a computable nondecreasing sequence converging to $\alpha = \sum _{k=0}^\infty a_k$. For any of the infinitely many t with $f^{-1}\{1,\ldots ,t\}\subseteq \{0,\ldots ,t-1\}$ we obtain
 $$\begin{align*}\alpha - A_t = \sum_{k=t}^\infty a_k \leq n \cdot \sum_{j=t+1}^\infty 2^{-j} = n \cdot 2^{-t}. \end{align*}$$
By Proposition 3.2 this implies that $\alpha $ is regainingly approximable.
The third assertion follows immediately from the second.
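As a concrete illustration of the Claim, the following Python sketch (names are ours) computes the rational partial sums $A_t$ from a given computable $\delta $-name using exact arithmetic; the error bound stated in the comment is the one established in the proof above.

```python
from fractions import Fraction

def partial_sums(f, T):
    """The nondecreasing rational approximations A_0, ..., A_T to the real
    named by the delta-name f, where a_k = 2^{-f(k)} if f(k) > 0 and a_k = 0
    otherwise.  For a (delta, n)-name f and any t with
    f^{-1}{1,...,t} ⊆ {0,...,t-1}, the proof above gives
    alpha - A_t <= n * 2^{-t}."""
    sums = [Fraction(0)]
    for k in range(T):
        a_k = Fraction(1, 2 ** f(k)) if f(k) > 0 else Fraction(0)
        sums.append(sums[-1] + a_k)
    return sums            # sums[t] = A_t = sum of a_k for k < t
```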
Acknowledgments
 The authors would like to thank the anonymous referees for insightful remarks, in particular for suggesting to explore the Kolmogorov complexity of regainingly approximable sets and numbers, as well as for suggesting to extend the characterization of the c.e. sets $A\subseteq \mathbb {N}$ such that $2^{-A}$ is regainingly approximable to a characterization of arbitrary (not necessarily c.e.) sets with this property.