1 Introduction
One of the most fundamental problems in all of combinatorics concerns bounding the famous Ramsey number $R(\ell,k)$, which may be defined as the smallest number $n$ such that every graph on $n$ vertices contains either a clique of size $\ell$ or an independent set of size $k$. The first highly challenging instance of this general problem is the determination of the Ramsey numbers $R(3,k)$, posed as a prize-money question by Erdős back in 1961 [Reference Erdős28] (see also the corresponding entry on the Erdős problems page). Currently, the best known asymptotic bounds are
$$ \begin{align*}\left(\frac{1}{4}-o(1)\right)\frac{k^2}{\ln k}\le R(3,k)\le (1+o(1))\frac{k^2}{\ln k}.\end{align*} $$
The lower bound was established via an analysis of the famous triangle-free process, independently by Fiz Pontiveros, Griffiths, and Morris [Reference Fiz Pontiveros, Griffiths and Morris31] and Bohman and Keevash [Reference Bohman and Keevash10] in 2013. Proving an upper bound on $R(3,k)$ is equivalent to establishing a lower bound on the independence number $\alpha(G)$ (i.e., the size of a largest independent set in $G$) for all triangle-free graphs $G$ on $n$ vertices. In 1980, Ajtai, Komlós, and Szemerédi [Reference Ajtai, Komlós and Szemerédi2] famously proved that every triangle-free graph $G$ on $n$ vertices with average degree $\overline{d}$ satisfies $\alpha(G)\ge c\frac{\ln \overline{d}}{\overline{d}}n$, where $c>0$ is some small absolute constant; this is easily seen to imply that $R(3,k)\le O\left(\frac{k^2}{\log k}\right)$. The stronger upper bound on $R(3,k)$ stated above is due to a strengthening of the result of Ajtai, Komlós, and Szemerédi, established in landmark work of Shearer in 1983. Namely, Shearer [Reference Shearer51] significantly improved the constant factor $c$ in this bound by showing that $\alpha(G)\ge \frac{(1-\overline{d})+\overline{d}\ln \overline{d}}{(\overline{d}-1)^2}n=(1-o(1))\frac{\ln \overline{d}}{\overline{d}}n$. Further refining this result, Shearer [Reference Shearer52] proved in 1991 that every triangle-free graph $G$ satisfies $\alpha(G)\ge \sum_{v\in V(G)}g(d_G(v))$, where $d_G(v)$ denotes the degree of $v$ in $G$ and $g(d)=(1-o(1))\frac{\ln d}{d}$ is a recursively defined function. For graphs with unbalanced degree sequences, this bound is slightly better than Shearer's first bound in terms of the average degree.
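As a quick sanity check of the asymptotics in Shearer's first bound, one may expand numerator and denominator for large average degree:
$$ \begin{align*}\frac{(1-\overline{d})+\overline{d}\ln \overline{d}}{(\overline{d}-1)^2}=\frac{\overline{d}\ln \overline{d}\,\bigl(1+O(1/\ln \overline{d})\bigr)}{\overline{d}^{\,2}\bigl(1+O(1/\overline{d})\bigr)}=(1-o(1))\frac{\ln \overline{d}}{\overline{d}},\qquad \overline{d}\rightarrow\infty.\end{align*} $$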
These two classic bounds on the independence number of triangle-free graphs due to Shearer have essentially remained the state of the art on the topic for four decades and, owing to their ubiquity, have found widespread application as a tool across many areas of extremal and probabilistic combinatorics. By a result of Bollobás [Reference Bollobás11], it is known that Shearer's bounds are tight up to a multiplicative factor of $2$. Because of the relation to the Ramsey numbers $R(3,k)$ discussed above, any constant-factor improvement of Shearer's longstanding bounds would be a major breakthrough in Ramsey theory. Accordingly, a lot of research has been devoted to finding strengthenings and generalizations of Shearer's bounds: we refer to [Reference Davies and Kang23, Reference Kang, Kühn, Methuku and Osthus39] for recent surveys covering Shearer's bound and its relations to the hard-core model in statistical mechanics as well as to the theory of graph coloring, and to [Reference Alon3, Reference Davies, de Joannis de Verclos, Kang and Pirot19, Reference Davies, de Joannis de Verclos, Kang and Pirot20, Reference Davies, Jenssen, Perkins and Roberts22, Reference Davies, Kang, Pirot and Sereni24, Reference Dhawan26, Reference Pirot and Sereni49] for some extensions and generalizations of Shearer's bounds.
The study of lower bounds on the independence number is closely connected to the theory of graph coloring. Recall that in a proper graph coloring, vertices are assigned colors such that neighboring vertices receive distinct colors, and the chromatic number $\chi(G)$ of a graph $G$ is the smallest number of colors required to properly color $G$. It follows easily from the definition (by considering a largest "color class") that every graph $G$ on $n$ vertices has an independent set of size at least $\frac{n}{\chi(G)}$. An even stronger lower bound on the independence number is provided by the well-known fractional chromatic number $\chi_f(G)$ of the graph. The fractional chromatic number has many different equivalent definitions (see the standard textbook [Reference Scheinerman and Ullman50] on fractional coloring as a reference). Here, we shall find the following definition convenient: $\chi_f(G)$ is the minimum real number $r\ge 1$ for which there exists a probability distribution on the independent sets of $G$ such that a random independent set $I$ sampled from this distribution contains any given vertex $v\in V(G)$ with probability at least $\frac{1}{r}$. By considering the expected size of a random set drawn from such a distribution, one immediately verifies that $\alpha(G)\ge \frac{n}{\chi_f(G)}$ holds for every graph $G$. In general, the latter lower bound $\frac{n}{\chi_f(G)}$ on the independence number is stronger than the lower bound $\frac{n}{\chi(G)}$, as there are graphs (such as the Kneser graphs [Reference Bárány7, Reference Lovász44]) for which $\chi_f(G)$ is much smaller than $\chi(G)$.
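To spell out the expectation argument: if $I$ is sampled from a distribution witnessing $\chi_f(G)=r$, then by linearity of expectation
$$ \begin{align*}\alpha(G)\ge \mathbb{E}\bigl[|I|\bigr]=\sum_{v\in V(G)}\mathbb{P}[v\in I]\ge \frac{n}{r}=\frac{n}{\chi_f(G)},\end{align*} $$
since some independent set in the support of the distribution must be at least as large as the average.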
Given these lower bounds on the independence number in terms of the (fractional) chromatic number, it is natural to ask whether there are analogues or strengthenings of Shearer's bounds that provide corresponding upper bounds for the (fractional) chromatic number. A prime example of such a result is a recent breakthrough of Molloy [Reference Molloy46], who proved that $\chi(G)\le (1+o(1))\frac{\Delta}{\ln \Delta}$ for every triangle-free graph $G$ with maximum degree $\Delta$, where the $o(1)$-term vanishes as $\Delta\rightarrow\infty$. This strengthened a longstanding previous bound of the form $O\left(\frac{\Delta}{\ln \Delta}\right)$ due to Johansson [Reference Johansson38] and recovers Shearer's independence number bound for regular graphs in a stronger form. As with Shearer's bound, it is known that Molloy's bound is optimal up to a factor of $2$, and improving the leading constant $1$ to any smaller constant would be a major advance in the field. Several interesting strengthenings and generalizations of Johansson's and Molloy's results have been proved in the literature; see, for example, [Reference Cambie, Cames van Batenburg, Davies and Kang15, Reference Alon, Krivelevich and Sudakov4, Reference Anderson, Bernshteyn and Dhawan5, Reference Anderson, Dhawan and Kuchukova6, Reference Bernshteyn, Brazelton, Cao and Kang8, Reference Bonamy, Kelly, Nelson and Postle12, Reference Bradshaw, Mohar and Stacho13, Reference Davies, de Joannis de Verclos, Kang and Pirot19, Reference Hurley, de Joannis de Verclos and Kang35, Reference Hurley and Pirot36, Reference Pirot and Sereni49] for some selected examples.
Our results.
Our first main result concerns the following conjecture from 2018, posed by Kelly and Postle [Reference Kelly and Postle40], which claims a local strengthening of Shearer's bounds and can also be seen as a degree-sequence generalization of Molloy's bound for fractional coloring (see Footnote 1).
Conjecture 1.1 (Local fractional Shearer/Molloy, cf. Conjecture 2.2 in [Reference Kelly and Postle40]).
For every triangle-free graph $G$ there exists a probability distribution on its independent sets such that every vertex $v\in V(G)$ appears with probability at least $(1-o(1))\frac{\ln d_G(v)}{d_G(v)}$ in a random independent set sampled from the distribution. Here, the $o(1)$-term represents any function that tends to $0$ as the degree grows.
To see that this conjecture indeed forms a strengthening of Shearer's bounds, note that the expected size of a random independent set drawn from a distribution as given by the conjecture is
$$ \begin{align*}\sum_{v\in V(G)}(1-o(1))\frac{\ln d_G(v)}{d_G(v)},\end{align*} $$
which recovers Shearer's second (stronger) lower bound [Reference Shearer52] on the independence number up to lower-order terms. But on top of that, and this explains the word "local" in the name of the conjecture, the distribution in Conjecture 1.1 guarantees that every vertex can be expected to be contained in the random independent set a good fraction of the time (with lower-degree vertices contained proportionally more frequently). This relates back to the previously discussed fractional chromatic number and, for instance, directly implies that $\chi_f(G)\le (1+o(1))\frac{\Delta(G)}{\ln \Delta(G)}$ for every triangle-free graph, which recovers the fractional version of Molloy's bound.
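To see the last implication, note that $d\mapsto \frac{\ln d}{d}$ is decreasing for $d\ge e$ (vertices of bounded degree are immediate), so the distribution promised by Conjecture 1.1 satisfies
$$ \begin{align*}\mathbb{P}[v\in I]\ge (1-o(1))\frac{\ln d_G(v)}{d_G(v)}\ge (1-o(1))\frac{\ln \Delta(G)}{\Delta(G)}\qquad\text{for every }v\in V(G),\end{align*} $$
which is precisely the defining property witnessing $\chi_f(G)\le (1+o(1))\frac{\Delta(G)}{\ln \Delta(G)}$.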
Adding to that, Conjecture 1.1 connects to several other notions of graph coloring discussed in detail by Kelly and Postle; see in particular [Reference Kelly and Postle40, Proposition 1.4], which provides many different equivalent formulations of Conjecture 1.1. One of these involves the notion of fractional coloring with local demands introduced by Dvořák, Sereni and Volec [Reference Dvořák, Sereni and Volec27]. Following Kelly and Postle [Reference Kelly and Postle40], given a graph $G$ and a so-called demand function $h:V(G)\rightarrow [0,1]$ that assigns to each vertex its individual "demand," an $h$-coloring of a graph $G$ is a mapping $c:V(G)\rightarrow 2^{[0,1]}$ that assigns to every vertex $v\in V(G)$ a measurable subset $c(v)\subseteq [0,1]$ of measure at least $h(v)$, in such a way that adjacent vertices in $G$ are assigned disjoint subsets. Since the function $h$ does not have to be constant but can depend on local information concerning the vertex $v$ in $G$, this setting extends the usual paradigm of graph coloring in a local manner. Kelly and Postle [Reference Kelly and Postle40, Proposition 1.4] proved that Conjecture 1.1 is equivalent to the statement that every triangle-free graph has an $h$-coloring, where $h:V(G)\rightarrow [0,1]$ is a function depending only on the vertex-degrees such that $h(v)=(1-o(1))\frac{\ln d_G(v)}{d_G(v)}$. We refer to the extensive introduction of [Reference Kelly and Postle40] for further applications of the conjecture.
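As a small illustrative example of an $h$-coloring (our own, not taken from [Reference Kelly and Postle40]): the $5$-cycle $C_5$ with constant demand $h\equiv \frac{2}{5}$ admits an $h$-coloring by assigning the circular arcs
$$ \begin{align*}c(v_i)=\left[\tfrac{2i}{5},\,\tfrac{2i}{5}+\tfrac{2}{5}\right)\ \bmod\ 1,\qquad i=0,1,\dots,4;\end{align*} $$
adjacent vertices receive disjoint sets of measure exactly $\frac{2}{5}$, matching $\chi_f(C_5)=\frac{5}{2}$.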
In one of their main results, Kelly and Postle [Reference Kelly and Postle40, Theorem 2.3] proved a relaxation of Conjecture 1.1, replacing the bound $(1-o(1))\frac{\ln d_G(v)}{d_G(v)}$ with the asymptotically weaker $\left(\frac{1}{2e}-o(1)\right)\frac{\ln d_G(v)}{d_G(v)\ln\ln d_G(v)}$. As the first main result of this paper, we fully resolve Conjecture 1.1.
Theorem 1.2. For every triangle-free graph $G$ there exists a probability distribution $\mathcal{D}$ on the independent sets of $G$ such that
$$ \begin{align*}\mathbb{P}_{I\sim \mathcal{D}}[v\in I]\ge (1-o(1))\frac{\ln(d_G(v))}{d_G(v)}\end{align*} $$
for every $v\in V(G)$. Here, the $o(1)$ represents a function of $d_G(v)$ that tends to $0$ as the degree grows.
A pleasing consequence of Theorem 1.2 is that it can also be used to fully resolve another conjecture about fractional coloring raised in 2018 by Cames van Batenburg, de Joannis de Verclos, Kang, and Pirot [Reference Cames van Batenburg, de Joannis de Verclos, Kang and Pirot16]. Motivated by the aforementioned problem of estimating the Ramsey number $R(3,k)$, in 1967 Erdős asked the fundamental question of determining the maximum chromatic number of triangle-free graphs on $n$ vertices. An observation of Erdős and Hajnal [Reference Erdős and Hajnal29] combined with Shearer's bound implies an upper bound of $(2\sqrt{2}+o(1))\sqrt{\frac{n}{\ln n}}$ for this problem. In recent work of Davies and Illingworth [Reference Davies and Illingworth21], this upper bound was improved by a factor of $\sqrt{2}$ to the current state of the art, $(2+o(1))\sqrt{\frac{n}{\ln n}}$. The current best lower bound for this quantity is $(1/\sqrt{2}-o(1))\sqrt{\frac{n}{\ln n}}$, coming from the aforementioned lower bounds on $R(3,k)$ [Reference Fiz Pontiveros, Griffiths and Morris31, Reference Bohman and Keevash10].
Cames van Batenburg et al. [Reference Cames van Batenburg, de Joannis de Verclos, Kang and Pirot16] studied the natural analogue of this question for fractional coloring and made the following conjecture.
Conjecture 1.3 (cf. Conjecture 4.3 in [Reference Cames van Batenburg, de Joannis de Verclos, Kang and Pirot16]).
As $n\rightarrow\infty$, every triangle-free graph on $n$ vertices has fractional chromatic number at most $(\sqrt{2}+o(1))\sqrt{\frac{n}{\ln n}}$.
In one of their main results [Reference Cames van Batenburg, de Joannis de Verclos, Kang and Pirot16, Theorem 1.4], Cames van Batenburg et al. proved the fractional version of the result of Davies and Illingworth, namely an upper bound of $(2+o(1))\sqrt{\frac{n}{\ln n}}$ on the fractional chromatic number. Using a connection between Conjectures 1.1 and 1.3 proved by Kelly and Postle [Reference Kelly and Postle40, Proposition 5.2], we are able to confirm Conjecture 1.3 as well.
Theorem 1.4. The maximum fractional chromatic number among all $n$-vertex triangle-free graphs is at most
$$ \begin{align*}(\sqrt{2}+o(1))\sqrt{\frac{n}{\ln(n)}}.\end{align*} $$
We also prove a similar upper bound on the fractional chromatic number of triangle-free graphs in terms of the number of edges, as follows.
Theorem 1.5. The maximum fractional chromatic number among triangle-free graphs with $m$ edges is at most
$$ \begin{align*}(18^{1/3}+o(1)) \frac{m^{1/3}}{(\ln m)^{2/3}}.\end{align*} $$
Theorem 1.5 comes very close to confirming another conjecture of Cames van Batenburg et al. [Reference Cames van Batenburg, de Joannis de Verclos, Kang and Pirot16, Conjecture 4.4], stating that every triangle-free graph with $m$ edges has fractional chromatic number at most $(16^{1/3}+o(1))m^{1/3}/(\ln m)^{2/3}$. In fact, after personal communication with the authors of [Reference Cames van Batenburg, de Joannis de Verclos, Kang and Pirot16], it turned out that the constant $16^{1/3}$ seems to be due to a miscalculation on their end. In particular, it was claimed in [Reference Cames van Batenburg, de Joannis de Verclos, Kang and Pirot16] that the conjectured bound on the fractional chromatic number can be verified in the special case of $d$-regular triangle-free graphs using the upper bound $\chi_f(G)\leq \min\left((1+o(1))d/\ln d,\, n/d\right)$. However, assuming $n=(1+o(1))d^2/\ln d$ and thus $m=(1/2+o(1))d^3/\ln d$, this upper bound simplifies to $(1+o(1))d/\ln d = (1+o(1))(2m)^{1/3}/(\ln m^{1/3})^{2/3}=(1+o(1))(18m)^{1/3}/(\ln m)^{2/3}$, matching our bound in Theorem 1.5.
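For the reader's convenience, here is the computation behind this simplification. From $m=(1/2+o(1))d^3/\ln d$ we get $d^3=(2+o(1))m\ln d$ and $\ln m=(3+o(1))\ln d$, and hence
$$ \begin{align*}\frac{d}{\ln d}=(1+o(1))\frac{(2m\ln d)^{1/3}}{\ln d}=(1+o(1))\frac{(2m)^{1/3}}{(\ln d)^{2/3}}=(1+o(1))\frac{(2m)^{1/3}}{\bigl(\tfrac{1}{3}\ln m\bigr)^{2/3}}=(1+o(1))\frac{(18m)^{1/3}}{(\ln m)^{2/3}},\end{align*} $$
using $3^{2/3}\cdot 2^{1/3}=(9\cdot 2)^{1/3}=18^{1/3}$ in the last step.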
Let us now turn to our second main result. A broad generalization of the standard upper bound $\chi(G)\le \Delta(G)+1$ on the chromatic number in terms of the maximum degree is the upper bound $\chi(G)\le d+1$, which holds for every $d$-degenerate graph. Given Johansson's and Molloy's improved upper bounds of the form $O(\Delta(G)/\ln \Delta(G))$ for the chromatic number of triangle-free graphs, it is tempting to suspect that an upper bound of $O(d/\ln d)$ similarly holds for the chromatic number of triangle-free $d$-degenerate graphs. However, this turns out to be too strong: a number of articles going back to at least the 1940s (see, for instance, [Reference Descartes25, Reference Zykov55, Reference Mycielski47]) present constructions of $d$-degenerate triangle-free graphs with chromatic number $d+1$. All these constructions, however, turn out to have relatively small fractional chromatic number, and indeed, a well-known conjecture by Harris [Reference Harris33] posits that this is part of a general phenomenon.
Conjecture 1.6 (cf. Conjecture 6.2 in [Reference Harris33]).
Suppose that $G$ is $d$-degenerate and triangle-free. Then $\chi_f(G)= O(d/\log d)$.
This conjecture has gained quite some attention in recent years. It is known to imply various other conjectures and strengthenings of known results in the literature [Reference Esperet, Kang and Thomassé30, Reference Harris33, Reference Janzer, Steiner and Sudakov37, Reference Kwan, Letzter, Sudakov and Tran41, Reference Li43], including another well-known conjecture by Esperet, Kang and Thomassé [Reference Esperet, Kang and Thomassé30, Conjecture 1.5] that any triangle-free graph with minimum degree $d$ contains an induced bipartite subgraph of minimum degree $\Omega(\log d)$. Currently, the best known bound towards the conjecture of Esperet et al. is $\Omega(\log d/\log\log d)$, due to Kwan, Letzter, Sudakov and Tran [Reference Kwan, Letzter, Sudakov and Tran41], though Girão and Hunter (personal communication) recently announced upcoming work improving this to average degree $(1-o(1))\ln d$. See also [Reference Davies, Kang, Pirot and Sereni24, Reference Kelly and Postle40]. Despite this attention, Harris' conjecture itself has remained wide open, with no significant improvement of the trivial upper bound $d+1$ in the literature thus far.
As our second main result, we fully resolve Conjecture 1.6. More precisely, we prove the following.
Theorem 1.7. Suppose $G$ is a triangle-free and $d$-degenerate graph. Then $\chi_f(G)\leq (4+o(1))\frac{d}{\ln d}$, where the $o(1)$-term tends to $0$ as $d$ increases.
As already mentioned, this result is known to have some nice consequences. For instance, a direct application of the theorem implies that any triangle-free graph with minimum degree $d$ contains an induced bipartite subgraph of average degree at least $(\frac{1}{4}-o(1))\ln d$ (and thus, by the standard fact that every graph of average degree $a$ contains a subgraph of minimum degree greater than $a/2$, one of minimum degree at least $(\frac{1}{8}-o(1))\ln d$). We refer to [Reference Esperet, Kang and Thomassé30, Theorem 3.1] for further details on the calculations. This proves [Reference Esperet, Kang and Thomassé30, Conjecture 1.5], improving on the previously best known bound [Reference Kwan, Letzter, Sudakov and Tran41] by a factor of $\Theta(\log\log d)$. We note that the factor $\frac{1}{4}$ can likely be improved by a more careful analysis, but we do not attempt this here.
In addition, Harris [Reference Harris33] observed that Theorem 1.7 can be extended to the setting where the triangle-free condition is relaxed to $G$ being locally sparse, similar to the extension of the upper bound for the chromatic number of triangle-free graphs presented in [Reference Alon, Krivelevich and Sudakov4]. More precisely, we say that a $d$-degenerate graph $G$ has local triangle bound $y$ if each vertex in $G$ is the last vertex of at most $y$ triangles, where "last" refers to the degeneracy ordering of the graph. Combining Theorem 1.7 with [Reference Harris33, Lemma 6.3], it follows that
$$ \begin{align*}\chi_f(G)= O\left(\frac{d}{\ln(d^2/y)}\right)\end{align*} $$
for any $d$-degenerate graph $G$ with local triangle bound $y$. This in turn proves various relationships between the chromatic number and the triangle count of a graph. We refer to [Reference Harris33, Section 6] for more details. We remark that Harris formally stated his results in a slightly weaker form, namely with "local triangle bound" referring to the maximum number of triangles containing a vertex in the graph. However, looking into his arguments [Reference Harris33], it is not hard to check that they work just as well for the modified definition of local triangle bound given above.
In fact, Theorem 1.7 can be seen as a special case of the following generalization of Harris’ conjecture.
Theorem 1.8. Let $G$ be a triangle-free graph with a vertex ordering $v_1, v_2,\dots, v_n$. Suppose $p:V(G)\rightarrow [0,1]$ satisfies
$$ \begin{align*}p(v_i) \leq \prod_{v_j \in N_L(v_i)} \left(1-p(v_j)\right)\end{align*} $$
for all vertices $v_i$, where $N_L(v_i)$ denotes the set of neighbors $v_j$ of $v_i$ with $j<i$. Then there exists a probability distribution $\mathcal{D}$ over the independent sets of $G$ such that
$$ \begin{align*}\mathbb{P}_{I\sim \mathcal{D}}[v_i\in I]\geq \frac{p(v_i)}{4},\end{align*} $$
for all vertices $v_i$.
It is not too hard to see that this statement implies Theorem 1.7. In fact, Theorem 1.8 also implies, up to constant factors, the local fractional Shearer bound, as can be seen by ordering the vertices of any triangle-free graph decreasingly by their degrees and setting $p(v_i)=\Theta \left ( \frac {\ln d_G(v_i)}{d_G(v_i)}\right )$. In particular, this matches the bound in Theorem 1.2 up to a constant factor. Beyond this, Theorem 1.8 appears to be a very natural extension of Harris’ conjecture which may be of independent interest.
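The hypothesis of Theorem 1.8 is easy to check mechanically. The following Python sketch (purely illustrative; the adjacency-dict representation and function name are our own choices) verifies the condition $p(v_i) \le \prod_{v_j\in N_L(v_i)}(1-p(v_j))$ for a given ordering:

```python
def satisfies_theorem_1_8(G, order, p):
    """Check p(v_i) <= prod over left-neighbors v_j of (1 - p(v_j)).

    G     : dict vertex -> set of neighbors (assumed triangle-free)
    order : list of the vertices v_1, ..., v_n
    p     : dict vertex -> value in [0, 1]
    """
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        prod = 1.0
        for u in G[v]:
            if pos[u] < pos[v]:          # u is a left-neighbor of v
                prod *= 1.0 - p[u]
        if p[v] > prod:
            return False
    return True
```

For example, on the 5-cycle each vertex has at most two left-neighbors, so the constant function $p\equiv 0.3$ satisfies the condition ($0.3 \le 0.7^2$), while $p\equiv 0.6$ does not.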
High-level proof ideas.
The key idea behind our two main results, Theorems 1.2 and 1.7, is to equip the triangle-free graphs under consideration with positive vertex-weights and to prove extensions of our claims in these generalized setups (see Theorems 2.1 and 4.1, respectively). In both of these settings, we construct random independent sets I by iteratively/inductively applying the following type of operation: Pick some vertex v and include it in I with probability depending on its current weight. If v is added to I, set the weight of its neighbors to $0$ (or, equivalently, remove its neighbors from G). Otherwise, update the vertex-weights in G such that weights are preserved in expectation. This is particularly useful, as it shields every update outside the neighborhood of v from the influence of the event that $v\in I$, which in turn allows us to derive lower bounds on the probability that $v\in I$ in terms of the initial weights of v and its neighbors.
Organization.
The rest of the paper is structured as follows: In Section 2 we establish a key technical result, Theorem 2.1, which generalizes Theorem 1.2 to a vertex-weighted setting. In Section 3 we derive our first three results (Theorems 1.2, 1.4, and 1.5) from Theorem 2.1. In Section 4, we present a random process on weighted graphs with a fixed linear vertex-ordering; analyzing this process yields our second key technical result, Theorem 4.1. Finally, our second main result, Theorem 1.7, as well as Theorem 1.8 can be quickly deduced as special cases of this more general statement.
We conclude the paper in Section 5 with some discussion of open problems and future research directions.
2 Key technical result for Theorem 1.2
In this section, we present the proof of a key technical result, Theorem 2.1 below, which generalizes Theorem 1.2 to a vertex-weighted setting.
In the following, we denote byFootnote 2 $f:[0,\infty )\rightarrow \mathbb {R}_+$ the unique continuous extension of $x\mapsto \frac {(1-x)+x\ln (x)}{(x-1)^2}$ from $[0,\infty )\setminus \{0,1\}$ to $[0,\infty )$. It is not hard to check that f exists and has the following properties:
- • $f(0)=1$, $f(1)=\frac {1}{2}$.
- • f is convex.
- • f is strictly monotonically decreasing.
- • f is continuously differentiable on $(0,\infty )$ and satisfies the following differential equation for every $x>0$: $$ \begin{align*}x(x-1)f'(x)+(x+1)f(x)=1.\end{align*} $$
- • $|xf'(x)|<1$ for every $x>0$.
- • $f(x)=(1-o(1))\frac {\ln (x)}{x}$ as $x\rightarrow \infty $.
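As a sanity check, f can be implemented and its listed properties tested numerically. The following Python sketch is illustrative only; the Taylor branch near the removable singularity at $x=1$ is our own implementation choice, not part of the paper:

```python
import math

def f(x):
    """Continuous extension of x -> ((1-x) + x*ln x) / (x-1)^2 to [0, infinity)."""
    if x == 0:
        return 1.0
    if abs(x - 1) < 1e-6:
        t = x - 1
        return 0.5 - t / 6 + t * t / 12   # Taylor expansion around x = 1
    return ((1 - x) + x * math.log(x)) / (x - 1) ** 2
```

For example, $f(2)=2\ln 2-1\approx 0.386$, and the differential equation can be checked with a central-difference derivative at any $x>0$.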
In the following, given a weight function $w:V(G)\rightarrow \mathbb {R}_+$ on the vertices of a graph G and a subset $X\subseteq V(G)$, $w(X):=\sum _{v\in X}{w(v)}$ denotes the total weight of X.
Our goal in this section is to establish the following statement, a main technical contribution of this paper, from which Theorems 1.2, 1.4, and 1.5 will be deduced in the next section. We believe that the more general result offered by Theorem 2.1 may be of independent interest.
Theorem 2.1. For every triangle-free graph G and every strictly positive weight function $w:V(G)\rightarrow \mathbb {R}_+$ on the vertices, there exists a probability distribution $\mathcal {D}$ on the independent sets of G such that
 $$ \begin{align*}\mathbb{P}_{I\sim \mathcal{D}}[v\in I]\ge f\left(\frac{w(N_G(v))}{w(v)}\right)\end{align*} $$
for every vertex $v\in V(G)$.
Proof. We prove the statement by induction on $|V(G)|$. In the base case $|V(G)|=1$, there is a unique vertex v of G, so $w(N_G(v))=w(\emptyset )=0$ and hence our target probability for the appearance of v in a randomly drawn independent set is $f(0)=1$. This is easily achieved by letting $\mathcal {D}$ be the probability distribution that always picks $\{v\}$, establishing the induction base.
For the induction step, let us assume that G is a triangle-free graph on at least two vertices and that we have already proven the claim of the theorem for all triangle-free graphs with strictly fewer vertices than G.
Let $K \subseteq [0,1]$ be the set of all $\delta \in [0,1]$ such that for every strictly positive weight function $w:V(G)\rightarrow \mathbb {R}_+$ there exists a probability distribution $\mathcal {D}$ on the independent sets of G with $\mathbb {P}_{I\sim \mathcal {D}}[v\in I]\ge f\left (\frac {w(N_G(v))}{w(v)}\right )-\delta $ for every $v\in V(G)$. Since f takes values in $[0,1]$, we trivially have $1\in K$. Furthermore, we claim that the set K is closed (and thus compact). To see this, note that $K=\bigcap _{w:V(G)\rightarrow \mathbb {R}_+} K_w$, where $K_w$ is the set of all $\delta \in [0,1]$ for which there exists a probability distribution $\mathcal {D}$ on the independent sets of G satisfying $\mathbb {P}_{I\sim \mathcal {D}}[v\in I]\ge f\left (\frac {w(N_G(v))}{w(v)}\right )-\delta $ for every $v\in V(G)$. Since intersections of closed sets are closed, it suffices to show that $K_w$ is closed for every fixed $w:V(G)\rightarrow \mathbb {R}_+$. Now, consider the following linear program ($\mathcal {I}(G)$ denotes the collection of all independent sets in G):
 $$ \begin{align*} \text{min}~~ &y\\ \text{s.t.}~~ y+\sum_{I\in \mathcal{I}(G): v \in I}{x_I}&\ge f\left(\frac{w(N_G(v))}{w(v)}\right)~~(\forall v \in V(G)), \\ \sum_{I\in \mathcal{I}(G)} x_I&=1, \\ x_I &\ge 0~~(\forall I \in \mathcal{I}(G)). \end{align*} $$
It can easily be checked that this linear program is bounded and feasible, and hence attains an optimal value $y^\ast $. Further, since a feasible solution with value y encodes a probability distribution $\mathcal {D}$ on independent sets with $\mathbb {P}_{I\sim \mathcal {D}}[v\in I]\ge f\left (\frac {w(N_G(v))}{w(v)}\right )-y$ for every $v\in V(G)$, we can see that $K_w=[y^\ast ,1]\cap [0,1]$ is indeed a closed set, as desired.
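To make the linear program tangible, the following Python sketch enumerates the independent sets of the 5-cycle $C_5$ with unit weights (so $w(N_G(v))/w(v)=2$ for every vertex) and evaluates one feasible point of the LP, namely the uniform distribution over the five maximum independent sets. The value it certifies is an upper bound on $y^\ast$; the graph and the chosen distribution are illustrative choices of ours, not part of the proof.

```python
import itertools, math

# C5 with unit weights: every vertex has w(N(v))/w(v) = 2
G = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
f2 = 2 * math.log(2) - 1          # f(2) = (1 - 2 + 2 ln 2)/(2 - 1)^2

# enumerate all independent sets of C5
ind_sets = [set(S) for r in range(6) for S in itertools.combinations(range(5), r)
            if all(G[a].isdisjoint(set(S) - {a}) for a in S)]

# one feasible LP point: the uniform distribution over the maximum independent sets
m = max(len(S) for S in ind_sets)
support = [S for S in ind_sets if len(S) == m]
prob = {v: sum(1 for S in support if v in S) / len(support) for v in G}

# the value of y this distribution certifies: max over v of f(2) - P[v in I]
y_feasible = max(f2 - prob[v] for v in G)
```

Here every vertex lies in two of the five maximum independent sets, so $\mathbb{P}[v\in I]=0.4> f(2)\approx 0.386$; thus `y_feasible` is negative, meaning $y^\ast\le 0$ and $0\in K_w$ for this instance, in line with Theorem 2.1.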
This shows that K is indeed compact and hence attains a minimum $\delta _0 \in K$. Our goal is to show that $0\in K$ (equivalently, $\delta _0=0$), since this clearly establishes the inductive claim for G. So, toward a contradiction, let us assume $\delta _0>0$ in the following.
Let us define $\delta :=\delta _0-\frac {\delta _0^2}{8}$. Then, since $\delta \in (0,\delta _0)$ and hence $\delta \notin K$, there exists a strictly positive weight function $w:V(G)\rightarrow \mathbb {R}_+$ for which there is no probability distribution on the independent sets of G under which every vertex v is contained in a randomly drawn independent set with probability at least $f\left (\frac {w(N_G(v))}{w(v)}\right )-\delta $. Since the latter formula is scale-invariant, we may assume without loss of generality throughout the rest of the proof that $w(V(G))=1$.
Let us pick and fix some $\varepsilon \in (0,1)$ (for now arbitrary; we will assign a concrete value later on). Let $w':V(G)\rightarrow \mathbb {R}_+$ be the modified vertex-weighting of G defined as $w'(v):=w(v)\cdot \exp \left (\varepsilon w(N_G(v))\right )$ for every $v\in V(G)$.
Since $\delta _0\in K$, there must exist a probability distribution $\mathcal {D}$ on the independent sets of G such that
 $$ \begin{align*}\mathbb{P}_{I\sim \mathcal{D}}[v\in I]\ge f\left(\frac{w'(N_G(v))}{w'(v)}\right)-\delta_0\end{align*} $$
for every $v\in V(G)$.
For a vertex $u\in V(G)$, let us denote by $\overline {N}_G(u):=\{u\}\cup N_G(u)$ the closed neighborhood of u and by $G_u:=G-\overline {N}_G(u)$ the graph obtained from G by deleting this closed neighborhood. By the inductive assumption, for every $u\in V(G)$ there exists a probability distribution $\mathcal {D}_u$ on the independent sets of $G_u$ such that $\mathbb {P}_{I\sim \mathcal {D}_u}[v \in I]\ge f\left (\frac {w'(N_{G_u}(v))}{w'(v)}\right )$ for every $v\in V(G_u)$.
Let us now define $\varepsilon :=\frac {\delta _0}{4} \in (0,1)$, and let us consider the following process to generate a random independent set I of G:
- • With probability $1-\varepsilon $ (we call this event A), draw I randomly from the distribution $\mathcal {D}$ and return I.
- • With probability $\varepsilon $ (we call this event $B:=A^{\mathsf {c}}$), do the following: First, sample a random vertex $u\in V(G)$, where u equals any given vertex x with probability exactly $w(x)$ (recall that $w(V(G))=1$). Then, draw a random independent set $I_u$ from the distribution $\mathcal {D}_u$ and return the independent set $I:=\{u\}\cup I_u$.
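The two-branch process above can be sketched in code. The following Python sketch is purely illustrative: the distributions $\mathcal{D}$ and $\mathcal{D}_u$ are abstracted as caller-supplied sampler functions, and the graph representation and names are our own assumptions.

```python
import random

def sample_mixture(G, w, eps, sample_D, sample_Du):
    """One draw from the mixture distribution D' described above.

    G         : dict vertex -> set of neighbors (triangle-free)
    w         : dict of vertex weights, assumed to sum to 1
    eps       : the mixing probability epsilon
    sample_D  : callable returning an independent set of G (plays the role of D)
    sample_Du : callable u -> independent set of G minus the closed
                neighborhood of u (plays the role of D_u)
    """
    if random.random() < 1 - eps:                 # event A
        return sample_D()
    vertices = list(w)                            # event B: pick u with P[u = x] = w(x)
    u = random.choices(vertices, weights=[w[x] for x in vertices])[0]
    # I_u avoids the closed neighborhood of u, so {u} | I_u is independent
    return {u} | sample_Du(u)
```

Note that the returned set is independent precisely because `sample_Du(u)` only produces sets avoiding the closed neighborhood of u.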
In the following, let $\mathcal {D}'$ denote the probability distribution on the independent sets of G that is induced by the random independent set I created according to the above process. By our choice of the weight function w, there must exist some vertex $v\in V(G)$ such that
 $$ \begin{align*}\mathbb{P}_{I\sim \mathcal{D}'}[v\in I]<f\left(\frac{w(N_G(v))}{w(v)}\right)-\delta.\end{align*} $$
Our intermediate goal is to give a lower bound on $\mathbb {P}_{I\sim \mathcal {D}'}[v\in I]$.
To estimate this probability, we stick with the random process described above. We then have
 $$ \begin{align*} \mathbb{P}_{I\sim \mathcal{D}'}[v\in I] &=(1-\varepsilon)\mathbb{P}_{I\sim \mathcal{D}'}[v\in I|A]+\varepsilon \mathbb{P}_{I\sim \mathcal{D}'}[v\in I|B]\\ &=(1-\varepsilon)\mathbb{P}_{I\sim \mathcal{D}}[v\in I]+\varepsilon\sum_{x\in V(G)}\mathbb{P}_{I\sim \mathcal{D}'}[v\in I|B \wedge \{u=x\}]w(x)\\ &=(1-\varepsilon)\mathbb{P}_{I\sim \mathcal{D}}[v\in I]+\varepsilon w(v)+\varepsilon\sum_{x\in V(G)\setminus \overline{N}_G(v)}\mathbb{P}_{I\sim \mathcal{D}_x}[v\in I]w(x).\end{align*} $$
By our choice of the distributions $\mathcal {D}$ and $\mathcal {D}_x$, $x\in V(G)$, we have
 $$ \begin{align*} &(1-\varepsilon)\mathbb{P}_{I\sim \mathcal{D}}[v\in I]+\varepsilon\sum_{x\in V(G)\setminus \overline{N}_G(v)}\mathbb{P}_{I\sim \mathcal{D}_x}[v\in I]w(x)\\ &\ge (1-\varepsilon)\left(f\left(\frac{w'(N_G(v))}{w'(v)}\right)-\delta_0\right)+\varepsilon \sum_{x\in V(G)\setminus \overline{N}_G(v)}f\left(\frac{w'(N_{G_x}(v))}{w'(v)}\right)w(x)\\ &=-(1-\varepsilon)\delta_0+(1-\varepsilon)f\left(\frac{w'(N_G(v))}{w'(v)}\right)+\sum_{x\in V(G)\setminus \overline{N}_G(v)}\varepsilon w(x)f\left(\frac{w'(N_{G_x}(v))}{w'(v)}\right).\end{align*} $$
Since $(1-\varepsilon )+\sum _{x\in V(G)\setminus \overline {N}_G(v)}\varepsilon w(x)=(1-\varepsilon )+\varepsilon \left (1-w(\overline {N}_G(v))\right )=1-\varepsilon w(\overline {N}_G(v))$, the convexity of f implies that
 $$ \begin{align*}&(1-\varepsilon)f\left(\frac{w'(N_G(v))}{w'(v)}\right)+\sum_{x\in V(G)\setminus \overline{N}_G(v)}\varepsilon w(x)f\left(\frac{w'(N_{G_x}(v))}{w'(v)}\right)\\ &\ge \left(1-\varepsilon w(\overline{N}_G(v))\right)f\left(\frac{(1-\varepsilon)w'(N_G(v))+\sum_{x\in V(G)\setminus \overline{N}_G(v)}\varepsilon w(x)w'(N_{G_x}(v))}{w'(v)\left(1-\varepsilon w(\overline{N}_G(v))\right)}\right).\end{align*} $$
The next claim gives a simple upper bound for the expression in the argument of f above.
Claim 2.2. We have that
 $$ \begin{align*}\frac{(1-\varepsilon)w'(N_G(v))+\sum_{x\in V(G)\setminus \overline{N}_G(v)}\varepsilon w(x)w'(N_{G_x}(v))}{w'(v)\left(1-\varepsilon w(\overline{N}_G(v))\right)} \le \frac{w(N_G(v))}{w(v)}\cdot e^{\varepsilon (w(v)-w(N_G(v)))}.\end{align*} $$
Proof.
We have
 $$ \begin{align*} &(1-\varepsilon) w'(N_G(v))+\sum_{x\in V(G)\setminus \overline{N}_G(v)}\varepsilon w(x)w'(N_{G_x}(v))\\ &=\sum_{y\in N_G(v)}{(1-\varepsilon)w'(y)}+\sum_{x\in V(G)\setminus \overline{N}_G(v)}\varepsilon w(x)\sum_{y\in N_G(v)\setminus \overline{N}_G(x)}{w'(y)}\\ &=\sum_{y\in N_G(v)}\left((1-\varepsilon)+\sum_{x\in V(G)\setminus (\overline{N}_G(v)\cup \overline{N}_G(y))}\varepsilon w(x)\right)w'(y)\\ &=\sum_{y\in N_G(v)}\left(1-\varepsilon+\varepsilon(1-w(\overline{N}_G(v)\cup \overline{N}_G(y)))\right)w'(y)\\ &=\sum_{y\in N_G(v)}\left(1-\varepsilon w(\overline{N}_G(v)\cup \overline{N}_G(y))\right)w'(y). \end{align*} $$
Note that for every $y\in N_G(v)$, we have $\overline {N}_G(v)\cup \overline {N}_G(y)=N_G(v)\cup N_G(y)$. Furthermore, since G is triangle-free, the sets $N_G(v)$ and $N_G(y)$ are disjoint, and thus we have $w(\overline {N}_G(v)\cup \overline {N}_G(y))=w(N_G(v))+w(N_G(y))$. This implies
 $$ \begin{align*} &\frac{(1-\varepsilon)w'(N_G(v))+\sum_{x\in V(G)\setminus \overline{N}_G(v)}\varepsilon w(x) w'(N_{G_x}(v))}{w'(v)\left(1-\varepsilon w(\overline{N}_G(v))\right)}\\ &=\frac{ 1}{w'(v)}\sum_{y\in N_G(v)}{\frac{1-\varepsilon w(N_G(v))-\varepsilon w(N_G(y))}{1-\varepsilon w(\overline{N}_G(v))}w'(y)}\\ &=\frac{ 1}{w'(v)}\sum_{y\in N_G(v)}{\left(1-\varepsilon \frac{w(N_G(y))-w(v)}{1-\varepsilon w(\overline{N}_G(v))}\right)w'(y)}\\ &\le \frac{ 1}{w'(v)}\sum_{y\in N_G(v)}{\left(1-\varepsilon (w(N_G(y))-w(v))\right)w'(y)}\\ &\le \frac{1}{w'(v)}\sum_{y\in N_G(v)}\exp\left(-\varepsilon (w(N_G(y))-w(v))\right)\cdot w(y)\exp\left(\varepsilon w(N_G(y))\right)\\ &=\frac{1}{w'(v)}\exp\left(\varepsilon w(v)\right)w(N_G(v))\\ &=\frac{w(N_G(v))}{w(v)}\cdot \exp\left(\varepsilon(w(v)-w(N_G(v)))\right), \end{align*} $$
where we used the definition of $w'$ in the third-to-last and last lines. This concludes the proof of the claim.
Using Claim 2.2 and the previously established inequalities (together with the fact that f is monotonically decreasing), it follows that $\mathbb {P}_{I\sim \mathcal {D}'}[v\in I]$ is lower-bounded by
 $$ \begin{align*}\varepsilon w(v)-(1-\varepsilon)\delta_0+\left(1-\varepsilon w(\overline{N}_G(v))\right)f\left(\frac{w(N_G(v))}{w(v)}\cdot \exp\left(\varepsilon (w(v)-w(N_G(v)))\right)\right).\end{align*} $$
Let us now estimate the above expression. By Taylor expansion, it is not hard to verify that the inequality $\exp (z)\le 1+z+z^2$ holds for every $z\in [-1,1]$. Let us set $x:=\frac {w(N_G(v))}{w(v)}$, $z:=\varepsilon (w(v)-w(N_G(v)))$ and $y:=x\cdot \exp (z)$. Note that since f is convex and differentiable, we have the inequality $f(y)\ge f(x)+f'(x)(y-x)$. Since $w(v), w(N_G(v))\le w(V(G))=1$, we obtain $|z|\le \varepsilon <1$ and thus $y\le x(1+z+z^2)$. Since f is strictly monotonically decreasing, we have $f'(x)<0$. Putting these facts together, it follows that
 $$ \begin{align*}&f\left(\frac{w(N_G(v))}{w(v)}\cdot \exp\left(\varepsilon (w(v)-w(N_G(v)))\right)\right)=f(y)\\ &\ge f(x)+f'(x)(y-x)\\ &\ge f(x)+f'(x)\cdot x\cdot (z+z^2)\\ & \ge f(x)+xzf'(x)-\varepsilon^2,\end{align*} $$
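The two elementary facts used in this step, the inequality $\exp(z)\le 1+z+z^2$ on $[-1,1]$ and the tangent-line bound for the convex function f, can be sanity-checked numerically. The following Python snippet is our own illustration (f is restated inline and evaluated at arbitrary sample points away from the singularity at $x=1$):

```python
import math

def f(x):
    # ((1-x) + x*ln x) / (x-1)^2, evaluated away from the singularity at x = 1
    return ((1 - x) + x * math.log(x)) / (x - 1) ** 2

# exp(z) <= 1 + z + z^2 on a grid covering [-1, 1]
assert all(math.exp(k / 100) <= 1 + k / 100 + (k / 100) ** 2 + 1e-12
           for k in range(-100, 101))

# tangent-line bound f(y) >= f(x) + f'(x)(y - x), with a numeric derivative
h = 1e-6
for x in (0.5, 2.0, 5.0):
    fp = (f(x + h) - f(x - h)) / (2 * h)
    for y in (0.3, 0.8, 3.0, 9.0):
        assert f(y) >= f(x) + fp * (y - x) - 1e-6
```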
where we used that $|x\cdot f'(x)|\le 1$ for every $x>0$ and that $|z|\le \varepsilon $ in the last line. Plugging this estimate into the above lower bound for $\mathbb {P}_{I\sim \mathcal {D}'}[v\in I]$ and using that, by our choice of v, we have $\mathbb {P}_{I\sim \mathcal {D}'}[v\in I]<f\left (\frac {w(N_G(v))}{w(v)}\right )-\delta $, we find:
 $$ \begin{align*} &f(x)-\delta>\mathbb{P}_{I\sim \mathcal{D}'}[v\in I]\\ &\ge -(1-\varepsilon)\delta_0+\varepsilon w(v)+(1-\underbrace{\varepsilon (w(v)+w(N_G(v)))}_{\le \varepsilon})\cdot (f(x)+\underbrace{xzf'(x)-\varepsilon^2}_{\le \varepsilon})\\ &>-(1-\varepsilon)\delta_0+\varepsilon w(v)+f(x)+xzf'(x)-2\varepsilon^2-\varepsilon(w(v)+w(N_G(v)))f(x). \end{align*} $$
Rearranging yields
 $$ \begin{align*}\delta_0-\delta>\varepsilon\delta_0-2\varepsilon^2+\varepsilon w(v)+xzf'(x)-\varepsilon (w(v)+w(N_G(v)))f(x).\end{align*} $$
Using that $x=\frac {w(N_G(v))}{w(v)}$ and $z=\varepsilon w(v) (1-x)$, we can simplify as follows:
 $$ \begin{align*} &\varepsilon w(v)+xzf'(x)-\varepsilon(w(v)+w(N_G(v)))f(x)\\ &=\varepsilon w(v)\left(1+x(1-x)f'(x)-(1+x)f(x)\right)=0, \end{align*} $$
where we used the differential equation satisfied by f in the last step. Hence, we have proven the inequality $\delta _0-\delta>\varepsilon \delta _0-2\varepsilon ^2$. Recalling our definitions $\delta :=\delta _0-\frac {\delta _0^2}{8}$ and $\varepsilon :=\frac {\delta _0}{4}$, we can now see that the above inequality implies $\frac {\delta _0^2}{8}>\frac {\delta _0^2}{8}$, which is absurd. This is the desired contradiction, showing that our initial assumption $\delta _0>0$ was wrong. Hence, we have shown that $\delta _0=0$, establishing the inductive claim for G. This concludes the proof of the theorem by induction.
3 Proofs of Theorems 1.2, 1.4, and 1.5
In this section we use Theorem 2.1, established in the previous section, to deduce Theorems 1.2, 1.4, and 1.5. Let us start with Theorem 1.2, which is a simple corollary of Theorem 2.1 obtained by using the all-$1$ weight assignment.
Proof of Theorem 1.2.
Let G be any given triangle-free graph, and let $w:V(G)\rightarrow \mathbb {R}_+$ be defined as $w(v):=1$ for every $v\in V(G)$. Then $\frac {w(N_G(v))}{w(v)}=d_G(v)$ for every vertex $v\in V(G)$, and hence by Theorem 2.1 there exists a probability distribution $\mathcal {D}$ on the independent sets of G such that
 $$ \begin{align*}\mathbb{P}_{I\sim \mathcal{D}}[v\in I]\ge f(d_G(v))\end{align*} $$
for every $v\in V(G)$. Since $f(x)=(1-o(1))\frac {\ln (x)}{x}$ as $x\rightarrow \infty $, this establishes Theorem 1.2.
Next, let us deduce Theorem 1.4. This, in fact, can be derived from Theorem 1.2 using the following relationship between Conjectures 1.1 and 1.3 proved by Kelly and Postle [Reference Kelly and Postle40, Proposition 5.2]:
Proposition 3.1. For every $\varepsilon , c>0$, the following holds for all sufficiently large n. Let G be a triangle-free graph on n vertices with demand function h such that $h(v)\ge c\frac {\ln d_G(v)}{d_G(v)}$ for every $v\in V(G)$. If G has an h-coloring, then
 $$ \begin{align*}\chi_f(G)\le (\sqrt{2/c}+\varepsilon)\sqrt{\frac{n}{\ln n}}.\end{align*} $$
With this statement at hand, we can now easily deduce Theorem 1.4.
Proof of Theorem 1.4.
The statement of Theorem 1.4 is equivalent to showing that for every fixed $\delta>0$ and n sufficiently large in terms of $\delta $, every triangle-free graph G on n vertices satisfies $\chi _f(G)\le (\sqrt {2}+\delta )\sqrt {\frac {n}{\ln n}}$. Let $\varepsilon>0$ and $0<c<1$ (only depending on $\delta $) be chosen such that $\sqrt {2/c}+\varepsilon <\sqrt {2}+\delta $. By Proposition 3.1 there exists $n_0=n_0(\varepsilon ,c)\in \mathbb {N}$ such that every triangle-free graph G with $n\ge n_0$ vertices that admits an h-coloring for some demand function h satisfying $h(v)\ge c\frac {\ln d_G(v)}{d_G(v)}$ for all $v\in V(G)$ has fractional chromatic number at most $(\sqrt {2/c}+\varepsilon )\sqrt {\frac {n}{\ln n}}\le (\sqrt {2}+\delta )\sqrt {\frac {n}{\ln n}}$. By [Reference Kelly and Postle40, Proposition 1.4 (c)] the latter statement is equivalent to the following: Every triangle-free graph on $n\ge n_0$ vertices that admits a probability distribution on its independent sets such that each vertex v is included with probability at least $c\frac {\ln d_G(v)}{d_G(v)}$ in a randomly drawn independent set has fractional chromatic number at most $(\sqrt {2}+\delta )\sqrt {\frac {n}{\ln n}}$.
Since $c<1$, Theorem 1.2 implies that there exists a constant $D=D(c)$ such that every triangle-free graph of minimum degree at least D admits a probability distribution on its independent sets where each vertex v is included in a randomly drawn independent set with probability at least $c\frac {\ln d_G(v)}{d_G(v)}$. Putting this together with the statement above, we immediately obtain that every triangle-free graph on $n\ge n_0$ vertices with minimum degree at least D has fractional chromatic number at most $(\sqrt {2}+\delta )\sqrt {\frac {n}{\ln n}}$.
Let $n_1$ be an integer chosen large enough such that $(\sqrt {2}+\delta )\sqrt {\frac {n_1}{\ln n_1}}>\max \{D+1,n_0\}$. We now claim that every triangle-free graph on $n\ge n_1$ vertices has fractional chromatic number at most $(\sqrt {2}+\delta )\sqrt {\frac {n}{\ln n}}$, which is the statement that we wanted to prove initially. Let G be any given triangle-free graph on $n\ge n_1$ vertices. Let $G'$ be the subgraph of G obtained by repeatedly removing vertices of degree less than D from G, until no such vertices are left ($G'$ is the so-called D-core of G). Then $G'$ is a triangle-free graph that is either empty or has minimum degree at least D. Hence, we either have $|V(G')|<n_0$ and thus $\chi _f(G')<n_0$, or $|V(G')|\ge n_0$ and thus $\chi _f(G')\le (\sqrt {2}+\delta )\sqrt {\frac {|V(G')|}{\ln |V(G')|}}\le (\sqrt {2}+\delta )\sqrt {\frac {n}{\ln n}}$.
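The D-core peeling used here is easy to carry out algorithmically; the following minimal sketch (the dictionary-of-sets graph representation is our own choice, not from the paper) computes it greedily.

```python
def d_core(adj, D):
    """Return the vertex set of the D-core of a graph.

    The D-core is obtained by repeatedly deleting vertices of degree
    less than D until none remain; the result does not depend on the
    deletion order.  adj maps each vertex to its set of neighbors.
    """
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) < D:
                # delete v and remove it from its neighbors' lists
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                changed = True
    return set(adj)
```

For instance, the 2-core of a path is empty (the endpoints peel away and expose new low-degree vertices), while a cycle is its own 2-core.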
Pause to verify that $\chi _f(G)\le \max \{\chi _f(G-v),d_G(v)+1\}$ holds for every graph G and every vertex $v\in V(G)$. Repeated application of this fact combined with the definition of $G'$ now implies that
$$ \begin{align*}\chi_f(G)\le \max\{\chi_f(G'),D+1\}\le \max\left\{n_0,(\sqrt{2}+\delta)\sqrt{\frac{n}{\ln n}},D+1\right\}=(\sqrt{2}+\delta)\sqrt{\frac{n}{\ln n}},\end{align*} $$
as desired. Here, we used our choice of $n_1$ and that $n\ge n_1$ in the last step. This concludes the proof.
Finally, let us prove the upper bound on the fractional chromatic number of triangle-free graphs with a given number of edges stated in Theorem 1.5. Interestingly, it can be deduced by applying Theorem 2.1 with two different vertex-weight functions following a similar proof idea to Proposition 3.1.
Proof of Theorem 1.5.
Let G be any given triangle-free graph with m edges. To prove the upper bound on the fractional chromatic number, without loss of generality it suffices to consider the case when G has no isolated vertices. By definition of the fractional chromatic number, we have to show that there exists a probability distribution on the independent sets of G for which a randomly drawn independent set contains any given vertex of G with probability at least $ (1-o(1))(\ln m)^{2/3}/(18 m)^{1/3}$. To construct such a distribution, we consider the following process to generate a random independent set I in G. With probability $1/3$ we pick I as in Theorem 2.1 with the weight function defined as $w_1(v):=1$ for every $v\in V(G)$, with probability $1/3$ we pick I as in Theorem 2.1 using the weight function $w_2(v):=d_G(v)$ for every $v\in V(G)$, and with probability $1/3$ we pick a random vertex u in G with $\mathbb {P}[u=v]=d_G(v)/2m$ for every $v\in V(G)$ and let I be its neighborhood (which is clearly an independent set in G, since G is triangle-free).
It follows that
$$ \begin{align*}\mathbb{P}[v\in I]\geq \frac{1}{3} f(d_G(v))+\frac{1}{3} f(S_G(v)/d_G(v)) + \frac{1}{3} \frac{S_G(v)}{2m},\end{align*} $$
for every $v\in V(G)$, where $S_G(v)$ denotes the sum of degrees over all neighbors of v in G. It suffices to show that the right-hand side is at least $(1-o(1)) (\ln m)^{2/3}/(18 m)^{1/3}$ for all vertices v.
Observe that if either $d_G(v)< m^{1/3}$ or $S_G(v)/d_G(v) < m^{1/3}$, then the desired inequality is already satisfied with room to spare from the first and second terms, respectively. Otherwise, if $d_G(v)\geq m^{1/3}$ and $S_G(v)/d_G(v) \geq m^{1/3}$, we have
$$ \begin{align*} &\frac{1}{3} f(d_G(v))+\frac{1}{3} f(S_G(v)/d_G(v)) + \frac{1}{3} \frac{S_G(v)}{2m}\\ &\qquad = \frac{1}{3} \frac{(1-o(1))\ln d_G(v)}{d_G(v)} +\frac{1}{3} \frac{(1-o(1))\ln(S_G(v)/d_G(v))}{S_G(v)/d_G(v)} + \frac{1}{3} \frac{S_G(v)}{2m}\\ &\qquad \geq \frac{1}{3} \frac{(1-o(1))\ln(m^{1/3})}{d_G(v)} +\frac{1}{3} \frac{(1-o(1))\ln(m^{1/3})}{S_G(v)/d_G(v)} + \frac{1}{3} \frac{S_G(v)}{2m}\\ &\qquad \geq \left(\frac{(1-o(1))\ln(m^{1/3})}{d_G(v)} \cdot \frac{(1-o(1))\ln(m^{1/3})}{S_G(v)/d_G(v)} \cdot \frac{S_G(v)}{2m}\right)^{1/3}, \end{align*} $$
where the last line follows by the AM–GM inequality. Simplifying yields a lower bound of $(1-o(1))\left (\frac {\ln (m)^2}{18m}\right )^{1/3}$, as desired.
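The final simplification rests on the fact that the factors $d_G(v)$ and $S_G(v)$ cancel inside the cube root, leaving $\ln(m^{1/3})^2/(2m) = \ln(m)^2/(18m)$. A quick numerical sanity check of this cancellation (our own, not from the paper):

```python
import math

def product_after_cancellation(m):
    """What remains of the AM-GM product once d_G(v) and S_G(v) cancel:
    (ln(m^{1/3})^2 / (2m))^{1/3}."""
    return (math.log(m ** (1 / 3)) ** 2 / (2 * m)) ** (1 / 3)

def closed_form(m):
    """The closed form stated in the proof: (ln(m)^2 / (18 m))^{1/3}."""
    return (math.log(m) ** 2 / (18 * m)) ** (1 / 3)
```

Since $\ln(m^{1/3}) = \frac{1}{3}\ln m$, squaring contributes the factor $\frac{1}{9}$ that turns the $2$ in the denominator into $18$.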
4 Proofs of Theorems 1.7 and 1.8
In this section, we present a new stochastic process for generating an independent set I in a graph G, and prove an accompanying key technical result, Theorem 4.1, which lower bounds the probability of any vertex being contained in I, under the assumption that G is triangle-free. Theorems 1.7 and 1.8 are both direct consequences of this statement.
Let us assume that G is a triangle-free graph with vertices $v_1, \dots , v_n$. We denote by $N_L(v_i)$ the set of neighbors $v_j$ of $v_i$ where $j<i$. Similarly, $N_R(v_i)$ denotes the set of neighbors $v_j$ of $v_i$ where $j>i$. Let $w_0:V(G)\rightarrow \mathbb {R}_{+}$ be any assignment of positive weights to the vertices of G.
Consider the following process: Initially assign vertices the weights $w(v_i)=w_0(v_i)$ for all $1\leq i\leq n$. Then for each step i in $1, 2, \dots , n$ do the following:
- With probability $1-e^{-w(v_i)}$, put $w(v_j)=0$ for all $v_j\in N_R(v_i)$.
- With probability $e^{-w(v_i)}$, multiply the weight of all $v_j\in N_R(v_i)$ by $e^{w(v_i)}$.
Let I be the set of vertices $v_i$ for which the first option occurred. It is easy to see that I is an independent set. If $v_i\in I$, then at step i all vertices $v_j\in N_R(v_i)$ get assigned the weight $0$ for the rest of the process, which means they enter the independent set with probability $1-e^{-0}=0.$
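The process is straightforward to simulate. The sketch below (the adjacency-list representation is our own) generates one sample of I; by construction it never places two adjacent vertices in I, since the earlier endpoint of an edge, once selected, zeroes out the weight of the later one.

```python
import math
import random

def random_independent_set(n, adj_right, w0, rng=random):
    """One run of the sequential weight process on vertices 0, ..., n-1.

    adj_right[i] lists the neighbors j of vertex i with j > i (N_R(v_i));
    w0[i] is the initial positive weight of vertex i.
    """
    w = list(w0)
    I = set()
    for i in range(n):
        if rng.random() < 1 - math.exp(-w[i]):
            I.add(i)                       # first option: v_i enters I ...
            for j in adj_right[i]:
                w[j] = 0.0                 # ... and kills its right-neighbors
        else:
            boost = math.exp(w[i])
            for j in adj_right[i]:
                w[j] *= boost              # second option: boost right-neighbors
    return I
```

Note that a vertex whose weight has been set to $0$ can never enter I, because its selection probability at its own step is $1-e^{-0}=0$.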
As will be proven next, the random independent set I generated in this fashion has the following property.
Theorem 4.1. Let $w_0:V(G)\rightarrow (0,1)$ be any weight function satisfying
$$ \begin{align*}\frac{1}{2}\ln(w_0(v_k))+\sum_{v_i\in N_L(v_k)}w_0(v_i)\le 0\end{align*} $$
for every $1\leq k\leq n$. Then we have
$$ \begin{align*}\mathbb{P}[v_k\in I] \geq \frac{w_0(v_k)}{2}\end{align*} $$
for every $1\leq k\leq n$.
Consider a fixed vertex $v_k$. In order to prove Theorem 4.1, we will work toward establishing a lower bound on the probability that $v_k\in I$. We do this by considering a modified process, defined as follows. Initially assign the vertices weights $\tilde {w}(v_i)=w_0(v_i)$ for all $1\leq i\leq n$. Then for each step i in $1, 2, \dots , k-1$, do the following.
- If $v_i\not \in N_L(v_k)$, do the same update rule as for w.
- If $v_i\in N_L(v_k)$, multiply the weight of all vertices $v_j\in N_R(v_i)$ by $e^{\tilde {w}(v_i)}$.
In other words, $\tilde {w}$ has the same update rule as w for any step i where $v_i\not \in N_L(v_k)$. For any step i where $v_i\in N_L(v_k)$, the process follows the update rule of the second bullet point of w with probability $1$.
Let us denote by $w_i(v_j)$ and $\tilde {w}_i(v_j)$ the weight of $v_j$ after step i in the respective processes, let $\tilde {w}_0(v_j):=w_0(v_j)$, and let
$$ \begin{align*}X:=\sum_{v_i\in N_L(v_k)} \tilde{w}_{k-1}(v_i).\end{align*} $$
By construction of $\tilde {w}$, we have the following relation to w.
Claim 4.2. For any function $f:\mathbb {R}\rightarrow \mathbb {R}$ such that $f(0)=0$ we have
$$ \begin{align*}\mathbb{E}_{w}\left[f(w_{k-1}(v_k))\right]=\mathbb{E}_{\tilde{w}}\left[f(\tilde{w}_{k-1}(v_k)) e^{-X} \right].\end{align*} $$
Proof.
We can encode each possible sequence of weight functions $(w_0, w_1, \dots , w_{k-1})$ of the process w as a sequence $a\in \{1,2\}^{k-1}$ where $a_i$ denotes whether, in step i, randomness chooses the first or the second bullet point. In other words, $a_i=1$ if and only if $v_i\in I$.
Note that if $a_i=1$ for any index i where $v_i\in N_L(v_k)$, then this sequence will result in $w_{k-1}(v_k)=0$. Thus, such a sequence does not contribute to the value of $\mathbb {E}_{w}\left [f(w_{k-1}(v_k))\right ]$. Similarly, if $a_i=a_j=1$ for any two neighboring vertices $v_i$ and $v_j$, then the probability of the corresponding sequence is $0$, which means it also does not contribute to $\mathbb {E}_{w}\left [f(w_{k-1}(v_k))\right ]$.
Let $A\subseteq \{1,2\}^{k-1}$ denote the set of sequences that do not match either of the aforementioned conditions. Then any $a\in A$ can be interpreted as a possible sequence of weight functions $(w_0, \dots , w_{k-1})$ and $(\tilde {w}_0, \dots , \tilde {w}_{k-1})$ produced by either process w or $\tilde {w}$. Note that, by definition of w and $\tilde {w}$, the same sequence of choices a will produce the same sequence of weight functions in either process. Let us denote this common sequence by $w^a$, and let us denote by $\mathbb {P}_w[a]$ and $\mathbb {P}_{\tilde {w}}[a]$ the probabilities that the sequence of choices of the respective processes equals a.
By comparing the transition probabilities of w and $\tilde {w}$, we immediately get
$$ \begin{align*} \frac{\mathbb{P}_w[a]}{\mathbb{P}_{\tilde{w}}[a]} &= \exp\left(-\sum_{v_i\in N_L(v_k)} w^a_{i-1}(v_i)\right) =\exp\left(-\sum_{v_i\in N_L(v_k)} w^a_{k-1}(v_i)\right) \end{align*} $$
for all $a\in A$, where the last equality follows by observing that no vertex $v_i$ has its weight updated after step $i-1$. Thus
$$ \begin{align*} \mathbb{E}_{w}[f(w_{k-1}(v_k))] &=\sum_{a\in A} f(w^a_{k-1}(v_k)) \mathbb{P}_w[a]\\ &=\sum_{a\in A} f(w^a_{k-1}(v_k))\exp\left(-\sum_{v_i\in N_L(v_k)} w^a_{k-1}(v_i)\right)\mathbb{P}_{\tilde{w}}[a]\\ &=\mathbb{E}_{\tilde{w}}[f(\tilde{w}_{k-1}(v_k))e^{-X} ].\\[-34pt] \end{align*} $$
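Since both processes run for finitely many steps, the identity of Claim 4.2 can be verified by exhaustive enumeration on a small triangle-free graph. In the sketch below (the 0-indexed encoding of vertices and choice sequences is our own), a sequence $a$ ranges over all $\{1,2\}$-valued choice vectors for the steps before the target vertex, and both expectations are computed exactly.

```python
import itertools
import math

def run(adj, w0, a, target=None):
    """Run the weight process for a fixed choice sequence a.

    adj maps vertex -> neighbor set (vertices 0..n-1); a[i] = 1 means step i
    takes the first option.  If target is given, run the modified process:
    steps i in N_L(target) take the second option with probability 1.
    Returns (probability of this choice sequence, final weight vector).
    """
    w = list(w0)
    prob = 1.0
    for i, choice in enumerate(a):
        forced = target is not None and i in adj[target] and i < target
        p1 = 1 - math.exp(-w[i])           # probability of the first option
        if forced:
            prob *= 1.0 if choice == 2 else 0.0
        else:
            prob *= p1 if choice == 1 else 1 - p1
        for j in adj[i]:
            if j > i:                      # only right-neighbors are updated
                w[j] = 0.0 if choice == 1 else w[j] * math.exp(w[i])
    return prob, w

def expectations(adj, w0, k, f):
    """Return E_w[f(w_{k-1}(v_k))] and E_wt[f(wt_{k-1}(v_k)) e^{-X}] exactly."""
    NL = [i for i in adj[k] if i < k]
    lhs = rhs = 0.0
    for a in itertools.product((1, 2), repeat=k):
        p, w = run(adj, w0, a)
        lhs += p * f(w[k])
        pt, wt = run(adj, w0, a, target=k)
        rhs += pt * f(wt[k]) * math.exp(-sum(wt[i] for i in NL))
    return lhs, rhs
```

On a path $0-1-2-3$ with target vertex $3$ and $f(x)=1-e^{-x}$, the two sums agree to machine precision, as the claim predicts.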
Claim 4.3. Suppose $v_i\in N_L(v_k)$. Then $\tilde {w}_t(v_i)$ is a martingale in t for $t=0, \dots , k-1$.
Proof.
By definition of $\tilde {w}$, the only steps j where the value of $\tilde {w}(v_i)$ is updated are those where $v_j\in N_L(v_i)$. Note that $v_j\not \in N_L(v_k)$, as otherwise $v_i, v_j, v_k$ would form a triangle. Thus $\tilde {w}(v_i)$ is updated according to
$$ \begin{align*}\tilde{w}_j(v_i) = \begin{cases} 0&\text{ with probability }1-e^{-\tilde{w}_{j-1}(v_j)}\\ \tilde{w}_{j-1}(v_i)e^{\tilde{w}_{j-1}(v_j)}&\text{ with probability }e^{-\tilde{w}_{j-1}(v_j)}.\end{cases}\end{align*} $$
It is easy to see that this update preserves the expectation of $\tilde {w}(v_i)$.
Claim 4.4.
 $$ \begin{align*}\mathbb{E}_{\tilde{w}}[X] = \sum_{v_i \in N_L(v_k)} w_0(v_i).\end{align*} $$
Proof.
By Claim 4.3, $\mathbb {E}_{\tilde {w}}[X]=\sum _{v_i\in N_L(v_k)}\mathbb {E}_{\tilde {w}}[\tilde {w}_{k-1}(v_i)] = \sum _{v_i\in N_L(v_k)}\mathbb {E}_{\tilde {w}}[\tilde {w}_{0}(v_i)]=\sum _{v_i\in N_L(v_k)}w_0(v_i)$, where the last equality holds since $\tilde {w}_0(v_i)=w_0(v_i)$ deterministically.
Claim 4.5.
 $$ \begin{align*}\tilde{w}_{k-1}(v_k)=w_0(v_k) e^X.\end{align*} $$
Proof.
By definition of $\tilde {w}$, $\tilde {w}(v_k)$ increases by a factor $e^{\tilde {w}_{i-1}(v_i)}=e^{\tilde {w}_{k-1}(v_i)}$ for each step i where $v_i\in N_L(v_k)$. For any other step, $\tilde {w}(v_k)$ is unchanged.
Claim 4.6.
 $$ \begin{align*}\mathbb{P}_{w}[v_k\in I] = \mathbb{E}_{\tilde{w}}\left[ \left(1-e^{-w_0(v_k) e^X}\right)e^{-X} \right].\end{align*} $$
Proof.
By the definition of w and I we have $\mathbb {P}_{w}[v_k\in I] = \mathbb {E}_{w}[1-e^{-w_{k-1}(v_k)}].$ Let $f(x)=1-e^{-x}$. By Claim 4.2, noting that $f(0)=0$, we get
$$ \begin{align*}\mathbb{E}_{w}[1-e^{-w_{k-1}(v_k)}] = \mathbb{E}_{w}[f(w_{k-1}(v_k))] = \mathbb{E}_{\tilde{w}}[f(\tilde{w}_{k-1}(v_k))e^{-X} ].\end{align*} $$
By Claim 4.5, $\tilde {w}_{k-1}(v_k)=w_0(v_k) e^X$. Combining these gives the desired equality.
Proof of Theorem 4.1.
By Claim 4.4, we know that X is a non-negative random variable satisfying
 $$ \begin{align*}\mathbb{E}_{\tilde{w}}[X] = \sum_{v_i \in N_L(v_k)} w_0(v_i) \leq \frac12 \ln\left( \frac1{w_0(v_k)} \right).\end{align*} $$
Moreover, by Claim 4.6, we have that
 $$ \begin{align*}\mathbb{P}_{w}[v_k \in I]=\mathbb{E}_{\tilde{w}}\left[\left(1-e^{-w_0(v_k) e^X}\right)e^{-X}\right].\end{align*} $$
In order to estimate this expectation given the aforementioned conditions on X, we need the following somewhat technical inequalities.
Claim 4.7. For any $0<t<1.79328$, the following two inequalities hold.
1. $e^t < 1+t+t^2$
2. $(1-e^{-t})\left (1-\frac {\ln (1/t)}{2\ln (1.79328/t)}\right )\geq \frac {t}2$
Proof.
It is not hard to verify both inequalities by computer assistance, or by a direct proof if one replaces $1.79328$ by a less ambitious constant. For the sake of clarity of the presentation, we omit explicit proofs.
Claim 4.8. For any real numbers $x>0$ and $0<w<1.79328$ we have
$$ \begin{align*}\left(1-e^{-w e^x}\right)e^{-x} \geq (1-e^{-w})\left(1-\frac{x}{\ln(1.79328/w)}\right).\end{align*} $$
Proof.
Observe that $\left (1-e^{-w e^x}\right )e^{-x}$ is non-negative. Moreover, it is easy to check that its second derivative in x equals
$$ \begin{align*}e^{-x-w e^x}\left(e^{w e^x} - 1 - we^x - w^2 e^{2x}\right),\end{align*} $$
which, by Claim 4.7 (1), is negative whenever $we^x < 1.79328$, that is, $x<\ln (1.79328/w)$. Hence the inequality in the claim holds for $0\leq x \leq \ln (1.79328/w)$, as the inequality clearly holds at both endpoints and the function is concave on the interval between these points. But for larger x, the inequality also holds, as the right-hand side then turns negative.
Given these inequalities, the theorem follows by straightforward calculations. By Claim 4.8 and since $\mathbb {E}_{\tilde {w}}[X]\le \frac {1}{2}\ln \left (\frac {1}{w_0(v_k)}\right )$, we have
$$ \begin{align*} \mathbb{P}_{w}[v_k\in I] &\geq \mathbb{E}_{\tilde{w}}\left[\left(1-e^{-w_0(v_k)}\right)\left(1-\frac{X}{\ln(1.79328/w_0(v_k))}\right)\right]\\ &\geq \left(1-e^{-w_0(v_k)}\right)\left(1-\frac{\ln(1/w_0(v_k))}{2\ln(1.79328/w_0(v_k))}\right), \end{align*} $$
which by Claim 4.7 (2) is at least $\frac {w_0(v_k)}{2}$, as desired.
Proof of Theorem 1.7.
Let G be a triangle-free d-degenerate graph with degeneracy order $v_1, \dots , v_n$ such that $|N_L(v_i)|\leq d$ for all vertices $v_i$. Assume $d\geq 2.$ We apply Theorem 4.1 with $w_0\equiv \frac {\ln d -\ln \ln d}{2d}.$ One immediately checks that $0<w_0(v_k)<1$ and
$$ \begin{align*}\frac12 \ln w_0(v_k) + \sum_{v_i\in N_L(v_k)} w_0(v_i) \leq \frac12 \ln\left(\frac12 \left(\ln d - \ln\ln d\right)\right)-\frac12 \ln\ln d\leq 0,\end{align*} $$
which implies that
$$ \begin{align*}\mathbb{P}[v_k\in I] \geq \frac12 w_0(v_k)=\left(\frac14-o(1)\right)\frac{\ln d}{d}.\\[-38pt] \end{align*} $$
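The two conditions of Theorem 4.1 for this choice of $w_0$ can be confirmed numerically; a minimal sketch (our own, taking $|N_L(v_k)| = d$ as the worst case in the sum):

```python
import math

def weight_and_condition(d):
    """For the constant weight w0 = (ln d - ln ln d) / (2d), return w0
    together with the worst-case left-hand side (1/2) ln(w0) + d * w0
    of Theorem 4.1's hypothesis, which must be non-positive."""
    w0 = (math.log(d) - math.log(math.log(d))) / (2 * d)
    return w0, 0.5 * math.log(w0) + d * w0
```

For example, at $d=2$ the left-hand side is about $-0.13$, and it stays bounded away from $0$ (roughly by $\frac12\ln 2$ asymptotically) as $d$ grows.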
Proof of Theorem 1.8.
We apply Theorem 4.1 with $w_0(v_i):=\frac 12 p(v_i)$. Observe that
$$ \begin{align*}p(v_k) \leq \prod_{v_i\in N_L(v_k)}\left(1-p(v_i)\right) \leq \exp\left(-\sum_{v_i\in N_L(v_k)} p(v_i)\right),\end{align*} $$
which implies that $\ln p(v_k) + \sum _{v_i\in N_L(v_k)} p(v_i) \leq 0$ and hence $\frac 12 \ln w_0(v_k) + \sum _{v_i\in N_L(v_k)} w_0(v_i)\leq -\frac 12 \ln 2 < 0.$ Moreover, clearly $0<w_0(v_i)<\frac 12$ for all $v_i$. Hence
$$ \begin{align*}\mathbb{P}[v_k\in I] \geq \frac12 w_0(v_k)=\frac14 p(v_k),\end{align*} $$
as desired.
5 Conclusion
In this final section, we would like to briefly mention some open problems and directions for future research.
First, it would be interesting to see to what extent our method used in the proof of Theorem 1.2 can be adapted to the more general setting of graphs with small clique number. Ajtai, Erdős, Komlós, and Szemerédi [Reference Ajtai, Erdős, Komlós and Szemerédi1] proved a lower bound of $\Omega _r\left (\frac {\ln \overline {d}}{\overline {d} \ln \ln \overline {d}}n\right )$ for the independence number of n-vertex $K_r$-free graphs with average degree $\overline {d}$ (see also the later constant-factor improvement [Reference Shearer53] due to Shearer). Johansson [Reference Johansson38] and Molloy [Reference Molloy46] established analogous upper bounds for the chromatic number of $K_r$-free graphs with maximum degree $\Delta $ of the form $O_r\left (\frac {\Delta \ln \ln \Delta }{\ln \Delta }\right )$. In both of these results, it remains a major open problem whether the additional $\ln \ln $-factors are necessary or can be omitted. Related to these questions, Kelly and Postle [Reference Kelly and Postle40, Conjecture 2.4] posed the following conjecture (rephrased).
Conjecture 5.1. For every $r\in \mathbb {N}$ there exists a constant $c=c(r)>0$ such that every $K_r$-free graph G admits a probability distribution on its independent sets such that every vertex $v\in V(G)$ is contained in a random independent set drawn from the distribution with probability at least $c\frac {\ln d_G(v)}{d_G(v)}$.
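For intuition, the smallest nontrivial instance of such a distribution can be checked directly. The sketch below is an illustration we add here (it is not part of the cited works): on the 5-cycle, a $K_3$-free graph where every vertex has degree $d=2$, the uniform distribution over the five maximum independent sets contains each vertex with probability $2/5$, which exceeds $\frac{\ln 2}{2}\approx 0.347$, that is, the conjectured demand with $c=1$.

```python
from itertools import combinations
import math

# Illustrative sanity check (not from the paper): on the 5-cycle C_5, the
# maximum independent sets are exactly the non-adjacent vertex pairs, and the
# uniform distribution over them covers each vertex with probability 2/5,
# above ln(2)/2 ~ 0.347 (Conjecture 5.1 with c = 1 for this tiny example).
n = 5
edges = {frozenset((v, (v + 1) % n)) for v in range(n)}
max_ind_sets = [set(s) for s in combinations(range(n), 2)
                if frozenset(s) not in edges]
prob = {v: sum(v in s for s in max_ind_sets) / len(max_ind_sets)
        for v in range(n)}
assert all(abs(p - 0.4) < 1e-12 for p in prob.values())
assert all(p >= math.log(2) / 2 for p in prob.values())
```

Note that $1/(2/5)=5/2$ is exactly the fractional chromatic number of the 5-cycle, so this toy distribution is best possible.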
While this remains wide open, Kelly and Postle [Reference Kelly and Postle40, Theorem 2.5] proved a weaker version, replacing $c\frac {\ln d_G(v)}{d_G(v)}$ with $c\frac {\ln d_G(v)}{d_G(v)(\ln \ln d_G(v))^2}$. As a first step, it would be interesting to see whether one could remove one of the two $\ln \ln $-factors in this result of Kelly and Postle, which would yield a fractional/local demand version of the aforementioned bounds of Ajtai, Erdős, Komlós and Szemerédi as well as of Molloy. It would also be very interesting to prove generalizations of Theorem 1.7 for $K_r$-free graphs for any $r\ge 4$.
Looking at our proof of Theorem 2.1, it seems likely that by driving the “step size” $\varepsilon $ to zero, one can arrive at some explicit stochastic differential equation for the obtained distribution on random independent sets. It may be interesting to write down such an equation explicitly and see whether it has connections to other known distributions on independent sets.
Other open problems closely related to our results in this paper can be phrased in the context of so-called list packings, see in particular the conjectures and open problems in the papers [Reference Cambie, Cames van Batenburg, Davies and Kang15, Reference Cambie, Cames van Batenburg, Davies and Kang14] by Cambie et al. One of the open problems from these works related to Theorem 1.2 is whether for every triangle-free graph G and every assignment $L(\cdot )$ of color-lists to the vertices of G such that $|L(v)|\ge (C+o(1))\frac {d_G(v)}{\ln d_G(v)}$ for every $v\in V(G)$, there exists a probability distribution on the proper L-colorings of G such that every color in $L(v)$ is chosen with equal probability for every $v\in V(G)$.
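As a toy illustration of what such a distribution looks like (our own minimal example, not taken from the cited works): on a single edge where both endpoints carry the list $\{1,2\}$, the uniform distribution over the two proper colorings already gives every color in every list probability exactly $\frac12$.

```python
from itertools import product

# Minimal illustration (our own toy example): a single edge uv with lists
# L(u) = L(v) = {1, 2}. The proper L-colorings are (1, 2) and (2, 1); under
# the uniform distribution, each color of each list is used with probability
# exactly 1/2 at its vertex.
lists = {0: [1, 2], 1: [1, 2]}
colorings = [c for c in product(lists[0], lists[1]) if c[0] != c[1]]
marginal = {(v, col): sum(c[v] == col for c in colorings) / len(colorings)
            for v in (0, 1) for col in lists[v]}
assert all(abs(p - 0.5) < 1e-12 for p in marginal.values())
```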
Finally, given the resolution of Harris’ conjecture, a natural remaining question is to determine the optimal leading constant C for the problem. In particular, by combining Theorem 1.7 with [Reference Bollobás11], we know that $\frac 12\leq C \leq 4.$ It would appear that the most reasonable answer is $C=1$. We state this as a conjecture.
Conjecture 5.2. The following holds for any sufficiently large d.
- 1. $\chi _f(G)\leq (1+o(1))\frac {d}{\ln d}$ for all d-degenerate triangle-free graphs G.
- 2. There exists a d-degenerate triangle-free graph G such that $\chi _f(G)\geq (1-o(1))\frac {d}{\ln d}.$
As some first evidence toward Conjecture 5.2, (1), we observe (as a further consequence of Theorem 2.1) that it holds when the degeneracy of the graph is replaced by the spectral radius $\rho (G)$ of the adjacency matrix.
Theorem 5.3. Every triangle-free graph G satisfies
 $$ \begin{align*}\chi_f(G)\le (1+o(1))\frac{\rho(G)}{\ln \rho(G)}.\end{align*} $$
Proof. Let G be any given triangle-free graph. We will show that $\chi _f(G)\le \frac {1}{f(\rho (G))}$, where f is the function defined in Section 2. Since $f(x)=(1-o(1))\frac {\ln (x)}{x}$, this will verify the claim of the theorem. Note that $\chi _f(G)=\max \{\chi _f(G_1),\ldots ,\chi _f(G_c)\}$ and similarly $\rho (G)=\max \{\rho (G_1),\ldots ,\rho (G_c)\}$ holds for every graph G with connected components $G_1,\ldots ,G_c$. Hence, since f is monotonically decreasing, it suffices to show the inequality $\chi _f(G)\le \frac {1}{f(\rho (G))}$ for all connected triangle-free graphs. So let G be such a graph, and let $A\in \mathbb {R}^{V(G)\times V(G)}$ be its adjacency matrix. By definition, A has non-negative entries, and hence we may apply the Perron–Frobenius theorem to find that $\rho (A)=\rho (G)$ is an eigenvalue of A and that there exists a corresponding eigenvector $\mathbf {u}\in \mathbb {R}^{V(G)}$ with non-negative entries. So we have $A\mathbf {u}=\rho (G)\mathbf {u}$, which reformulated means that
 $$ \begin{align*}\sum_{x\in N_G(v)}\mathbf{u}_x=\rho(G)\mathbf{u}_v\end{align*} $$
for every $v\in V(G)$. This equality in particular implies that if at least one neighbor of a vertex v has a positive entry in $\mathbf {u}$, then so does v. Hence, since G is a connected graph, it follows that $\mathbf {u}_v>0$ for every $v\in V(G)$. Now interpret the entries of the vector $\mathbf {u}$ as a strictly positive weight assignment to the vertices of G. Then, by Theorem 2.1, there exists a probability distribution $\mathcal {D}$ on the independent sets of G such that for every $v\in V(G)$, we have
$$ \begin{align*}\mathbb{P}_{I\sim \mathcal{D}}[v\in I]\ge f\left(\frac{\sum_{x\in N_G(v)}\mathbf{u}_x}{\mathbf{u}_v}\right)=f(\rho(G)).\end{align*} $$
By definition of the fractional chromatic number, this implies that $\chi _f(G)\le \frac {1}{f(\rho (G))}$, as desired. This concludes the proof.
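The eigenvector identity at the heart of the proof, $\sum_{x\in N_G(v)}\mathbf{u}_x = \rho(G)\,\mathbf{u}_v$ with all entries strictly positive, is easy to check on a concrete connected triangle-free graph. The snippet below is an illustration we add (not part of the paper's argument): for the 5-cycle, the all-ones vector is a Perron eigenvector with $\rho = 2$.

```python
# Concrete check of the identity used in the proof (our illustration): on the
# connected triangle-free 5-cycle, the all-ones vector u satisfies
# sum over neighbors of u_x = 2 * u_v at every vertex, with every entry > 0.
n, rho = 5, 2.0
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
u = {v: 1.0 for v in range(n)}
for v in range(n):
    assert u[v] > 0
    assert abs(sum(u[x] for x in adj[v]) - rho * u[v]) < 1e-12
```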
It is well-known that the spectral radius $\rho (G)$ is always sandwiched between the degeneracy of the graph and the maximum degree, and can be significantly smaller than the latter. Thus, Theorem 5.3 provides a first step toward Conjecture 5.2, (1). Moreover, it lines up nicely with a rich area of research that is concerned with spectral bounds on the (fractional) chromatic number, see, for example, Chapter 6 of the textbook on spectral graph theory [Reference Chung17] by Chung and [Reference Bilu9, Reference Cvetkovic18, Reference Guo and Spiro32, Reference Hoffman34, Reference Kwan and Wigderson42, Reference Mohar45, Reference Nikiforov48] for a small selection of articles on the topic. For example, Theorem 5.3 relates to Wilf’s classic spectral bound [Reference Wilf54] on the chromatic number, which states that every connected graph G satisfies $\chi (G)\le \rho (G)+1$ with equality if and only if G is an odd cycle or a complete graph. In fact, we conjecture that the restriction to the fractional chromatic number in Theorem 5.3 is not necessary and that Wilf’s bound can be improved for all triangle-free graphs as follows.
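The sandwiching can be made quantitative on a small example. The sketch below is a numerical illustration we add here, with helper names (`spectral_radius`, `degeneracy`) introduced for this purpose only: for the triangle-free star $K_{1,9}$, power iteration recovers $\rho = \sqrt{9} = 3$, strictly between the degeneracy $1$ and the maximum degree $9$.

```python
# Numerical illustration (ours, not from the paper): for the triangle-free
# star K_{1,n}, the spectral radius sqrt(n) lies strictly between the
# degeneracy (1) and the maximum degree (n), so it can be far smaller than
# the latter.

def spectral_radius(adj, iters=2000):
    """Power iteration on A + c*I; the diagonal shift avoids the oscillation
    that plain power iteration exhibits on bipartite graphs."""
    c = max(len(ns) for ns in adj.values())
    x = {v: 1.0 for v in adj}
    lam = float(c)
    for _ in range(iters):
        y = {v: c * x[v] + sum(x[w] for w in adj[v]) for v in adj}
        lam = max(y.values())          # entries stay positive throughout
        x = {v: y[v] / lam for v in adj}
    return lam - c

def degeneracy(adj):
    """Greedy peeling: repeatedly delete a minimum-degree vertex and record
    the largest minimum degree encountered."""
    adj = {v: set(ns) for v, ns in adj.items()}
    d = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        d = max(d, len(adj[v]))
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return d

n = 9
star = {0: list(range(1, n + 1)), **{i: [0] for i in range(1, n + 1)}}
rho = spectral_radius(star)            # converges to sqrt(9) = 3
assert degeneracy(star) <= rho <= max(len(ns) for ns in star.values())
```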
Conjecture 5.4. Every triangle-free graph satisfies
 $$ \begin{align*}\chi(G)\le (1+o(1))\frac{\rho(G)}{\ln \rho(G)}.\end{align*} $$
Competing interest
The authors have no competing interests to declare.
Funding statement
The research of the second author was supported by grant No. 216071 of the Swiss National Science Foundation.