Bayesian inference of vorticity in unbounded flow from limited pressure measurements

We study the instantaneous inference of an unbounded planar flow from sparse noisy pressure measurements. The true flow field comprises one or more regularized point vortices of various strengths and sizes. We interpret the true flow's measurements with a vortex estimator, also consisting of regularized vortices, and attempt to infer the positions and strengths of this estimator assuming little prior knowledge. The problem often has several possible solutions, many owing to a variety of symmetries. To deal with this ill-posedness and to quantify the uncertainty, we develop the vortex estimator in a Bayesian setting. We use Markov-chain Monte Carlo and a Gaussian mixture model to sample and categorize the probable vortex states in the posterior distribution, tailoring the prior to avoid spurious solutions. Through experiments with one or more true vortices, we reveal many aspects of the vortex inference problem. With fewer sensors than states, the estimator infers a manifold of equally probable states. Using one more sensor than states ensures that no cases of rank deficiency arise. Uncertainty grows rapidly with distance when a vortex lies outside of the vicinity of the sensors. Vortex size cannot be reliably inferred, but a much smaller estimator vortex can still recover the position and strength of a larger true vortex. In estimates of multiple vortices, their individual signs are discernible because of the non-linear coupling in the pressure. When the true vortex state is inferred from an estimator of fewer vortices, the estimate approximately aggregates the true vortices where possible.


Introduction
In a wide variety of practical situations, we wish to infer the state of a fluid flow from a limited number of flow sensors with generally noisy output signals. In particular, such knowledge of the flow state may assist within the larger scope of a flow control task, either in the training or application of a control strategy. For example, in reinforcement learning (RL) applications for guiding a vehicle to some target through a highly-disturbed fluid environment (Verma et al. 2018), the system is partially observable if the RL framework only has knowledge of the state of the vehicle itself. It is unable to distinguish between quiescent and disturbed areas of the environment and to take actions that are distinctly advantageous in either, thus limiting the effectiveness of the control strategy. Augmenting this state with some knowledge of the flow may help to improve this effectiveness.
The problem of flow estimation is very broad and can be pursued with different types of sensors in the presence of various flow physics. We focus in this paper on the inference of incompressible flows of moderate and large Reynolds numbers from pressure sensors, a problem that has been of interest in the fluid dynamics community for many years (Naguib et al. 2001; Murray & Ukeiley 2003; Gomez et al. 2019; Sashittal & Bodony 2021; Iacobello et al. 2022; Zhong et al. 2023). Flow estimation from other types of noisy measurements has also been pursued in closely-related contexts in recent years, with tools very similar to those used in the present work (Juniper & Yoko 2022; Kontogiannis et al. 2022). Estimation generally seeks to infer a finite-dimensional state vector x from a finite-dimensional measurement vector y. Since the state of a fluid flow is inherently infinite-dimensional, a central task of flow estimation is to approximately represent the flow so that it can be parameterized by a finite-dimensional state vector. For example, this could be done by a linear decomposition into data-driven modes (e.g., with Proper Orthogonal Decomposition (POD) or Dynamic Mode Decomposition (DMD)), in which the flow state comprises the coefficients of these modes, or by a generalized (non-linear) form of this decomposition via a neural network (Morimoto et al. 2022; Fukami & Taira 2023). Though these are very effective in representing flows close to their training set, they are generally less effective in representing newly-encountered flows. For example, a vehicle's interaction with a gust (an incident disturbance) may take very different forms, but the basis modes or neural network can only be trained on a subset of these interactions. Furthermore, even when these approaches are effective, it is difficult to probe them for intuition about some basic questions that underlie the estimation task. How does the effectiveness of the estimation depend on the physical distance between the sensors and vortical flow structures, or on the size of the structures?
We take a different approach to flow representation in this paper, writing the vorticity field as a sum of N (nearly-)singular vortex elements. (The adjective "nearly" conveys that we will regularize each element with a smoothing kernel of small radius.) A distinct feature of a flow singularity (in contrast to, say, a POD mode) is that both its strength and its position are degrees of freedom in the state vector, so that it can efficiently and adaptively approximate a compact evolving vortex structure even with a small number of vortex elements. The compromise for this additional flexibility is that it introduces a non-linear relationship between the velocity field and the state vector. However, since pressure is already inherently quadratically dependent on the velocity, one must contend with non-linearity in the inference problem regardless of the choice of flow representation. Another advantage of using singularities as a representation of the flow is that their velocity and pressure fields are known exactly, providing useful insight into the estimation problem.
To restrict the dimensionality of the problem and make the estimation tractable, we truncate the set of vortex elements to a small number N. In this manner, the point vortices can be thought of as an adaptive low-rank representation of the flow, each capturing a vortex structure but omitting most details of the structure's internal dynamics. To keep the scope of this paper somewhat simpler, we will narrow our focus to unbounded two-dimensional vortical flows, so that the estimated vorticity field is given by

ω(r) = Σ_{J=1}^{N} Γ_J δ_ϵ(r − r_J), (1.1)

where δ_ϵ is a regularized form of the two-dimensional Dirac delta function with small radius ϵ, and any vortex J has strength Γ_J and position described by two Cartesian coordinates r_J = (x_J, y_J). As we show in Appendix A, the pressure field due to a set of N vortex elements is given by

p(r) − p_∞ = −(1/2) ρ Σ_{J=1}^{N} Σ_{K=1}^{N} Γ_J Γ_K Π_ϵ(r − r_J, r − r_K), (1.2)

which expresses the coupled effect of every pair of vortices on the pressure field; Π_ϵ is a regularized vortex interaction kernel, encapsulating the coupled effect of a pair of vortices on pressure, and its details are provided in Appendix A. This focus on unbounded two-dimensional flows preserves the essential purpose of the study, to reveal the important aspects of vortex estimation from pressure, and postpones to a future paper the effect of a body's presence or other flow contributors. Thus, the state dimensionality of the problems in this paper will be n = 3N, composed of the positions and strengths of the N vortex elements.
In this limited context, we address the following question: Given a set of noisy pressure measurements at d observation points (sensors) in or adjacent to an incompressible two-dimensional flow, to what extent can we infer a distribution of vortices? It is important to make a few points before we embark on our answer. First, because of the noise in measurements, we will address the inference problem in a probabilistic (i.e., Bayesian) manner: find the distribution of probable states based on the likelihood of the true observations. As we have noted, we have no hope of approximating a smoothly-distributed vorticity field with a small number of singular vortex elements. However, as a result of our probabilistic approach, the expectation of the vorticity field over the whole distribution of estimated states will be smooth, even though the vorticity of each realization of the estimated flow is singular in space. This fact is shown in Appendix B.
Second, vortex flows are generally unsteady, so we ultimately wish to address this inference question over some time interval. Indeed, that has been the subject of several previous works, e.g., Darakananda et al. (2018); Le Provost & Eldredge (2021); Le Provost et al. (2022), in which pressure measurements were assimilated into the estimate of the state via an ensemble Kalman filter (EnKF) (Evensen 1994). Each step of such sequential data assimilation consists of the same Bayesian inference (or analysis) procedure: we start with an initial guess for the probability distribution (the prior) and seek an improved guess (the posterior). At steps beyond the first one, we generate this prior by simply advancing an ensemble of vortex element systems forward by one time step (the forecast), from states drawn from the posterior at the end of the previous step. The quality of the estimation generally improves over time as the sequential estimator proceeds. However, when we start the problem we have no such posterior from earlier steps. Furthermore, even at later times, as new vortices are created or enter the region of interest, the prior will lack a description of these new features.
This challenge forms a central task of the present paper: We seek to infer the flow state at some instant from a prior that expresses little to no knowledge of the flow. Aside from some loose bounds, we do not have any guess for where the vortex elements lie, how strong they are, or even how many there should be. All we know of the true system's behavior comes from the sensor measurements, and we therefore estimate the vortex state by maximizing the likelihood that these sensor measurements will arise. It should also be noted that viscosity has no effect on the instantaneous relationships between vorticity, velocity, and pressure in unbounded flow, so it is irrelevant whether the true system is viscous or not. To assess the success of our inference approach, we will compute the expectation of the vorticity field under our estimated probability distribution and compare it with the true field, as we will presume to know this latter field for testing purposes.
The last point to make is that the inference of a flow from pressure is often an ill-posed problem with multiple possible solutions, a common issue with inverse problems of partial differential equations. For example, we will find that there may be many candidate vortex systems that reasonably match the pressure sensor data, forming ridges or local maxima of likelihood, even if they are not the global maximum solution. As we will show, these situations arise most frequently when the number of sensors is less than or equal to the number of states, i.e., when the inverse problem is underdetermined to some degree. In these cases, we will find that adding even one additional sensor can address the underlying ill-posedness. There will also be various symmetries that arise due to the vortex-sensor arrangement. In this paper, we will use techniques to mitigate the effect of these symmetries on the estimation task. However, multiple solutions may still arise even with such techniques, and we seek to explore this multiplicity thoroughly. Therefore, we adopt a solution strategy that explores all features of the likelihood function, including multiple maxima. We describe the probability-based formulation of the vortex estimation problem and our methodologies for exploring it in Section 2. Then, we present results of various estimation exercises with this methodology in Section 3, and discuss the results and their generality in Section 4.

The inference problem
Our goal is to estimate the state of an unbounded two-dimensional vortical flow with vortex system (1.1), which we will call a vortex estimator, specified completely by the n-dimensional vector (n = 3N)

x = (x_1, x_2, . . . , x_N), (2.1)

where

x_J = (x_J, y_J, Γ_J) (2.2)

is the 3-component state of vortex J. The associated state covariance matrix is written as

Σ_X = [Σ_{JK}], J, K = 1, . . . , N, (2.3)

where each 3 × 3 block Σ_{JK} represents the covariance between vortex elements J and K:

Σ_{JK} = [[Σ_{r_J r_K}, Σ_{r_J Γ_K}], [Σ_{Γ_J r_K}, Σ_{Γ_J Γ_K}]]. (2.4)

Equation (1.2) expresses the pressure (relative to ambient), ∆p(r) ≡ p(r) − p_∞, as a continuous function of space, r. Here, we also explicitly acknowledge its dependence on the state, ∆p(r, x). Furthermore, we will limit our observations to a finite number of sensor locations, r = s_α, for α = 1, . . . , d, and define from this an observation operator, h : R^n → R^d, mapping a given state vector x to the pressure at these sensor locations:

h_α(x) = ∆p(s_α, x), α = 1, . . . , d. (2.5)

The objective of this paper is essentially to explore the extent to which we can invert function (2.5): from a given set of pressure observations y* ∈ R^d of the true system at s_α, α = 1, . . . , d, determine the state x. In this work, the true sensor measurements y* (the truth data) will be synthetically generated from the pressure field of a set of vortex elements in unbounded quiescent fluid, obtained from the same expression for pressure (1.2) that we use for h(x) in the estimator. Throughout, we will refer to the set of vortices that generated the measurements as the true vortices. However, there is inherent uncertainty in y* due to random measurement noise ε ∈ R^d, so we model the predicted observations y as

y = h(x) + ε, (2.6)

where ε is normally distributed about zero mean, N(ε|0, Σ_E), and the sensor noise is assumed independent and identically distributed, so that its covariance is Σ_E = σ_E² I, with I ∈ R^{d×d} the identity. We seek a probabilistic form of the inversion of (2.6) when set equal to y*. That is, we seek the conditional probability distribution of states based on the observed data, π(x|y*): the peaks of this distribution would represent the most probable state(s) based on the measurements, and the breadth of the peaks would represent our uncertainty about the answer.
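The observation model (2.6) is straightforward to prototype. The sketch below uses the single-vortex case, for which Bernoulli's relation gives the exact pressure deficit ∆p = −ρΓ²/(8π²|s − r_v|²) for a stationary point vortex in unbounded quiescent fluid. The sensor locations, state values, and noise level mirror the examples in this paper, but the function names and unit density are our own choices, and the regularization of Appendix A is omitted here.

```python
import numpy as np

RHO = 1.0  # fluid density (normalized to 1 for this sketch)

def pressure_single_vortex(sensors, r_v, gamma):
    """Exact pressure (relative to ambient) at the sensor locations induced by a
    single stationary point vortex in unbounded quiescent fluid:
    Delta p = -rho * Gamma^2 / (8 pi^2 |s - r_v|^2),
    which follows from Bernoulli with u_theta = Gamma / (2 pi r)."""
    d2 = np.sum((sensors - r_v) ** 2, axis=1)
    return -RHO * gamma ** 2 / (8.0 * np.pi ** 2 * d2)

def observe(sensors, x, sigma_e, rng):
    """Observation model y = h(x) + eps, with i.i.d. Gaussian sensor noise.
    State x = (x_v, y_v, Gamma), as in (2.2)."""
    h = pressure_single_vortex(sensors, x[:2], x[2])
    return h + sigma_e * rng.normal(size=h.shape)

sensors = np.array([[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])  # three sensors on the x axis
x_true = np.array([0.5, 1.0, 1.0])                         # true state (x1, y1, Gamma1)
rng = np.random.default_rng(0)
y_star = observe(sensors, x_true, 5e-4, rng)               # synthetic truth data
```

Note that the pressure deficit decays with the inverse square of distance, which is the root of the rapid growth of uncertainty with distance explored later.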
From Bayes' theorem, the conditional probability of the state given an observation, π(x|y), can be regarded as a posterior distribution over x,

π(x|y) = L(y|x) π_0(x) / π(y), (2.7)

where π_0(x) is the prior distribution, describing our original beliefs about the state x, and L(y|x) is called the likelihood function, representing the probability of observing certain data y at a given state, x. The likelihood function encapsulates our physics-based prediction of the sensor measurements, based on the observation operator h(x).
Collectively, L(y|x)π_0(x) represents the joint distribution of states and their associated observations. The distribution of observations, π(y), is a uniform normalizing factor. Its value is unnecessary for characterizing the posterior distribution over x, since only comparisons (ratios) of the posterior are needed during sampling, as we discuss below in Section 2.4. Thus, we can omit the denominator in (2.7). We evaluate this unnormalized posterior at the true observations, y*, and denote it by π(x|y*) = L(y*|x)π_0(x).
Our goal is to explore and characterize this unnormalized posterior for the vortex system. Expressing our lack of prior knowledge, we write π_0(x) as a uniform distribution within a certain acceptable bounding region B on the state components (discussed in more specific detail below),

π_0(x) = U_n(x|B). (2.8)

Following from our observation model (2.6) with Gaussian noise, the likelihood is a Gaussian distribution about the predicted observations, h(x):

L(y|x) = N(y|h(x), Σ_E) = (2π)^{-d/2} (det Σ_E)^{-1/2} exp(−(1/2) ||y − h(x)||²_{Σ_E}), (2.9)

where we have defined the covariance-weighted norm

||z||²_{Σ_E} ≡ z^T Σ_E^{-1} z. (2.10)

Thus, our unnormalized posterior for the vortex estimator is given by

π(x|y*) = (2π)^{-d/2} (det Σ_E)^{-1/2} exp(−(1/2) ||y* − h(x)||²_{Σ_E}) U_n(x|B). (2.11)

For practical purposes it is helpful to take the log of this probability, so that ratios of probabilities (some of them near machine zero) are assessed instead via differences in their logs. Because only differences are relevant, we can dispense with constants that arise from taking the log, such as the inverse square root factor. We note that the uniform probability distribution U_n(x|B) is uniform and positive inside B and zero outside. Thus, to within an additive constant, this log-posterior is

log π(x|y*) = −(1/2) ||y* − h(x)||²_{Σ_E} + c_B(x), (2.12)

where c_B(x) is a barrier function arising from the log of the uniform distribution, equal to zero for any x inside of the restricted region B of our uniform distribution and −∞ for any x outside of it.

In the examples that we present in this paper, the pressure sensors will be uniformly distributed along a straight line on the x axis, unless otherwise specified. As above, we refer to the set of vortices that generated the truth data as the true vortices and the set of vortices used for estimation purposes as the vortex estimator. To ensure finite pressures throughout the domain, both the true vortices and the vortex estimator are regularized as discussed in Appendix A.4, with a small blob radius ϵ = 0.01 unless otherwise stated. To improve the scaling and conditioning of the problem, the pressure (relative to ambient) is implicitly normalized by ρΓ_0²/L², where Γ_0 is the strength of the largest-magnitude vortex in the true set and L represents a characteristic distance of the vortex set from the sensors; all positions are implicitly normalized by L and vortex strengths by Γ_0. Unless otherwise specified, the measurement noise is σ_E = 5 × 10^{-4}.
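The unnormalized log-posterior (2.12) can be sketched directly from these definitions. In the snippet below, the observation operator h, the bounds B, and the noiseless truth data are illustrative stand-ins (the exact single-vortex pressure, with density normalized out), not the paper's implementation; the barrier c_B(x) is realized by returning −∞ outside B.

```python
import numpy as np

def log_posterior(x, y_star, h, sigma_e, bounds):
    """Unnormalized log-posterior (2.12): Gaussian log-likelihood plus the
    barrier c_B(x) from the uniform prior (0 inside B, -inf outside)."""
    lo, hi = bounds
    if np.any(x < lo) or np.any(x > hi):
        return -np.inf  # barrier: zero prior probability outside B
    r = y_star - h(x)
    return -0.5 * np.dot(r, r) / sigma_e ** 2

# Hypothetical single-vortex observation operator and bounds, for illustration.
sensors = np.array([[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])

def h(x):
    d2 = np.sum((sensors - x[:2]) ** 2, axis=1)
    return -x[2] ** 2 / (8.0 * np.pi ** 2 * d2)

bounds = (np.array([-2.0, 0.01, 0.0]), np.array([2.0, 4.0, 2.0]))
x_true = np.array([0.5, 1.0, 1.0])
y_star = h(x_true)  # noiseless truth data for this sketch
```

Only differences of this quantity between two states are ever needed by the sampler, so the omitted additive constants are immaterial.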

Symmetries and non-linearity in the vortex-pressure system
As mentioned in the introduction, there are many situations in which multiple solutions arise due to symmetries. This is easy to see from a simple thought experiment, depicted in Figure 1(a). Suppose that we wish to estimate a single vortex from pressure sensors arranged in a straight line. A vortex on either side of this line of sensors will induce the same pressure on the sensors, and a vortex of either sign of strength will, as well. Thus, in this simple problem, there are four possible states that are indistinguishable from each other, and we would need more information about the circumstances of the problem to rule out three of them. Such symmetries arise commonly in the problems we will study in this paper.
The symmetry with respect to the sign of vortex strength is due to the non-linear relationship between pressure and vorticity. However, it is important to note that this symmetry issue is partly alleviated by the non-linear relationship, as well, because of the coupling that it introduces between vortices. Figure 2 depicts the pressure fields for two examples of a pair of vortices: one in which the vortices in the pair have equal strengths and another in which the vortices have equal but opposite strengths. Though the pressure in the vortex cores of both pairs is similar and sharply negative, the pressures outside the cores are distinctly different because the interaction kernel enters the sum with different sign. At the positions of the sensors, there is a small region of positive pressure in the case of vortices of opposite sign. These differences are essential for inferring the relative signs of vortex strengths in sets of vortices. However, it is important to stress that the pressure is invariant to a change of sign of all vortices in the set, so we would still need more prior information to discriminate one overall sign from the other.
Another symmetry arises when there is more than one vortex to estimate, as in Figure 1(b), because in such a case, there is no unique ordering of the vortices in the state vector. With each of the vortices assigned a fixed set of parameters, any of the N! permutations of the ordering leads to the same pressure measurements. This vortex relabeling symmetry is a discrete analog of the particle relabeling symmetry in continuum mechanics (Marsden & Ratiu 2013); it is also closely analogous to the non-identifiability issue of the mixture models that will be used for the probabilistic modeling in this paper. All of the N! solutions are obviously equivalent from a flow field perspective, so this symmetry is not a problem if we assess estimator performance based on flow field metrics. However, the N! solutions form distinct points in the state space and we must anticipate this multiplicity when we search for high-probability regions.
The barrier function c_B(x) in (2.12) allows us to anticipate and eliminate some modes that arise from problem symmetries, because we can use the bounding region B to reject samples that fail to meet certain criteria. To eliminate some of the aforementioned symmetries, we will, without loss of generality, restrict our vortex estimator to search for vortices that lie above the line of sensors on the x axis. For cases of multiple vortices in the estimator, we re-order the vortex entries in the state vector by their x position at each MCMC step to eliminate the relabeling symmetry. We also assume that the leftmost vortex has positive strength, which reduces the number of probable states by half; the signs of all estimated vortices can easily be switched a posteriori if new knowledge shows that this assumption is wrong.
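These symmetry-reduction steps amount to a canonicalization of the state vector. A minimal sketch follows, assuming the state is stored as consecutive (x_J, y_J, Γ_J) triples as in (2.1); the function name and the test states are hypothetical.

```python
import numpy as np

def canonicalize(x, n_vortices):
    """Reduce symmetry-related multiplicity in the state vector:
    (i) sort vortices by x position to fix the labeling, and
    (ii) flip all strength signs if the leftmost vortex is negative."""
    v = x.reshape(n_vortices, 3).copy()      # rows: (x_J, y_J, Gamma_J)
    v = v[np.argsort(v[:, 0])]               # remove the relabeling symmetry
    if v[0, 2] < 0:
        v[:, 2] = -v[:, 2]                   # remove the overall-sign symmetry
    return v.ravel()

# Two different labelings of the same two-vortex flow, with all signs flipped
a = np.array([0.8, 1.0, -0.5, -0.3, 1.2, 1.0])
b = np.array([-0.3, 1.2, -1.0, 0.8, 1.0, 0.5])
```

Applying this map at each MCMC step collapses the 2·N! symmetric copies of each mode into one, so the sampler and the mixture model see a single cluster per physically distinct solution.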

The true covariance and rank deficiency
The main challenge of the vortex inference problem is that the observation operator is non-linear, so the posterior (2.11) is not Gaussian. Thus, we will instead sample this posterior and develop an approximate model for the samples, composed of a mixture of Gaussians. However, we can obtain some important insight by supposing that we already know the true state and then linearizing the observation operator about it,

h(x) ≈ h(x*) + H (x − x*),

where H ≡ ∇h(x*) ∈ R^{d×n}, the Jacobian of the observation operator at the true state.
Then we can derive an approximating n-dimensional Gaussian model about mean x* (plus a bias due to noise in the realization of the true measurements) with covariance

Σ*_X = (H^T Σ_E^{-1} H)^{-1}. (2.14)

A brief derivation of this result is included in Appendix C. We will refer to Σ*_X as the "true" state covariance. It is useful to note that the matrix H^T Σ_E^{-1} H is the so-called Fisher information matrix for our Gaussian likelihood (Cui & Zahm 2021), evaluated at the true state. In other words, it quantifies the information about the state of the system that is available in the measurements. Because all sensors have the same noise variance,

Σ*_X = σ_E² (H^T H)^{-1}. (2.15)

We can then use the singular value decomposition of the Jacobian, H = U S V^T, to write a diagonalized form of the covariance,

Σ*_X = V Λ V^T. (2.16)

Here, the eigenvalue matrix is Λ = σ_E² D^{-1}, where D = S^T S ∈ R^{n×n} is a diagonal matrix containing the squares s_j² of the singular values of H in decreasing magnitude up to the rank r ⩽ min(d, n) of H, padded with n − r zeros. The uncertainty ellipsoid thus has semi-axis lengths λ_j^{1/2} = σ_E/s_j along the directions v_j given by the corresponding columns of V. Thus, the greatest uncertainty λ_n^{1/2} is associated with the smallest singular value s_n of H. The corresponding eigenvector, v_n, indicates the mixture of states for which we have the most confusion.
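This construction is easy to reproduce numerically. The sketch below builds H by central finite differences for the illustrative single-vortex observation operator (three sensors on [−1, 1] and true state (0.5, 1, 1), as in the examples of Section 3), then reads off the semi-axis lengths σ_E/s_j and the weakest direction v_n from the SVD. The finite-difference step and helper names are our own choices.

```python
import numpy as np

sensors = np.array([[-1.0, 0.0], [0.0, 0.0], [1.0, 0.0]])

def h(x):
    # Illustrative single-vortex pressure observation operator (density normalized)
    d2 = np.sum((sensors - x[:2]) ** 2, axis=1)
    return -x[2] ** 2 / (8.0 * np.pi ** 2 * d2)

def jacobian(h, x, eps=1e-6):
    """Central-difference Jacobian H = grad h(x), shape (d, n)."""
    cols = []
    for j in range(x.size):
        dx = np.zeros(x.size)
        dx[j] = eps
        cols.append((h(x + dx) - h(x - dx)) / (2 * eps))
    return np.column_stack(cols)

x_true = np.array([0.5, 1.0, 1.0])
sigma_e = 5e-4
H = jacobian(h, x_true)
U, S, Vt = np.linalg.svd(H)      # singular values in decreasing order
semi_axes = sigma_e / S          # lambda_j^{1/2} = sigma_E / s_j
v_worst = Vt[-1]                 # direction of greatest uncertainty, v_n
```

For this configuration the weakest direction predominantly mixes y_1 and Γ_1, consistent with the intuition that a more distant, stronger vortex produces nearly the same sensor readings.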
In fact, the smallest singular values of H are necessarily zero if n > d, i.e., when there are fewer sensors than states and the problem is therefore underdetermined. In such a case, H has a null space spanned by the last n − r columns in V. Moreover, x* is not a unique solution in these problems; rather, it is simply one element of a manifold of vortex states that produce equivalent sensor readings (to within the noise). The covariance Σ*_X evaluated at any x* on the manifold reveals the local tangent to this manifold: directions near x* along which we get identical sensor values. The true covariance Σ*_X will also be very useful for illuminating cases in which the problem is ostensibly fully determined (n ⩽ d), but for which the arrangement of sensors and true vortices nonetheless creates significant uncertainty in the estimate. In some of these cases, the smallest singular values may still be zero or quite small, indicating that the effective rank of H is smaller than n.
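The rank-deficient case can be demonstrated with the same machinery: with d = 2 sensors and n = 3 states, the numerical Jacobian has a null direction, and perturbing the state along it leaves the measurements unchanged to first order. The setup below is illustrative (the single-vortex operator again, with a hypothetical two-sensor placement).

```python
import numpy as np

sensors = np.array([[-1.0, 0.0], [1.0, 0.0]])  # only d = 2 sensors for n = 3 states

def h(x):
    d2 = np.sum((sensors - x[:2]) ** 2, axis=1)
    return -x[2] ** 2 / (8.0 * np.pi ** 2 * d2)

def jacobian(h, x, eps=1e-6):
    cols = []
    for j in range(x.size):
        dx = np.zeros(x.size)
        dx[j] = eps
        cols.append((h(x + dx) - h(x - dx)) / (2 * eps))
    return np.column_stack(cols)

x_true = np.array([0.5, 1.0, 1.0])
H = jacobian(h, x_true)     # 2 x 3 matrix: rank at most 2
_, S, Vt = np.linalg.svd(H)
null_dir = Vt[-1]           # local tangent to the manifold of equivalent states
```

Walking along null_dir traces out (locally) the helical manifold of equally probable states seen in the two-sensor example of Section 3.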

Sampling and modeling of the posterior
The true covariance matrix (2.14) and its eigendecomposition will be an important tool in the study that follows, but we generally will only use it when we presume to know the true state x* and seek illumination on the estimation in the vicinity of the solution. To characterize the problem more fully and explore the potential multiplicity of solutions, we will generate samples of the posterior and then fit the samples with an approximate distribution π̂(x) ≈ π(x|y*) over x. The overall algorithm is shown in the center panel in Figure 3 in the context of an example of estimating one vortex with three sensors. For the sampling task, we use the Metropolis-Hastings (MH) method (see, e.g., Chib & Greenberg 1995), a simple but powerful form of Markov chain Monte Carlo (MCMC). This method relies only on differences of the log probabilities between a proposed chain entry and the current chain entry to determine the probability with which the proposal is accepted and added to the chain of samples. To ensure that the MCMC sampling does not get stuck in one of possibly multiple high-probability regions, we use the method of parallel tempering (Sambridge 2014).
In practice, we generally have found good results in parallel tempering by using five parallel Markov chains exploring the target distribution raised to respective powers 3.5^p, where p takes integer values between −4 and 0. We initially carry out 10^4 steps of the algorithm with the MCMC proposal variance set to a diagonal matrix of 4 × 10^{-4} for every state component. Then, we perform 10^6 steps with proposal variances set more tightly, uniformly equal to 2.5 × 10^{-5}. The sample data set is then obtained from the p = 0 chain after discarding the first half of the chain (for burn-in) and retaining only every 100th chain entry of the remainder to minimize autocorrelations in the samples. On a MacBook Pro with an M1 processor, the overall process takes around 20 seconds in the most challenging cases (i.e., the highest-dimensional state spaces). There are other methods that move more efficiently in the direction of local maxima (e.g., hybrid MCMC, which uses the gradient of the log-posterior to guide the ascent). However, the approach we have taken here is quite versatile for more general cases, particularly those in which the gradient is impractical to compute repetitively in a long sequential process.
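The acceptance rule underlying this procedure can be sketched compactly. The random-walk Metropolis-Hastings sampler below is a single untempered chain applied to a toy one-dimensional target (a standard normal), not the five-chain parallel-tempering setup described above; it illustrates how only log-probability differences enter the accept/reject decision, so normalizing constants are never needed.

```python
import numpy as np

def metropolis_hastings(log_prob, x0, n_steps, prop_std, rng):
    """Random-walk Metropolis-Hastings: accept a symmetric Gaussian proposal
    with probability min(1, exp(log_prob(x') - log_prob(x)))."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    lp = log_prob(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        x_prop = x + prop_std * rng.normal(size=x.size)
        lp_prop = log_prob(x_prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # MH acceptance rule
            x, lp = x_prop, lp_prop
        chain[i] = x
    return chain

# Illustrative target: standard normal (log-density up to an additive constant)
rng = np.random.default_rng(1)
chain = metropolis_hastings(lambda x: -0.5 * np.dot(x, x), np.zeros(1), 20000, 1.0, rng)
samples = chain[10000::10]    # discard burn-in, thin to reduce autocorrelation
```

In a tempered version, several such chains would target log_prob multiplied by 3.5^p and occasionally exchange states, which is what lets the p = 0 chain escape isolated modes.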
To approximate the posterior distribution over x from the samples, we employ a Gaussian mixture model (GMM), which assumes the form of a weighted sum of K normal distributions (the mixture components),

π̂(x) = Σ_{k=1}^{K} α_k N(x | x^(k), Σ^(k)), (2.17)

where x^(k) and Σ^(k) are the mean and covariance of component k of the mixture. Each weight α_k lies between 0 and 1 and represents the probability of a sample point "belonging to" component k, so it quantifies the importance of that component in the overall mixture. For a given set of samples and a pre-selected number of components K, the parameters of the GMM (α_k, x^(k), and Σ^(k)) are found via the Expectation-Maximization algorithm (Bishop 2006). It is generally advantageous to choose K to be large (∼ 5-9) because extraneous components are assigned little weight, resulting in a smaller number of effective components. It should also be noted that, for the special case of a single mode of small variance, a GMM with K = 1 and a sufficient number of samples approaches the Gaussian approximation about the true state x* described in Section 2.3, with covariance Σ*_X.
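The Expectation-Maximization iteration itself is short enough to sketch. The one-dimensional implementation below (with quantile-based initialization, our own choice) fits a two-component mixture to a bimodal sample set of the kind produced by a posterior with two symmetric solutions; a production code would use a library implementation and the full n-dimensional state.

```python
import numpy as np

def fit_gmm_1d(x, K, n_iter=200):
    """Expectation-Maximization for a 1-D Gaussian mixture model.
    Returns weights alpha_k, means mu_k, and variances var_k."""
    # Initialize means at evenly spaced sample quantiles for robustness
    mu = np.quantile(x, np.linspace(0.1, 0.9, K))
    var = np.full(K, x.var())
    alpha = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities (posterior component memberships), shape (N, K)
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2 * np.pi * var) + np.log(alpha))
        logp -= logp.max(axis=1, keepdims=True)   # stabilize before exponentiating
        g = np.exp(logp)
        g /= g.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted samples
        Nk = g.sum(axis=0)
        alpha = Nk / x.size
        mu = (g * x[:, None]).sum(axis=0) / Nk
        var = (g * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    return alpha, mu, var

# Bimodal sample set, e.g. from a posterior with two symmetric solutions
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2.0, 0.3, 1000), rng.normal(2.0, 0.3, 1000)])
alpha, mu, var = fit_gmm_1d(x, K=2)
```

With K chosen larger than the number of well-separated clusters, extraneous components receive weights α_k near zero, which is why an overgenerous K is harmless in practice.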
As we show in Appendix B, the GMM has a very attractive feature when used in the context of singular vortex elements, because we can exactly evaluate the expectation of the vorticity field under the GMM probability,

E[ω(r)] = Σ_{k=1}^{K} α_k Σ_{J=1}^{N} [ Γ_J^(k) N(r | r_J^(k), Σ^(k)_{r_J r_J}) − Σ^(k)_{Γ_J r_J} · ∇N(r | r_J^(k), Σ^(k)_{r_J r_J}) ],

where r_J^(k) and Γ_J^(k) comprise the position and strength of the J-vortex of mean state x^(k) in equation (2.1), and Σ^(k)_{r_J r_J} and Σ^(k)_{Γ_J r_J} are elements in the JJ-block of covariance Σ^(k), defined in (2.4). Thus, under a Gaussian mixture model of the state, the expected vorticity field is itself composed of a sum of Gaussian-distributed vortices in Euclidean space (due to the first term in the square brackets), plus a sum of Gaussian-regularized dipole fields arising from covariance between the strengths and positions of the inferred vortex elements (the second term in square brackets).
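For a single-vortex (N = 1) estimator this expectation is simple to evaluate on a grid. The sketch below implements the structure described above, a Gaussian blob Γ^(k) N(r|r^(k), Σ_rr) per component plus a dipole correction from the strength-position covariance; the component parameters are hypothetical, and the dipole term follows the standard Gaussian conditional-mean identity rather than the (unseen) Appendix B formula.

```python
import numpy as np

def expected_vorticity(grid_xy, alpha, means, covs):
    """Expected vorticity of a single-vortex GMM state estimate, evaluated at
    the rows of grid_xy. Each component contributes
    [Gamma_k + S_{Gamma r} . S_rr^{-1} (r - r_k)] * N(r | r_k, S_rr),
    i.e. a Gaussian blob plus a dipole from the strength-position covariance."""
    w = np.zeros(grid_xy.shape[0])
    for a_k, m, C in zip(alpha, means, covs):
        r_k, gamma_k = m[:2], m[2]
        S_rr = C[:2, :2]          # position-position covariance block
        S_gr = C[2, :2]           # strength-position covariance, cov(Gamma, r)
        Ci = np.linalg.inv(S_rr)
        d = grid_xy - r_k
        N = (np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, Ci, d))
             / (2 * np.pi * np.sqrt(np.linalg.det(S_rr))))
        # grad N = -N * Ci @ d, so -S_gr . grad N = (d @ (Ci @ S_gr)) * N
        w += a_k * (gamma_k + d @ (Ci @ S_gr)) * N
    return w

# One hypothetical component: isotropic position covariance, mild Gamma-y coupling
nx = np.linspace(-2.0, 3.0, 251)           # grid spacing 0.02
X, Y = np.meshgrid(nx, nx)
grid = np.column_stack([X.ravel(), Y.ravel()])
C = np.diag([0.04, 0.04, 0.01])
C[1, 2] = C[2, 1] = 0.01                   # cov(Gamma, y)
w = expected_vorticity(grid, [1.0], [np.array([0.5, 1.0, 1.0])], [C])
```

The dipole term integrates to zero, so the total circulation of the expected field reduces to the weighted sum of the mean strengths, as it should.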

Inference examples
Though we have reduced the problem by non-dimensionalizing it and restricting the possible states, there remain several parameters to explore in the vortex estimation problem: the number and relative configuration of the true vortices; the number of vortices used by the estimator; the number of sensors d and their configuration; and the measurement noise level σ_E. We will also explore the inference of vortex radius ϵ when this parameter is included as part of the state. In the next section, we will explore the inference of a single vortex, using this example to investigate many of the parameters listed above. Then, in the following sections, we will examine the inference of multiple true vortices, to determine the unique aspects that emerge in this context.

Inference of a single vortex
In this section, we will explore cases in which both the true vortex system and our estimator consist of a single vortex. We will use this case to draw insight on many of the parameters of the inference problem. For most of the cases, a single line of sensors along the x axis will be used. The true vortex will remain on the y = 1 line and have unit strength. Many of the examples that follow will focus on the true configuration (x_1, y_1, Γ_1) = (0.5, 1, 1). (The subscript 1 is unnecessary with only one vortex, but allows us to align the notation with that of Section 2.1.) Note, however, that we will not presume knowledge of any of these states: the bounding region of our prior will be x ∈ (−2, 2), y ∈ (0.01, 4), Γ ∈ (0, 2).
All of the basic tools used in the present investigation are depicted in Figure 3, in which the true configuration is estimated with three sensors arranged uniformly along the x axis between [−1, 1] with σ_E = 5 × 10^{-4}. Figure 3(a) shows the ellipsoid for covariance Σ*_X, computed at the true vortex state. This figure particularly indicates that much of the uncertainty lies along a direction that mixes y_1 and Γ_1; indeed, the eigenvector corresponding to the direction of greatest uncertainty is (0.08, 0.79, 0.61). This uncertainty is intuitive: as a vortex moves further away from the sensors, it generates very similar sensor measurements if its strength simultaneously increases in the proportion indicated by this direction. In Figure 3(b), the samples obtained from the MCMC method are shown. Here, and in later figures in the paper, we show only the vortex positions of these samples in Euclidean space and color their symbols to denote the sign of their strength (blue for positive, red for negative). The set of samples clearly encloses the true state, shown as a black dot; the expected value from the samples is (0.51, 1.07, 1.05), which agrees well.
The samples also demonstrate the uncertainty of the estimated state. The filled ellipse in this figure corresponds to the exact covariance of Figure 3(a) and is shown for reference. As expected, the samples are spread predominantly along the direction of the maximum uncertainty. This figure also depicts an elliptical region for each Gaussian component of the mixture computed from the samples. These ellipses correspond only to the marginal covariances of the vortex positions and do not depict the uncertainty of the vortex strength. The weight of the component in the mixture is denoted by the thickness of the line. One can see from this plot that the GMM covers the samples with components, concentrating most of the weight near the center of the cluster with two dominant components. The composite of these components is best seen in Figure 3(d), in which the expected vorticity field is shown. In the remainder of this paper, this expected vorticity field will be used extensively to illuminate the uncertainty of the vortex estimation. Finally, Figure 3(c) compares the true sensor pressures with those corresponding to the expected state from the MCMC samples. These agree to within the measurement noise.

Effect of the number of sensors
In the last section, we found that three sensors were sufficient to estimate a single vortex's position and strength. In this section we investigate how this estimate of a single vortex depends on the number of sensors. In most cases, these sensors will again lie uniformly along the x axis in the range [−1, 1]. Intuitively, we expect that if we have fewer sensors than there are states to estimate, we will have insufficient information to uniquely identify the vortex. Figure 4 shows that this is indeed the case. In this example, only two sensors are used to estimate the same vortex as in the previous example. The MCMC samples are distributed along a circular arc, but are truncated outside of the aforementioned bounding region. In fact, this arc is a projection of a helical curve of equally-probable states in the three-dimensional state space. The samples broaden from the arc the further they are from the sensors due to the increase in uncertainty with distance. The true covariance, Σ*_X, cannot reveal the full shape of this helical manifold, which is inherently dependent on the non-linear relationship between the sensors and the vortex. However, the rank of the Fisher information matrix decreases to 2, so that the uncertainty along one of the principal axes must be infinite. This principal axis is tangent to the manifold of possible states, as shown by a line in the plot.
What if there are more sensors than states? Figure 5(a,b,c) depicts expected vorticity fields for several cases in which there are increasing numbers of sensors arranged along a line, and Figure 5(d) shows the expected vorticity when 5 sensors are instead arranged in a circle of radius 2.1 about the true vortex. (The choice of radius ensures that the smallest distance between the true vortex and a sensor is approximately 1 in all cases.) It is apparent that the uncertainty shrinks when the number of sensors increases from 3 to 4, but does so less notably when the number increases from 4 to 5. In Figure 5(e), the maximum uncertainty is seen to drop by nearly half when one sensor is added to the basic set of 3 along a line, but decreases much more gradually when more than 4 sensors are used. The drop in uncertainty is more dramatic between 3 and 5 sensors arranged in a circle, but becomes more gradual beyond 5 sensors.
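The trend with sensor count can be sketched with the linearized covariance Σ*_X = σ_E²(HᵀH)⁻¹, where H is the Jacobian of the sensor map. The pressure model below is an assumed far-field kernel, not the paper's observation operator, and the Jacobian is built by finite differences:

```python
import numpy as np

def pressures(state, sensors):
    # Assumed single-vortex far-field kernel p = -Gamma^2/(8 pi^2 d^2),
    # a hypothetical stand-in for the paper's observation operator h(x).
    x, y, gamma = state
    d2 = (sensors[:, 0] - x) ** 2 + (sensors[:, 1] - y) ** 2
    return -gamma**2 / (8 * np.pi**2 * d2)

def max_uncertainty(n_sensors, state, sigma_e=5e-4, h=1e-6):
    """Largest eigenvalue sqrt of the linearized covariance
    Sigma* = sigma_E^2 (H^T H)^{-1}, with H from central differences."""
    sensors = np.column_stack([np.linspace(-1, 1, n_sensors),
                               np.zeros(n_sensors)])
    H = np.empty((n_sensors, 3))
    for j in range(3):
        dx = np.zeros(3); dx[j] = h
        H[:, j] = (pressures(state + dx, sensors)
                   - pressures(state - dx, sensors)) / (2 * h)
    cov = sigma_e**2 * np.linalg.inv(H.T @ H)
    return np.sqrt(np.linalg.eigvalsh(cov)[-1])

state = np.array([0.5, 1.0, 1.0])
u = [max_uncertainty(n, state) for n in (3, 4, 5)]
```

In this toy model, as in Figure 5(e), the maximum uncertainty drops with each added sensor, with diminishing returns beyond four.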

Effect of the true vortex position
It is particularly important to explore how the uncertainty is affected by the position of the true vortex relative to the sensors. We address this question here by varying this position relative to a fixed set of sensors with a fixed level of noise, σ_E = 5 × 10⁻⁴. Figure 6(a) depicts contours (on a log scale) of the resulting maximum length of the covariance ellipsoid, λ_n^{1/2}, based on four sensors placed on the x axis between −1 and 1. The contours reveal that there is little uncertainty when the true vortex is in the vicinity of the sensors, but the uncertainty increases sharply with distance when the vortex lies outside the extent of the sensors. Indeed, one finds empirically that the rate of increase of λ_n^{1/2} scales approximately with the fifth or sixth power of distance from the sensors. This behavior does not change markedly if we vary the number of sensors, as illustrated in Figure 6(b). As the true vortex's x position varies (and y_1 is held constant at 1), there is a similarly sharp rate of increase outside of the region near the sensors for 3, 4, or 5 sensors. However, though there is a small range of positions near x_1 = 0 in which 3 sensors have less uncertainty than 4, there is generally less uncertainty at all vortex positions with increasing numbers of sensors. Furthermore, the uncertainty is less variable in this near region when 4 or 5 sensors are used.

Effect of sensor noise
From the derivation in Section 2.3, we already know that the true covariance should depend linearly on the noise variance (2.16). In this section, we explore the effect of sensor noise on the estimation of a single vortex using MCMC and the subsequent fitting with a Gaussian mixture model. We keep the number of sensors fixed at 3, arranged along a line between x = −1 and 1, and the true vortex in the original configuration, (x_1, y_1, Γ_1) = (0.5, 1, 1). Figure 7 depicts the expected vorticity field as the noise standard deviation σ_E increases. Unsurprisingly, the expected vorticity distribution exhibits increasing breadth as the noise level increases. However, it is notable that this breadth becomes increasingly directed away from the sensors as the noise increases. Furthermore, the center of the distribution lies somewhat further from the sensors than the true state, indicating a bias error. This trend toward increased bias error with increasing sensor noise is also apparent for other sensor numbers and arrangements.

Effect of true vortex radius
Throughout most of this paper, the radius of the true vortices is fixed at ϵ = 0.01, and the estimator vortices share this same radius. However, for practical application purposes, it is important to explore the extent to which the estimation is affected by a mismatch between these. If the true vortex is more widely distributed, can an estimator consisting of a small-radius vortex reliably determine its position and strength? Furthermore, can the radius itself be inferred? These two questions are closely related, as we will show. First, it is useful to illustrate the effect of the vortex radius on the pressure field associated with a vortex, as in Figure 8(a), which shows the vortex-pressure kernel P_ϵ for two different vortex radii, ϵ = 0.01 and ϵ = 0.2. As this plot shows, the effect of vortex radius is fairly negligible beyond a distance of 5 times the larger of these two vortex radii.
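The weak dependence on radius can be quantified with an assumed algebraic-blob pressure kernel, P_ϵ(d) = −Γ²/(8π²(d² + ϵ²)) (unit strength here; a hypothetical stand-in for the paper's P_ϵ). Beyond five times the larger radius, the ϵ = 0.01 and ϵ = 0.2 kernels differ by only a few percent:

```python
import numpy as np

def P_eps(d, eps):
    # Assumed algebraic-blob pressure kernel (unit strength, rho = 1):
    # P_eps(d) = -1 / (8 pi^2 (d^2 + eps^2))
    return -1.0 / (8 * np.pi**2 * (d**2 + eps**2))

d = np.linspace(1.0, 5.0, 100)   # beyond 5x the larger radius, eps = 0.2
rel_diff = np.abs(P_eps(d, 0.2) - P_eps(d, 0.01)) / np.abs(P_eps(d, 0.01))
assert rel_diff.max() < 0.05     # the two radii differ by under ~4 percent
```

The relative difference peaks at the inner edge of this range and decays quickly with distance, consistent with Figure 8(a).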
As a result of this diminishing effect of vortex radius, one expects that it is very challenging to estimate this radius from pressure sensor measurements outside of the vortex core. Indeed, that is the case, as Figure 8(b) shows. This figure depicts the maximum length of the covariance ellipsoid as a function of true vortex radius, when four sensors along the x axis are used to estimate this radius (in addition to vortex position and strength), for a true vortex at (0.5, 1) with strength 1. The uncertainty is far too large for the radius to be observable until this radius approaches ϵ = 1. Even when the sensors are within the core of the vortex, they are confused between the blob radius and other vortex states. In fact, one can show that the ϵ^{−3} dependence of the maximum uncertainty arises because of the nearly identical sensitivity that the pressure sensors have to changes of blob radius and changes in other states, e.g., vertical position in this case; the leading-order term of the difference is proportional to ϵ³. Of course, if we presume precise knowledge of the other states, then vortex radius becomes more observable.
The insensitivity of pressure to vortex radius has a very important benefit, because it ensures that, even when the true vortex is relatively broad in size, a vortex estimator with small radius can still reliably infer the vortex's position and strength. This is illustrated in Figure 8(c), which depicts the contours of a true vortex of radius ϵ = 0.2 (with the same position and strength as in panel (b)), and the expected vorticity contours from an estimate carried out with a vortex element of radius ϵ = 0.01. It is apparent from this figure that the center of the vortex is estimated very well. In fact, the mean of the MCMC samples is (x_1, y_1, Γ_1) = (0.51, 1.08, 1.04), quite close to the true values.

Inference of multiple vortices
The previous sections have illustrated the crucial aspects of estimating a single vortex. In this section, we focus on inferring multiple vortices. As in the case with a single vortex, the eigenvalues and eigenvectors of the true covariance ellipsoid Σ*_X will serve an important role in revealing many of the challenges in this context. However, some of the multiple-vortex cases will have multiple possible solutions, and we must rely on the MCMC samples to reveal these. In the examples carried out in this section, we use the same uniform prior as in the previous section, except that now the prior vortex strengths can be negative (lying in (−2, 2)).

Two true vortices with a two-vortex estimator
In this section, we will study the inference of a pair of vortices using a two-vortex estimator. The basic configuration of true vortices consists of (x_1, y_1, Γ_1) = (−0.75, 0.75, 1.2) and (x_2, y_2, Γ_2) = (0.5, 0.5, 0.4), both of radius ϵ = 0.01. As in the previous section, sensors have noise σ_E = 5 × 10⁻⁴. In Figure 9 we demonstrate the estimation of this pair of vortices with eight sensors. The estimator is very effective at inferring the locations and strengths of the individual vortices: the mean state of the samples is (x_1, y_1, Γ_1) = (−0.75, 0.77, 1.23) and (x_2, y_2, Γ_2) = (0.50, 0.50, 0.39), and the pressure field predicted by the expected state matches well with the true pressure field.
In this basic configuration, the vortices are widely separated so that the estimator's challenge is similar to that of two isolated single vortices, each estimated with four sensors. However, unique challenges arise as the true vortices become closer, as Figure 10 shows. Here, we keep the strength and vertical position of each true vortex the same as in the basic case, but vary both vortices' horizontal positions (the left one is moved rightward and the right one leftward) in such a manner that their average is invariant, (x_1 + x_2)/2 = −0.125. Three different numbers of sensors are used, d = 6, 7, 8, all uniformly distributed on [−1, 1]. In Figure 10(a), it is clear that using six sensors, though ostensibly sufficient to estimate the six states, is actually insufficient in a few isolated cases in which the maximum uncertainty becomes infinite. These cases are examples of rank deficiency in the vortex estimator. Importantly, this rank deficiency disappears when more than six sensors are used. An example of the estimator's behavior in one of these rank-deficient configurations is depicted in Figure 10(b,c). When six sensors are used (panel (b)), the MCMC samples are distributed more widely, along a manifold in the vicinity of the true state, with the eigenvector of the most-uncertain eigenvalue tangent to this manifold. However, when seven sensors are used (panel (c)), the MCMC samples are more tightly distributed around the true state. Thus, we can avoid rank deficiency by using more sensors than states. As a demonstration, we show in the left panels of Figure 11 the expected vorticity field that results from estimating four different true vortex configurations with eight sensors. In each case, the locations of the vortices are accurately estimated with relatively little uncertainty, even as the vortices become closer to each other than they are to the array of sensors. However, with closer vortices there is considerable uncertainty in estimating the strengths of the individual vortices, as exhibited in the right panels of Figure 11, each corresponding to the vortex configuration on the left.
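Rank deficiency of the two-vortex estimator can be diagnosed from the singular values of the Jacobian of the sensor map. The sketch below keeps only the quadratic (dynamic-pressure) part of the pressure, p = −|u|²/2 with u from the planar Biot-Savart sum; the configuration and diagnostics are illustrative assumptions, not the paper's cases:

```python
import numpy as np

def pressure_field(states, sensors):
    """Dynamic-pressure part p = -|u|^2/2 of a set of point vortices
    (rho = 1), with u from the 2D Biot-Savart sum; the unsteady term
    is omitted in this sketch."""
    u = np.zeros((len(sensors), 2))
    for (xv, yv, gamma) in states.reshape(-1, 3):
        dx, dy = sensors[:, 0] - xv, sensors[:, 1] - yv
        r2 = dx**2 + dy**2
        u[:, 0] += -gamma * dy / (2 * np.pi * r2)
        u[:, 1] += gamma * dx / (2 * np.pi * r2)
    return -0.5 * np.sum(u**2, axis=1)

def jacobian(states, sensors, h=1e-6):
    # Central-difference Jacobian of the sensor map with respect to states
    H = np.empty((len(sensors), states.size))
    for j in range(states.size):
        dx = np.zeros(states.size); dx[j] = h
        H[:, j] = (pressure_field(states + dx, sensors)
                   - pressure_field(states - dx, sensors)) / (2 * h)
    return H

# Two closely spaced vortices; compare conditioning with 6 vs 7 sensors.
states = np.array([-0.2, 0.75, 1.2, -0.05, 0.5, 0.4])
conds = []
for n in (6, 7):
    sensors = np.column_stack([np.linspace(-1, 1, n), np.zeros(n)])
    sv = np.linalg.svd(jacobian(states, sensors), compute_uv=False)
    conds.append(sv[0] / sv[-1])
```

A vanishing smallest singular value signals the rank-deficient configurations described above; an extra sensor row generically restores full column rank.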
As the true vortices become even closer than in the examples in Figure 11, multiple solutions emerge. This is illustrated in Figure 12, depicting the extreme case of one vortex just above the other. The MCMC identifies three modes of the posterior, each representing a different candidate solution for the estimator. One mode consists of vortices of opposite sign to either side of the true set, shown in the top row of Figure 12. The second mode, in the middle row, comprises vortices very near the true set, though the strengths of the vortices are quite uncertain, as evidenced by the long ridge of samples in the strength plot in Figure 12(d). Finally, the bottom row shows a mode that has positive vortices further apart than in the other two modes.
It is natural to ask whether we can prefer one of these candidate solutions over the others. One way to do so is to assess them based on their corresponding weights α_k in the mixture model, since each of these represents the probability of a given sample point belonging to that component. However, interpreting the mixture model's weights in this fashion requires that the MCMC has reached equilibrium, which can be challenging to determine with multimodal sampling. Instead, we follow the intuition that, if a mode is to be considered a more likely solution of the inference problem than another mode, then the samples belonging to that mode should be closer to the true observation. For this assessment we can compare the maxima of the log-posterior (2.12) among the samples belonging to each mode. A mode with a significantly larger maximum (i.e., significantly closer to zero, since (2.12) is non-positive) is a superior candidate solution. For the three modes shown in Figure 12, the maximum log-posteriors are −0.20, −0.11, and −14.83, respectively, suggesting that the mode in the middle row is mildly superior to that of the top row and clearly superior to that of the bottom row. Indeed, the fact that this clearly inferior mode appears among the samples at all is likely due to incomplete MCMC sampling. Thus, in this example with two very closely spaced positive-strength vortices, the true solution is discernible from the two spurious solutions.
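The mode-ranking criterion amounts to comparing, per mode, the maximum of a non-positive Gaussian log-posterior. A minimal sketch, with hypothetical per-mode predictions standing in for the MCMC samples:

```python
import numpy as np

def log_posterior(y_pred, y_obs, sigma_e):
    """Gaussian log-likelihood with the normalization dropped, so the value
    is non-positive, analogous to the text's equation (2.12)."""
    r = (y_pred - y_obs) / sigma_e
    return -0.5 * np.sum(r**2)

# Hypothetical per-mode best predictions: mode B sits closer to the observation.
y_obs = np.array([1.0, 2.0, 3.0])
modes = {"A": np.array([1.1, 2.2, 2.9]),
         "B": np.array([1.0, 2.01, 3.0])}
best = max(modes, key=lambda k: log_posterior(modes[k], y_obs, sigma_e=0.1))
assert best == "B"   # the mode whose samples lie closest to the observation
```

In practice the maximum is taken over all samples soft-classified into each mode, rather than a single representative prediction as here.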
In Figure 13 we carry out the same procedure of bringing two vortices closer together as in Figure 11, but now we do so for one vortex of positive strength (1.2) and another of negative strength (−1.0). We obtain similar results as before, successfully estimating the vortex locations and strengths. The most challenging case among these is the first, in which the two vortices are furthest apart and near the extreme range of the sensors. Interestingly, no spurious solutions arise as the vortices become very close together, as they did in the previous example. In fact, when the two opposite-sign vortices are vertically aligned, as in Figure 14, the estimator has no difficulty in identifying the individual vortices and their strengths. The figure ostensibly depicts two modes identified by the estimator, but in fact these modes are identical aside from the signs of their strengths. They remain distinct to the estimator only because they have eluded our simple mitigations for the relabeling and strength symmetries (of ordering the vortices by their x position and assuming that the leftmost has positive strength). Clearly we could have chosen a different mitigation strategy to avoid this, but we include the separate modes here for illustration purposes.

Three true vortices
In this section we demonstrate the performance of the estimator on cases with three true vortices. In the first example, we will use three vortices in the estimator. Let the true state comprise x_1 = (x_1, y_1, Γ_1) = (−0.5, 0.5, 1), x_2 = (0.25, 0.5, −1.2) and x_3 = (0.75, 0.75, 1.4). Here, we apply the techniques shown to enhance performance in the previous sections: we use eleven sensors, two more than the marginal number, to avoid rank deficiency; and we choose a top candidate among the identified modes based on the maximum value of the log-posterior. The resulting estimated solution has only a single candidate mode, whose mean is x_1 = (−0.47, 0.53, 1.10), x_2 = (0.25, 0.51, −1.25), x_3 = (0.68, 0.75, 1.36). This is shown in Figure 15. Both the expected vorticity field and the pressure field are captured very well by the estimator.
It is important to note that, in the previous example, the estimator identifies another mode (not shown) in which the signs of the rightmost two vortices are switched. However, this candidate solution was discarded on the basis of the maximum log-posterior criterion: the selected mode's value is −0.72, while the discarded mode's is smaller, −0.85. This slight difference is entirely due to the non-linear coupling between the vortices via the interaction kernel. By restricting the leftmost vortex to be positive, the signs of the middle and rightmost vortices are established through this coupling. Because the algebraic decay of both types of kernels in equation (1.2) is the same, the ability to discriminate the signs of the vortices requires a degree of balance between the vortex positions and the sensors. As a counterexample, when two of the three vortices form a compact pair that is well-separated from the third, the estimator tends to be less able to prefer one choice of sign for the vortex pair over the other. An example is shown in Figure 16, in which the rightmost pair of vortices has opposite sign in each mode. The corresponding pressure fields shown on the right are nearly identical because the coupling of the pair with the leftmost vortex
is much weaker than that within the pair itself. Thus, these modes are indistinguishable by our maximum log-posterior criterion: in the absence of additional prior knowledge, we cannot discern one from the other. The three-vortex estimator must explore a 9-dimensional space for the solution, a challenging task even with the various MCMC and symmetry mitigation techniques we have used in this paper. Thus, it is useful to restrict the estimator to search a lower-dimensional space, and the easiest way to achieve this is by using fewer vortices in the estimator. In Figure 17, we illustrate the behavior of a two-vortex estimator on the three-vortex configuration in Figure 15, in variations in which the signs of the right two true vortices are changed. The range of strengths in the prior is expanded in this problem to (−4, 4). In the first case, the true vortex strengths are all positive. The two-vortex estimator identifies a single mode, with mean vortex states x_1 = (−0.54, 0.37, 0.64) and x_2 = (0.48, 0.74, 3.25), and a maximum log-posterior of −30.7. In other words, the estimator places one vortex near (and slightly weaker than) the leftmost true vortex, and another vortex near the center of the rightmost pair, with a strength roughly equal to the sum of the pair. In the second case, the two rightmost vortices are both negative, and the estimator produces an analogous result, aggregating the two negative vortices into a single vortex. The estimated state is x_1 = (−0.34, 0.69, 1.27) and x_2 = (0.40, 0.62, −3.07), with maximum log-posterior −20.4, so that the rightmost pair is once again approximated by a single vortex with roughly the sum of the pair's strength.
The third case is the most interesting. Here, the true vortex configuration consists of positive, negative, and positive vortices from left to right, so there is no pairing of like-sign vortices as in the previous two cases. The estimator identifies a solution consisting of x_1 = (0.40, 1.03, 3.50) and x_2 = (0.90, 0.97, −1.2). Neither of these vortices bears an obvious connection with one of the true vortices, so no aggregation is possible. The estimator has done the best it can in the lower-dimensional space available to it, aliasing the true flow onto a dissimilar flow state. The maximum log-posterior is −88.6, significantly lower than in the other two cases.

Conclusions
In this paper, we have explored the inference of regularized point vortices from a finite number of pressure sensors with noisy measurements. By expressing the problem in a Bayesian (probabilistic) manner, we have been able to quantify the uncertainty of the estimated vortex state and to explore the multiplicity of possible solutions, which are expressed as multiple modes in the posterior distribution. We sampled the posterior with Markov-chain Monte Carlo and applied Gaussian mixture modeling to develop a tractable approximation for the posterior from the samples. Mixture modeling allowed us to soft-classify the samples into each mode. We reduced the multiplicity by anticipating many of the symmetries that arise in this inference problem (strength, relative position, and vortex re-labeling) and then mitigated their influence through simple techniques, e.g., restricting the prior region and strictly ordering the vortices in the state vector by x coordinate. The remaining multiple solutions were identified by thoroughly exploring the prior region with the help of the method of parallel tempering in MCMC. Where possible, the best candidate solution was discerned by monitoring the maximum log-posterior in each mode. We have also made use of the largest eigenvalue and associated eigenvector of the true covariance matrix to illuminate many of the challenges of the inference.
On a variety of configurations of one, two, or three true vortices, we have made several observations about this vortex inference problem. One must use at least as many sensors as there are estimator states in order to infer a unique vortex system rather than a manifold of equally-possible states. Using one additional sensor guards against cases of rank deficiency, which arise occasionally when multiple vortices are used in the estimator. However, further additional sensors do not significantly improve the uncertainty of the estimate. Uncertainty scales linearly with sensor noise. It also rises very rapidly, with the fifth or sixth power of distance, when the true vortex lies outside of the region of the sensors. The size of the vortex is exceptionally challenging to estimate because its effect on pressure is almost indistinguishable from that of other vortex states. However, this fact is also advantageous, for it allows us to use a small-radius (nearly-singular) vortex to accurately estimate the position and strength of a larger one. For systems of multiple vortices, the estimator relies on the non-linear coupling between them to ascertain the sign of the strength of each. Even when multiple modes emerge, one can often discern the best candidate among the modes based on the criterion of maximum probability (i.e., the shortest distance to the true measurements). This approach fails in some cases when the vortices are imbalanced, such as when a pair of vortices is well separated from a third. When the estimator uses fewer vortices than in the true configuration, it identifies the most likely solution in the reduced state space. Often, this reduced-order estimate appears to be a natural aggregation of the true vortex state, but in some cases the estimator aliases the sensors onto a dissimilar vortex configuration when no aggregated one is possible.
It is important to reiterate that the static inference we have studied in this paper is useful both in its own right, as a one-time estimate of the vorticity field, and as part of a sequential estimation of a time-varying flow, in which the inference comprises the analysis part of every step, when data are assimilated into the prediction. Some of the challenges and uncertainty of the static inference identified in this paper are overcome with advancing time as the sensors observe the evolving configuration of the flow and the forecast model predicts the state's evolution. Indeed, we speculate that the rank deficiency is partially mitigated in this manner. However, as new vortices are generated or enter the region of interest, the prior distribution obtained from the previous step will not be descriptive of these new features. Thus, our assumption of a non-informative prior remains relevant even after the initial step of a flow, and the conclusions we have drawn in this work can guide an estimator that remains receptive to new vortex structures. Overall, most of the conclusions of this paper, including rank deficiency, the decay of signal strength with distance, and the effects of vortex couplings, bear directly on a sequential filter's overall performance. It is also important to stress that, when the vortex state can be unambiguously inferred from a set of sensors, the sensor data contain sufficient information to describe the flow. This has implications for reinforcement learning-based flow control approaches (Verma et al. 2018), which treat the fluid flow as a partially observable Markov decision process. Pressure sensor data potentially make the process more Markovian, and therefore more amenable to learning a control policy (Renn & Gharib 2022). In short, there is less risk that control decisions made from measured pressure data are working on a mistaken belief about the flow state.
Finally, though we have addressed many substantive questions in this work, there are still several to address. In a realistic high Reynolds number flow, i.e., one comprising a few dominant coherent structures amidst shear layers and small-scale vortices, can the estimator infer the dominant vortices? Our work here has exhibited some key findings that suggest that it can; in particular, weaker vortices with comparatively little effect on the velocity and small measured pressures are neglected by the estimator in favor of stronger vortices. But this question deserves more thorough treatment. Also, how does the presence of a body affect the vortex estimation? Furthermore, when such a body is in motion, or subject to a free stream, can the vortices and the body motion be individually inferred? In the presence of a body, stationary or in motion, it is straightforward to expand the pressure-vortex relation presented here to develop an inviscid observation model that incorporates geometry, other flow contributors, and their couplings (e.g., between a vortex and a moving body). The estimation framework presented here can readily accommodate such an enriched model. Indeed, in our prior work with an ensemble Kalman filter (Darakananda & Eldredge 2019; Le Provost & Eldredge 2021), we have found that the signal of a vortex can be enhanced in pressure sensors on a nearby wall due to the effect of the vortex's image, and that this inviscid observation model remains effective in a viscous setting, such as flow past a flat plate. However, there are challenges with estimating vortices near regions of large wall curvature, such as the edges of an airfoil, where large pressure gradients, viscous effects, and the subsequent generation of new vortices complicate the flow process. In our previous work, we have approximately accounted for these processes by augmenting the state with a leading-edge suction parameter (Ramesh et al. 2014). However, it would also be worthwhile to investigate approaches in which the physics-based observation model is replaced or augmented with a data-driven approach, such as a trained neural network (Zhong et al. 2023).
to vorticity†. The coupling between U_∞ and u_ω that arises in the dynamic pressure is canceled by the modulation by U_∞ inside the integral. Thus, only the modulation of vorticity by the velocity induced by other vortex elements matters to pressure.
Equation (A 10) shows that vorticity has a quadratic effect on pressure. To reveal this effect more clearly, we replace u_ω in (A 10) by the Biot-Savart integral (A 5), obtaining (A 11). This form reveals an essentially triadic relationship between vorticity and pressure, illustrated in Figure 18: the pressure at r comprises a double sum of elementary interactions between vorticity at r′ and r″. Interestingly, a consequence of this relationship is that the pressure is invariant to a change of sign of the entire vorticity field.
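The sign-flip invariance can be checked directly: under Γ → −Γ for every vortex, the Biot-Savart velocity reverses but |u|² does not. The sketch below keeps only the quadratic (dynamic-pressure) part of the relation, an assumption made for illustration:

```python
import numpy as np

def dynamic_pressure(vortices, points):
    """p = -|u|^2/2 (rho = 1) with u from the planar Biot-Savart sum;
    a sketch retaining only the quadratic part of the pressure relation."""
    u = np.zeros((len(points), 2))
    for (xv, yv, gamma) in vortices:
        dx, dy = points[:, 0] - xv, points[:, 1] - yv
        r2 = dx**2 + dy**2
        u[:, 0] += -gamma * dy / (2 * np.pi * r2)
        u[:, 1] += gamma * dx / (2 * np.pi * r2)
    return -0.5 * np.sum(u**2, axis=1)

pts = np.array([[0.0, -1.0], [2.0, 0.5], [-1.5, 1.0]])
vort = [(0.5, 1.0, 1.2), (-0.5, 0.5, -0.4)]
flipped = [(x, y, -g) for (x, y, g) in vort]
# Pressure is quadratic in the vorticity, so flipping every sign leaves it unchanged.
assert np.allclose(dynamic_pressure(vort, pts), dynamic_pressure(flipped, pts))
```

Note that flipping the sign of only one vortex does change the pressure, through the interaction term; it is only the global sign flip that is invisible.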

A.2. Pressure from point vortices in the plane
To illustrate this triadic interaction in a simple setting, let us consider a two-dimensional vorticity field consisting of two point vortices. The integrals can be evaluated exactly by virtue of the properties of the Dirac delta function, and in this two-dimensional setting, ∇G(r) = −r/(2π|r|²). The velocity induced by vortex J is u_J(r) = ∇G(r − r_J) × Γ_J e_z, and the resulting pressure field can be written as in (A 13), where we have defined a direct vortex kernel for vortex J and a vortex interaction kernel, which we have split into additive parts arising from the dynamic pressure term and the Lamb vector term, respectively. In the expression for Π^(2), we have used the fact that the Green's function's gradient is skew-symmetric. There are a few notable features of expression (A 13). First of all, each vortex makes an independent contribution to pressure via its direct vortex kernel. This direct contribution to the pressure field is always negative, regardless of the sign of the vortex, and is radially symmetric about the center of the vortex. These direct contributions are modified by the vortex interaction kernel, in a term that introduces the signs of the individual vortices into the pressure field. This kernel, Π(r − r_J, r − r_K), is dependent only on the relative positions of the observation point r from each of the two vortex positions, r_J and r_K. It is symmetric with respect to the members of the pair, J and K, as is apparent from Figure 19, which shows the kernel and its two additive parts; the satellite panels on the right exhibit the same interaction kernel, shifted, rotated, and re-scaled for each pair.
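The skew-symmetry used in simplifying Π^(2) is easy to verify numerically for the planar Green's function gradient ∇G(r) = −r/(2π|r|²):

```python
import numpy as np

def gradG(r):
    # Gradient of the 2D Green's function: grad G(r) = -r / (2 pi |r|^2)
    return -r / (2 * np.pi * np.sum(r**2))

r = np.array([0.3, -0.7])
# Skew-symmetry: grad G(-r) = -grad G(r)
assert np.allclose(gradG(-r), -gradG(r))
```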

Appendix B. The expectation of vorticity under a Gaussian mixture model for vortex states
In this section, we derive the expectation of the vorticity field (2.18) when the vortex states are described by a Gaussian mixture model π(x) given by (2.17). (We omit the subscript y* from this probability here for brevity.) We start with the singular vorticity representation (1.1), and seek to evaluate the integral (B 1), where the notation ω(r, x) explicitly represents the dependence of the singular vortex field on the state vector x. It is sufficient to consider the integration of just a single Gaussian component, since the overall expectation will simply be a linear combination of the K components. Thus, we seek the integral (B 2). Let us recall that the state and covariance are organized as shown in (2.1) and (2.3), respectively. The integration over x is multiplicatively decomposable into integrals over the states of the individual vortex elements, dx = dx_1 dx_2 ··· dx_N. When the integral in (B 2) for vortex J in the sum is carried out, the integrals over all states of the other vortices I ≠ J represent marginalizations of the probability distribution over these vortices. Using properties of Gaussians (Bishop 2006), it is easy to show that this marginalized distribution is simply a Gaussian distribution over the states of vortex J, leaving the integral

∫ Γ_J δ(r − r_J) N(x_J | x̄_J, Σ_JJ) dx_J. (B 3)

Now, we can decompose the integral into the individual states of vortex J, dx_J = dr_J dΓ_J, with the state x_J and covariance Σ_JJ partitioned accordingly, as in their definitions (2.2) and (2.4). To assist the calculations that follow, it is useful to write the joint probability distribution for the strength and position, π(Γ_J, r_J) = N(x_J | x̄_J, Σ_JJ), in the conditional form

π(Γ_J, r_J) = π(Γ_J | r_J) π(r_J), where π(r_J) = N(r_J | r̄_J, Σ_{r_J r_J}). (B 4)

Again, using properties of Gaussians, the conditional probability π(Γ_J | r_J) can be shown (by completing the square) to be

π(Γ_J | r_J) = N(Γ_J | μ_{Γ_J|r_J}, Σ_{Γ_J|r_J}), (B 5)

where the mean and covariance are, respectively,

μ_{Γ_J|r_J} = Γ̄_J + Σ_{Γ_J r_J} Σ_{r_J r_J}^{−1} (r_J − r̄_J), Σ_{Γ_J|r_J} = Σ_{Γ_J Γ_J} − Σ_{Γ_J r_J} Σ_{r_J r_J}^{−1} Σ_{r_J Γ_J}. (B 6)

In this partitioned form, we can evaluate the integrals over r_J and Γ_J in (B 3). The integral over r_J is particularly easy because of the properties of the Dirac delta function; as a result, r_J is replaced everywhere by the observation point r. We are thus left with the integral

N(r | r̄_J, Σ_{r_J r_J}) ∫ Γ_J N(Γ_J | μ_{Γ_J|r_J}, Σ_{Γ_J|r_J}) dΓ_J, (B 7)

where r_J is replaced by r in the mean, μ_{Γ_J|r_J}. This final integral is simply the expectation of Γ_J over the conditional distribution, and its value for a Gaussian is the mean, μ_{Γ_J|r_J}. Thus, we arrive at

μ_{Γ_J|r} N(r | r̄_J, Σ_{r_J r_J}). (B 8)

The final result (2.18) follows easily by introducing (B 8) into the mixture model.
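The single-component result (B 8) can be checked numerically: integrating the expected vorticity over the plane should recover the component's mean strength Γ̄_J. The state ordering (x, y, Γ) and the covariance values below are illustrative assumptions:

```python
import numpy as np

def expected_vorticity(r, mean, cov):
    """Expected singular vorticity for one Gaussian component, as in (B 8):
    N(r | r_bar, S_rr) * mu_{Gamma|r}(r), with the state ordered as
    (x, y, Gamma) and the covariance partitioned accordingly."""
    r_bar, g_bar = mean[:2], mean[2]
    S_rr, S_gr = cov[:2, :2], cov[2, :2]
    diff = r - r_bar
    sol = np.linalg.solve(S_rr, diff.T).T          # S_rr^{-1} (r - r_bar)
    norm = np.exp(-0.5 * np.sum(diff * sol, axis=1)) / (
        2 * np.pi * np.sqrt(np.linalg.det(S_rr)))  # N(r | r_bar, S_rr)
    mu_g = g_bar + sol @ S_gr                      # conditional mean of Gamma
    return norm * mu_g

# Illustrative component: mean state (0.5, 1, 1) with a mild position-strength
# correlation (values are assumptions, not from the paper's cases).
mean = np.array([0.5, 1.0, 1.0])
cov = np.array([[0.02, 0.005, 0.01],
                [0.005, 0.03, -0.004],
                [0.01, -0.004, 0.05]])
xs = np.linspace(-1.5, 2.5, 201)
ys = np.linspace(-1.0, 3.0, 201)
X, Y = np.meshgrid(xs, ys)
pts = np.column_stack([X.ravel(), Y.ravel()])
w = expected_vorticity(pts, mean, cov)
total = w.sum() * (xs[1] - xs[0]) * (ys[1] - ys[0])
# Integrating the expected vorticity recovers the component's mean strength.
assert abs(total - 1.0) < 1e-3
```

The correlation term Σ_{Γ_J r_J} tilts the conditional strength across the Gaussian blob, but the integral is unaffected because the tilt is odd about the mean position.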

Appendix C. Linearization of the observation operator in Bayes theorem
Let us suppose that the prior is Gaussian rather than uniform, with mean x_0 and covariance Σ_0. (Ultimately, we will allow this covariance to become infinitely large.) Thus,

π_0(x) = N(x | x_0, Σ_0). (C 1)

The likelihood is also assumed Gaussian about the observation prediction h(x) with covariance Σ_E, as in (2.9), but now we will linearize the observation operator about the true state x*, as in (2.13). This can be written as h(x) ≈ Hx + b, where b = h(x*) − Hx*. With Gaussian prior and likelihood and a linear relationship between y and x, the joint distribution over these variables is also Gaussian. We can make use of the properties of multivariate Gaussians to obtain all of the well-known results that follow; the reader is referred to Bishop (2006) for more details. The mean of the joint variable z = (x, y) is

μ_{X,Y} = (x_0, Hx_0 + b), (C 3)

and its covariance is given in (C 4). To obtain the Gaussian form of the posterior distribution, we seek the mean and covariance of the conditional π(x|y), which is obtained by starting from the log of the joint distribution and rewriting it as a quadratic form in x only, with y set equal to the true observation, y*, and assumed known. Again, using well-known identities involving the inverse of a partitioned matrix, we arrive at the conditional mean and covariance. These results balance the prior mean and covariance with the information gained from the observation, y*. However, if the prior covariance grows to infinity, Σ_0^{−1} → 0, reflecting our lack of prior knowledge, then all dependence on the prior vanishes, and we end up with

μ_{X|y*} = x* + (H^T Σ_E^{−1} H)^{−1} H^T Σ_E^{−1} (y* − h(x*)) (C 8)

and Σ_{X|y*} = (H^T Σ_E^{−1} H)^{−1}, in which we have also substituted the specific form of b in our linearized model and simplified. The second term in the mean represents a bias error that arises when the true observation differs from the model evaluated at the true state, as from the error ε* in a single realization of the measurements.
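In the flat-prior limit, the mean (C 8) reduces, for isotropic noise, to the least-squares solution of the linearized problem. A sketch with a random matrix standing in for the linearized observation operator H (all values here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 3))               # hypothetical linearized operator
x_star = np.array([0.5, 1.0, 1.0])
sigma_e = 5e-4
eps = rng.normal(scale=sigma_e, size=5)   # one noisy measurement realization
y_star = H @ x_star + eps                 # h(x) = Hx in this linear sketch

# Flat-prior posterior mean, as in (C 8):
# mu = x* + (H^T Se^-1 H)^-1 H^T Se^-1 (y* - h(x*))
Se_inv = np.eye(5) / sigma_e**2
A = H.T @ Se_inv @ H
mu = x_star + np.linalg.solve(A, H.T @ Se_inv @ (y_star - H @ x_star))

# With isotropic noise this is exactly the least-squares estimate from y*.
lsq = np.linalg.lstsq(H, y_star, rcond=None)[0]
assert np.allclose(mu, lsq)
```

The bias term in (C 8) is visible here as the difference mu − x_star, which is driven entirely by the noise realization eps.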

Figure 1. (a) Four configurations of vortices that would generate identical measurements for the set of pressure sensors (brown squares). (b) Two distinct vortex states that differ only in the vortex labeling but generate identical flow fields.

Figure 3. One true vortex and one-vortex estimator using three sensors, with the center panel showing a flowchart of the overall algorithm and references to the figure panels. (a) True covariance ellipsoid. The ellipse on each coordinate plane represents the marginal covariance between the state components in that plane. The true vortex state is shown as a black dot. (b) True sensor data (filled circles) with noise levels (vertical lines), compared with sensor values from the estimate (open circles), obtained from the expected state of the sample set. (c) MCMC samples (blue) and the resulting vortex position covariance ellipses (colored unfilled ellipses) from the Gaussian mixture model. Thicker ellipse lines indicate higher weights in the mixture. The true vortex position is shown as a filled black circle, and the true covariance ellipse for vortex position (corresponding to the gray ellipse in (a)) is shown filled in gray. Sensor positions are shown as brown squares. (d) Contours of the expected vorticity field, based on the mixture model.

Figure 4. One true vortex and one-vortex estimator, using two sensors, showing MCMC samples (blue dots) and the resulting vortex position ellipses from the Gaussian mixture model. The estimator truncates the samples outside the bounding region. The true vortex position is shown as a filled black circle. The circular curve represents the manifold of possible states that produce the same sensor pressures; the line tangent to the circle is the direction of maximum uncertainty at the true state.

Figure 5. One true vortex and one-vortex estimator, using various numbers and configurations of sensors (shown as brown squares) arranged in a line between −1 and 1 (a,b,c) or in a circle of radius 2.1 (d). Each panel depicts contours of the expected vorticity field, with the true vortex position shown as a filled black circle. (e) Maximum length of the covariance ellipsoid with increasing number of sensors on the line segment x ∈ [−1, 1] (blue) or the circle of radius 2.1 (gold).

Figure 8. (a) Regularized vortex-pressure kernel Pϵ, with two different choices of blob radius. (b) The maximum length of the uncertainty ellipsoid versus blob radius ϵ, for one true vortex and four sensors, when the blob radius is included as part of the state vector, for a true vortex at (x1, y1, Γ1) = (0.5, 1, 1). (c) Vorticity contours for a true vortex with radius ϵ = 0.2 (in gray) and expected vorticity from a vortex estimator with radius ϵ = 0.01 (in blue), using 4 sensors (shown as brown squares).

Figure 11. Two true vortices with strengths Γ1 = 1.2 and Γ2 = 0.4, and a two-vortex estimator with 8 sensors (shown as brown squares in the left panels). Varying separation between vortices: (a,b) x2 − x1 = 1.75, (c,d) 1.25, (e,f) 0.75, (g,h) 0.25. Each left panel depicts contours of the expected vorticity field, with true vortex positions shown as filled black circles. Each right panel depicts MCMC samples of vortex strengths, with true vortex strengths shown as a black circle, and the longest axis of the true covariance ellipsoid depicted by the line.
Figure 12. Two true vortices (x1, y1, Γ1) = (−0.125, 0.75, 1.2) and (x2, y2, Γ2) = (−0.125, 0.5, 0.4), and a two-vortex estimator with 8 sensors (shown as brown squares). Each row depicts one mixture model component. In each left panel, true vortex positions are shown as filled black circles, and each vortex's corresponding position covariance is shown as a filled gray ellipse. Red ellipses depict the position covariance of the mixture model component, and the dots are the MCMC samples with greater than 50 percent probability of belonging to that component (blue for positive strength; red for negative). Right panels depict the MCMC samples of vortex strengths, with true vortex strengths shown as a black circle in each, and the longest axis of the true covariance ellipsoid depicted by the line.

Figure 13. Two true vortices with strengths Γ1 = 1.2 and Γ2 = −1.0, and a two-vortex estimator with 8 sensors (shown as brown squares in the left panels). Varying separation between vortices: (a,b) x2 − x1 = 1.75, (c,d) 1.25, (e,f) 0.75, (g,h) 0.25. Each left panel depicts contours of the expected vorticity field, with true vortex positions shown as filled black circles. Each right panel depicts MCMC samples of vortex strengths, with true vortex strengths shown as a black circle, and the longest axis of the true covariance ellipsoid depicted by the line.

Figure 16. Three true vortices with states x1 = (x1, y1, Γ1) = (−0.75, 0.5, 1), x2 = (0.45, 0.5, −1.2) and x3 = (0.55, 0.75, 1.4), and a three-vortex estimator with 11 sensors (shown as brown squares). Each row depicts a mixture model component. In each left panel, true vortex positions are shown as filled black circles, and each vortex's corresponding position covariance is shown as a filled gray ellipse. Red ellipses depict the position covariance of the mixture model component, and the dots are the MCMC samples with greater than 50 percent probability of belonging to that component (blue for positive strength; red for negative). Right panels depict contours of the estimated pressure field for that mode.