
DPD-v2: Generalised deep particle diffusometry for varied particle shapes and experimental conditions

Published online by Cambridge University Press:  31 March 2025

Pranshul Sardana*
Affiliation:
School of Mechanical Engineering, Purdue University, West Lafayette, IN, USA
Steven T. Wereley
Affiliation:
School of Mechanical Engineering, Purdue University, West Lafayette, IN, USA
Corresponding author: Pranshul Sardana; Email: psardana@purdue.edu

Abstract

Particle diffusometry (PD) is a technique for measuring the diffusion coefficient of a fluid sample by seeding it with tracer particles and observing their motion under a microscope. In microfluidic set-ups, the observed particles are often defocused and their motion is affected by factors such as fluid flow, which leads to high errors for conventional and deep learning-based PD (DPD) algorithms. This work improves the performance of DPD models by updating their architecture, avoiding temporal averaging in the input and exploring the impact of various choices during training. The resulting models, called DPD-v2, provide state-of-the-art performance on generalised datasets regardless of particle shape, concentration, flow or image noise. They achieve a mean absolute error of 0.09 μm² s⁻¹ for Gaussian particles and 0.07 μm² s⁻¹ for defocused particles, which is 2x–4x lower than the two next-best methods. The performance of the DPD-v2 models increases with crop size and with the use of multiple stacks of images. The outputs of the DPD-v2 models were compared against those of conventional algorithms on Gaussianised experimental no-flow datasets, yielding a mean absolute difference of < 0.5 μm² s⁻¹. Hence, the DPD-v2 models can be used in real-world scenarios.
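The measurement principle behind PD, as described above, can be illustrated with a minimal, self-contained sketch (a hypothetical illustration, not the authors' pipeline): simulate 2-D Brownian tracer tracks and recover the diffusion coefficient from the mean squared displacement via the Einstein relation, MSD = 4DΔt in two dimensions.

```python
import math
import random

def simulate_tracks(D, dt, n_particles, n_steps, seed=0):
    """Simulate 2-D Brownian tracks; per-axis step std is sqrt(2*D*dt)."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)
    tracks = []
    for _ in range(n_particles):
        x = y = 0.0
        path = [(x, y)]
        for _ in range(n_steps):
            x += rng.gauss(0.0, sigma)
            y += rng.gauss(0.0, sigma)
            path.append((x, y))
        tracks.append(path)
    return tracks

def estimate_D(tracks, dt):
    """Einstein relation in 2-D: single-frame MSD = 4*D*dt."""
    sq_disp = [
        (x1 - x0) ** 2 + (y1 - y0) ** 2
        for path in tracks
        for (x0, y0), (x1, y1) in zip(path, path[1:])
    ]
    msd = sum(sq_disp) / len(sq_disp)
    return msd / (4.0 * dt)

tracks = simulate_tracks(D=1.0, dt=0.1, n_particles=500, n_steps=100)
d_hat = estimate_D(tracks, dt=0.1)  # approaches the true D = 1.0 μm² s⁻¹
```

The defocus, flow and noise effects discussed in the abstract bias exactly this MSD estimate, which is what motivates the learned DPD-v2 approach.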

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Figure 1. Particle shapes obtained from Gaussian and ray-tracing simulations: (a) particle from the Gaussian simulation, (b) in-focus particle obtained from the ray-tracing simulation, (c) out-of-focus particle obtained from the ray-tracing simulation at a distance of 20 μm from the focal plane. The images are zoomed in to improve visualisation.


Figure 2. The 400 px × 400 px crops of images with (a) Gaussian particles and (b) defocused particles.


Figure 3. Main components of the DPD-v2 architecture: the first convolution layer (conv), n repeating residual blocks (each with two convolution layers), an average pool layer and a fully connected (fc) layer.
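The components named in the Figure 3 caption can be sketched in a deliberately simplified, single-channel NumPy forward pass. All kernel sizes, weights and the global-pooling choice here are illustrative assumptions, not the published DPD-v2 configuration:

```python
import numpy as np

def conv3x3(x, k):
    """Naive 'same' 3x3 convolution with zero padding (single channel)."""
    H, W = x.shape
    xp = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + 3, j:j + 3] * k)
    return out

def residual_block(x, k1, k2):
    """Two convolutions with a skip connection, as in the caption."""
    return x + conv3x3(np.maximum(conv3x3(x, k1), 0.0), k2)

def dpd_v2_forward(x, first_kernel, blocks, w, b):
    """conv -> n residual blocks -> average pool -> fc (scalar output)."""
    x = conv3x3(x, first_kernel)
    for k1, k2 in blocks:
        x = residual_block(x, k1, k2)
    pooled = x.mean()          # average pool (global here, for brevity)
    return w * pooled + b      # fully connected layer -> diffusion estimate
```

A real implementation would use multi-channel convolutions, batch normalisation and a deep-learning framework; this sketch only mirrors the data flow named in the caption.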


Figure 4. Data flow during training and testing.


Table 1. Exhaustive hyperparameter and architectural parameter space used for the joint search of the training configuration. Some experiments reduced this space to fit the large models on the GPUs and to avoid training instabilities


Table 2. Mean absolute error (μm² s⁻¹) for DPD-v1 and DPD-v2 models on the Gaussian and defocused test datasets


Figure 5. Mean absolute errors (μm² s⁻¹) obtained from the best DPD-v2 models selected using the following hyperparameter optimisation criteria: maximising the R², minimising the MSE, minimising the MAE and minimising the MRAE.


Figure 6. Mean absolute errors (μm² s⁻¹) obtained from the best DPD-v2 models trained with the L1 and L2 loss functions.


Figure 7. Mean absolute errors (μm² s⁻¹) obtained for DPD-v2 models with different depths. These models were tested on 1 and 5 stacks of images, leading to 4-frame and 20-frame results, respectively.


Figure 8. Mean absolute errors (μm² s⁻¹) obtained for DPD-v2 models with different average pool kernel sizes.


Figure 9. Performance of the best DPD-v2 models trained on the defocused dataset.


Table 3. Effect of different flows on the performance of the models trained for Gaussian and defocused particles


Figure 10. Mean absolute errors (μm² s⁻¹) of DPD-v2 models trained on different crop sizes. In all cases, the same amount of spatial information is used by taking multiple crops for smaller crop sizes.


Figure 11. Mean absolute errors (μm² s⁻¹) of DPD-v2 models trained on different numbers of frames in the stack. In all cases, the same amount of temporal information is used by using multiple stacks for smaller stack sizes.


Figure 12. Mean absolute errors (μm² s⁻¹) of DPD-v2 models trained on datasets with varying particle concentrations: 300, 600 and 1000 particles per frame. The models were trained and tested with a crop size of 256 × 256 pixels. A crop size of 512 × 512 pixels was also used for the 300-particle Gaussian case to avoid crops with no particles.


Figure 13. Mean absolute errors (μm² s⁻¹) of DPD-v2 models trained on datasets with varying levels of Gaussian noise. The standard deviation was set to 25.5 pixels.


Table 4. Mean absolute difference (μm² s⁻¹) between the outputs from DPD-v2 models and the outputs from trackPy and iPED on Gaussianised experimental datasets


Figure 14. Mean absolute errors (μm² s⁻¹) of various methods on Gaussian and defocused datasets. The experiments were done with 20 frames.