The recent disruption of medical oxygen supply during the second wave of coronavirus disease 2019 (COVID-19) caused nationwide panic. This study objectively analyzes the medical oxygen supply chain in India using the principles of value stream mapping (VSM), identifies bottlenecks, and recommends systemic improvements.
Methods:
Process mapping of the medical oxygen supply chain in India was carried out, factoring in the various licenses and approvals together with their conditions, compliance requirements, and renewals. All relevant circulars (government notices), official orders, amendments, and gazette notifications pertaining to medical oxygen from April 2020 to April 2021 were studied and corroborated with information from the official website of the Petroleum and Explosives Safety Organization (PESO).
Results:
Every step of the medical oxygen supply chain, from manufacture through filling, storage, and transport to the end users, has regulatory bottlenecks. Consequently, the flow of materials is sluggish, and very poor information flow has aggravated the system's inherent inefficiencies. The Government of India has been loosening regulatory norms at every stage to alleviate the crisis.
Conclusions:
Over the years, regulatory bottlenecks have indirectly fueled an informal sector that lies outside the Government's control, making black-marketing and hoarding difficult to curb. Technology-enabled, data-driven regulatory processes with a minimal discretionary human interface can make the system more resilient.
Laryngeal cancer is the second most prevalent head and neck malignancy in the USA. With recent advances in technology, laryngeal biopsy is increasingly performed under local anaesthesia in the out-patient setting. This study aimed to identify the efficacy, safety and cost-effectiveness of out-patient laryngeal biopsy by conducting a systematic review.
Method
A literature search was conducted using PubMed, Medline, Google Scholar and Embase over a 20-year period. The inclusion criterion was studies of out-patient diagnostic biopsy procedures of the larynx; all therapeutic procedures were excluded. The outcome measures were sensitivity and specificity, complication rate and cost savings.
Results
Thirty-five studies were included in the analysis. Sensitivity and specificity varied from 60 to 100 per cent, with a low complication rate and cost savings.
Conclusion
Office-based laryngeal biopsies are increasingly used in the diagnosis of laryngeal cancers, resulting in earlier diagnosis and commencement of treatment. The main barrier to undertaking this procedure is its low sensitivity.
Unilateral maxillary sinus opacification on computed tomography may reflect an inflammatory or neoplastic process. The neoplasia risk is not clear in the literature.
Methods
In this retrospective study, computed tomography sinus scans performed over 12 months were screened for unilateral maxillary sinus opacification, and the rates of inflammatory and neoplastic diagnoses were calculated.
Results
Of 641 computed tomography sinus scans, the rate of unilateral maxillary sinus opacification was 9 per cent. Fifty-two cases were analysed. The risk of neoplasia was 2 per cent (inverted papilloma, n = 1). No cases of unilateral maxillary sinus opacification represented malignancy, but one case of lymphoma had an incidental finding of unilateral maxillary sinus opacification on the contralateral side. Patients with an antrochoanal polyp (n = 3), fungal disease (n = 1), inverted papilloma and lymphoma all had a unilateral nasal mass.
Conclusion
Our neoplasia rate of 2 per cent was lower than previously reported. A unilateral mass was predictive of pathology that required operative management. Clinical findings, rather than simple findings of opacification on computed tomography, should drive the decision to perform biopsy.
Magnetic resonance imaging scans of the internal acoustic meatus are commonly requested in the investigation of audio-vestibular symptoms for potential vestibular schwannoma. There have been multiple studies into protocols for requesting magnetic resonance imaging for vestibular schwannoma, but none have been reported based on UK National Institute for Health and Care Excellence guidelines for investigating audio-vestibular symptoms. This study aimed to identify the local magnetic resonance imaging detection rates and patterns of vestibular schwannoma, and to audit the conformity of scan requests with the National Institute for Health and Care Excellence guidelines, with a review of relevant literature.
Method
A retrospective analysis of 1300 magnetic resonance imaging scans of the internal acoustic meatus, compared against National Institute for Health and Care Excellence guidelines, was conducted over two years.
Results and conclusion
Sixteen scans were positive for vestibular schwannoma, with a detection rate of 1.23 per cent. All positive cases fit the guidelines; three of these could have been missed using other criteria. A total of 281 requests did not meet the guideline criteria but revealed no positive results, supporting the use of National Institute for Health and Care Excellence guidelines in planning magnetic resonance imaging scans for audio-vestibular symptoms.
Fetal echocardiography is the main modality of prenatal diagnosis of CHD. This study was done to describe the trends and benefits associated with prenatal diagnosis of complex CHD at a tertiary care centre.
Methods
Retrospective chart review of patients with complex CHD over an 18-year period was performed. Rates of prenatal detection along with early and late infant mortality outcomes were studied.
Results
Of 381 patients with complex CHD born during the study period, 68.8% were diagnosed prenatally. The prenatal detection rate increased from the low 50s (per cent) in the first quarter of the study period to the mid-80s in the last quarter (p=0.001). The detection rate for conotruncal anomalies also increased over the study period. No infant mortality benefit was noted with prenatal detection.
Conclusions
Improved obstetrical screening indications and techniques have contributed to higher proportions of prenatal diagnosis of complex CHD. However, prenatal diagnosis did not confer survival benefits in infancy in our study.
Enlargement of the left atrium is a non-invasive marker of diastolic dysfunction of the left ventricle, a determinant of prognosis in children with cardiomyopathy. Similarly, N-terminal prohormone brain natriuretic peptide is a useful marker in the management of children with cardiomyopathy and heart failure. The aim of this study is to evaluate the association of left atrial pressures with left atrial volume and N-terminal prohormone brain natriuretic peptide in children with cardiomyopathy.
Methods
This was a retrospective study reviewing the medical records of patients <18 years of age who were diagnosed with cardiomyopathy or acute myocarditis with eventual development of cardiomyopathy. Left atrial volume by transthoracic echocardiogram and pulmonary capillary wedge pressure, a surrogate of left atrial pressure, obtained by means of cardiac catheterisation were analysed. In addition, N-terminal prohormone brain natriuretic peptide levels obtained at the time of the cardiac catheterisation were also reviewed. Statistical analysis was performed to evaluate the association of left atrial pressures with left atrial volume and N-terminal prohormone brain natriuretic peptide levels.
Results
There was a linear correlation of left atrial pressure estimated in the cardiac catheterisation with indexed left atrial volume (r=0.63; p<0.001) and left atrial volume z-scores (r=0.59; p<0.001). We found no statistically significant association between the left atrial pressure and N-terminal prohormone brain natriuretic peptide levels.
Conclusions
Left atrial volume measured non-invasively by echocardiography can be used as a surrogate for left atrial pressure in assessing diastolic dysfunction of the left ventricle in children with cardiomyopathy: the larger the left atrium, the worse the diastolic function of the left ventricle.
Kinetic energy and reactive scalar spectra in turbulent premixed flames are studied from compressible three-dimensional direct numerical simulations (DNS) of a temporally evolving rectangular slot-jet premixed flame, a statistically one-dimensional configuration. The flames correspond to a lean premixed hydrogen–air mixture at an equivalence ratio of 0.7, preheated to 700 K and at 1 atm, and three DNS are considered with a fixed jet Reynolds number of 10 000 and a jet Damköhler number varying between 0.13 and 0.54. For the study of spectra, motivated by the need to account for density change, which can be locally strong in premixed flames, a new density-weighted definition for two-point velocity/scalar correlations is proposed. The density-weighted two-point correlation tensor retains the essential properties of its constant-density (incompressible) counterpart and recovers the density-weighted Reynolds stress tensor in the limit of zero separation. The density weighting also allows the derivation of balance equations for velocity and scalar spectrum functions in the wavenumber space that illuminate physics unique to combusting flows. Pressure–dilatation correlation is a source of kinetic energy at high wavenumbers and, analogously, reaction rate–scalar fluctuation correlation is a high-wavenumber source of scalar energy. These results are verified by the spectra constructed from the DNS data. The kinetic energy spectra show a distinct inertial range with a $-5/3$ scaling followed by a ‘diffusive–reactive’ range at higher wavenumbers.
The exponential drop-off in this range shows a distinct inflection in the vicinity of the wavenumber corresponding to the laminar flame thickness, $\delta_L$, and this is attributed to the contribution from the pressure–dilatation term in the energy balance in wavenumber space. Likewise, a clear spike in the spectra of the major reactant species (hydrogen), arising from the reaction-rate term, is observed at wavenumbers corresponding to $\delta_L$. It appears that in the inertial range the classical scaling laws for the spectra involving the Kolmogorov scale are applicable, but in the high-wavenumber range, where chemical reactions have a strong signature, the laminar flame thickness produces a better collapse. It is suggested that a full scaling should perhaps involve the Kolmogorov scale, the laminar flame thickness, the Damköhler number and the Karlovitz number.
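The density-weighted correlation proposed above is not written out in this abstract. One natural construction consistent with the stated properties is to weight the Favre velocity fluctuations by $\sqrt{\rho}$, sketched below; this is an illustrative form, not necessarily the authors' exact definition.

```latex
% Sketch of a density-weighted two-point velocity correlation
% (illustrative form; the paper's exact definition may differ).
\[
  \hat{R}_{ij}(\boldsymbol{r})
    = \overline{\big[\sqrt{\rho}\,u_i''\big](\boldsymbol{x})\,
                \big[\sqrt{\rho}\,u_j''\big](\boldsymbol{x}+\boldsymbol{r})},
  \qquad u_i'' = u_i - \tilde{u}_i ,
\]
% so that at zero separation it recovers the density-weighted Reynolds
% stress, \(\hat{R}_{ij}(\boldsymbol{0}) = \overline{\rho\,u_i'' u_j''}\),
% as required.
```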
The sound emission from open turbulent flames is dictated by the two-point spatial correlation of the rate of change of the fluctuating heat release rate. In premixed flames this correlation can be represented well using Gaussian-type functions, and the unstrained laminar flame thermal thickness can be used to scale the correlation length scale, which is about a quarter of the planar laminar flame thermal thickness. This correlation and its length scale are observed to be relatively insensitive to fuel type, stoichiometry, and turbulence Reynolds and Damköhler numbers. The time scale for the fluctuating heat release rate is deduced, using direct numerical simulation (DNS) data, to be about τc/34 on average, where τc is the planar laminar flame time scale. These results and the spatial distribution of mean reaction rate obtained from Reynolds-averaged Navier–Stokes (RANS) calculations of open turbulent premixed flames, employing the standard model and an algebraic reaction rate closure involving a recently developed scalar dissipation rate model, are used to obtain the far-field sound pressure level from open flames. The calculated values agree well with measured values for flames of different stoichiometry and fuel types, having a range of turbulence intensities and heat outputs. Detailed analyses of the RANS results clearly suggest that the noise level from turbulent premixed flames having an extensive and uniform spatial distribution of heat release rate is low.
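The Gaussian-type representation and the quarter-thickness length scale described above can be sketched as follows; the functional form and symbol names here are illustrative assumptions, chosen only to make the scaling concrete, not the paper's exact model.

```latex
% Illustrative Gaussian model (symbols are assumptions, not the paper's
% notation) for the normalised spatial correlation of the rate of change
% of heat release rate fluctuations:
\[
  \hat{\Lambda}(r) \approx \exp\!\left(-\frac{r^{2}}{2\,\ell^{2}}\right),
  \qquad \ell \approx \frac{\delta_{th}}{4},
\]
% where r is the two-point separation and \delta_{th} is the unstrained
% planar laminar flame thermal thickness, so the correlation length scale
% is about a quarter of \delta_{th}, as stated above.
```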
An outbreak of gastroenteritis occurred in Italy among 39 persons who had attended a private supper. All guests were previously healthy, young, non-pregnant adults; 18 (46%) had symptoms, mostly gastrointestinal (78%), with a short incubation period. Four were hospitalized with acute febrile gastroenteritis, two of whom had blood cultures positive for Listeria monocytogenes. No other microorganisms were recovered from the hospitalized patients' specimens. Epidemiological investigation identified rice salad as the most likely vehicle of the food-borne outbreak. L. monocytogenes was isolated from three leftover foods, the kitchen freezer and the blender. Isolates from the patients, the foods and the freezer were indistinguishable: serotype 1/2b, with the same phage type and multilocus enzyme electrophoretic type. Eight (36%) of 22 guests tested were found to have antibodies against L. monocytogenes, compared with none of 11 controls from the general population. This point-source outbreak was probably caused by infection with L. monocytogenes. Unusual features included the high attack rate among immunocompetent adults and the predominance of gastrointestinal symptoms.
Hydroxyapatite formed from low-temperature-setting calcium phosphate cements (CPCs) is currently used for various orthopaedic applications. CPCs are attractive candidates for the development of scaffolds for bone tissue engineering, since they are moldable, resorbable, set at physiological temperature without the use of toxic chemicals, and can be processed in an operating room setting. However, their mechanical disadvantages seriously limit them to non-load-bearing orthopaedic applications. The aim of the present study was to develop composites from polyphosphazenes and calcium-deficient hydroxyapatite precursors to form poorly crystalline hydroxyapatite-polymer composites. Composites were formed from calcium-deficient hydroxyapatite precursors (Ca/P = 1.5, 1.6) and the biodegradable polyphosphazenes poly[bis(ethyl alanato)phosphazene] (PNEA) and poly[(50% ethyl alanato)(50% methyl phenoxy)phosphazene] (PNEA50mPh50) at physiological temperature. The results demonstrated that poorly crystalline hydroxyapatite resembling the mineral component of bone was formed in the presence of the biodegradable polyphosphazenes. The surface morphology of all four composites was identical, with a porous microstructure. The composites supported the adhesion and proliferation of osteoblast-like MC3T3-E1 cells, making them potential candidates for bone tissue engineering.
We have previously demonstrated that blending biodegradable glycine co-substituted polyphosphazenes with poly(lactide-co-glycolide) (PLAGA) results in novel biomaterials with versatile properties. That study showed that the degradation rate of polyphosphazene/PLAGA blends can be effectively controlled by varying the blend composition, while the degradation products of the polyphosphazenes effectively neutralized the acidic degradation products of PLAGA. In the present study, novel blends of a hydrophobic, biodegradable polyphosphazene, poly[bis(ethyl alanato)phosphazene] (PNEA), and PLAGA (LA:GA 85:15) were developed as candidates for bone tissue engineering applications. Two blend compositions were prepared by blending PNEA and PLAGA at weight ratios of 25:75 (Blend-1) and 50:50 (Blend-2) by the mutual solvent technique using dichloromethane as the solvent. The miscibility of the blends was determined using differential scanning calorimetry (DSC), Fourier transform infrared spectroscopy (FT-IR), and scanning electron microscopy (SEM). Surface analysis of the blends by SEM revealed a smooth uniform surface for Blend-1, whereas Blend-2 showed evidence of phase separation. PNEA is not completely miscible with PLAGA, as evidenced by the DSC and FT-IR measurements. The osteocompatibilities of Blend-1 and Blend-2 were compared with those of the parent polymers by following the adhesion and proliferation of primary rat osteoblast cells on two-dimensional (2-D) polymer and blend films over a 21-day period in culture. The blend films showed significantly higher cell numbers on the surface compared with the PLAGA and PNEA films.
A study has been made of the microstructure and hardness of machining chips created from commercially pure iron and carbon steels. The large shear strains imposed during chip formation in machining are found to produce significant microstructure refinement in the chips, resulting in higher hardness compared with the bulk. Transmission electron and scanning electron microscopy have shown the chips to consist entirely of ultra-fine grain structures, with ferrite grain sizes in the range of 100–800 nm. With high-carbon steels, the microstructure of the bulk material prior to machining is also seen to have a significant influence on the characteristics of the chip.
The Global Trade Analysis Project (GTAP) benchmark data and parameters are specified for 37 commodities and 24 regions. Due to the size of this data set, an aggregated version of the data base and parameters will be desired for most GTAP simulations. The precise dimensions of each aggregation will depend on the problem at hand. Experienced users tend to favor strategic aggregations that allow them to focus on key sectors and regions of interest. This makes the job of sorting through the simulation results less daunting. For teaching purposes, we usually begin with the three-region, three-commodity (3×3) aggregation referred to in Chapters 2 and 4. Section II introduces you to the GTAP aggregation facility that created this 3×3 data set. We will also examine some of the key value flows, as well as the full parameter file. In section III, local behavior of the 3×3 model will be examined through the use of general equilibrium demand elasticities. This offers a valuable summary of the interaction between theory, data, and parameters in the model.
Aggregation of the GTAP data
The user specifies the desired aggregation of the GTAP data base by filling in a template file. This involves defining names for the aggregated commodities and associating them with disaggregate GTAP commodity categories, then doing the same for regions.
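As a rough sketch of the template idea described above (all names, mappings, and values below are invented for illustration; they are not actual GTAP categories, regions, or data), the aggregation amounts to summing disaggregate value flows into user-defined aggregate cells:

```python
# Hypothetical sketch of GTAP-style aggregation. The template maps each
# disaggregate commodity/region to an aggregate name; value flows are
# then summed into the aggregated cells. Illustrative data only.

# Template: aggregate name -> disaggregate categories it absorbs
commodity_map = {
    "food":  ["paddy_rice", "wheat", "other_grains"],
    "mnfcs": ["textiles", "chemicals"],
    "svces": ["trade_transport", "other_services"],
}
region_map = {
    "USA": ["usa"],
    "EU":  ["germany", "france"],
    "ROW": ["rest_of_world"],
}

def invert(mapping):
    """Return a lookup: disaggregate category -> aggregate name."""
    return {d: agg for agg, members in mapping.items() for d in members}

def aggregate_flows(flows, commodity_map, region_map):
    """Sum disaggregate value flows keyed by (commodity, region)
    into the aggregated (commodity, region) cells."""
    c_of, r_of = invert(commodity_map), invert(region_map)
    out = {}
    for (com, reg), value in flows.items():
        key = (c_of[com], r_of[reg])
        out[key] = out.get(key, 0.0) + value
    return out

# Illustrative disaggregate value flows
flows = {
    ("wheat", "usa"): 10.0,
    ("paddy_rice", "germany"): 4.0,
    ("other_grains", "france"): 6.0,
    ("textiles", "usa"): 8.0,
}

agg = aggregate_flows(flows, commodity_map, region_map)
print(agg[("food", "EU")])  # prints 10.0 (= 4.0 + 6.0)
```

In practice the mapping is written into the GTAP template file rather than code, but the underlying summation of value flows into aggregate cells follows this logic.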
Multilayered Ti/Al thin films (with nominally equal layer thicknesses of Ti and Al) have been sputter deposited on oxidized silicon substrates at room temperature. Transmission electron microscopy (TEM) and high-resolution electron microscopy have been used to characterize the structure of these multilayers as a function of the layer thickness. Ti changed from an hcp to an fcc and back to an hcp structure on reduction of the layer thickness. Al too changed from an fcc to an hcp structure at a layer thickness of 2.5 nm. The observed structural transitions have been explained on the basis of the Redfield–Zangwill model. Subsequently, Ti-aluminide thin films were deposited using a γ-TiAl target. These films were found to be amorphous in the as-deposited condition, with crystallites of α-Ti(Al) embedded in the amorphous matrix. On annealing under a protective Ar atmosphere at a temperature of 550 °C, the Ti-aluminide film crystallized into a nanocrystalline two-phase microstructure consisting of γ-TiAl and α2-Ti3Al. The crystallization of the aluminide film has been investigated in detail by in situ annealing experiments on a hot stage in the TEM, and the results of this investigation are discussed in this paper.
Thin films of Ti-aluminides have been sputter deposited using a binary γ-TiAl based target and a quaternary γ-TiAl based target containing alloying additions of Nb and Mn. The as-deposited binary film consisted of microcrystalline agglomerates of α-Ti(Al) embedded in an amorphous matrix whereas the as-deposited quaternary film was found to be amorphous. These films have been annealed in a furnace under a protective Ar atmosphere (referred to as ex situ annealing) as well as on a hot stage in a transmission electron microscope (TEM), referred to as in situ annealing, to study the crystallization behavior of these films and also the effect of Nb and Mn additions on the same. Ex situ annealing of both binary and quaternary films resulted in a nanocrystalline microstructure consisting of primarily γ-TiAl with a small amount of finely dispersed α2-Ti3Al. During in situ annealing of the binary film, at 753 K (stage temperature), growth of the α-Ti(Al) phase was observed together with crystallization of a second phase in relatively thicker regions of the specimen. The first crystalline phase to appear during in situ annealing of the quaternary film was a surface crystallized metastable tetragonal phase at 873 K. Prolonged annealing at the same temperature resulted in the transformation of the amorphous region between grains of the tetragonal phase into γ-TiAl. Formation of the tetragonal and intergranular γ-TiAl phases were also observed in the thin regions of the binary film when it was heated to 873 K.