What happens when bodies are foregrounded as information sources and brought into thinking about information literacy? In what ways do theories of embodiment and of the body disrupt current discourses and practices about information literacy and help to shape a deeper understanding of the complexity of the practice? What do we gain when we bring the body into view?
Embodiment represents knowledge that is acquired by doing and by subjecting or being subject to experiences with knowledges (our own and others') derived from enculturation, encoding or embedded performance (Blackler, 1995). Embodied knowledge is only partially explicit but nonetheless important, as it references our tangible interactions and developing experiences with practices, performances and others over time and space. Embodiment represents the enmeshment of the corporeal, emotional, sensory and sentient dimensions of the lived experience. On this view, embodiment is a construction that is subject to the various discourses that construct, deconstruct, emplace and disrupt the body in-practice and as-it-practises. To put this another way, embodiment is informational.
The centrality of the body to our everyday practice should not, therefore, be relegated or reduced to secondary knowledge in the library and information science (LIS) field. Our bodies act as site and source for our inward reflection and reflexivity and outwardly as site and source for others. As we reflect upon and ‘read’ embodied performances, we access the trajectories and history of the lived experience. The increasing enmeshment of our information culture with digital platforms and technologies further means that theories of embodiment and corporeality are required to ensure the centrality of the body as site and source is foregrounded and not silenced or relegated to secondary knowledge.
An argument for the body
A claim for the inclusion of the body and embodiment in information literacy research and, more broadly, in LIS, is woven through this chapter. Primarily this claim proposes that disassociating information literacy from the corporeal and embodied experience will lead to an incomplete understanding of the complexity of the practice. This, in turn, diminishes the field’s understanding of the central role that information, in all its manifestations, plays in practice.
Determining an accurate picture of ocean currents is an important societal challenge for oceanographers, aiding our understanding of the vital role currents play in regulating Earth’s climate, and in the dispersal of marine species and pollutants, including microplastics. The geodetic approach, which combines satellite observations of sea level and Earth’s gravity, offers the only means to estimate the dominant geostrophic component of these currents globally. However, geodetically determined geostrophic currents suffer from high levels of contamination in the form of geodetic noise. Conventional approaches use isotropic spatial filters to improve the signal-to-noise ratio, though this results in high levels of attenuation. Hence, the use of deep learning to improve the geodetic determination of ocean currents is investigated. Supervised machine learning typically requires clean targets from which to learn; however, such targets do not exist in this case. Therefore, a training dataset is generated by substituting clean targets with naturally smooth climate model data, and generative machine learning networks are employed to replicate geodetic noise, providing noisy input and clean target pairs. Prior knowledge of the geodetic noise is exploited to develop a more realistic training dataset. A convolutional denoising autoencoder (CDAE) is then trained on these pairs and applied to unseen real geodetic ocean currents. It is demonstrated that our method outperforms conventional isotropic filtering in a case study of four key regions: the Gulf Stream, the Kuroshio Current, the Agulhas Current, and the Brazil–Malvinas Confluence Zone.
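As a point of reference for the conventional baseline mentioned above, an isotropic spatial filter can be sketched in a few lines. This is a generic boxcar-averaging illustration on an invented synthetic field (the grid size, signal and noise level are made up for the example), not the filtering actually used in the study:

```python
import math
import random

def isotropic_filter(field, radius=1):
    """Apply a simple isotropic (boxcar) spatial filter to a 2-D grid.

    Each cell is replaced by the mean of its symmetric neighbourhood,
    suppressing grid-scale noise at the cost of attenuating genuine
    small-scale features."""
    rows, cols = len(field), len(field[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            vals = [
                field[i + di][j + dj]
                for di in range(-radius, radius + 1)
                for dj in range(-radius, radius + 1)
                if 0 <= i + di < rows and 0 <= j + dj < cols
            ]
            out[i][j] = sum(vals) / len(vals)
    return out

def rmse(a, b):
    n = len(a) * len(a[0])
    return math.sqrt(sum((a[i][j] - b[i][j]) ** 2
                         for i in range(len(a))
                         for j in range(len(a[0]))) / n)

# Synthetic example: smooth "geostrophic" signal plus grid-scale noise.
random.seed(0)
clean = [[math.sin(i / 4.0) * math.cos(j / 4.0) for j in range(16)] for i in range(16)]
noisy = [[clean[i][j] + random.gauss(0.0, 0.5) for j in range(16)] for i in range(16)]
smoothed = isotropic_filter(noisy, radius=2)

print(rmse(noisy, clean), rmse(smoothed, clean))
```

The same averaging that suppresses the noise also attenuates real small-scale current structure, which is the drawback that motivates the learned CDAE alternative.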
We are delighted to present the Special Issue on NLP Approaches to Offensive Content Online, published in issue 29.6 of the Journal of Natural Language Engineering. We are happy to have received a total of 26 submissions to the special issue, evidencing the interest of the NLP community in this topic. Our guest editorial board, comprising international experts in the field, worked hard to review all submissions over multiple rounds of peer review. Ultimately, we accepted nine articles to appear in this special issue.
With the popularization of electric vehicles, parking lots built earlier cannot meet the charging demand of a large number of electric vehicles. Mobile charging robots, which navigate autonomously and perform complete charging operations, make up for this deficiency. However, obstacles in the parking lot appear at random and change position over time, which calls for a stable and fast iterative path planning method. The gray wolf optimization (GWO) algorithm is a metaheuristic with the advantages of fast iteration and stability, but it is prone to becoming trapped in local optima. This article first addresses this issue by improving the fitness function and position update of the GWO algorithm and then optimizing the convergence factor. Subsequently, the fitness function of the improved gray wolf optimization (IGWO) algorithm was further refined based on the minimum cost equation of the A* algorithm. The key coefficients AC1 and AC2 of the two fitness functions, Fitness1 and Fitness2, are discussed. The improved gray wolf optimization algorithm integrating the A* algorithm (A*-IGWO) reduces the number of iterations and the path length compared to the GWO algorithm in parking lot path planning problems.
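For readers unfamiliar with the baseline, a minimal grey wolf optimizer can be sketched as follows. This is the textbook GWO update (the three best wolves alpha, beta and delta act as leaders, with a convergence factor decaying linearly from 2 to 0), demonstrated on a toy sphere function rather than on the article's path-planning fitness functions Fitness1 and Fitness2:

```python
import random

def gwo(objective, dim, bounds, n_wolves=20, n_iter=100, seed=1):
    """Minimal grey wolf optimizer (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(n_iter):
        wolves.sort(key=objective)
        # Copy the three best wolves so in-place updates below do not
        # shift the leaders mid-iteration.
        leaders = [list(wolves[i]) for i in range(3)]
        a = 2.0 - 2.0 * t / n_iter  # convergence factor, decays 2 -> 0
        for w in wolves:
            for d in range(dim):
                x = 0.0
                for leader in leaders:
                    r1, r2 = rng.random(), rng.random()
                    A = 2.0 * a * r1 - a  # large |A| early: exploration
                    C = 2.0 * r2
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                # New position: average of the three pulls, clamped to bounds.
                w[d] = min(hi, max(lo, x / 3.0))
    return min(wolves, key=objective)

# Toy objective: the 2-D sphere function, minimized at the origin.
best = gwo(lambda p: sum(x * x for x in p), dim=2, bounds=(-10.0, 10.0))
print(best)
```

The IGWO variants described in the abstract modify exactly the pieces visible here: the fitness function passed in as `objective`, the position update inside the inner loop, and the schedule of the convergence factor `a`.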
The OffensEval shared tasks organized as part of SemEval-2019–2020 were very popular, attracting over 1300 participating teams. The two editions of the shared task helped advance the state of the art in offensive language identification by providing the community with benchmark datasets in Arabic, Danish, English, Greek, and Turkish. The datasets were annotated using the OLID hierarchical taxonomy, which has since become the de facto standard in general offensive language identification research and is widely used beyond OffensEval. We present a survey of OffensEval and related competitions, and we discuss the main lessons learned. We further evaluate the performance of Large Language Models (LLMs), which have recently revolutionized the field of Natural Language Processing. We use zero-shot prompting with six popular LLMs and zero-shot learning with two task-specific fine-tuned BERT models, and we compare the results against those of the top-performing teams at the OffensEval competitions. Our results show that while some LLMs such as Flan-T5 achieve competitive performance, in general LLMs lag behind the best OffensEval systems.
Product graphics interchange format (GIF) animations are employed to show the features of a product and to make up for the lack of physical experience in online settings. These GIFs have been widely applied in domains such as e-shopping and social media, aiming to interest and impress viewers. Despite this wide application, most designers in this domain lack expertise and produce GIFs of varied quality. Moreover, knowledge of techniques to enhance viewers’ engagement with product GIFs is also lacking. To bridge the gap, we conducted a series of studies. First, we collected and summarized seven design factors drawing on existing literature and semi-structured interviews. Then, the impacts of these design factors were revealed through an online study of 106 product GIFs among 307 participants. The results showed that visual-related factors such as color contrast and moving intensity mainly impact viewers’ interest, while content-related factors such as scenario and style matching impact viewers’ impressions. Simple GIFs also impressed viewers in a quick viewing mode. Finally, we conducted a workshop and verified that these results support large-scale production of product GIFs. Our studies might support the codesign methods of product GIFs and enhance their quality in design practice.
In climate modeling, the stratospheric ozone layer is typically only considered in a highly simplified form due to computational constraints. For climate projections, it would be advantageous to include the mutual interactions between stratospheric ozone, temperature, and atmospheric dynamics to accurately represent radiative forcing. The overarching goal of our research is to replace the ozone layer in climate models with a machine-learned neural representation of the stratospheric ozone chemistry that allows for a particularly fast, but accurate and stable simulation. We created a benchmark data set from pairs of input and output variables that we stored from simulations of the ATLAS Chemistry and Transport Model. We analyzed several variants of multilayer perceptrons suitable for physical problems to learn a neural representation of a function that predicts 24-h ozone tendencies based on input variables. We performed a comprehensive hyperparameter optimization of the multilayer perceptron using Bayesian search and Hyperband early stopping. We validated our model by replacing the full chemistry module of ATLAS and comparing computation time, accuracy, and stability. We found that our model was a factor of 700 faster than the full chemistry module. The accuracy of our model compares favorably to the full chemistry module within a 2-year simulation run, also outperforms a previous polynomial approach for fast ozone chemistry, and reproduces seasonality well in both hemispheres. In conclusion, the neural representation of stratospheric ozone chemistry resulted in a simulated ozone layer with high accuracy, a significant speed-up, and stability in a long-term simulation.
Surface ozone is an air pollutant that contributes to hundreds of thousands of premature deaths annually. Accurate short-term ozone forecasts may allow improved policy actions to reduce the risk to human health. However, forecasting surface ozone is a difficult problem as its concentrations are controlled by a number of physical and chemical processes that act on varying timescales. We implement a state-of-the-art transformer-based model, the temporal fusion transformer, trained on observational data from three European countries. In four-day forecasts of daily maximum 8-hour ozone (DMA8), our novel approach is highly skillful (MAE = 4.9 ppb, coefficient of determination $R^2 = 0.81$) and generalizes well to data from 13 other European countries unseen during training (MAE = 5.0 ppb, $R^2 = 0.78$). The model outperforms other machine learning models on our data (ridge regression, random forests, and long short-term memory networks) and compares favorably to the performance of other published deep learning architectures tested on different data. Furthermore, we illustrate that the model pays attention to physical variables known to control ozone concentrations and that the attention mechanism allows the model to use the most relevant days of past ozone concentrations to make accurate forecasts on test data. The skillful performance of the model, particularly in generalizing to unseen European countries, suggests that machine learning methods may provide a computationally cheap approach for accurate air quality forecasting across Europe.
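The forecast target, daily maximum 8-hour ozone (DMA8), is a simple derived quantity: the maximum over a day of the 8-hour running mean of hourly concentrations. A minimal sketch follows; note that window conventions vary between regulatory definitions (some span into the next day), and this simplified version considers only the 8-hour windows starting within the day:

```python
def dma8(hourly_ozone):
    """Daily maximum 8-hour average ozone from 24 hourly values.

    Computes the mean of each 8-hour window starting within the day
    and returns the largest of those means."""
    assert len(hourly_ozone) == 24
    windows = [hourly_ozone[i:i + 8] for i in range(24 - 8 + 1)]
    return max(sum(w) / 8.0 for w in windows)

# Hourly ozone (ppb) with an elevated afternoon plateau.
day = [30] * 12 + [60] * 8 + [30] * 4
print(dma8(day))  # 60.0: the 8-hour window covering the plateau
```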
Land cover classification (LCC) and natural disaster response (NDR) are important issues in climate change mitigation and adaptation. Existing approaches that use machine learning with Earth observation (EO) imaging data for LCC and NDR often rely on fully annotated and segmented datasets. Creating these datasets requires a large amount of effort, and a lack of suitable datasets has become an obstacle in scaling the use of machine learning for EO. In this study, we extend our prior work on Scene-to-Patch models: an alternative machine learning approach for EO that utilizes Multiple Instance Learning (MIL). As our approach only requires high-level scene labels, it enables much faster development of new datasets while still providing segmentation through patch-level predictions, ultimately increasing the accessibility of using machine learning for EO. We propose new multi-resolution MIL architectures that outperform single-resolution MIL models and non-MIL baselines on the DeepGlobe LCC and FloodNet NDR datasets. In addition, we conduct a thorough analysis of model performance and interpretability.
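The core idea of a Scene-to-Patch MIL model, scene-level supervision with patch-level predictions as a by-product, can be illustrated with a toy aggregation step. The pooling options and scores below are illustrative only, not the multi-resolution architectures proposed in the study:

```python
def scene_prediction(patch_scores, pooling="mean"):
    """Aggregate patch-level class scores into one scene-level score.

    patch_scores: per-patch scores in [0, 1] for a single class.
    Training needs only the scene label (the bag label in MIL terms),
    while the per-patch scores double as a coarse segmentation map
    at inference time."""
    if pooling == "max":
        return max(patch_scores)
    return sum(patch_scores) / len(patch_scores)

patches = [0.1, 0.2, 0.9, 0.15]  # one patch strongly indicates the class
print(scene_prediction(patches, "max"))   # 0.9
print(scene_prediction(patches, "mean"))  # 0.3375
```

Max pooling matches the classic MIL assumption that one positive patch makes the scene positive; mean pooling spreads the supervision signal across all patches, which often trains more stably.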
Design representations play a crucial role in facilitating communication between individuals in design. Sketches and physical prototypes are frequently used to communicate design concepts in early-stage design. However, we lack an understanding of the communicative benefits each representation provides and how these benefits relate to the effort and resources required to create each representation. A mixed-methods study was conducted with 44 participants to identify whether sketches and physical prototypes led to different levels of cognitive load perceived by a communicator and listener and the characteristics that shape their cognitive load during communication. Results showed that listeners perceived higher levels of mental and physical demands when understanding ideas as low-fidelity physical prototypes, as compared to sketches. No significant differences were found in the cognitive load levels of communicators between the two conditions. Qualitative analyses of post-task semi-structured interviews identified five themes relating to verbal explanations and visual representations that shape designers’ cognitive load when understanding and communicating ideas through design representations. Results indicate that designers should be aware of the specific objectives they seek to accomplish when selecting the design representation used to communicate. This work contributes to the knowledge base needed for designers to use design representations more effectively as tools for communication.
This article describes a robot walker based on a new single degree-of-freedom six-bar leg mechanism that provides rectilinear, non-rotating movement of the foot. The walker is statically stable and requires only two actuators, one for each side, to provide effective walking movement on a flat surface. We use Curvature Theory to design a four-bar linkage with a flat-sided coupler curve and then add a translating link so that the walker's foot follows this coupler curve in rectilinear movement. A prototype walker was constructed that weighs 1.6 kg, is 180 mm tall, and travels at 162 mm/s. This is an innovative legged robot with a simple, reliable design.
Through an ecological approach to creative practice (henceforth ecomprovisation), this project deals with the expansion of creative strategies applicable to everyday contexts. Within ubiquitous music (ubimus), we target the convergence of sonification methods with the application of ecological models within the context of comprovisation. These conceptual frameworks inform the technological and aesthetic approaches applied in the making of Markarian 335. We describe the creative procedures and the implications of the design choices involved in this artwork. The contributions and shortcomings of our ecomprovisational approach are situated within the context of the current efforts to foster expanded creative possibilities in ubimus endeavours.
This article aims to explore the concept of patched/versioned musical works as creative ecologies. It identifies how the internet’s involvement in music creation and dissemination influences choices related to the release of such works. Throughout this writing, the author looks at the increasingly volatile structures surrounding recorded music in the early twenty-first century as a result of streaming platforms such as Spotify and video-based social media sites such as TikTok becoming the primary means for music consumption. It explores this volatility as a method for approaching the release of new music within dynamic musical ecosystems and looks at the growing art scene focusing on this way of working, drawing parallels between artistry and subscription-based services where content continually evolves over time.
Examining the role of arts and culture in regional Australia often focuses on economic aspects within the creative industries. However, this perspective tends to disregard the value of unconventional practices and fails to recognise the influence of regional ecological settings and the well-being advantages experienced by amateur and hobbyist musicians who explore ubiquitous methods of music creation. This article presents the results of a survey conducted among practitioners in regional Australia, exploring their utilisation of creative technology ecosystems. This project marks the first independent, evidence-based study of experimental electronic music practices in regional Australia and how local and digital resource ecosystems support those activities. Spanning the years 2021 and 2022, the study involved interviewing 11 participants from several Australian states. In this article, we share the study’s findings, outlining the diverse range of experimental electronic music practices taking place across regional Australia and how practitioners navigate the opportunities and challenges presented by their local context.
This article explores strategies that allow electronic music performers to engage their audiences and environments in live acts of co-creation. We outline our existing musical practice relying on site-specific sampling and digital mobile technologies that have been tested across a range of participatory music performances. Salient challenges within this performance context are identified and several tools and techniques are proposed as solutions. We then consider how setting-based and sample-based participatory performances can be expanded through gamification strategies. After exploring how notions of playful experience can offer new insights into the nature of audience engagement, we propose several approaches for introducing gamified elements into performative music practices that can expand the scope of audience participation while preserving key aspects of using concert location recordings and musical improvisation. We conclude by discussing the implications of these approaches for the performer–audience relationship and the prospect of musical engagement with the environment before suggesting directions for future research.
Lax extensions of set functors play a key role in various areas, including topology, concurrent systems, and modal logic, while predicate liftings provide a generic semantics of modal operators. We take a fresh look at the connection between lax extensions and predicate liftings from the point of view of quantale-enriched relations. Using this perspective, we show in particular that various fundamental concepts and results arise naturally and their proofs become very elementary. Ultimately, we prove that every lax extension is induced by a class of predicate liftings; we discuss several implications of this result.
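For orientation, the two notions being connected admit short standard definitions; these are the usual textbook formulations, not quoted from the article:

```latex
% Standard definitions (in the Set-based, powerset-valued special case):
\begin{itemize}
  \item A \emph{lax extension} $L$ of a $\mathbf{Set}$ functor $T$ assigns to
        each relation $R \subseteq X \times Y$ a relation
        $LR \subseteq TX \times TY$ such that
        \begin{enumerate}
          \item $R \subseteq R'$ implies $LR \subseteq LR'$ (monotonicity);
          \item $LS \cdot LR \subseteq L(S \cdot R)$ (lax preservation of
                relational composition);
          \item $Tf \subseteq Lf$ and $(Tf)^{\circ} \subseteq L(f^{\circ})$,
                where $f$ is a function viewed as a relation and
                $(-)^{\circ}$ is relational converse.
        \end{enumerate}
  \item A (unary) \emph{predicate lifting} for $T$ is a natural transformation
        $\lambda \colon Q \Rightarrow Q \circ T^{\mathrm{op}}$, where
        $Q \colon \mathbf{Set}^{\mathrm{op}} \to \mathbf{Set}$ is the
        contravariant powerset functor; it maps each predicate
        $A \subseteq X$ to a predicate $\lambda_X(A) \subseteq TX$.
\end{itemize}
```

The article's result that every lax extension is induced by a class of predicate liftings can then be read as moving between these two presentations, generalized to quantale-enriched relations.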