Energy consumption in buildings, both residential and commercial, accounts for approximately 40% of all energy usage in the United States, and similar figures are reported in countries around the world. This significant amount of energy is used to maintain a comfortable, secure, and productive environment for occupants. It is therefore crucial to optimize energy consumption in buildings while maintaining satisfactory levels of occupant comfort, health, and safety. Machine learning (ML) has proven to be an invaluable tool for deriving important insights from data and optimizing various systems. In this work, we review some of the most promising ways in which ML has been leveraged to make buildings smart and energy-efficient. For the convenience of readers, we provide a brief introduction to the relevant ML paradigms and to the components and functioning of each smart building system we cover. Finally, we discuss the challenges faced when implementing machine learning algorithms in smart buildings and outline future avenues for research in this field.
Climate trends and weather indicators are important in statistical modeling across several research fields, where they are frequently used as covariates. Climate indicators are usually available as grid files with varying spatial and temporal resolutions. In Brazil, time series of climate indicators compatible with administrative boundaries are scattered, not available for all years, and produced with diverse methodologies. In this paper, we propose time series of climate indicators for the Brazilian municipalities, produced using zonal statistics derived from the ERA5-Land reanalysis indicators. As a result, we present datasets of zonal statistics of climate indicators at daily resolution, covering the period from 1950 to 2022.
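As a minimal illustration of the zonal-statistics step, the sketch below aggregates a toy gridded indicator over a single zone mask. In the actual pipeline the grid would come from ERA5-Land files and the mask from municipal boundary polygons; all names and values here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def zonal_stats(grid, mask):
    """Zonal statistics of one gridded indicator over one zone.

    grid : 2D array of indicator values (e.g., daily mean temperature)
    mask : boolean 2D array, True for cells inside the zone boundary
    """
    values = grid[mask]
    return {
        "mean": float(values.mean()),
        "min": float(values.min()),
        "max": float(values.max()),
    }

# Toy 3x3 grid with a two-cell "municipality" in the top row
grid = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0],
                 [7.0, 8.0, 9.0]])
mask = np.zeros_like(grid, dtype=bool)
mask[0, 0] = mask[0, 1] = True

stats = zonal_stats(grid, mask)
```

Repeating this per municipality and per day yields the daily municipal time series described above.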
Despite the importance of quantifying how the spatial patterns of heavy precipitation will change with warming, we lack tools to objectively analyze the storm-scale outputs of modern climate models. To address this gap, we develop an unsupervised, spatial machine-learning framework to quantify how storm dynamics affect changes in heavy precipitation. We find that changes in heavy precipitation (above the 80th percentile) are predominantly explained by changes in the frequency of these events, rather than by changes in how these storm regimes produce precipitation. Our study shows how unsupervised machine learning, paired with domain knowledge, may allow us to better understand the physics of the atmosphere and anticipate the changes associated with a warming world.
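One simple way to separate frequency from intensity contributions to a change in heavy precipitation is a first-order decomposition, sketched below on toy data. The 80th-percentile threshold mirrors the study, but the function, the synthetic "warming" scenario, and the decomposition itself are illustrative assumptions, not the study's actual framework.

```python
import numpy as np

def decompose_change(precip_now, precip_warm, q=0.8):
    """Split the change in heavy-precipitation totals into a frequency
    term, an intensity term, and a cross term.

    Heavy events are values above the q-th percentile of the
    present-day sample.
    """
    thr = np.quantile(precip_now, q)
    heavy_now = precip_now[precip_now > thr]
    heavy_warm = precip_warm[precip_warm > thr]

    f_now = len(heavy_now) / len(precip_now)    # fraction of heavy events
    f_warm = len(heavy_warm) / len(precip_warm)
    i_now, i_warm = heavy_now.mean(), heavy_warm.mean()  # mean intensity

    total = f_warm * i_warm - f_now * i_now
    freq_term = (f_warm - f_now) * i_now        # more/fewer heavy events
    intens_term = f_now * (i_warm - i_now)      # each event wetter/drier
    cross_term = (f_warm - f_now) * (i_warm - i_now)
    return total, freq_term, intens_term, cross_term

# Toy scenario: uniformly 10% heavier precipitation in the warmer climate
rng = np.random.default_rng(0)
precip_now = rng.gamma(2.0, 2.0, size=1000)
precip_warm = precip_now * 1.1
total, freq, intens, cross = decompose_change(precip_now, precip_warm)
```

The three terms sum exactly to the total change, so the relative size of `freq` versus `intens` indicates which mechanism dominates.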
Globally, forests are net carbon sinks that partly mitigate anthropogenic climate change. However, there is evidence of increasing weather-induced tree mortality, which needs to be better understood to improve forest management under future climate conditions. Disentangling the drivers of tree mortality is challenging because they interact over multiple temporal scales. In this study, we take a data-driven approach to the problem. We generate hourly temperate weather data using a stochastic weather generator to simulate 160,000 years of beech, pine, and spruce forest dynamics with a forest gap model. These data are used to train a generative deep learning model (a modified variational autoencoder) to learn representations of three-year-long monthly weather conditions (precipitation, temperature, and solar radiation) in an unsupervised way. We then associate these weather representations with years of high biomass loss in the forests and derive weather prototypes associated with such years. The identified prototype weather conditions are associated with 5–22% higher median biomass loss than the median of all samples, depending on the forest type and the prototype. When prototype weather conditions co-occur, these figures increase to 10–25%. Our research illustrates how generative deep learning can discover compounding weather patterns associated with extreme impacts.
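The association step, pairing latent weather representations with biomass loss, can be illustrated with a toy calculation. All data and names below are synthetic assumptions for illustration; in the study, the latent representations come from the modified variational autoencoder rather than random numbers.

```python
import numpy as np

def prototype_excess_loss(latent, loss, prototype, k=100):
    """Percent difference between the median biomass loss of the k
    samples closest to a weather prototype in latent space and the
    median loss over all samples."""
    dist = np.linalg.norm(latent - prototype, axis=1)
    nearest = np.argsort(dist)[:k]
    return 100.0 * (np.median(loss[nearest]) - np.median(loss)) / np.median(loss)

# Toy data in which weather near the prototype drives higher losses
rng = np.random.default_rng(1)
latent = rng.normal(size=(1000, 2))
prototype = np.array([2.0, 2.0])
loss = 10.0 / (1.0 + np.linalg.norm(latent - prototype, axis=1))
excess = prototype_excess_loss(latent, loss, prototype, k=50)
```

A positive `excess` corresponds to the kind of 5–22% median-loss elevation reported for the identified prototypes.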
This study aimed to identify the major topics of discussion under the #sustainability hashtag on Twitter (now known as “X”) and to understand user engagement. The sharp increase in social media usage, combined with a rise in climate anomalies in recent years, makes sustainability on social media a critical topic. Python was used to gather Twitter posts between January 1, 2023, and March 1, 2023. User engagement metrics were analyzed using a variety of statistical methods, including keyword-frequency analysis and Latent Dirichlet Allocation (LDA), which were used to identify significant topics of discussion under the #sustainability hashtag; histograms and scatter plots were used to visualize user engagement. The LDA analysis was conducted with seven topics, a number chosen after trials with varying topic counts were evaluated for fit to the dataset. The frequency analysis provided a basic overview of the discourse surrounding #sustainability, covering technology, business and industry, environmental awareness, and discussion of the future. The LDA model provided a more comprehensive view, surfacing additional topics such as Environmental, Social, and Governance (ESG) and infrastructure, investing, collaboration, and education. These findings have implications for researchers, businesses, organizations, and politicians seeking to align their strategies and actions with the major topics surrounding sustainability on Twitter in order to have a greater impact on their audience. Researchers can use the results of this study to guide further research on the topic or to contextualize their work within the existing sustainability literature.
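The keyword-frequency part of such an analysis can be sketched in a few lines of Python. The posts and stopword list below are made up for illustration; this is not the study's pipeline.

```python
import re
from collections import Counter

def top_keywords(posts, stopwords, n=5):
    """Keyword-frequency analysis: lowercase each post, tokenize on
    letters (keeping a leading '#'), drop stopwords, and return the
    n most frequent tokens with their counts."""
    counts = Counter()
    for text in posts:
        tokens = re.findall(r"#?[a-z]+", text.lower())
        counts.update(t for t in tokens if t not in stopwords)
    return counts.most_common(n)

posts = [
    "#sustainability is about green energy",
    "green tech and green energy",
]
top = top_keywords(posts, stopwords={"is", "about", "and"}, n=2)
```

An LDA model would then be fit on such token counts to surface latent topics beyond raw frequencies.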
Projects for reducing emissions from deforestation and degradation (REDD+) have been criticized for issuing junk carbon credits based on invalid ex-ante baselines. Recently, the concept of an ex-post baseline has been discussed to address this criticism, although an ex-ante baseline remains necessary for project financing and risk assessment. To address this issue, we propose a Bayesian state-space model that integrates ex-ante baseline projection and ex-post dynamic baseline updating in a unified manner. Our approach provides a tool for appropriate risk assessment and performance evaluation of REDD+ projects. We apply the proposed model to a REDD+ project in Brazil and show that it may have had a small positive effect but has been overcredited. We also demonstrate that the 90% predictive interval of the ex-ante baseline includes the ex-post baseline, implying that our ex-ante estimation can work effectively.
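As an illustration of the general idea (the paper's actual specification is not reproduced here), a minimal linear-Gaussian state-space model for a dynamic baseline could take the form below, with a latent deforestation rate \(x_t\) and observed deforested area \(y_t\). The ex-ante baseline forward-simulates the state equation, while the ex-post update filters \(x_t\) against incoming observations.

```latex
% Illustrative sketch only; the paper's model may differ.
x_t = x_{t-1} + \eta_t, \qquad \eta_t \sim \mathcal{N}(0, \sigma_\eta^2)
      \quad \text{(state evolution)} \\
y_t = x_t + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \sigma_\varepsilon^2)
      \quad \text{(observation)}
```

In this framing, overcrediting corresponds to the realized ex-post baseline falling below the credited ex-ante projection.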
Transparent, understandable, and persuasive recommendations support electricity consumers’ behavioral change in tackling the energy efficiency problem. This paper proposes an explainable multi-agent recommendation system for load shifting for household appliances. First, we extend a novel multi-agent approach by designing and implementing an Explainability Agent that provides explainable recommendations for optimal appliance scheduling in textual and visual form. Second, we enhance the predictive capacity of the other agents by including weather data and applying state-of-the-art models (i.e., k-nearest neighbors, extreme gradient boosting, adaptive boosting, random forest, logistic regression, and explainable boosting machines). Since we want to help the user understand a single recommendation, we focus on local explainability. In particular, we apply the post-hoc, model-agnostic approaches Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) to explain the predictions of the chosen classifiers. We further provide an overview of predictive and explainability performance. Our results show a substantial improvement in the performance of the multi-agent system while at the same time opening up the “black box” of its recommendations.
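To make the SHAP idea concrete, the sketch below computes exact Shapley values for a two-feature toy model by brute force. The model, inputs, and baseline are invented for illustration; real applications use the SHAP library's efficient approximations rather than enumerating orderings.

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values for a tiny model: each feature's average
    marginal contribution over all orderings in which features are
    switched from the baseline to their observed value."""
    n = len(x)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        current = list(baseline)
        for i in order:
            before = f(current)
            current[i] = x[i]          # reveal feature i
            phi[i] += f(current) - before
    return [p / len(orders) for p in phi]

# Additive toy model: Shapley values recover each feature's contribution
model = lambda z: 2.0 * z[0] + 3.0 * z[1]
phi = shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0])
```

By the efficiency property, the values sum to the difference between the prediction at `x` and at the baseline, which is what makes them useful for explaining a single recommendation.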
Domain adaptation is important in agriculture because agricultural systems each have their own individual characteristics; applying the same treatment practices (e.g., fertilization) to different systems may therefore not have the desired effect. Domain adaptation is also an inherent aspect of digital twins. In this work, we examine the potential of transfer learning for domain adaptation in pasture digital twins. We use a synthetic dataset of grassland pasture simulations to pretrain and fine-tune machine learning metamodels for nitrogen response rate prediction. We investigate the outcome in locations with diverse climates and examine how including more weather and agricultural management practice data during the pretraining phase affects the results. We find that transfer learning appears promising for adapting the models to new conditions. Moreover, our experiments show that adding more weather data in the pretraining phase has a small effect on fine-tuned model performance compared with adding more management practice data. This is an interesting finding that merits further investigation in future studies.
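The pretrain-then-fine-tune pattern can be sketched with a deliberately simple metamodel: a linear model trained by gradient descent, where fine-tuning continues training from the pretrained weights on a small target-site dataset. The data, coefficients, and names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def fit(X, y, w=None, lr=0.1, epochs=500):
    """Least-squares linear model via gradient descent. Passing an
    existing weight vector `w` continues training from it, which is
    the fine-tuning step."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
# "Source" simulations: nitrogen response linear in two toy drivers
Xs = rng.normal(size=(200, 2))
ys = Xs @ np.array([1.0, 2.0])
w_pre = fit(Xs, ys)                        # pretraining on synthetic data

# "Target" site: related but shifted response, with far fewer samples
Xt = rng.normal(size=(20, 2))
yt = Xt @ np.array([1.2, 1.8])
w_ft = fit(Xt, yt, w=w_pre, epochs=200)    # fine-tune from pretrained weights
```

Starting from `w_pre` rather than from scratch lets the small target dataset nudge the model toward the new site's response, which is the intuition behind the transfer-learning results above.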