The focus of this book is the uniform Evidence Act (referred to throughout as ‘the Act’ or ‘the Acts’). The Act has not been introduced in Queensland, South Australia or Western Australia, where each state’s Evidence Act and the common law apply. However, the Act is still an important reference guide for those states due to the connection between the common law and the Act. Despite the differences between jurisdictions that have adopted the Act, there is a significant degree of uniformity. Accordingly, in this book, the provisions that are extracted to indicate the rules in relation to the Act come from the Commonwealth Act. Any important jurisdictional differences are separately identified.
This chapter considers the legislative history of evidence law and some fundamental introductory concepts that are used frequently in evidence law and the trial process. This chapter is an introductory overview; specific topics are dealt with in substance in subsequent chapters.
In this chapter, we focus on multilingualism and language contact, moving away from the strong focus on monolingualism characteristic of many traditional approaches to language history, and discussing various onsets, scenarios and outcomes of language contact. We introduce the concepts of borrowing and imposition as central constructs for understanding contact-induced change in language, illustrating and critically examining these ideas in three case studies: the development of loanwords in Canadian French, Germanic substrate effects in the formation of American Englishes, and mixed-language business writing in medieval Britain after the Norman Conquest. Building on these cases, we discuss which elements of language can be transferred and explore possible pathways of social diffusion of borrowings, as well as touching upon various traits and examples of code switching and similar multilingual practices in historical texts. Finally, we evaluate the constructs of pidgin and creole languages, discussing to what extent they can be seen as distinct in structural terms, or whether their distinctiveness arises primarily from the sociohistorical circumstances in which they arose.
This chapter explains credibility evidence under pt 3.7 of the Act and the common law principles governing the admission of credibility evidence. Central to this topic is what constitutes credibility evidence.
In general, credibility evidence is evidence that is directly relevant to the establishment of the credibility of a witness or another person for the ultimate purpose of establishing the facts in issue. As a consequence, credibility evidence is ‘collateral’ with respect to the establishment of the primary facts in issue in a proceeding. From the perspective of relevance, credibility evidence is relevant even though it is collateral. From the perspective of admissibility, credibility evidence is initially excluded (‘primarily’) because it is collateral, but may then be admitted (‘secondarily’) under specific exceptions.
The chapter thus discusses credibility evidence; exclusion of credibility evidence about a witness under the credibility rule; exceptions that permit admission of credibility evidence about a witness; and the admission of credibility evidence about persons other than witnesses.
This chapter deals with a range of matters relating to the facilitation of proof (mostly found in ch 4 of the Act) and ancillary matters (found in ch 5 of the Act). Although these provisions are somewhat technical, many are important in practice, as they allow decisions to be reached without evidence having to be taken on some issues. They also regulate the ways in which certain kinds of information, such as that contained in public documents and registers, may be used. Other aspects of proof, such as the standards of proof applying in civil and criminal proceedings, as well as judicial notice, are dealt with in Chapter 1 of this book. Warnings, although falling within ch 4 of the Act, are discussed together with discretions and limiting directions in Chapter 12 of this book.
This leading textbook introduces students and practitioners to the identification and analysis of animal remains at archaeological sites. The authors use global examples from the Pleistocene era into the present to explain how zooarchaeology allows us to form insights about relationships among people and their natural and social environments, especially site-formation processes, economic strategies, domestication, and paleoenvironments. This new edition reflects the significant technological developments in zooarchaeology that have occurred in the past two decades, notably ancient DNA, proteomics, and isotope geochemistry. Substantially revised to reflect these trends, the volume also highlights novel applications, current issues in the field, the growth of international zooarchaeology, and the increased role of interdisciplinary collaborations. In view of the growing importance of legacy collections, voucher specimens, and access to research materials, it also includes a substantially revised chapter that addresses management of zooarchaeological collections and curation of data.
A comprehensive yet concise history of the English language, this accessible textbook helps those studying the subject to understand the formation of English. It tells the story of the language from its remote ancestry to the present day, especially the effects of globalisation and the spread of, and subsequent changes to, English. Now in its third edition, it has been substantially revised and updated in light of new research, with an extended chapter on World Englishes and a completely updated final chapter, both concentrating on changes to English in the twenty-first century. It makes difficult concepts easy to understand, and the chapters are organised to make the most of the wide range of topics covered, using dozens of familiar texts, including the English of King Alfred, Chaucer, Shakespeare, and Addison. It is accompanied by a website with exercises for each chapter and a range of extra resources.
• To understand the working principle of support vector machine (SVM).
• To comprehend the rules for identification of correct hyperplane.
• To understand the concept of support vectors, maximized margin, positive and negative hyperplanes.
• To apply an SVM classifier for a linear and non-linear dataset.
• To understand the process of mapping data points to higher dimensional space.
• To comprehend the working principle of the SVM Kernel.
• To highlight the applications of SVM.
10.1 Support Vector Machines
Support vector machines (SVMs) are supervised machine learning (ML) models used to solve regression and classification problems; however, they are most widely used for classification. The main goal of an SVM is to segregate an n-dimensional space into labels or classes by defining a decision boundary, or hyperplane. In this chapter, we shall explore SVMs for solving classification problems.
10.1.1 SVM Working Principle
SVM Working Principle | Parteek Bhatia, https://youtu.be/UhzBKrIKPyE
To understand the working principle of the SVM classifier, we will take a standard ML problem where we want a machine to distinguish between a peach and an apple based on their size and color.
Let us suppose the size of the fruit is represented on the X-axis and the color of the fruit is on the Y-axis. The distribution of the dataset of apple and peach is shown in Figure 10.1.
To enable this classification, we must provide the machine with a sample stock of fruits and label each fruit in the stock as an “apple” or a “peach”. For example, suppose we have a labeled dataset of some 100 fruits with corresponding labels, i.e., “apple” or “peach”. When this data is fed into a machine, it will analyze the fruits and train itself. Once training is complete, if a new fruit comes into the stock, the machine will classify it as either an “apple” or a “peach”.
Most traditional ML algorithms would learn by observing the perfect apples and perfect peaches in the stock, i.e., they would train themselves on the ideal apples (those most apple-like in size and color) and the ideal peaches (those most peach-like in size and color). These standard samples are likely to be found in the heart of the stock, which is shown in Figure 10.2.
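To make the idea concrete, here is a minimal sketch of the fruit classifier using scikit-learn's SVC, with synthetic size/color values standing in for the book's dataset; the numbers and variable names are illustrative, not the chapter's own code.

```python
# Minimal sketch of the apple/peach SVM classifier (synthetic stand-in data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Feature 0: size, feature 1: color score; apples cluster higher on both.
apples = rng.normal(loc=[7.0, 8.0], scale=0.6, size=(50, 2))
peaches = rng.normal(loc=[5.0, 5.0], scale=0.6, size=(50, 2))
X = np.vstack([apples, peaches])
y = np.array([0] * 50 + [1] * 50)  # 0 = apple, 1 = peach

clf = SVC(kernel="linear")  # a linear hyperplane, as in this section
clf.fit(X, y)

# Classify a new fruit from its (size, color) features.
new_fruit = [[6.2, 6.8]]
print("apple" if clf.predict(new_fruit)[0] == 0 else "peach")
```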
After careful study of this chapter, students should be able to do the following:
LO1: Identify stress concentration in machine members.
LO2: Explain stress concentration from the theory of elasticity approach.
LO3: Calculate stress concentration due to a circular hole in a plate.
LO4: Analyze stress concentration due to an elliptical hole in a plate.
LO5: Evaluate notch sensitivity.
LO6: Create designs for reducing stress concentration.
9.1 INTRODUCTION [LO1]
The stresses given by relatively simple equations in the strength of materials for structures or machine members are based on an assumed continuity of the elastic medium. However, the presence of a discontinuity destroys this assumed regularity of stress distribution, and a sudden increase in stress occurs in the neighborhood of the discontinuity. In designing machines, it is impossible to avoid abrupt changes in cross-section, holes, notches, shoulders, etc. Abrupt changes in cross-section also occur at the roots of gear teeth and the threads of bolts. Some examples are shown in Figure 9.1.
Any such discontinuity acts as a stress raiser. Discontinuities within the material, such as non-metallic inclusions in metals, casting defects, and residual stresses from welding, may also act as stress raisers. In this chapter, however, we shall consider only the geometric discontinuities that arise from the design of structures or machine parts.
Many theoretical methods and experimental techniques have been developed to determine stress concentrations in different structural and mechanical systems. In order to understand the concept, we shall begin with a plate with a centrally located hole. The plate is subjected to uniformly distributed tensile loading at the ends, as shown in Figure 9.2.
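For reference, the classical result for this configuration is the Kirsch solution, presumably developed later in the chapter; a sketch of the key formulas:

```latex
% Kirsch solution: infinite plate with a central circular hole of radius a,
% remote uniaxial tension \sigma along x. Tangential stress at the hole
% boundary (r = a):
\[
  \sigma_{\theta\theta}(a,\theta) = \sigma\,(1 - 2\cos 2\theta)
\]
% Maximum at \theta = \pm 90^{\circ}, i.e., at the sides of the hole
% transverse to the load:
\[
  \sigma_{\max} = 3\sigma
  \quad\Longrightarrow\quad
  K_t = \frac{\sigma_{\max}}{\sigma} = 3
\]
```

The factor of 3 is independent of the hole size for an infinite plate, which is why even a small hole can govern the design of a tension member.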
• To define machine learning (ML) and discuss its applications.
• To learn the differences between traditional programming and ML.
• To understand the importance of labeled and unlabeled data and their various uses in ML.
• To understand the working principles of supervised, unsupervised, and reinforcement learning.
• To understand the key terms like data science, data mining, artificial intelligence, and deep learning.
1.1 Introduction
In today’s data-driven world, information flows through the digital landscape like an untapped river of potential. Within this vast data stream lies the key to unlocking a new era of discovery and innovation. Machine learning (ML), a revolutionary field, acts as the gateway to this wealth of opportunities. With its ability to uncover patterns, make predictive insights, and adapt to evolving information, ML has transformed industries, redefined technology, and opened the door to limitless possibilities. This book is your gateway to the fascinating realm of ML—a journey that empowers you to harness the power of data, enabling you to build intelligent systems, make informed decisions, and explore the boundless possibilities of the digital age.
ML has emerged as the dominant approach for solving problems in the modern world, and its wide-ranging applications have made it an integral part of our lives. From search engines to social networking sites, everything is powered by ML algorithms. Your favorite search engine uses ML algorithms to deliver appropriate search results. Smart home assistants like Alexa and Siri use ML to serve us better. ML influences our day-to-day activities so pervasively that we often do not even realize it. Online shopping sites like Amazon, Flipkart, and Myntra use ML to recommend products. Facebook uses ML to curate our feeds, and Netflix and YouTube use ML to recommend videos based on our interests.
Data is growing exponentially with the Internet and smartphones, and ML has made this data more usable and meaningful. Social media, entertainment, travel, mining, medicine, bioinformatics: virtually any field you could name uses ML in some form.
To understand the role of ML in the modern world, let us first discuss the applications of ML.
After careful study of this chapter, students should be able to do the following:
LO1: Identify the difference between engineering mechanics and the theory of elasticity approach.
LO2: Explain yielding and brittle fracture.
LO3: Describe the stress–strain behavior of common engineering materials.
LO4: Compare hardness, ductility, malleability, toughness, and creep.
LO5: Explain different hardness measurement techniques.
1.1 INTRODUCTION [LO1]
Mechanics is one of the oldest physical sciences, dating back to the times of Aristotle and Archimedes. The subject deals with force, displacement, and motion. The concepts of mechanics have been used to solve many mechanical and structural engineering problems through the ages. Because of its intriguing nature, many great scientists including Sir Isaac Newton and Albert Einstein delved into it for solving intricate problems in their own fields.
Engineering mechanics and mechanics of materials developed over centuries with a few experiment-based postulates and assumptions, particularly to solve engineering problems in designing machines and structural parts. The problems are many and varied. In most cases, however, the requirement is to ensure sufficient strength, stiffness, and stability of the components, and eventually of the whole machine or structure. To do this, we first analyze the forces and stresses at different points in a member, and then select materials of known strength and deformation behavior to withstand the stress distribution within tolerable deformation and stability limits. The methodology has now developed to the point of computer codes that take into account the full-field stress, strain, and deformation behavior, along with material characteristics, to predict the probability of failure of a component at its weakest point. Inputs from the theory of elasticity and plasticity, mathematical and computational techniques, materials science, and many other branches of science are needed to develop such sophisticated codes.
The theory of elasticity developed too, but as a topic in applied mathematics, and engineers took very little notice of it until recently, when critical analyses of components in high-speed machinery, vehicles, aerospace technology, and many other applications became necessary. The types of problems considered in elementary strength of materials and in the theory of elasticity are similar, but the approaches are different. The strength of materials approach is generally simpler: the emphasis is on finding practical solutions to a problem with simplifying assumptions.
After careful study of this chapter, students should be able to do the following:
LO1: Describe constitutive equations.
LO2: Relate the elastic constants.
LO3: Recognize boundary value problems.
LO4: Explain St. Venant's principle.
LO5: Describe the principle of superposition.
LO6: Illustrate the uniqueness theorem.
LO7: Develop the stress function approach.
4.1 CONSTITUTIVE EQUATIONS [LO1]
So far, we have discussed the strain and stress analysis in detail. In this chapter, we shall link the stress and strain by considering the material properties in order to completely describe the elastic, plastic, elasto-plastic, visco-elastic, or other such deformation characteristics of solids. These are known as constitutive equations, or in simpler terms the stress–strain relations. There are endless varieties of materials and loading conditions, and therefore development of a general form of constitutive equation may be challenging. Here we mainly consider linear elastic solids along with their mechanical properties and deformation behavior.
The fundamental relation between stress and strain was first given by Robert Hooke in 1676 in the most simplified manner: “Force varies as the stretch”. This implies a load–deflection relation that was later interpreted as a stress–strain relation. Following this, we can write P = kδ, where P is the force, δ is the stretch or elongation, and k is the spring constant. For linear elastic materials, this can also be written as σ = Eε, where σ is the stress, ε is the strain, and E is the modulus of elasticity. For nonlinear elasticity, we may write, in a simplistic manner, σ = Eε^n, where n ≠ 1.
Hooke's Law based on this fundamental relation is given as the stress–strain relation, and in its most general form, stresses are functions of all the strain components as shown in equation (4.1.1).
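Equation (4.1.1) itself is not reproduced in this excerpt; as a reference sketch, the generalized form of Hooke's law is conventionally written with a fourth-order stiffness tensor, reducing to a two-constant form for an isotropic solid:

```latex
% Generalized Hooke's law: each stress component is a linear function of
% all the strain components, via a fourth-order stiffness tensor.
\[
  \sigma_{ij} = C_{ijkl}\,\varepsilon_{kl}, \qquad i,j,k,l \in \{1,2,3\}
\]
% For an isotropic linear elastic solid, this reduces to two constants,
% the Lam\'{e} constants \lambda and \mu:
\[
  \sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij}
\]
```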
• To understand the concept of artificial neural network (ANN).
• To comprehend the working of the human brain as an inspiration for the development of neural networks.
• To understand the mapping of human brain neurons to an ANN.
• To understand the working of ANN with case studies.
• To understand the role of weights in building ANN.
• To perform forward and backward propagation to train the neural networks.
• To understand different activation functions like the threshold function, sigmoid function, rectified linear unit function, and hyperbolic tangent function (sketched in code after this list).
• To find the optimized value of weights for minimizing the cost function by using the gradient descent approach and stochastic gradient descent algorithm.
• To understand the concept of the mini-batch method.
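As a preview of the activation functions and the gradient descent step listed above, here is a minimal NumPy sketch; the function names and toy numbers are our own illustration, not the book's code.

```python
# Minimal NumPy sketch of common activation functions and one gradient
# descent step; illustrative only.
import numpy as np

def threshold(x):   # step function: 1 where x >= 0, else 0
    return (x >= 0).astype(float)

def sigmoid(x):     # squashes input to (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):        # rectified linear unit: max(0, x)
    return np.maximum(0.0, x)

def tanh(x):        # hyperbolic tangent, output in (-1, 1)
    return np.tanh(x)

# One gradient descent step on a single weight w for the squared-error
# cost (sigmoid(w * x) - y)^2, using the chain rule for the gradient.
x, y, w, lr = 2.0, 1.0, 0.1, 0.5
pred = sigmoid(w * x)
grad = 2 * (pred - y) * pred * (1 - pred) * x  # d(cost)/dw
w -= lr * grad                                 # move toward lower cost
print(f"updated weight: {w:.4f}")
```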
16.1 Introduction to Artificial Neural Network
Neural networks and deep learning are the buzzwords of modern-day computer science. And if you think these are the latest entrants in the field, you probably have a misconception. Neural networks have been around for quite some time; they have only now started picking up, making a huge positive impact on computer science.
Artificial neural networks (ANNs) were invented in the 1960s and 1970s. They became a part of common tech talk, and people started thinking that this machine learning (ML) technique would solve all the complex problems that were challenging researchers at the time. But soon the hopes and expectations died off over the next decade.
The decline could not be attributed to loopholes in neural networks themselves; the major reason was the “technology” itself. The technology back then was not up to the standard needed to support neural networks, which require a lot of data for training and huge computational resources for building the model. At that time, both data and computing power were scarce. Hence, neural networks remained on paper rather than taking center stage in machines solving real-world problems.
Later, at the beginning of the 21st century, we saw many improvements in storage techniques, resulting in a reduced cost per gigabyte of storage, and humanity witnessed a huge rise in big data due to the Internet boom and smartphones.
Heat, like gravity, penetrates every substance of the universe, its rays occupy all parts of space.
Jean-Baptiste-Joseph Fourier
Learning Outcomes
After reading this chapter, the reader will be able to
Understand the meaning of three processes of heat flow: conduction, convection, and radiation
Know about thermal conductivity, diffusivity, and steady-state condition of a thermal conductor
Derive Fourier's one-dimensional heat flow equation and solve it in the steady state
Derive the mathematical expression for the temperature distribution in a lagged bar
Derive the amount of heat flow in a cylindrical and a spherical thermal conductor
Solve numerical problems and multiple choice questions on the process of conduction of heat
6.1 Introduction
Heat is the thermal energy transferred between substances maintained at different temperatures. This energy is always transferred from the hotter object (maintained at a higher temperature) to the colder one (maintained at a lower temperature). Heat arises from the movement of atoms and molecules, which are continuously moving around, hitting each other and other objects. This motion is faster for molecules with a larger amount of energy than for those with a smaller amount, which causes the former to carry more heat. The transfer of heat continues until both objects attain the same temperature, that is, the same average molecular speed. This transfer depends upon the nature of the material, determined by a parameter known as the thermal conductivity or coefficient of thermal conduction. This parameter helps us to understand the concept of transfer of thermal energy from a hotter to a colder body, to differentiate objects in terms of this thermal property, and to determine the amount of heat conducted from the hotter to the colder region of an object. The transfer of thermal energy occurs in several situations:
When there exists a difference in temperature between an object and its surroundings,
When there exists a difference in temperature between two objects in contact with each other, and
When there exists a temperature gradient within the same object.
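The quantitative statement behind thermal conductivity, derived later in the chapter, is Fourier's law of heat conduction; as a reference sketch in one dimension:

```latex
% Fourier's law (one-dimensional form): the rate of heat flow Q through
% cross-sectional area A is proportional to the temperature gradient,
% flowing from hot to cold (hence the minus sign).
\[
  Q = -\,k\,A\,\frac{dT}{dx}
\]
% k is the thermal conductivity. In the unsteady state this leads to the
% one-dimensional heat equation with thermal diffusivity \alpha:
\[
  \frac{\partial T}{\partial t} = \alpha\,\frac{\partial^{2} T}{\partial x^{2}},
  \qquad \alpha = \frac{k}{\rho c}
\]
% \rho is the density and c the specific heat capacity of the material.
```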
• To implement the k-means clustering algorithm in Python.
• To determine the ideal number of clusters by implementing the corresponding code.
• To understand how to visualize clusters using plots.
• To create the dendrogram and find the optimal number of clusters for agglomerative hierarchical clustering.
• To compare results of k-means clustering with agglomerative hierarchical clustering.
• To implement clustering through various case studies.
13.1 Implementation of k-means Clustering and Hierarchical Clustering
In the previous chapter, we discussed various clustering algorithms. We learned that clustering algorithms are broadly classified into partitioning, hierarchical, and density-based methods. The k-means clustering algorithm follows the partitioning method; agglomerative and divisive algorithms follow the hierarchical method; and DBSCAN is based on the density-based method.
In this chapter, we will implement each of these algorithms through various case studies, following a step-by-step approach. You are advised to perform all these steps yourself on the datasets mentioned in this chapter.
The k-means algorithm is a partitioning method and an unsupervised machine learning (ML) algorithm used to identify clusters of data items in a dataset. It is one of the most prominent ML algorithms, and its implementation in Python is quite straightforward. This chapter will consider three case studies: the mall customers shopping dataset, the U.S. arrests dataset, and the popular Iris dataset. Through these case studies, we will understand the significance of k-means clustering techniques and implement them in Python. Along with clustering the data items, we will also discuss ways to find the optimal number of clusters. To compare the results of the k-means algorithm, we will also implement hierarchical clustering for these problems.
We will kick-start the implementation of the k-means algorithm in Spyder IDE using the following steps.
Step 1: Importing the libraries and the dataset—The dataset for the respective case study would be downloaded, and then the required libraries would be imported.
Step 2: Finding the optimal number of clusters—We will find the optimal number of clusters by the elbow method for the given dataset.
Step 3: Fitting k-means to the dataset—A k-means model will be prepared by training the model over the acquired dataset.
Step 4: Visualizing the clusters—The clusters formed by the k-means model would then be visualized in the form of scatter plots, as sketched in the code below.
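Here is a condensed sketch of these four steps using scikit-learn and the built-in Iris data (one of this chapter's case studies); the chapter's own listings, e.g., for the mall dataset, may differ in their details.

```python
# Condensed sketch of Steps 1-4 with scikit-learn's built-in Iris data.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

# Step 1: import the libraries and the dataset.
X = load_iris().data

# Step 2: elbow method -- plot within-cluster sum of squares (inertia)
# against k and look for the "elbow" where the curve flattens.
wcss = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    wcss.append(km.inertia_)
plt.plot(range(1, 11), wcss, marker="o")
plt.xlabel("Number of clusters k")
plt.ylabel("WCSS (inertia)")
plt.show()

# Step 3: fit k-means with the chosen k (3 for Iris).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

# Step 4: visualize the clusters on the first two features.
plt.scatter(X[:, 0], X[:, 1], c=labels)
centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c="red", marker="x")
plt.xlabel("Sepal length (cm)")
plt.ylabel("Sepal width (cm)")
plt.show()
```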