This chapter takes a relatively broad approach to defences, covering a range of factors that might serve to exculpate a defendant who might otherwise appear to have committed an offence. The defences examined here are arranged into two imperfectly realised categories: ‘mental state defences’ and ‘self-help defences’. The ‘mental state defences’ are so categorised because they depend, to a greater or lesser extent, on the contention that the accused did not possess the requisite mens rea to commit the offence. In assessing whether an accused may rely on a defence, a number of subjective and objective elements must be applied and analysed. It is important to understand that the considerations informing the development of each defence are often very different and sometimes controversial. The groupings are far from perfectly realised, and the rationales and doctrines of the individual defences may manifest as many dissimilarities as similarities. It is hoped that the arrangement of the material in this chapter will aid understanding by drawing comparisons across different aspects of the criminal law.
The crimes of murder and manslaughter, as well as any statutorily created offences involving the death of a person, such as dangerous driving causing death or assaults causing death, are homicides in the sense of unlawful killings. Homicides may, however, be lawful insofar as they are justified by, for example, self-defence or the defence of another person, or excused as a result of duress. They may be the consequence of an accident, or an accused may not be criminally responsible because they suffer from a mental illness. This chapter will explore and analyse the crimes of murder, manslaughter and various statutory crimes involving particular types of conduct which cause the death of another person, including assaults, driving vehicles and administering drugs or other acts to hasten death. Murder is the most serious form of unlawful homicide and, with culpability rooted largely in the intentional nature of the killing, it attracts severe punishment up to a maximum of imprisonment for life. This penalty may be mandatory, and in some jurisdictions or in specific circumstances may mean imprisonment for the term of the offender’s natural life.
Contemporary Australian Tort Law Cases and Materials is an accessible textbook for students new to tort law. It scaffolds student learning by introducing the principles of tort law and demonstrating their application via case examples and key legislation. The book takes a contemporary approach to issues in Australian tort law, with a section on feminist critiques of law reform and insight into the Stolen Generations litigation. It harnesses principles of authentic assessment by offering review questions, critical thinking questions, discussion topics, comparison questions and practice problems. The annotation of the cases to highlight key principles further consolidates the book as a student-centric and learner-friendly resource. This unique approach will assist student comprehension of a range of torts and their defences, including negligence, trespass, nuisance, defamation, breach of statutory duty, and misfeasance in public office. The book also addresses vicarious and concurrent liability, remedies (including damages), and Australian statutory compensation schemes.
The first of its kind, this textbook provides a comprehensive introduction to the study of semantics and pragmatics from an interactionist perspective, grounded entirely on empirical methods of social/behavioural science. Designed for advanced undergraduate students, beginning graduate students, and practicing researchers, it responds to the growing requirement that rather than relying on their own native speaker intuitions, students gather and analyze semantic data in a broad range of research contexts, from fieldwork to psycholinguistic and child language research. Practical in its approach, it provides the tools that the advanced student needs in order to 'do' this semantic research, in both field and laboratory contexts. This is facilitated by an innovative view of meaning that combines reference and mental representations as aspects of communicative interaction. It is accompanied by a glossary of terms and a range of exercises for students, along with model answers to the exercises for instructors.
Designed for undergraduate students of computer science, mathematics, and engineering, this book provides the tools and understanding needed to master graph theory and algorithms. It offers a strong theoretical foundation, detailed pseudocodes, and a range of real-world and illustrative examples to bridge the gap between abstract concepts and practical applications. Clear explanations and chapter-wise exercises support ease of comprehension for learners. The text begins with the basic properties of graphs and progresses to topics such as trees, connectivity, and distances in graphs. It also covers Eulerian and Hamiltonian graphs, matchings, planar graphs, and graph colouring. The book concludes with discussions on independent sets, the Ramsey theorem, directed graphs and networks. Concepts are introduced in a structured manner, with appropriate context and support from mathematical language and diagrams. Algorithms are explained through rules, reasoning, pseudocode, and relevant examples.
This book provides a clear and accessible introduction to ring theory for undergraduate students. Aligned with standard curricula, it simplifies abstract concepts through structured explanations, practical examples, and real-world applications. Ideal for both students and instructors, it serves as a valuable resource for mastering fundamental concepts in ring theory with ease. The text begins with an introduction to rings and goes on to cover subrings, integral domains, ideals, and factor rings. It also discusses ring homomorphisms and polynomial rings. The book concludes with topics such as polynomial factorization and divisibility in integral domains. Each chapter is supplemented with solved examples to foster a deeper understanding of the subject. A set of practice questions is also provided to sharpen problem-solving skills.
Understanding Modern Warfare has established itself as a leading text in professional military education and undergraduate teaching. This third edition has been revised throughout to reflect dramatic changes during the past decade. Introducing three brand-new chapters, this updated volume provides in-depth analysis of the most pertinent issues of the 2020s and beyond, including cyber warfare, information activities, hybrid and grey zone warfare, multi-domain operations and recent conflicts in Ukraine, Gaza, and Syria. It also includes a range of features to maximise its value as a learning tool: a structure designed to guide students through key strategic principles; key questions and annotated reading guides for deeper understanding; text boxes highlighting critical thinkers and operational concepts; and a glossary explaining key terms. Providing debate-driven analysis that encourages students to develop a balanced perspective, Understanding Modern Warfare remains essential reading both for officers and for students of international relations more broadly.
This engaging textbook provides a unique introduction to language and society, by showing students how to tap into the linguistic resources of their communities. Assuming no prior experience of linguistics, it begins with chapters on introductory methods and ethics, creating a foundation for students to think of themselves as linguists. It then offers students the sociolinguistic tools they need to look both locally and globally at language and the social issues with which it interacts. The book is illustrated throughout with examples from 98 distinct languages, enabling students to connect their local experiences with global ones, and each chapter ends with classroom and community-focused exercises, to help them discover the underlying rules that shape language use in their own lives. Students will gain a greater appreciation for, and understanding of, the linguistically diverse and culturally complex sociolinguistic issues around the world, and how language interacts with multiple domains of society.
The field of materials management has its own significance in the industrial and business environment, incorporating the procurement as well as the production of items. In this context, certain factors play a very important role, and a detailed understanding of them is necessary for knowing the implications of their variation, among other issues. This book on Materials Management provides a good understanding of the relevant conceptual topics and the various parameters involved in the analysis of inventory situations. Several numerical problems, practical examples and cases are explained in relevant situations, along with the different industrial and managerial aspects, making it a useful resource for students as well as instructors. It will also be helpful in generating various projects in engineering and allied management areas.
Hesiod was and is regarded as one of the founding figures of Greek literature and culture, alongside Homer, and his Theogony is the first extant attempt to give an account of the whole, of the gods and of the cosmos: how it came to be, from what, and how it achieved its present state. Strong parallels can be identified between it and various myths and texts from the ancient Near East. Moreover, it was highly influential on subsequent Greek and Latin literature and philosophy. This, the first modern commentary in over half a century, includes all the necessary linguistic, textual, metrical, and literary material that will allow students to understand and enjoy the Theogony and its place in the literary tradition. It is intended primarily for advanced undergraduates and graduate students but will also be valuable to scholars of Greek literature and thought.
• To understand the working principle of support vector machine (SVM).
• To comprehend the rules for identifying the correct hyperplane.
• To understand the concepts of support vectors, maximized margin, and positive and negative hyperplanes.
• To apply an SVM classifier to linear and non-linear datasets.
• To understand the process of mapping data points to higher dimensional space.
• To comprehend the working principle of the SVM Kernel.
• To highlight the applications of SVM.
10.1 Support Vector Machines
Support vector machines (SVMs) are supervised machine learning (ML) models used to solve both regression and classification problems, though they are most widely used for classification. The main goal of an SVM is to segregate the n-dimensional space into labels or classes by defining a decision boundary, or hyperplane. In this chapter, we shall explore SVMs for solving classification problems.
10.1.1 SVM Working Principle
SVM Working Principle | Parteek Bhatia, https://youtu.be/UhzBKrIKPyE
To understand the working principle of the SVM classifier, we will take a standard ML problem where we want a machine to distinguish between a peach and an apple based on their size and color.
Let us suppose the size of the fruit is represented on the X-axis and the color of the fruit is on the Y-axis. The distribution of the dataset of apple and peach is shown in Figure 10.1.
To make this classification possible, we must provide the machine with a sample stock of fruits, labeling each fruit in the stock as an “apple” or a “peach”. For example, suppose we have a labeled dataset of some 100 fruits with corresponding labels, i.e., “apple” or “peach”. When this data is fed into a machine, it will analyze these fruits and train itself. Once the training is completed, if a new fruit comes into the stock, the machine will classify it as either an “apple” or a “peach”.
Most traditional ML algorithms would learn by observing the perfect apples and perfect peaches in the stock, i.e., they train themselves on the ideal apples of the stock (apples that are very typical of apples in terms of their size and color) and the ideal peaches of the stock (peaches that are very typical of peaches in terms of their size and color). These standard samples are likely to be found at the heart of the stock. The heart of the stock is shown in Figure 10.2.
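The apple-versus-peach setup above can be sketched in a few lines with scikit-learn's `SVC`. This is an illustrative sketch, not the book's own listing: the size and color values below are synthetic numbers invented for the example.

```python
# Minimal sketch of the apple-vs-peach SVM classifier (illustrative only).
# Features: size on the X-axis, color score on the Y-axis (synthetic values).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Two hypothetical fruit clusters: apples are larger and redder than peaches.
apples = rng.normal(loc=[7.0, 0.8], scale=0.4, size=(50, 2))
peaches = rng.normal(loc=[5.0, 0.3], scale=0.4, size=(50, 2))

X = np.vstack([apples, peaches])
y = np.array(["apple"] * 50 + ["peach"] * 50)

# A linear hyperplane is enough to separate these two clusters.
clf = SVC(kernel="linear")
clf.fit(X, y)

# Classify a new fruit that arrives in the stock.
print(clf.predict([[6.8, 0.75]]))  # prints ['apple']
```

Once trained, the model assigns any new (size, color) point to whichever side of the learned hyperplane it falls on, just as the machine in the example decides between “apple” and “peach”.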
The present chapter discusses the linguistic representation of, and reference to, individuals. Individuals were introduced in Chapters 2 and 3 as particulars – entities individuated by time and space – alongside events. The overarching question guiding this chapter is how to study the domain of reference to individuals in particular languages. The mention of particular languages in this formulation targets language documentation, description, and typology. However, I believe that the methods and tools discussed in this chapter will also be of use to students of child language, psycholinguists, and researchers engaging in corpus-based studies. The discussion begins by examining the types of concepts that populate the nominal domain (Sections 8.1–8.3). It then pivots to surveying the role of reference to individuals in the grammars of languages (Sections 8.4 and 8.5) and crosslinguistic variation in the lexicalization of the domain (Section 8.6) and concludes with a review of tools and methods for the exploration of the nominal domain (Section 8.7).
• To define machine learning (ML) and discuss its applications.
• To learn the differences between traditional programming and ML.
• To understand the importance of labeled and unlabeled data and their various uses in ML.
• To understand the working principles of supervised, unsupervised, and reinforcement learning.
• To understand key terms like data science, data mining, artificial intelligence, and deep learning.
1.1 Introduction
In today’s data-driven world, information flows through the digital landscape like an untapped river of potential. Within this vast data stream lies the key to unlocking a new era of discovery and innovation. Machine learning (ML), a revolutionary field, acts as the gateway to this wealth of opportunities. With its ability to uncover patterns, make predictive insights, and adapt to evolving information, ML has transformed industries, redefined technology, and opened the door to limitless possibilities. This book is your gateway to the fascinating realm of ML—a journey that empowers you to harness the power of data, enabling you to build intelligent systems, make informed decisions, and explore the boundless possibilities of the digital age.
ML has emerged as the dominant approach for solving problems in the modern world, and its wide-ranging applications have made it an integral part of our lives. From search engines to social networking sites, everything is powered by ML algorithms. Your favorite search engine uses ML algorithms to get you the appropriate search results. Smart home assistants like Alexa and Siri use ML to serve us better. The influence of ML on our day-to-day activities is so pervasive that we often do not even realize it. Online shopping sites like Amazon, Flipkart, and Myntra use ML to recommend products. Facebook uses ML to curate our feed. Netflix and YouTube use ML to recommend videos based on our interests.
Data is growing exponentially with the Internet and smartphones, and ML has just made this data more usable and meaningful. Social media, entertainment, travel, mining, medicine, bioinformatics, or any field you could name uses ML in some form.
To understand the role of ML in the modern world, let us first discuss the applications of ML.
• To understand the concept of artificial neural network (ANN).
• To comprehend the working of the human brain as an inspiration for the development of neural networks.
• To understand the mapping of human brain neurons to an ANN.
• To understand the working of ANN with case studies.
• To understand the role of weights in building an ANN.
• To perform forward and backward propagation to train neural networks.
• To understand different activation functions, such as the threshold, sigmoid, rectified linear unit (ReLU), and hyperbolic tangent functions.
• To find the optimized value of weights for minimizing the cost function by using the gradient descent approach and stochastic gradient descent algorithm.
• To understand the concept of the mini-batch method.
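As a preview of the activation functions named in the objectives above, the following NumPy sketch implements all four. It is an illustrative snippet, not the chapter's own code.

```python
# Illustrative NumPy implementations of the four activation functions
# discussed in this chapter: threshold, sigmoid, ReLU, and tanh.
import numpy as np

def threshold(x):
    # Step function: outputs 1 when the input is non-negative, else 0.
    return np.where(x >= 0, 1.0, 0.0)

def sigmoid(x):
    # Squashes any real input into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Rectified linear unit: passes positive inputs, zeroes out negatives.
    return np.maximum(0.0, x)

def tanh(x):
    # Hyperbolic tangent: squashes inputs into (-1, 1).
    return np.tanh(x)

z = np.array([-2.0, 0.0, 2.0])
print(relu(z))       # prints [0. 0. 2.]
print(threshold(z))  # prints [0. 1. 1.]
```

Each function maps a neuron's weighted input sum to its output; the choice of function controls the range and smoothness of that output, which matters later when gradients are computed during backpropagation.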
16.1 Introduction to Artificial Neural Network
Neural networks and deep learning are the buzzwords of modern-day computer science. And if you think that these are the latest entrants to the field, you probably have a misconception. Neural networks have been around for quite some time; they have only now started picking up, making a huge positive impact on computer science.
The artificial neural network (ANN) was invented in the 1960s and 1970s. It became a part of common tech talk, and people started thinking that this machine learning (ML) technique would solve all the complex problems that were challenging researchers at the time. But soon, those hopes and expectations died off over the following decade.
The decline could not be attributed to loopholes in neural networks themselves; the major reason was the technology of the day. The technology back then was not up to the standard needed to facilitate neural networks, which require a lot of data for training and huge computational resources for building the model. During that time, both data and computing power were scarce. Hence, neural networks remained on paper rather than taking center stage in solving real-world problems.
Later, at the beginning of the 21st century, improvements in storage techniques reduced the cost per gigabyte of storage, and humanity witnessed a huge rise in big data due to the Internet boom and smartphones.
• To implement the k-means clustering algorithm in Python.
• To determine the ideal number of clusters and implement the corresponding code.
• To understand how to visualize clusters using plots.
• To create the dendrogram and find the optimal number of clusters for agglomerative hierarchical clustering.
• To compare results of k-means clustering with agglomerative hierarchical clustering.
• To implement clustering through various case studies.
13.1 Implementation of k-means Clustering and Hierarchical Clustering
In the previous chapter, we discussed various clustering algorithms. We learned that clustering algorithms are broadly classified into partitioning methods, hierarchical methods, and density-based methods. The k-means clustering algorithm follows the partitioning method; the agglomerative and divisive algorithms follow the hierarchical method; and DBSCAN is a density-based clustering method.
In this chapter, we will implement each of these algorithms through various case studies, following a step-by-step approach. You are advised to perform all these steps on your own on the datasets mentioned in this chapter.
The k-means algorithm is a partitioning method and an unsupervised machine learning (ML) algorithm used to identify clusters of data items in a dataset. It is one of the most prominent ML algorithms, and its implementation in Python is quite straightforward. This chapter will consider three case studies: the mall customers dataset, the U.S. arrests dataset, and the popular Iris dataset. Through these case studies, we will understand the significance of the k-means clustering technique and implement it in Python. Along with clustering the data items, we will also discuss ways to find the optimal number of clusters. To compare the results of the k-means algorithm, we will also implement hierarchical clustering for these problems.
We will kick-start the implementation of the k-means algorithm in Spyder IDE using the following steps.
Step 1: Importing the libraries and the dataset—The dataset for the respective case study would be downloaded, and then the required libraries would be imported.
Step 2: Finding the optimal number of clusters—We will find the optimal number of clusters by the elbow method for the given dataset.
Step 3: Fitting k-means to the dataset—A k-means model will be prepared by training the model over the acquired dataset.
Step 4: Visualizing the clusters—The clusters formed by the k-means model would then be visualized in the form of scatter plots.
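The four steps above can be condensed into a short scikit-learn sketch. This is an illustration on synthetic data, not the book's own listing: the blobs below stand in for the mall, U.S. arrests, and Iris case-study datasets worked through in this chapter.

```python
# Steps 1-4 of the k-means workflow, sketched on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Step 1: import libraries and the dataset (synthetic blobs stand in
# for a downloaded case-study dataset).
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# Step 2: elbow method - compute the within-cluster sum of squares
# (WCSS, KMeans.inertia_) for k = 1..10 and look for the "elbow".
wcss = [KMeans(n_clusters=k, n_init=10, random_state=42).fit(X).inertia_
        for k in range(1, 11)]

# Step 3: fit k-means with the chosen number of clusters (k = 3 here).
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
labels = km.labels_

# Step 4: visualize the clusters as a scatter plot (uncomment to display).
# import matplotlib.pyplot as plt
# plt.scatter(X[:, 0], X[:, 1], c=labels)
# plt.scatter(*km.cluster_centers_.T, marker="x", color="red")
# plt.show()
print(len(set(labels)))  # prints 3
```

The elbow appears where adding another cluster stops reducing the WCSS sharply; for three well-separated blobs, that happens at k = 3.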
The overarching question of the book is: What is the scientific process of empirical semantic research? Each chapter addresses a more specific question that must be answered in order to answer this overarching question. The first chapter looks into the nature of meaning (Sections 1.1–1.3) and the requirements scientific theories of meaning must meet (Section 1.4). In Section 1.2, coordination emerges as the central problem of communication – the “synchronization” of embodied minds for joint action. This is made possible by the conventionalization of coordination devices, or in other words, signs. I discuss the types of signs that occur in language (anticipating a more comprehensive review in Section 2.1) and introduce the two dimensions of meaning present when speakers coordinate on the world around them: reference to the world and cognitive representation of it (Section 1.3).