Machine learning (ML) is one of the fastest-growing fields of knowledge transforming the world. It is a dynamic field with the potential to reshape the way we interact with technology and with the world around us. In the book Machine Learning with Python: Principles and Practical Techniques, author Parteek Bhatia offers an insightful and hands-on approach to demystifying this complex subject.
In a world where ML plays an increasingly central role in our lives, understanding its principles and practical applications is essential. This book explains numerous algorithms in detail, assuming no prior knowledge on the reader's part. All of these algorithms are widely used in industry and in research. The book is an invaluable resource for beginners and for those looking to deepen their knowledge of the field.
Parteek Bhatia takes the reader on an engaging journey, starting from the basics and gradually building up to more advanced concepts. What sets this book apart is its focus on practicality. The book covers various ML techniques, from data pre-processing and regression to classification, clustering, and association mining. Each concept is illuminated with detailed Python implementations, allowing you to see firsthand how these algorithms work and how they can be applied to real-world problems. It also delves into more advanced topics like artificial neural networks, deep learning, convolutional neural networks, recurrent neural networks, and genetic algorithms. This comprehensive approach equips you with the tools and knowledge to tackle complex ML challenges.
Parteek Bhatia's passion for the subject and dedication to making it accessible shine through every chapter. This book is not just a collection of information; it is a learning adventure. It takes the reader from beginner to confident practitioner, ready to take on the exciting and ever-evolving world of ML.
Whether you are a student, a professional, or simply someone eager to explore the possibilities of ML, this book will be your trusted guide. Machine Learning with Python is your gateway to unlocking the potential of this fascinating field.
I commend Parteek Bhatia for his commitment to creating this educational masterpiece. As you embark on your journey through the captivating realm of ML, I encourage you to embrace the concepts, put them into practice, and let your curiosity and creativity flourish.
• To understand the differences between classification and regression techniques.
• To define classification and understand the types of classification.
• To understand the working principles of various classification techniques.
• To comprehend the decision tree classifier.
• To know the importance of information gain and the Gini index in the decision tree classifier.
• To comprehend the random forest algorithm.
• To discuss the working of naive Bayes classification.
• To comprehend the working principle of the k-NN classifier.
• To comprehend the working of the logistic regression classifier.
• To understand classifier quality metrics such as the confusion matrix, precision, recall, and F-measure.
9.1 Introduction to Classification
We rely on machine learning (ML) to make critical decisions and predictions in the modern world, so it is important to understand how computers use ML to make these predictions. The predictions made by ML models generally fall into two types, i.e., classification and regression. ML models use various techniques to predict the outcome of an event by analyzing already available data. As machines learn from data, the type of training or input data plays a crucial role in determining the machine's ability to make accurate decisions and predictions. Usually, this data is available in two forms, i.e., labeled and unlabeled. In labeled data, we know the value of the output attribute for the sample input attributes, while in unlabeled data, we do not have the value of the output attribute.
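As a minimal sketch of this distinction (illustrative only; the feature names and values are invented, not taken from the book), consider a customer-purchase task:

# Labeled data: each input has a known value for the output attribute.
labeled_data = [
    ([25, 120], "buys"),          # ([age, monthly_spend], known outcome)
    ([40, 300], "does not buy"),
]

# Unlabeled data: the same input attributes, but no output attribute.
unlabeled_data = [
    [30, 200],
    [55, 500],
]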
For analyzing labeled data, supervised learning is used. Classification and regression are the two types of supervised learning techniques used to predict the outcome of an unknown instance by analyzing the available labeled input instances. Classification is applied when the outcome is finite or discrete, while regression is applied when the outcome is infinite or continuous. For example, a classification model is used to predict whether a customer will buy a product or not; here the outcome is finite, i.e., buying or not buying. A regression model, in contrast, would predict the number of products that the customer may buy; here the outcome is continuous, since a quantity can take any of a continuous range of values.
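The following sketch contrasts the two on the same labeled data using scikit-learn. It is a hedged illustration, not an example from the book: the features, target values, and choice of logistic and linear regression models are all assumptions made for demonstration.

from sklearn.linear_model import LinearRegression, LogisticRegression

# Labeled training data: [age, monthly_spend] for five customers (invented values).
X = [[25, 120], [40, 300], [35, 80], [50, 450], [23, 60]]

# Classification: the outcome is discrete (1 = buys, 0 = does not buy).
y_buy = [0, 1, 0, 1, 0]
classifier = LogisticRegression().fit(X, y_buy)
print(classifier.predict([[30, 200]]))   # a discrete label, e.g., [1]

# Regression: the outcome is continuous (number of products bought).
y_quantity = [0.0, 3.0, 1.0, 5.0, 0.0]
regressor = LinearRegression().fit(X, y_quantity)
print(regressor.predict([[30, 200]]))    # a continuous quantity, e.g., [1.8]

Both models are trained on the same inputs; only the nature of the output attribute, discrete versus continuous, determines which technique applies.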
The textbook is primarily written for senior undergraduate and postgraduate students in computer science and engineering and in electrical engineering. However, as the subject covers various interdisciplinary areas, the book is also expected to be of interest to a larger readership in science and engineering. It has a comprehensive and balanced coverage of the theory and applications of computer vision, with a textbook approach providing worked-out examples and exercises. It covers the theory and applications of some relatively recent technological advancements, such as colour processing, deep learning techniques for processing images and videos, document processing, biometry, and content-based image retrieval. It also delves into the theory and processing of non-optical imaging systems, such as range or depth imaging, medical imaging, and remote sensing imaging.
Teaching fundamental design concepts and the challenges of emerging technology, this textbook prepares students for a career designing the computer systems of the future. Self-contained yet concise, the material can be taught in a single semester, making it perfect for use in senior undergraduate and graduate computer architecture courses. This edition has a more streamlined structure, with the reliability and other technology background sections now included in the appendix. New material includes a chapter on GPUs, providing a comprehensive overview of their microarchitectures; sections focusing on new memory technologies and memory interfaces, which are key to unlocking the potential of parallel computing systems; and deeper coverage of memory hierarchies, including DRAM architectures, compression in memory hierarchies, and up-to-date coverage of prefetching. Practical examples demonstrate concrete applications of definitions, while the simple models and codes used throughout ensure the material is accessible to a broad range of computer engineering/science students.
The chapter discusses the evolution of justice and dispute resolution in the era of LawTech (LT). Traditional taxonomies of justice are mirrored in new forms of digital dispute settlement (DDS), where the idealized Justice Hercules is compared to the prospect of robo-judges. Currently, LT primarily supports traditional courts as they transition to e-courts. Alternative dispute resolution (ADR) is evolving into online dispute resolution (ODR), with blockchain-based crowdsourcing emerging as a potential alternative to traditional justice. Hybrid models of dispute resolution are also taking shape. The chapter outlines assessment criteria for adopting LT in digital systems, focusing on ensuring that dispute settlement (DS) in the digital economy remains independent, impartial, and enforceable. Human centricity is a core construct for the co-development of LT and DS. This overarching principle requires human oversight, transparency, data privacy, and fairness in both access and outcomes.
Technological disruption leads to discontent in the law regarding the limited remedies available under private law. The source of the problem is a ‘private law’ model that assumes the function of law is to correct wrongs by compensating individuals who are harmed; the model is thus based on (i) individual claimants and (ii) financial redress. If we copy this private law model into our regulatory regimes for new technologies, our governance remedies will fall short. On the one hand, the use of AI can, in a single act, affect a large number of people. On the other hand, not all offences can be cured by awarding money damages. It is therefore necessary to rethink private remedies in the face of AI wrongs to make law effective. To achieve this, the mantra of individual compensation has to give way to a social perspective, including the use of non-pecuniary measures to provide effective remedies for AI wrongs.
Provided the law’s classifications are broadly drawn, technological innovation will not require the classifications to be redrawn or new categories to be introduced. This is not to say, however, that innovations will never require a rethinking of old categories or the invention of new ones. Difficult as that may be, the more difficult issue is detecting disruptions in the first place. Some truly disruptive innovations, such as computer programs, may be hidden from view for a variety of reasons. Others touted as disruptive, such as cryptoassets, may turn out not to be disruptive at all.
Failures of environmental law to preserve, protect, and improve the environment are caused by law’s contingency and by constitutional presumptions of supremacy over the self-regulatory agency of nature. Contingency problems are intrinsic to law and therefore invite the deployment of technologies. Constitutional presumptions can be corrected through geo-constitutional reform. The latter requires the elaboration of geo-constitutional principles bestowing authority on nature’s self-regulatory agency. It is suggested that principles of autonomy, loyalty, pre-emption, supremacy, and rights have the potential to serve that aim and imply proactive roles for technologies in environmental governance. Geo-constitutional reform is necessary to prevent the fatal collapse of the natural regulatory infrastructure that enables life, and to secure a future of environmental governance by design. Once environmental catastrophe has materialized, however, geo-constitutionalism loses its raison d’être.
This chapter argues that, as evidenced by EU digital law and EU border management, the EU legislature is complicit in the creation of complex socio-technical systems that undermine core features of the EU’s legal culture. In the case of digital law, while the EU continues to govern by publicly declared and debated legal rules, the legal frameworks – exemplified by the AI Act – are excessively complex and opaque. In the case of border management, the EU increasingly relies not on governance by law but on governance by various kinds of technological instruments. Such striking departures from the EU’s constitutive commitments to the rule of law, democracy, and respect for human rights are more than a cause for concern; they raise profound questions about what it now means to be a European.
This chapter challenges the conventional wisdom of how users of social media platforms such as Instagram, X, or TikTok pay for service access. It argues that rather than merely exchanging data for services, users unknowingly barter their attention, emotions, and cognitive resources – mental goods that corporations exploit through technologically managed systems like targeted advertising and habit-forming design. The chapter explores how these transactions are facilitated not by legal contracts but by code, which allows social media companies to extract value in ways that traditional legal conceptual frameworks cannot capture. It further highlights the negative externalities of these exchanges, such as cognitive impairments and mental health issues, framing them as pollution byproducts of the attention economy. By examining both the visible and hidden dimensions of this technologically mediated exchange, the chapter calls for a deeper understanding of the mechanisms that govern our interactions with digital platforms rather than rushing to propose new legal solutions.
Advanced AI (generative AI) poses challenges to the practice of law and to society as a whole. The proper governance of AI is unresolved but will likely be multifaceted, comprising soft law (such as standardisation, best practices, and ethical guidelines) as well as hard law consisting of a blend of existing law and new regulations. This chapter argues that ‘lawyers’ professional codes of conduct’ (ethical guidelines) provide a governance system that can be applied to the AI industry. The increase in professionalisation warrants treating AI creators, developers, and operators as professionals subject to the obligations foisted on the legal profession and other learned professions. Legal ethics provides an overall conceptual structure that can guide AI development, serving the purposes of disclosing potential liabilities to AI developers and building trust among the users of AI. Additionally, AI creators, developers, and operators should be subject to fiduciary duty law. Fiduciary duty law as applied to these professionals would require a duty of care in designing safe AI systems; a duty of loyalty to customers, users, and society not to create systems that manipulate consumers or democratic governance; and a duty of good faith to create beneficial systems. This chapter advocates the use of ethical guidelines and fiduciary law not as soft law but as the basis for structuring private law in the governance of AI.
Law’s governance seemingly faces an uncertain future. In one direction, the alternative to law’s governance is a dangerous state of disorder and, potentially, existential threats to humanity. That is not the direction in which we should be going, and we do not want our escalating discontent with law’s governance to give that disorder any assistance; law’s governance is already held in contempt by many. In the other direction, if we pursue technological solutions to the imperfections in law’s governance, there is a risk that we diminish the importance of humans and their agency. Any community contemplating a transition to governance by technology needs to start its impact assessment with the question of whether the new tools are compatible with sustaining the foundational conditions themselves.
This chapter analyses the public and private governance structure of the EU AI Act (AIA) and its associated ecosystem of compliance and conformity. Firstly, it analyses the interaction of public and private governance in the making of AI law meant to concretise the rules in the AIA. Secondly, the focus shifts to the interaction of public and private governance in the Act’s enforcement through compliance, conformity, and public authorities. Thirdly, it is argued that the EU legislature has fully developed neither public and private governance nor the interaction between the two. As a result, there are many gaps in the involvement of civil society in the compliance, conformity, and enforcement of private regulations, in particular harmonised technical standards, Codes of Practice, and Codes of Conduct. Moreover, the extreme complexity of the AIA’s governance structure is likely to trigger litigation between AI providers and deployers and the competent surveillance authorities, or more generally in B2B and B2C relations.