This chapter considers the challenging yet exciting world of valuing companies. Valuation has always been a key topic in finance, but it is even more relevant for high-growth ventures because of its impact on raising capital. Equity is the main source of financing for startups, which typically have high potential but few tangible assets. To reach an agreement, founders and investors need to determine the value of the business, that is, the ‘exchange rate’ of money for shares.
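The ‘exchange rate’ of money for shares follows directly from the pre-money and post-money valuation. A minimal sketch, with hypothetical round figures:

```python
def post_money(pre_money: float, investment: float) -> float:
    """Post-money valuation = pre-money valuation + new capital raised."""
    return pre_money + investment

def investor_stake(pre_money: float, investment: float) -> float:
    """Ownership fraction the investor receives in exchange for the investment."""
    return investment / post_money(pre_money, investment)

# Hypothetical round: an $8M pre-money valuation and a $2M investment
# give a $10M post-money valuation and a 20% stake for the investor.
stake = investor_stake(8_000_000, 2_000_000)
```

The same arithmetic, run in reverse, is how a negotiated stake implies a valuation: agreeing to give up 20% for $2M is agreeing to an $8M pre-money value.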
Investors in startups need to know what is going on in the companies in which they have invested. To do so, they engage in monitoring activities to find out whether a portfolio company is developing well or whether it needs support or even corrective action. However, monitoring is only possible if the startup provides investors with the relevant information. Monitoring requires regular reports from the entrepreneurs because they are the ones who see how the startup is doing – at least they should. Generally, business reporting takes place on a monthly basis. However, venture capitalists frequently require weekly reporting and will work with the entrepreneurs to establish daily targets that lead to the achievement of the weekly targets. The regular reporting may be complemented by additional investor reports on specific issues, as well as by meetings and calls to discuss important matters. A business report is the lens through which investors perceive and recognize the progress entrepreneurs achieve. Monitoring by investors succeeds or fails based on the quality of the reporting. This quality, in turn, depends directly on the principles and standards entrepreneurs apply when measuring the economic activities and events affecting their startup.
Business angels are private individuals – predominantly cashed-out entrepreneurs – who invest their own money in new and early-stage businesses and, having invested, then draw on their own business experience to support these ventures in a variety of ways. They are often referred to as informal investors or informal venture capitalists. Whereas the attention of scholars and the media is largely focused on institutional venture capital, business angels actually finance substantially more businesses.
In this chapter we explore how and why venture capitalists (VCs) conduct due diligence. We begin by demystifying due diligence and dissecting its objectives. From screening to final legal scrutiny, we explore the due diligence stages, offering insights from academia, experts, and the tools used by VCs. In doing so we blend academic rigour with the street-smart wisdom of industry experts – both VCs and founders. This delivers insights from both sides of the table on how to navigate the intricate dance of due diligence. Continuing in the spirit of offering real-world insights and tools, we include due diligence scorecards shared by VCs, plus noteworthy tales of successes and failures. The chapter closes with a spotlight on key trends shaping the future of due diligence and a practical checklist of the topics to include, and things to look out for, when doing due diligence.
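The scorecards mentioned typically reduce to a weighted average of ratings across criteria. The criteria, weights, and ratings below are illustrative assumptions only, not a scheme drawn from any particular VC:

```python
# Hypothetical due diligence scorecard: criteria weighted to sum to 1.0,
# each criterion rated on a 1-5 scale. All numbers are illustrative.
WEIGHTS = {"team": 0.30, "market": 0.25, "product": 0.20,
           "traction": 0.15, "terms": 0.10}

def scorecard(ratings: dict) -> float:
    """Weighted average of per-criterion ratings."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)

example = {"team": 5, "market": 4, "product": 3, "traction": 3, "terms": 4}
total = scorecard(example)  # a single comparable number per deal
```

The value of such a sheet is less the final number than the discipline of rating each criterion explicitly before comparing deals.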
This innovative textbook has been designed with approachability and engagement at its forefront, using language reminiscent of a live lecture and interspersing the main text with useful advice and expansions. Striking a balance between theory-led and experiment-led approaches, this book immediately immerses the reader in charged and neutral currents, which are at the core of the Standard Model, before presenting the gauge field, allowing the introduction of Feynman diagram calculations at an early stage. This novel and effective approach gives readers a head start in understanding the Model's predictions, stoking interest early on. With in-chapter problem sessions that help readers build their mastery of the subject, clarifying notes on equations, end-of-chapter exercises to consolidate learning, and marginal comments to guide readers through the complexities of the Standard Model, this is the ideal book for graduate students studying high-energy physics.
This chapter explores fundamental analytical techniques in data science, distinguishing between data analysis (backward-looking) and data analytics (forward-looking prediction).
Six key analysis categories are covered:
Descriptive Analysis examines current data through statistical measures (mean, median, mode) and visualizations to understand "what is happening."
Diagnostic Analytics investigates "why something happened" using correlation analysis, emphasizing the distinction between correlation and causation.
Predictive Analytics forecasts future outcomes using historical data and regression analysis.
Prescriptive Analytics determines optimal courses of action by analyzing potential decisions.
Exploratory Analysis discovers unknown relationships through visualization when questions aren’t predetermined.
Mechanistic Analysis examines exact variable changes and their effects.
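The first two categories can be illustrated in a few lines; the samples below are made up for illustration:

```python
from statistics import mean, median, mode

# Descriptive analysis: "what is happening" in a small sample.
ages = [23, 25, 25, 31, 40, 25, 28]
summary = (mean(ages), median(ages), mode(ages))

# Diagnostic analytics: is x associated with y?
# (Correlation measures association, not causation.)
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

hours = [1, 2, 3, 4, 5]
score = [52, 58, 61, 70, 75]
r = pearson(hours, score)  # near +1: strong positive association
```

A strong r here says only that study hours and scores move together; establishing *why* is exactly the step diagnostic analytics flags as requiring care.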
The chapter emphasizes statistical literacy as essential for data scientists, covering key concepts like variable types, frequency distributions, measures of centrality and dispersion, and regression modeling. Hands-on examples demonstrate applications across business, healthcare, and social sciences.
A framing case study discusses Uruguay’s attempt to limit cigarette sales by foreign firms. Then the chapter provides an overview of international investment law. The chapter discusses: (1) how states have historically protected foreign investment using international law, including major concepts and the evolution of investment institutions; (2) major foreign investor rights under contemporary investment law, including rules for expropriation, treatment standards, performance requirements, and legal remedies; and (3) how states seek to balance the protection of foreign investment against their own state authority in areas like maintaining public order and safety, preserving national security, and protecting the environment and labor.
This chapter focuses on applying data science and machine learning techniques to real-world problems using R. It covers four main applications: clinical data analysis, Reddit post analysis, YouTube video statistics, and large-scale processing of Yelp review data.
The chapter begins with exploring clinical data from a dermatology study, demonstrating visual exploration, gradient descent regression, random forest classification, and k-means clustering techniques. It then transitions to social media analysis, specifically working with Reddit APIs to collect and analyze posts, examining relationships between variables like post length, scores, and upvotes.
The YouTube section covers API authentication and data collection for video statistics analysis. Finally, the Yelp analysis demonstrates big data processing techniques, exploring user behavior patterns through correlation analysis, regression modeling, and clustering of review data.
The chapter emphasizes practical API usage, data visualization, statistical testing, and the importance of understanding both the problem and data before analysis.
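The chapter's worked examples are in R; as a language-agnostic illustration of the clustering step, here is a minimal one-dimensional k-means sketch in Python (k = 2, toy data):

```python
# Minimal 1-D k-means with k = 2: alternate between assigning points to
# the nearest centre and recomputing each centre as its group's mean.
def kmeans_1d(points, iters=20):
    c1, c2 = min(points), max(points)  # simple deterministic initialisation
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted([c1, c2])

# Two obvious clusters, around 2 and around 10.
centres = kmeans_1d([1, 2, 3, 9, 10, 11])
```

Real analyses would use richer features and a library implementation, but the assign-then-update loop is the whole algorithm.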
This chapter focuses on using Python for statistical analysis in data science. It begins with statistics essentials, teaching how to calculate descriptive statistics like mean, median, variance, and standard deviation using NumPy. The chapter covers data visualization techniques using Matplotlib to create histograms, bar charts, and scatterplots for exploring data patterns. Key topics include importing data using Pandas DataFrames, performing correlation analysis to measure relationships between variables, and conducting statistical inference through hypothesis testing. Students learn to implement t-tests for comparing means between two groups and ANOVA for comparing multiple groups. The chapter emphasizes practical applications through hands-on examples, from analyzing family age data to comparing exam scores across different classes. These statistical techniques form the foundation for more advanced data science work, enabling students to extract meaningful insights from datasets and make data-driven decisions.
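The two-group comparison described can be sketched without any libraries; the exam scores below are hypothetical, and in practice one would use a library routine such as `scipy.stats.ttest_ind` to also obtain the p-value:

```python
from statistics import mean, variance

# Hypothetical exam scores for two classes (illustrative data only).
a = [72, 78, 81, 69, 75, 80]
b = [85, 88, 79, 91, 84, 86]

def welch_t(x, y):
    """Welch's t-statistic for comparing two sample means
    without assuming equal variances."""
    se = (variance(x) / len(x) + variance(y) / len(y)) ** 0.5
    return (mean(x) - mean(y)) / se

t = welch_t(a, b)  # a large |t| suggests the group means differ
```

Here t is strongly negative, pointing to class b scoring higher; the hypothesis test then asks whether a gap this large is plausible under chance alone.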
Data is systematically gathered information based on the concepts expressed in your theory. Identifying observable indicators for the concepts in your theory and hypotheses greatly assists in achieving valid, reliable measurement and a satisfactory test of your hypotheses. Qualitative data generally derives from two sources: texts and human subjects. Data collection from texts frequently involves archival research, though sourcing data strictly from secondary sources can also be viable. Data from human subjects may involve elite interviewing, mass surveys, or participant observation. Your reference librarians are best positioned to help you locate relevant resources in your library and elsewhere.
This chapter introduces the basic parts of a standard empirical paper – introduction, literature review, theory and hypotheses, analysis, and conclusions. We then home in on the first one that most people write: the literature review. The literature review is a distinct genre of writing with its own conventions and expectations. Good literature reviews are the product of a multi-stage process that shows evidence of redrafting and that places the literature in a context allowing the reader not only to understand the shape of the forest – the entire body of literature – but also to locate the author's claims within that literature.
This chapter provides a comprehensive introduction to supervised learning techniques for classification problems. It begins with logistic regression for binary classification, explaining the sigmoid function and gradient ascent optimization. The chapter then covers softmax regression for multi-class problems, followed by k-nearest neighbors (kNN) as an intuitive distance-based classifier.
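The sigmoid function and gradient ascent mentioned above fit in a few lines. A minimal pure-Python sketch on toy, linearly separable data (the learning rate, epoch count, and dataset are illustrative):

```python
import math

def sigmoid(z):
    """Map a real score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Stochastic gradient ascent on the log-likelihood
    of a one-feature logistic model."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            # Log-likelihood gradient: (y - p) * x for w, (y - p) for b.
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

# Toy data: label 1 whenever x > 3.
xs = [1, 2, 3, 4, 5, 6]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
pred = [1 if sigmoid(w * x + b) > 0.5 else 0 for x in xs]
```

On separable data like this the fitted boundary, at -b/w, lands between 3 and 4 and the training points are classified perfectly.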
Decision trees are explored in detail, including entropy, information gain, and the ID3 algorithm, along with derived decision rules and association rules. Random forests are presented as an ensemble method that addresses overfitting by combining multiple decision trees.
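Entropy and information gain, the splitting criterion behind ID3, can be computed directly; the split below is over a hypothetical attribute:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, split_groups):
    """Entropy reduction achieved by splitting labels into groups."""
    n = len(labels)
    remainder = sum(len(g) / n * entropy(g) for g in split_groups)
    return entropy(labels) - remainder

parent = ["yes"] * 5 + ["no"] * 5          # maximally mixed: 1.0 bit
groups = [["yes"] * 4 + ["no"],            # split on a hypothetical attribute
          ["yes"] + ["no"] * 4]
gain = information_gain(parent, groups)
```

ID3 greedily picks, at each node, the attribute whose split yields the largest such gain.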
The chapter covers Naive Bayes classification based on Bayes’ theorem, despite its "naive" independence assumption. Finally, Support Vector Machines (SVMs) are introduced for both linear and non-linear classification using maximum margin hyperplanes.
Each technique includes hands-on R programming examples with real datasets, practical applications, and exercises to reinforce learning concepts.