Bisphenol A (BPA) is associated with adverse health outcomes and is found in many canned foods. It is not known whether some BPA contamination can be washed away by rinsing. The objective of this single-blinded crossover experiment was to determine whether BPA exposure, as measured by urinary concentrations, could be decreased by rinsing canned beans prior to consumption. Three types of hummus were prepared: from dried beans, from rinsed canned beans, and from unrinsed canned beans. Fourteen healthy participants ate two samples of each hummus over six experimental days and collected spot urine specimens for BPA measurement. The geometric mean (GM) urinary BPA level for dried beans (GM = 0.97 ng/ml, 95% CI = 0.74, 1.26) was significantly lower than for rinsed (GM = 1.89 ng/ml, 95% CI = 1.37, 2.59) and unrinsed (GM = 2.46 ng/ml, 95% CI = 1.44, 4.19) canned beans. A difference-in-difference estimate showed that the pre- to post-hummus increase in GM BPA was 1.39 ng/ml greater for unrinsed than for rinsed canned beans (p = 0.0400). Rinsing canned beans was an effective method to reduce BPA exposure.
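As an illustration of the statistics reported above, the sketch below shows one way a geometric mean and a difference-in-difference contrast can be computed on spot-urine concentrations. The data values, arm labels, and sample sizes are hypothetical, and the study's own statistical modelling may well differ.

```python
import numpy as np

# Hypothetical spot-urine BPA concentrations (ng/ml); not the study data.
pre_rinsed    = np.array([0.8, 1.1, 0.9, 1.3])
post_rinsed   = np.array([1.6, 2.1, 1.8, 2.2])
pre_unrinsed  = np.array([0.9, 1.0, 1.2, 1.1])
post_unrinsed = np.array([2.4, 2.9, 2.2, 3.1])

def geometric_mean(x):
    """Geometric mean: exponentiate the mean of the natural logs."""
    return np.exp(np.mean(np.log(x)))

# Pre-to-post change in each arm, on the geometric-mean scale.
change_rinsed   = geometric_mean(post_rinsed)   - geometric_mean(pre_rinsed)
change_unrinsed = geometric_mean(post_unrinsed) - geometric_mean(pre_unrinsed)

# Difference-in-difference: how much larger the increase is for unrinsed beans.
did = change_unrinsed - change_rinsed
print(f"DiD estimate: {did:.2f} ng/ml")
```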
Let $\gamma(G)$ and $\gamma_\circ(G)$ denote the sizes of a smallest dominating set and a smallest independent dominating set in a graph G, respectively. One of the first results in probabilistic combinatorics is that if G is an n-vertex graph of minimum degree at least d, then
$$\gamma(G) \leq \frac{n}{d}(\log d + 1).$$
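This classical bound follows from a standard first-moment argument, recalled here only as a reminder (it is not the argument of the present paper, and $\log$ is read as the natural logarithm). Form a random set $S$ by including each vertex independently with probability $p$, and let $D$ be $S$ together with every vertex having no neighbour in $S$; then $D$ is dominating, and since each vertex and its neighbours span at least $d+1$ vertices,
$$\mathbb{E}\,|D| \;\le\; np + n(1-p)^{d+1} \;\le\; np + n e^{-p(d+1)}.$$
Taking $p = \ln(d+1)/(d+1)$ gives $\gamma(G) \le n\bigl(\ln(d+1)+1\bigr)/(d+1) \le \frac{n}{d}(\log d + 1)$.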
The main result of this paper is that if G is any n-vertex d-regular graph of girth at least five, then
$$\gamma_\circ(G) \leq \frac{n}{d}(\log d + c)$$
for some constant c independent of d. This result is sharp in the sense that as $d \rightarrow \infty$, almost all d-regular n-vertex graphs G of girth at least five have
$$\gamma_\circ(G) = \frac{n}{d}\bigl(\log d + O(1)\bigr).$$
Furthermore, if G is a disjoint union of $n/(2d)$ complete bipartite graphs $K_{d,d}$, then $\gamma_\circ(G) = \frac{n}{2}$. We also prove that there are n-vertex graphs G of minimum degree d whose maximum degree grows not much faster than $d \log d$ such that $\gamma_\circ(G) \sim n/2$ as $d \rightarrow \infty$. Therefore both the girth and regularity conditions are required for the main result.
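The value $\gamma_\circ(K_{d,d}) = d$ underlying the first claim (an independent dominating set must contain a full side) can be checked directly for small d. The brute-force sketch below is purely illustrative, uses a hypothetical helper name, and obviously does not scale.

```python
from itertools import combinations

def gamma_circ_kdd(d: int) -> int:
    """Brute-force smallest independent dominating set of K_{d,d} (small d only)."""
    left = set(range(d))
    right = set(range(d, 2 * d))
    vertices = left | right

    def nbrs(v):
        # In K_{d,d} every vertex is adjacent to the whole opposite side.
        return right if v in left else left

    for size in range(1, 2 * d + 1):
        for cand in combinations(vertices, size):
            s = set(cand)
            independent = all(nbrs(v).isdisjoint(s) for v in s)
            dominating = all(v in s or not nbrs(v).isdisjoint(s) for v in vertices)
            if independent and dominating:
                return size  # the first size found is the minimum
    return 2 * d  # unreachable: a full side is always independent and dominating

print(gamma_circ_kdd(3))  # prints 3, i.e. one full side of K_{3,3}: half the vertices
```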
In this chapter, we define an abstraction of the sentiment analysis problem. This abstraction gives us a statement of the problem and enables us to see a rich set of interrelated subproblems. It is often said that if we cannot structure a problem, we probably do not understand the problem. The objective of the definitions is thus to abstract a structure from the complex and intimidating unstructured natural language text. This structure serves as a common framework to unify various existing research directions and enable researchers to design more robust and accurate solution techniques by exploiting the interrelationships of the subproblems. From a practical application point of view, the definitions let practitioners see which subproblems need to be solved in building a sentiment analysis system, how the subproblems are related, and what output should be produced.
Sentiment analysis, also called opinion mining, is the field of study that analyzes people’s opinions, sentiments, appraisals, attitudes, and emotions toward entities and their attributes expressed in written text. The entities can be products, services, organizations, individuals, events, issues, or topics. The field represents a large problem space. Many related names and slightly different tasks – for example, sentiment analysis, opinion mining, opinion analysis, opinion extraction, sentiment mining, subjectivity analysis, affect analysis, emotion analysis, and review mining – are now all under the umbrella of sentiment analysis. The term sentiment analysis perhaps first appeared in Nasukawa and Yi (2003), and the term opinion mining first appeared in Dave et al. (2003). However, research on sentiment and opinion began earlier (Wiebe, 2000; Das and Chen, 2001; Tong, 2001; Morinaga et al., 2002; Pang et al., 2002; Turney, 2002). Even earlier related work includes interpretation of metaphors; extraction of sentiment adjectives; affective computing; and subjectivity analysis, viewpoints, and affects (Wiebe, 1990, 1994; Hearst, 1992; Hatzivassiloglou and McKeown, 1997; Picard, 1997; Wiebe et al., 1999). An early patent on text classification included sentiment, appropriateness, humor, and many other concepts as possible class labels (Elkan, 2001).
This book introduced the field of sentiment analysis or opinion mining. It presented some basic knowledge and mature techniques in detail and surveyed numerous other state-of-the-art algorithms and techniques. Owing to numerous challenging research problems and a wide variety of practical applications, sentiment analysis has been a very active research area in several computer science fields, including NLP, data mining, web mining, and information retrieval. It has also spread to management science (Hu et al., 2006; Archak et al., 2007; Das and Chen, 2007; Dellarocas et al., 2007; Ghose et al., 2007; Park et al., 2007; Chen and Xie, 2008) and other social science fields such as communications and political science because of its importance to business and society as a whole. With the rapid expansion of social media on the web, the importance of sentiment analysis is also growing by the day.
Opinions from social media are increasingly used by individuals and organizations for making purchase decisions and making choices at elections and for marketing and product design. Positive opinions often mean profit and fame for businesses and individuals. Unfortunately, that gives imposters a strong incentive to game the system by posting fake reviews or opinions to promote or discredit some target products, services, organizations, individuals, and even ideas without disclosing their true intentions, or the person or organization for which they are secretly working. Such individuals are called opinion spammers and their activities are called opinion spamming (Jindal and Liu, 2007, 2008). An opinion spammer is also called a shill, a plant, or a stooge in the social media environment, and opinion spamming is also called shilling or astroturfing. Opinion spamming can not only hurt consumers and damage businesses, but also warp opinions and mobilize masses into positions counter to legal or ethical mores. This can be frightening, especially when spamming is about opinions on social and political issues. It is safe to say that as opinions in social media are increasingly used in practice, opinion spamming is becoming more and more sophisticated, which presents a major challenge for its detection. However, such offenses must be detected to ensure that social media continue to be a trusted source of public opinions, rather than being full of fakes, lies, and deceptions.
As discussed in Chapter 3, document-level sentiment classification is too coarse for practical applications. We now move to the sentence level and look at methods that classify the sentiment expressed in each sentence. The goal is to classify each sentence in an opinion document (e.g., a product review) as expressing a positive, negative, or neutral opinion. This gets us closer to real-life sentiment analysis applications, which require opinions about specific sentiment targets. Sentence-level classification is similar to document-level classification because sentences can be regarded as short documents. It is, however, often harder because a typical sentence contains much less information than a typical document owing to the difference in length. Most document-level sentiment classification research ignores the neutral class, mainly because it is more difficult to perform three-class classification (positive, neutral, and negative) accurately. For sentence-level classification, however, the neutral class cannot be ignored, because an opinion document can contain many sentences that express no opinion or sentiment. Note that a neutral opinion here often means that no opinion or sentiment is expressed.
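As a concrete illustration only, the sketch below classifies each sentence of a review as positive, negative, or neutral with a toy lexicon-based baseline; it is not one of the techniques surveyed in this chapter, and the lexicons and sample review are hypothetical stand-ins. It does, however, show why the neutral class matters at the sentence level: sentences with no opinion words fall through to the neutral label.

```python
import re

# Toy sentiment lexicons; hypothetical stand-ins for real lexical resources.
POSITIVE = {"good", "great", "excellent", "love", "amazing"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "awful"}

def classify_sentence(sentence: str) -> str:
    """Label a sentence positive, negative, or neutral by counting lexicon hits."""
    words = re.findall(r"[a-z']+", sentence.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"  # no opinion or sentiment detected

review = ("I bought this phone yesterday. The screen is excellent. "
          "The battery life is terrible. It comes in three colors.")
for sentence in re.split(r"(?<=[.!?])\s+", review):
    print(f"{classify_sentence(sentence):8s} | {sentence}")
```

Running the sketch labels the last sentence neutral, which is exactly the kind of objective, opinion-free sentence that document-level classifiers can afford to ignore but sentence-level classifiers cannot.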