Is there a notion of contradiction—let us call it, for dramatic effect, “absolute”—making all contradictions, so understood, unacceptable also for dialetheists? It is argued in this paper that there is, and that spelling it out brings some theoretical benefits. First, it gives us a foothold on undisputed ground in the methodologically difficult debate on dialetheism. Second, we can use it to express, without begging questions, the disagreement between dialetheists and their rivals on the nature of truth. Third, dialetheism has an operator allowing it, against the opinion of many critics, to rule things out and manifest disagreement: for unlike other proposed exclusion-expressing devices (for instance, the entailment of triviality), the operator used to formulate the notion of absolute contradiction appears to be immune both from crippling expressive limitations and from revenge paradoxes—pending a rigorous nontriviality proof for a formal dialetheic theory including it.
In this paper, we study the problem of structural analysis of Web documents, aimed at extracting the sectional hierarchy of a document. In general, a document can be represented as a hierarchy of sections and subsections with corresponding headings and subheadings. We developed two machine learning models: a heading extraction model and a hierarchy extraction model. Heading extraction was formulated as a classification problem, whereas a tree-based learning approach was employed in hierarchy extraction. For this purpose, we developed an incremental learning algorithm based on support vector machines and perceptrons. The models were evaluated in detail with respect to the performance of the heading and hierarchy extraction tasks. For comparison, we used a baseline rule-based approach that relies on heuristics and HTML document object model (DOM) tree processing. The fully automatic machine learning approach outperformed the rule-based approach. We also analyzed the effect of document structuring on automatic summarization in the context of Web search. The results of a task-based evaluation on TREC queries showed that structured summaries are superior to unstructured summaries in terms of both accuracy and user ratings, and enable users to determine the relevance of search results more accurately than search engine snippets.
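The paper's models are not reproduced here, but the classification framing for heading extraction can be illustrated with a minimal sketch. The features below (relative font size, boldness, word count, trailing punctuation) and the toy data are hypothetical stand-ins, not the feature set the authors actually used:

    # Minimal sketch: heading extraction as binary classification.
    # Features and training data below are illustrative assumptions only.
    from sklearn.svm import LinearSVC

    # Each candidate DOM text node is described by illustrative features:
    # [font size relative to body text, is bold (0/1), word count, ends with period (0/1)]
    X_train = [
        [1.6, 1,  4, 0],   # large, bold, short, no period -> heading
        [1.0, 0, 35, 1],   # body-sized, long, ends with period -> not a heading
        [1.3, 1,  6, 0],
        [1.0, 0, 28, 1],
    ]
    y_train = [1, 0, 1, 0]  # 1 = heading, 0 = non-heading

    clf = LinearSVC()
    clf.fit(X_train, y_train)

    candidate = [[1.5, 1, 5, 0]]   # a short, large, bold text node
    print(clf.predict(candidate))  # -> [1], classified as a heading

A real system would extract such features from the DOM tree of each page; the point of the sketch is only the framing of heading detection as per-node classification.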
In early 2011, Pepsi made headlines by announcing that after more than 20 years, they would forgo advertising during the Super Bowl. Instead, PepsiCo decided to award more than $20 million in grants to fund community projects. Anyone could submit a grant application online, and award winners would be chosen by popular vote. News of Pepsi’s contest spread across social media, and with each mention, the Pepsi name was further associated with a philanthropic brand image. Contestants extended the brand promotion as they campaigned for their own personal causes, driving more traffic to Pepsi’s website.
In a similar move, P&G, one of the world’s largest marketing organizations, announced in February 2012 that they would reduce their marketing budget by $10 billion over the next four years. Much of the savings would be achieved by shifting their efforts away from traditional offline marketing methods in favor of digital marketing tools such as online banner ads, viral marketing, and social media marketing.
As individuals, we make decisions about whether to post our opinions to social media and what opinions to post. When we make these decisions, we are subject to a host of social influences. While we may have intended to express our thoughts on the latest restaurant that we visited or a movie that we recently saw, posting comments online doesn’t occur in a vacuum. Based on what others have said previously, what we choose to say (that is, if we choose to say anything at all) may change once we sit down at the computer.
Earlier chapters discussed how our opinion formation and expression behaviors change as we are exposed to the opinions that others have already posted. In turn, the opinions we express today will affect how others behave in the future. Social media platforms can be seen as opinion ecosystems where our viewpoints interact and influence those of other contributors. Some opinions will be discouraged and driven out of the ecosystem through selection effects. Other opinions adapt to the environment as a result of a variety of adjustment effects. As a result, the collective opinion of the posting population evolves.
A search of employment opportunities in social media inevitably turns up something like the following:
Social Media Associate: Act as administrator of the company blog and social media feeds as well as represent the company on all social media platforms. Create compelling content to drive traffic. Primary role is to engage community members.
In other words, the employer wants a communications associate whose main job is to tweet and blog.
Like many organizations, the employer represented here views social media as just another platform for advertising and communications. The person in charge of the social media efforts may be informed about the organization’s overall strategy, but his or her role is to simply use social media to communicate this strategy to the target consumer or constituency. This perspective on how social media fits into the organization can be very limiting and potentially problematic. Let’s break down the pitfalls associated with this line of reasoning.
Statistical parsers often require careful parameter tuning and feature selection. This is a nontrivial task for application developers who are not interested in parsing for its own sake, and it can be time-consuming even for experienced researchers. In this paper we present MaltOptimizer, a tool developed to automatically explore parameters and features for MaltParser, a transition-based dependency parsing system that can be used to train parsers given treebank data. MaltParser provides a wide range of parameters for optimization, including nine different parsing algorithms, an expressive feature specification language that can be used to define arbitrarily rich feature models, and two machine learning libraries, each with its own parameters. MaltOptimizer is an interactive system that performs parser optimization in three stages. First, it performs an analysis of the training set in order to select a suitable starting point for optimization. Second, it selects the best parsing algorithm and tunes the parameters of that algorithm. Finally, it performs feature selection and tunes machine learning parameters. Experiments on a wide range of data sets show that MaltOptimizer quickly produces models that consistently outperform default settings and often approach the accuracy achieved through careful manual optimization.
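As a rough, conceptual sketch of the staged greedy search described above (MaltOptimizer itself is a Java tool; the dummy scoring function, the algorithm list, and the schematic feature names below are our illustrative assumptions, not its actual code or interface):

    import random

    ALGORITHMS = ["nivreeager", "nivrestandard", "covnonproj", "stackproj"]
    FEATURES = ["form[stack0]", "postag[stack0]", "postag[input0]", "deprel[ldep(stack0)]"]

    def evaluate(config):
        # Stand-in for training MaltParser with this configuration and
        # scoring it on held-out data; here it just returns a stable dummy score.
        random.seed(str(sorted(config.items())))
        return random.random()

    config = {"algorithm": None, "features": []}

    # Stage 2 (Stage 1, the data analysis, is omitted here): pick the best algorithm.
    config["algorithm"] = max(ALGORITHMS,
                              key=lambda a: evaluate({**config, "algorithm": a}))

    # Stage 3: greedy forward feature selection -- keep a feature only if it helps.
    best = evaluate(config)
    for feat in FEATURES:
        trial = {**config, "features": config["features"] + [feat]}
        if evaluate(trial) > best:
            config, best = trial, evaluate(trial)

    print(config["algorithm"], config["features"], round(best, 3))

The greedy, stage-by-stage shape is the point: each stage fixes part of the configuration before the next stage searches the remaining space, which keeps the search tractable compared with a joint sweep.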
It used to be that when a new movie was released, moviegoers would look to the opinions of professional movie critics before deciding whether to see it. Under this paradigm, professional critics wielded an enormous amount of power and influence to either make or break a new movie. In today’s environment, however, social media host reviews from anyone who wants to share an opinion. And now, before we head out to the theaters, we look online for not only the reviews provided by professional movie critics but also the reviews posted by friends, and sometimes even strangers, who have already seen the movie.
Arguably, social media have the potential to give a voice to everyone, making us less reliant on the opinions of a few experts. For us as consumers, that means a wider variety of opinions is available: we can follow the opinions of trusted sources who share our views rather than individuals whom others have deemed to be experts. It also means that through social media, an organization or business has access to the wide variety of opinions held by its various customers and stakeholders. Their opinions, rather than those of a few top executives, can now drive many of the organization’s strategic decisions. Is this a good thing? Should a company trust the HiPPO (the Highest Paid Person’s Opinion), or should it follow the opinions of the masses on social media?
It has become a staple in American politics that in just about every speech or debate, presidential candidates manage to work in a story about the struggles of Mr. and Mrs. John Smith from a swing state. Candidates talk to thousands of voters on the campaign trail. But these are the stories that they remember and choose to retell because, to them, they represent the stories of the larger population.
It is easy to understand why politicians latch on to these anecdotes. On a daily basis, teams of advisors and crowds of voters share their stories and offer their opinions on everything from taxes to foreign policies to healthcare reform. Even what they wear comes under scrutiny and often garners volumes of unsolicited feedback. How do politicians and other decision makers parse through all of these suggestions to identify the handful of opinions that are truly important and relevant to the larger population? Put bluntly, how do we know that the average American cares about Mr. and Mrs. John Smith’s stories?
Occasionally, we find ourselves in situations where we express an opinion that doesn’t perfectly represent the opinion that we actually hold. You don’t really like the sweater your aunt gave you last Christmas, but you tell her how much you love it and wear it anyway. Your boss’s jokes just aren’t that funny, but you at least let out a little chuckle. These are just some of the situations we find ourselves in when social norms contribute to our putting forth a viewpoint that isn’t entirely consistent with what we actually think.
In some cases, we’re just being polite when we pay someone a compliment, or we are simply choosing the path of least resistance. Even when we do hold strong opinions about a particular topic, we may temper what we say based on how we think someone else might react. We adjust our opinions to better conform to the social contexts in which we find ourselves.
In the previous chapter we discussed how environmental cues can affect whether or not we express any opinion at all. In this chapter, we discuss how our environment affects what opinion we express; in particular, we focus on the effects that others in our environment have on our opinion expression behavior.
In his book Crossing the Chasm, Geoffrey Moore argues that for a product to succeed in the mass market, it must cross the chasm that separates the innovative consumers from the rest of the market, the previously discussed imitators. But simply getting the approval of the innovators is not enough to penetrate the mass market. Instead, a few influential individuals among the innovator population can play a critical role in bridging the gap between the innovator population and the mass market.
Is this the only path toward success for a new product? In a study of how information is transmitted across a social network, Watts and Dodds demonstrate that it is not always the influential power of a few that leads to the diffusion of information. An alternative path for the diffusion of information exists if there is “a critical mass of easily influenced individuals” who can fuel the viral takeoff of a product, idea, or opinion.
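Watts and Dodds' point can be made concrete with a toy threshold cascade: each person adopts an opinion once a large enough share of their contacts has adopted it. The sketch below is our own minimal illustration with arbitrary parameters, not the authors' actual simulation:

    import random

    random.seed(7)
    N, K = 1000, 5                       # people, contacts per person
    contacts = {i: random.sample(range(N), K) for i in range(N)}

    # Lower thresholds mean more easily influenced individuals.
    threshold = {i: random.uniform(0.05, 0.4) for i in range(N)}

    adopted = {random.randrange(N)}      # one ordinary initial adopter
    changed = True
    while changed:
        changed = False
        for person in range(N):
            if person in adopted:
                continue
            share = sum(c in adopted for c in contacts[person]) / K
            if share >= threshold[person]:
                adopted.add(person)
                changed = True

    # With a critical mass of low-threshold people, one adopter can cascade.
    print(f"{len(adopted)} of {N} people adopted")

Notice that the seed adopter here is not an "influential" at all; whether the idea takes off depends on how many easily influenced (low-threshold) people the network contains.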
In the world of Facebook, Twitter, and Yelp, water-cooler conversations with co-workers and backyard small talk with neighbors have moved from the physical world to the digital arena. Previous exchanges with familiar and trusted individuals have been replaced by large-scale chatter accessible to acquaintances and strangers. Discussions that once went unrecorded now leave traces that can be explored years later. The way in which we share information and opinions has changed irrevocably.
In this new landscape, organizations ranging from Fortune 500 companies to government agencies to political campaigns continuously monitor online opinions in an effort to guide their actions. Are consumers satisfied with our product? How are our policies being perceived? Do voters agree with our platform? Brand managers, marketers, and campaign managers can potentially find answers to these questions by monitoring the opinions shared through social media.
But measuring online opinion is more complex than just reading a few posted reviews. In this book, we move beyond the current practice of social media monitoring and introduce the concept of social media intelligence. While social media monitoring is an essential step in developing a social media intelligence platform, it is by nature descriptive and retrospective. That is, social media monitoring describes what has already happened. It does not prescribe or guide an organization’s next steps.
The current state of social media intelligence is one where organizations are investing in social media monitoring but drowning in social media data and metrics. In an effort to make sense of the seemingly infinite volume of data that social media produce on a daily basis, analysts are computing an equally overwhelming number of metrics. The problem is that organizations are measuring what is easy to measure with the data. Twitter data are easy to collect and volume metrics are easy to compute, so metrics like the number of Twitter mentions or the number of Twitter followers are overemphasized. Rather than going after the low-hanging fruit, we need to shift our focus from measuring what’s easy to measure to measuring what matters. In other words, what are the metrics that will influence our strategic decision making? Our ability to define these metrics will depend on a firm understanding of opinion science.
Organizations have also struggled to integrate the intelligence gathered from social media data with the other sources of data that marketing researchers have relied on for decades. Many organizations are faced with multiple research reports produced from traditional focus groups, customer surveys, in-store sales data, and social media. In several cases, the social media reports don’t align with other studies, especially when the social media metrics are not adjusted to accommodate the various biases we know exist from opinion science research. When faced with these conflicting reports, organizations tend to favor the tried and tested offline methods over the very new and untested social media metrics. But organizations shouldn’t give up on social media intelligence quite that quickly, especially while social media tools are in their infancy. An integrated research approach that includes both the traditional offline methods and social media intelligence can be very effective, timely, and cost-efficient. The key is to track the right social media metrics and to integrate social media intelligence efforts with other marketing research programs. Integration would involve aligning the social media metrics with the offline metrics in such a way that the multiple sources of marketing intelligence complement one another. Social media intelligence can be used as an early indicator of general problem areas, and offline methods can be used to investigate further in a follow-up study.
This study reports on the results of classroom research investigating the effects of corpus use in the process of revising compositions in English as a foreign language. Our primary aim was to investigate the relationship between the information extracted from corpus data and how that information actually helped in revising different types of errors in the essays. Previous research on ‘data-driven learning’ has often failed to provide rigorous criteria for choosing the words or phrases suitable for correction with corpus data. By investigating the above relationship, this study aims to clarify what should be corrected by looking at corpus data. Ninety-three undergraduate students from two universities in Tokyo wrote a short essay in 20 minutes without a dictionary, and the instructors gave coded error feedback on two lexical or grammatical errors in each essay. They deliberately selected one error that appeared appropriate for checking against corpus data and one that was more likely to be corrected without using any reference resource. Three weeks later, a short hands-on introduction to the corpus query tool was given, followed by revision activities in which the participants were instructed to revise their first drafts, with or without the tool depending on the codes given to each error. A total of 188 errors were automatically classified into three categories (omission, addition, and misformation) using natural language processing techniques. All words and phrases tagged for errors were further annotated with part-of-speech (POS) information. The results show a significant difference in accuracy rates among the three error types when the students consulted the corpus: omission and addition errors were easily identified and corrected, whereas misformation errors had low correction accuracy. This reveals that certain errors are more suitable than others for checking against corpus data.
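To make the three-way taxonomy concrete, here is a small sketch that labels edits between a learner sentence and its correction by token alignment; the example sentence and the alignment-based heuristic are our illustrative assumptions, not the study's actual NLP pipeline:

    import difflib

    # Illustrative sketch of the omission / addition / misformation taxonomy
    # via token alignment between a learner sentence and its correction.
    def classify_errors(learner, corrected):
        a, b = learner.split(), corrected.split()
        labels = []
        for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
            if op == "insert":        # word missing from the learner text -> omission
                labels.append(("omission", b[j1:j2]))
            elif op == "delete":      # extra word in the learner text -> addition
                labels.append(("addition", a[i1:i2]))
            elif op == "replace":     # wrong form chosen -> misformation
                labels.append(("misformation", a[i1:i2]))
        return labels

    print(classify_errors("she go to the school", "she goes to school"))
    # -> [('misformation', ['go']), ('addition', ['the'])]

Under this framing, the study's finding is intuitive: omissions and additions are localized mismatches that corpus examples expose directly, whereas a misformation requires the learner to recognize that the form they chose is wrong and to find the right replacement.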