Is Hitler bigger than Napoleon? Washington bigger than Lincoln? Picasso bigger than Einstein? Quantitative analysts are rapidly finding homes in social and cultural domains, from finance to politics. What about history? In this fascinating book, Steve Skiena and Charles Ward bring quantitative analysis to bear on ranking and comparing historical reputations. They evaluate each person by aggregating the traces of millions of opinions, just as Google ranks webpages. The book includes a technical discussion for readers interested in the details of the methods, but no mathematical or computational background is necessary to understand the rankings or conclusions. Along the way, the authors present the rankings of more than one thousand of history's most significant people in science, politics, entertainment, and all areas of human endeavor. Anyone interested in history or biography can see where their favorite figures place in the grand scheme of things.
Mobility of people and goods is essential in the global economy. The ability to track the routes and patterns associated with this mobility offers unprecedented opportunities for developing new, smarter applications in different domains. Much of the current research is devoted to developing concepts, models, and tools to comprehend mobility data and make it manageable for these applications. This book surveys the myriad facets of mobility data, from spatio-temporal data modeling, to data aggregation and warehousing, to data analysis, with a specific focus on monitoring people in motion (drivers, airplane passengers, crowds, and even animals in the wild). Written by a renowned group of worldwide experts, it presents a consistent framework that facilitates understanding of all these different facets, from basic definitions to state-of-the-art concepts and techniques, offering both researchers and professionals a thorough understanding of the applications and opportunities made possible by the development of mobility data.
This second edition of Excel Basics to Blackbelt capitalizes on the success of the first edition and leverages advancements in visualization, data analysis, and sharing capabilities that have emerged over the past five years. As with the original text, the second edition is intended to serve as an accelerated guide to decision support designs for consultants and service professionals. This 'fast track' enables those who may never have used Excel to ramp up their skills to a level of mastery that allows them to integrate Excel with widely available associated applications, make use of intelligent data visualization and analysis techniques, automate activity through basic VBA designs, and develop easy-to-use interfaces for customization. In other words, this book provides users with lessons and examples of integrative Excel use that are not available in alternative texts.
We've talked about how difficult it can be to find or construct an optimal solution to real-world management problems, in which we're faced with nonlinear relationships and constraints that make it difficult to predict how specific decisions work together to impact performance. But in a certain way we've continued to simplify these real-world problems. There may be some shortcomings in the approaches we've taken in finding solutions, but what about the approaches we use to come up with the problems that we're trying to solve?
When we create a mathematical form to represent reality so that we can ultimately use analytics to provide an applicable real-world solution, are we missing something? And how much does this omission impact the real-world applicability and effectiveness of the solution we develop? These are critical questions for managers who want additional support in their decision making. Managers don't want recommendations built on inappropriate assumptions.
What steps can we take to help ensure that we are, in fact, providing appropriate characterizations of reality when we structure problems and make sense of solutions? Although there are a lot of good places to start, one obvious place is an attempt to take into account the uncertainty associated with just about everything that takes place in the real world.
There is a clear truism in George Box's 1979 statement that “all models are wrong, some models are useful.” We attempt to model reality to see how changes can affect it – hopefully, for the better. But models of reality are, by their very nature, incomplete depictions, which tend to be misleading. Still worse are models and associated solutions that faithfully attempt to do justice to reality by incorporating many facets of reality into their structures. Unfortunately, a common result is an overemphasis on certain issues in decision making that, although interesting, are far less relevant to effective decisions than other issues that have been overlooked.
Ultimately, any approach to decision making is a balancing act between an appropriate accounting of relevant reality (that is, the objectives, decision variables, and constraints discussed in Chapter 4) and avoiding details that only obscure or mislead. When we attempt to rationalize all of the factors that might go into a decision-making process, as well as possible solutions that might be practically viable, we often “satisfice,” a term used to describe making a decision about a set of alternatives that respects the limitations of human time and knowledge.
As an extension to the discussion in Chapter 6, it's relevant at this point to reconsider how a feature such as Solver can come up with a solution. Although it's not necessarily critical for developers to understand the detailed technicalities of these packaged programs, any developer worth his or her salt should understand at least the limitations of these algorithms.
How Solver “Solves”
Many people use Solver with the expectation that it can find the optimal solution for any kind of problem (of reasonable size). But even small problems can have nuances that make the job extremely difficult for the standard Solver add-in, and the resulting solutions are prone to poor performance (substantially less-than-optimal managerial recommendations). Engines like Solver commonly use hill-climbing algorithms to search for optimal solutions. In reality, hill climbing is just another heuristic (see Chapter 5). It starts with a guess at what the solution might be and then checks whether small changes to any of the decision variables can yield a better value for the objective function while still satisfying the constraints.
Hill-climbing algorithms typically examine only one solution at a time. For example, consider the following hypothetical performance surface (where performance along the z-axis is some function of the two decision variables x and y). In Figure 7.1, a shaded dot represents a possible solution, one that, at this point, appears to be less than ideal. It certainly doesn't represent the apparent peak value of z attainable (shown by an ellipse).
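To make the hill-climbing idea concrete, here is a minimal sketch in Python (rather than the VBA used elsewhere in the book), using a hypothetical two-peak objective as a stand-in for a surface like the one in Figure 7.1. It shows both the greedy search itself and the way it stalls on a local peak:

```python
import math

def hill_climb(objective, start, step=0.1, max_iter=1000):
    """Greedy hill climb: move to a neighboring solution only if it improves
    the objective; stop when no small change helps (a local peak)."""
    x, y = start
    best = objective(x, y)
    for _ in range(max_iter):
        improved = False
        # Try a small move in each decision variable, one at a time.
        for dx, dy in [(step, 0), (-step, 0), (0, step), (0, -step)]:
            candidate = objective(x + dx, y + dy)
            if candidate > best:
                x, y, best = x + dx, y + dy, candidate
                improved = True
                break
        if not improved:   # no neighbor is better: we are on *some* peak
            break
    return x, y, best

# A hypothetical performance surface with two peaks: a lesser peak of height
# ~1 near (2, 2) and the true (global) peak of height ~2 near (-2, -2).
def performance(x, y):
    return (math.exp(-((x - 2) ** 2 + (y - 2) ** 2))
            + 2 * math.exp(-((x + 2) ** 2 + (y + 2) ** 2)))

# Starting near the lesser peak, the climb stalls there (best value ~1.0),
# even though a solution twice as good exists elsewhere on the surface.
print(hill_climb(performance, (2.0, 2.0)))
print(hill_climb(performance, (-1.0, -1.0)))  # this start reaches the global peak
```

Because each step accepts only improving moves, the quality of the final answer depends entirely on the starting guess, which is one reason a tool like Solver can return a substantially suboptimal recommendation on a multi-peaked problem.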
The world changes all the time. Technological resources advance at such a clip it seems hard to keep up. But not all technological developments really need shadowing. What we really need to keep up with are developments that help us do the great things we've been doing all along…only more effectively.
Such is the case with spreadsheet applications. New interfaces and bells and whistles appear in each new version. Some are wonderful; however, not all are particularly useful. And sadly, some of the best features often get left behind (I always cite as examples the macro-writing capabilities and graphical direct-editing we lost when MS Excel 2007 replaced Excel 2003). But that's history. We need to keep looking forward, and where new capabilities replace older ones, we are not necessarily left in the lurch. After all, MS Excel and VBA provide the remarkable ability to be expanded and customized to our particular needs, even in ways that can replicate some of what we loved about previous versions, and even in ways we might expect future versions to automate (but which are not yet in place).
With that in mind, the second edition of Excel Basics to Blackbelt capitalizes on many of the recent developments made available by more contemporary versions of Excel (while retaining a nod to the features of the past). Furthermore, this edition looks forward to capabilities that are not currently standard through the introduction of a series of newly developed Excel add-ins. Their application and description are peppered throughout the second-edition text with the aim of augmenting the text and filling the capability gaps present in the original edition.
Aside from typing information into Excel, there are a number of other ways to get new data into spreadsheets. These methods include opening structured, plain-text files in ways that are meaningful to Excel (for example, rawdata.txt); using other desktop applications as data sources (such as tables in MS Word and tabular results from SPSS); drawing information from structured or unstructured online sources (such as content from COMPUSTAT or even whitepages.com); and developing systems that create or simulate large volumes of data with desired characteristics (mainly for use in illustrating or testing the robustness of proposed management policies). In this chapter, we'll touch on each of these methods.
Text File Imports and Basic Table Transfers
If you have a text file that contains information, such as survey or database data in text-file format, it can be opened in Excel as a new file. You simply need to specify how data in that file are organized: separated by spaces, tabs, commas, and so on. As an example, imagine a text file titled Chp3_MultRespsFinal.txt. Each record in this file occupies a new row, and the information relating to each record is organized sequentially, with fields separated by commas. This kind of data organization is referred to as “comma delimited.” Select Home>Open in Excel and then find and select this text document. Step 1 of the Text Import Wizard opens, as shown in Figure 3.1.
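As a behind-the-scenes view of what the wizard is doing, the sketch below parses a few comma-delimited records with Python's standard csv module (Python rather than VBA here, and the field names and values are invented for illustration; the actual Chp3_MultRespsFinal.txt contents are not reproduced):

```python
import csv
import io

# Invented stand-in for a few lines of a comma-delimited survey file:
# one record per row, fields separated by commas, first row as headers.
raw_text = "RespID,Q1,Q2\n101,4,5\n102,3,2\n"

rows = list(csv.reader(io.StringIO(raw_text)))
header, records = rows[0], rows[1:]

print(header)    # column names taken from the first row
print(records)   # each record split into its comma-delimited fields
```

This mirrors the wizard's first two steps: identifying the file as delimited, then choosing the comma as the delimiter so each field lands in its own column.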
The visualization of analytical dynamics comes naturally to the tools developed in Excel. This is largely due to the dynamic nature of graphs constructed in Excel. For example, if we wanted to depict the range of possible outcomes associated with specific decisions for which outcomes had a describable level of uncertainty or variation, it would be simple enough to introduce a random term into tabular forms and then graph the result. As always, pressing the F9 key would draw another set of random numbers from the built-in generator and update the associated data tables and plots to represent the volatility of those outcomes.
For example, based on the Data Table generated in the Lobo's Reservations case, we could graphically depict the variable nature of our results using the high-low-close plot (although tricked out a bit) provided in that workbook. Every time F9 is pressed, we would see how much the variability in outcomes across policy types was subject to change (simply based on different separate and independent sets of random data draws). The result, shown in Figure 10.1, would depict an alternative array of outcomes that could be associated with a set of decisions. A second Data Table example is shown in Figure 10.2.
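The workbook itself is not reproduced here, but the mechanism is easy to sketch in code. The Python fragment below uses a hypothetical overbooking payoff loosely in the spirit of a reservations case (all parameter values are invented, not taken from the Lobo's Reservations workbook). One function call plays the role of a single F9 redraw, and averaging many redraws summarizes the volatility that the plot displays:

```python
import random

def one_redraw(seats_booked, capacity=100, fare=120.0, bump_cost=200.0,
               show_rate=0.9):
    """One 'F9 press': redraw the random show-ups and compute the payoff.
    All parameter values here are invented for illustration."""
    shows = sum(random.random() < show_rate for _ in range(seats_booked))
    bumped = max(0, shows - capacity)          # overbooked passengers
    return min(shows, capacity) * fare - bumped * bump_cost

def average_payoff(seats_booked, trials=5000):
    """Average many redraws -- the scripted analogue of tapping F9 repeatedly."""
    return sum(one_redraw(seats_booked) for _ in range(trials)) / trials

random.seed(1)
print(average_payoff(100))  # booking exactly to capacity: no bumping possible
print(average_payoff(140))  # heavy overbooking: bump costs drag the payoff down
```

Comparing the averages (and, in a chart, the spread) across booking policies is exactly the kind of policy-by-policy volatility comparison the high-low-close plot makes visible.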
Many applications such as Microsoft MapPoint and RISKOptimizer can be leveraged through the primary interfaces for which they were designed, but they can also be called from behind the scenes through the same Visual Basic (VB) developer environment discussed in Chapter 11. From a decision support development perspective, there are several advantages to making such calls from behind the scenes. First and foremost, behind-the-scenes control can eliminate the need for users to become acquainted with alternative interfaces while using a DSS that leverages those applications' capabilities. Another advantage is the potential avoidance of outputs that automatically accompany the use of these applications but are nevertheless visual and informational distractions from the main point of the DSS design. The appearance of seamlessness in a designed DSS is also facilitated by VB-driven automated calls to applications. This has the potential to engender greater confidence in the developed DSS, as well as in its developers. This chapter covers several approaches to working with such applications, roughly in the order in which they were introduced throughout this book.
Calls to MapPoint
The Chp12_MapPointCall workbook provides a template through which we can demonstrate how Excel, through VB, can leverage some of the functionality of MapPoint. As with all other demonstrations in this chapter, we'll present only a smattering of what can actually be done. To start, let's consider a hypothetical need to get information regarding a route that starts in Seattle and passes through four additional cities before returning to Seattle. The workbook outlines these stops (shown in Figure 12.1).
People make decisions every few minutes. Some of these decisions may appear trivial, such as what shirt to wear or what to have for lunch. Some may appear routine: Should I provide my PIN? Should I respond to a question from a colleague? Others are more complex: Should I recommend that my client invest in a particular firm? Should I offer to take on additional work? Should I purchase a new technology? Should I recommend a settlement in a lawsuit? These aren't simple questions, and they don't have obvious answers and outcomes. Sometimes we need help with decisions. The sources of help can vary, but increasingly they tend to have two things in common: analytical strength and ease of use. These sources of assistance often take the form of prepackaged, off-the-shelf software tools. However, they can also be uniquely customized, and more and more frequently, this customization is being developed by individual users.
We can describe these tools by listing their potential benefits, some of which appear in Figure 1.1. For many developers and analysts, only a few of the attributes listed are important. For others, the full complement of potential benefits must be considered. Toward this goal, visualization is very important. Indeed, one could argue that the application of visualization in data analysis is critical to the development of Decision Support Systems (DSS).
Before attempting to construct a home, architects and builders need to know what resources they have at their disposal and how they can access these resources. The same holds in decision support development, but as you'll learn in subsequent chapters, the tools available to Excel developers are more numerous than a typical user would imagine. In this chapter, we'll start with the low-hanging fruit by becoming acquainted with the general nature of Excel's front-end development environment. Figure 2.1 provides an annotated view of what people typically see when they open a new file in Excel. Only a few key elements of this interface are central to our initial discussion.
Excel files are called “workbooks.” They contain any number of worksheets (spreadsheets) that can be used in tandem. In the vast majority of its use in practice, Excel is relegated to storing information. This is largely because most users don't know what else Excel can do. Not that storing information in Excel is bad, but there are often better alternatives for storage, such as databases in the case of very large data sets. Functionally, Excel's storage capability represents only the bare tip of this technological iceberg.