The visualization of analytical dynamics comes naturally to the tools developed in Excel, largely because of the dynamic nature of graphs constructed in Excel. For example, if we wanted to depict the range of possible outcomes associated with specific decisions whose outcomes had a describable level of uncertainty or variation, it would be simple enough to introduce a random term into tabular forms and then graph the result. As always, pressing the F9 key draws another random number from the built-in generator and updates the associated data tables and plots to represent the volatility of those outcomes.
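As a minimal sketch of this idea (the worksheet name, range, and demand figures below are illustrative assumptions, not taken from any particular workbook), a short VBA routine can seed a column with a random term and force a recalculation, just as pressing F9 would:

    Sub RedrawOutcomes()
        ' Inject a uniform random term into a column of the model and
        ' recalculate -- the same effect as pressing F9.
        Dim cell As Range
        For Each cell In Worksheets("Model").Range("C2:C21")
            cell.Formula = "=100 + (RAND() - 0.5) * 40"   ' base of 100, +/- 20
        Next cell
        Application.Calculate
    End Sub

Any chart built on that range redraws immediately after the recalculation, giving the dynamic picture of volatility described above.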
For example, based on the Data Table generated in the Lobo's Reservations case, we could graphically depict the variable nature of our results using the high-low-close plot (although tricked out a bit) provided in that workbook. Every time F9 is pressed, we would see how much the variability in outcomes across policy types is subject to change, based simply on separate, independent sets of random draws. The result, shown in Figure 10.1, depicts an alternative array of outcomes that could be associated with a set of decisions. A second Data Table example is shown in Figure 10.2.
Many applications such as Microsoft MapPoint and RISKOptimizer can be leveraged through the primary interfaces for which they were designed, but they can also be called from behind the scenes through the same Visual Basic (VB) developer environment discussed in Chapter 11. From a decision support development perspective, there are several advantages to making such calls from behind the scenes. First and foremost, behind-the-scenes control can eliminate the need for users to become acquainted with alternative interfaces in the course of using a DSS that leverages their capabilities. Another advantage is the potential avoidance of outputs that automatically accompany the use of these applications but are nevertheless visual and informational distractions from the main point of the DSS design. The appearance of seamlessness in a designed DSS is also facilitated by VB-driven automated calls to applications, which has the potential to engender greater confidence in the developed DSS, as well as in its developers. This chapter covers several approaches to working with such applications, in roughly the order in which they have been introduced throughout this book.
Calls to MapPoint
The Chp12_MapPointCall workbook provides a template through which we can demonstrate how Excel, through VB, can leverage some of the functionality of MapPoint. As with all other demonstrations in this chapter, we'll present only a smattering of what can actually be done. To start, let's consider a hypothetical need to get information regarding a route that starts in Seattle and passes through four additional cities before returning to Seattle. The workbook outlines these stops (shown in Figure 12.1).
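As a hedged sketch of such a behind-the-scenes call (late binding is used so no library reference is required; the intermediate city names are placeholders rather than the case's actual stops, the output cell is arbitrary, and the object-model calls reflect the standard MapPoint COM interface), the route could be built and measured as follows:

    Sub BuildRouteFromExcel()
        ' Drive MapPoint invisibly from Excel and read back the route distance.
        Dim mpApp As Object, mpMap As Object, mpRoute As Object
        Dim stops As Variant, i As Integer

        Set mpApp = CreateObject("MapPoint.Application")
        mpApp.Visible = False                       ' keep MapPoint behind the scenes
        Set mpMap = mpApp.ActiveMap
        Set mpRoute = mpMap.ActiveRoute

        ' Start and end in Seattle; the intermediate stops are placeholders
        stops = Array("Seattle, WA", "Portland, OR", "Boise, ID", _
                      "Spokane, WA", "Tacoma, WA", "Seattle, WA")
        For i = LBound(stops) To UBound(stops)
            mpRoute.Waypoints.Add mpMap.FindResults(stops(i)).Item(1)
        Next i

        mpRoute.Calculate
        Range("B2").Value = mpRoute.Distance        ' total distance, in map units
        mpApp.Quit
    End Sub

Because the application never becomes visible, the user of the DSS sees only the distance figure appear in the workbook, consistent with the seamlessness argument made above.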
People make decisions every few minutes. Some of these decisions may appear trivial, such as what shirt to wear or what to have for lunch. Some may appear routine: Should I provide my PIN? Should I respond to a question from a colleague? Others are more complex: Should I recommend that my client invest in a particular firm? Should I offer to take on additional work? Should I purchase a new technology? Should I recommend a settlement in a lawsuit? These aren't simple questions, and they don't have obvious answers or outcomes. Sometimes we need help with decisions. The sources of help can vary, but increasingly they tend to have two things in common: analytical strength and ease of use. These sources of assistance often take the form of prepackaged, off-the-shelf software tools. However, they can also be uniquely customized, and more and more frequently this customization is being done by individual users.
We can describe these tools by listing their potential benefits, some of which appear in Figure 1.1. For many developers and analysts, only a few of the attributes listed are important. For others, the full complement of potential benefits must be considered. Toward this goal, visualization is particularly important. Indeed, one could argue that the application of visualization in data analysis is critical to the development of Decision Support Systems (DSS).
Before attempting to construct a home, architects and builders need to know what resources they have at their disposal and how they can access these resources. The same holds in decision support development, but as you'll learn in subsequent chapters, the tools available to Excel developers are more numerous than a typical user would imagine. In this chapter, we'll start with the low-hanging fruit by becoming acquainted with the general nature of Excel's front-end development environment. Figure 2.1 provides an annotated view of what people typically see when they open a new file in Excel. Only a few key elements of this interface are central to our initial discussion.
Excel files are called “workbooks.” They contain any number of worksheets (spreadsheets) that can be used in tandem. In the vast majority of its use in practice, Excel is relegated to storing information, largely because most users don't know what else Excel can do. Not that storing information in Excel is bad, but better alternatives for storage, such as databases, are often available for very large sets of data. Functionally, the storage capability of Excel represents only the bare tip of this technological iceberg.
Many effective decision support systems rely not only on the ability of a manager to present information, analysis, and meaningful dynamics (for example, through graphics), but also on enabling users to realize the intended use of those elements by themselves (without the developer holding their hand). This often means providing documentation that goes beyond cell labeling and embedded comments. It may mean building a customized, user-driven help or wizard component into the DSS, one that makes use not only of automated numerical and graphical demos but also of other objects, such as images and .wav files, incorporated into the workbook. It also often means a level of automation that stretches the limits of the kind of work that can happen at the spreadsheet interface alone; in fact, it may be impossible to achieve using only the top layer of an Excel workbook. Let's see how macros and the Visual Basic (VB) Editor might provide us with some new options in this regard.
The Visual Basic Editor
Let's take a deeper look into one of the first macros I introduced; opening the Chp8_LobosInventory workbook provides us with an opportunity. To see the code associated with this macro, select the Developer tab on the main menu bar and then either select Visual Basic (which opens the general VB Editor screen) or click Macros (see Figure 11.1); in the latter case, select from the associated dialog box the name of the code you are interested in viewing (here, generically called Macro1) and then click Edit.
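For orientation, a recorded macro of this kind typically looks something like the following in the editor (a hypothetical sketch; the ranges and formula are illustrative, not the actual contents of Macro1):

    Sub Macro1()
        ' Typical output of the macro recorder: select a cell, enter a
        ' formula, and fill it down a column.
        Range("B2").Select
        ActiveCell.FormulaR1C1 = "=RC[-1]*1.1"
        Selection.AutoFill Destination:=Range("B2:B20")
        Range("B2:B20").Select
    End Sub

Note how literally the recorder transcribes each interface action; much of the craft of VB development lies in cleaning up and generalizing such transcriptions.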
Decision modeling describes the use of data and logic to clarify the specific nature of a situation for which assistance in the decision-making process may be needed. The hope is that clarifying such details makes meaningful suggestions and solutions easier to develop. Most management problems for which decisions are sought can be represented by three standard elements: objectives, decision variables, and constraints, as the examples below (and the small formulation sketched after them) illustrate.
Objectives
Maximize profit
Provide earliest entry into market
Minimize employee discomfort and turnover
Decision variables
Determine what price to use
Determine the length of time tests should be run on a new product or service
Determine the responsibilities to assign to each worker
Constraints
Can't charge below cost
Must test enough to meet minimum safety regulations
Ensure responsibilities are shared by two workers at most
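To see how the three elements interlock, consider a minimal pricing formulation (the symbols and demand function are illustrative assumptions, not drawn from the cases above): with price $p$ as the decision variable, unit cost $c$, and demand $q(p)$,

\[ \max_{p} \; (p - c)\,q(p) \quad \text{subject to} \quad p \ge c , \]

that is, profit is maximized while never charging below cost.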
All of these elements can be visualized graphically, often to the benefit of analysis and general insights. Our initial discussion will be limited to objectives and decision variables; we'll discuss constraints further on in this chapter. In most business scenarios, managers are faced with making a set of decisions that impact a final outcome (objective). This tends to make the decision process more complex, and sometimes the rationale for making specific decisions is difficult to describe.
As you've probably guessed by now, decisions can become increasingly complex as we increase the number of variables and constraints to maintain reality and practicality in our decision-making process. Similarly, the ability to concisely provide visualizations of what is possible and what is ideal (and, conversely, what isn't) becomes increasingly challenging. Given this complexity and the perceived need in industry to nevertheless pursue means of assisting people in decision making, the concept of the dashboard has come into being and continues to gain popularity.
A dashboard, from a general decision-making perspective, is basically a computer interface that allows individual users to simultaneously view various depictions (that is, presented structures) of data and information, as well as various subsets of data (that is, content) relevant to a particular task and user context. For example, Figure 13.1 shows four dashboards that I’ve personally put into use for research and consulting purposes in the recent past.
Two of these are highly oriented toward geographic (specifically, logistics) tasks; the other two are designed with project management tasks in mind. You’ll notice that each of these consists of multiple frames and multiple control- and form-based interfaces. Some make use of parameterization forms more so than others. Some make use of graphs and charts predominantly, whereas others make rich use of tables with key indices summarized. All of them were designed as applications that could function through the use of Excel alone, and are highly mobile from a distributional perspective.
A natural extension of a discussion of simulation, given our existing understanding of optimization, is how the two methods can be used together. The basic question behind simulation optimization is:
What decision (if any) tends to provide relatively superior results regardless of the uncertainty associated with the real-world problem it is designed to resolve?
Simulation provides the means by which to incorporate uncertainty into the evaluation of a specific decision or a predetermined handful of such decisions; however, this question implies a much greater scope. It suggests a formal search for the best decision across a vast range of possible alternatives. For simulated variants, the term best takes into account not just the average or expected value of parameters describing the setting (as would be common in discrete optimization), but also the potentially extreme performance of outliers, be it good or bad. For system simulations, the best would necessarily need to relate further to performance as the result of a sequence of events in which the interplay of initial guiding decisions, complicated by uncertainty, might be extremely difficult to assess without sufficient simulation runs. The follow-up question then is:
How can we integrate the techniques associated with simulation and optimization into a single, cohesive mechanism for meaningful decision support?
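One barebones answer, sketched here purely for illustration (the profit function, candidate range, and worst-case weighting are assumptions, not the book's examples), is a brute-force loop that simulates each candidate decision many times and keeps the one with the best risk-adjusted score:

    Sub SimulateAndOptimize()
        ' Evaluate each candidate order quantity over many random trials,
        ' scoring on mean profit penalized by the worst observed outcome.
        Const TRIALS As Long = 1000
        Dim decision As Long, bestDecision As Long
        Dim t As Long, demand As Double, outcome As Double
        Dim total As Double, worst As Double, score As Double, bestScore As Double

        Randomize
        bestScore = -1E+30
        For decision = 50 To 150 Step 10
            total = 0: worst = 1E+30
            For t = 1 To TRIALS
                demand = 60 + Rnd() * 80                    ' uniform on [60, 140]
                outcome = 10 * WorksheetFunction.Min(CDbl(decision), demand) _
                          - 4 * decision                    ' revenue minus cost
                total = total + outcome
                If outcome < worst Then worst = outcome
            Next t
            score = total / TRIALS + 0.5 * worst            ' mean plus worst-case term
            If score > bestScore Then
                bestScore = score
                bestDecision = decision
            End If
        Next decision
        MsgBox "Best order quantity found: " & bestDecision
    End Sub

The enumeration here is deliberately naive; more sophisticated integrations replace the outer loop with a formal search, which is exactly where optimization tools enter the picture.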
Excel gives us Solver, a great tool that helps us determine which specific decisions (values of our decision variables) should be used to achieve our objectives, subject to the issues constraining us. Generally, Solver can be accessed under the Data tab in the Analysis section (Figure 6.1). If you do not find Solver in your Excel Data tab, either Solver was not selected for installation when your copy of Excel was installed or it is currently not activated. To activate Solver, click Options > Add-Ins, select Excel Add-Ins in the Manage drop-down menu, and then click Go. The Add-Ins dialog box opens, enabling you to choose Solver Add-In (Figure 6.2).
Optimization with Solver
The general structure of Solver fits perfectly with the description in Chapter 4 of the three key elements of decision structuring: objectives, decision variables, and constraints (Figure 6.3). Solver is designed to provide the best solutions possible, based on the information we provide. It has its limits (it breaks down with extremely complex or large problems), but it does a nice job for smaller problems that still present challenges to decision makers.
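To connect Solver's structure back to VB-driven automation, here is a hedged sketch of running Solver from code (this requires the Solver add-in to be active and referenced from VBA; the cell addresses stand in for an objective cell, decision-variable cells, and a constraint cell, and are purely illustrative):

    Sub RunSolverModel()
        ' Maximize the objective in B10 by changing B2:B4, subject to
        ' nonnegative decision variables and a resource limit in D5.
        SolverReset
        SolverOk SetCell:="$B$10", MaxMinVal:=1, ByChange:="$B$2:$B$4"
        SolverAdd CellRef:="$B$2:$B$4", Relation:=3, FormulaText:="0"    ' >= 0
        SolverAdd CellRef:="$D$5", Relation:=1, FormulaText:="100"       ' <= 100
        SolverSolve UserFinish:=True    ' solve without showing the results dialog
    End Sub

Each call maps directly onto a field of the Solver dialog: the objective (SetCell), the decision variables (ByChange), and one constraint per SolverAdd.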
Industrial design processes can be described as human design problem solving, incorporating the acquisition, evaluation, production, and transfer of specific knowledge. In this paper, we describe the connection and interaction between visualization and reasoning during different stages of the design process, focusing on three early stages: clarifying the task, developing the concept, and designing an overall solution.
This paper provides a rather general description of design processes and more detailed remarks on design knowledge and design actions. It specifically focuses on design concepts as visual key elements in industrial design processes. We will address the importance of externalization and visualization as means for thinking and knowledge generation and transfer in industrial design in general.
The design process is described as an interplay of the parallel and iterative development of three domains: knowledge, concept, and design. In contrast to linear schemes, this paper proposes a design process scheme focused on iterative circles and possibilities for parallel processing. Industrial design knowledge is described and compared to relevant knowledge in other disciplines, in particular engineering design knowledge. We describe the strong link between designers' individual biographies, design knowledge, and the outcomes of design processes.
Design concepts will be discussed as extremely compact representations of core characteristics of the artifacts to be designed, serving as a guide to the design process.
Design actions as described in this paper are characterized by the simultaneous occurrence of thought and externalization processes. Different kinds of visualization are discussed with regard to their role in reasoning during industrial design processes.
This paper concludes by sketching two perspectives. One addresses the need for interdisciplinary research on new visualization tools with regard to human reasoning in design processes. The second one gives an impression of how visualization tools and methods of industrial design can supplement other disciplines.
We investigate language-agnostic algorithms for the construction of unsupervised distributional semantic models using web-harvested corpora. Specifically, a corpus is created from web document snippets, and the relevant semantic similarity statistics are encoded in a semantic network. We propose the notion of semantic neighborhoods, which are defined using co-occurrence or context similarity features. Three neighborhood-based similarity metrics are proposed, motivated by the hypotheses of attributional and maximum sense similarity. The proposed metrics are evaluated against human similarity ratings, achieving state-of-the-art results.
We study cross-graph charging schemes for graphs drawn in the plane. These are charging schemes where charge is moved across vertices of different graphs. Such methods have recently been used to obtain various properties of triangulations that are embedded in a fixed set of points in the plane. We generalize this method to obtain results for various other types of graphs that are embedded in the plane. Specifically, we obtain a new bound of O*(187.53^N) (where the O*(⋅) notation hides polynomial factors) for the maximum number of crossing-free straight-edge graphs that can be embedded in any specific set of N points in the plane (improving upon the previous best upper bound of 207.85^N in Hoffmann, Schulz, Sharir, Sheffer, Tóth and Welzl [14]). We also derive upper bounds for the numbers of several other types of plane graphs (such as connected and bi-connected plane graphs), and obtain various bounds on the expected vertex degrees in graphs that are uniformly chosen from the set of all crossing-free straight-edge graphs that can be embedded in a specific point set.
We then apply the cross-graph charging-scheme method to graphs that allow certain types of crossings. Specifically, we consider graphs with no set of k pairwise crossing edges (more commonly known as k-quasi-planar graphs). For k=3 and k=4, we prove that, for any set S of N points in the plane, the number of graphs that have a straight-edge k-quasi-planar embedding over S is only exponential in N.
The concept of a function is of great importance in design. This paper describes from theory how designers should reason about functions when designing. This paper introduces the link model, showing how functions and properties link the product and its use, to the perceived value of the product. The important and useful distinction between functions and properties is made along with the distinction between “wirk functions,” which is what the product does when operating, and “use functions,” which is what the product is used for. The paper makes a novel contribution beyond previous literature, showing that not only is a product's behavior or mode of action designed but also the use activity of the end user. Based on the theoretical perspective unfolded, the authors offer nine mind-sets for both design practitioners and researchers to consider when reasoning about functions.
Function modeling is considered potentially useful in various fields of engineering, including engineering design. However, a close look at practice reveals that practitioners make little use of formal function modeling, while the concept of “function” frequently appears in many practical methods without a rigorous definition. This paper tries to understand why formal function modeling is not utilized in industry by analyzing usage cases of function. By observing product development activities in industry, the paper identifies three problems that prevent formal function modeling from wider application in practice: practitioners' neglect of function modeling, the lack of practically useful function reasoning, and the complexity of the methods and tools of formal function modeling, which makes them impractical. Finally, the paper proposes strategies to tackle these problems and illustrates some research efforts in this regard.