reg-is-ter: a device for storing small amounts of data
al-lo-cate: to apportion for a specific purpose
Webster's Dictionary
The Translate, Canon, and Codegen phases of the compiler assume that there are an infinite number of registers to hold temporary values and that move instructions cost nothing. The job of the register allocator is to assign the many temporaries to a small number of machine registers, and, where possible, to assign the source and destination of a move to the same register so that the move can be deleted.
From an examination of the control and dataflow graph, we derive an interference graph. Each node in the interference graph represents a temporary value; each edge (t1, t2) indicates a pair of temporaries that cannot be assigned to the same register. The most common reason for an interference edge is that t1 and t2 are live at the same time. Interference edges can also express other constraints; for example, if a certain instruction a ← b ⊕ c cannot produce results in register r12 on our machine, we can make a interfere with r12.
Next we color the interference graph. We want to use as few colors as possible, but no pair of nodes connected by an edge may be assigned the same color. Graph coloring problems derive from the old mapmakers' rule that adjacent countries on a map should be colored with different colors.
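The coloring step can be sketched with a simple greedy strategy: visit each node and give it the lowest register index not already taken by a colored neighbor. The temporaries, interference edges, and register count below are hypothetical, and a real allocator (with simplify, spill, and select phases) is considerably more involved.

```python
# A minimal sketch of greedy interference-graph coloring.
# Nodes are temporaries; an edge means "may not share a register".

def color_graph(nodes, edges, num_registers):
    """Assign each node a color (register index) distinct from its
    neighbors. A node gets None if no register is free (a spill)."""
    # Build adjacency sets from the interference edges.
    adj = {n: set() for n in nodes}
    for t1, t2 in edges:
        adj[t1].add(t2)
        adj[t2].add(t1)

    colors = {}
    for n in nodes:
        taken = {colors[m] for m in adj[n] if m in colors}
        free = [c for c in range(num_registers) if c not in taken]
        colors[n] = free[0] if free else None  # None = spill to memory
    return colors

# Hypothetical temporaries a..d forming an interference chain;
# two machine registers suffice for this graph.
assignment = color_graph(
    ["a", "b", "c", "d"],
    [("a", "b"), ("b", "c"), ("c", "d")],
    num_registers=2,
)
```

Because the interference graph here is a chain, alternating two colors works; a clique of k temporaries would need k registers.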
mem-o-ry: a device in which information can be inserted and stored and from which it may be extracted when wanted
hi-er-ar-chy: a graded or ranked series
Webster's Dictionary
An idealized random access memory (RAM) has N words indexed by integers such that any word can be fetched or stored – using its integer address – equally quickly. Hardware designers can make a big slow memory, or a small fast memory, but a big fast memory is prohibitively expensive. Also, one thing that speeds up access to memory is its nearness to the processor, and a big memory must have some parts far from the processor no matter how much money might be thrown at the problem.
Almost as good as a big fast memory is the combination of a small fast cache memory and a big slow main memory; the program keeps its frequently used data in cache and the rarely used data in main memory, and when it enters a phase in which datum x will be frequently used it may move x from the slow memory to the fast memory.
It's inconvenient for the programmer to manage multiple memories, so the hardware does it automatically. Whenever the processor wants the datum at address x, it looks first in the cache, and – we hope – usually finds it there. If there is a cache miss – x is not in the cache – then the processor fetches x from main memory and places a copy of x in the cache so that the next reference to x will be a cache hit.
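The lookup just described can be modeled as a toy fully associative cache with least-recently-used eviction in front of a big slow memory. The capacity, addresses, and access sequence below are made up for illustration; real caches are set-associative and work on cache lines, not single words.

```python
# A toy model of the cache lookup described above: hits are served
# from the small fast cache; misses fetch from slow memory and may
# evict the least recently used line.

from collections import OrderedDict

class Cache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()  # address -> value, kept in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, memory, addr):
        if addr in self.lines:
            self.hits += 1
            self.lines.move_to_end(addr)        # mark as most recently used
        else:
            self.misses += 1
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict least recently used
            self.lines[addr] = memory[addr]     # copy from slow memory
        return self.lines[addr]

memory = {addr: addr * 10 for addr in range(8)}  # the big slow memory
cache = Cache(capacity=2)
for addr in [3, 3, 5, 3, 7, 5]:  # repeated references to 3 hit in cache
    cache.read(memory, addr)
```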
lex-i-cal: of or relating to words or the vocabulary of a language as distinguished from its grammar and construction
Webster's Dictionary
To translate a program from one language into another, a compiler must first pull it apart and understand its structure and meaning, then put it together in a different way. The front end of the compiler performs analysis; the back end does synthesis.
The analysis is usually broken up into
Lexical analysis: breaking the input into individual words or “tokens”;
Syntax analysis: parsing the phrase structure of the program; and
Semantic analysis: calculating the program's meaning.
The lexical analyzer takes a stream of characters and produces a stream of names, keywords, and punctuation marks; it discards white space and comments between the tokens. It would unduly complicate the parser to have to account for possible white space and comments at every possible point; this is the main reason for separating lexical analysis from parsing.
Lexical analysis is not very complicated, but we will attack it with high powered formalisms and tools, because similar formalisms will be useful in the study of parsing and similar tools have many applications in areas other than compilation.
LEXICAL TOKENS
A lexical token is a sequence of characters that can be treated as a unit in the grammar of a programming language. A programming language classifies lexical tokens into a finite set of token types.
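A minimal lexer along these lines can be sketched with regular expressions: it classifies the character stream into a finite set of token types and silently discards white space and comments, so the parser never sees them. The token set and comment syntax here are hypothetical.

```python
# A sketch of a lexical analyzer: a master regular expression with one
# named group per token type, tried in order at each position.

import re

TOKEN_SPEC = [
    ("IF",     r"\bif\b"),         # keyword (listed before ID so it wins)
    ("NUM",    r"\d+"),            # integer literal
    ("ID",     r"[A-Za-z_]\w*"),   # identifier
    ("ASSIGN", r"="),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+|/\*.*?\*/"),  # white space and comments: discarded
]
MASTER = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC), re.S)

def lex(text):
    tokens = []
    pos = 0
    while pos < len(text):
        m = MASTER.match(text, pos)
        if m is None:
            raise SyntaxError(f"illegal character at position {pos}")
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens

tokens = lex("if (x = 8) /* note */ y = 2")
```

Note how the comment and all white space vanish before the parser ever sees the token stream, which is exactly why lexical analysis is separated from parsing.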
Despite more than 30 years of development of computer-based medical information systems, the medical record remains largely paper-based. A major impediment to the implementation and use of these systems continues to be the lack of evaluation criteria and evaluation efforts. It is becoming apparent that the successful implementation and use of computer-based medical information systems depends on more than the transmission of technical details and the availability of systems. In fact, these systems have been characterized as radical innovations that challenge the internal stability of health care organizations. They have consequences that raise important social and ethical issues. This chapter provides a thorough historical and sociological context and analyzes how computer-based medical information systems affect (1) professional roles and practice patterns, (2) professional relations between individuals and groups, and (3) patients and patient care. In a point that is crucial for the development of health information systems, the authors argue that, aside from quality control, risk management, or fiscal efficiency, there is an ethical imperative for conducting system evaluations. This means that no commitment to computational advancement or sophistication is sufficient unless it includes a well-wrought mechanism for evaluating health computing systems in the contexts of their use. Failure to perform such evaluations becomes a shortcoming that is itself ethically blameworthy.
Introduction and history
Medical computing is not merely about medicine or computing. It is about the introduction of new tools into environments with established social norms and practices.
Privacy and confidentiality rights to nonintrusion and to enjoyment of control over personal information are so well known as to be regarded by some as obvious and beyond dispute. In unhappy fact, though, confidentiality protections are often meager and feckless because of the ease with which information is shared and the increasing number of people and institutions demanding some measure of access to that information. Health data are increasingly easy to share because of improvements in electronic storage and retrieval tools. These tools generally serve valid and valuable roles. But increased computing and networking power are changing the very idea of what constitutes a patient record, and this increases the “access dilemma” that was already a great challenge. The challenge may be put as follows: How can we maximize appropriate access to personal information (to improve patient care and public health) and minimize inappropriate or questionable access? Note that “personal information” includes not only electronic patient records, but also data about providers – physicians, nurses, and others – and their institutions. This chapter reviews the foundations of a right to privacy and seeks out an ethical framework for viewing privacy and confidentiality claims; identifies special issues and problems in the context of health computing and networks; considers the sometimes conflicting interests of patients, providers, and third parties; and sketches solutions to some of the computer-mediated problems of patient and provider confidentiality.
Telling right from wrong often requires appeal to a set of values. Some values are general or global, and they range across the spectrum of human endeavor. Identifying and ranking such values, and being clear about their conflicts and exceptions, is an important philosophical undertaking. Other values are particular or local. They may be special cases of the general values. So when “freedom from pain” is offered in the essay by Professors Bynum and Fodor as a medical value, it is conceptually linked to freedom, a general value. Local values apply within and often among different human actions: law, medicine, engineering, journalism, computing, education, business, and so forth. To be consistent, a commitment to a value in any of these domains should not contradict global values. To be sure, tension between and among local and global values is the stuff of exciting debate in applied ethics. And sometimes a particular local value will point to consequences that are at odds with a general value. Debates over these tensions likewise inform the burgeoning literature in applied or professional ethics. In this chapter, Bynum and Fodor apply the seminal work of James H. Moor in an analysis of the values that apply in the health professions. What emerges is a straightforward perspective on the way to think about advancing health computing while paying homage to those values.
The front end of the compiler translates programs into an intermediate language with an unbounded number of temporaries. This program must run on a machine with a bounded number of registers. Two temporaries a and b can fit into the same register if a and b are never “in use” at the same time. Thus, many temporaries can fit in few registers; if they don't all fit, the excess temporaries can be kept in memory.
Therefore, the compiler needs to analyze the intermediate-representation program to determine which temporaries are in use at the same time. We say a variable is live if it holds a value that may be needed in the future, so this analysis is called liveness analysis.
To perform analyses on a program, it is often useful to make a control-flow graph. Each statement in the program is a node in the flow graph; if statement x can be followed by statement y, there is an edge from x to y. Graph 10.1 shows the flow graph for a simple loop.
Let us consider the liveness of each variable (Figure 10.2). A variable is live if its current value will be used in the future, so we analyze liveness by working from the future to the past. Variable b is used in statement 4, so b is live on the 3 → 4 edge.
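Working from the future to the past can be written as an iterated fixed-point computation over the flow graph, using the standard dataflow equations in[n] = use[n] ∪ (out[n] − def[n]) and out[n] = ∪ in[s] over successors s. The statement numbers, def/use sets, and edges below describe a hypothetical loop for illustration.

```python
# A sketch of iterative liveness analysis on a small flow graph.

def liveness(nodes, succ, defs, uses):
    """Compute live-in and live-out sets for each node until the sets
    stop changing (a fixed point).

    in[n]  = use[n] | (out[n] - def[n])
    out[n] = union of in[s] over successors s of n
    """
    live_in = {n: set() for n in nodes}
    live_out = {n: set() for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in reversed(nodes):  # backward order converges faster
            out_n = set().union(*(live_in[s] for s in succ[n])) if succ[n] else set()
            in_n = uses[n] | (out_n - defs[n])
            if in_n != live_in[n] or out_n != live_out[n]:
                live_in[n], live_out[n] = in_n, out_n
                changed = True
    return live_in, live_out

# Hypothetical loop:
# 1: a = 0;  2: b = a + 1;  3: c = c + b;  4: a = b * 2;  5: if a < N goto 2
nodes = [1, 2, 3, 4, 5]
succ = {1: [2], 2: [3], 3: [4], 4: [5], 5: [2]}
defs = {1: {"a"}, 2: {"b"}, 3: {"c"}, 4: {"a"}, 5: set()}
uses = {1: set(), 2: {"a"}, 3: {"c", "b"}, 4: {"b"}, 5: {"a"}}
live_in, live_out = liveness(nodes, succ, defs, uses)
```

Consistent with the observation above, b is live out of statement 3 and live into statement 4, because statement 4 uses it.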
dom-i-nate: to exert the supreme determining or guiding influence on
Webster's Dictionary
Many dataflow analyses need to find the use-sites of each defined variable or the definition-sites of each variable used in an expression. The def-use chain is a data structure that makes this efficient: for each statement in the flow graph, the compiler can keep a list of pointers to all the use sites of variables defined there, and a list of pointers to all definition sites of the variables used there. In this way the compiler can hop quickly from use to definition to use to definition.
An improvement on the idea of def-use chains is static single-assignment form, or SSA form, an intermediate representation in which each variable has only one definition in the program text. The one (static) definition-site may be in a loop that is executed many (dynamic) times, thus the name static single-assignment form instead of single-assignment form (in which variables are never redefined at all).
The SSA form is useful for several reasons:
Dataflow analysis and optimization algorithms can be made simpler when each variable has only one definition.
If a variable has N uses and M definitions (which occupy about N + M instructions in a program), it takes space (and time) proportional to N · M to represent def-use chains – a quadratic blowup (see Exercise 19.8).
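Def-use chains can be sketched as a pair of tables mapping each variable to the statements that define it and the statements that use it; the compiler then hops between the two tables. The straight-line program below is hypothetical, and statements are identified simply by their index.

```python
# A sketch of building def-use information for a straight-line program.
# Each statement is (defined_var_or_None, list_of_used_vars).

def build_def_use(stmts):
    """Return (defs, uses): for each variable, the statement indices
    where it is defined and where it is used."""
    defs, uses = {}, {}
    for i, (d, used_vars) in enumerate(stmts):
        if d is not None:
            defs.setdefault(d, []).append(i)
        for u in used_vars:
            uses.setdefault(u, []).append(i)
    return defs, uses

# 0: a = 1;  1: b = a + 2;  2: a = b * b;  3: return a
stmts = [("a", []), ("b", ["a"]), ("a", ["b"]), (None, ["a"])]
defs, uses = build_def_use(stmts)
```

Under SSA form the variable a here would be split into a1 and a2, so that every use points at exactly one definition, which is what lets the quadratic def-use representation collapse.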
The intersection of bioethics and health informatics offers a rich array of issues and challenges for philosophers, physicians, nurses, and computer scientists. One of the first challenges is, indeed, to identify where the interesting and important issues lie, and how best, at least initially, we ought to address them. This introductory chapter surveys the current ferment in bioethics; identifies a set of areas of ethical importance in health informatics (ranging from standards, networks, and bioinformatics to telemedicine, epidemiology, and behavioral informatics); argues for increased attention to curricular development in ethics and informatics; and provides a guide to the rest of the book. Perhaps most importantly, this chapter sets the following tone: that in the face of extraordinary technological changes in health care, it is essential to maintain a balance between “slavish boosterism and hyperbolic skepticism.” This means, in part, that at the seam of three professions we may find virtue both by staying up-to-date and by not overstepping our bounds. This stance is called “progressive caution.” The air of oxymoron is, as ever in the sciences, best dispelled by more science.
A conceptual intersection
The future of the health professions is computational.
This suggests nothing quite so ominous as artificial doctors and robonurses playing out “what have we wrought?” scenarios in future cyberhospitals. It does suggest that the standard of care for information acquisition, storage, processing, and retrieval is changing rapidly, and health professionals need to move swiftly or be left behind.