As a large developing country with an ambition to become one of the world's economic superpowers, China sees its future closely tied to its information technology industry, as well as to the deployment and use of IT, the Internet, and e-commerce. However, currently there is a great disparity between this vision for “informatization” and the reality of e-commerce diffusion and use. That disparity is rooted in aspects of China's environment and policy which shape the diffusion, use, and impacts of e-commerce.
China's economy has grown at an annual rate of more than 8% since 1995, but that growth has been accompanied by increasing inequality in income. There is also wide geographic inequality, with the eastern coastal regions around Beijing, Shanghai, and Guangdong having much higher incomes than the rest of the country. These regions, and especially their cities, have much better infrastructure and many more Internet users than the remote and economically poorer provinces elsewhere in the country. The larger enterprises, especially those located in these coastal regions, have larger IT budgets and better-trained staff than small and medium-sized enterprises, and are more capable of engaging in e-commerce, but tend to be conservative. The smaller, more entrepreneurial companies often lack the financial and human resources to engage in e-commerce.
This appendix describes the Global E-Commerce Survey (GEC Survey) – the primary data source used for the country case studies – including objectives, instrument construction, survey administration, sampling methodology, and response rates.
Objective
The GEC Survey was designed to benchmark the state of e-commerce diffusion in firms, industries, and countries, and to determine whether the Internet and e-commerce are making some of them more competitive than others. Specifically, the survey focuses on six areas: 1) globalization of firms and markets; 2) use of e-commerce technologies (EDI, Internet, extranet, etc.); 3) drivers for Internet use; 4) barriers to conducting business on the Internet; 5) prevalence of online sales and online service offerings; and 6) benefits from e-commerce use. Each country chapter uses the GEC Survey as the primary data source. Country cases may also employ secondary data sources, and authors were encouraged to supplement GEC data as needed.
Countries and time period
Data were collected via telephone interviews in ten economies: Brazil, China, Denmark, France, Germany, Japan, Mexico, Singapore, Taiwan, and the United States. Interviews were conducted during the period 18 February 2002 to 5 April 2002. A total of 2,139 companies were interviewed.
Instrument design
The questionnaire was designed by researchers at the University of California, Irvine and reviewed and critiqued by International Data Corporation's Global Research Organization and its global subsidiaries in the countries studied.
Brazil presents an interesting case study of local factors influencing the adoption and impacts of e-commerce. Globalization is typically associated with the adoption of innovative technologies such as e-commerce that facilitate expansion into international markets and management of cross-border transactions. In the case of Brazil, however, its large size and considerable geographic distance from global production networks create a relatively inward-oriented economy. Other factors besides globalization have thus driven e-commerce. These include the need for financial efficiency driven by historically rampant inflation, as well as low GDP per capita typical of developing economies. Moreover, severely disproportionate wealth distribution impedes widespread adoption of certain forms of e-commerce. The overall result is the importance of local forces relative to global forces in driving e-commerce, the leadership of the financial sector in e-commerce adoption, and the greater innovativeness of large firms relative to small firms in the use of the technology.
Less international orientation. On average, firms in Brazil are less internationally oriented than those in other economies. Only 4% of firms in the sample have establishments abroad, versus 24% in the global sample. The share of sales coming from abroad is less than a third of that in other economies (4% versus 12%), and the share of procurement from foreign firms is less than half (10% versus 20%).
Local forces key. Local forces are more important than global forces in driving e-commerce diffusion. Reasons include Brazil's inward orientation, its large domestic economy, and its unique economic history and government policies.
Germany not only has a long history of being a leading innovator in several areas, but has also been a fast follower in adopting innovations, including information technologies. German firms have generally embraced and implemented IT solutions only after they have proved successful in other countries; once a technology is proven, however, adoption is widespread across large and small firms, and new technologies are integrated with existing ones to obtain maximum benefit. This is somewhat analogous to many firms' adoption strategy for new information and communications technologies. Such firms are unwilling to be the guinea pigs for brand-new, cutting-edge or bleeding-edge ICT that is often unproven, "buggy," unstable, and imperfect in many ways. Instead, fast-follower firms wait until just after early adopters have begun the diffusion and just before "critical mass" is reached.
Two important factors driving adoption of IT in Germany are the international orientation of the country's economy and the dynamism of its small and medium-sized enterprises (SMEs), the so-called Mittelstand. Large multinational firms use technologies such as EDI very heavily to coordinate regional and global operations and to compete in a high-wage environment. However, Germany stands out among other countries in that its SMEs use many of these technologies to an equal, and sometimes greater, extent than large firms. As suppliers to large multinationals and as international competitors in their own right, German SMEs have had to be innovative and flexible to survive.
By the complexity of a graph we mean the minimum number of union and intersection operations needed to obtain the whole set of its edges starting from stars. This measure of graphs is related to the circuit complexity of boolean functions.
We prove some lower bounds on the complexity of explicitly given graphs. This yields some new lower bounds for boolean functions, as well as new proofs of some known lower bounds in the graph-theoretic framework. We also formulate several combinatorial problems whose solution would have intriguing consequences in computational complexity.
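As a small worked instance of this definition (an illustration added here, not part of the original abstract): writing $\mathrm{st}(v)$ for the star of a vertex $v$, i.e. the set of all edges incident to $v$, the edge set of the complete bipartite graph with disjoint parts $A$ and $B$ can be obtained as
\[
E(K_{A,B}) \;=\; \Bigl(\bigcup_{a\in A}\mathrm{st}(a)\Bigr)\;\cap\;\Bigl(\bigcup_{b\in B}\mathrm{st}(b)\Bigr),
\]
so its complexity in this measure is at most $|A|+|B|-1$: $|A|-1$ unions inside the first bracket, $|B|-1$ unions inside the second, and one intersection.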
For an integer $b \geq 1$, the $b$-choice number of a graph $G$ is the minimum integer $k$ such that, for every assignment of a set $S(v)$ of at least $k$ colours to each vertex $v$ of $G$, there is a $b$-set colouring of $G$ that assigns to each vertex $v$ a $b$-set $B(v) \subseteq S(v) \; (|B(v)|=b)$ so that adjacent vertices receive disjoint $b$-sets. This is a generalization of the notions of choice number and chromatic number of a graph. Using probabilistic arguments, we show that, for some constant $c > 0$ (independent of $b$), the $b$-choice number of any graph $G$ on $n$ vertices is at most $c (b\chi) (\ln (n/\chi)+1)$, where $\chi = \chi(G)$ denotes the chromatic number of $G$. For any fixed $b$, this bound is tight up to a constant factor for each $n,\chi$. This generalizes and extends a result of Noga Alon [1], wherein a similar bound was obtained for 1-choice numbers of complete $\chi$-partite graphs with each part having size $n/\chi$. We also show that the proof arguments are constructive, leading to polynomial-time algorithms for the list colouring problem on certain classes of graphs, provided each vertex is given a list of sufficiently large size.
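To spell out the special case the abstract refers to (added here for illustration): for $b=1$ and $G$ the complete $\chi$-partite graph with parts of size $n/\chi$, the bound specializes to
\[
\mathrm{ch}(G)\;\le\; c\,\chi\Bigl(\ln\frac{n}{\chi}+1\Bigr),
\]
where $\mathrm{ch}(G)$ denotes the (1-)choice number; this matches the form of the similar bound obtained in [1] for such graphs.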
We derive the distribution of the number of links and the average weight for the shortest path tree (SPT) rooted at an arbitrary node to $m$ uniformly chosen nodes in the complete graph of size $N$ with i.i.d. exponential link weights. We rely on the fact that the full shortest path tree to all destinations (i.e., $m=N-1$) is a uniform recursive tree to derive a recursion for the generating function of the number of links of the SPT, and we solve this recursion exactly.
The explicit form of the generating function allows us to compute the expectation and variance of the size of the subtree for all $m$. We also obtain exact expressions for the average weight of the subtree.
The SGI Power C compiler (PCA) does not allow more threads than processors (cf. the document "Multiprocessing C Compiler Directives"). In this sense, programs execute much as they would under the fork() programming model.
The keyword critical corresponds most closely with mutex in that only one thread at a time can execute this code and all threads execute it. The keyword synchronize corresponds most closely with barrier in that all threads must arrive at this point before any thread can go on.
There are additional directives as well. The directive one processor means that the first thread to reach this code executes it while the other threads wait; once the first thread has executed it, the code is skipped by subsequent threads. There is also an enter gate directive with a corresponding exit gate: threads must wait at the exit gate until all threads have passed the matching enter gate.
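A minimal sketch of how these directives might look in PCA-style C follows. It assumes the usual #pragma spelling from the Power C documentation, and the buffer-initialization work itself is a made-up placeholder, not an example from the SGI manuals:

    #include <stdio.h>

    #define N 1024
    static double buf[N];      /* shared by all threads */

    void setup_and_report(void)
    {
        int i;

    #pragma parallel shared(buf) local(i)
        {
            /* one processor: the first thread to arrive runs this loop while
               the others wait; threads arriving later skip it entirely */
    #pragma one processor
            for (i = 0; i < N; i++)
                buf[i] = 0.0;

            /* synchronize: a barrier -- no thread proceeds past this point
               until every thread has reached it */
    #pragma synchronize

            /* enter gate / exit gate: threads wait at the exit gate until
               all threads have passed the matching enter gate */
    #pragma enter gate
            /* ... work that every thread must at least have started ... */
    #pragma exit gate

            /* critical: every thread executes this, but only one at a time */
    #pragma critical
            printf("buffer initialised\n");
        }
    }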
Loops to run in parallel must be marked with the pfor directive. It takes the argument iterate (start index; number of times through the loop; increment/decrement amount).
A reduction variable is kept local to each thread, and the per-thread contributions must then be combined in a critical section.
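Put together, a parallel sum in this style might look like the following sketch; the function and variable names are illustrative, and the exact clause spellings should be checked against the PCA documentation:

    double parallel_sum(const double *a, int n)
    {
        double total = 0.0;   /* shared result */
        double psum;          /* per-thread partial sum (reduction variable) */
        int i;

    #pragma parallel shared(a, total) byvalue(n) local(i, psum)
        {
            psum = 0.0;

            /* pfor with iterate(start index; number of times through the
               loop; increment amount), as described above */
    #pragma pfor iterate(i = 0; n; 1)
            for (i = 0; i < n; i++)
                psum += a[i];

            /* each thread adds its contribution inside a critical section */
    #pragma critical
            total += psum;
        }

        return total;
    }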
The need for speed. Since the beginning of the era of the modern digital computer in the early 1940s, computing power has increased at an exponential rate (see Fig. 1). Such exponential growth is captured by the well-known "Moore's Law," first advanced in 1965 by Gordon Moore, who later co-founded Intel: the number of transistors that can be placed on an integrated circuit doubles roughly every two years (a figure often quoted as 18 months). Clearly there has been a great need for ever more computation, and this need continues today unabated. The calculations performed by those original computers were in the fields of ballistics, nuclear fission, and cryptography. Today these same fields, in the form of computational fluid dynamics, advanced simulation for nuclear testing, and cryptography, are among computing's Grand Challenges.
In 1991, the U.S. Congress passed the High Performance Computing Act, which authorized the Federal High Performance Computing and Communications (HPCC) Program. A class of problems developed in conjunction with the HPCC Program was designated "Grand Challenge Problems" by Dr. Ken Wilson of Cornell University. These problems were characterized as "fundamental problems in science and engineering that have broad economic or scientific impact and whose solution can be advanced by applying high performance computing techniques and resources." Since then, various scientific and engineering committees and governmental agencies have added problems to the original list. As a result, today there are many Grand Challenge problems in engineering, mathematics, and all the fundamental sciences. Recent Grand Challenge efforts pursue ambitious goals: they strive to
build more energy-efficient cars and airplanes,
design better drugs,
forecast weather and predict global climate change,
Let $H$ be a fixed graph on $h$ vertices. We say that a graph $G$ is induced $H$-free if it does not contain any induced copy of $H$. Let $G$ be a graph on $n$ vertices and suppose that at least $\epsilon n^2$ edges have to be added to or removed from it in order to make it induced $H$-free. It was shown in [5] that in this case $G$ contains at least $f(\epsilon,h)n^h$ induced copies of $H$, where $1/f(\epsilon,h)$ is an extremely fast growing function of $1/\epsilon$ that is independent of $n$. As a consequence, it follows that for every $H$, testing induced $H$-freeness with one-sided error has query complexity independent of $n$. A natural question, raised by the first author in [1], is to decide for which graphs $H$ the function $1/f(\epsilon,H)$ can be bounded from above by a polynomial in $1/\epsilon$. An equivalent question is: for which graphs $H$ can one design a one-sided error property tester for induced $H$-freeness whose query complexity is polynomial in $1/\epsilon$? We settle this question almost completely by showing that, quite surprisingly, for any graph other than the paths of lengths 1, 2, and 3, the cycle of length 4, and their complements, no such property tester exists. We further show that a similar result also applies to the case of directed graphs, thus answering a question raised by the authors in [9]. We finally show that the same results hold even in the case of two-sided error property testers. The proofs combine combinatorial, graph-theoretic and probabilistic arguments with results from additive number theory.
The theorems of Hindman and van der Waerden belong to the classical theorems of partition Ramsey Theory. The Central Sets Theorem is a strong simultaneous extension of both theorems that applies to general commutative semigroups. We give a common extension of the Central Sets Theorem and Ramsey's theorem.
Numerical computations are a fundamental tool for engineers and scientists. The current practice of science and engineering demands that nontrivial computations be performed with both great speed and great accuracy. More and more, one finds that scientific insight and technological breakthroughs are preceded by intense computational efforts such as modeling and simulation. It is clear that computing is, and will continue to be, central to the further development of science and technology.
As market forces and technological breakthroughs lowered the cost of computational power by several orders of magnitude, there was a natural migration from large-scale mainframes to powerful desktop workstations. Vector processing and parallelism became possible, and this parallelism gave rise to a new collection of algorithms. Parallel architectures matured, in part driven by the demand created by the algorithms. Large computational codes were modified to take advantage of these parallel supercomputers. Of course, the term supercomputer has referred, at various times, to radically different parallel architectures, including vector processors, various shared memory architectures, distributed memory clusters, and even computational grids. Although the landscape of scientific computing changes frequently, there is one constant: there will always be a demand in the research community for high-performance computing.
When computations are first introduced in beginning courses, they are often straightforward “vanilla” computations, which are well understood and easily done using standard techniques and/or commercial software packages on desktop computers. However, sooner or later, a working scientist or engineer will be faced with a problem that requires advanced techniques, more specialized software (perhaps coded from scratch), and/or more powerful hardware.
Every node of an undirected connected graph is coloured white or black. Adjacent nodes can be compared, and the outcome of each comparison is either 0 (same colour) or 1 (different colours). The aim is to discover a node of the majority colour, or to conclude that there are equal numbers of black and white nodes. We consider randomized algorithms for this task and establish upper and lower bounds on their expected running time. Our main contributions are lower bounds showing that some simple and natural algorithms for this problem cannot be improved in general.
We consider the complexity of approximating the partition function of the ferromagnetic Ising model with varying interaction energies and local external magnetic fields. Jerrum and Sinclair provided a fully polynomial randomized approximation scheme for the case in which the system is consistent in the sense that the local external fields all favour the same spin. We characterize the complexity of the general problem by showing that it is equivalent in complexity to the problem of approximately counting independent sets in bipartite graphs, and is thus complete in a logically defined subclass of #P previously studied by Dyer, Goldberg, Greenhill and Jerrum. By contrast, we show that the corresponding computational task for the $q$-state Potts model with local external magnetic fields and $q>2$ is complete for all of #P with respect to approximation-preserving reductions.