We saw in the previous chapter that the application selection function plays a pivotal rôle in the design of any card-reading terminal. In this chapter, I will review the requirements for application selection and the options for cards and terminals to implement this function.
Scope and functions
Application selection is required for any microprocessor card, not only for multi-application cards, since it is the process by which the card application is started up.
For memory and wired-logic cards, even though there is no firmware ‘application’ on the card, the initial process is similar but application selection is implicit. In this case the terminal must select the appropriate supporting program and functions.
Where either the card or the terminal is multi-application, in the widest sense described in Chapter 2, application selection plays a pivotal rôle in determining which of the applications will be run; it links the technical protocol-handling functions with the logical transaction flow.
In some cases, the application to be selected is known before the card is inserted, either from the context of the transaction, or because an operator or user has already selected a function. In other cases the card and terminal must agree the selection, or offer a choice to the operator.
Card initialisation
Power up and reset
The process starts with powering up of the card and the way the card responds to the reset signal: its answer-to-reset (ATR).
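As a sketch of what an ATR decoder must handle, the following (a simplified illustration in Python, with function and field names of my own choosing) pulls out the convention byte TS, the format byte T0, the interface bytes TA<sub>i</sub>…TD<sub>i</sub> and the historical bytes, following the basic layout defined in ISO/IEC 7816-3; the TCK check byte and the semantics of the interface bytes are omitted:

```python
def parse_atr(atr: bytes) -> dict:
    """Minimal ATR parser (illustrative only; see ISO/IEC 7816-3)."""
    ts, t0 = atr[0], atr[1]
    if ts not in (0x3B, 0x3F):
        raise ValueError("invalid TS byte")
    convention = "direct" if ts == 0x3B else "inverse"
    k = t0 & 0x0F          # low nibble of T0: number of historical bytes
    y = t0 >> 4            # high nibble: presence bits for TA1..TD1
    i = 2
    protocols = set()
    while True:
        if y & 0x1: i += 1          # TAi present, skip it
        if y & 0x2: i += 1          # TBi present, skip it
        if y & 0x4: i += 1          # TCi present, skip it
        if y & 0x8:                 # TDi present: protocol + next presence bits
            td = atr[i]; i += 1
            protocols.add(td & 0x0F)
            y = td >> 4
        else:
            break
    return {"convention": convention,
            "protocols": protocols or {0},   # default is T=0
            "historical_bytes": atr[i:i + k]}
```

A card answering `3B 02 14 50`, for example, would decode as direct convention, protocol T=0, with two historical bytes.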
Multos was originally developed by Mondex International, the developers of one of the world's first electronic purses.
Mondex foresaw that issuers of electronic money would want to allow the e-cash application to reside on any smart card, not just ones issued by financial institutions. Hence, the electronic purse could find itself co-residing with other applications, such as a club membership or a credit–debit function, on the same card. It was also envisaged that applications on the smart card might be updated or added during the life of the card. And to protect the e-cash application from attack by fraudsters or hackers, it was essential that the multi-application smart-card platform must be secure and controlled by the card issuer.
Having worked on the electronic purse card for some years, the banks that owned Mondex could see the wide range of potential applications for the technology and, in particular, for the security functions it could offer. High levels of security, proof of security and tight control by the card issuer were key requirements. The design objectives of Multos were, therefore, fourfold:
A very high security platform;
A platform-independent programming language and application architecture;
The ability for multiple applications to share card memory and data in a secure and controlled way;
The ability to download applications after the card has been issued, but without the risk that unauthorised applications could be loaded or could corrupt existing applications.
Multos products first appeared in 1996, and in the same year the Maosco consortium was formed (by smart-card manufacturers and integrators, telcos and payment-card schemes MasterCard and American Express) to manage the specifications and to promote the operating system as an open standard.
This chapter picks up the story from Chapter 3 and looks at advances and variants in chip and card technology, particularly those that affect multi-application cards.
Microcontrollers
Architecture
The microcontroller chip is at the heart of smart-card technology; as we saw in Figure 1.2, an increasing proportion of all smart-card chips use one.
As in all computing, more advanced operating systems and applications are demanding more power from the processor. But while microprocessors for mainstream computing are able to satisfy the demand for more power by increasing the number of transistors packed onto the chip, and hence the heat generated, smart-card chips are limited in both the area of the chip (25 mm2 is normally considered the limit for reliability) and the amount of heat that can be dissipated.
A growing number of chips, therefore, make use of reduced instruction-set computing (RISC) cores, which give faster processing for a given power input. Separating the cryptography into its own processor can also help, and it is also more efficient if the input–output is handled by its own processor. Longer word sizes (32-bit words are now the norm in mainstream computers, and 64 bits quite common) are less beneficial in smart cards, and a 32-bit processor does not necessarily give a better speed–power ratio than 16 bits.
Cards can be tailored to the specific applications they will run: Multos cards have for some time been tailored to running the specific code that this operating system generates, and some processors are now optimised to run Java byte code directly.
The previous five chapters have identified almost 100 suitable applications for smart cards, each with demonstrable benefits in some situations. Few people would imagine that all these applications could reside on one card, although this would be technically possible, even today.
It is, therefore, useful to consider which applications have a good ‘fit’, so that they could share a card. Two perspectives are equally important:
The card and application issuers must find it both easy and profitable to work together;
For the card-holder, there should be a logical connection between the applications.
This chapter looks at each of these in turn: what makes it easy for organisations to work together and what are the barriers? And what ‘domains of use’ make sense to the card-holder?
Corporate culture
Probably the first factor that affects any co-operation or partnership between organisations concerns their corporate cultures and the personalities of the individuals involved. A large, slow-moving, risk-averse organisation is unlikely to make a good partner for a small, entrepreneurial business.
This, and indeed most of the organisational and commercial effects described in this chapter, can apply to multi-application projects involving several departments in one company, as well as to multiple companies. A successful co-operation in one country may not be transferable to other national operations of the same company.
In the case of card projects, attitudes to card-holders and users can vary widely, even within a sector.
Any card issued by a central or local government is liable to be branded as an ‘identity card’. In many Western liberal countries that poses automatic grounds for suspicion of the issuer's motives; this chapter explores some of these motives and the issues surrounding government-issued cards.
Databases and cards
All card systems depend on a central database in some form. But for government cards in particular, it is important to distinguish card projects from the databases that underlie them. The growth in government ID card projects has been accompanied by growing concerns, from civil liberties groups in particular, about the potential loss of privacy these projects entail, and the potential for abuse and discrimination.
In practice, the use of large-scale databases is expanding strongly and does offer some scope for abuse; a correctly implemented card system linked to these databases offers the potential to control access to the data and give individuals more power over the way their own data are used. It is ironic that much of the opposition to identity cards implies that the use of a card represents an infringement of privacy, whereas a well-implemented card system should actually help to manage the privacy risk and to give citizens a degree of control over access to their records that they are unlikely to gain without such a card.
In 1996, smart-card manufacturer Schlumberger demonstrated a card operating system to which it had added Java bytecode interpreter functions for a small number of methods. This initial implementation was very limited and involved an intermediate format, but it attracted considerable interest because it offered, for the first time, an opportunity for mainstream computer programmers to become involved in smart-card application development.
At the same time as Schlumberger was working on this development, Visa was working with Integrity Arts (a subsidiary of another smart-card manufacturer, Gemplus) to specify an ‘open’ smart-card operating system that could work on any manufacturer's card, permit the use of programmer-friendly development tools and allow applications to be downloaded to the card.
The two streams of activity together triggered Sun Microsystems to buy Integrity Arts and to endorse a specification for a Java implementation on a smart card, known as JavaCard 1.0, which drew on both efforts. Gemplus and Schlumberger joined with other smart-card companies to form the JavaCard Forum, which released the JavaCard 2.0 API at the end of 1997. This second release was considerably more detailed and allowed many more practical implementations.
However, even this version did not ensure portability of applications between smart-card platforms, and did not define in any detail the mechanisms for downloading applications to the card. To overcome these limitations, Visa published its Visa Open Platform specification in 1998, which defined mechanisms for secure applet download and on-card management.
The very first smart cards were issued in the mid 1980s as disposable prepaid cards for public telephones. They replaced magnetic stripe, optical or inductive cards with a technology that was generally more reliable; over 95% of all telephone cards for automatic use now employ smart-card technology. The competition is not from other card-based systems but from centralised systems where the account is held on a host system rather than on the card itself (even if a card is used to deliver the account number to the user).
The business requirement for a public telephone operator is to eliminate the collection of cash in telephones using a reliable system with a minimum of moving parts. Coin-operated telephones are expensive to build, since they must be very robust; they are expensive to operate (the cash must be collected regularly) and to maintain (coin mechanisms frequently become jammed or are vandalised).
Smart cards can be sold by retailers like any other goods and so can be made very widely available. The ‘float’ of value sold but not yet used is available to the operator and can earn interest (although it does still represent a liability in accounting terms). Smart cards, therefore, offer a very convenient and portable way to sell value; however they also have some disadvantages:
The cards themselves must be manufactured and distributed; card cost may be as low as 8–10 cents but as a proportion of the smallest denomination value (which may be $2 or even less) this is still substantial. In addition, the distributors and retailers need to make a margin, which may be 5–15% of the card value;
In previous chapters, I have listed many potential barriers to a successful multi-application chip-card scheme, but also several ‘enablers’ – factors that will contribute to making a project successful.
This chapter is set out as a checklist; it sets these barriers and enablers against the phases of a project and should help those planning or implementing a multi-application project to make the right decisions at the right time.
Defining the project scope and road-map
For any project, clear definitions of scope and objectives are critical for success. In the case of a multi-application-card project, the business objectives must be clear and should drive the scope of the project. For the initiating organisation, its own business objectives must come first and should not be derailed by those of other partners.
There is a strong tendency for the scope of multi-application projects to grow during the life of the project: what is known as ‘function creep’. Whilst it is important to keep an eye open for changing circumstances, and some additional functions or markets may make good sense, any change to the original scope must be weighed against its effect on the business objectives and timescales.
Many multi-application projects will start with some basic features and add others as time goes on. It is very helpful to design a road-map that answers the questions:
What do we have now?
What will we add at each stage and how long will that take?
Most card issuers will need some form of card management system (CMS) that allows them to keep track of the cards they have issued, expiry dates, etc. The CMS may also contain the details on the card, or may refer to another database (for example, a personnel database) that contains this information. The CMS may also include functions for maintaining the data (for example, name and address data), but for larger systems this is more often regarded as a separate customer management function.
For complex applications such as credit-card issuing, the CMS may link to several other systems, such as an authorisation system, call centre, statement and mailing management.
A CMS for magnetic stripe cards is usually a fairly simple ‘flat’ file structure, providing a link between the card number and the external data. With this kind of structure it is quite easy to give a call centre, for example, limited read-only access and the ability to make notes linked to the card or account, but they cannot affect transactions carried out by the card. The CMS can also act as the interface to a bureau or outside processor, so that the card issuer maintains the database but the bureau handles all the card-related functions.
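A flat structure of this kind, with limited read-only access for a call centre, might be sketched as follows (a hypothetical illustration in Python; the class, field names and card number are all invented for the example):

```python
from types import MappingProxyType

class CardManagementSystem:
    """Toy 'flat' CMS: one record per card number, linking to external data."""

    def __init__(self):
        self._records = {}   # card number -> card record
        self._notes = {}     # card number -> call-centre notes

    def issue(self, card_no, holder_ref, expiry):
        self._records[card_no] = {"holder_ref": holder_ref, "expiry": expiry}

    def read_only_record(self, card_no):
        # The call centre receives an immutable view of the card record...
        return MappingProxyType(self._records[card_no])

    def add_note(self, card_no, note):
        # ...but may attach notes linked to the card or account.
        self._notes.setdefault(card_no, []).append(note)

cms = CardManagementSystem()
cms.issue("4929000000006", holder_ref="P-1042", expiry="2027-12")
view = cms.read_only_record("4929000000006")
cms.add_note("4929000000006", "Customer reported lost card")
```

Any attempt by the call-centre code to write through `view` raises an error, which mirrors the separation described above: notes can be added, but the data driving card transactions cannot be altered.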
Additional functions for smart-card management
When an issuer moves to a smart-card platform, it often assumes that it will need a smart-card management system (SCMS).
Data to be accessed via communication networks or transmitted over public networks must be protected against unauthorized access, misuse, and modification. Security protection requires three mechanisms: enablement, perimeter control, and trust management. Enablement implies that a cohesive security policy has been implemented and that an infrastructure to support the verification of conformance with the policy is deployed. Perimeter control determines the points of control, the objects of control, and the nature of control, in order to provide access control and perform verification and authorization. Trust management allows the specification of security policies relevant to trust and credentials. It ascertains whether a given set of credentials conforms to the relevant policy, delegates trust to third parties under appropriate conditions, and, where needed, dynamically manages the level of trust assigned to individuals and resources in order to provide authorization.
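In its simplest form, the perimeter-control decision described above is a lookup: given a subject, an object and an operation, does the policy permit it? The following toy sketch (the roles, objects and policy table are invented for illustration) shows the shape of such a check:

```python
# Hypothetical policy table: (subject role, object) -> permitted operations.
POLICY = {
    ("clerk",   "customer_record"): {"read"},
    ("manager", "customer_record"): {"read", "update"},
}

def authorize(subject_role: str, obj: str, operation: str) -> bool:
    """Grant the operation only if the policy explicitly permits it
    (default deny for any unknown subject/object pair)."""
    return operation in POLICY.get((subject_role, obj), set())
```

Real perimeter control adds authentication of the subject and auditing of each decision, but the default-deny lookup is the core of the access-control mechanism.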
Public key infrastructures (PKIs) are an important tool for enablement, while biometric-based infrastructures are gaining an important role in providing robust access control. Biometrics are automated methods of recognizing a person based on a physiological or behavioral characteristic. Biometric-based solutions can support confidential financial transactions and personal data privacy, and are needed in electronic government, military, and commercial applications. In addition, trust management systems are starting to be used in a wide range of environments, such as electronic payment and healthcare management, where transactions and accesses are highly sensitive.
The Internet is growing to become the major means through which services are delivered electronically to businesses and customers. System vendors and service providers are pushing toward the definition of protocols, languages, and tools that support the development, use, and operation of electronic services. The main goal of the e-service paradigm is to provide the opportunity to define value-added composite services by integrating other basic or composite services. However, security issues need to be addressed within such an open environment.
Introduction
The notion of service is becoming increasingly valuable to many fields of communications and information technology. Service provision through communication networks is moving from tightly coupled systems towards services built from loosely coupled, dynamically bound components. The major evolution in this category of applications is a new paradigm, called e-service, for which developers and service providers are pushing the definition of techniques, methods, and tools, as well as infrastructures, to support the design, development, and operation of e-services. Standards bodies, too, are working to specify protocols and languages that ease the deployment of e-services.
E-services are self-contained, modular applications. They can be accessed via the Internet and provide a set of useful functionalities to businesses and individuals. In particular, recent approaches to e-business typically view an e-service as an abstraction of a business process, in the sense that it represents an activity executed within an organization on behalf of a customer or another organization.
Public key cryptosystems are a basic tool for implementing useful security services that protect the resources of an organization and provide efficient security for the services and Web sites that an enterprise may offer on the Internet. This chapter describes the main components, functions, and usage of a public key cryptosystem. It also discusses some major attacks that have been developed to reduce cryptosystem effectiveness.
Introduction
A text containing data that can be read and understood without any special measures is called plaintext. The method of transforming a plaintext so as to hide its content from unauthorized parties is called encryption. Encrypting a plaintext results in unreadable text called ciphertext. Encryption is therefore used to ensure that information is hidden from anyone for whom it is not intended, including those who can capture a copy of the encrypted data while it is flowing through the network. The process of reversing the ciphertext to its original form is called decryption. Cryptography can be defined as the science of using mathematics to encrypt and decrypt data. Cryptography provides for the secure storage of sensitive information and its transmission across insecure networks, such as the Internet, so that it cannot be read in its original form by any unauthorized individual (Menezes et al., 1996).
A cryptographic algorithm, also called a cipher, is a mathematical function used in the encryption and decryption processes.
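The encrypt/decrypt relationship can be made concrete with a toy textbook-RSA example, a public key cryptosystem of the kind this chapter discusses. This is a sketch only: the primes are deliberately tiny, and real systems use keys of thousands of bits together with padding schemes.

```python
# Toy textbook RSA with tiny primes (for illustration only).
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # Euler's totient: 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (modular inverse): 2753

def encrypt(m: int) -> int:
    return pow(m, e, n)        # ciphertext = m^e mod n

def decrypt(c: int) -> int:
    return pow(c, d, n)        # plaintext  = c^d mod n

m = 65
c = encrypt(m)                 # -> 2790
assert decrypt(c) == m         # decryption inverts encryption
```

The key property on display is that anyone knowing the public pair (e, n) can encrypt, but only the holder of d can efficiently decrypt; `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse that links the two exponents.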
In enterprise systems, a security exposure is a form of possible damage in the organization's information and communication systems. Examples of exposures include unauthorized disclosure of information, modification of business or employees' data, and denial of legal access to the information system. A vulnerability is a weakness in the system that might be exploited by an adversary to cause loss or damage. An intruder is an adversary who exploits vulnerabilities, and commits security attacks on the information/production system.
Electronic security (e-security) is an important issue for businesses and governments today. E-security addresses the security of a company, locates its vulnerabilities, and supervises the mechanisms implemented to protect the on-line services the company provides, in order to keep adversaries (hackers, malicious users, and intruders) from getting into the company's networks, computers, and services. E-security is very closely related to e-privacy, and it is sometimes hard to differentiate the two. E-privacy concerns the tracking of users or businesses and what they do on-line when they access the enterprise's web sites.
Keeping the company's business secure should be a major priority in any company, no matter how small or large the business is, and no matter how open or closed the company's network is. To this end, a security policy should be set up within the company to cover issues such as password usage rules, access control, data-security mechanisms, and business-transaction protection.
This chapter discusses the importance and role of e-security in business environments and networked systems. It presents some relevant concepts in network security and subscriber protection, and introduces basic terminology used throughout the book to define service, information, computer, and network security. The chapter thereby aims to make the book self-contained.
Introduction
Every organization that uses networked computers and deploys an information system to perform its activity faces the threat of hacking, from individuals both inside and outside the organization. Employees (and former employees) with malicious intent can represent a threat to the organization's information system, its production system, and its communication networks. At the same time, reported attacks illustrate how pervasive the threats from outside hackers have become. Without proper and efficient protection, any part of any network can be prone to attacks or unauthorized activity. Routers, switches, and hosts can all be violated by professional hackers, a company's competitors, or even internal employees. In fact, according to various studies, more than half of all network attacks are committed internally.
One may consider that the most reliable solution to ensure the protection of organizations' information systems is to refrain from connecting them to communication networks and keep them in secured locations. Such a solution could be an appropriate measure for highly sensitive systems.
Data that can be accessed on a network, or that are transmitted across it from one edge node to another, must be protected from fraudulent modification and misdirection. Typically, information security systems require three main mechanisms to provide adequate levels of protection: enablement, perimeter control, and intrusion detection and response. Enablement implies that a cohesive security plan has been put in place, with an infrastructure to support the execution of that plan. The public key infrastructure (PKI) discussed in this chapter falls under the first mechanism, enablement.
Introduction
One of the most decisive problems in business transactions is the identification of the principal (individual, software entity, or network entity) with which the transaction is being performed. As traditional paperwork in business moves to electronic transactions and digital documents, so must the reliance on traditional trust objects be converted to electronic trust, with security measures that authenticate electronic business actors, partners, and end-users before they become involved in the exchange of information, goods, and services. Moreover, the obligation to provide confidentiality and confidence in the privacy of exchanged information is essential. This list of security services should be extended with the need to establish non-repudiation of transactions, to have trusted third parties digitally attest the validity of transactions, and to time-stamp transactions securely.
Biological features have been investigated extensively by many researchers and practitioners as a means of identifying the users of information, computer, and communications systems. An increasing number of biometric-based identification systems are being developed and deployed for both civilian and forensic applications. Biometric technology is now a multi-billion-dollar industry, and there is extensive federal, industrial, business, and academic research funding for this vital technology, especially since September 2001.
An automated biometric system uses biological, physiological, or behavioral characteristics to automatically authenticate the identity of an individual based on a previous enrollment event. In this context the focus is on authenticating human identity, although in general this need not be the case.
This chapter reviews state-of-the-art techniques, methodologies, and applications of biometrics for securing access to e-based systems and computer networks. It also sheds some light on the effectiveness and accuracy of biometric identification, as well as on trends, concerns, and challenges.
Introduction
Biometrics deals with the process of identifying persons based on their biological or behavioral characteristics. The area has recently received a great deal of attention because these characteristics are unique to each person and can be measured accurately. Moreover, the cost of implementing such technology has decreased tremendously. Biometric techniques have been widely accepted by the public due to their strengths and robustness (Obaidat, 1997; Obaidat, 1999).
Establishing the identity of an individual involves solving two major problems: (a) verification, and (b) recognition.
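The distinction between the two problems can be sketched as follows (a hypothetical illustration: the feature vectors, the Euclidean distance measure and the threshold value are all invented for the example). Verification is a one-to-one comparison against the template of a claimed identity; recognition is a one-to-many search over an enrolled gallery:

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(sample, enrolled_template, threshold=0.5):
    """Verification (1:1): does the sample match the claimed identity?"""
    return distance(sample, enrolled_template) <= threshold

def recognize(sample, gallery, threshold=0.5):
    """Recognition (1:N): which enrolled identity, if any, best matches?"""
    best = min(gallery, key=lambda name: distance(sample, gallery[name]))
    return best if distance(sample, gallery[best]) <= threshold else None

# Toy enrolled gallery and a fresh sample.
gallery = {"alice": [0.1, 0.9], "bob": [0.8, 0.2]}
sample = [0.15, 0.85]
```

Here `verify(sample, gallery["alice"])` accepts the claim while `verify(sample, gallery["bob"])` rejects it, and `recognize(sample, gallery)` returns the closest enrolled identity. Real systems replace the toy vectors with extracted biometric features and tune the threshold to trade off false accepts against false rejects.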