With knowledge workers abounding in today’s service-based economy, organizations continue to invest in training and learning opportunities to enhance human capital. According to the Association for Talent Development’s 2014 State of the Industry Report, U.S. organizations spend $1,208 per employee per year on training. Clearly, one of the primary goals of such training investments is to enhance positive transfer, or the degree to which learning from training is applied to the job, and to produce relevant changes in employee and job performance (Burke, Hutchins, & Saks, 2013; Grossman & Salas, 2011). Unfortunately, there is empirical evidence that little of what is acquired during training is transferred to the workplace (Saks, 2002). Given these less than ideal transfer rates, means to promote transfer represent a key challenge for training scholars and practitioners alike (Burke, 2001). If trainees fail to successfully transfer new knowledge and skills, training expenditures are ultimately poor investments.
Many studies over the last few decades have examined methods for promoting transfer. For example, research has focused on enhancing transfer through posttraining interventions such as goal-setting and self-management training (e.g., Brown, 2005; Burke & Baldwin, 1999; Richman-Hirsch, 2001; Taylor, Russ-Eft, & Chan, 2005; Tews & Tracey, 2008), transfer climate (e.g., Holton, Bates, & Ruona, 2000; Rouiller & Goldstein, 1993; Tracey & Tews, 2005), and supervisory and peer support (e.g., Cromwell & Kolb, 2004; Hutchins, Burke, & Berthelsen, 2010; Lim & Johnson, 2002). Furthermore, various exhaustive reviews have summarized the existing body of research on transfer of training (Blume, Ford, Baldwin, & Huang, 2010; Burke & Hutchins, 2007; Grossman & Salas, 2011). While there is evidence that transfer is influenced by various individual trainee characteristics, training design, and different aspects of organizational support, there is less clarity on which antecedents matter most (Hilbert, Preskill, & Russ-Eft, 1997), though some have offered their insights (Grossman & Salas, 2011).
Despite advances in transfer research, we contend that overall the body of research lacks synthesis (Blume et al., 2010) and remains principally atheoretical. The training and development field has organizing frameworks to help classify transfer elements (such as before, during, and after training; Broad, 2005), yet further theoretically grounded guidance would help in designing specific transfer strategies for different workplace contexts. Toward this end, the goal of the present chapter is to create an integrative conceptual framework that is theory driven and provides context-relevant implications for stakeholders of training transfer design.
The fundamental premise of this chapter is that organizations may lack accountability mechanisms to ensure trainees apply on the job what they have learned in training. As Kopp (2006) claims, transfer seems to be viewed as “nice to have,” and stakeholders are often not held accountable for transfer success in any meaningful way. Drawing on Schlenker and colleagues’ (1994, 1997) model of accountability and Yelon and Ford’s (1999) multidimensional model of transfer, the present chapter delineates means to enhance accountability for training transfer in different work contexts. Burke and Saks (2009) recently applied Schlenker’s framework to transfer in general; in this chapter we go further by considering work context. Specifically, this chapter focuses on means to promote accountability for transfer of open and closed skills performed under either supervised or autonomous working conditions. It is important to consider the nature of the skill and the degree of supervision because these dimensions affect ease of proficiency acquisition, latitude in adapting skills, responsibility for monitoring posttraining behavior, and the level of posttraining support required.
The structure of the chapter is as follows. First, we review previous research related to accountability and transfer of training. Then, we provide an overview of Schlenker and colleagues’ (1994, 1997) model of accountability and Yelon and Ford’s (1999) multidimensional model of transfer. Finally, we synthesize these theoretical perspectives and articulate strategies to promote accountability for transfer in various work contexts.
The Importance of Accountability in Training Transfer
Frink and Klimoski (1998) defined accountability as a perceived need to justify an action to some audience that has reward or sanction power. From a management perspective, when individuals know they will be held accountable for their behaviors and decisions, they are more motivated to focus on achieving specific outcomes, use self-regulatory strategies, exert greater effort, and persist in overcoming obstacles. Organizations have a range of formal and informal mechanisms to influence perceived accountability (Frink & Klimoski, 1998). Formal accountability mechanisms may include performance reviews, promotions or demotions, disciplinary action, and incentives such as merit or bonus pay. Informal mechanisms may include cultural norms, peer influence, and coaching from supervisors. While accountability has been studied in decision-making, selection, and performance appraisal contexts, it has been examined to a lesser extent in the training context (e.g., DeMatteo, Lundby, & Dobbins, 1997). In the training transfer domain, accountability addresses the extent to which transfer is expected of trainees (Brinkerhoff & Montesino, 1995; Burke & Saks, 2009).
We contend that accountability can be deficient with respect to transfer on several fronts. At a general level, training may be perceived as an isolated event, divorced from the natural work environment. In this respect, trainers may be held accountable only for designing and delivering an effective training session, without an eye toward transfer. Furthermore, training practitioners and managers may not be fully aware of the typically limited rates of transfer and may merely assume that transfer occurs spontaneously. Extending these arguments, supervisors may not be cognizant of the transfer problem, or they may view promoting transfer as someone else’s responsibility. Given that trainers and supervisors may not focus on transfer, trainees in turn may not place a priority on transfer and may instead direct their efforts toward what they perceive to be more pressing demands or more likely to be rewarded. Given these challenges, it is important to strengthen responsibility mechanisms to help ensure that transfer is maximized (Broad & Newstrom, 1992).
Accountability is surprisingly absent from transfer models and measures, despite various studies presenting direct or indirect implications for integrating accountability into transfer interventions (Burke & Saks, 2009). Baldwin and Magjuka (1991) demonstrated that trainees expecting some form of follow-up assessment after training reported stronger intentions to transfer. In commenting on this finding, Tannenbaum and Yukl (1992) stated: “The fact that their supervisor would require them to prepare a post-training report or undergo an assessment meant that they were being held accountable for their own learning and apparently conveyed the message that the training was important” (418). In addition, DeMatteo and colleagues (1997) systematically manipulated training accountability in a lab setting and found that accountability interventions (either a postdiscussion with the trainer or a video critique) produced more note taking, learning, and trainee satisfaction when trainees were notified of such assessments prior to the training. Longenecker (2004) also identified enhancing accountability for application, such as requiring posttraining reports from trainees, as a key learning imperative articulated by managers. Saks and Belcourt (2006) subsequently found such accountability mechanisms to be significantly related to transfer. Further support for the role of accountability in transfer can be found in Taylor et al.’s (2005) meta-analysis of 117 behavioral modeling studies, which demonstrated a larger effect for transfer when sanctions and rewards were instituted in trainees’ work environments, such as the incorporation of newly learned skills into performance reviews.
Transfer climate constructs, which encompass a variety of support mechanisms to facilitate transfer, also highlight the need for accountability in directing transfer. Rouiller and Goldstein (1993) conceptualized transfer climate as encompassing two dimensions – situational cues and consequences. Situational cues included goal cues, social cues, and task and structural cues. Consequences included feedback and rewards. Tracey, Tannenbaum, and Kavanagh (1995) found support for the relationship between transfer climate and transfer. Tracey and Tews’s (2005) conceptualization of transfer climate encompasses organizational support, managerial support, and job support. Organizational support refers to policies and reward systems that support training. Managerial support refers to managers encouraging learning and supporting transfer. Finally, job support refers to whether jobs are designed to promote continuous learning. While such transfer climate constructs may not have explicitly identified accountability as a specific factor, they point to the importance of responsibility and consequences associated with transfer.
More recent research further signals the importance of accountability in the context of transfer. In an empirical study, Cheramie and Simmering (2010) found that learners low in conscientiousness exhibited higher levels of learning when they perceived accountability for training outcomes as high, and concluded that “organizations should implement formal controls to increase perceived accountability and improve learning” (44). Lastly, Saks and Burke (2012), in another empirical investigation, provided evidence that training evaluation frequency was related to higher rates of transfer in organizations when organizations measured behavioral change and results-oriented criteria after training (as compared to trainee satisfaction or learning).
Schlenker’s Accountability Theory and Training Transfer
Schlenker and colleagues’ (1994, 1997) theory of accountability (i.e., responsibility) provides an overarching theoretical framework that we contend can usefully guide transfer research (Burke & Saks, 2009). Schlenker et al. (1994) argue that accountability is a key mechanism through which social entities control the behavior of members. In particular, accountability reflects “being answerable to audiences for performing up to prescribed standards that are relevant to fulfilling obligations, duties, expectations, and other charges” (Schlenker, 1997: 249). When individuals are held responsible by others for adhering to a course of action, they can be evaluated with respect to a relevant event. Moreover, when individuals perceive themselves as accountable for executing a course of action, they become more motivated to follow prescribed behaviors and achieve goals and objectives. Thus, enhancing responsibility helps ensure that organizational members adhere to performance expectations.
Fundamentally, designing accountability into the transfer process enhances stakeholders’ sense of ownership and responsibility for skill enhancement, such that trainers, trainees, and managers face discomfort from the organization if they do not follow through on their obligation to transfer learning to the job. As Schlenker (1997) claims, gaps between an employee’s behavior and “oughts,” such as “I ought to transfer skills I learn,” “I ought to hold my employees responsible for transfer,” or “I ought to share evidence of my training program’s influence on behavioral change with top managers,” produce a state of incompleteness. To embed accountability in transfer of training, individuals must understand what transfer behaviors are expected of them, how their actions will be measured, and what rewards or sanctions will be imposed for transfer or lack thereof (Santos & Stuart, 2003).
Schlenker et al. (1994) argue that accountability “involves an evaluative reckoning in which individuals are judged” (634). The essential facets of accountability include inquiry, accounting, and verdict. That is, questions are raised about how well a person performs (inquiry), evidence is presented and evaluated (accounting), and the person is rewarded or punished (verdict). All such evaluations, whether by others or by the actor him- or herself, involve information about three key variables: prescriptions, identity, and event. Prescriptions refer to expectations, rules, and standards of conduct that guide an individual’s behavior, which may be formal or informal, explicit or tacit. Identity refers to attributes of the actor, such as his or her roles, values, commitments, and aspirations. The event refers to a focal course of action that is anticipated or has transpired. Promoting accountability for future desired behaviors, such as training transfer, should focus on these variables and the links between them.
Schlenker et al. (1994) connect these three variables to form three links that serve as the glue binding individuals to situations and courses of action. Strengthening these links increases responsibility and, thereby, accountability. The prescription-event link relates to whether a clear set of expectations applies to a given event, resulting in goal and process clarity. In the context of transfer, the prescription-event link reflects expectations for the use and application of trained skills. When this link is strong, individuals have clear goals for transfer and know how to proceed to meet them. The prescription-identity link relates to whether prescriptions apply to individuals by virtue of their roles and other personal characteristics, serving to enhance a sense of ownership of a course of action. This link captures the extent to which individuals believe transfer is important because of their role in an organization or their personal sense of obligation. Finally, the identity-event link relates to whether individuals are connected to the event and have control and freedom over their actions. In the context of transfer, this link reflects the extent to which individuals have personal control over their transfer behavior.
The three variables and their links, known collectively as “the responsibility triangle” (Schlenker et al., 1994: 635), are presented graphically in Figure 9.1. To summarize, a strong prescription-event link requires goal and process clarity; the prescription-identity link necessitates a sense of ownership of the goal and process; and the identity-event link requires individuals’ perceived control over the event. When these links are strong, individuals’ self-regulatory systems engage, and they become more determined, committed to goals, and “unwavering in pursuing them despite obstacles, distractions, and temptations” (Schlenker, 1997: 268). Moreover, when people feel more personally responsible, they are less apt to make excuses and engage in avoidance strategies (Schlenker, 1997). By overlaying accountability mechanisms across stakeholders involved in training transfer, a “psychological adhesive” connects these critical parties to common expectations for and commitment to transfer outcomes.

Figure 9.1. Accountability mechanisms driving training transfer.
Burke and Saks (2009) recently applied Schlenker’s (1994, 1997) model specifically to transfer of training. In particular, they articulated the accountability linkages for trainers, trainees, and supervisors to ensure these stakeholders focus on transfer enhancement. According to Burke and Saks (2009), trainers should have clear expectations for incorporating transfer enhancement in training programs (prescription-event link), a clear sense of duty to include transfer in training content (prescription-identity link), and control over developing training that focuses on focal skills as well as transfer (identity-event link). Trainees should have clear goals for transfer (prescription-event link), a clear sense of obligation to apply what they have learned (prescription-identity link), and personal control over opportunities to transfer (identity-event link). Finally, supervisors should have clear performance expectations to aid their employees in transfer (prescription-event link), an obligation to focus on trainee transfer (prescription-identity link), and personal control to help facilitate trainee transfer (identity-event link) (Burke & Saks, 2009).
Burke and Saks further propose practical strategies to enhance accountability among these different stakeholders. Trainers, for example, can clearly establish transfer expectations prior to training, devote time specifically to transfer enhancement during training sessions, and systematically evaluate the extent to which trainees use new knowledge and skills on the job. In turn, trainees can set specific transfer goals with supervisors before training, commit to transfer during training, and document and share their learning posttraining. Supervisors can discuss transfer objectives with trainees prior to training, prepare a list of activities to commit to after training to promote transfer among their employees, provide rewards and incentives for transfer, and include transfer in employee performance appraisals. As such, Burke and Saks’s application of Schlenker’s (1994, 1997) model provides theoretically derived strategies to enhance the transfer of training, as summarized in Table 9.1.
Table 9.1 Stakeholder accountability mechanisms for transfer
| Stakeholder | Prescription-Event Link | Prescription-Identity Link | Identity-Event Link |
|---|---|---|---|
| Trainer | Should have clear expectations for incorporating transfer enhancement in training programs | Should have a clear sense of duty to include transfer strategies in training | Should have control over developing training that targets focal skills and transfer |
| Trainee | Should have clear goals for transfer | Should have a clear sense of obligation to apply what has been learned | Should have personal control over opportunities to transfer |
| Supervisor | Should have clear performance expectations to aid their employees in transfer | Should have a personal obligation to focus on trainee transfer to the job | Should have personal control to help facilitate trainee transfer |
Notwithstanding the value of their contribution, we contend that accountability strategies should vary based on the work context. Toward this end, we extend Burke and Saks’s (2009) work to promote trainee accountability in various work contexts, specifically in terms of the nature of the trained skill set and the degree of supervision the trainee is subject to, dimensions originally crafted by Yelon and Ford (1999) and described next.
Yelon and Ford’s Multidimensional Framework of Transfer
Most models of training transfer present prescriptions and theoretical relationships presumably applicable to all trained tasks. However, such universalistic models may not be appropriate in all contexts. To better delineate the conditions for successful transfer, Yelon and Ford (1999) developed a multidimensional transfer of training model. By multidimensional, Yelon and Ford refer to two dimensions of the skill set to be performed that should be considered when selecting among transfer strategies: task adaptability and the degree of employee supervision.
The first dimension of the framework, task adaptability, refers to the extent to which the task to be performed ranges from closed to open. Performing a closed task involves responding to predictable situations with standardized responses. In contrast, performing an open task involves responding to variable situations with adaptive, tailored responses. There is one best way to perform a closed task, whereas there are multiple ways to perform an open task that are contingent on the situation at hand. Examples of closed tasks include preparing food in a fast food restaurant, cleaning a hotel room, filling out a report, and checking in an airline passenger. Examples of open tasks include facilitating discussions in a training session, performing a role in a stage play, motivating employees, and responding to difficult customers.
Yelon and Ford’s (1999) second dimension is the degree of supervision under which trained skills are performed on the job. This dimension ranges from heavy supervision to autonomous working conditions. Under heavily supervised conditions, supervisors can closely monitor employee performance on trained skills, provide positive and constructive feedback, and administer appropriate rewards. Further, under such conditions, employees may be more apt to engage in appropriate behaviors because they know they are being closely observed. Under autonomous working conditions, however, employees have more discretion over whether to engage in trained behaviors and must take more responsibility for ensuring that behaviors are appropriately executed.
Based on these two dimensions – task adaptability and degree of job supervision – Yelon and Ford (1999) developed a corresponding 2 × 2 matrix. The resulting four cells include: (1) supervised trainees and closed skills; (2) autonomous trainees and closed skills; (3) autonomous trainees and open skills; and (4) supervised trainees and open skills. For each cell, Yelon and Ford provide differentiated strategies to enhance transfer, as summarized in Table 9.2.
Table 9.2 Exemplar transfer enhancement strategies from Yelon and Ford’s (1999) multidimensional framework
| | Supervised Trainees | Autonomous Trainees |
|---|---|---|
| Closed Skill | Select candidates with a detail orientation who follow direction; provide high-fidelity training in specific, detailed procedures; practice to automaticity; supply procedural checklists; supervisors manage adherence to standards and provide rewards | Hire for learning orientation; relate the skill to trainees’ personal and job goals; foster favorable attitudes toward seeking support; conduct obstacle assessments and response plans; practice meta-cognitive reflection; provide avenues for on-the-job guidance |
| Open Skill | Apply the strategies for autonomously performed open skills; foster favorable attitudes toward cooperation with supervisors; train supervisors in the target skills; conduct joint trainee-supervisor obstacle assessments | Teach strategic knowledge of when and why to apply skills; develop meta-cognitive competence and favorable attitudes toward experimentation; hire for learning orientation; apply the strategies for autonomous trainees with closed skills |
Supervised Trainees and Closed Skills
Supervised trainees performing closed skills are required to execute specific standards under the guidance of their supervisors. With respect to employee selection, organizations should choose candidates with a detail orientation and a willingness to follow direction. Training content should focus on declarative knowledge, procedural knowledge, shaping favorable attitudes toward adhering to standards, and cooperating with supervisors. Yelon and Ford (1999) also recommend high-fidelity training, training in specific and detailed procedures, providing practice opportunities to facilitate automaticity, and providing trainees with detailed procedural checklists. Once on the job, supervisors should manage adherence to standards and provide appropriate incentives and rewards.
Autonomous Trainees and Closed Skills
Yelon and Ford’s (1999) prescriptions for autonomous trainees performing closed skills diverge from those for supervised trainees on several fronts. On the selection front, hiring those with a learning orientation is advised, as such individuals persist in challenging situations, which may occur more often when working independently. To help ensure trainees are motivated to perform the closed skill, trainers should relate the skill to trainees’ personal and job goals. Trainers should also engender favorable attitudes toward seeking support to help ensure that trainees perform skills effectively posttraining. Regarding specific instructional strategies, trainees should discuss their intentions to use the closed skill, perform an obstacle assessment and create a response plan, and practice meta-cognitive skills to reflect on their behavior. Once on the job, there should be avenues for trainees to obtain any needed guidance.
Autonomous Trainees and Open Skills
Again, the difference between an open skill and a closed skill is that there is not necessarily one best way to perform an open skill. Performing an open skill requires attention to contingency variables to determine a course of action. That is, successful performance requires strategic knowledge – information about when and why to use particular knowledge or skills on the job (Kraiger, Ford, & Salas, 1993). Accordingly, training for an open skill requires attention to strategic knowledge so that trainees can determine when and how to perform trained skills. To enhance the flexibility required for performing open skills, trainers could develop trainees’ meta-cognitive competence and favorable attitudes toward experimentation. Regarding individual differences on which to select employees, those with a learning orientation may also be better able to learn and transfer open skills. Such individuals focus on developing new skills, attempt to understand their tasks, and successfully achieve self-referenced standards for success (Button, Mathieu, & Zajac, 1996). Given the autonomous conditions under which the skill is performed, the prescriptions for transfer are similar to those discussed in the preceding text for autonomous trainees performing closed skills.
Supervised Trainees and Open Skills
For supervised trainees with open skills, training should include many of the strategies advised for such skills performed autonomously. To successfully transfer open skills in a heavily supervised work setting, three additional strategies may be appropriate. First, instilling in trainees favorable attitudes toward cooperation with supervisors may facilitate productive supervisor-subordinate relationships. Second, training supervisors in the target skills may lead to better management of trainees on the job. Third, the trainee and supervisor should jointly perform an obstacle assessment to determine potential barriers to transfer and means to overcome them.
Context-Dependent Transfer Accountability Strategies
Yelon and Ford’s (1999) multidimensional framework and Burke and Saks’s (2009) application of Schlenker’s (1994, 1997) accountability model have each independently advanced research on transfer of training. Burke and Saks highlighted the importance of the prescription-event, prescription-identity, and identity-event links for increasing accountability and articulated transfer strategies to strengthen these links. That said, Burke and Saks did not address means to strengthen these links in different work contexts. Yelon and Ford’s work is valuable in that they emphasized the importance of context for transfer and delineated strategies appropriate for different skills and degrees of supervision, but they did not position their model and corresponding transfer strategies in the context of accountability. Yelon and Ford’s framework thus represents a useful means to extend Burke and Saks’s accountability work. It should be noted that not all the original transfer strategies proposed by Yelon and Ford are necessarily applicable to the three accountability links, nor are their transfer strategies exhaustive. However, they serve as a valuable guide and foundation. Rather than treat these two theoretical perspectives as independent, we suggest that value is to be gained by integrating them to provide greater theoretical precision on how best to foster accountability for training transfer.
Toward this end, we articulate context-dependent transfer accountability strategies. For each of Schlenker’s (1994, 1997) three links, several strategies are discussed for specific contexts. Namely, to strengthen each of the prescription-event, prescription-identity, and identity-event links, we propose how to modify transfer enhancement strategies for closed and open skills and, within each skill set, for supervised and autonomous working conditions. In doing so, we also discuss strategies to overcome challenges that might weaken these links. In practice, human resource practitioners can combine strategies to bolster the prescription-event, prescription-identity, and identity-event linkages for their specific context.
Bolstering the Prescription-Event Link
In this section, we discuss means to strengthen the prescription-event link for transfer. Fundamentally, bolstering this link, and thereby enhancing goal and process clarity, rests on trainees knowing what to transfer and how. The strategies in this section address how to communicate to individuals that they should focus on transferring newly acquired skills and the standards to which they should adhere. Using diverse and complementary strategies helps ensure that trainees do not revert to old patterns of behavior, divergent personal preferences, or superordinate prescriptions advocated by irrelevant others (Dose & Klimoski, 1995). In order, we discuss strategies to explicitly communicate expectations for transfer and the use of reward systems to indirectly communicate such expectations, along with strategies to overcome the challenge of competing job responsibilities.
Communicating Transfer Expectations
The communication of transfer expectations is relatively straightforward for closed skills, given that successful execution of closed skills requires following standard procedures. Yelon and Ford (1999) advocate high-fidelity training, training in specific and detailed procedures, and providing trainees with detailed procedural checklists, and these strategies will likely serve to strengthen the prescription-event link. Because there is one best way to execute a closed skill, there must be a high degree of fidelity between training content and job requirements, which may be achieved through careful needs assessment (Broad, 2005). When closed skills are supervised, supervisors may also need to be trained in the standards so they can manage behavior consistently. When supervisors’ standards differ from training standards, trainees will more likely follow the supervisors’ standards, given the inherent power supervisors possess. The reinforcement of closed skills is arguably more challenging with autonomous employees, as supervisors cannot readily reinforce the closed skill. Thus, greater responsibility is placed on organizations to select conscientious workers and on trainers to ensure that skills are fully learned during training. In these situations, accessible on-the-job reference information and trainer follow-ups may also be needed to cement standards.
Communication of expectations for transfer is less straightforward for open skills. A strong prescription-event link requires clear expectations, yet open skills need to be adapted to a variety of situations, and trainees may experience this requirement for adaptation as ambiguity. It may not always be clear how to execute open skills, nor is it feasible to delineate all possible variations of open skill adaptation. In this respect, trainees should be taught general principles that can be applied to different situations (Yelon & Ford, Reference Yelon and Ford1999). Trainers should identify a manageable, yet diverse, set of response-by-situation models to help trainees develop skill adaptability. Moreover, we contend that boundaries should be established to ensure individuals execute adaptation within limits and possess an apt sensitivity to what degree of variability is acceptable. Given that all possible variations cannot be prescribed, trainees may also need to be encouraged to seek guidance and support and socialized to comply with general principles and company values.
When employees transfer an open skill under supervision, supervisors can help them refine their skill in adapting to different situations by incorporating feedback from customers, co-workers, and other key constituents where appropriate. In doing so, supervisors are further reinforcing performance expectations and strengthening the prescription-event link. A challenge is that some supervisors could be tempted to micromanage the open skill. Supervisors likely have expertise and could possess particular ideas for how open skills should be transferred. That said, appropriately selected and trained supervisors in these situations should allow employees to adapt the skill within the general principles and parameters taught in training. Along the same lines, supervisors need to communicate through informal coaching or more formal performance appraisal that exercising discretion and making sound decisions are inherent components of open skills. If supervisors micromanage or if employees do not make their own decisions, an open skill may ultimately devolve into a closed skill, which could render employees inflexible and at times ineffective.
When employees are working autonomously to transfer an open skill, posttraining interventions could strengthen the prescription-event link. For example, requiring employees to complete action plans and follow-up reports detailing how they intend to use or have used trained skills could be valuable (Tews & Tracey, Reference Tews and Tracey2008). Interventions such as these could signal to trainees that transfer is important and motivate individual effort toward applying new knowledge and skills. In this respect, they direct individuals toward transfer and further skill development, rather than other demands. Further, such interventions represent a vehicle to reinforce training content and expectations that may not have been fully cemented during training. Because open skills are often more complex than closed skills, they are less apt to have been fully learned during training. Accordingly, training content and performance expectations may need to be further learned on the job. One caveat is that these posttraining interventions reflect additional work for individuals. To ensure use of these interventions, they should be simple and easy to use, and accountability mechanisms can be built into systems such as performance reviews to help guarantee they become a part of trainees’ work routines.
Rewarding Transfer
Reward systems could strengthen the prescription-event link by signaling that transfer is important in organizations (Dose & Klimoski, Reference Dose and Klimoski1995). One challenge of rewarding the transfer of any particular skill is that the skill may only represent one aspect of an employee’s job performance. It is often the case that an employee’s total job performance is rewarded, not just a specific behavior, such as transferring one skill. Consequently, trainees may perceive little connection between transfer and rewards obtained. Another issue is that the rewards for transfer may not have enough value to motivate employee effort. That said, if transfer is not rewarded meaningfully, employees may not transfer and may instead focus on tasks where such recognition is present.
Yelon and Ford (Reference Yelon and Ford1999) suggest rewarding adherence to standards for closed skills. When such skills are supervised, it may be relatively easy to reward both behavior and results. In the context of a receptionist following a script, a supervisor could reward adherence to the script and ratings of customer satisfaction. However, when such skills are performed autonomously, rewards may need to only focus on results because results are the only source of easily accessible information (e.g., ratings of customer satisfaction). When a closed skill is relatively simple, which is sometimes the case, informal recognition may be more appropriate and feasible than formal rewards. To the extent that a closed skill has less economic value than other skills, managers may need to reward a closed skill primarily with informal recognition, feedback, and praise. Of course, this strategy depends on the closed skill in question.
With respect to open skills, Yelon and Ford (Reference Yelon and Ford1999) suggest rewards for experimentation. Rewards should focus on experimentation as opposed to performance because open skills may not be fully learned in training due to their inherent complexity and the need for adaptation on the job. When open skills are supervised, supervisors could certainly reward trainee experimentation. However, supervisors typically reward maximum performance, which is reduced by experimentation. Accordingly, it may be unclear what to reward when transfer does not meet expectations, and rewarding effort such as experimentation is fraught with ambiguity. Therefore, supervisors may need to allow for a “penalty free” period of experimentation after training that does not affect the reward system.
When autonomous employees perform open skills, rewarding experimentation is more difficult. Managers may need to base rewards on objective measures of experimentation. If none exist, managers and trainees may have no choice but to use measures of outcomes, such as focusing on patient recovery rates or student evaluations of instruction. In such cases, however, employees do not have full control over such measures, and metrics may decline posttraining if individuals are experimenting. These arguments are certainly not meant to undermine the potential value of rewards in promoting transfer, but rather highlight that if not properly designed, reward systems could send mixed messages and weaken the prescription-event link.
Managing Other Job Responsibilities
Finally, careful attention needs to be paid to managing the trainee’s total set of job responsibilities to free attention and time to practice the trained skill. When work demands are high, individuals will focus on core job responsibilities and immediate job performance in lieu of focusing on transfer (Marx, Reference Marx1982). Temporarily curtailing job responsibilities is perhaps most relevant for open skills when further skill learning is required on the job, as compared to closed skills that are relatively straightforward. That said, not all closed skills are simple skills, and even simple skills may require further learning on the job. Managing job responsibilities to accommodate transfer is relatively straightforward when trained skills are closely supervised as supervisors directly manage employee workloads. When the trained skill is autonomously performed, trainers bear a stronger responsibility for helping trainees best manage their job responsibilities to accommodate transfer.
Bolstering the Prescription-Identity Link
The prescription-identity link in Schlenker’s (Reference Schlenker, Britt, Pennington, Murphy and Doherty1994; Reference Schlenker1997) framework is the extent to which prescriptions apply to individuals by virtue of who they are. In the transfer context, the extent to which the trained skill is required and perceived as appropriate by a trainee is a function of both his or her formal role and personal attributes. When individuals view prescriptions as consistent with their identities, they are more apt to transfer the trained skill (Burke & Saks, Reference Burke and Saks2009). Strengthening this link requires an alignment between the transfer expectations and the trainee’s perceptions of their appropriateness. As Schlenker (Reference Schlenker1997) argues, when goals are not ennobling and the prescription is perceived to have little value, individuals will avoid them. If trainees’ identities are threatened, individuals are more likely to engage in avoidance strategies, such as delaying or concealing attempts to follow prescriptions for transfer, distracting others so they do not notice their transfer efforts or lack thereof, or perhaps discrediting those who seek to hold them accountable for transfer. As means to bolster this link, we discuss selecting employees whose individual differences match the skill set and the degree of supervision and inculcating favorable attitudes toward the skill and supervision context. In addition, we discuss strategies for managing the challenge of mismatches between prescriptions and trainees’ overall jobs.
Selecting Employees
A long-run strategy for maximizing fit between transfer prescriptions and trainees’ identities is selecting employees with individual differences congruent with a skill set and degree of supervision. While many individual differences could be positioned in the context of Yelon and Ford’s (Reference Yelon and Ford1999) framework, here we will highlight a few exemplars. Although conscientiousness is important irrespective of context (Colquitt, LePine, & Noe, Reference Colquitt, LePine and Noe2000), the orderliness and dutifulness facets of conscientiousness are likely particularly important for executing closed skills that require adherence to strict standards (Costa, McCrae, & Dye, Reference Costa, McCrae and Dye1991). General mental ability is also one of the most dominant predictors of training success (Colquitt et al., Reference Colquitt, LePine and Noe2000), but is certainly more important for open skills high in complexity. As discussed by Yelon and Ford, a learning orientation is also key for open skills as they require adaptation, reflection, and problem solving. Noe, Tews, and Marand (Reference Noe, Tews and Marand2013) recently demonstrated that zest, where individuals approach life with eagerness, energy, and anticipation (Peterson & Seligman, Reference Peterson and Seligman2004), was a significant predictor of informal learning. Because open skills require informal learning on the job, zest should be relevant to this skill set. Regarding degree of supervision, the compliance facet of agreeableness is especially relevant for supervised trainees (Costa et al., Reference Costa, McCrae and Dye1991), whereas a proactive personality is especially germane for autonomous trainees because such individuals are unconstrained by situational forces and persevere until they achieve desired results (Parker & Sprigg, Reference Parker and Sprigg1999).
Promoting Favorable Attitudes
Another strategy is for trainers and supervisors, where appropriate, to inculcate favorable attitudes toward the specific skill and supervision context (Tracey et al., Reference Tracey, Tannenbaum and Kavanagh1995). Links are weaker when a prescription appears arbitrarily imposed or to only benefit the person imposing it (DeHart-Davis, Reference DeHart-Davis2009). For closed skills, trainers could focus on strengthening attitudes toward the standards and the process of executing them, noting adverse consequences of deviations. Imposing nonvalued standards is likely less of an issue for closed skills performed under supervision due to the nature of a close employee-supervisor working relationship. With respect to open skills, trainers could attempt to enhance trainees’ appreciation for creativity, risk taking, problem solving, and the ability to adapt principles in different contexts. When skills will be transferred under close supervision, trainers could develop in trainees positive attitudes toward supervision, accepting feedback from others, and the benefits of teamwork. Supervisors should accept some adaptation and variation and confer with trainees to gain their acceptance of prescriptions. In turn, for autonomously performed open skills, trainers could target the benefits of working independently, freedom, and self-determination. Extending the aforementioned arguments, a potentially fruitful strategy is to leverage trainees’ individual differences by linking them to the specific characteristics of the skill to be transferred. For example, for closed skills, trainers could appeal to trainees’ orderliness and dutifulness, and for autonomously performed skills, trainers would appeal to trainees’ proactive personality.
Limiting Mismatches
One challenge that may weaken the prescription-identity link is a mismatch between prescriptions for a specific skill set and the overall nature of an individual’s job. That is, the link could be compromised when trained skills are inconsistent with the overall set of responsibilities for a given job and the degree of supervision an individual typically receives at work (Mathieu & Martineau, Reference Mathieu, Martineau and Ford1997). There appears to be an implicit assumption in the transfer literature that there is congruence on this front, which may not always be the case. For example, an administrative assistant who normally performs closed tasks may be trained in more complex project management skills. As another example, a college professor who normally performs autonomously may be evaluated three times a semester in his or her use of a new classroom management technology, a potential affront to autonomy. In such instances, trainees may resist transfer if they are given prescriptions inconsistent with their identity and job.
To overcome this challenge and strengthen the prescription-identity link, practitioners can either limit mismatches or acknowledge them when they are necessary. Furthermore, alternative strategies should be employed to minimize identity threats depending on whether trainees will be expanding or narrowing the scope of their work. When moving from closed to open skills or supervised to autonomous working conditions, trainers should seek to explicitly expand the trainee’s identity to encompass the new task. For example, trainers and supervisors could appeal to an individual’s need for growth, development, and autonomy. However, when moving from open to closed skills or from autonomous to supervised working conditions, people may experience threats to their perceived competence because they are being constrained. To limit identity threats in these instances, trainers should seek to appeal to trainees’ willingness and ability to do the task well for the benefit of the organization and recognize and reward their sacrifice.
Bolstering the Identity-Event Link
The identity-event link reflects the extent to which the actor has personal control over the event; higher perceived control enhances felt responsibility (Dose & Klimoski, Reference Dose and Klimoski1995). In the context of transfer, this link is stronger when individuals have confidence in their ability to successfully use new knowledge and skills and favorably influence the desired outcome of transfer, namely improved job performance. Self-efficacy beliefs, which have been demonstrated to have a positive impact on behavior across a wide set of domains (Bandura, Reference Bandura1986; Judge & Bono, Reference Judge and Bono2001; Stajkovic & Luthans, Reference Stajkovic and Luthans1998), are central to strengthening the identity-event link (Schlenker, Reference Schlenker1997).
Bandura (Reference Bandura1986) defined self-efficacy as individuals’ “judgments of their capabilities to organize and execute courses of action required to attain designated types of performances” (p. 391). Wood and Bandura (Reference Wood and Bandura1989) contend that self-efficacy beliefs relate to individuals’ perceived capabilities “to mobilize the motivation, cognitive resources, and courses of action to meet given situational demands” (p. 408). Self-efficacy beliefs may include both traitlike individual differences (Chen, Gully, & Eden, Reference Chen, Gully and Eden2001; Judge, Erez, & Bono, Reference Judge, Erez and Bono1998; Judge, Locke, & Durham, Reference Judge, Locke and Durham1997) and task-specific states that can be enhanced through mastery experiences (Bandura, Reference Bandura1986; Stajkovic & Luthans, Reference Stajkovic and Luthans1998). As such, strengthening self-efficacy to strengthen the identity-event link could be achieved by careful employee selection and providing posttraining support (Gist & Mitchell, Reference Gist and Mitchell1992), including goal-setting and self-management training, to which we now turn. A major challenge to strengthening this link is unrealistic expectations.
Selecting Employees
Two traits could be used in the selection process to facilitate trainees’ perceived personal control. The first is generalized self-efficacy (GSE), which refers to the extent to which an individual has an enduring belief that he or she is capable of accomplishment irrespective of the situation or task demands (Chen et al., Reference Chen, Gully and Eden2001; Judge et al., Reference Judge, Erez and Bono1998; Judge et al., Reference Judge, Locke and Durham1997). Given that those higher in GSE believe they can succeed in any achievement situation, they likely will have confidence in their ability to transfer. In addition, the perceived ability to learn and solve problems (PALS), which relates to self-efficacy in acquiring new knowledge and skills and effective problem solving, may have particular relevance for transfer of training (Tews, Michel, & Noe, Reference Tews, Michel and Noe2011). Although PALS has not been examined in the context of learning explicitly, Tews and colleagues demonstrated that PALS was significantly related to job performance for managers and entry-level employees. Moreover, PALS was found to explain additional variance in performance beyond general mental ability, personality, and similar constructs related to learning and problem solving. In the context of Yelon and Ford’s (Reference Yelon and Ford1999) model, GSE and PALS are more relevant for open skills and for those autonomously performed as they place greater demands on individuals. GSE and PALS are likely less relevant for closed, supervised work because its standardized nature reduces the importance of human judgment and creativity.
Providing Posttraining Support
A variety of goal-setting interventions may bolster accountability for autonomous employees, particularly those performing open skills. In an early study in this area, Wexley and Nemeroff (Reference Wexley and Nemeroff1975) demonstrated in the development of managerial and negotiation skills that trainees who received assigned goals, coupled with on-the-job coaching sessions with trainers, exhibited superior on-the-job performance compared to trainees who attended classroom training only. Richman-Hirsch (Reference Richman-Hirsch2001) illustrated that goal-setting training focused on action planning within the formal classroom resulted in better customer service performance for trainees who participated in this supplement compared to those who received classroom training only. Furthermore, Tews and Tracey (Reference Tews and Tracey2008) demonstrated that a self-coaching program designed to improve the transfer of interpersonal skills for managers resulted in higher posttraining performance and self-efficacy beliefs for trainees compared to those who received classroom training only. This intervention involved trainees completing written self-assessments in which they reflected on their performance and established learning and performance goals for several weeks after completing the formal training.
Self-management training, which is similar to goal-setting interventions, could also have relevance for transferring autonomously performed skills. Self-management training is training in the formal classroom environment designed to equip individuals with skills necessary to support successful transfer (Marx, Reference Marx1982; Richman-Hirsch, Reference Richman-Hirsch2001). This training typically involves lectures and discussions on these self-management strategies, as well as opportunities for trainees to establish goals for themselves, identify potential challenges to successful performance, and develop specific strategies to facilitate transfer (Richman-Hirsch, Reference Richman-Hirsch2001). It should be noted that while accountability mechanisms typically involve an external audience to evaluate an individual’s performance (Frink & Klimoski, Reference Frink and Klimoski1998), by definition, autonomous trainees lack such an audience much of the time. Consequently, autonomous trainees must serve as the first line of accountability, making self-management training and related techniques necessary vehicles. Some research has demonstrated a positive impact for self-management training on posttraining performance (Noe, Sears, & Fullenkamp, Reference Noe, Sears and Fullenkamp1990; Tziner, Haccoun, & Kadish, Reference Tziner, Haccoun and Kadish1991), but other studies have not (Burke, Reference Burke1997; Gaudine & Saks, Reference Gaudine and Saks2004; Richman-Hirsch, Reference Richman-Hirsch2001; Wexley & Baldwin, Reference Wexley and Baldwin1986). Self-management training may rely too heavily on individuals to manage their performance on the job. One potential means to improve its effectiveness is to have trainees formally meet with supervisors or trainers, which would increase the degree of accountability and allow for follow-up coaching and advice.
Minimizing Unrealistic Expectations
Unrealistic expectations for a successful event diminish the strength of the identity-event link (Schlenker, Reference Schlenker1997), representing a key challenge. Accordingly, successful attempts at transfer should not be perceived as too difficult. Given that closed skills are relatively easier to acquire than open skills, initial proficiency may be expected sooner posttraining for closed skills; however, not all closed tasks are simple, so proficiency is not always quickly attained. Supervisors should be sure there is enough time and excess trainee capacity to facilitate transfer. Because open skills are more complex, supervisors need to scale back initial proficiency expectations even further than for closed skills and reward progression, both formally and informally. For open skills, supervisors should place a greater emphasis on learning goals, where individuals are allowed to focus on further skill acquisition, effort, challenge, and errors (Kozlowski et al., Reference Kozlowski, Gully, Brown, Salas, Smith and Nason2001). When skills are autonomously performed, trainees bear a greater responsibility for setting realistic transfer goals to enhance their personal control. Moreover, for autonomously performed skills, there is a greater need for goal-setting and self-management training to help trainees traverse the transfer process.
Guidelines for Practice
We have identified several useful strategies in the preceding sections to strengthen the prescription-event, prescription-identity, and identity-event links to enhance accountability and transfer in different contexts. The framework encourages careful consideration of what happens during training and in the broader organization to increase trainees’ role clarity, sense of ownership, and perceived control over transfer. A common theme throughout this chapter has been that one size does not fit all and that careful consideration must be paid to the nature of the skill and the conditions under which the skill will be performed. It is important that practitioners not necessarily assume that a transfer strategy that worked well in one context will work well in another. Practitioners should be well versed in a broad set of transfer accountability strategies, as there is no quick fix.
The complexity and challenges of transfer highlight the importance of conducting a careful needs assessment to determine the specific context for transfer and the availability of support and accountability mechanisms to facilitate the application of new knowledge and skills. Along the same lines, multiple stakeholders should be involved in the needs assessment process and the implementation of accountability strategies. Transfer is not the responsibility of one, but of many stakeholders – trainers, supervisors, and trainees.
We summarize the conditions that must be satisfied to bolster Schlenker’s (Reference Schlenker, Britt, Pennington, Murphy and Doherty1994; Reference Schlenker1997) three accountability links in Yelon and Ford’s (Reference Yelon and Ford1999) four contexts in Tables 9.3–9.6. These tables provide a useful reference for practitioners to design strategies for transfer; a sample job and task is provided for each context to help the reader. In designing transfer strategies, practitioners should strive to bolster all three links. Consider the context of actors developing a character role, a supervised open skill. To address the prescription-event link, the actors would require clear outcome goals for putting on a good performance, clear expectations for skill adaptation in developing their characters, and clear expectations that they will be directed and not be wholly autonomous. To address the prescription-identity link, the actors must value skill adaptation, as well as value and be receptive to taking direction. Finally, to address the identity-event link, the actors must perceive that they have the ability to create a well-developed character. We acknowledge that designing accountability strategies may not always be easy, and they may not be necessary at all times. However, we encourage practitioners to make the attempt for knowledge and skill sets of particular strategic importance.
Table 9.3 Transfer conditions for supervised closed skills
| Example Job: Fast Food Restaurant Cook; Focal Skill to Transfer: Making Food to Standards | | |
|---|---|---|
| Transfer Requirements | Accountability Link | Trainee Conditions Necessary for Transfer |
| Trainees must follow precise standards under closely supervised conditions | Prescription-Event: The extent to which clear and unambiguous expectations exist for transfer | Trainees require precise standards for skill application and following direction from supervisors |
| | Prescription-Identity: The extent to which prescriptions are relevant to trainees by virtue of their role, values, or other personal attributes | Trainees value adhering to precise standards and willingly accept direction |
| | Identity-Event: The extent to which trainees have personal control over their ability to transfer | Trainees have the capacity to adhere to standards and follow direction |
Table 9.4 Transfer conditions for autonomous closed skills
| Example Job: Hotel Guest Room Attendant; Focal Skill to Transfer: Cleaning a Guest Room to Standards | | |
|---|---|---|
| Transfer Requirements | Accountability Link | Trainee Conditions Necessary for Transfer |
| Trainees must follow precise standards under autonomous working conditions | Prescription-Event: The extent to which clear and unambiguous expectations exist for transfer | Trainees require precise standards for skill application and clear expectations for taking responsibility for monitoring standards themselves |
| | Prescription-Identity: The extent to which prescriptions are relevant to trainees by virtue of their role, values, or other personal attributes | Trainees value adhering to precise standards and working independently |
| | Identity-Event: The extent to which trainees have personal control over their ability to transfer | Trainees have the capacity to adhere to standards and work independently |
Table 9.5 Transfer conditions for autonomous open skills
| Example Job: Supervisor; Focal Skill to Transfer: Motivating Employees | | |
|---|---|---|
| Transfer Requirements | Accountability Link | Trainee Conditions Necessary for Transfer |
| Trainees must adapt skill autonomously | Prescription-Event: The extent to which clear and unambiguous expectations exist for transfer | Trainees require clear outcome goals and clear expectations for skill adaptation and taking responsibility for monitoring their performance |
| | Prescription-Identity: The extent to which prescriptions are relevant to trainees by virtue of their role, values, or other personal attributes | Trainees value skill experimentation and working independently |
| | Identity-Event: The extent to which trainees have personal control over their ability to transfer | Trainees have the capacity to adapt skills and work independently |
Table 9.6 Transfer conditions for supervised open skills
| Example Job: Actor; Focal Skill to Transfer: Character Development | | |
|---|---|---|
| Transfer Requirements | Accountability Link | Trainee Conditions Necessary for Transfer |
| Trainees must adapt skill under supervised conditions; must have receptivity to take direction | Prescription-Event: The extent to which clear and unambiguous expectations exist for transfer | Trainees require clear outcome goals and clear expectations for skill adaptation and taking direction from supervisors |
| | Prescription-Identity: The extent to which prescriptions are relevant to trainees by virtue of their role, values, or other personal attributes | Trainees value skill experimentation and are open to taking direction from others |
| | Identity-Event: The extent to which trainees have personal control over their ability to transfer | Trainees have the capacity to engage in skill experimentation and take direction from others |
Future Research
By integrating Schlenker’s (Reference Schlenker, Britt, Pennington, Murphy and Doherty1994; Reference Schlenker1997) and Yelon and Ford’s (Reference Yelon and Ford1999) frameworks, we have offered a number of strategies to enhance transfer. Some of these propositions were empirically referenced, but others remain theoretical. While we have identified several potentially viable transfer enhancement strategies, organizations may be using a host of additional strategies. Descriptive research would therefore be valuable to generate data on accountability strategies already being employed in workplaces. To extend the contribution of this chapter, research is needed to test the extent to which the strategies offered herein are effective. A fundamental tenet of this chapter is that situational specificity matters. As such, when validating these strategies, context must either be experimentally manipulated or measured and modeled in survey research. In addition to assessing the direct influence of the strategies on transfer, research could examine trainees’ perceptions of goal/process clarity, ownership of transfer, and personal control as mediators in strategy-transfer relationships. Such work would validate the hypothesized central role of accountability for transfer.
An area in need of research is how best to promote favorable trainee attitudes toward performing open and closed skills under varying degrees of supervision. One challenge to developing favorable attitudes may be threats to an individual’s identity. For example, performing a closed skill under supervised working conditions is likely to be resisted by those who prefer to perform open skills autonomously. Research would be worthwhile that compares whether a discussion-based training format, in which trainees generate the benefits and importance of executing a specific skill, yields more favorable attitudes than an approach in which trainers communicate the benefits. Following Deci and Ryan’s self-determination theory (Reference Deci and Ryan2002), which posits that individuals seek to be self-directed agents, such a discussion-based approach might yield high returns.
We suggest several specific comparisons to validate our proposed integrated model. One potentially useful comparison is to examine the value of a single strategy across different contexts. For example, research could examine the different effects of self-management training for open versus closed skills. Given the complexity inherent in open skills, we would hypothesize that self-management training would be more effective in facilitating transfer in this context. Another useful comparison is to assess the effectiveness of different strategies that address a specific link. Both personality characteristics and perceived role breadth were argued to influence the prescription-identity link, and research would be worthwhile that examines which matters more.
Further, research would be valuable that examines the relative importance of the different links in a particular context. While the combination of all links forms the social adhesive that promotes accountability, the links may not always be of equal importance in specific situations. We believe that the prescription-event link may be more important than the prescription-identity link for supervised closed skills and that the prescription-identity link is more important than the prescription-event link for autonomous open skills. Testing such relationships is warranted to ascertain whether greater precision is in fact necessary or whether a more parsimonious set of strategies suffices.
These avenues for research could be addressed either through survey research or through experimental manipulations. A survey of employees could assess training context, training design and delivery, work environment support, individual differences, and transfer, preferably with data collected at multiple points in time. In an ideal design, employees from a large organization or across multiple organizations would be sampled to provide the necessary variability for comparative studies. Although it is difficult to secure such samples for training research, the increased availability of online panels through Qualtrics, for example, makes such research more feasible. When conducting transfer studies across contexts, the nature of the job, performance, and focal training content are likely different. Thus, researchers must pay careful attention to the selection of dependent variables that are applicable across contexts and lend themselves to meaningful comparisons. In this regard, measures of transfer should be general (e.g., “I successfully apply material from training on the job”) as opposed to content specific (e.g., “I successfully apply customer skills from training on the job”).
Experimental studies could also be conducted in the field to further substantiate cause-and-effect relationships. Given the challenge of gaining access to organizations, experimental research with samples of working students may also be valuable. The aforementioned DeMatteo et al. (Reference DeMatteo, Lundby and Dobbins1997) lab study is a useful exemplar. In their 2 × 2 design, they studied two accountability interventions and their temporal position (before vs. after training) using a student sample and measured students’ satisfaction and learning. Although DeMatteo and colleagues did not measure transfer, it is feasible to do so with the advent of mobile technologies such as Socrative® that enable researchers to survey individuals, via phones, tablets, and laptops, with simple questions about their transfer of class learning to the workplace.
Conclusion
In this chapter, we provided a synthesized framework for transfer of training that is theory driven, integrative, and context sensitive. By integrating the work of Schlenker and colleagues (Reference Schlenker, Britt, Pennington, Murphy and Doherty1994; Reference Schlenker1997) and Yelon and Ford (Reference Yelon and Ford1999), this chapter has discussed how to enhance trainees’ role clarity, sense of ownership, and perceived control over transfer for open and closed skills performed either under supervised or autonomous working conditions. Promoting transfer of training represents a perennial challenge for scholars and practitioners. Yet, promoting transfer is critical to ensure that employees possess the knowledge and skills to succeed in today’s competitive and dynamic business environments. It is our hope that this chapter has provided a useful framework for understanding accountability issues associated with transfer, guiding future research efforts, and facilitating transfer design in practice.
Any company has the potential to build deep specialization. Think about those job roles that define your company’s competitive advantage. In most, these roles are in research, engineering, or manufacturing. Start here – and ask your business leaders to define what an expert really is. Study these people and use them as models to build deep specialization programs for others. (Bersin, Reference Bersin2009)
Work is becoming more knowledge driven and global in scope, requiring a deeper combination of information, experience, understanding, and problem-solving skills that can be applied to decisions and actions around strategically critical situations (Kraiger & Ford, Reference Kraiger, Ford and Koppes2007). This reality highlights the need for enhancing the development of deep specialization for functional experts in key or “mission critical” jobs that are important for future growth (Ziebell, Reference Ziebell2008). For example, at Intel, 80% of the worldwide staff works in technical positions (Bersin, Reference Bersin2009). This company and others like it are dependent on the level of knowledge and expertise in specialized areas. For example, the nuclear industry sets criteria for job position risk assessment, with the highest level being jobs where: (1) individuals have critical and unique knowledge or skills with the potential for significant reliability and safety impacts, (2) years of training and experience are required, and (3) there are no ready replacements available (International Atomic Energy Agency, 2006).
The importance of understanding and enhancing deep specialization is particularly relevant today in response to the imminent retirement of such experts (i.e., the “grey tsunami”) in many strategic business areas, especially in the United States (Moon, Hoffman, & Ziebell, Reference Moon, Hoffman and Ziebell2009). It is important to not only understand in broad terms the distribution of expertise by job category, but also to identify what type of expertise (what tasks, skills, knowledge) is likely to be lost (risk assessment) through expected promotions, turnover, and retirements and to create processes to address expertise loss and strategically develop talent where there will be critical shortages before they occur (Ziebell, Reference Ziebell2008).
As noted by Schein (Reference Schein1993), an individual’s career can be studied as a series of movements along three different dimensions: (1) moving up in the hierarchy; (2) moving laterally across various subfields; and (3) moving toward the centers of influence in an organization. The concept of deep specialization in core areas that are critical to organizational success is targeted toward this third, and less researched, career movement. As noted by Bersin (Reference Bersin2009), it is time to rethink the traditional career pyramid by increasing efforts to develop and enhance deep levels of skills and knowledge and hence accelerate the time it takes to become an expert in a career field. While organizations rely on experts in critical jobs to achieve strategic goals, they often do not fully appreciate or understand the impact of expertise on day-to-day operations (Borton, Reference Borton2007; Prietula & Simon, Reference Prietula and Simon1989).
Organizations often focus on bringing newcomers up to speed on a job to get immediate value out of the investment of recruitment, selection, and initial training costs (Byham, Reference Byham2008). Less attention is paid to longer-term developmental strategies to facilitate the move to deep specialization (Lord & Hall, Reference Lord and Hall2005; Ziebell, Reference Ziebell2008). Identifying effective strategies for long-term development is a critical step in considering how to speed up the development cycle. Given that it can take years of training, learning activities, and work experiences to develop deep specialization in a technical career field, it is important to identify key levers to accelerate individual development (Hoffman et al., Reference Hoffman, Ward, Feltovich, DiBello, Fiore and Andrews2014).
The purpose of this chapter is to examine intentional learning strategies that can be enacted to develop individuals in highly specialized jobs throughout the course of their career. There have been some attempts to focus not just on training in isolation but on continued development throughout a career (e.g., Caligiuri & Tarique, Reference Caligiuri, Tarique, Osland, Li and Wang2014; Cerasoli et al., Reference Cerasoli, Alliger, Donsbach, Mathieu, Tannenbaum and Orvis2014). Salas and Rosen (Reference Salas, Rosen, Kozlowski and Salas2009) reviewed the literature on the development process as individuals move from novice to expert, provided a framework for understanding the characteristics of expertise, and discussed how expertise can be developed and maintained. Based on this framework, they developed 17 principles for developing expertise at work (e.g., provide variability in learning activities). While the framework and principles have value, only one of the 17 principles dealt explicitly with differences in learning strategies to support learning needs through the transitions from beginner to intermediate to advanced learner.
The present chapter explores changing developmental needs and effective learning strategies as an individual moves from a relative newcomer to becoming a valued employee with deep specialization. We first identify three key characteristics that evolve as individuals develop expertise in their job and performance indicators that are associated with this evolution. The chapter then provides a framework for the goals of development over time. The final section provides insights into strategies for building knowledge and skills as a person progresses toward expertise.
The Road to Expertise
The goal of development is the achievement of consistent, superior performance through enhanced mental and physical processes. This goal is pursued by providing employees experience, training, and on-the-job learning activities that have a “developmental punch” (Ford & Kraiger, Reference Ford, Kraiger, Cooper and Robertson1995; Kraiger & Ford, Reference Kraiger, Ford and Koppes2007; Quiñones, Ford, & Teachout, Reference Quiñones, Ford and Teachout1995). Researchers have begun to identify the depth of knowledge and skill building that characterize deep specialization in a career field (Ericsson, Nandagopal, & Roring, Reference Ericsson, Nandagopal and Roring2009; McCall, Reference McCall2004). This section discusses the shifts that occur to change a person’s status from a relative novice to expert, as well as indicators of expert performance.
Qualitative Shifts That Define Expertise
Dreyfus and Dreyfus (Reference Dreyfus and Dreyfus1986) identify five developmental stages: novice, advanced beginner, competent, proficient, and expert. Variants of this stage model exist, adding terms such as naivete, initiate, apprentice, and journeyman (Alexander, Reference Alexander2003; Hoffman, Reference Hoffman, Williams, Faulkner and Fleck1996; Hoffman et al., Reference Hoffman, Shadbolt, Burton and Klein1995; Kinchin & Cabot, Reference Kinchin and Cabot2010). Research generally supports that there is a stepwise (but not necessarily linear) progression of skill and practice in developing expertise (Dall’Alba & Sandberg, Reference Dall’Alba and Sandberg2006; Kinchin & Cabot, Reference Kinchin and Cabot2010). During the beginning phase, novices are focused on learning facts and using deliberate reasoning, and they rely on general strategies across situations. By the competent stage, learners have begun to organize related pieces of information into mental models and have routinized many (but not all) types of processes. By the time learners have reached the expert stage, they can deliberately reason about their own intuitions regarding a situation or problem and generate new rules or strategies to use (see Dreyfus & Dreyfus, Reference Dreyfus and Dreyfus1986).
The stage models highlight that the knowledge of an apprentice is thus not just an incomplete version of the knowledge of an expert (Lajoie, Reference Lajoie2003). Rather, there are three key qualitative shifts that occur over time. One shift concerns depth of knowledge. With deep specialization, the knowledge of the learner has become proceduralized and principled, so that individuals can not only recall facts and figures, as an advanced beginner can, but can also distinguish between situations in which that knowledge (or skill) is applicable and those in which it is not (Ericsson & Charness, Reference Ericsson and Charness1994). This ability leads individuals to apply their depth of knowledge in the appropriate context and at the appropriate time to achieve superior performance. Over the course of various job experiences, individuals learn rules and heuristics and can test the boundaries of those heuristics to recognize constraints of the problem space (Ericsson & Charness, Reference Ericsson and Charness1994). As learners encounter a greater number of situations, pattern recognition increases. Consequently, an individual with deeper knowledge can do a better job of relating information to changing demands and predicting what might happen next given the current situation. For instance, Dreyfus and Dreyfus (Reference Dreyfus and Dreyfus1986) posit that a difference between competent and proficient performers is that competent individuals lack the “know-how” of understanding how to approach a problem, while proficient individuals have many more experiences that allow them to think and act more automatically.
A second qualitative shift concerns the complexity of a learner’s mental models or ways of organizing knowledge. As individuals gain experience with a task or job, they begin to form relational knowledge that defines how various pieces of information fit together (Klein & Hoffman, Reference Klein, Hoffman and Rabinowitz1993). Learners who have achieved deep specialization have well-defined mental models that help them recognize connections between seemingly disparate pieces of information that then lead to problem solutions. In particular, they possess knowledge structures that contain both problem definitions and specific solutions, while learners who are at the earlier stages tend to possess separate knowledge structures for problem definitions and solutions (Ericsson & Charness, Reference Ericsson and Charness1994). Thus, when exposed to a domain early in a career, individuals focus on the fundamental elements of the problem and seek out evidence to confirm or disconfirm their hypotheses. Over time, they look at the features of the problem and the overall situational patterns, and reflect on job experiences that may have been similar. In this way, individuals progressively focus more on the entire situation holistically to see the interconnectedness of features within the problem space to move forward to a solution (Salas & Rosen, Reference Salas, Rosen, Kozlowski and Salas2009). This understanding of interconnectedness leads to well-organized schemas such that when individuals are confronted with a problem that requires the knowledge stored in long-term memory, the brain recalls the schema and it is placed in working memory (Prietula & Simon, Reference Prietula and Simon1989; Salas & Rosen, Reference Salas, Rosen, Kozlowski and Salas2009). As a result, this knowledge no longer places as large a burden on an individual’s working memory.
Experts rely less on context-free information, instead focusing on the context of the situation and their past personal experience in related contexts and settings (Farrington-Darby & Wilson, Reference Farrington-Darby and Wilson2006). For example, high-level programmers can mentally group steps within a task so that when they see a particular symptom or problem, they can identify several alternative strategies to take and can rank order these strategies in terms of their likelihood of success (Ford & Schmidt, Reference Ford and Schmidt2000).
A third qualitative shift involves the development of effective self-regulatory skills, including the ability to know which strategies are appropriate for facilitating further knowledge and skill acquisition (Lord & Hall, Reference Lord and Hall2005). Individuals who have achieved deep specialization are more accurate in monitoring and assessing their own mental states, more likely to know when they have understood task-relevant information, and more likely to discontinue a problem-solving strategy that would ultimately prove unsuccessful (Salas & Rosen, Reference Salas, Rosen, Kozlowski and Salas2009). They are also better able to estimate the number of trials they will need to accomplish a task. For example, highly valued information technologists have superior understanding of programming tasks and of ideal working strategies, and have a better awareness of their own performance strategy options (Sonnentag, Reference Sonnentag1998).
Performance Indicators of Expertise
As individuals increase in knowledge and skill level, they become increasingly reliant on the situation to inform them of the problem, take a more holistic approach to recognizing patterns in the problem, base decision making on intuition, and become absorbed in their performance. As noted by Dreyfus (Reference Dreyfus, Ericsson, Charness, Feltovich and Hoffman2006), those who reach high levels of expertise “do not solve problems, and do not make decisions; they do what normally works” (24). Individuals who have reached this high level can also begin to contribute to their domain by generating new knowledge creatively (Ericsson & Charness, Reference Ericsson and Charness1994).
Researchers have begun to explore performance characteristics that can be used to determine if a person has obtained expert status (Germain & Tejeda, Reference Germain and Tejeda2012; Van der Heijden & Verhelst, Reference Van der Heijden and Verhelst2002; Weiss & Shanteau, Reference Weiss and Shanteau2003). As noted by the Electric Power Research Institute, although it may take up to 25 years to become an expert in mission-critical activities, a proportion of utility personnel with that level of tenure are not recognized as having expertise; they have simply become very good (i.e., proficient) at what they do (see Ziebell, Reference Ziebell2008). Hoffman et al. (Reference Hoffman, Ward, Feltovich, DiBello, Fiore and Andrews2014) provided behaviorally based indicators to distinguish between those who are proficient and those considered to be expert in a specialized area. Some of the indicators of experts include: (1) is highly regarded by peers because of their highly organized body of knowledge; (2) shows consummate skill (i.e., has qualitatively different strategies and has economy of effort); (3) deals effectively with rare cases; (4) recognizes aspects of a problem that make it novel and brings strategies to solve tough cases; and (5) contributes new knowledge and procedures. Hoffman and Hanes (Reference Hoffman and Hanes2003) provide evidence that experts at particular highly skilled jobs (e.g., repairing large turbines) can be identified by plant managers and fellow engineers based on these characteristics.
Framework for Understanding the Development of Deep Specialization
While research on expertise has expanded our understanding of underlying characteristics and expected changes in performance from novice to expert, the majority of expertise studies have focused on tasks in well-defined domains such as chess, hockey, and piano performance (e.g., Ericsson, Reference Ericsson, Ericsson, Charness, Feltovich and Hoffman2006). In addition, the emphasis in these studies has been on differences between experts and novices, with less focus on the developmental challenges of moving an individual from a novice to a competent level and from a competent level to deep specialization.
In well-defined, highly focused tasks, the strategy of deliberate practice has been highlighted as a key impetus for development (Ericsson, Krampe, & Tesch-Romer, Reference Ericsson, Krampe and Tesch-Römer1993). In comparison, problems in highly technical jobs, such as product development engineers for the auto industry or power plant operators, are much more ill-defined and varied, thus posing several challenges for developing deep specialization. These types of jobs require complex learning within dynamic situations so that individuals can learn how to generalize information and strategies from one type of problem situation to new, unpredictable problems. For example, when dealing with a traffic accident, police officers must learn how to attend to various pieces of information they obtain about the actions of the drivers involved, both directly from witnesses and indirectly from the accident scene. They must develop general strategies for how to obtain this information and how to make judgments as to the accuracy of the information, which they then employ dynamically across accident situations, altering those strategies as needed based on the specific context (such as when, for example, there are no witnesses and the information provided by the drivers involved conflicts). While moving through the stages of expertise, there is also an interplay between immediate demands for performance and longer-term expertise development, as individuals are required to perform at high levels each day while learning to perform tasks that they have not yet faced.
Table 10.1 presents a framework for thinking about the development of deep specialization for highly complex skills within specialized jobs. The model consists of two dimensions. The first dimension targets the expected outcomes of developmental activities: a useful conceptual approach for examining development over time is to think in terms of the need to build routine expertise or adaptive expertise (Hatano & Inagaki, Reference Hatano, Inagaki, Stevenson, Azuma and Hakuta1986; Holyoak, Reference Holyoak, Ericsson and Smith1991). The second dimension of the framework focuses on the two major types of learning activities for developing skills – through formal training or through work experience (Ford & Kraiger, Reference Ford, Kraiger, Cooper and Robertson1995). Research consistently shows that organizations that invest more in training have higher levels of productivity (Kim & Ployhart, Reference Kim and Ployhart2014; Sung & Choi, Reference Sung and Choi2014; Zwick, Reference Zwick2006). Learning activities can also be incorporated into the work experience of the learner to facilitate building the depth of knowledge, organized mental models, and strong self-regulatory processes required as individuals move toward deep specialization (Sonnentag, Reference Sonnentag and Kleine2000).
Table 10.1 Heuristic model of development toward deep specialization
| | Routine Expertise | Adaptive Expertise |
|---|---|---|
| Training | | |
| Work Experience | | |
Routine and Adaptive Expertise
Hatano and Inagaki (Reference Hatano, Inagaki, Stevenson, Azuma and Hakuta1986) and Holyoak (Reference Holyoak, Ericsson and Smith1991) have distinguished between developing routine and adaptive expertise. Routine expertise focuses on knowledge and skills that individuals apply to well-learned and familiar contexts and situations. Through learning activities, individuals compile declarative knowledge into procedural, condition-action knowledge, and continued practice leads to automatic and efficient performance. Adaptive expertise involves the capability to simultaneously integrate multiple sources of knowledge for use in addressing changing conditions and unfamiliar situations.
Hoffman et al. (Reference Hoffman, Ward, Feltovich, DiBello, Fiore and Andrews2014) have noted characteristics of tasks that affect the need for routine or adaptive expertise, such as whether situations are relatively static or dynamic. In dynamic conditions (relevant for many work tasks), goals are often shifting and may even compete with one another; problems are often ill-structured, and information is incomplete and ambiguous (Oransanu & Connolly, Reference Oransanu, Connolly, Klein and Oransanu1993). Action feedback loops are often lengthy in dynamic situations, making the connection between behavioral cause and effect more difficult to establish. Dynamic conditions also require an individual to be able to recall multiple perspectives and schemas to determine the meaning of the situation or problem before moving to solutions.
Although routine experts can solve familiar problems quickly and accurately, they have difficulty with novel problems or situations. In contrast, adaptive experts are said to be able to invent new procedures based on their knowledge and make predictions regarding possible outcomes that may occur depending on the strategy taken. Adapting requires an understanding of deeper principles underlying the task, executive-level capabilities to recognize and identify changed situations, and knowledge of whether the existing repertoire of strategies can be applied (Smith, Ford, & Kozlowski, Reference Smith, Ford, Kozlowski, Quiñones and Ehrenstein1997). If the situation requires individuals to reconfigure procedures, extensive knowledge about a variety of procedures, as well as how to select and combine them, is necessary. The progression from competent to adaptive expert requires that decision making and, indeed, an understanding of the particular situation are intuitive, such that this stage may “take longer to reach than any of the intermediate stages, if it’s ever reached at all” (Kinchin & Cabot, Reference Kinchin and Cabot2010: 155).
The concepts of routine and adaptive expertise are consistent with calls from instructional design researchers to distinguish between training for recurrent and nonrecurrent skills. As noted by Young et al. (Reference Young, van Merriёnboer, Durning and Cate2014), for novice or advanced beginner learners, the aspects that need to be developed are recurrent skills (i.e., practicing tasks that are consistent from problem situation to problem situation). This development of recurrent skills continues throughout a career as an individual comes to automatize certain tasks and develops more efficient and effective strategies relevant to these recurrent skills. When moving beyond basic competency, there is a need to also develop nonrecurrent skills, in which the effectiveness of behaviors differs across problem situations. Thus, the development of nonrecurrent skills builds upon the foundations laid by developing competencies in the recurrent skills.
Learning Activities
Much of the empirical work on understanding learning and transfer of learning has focused on relatively simple tasks (Wulf & Shea, Reference Wulf and Shea2002). Recent work has emphasized that learning complex skills differs from learning simple skills during training and work experience, and that principles of learning must reflect that reality (Wulf & Shea, Reference Wulf and Shea2002). In addition, while there has been much research on how work experience helps build complex leadership skills, less attention has been paid to the role of experience in building complex skills in highly technical and specialized jobs. The following sections provide strategies for developing complex skills in such specialized jobs through both formal training design and on-the-job learning principles within an organizational setting.
Training for Complex Skills
Complex learning must target the integration of knowledge, skills, and attitudes and the transfer of what is learned to work (van Merriёnboer, Kirschner, & Kester, Reference van Merriёnboer, Kirschner and Kester2003). Cognitive load theory provides a foundation for discussing the instructional design principles that facilitate learning while not overwhelming the learner with the complexity of the tasks to be learned.
Cognitive load theory focuses on how cognitive resources are distributed in a learning situation and how the demands a task places on those resources tax the limited processing capacity of working memory (Chandler & Sweller, Reference Chandler and Sweller1991). It has been used as a guiding framework to describe an expertise reversal effect, or the change in the effectiveness of particular learning techniques as an individual progresses through stages of expertise (Kalyuga et al., Reference Kalyuga, Ayres, Chandler and Sweller2003). When teaching complex skills, novices and advanced beginners need much more structured information (i.e., information that keeps intrinsic cognitive load low) than competent performers in order to understand the basics and learn productively. As skill level increases, this same amount of structure is no longer productive for learning. Support that once helped manage intrinsic cognitive load (associated with performing the essential aspects of the task) instead imposes extraneous load (associated with the nonessential aspects of the task), such that the information provided is redundant and prevents the learner from acquiring new information (Kalyuga, Reference Kalyuga2007). For example, newer employees’ working memory can process only a limited number of elements at any given time, which can create a bottleneck for learning (Young et al., Reference Young, van Merriёnboer, Durning and Cate2014). To facilitate learning, extraneous load must be decreased.
Learning from Work Experience
Developing routine or adaptive expertise requires not only formal training but also systematic and intentional work experiences that build upon trained skills. Ford et al. (Reference Ford, Quiñones, Sego and Sorra1992) identified three ways of operationalizing work experience: the breadth or number of different trained tasks performed on the job, the activity level or the number of times each of these tasks is performed, and the task type or the difficulty of the tasks performed on the job. In a meta-analytic review, Quiñones et al. (Reference Quiñones, Ford and Teachout1995) found that these different measures of work experience were differentially related to performance outcomes. Tesluk and Jacobs (Reference Tesluk and Jacobs1998) built on the work of Quiñones and colleagues and described work experience as having quantitative (amount, time) and qualitative (variety, challenge, complexity) components. In addition, they identified density and timing as interactional components (different combinations of quantitative and qualitative aspects of experience). Density focuses on the intensity of work experiences. For example, if a person obtains experience in a variety of challenging situations, that person will have more density of experience than someone with the same tenure level who is given relatively fewer challenging assignments. The timing dimension refers to “when a work experience occurs relative to a longer sequence of successive experiences such as those that characterize a career” (329). For example, having a mentor who observes and immediately provides detailed feedback on how to improve following a challenging assignment facilitates learning. This dimension captures the notion that specific work experiences can be ordered or sequenced in ways that maximize motivational, learning, and performance outcomes depending on the developmental need of the learner.
Strategies for Developing Deep Specialization
In this section, we examine intentional strategies that can facilitate the development of expertise. We emphasize the different approaches for each of the four cells in Table 10.1.
Building Routine Expertise through Training
The literature on the training of complex tasks highlights three principles that fit with building routine expertise. In addition, there is an emerging literature on building skills to achieve routine expertise through targeted and intensive practice of key skills – especially given advances in training technologies such as virtual reality and serious gaming.
Incorporate Appropriate Instructional Design Principles
According to Koedinger and colleagues (Reference Koedinger, Booth and Klahr2008), learning principles consist of multiple instructional techniques that, when combined, create multiple possible instructional and learning paths. Of these, the most researched instructional design principles for facilitating learning of complex tasks are contextual interference, augmented feedback, and scaffolding.
Contextual interference is defined as the factors within the training environment that can enhance or inhibit learning. When learning a skill, contextual interference is specified through a practice schedule. Usually, contextual interference manifests as blocked practice (i.e., practicing all trials on a task before transitioning to another task) or random practice (i.e., transitioning between trials on different tasks). The contextual interference effect provides evidence that high contextual interference (random practice) results in better learning and transfer of simple skills (Shea & Morgan, Reference Shea and Morgan1979). However, the opposite appears to be true for learning complex skills – especially for novice learners attempting to learn a complex motor skill. According to Wulf and Shea (Reference Wulf and Shea2002), low contextual interference (blocked practice) is better for learning complex skills. The cognitive load when learning a complex skill is considerably higher than when learning simple skills due to the need for more memory and attentional resources. When individuals are starting to learn a complex task and have had little practice, low contextual interference will have a more positive impact on learning. As Wulf and Shea conclude, “the results seem to indicate that when the tasks are more difficult because of high attention, memory, and/or motor demands (or when learners are relatively inexperienced), random practice may overload the system and thus disrupt the potential benefits of random practice” (188). The implication is that as the individual practices the complex skill more often and becomes more proficient, introducing high contextual interference may benefit performance by allowing for better transfer of learning to varied contexts and settings.
Another principle for facilitating complex skill learning is augmented feedback. Feedback consists of providing knowledge of results or knowledge of performance to individuals. Wulf and Shea (Reference Wulf and Shea2002) argue that augmented feedback should be delivered frequently during complex skill practice trials to effectively transfer learning at the onset of practicing a new complex skill. By delivering frequent feedback for a task that could possibly overload individuals’ cognitive resources, learners obtain critical and specific performance information that helps them to understand their current learning progress and adapt their strategies and behaviors to improve. This helps learners to develop a working understanding of generally successful and unsuccessful principles and strategies and to learn to self-reflect on their performance, increasing the likelihood that complex skills will transfer to real-world contexts. In this way, individuals are able to develop their own checks and balances to their learning system.
The third principle, scaffolding, includes strategies that support learning over time as a learner develops (van Merriёnboer, Kirschner, & Kester, Reference van Merriёnboer, Kirschner and Kester2003). Van Merriёnboer et al. (Reference van Merriёnboer, Kirschner and Kester2003) argue that instructors should structure learning tasks from easy to difficult over time and vary the context in which these tasks occur to encourage generalization. From this perspective, as individuals progress, the amount of instructor support received should be reduced (i.e., fading). They encourage instructors to present supportive information (e.g., information that supports learning and performance; the productive intrinsic load) up front, before starting any instruction on a set of tasks. As individuals progress through the tasks, instructors should provide procedural information (e.g., how to perform the task) just in time so as not to overload working memory. Scaffolding reflects a strategy that facilitates enhancing awareness and self-regulation as well as building knowledge and skills (Azevedo, Cromley, & Seibert, Reference Azevedo, Cromley and Seibert2004; Najjar, Reference Najjar2008). For instance, for relatively new learners, it is important to provide more structured opportunities for learning. At earlier stages of development, trainees need guidance as to what to practice and how to practice it so that they can develop enduring, successful practice habits and experience (Barzilai & Blau, Reference Barzilai and Blau2014). However, for intermediate learners (those building routine expertise), an emphasis on developing independent learning and self-monitoring is likely most beneficial, as employees at this stage are more prepared to take charge of their own learning. Advanced learners benefit most from access to mentor guidance, as they are able to engage effectively in self-regulated learning but may not yet have the experience that more senior employees can draw on when making decisions.
Thus, instructional design principles suggest that moving from novice to expert involves adapting learning strategies from instructor-guided, structured, and focused on more simple tasks in early stages of learning toward more independently directed, variable, and focused on more complex tasks across situations in advanced stages of learning.
Targeted Practice
A critical component in starting and sustaining the process of building toward routine expertise is through deliberate practice (Ericsson et al., Reference Ericsson, Krampe and Tesch-Römer1993). As noted by Ericsson (Reference Ericsson, Ericsson, Charness, Feltovich and Hoffman2006), executing proficiently during routine work may not lead to further improvement. Instead, improvements depend on deliberate efforts to continually refine and enhance one’s skills.
Advances in new training technologies allow for much more targeted and intensive practice than through traditional training programs. Technologies such as virtual reality (VR) and simulation training provide a safe environment in which to make mistakes and observe the consequences of actions. Scenarios can stimulate the senses, incorporate interactivity and cause and effect linkages, as well as include a cycle of judgments, behaviors, and feedback that allow for high physical and psychological fidelity (Ford & Meyer, Reference Ford and Meyer2014).
With simulation and VR training, learning occurs by immersing the trainee in media-rich contexts that are similar to those encountered in real life (Brooks, Reference Brooks1999). A VR training system can simulate many different types of situations and learning events within a short time frame (Gupta, Anand, Brough, Schwartz, & Kavetsky, Reference Gupta, Anand, Brough, Schwartz and Kavetsky2008). VR training applications are now more numerous, more realistic, and more innovative. VR technology has been applied to areas such as driving simulators (Cockayne & Darken, Reference Cockayne and Darken2004), medical situations such as surgical procedures (Hague & Srinivasan, Reference Hague and Srinivasan2006), military tactics (Knerr, 2007), and aircraft maintenance tasks (Bowling, Khasawneh, Kaewkuekool, Jiang, & Gramopadhye, Reference Bowling, Khasawneh, Kaewkuekool, Jiang and Gramopadhye2008).
The evidence for the effectiveness of VR is striking for training very specific skills within discrete tasks such as medical procedures. Hague and Srinivasan (Reference Hague and Srinivasan2006) found that simulators lessened the time taken to complete a given surgical task in the operating room and led to no differences in error rates in comparison with traditional clinical training. Larsen and colleagues (Reference Larsen, Soerensen, Grantcharov, Dalsgaard, Schouenborg, Ottosen, Schroeder and Ottesen2009) examined training for laparoscopic surgery through randomized controlled trials and found that VR training conferred the equivalent of the experience gained from 25 surgeries and reduced the time to complete operations. McGaghie, Issenberg, Cohen, Barsuk, and Wayne (Reference McGaghie, Issenberg, Cohen, Barsuk and Wayne2012) showed that simulation with embedded deliberate practice enhanced specific clinical skill acquisition goals over traditional clinical medical training efforts. Larsen et al. (Reference Larsen, Oestergaard, Ottesen and Soerensen2012) found that operating time was reduced by 17% to 50% through VR training, depending on the simulator type and training principles incorporated. They also found that deliberate practice was superior to training based on a fixed time or a fixed number of repetitions. Interestingly, Bongers and colleagues (Reference Bongers, Diederick van Hove, Stasssen, Dankelman and Schreuder2015) incorporated problems into the laparoscopic skills simulator that a trainee was forced to address while continuing with the simulated operation. Results indicated that while the interruptions impacted all trainees’ performance, the intervention group (which received problems) was significantly faster in solving problems on the posttest evaluation.
Building Routine Expertise through Work Experience
Glaser and Chi (Reference Glaser, Chi and Chi1988) noted the need to understand how expertise is acquired and how beginning learners can be presented with the appropriate experiences to build competency. Strategies for incorporating intentional developmental job experiences into work to build functional depth are only now becoming a focus of research efforts. Two of the most promising intentional development strategies for developing routine expertise through work experience are deliberate performance and intentional efforts to enhance self-regulatory skills via cognitive apprenticeship.
Deliberate Performance
Deliberate performance is defined as “effort to increase domain expertise while engaged in routine work activity” (Fadde & Klein, Reference Fadde and Klein2010: 5). Similar to just-in-time training, deliberate performance involves leveraging everyday job situations as learning opportunities, but with a focus on building expert-like mental frameworks and situational awareness that allow an individual to approach decision making and problem solving more efficiently (Fadde & Klein, Reference Fadde and Klein2010). Fadde and Klein (Reference Fadde and Klein2010) suggest four deliberate performance strategies that can be used to build routine expertise while performing day-to-day work activities: estimation, experimentation, extrapolation, and explanation.
Estimation involves weighing what is known about a task or project, how it might be related to other tasks or projects, and how it could be affected by the environment. The individual can then be asked to approximate the amount of time and resources needed to complete the task prior to working on it. For example, an engineer could be prompted to estimate the amount of time and effort needed to troubleshoot an electrical problem and then, after completing the task, be asked to reflect on why the prediction was or was not accurate.
Experimentation allows an individual to try out different strategies for accomplishing a task or goal and then to make adjustments based on whether the strategy was successful or not. The individual is pushed to consider alternative strategies and to try them out and reflect, often with a mentor, on what strategies worked best and why.
Extrapolation is using some prior situation that was completed successfully or unsuccessfully (whether through direct experience or the experience of others) as a reference point for thinking about why success was achieved or how things might have been done differently to prevent a negative outcome.
Explanation involves having the individual discuss the steps taken on a project, explain why things were done the way they were, justify the order of the steps taken, and explain what cues prompted different responses and why. Explaining actions can help the learner make better sense of those actions, the situation, and the feedback loop to identify more effective strategies, understand the bottlenecks in the system, and make improvements.
Though few have examined deliberate performance in the workplace, researchers have investigated deliberate practice on the job. One successful instance of incorporating deliberate practice into the workplace was described by Sonnentag and Kleine (Reference Sonnentag and Kleine2000). They measured insurance agents’ systematic attempts at deliberate practice during work through regularly performing key tasks with the aim of developing their competence. They found that the more time agents spent on deliberate practice, over and above the number of cases handled and the amount of time on the job, the higher their rated performance. Overall, mental simulation (e.g., imagining difficult situations with a client and mentally exploring what to do) and asking for feedback were the main aspects practiced in a deliberate way while working on the job. As Fadde and Klein (Reference Fadde and Klein2010) note, the most difficult part of deliberate performance is making sense of the feedback received in the workplace environment. By estimating and experimenting (similar to the notion of mental simulation), on-the-job learners are able to make inferences about their environment, consider strategies to test those inferences on their own or with colleagues, and receive feedback from their environment on the success of those strategies.
Cognitive Apprenticeship
Another strategy for building expertise through work experience is through the promotion of self-regulation during task performance (Salas & Rosen, Reference Salas, Rosen, Kozlowski and Salas2009). Research has shown that self-monitoring and metacognition are useful in improving performance by fostering awareness of progress toward a task or goal (Fiorella, Vogel-Walcutt, & Fiore, Reference Fiorella, Vogel-Walcutt and Fiore2012; Pintrich, Wolters, & Baxter, Reference Pintrich, Wolters, Baxter, Schraw and Impara2000).
Lajoie (Reference Lajoie2003) describes a “cognitive apprenticeship model” that can be used to enhance cognitive and metacognitive processes in technical troubleshooting tasks while on the job. The model was developed to facilitate the transfer and enhancement of skills from the initial extensive training program, helping learners continue to learn on the job and accelerating the development of proficiency. The approach focuses on identifying how experts go about troubleshooting problems and then using that information to develop realistic job situations and problems and to build an intelligent tutoring system to use as a performance aid. Learners can then obtain extensive practice and build skills relevant to how experts approach and solve the troubleshooting tasks while at work. The apprenticeship includes the use of realistic problems through computer simulation of the job environment and the intelligent tutoring system to provide a safe environment for coaching support. Lajoie (Reference Lajoie2003) noted that after 24 hours of practice on this system, airmen who had been on the job for six months were able to troubleshoot test station failures at a level similar to others who had been on the job for four years. In addition, the more time learners spent in the tutoring environment, the fewer the steps they took to solve the problem and the closer their troubleshooting processes were to expert processes and solutions.
Research on the cognitive apprenticeship strategy demonstrates that making learners aware of the strategies experts use in identifying and solving problems can help guide practice and learning processes by building stronger self-regulatory skills. During the workday, an individual can be prompted to recognize patterns (identifying successful vs. unsuccessful strategies), seek feedback (looking for additional information regarding performance progress), and develop long-term goals. Thus, self-regulation can be a vehicle for engaging in a form of deliberate practice while performing job-relevant work. Such systems also allow individuals to learn from mistakes through work in simulated environments; the intelligent tutoring system can then help pinpoint areas in need of more development. These opportunities are important not only because they provide learners with the ability to experiment and make mistakes outside the context of their job performance, but also because they push learners to move beyond their current achievement and competency levels. Providing immediate, constructive, and high-quality feedback to learners as they navigate these learning tasks allows them to make adjustments and strategize with an emphasis on development rather than on performance, as is typically the case for formal job tasks. By providing employees with opportunities to tackle advanced skill development more quickly, the timeline that might be expected for developing expertise is shortened, as employees develop critical skills and cognitions earlier and independent of job title or formal job tasks.
Building Adaptive Expertise through Training
Adaptability has been conceptualized as the capacity to alter one’s performance in response to shifting challenges, as well as the ability to anticipate changes and modify strategies accordingly (Ely, Zaccaro, & Conjar, Reference Ely, Zaccaro and Conjar2009). Three processes that help to build adaptability are guided discovery learning, cognitive frame changing, and adaptive thinking training.
Guided Discovery Learning
The traditional learning approach (especially for building routine expertise) uses a deductive approach in which trainees are explicitly instructed on the complete task and its concepts, rules, and strategies. In contrast, research indicates the importance of taking an inductive approach to build more learning depth, as well as to promote adaptive expertise (Smith et al., Reference Smith, Ford, Kozlowski, Quiñones and Ehrenstein1997). In this discovery learning process, individuals must explore a task or situation to infer and learn the underlying rules, principles, and strategies for effective performance. As noted by Mayer (Reference Mayer2004) and Kirschner, Sweller, and Clark (Reference Kirschner, Sweller and Clark2006), it is also clear that guided discovery learning is more effective than pure discovery. Research also indicates that guided discovery learning is more effective for individuals with experience and some level of competency on the task (Taylor, Russ-Eft, & Chan, Reference Taylor, Russ-Eft and Chan2005). A recent meta-analysis by Hutchins et al. (Reference Hutchins, Wickens, Carolan and Cumming2013) found that the benefits of exploratory learning increased as guidance increased and were greater for far transfer than for near transfer.
There are several reasons why guided rather than pure discovery learning is beneficial (Ford & Schmidt, Reference Ford and Schmidt2000). First, in the discovery learning approach, individuals are typically more motivated to learn because they are responsible for generating correct task strategies and are thus more actively engaged in learning. Second, discovery learning allows learners to use hypothesis testing and problem-solving learning strategies. In contrast to the traditional deductive learning approach, this active process requires more conscious attention for its application and adds depth to the learning process. Third, individuals engaged in exploratory learning are also likely to experiment with a greater range of strategies. The development of these strategies for discovering information helps individuals to identify novel or unpredictable job situations and thus promotes a search for new ways to approach the situation. The new knowledge acquired by trying out alternative strategies can then become better integrated with the learner’s existing knowledge.
There are several ways to implement a guided discovery approach to learning for perceptual-motor and problem-solving tasks. Guidance can include the following types: giving partial answers to problems, providing leading questions or hints to the learner, varying the size of steps in instruction (part vs. whole learning), and providing prompts without giving solutions. In addition, guidance can be given to learners on how to form hypotheses and test out those ideas in an effective way. For example, trainees can be presented with case studies of previous situations and asked to draw inferences about effective and ineffective responses to these situations. From these specific incidents, general principles of effective response can be generated and discussed.
Cognitive Frame Changing
Cognitive frame changing (DeYoung, Flanders, & Peterson, Reference DeYoung, Flanders and Peterson2008; Ohlsson, Reference Ohlsson1992) is the process of breaking free of inappropriate assumptions, increasing insight problem solving, and creating or adopting new task-relevant strategies. To the extent that individuals can switch cognitive frames, they will be better able to find solutions to complex, novel problems and adapt their skills to meet unanticipated environmental demands (Ely et al., Reference Ely, Zaccaro and Conjar2009). McKenzie and colleagues (Reference McKenzie, Woolf, Winkelen and Morgan2009) found that successful performers must also possess the ability to manage opposing demands. As noted by Quinn and Cameron (Reference Quinn and Cameron1988), individuals must “have the capacity to see problems from contradictory frames, to entertain and pursue alternative perspectives” (45) so as to learn strategies for fulfilling competing expectations.
Learning strategies that can enhance an individual’s ability to react to and anticipate job challenges are emerging and being incorporated into training scenarios (Ely et al., Reference Ely, Zaccaro and Conjar2009). In terms of fostering cognitive frame-changing skills, Ely et al. (Reference Ely, Zaccaro and Conjar2009) propose building experiential variety into training. Similarly, Nelson, Zaccaro, and Herman (Reference Nelson, Zaccaro and Herman2010) expanded on the frame-changing nature of these training approaches and noted that training that varies surface characteristics is appropriate for building routine expertise, but that adaptive expertise requires structural variation that includes varying the problem domain so that trainees must change their preferred strategy or approach.
In a series of studies, Ansburg and Dominowski (Reference Ansburg and Dominowski2000) examined the effects of different learning strategies on improving verbal insight problem solving. They found that strategic information provision before and during training to help guide trainees, in combination with practice involving surface variation, led to an increase in insight problem solving (i.e., cognitive frame-changing skills). In addition, designating time during training for elaborating on problems and facilitating the search for finding structural similarities between problems was found to increase insight problem solving. This approach is clearly different from the emphasis on developing routine expertise through repetitive practice to reach a certain standard or criterion of success.
Cognitive frame changing requires accurate situational assessments and critical thinking. As noted by van den Bosch and de Beer (Reference van den Bosch and de Beer2007), traditional training programs often provide insufficient opportunities for learning the situational nuances that make application of an already learned procedure either appropriate or inappropriate. Developing the ability to assess situations requires practicing cases from different perspectives so that learners have more opportunities to recognize various situational factors that can impact the effectiveness of different strategies and to recognize appropriate cues and their interdependencies. This experience-based interactive problem-solving approach is based on realistic and challenging work issues that require skills beyond foundational competency in a job domain. By engaging in practice experiences under controlled and safe conditions, the learner can gain a better understanding of situation-response relationships.
Klein (Reference Klein2004) describes two studies that compared a critical thinking training design with a traditionally designed training program. The critical thinking program allowed learners to produce different explanations for events, question the assumptions behind situational assessments, and critique and revise strategies. Trainers provided support and feedback on the critical thinking process by asking key questions and challenging assumptions. They would also use the “devil’s advocate” procedure to push for deeper thinking about an issue. Results indicated that the critical thinking training led to higher levels of argumentation (e.g., explaining conflicting evidence, criticizing assumptions) and stronger contingency planning (e.g., anticipating alternative courses of events in the plan and the quality of precautionary steps to be taken).
Adaptive Thinking Training
Adaptive thinking training was developed for the military to help teach soldiers how to think as well as how to fight. Adaptive thinking is one component of the adaptability model developed by Pulakos et al. (Reference Pulakos, Arad, Donovan and Plamondon2000). The training focuses on the deliberate practice of thinking skills to enhance problem identification, analysis, and problem solving so that the learner comes closer to the mental models of (and the methods employed by) experts. In this way, trainees can respond effectively to complex tasks under changing conditions and with ambiguous or missing information. The approach relies on developing (through discussions with experts in a domain) a set of themes or elements that should be considered when thinking through an ambiguous and changeable situation. For example, in the military, important elements of thinking might be seeing the big picture, using all assets available, visualizing the battlefield, and considering timing (Shadrick & Lussier, Reference Shadrick and Lussier2004; Shadrick & Lussier, Reference Shadrick, Lussier and Ericson2009). Trainees are presented with a variety of scenarios and are encouraged to incorporate the thinking elements into their understanding of the issues and their development of possible solutions. Then, they discuss and defend the considerations they had relevant to the scenario, are presented with the expert model, and are given individualized feedback, including a discussion of why certain elements were considered and incorporated into the expert model.
Evaluation of adaptive thinking training has shown increases in the percentage of critical information identified by learners as relevant to the scenario, as well as performance gains (Shadrick et al., Reference Shadrick, Lussier and Fultz2007). Adaptive thinking training has also been expanded to include training teams (e.g., Zimmerman et al., Reference Zimmerman, Sestokas, Burns, Bell and Manning2012). For example, Schaefer et al. (2009) report on the training of planning teams that must respond to crises such as power grid shutdowns and industrial plant explosions using adaptive thinking training methods; they found improved cognitive task performance as a result of the team training. While these training approaches have been incubated in military settings, the notion of enhancing adaptive thinking skills is quite relevant for building deep specialization in other industries as well.
Building Adaptive Expertise through Work Experience
Barnett and Koslowski (Reference Barnett and Koslowski2002) note that performance is affected by longer-term, more extensive life experiences than those developed in short-term training and development programs. Thus, there is a need to understand and study more directly the effect of more extensive, long-term experience. Nelson, Zaccaro, and Herman (Reference Nelson, Zaccaro and Herman2010) stress that factors such as experiential variety need to be incorporated into work experience as well as formal training activities. Similarly, Tannenbaum et al. (Reference Tannenbaum, Beard, Salas, Kozlowski and Salas2010) discuss the importance of studying informal learning that occurs on the job. They note four key informal learning components: the intent to learn and develop, experience and action, feedback, and reflection. This process of informal learning becomes even more important after an individual gains competency in the job, as he or she now has the foundational knowledge, mental models, and self-regulatory skills to grow and develop toward deep specialization.
Two major strategies for building expertise through work experience are creating learning opportunities within the current job and assigning challenging job assignments to broaden one’s skill base beyond the immediate job, thus enhancing understanding of how systems work in the organization.
On-the-Job Learning
A critical component of employees’ being able to learn while performing on the job is tackling challenges above their current skill level. Tannenbaum et al. (Reference Tannenbaum, Beard, Salas, Kozlowski and Salas2010) contend that meaningful learning from these types of opportunities requires intentional steps to ensure tolerance for deviation (learning from mistakes), tolerance of inefficiency, and feedback from mentors to drive reflection. Strategies for enhancing learning during these challenging job experiences can be applied prior to task performance, during task performance, and after task performance (Ford & Schmidt, Reference Ford and Schmidt2000; Ford et al., Reference Ford, Quiñones, Sego and Sorra1992).
Nelson et al. (Reference Nelson, Zaccaro and Herman2010) describe the importance of preperformance instruction to facilitating the development of adaptive expertise. While based on training research, the findings have direct implications for improving performance in the job domain. Preperformance briefings by supervisors or mentors might review what types of situations are likely to occur while performing a task and how to respond to them. The briefs can also highlight different ways of thinking about the upcoming task and the problems embedded in that task so as to prompt more critical thinking.
During task performance, mentors can be available to provide adaptive guidance (Bell & Kozlowski, Reference Bell and Kozlowski2002). Adaptive guidance refers to support given to the person performing a task by providing diagnostic task-related information, timely suggestions on how to improve performance, and encouragement to seek alternative strategies when appropriate. This support is particularly important for developing strategic task skills, which include an understanding of how to integrate various pieces of information related to a task and a broader contextual understanding of when to use specific strategic skills. Strategic skills are needed in complex work domains that are malleable and require learners to shift their behavior and cognitions in response to changing situations. Adaptive guidance can help learners move beyond the acquisition of basic task skills involving declarative and procedural knowledge and an emphasis on simple, routinized operations (Kanar & Bell, Reference Kanar and Bell2013).
Once the task has been completed, mentors can aid in the learning experience by conducting after-action reviews. Chatham (Reference Chatham2009) discusses the development of the Top Gun approach that incorporated a strong emphasis on after-action reviews to force individuals to confront what happened and why and how different strategies might have been appropriate. Ellis and Davidi (Reference Ellis and Davidi2005) showed that performance of soldiers on a navigation exercise improved when debriefed on failures and successes compared to those who were debriefed only about failures. These findings suggest that focusing on reasons for success and reasons for failure enhanced the development of useful mental models. Tannenbaum and Cerasoli (Reference Tannenbaum and Cerasoli2013) note that the essential elements of after-action reviews include active self-learning, in which participants engage in discovery with a clear intent to learn in a nonjudgmental way; a focus on specific events and performance episodes rather than general performance; and input from outside observers. They conducted a meta-analysis of after-action reviews and found that such reviews improve effectiveness by about 25%, with the average effect size similar for simulated training environments and real work settings.
Developmental Job Assignments
McCall (Reference McCall2004) notes that experience in the form of job assignments should form the core of development. This focus on job assignments requires understanding what experiences are developmental and what people can learn from those experiences. Some types of jobs and tasks are likely to be more developmental than others (McCauley & Brutus, Reference McCauley and Brutus1998). In addition, different kinds of developmental assignments are most likely associated with different kinds of learning. A strong research base has identified key developmental assignments of leaders, such as handling unfamiliar responsibilities, creating change, dealing with job overload, handling external pressure, and influencing others without formal authority (McCauley, Ruderman, Ohlott, & Morrow, Reference McCauley, Ruderman, Ohlott and Morrow1994).
Variety in assignments throughout one’s career is clearly as important for highly technical and specialized jobs as it is for developing leaders. Developmental assignments are needed for facilitating the creation of a broad and holistic perspective, enhancing skills, and building adaptive expertise. Not surprisingly, engineers at Intel are encouraged to move into developmental assignments associated with new projects so as to work on a range of projects that build overall experience and judgment (Bersin, Reference Bersin2009). Similarly, Sonnentag (Reference Sonnentag1995) found that highly proficient (adaptive) software professionals had worked on a variety of projects with more difficult programming languages as compared to those with routine expertise.
Hoffman et al. (Reference Hoffman, Ward, Feltovich, DiBello, Fiore and Andrews2014) highlight the importance of developing high levels of proficiency in employees in organizationally critical job and skill areas as quickly as possible. While researchers acknowledge the need for continuous learning to build functional depth and develop high levels of proficiency (Salas & Rosen, Reference Salas, Rosen, Kozlowski and Salas2009), what intentional developmental job experiences should be incorporated into work has not been identified. Thus, it is an open question as to how generalizable the results from managerial development studies on learning from job experiences by McCauley et al. (Reference McCauley, Ruderman, Ohlott and Morrow1994) are to highly technical, mission-critical job domains. For example, job transitions, while an important component of management development, may not be as critical for individuals who are likely to stay (and are needed to stay) in core technical jobs. Thus, future research is needed to identify these developmental experiences. Individual and/or focus group interviews with employees at different stages on the road to expertise in highly specialized fields could be useful next steps. The focus of these interviews would be to uncover work experiences and learning opportunities that have helped move them from relative newcomers to routine and then adaptive experts. The data from the interviews could be used to categorize the key learning opportunities, and based on those categories, survey items can be written for each category of learning opportunity, similar to the approach taken by McCauley et al. (Reference McCauley, Ruderman, Ohlott and Morrow1994). The resulting survey of learning experiences that facilitate the move to routine and/or adaptive expertise can then be used to predict the level of expertise achieved over time.
In addition, as noted by McCall (Reference McCall2004): “People do not automatically learn from experience. They can come away with nothing, the wrong lessons, or only some of what they might have learned” (p. 128). Thus, using targeted assignments as part of a development system for building deep specialization in highly technical skills must also be linked to other development strategies like coaching, providing role models, and intentional training experiences.
Discussion
A recent popular press book, Outliers (Gladwell, Reference Gladwell2008), touted the conclusion that it takes 10,000 hours of practice to become an expert in a domain. Similarly, researchers have estimated that obtaining expertise requires 10 years or 10,000 hours of “deliberate practice” (intensive, continual repetition) of a skill (e.g., Charness et al., Reference Charness, Tuffiash, Krampe, Reingold and Vasyukova2005; Ericsson, Reference Ericsson, Ericsson, Charness, Feltovich and Hoffman2006; Ericsson et al., Reference Ericsson, Krampe and Tesch-Römer1993). These claims have become a catchall conclusion that, as the discussion in this chapter shows, is not likely to generalize to core, highly technical jobs at work. Such estimates do not provide insight into the variety of learning strategies through training and work experience that can be incorporated into the workplace to accelerate the development of expertise. Our chapter has attempted to provide some insight into the variety of learning strategies that have been found relevant in the development of expertise.
Thus, in this chapter, we have stressed the importance of considering both training and systematic job experience factors that can help build routine and adaptive expertise over time in highly technical and critical jobs. For example, it is clear that expertise is more than a function of job tenure and number of hours worked. As noted by the Electric Power Research Institute, while it may take up to 25 years to become an expert in mission-critical activities, a proportion of utility personnel with that level of tenure are not recognized as having expertise, as they have simply become very good (proficient) at what they do (see Ziebell, Reference Ziebell2008). Our framework points to the need for intentional strategies to develop adaptive expertise as well as routine expertise to build deep specialization.
Another takeaway from the research is that the time to move from newcomer to competent is likely to be shorter than the time required to move from competent to expert. The progression from competent to expert requires that “decision making and indeed an understanding of the particular situation is intuitive such that it may take longer to reach than any of the intermediate stages, if it’s ever reached at all” (Kinchin & Cabot, Reference Kinchin and Cabot2010: 155). Much effort is often focused on bringing newcomers up to speed and developing them into competent performers. While this makes sense, as organizations want to get value out of the investment of recruitment, selection, and initial training costs (Byham, Reference Byham2008), less attention is often paid to the intentional strategies needed to move an individual from competent to expert (Ziebell, Reference Ziebell2008). A corollary to this idea is that the time to move from newcomer to competent and from competent to proficient is most likely longer for the more nonroutine and difficult tasks within a job than for the simpler and more routine tasks. While this may seem obvious, thinking in this way questions the conclusions often drawn regarding estimates of 10,000 hours of practice needed to become expert in a domain. Our chapter has highlighted the challenges in building expertise and has identified that the strategies for building routine expertise are different from strategies to enhance adaptive expertise. Finally, it is also likely that individuals identified as having expertise within a job domain will differ in terms of the tasks for which they are considered an expert and/or the situations in which they are considered the “go to” person. It is likely that a person will have some tasks within the job where he or she has a particularly valuable skill and thus possesses critical knowledge that would be difficult to quickly replace.
It is important to understand what training and job experiences have led to expertise in particular tasks but not others within a job domain. Such research will help identify the more effective learning strategies. In addition, from a practical perspective, identifying the areas in which a person is expert and the situations in which he or she is the “go to” person provides a critical baseline for understanding the impact of impending retirements on the loss of organizational expertise.
A report completed by the Association for Talent Development (2014) illustrates that organizations are increasingly shifting from relying on conventional training techniques to implementing technology-based forms of training. One such training strategy that arose as a function of technological advances is simulation-based training (SBT). This approach is becoming increasingly prevalent within organizations (Summers, Reference Summers2004) because of its capacity to meet a multitude of training needs. It offers advantages beyond traditional training techniques, such as lectures, by delivering salient information but coupling that delivery with the opportunity to practice (Bell, Kanar, and Kozlowski, Reference Bell, Kanar and Kozlowski2008). Specifically, simulations are ideal for delivering instruction pertaining to new technology and furthering the ability of employees to work within the complex, dynamic conditions that characterize the modern workplace (Day, Reference Day2014). For example, simulations can recreate an environment that would otherwise be too difficult, costly, or dangerous to provide training within, such as a mass casualty scenario for health care workers (Heinrichs et al., Reference Heinrichs, Youngblood, Harter and Dev2008). Simulations are also ideal for targeting technical skills or furthering understanding of new technology being implemented within an organization (e.g., Aggarwal et al., Reference Aggarwal, Ward, Balasundaram, Sains, Athanasiou and Darzi2007). Given the ability of SBT to address the growing needs of organizations, it is unsurprising that this training medium is becoming more commonly implemented.
As the use of SBT increases, there is a corresponding need to review the evidence for its use. Our purpose in this chapter is to provide both researchers and practitioners a comprehensive understanding of the state of SBT in science and practice. First, we begin by defining SBT and discussing some of the advantages and disadvantages associated with using this technique. Second, we review the science that informs this training technique. Next, we outline the multitude of ways in which SBT is currently being utilized in practice and briefly discuss how organizations can most effectively design SBT by outlining best practices gleaned from the literature. Finally, we close by suggesting future directions for research initiatives that will benefit both scientific understanding and practical applications of SBT.
Defining Simulation-Based Training
SBT is an amalgamation of simulation (i.e., an artificial environment designed to mirror reality; Bell et al., Reference Bell, Kanar and Kozlowski2008) and training (i.e., the acquisition of attitudes, cognitions, knowledge, and skills through a systematic program; Goldstein, Reference Goldstein, Dunnette and Hough1991). Moreover, SBT can be defined as “any synthetic practice environment that is created in order to impart these competencies (i.e., attitudes, concepts, knowledge, rules, or skills) that will improve a trainee’s performance” (Salas, Wildman, and Piccolo, Reference Salas, Wildman and Piccolo2009: 2008). SBT has been shown to be effective (e.g., Aggarwal et al., Reference Aggarwal, Ward, Balasundaram, Sains, Athanasiou and Darzi2007; Gaba et al., Reference Gaba, Howard, Fish, Smith and Sowb2001; McGaghie et al., Reference McGaghie, Issenberg, Petrusa and Scalese2006; Steadman et al., Reference Steadman, Coates, Huang, Matevosian, Larmon, McCullough and Ariel2006) and offers several advantages to organizations.
The U.S. workforce has shifted to being more knowledge based, among other changes (e.g., increased diversity), and, correspondingly, employees are required to be increasingly skilled in more complex areas (Bogdanowicz and Bailey, Reference Bogdanowicz and Bailey2002; Burke and Ng, Reference Burke and Ng2006). For example, organizations have become more reliant upon employees who are able to adapt successfully to perpetually changing circumstances (Huang et al., Reference Huang, Ryan, Zabel and Palmer2014), and researchers suggest that this skill, referred to as adaptivity, should be trained using active learning strategies; active learning methods can be encompassed within SBT (Kozlowski and Bell, Reference Kozlowski, Bell, Fiore and Salas2007). In addition, global virtual teams and telecommuting are becoming more common (e.g., Reichard, Serrano, and Wefald, Reference Reichard, Serrano, Wefald, Bligh and Riggio2013), increasing the need for training methods that can be delivered to those who are not physically present. SBT offers the elasticity to be implemented from almost anywhere at any time, thereby reducing the inconveniences associated with traditional, classroom-based training (Summers, Reference Summers2004).
A recent trend within the selection industry is providing applicants and new job incumbents with a realistic job preview (RJP) during the hiring and training processes. RJPs can be defined as materials and/or representations that provide candidates with realistic information, which can be positive and/or negative in nature, about a job (Breaugh and Starke, 2000). RJPs are associated with decreased turnover (Phillips, Reference Phillips1998; Premack and Wanous, Reference Premack and Wanous1985). SBT provides organizations the ability to train employees while delivering an RJP to them by immersing them in a realistic practice environment (Cannon-Bowers and Bowers, Reference Cannon-Bowers, Bowers, Kozlowski and Salas2009), which may reduce turnover (Phillips, Reference Phillips1998).
Another primary advantage of SBT is that it provides learners an opportunity for monitored practice without endangering others (Deering et al., Reference Deering, Brown, Hodor and Satin2007) and affords trainees the ability to practice with reduced risks to life and capital (Gordon et al., Reference Gordon, Wilkerson, Shaffer and Armstrong2001); this is especially beneficial for training teams or individuals that need to acquire skills necessary for dealing with high-stress, dynamic, and/or complex situations (e.g., military soldiers, emergency medicine technicians, pilots). Within the management education domain, traditional teaching mechanisms have been criticized for placing an emphasis on teaching theory without providing opportunities to practice (Lane, Reference Lane1995), and SBT may provide an answer to this critique (Salas et al., Reference Salas, Rosen, Held and Weissmuller2009). SBT also allows for reduced training time while offering developmental capabilities similar to those of traditional training methods (Lane, Reference Lane1995). In addition, SBT can be affordable, as some of the most simplistic and free forms of SBT (e.g., Tinsel Town; Devine et al., Reference Devine, Habig, Martin, Bott and Grayson2004) are effective for certain goals. Other advantages of SBT include increased engagement (Keys and Wolfe, Reference Keys and Wolfe1990) and more enhanced outcomes than traditional education and training programs (e.g., McGaghie et al., Reference McGaghie, Issenberg, Cohen, Barsuk and Wayne2011).
Evidence indicates that it is possible to cultivate training outcomes at the individual, group, and organizational level with SBT (McGaghie et al., Reference McGaghie, Issenberg, Cohen, Barsuk and Wayne2011). At the individual level, research suggests that SBT improves knowledge, skill, and ability levels of trainees (e.g., Sweeney et al., Reference Sweeney, Warren, Gardner, Rojek and Lindquist2014). For instance, McKinney and colleagues (Reference McKinney, Cook, Wood and Hatala2013) meta-analytically demonstrated that SBT produced positive changes in cardiac auscultation skills. These researchers also found that hands-on practice with a simulator appeared to be important for effectiveness. In addition, Singer and colleagues (Reference Singer, Corbridge, Schroedl, Wilcox, Cohen, McGaghie and Wayne2013) found that first-year medical residents who were trained with an SBT program outperformed traditionally trained third-year residents on a clinical skills assessment.
At the team level, research has shown SBT to improve team performance and processes (e.g., Kaplan, Lombardo, and Mazique, Reference Kaplan, Lombardo and Mazique1985; Shapiro et al., Reference Shapiro, Morey, Small, Langford, Kaylor, Suner, Salisbury, Simon and Jay2004). Specifically, Sweeney and colleagues (Reference Sweeney, Warren, Gardner, Rojek and Lindquist2014) tested the effectiveness of an SBT program within emergency department teams and found a significant increase in communication efficiency. Shapiro and colleagues (Reference Shapiro, Morey, Small, Langford, Kaylor, Suner, Salisbury, Simon and Jay2004) further found that simulation-based teamwork training contributed to a greater improvement in clinical team performance than didactic training did.
However, despite these many benefits, it is important to note that there are certain disadvantages associated with these programs. According to Funke (Reference Funke1998), one drawback is that it is hard to compare across individual trainee outcomes because most SBT environments are dynamic. Relatedly, a plethora of behavioral data is typically produced following SBT, making it difficult for researchers and practitioners to analyze the data (Funke, Reference Funke1998). Moreover, complex SBT programs can be very costly and may require additional supervision to implement (e.g., Preisler et al., Reference Preisler, Svendsen, Nerup, Svendsen and Konge2015; Shetty et al., Reference Shetty, Zevin, Grantcharov, Roberts and Duffy2014). Jacobs and Baum (Reference Jacobs and Baum1987) noted that adopting an SBT program might not be the most cost-effective strategy, which suggests that smaller companies may want to implement more traditional training methods. Thus, practitioners must consider whether SBT is ideal for meeting their training needs before choosing to implement this technique. Reviewing the science underlying the effectiveness of SBT can help in determining when to implement this strategy.
The Science of Simulation-Based Training
Although there is not an integrated theory in which SBT is grounded, broad theoretical advancements in the science of training have shaped training practices like SBT. The science of training has expanded at a rapid rate over the past few decades, contributing new theoretical frameworks and constructs that illuminate the conditions under which training is most effective (Salas and Cannon-Bowers, Reference Salas and Cannon-Bowers2001). One common thread across these models is the necessity of taking a systematic approach to the design, delivery, and assessment of training. This point is consistent with empirical evidence, which consistently indicates that when training is systematic, it is effective across multiple levels of analysis (Aguinis and Kraiger, Reference Aguinis and Kraiger2009; Arthur et al., Reference Arthur, Bennett Jr., Edens and Bell2003; Keith and Frese, Reference Keith and Frese2008).
The systems approach, which advocates viewing “training as a system embedded in an organizational context” (Salas and Cannon-Bowers, Reference Salas and Cannon-Bowers2001: 491), has informed the development of a conceptual model detailing the specific stages that should be taken when implementing SBT (Salas et al., Reference Salas, Wilson, Burke and Priest2005). Specifically, Salas et al. (Reference Salas, Wilson, Burke and Priest2005) delineated the following training principles as necessary to the success of SBT interventions: (1) completion of a training needs analysis, (2) development of task competencies, (3) specification of training objectives, (4) design of training events, (5) development of performance measures, (6) diagnosis of performance, and (7) delivery of feedback and debriefing. We propose an augmented framework. Specifically, we suggest that performance measures should be developed before training events are designed, which we discuss in the following text. We also note that if observers are rating the performance of the trainees, they should receive the appropriate training to increase rater accuracy (Lievens, Reference Lievens2001; Woehr and Huffcutt, Reference Woehr and Huffcutt1994). The adapted framework is presented in Figure 11.1.

Figure 11.1. Adapted system-based approach to developing simulation-based training.
First, conducting a needs analysis results in information regarding where training needs are most pronounced and where specific deficiencies lie (Moore and Dutton, Reference Moore and Dutton1978). This information can be leveraged to determine what competencies should be targeted with the training program and, relatedly, learning objectives. Once the objectives have been clearly delineated, ideal and minimum levels of performance can be defined to correspond to the learning objectives. Performance measures can then be created based upon the previously defined levels of performance. Next, simulation scenarios or specific events can be developed that require the use of the chosen competencies. This evokes scenario-based training, which advocates embedding events within training that require the use of the targeted competencies as well as the opportunity for the trainee to recognize when they should apply the skills (Fowlkes et al., Reference Fowlkes, Dwyer, Oser and Salas1998). Embedding events in such a manner also provides observers the opportunity to anticipate and measure the desired skills at predefined intervals throughout the SBT, reducing confusion in measurement and ultimately standardizing evaluation, feedback, and debriefing processes. At this phase, measures should be modified to ensure that they are appropriately paired with these events. For example, an observer measure should be created to follow the order in which competencies are triggered within the SBT. Observers should also receive the appropriate training before providing ratings. Finally, the performance data should be utilized to inform feedback and debriefing such that prescriptive information is provided regarding the targeted competencies. Feedback can be delivered in a traditional manner (e.g., a trainer delivers feedback) but it can also be embedded within the training scenario such that trainees discover it through debriefing and dialogue within the events.
Following these steps ensures that a systematic approach grounded in empirical evidence is taken. However, it is unknown which features specific to simulations lead to effective training outcomes. In other words, it is not precisely known how or why SBT is effective in facilitating targeted results or why some studies have indicated that it is more effective than other training techniques (e.g., Keys and Wolfe, Reference Keys and Wolfe1990). Compounding this problem, Cannon-Bowers and Bowers (Reference Cannon-Bowers, Bowers, Kozlowski and Salas2009) noted that although training research has begun to consistently take a systematic approach, research examining the effectiveness of simulations has yet to fully parallel this trend.
As there are a plethora of underlying features in any one training intervention, it is difficult to precisely determine how such interventions lead to targeted outcomes. This ambiguity is especially salient to the study of SBT, as most SBT interventions are implemented in practice (Summers, Reference Summers2004), where a myriad of factors are prone to vary, such as the setting of the training, the sample, and the scenarios used. However, although research conducted on SBT is largely nascent, the literature has begun to examine several components of training and how they can be used within the specific context of SBT to further training effectiveness. Specifically, interactivity, fidelity, and feedback and debriefing are components that research has begun to assess. These features are summarized in Table 11.1.
Table 11.1 Simulation-based training features integral to success
| Training Feature | Definition | Source |
|---|---|---|
| Interactivity | Degree to which trainees interact with trainers, the system, and/or other trainees | Kozlowski and Bell, Reference Kozlowski, Bell, Fiore and Salas2007 |
| Fidelity | Degree to which the simulation depicts reality or recreates a real-world system | Alessi, Reference Alessi2000; Meyer et al., Reference Meyer, Wong, Timson, Perfect and White2012 |
| Feedback | Information received regarding different aspects of performance | Komaki et al., Reference Komaki, Heinzmann and Lawson1980 |
| Debriefing | The process through which trainees are guided through making sense of what they have learned and how to generalize it to real-world situations | Fanning and Gaba, Reference Fanning and Gaba2007; Lederman, Reference Lederman1983 |
Interactivity
Kozlowski and Bell (Reference Kozlowski, Bell, Fiore and Salas2007) identified interactivity as an integral component of SBT. This refers to the “characteristics that can influence the potential degree and type of interaction between users of the system, between trainers and trainees, and potentially, between teams or collaborative learning groups” (Bell et al., Reference Bell, Kanar and Kozlowski2008: 1421). Interactivity refers to the interaction between the system and the trainee and is related to learner control (Lepper and Malone, Reference Lepper, Malone, Snow and Farr1987). Kraiger and Jerden (Reference Kraiger, Jerden, Fiore and Salas2007) found that, in the case of computer-based training, trainees with a higher degree of learner control exhibited increased declarative and procedural knowledge as compared to trainees without this feature. However, the effect was small.
A recent meta-analysis completed by Karim and Behrend (Reference Karim and Behrend2014) further explored two dimensions of learner control: perceived learner control (i.e., individuals self-report perceiving control over the learning experience) and objective learner control (i.e., different features of the training are manipulated such that trainees actually have some degree of control over the learning experience, such as choosing the training modules to complete). They found that each dimension demonstrated a different relationship with learning. Perceived learner control had stronger relationships with learning outcomes than more objective measures of learner control, suggesting that other aspects (e.g., motivation) may play a large role in these relationships. Other work suggests that trainees may not make sufficient use of control over training materials and may even make poor decisions that negatively influence learning or performance outcomes (e.g., Brown, Reference Brown2001). However, taken as a whole, the findings of research conducted on learner control suggest that this feature can marginally increase learning outcomes, but the nature and level of interactivity within SBT should be chosen carefully, informed by the desired training competencies and outcomes.
Fidelity
Fidelity is the degree to which the simulation accurately portrays reality (Alessi, Reference Alessi2000) by representing a real-world system (Meyer et al., Reference Meyer, Wong, Timson, Perfect and White2012). Fidelity was originally conceptualized as being either high or low, but researchers have begun to denounce this distinction as too simplistic (Beaubien and Baker, Reference Beaubien and Baker2004; Bowers and Jentsch, Reference Bowers, Jentsch and Salas2001) and have instead begun more frequently conceptualizing fidelity as multidimensional. One particularly influential typology of fidelity was described by Rehmann, Mitman, and Reynolds (Reference Rehmann, Mitman and Reynolds1995); this typology includes equipment, environment, and psychological fidelity. Equipment fidelity is the degree to which the simulation recreates the actual system (e.g., the tools and technology) trainees will later be required to interact with while performing. In other words, this aspect of fidelity can be defined as how accurately the simulation portrays the displays, controls, and other features of any equipment trainees will use on the job. For example, if a simulation for a surgical team was designed such that all equipment (e.g., surgical tools) provided mirrored that found in the real operating room, the simulation would be rated as high in equipment fidelity.
The second aspect of fidelity, environment fidelity, can be defined as the degree to which the simulation replicates the task environment. The task environment includes features such as sensory information and motion cues. For example, if an aircraft simulation did not include environmental elements that would be experienced in flight, such as motion, it would not be labeled high on this dimension. Rehmann et al. (Reference Rehmann, Mitman and Reynolds1995) noted the importance of this dimension in ensuring learning, as it enhances realism experienced by the trainees and can more closely evoke the actual environment trainees will later be required to apply newly learned skills within.
Finally, the third dimension described within the taxonomy delineated by Rehmann et al. (Reference Rehmann, Mitman and Reynolds1995), psychological fidelity, refers to the extent to which the cues and consequences of the task are realistically portrayed within the simulation. For example, if a simulation created for surgical teams lacked the appropriate consequences of surgical error (e.g., indicating patient harm in some manner), it would be considered low in psychological fidelity. Bowers and Jentsch (Reference Bowers, Jentsch and Salas2001) noted that often simulations were built to have high environment and equipment fidelity, at a high cost, but psychological fidelity was not always emphasized. It has been argued that psychological fidelity can “minimize the transfer problem by closing the gap between training and the real-world task” (Kozlowski and Deshon, Reference Kozlowski, DeShon, Schiflett, Elliott, Salas and Coovert2004: 27). Kozlowski and Deshon (Reference Kozlowski, DeShon, Schiflett, Elliott, Salas and Coovert2004) posited that enhancing psychological fidelity, rather than primarily focusing on other aspects of fidelity, may enable cost-effective simulations that maximize transfer.
Feedback and Debriefing
Research has consistently indicated that feedback, or information received from an outside source regarding performance, is critical in ensuring transfer in all training contexts (e.g., Kluger and DeNisi, Reference Kluger and DeNisi1996; Komaki, Heinzmann, and Lawson, Reference Komaki, Heinzmann and Lawson1980). It serves several critical functions, such as highlighting discrepancies between ideal and current levels of performance, which can motivate trainees to attain higher levels of performance (Locke and Latham, Reference Locke and Latham1990). It can also provide information to trainees regarding how to rectify prior errors they have committed (Ilgen, Fisher, and Taylor, Reference Ilgen, Fisher and Taylor1979).
Specificity and immediacy are two components posited to affect the degree to which feedback positively influences training outcomes (Annett, Reference Annett1969; Bernardin and Beatty, Reference Bernardin and Beatty1984; Kluger and DeNisi, Reference Kluger and DeNisi1996; Kopelman, Reference Kopelman1986). Specificity refers to the degree of detail provided within the feedback as well as the extent to which the feedback refers to actual instances of performance (Annett, Reference Annett1969). Immediacy can be defined as the timeliness with which feedback is delivered after performance has occurred (Daft and Lengel, Reference Daft and Lengel1986). The traditional view held that feedback was most effective for furthering training outcomes when delivered frequently and immediately. However, Schmidt and Bjork (Reference Schmidt and Bjork1992) suggested that consistent feedback is not always beneficial for transfer; specifically, they posited that withholding feedback gives trainees needed time to think critically about their errors and to predict the type of behavior needed to facilitate higher levels of performance. Although this leads to more mistakes during training, it is argued to ultimately enhance training transfer.
In line with the idea that feedback is not universally effective in facilitating learning, more recent studies suggest it is successful only when designed in accordance with the training context (Watling et al., Reference Watling, Driessen, Van Der Vleuten and Lingard2012). A recent meta-analysis indicated that feedback delivery can be effective in SBT, but its effectiveness is contingent on trainees’ level of experience and on when the feedback is delivered (Hatala et al., Reference Hatala, Cook, Zandejas, Hamstra and Rydges2014). Specifically, feedback delivered at the end of a practice attempt, also referred to as terminal feedback, was more effective for more experienced trainees, whereas feedback delivered during each practice attempt, referred to as concurrent feedback, was more effective for less experienced trainees. Similarly, other findings suggest that terminal feedback leads to better outcomes for simple tasks, whereas concurrent feedback is more effective for facilitating training outcomes in complex tasks (Wulf and Shea, Reference Wulf, Shea, Williams and Hodges2004). This is in accordance with the suggestions of Schmidt and Bjork (Reference Schmidt and Bjork1992) described in the preceding text. Multiple sources of feedback were also found to lead to enhanced training effectiveness compared to a single source (Hatala et al., Reference Hatala, Cook, Zandejas, Hamstra and Rydges2014). Thus, depending on the sample and the nature of the task being trained, feedback should be structured accordingly for SBT interventions.
Similar to feedback, debriefing has also been identified as an integral component of training and, like feedback, can be embedded within the simulation. Debriefing is the process through which trainees are guided through analyzing, making sense of, and learning how to generalize what they have learned to other situations (Fanning and Gaba, Reference Fanning and Gaba2007; Lederman, Reference Lederman1983). Arafeh and colleagues (Reference Arafeh, Hansen and Nichols2010) emphasized that debriefs should be structured around learning objectives and targeted at facilitating transfer, focusing on the internal mental frameworks of the learners. Internal mental frameworks refer to trainees’ knowledge structures built around previous knowledge and experience. These frameworks guide trainees’ behavior (Rudolph et al., Reference Rudolph, Simon, Rivard, Dufresne and Raemer2007), and the goal of a debrief should be to change those frameworks such that new knowledge gleaned from the training scenario is integrated for future reference.
The Practice of Simulation-Based Training
SBT has been utilized in the aviation and military industries for decades (Moorthy, Vincent, and Darzi, Reference Moorthy, Vincent and Darzi2005). However, although SBT may have initially burgeoned within those industries, the application of this delivery method now crosses many others. For example, SBT has been incorporated into domains such as education and management (Salas et al., Reference Salas, Rosen, Held and Weissmuller2009). Particularly within the past few decades, SBT has been widely implemented within the medical industry (e.g., Andreatta et al., Reference Andreatta, Chen, Marsh and Cho2011; McKinney et al., Reference McKinney, Cook, Wood and Hatala2013; Zendejas et al., Reference Zendejas, Brydges, Hamstra and Cook2013; Zevin, Aggarwal, and Grantcharov, Reference Zevin, Aggarwal and Grantcharov2012). Based on the annual revenues of the largest simulation companies, Summers (Reference Summers2004) estimated a worldwide market of between $623 million and $712 million. The increased popularity of SBT may be due to the match between this training mechanism’s strengths and current market needs, such as the capacity of SBT to train both technical and nontechnical skills across a multitude of domains.
Simulation-Based Training in Practice across Industries
The military has historically utilized SBT in a variety of ways: providing soldiers with scenarios that cannot otherwise be replicated, delivering training customized to the learner, and offering multiple practice sessions for skill retention (TRADOC, 2011). SBT has also been frequently used in aviation (Moorthy et al., Reference Moorthy, Vincent and Darzi2005), where reducing human error is often a primary goal of the intervention. Closely aligned with SBT is crew resource management (CRM), a training program developed in the 1980s to target specific teamwork skills (Helmreich, Reference Helmreich1997). Several studies also indicate that low-fidelity simulations are an effective means of fostering skills through CRM (Baker et al., Reference Baker, Prince, Shrestha, Oser and Salas1993; Bowers et al., Reference Bowers, Salas, Prince and Brannick1992).
Yet another area where SBT is commonly utilized is the health care domain (e.g., Gaba et al., Reference Gaba, Howard, Fish, Smith and Sowb2001). The health care sector continually works to improve patient safety; however, because practitioners are limited in the number of patient interactions available to them, SBT allows them to develop cognitive and psychomotor skills in the absence of actual patient interaction (Motola et al., Reference Motola, Devine, Chung, Sullivan and Issenberg2013). Research indicates that SBT is also an effective approach for training technical skills in the medical industry, with scenarios ranging from cardiopulmonary bypass emergencies to coronary artery bypass graft operations and anesthesia crisis resource management, each capable of fostering the necessary associated skills (Gaba et al., Reference Gaba, Howard, Fish, Smith and Sowb2001; Tokaji et al., Reference Tokaji, Ninomiya, Kurosaki, Orihasi and Sueda2012; Wahr et al., Reference Wahr, Prager, Abernathy, Martinez, Salas, Seifert, Groom, Spiess, Searles, Sundt, Sanchez, Shappell, Culig, Lazzara, Fitzgerald, Thourani, Eghtesady, Ikonomidis, England, Sellke and Nussmeier2013).
Management education is another area where simulations are increasingly used for training purposes. Demand for this training technique in educational settings has grown because training has been shown to be a critical factor in improving performance and the maintenance of skills (Salas and Cannon-Bowers, Reference Salas and Cannon-Bowers2001). SBT can provide students with an opportunity to actively engage in applying learned concepts rather than just learning theory (Lane, Reference Lane1995). More corporations are beginning to follow this trend after observing the widespread use of SBT in the military and aviation industries.
Businesses are constantly changing, so simulations must evolve to accommodate any associated training needs (Fripp, Reference Fripp1997). For example, production is growing exponentially, markets are globalizing, stakeholders increasingly come from diverse areas, and social and environmental issues also influence the work environment (Fripp, Reference Fripp1997). Because simulations can be designed to emulate virtually any scenario, organizations are increasingly opting to implement them. For example, a computerized simulation called MyMuse was used in a management accounting course to illustrate the complexities that occur in a business setting (Wynder, Reference Wynder2004); the simulation provided students with an opportunity to instantly test ideas that would otherwise require a long time to implement. Students had the freedom to exercise their creative problem-solving skills by dealing with customer satisfaction, demand, and short-term profitability. Such simulations can help trainees learn how to deal with unstructured tasks and acquire higher-level skills.
In a business setting, managerial training is particularly important for project management. Project management typically involves delivering a specific output under a limited budget and time frame (Zwikael and Sadeh, Reference Zwikael and Sadeh2007). Numerous studies support the usage of SBT for management education so that project managers can gain the necessary experience and expertise for accomplishing their assigned tasks (Keys and Wolfe, Reference Keys and Wolfe1990; Washbush and Gosen, Reference Washbush and Gosen2001; Zantow, Knowlton, and Sharp, Reference Zantow, Knowlton and Sharp2005). For example, Cohen and colleagues (Reference Cohen, Iluz and Shtub2014) recently utilized SBT as an approach to project management training for systems engineers. Project Team Builder was used to integrate both the technical and managerial concerns that are unique to the demands of a systems engineer. In this simulation, trainees are guided through every phase of a project (i.e., initiation, conceptual design, planning, and implementation). Project Team Builder was evaluated positively by users ranging from beginning systems engineering students to systems engineers at various experience levels.
The banking industry has also embraced SBT, with management simulations such as BankSim and simulation games such as BankExec, the Stanford Bank Game, and Bank President (Koppenhaver, Reference Koppenhaver1993). These computer simulations incorporate bank functions while addressing the management issues bankers must master to successfully operate a bank. BankSim, for example, requires trainees to work in teams. Each team must analyze its financial statements, create goals, and develop strategies to accomplish those goals. The two-week simulation emulates a two-year management experience, complete with a computer printout of the team’s financial standing, call reports, and a regulatory agency examination of books and records (Koppenhaver, Reference Koppenhaver1993). Moreover, Faria and Dickinson (Reference Faria and Dickinson1994) analyzed both academic and management training simulations for sales management and noted that Sales Management Simulation deals with all aspects of a sales manager’s job. Given its realistic setting, it has been suggested that this simulation can serve to train new employees, screen current or prospective managers, and provide ongoing management training (Faria and Dickinson, Reference Faria and Dickinson1994). This conclusion heightens the importance of investing time and resources into ensuring the simulation accurately portrays the targeted tasks.
This section has elaborated on the use of SBT across industries. However, it should be noted that conventional forms of training may be equally effective in some cases. Therefore, organizations must evaluate whether SBT is the best choice to meet their training needs. Ultimately, SBT has the potential to create changes in knowledge, skill, and ability; however, it must be implemented in the right way to engender such outcomes. As such, we now discuss how practitioners should design their SBT programs to ensure effectiveness.
Developing Simulation-Based Training
In line with the push within industry to improve SBT development skills, researchers argue that extensive preparation is required for SBT programs to reach their full training potential (Keys and Wolfe, Reference Keys and Wolfe1990; Tannenbaum and Yukl, Reference Tannenbaum and Yukl1992; Thornton and Cleveland, Reference Thornton and Cleveland1990). The extensive research on SBT has shed some light on the preparatory steps and design techniques practitioners need to utilize to ensure training effectiveness. For example, Thornton and Cleveland (Reference Thornton and Cleveland1990) recommend that management training simulations include examples of effective management behaviors coupled with descriptions of managerial competencies. In addition, several researchers have noted the importance of implementing a structured debrief after the simulation (Keys and Wolfe, Reference Keys and Wolfe1990; Thornton and Cleveland, Reference Thornton and Cleveland1990), as discussed previously.
In an attempt to encourage scientists and practitioners to design SBT programs aligned with this science, we have identified best practices (see Table 11.2) informed by salient theory and evidence as well as a synthesis of previously suggested best practices (e.g., Rosen et al., Reference Rosen, Salas, Wilson, King, Salisbury, Augenstein, Robinson and Birnbach2008). Following these best practices can enhance the effectiveness of SBT. Although much has been learned about how to design and implement SBT, which has led to the delineation of best practices such as the ones provided here, there are many areas where additional research is required to further understand how and when SBT is effective.
Table 11.2 Best practices for designing systematic SBT programs
Suggestions for Future Research
Although SBT has been extensively studied over the past decade, several questions have yet to be addressed. Therefore, in the following section we discuss several future directions for research examining SBT. We suggest the following areas for future study in SBT: using additional measures, exploring mechanisms of SBT effectiveness, examining how the effectiveness of SBT may differ under varying conditions, determining minimally necessary fidelity levels, and examining the influence of technological advances on the use and effectiveness of SBT.
Use Additional Measures
One primary limitation of the SBT literature concerns the type of outcomes generally examined. Several researchers have noted that SBT studies generally collect self-report data encompassing affective reactions rather than more systematic, objective indicators of performance and transfer (Salas and Cannon-Bowers, Reference Salas and Cannon-Bowers2001; Wideman et al., Reference Wideman, Owston, Brown, Kushniruk, Ho and Pitts2007). Future empirical studies examining the effectiveness of SBT should include more objective, systematic learning and performance metrics. This aligns with the call of researchers to incorporate observable measures of behaviors within training endeavors that map onto actual behaviors required on the job (Weaver et al., Reference Rosen, Weaver, Lazzara, Salas, Wu, Silvestri, Schiebel, Almedia and King2010). Bell and colleagues (Reference Bell, Kanar and Kozlowski2008) suggested that, in addition to using more objective and robust measures, future research should also examine a broader range of outcomes. For example, rather than primarily focusing on assessing performance, measures such as adaptability, transfer, and other more implicit measures of knowledge should also be evaluated.
Explore Mechanisms
Bell and colleagues (Reference Bell, Kanar and Kozlowski2008) further suggested that future research should address the mechanisms through which instructional features of simulations facilitate targeted outcomes. Specifically, they posited that the areas of content, immersion, interactivity, and communication can all be utilized to achieve training objectives. However, they noted that prior work addressing the utility of different training features remains limited. More work is needed in this area to better understand how different training features can be implemented to achieve optimal training outcomes. Although some work has addressed which features can improve training (e.g., McGaghie et al., Reference McGaghie, Issenberg, Petrusa and Scalese2010), it is not precisely known through which learning mechanisms or cognitive processes, and at what point in time, these features contribute to enhanced learning. Further work should therefore investigate additional features and the conditions under which they are necessary.
Examine Relative Effectiveness
As an extension of the research agenda outlined in the preceding text, another direction for future work includes determining when SBT is most effective. As SBT scenarios can require extensive time and money to develop and implement, depending upon the scenarios constructed (Summers, Reference Summers2004), it is critical that organizations prioritize when to use SBT. Salas and colleagues (Reference Salas, Rosen, Held and Weissmuller2009) noted that, in the context of management education, it is largely unknown when SBT is more effective than other training techniques. However, this is a common thread across a majority of contexts where SBT is implemented, and understanding remains limited regarding when SBT is most beneficial. Thus, future research should examine when to implement SBT and under which circumstances it is more effective than more conventional training approaches. For example, as previously discussed, personnel who must work in dynamic conditions characterized by high complexity may especially benefit from SBT (Salas et al., Reference Salas, Rosen, Held and Weissmuller2009), whereas employees who work in relatively stable conditions may benefit equally from primarily information- and demonstration-based methods.
Determine Minimally Necessary Fidelity Levels
Another question that has remained largely unanswered is the degree to which fidelity is necessary for facilitating targeted training outcomes. This is especially important given the degree to which fidelity can drive the price of developing the SBT intervention (Summers, Reference Summers2004). Although there is evidence that a low-fidelity simulator can be as effective as a high-fidelity simulator in facilitating targeted results (e.g., Norman, Dore, and Grierson, Reference Norman, Dore and Grierson2012), additional work exploring potential moderators could further illuminate the relationship between fidelity and learning outcomes. For example, all aspects of fidelity, as detailed in the taxonomy described by Rehmann and colleagues (Reference Rehmann, Mitman and Reynolds1995), may be integral to facilitating learning in employees working in fields where a high degree of precision is required (e.g., surgeons) but may be less necessary for producing effective training outcomes in employees working in other fields.
Examine the Effect of Technological Advances
Future work should also examine the impact of advances in computer technology, such as virtual reality, that will allow training to be delivered in a routine and ongoing fashion. As the price of technology continues to fall, organizations can more frequently incorporate technologically advanced forms of training (e.g., Blackmur et al., Reference Blackmur, Clement, Brady and Oliver2013). This will enable trainees to receive high-fidelity forms of training more frequently than was previously possible. It also raises the question of whether the line between simulations and real work environments will grow increasingly blurred. As trainees become able to move seamlessly between simulations and the actual environment experienced on the job, it is important to examine the impact this will have on training effectiveness.
In sum, future research should seek to primarily address under which conditions SBT is most effective and whether the necessity of certain features of SBT is contingent upon different factors and processes. This includes examining a variety of potential moderators, including elements such as individual differences and organizational features. Additional measures of effectiveness (e.g., objective measures of employee performance) should also be incorporated into studies of SBT. Future work should also examine the impact of more advanced forms of technology being increasingly available to supplement training practices. In answering these questions, a more nuanced understanding of SBT will be attained and subsequently allow for its more successful implementation; this will enable organizations to prioritize when to utilize SBT and ultimately promote enhanced performance within employees.
Conclusion
The purpose of this chapter was to provide a comprehensive understanding of the science of SBT by reviewing the theory that should guide its use. We also sought to explain the current state of SBT in practice by discussing the domains where this training technique is most commonly implemented and by providing several examples of simulations that have been successfully utilized. We then offered suggestions for practitioners, informed by the relevant literature, regarding how to best design and implement SBT. Finally, we discussed several areas where more research is needed. By gaining familiarity with the theory underlying SBT, understanding how it is currently implemented in practice, and following the best practices outlined within this chapter, training developers can create an appropriate strategy that efficiently and effectively improves employee knowledge, skill, and performance.
Acknowledgements
This work was supported in part by contract NNX16AB08G with the National Aeronautics and Space Administration (NASA) and contract NBPF03402 with the National Space Biomedical Research Institute (NSBRI) to Rice University. The views expressed in this work are those of the authors and do not necessarily reflect the organizations with which they are affiliated or their sponsoring institutions or agencies.
With the advent of low-cost wearable computing devices, smartphones, and heads-up displays, we are approaching an unprecedented era of human-technology integration. Augmented reality (AR) represents a useful tool for this integration. AR, defined in the broadest sense, is the integration of physical reality with digital information overlays, such as visual or auditory information. The major difference between AR and virtual reality is that with AR there is always a component of physical reality present (Stevens, Reference Stevens1995). More specifically, AR often enhances reality by displaying information that is not normally accessible directly through the user’s senses.
AR systems have existed in some form for more than five decades but have only recently become affordable for the consumer market. The first AR systems were developed at Harvard in the 1960s and were further refined at NASA, the U.S. Air Force, MIT, and UNC over the next two decades (Krevelen & Poleman, Reference Van Krevelen and Poleman2010). The term augmented reality was not used to refer to these types of systems until the 1990s (Caudell & Mizell, Reference Caudell and Mizell1992).
AR has evolved over the past two decades since the term was originally coined. Historically, many have argued that an AR system must use a head-mounted display (HMD). However, many modern systems have shown that AR can exist outside of HMDs; in fact, AR technologies have moved away from such displays (Azuma, Reference Azuma1993). A modern definition entails that 3D objects be integrated into real environments for a system to be considered “augmented reality.” This definition has not always held, however, as AR systems have developed; for instance, some systems augment the auditory or tactile information provided to the AR user (Azuma, Reference Azuma1993; Krevelen & Poleman, Reference Van Krevelen and Poleman2010). This evolution of the term has been accompanied by the development of devices and software that streamline the creation of AR and make it available at a consumer level – such as with smart mobile devices – allowing for the rapid advancement of AR applications. In summary, we believe that one of the best definitions of AR systems was devised in Krevelen and Poleman’s review of the technology (Krevelen & Poleman, Reference Van Krevelen and Poleman2010). The authors discuss three key aspects that identify a system as being “AR.” For the remainder of this chapter, AR is defined as a system that: (1) combines real and virtual objects in a real environment, (2) registers real and virtual objects with each other, and (3) runs dynamically in three dimensions in real time.
AR systems have been implemented in a wide range of contexts, from training environments in health care and industry settings, to high-risk professions including the cockpits of fighter jets and police force facial recognition software for identifying criminals (Newman, Reference Newman2014). Further, AR has implications for aiding those with cognitive deficits (Chang, Kang, & Huang, Reference Chang, Kang and Huang2013), educating children (Dieterle, Reference Dieterle2009), and creating museum displays (Lydens, Saito, & Inoue, Reference Lydens, Saito and Inoue2007).
To narrow the scope of this chapter, we focus on the potential learning and training outcomes of these systems, as well as their performance enhancement capabilities within operational environments. In addition, we discuss potential next steps and future directions for AR technology, research, and application in training systems.
Taxonomies of Augmented Reality Systems
Although AR and virtual reality are highly similar, a few key differences make the two types of technology quite distinct from one another. The role that physical reality plays is one of the major differentiating factors between the two types of systems. By definition, AR requires that some aspect of physical reality be used in the augmented system. Thus, the distinguishing factor is that AR supplements the real world, whereas virtual reality creates an entirely artificial environment that replaces the real world (Azuma, Reference Azuma1993). Some have mapped this on a continuum spanning from real environments on one end to entirely virtual environments on the other, with AR falling in between the two (see Figure 12.1; Milgram et al., Reference Milgram, Takemura, Utsumi and Kishino1994; Mistry, Maes, & Chang, Reference Mistry, Maes and Chang2009).

Figure 12.1. Adapted version of Milgram et al.’s (Reference Milgram, Takemura, Utsumi and Kishino1994) reality-virtuality continuum.
Another useful taxonomy considers how one’s body relates to the system, coupled with the degree of artificiality the system creates. These dimensions have been used to classify AR systems. Figure 12.2 summarizes this idea: the x-axis represents the dimension of immersion of the physical body (i.e., low immersion remains in reality vs. high immersion is entirely in a virtual environment), while the y-axis represents the dimension of artificiality of the environment (the level of physical reality presented vs. the level of synthetic reality presented) (Benford et al., Reference Benford, Greenhalgh, Reynard, Brown and Koleva1998).

Figure 12.2. Adapted version of Benford et al.’s (Reference Benford, Greenhalgh, Reynard, Brown and Koleva1998) classes of shared space in mixed reality.
If we examine the requirements for AR versus virtual reality, we find that they differ across three subsystems (Azuma, Reference Azuma1997): scene generation, display device, and tracking/sensing. Specifically, AR does not need to generate full scenery; instead, it overlays virtual information on a real-world environment and can therefore use much simpler displays than virtual reality. The major difference, and the aspect of AR that usually must be more advanced than in virtual reality, is the tracking and sensing of virtual objects in relation to the real world. In other words, the coupling of the virtual image to its physical location is a key determinant of the efficacy of a given AR system. AR systems thus face the challenge of meeting expectations within an ever-changing, dynamic environment, whereas virtual reality does not face the same issue because it fabricates the environment entirely, leaving no room for error in the computer’s ability to respond to outside environmental cues.
In summary, AR and virtual reality appear to be different species of virtual environments, differentiated by the ratio of virtual to physical reality, with AR being associated with a high level of physical reality embedded with virtual information, and virtual reality being a virtual environment that may or may not have some aspects of physical reality apparent to the user.
Integrating Augmented Reality into Training Environments
An ideal training environment is designed around the knowledge, skills, and attitudes (KSAs) required for optimal task performance (Goldstein & Ford, Reference Goldstein and Ford2002). The main goal of training is to enable trainees to become familiar with and adopt task-related KSAs so they can optimally perform job-related tasks. In the past, technology has been used as a tool to teach these KSAs within training environments. Moreover, AR offers the opportunity to improve on-the-job training by providing learning opportunities while a trainee is in the work environment. Currently, there are several components of a training environment into which AR can be integrated to support KSA acquisition in trainees.
Augmented Reality as a Learning Tool for Embedded Training
Studies of learning tasks have used AR as a supplementary tool because the premise of AR is that it augments some aspect of the real environment. Several studies have documented the benefits of AR for improving learning outcomes during a task. First, AR may help motivate individuals to learn (Chang, Morreale, & Medicherla, Reference Chang, Morreale, Medicherla, Gibson and Dodge2010), although further work is needed to understand why this occurs. Additionally, AR can be used as a tool to demonstrate complex systems. For example, Liarokapis and colleagues (Reference Liarokapis, Mourkoussis, White, Darcy, Sifniotis, Petridis and Lister2004) used AR to demonstrate systems in a mechanical engineering course. The AR was implemented on a tabletop environment, allowing trainees to interact with multimedia while learning. Students interacted with 3D models of real objects discussed in class to understand the mechanism of a camshaft in relation to other engine components (Liarokapis et al., Reference Liarokapis, Mourkoussis, White, Darcy, Sifniotis, Petridis and Lister2004).
To determine whether an AR system is useful in education, Radu (Reference Radu2014) conducted a systematic review that identified several themes in successful AR learning systems. Evidence from the literature suggests that successful systems meet all of the following criteria:
1. The application transforms the problem representations such that difficult concepts are easier to understand.
2. The application presents relevant educational information at the appropriate time and place, providing easy access to information and/or reducing extraneous learner tasks.
3. The application directs learner attention to important aspects of the educational experience.
4. The application enables learners to physically enact, or to feel physically immersed in, the educational concepts.
5. The application permits students to interact with spatially challenging phenomena. (p. 9)
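One way to make Radu's (Reference Radu2014) themes actionable is to treat them as a design-review checklist for a candidate AR learning application. The sketch below is purely our illustration: the short theme labels, the indexing scheme, and the function name are assumptions, not part of the original review.

```python
# Hypothetical design-review checklist based on Radu's (2014) five themes.
# The abbreviated theme wording and the scoring scheme are illustrative only.

RADU_THEMES = [
    "transforms difficult concepts into easier representations",
    "presents relevant information at the right time and place",
    "directs learner attention to important aspects",
    "enables physical enactment or immersion in concepts",
    "permits interaction with spatially challenging phenomena",
]

def review_ar_application(met_theme_indices):
    """Return which of the five themes an AR learning app satisfies."""
    report = {theme: (i in met_theme_indices)
              for i, theme in enumerate(RADU_THEMES)}
    report["meets_all"] = all(report[t] for t in RADU_THEMES)
    return report

# Example: an app that satisfies themes 1-3 but not 4 and 5.
result = review_ar_application({0, 1, 2})
print(result["meets_all"])  # False
```

A review team could use such a report to flag which themes a prototype still fails before classroom testing.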
Augmented Reality as Feedback during On-the-Job Training
AR can also provide feedback in real time; for instance, the display could reveal whether the trainee conducted a task correctly. Consider assembly tasks, which require the user to identify different components and bring them together in a systematic order to assemble the final product. Typically, these tasks are guided by a drawing of the components in a paper-based format (Hou & Wang, Reference Hou and Wang2010). On-the-job training can instead use AR as a cognitive aid during the assembly decision process, allowing the user to manipulate components and receive guidance in the form of vocal instructions, animations, video, or visual cues (e.g., arrows highlighting important components) that are directly tied to the steps required to perform each task (Hou & Wang, Reference Hou and Wang2010). This form of guidance while performing tasks or learning new skills may decrease the cognitive load required to interpret instructional information, such as an assembly manual, and enable the user to devote more working memory capacity to learning and performing the task. In fact, evidence of improved transfer of training to an actual task was reported for an assembly task trained using AR rather than conventional two-dimensional engineering drawings (Boud et al., Reference Boud, Haniff, Baber and Steiner1999). Results of the experiment suggested that training with AR led to shorter task performance times because the AR system allowed the trainees to reference virtual information for guidance and begin to develop their motor skills while learning the task (Boud et al., Reference Boud, Haniff, Baber and Steiner1999). Given this apparent effect, future research will need to disentangle whether learning was hindered even as performance improved; that is, participants may perform tasks better with AR yet learn less well than with more traditional methods because of overreliance on the AR system.
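The step-by-step guidance logic described above can be sketched as a simple state machine: after each action, the system either repeats the current step's cue (on an error) or advances to the next step. This is a minimal illustration in the spirit of Hou and Wang (Reference Hou and Wang2010); the step data, cue types, and function names are hypothetical and not drawn from any cited system.

```python
# Illustrative sketch of step-based AR assembly guidance. All step content
# and cue modalities (arrow, animation, voice) are invented for this example.

ASSEMBLY_STEPS = [
    {"part": "base plate",  "cue": "arrow",     "instruction": "Place the base plate on the jig."},
    {"part": "drive shaft", "cue": "animation", "instruction": "Insert the drive shaft into the bearing."},
    {"part": "cover",       "cue": "voice",     "instruction": "Fasten the cover with four screws."},
]

def next_guidance(step_index, last_step_correct):
    """Repeat the current step's cue after an error; otherwise advance
    to the next step, or report completion at the end of the sequence."""
    if not last_step_correct:
        step = ASSEMBLY_STEPS[step_index]
        return {"advance": False, "cue": step["cue"],
                "text": "Retry: " + step["instruction"]}
    if step_index + 1 >= len(ASSEMBLY_STEPS):
        return {"advance": True, "cue": "text", "text": "Assembly complete."}
    step = ASSEMBLY_STEPS[step_index + 1]
    return {"advance": True, "cue": step["cue"], "text": step["instruction"]}

print(next_guidance(0, True)["text"])   # advances to the drive shaft step
print(next_guidance(1, False)["text"])  # retry prompt for the drive shaft step
```

The point of the sketch is that the cognitive-aid function is largely bookkeeping: the AR display, not the trainee's working memory, tracks where the trainee is in the procedure.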
Integrating Augmented Reality into Medical Environments
Recently, AR has been increasingly used in medical settings. AR systems are being designed to afford practice prior to patient contact and, in some cases, during procedures with patients. In the following text, we review the evidence on AR implementation in medical education and medical procedures within the past five years.
Augmented Reality in Medical Education
AR as an education tool has many of the potential benefits of simulation-based training, such as allowing health care providers to maintain or master skills outside of actual patient scenarios, without the repercussions associated with errors (Alaraj et al., Reference Alaraj, Charbel, Birk, Tobin, Luciano, Banerjee and Roitberg2013; Ungi et al., Reference Ungi, Yeo, U-Thainual, McGraw and Fichtinger2011).
Multimodal (e.g., haptic and visual information) AR training systems have been proposed as a platform for medical residents learning neurosurgical procedures (Alaraj et al., Reference Alaraj, Charbel, Birk, Tobin, Luciano, Banerjee and Roitberg2013). For example, a ventriculostomy is a procedure that attempts to provide relief to those who have brain injury, hydrocephalus, or brain tumors by draining some of their cerebrospinal fluid (Yudkowsky et al., Reference Yudkowsky, Luciano, Banerjee, Schwartz, Alaraj, Lemole and Frim2013). Residents who were trained to perform the ventriculostomy using an AR system showed increased performance in both the simulated practice sessions and subsequent surgical procedures on real patients (Yudkowsky et al., Reference Yudkowsky, Luciano, Banerjee, Schwartz, Alaraj, Lemole and Frim2013).
Bruellmann and colleagues (Reference Bruellmann, Tjaden, Schwanecke and Barth2013) developed a reliable method for assisting endodontic studies by creating a system that identifies root canal orifices and tooth types. They posit that AR imaging can be overlaid onto the site to aid an endodontic surgeon in detecting where root canals were after they have been filled (Bruellmann et al., Reference Bruellmann, Tjaden, Schwanecke and Barth2013).
Ungi et al. (Reference Ungi, Yeo, U-Thainual, McGraw and Fichtinger2011) examined medical students' accuracy of needle placement for percutaneous facet joint injection. Results showed improved needle placement for those trained using AR-guided placement tools versus students who were trained without the assistance of AR. The authors suggest that the time required to learn the skill proficiently may be less when students are trained using the AR-guided training method.
AR has also been investigated in laparoscopic skills training. Vera et al. (Reference Vera, Russo, Mohsin and Tsuda2014) investigated the efficacy of laparoscopic skills training that overlays guidance from a mentor onto the trainee's laparoscopic monitor while the trainee is performing a surgical task. Using the augmented reality telementoring (ART) tool, the mentor can essentially show the trainee how to perform complex sutures (Vera et al., Reference Vera, Russo, Mohsin and Tsuda2014).
Results indicate that the trainees who received AR telementoring developed adequate skills faster, performed the suturing tasks faster, and made fewer errors than the students who did not receive it (Vera et al., Reference Vera, Russo, Mohsin and Tsuda2014). Volonte et al. (Reference Volonte, Pugin, Bucher, Sugimoto, Ratib and Morel2011) suggest that AR visualization techniques can make laparoscopic surgeries easier, faster, and safer.
AR applications have also been implemented and tested on mobile technologies, such as phones and tablets. One example is the mobile augmented reality blended learning environment (mARble; von Jan et al., Reference von Jan, Noll, Behrends and Albrecht2012; Albrecht et al., Reference Albrecht, Folta-Schoofs, Behrends and von Jan2013). This system allows medical students to become immersed in the topic they are studying and enables simulation of events that rarely occur. See Figure 12.3 for an example of the mARble system visualization.

Figure 12.3. Example of the mARble system visualization from von Jan et al. (Reference von Jan, Noll, Behrends and Albrecht2012).
Albrecht and colleagues (Reference Albrecht, Folta-Schoofs, Behrends and von Jan2013) examined the implementation of the mARble system in a cohort of third-year medical students learning to treat gunshot wounds. They compared knowledge and attitudinal outcomes between students who used the mobile AR system and those who used traditional textbook material. Results indicated that AR led to increased knowledge integration and decreased feelings of numbness and fatigue, which may suggest that AR motivates trainees to learn (Albrecht et al., Reference Albrecht, Folta-Schoofs, Behrends and von Jan2013). However, the authors note that it is difficult to determine whether the results were due to the mode of learning (i.e., AR or textbook) or to a confounding factor, such as the approach to training. Specifically, the textbook required individuals to learn on their own, while the AR system encouraged social interaction while learning, and it may be this difference in the learning experience that accounts for the observed results.
AR has been studied within the anatomy education realm as well. Anatomy knowledge is foundational for medical education (Chien, Chen, & Jeng, Reference Chien, Chen and Jeng2010). Because it is difficult to get a sense of the 3D structures of anatomy from textbook images, AR has been used as a substitute (Chien, Chen, & Jeng, Reference Chien, Chen and Jeng2010). One example is the mirracle system, an AR training tool designed to teach bone structure and abdominal anatomy (Figure 12.4; Blum et al., Reference Blum, Kleeberger, Bichlmeier and Navab2012).
Figure 12.4. Mirracle system from Blum et al. (Reference Blum, Kleeberger, Bichlmeier and Navab2012).
The mirracle system provides trainees with a reflection of the world, much like a mirror, and augments the visual scene so that abdominal anatomy is displayed on the trainee's body. This visualization allows users to see the layout of the bone or organ anatomy within the abdomen. Additionally, the authors suggest that this system could be beneficial in demonstrating to patients the crucial steps that will be performed during a surgery (Blum et al., Reference Blum, Kleeberger, Bichlmeier and Navab2012). Other anatomy training systems present multiple modes of information to the trainee in an effort to maximize learning. For instance, BodyExplorerAR allows a trainee to interact with anatomy using the tactile, auditory, and visual senses (Samosky et al., Reference Samosky, Nelson, Wang, Bregman, Hosmer, Mikulis and Weaver2012). The BodyExplorerAR system uses a projector to overlay anatomy onto a physical mannequin. The visualization allows the trainee to interact with the AR by requesting additional information about an organ, viewing physiological information, listening to simulated heart sounds, or interacting with the mannequin in ways akin to interactions with real patients. For example, the trainee can listen to the heartbeat of the simulated patient, examine the electrocardiogram, simulate the injection of a drug, and observe the change in the patient's heart rate and sounds (Samosky et al., Reference Samosky, Nelson, Wang, Bregman, Hosmer, Mikulis and Weaver2012). Thomas, John, and Delieu (Reference Thomas, John and Delieu2010) describe the Bangor Augmented Reality Education Tool for Anatomy (BARETA), a system that allows trainees to touch and see AR content. The aforementioned AR tools are a promising avenue for anatomy training.
However, additional investigation is required to validate training systems and examine the role of multimodal learning in aiding training outcomes.
Augmented Reality as an Aid during Procedures
Marzano and colleagues (Reference Marzano, Piardi, Soler, Diana, Mutter, Marescaux and Pessaux2013) present a successful AR-guided artery-first pancreaticoduodenectomy performed on a 77-year-old patient. The surgery required real-time interaction between the surgeon and a computer scientist controlling the AR system to ensure that the AR was accurately overlaid onto the patient. The authors state that the AR system afforded the surgeon the ability to identify the appropriate dissection planes and margins during the surgery, reducing the chance of unintentionally damaging surrounding areas (Marzano et al., Reference Marzano, Piardi, Soler, Diana, Mutter, Marescaux and Pessaux2013).
Research on the integration of AR in laparoscopic surgery suggests that AR can be a useful tool for adding depth cues to an otherwise 2D visualization and for affording surgeons a more comprehensive view of the operating field by digitally integrating images into the workflow (Kang et al., Reference Kang, Azizian, Wilson, Wu, Martin, Kane and Shekhar2014). AR systems are helpful for 3D visualization: surgeons were able to easily identify the major organs during AR visualization and were more accurate in their procedures (Kang et al., Reference Kang, Azizian, Wilson, Wu, Martin, Kane and Shekhar2014; Lopez-Mir et al., Reference Lopez-Mir, Naranko, Fuertes, Alcaniz, Bueno and Pareja2013).
However, there is evidence that using AR during surgical procedures can hinder health care providers' ability to detect relatively obvious stimuli. Dixon and colleagues (Reference Dixon, Daly, Chan, Vescan, Witterick and Irish2013) found that surgeons who performed an endoscopic navigation task with AR-provided anatomical contours were more accurate in navigating the endoscope but were less likely to identify critical complications and foreign bodies than surgeons who did not use AR. This suggests that the use of AR led to an attentional tunneling effect, essentially distracting the surgeon from relatively obvious (and important) stimuli. Errors such as this could be dangerous in an applied setting, where patient well-being is at risk.
Limitations of Augmented Reality in the Medical Setting
Overall, it seems that AR may be a useful tool for affording skill acquisition and improving performance in medical settings. However, there are some limitations to the implementation of AR in these environments. First, training systems need to be developed with a target audience in mind and tested with specific trainees (e.g., surgical residents) before being implemented fully into curricula. Second, trainees’ experience levels should be accounted for so that the training system is designed around a specified level of experience. Third, the innate learning requirements of an AR-based environment need to be considered (Alaraj et al., Reference Alaraj, Charbel, Birk, Tobin, Luciano, Banerjee and Roitberg2013); proficiency with the system is particularly critical when that system is being used with real patients.
Although there has been evidence supporting the effectiveness of AR-based training in medicine as outlined within this section, more research is needed to determine whether skills learned during training will transfer to an actual work task, and how potential deleterious effects on performance, like the tunneling effect, can be averted. We discuss future research in the next section.
Future Directions for Research and Practice in Augmented Reality
As we have demonstrated in the preceding text, AR has become increasingly pervasive and beneficial in the medical domain for purposes of training and improving workplace performance. Of course, the potential for such technologies is profound, and research has only begun to scratch the surface of the many possibilities. Moreover, the increasing adoption of technology in the workplace is rapidly changing the way work is done. The limits of what is possible with this technology seem to be constrained only by what humanity can imagine. Highly sophisticated AR systems have been conceived of and visualized in popular science fiction films, such as Minority Report and Iron Man, and such examples only serve to highlight what is on the not-too-distant horizon. With the increasing adoption not only of mobile phones with AR capabilities but also of wearable technologies such as Google Glass, future research is needed to investigate the implications of AR for improving training and workplace performance. We cannot claim to comprehensively cover all the possibilities here, but we hope to outline several proximal areas of research worth pursuing. In particular, we discuss how AR could be used to augment learning and accelerate training, improve human-machine system interactions, and improve social and team interaction in the workplace.
Augmenting Learning and Accelerating Training
Prior work has shown that AR systems can be used to augment learning (e.g., Keebler et al., Reference Keebler, Wiltshire, Smith, Fiore and Bedwell2014) and, perhaps, accelerate the training process. Recent technological advances have been applied to the learning sciences with the aim of developing intelligent systems that can adapt to a learner to accelerate the learning process. A subfield of the learning and cognitive sciences called accelerated learning has focused specifically on how to reduce the amount of training required for a given domain, improve retention of knowledge, and put individuals on a path to developing adaptive expertise (Hoffman et al., Reference Hoffman, Feltovich, Fiore, Klein and Ziebell2009; Hoffman et al., Reference Hoffman, Ward, Feltovich, DiBello, Fiore and Andrews2013). The learning and training environment, particularly in the workplace, is increasingly a multimodal engagement in which learning materials are distributed across technological systems (e.g., computers), artifacts (e.g., books), and people (e.g., a mentor). Further, many modern training tasks are performance based, such that it is often not feasible to refer to training materials during the task. Regardless of the type of training, what is important for accelerating the learning process is an emphasis on integrating knowledge directly into the environment, as well as on developing the metacognitive capacities involved in learning. This is another area where AR could be applied in future research to augment learning and accelerate training. Specifically, the following question should be addressed by future work: How can AR be used to accelerate the learning process by facilitating the integration of knowledge and developing metacognitive capacities?
In general, AR applied to learning has focused on presenting information in novel ways, such as providing three-dimensional representations of anatomical structures that are commonly taught with two-dimensional representations. Future research applying AR to learning should move beyond developing new ways to represent information and focus on ways to integrate knowledge. For example, when a learner is reading information in a text, AR displays could provide prompts that indicate how that information relates to prior knowledge. Further, the gap between learning from instructional materials and applying them in real-world contexts, as well as transfer from one learning context to the next, could potentially be bridged. AR could provide the mechanism to remind learners or trainees of relevant information they acquired during training as it becomes relevant to the present workplace task.
Another application of AR is delivering prompts during the training process that induce metacognitive activity (e.g., Fiore & Vogel-Walcutt, Reference Fiore and Vogel-Walcutt2010; Wiltshire et al., Reference Wiltshire, Rosch, Fiorella and Fiore2014). For example, an AR display could show trainees text-based prompts while they are preparing for a performance session that activate prior knowledge related to the task, prompts during the task that elicit monitoring of performance, and prompts after the task that elicit reflection on performance. Such strategies have been argued to be essential for accelerating learning and developing expertise, yet they have not been administered through an AR system.
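The before/during/after structure described above maps naturally onto a phase-based prompt scheduler. The sketch below is a minimal illustration; the prompt wording and the scheduling interface are our assumptions, not drawn from the cited studies.

```python
# Minimal sketch of phase-based metacognitive prompting. Prompt text and
# the three-phase interface are hypothetical illustrations.

PROMPTS = {
    "before": "What do you already know about this task?",   # activate prior knowledge
    "during": "Is your current approach working? Why?",      # monitor performance
    "after":  "What would you do differently next time?",    # reflect on performance
}

def prompt_for(phase):
    """Return the metacognitive prompt an AR display would show in a phase."""
    if phase not in PROMPTS:
        raise ValueError("phase must be 'before', 'during', or 'after'")
    return PROMPTS[phase]

for phase in ("before", "during", "after"):
    print(phase, "->", prompt_for(phase))
```

Even this trivial scheduler highlights the open empirical questions: how prompts should be timed within the "during" phase, and whether display-delivered prompts produce the same benefits as instructor-delivered ones.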
In sum, research on AR applied to learning should investigate: (1) how AR can facilitate the integration of new knowledge with preexisting knowledge during learning and performance, (2) how AR can facilitate the connection of learned content with the actual environment and novel situations, and (3) how AR can provide prompts that engage a learner in metacognition before, during, and after training.
Augmenting Human-Machine System Interaction
The modern work environment is all too often characterized by increasingly complex technological systems for which effective performance is predicated on in-depth knowledge of the system and continually updated information about how that system is performing. As system complexity has increased, so too have the interfaces designed to convey information regarding system statuses and performance. Evidence for this claim can be seen by examining, for example, images of nuclear power plant control rooms, the cockpits of modern aircraft, or the International Space Station Mission Control room. Indeed, many of these domains have been studied by the field of cognitive engineering: an interdisciplinary approach that applies knowledge and techniques from cognitive science to study and design better human-machine systems (see Wilson, Helton, & Wiggins, Reference Wilson, Helton and Wiggins2013 for a review). Common across complex human-machine systems is the need for technology to convey information about the system such that the human understands the system and what actions they can perform. This is another area that future research and advances in AR should pursue. In particular, the following question should be addressed: In what ways can AR be used to help system users cope with the information complexity of the technology?
The answer to this question is likely domain specific. For example, in domains such as nuclear power or space flight, the work environment is information dense, with many displays. AR could provide a simple and unobtrusive means of cueing the operator to the appropriate display at a given time. Additionally, it could serve as a visual-spatial storage space to offload cognitive demands in memory-intensive tasks. For example, if several pieces of information from one display need to be compared with information on another display, an AR system could store this information in a mobile display. Alternatively, in domains requiring human operation of autonomous systems (e.g., human-robot interaction), AR displays could provide information about how those systems are performing as well as their location. Ultimately, the application of AR to improving work in human-machine systems will be contingent on the domain, but could likely be determined by focusing on: (1) identifying the most salient and task-critical pieces of information, and (2) determining how that information could be conveyed with AR to reduce the complexity commonly associated with the domain and lead to better workplace performance.
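The "visual-spatial storage" idea can be sketched as an AR clipboard: readings pinned from one display are held by the system so they can be compared against another display without taxing the operator's working memory. Everything in this sketch, including the display names, quantity labels, and example values, is invented for illustration.

```python
# Hypothetical AR "clipboard" for offloading cross-display comparisons.
# All display names and readings below are invented examples.

class ARClipboard:
    """Holds pinned readings so the operator need not memorize them."""

    def __init__(self):
        self.pinned = {}

    def pin(self, display, label, value):
        """Store a reading taken from a given display."""
        self.pinned[(display, label)] = value

    def compare(self, display_a, display_b, label, tolerance):
        """Report whether two pinned readings of the same quantity agree
        within a tolerance, replacing a memory-based mental comparison."""
        a = self.pinned[(display_a, label)]
        b = self.pinned[(display_b, label)]
        return abs(a - b) <= tolerance

clip = ARClipboard()
clip.pin("primary_panel", "coolant_temp_C", 287.0)
clip.pin("backup_panel", "coolant_temp_C", 289.5)
print(clip.compare("primary_panel", "backup_panel", "coolant_temp_C", 5.0))  # True
```

The design question for AR research is not the bookkeeping itself but where and how such pinned values should appear in the operator's field of view without adding clutter of their own.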
Augmenting Social and Team Interaction at Work
The modern workplace is fundamentally a social environment in which effective performance requires coordination that spans the boundaries of biological and technological systems (i.e., across humans, resources, and technology; Hutchins, Reference Hutchins1995; Malone & Crowston, Reference Malone and Crowston1994). In the workplace, joint activities between humans are commonplace and are typically geared toward aligning the actions, knowledge, and objectives of team members who have different, albeit interdependent, roles (e.g., Rico et al., Reference Rico, Sánchez-Manzanares, Gil and Gibson2008). Essential to these joint actions is that the individuals involved can understand each other's intentions and other mental states to bring about the desired changes in the environment (e.g., accomplishing a task; Knoblich, Butterfill, & Sebanz, Reference Knoblich, Butterfill, Sebanz and Ross2011). This is one area to which we suggest future AR research and development could be applied. The general question here is: How can AR be used to enrich an individual's understanding of the social environment such that it helps them interact with team members and others to better accomplish their work?
One of the most robust areas of computer vision research in this regard is the detection of basic emotions from human facial expressions (e.g., Janssen et al., Reference Janssen, Tacken, de Vries, van den Broek, Westerink, Haselager and Ijsselsteijn2013). Indeed, recent applications, particularly for Google Glass, have focused on leveraging this capability to enrich the user's information regarding the social environment. Specifically, applications have already been developed that augment a user's view of the environment by layering it with social information, such as the displayed emotion of an individual being observed.
While these technological capabilities are only now being developed for AR, the implications for the workplace, we can speculate, are far reaching. For example, how often do we misinterpret the mood or intention of a colleague or co-worker? Or misremember the last time we spoke with someone? Or, in a large company, forget an employee's name? Augmentations could alleviate these problems: AR combined with computer vision techniques can provide an augmented and enriched view of the social environment. Future research should: (1) identify the many possible types of social information that could be displayed with AR systems (e.g., intentions, emotions, actions, beliefs, personal history) and (2) examine the effects that displaying such information has on interactions, relationship development, and, ultimately, effective task completion.
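To make the idea concrete, a social overlay of the kind speculated about above would combine a recognized identity with whatever social attributes the detection pipeline supplies into a short on-screen annotation. The sketch below stubs out the detection results entirely; no real computer-vision API, and none of the names or dates, are implied by the cited work.

```python
# Speculative sketch of a social AR overlay annotation. Detection results
# are stubbed as a plain dict; all field names and values are invented.

def social_overlay(detection):
    """Build the annotation text an AR display might render near a face,
    including only the attributes the detector actually supplied."""
    parts = [detection.get("name", "Unknown")]
    if detection.get("emotion"):
        parts.append("appears " + detection["emotion"])
    if detection.get("last_met"):
        parts.append("last spoke " + detection["last_met"])
    return " | ".join(parts)

annotation = social_overlay(
    {"name": "J. Rivera", "emotion": "frustrated", "last_met": "2015-03-02"}
)
print(annotation)  # J. Rivera | appears frustrated | last spoke 2015-03-02
```

Even this toy example surfaces the research questions raised above: which of these fields help interaction, and which (e.g., a mislabeled emotion) might actively harm it.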
Conclusion
AR is an emerging technology with several implications for training systems. With enhancements in the technology used to create AR (e.g., cameras), its applications will continue to expand. Corporations such as Microsoft have invested heavily in devices like the HoloLens, an advanced wearable AR system that will enable individuals to interact in a virtual space overlaid on reality without relying on a complex computing system to accompany the augmentation. The potential for AR, especially in simulation-based training systems, is quite high: with appropriate implementation, refinement, and evaluation, AR systems could forever change the way we train. But the potential for AR extends much further, as integration of these systems into daily work could alter the information that is available in real time. Overlaying needed information in real time potentially removes the need to ever cross-reference information from other sources, enhancing human performance in the workplace beyond current capabilities.
Nearly everyone has a mobile phone with web capabilities. Recent data suggests that even in emerging economies, large numbers of citizens have access to web-enabled phones and use them voraciously. The Economist, not known for hyperbole, noted that more than half of the adult population, worldwide, has a web-enabled smartphone and estimates that 80% will have one by 2020 (The Economist, 2015). This is great news for consumers – it is inexpensive and easy to keep in touch with one’s elderly Aunt Betty and Uncle Dale or post photos of breakfast pastries at the cool new bakery on Stonegate Drive, and makes it easy to settle random trivia bets about questions such as, “Does Mr. T’s real last name begin with the letter ‘T’?” Tighter integration with the lives of friends and family, immediate weather data, directions, and winning said trivia bets (Mr. T was indeed born Lawrence Tureaud) are generally viewed as advancing quality of life. This advance is especially notable in emerging economies, where access to crop information (McCole et al., Reference McCole, Culbertson, Suvedi and McNamara2014), health advice (Chang et al., Reference Chang, Njie-Carr, Kalenge, Kelly, Bollinger and Alamo-Talisuna2013), and basic services such as banking (Shaikh & Karjaluoto, Reference Shaikh and Karjaluoto2015) has greatly improved the quality of life for millions.
The ubiquity of web-enabled mobile devices should also be great news for educators and training professionals. And seventh-grade teachers, college professors, piano instructors, and corporate trainers across the globe should be thrilled that mobile technology has made possible the ultimate goal of effective training: anywhere, anytime! Indeed we have anecdotal evidence of successful implementation of mobile learning solutions across a variety of educational, governmental, and industrial settings (Deriquito & Domingo, Reference Deriquito and Domingo2012; Mercado & Murphy, Reference Mercado and Murphy2012; Messier & Schroeder, Reference Messier and Schroeder2014; Sharples, Corlett, & Westmancott, Reference Sharples, Corlett and Westmancott2002). However, a careful review of the relevant research from the education, psychology, and information technology literatures suggests that we know little about mobile learning and that the mobile learning ecosystem is shifting rapidly.
Similar to the early days of web-based training (e.g., Hall, Reference Hall1997), use of mobile learning in organizations has outpaced rigorous research on the topic. Thus, in this chapter, we will identify what we do and do not know and improve our understanding of the trajectories that technology platforms, learning content, and learner attitudes are traveling along. We will explore both foundational and emerging research and identify a few areas of focus that will help practitioners and researchers better leverage mobile learning to move us closer to the goal of effective training anywhere, anytime.
What Is Mobile Learning?
Mobile learning is defined in many ways across the different literatures. Practitioners and researchers each take different approaches to defining mobile learning to suit their needs in different disciplines, including education, training, psychology, and information technology. The variety of definitions makes it difficult to identify and understand key terminology, theory, and developing best practices. For example, one definition provided by an educational technology organization is “the ability to obtain or provide educational content on personal pocket devices” (mobl21, 2015: par. 1). Similarly, the Association for Talent Development defined mobile learning in 2010 as deploying learning content on a mobile device in a way that allows organizations to provide new learning possibilities for workers who are not tied to a specific location (Woodill, Reference Woodill2010). These definitions are grounded in the use of a specific class of devices, which makes them difficult to use from a research perspective. A construct that is solely reliant on ever-changing technology will be short lived, and will make it challenging for the field to accumulate a body of knowledge over time.
In the academic literature, definitions also tend to focus on specific attributes of mobile learning, such as connectivity, accessibility, immediacy, portability, and customizability, as key elements (Korucu & Alkan, Reference Korucu and Alkan2011; Laouris & Eteokleous, Reference Laouris and Eteokleous2005; Traxler, Reference Traxler and Ally2009). The general approach is that mobile learning means learning anywhere, anytime. However, within this approach we see a wide range of learning activities, from asynchronous access to written materials, to sets of short, video-enhanced, interactive modules, to synchronous video-based instruction (Laouris & Eteokleous, Reference Laouris and Eteokleous2005; Paul, Reference Paul2014).
These important variations in the definition and use of mobile learning across the literature lead to some interesting questions. To what extent is mobile learning “device driven”? Does it matter if a learner is using a laptop, a mobile phone, or just a book for learning? Perhaps the key is location. Does it matter if the learner is in the office, on the subway, or on an airplane? Alternatively, it may be the interaction between device and location. If a learner completes training on a mobile phone while sitting at a desk in the office, is that mobile learning? What about reading a book while on an airplane? Clearly, the existing definitions in the literature do not fully circumscribe the mobile learning context.
The range of critical aspects of the various definitions of mobile learning can be distilled into four broad themes based around devices, time and place, social interaction, and context. We discuss each of these themes, and then explore two important dynamics of mobile learning – accessibility and distractibility – as opposing forces in the future development of mobile learning. We then examine mobile learning in the traditional training cycle to more clearly identify opportunities and challenges moving forward. Finally, we discuss some research opportunities for mobile learning that may help mobile learning stakeholders (both individuals and organizations) better understand the mechanisms that can yield the best return on both time and financial investments.
Themes in Defining Mobile Learning
We reviewed literature on mobile learning from more than 20 years of research in the education, psychology, and information technology literatures. This review suggests that there are four broad themes embedded in the various definitions of mobile learning. The first theme is device based (e.g., mobl21, 2015; Motiwalla, Reference Motiwalla2007; Saccol et al., Reference Saccol, Reinhard, Schlemmer and Barbosa2010; Wang, Wu, & Wang, Reference Wang, Ming-Cheng Wu and Wang2009). These definitions distinguish mobile learning from other types of learning in that the learner uses a portable, Internet-enabled device. The implication is that if the learner is using a mobile technology device, the learner is engaging in mobile learning. Completing training on a PC that is physically connected to the wall is clearly not mobile learning. The construct of mobile learning is thus derived from a technological perspective and is not meant to cover reading a book on an airplane. But then, if a manager uses her smartphone to complete training in her office, is that mobile learning?
This leads to the second theme within mobile learning definitions: time and place (e.g., Denk, Weber, & Belfin, Reference Denk, Weber and Belfin2007; Korucu & Alkan, Reference Korucu and Alkan2011). This theme focuses on when and where the learner is engaging in the training or learning. Many authors set the defining feature of mobile learning as learning that takes place, literally, “anytime, anywhere” (Korucu & Alkan, Reference Korucu and Alkan2011), including any learning that happens when the learner is not at a fixed, predetermined location (Crompton, Reference Crompton, Ally and Tsinakos2014). In this class of definitions, mobile learning would mean that the learner is engaging in a learning activity:
(1) in a place other than in her/his traditional learning location (classroom, office, job site, training site, etc.), and/or
(2) at a time other than when the learner would traditionally engage in learning activities (outside of normal work shifts or typical school hours, etc.).
While the type of device permits mobility, it is the time and place dimension that determines more directly if the learning is actually mobile. This is similar to the distinction within the learner control literature of learner control tools being offered versus learners using them or not (Brown, Howardson & Fisher, Reference Brown, Howardson and Fisher2016; Howardson et al., Chapter 5 this volume). It matters little if learner control (or mobility) is possible if no learner uses it.
The third theme is the social aspect of mobile learning. Many authors include a social dimension in their definitions of mobile learning, arguing that mobile learning inherently involves social interaction and collaboration (e.g., Crompton, Reference Crompton, Ally and Tsinakos2014; Koole, Reference Koole and Ally2009; Sharples, Reference Sharples2005). For example, mobile learning can be shared with relevant individuals (colleagues, suppliers, customers, etc.) or shared with loosely or completely unrelated individuals (nonwork friends, family). The social context is created using either social communication tools built directly into the mobile learning platform or learners’ own communication tools/apps such as Facebook and Twitter. Learners may communicate with specified co-workers or trainees or with members of their own professional and personal networks. The social aspect of mobile learning may also occur in person, as learners may share content on a mobile device in a social manner, such as watching videos together or receiving help on a knowledge assessment. This social functionality of mobile learning is similar to Kraiger’s (Reference Kraiger2008) discussion of third-generation learning as the role of learner-to-learner interaction becomes an important function of training in general. However, despite the centrality of social interaction in many mobile learning definitions, we argue that the social aspect is one possible feature of mobile learning, but it is not a necessary condition. A learner could be using a smartphone or tablet to learn something away from his or her traditional location, outside of the traditional time, but be doing it without direct interaction with other learners.
The fourth aspect of mobile learning we derived from our review is the intended usage context in which the learning event takes place (Traxler, Reference Traxler and Ally2009). By this we mean how learners are expected to use, apply, and retain the knowledge and skills addressed in the training. A mobile learning program can be designed to help users learn new content that they would use in the near future, to review or refresh knowledge and skills learned earlier, or to provide immediate performance support. One unique characteristic of many mobile learning tools is that they can be easily used as performance support tools on the job. Training designers report that users are demanding extremely short training modules, less than five minutes at a time (Roberts, Reference Roberts2012). It seems likely that in this length of time, learning objectives may include some explicit knowledge that needs to be memorized but less tacit knowledge that requires deep and complex understanding. Learning modules may build together to help learners develop a new skill or decision-making expertise, but these would need to be carefully crafted to bring together the elements of knowledge, provide opportunities for practice, and give feedback.
Brown, Charlier, and Pierotti (Reference Brown, Charlier, Pierotti, Hodgkinson and Ford2012) made a clear distinction between information and instructional learning resources. In their typology, an information learning resource is one that provides easy access and retrieval of information that can be used to help develop job-related knowledge or for learning something as needed. Sometimes information needs to be retrieved to complete a task and the user has no intent of learning it. In this case, the retrieval process does not qualify as a learning event. Similarly, Pimmer and Gröhbiel (Reference Pimmer and Gröhbiel2008) argue that retrieval of information is not learning. Thus, even informational learning resources delivered through mobile technology should have the goal of helping employees learn something rather than relying on that external source each time they need the information. Mobile learning platforms may serve a dual role as both learning stimulus and easily accessed performance support, a role of which those dusty training binders on everyone’s cubicle bookshelves would be envious.
Now we return to the questions posed at the beginning of this section about what is mobile learning and what is not. The inherent technological nature of the construct eliminates reading a book on a plane as mobile learning. We are unwilling to tie the construct specifically to a particular technology, but it clearly is meant to be technology enabled. What if that traveler is reading a book on an e-reader? He or she is using appropriate technology, is out of the office, and perhaps on nonwork time. The e-reader has a social component where it can mark the most commonly highlighted passages by all readers on that platform. Thus, it would seem to meet the social criterion. The usage context of this experience is less well defined. The reader is probably not using it as an immediate performance support tool, but may be trying to identify one or two leadership principles that could be applied at work. Imagine another passenger on the airplane who is using the seat-back entertainment system to learn some Japanese while flying to Tokyo. This passenger seems like he or she is using a type of mobile training. It involves technology and is conducted out of the office and away from work time. It may lack the social component, unless the passenger is trying out some new words and phrases on the person in the next seat. He or she may have specific goals for applying the new language skill upon arrival in Tokyo. We could go on, but at this point in the evolution of mobile learning, we advocate defining it broadly, including knowledge development in both training for immediate knowledge and skills for longer-term development.
It will be important for researchers to clearly operationalize the instances of mobile learning that they are studying and developing to help build a common understanding of the related phenomena. We build off of Kukulska-Hulme (Reference Kukulska-Hulme2010), Sharples, Taylor, and Vavoula (Reference Sharples, Taylor, Vavoula, Andrews and Haythornthwaite2007), and Traxler (Reference Traxler and Ally2009) and offer a definition of mobile learning as knowledge and skill building using technological tools that allow learners on-demand access to instructional resources untethered to or enhanced by geographic location.
Thus, the core of mobile learning is knowledge and skill development with tools that are connected and portable. Beyond connected and portable, mobile learning can be: (1) customized to learner needs in terms of time and location and (2) social in that information can be easily shared with peers and instructors both formally and informally. Most interestingly, the combination of customization and social connection can result in the ability to tailor a learning program to the capabilities of specific devices, and be situationally connected – that is, offering learning experiences that are relevant to the learner’s location in time and space at the exact moment the learning occurs. For example, the training module can change based on the device’s GPS and sensors and on the location and conditions in which the learner finds him/herself (plant location, weather conditions, type of equipment, etc.). This can be a mild intervention, such as showing videos on a piece of equipment once the learner nears that equipment (situated learning), or an extensive intervention, such as using augmented reality (AR), where the camera view from the phone includes layers of data, embedded within the actual physical scene viewed through the device, that help learners see things they could not otherwise see (Wojciechowski & Cellary, Reference Wojciechowski and Cellary2013). In the most extreme sense of situational connectedness, the learner can be taught or shown material based on exactly what she or he is seeing at that very moment and be connected with peers who can offer expertise at the moment that expertise is needed to enhance learning.
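The situationally connected mechanism described above can be illustrated with a minimal sketch. The following Python fragment is purely hypothetical: the equipment names, coordinates, and 50-meter trigger radius are invented for illustration and do not come from any cited system. It shows only the basic idea that a GPS reading from the learner's device can select the training module relevant to the learner's current location.

```python
import math

# Hypothetical catalog mapping plant equipment to its location and a short
# training video. All names and coordinates are invented for illustration.
EQUIPMENT_MODULES = {
    "forklift": {"lat": 42.3601, "lon": -71.0589, "video": "forklift_safety.mp4"},
    "press":    {"lat": 42.3605, "lon": -71.0570, "video": "press_operation.mp4"},
}

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in meters (haversine formula)."""
    r = 6_371_000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def module_for_location(lat, lon, radius_m=50):
    """Return the training video for the nearest registered equipment within
    radius_m of the learner, or None if no equipment is nearby."""
    best, best_dist = None, radius_m
    for info in EQUIPMENT_MODULES.values():
        d = distance_m(lat, lon, info["lat"], info["lon"])
        if d <= best_dist:
            best, best_dist = info["video"], d
    return best
```

In use, a mobile app would poll the device's location and call `module_for_location` to decide whether to surface a situated-learning prompt; a real implementation would also consider the device's other sensors and the learner's training history.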
One important question is how our definition of mobile learning differs from e-learning. To sharpen the focus on understanding mobile learning, the next section addresses similarities and differences between e-learning and mobile learning.
E-learning vs. Mobile Learning
Proponents of mobile learning are still struggling to find a literature and rhetoric distinct from conventional tethered e-learning. (Traxler, Reference Traxler2007: 16)
A major definitional problem is disentangling e-learning and mobile learning. This problem arises because computing power has become smaller and increasingly portable. The ability to retrieve documents, watch video, take assessments, and communicate with video and audio in real time has moved from desktop and laptop technology that people use in an indoor, office, or classroom setting to something pocketable and usable anytime, anywhere with a smartphone, tablet, or watch. Technology has freed learners from being tethered to a classroom or their desk.
The pace of technological advance has left scholarship lagging in addressing the e-learning/mobile learning boundary. Mobile learning has been viewed as either an evolutionary step in the e-learning continuum (e.g., Mostakhdemin-Hosseini and Tuimala, Reference Mostakhdemin-Hosseini and Tuimala2005) or as a separate subset of e-learning (e.g., Georgiev, Georgieva, & Smrikarov, Reference Georgiev, Georgieva and Smrikarov2004). Others argue that there is a more important, nuanced distinction. For example, Sharples (Reference Sharples, Vavoula, Pachler and Kukulska-Hulme2009) suggests that “mobile learning creates new contexts for learning through interactions between people, technologies, and settings” (18).
For the purposes of this chapter, we consider mobile learning to be an evolutionary step beyond e-learning but also with the potential to be fundamentally separate from e-learning. We believe it to be a step along the path of e-learning that creates opportunities and challenges that were unthinkable in the 1990s. It is evolutionary because it shares many of the same learning and learner requirements as more traditional e-learning. There are significant concerns about learner motivation, learner control over various activities, and persistence and drop-out rates. This may be because e-learning typically offers greater control to learners, and increases the psychological distance between the learner and instructor and other learners. With less direct accountability, the relationships between instructor and learner and among learners change, and this holds in both e-learning and mobile learning.
Mobile learning offers similar opportunities for learning “anytime, anywhere” as we saw in early descriptions of e-learning (DeRouin, Fritzsche, & Salas, Reference DeRouin, Fritzsche and Salas2004), although now the “anywhere” element is taken to a greater extreme. Finally, both e-learning and mobile learning offer customizability. As Traxler (Reference Traxler2007) pointed out, “Learning that used to be delivered ‘just in case’ can now be delivered ‘just in time, just enough, just for me’” (14). Traxler was describing mobile learning, but that description could just as easily be applied to many instances of e-learning.
We also think mobile learning has the potential to exist as a separate category from e-learning, both as a different type of educational/training intervention and as a technology to create different types of learning processes. One mobile learning practitioner suggested that “[m]obile learning is a bigger deal than most organizations realize. . . . It represents an amazing disruption and opportunity in how we educate” (Daniel Burrus, quoted in Roberts, Reference Roberts2012: para. 4). Mobile learning, as an intervention, can allow interaction with content in more flexible and location-aware ways. It can also better integrate learning with social technologies that can enhance the richness and applicability of knowledge development (Roberts, Reference Roberts2012). Mobile learning offers location-aware abilities to customize the learning content to the exact location of the learner. An example of this is AR applications, where learners use display technology (e.g., phones, headsets, glasses) to overlay digital content onto their visible surroundings. The result is that components or hazards can be identified or instructions can be visually displayed within the user’s line of sight (Westerfield, Mitrovic, & Billinghurst, Reference Westerfield, Mitrovic and Billinghurst2015). These are related to the intentionality of the training designer to leverage the valuable aspects of mobile technology in terms of the length of learning modules, the use of video, the use of camera and other sensing technologies on the mobile device, and the ability to conduct real-time assessment. All of these support the possibility of thinking about mobile learning as a distinct process from e-learning.
The current state of mobile learning, however, is largely analogous to the early days of e-learning when training developers simply recorded lectures or put Microsoft PowerPoint slides online and called it e-learning. We now see sophisticated e-learning programs that offer guidance, adapt to learners, and integrate complex gaming techniques. We expect that mobile learning will follow a similar trajectory. Mobile learning practice is currently in its infancy, with many mobile learning programs consisting of little more than delivery of Microsoft Word or PDF documents to a mobile platform (Roberts, Reference Roberts2012). Creating learning modules that leverage mobile technology in terms of social sharing, instant feedback, location-specific customization and tools for real-time performance support, and longer-term transfer of training may lead to a unique class of training material distinct from traditional e-learning. We believe it makes sense to deal with mobile learning and e-learning as related categories along the evolutionary path, but with mobile learning having unique characteristics and capabilities that can and should be leveraged in unique ways.
In summary, mobile learning consists of education and training that use easily transportable, web-enabled technologies to allow significant flexibility in the learning environment. Mobile learning can differ from both traditional classroom learning and e-learning because of how the learning process both affects and is affected by location and time. Mobile learning makes learning material accessible across time, location, and device beyond what e-learning can offer, and the idea of situational connectedness further separates mobile learning from e-learning.
But, should we rush headlong into AR and situational connectedness? There is a potential dark side to mobile learning that must be addressed. By the very nature of attempting to learn on a mobile device that wants to let you know every time an e-mail or tweet arrives, learners will be exposed to distractions. Further, being out in the world rather than in a relatively controlled environment of the office or classroom invites more distraction.
Two Key Forces That Drive Effective Mobile Learning
Two potentially opposing forces are relevant in the design and deployment of mobile learning: accessibility and distractibility.
Accessibility
Accessibility is one key attribute of mobile learning. As described in the preceding text, learning on a mobile device is intended to be convenient. Most mobile learning uses a multipurpose device that in many cases a user already owns and uses, and this device can be used almost anytime, anywhere, in any conditions. The notion of accessibility, or closely related terms such as access or convenience, is a consistent component of almost every definition currently in use in the research and practitioner literatures.
Accessibility impacts both the training designer and the training consumer. From the training design perspective, where and when the trainee uses the training materials should be of concern. The duration of learning events, the type of methods used (simple vs. complex) and the type of media used (video vs. text) may all need to change depending on how mobile the trainees will be. On the other side, the device and operating system become relevant to designers in determining how to best reach the intended learners. For example, should the tool be browser based or app based? Is the training program designed to take advantage of existing social networks or operate in isolation? These platform decisions are important components of accessibility.
Distractibility
The second key attribute of mobile learning as it is designed and practiced today is distractibility. Distractibility is not generally at the surface of most mobile learning definitions, but it underlies much of the discussion in existing published papers in this domain. Distraction is a cognitive state whereby individuals shift attention away from a primary task and toward a secondary, noncritical task, and this shift is not typically intentional (Lleras, Buetti, & Mordkoff, Reference Lleras, Buetti, Mordkoff and Ross2013). This construct has been well studied in the psychology literature, in workplace research on task attention, and in educational research on learning effort and retention (e.g., Appelbaum, Marchionni, & Fernandez, Reference Appelbaum, Marchionni and Fernandez2008; Fang, Reference Fang2009; Lavoie & Pychyl, Reference Lavoie and Pychyl2001).
The distractibility of the user and the situation are likely key factors in the user’s ability to attend to, process, store, and even apply information provided in mobile learning. Mobile devices are designed for constant notifications and multitasking. While multitasking is recognized as a common requirement of many job environments, and skill in multitasking is listed as a required or desired ability in many job postings, much research has found negative effects of multitasking on performance (Appelbaum et al., Reference Appelbaum, Marchionni and Fernandez2008). The social aspect of mobile learning is often viewed as a benefit. Users can share perspectives or opinions, offer additional information, provide critical feedback, or ask questions of peers or instructors in real time. This same benefit, however, can become a drawback if peers distract each other with off-task information, irrelevant content, or unrelated questions. The availability of social media on the core learning device offers additional opportunities for distraction, as demonstrated in studies examining student use of laptops in college classrooms (e.g., Fang, Reference Fang2009; Lavoie & Pychyl, Reference Lavoie and Pychyl2001). Learners may also confuse surface-level social media interaction with learning. Someone could find an idea in the training interesting and use a social media channel to “share” the information. This learner may feel that he or she has done something to participate in the training, but the simple act of sharing does not mean that the information was processed in a way that leads to real learning.
Even though these aspects of mobile learning that interject real-time information and allow social interaction are intended to benefit the learner, they may ultimately serve to distract the learner and decrease the effectiveness of the learning event. One mechanism for examining distractibility is through interruptions. At this point, there has been little research on the impact of interruptions on training effectiveness and we do not know what the ultimate impact will be (Noe, Clark, & Klein, Reference Noe, Clarke and Klein2014). However, Sitzmann et al. (Reference Sitzmann, Ely, Bell and Bauer2010) demonstrated that even very brief interruptions from simulated technical difficulties inserted into an online training program led to decreases in short-term learning and increases in attrition, although high training motivation insulated learners from these negative effects. Further, offering learners too many choices can be distracting, especially in an e-learning environment, and can disrupt focus on the learning task at hand (Scheiter & Gerjets, Reference Scheiter and Gerjets2007).
Mobile learning also has potential to distract learners from other important activities, either in their work or home lives, such as a meeting or a family dinner. At the extreme, learners could be participating in mobile learning to such an extent that it reduces job performance. Stanko and Beckman (Reference Stanko and Beckman2015) examined the need for organizations to exhibit boundary control over the interplay between personal and work-related activities that are facilitated by personal technologies, such as cell phones. They defined individual use of informational and communication technologies as an event that would shift employees’ attention away from work. With mobile learning, however, we see potential role conflict within the workspace as employees must decide how to handle incoming messages and prompts about mobile learning events that may conflict with other job requirements. Mobile learning could also serve to increase work-life conflict if the implication of the “anytime, anywhere” learning environment is that learning is done at home outside of work time. In the next section, we examine the implications of mobile learning, focusing on these two characteristics of accessibility and distractibility, in the context of the traditional instructional design process.
Mobile Learning in the Traditional Training Cycle
In this section we examine the impact of mobile learning on three key phases of the instructional design process (Goldstein & Ford, Reference Goldstein and Ford2002): needs assessment, design/development, and evaluation (see Table 13.1). To provide context for this review, we use a variety of mobile learning scenarios from the workplace setting (Pimmer & Gröhbiel, Reference Pimmer and Gröhbiel2008), looking at sales representatives, engineers, nurses, and apprentices. We examine issues related to accessibility and distractibility in each of the three phases.
Table 13.1 Mobile learning in the instructional design cycle
| Phase | Key Concerns |
|---|---|
| Needs Assessment: Organizational Analysis | Employee mix; job flexibility; device selection; climate/culture |
| Needs Assessment: Task Analysis | Learning vs. performance support; opportunities for more effective data collection |
| Needs Assessment: Person Analysis | |
| Design and Development | |
| Evaluation | Using mobile devices to collect evaluation data |
Needs Assessment
Needs assessment is often divided into three components: organizational analysis, task analysis, and person analysis.
Organizational Analysis
The organizational analysis broadly addresses factors that may affect the success of the training such as goals, resources, and climate (Goldstein & Ford, Reference Goldstein and Ford2002). At the organizational level, there are several factors that should be examined when planning a mobile learning program, including the mix of employees, features of the job, devices in use at the organization, and features of the organizational climate for training.
Employee mix. There is a general assumption in much of the practitioner literature that because members of the millennial generation have grown up with ubiquitous mobile devices, these users will prefer or even demand training on this platform. Companies that follow fads to recruit and retain talent might cater to this demographic and offer mobile learning on platforms that millennials use. Even if younger employees do prefer mobile learning, there is little evidence that matching learner preferences to the training results in better learning (Pashler et al., Reference Pashler, McDaniel, Rohrer and Bjork2008).
Job flexibility. Given the anytime, anywhere nature of mobile learning, one important part of organizational analysis is determining if jobs offer sufficient flexibility to gain value from the accessibility characteristic of mobile learning. Eaton (Reference Eaton2003) defines flexibility as “the ability to change the temporal and spatial boundaries of one’s job” (148). Following this definition, jobs can be flexible in many ways, including location of the work, timing of task performance, and choosing when to take breaks. Consider a mobile learning application in which nursing staff were asked to record and post videos demonstrating how to operate infrequently used hospital equipment (Pimmer & Gröhbiel, Reference Pimmer and Gröhbiel2008). In the nursing environment, choices around work location and timing of task performance can be limited. Shifts are scheduled, the location is set, and emergencies can happen at any time. It could be quite difficult for a nurse to take the time to record or view mobile learning material in the face of competing demands for continual patient care. The mobile technology may fit well with the distributed location of nurses all across a hospital or doctor’s office, but they may need more structured time to dedicate to learning. Limited job flexibility may actually reduce the overall accessibility of mobile learning resources.
Organizations with a distributed workplace in which employees are often out of the office may have not only the opportunity to use mobile learning, but also a greater need for flexible mobile learning solutions that match the flexibility of the job. For example, salespeople who travel frequently would benefit from the enhanced accessibility of mobile learning, using it between appointments or during travel to brush up on new product features, pricing schedules, or organizational compliance requirements (Pimmer & Gröhbiel, Reference Pimmer and Gröhbiel2008). Thus, from an organizational needs assessment perspective, it is important to note the extent to which it is possible for learners to engage in mobile learning as well as the extent to which the environment creates a demand for mobile learning.
Device selection. As part of the organizational level assessment for mobile learning, it will be important for designers to know if the organization provides common mobile devices such as smartphones and tablets to employees or if it uses a “bring your own device” (BYOD) policy. If the organization provides the same devices to all employees, it will be easier to develop mobile learning programs within a single hardware and software environment. Attributes such as screen size, processor speed, battery life, and software compatibility vary across devices (Liu, Han & Li, Reference Liu, Han and Li2010). Organizations with a BYOD policy may have challenges implementing certain types of training strictly on a mobile platform if they cannot be certain that everyone has access to an appropriate device. A 2016 industry survey suggests that over half (59%) of organizations have a BYOD policy (Maddox, Reference Maddox2016). Training designers will need to know what kind of devices are in use and how many employees have access to these devices, and provide alternative access to training materials if some employees do not have appropriate devices. Under a BYOD policy, accessibility of mobile learning is not evenly assured.
Climate/Culture. Organizational climate is another characteristic that may impact the success of mobile learning and should be investigated during the training needs analysis. Climate for training includes aspects of an organization’s environment such as supervisory and peer support for training, and policies and procedures that emphasize the importance of training (Tracey & Tews, Reference Tracey and Tews2005), and is associated with more positive training results. Organizational climate is generally a more effective predictor when it describes a specific aspect of employee perceptions. For example, in addition to climate for training, climate for training transfer has been examined as an important predictor of transfer (Goldstein & Ford, Reference Goldstein and Ford2002). Thus, we may find a more specific climate for mobile learning that examines the extent to which managers, peers, and organizational policies are supportive of the flexibility needed for effective mobile learning. In the case of mobile learning, other aspects of organizational climate or culture may also affect the success of such initiatives. For example, if we expect that mobile learning should be completed outside of typical work hours, an organization with a strong cultural value for work-life balance may experience more resistance from employees and consequently less success with mobile learning. Even if the mobile learning is technologically accessible, culture or climate will likely affect employees’ willingness to access it.
Task Analysis
The task-level analysis details what trainees should learn and the conditions under which the tasks should be performed (Goldstein & Ford, Reference Goldstein and Ford2002). As discussed previously, mobile learning can be and has been used for a variety of training topics, and the basic process for describing desired tasks and conditions should not change. Two areas that may change are determining what should be learned versus what should merely be accessible, and opportunities for more effective data collection.
Learning vs. performance support. Mobile learning may affect task analysis in a fundamental way, as the easy accessibility of information through mobile technology may change the types of learning outcomes required. As noted by Traxler (Reference Traxler and Ally2009), “Finding information rather than possessing it or knowing it becomes the defining characteristic of learning generally and of mobile learning especially” (14). As discussed earlier, Brown et al. (Reference Brown, Charlier, Pierotti, Hodgkinson and Ford2012) distinguish between informational and instructional learning resources, and the purpose of the mobile learning activity must be clearly stated during the task analysis phase. The learning goal may be related to how to use the platform to find needed information rather than learning the information.
There are several examples of this shift in task analysis. In the example of nurses recording and viewing video, instead of learning how to use the medical equipment, the nurses would need to learn how to access and use the database to call up the needed video when preparing to use the equipment. Similarly, Pimmer and Pachler (Reference Pimmer, Pachler, Ally and Tsinakos2014) report that IBM switched from a strategy of providing mini-courses that could be accessed anytime, anywhere to a system of “in-field performance support” (196). This support was available not just anytime, but at the right time. For example, before attending an important meeting with clients, the performance support system would allow IBM employees to download a checklist with critical information to cover during the meeting. The learning outcome is then not what needs to be covered during the meeting, but what resources are available and how to access them.
In the mobile learning context, task analysis may become more about defining the task performance conditions that require “just in case” learning in advance. Goldstein and Ford (Reference Goldstein and Ford2002) describe a “level of recall” scale used in task analysis that allows incumbents to specify if they only need general familiarity with a task and can look up the details, or if they need full recall of the task to perform it “without referring to source documents” (69). Task analysis for mobile learning applications should address how quickly and how often performance is required. For infrequent tasks, performance support tools may be the better option. For frequent tasks, or tasks that require immediate performance, advance learning may still be required. In the nursing example, we certainly hope that in a medical emergency the nurses do not need to pause and view a three-minute video on how to operate the necessary life-saving equipment. However, the on-demand approach may be very effective for physical therapy equipment or adjusting the hospital bed for patient comfort.
Collecting task data. Technologies associated with mobile learning may offer opportunities for improving the process of task analysis through more effective data collection. Rather than relying on retrospective reports of the most important tasks on a job, for example, employees could use their mobile phones to create diary entries about the tasks they are performing at a specified time or when prompted through a text message. Employees could easily take photos or record video of important tasks or situational features. However, such data collection should be scheduled carefully to limit the distractions introduced into the work environment.
Person Analysis
The person analysis phase addresses which employees need the training and whether they are prepared to participate in it (Goldstein & Ford, Reference Goldstein and Ford2002). This section focuses on individual skill levels and characteristics related to training preparedness in the mobile learning context.
Skill using devices. In the mobile learning environment, one important prerequisite is skill in using the device(s) on which the training will be delivered. Even if the company has a BYOD policy and employees are using their own devices, they may not know how to use the specific features required in the mobile learning environment, and some guidance might be required. In the nursing example, the nurses may use their own smartphones to record videos demonstrating use of equipment. Most people probably have sufficient smartphone skills to complete the recording. However, they may be less familiar with the steps needed to edit the videos and upload them to a central repository. Accessibility of mobile learning could be reduced if the intended users do not have sufficient skill with the devices to fully engage with the content.
Individual differences. A broader view of person analysis, and an often-studied topic in the training literature, examines other individual difference characteristics that are associated with successful training performance. We would expect the traditional factors of trainee motivation, self-efficacy, and locus of control (Sitzmann & Ely, Reference Sitzmann and Ely2011) to be relevant predictors of success with mobile learning. Further, given the distractibility inherent in mobile learning, we would expect learners with better capacity to deal with multitasking to be more suited to the mobile learning environment. Grawitch and Barber (Reference Grawitch and Barber2013) found that individuals who are low in self-control perform better in multitasking situations when they express a preference for polychronicity, or engaging in multiple tasks at once. Polychronicity was unrelated to performance when participants had high self-control. Other individual differences potentially associated with success in mobile learning include readiness for mobile learning (Liu et al., Reference Liu, Han and Li2010), the ability to self-manage learning, and comfort with mobile learning technologies.
Results of the person analysis could impact mobile learning initiatives in several ways. If employees are not skilled in using relevant features of the mobile devices, or if motivation is very low, the organization may decide against using mobile learning at that point in time. Another option is to address some of these characteristics as part of pretraining preparation, such as taking time during staff meetings to discuss the approach and boost employee self-efficacy for participating in mobile learning. A third approach is to address the individual readiness or skill concerns as part of the mobile learning. This is discussed more in the section on design and development in the following text.
Design and Development
Modularized Content
One design strategy used for mobile learning is to create shorter modules of content to minimize interference from distractions. “Bite-sized” is a phrase often used to describe the amount of information to be delivered in mobile learning resources (Traxler, Reference Traxler and Ally2009). If the learner is distracted, concentrating for a short period to comprehend the small piece of information may be more effective. Another term for this is microlearning, defined as “a short, focused learning nugget (often 3–5 mins long or shorter) that is designed to meet a specific learning outcome” (Pandey, Reference Pandey2016: para. 3). With sales representatives, microlearning modules could be developed to address features of individual products, changes to the sales process, or refreshers on individual organizational policies regarding returns, repeat orders, or sales bonuses.
Modularizing content also would help address variation in learner skills or readiness. Some of the intended learners may have expert-level video editing and processing skills while others are limited to taking short videos of family events that they never watch again. Offering a series of short modules at the beginning of the learning program would allow learners to enhance their skills as needed to fully benefit from the mobile learning. Other modules could help learners plan an appropriate series of learning events, or make suggestions on how to anticipate and address potential learning distractions.
Learner Control
It is not clear to what extent mobile learning should be instructor guided or user guided. Some authors seem to assume that mobile learning will provide substantial or complete learner control (Cook, Pachler, & Bachmair, Reference Cook, Pachler and Bachmair2011; Liu et al., Reference Liu, Han and Li2010), with learners controlling at least the time and place of learning. Mobile learning may facilitate socially constructed learning with learner-to-learner interaction (Kraiger, Reference Kraiger2008), but given the inconsistent research findings about the effectiveness of learner control (Brown et al., Reference Brown, Howardson and Fisher2016; Kraiger & Jerden, Reference Kraiger, Jerden, Fiore and Salas2007) it would seem wise to consider how to include appropriate guidance from the program or instructor. Consider a situation in which apprentices used their mobile phones to create an electronic learning diary (Pimmer & Gröhbiel, Reference Pimmer and Gröhbiel2008). Each day, learners prepared diary entries describing what they had learned and how they could use these skills in the future. The learners were allowed to choose how to enrich their diaries with media (i.e., photos, videos), but the diary entries were guided by daily questions from the program instructors rather than topics the learners chose entirely on their own. As in regular e-learning, the objectives of training and the resulting design may dictate different levels or amounts of control. Novice learners may require more guidance from the training program on fundamental learning activities, but be given surface-level control to enhance motivation and engagement (Brown et al., Reference Brown, Howardson and Fisher2016).
Situationally Connected Instruction
As mentioned earlier, mobile technologies offer the possibility of delivering relevant training material when the learner is in a designated location. With the salesperson training, for example, critical information about customer accounts could be delivered to the employee’s mobile phone as he or she approaches the customer site. Location-aware instruction has been used to provide information about sites in a city that learners pass by while on a walking tour (Cook et al., Reference Cook, Pachler and Bachmair2011), or as users approach a painting or other display at a museum (Specht, Reference Specht, Ally and Tsinakos2014). This type of location-aware instruction provides greater context to the learners, maximizing the relevance of the information. Conducting learning outside of a designated classroom space also increases potential for distractions, of course, but initial evidence presented by Cook and colleagues suggested that learners found the location-aware experience to be highly engaging and interesting. The AR applications discussed earlier are interesting examples of how situationally connected learning can be a defining characteristic of mobile learning.
Situationally connected learning could also relate to the social configuration of the learning community. Sometimes learning requires a focused relationship between the teacher/trainer and the learner, especially when the teacher/trainer has specialized, expert knowledge that she or he can impart to the learner. This can happen on an ad hoc basis when both parties are online and willing to interact. At other times, peer-based, learner-to-learner interaction is best for real-time feedback, such as when the teacher/trainer is not available or when peers are located nearby. Alternatively, a training platform can offer suggested interactions (teacher/trainer or peer) based on the specific type of learning situation and the location or availability of others in the learning community. As location-aware devices and artificial intelligence become more advanced, the design of these types of training scenarios can become more mainstream.
Design for Small Screens
Many mobile learning programs are intended to be used on a range of devices that typically includes mobile phones. While screen sizes have been steadily increasing, there is still typically much less space available on a mobile phone screen than on a laptop screen. Designers need to adjust the amount of text presented, the level of detail that can be shown in videos, and the associated learning activities. Further, as mentioned earlier, AR creates new design opportunities and challenges. Finally, virtual reality (VR) headsets present an even more complex design context.
The web design and app design communities offer tools and techniques for accommodating a variety of screen sizes, enhancing usability, and even designing for “fat fingers” as an input device instead of a desktop mouse (e.g., StartApp, 2014). Of greater concern to the training designer is what types of interactive exercises may successfully be used on small screens. These might include handheld language translation using a smartphone camera or identification of parts with a device that only needs one hand to operate, such as in tight spaces. Small screens present design limitations, such as restricting the number of choices that can be presented on screen or in a drop-down box. However, smartphones also offer opportunities in use of the tilt sensors as an input device for exercises where location or motion is important (e.g., driving simulations) or on-screen tools such as easy-to-manipulate sliders or video control bars within the user interface.
Pimmer and Gröhbiel (Reference Pimmer and Gröhbiel2008) described a mobile learning example in which engineers view training content on VR display headsets while they are repairing machines. VR headsets can display a significant amount of detail, including text, images, and embedded video. They also restrict the learner’s view of the physical world around them. This has the often significant advantage of reducing distractions, but it could also be a disadvantage, or even a risk, for a learner who needs to maintain situational awareness because of physical danger or a need for vigilance in the training environment (e.g., in a factory or in a retail outlet). For dedicated, protected training events, however, headset goggles seem to have significant potential to add content to an immersive and situationally sensitive learning environment.
Coproduction of Learning Content
Many mobile learning platforms offer the potential for coproduction of learning content, but it is not clear to what extent this is a useful feature of mobile learning. Cook et al. (Reference Cook, Pachler and Bachmair2011) suggested that coproduction may be important for engaging students, although their evidence on this point is from primary and secondary education. Pimmer and colleagues (Pimmer & Gröhbiel, Reference Pimmer and Gröhbiel2008; Pimmer & Pachler, Reference Pimmer, Pachler, Ally and Tsinakos2014) have found that in peer-to-peer training, such as with the videos produced by nurses, the employees did not always have the necessary technical or training design skills to produce their own content. A training program that allows or requires coproduction may need to include a module specifically on how to do so. On the other hand, if learners are more engaged in the learning activity because they are producing some of their own content, then distraction is less likely to be a concern (Fang, Reference Fang2009).
Refresher or Relapse Prevention
One opportunity for training design in the mobile environment is to use the mobile technology to build in refresher or relapse prevention modules that can be pushed out to learners after they have completed training. In their meta-analysis of training transfer, Blume et al. (Reference Blume, Ford, Baldwin and Huang2010) found very small and inconsistent effect sizes for the use of posttraining relapse prevention interventions on transfer performance (ρ = −.06, 95% CI [−.19, .08]). They noted there were few studies that examined relapse prevention techniques (n = 6) and that including them at the end of a training session tends not to be effective because the trainees are tired. Mobile learning may provide an alternative route for communicating key learning points to trainees in the weeks and months following the learning event. The social aspect of mobile learning could enhance social support, further promoting transfer of knowledge and skills. Alternatively, the use of mobile devices to provide electronic performance support may minimize concerns about transfer in many contexts. Again, this depends on the nature of the task as discussed in the preceding text.
Evaluation
Mobile learning may help overcome some of the barriers to posttraining evaluation, such as lack of access to trainees after they complete the training, lack of staff to conduct the evaluation, performance apprehension for testing conducted in the work environment, and extended time lags (Ghodsian, Bjork, & Benjamin, Reference Ghodsian, Bjork, Benjamin, Quiñones and Ehrenstein1997). It is easier for evaluation surveys to reach trainees through mobile devices. Testing within the work environment may be as simple as responding to a series of questions delivered on a mobile phone to determine if the trainee has retained knowledge three and six months after the training program. However, organizations should consider the further risk of interrupting and distracting employees through requests for evaluation data.
Mobile technologies may not be appropriate for carrying out evaluations of all learning outcomes, but certainly could be used for reactions and knowledge outcomes through surveys delivered through e-mail, web services such as Survey Monkey, or apps. Testing a former trainee’s knowledge without the use of support materials could be problematic, as it would be difficult to ensure that the trainee is only using his or her memory (and not the available training or performance support material) to answer test questions. This may redefine the training outcome measured, changing it from “demonstrates six-month recall of procedures for using physical rehabilitation equipment” to the broader “describes correct procedure for using physical rehabilitation equipment in ten minutes or less.” The outcome of interest may shift to having the ability to locate and use the information in a short period. Further, use of mobile devices to collect data on important transfer outcomes could also make it easier to collect data at multiple points in time and from multiple sources (e.g., from trainees and their supervisors), addressing one of the concerns Blume et al. (Reference Blume, Ford, Baldwin and Huang2010) raised about the quality of the training transfer literature.
As introduced in the preceding text, one innovative approach to collecting evaluation data in mobile learning is requiring apprentices to use mobile phones to create an electronic learning diary, reflecting and even documenting through pictures what they have learned each day (Pimmer & Gröhbiel, Reference Pimmer and Gröhbiel2008; Pimmer & Pachler, Reference Pimmer, Pachler, Ally and Tsinakos2014). This type of data collection could allow researchers and training managers to assess learning trajectories over time. Requiring apprentices to provide input at a particular time each day, or whenever they had free time, could minimize the risks of distraction. Extending this idea one step further, trainees could record video of themselves performing a particular skill such as repairing a machine or using new foreign-language vocabulary. These visual artifacts could be sent to and scored by experts, who could provide feedback to learners and to program designers.
Conclusions and Future Research Directions
Mobile learning clearly offers opportunities for making training more accessible and engaging to learners. These opportunities need to be considered in the context of managing the simultaneous potential for distractions that could limit the effectiveness of the learning and detract from other aspects of work performance or home life. Unfortunately, the existing body of research on mobile learning to help training designers navigate these opposing forces is still quite limited. Further, the research base in mobile learning at the present time is largely anchored in educational settings examining how students learn rather than in work organizations looking at how employees learn. The few studies looking at mobile learning in corporate settings report on some interesting implementations, but tend to have small samples and offer largely descriptive reviews of the training rather than rigorous evaluation. Thus, we see many possible directions for future research to expand on the ideas described in the preceding text and test the application of existing theories and models in this new context to leverage the accessibility while managing distractions. In the following text, we review potential research directions for the topic of mobile learning.
First, regarding strategic choices, research is needed on the extent to which mobile learning should be considered a formal training activity that is developed and implemented by the organization versus an informal way of sharing information and learning through social media. Noe et al. (Reference Noe, Clarke and Klein2014) noted both the importance of informal learning and the rise of social media as a training method. Both trends are related to Kraiger’s (Reference Kraiger2008) discussion of third-generation learning, where learning becomes much more focused on interactions between the learners. Research is needed that investigates the effectiveness of mobile learning techniques for informal, social types of learning compared to more formal, instructor-led training. Informal training may be more suitable for the frequent interruptions likely to occur in mobile learning environments, while more formal training may require face-to-face or computer-based distance learning methodologies.
Second, mobile learning offers the opportunity to reach learners where they are located, expand learner-to-learner interaction, and add location-based features to training. These factors are important in understanding the accessibility and distractibility aspects of mobile learning. This offers potential for increased convenience and effectiveness of organizational training (accessibility), but also carries risks of decreased learning effectiveness and job performance as a result of off-task interruptions during the training process (distractibility). It would also be interesting to research potential positive effects of distraction and interruption in the mobile learning environment, such as learner exploration of additional learning resources introduced through a social exchange. Future research in mobile learning should continue to examine how to leverage the convenience and accessibility while creating designs that help control or minimize problems due to distractions.
A third design topic that needs more research is coproduction of learning content. The process of coproduction is not new (e.g., Kotze & Du Plessis, Reference Kotze and Du Plessis2003). Consider how many university professors require students to deliver presentations in front of the class, presumably both to test the student’s knowledge and to share knowledge with other students. Mobile learning offers more opportunities for sophisticated coproduction, and coproduction using mobile learning tools may help further enhance trainee engagement (Cook et al., Reference Cook, Pachler and Bachmair2011). However, there are many questions about the feasibility and effectiveness of coproduction. First, what are the attentional processes involved in the coproduction of learning content? To what extent do learners focus on the material to be learned, or are they distracted by the content production process? Second, what is the impact of using content produced by other learners rather than by a training designer or other expert? It would be interesting to test different types of peer contributions and how they influence learning behaviors and outcomes. Viewing peer-submitted materials may encourage continued use and deeper processing if learners perceive an emotional or social connection with the other learners. These peer-produced materials may be perceived as more relevant to the learners’ own situation, perhaps leading to better retention and transfer of training (Blume et al., Reference Blume, Ford, Baldwin and Huang2010). Alternatively, peer-produced materials that are unprofessional or simply incorrect may overload or distract learners from key learning points. Some kind of “scoring” or “rating” of peer content, so that expert-judged material is shown first, may be useful in minimizing negative outcomes.
More research is also needed on the interaction between different features of training environments to determine how, where, and when learners can best leverage mobile learning. For example, are certain types of training best completed in a low-distraction environment, or directly in the workplace setting, as opposed to during a commute or while multitasking on a conference call? Complexity of learning objectives might also come into play in such research. For example, a study that compares learning effectiveness in high- and low-distraction environments for declarative and procedural knowledge training outcomes could help provide guidance in training design choices. Here we caution researchers not simply to test whether particular mobile learning interventions work, but to focus on generalizable aspects of the training intervention. Development of measures to specifically describe and compare levels of accessibility and distractibility would assist in this direction. Ultimately, from the practitioner perspective, instruction and assistance for learners on how best to use mobile learning, similar to what DeRouin et al. (Reference DeRouin, Fritzsche and Salas2004) offered in their guidelines for effective e-learning, may be warranted.
Even though there is some evidence to suggest that younger employees (i.e., millennials or digital natives) prefer mobile learning over face-to-face learning or even “old fashioned” e-learning, there is little concrete evidence that it is more effective. Research in learner control has examined the extent to which preferences for different aspects of learner control are related to training outcomes, finding some support that trainees with certain personality characteristics are better suited for learner control environments (see Howardson et al., Chapter 5 this volume; Orvis et al., Reference Orvis, Brusso, Wasserman and Fisher2011). Further investigation of this matching hypothesis in mobile learning is warranted. For example, we might expect that extraverted learners are better suited for mobile learning environments that include a substantial social component. This would be an interesting context in which to explore the individual difference aspect of distractibility as well. Are there consistent differences between learners in their ability to manage distractions inherent in the mobile learning environment? Conscientiousness might be an important predictor of the ability to manage distractions in the learning environment.
The more traditional approach of training students and employees “just in case” the knowledge or skill is needed in the future is very different from what mobile learning advocates argue is the “just in time” nature of mobile learning (Pimmer & Pachler, Reference Pimmer, Pachler, Ally and Tsinakos2014). This shift should enhance transfer of training, as the newly learned knowledge or skill can be put to use immediately. However, if trainees know they can always access the information again, will that negatively affect motivation to transfer? In contrast, it is possible that mobile learning could enhance transfer of training through a social effect. We know that peer and supervisor support is a consistent, positive predictor of transfer (Blume et al., Reference Blume, Ford, Baldwin and Huang2010). In a mobile learning environment, trainees may use the technology to share information with peers and supervisors, or others in their network, about training they have experienced and intentions to use this material. It would be interesting to examine if stated social intentions to use knowledge or skills acquired through mobile learning would affect transfer. Imagine a learner posting something to social media describing a training program, or texting a co-worker about it. This may create social expectations that facilitate transfer (Blume et al., Reference Blume, Ford, Baldwin and Huang2010), enhancing motivation to transfer through social support.
Researchers should investigate trajectories of motivation throughout the mobile learning experience, examining pretraining motivation, shifts in motivation during the training experience, and how learners are motivated to retain knowledge and skills for the future. Research in an online learning environment has shown that motivation to learn can change over time in a multipart training program (Sitzmann et al., Reference Sitzmann, Brown, Ely, Kraiger and Wisher2009). If a mobile learning program relies on microlearning to enhance engagement and minimize problems with distractibility, similar decreases in motivation to learn would be problematic. Future research should examine differences in training motivation at multiple points within a mobile learning program, comparing effects for microlearning with longer learning programs.
Conclusion
The shift to methods and models of mobile learning will have a significant effect on both training designers and the consumers of training. Mobile learning offers the opportunity for situational connectedness and greater accessibility for many. Mobile learning may also suffer from learner distractibility that ultimately reduces learning effectiveness. In this chapter, we have outlined several ways to examine and analyze the design and delivery of training in a mobile context. However, there is almost no rigorous research on the consumption of training in a mobile context. That is fertile ground for further research. By answering these types of questions empirically in the future, we can improve both the accessibility and the effectiveness of employee training.
Learning and development are important for employees and organizations (e.g., Birdi, Allan, & Warr, Reference Birdi, Allan and Warr1997; Maurer, Weiss, & Barbeite, Reference Maurer, Weiss and Barbeite2003). Learning refers to changes in knowledge, skills, attitudes, or behaviour (e.g., Noe et al., Reference Noe, Wilk, Mullen, Wanek and Ford1997). Development refers to a form of growth, characterized by successive changes that move the individual towards a qualitatively distinct state (Moshman, Reference Moshman, Kuhn, Siegler and Damon1998; Parker, Reference Parker2014). Learning and development are presumed to lead to positive outcomes in both professional (e.g., job performance, promotability, leadership potential) and personal (e.g., well-being, identity formation, nonwork competencies) domains (Birdi et al., Reference Birdi, Allan and Warr1997; Maurer et al., Reference Maurer, Weiss and Barbeite2003; Zoogah, Reference Zoogah2010).
In this chapter, we propose that time and the associated thinking offer a unique pathway to learning and development, in comparison with traditional learning and development activities. Traditional activities have been categorized into employee assessment (e.g., 360-degree feedback), job experiences (e.g., temporary work assignments), relationships (e.g., mentoring programs), and formal courses/programs (e.g., professional development courses; Noe et al., Reference Noe, Wilk, Mullen, Wanek and Ford1997). Because time spent in these activities promotes greater learning and development, most work in this area has searched for levers that can be used to increase participation in such activities (e.g., Birdi et al., Reference Birdi, Allan and Warr1997; Hurtz & Williams, Reference Hurtz and Williams2009; Major, Turner, & Fletcher, Reference Major, Turner and Fletcher2006; Maurer et al., Reference Maurer, Weiss and Barbeite2003). However, these activities typically require scheduling time for the individual to engage in learning and development experiences. We argue that an emphasis on “action” might hold individuals back from maximizing their long-term potential. We call for more attention to “slack time” (similar to layperson conceptualizations of “free time” or “quiet time”) as a crucial enabler of learning and development in the workplace.
Individuals, organizations, and society at large have tended to place little value on slack time. To the extent that periods of nonactivity are built into a work day, they are often the first thing to go when things get busy or the pressure to produce intensifies (Daudelin, Reference Daudelin1996; Elsbach & Hargadon, Reference Elsbach and Hargadon2006). For example, Google’s “20% time,” which reflects the portion of time that is set aside to work on individuals’ own projects, is now referred to as “120% time” to indicate that staff tend to work on personal projects outside of work hours because of increased pressure for production (Mims, Reference Mims2013). Similarly, the popular press describes an epidemic of people believing they are “too busy” to leave portions of work time free of activity (Kreider, Reference Kreider2012; Seiter, Reference Seiter2014).
There are exceptions to this rule, often motivated by the desire for greater innovation. In 1948, 3M introduced an “Innovation Time Off (ITO)” initiative of “10% downtime” per week. More recently, Atlassian, an Australian IT company, has run “ShipIt” days – four per year, each lasting 24 hours – during which employees focus on a non-work-related project (Atlassian, 2016). Some companies offer sabbaticals (paid time off work) to foster recuperation and self-improvement (Kane, Reference Kane2015). These sabbaticals range from one week to one year, and employees have used the nonwork time for travel, volunteering, education, and connecting with family. These initiatives have not been informed by data, but they indicate that some practitioners believe time that is not filled with specific allocated work activities is important both professionally and personally. These practices are consistent with our contention that time, in and of itself, is important for fostering learning and development.
More specifically, we consider the role of slack time in triggering various forms of thinking. We focus on thinking because our investigation indicates that slack time is critical for thinking and that, in turn, thinking offers an alternative pathway to learning and development relative to traditional development activities. Moreover, the benefits of traditional activities notwithstanding, we suggest thinking, and ultimately learning and development, will not be maximized if individuals’ work time is filled with activities. By devaluing slack time, we are missing a unique pathway to employee growth.
The heuristic model guiding our work is shown in Figure 14.1. We begin by discussing concepts related to time, including slack time. Next, we review the literature on five forms of thinking for which slack time appears to be an important factor, and that likely influence professional and personal outcomes. Delving into the dynamic learning and development process that is likely triggered by slack time and thinking is beyond the scope of this chapter, as is a detailed treatment of potential antecedents or moderators. However, we depict these links in Figure 14.1 as a platform for further research. We also touch on these ideas in our discussion of implications for future research and practice.

Figure 14.1. Heuristic model of time and thinking.
Note. The constructs in the shaded boxes are the core focus of this chapter.
Time as a Key Construct
In this section, we discuss general ideas about time within contemporary society. We then discuss the more specific concept of slack time.
Notions of Time in Contemporary Society
Time, at least in Western cultures, is viewed as a limited, scarce resource – we only have 24 hours per day, and as time passes, or is “used up,” we cannot get it back (Kasser & Sheldon, Reference Kasser and Sheldon2009). Consequently, time is viewed as a resource that needs to be controlled to enhance productivity (Fried & Slowik, Reference Fried and Slowik2004). A control strategy that many have adopted is to “do” as much as possible with one’s time (Rosa, Reference Rosa2003). This strategy is evident from time-use studies that show that we are maximizing work time by eating faster, sleeping less, reducing pauses and intervals between tasks, and engaging in more multitasking than previously – basically, we pack every moment of our day with activity (Kahneman et al., Reference Kahneman, Krueger, Schkade, Schwarz and Stone2004; Turnbull, Reference Turnbull2004). This phenomenon is referred to as the “busy trap,” wherein responses such as “I’m so busy” to the question “How are you?” can represent boasts disguised as a complaint (Kreider, Reference Kreider2012). It is not surprising, then, that the modern workplace is described as experiencing a “time famine” – there is a perception of too much to do and not enough time to do it in (Mogilner, Chance, & Norton, Reference Mogilner, Chance and Norton2012; Szollos, Reference Szollos2009). More broadly, society at large is experiencing an accelerated pace of life, which refers to increased speed and compression of activities in everyday life (Rosa, Reference Rosa2003; Szollos, Reference Szollos2009).
Views regarding a meaningful life might explain the tendency – at least in some countries or cultures – to pack every moment with activity. To understand this phenomenon, Rosa (Reference Rosa2003) draws on Ancillon’s observations from the 19th century – that people’s notion of “the good life” is a life rich in experiences and capacities. Rosa notes that one way to achieve a fulfilled life is to realize as many opportunities as possible from what the world has to offer. This is a tall order, given that the world has more to offer than can be experienced in a single lifetime. However, accelerating our pace of life – such as by taking half the time to realize a goal – can double what we can do within our lifetime. We further argue that, on the flipside, people are hesitant to let a moment pass without activity because that would risk the loss of potentially valuable opportunities. As such, taking a break, taking a day off, or slowing down can be viewed as “wasting time” or “laziness.” Such sentiments are likely fuelled by the increasing time pressure within workplaces arising from global markets and enhanced competition, technological change, new employment arrangements, and more. In the learning and development space, we see evidence of this activity-time compression. Both our work and nonwork time are filled with structured activities designed to help us learn, achieve more, and fulfil our potential. For example, it is common for development programs to boast providing staff access to online learning resources “24/7” (e.g., Allianz Insurance, 2016).
There are exceptions to this accelerated pace of life. Many of these are described under the banner of the “slow movement” (Honoré, Reference Honoré2004; Szollos, Reference Szollos2009), which reflects a deliberate deceleration of life – for example, taking time out in monasteries or yoga courses – for the purpose of improving well-being and/or coping more successfully with, and achieving within, the fast pace of life (Rosa, Reference Rosa2003). A related movement is “take back your time,” which advocates placing greater value on leisure time and associated paid time off work, in an effort to challenge the epidemic of overscheduling and time famine. Consistent with these ideologies, in this chapter we propose an alternative – some might argue riskier – strategy for attaining professional and personal benefits: leaving some portions of time open, without demanding activities (i.e., actively creating slack time), to facilitate various forms of thinking presumed to facilitate learning and development.
Slack Time and Related Concepts
Slack time refers to the perceived or actual availability of time that places little or no demand on cognitive resources. We see slack time as a form of a “slack resource” because it is a portion of time characterized by a stock of excess cognitive capacity relative to the situational demands (Voss, Sirdeshmukh, & Voss, Reference Voss, Sirdeshmukh and Voss2008), even if the “time” has been deliberately crafted out of a demanding day. Voss et al. (Reference Voss, Sirdeshmukh and Voss2008) proposed that slack resources are characterized according to the rarity and absorption of the resource. In terms of resource rarity, a slack resource can range from being rare (scarce and unique) to generic (commonly available); in terms of resource absorption, a continuum is proposed from absorbed (currently committed but could be redeployed if necessary) to unabsorbed (currently uncommitted and could be deployed). We conceptualize slack time as a rare resource, because of its scarce nature, and as unabsorbed. Our conceptualization of slack time as a rare resource means that we see slack time as being meaningful only in situations in which slack time is atypical, or not the norm, as opposed to situations in which slack time is very common (e.g., when unemployed).
Going beyond Voss et al.’s (Reference Voss, Sirdeshmukh and Voss2008) framework, we propose that slack time can be concentrated, such as when you block out a two-hour period in your diary, or dispersed, such as when you use the full duration of a scheduled two-hour period for a primary task, but the task does not require full resource capacity. Slack time is a malleable construct: it can change over time (e.g., a meeting cancellation can increase both actual and perceived slack time, whereas illness could reduce it) and it can be conceptualized over varying time scales (e.g., hours, days, months). Slack time can also vary across domains (e.g., across tasks or the work/home boundary) and individuals – both perceived (e.g., individuals high on time urgency may perceive less slack time than their counterparts) and actual (e.g., full-time workers may have less slack time than part-time workers). Finally, slack time is not a predetermined consequence of one’s workload (although, of course, slack time is likely to be affected by one’s demands). For example, slack time can reflect conscious efforts in which an individual creates portions of time that place little or no demand on his or her cognitive resources, even in the context of high workload.
We now highlight overlaps and distinctions between slack time and similar constructs in the literature. First, we discuss three constructs that use the term slack. The engineering literature defines slack time as the amount of time a task can be delayed without causing another task to be delayed or impacting the completion date of a given project (BusinessDictionary.com, 2016). This meaning is similar to our concept in that it refers to a period that is unabsorbed; however, the engineering term is more restrictive because the purpose of that unabsorbed time is to be on standby in case unexpected delays occur. The engineering conceptualization is also objective, whereas ours encompasses both objective and subjective components.
The human resources and macro-organizational behaviour literatures define slack resources as a stock of excess resources available to an organization (Voss et al., Reference Voss, Sirdeshmukh and Voss2008). The underlying concept is similar; however, slack resources are conceptualized at the organizational level – usually using objective indicators such as numerical ratios – and as such tend to focus on higher level resources such as human resource, financial, and operational slack (Dolmans et al., Reference Dolmans, van Burg, Reymen and Romme2014).
Finally, the cognitive psychology literature uses the term resource slack – with particular focus on the resources of time and money – to refer to the perceived surplus of a given resource available to complete a focal task (Zauberman & Lynch, Reference Zauberman and Lynch2005). This definition is a narrower, more task-specific conceptualization that relates to our notion of perceived, dispersed slack time at the task level.
Next, we discuss two constructs from the organizational psychology and social anthropology literatures that do not refer to the term slack: time affluence and time pressure.
Time affluence is not explicitly defined in the literature; however, its measures imply that it refers to an individual’s perception of how much available spare time he or she has (Kasser & Sheldon, Reference Kasser and Sheldon2009; Mogilner et al., Reference Mogilner, Chance and Norton2012). This concept has some overlap with our notion of slack time – indeed, one of the items used by Zauberman and Lynch (Reference Zauberman and Lynch2005) to measure resource (time) slack was used by Mogilner et al. (Reference Mogilner, Chance and Norton2012) to measure time affluence – “On the following scale, please circle a number that reflects how much available spare time you have (-5 very little available time to 5 lots of available time).” However, although time affluence might result in slack time (because time affluence implies that one has a set of task requirements with a generous time allocation), slack time does not imply time affluence. For example, busy and overloaded individuals (low in time affluence) can consciously create some slack time in their workday.
Time pressure has been described as a subjective experience felt when the amount of work exceeds the time available (Sonnentag & Fritz, Reference Sonnentag and Fritz2014). As such, time pressure relates to the rate at which resources need to be expended whereas slack time refers to the amount of time that places little or no demands on cognitive resources. There could be a negative association between slack time and time pressure, such that the more slack time, the less time pressure experienced; however, this correlation is unlikely to be perfect. There may be little or no slack time, but the resource requirements are such that the task can be completed without experiencing time pressure. Likewise, an individual might create slack time at work, even though a large part of his or her working hours are characterized by high time pressure.
Slack Time, Thinking, and Professional/Personal Outcomes
We propose that slack time promotes various forms of thinking that have implications for learning and development, and ultimately lead to professional and personal outcomes. We review the literature on five forms of thinking that have potential links with slack time and likely foster learning and development. We review each thinking concept, the theory and empirical evidence relating to professional and personal outcomes, and then the antecedents of the thinking concept with a particular focus on those antecedents related to time.
Creative Thinking
Creative thinking refers to generating novel and useful ideas (Amabile et al., Reference Amabile, Conti, Coon, Lazenby and Herron1996). Theory and research in this literature support the view that creative thought and output facilitate professional (e.g., performance) and personal (e.g., well-being) outcomes (for reviews see Anderson, Potocnik, & Zhou, Reference Anderson, Potocnik and Zhou2014; Zhou & Hoever, Reference Zhou and Hoever2014).
This literature has examined time-related antecedents of creativity. We review this work in relation to time in general, resource slack, and time pressure. Mednick (Reference Mednick1962) proposed that time is important for creativity because novel ideas are far removed from initial problems. Amabile (Reference Amabile1983) highlighted time as an environmental factor in her componential theory of organization creativity and innovation, arguing that time is required to think creatively about a problem so that different perspectives can be explored. Similarly, it is argued that creative insights are not quick “aha!” moments but rather result from protracted periods of sustained effort (Gruber, Reference Gruber1981). Gilson, Litchfield, and Gilson (Reference Gilson, Litchfield, Gilson, Shipp and Fried2014) argued that time is important for creativity because it is required for gathering information, generating ideas, integrating perspectives and developing something novel. These authors highlighted a cognitive pathway, in which time should allow exploration of knowledge such that divergent associations are developed as more obvious ideas are used up, and a motivational pathway, in which time allows movement from less novel, safer ideas to more nonroutine ideas. Many authors have concluded that individuals should “take their time” if they want a creative solution and that organizations should give employees time for creative work (Runco, Reference Runco2004; Shalley & Gilson, Reference Shalley and Gilson2004). Empirical evidence for the ideas about time in general is limited (Anderson et al., Reference Anderson, Potocnik and Zhou2014). Relevant research is primarily qualitative, showing that employees frequently mention time as critical for creativity (for a review see Amabile et al., Reference Amabile, Conti, Coon, Lazenby and Herron1996).
The literature on resource slack generally operationalizes the construct as financial slack (Greve, Reference Greve2003). Resource slack is thought to enable individuals to explore and develop creative ideas (Bledow et al., Reference Bledow, Frese, Anderson, Erez and Farr2009; Voss et al., Reference Voss, Sirdeshmukh and Voss2008). Empirical results have been mixed, showing positive effects of financial slack on investment in research and development (Greve, Reference Greve2003), negative effects on research and development investment in declining organizations (Latham & Braun, Reference Latham and Braun2009), and no effect of resource availability on innovation implementation (Choi & Chang, Reference Choi and Chang2009). These inconsistent findings may reflect that, although surplus time and other resources allow exploration, tangible results are not guaranteed (Bledow et al., Reference Bledow, Frese, Anderson, Erez and Farr2009). Similarly, Shalley and Gilson (Reference Shalley and Gilson2004) advised a balance between allocating enough time to be creative but not too much time, as extreme levels may trigger boredom and reduce motivation.
The bulk of time-related work in the creativity literature relates to time pressure. Researchers acknowledge conflicting arguments (Gilson et al., Reference Gilson, Litchfield, Gilson, Shipp and Fried2014; Ohly & Fritz, Reference Ohly and Fritz2010; Zhou & Hoever, Reference Zhou and Hoever2014). The arguments for a negative impact align with those for time per se; that is, time pressure may be detrimental for creativity because the fast pace is not conducive to delving deeply into problems or fully exploring solutions. By contrast, time pressure may focus attention through necessity. Researchers have attempted to reconcile these views using a curvilinear hypothesis (Baer & Oldham, Reference Baer and Oldham2006; Ohly & Fritz, Reference Ohly and Fritz2010) derived from activation theory (Gardner, Reference Gardner1986; Scott, Reference Scott1966). This theory assumes that activation increases with increased time pressure, and that intermediate levels of activation are associated with optimal engagement because they match the individual’s characteristic levels of activation. Low or high levels of time pressure are assumed to correspond with deviations from employees’ characteristic levels of activation, resulting in lower engagement. Meta-analytic work indicates wide variability in the relationship between time pressure and creativity (Harrison et al., Reference Harrison, Neff, Schwall and Zhao2006). Time pressure has been negatively (Amabile et al., Reference Amabile, Mueller, Simpson, Hadley, Kramer and Fleming2003), positively (Ohly & Fritz, Reference Ohly and Fritz2010), and nonsignificantly (Amabile & Gryskiewicz, Reference Amabile and Gryskiewicz1989) related to creativity. There is some evidence for a curvilinear effect (Ohly, Sonnentag, & Pluntke, Reference Ohly, Sonnentag and Pluntke2006), though further boundary conditions appear to be important, such as openness to experience and organizational support for creativity (Baer & Oldham, Reference Baer and Oldham2006).
Mind Wandering
Mind wandering is defined as “a situation in which executive control shifts away from a primary task to the processing of personal goals” (Smallwood & Schooler, Reference Smallwood and Schooler2006: 946). It has been examined predominantly within performance contexts, with meta-analytic evidence indicating a detrimental link between mind wandering and performance (Randall, Oswald, & Beier, Reference Randall, Oswald and Beier2014). Tasks from primary studies include reading comprehension (Smallwood, McSpadden, & Schooler, Reference Smallwood, McSpadden and Schooler2008), listening to lectures (Risko et al., Reference Risko, Anderson, Sarwal, Engelhardt and Kingstone2012), and list learning (Smallwood et al., Reference Smallwood, Baracaia, Lowe and Obonsawin2003).
Researchers also believe that mind wandering is beneficial for some outcomes, through servicing personal goals that extend beyond current task-related issues (Baird, Smallwood, & Schooler, Reference Baird, Smallwood and Schooler2011; Smallwood, Ruby, and Singer, Reference Smallwood, Ruby and Singer2013). There is a growing body of empirical work supporting these ideas, by demonstrating beneficial links between mind wandering and a range of non-task-related phenomena including creativity (Baird et al., Reference Baird, Smallwood, Mrazek, Kam, Franklin and Schooler2012), autobiographical planning (Baird et al., Reference Baird, Smallwood and Schooler2011), and consolidation of self-memories (Smallwood et al., Reference Smallwood, Schooler, Turk, Cunningham, Burns and Macrae2011).
There is relatively little work on the antecedents of mind wandering (Christoff, Reference Christoff2012; Smallwood, Reference Smallwood2013). However, there is consensus that the initiation of mind wandering is a goal-driven process. The “current concerns hypothesis” was proposed by Klinger, Gregoire, and Barta (Reference Klinger, Gregoire and Barta1973) and adopted by others in the field (e.g., McVay & Kane, Reference McVay and Kane2010; Smallwood & Schooler, Reference Smallwood and Schooler2006). This hypothesis states that when mind wandering occurs, it is because the individual’s most salient experiences or concerns are external to the primary task being performed (Randall et al., Reference Randall, Oswald and Beier2014). Studies have examined indicators of current concerns – such as thoughts regarding personal and primed goals (Stawarczyk et al., Reference Stawarczyk, Majerus, Maj, Van der Linden and D’Argembeau2011), self-reflection, and primed self-memories (Smallwood et al., Reference Smallwood, Schooler, Turk, Cunningham, Burns and Macrae2011) – and shown that they do relate to mind wandering.
Studies have also examined the impact of concepts related to cognitive resources. Regarding situational effects, research suggests that mind wandering is triggered in contexts that have low cognitive resource demands. Mind wandering has been shown to be associated with reduced attentional demands (Smallwood et al., Reference Smallwood, Fitzgerald, Miles and Phillips2009) and task practice (e.g., Smallwood et al., Reference Smallwood, Baracaia, Lowe and Obonsawin2003). Regarding individual differences in cognitive resource availability, meta-analytic evidence shows that mind wandering is more prevalent in individuals with low working memory or ability (Randall et al., Reference Randall, Oswald and Beier2014). Within-person research may reconcile these seemingly inconsistent results by showing that individuals with high cognitive capacity engage in more mind wandering when they experience low task demands compared to when they experience high task demands.
Related research derives from that on brain structures. The “default network,” which is a large-scale brain system involving a set of medial and lateral brain regions (Andrews-Hanna, Reference Andrews-Hanna2012) is associated with passive “resting” states or “task-induced deactivation” (Mazoyer et al., Reference Mazoyer, Zago, Mellet, Bricogne, Etard, Houde, Crivello, Joliot, Petit and Mazoyer2001), and has been shown to be active when participants report mind wandering (e.g., Mason et al., Reference Mason, Norton, Van Horn, Wegner, Grafton and Macrae2007). This research suggests that portions of time characterized by rest or deactivation are likely to promote mind wandering. To the extent that high slack time is characterized by lower demands on cognitive resources, these studies suggest that our notion of dispersed slack time (i.e., slack time experienced while engaged in a primary task) may be positively associated with mind wandering.
Arguments diverge regarding the mechanisms underlying the maintenance of mind wandering. The control × concerns hypothesis (Smallwood, Reference Smallwood2013) assumes that the maintenance of mind wandering is a controlled, resource-demanding process. Mind wandering is assumed to rely on the same limited capacity of cognitive resources as on-task thoughts. Thus, current concerns provide the impetus to engage in mind wandering, and then these experiences are supported by controlled processing that redirects resources away from the external task towards internal thought. The control failure × concerns hypothesis (McVay & Kane, Reference McVay and Kane2010) assumes that the maintenance of mind wandering does not require cognitive resources, but rather reflects a disruption of executive control. These authors suggest that mind wandering reflects a default mode of processing whereby thoughts about higher order goals flow in an uncontrolled, resource-free manner.
Mindfulness
Despite varied definitions (Brown, Ryan, & Creswell, Reference Brown, Ryan and Creswell2007), researchers converge on the notion that mindfulness is a form of cognition that involves bringing attention to experiences (both internal and external) occurring in the present moment, and being aware of them in a nonjudgemental or accepting way (e.g., Baer & Oldham, Reference Baer and Oldham2006; Brown & Ryan, Reference Brown and Ryan2003; Hulsheger et al., Reference Hulsheger, Alberts, Feinholdt and Lang2013). Attention is defined as a process of focusing conscious awareness, providing heightened sensitivity to a limited range of experience (Brown & Ryan, Reference Brown and Ryan2003). Awareness is the “background radar” of consciousness, registering internal and external stimuli – thus one may be aware of stimuli without them being at the centre of attention (Brown & Ryan, Reference Brown and Ryan2003; Brown et al., Reference Brown, Ryan and Creswell2007). Mindfulness is considered a psychological state that varies within individuals; however, it is also conceptualized as a more stable trait (Dane, Reference Dane2011). Empirically, most research on naturally occurring (as opposed to induced) mindfulness has assessed it as a trait (for a review see Glomb et al., Reference Glomb, Duffy, Bono, Yang, Martocchio, Liao and Joshi2011).
Mindfulness research originated in clinical disciplines, so most work has focussed on personal outcomes. Most of this research has examined the impact of intervention programs that have a mindfulness training component (Brown et al., Reference Brown, Ryan and Creswell2007; Brown & Ryan, Reference Brown and Ryan2003). This literature has demonstrated beneficial effects of mindfulness on a range of health and well-being indicators (for meta-analyses and reviews see Brown et al., Reference Brown, Ryan and Creswell2007; Chiesa & Serretti, Reference Chiesa and Serretti2009; Dane, Reference Dane2011). More recently, researchers have demonstrated beneficial effects of mindfulness on professional outcomes, including job satisfaction (Hulsheger et al., Reference Hulsheger, Alberts, Feinholdt and Lang2013), work engagement (Leroy et al., Reference Leroy, Anseel, Dimitrova and Sels2013), and cognitive functioning (Mrazek et al., Reference Mrazek, Franklin, Phillips, Baird and Schooler2013).
Theoretical explanations for effects of mindfulness centre on the benefits of present moment attention/awareness and nonjudgemental acceptance. The arguments underlying these benefits relate to self-control in the regulation of cognitive resources. Researchers argue that focussing attention and awareness on the present moment disengages, or disrupts, automatic thoughts and behaviour patterns that are triggered by a broader lens of consciousness (Brown & Ryan, Reference Brown and Ryan2003; Brown et al., Reference Brown, Ryan and Creswell2007). These automatic reactions (e.g., preoccupation with other concerns, rumination, habitual processing) are thought to be detrimental in and of themselves and also through the self-control requirements that draw on limited regulatory resources (e.g., Baumeister et al., Reference Baumeister, Bratslavsky, Muraven and Tice1998). Thus, the disruption of these automatic reactions is thought to remove the need for self-control and conserve resources (Brown & Ryan, Reference Brown and Ryan2003; Brown et al., Reference Brown, Ryan and Creswell2007; Hulsheger et al., Reference Hulsheger, Alberts, Feinholdt and Lang2013). Similarly, nonjudgemental acceptance is characterized by the absence of self-control of cognitive resources (Holzel et al., Reference Holzel, Lazar, Gard, Schuman-Olivier, Vago and Ott2011) – indeed, by definition, it means doing so without controlling or avoiding experiences (Alberts, Schneider, & Martijn, Reference Alberts, Schneider and Martijn2012; Wolever et al., Reference Wolever, Bobinet, McCabe, Mackenzie, Fekete, Kusnick and Baime2012).
There is little work on the antecedents of mindfulness (Brown & Ryan, Reference Brown and Ryan2003; Brown et al., Reference Brown, Ryan and Creswell2007). However, most of these ideas centre on the notion that mindfulness is a human capacity that differs across individuals (Brown & Ryan, Reference Brown and Ryan2003; Brown et al., Reference Brown, Ryan and Creswell2007; Dane, Reference Dane2011). Developmental and social factors have been highlighted as potential barriers to mindfulness – such as chronic experiences of control or feelings of threat, and conditions that foster high ego involvement or contingent self-worth (Brown et al., Reference Brown, Ryan and Creswell2007). More directly related to our focus on time, a traditional focus in this literature has been to cultivate mindfulness through meditation practices (Kabat-Zinn, Reference Kabat-Zinn1990). Meta-analytic evidence supports the notion that meditation practice enhances mindfulness (Chiesa & Serretti, Reference Chiesa and Serretti2009; Glomb et al., Reference Glomb, Duffy, Bono, Yang, Martocchio, Liao and Joshi2011). However, the multicomponent nature of mindfulness programs, in conjunction with a generally low level of research design rigor, makes it difficult to isolate specific antecedents (Brown et al., Reference Brown, Ryan and Creswell2007; Chiesa & Serretti, Reference Chiesa and Serretti2009).
The notion of time as a potential antecedent of mindfulness has not been explicit in this literature; however, its role is inherent in the design of mindfulness interventions. The Mindfulness-Based Stress Reduction program (Chiesa & Serretti, Reference Chiesa and Serretti2009) is a standardized meditation program, created in 1979, that integrates Buddhist mindfulness meditation with contemporary psychological clinical practice. The program covers: (1) the body scan (a gradual sweeping of attention through the body, focusing noncritically on sensations/feelings, and incorporating breath awareness and relaxation); (2) sitting meditation (mindful attention on the breath and a state of nonjudgemental awareness of streams of thoughts); and (3) Hatha yoga (gentle, basic yoga classes that include postural and breathing exercises). This program and its variants tend to run for eight weeks, with homework exercises set each day (roughly 30 minutes per day in total; Williams & Penman, Reference Williams and Penman2011). Therefore, it appears that time is needed to practice these techniques, as well as to engage in mindful thought once the capacity for it has been developed. However, the potential separation of time as an antecedent and mindfulness as a state fostered by time has not been discussed theoretically or disentangled empirically.
There are also hints at the role of time in discussions of the conceptualization of mindfulness. This work uses phrases such as “mental gap” (or the decoupling of consciousness from mental content; Brown et al., Reference Brown, Ryan and Creswell2007) and “fertile void” (from gestalt approaches to therapy, e.g., Perls, Hefferline, and Goodman, Reference Perls, Hefferline and Goodman1951) to describe the mechanisms associated with letting go of self-control, disrupting automatic reactions, and fostering present moment attention and nonjudgemental acceptance. The literature is not clear regarding the causal ordering among these concepts, but there is suggestion that concepts such as “relaxation experiences” (Marzuq & Drach-Zahavy, Reference Marzuq and Drach-Zahavy2012) and “relaxed attention” (Brown et al., Reference Brown, Ryan and Creswell2007; Perls et al., Reference Perls, Hefferline and Goodman1951) can deactivate self-control and promote the “mental gap” implicated in mindful thought. It seems plausible that slack time would be a necessary condition for fostering these types of relaxation concepts and, more broadly, the chain of cognitive processes implicated in mindfulness.
Self-Reflection
Self-reflection and self-awareness (and, less often, self-focus) are used interchangeably to represent a cognitive activity in which attention is directed inward towards the self, involving inspection of one’s thoughts, feelings, and behaviors (e.g., Daudelin, Reference Daudelin1996; Duval & Wicklund, Reference Duval and Wicklund1972). The social psychology literature distinguishes between trait and state self-focus. Trait self-focus is usually measured with the Self-Consciousness Scale (Fenigstein et al., Reference Fenigstein, Scheier and Buss1975), whereas the corresponding state is often measured by the Situational Self-Awareness Scale (Govern & Marsch, Reference Govern and Marsch2001). A distinction is made between public and private forms of trait self-consciousness and state self-awareness. The public form is characterized by attentiveness to features of one’s self that are presented to others (e.g., physical features), whereas the private form involves attentiveness to the internal aspects of one’s self (e.g., memories, feelings; Fenigstein et al., Reference Fenigstein, Scheier and Buss1975; Govern & Marsch, Reference Govern and Marsch2001). A final distinction is Trapnell and Campbell’s (Reference Trapnell and Campbell1999) separation of private self-consciousness into rumination and reflection sub-components. Rumination is a neurotic form of self-attentiveness characterized by dwelling on negative aspects of oneself, motivated by perceived threats, losses, or injustices to the self, and purportedly associated with distress and dysfunctional thinking styles. Reflection is an intellectual form of self-attentiveness characterized by a more open and accepting approach to introspection, motivated by philosophical curiosity, and purportedly associated with more adaptive outcomes.
The dominant theory concerning the outcomes of self-reflection/self-awareness is Objective Self Awareness Theory (Duval & Wicklund, Reference Duval and Wicklund1972; see also Carver and Scheier, Reference Carver and Scheier1981). This theory concerns the consequences of focusing attention on the self (Silvia & Phillips, Reference Silvia and Phillips2013) and draws on the concepts of the self, standards, and attentional focus (Silvia & Duval, Reference Silvia and Duval2001). The theory proposes that when attention is directed inward towards the self, this focus automatically triggers an evaluation process by which the individual compares the self to relevant standards. If a discrepancy is detected, negative affect arises, which prompts the individual to change the self to reduce that discrepancy or avoid awareness of that discrepancy. Carver and Scheier (Reference Carver and Scheier1981) argued that Objective Self Awareness Theory (Duval & Wicklund, Reference Duval and Wicklund1972) describes a self-regulatory negative feedback loop proposed within control theory. However, these feedback loops are construed differently in each theory (Pyszczynski & Greenberg, Reference Pyszczynski and Greenberg1987). Duval and Wicklund (Reference Duval and Wicklund1972) argued that self-awareness is aversive whenever a negative discrepancy arises, whereas Carver and Scheier (Reference Carver and Scheier1981) argued that it is only when the individual perceives a low probability of reducing a negative discrepancy that self-awareness is aversive.
Early work in this literature aligned with Duval and Wicklund’s (Reference Duval and Wicklund1972) view that self-awareness is an aversive state – based on the assumption that discrepancies and negative affect were inevitable (Duval & Wicklund, Reference Duval and Wicklund1972; Silvia & Duval, Reference Silvia and Duval2001). Therefore, this theory prompted work on the detrimental effects of self-awareness on personal outcomes, as evident in self-awareness theories of depression (Pyszczynski & Greenberg, Reference Pyszczynski and Greenberg1987), alcoholism, and suicide (Baumeister, Reference Baumeister1991) and supporting empirical evidence (for a review see Ingram, Reference Ingram1990).
More recent work has been influenced by Carver and Scheier’s (Reference Carver and Scheier1981) control theory and proposes beneficial outcomes. Regarding personal outcomes, Silvia and O’Brien (Reference Silvia and O’Brien2004) proposed that self-awareness has positive outcomes if the person has realistic standards and is optimistic about his or her ability to meet the standards. These authors found beneficial effects in the literature for outcomes including perspective taking, self-control, and self-esteem. With regard to professional outcomes, the leadership literature shows that leader self-awareness can be beneficial for outcomes such as leader effectiveness (Atwater & Yammarino, Reference Atwater and Yammarino1992; Fleenor et al., Reference Fleenor, Smither, Atwater, Braddy and Sturm2010). This work proposes that if a leader perceives a discrepancy between how they see themselves and how they believe others to perceive them, this mismatch will prompt action to reduce the discrepancy. Accurate assessments are thus expected to lead to positive outcomes because they help employees correct mistakes and tailor their behaviour to meet organizational demands.
Thus, the positive and negative outcomes demonstrated for self-reflection and self-awareness have been reconciled by some researchers by identifying boundary conditions (Carver & Scheier, Reference Carver and Scheier1981; Silvia & O’Brien, Reference Silvia and O’Brien2004). As noted earlier, another angle for reconciling this work has been to distinguish between adaptive and maladaptive forms of self-focussed attention such as reflection and rumination (Trapnell & Campbell, Reference Trapnell and Campbell1999).
There is little work on antecedents of self-reflection/self-awareness. When antecedents are mentioned, the common theme is that the trigger for self-directed attention is an external event or experience (Daudelin, Reference Daudelin1996; Gray, Reference Gray2007) – in particular, one that creates a state of tension or uncertainty. For example, Feedback Intervention Theory (Kluger & DeNisi, Reference Kluger and DeNisi1996), which is based on a control theory framework, proposes that feedback, particularly that which is normative, can trigger attention to the self. The antecedent states are generally thought to trigger self-directed attention and the ensuing process of reflection automatically (Ellis, Mendel, & Nir, Reference Ellis, Mendel and Nir2006; Nesbit, Reference Nesbit2012). We are not aware of research that has directly tested the link between experience and self-reflection/self-awareness; however, this link is implied in research that uses techniques such as guided memory recall to manipulate self-awareness (e.g., Govern & Marsch, Reference Govern and Marsch2001).
The potential role of time in triggering self-reflection/self-awareness is not raised often. However, researchers converge on the notions that: (1) self-reflection cannot occur without time to think, and (2) modern work places more value on action than reflection and allows little time for reflection (Daudelin, Reference Daudelin1996; Gray, Reference Gray2007; Raelin, Reference Raelin2002). Daudelin (Reference Daudelin1996) hints at potential mechanisms underlying the link between time and reflection that relate to resource capacity. She suggests that activities that do not require the brain’s full attention trigger a spontaneous process of reflection by momentarily suspending the flow of unrelated information into the brain. She suggests that such activities include mindless, physical activities such as jogging or mowing the lawn, as well as habitual routines such as showering or driving a familiar route (for related ideas, see Elsbach & Hargadon, Reference Elsbach and Hargadon2006).
Perspective Taking
Definitions of perspective taking converge on it being an effortful cognitive process that entails trying to understand or consider another’s viewpoint (Hoever et al., Reference Hoever, van Knippenberg, van Ginkel and Barkema2012). A more precise definition is that “perspective taking occurs when an observer tries to understand in a nonjudgmental way the thoughts, motives and/or feelings of a target as well as why they think and/or feel the way they do” (Parker, Atkins, & Axtell, Reference Parker, Atkins and Axtell2008: 151). This definition refers to perspective taking as a malleable state; however, it is also conceptualized as a stable trait (e.g., Davis, Reference Davis1980).
Parker et al. (Reference Parker, Atkins and Axtell2008) integrated the literature to examine the link between perspective taking and outcomes. Parker et al. noted that the literature lacked an overarching theoretical framework but there was some commonality underlying the arguments. Being aware of and understanding a broad spectrum of factors (e.g., perceptions, cognitions, motivations, affect) associated with a broad range of targets (e.g., colleagues, organizations) or events (e.g., task-related, environmental) from multiple perspectives is assumed to represent a form of advanced cognition that is advantageous for decision making and goal accomplishment (Parker & Axtell, Reference Parker and Axtell2001).
Empirical research has demonstrated beneficial links between perspective taking and a range of outcomes. Examples of professional outcomes include contextual performance (Parker & Axtell, Reference Parker and Axtell2001), empathy for customers (Axtell et al., Reference Axtell, Parker, Holman and Totterdell2007), and creativity (Grant & Berry, Reference Grant and Berry2011; Hoever et al., Reference Hoever, van Knippenberg, van Ginkel and Barkema2012). Examples of personal outcomes include relationship satisfaction (Franzoi, Davis, & Young, Reference Franzoi, Davis and Young1985), communication satisfaction (Park & Raile, Reference Park and Raile2010) and stereotyping (Galinsky & Moskowitz, Reference Galinsky and Moskowitz2000). Parker et al. (Reference Parker, Atkins and Axtell2008) also argued that perspective taking has potential negative outcomes (e.g., exploitation, preferential treatment). However, they noted that little work has explored such possibilities.
Discussions regarding the antecedents of perspective taking are limited but converge around two themes – one that is developmental and one that relates to cognitive resources. In terms of developmental trajectories, Piaget’s work (1932) demonstrates that young children learn to attend to the viewpoints of others through a natural process of developing cognitive maturity. Kohlberg’s (Reference Kohlberg1976) stages of moral reasoning situate perspective taking at the highest stage – individuals are proposed to progress through the six stages at a slow pace, and it is thought that many adults are unlikely to reach the highest level (see also Bartunek, Gordon, & Weathersby, Reference Bartunek, Gordon and Weathersby1983).
With regard to cognitive resources, there is agreement that perspective taking is an intentional, goal-driven process governed by controlled processing and requires resources for execution (Horton & Keysar, Reference Horton and Keysar1996; Parker et al., Reference Parker, Atkins and Axtell2008; Rosnagel, Reference Rosnagel2000). The antecedents identified for perspective taking are forms of cognitive resource capacity or allocation. It is thought that the greater ability that some individuals have to take the perspective of others is indicative of higher levels of cognitive complexity (the capacity to identify complex connections amongst differentiated aspects of a stimulus; Parker & Axtell, Reference Parker and Axtell2001). Intraindividual changes in perspective taking are assumed to be explained by malleable factors that represent different effort requirements or resource demands (Parker et al., Reference Parker, Atkins and Axtell2008; Rosnagel, Reference Rosnagel2000). Indeed, Parker et al.’s (Reference Parker, Atkins and Axtell2008) review resulted in an integrative framework that included the predictors of the perspective taker’s capacity for perspective taking (e.g., abilities and knowledge), demands of the perspective-taking situation (e.g., task complexity, work demands), and the effort expended to engage in perspective taking.
Time-related constructs have not received much treatment, but are flagged as potential factors under the banner of resource demands. Parker et al. (Reference Parker, Atkins and Axtell2008) speculated that work demands such as time pressure may interfere with management effectiveness, for example, by reducing motivation to attend to the perspective of a subordinate. In terms of empirical evidence, Horton and Keysar (Reference Horton and Keysar1996) showed that time pressure reduced perspective taking in a communication task. Rosnagel (Reference Rosnagel2000) proposed cognitive load may explain this finding – in support, they showed that verbal instructions were adapted in line with the target’s perspective under low cognitive load but not under high cognitive load.
Summary
In summary, all five forms of thinking reviewed have demonstrated important implications for professional and/or personal outcomes. Of primary interest here, these literatures also discuss potential links between slack time and thinking, and to a lesser extent, underlying mechanisms regarding the initiation and maintenance of thought.
The creativity literature houses the most concentrated body of work directly examining the link between time and thinking. Many scholars have theorized that having sufficient time is important for creative thinking, and although there is some supporting evidence, there is also contrary and null evidence. The bulk of work has focused on time pressure, but, as we note in the preceding text, time pressure is conceptually distinct from our proposed concept of slack time.
The mind-wandering literature does not refer to time per se. However, the arguments regarding the mechanisms that underlie the initiation and maintenance of mind wandering have implications for slack time. There is consensus in this literature that the initiation of mind wandering is automatic and goal driven, but arguments diverge regarding its maintenance. The current concerns hypothesis (Smallwood, Reference Smallwood2013) assumes that the maintenance of mind wandering is resource demanding, whereas the control failure × concerns hypothesis (McVay & Kane, Reference McVay and Kane2010) assumes that it is a default mode of thought that flows spontaneously in a resource-free manner unless disrupted by task-directed attention. The arguments within this latter hypothesis relating to deactivated task and/or brain states particularly resonate with our notion of slack time.
In the mindfulness literature, there is little mention of time, but arguments suggest that time is needed for the practice of, and engagement in, mindful thought. Arguments regarding the benefits of mindfulness suggest that its initiation disrupts negative automatic thought processes, which removes the need for self-control; mindful thought is thus maintained in a resource-free manner. A distinct contribution from this literature is the notion that present-focused thoughts need to be nonjudgemental and accepting in nature (also see mention of this by Parker et al., Reference Parker, Atkins and Axtell2008, in relation to perspective taking). Similarly, the self-reflection literature contrasts an open, accepting form of reflection with negatively focussed rumination. Thus, the nonjudgemental and accepting features may be critical for understanding when various forms of thinking are likely to be adaptive or maladaptive for development outcomes. There is little mention of time in the self-reflection literature, except to note that time is needed for thinking. Regarding the mechanisms involved in reflection, it is proposed that activities that do not require full attention trigger a spontaneous flow of thought (Daudelin, Reference Daudelin1996). In the perspective-taking literature, some work touches on the potential link between time pressure and perspective taking, with evidence consistent with the notion that, as a resource-demanding form of thought, perspective taking requires time to devote the necessary resources to its engagement and maintenance. Consensus in this literature appears to be that the initiation of perspective taking is goal driven and effortful (e.g., Parker et al., Reference Parker, Atkins and Axtell2008).
Implications for Research and Practice
Theoretical Integration
We have proposed that slack time potentially fosters important forms of thinking, which in turn can lead to professional and personal outcomes through learning and development. Given the constraints of space, our focus has been on the link between slack time and thinking, and empirical evidence linking thinking to professional/personal outcomes. We recognise that further theorizing is needed to specify how slack time influences thinking, and how thinking in turn relates to the dynamic process of learning and development. We suggest that a self-regulatory framework (Lord et al., Reference Lord, Diefendorff, Schmidt and Hall2010) will be a useful platform to work with. Despite being isolated from each other, most of the “thinking” literatures reviewed in the preceding text draw on the notion of self-regulation. The way in which self-regulation theories are drawn on varies, yet the aspects of differentiation may point to mechanisms that will be critical for understanding the links of interest here. For example, the propositions differ in terms of whether the form of thinking is resource demanding, whether it is a goal-driven process, and whether its initiation and/or maintenance is governed by controlled or automatic processes; and the resulting arguments for the direction or nature of effects also differ.
The evidence is far from definitive, but we speculate that slack time triggers various forms of thinking in a resource-free manner, which has long-term benefits for learning and development. Slack time refers to a portion of time characterized by little or no cognitive resource demands. During slack time, therefore, there should be less chance for task-oriented thoughts and associated controlled processing to dominate. Said another way, slack time may be associated with the absence of control, or an experience of “letting go of control.” Without controlled processing, thinking may be more likely to flow in a resource-free manner (e.g., see McVay & Kane, Reference McVay and Kane2010). A key implication of this idea is that thinking triggered by slack time may not deplete limited resources, providing a win-win situation in which one may reap the benefits of that thinking without the cost of depletion. These ideas may seem at odds with popular wisdom. The layperson may assume that “doing nothing” is “wasteful/lazy” and that one is more likely to “get ahead” by filling his or her time with activity. However, activity-time compression is likely to come at a cost of resource depletion (and thus may outweigh the potential benefits of those activities). Experiencing or intentionally scheduling time with little or no demands on cognitive resources may be a more assured pathway to success, especially over the longer term, because it has the potential to trigger thinking that has beneficial outcomes without the cost of resource depletion. Research drawing on ego-depletion theory to investigate the resource-depleting nature of self-control may be useful for testing the notion that higher levels of slack time are associated with less resource depletion and greater well-being and performance benefits than lower levels of slack time. 
Further, these experimental studies could be combined with neuroscientific research to examine whether periods of time thought to represent higher slack time and lower resource depletion are associated with activation of the default brain network that is thought to represent states of rest or task-induced deactivation (e.g., Andrews-Hanna, Reference Andrews-Hanna2012).
Our ideas regarding slack time and letting go of control contrast with the view within self-regulation theories that emphasizes the benefits of self-control (Baumeister et al., Reference Baumeister, Bratslavsky, Muraven and Tice1998). The notions of exploration versus exploitation discussed in the creativity literature are useful for reconciling these opposing views. Our speculations suggest that the traditional strategy of activity-time compression for “getting ahead” relies on self-control to allocate our limited resources judiciously and efficiently. We allocate our limited time to planned activities, and use self-control to stay focussed on the task, resist distractions or temptations, and push through fatigue. The focus of such self-controlled efforts, even implicitly, appears to be targeted towards exploitation, which refers to the creation of value through existing or minimally modified competencies that sustain long-term viability (March, Reference March1991; Voss et al., Reference Voss, Sirdeshmukh and Voss2008). In contrast, slack time may encourage individuals to “let go of control” and to foster exploration, which, although viewed as riskier because it has uncertain payoffs, is thought to have potential for creating novel competencies with long-term benefits (March, Reference March1991; Voss et al., Reference Voss, Sirdeshmukh and Voss2008). Thus, we suspect that slack time and the associated experience of letting go of control are required for the spontaneous, nonjudgemental thought that can promote the types of exploratory thinking that lead to long-term personal and professional benefits.
Consequences of Slack Time
More work is needed on identifying and conceptualizing the types of thinking that are important when considering the consequences of slack time, as well as consideration of other potential outcomes.
First, it is important to acknowledge that although this chapter has focussed on the potential benefits of slack time, it may not be desirable for some people or in some situations. For example, slack time might prompt rumination for some individuals, which is generally believed to have detrimental outcomes (Trapnell & Campbell, Reference Trapnell and Campbell1999). Smillie et al. (Reference Smillie, Yeo, Furnham and Jackson2006) showed that individuals with high neuroticism achieved greater performance improvement than their counterparts on busier work days or when expending more effort than usual, and argued that this is because the higher need to direct their cognitive resources to on-task activities left fewer resources available for their usual negative thoughts. On the flipside, then, it is possible that increased slack time is detrimental for such individuals through increased rumination. More broadly, it is possible that slack time has a curvilinear effect such that a moderate amount is beneficial for triggering positive thinking patterns but too much could trigger negative patterns. For example, long-term unemployed people may experience uncharacteristically high levels of slack time, which may feed into negative thought spirals.
We speculate that slack time is important for triggering thinking with certain characteristics – for example, thinking that is solitary, spontaneous, and nonjudgemental – and that this type of thinking is important for generating thoughts that are novel, exploratory, and future focussed. As a first step, however, we need to develop a taxonomy of thinking that is relevant to this topic. At a minimum, one could consider where various forms of thinking fall on various continuums. Example continuums include temporal focus (i.e., past, present, or future), person focus (intrapersonal or interpersonal), and task focus (on-task or off-task). These continuums describe salient features; however, they are not mutually exclusive and thus not easily portrayed in a two-dimensional matrix. For example, self-reflection is unequivocally characterized as a self-focused cognition (under the person focus category); however, it could vary in terms of temporal focus (e.g., reflecting on the self’s past and/or future) and task focus (e.g., reflecting on events that are task and/or off-task related). Other considerations for a taxonomy include how the thinking was initiated (spontaneously or deliberately) and the function that it serves (e.g., proactive or adaptive). Future work could draw from and extend our review to propose a taxonomic framework with associated measures that differentiate thinking along various continuums, and empirically test proposed links among slack time, thinking, and outcomes that fall out of that framework.
Regarding learning and development, we suspect that the forms of thinking we have considered here are especially important for promoting holistic and long-term oriented development. Although we have not delved deeply into notions of adult development, having instead adopted a generic definition, we suggest the particular relevance of concepts like psychological growth. Staudinger and Kunzmann (Reference Staudinger and Kunzmann2005) identified psychological growth as a kind of positive personality development characterized “by increases in certain virtues such as insight, integrity, self-transcendence, and the striving toward wisdom” (321). They argued that psychological growth requires that we be continuously “challenged by new experiences and we need to emancipate ourselves in thinking and feeling, and transcend the structures within which we have been socialized” (321). Ultimately, such forms of growth, we assert, will be enabled by slack time and the resulting opportunities for thinking.
In this chapter, we have focussed on the potential cognitive states (i.e., thinking) that are triggered by slack time. However, there are likely to be motivational and emotional consequences as well. For example, if slack time triggers a self-reflective process regarding one’s standing on various goals, it may indirectly prompt associated emotions such as dissatisfaction and motivational states such as implementation intentions aimed at reducing those discrepancies.
Triggers and Boundary Conditions
The potential links among slack time, thinking, and outcomes raise questions regarding what factors may trigger these pathways and what factors may influence the direction or strength of the relationships.
There is a growing literature on the importance of breaks for the recovery process (e.g., Fritz et al., Reference Fritz, Ellis, Demsky, Lin and Guros2013). The notion of breaks implies time away from work; however, such breaks may or may not be filled with activity (e.g., attending a training seminar could be considered a break from work; a vacation break can be filled with non-work-related activity). Our arguments suggest that breaks characterized by little or no demand on cognitive resources (i.e., slack time) will be more likely to trigger thinking and associated benefits. Emerging evidence indirectly supports our arguments – a social networking company called Draugiem used a time-tracking productivity app (DeskTime) to show that the 10% of employees with the highest productivity took, on average, a 17-minute break every 52 minutes. Moreover, one of the most common ways in which these most productive workers spent this time was to take a walk – which presumably could have been a solitary act that promoted slack time and thinking (Evans, Reference Evans2014). Future research could examine the organizational features likely to promote taking breaks (e.g., autonomy, modularized work), and whether various features of the breaks (e.g., length, activity-time compression) influence slack time, thinking, and associated outcomes.
We noted earlier that workload is increasing in the modern workplace and that it works against the desire to schedule activity-free time. This suggests that one method for promoting slack time may be to reduce the amount of work that is required. Alternatively, it may be fruitful to relax deadlines so that the same amount of work can be completed at a slower pace. The “slow movement” is gaining momentum and shows promise for promoting slack time and associated forms of thinking (Honoré, Reference Honoré2004). Research could investigate these ideas by conceptualizing “slow work” (e.g., taking longer than usual to do a task) and investigating whether it enhances thinking through slack time more so than reducing workload per se.
Another avenue for promoting slack time is to consider work strategies. Employees are increasingly experiencing the need to juggle multiple goals (Unsworth, Yeo, & Beck, Reference Unsworth, Yeo and Beck2014), and the usual strategy for dealing with this situation is to multitask (Levitin, Reference Levitin2015). However, Levitin (Reference Levitin2015) reviews neuroscientific research highlighting the costs of multitasking for well-being and performance. The alternative strategy of single tasking may therefore yield equal or even greater productivity than multitasking by fostering periods of slack time.
Another strategy for dealing with high workload and multiple goals is to approach work in a structured, task-oriented manner which is presumably associated with controlled processing (McVay & Kane, Reference McVay and Kane2010; Smallwood, Reference Smallwood2013). A contrasting approach to work is “play,” which is a behavioural orientation to performing activities, characterized by elements such as a flexible association between means and ends, being free from external constraints, and perceiving time during play as an internal experience rather than as clock time (Mainemelis & Ronson, Reference Mainemelis and Ronson2006). There is a long history of research on how children can learn through play (Lieberman, Reference Lieberman1977), and growing recognition of how those principles can also apply to adults (Brown, Reference Brown2009). For example, there is evidence that a relatively unstructured approach to work, characteristic of play, can facilitate cognitive processes such as divergent thinking and mental transformations (Mainemelis & Ronson, Reference Mainemelis and Ronson2006) as well as having broader benefits for performance and well-being (Brown, Reference Brown2009). It is possible that play is a vehicle for promoting the experience and/or creation of slack time and associated benefits for learning and development.
Conclusion
In conclusion, although research on this topic is in its early days, important implications for both individuals and organizations are likely to flow as these ideas about slack time are developed and tested. To maximize individuals’ thinking, and hence their learning and development, we suggest that they need to be proactive in creating and/or preserving periods of slack time. At the same time, organizations will benefit from putting in place practices and work designs that support and enable periods of slack time for employees. We hope that the ideas put forward in this chapter help stimulate research to guide such practical application.
In our rapidly evolving business landscape, organizations must adapt and remain relevant by tapping into an increasingly diverse employee talent pool. This chapter explores the challenges and opportunities for organizations presented by the fast-growing Hispanic or Latino population in the United States.1 Specifically, the gap between the size of the Latino population relative to their representation among the upper levels of corporations is explored. A solution to this apparent bottleneck in moving Latinos up the corporate hierarchy is presented, and an illustrative example of a successful approach to address this challenge is discussed. The bottom line is that organizations must take a proactive and systematic approach to identifying and developing Latino talent within their ranks to remain competitive and advance their business objectives.
A Changing Landscape
Organizations are confronting what has been referred to as a VUCA world characterized by volatility, uncertainty, complexity, and ambiguity (Bennett & Lemoine, Reference Bennett and Lemoine2014). For example, in the city of Dallas, which in 2015 was the ninth largest city in the United States, the economy has been very healthy and continues to pick up steam. However, the arrival of an Ebola patient at a local hospital on September 20, 2014 threw the city into chaos, and business leaders had to consider the real possibility of economic impact due to fear of travel into the city. Fortunately, the worst fears were not realized. Nevertheless, this type of environment, where events seem to come out of nowhere, demands adaptability, nimbleness, and the ability to develop innovative solutions to problems.
Another aspect of this VUCA world is the changing nature of the marketplace in which businesses operate. Witness, for example, the dramatic change in attitudes toward same-sex marriage over the past 10 years (Pew Research Center, 2015a) or the declining car ownership rates of young millennials (Badger, Reference Badger2014). Even how customers access information and products continues to evolve at a time when the purchasing power of the middle class is stagnant or decreasing (Pew Research Center, 2014). In fact, the most recent Southern Methodist University (SMU) Cox CEO Sentiment Survey found that business leaders today put the changing customer needs and expectations as their top concern going forward (Quiñones & Rasberry, Reference Quiñones and Rasberry2015). Clearly, organizations need to connect with their customers more closely than ever before.
The ability to adapt to a volatile world and connect with a changing customer base is increasingly dependent on the quality of the people working in and running an organization. However, just as the customer base is changing, the labor force from which organizations draw their talent is also undergoing a dramatic shift. Numerous surveys find that companies are having a difficult time filling key positions (PwC 18th Annual Global CEO Survey, 2015). The talent pool from which they are drawing is also much more diverse than ever. The millennial generation (individuals born between 1982 and 2004) is the most diverse generation ever, with 43% being nonwhite compared with 28% of boomers (Pew Research Center, 2014). Thus, organizations have to compete much more fiercely for talent from a pool that is very different from the one they are used to.
The single-largest demographic trend influencing both the consumer as well as the labor markets in the United States is the dramatic growth of the Hispanic population. According to census data, the U.S. Hispanic population grew by 48% from 2000 to 2011 and is now estimated to be more than 52 million, or 17% of the total U.S. population (Brown and Patten, Reference Brown and Patten2014). In fact, Latinos accounted for 54% of the overall population growth in the United States from 2000 to 2014 (Stepler & Lopez, Reference Stepler and Lopez2016). Among 18 to 24 year olds, Hispanics now make up more than 20% of the population. Even more astonishing, Hispanics will account for more than 75% of the labor force growth from now until 2020 (Rochhar, Reference Rochhar2012). These findings make it clear that to meet the challenges and opportunities presented by the current and future economic, social, and demographic landscape, organizations are going to have to find ways to connect to and tap the talents of the fast-growth Hispanic population.
Hispanic Talent Is Not Reaching the Top
Despite the growing importance of Hispanics for organizations’ success, the record concerning their advancement in Corporate America is not encouraging. Senator Robert Menendez’s (D-NJ) 2014 Corporate Diversity Report that surveyed 69 Fortune 100 companies found that Hispanics occupied only around 4.9% of board and 2.9% of executive positions (Menendez, Reference Menendez2015). Similarly, the Hispanic Association on Corporate Responsibility (HACR), in their annual Corporate Inclusion Index, finds significant Hispanic underrepresentation in board and executive positions among Fortune 100 companies (Hispanic Association for Corporate Responsibility, 2013). The same report found that responding companies had an average hiring rate for Hispanics of 12% but a 17% attrition rate for the same demographic. Also, an analysis of Equal Employment Opportunity (EEO) data by the Latino Leadership Initiative at SMU found that Hispanics have a much lower conversion rate from middle management to executive positions than whites (Quiñones, Reference Quiñones2010).
This evidence suggests that organizations have a two-pronged challenge related to Hispanic talent. First, they must be able to attract and retain this increasingly mission-critical source of talent. Second, they need to find ways to move this talent up the corporate hierarchy. If they cannot solve these two challenges, organizations risk being unable to advance their corporate objectives and failing to innovate new products and services in an ever-changing competitive marketplace. In fact, a recent survey of global CEOs found that talent availability is the biggest barrier to achieving growth for their organizations (PwC 18th Annual Global CEO Survey, 2015). Increasingly, this talent shortage will be caused by the inability to tap into the growing Hispanic population.
Potential Reasons for the Hispanic Talent Gap
There are many possible explanations for the gap in hiring, retention, and advancement among Hispanics. Factors such as biases in the selection process, differential development and advancement opportunities, and unique Hispanic traits and cultural scripts can account for this gap. Each will be discussed in the following text.
Bias
First, there is a large body of work demonstrating bias against nonwhites in the hiring process (Ruggs et al., Reference Ruggs2013). For example, Hosoda, Nguyen, and Stone-Romero (Reference Hosoda, Nguyen and Stone-Romero2012) found that applicants with a Hispanic accent were rated as less suitable for an open position and less likely to be promoted to management than applicants without an accent (see also Gluszek and Dovidio, Reference Gluszek and Dovidio2010). In general, racial minorities are sometimes viewed as not possessing qualities required for the position (e.g., leadership capacity) as well as possessing traits that translate into lower work performance (e.g., less educated, lazy). Not only does this bias translate into fewer opportunities to enter and move up the organizational ladder, but also overt and subtle discrimination has been shown to have negative physical and psychological health outcomes (Jones et al., Reference Jones, Peddie, Gilrane, King and Gray2016).
Prior Experience and Exposure
The Hispanic talent gap can also be partially explained by real differences in experiences, exposure, and preparation for higher-level roles. For example, studies have long demonstrated the importance of mentoring relationships for career progression and success (Eby et al., Reference Eby, Allen, Evans, Ng and DuBois2008). Mentors can help their protégés navigate organizational politics, provide valuable feedback, as well as impart knowledge and wisdom regarding job-specific issues (Tonidandel, Avery, & Phillips, Reference Tonidandel, Avery and Phillips2007). However, there is evidence to suggest that Hispanics may not have as many mentoring opportunities as other groups (Blancero & DelCampo, Reference Blancero and DelCampo2005). Also, Hispanics are disproportionately more likely to be mentored by other Hispanics, who tend to lack influence within the organization (Ragins, Reference Ragins1997).
Cultural Scripts
Finally, cultural scripts common among Hispanics can shape behavior and perceptions in ways that impede progress and advancement in organizations (cf., Dabbah & Poiré, Reference Dabbah and Poiré2006). For example, a general cultural characteristic among Spanish-speaking countries is power distance, or the acceptance of hierarchical power structures (cf., Hofstede, Reference Hofstede2001). This cultural trait means that, in general, Hispanics are taught to be respectful of authority. This can lead others to wrongly conclude that Hispanics lack ambition or do not have “executive presence.” Thus, when the time comes to select high-potential individuals to receive specialized leadership training or high-visibility assignments, Hispanics may be overlooked.
It is likely that the Hispanic talent gap in organizations is due to a combination of these and other factors. However, it is important to note that these subtle biases in identifying potential and doling out developmental opportunities can remain hidden from view. This leads organizational leaders to express frustration with the talent gap and throw their hands up in resignation. However, what organizations need to do is commit to a proactive, thoughtful, and systematic approach to talent development that focuses on all segments of their talent pool, especially traditionally overlooked groups such as Hispanics.
Creating a Development Pipeline
Recent changes in the supply and demand for talent have increased the need for organizations to develop a robust internal pipeline of top leadership talent (Cappelli, Reference Cappelli2008). This pipeline begins when new talent is selected into an organization, continues when employees are identified as having potential for higher levels of responsibility, and culminates when those individuals are given the experiences and exposure that round out their skill sets and prepare them for promotion (Charan, Drotter, & Noel, Reference Charan, Drotter and Noel2011). Therefore, concerted efforts need to be made at each of these stages in the pipeline.
Recruitment
The challenge of developing a strong pipeline of Hispanic talent begins at the recruitment stage. Reaching a culturally and ethnically diverse workforce requires different approaches than organizations traditionally use (Rodriguez, Reference Rodriguez2007). For example, Hispanics overindex (use at a higher rate than their numbers would predict) in their use of social media, with Facebook and Instagram serving as favorite platforms (Pew Research Center, 2015b). Additionally, Hispanics tend to score higher on measures of collectivism and power distance, suggesting that they may be differentially attracted to organizations with a structure and culture that matches those tendencies (Guerrero & Posthuma, Reference Guerrero and Posthuma2014; Stone & Deadrick, Reference Stone and Deadrick2015). Finally, Hispanics are younger than other demographic groups, a fact that is likely to impact their job search strategies and preferences. The implications of these differences can be profound for generating a strong and diverse pool of job applicants.
Given these factors, organizations must examine and alter their recruitment practices to successfully tap into this talent pool. Rodriguez (Reference Rodriguez2007) recommends that organizations take a long-term view and focus on creating an employment brand that is welcoming to Latinos. This requires knowledge of Latino culture, demonstrable career opportunities for advancement for Latinos, and active involvement in the Latino community. Expanding the channels used to communicate career opportunities to take advantage of this above-average use of social media is also critical. Finally, involving current Latino employees, executives, and employee resource or affinity groups in the recruitment process will go a long way toward communicating that Latinos are not only welcome but can also thrive in the organization.
Selection
Even with a diverse applicant pool, biases in the selection process can result in fairly homogeneous hires. In fact, there is ample evidence that applicants from traditionally underrepresented groups can be disproportionately excluded during an organization’s typical selection process (Barron, Hebl, & King, Reference Barron, Hebl and King2011; Ruggs et al., Reference Ruggs2013). Dipboye and Halverson (Reference Dipboye, Halverson, Griffin and O’Leary-Kelly2004) identify several factors that can contribute to discrimination in the workplace such as individually held stereotypes and biases, flawed policies and practices that favor specific groups, and organizational pressures for conformity and “fit” within the prevailing organizational culture. Thus, for organizations to tap into all sources of talent within the labor market, they need to make some changes to their hiring practices.
The most important change that an organization can make to its employee selection process is to understand the nature of the job it is hiring for and identify objective and job-relevant attributes that drive performance in that job (Schmitt, Reference Schmitt2014). It is in situations in which selection criteria are vague and subjective that stereotypes and biases tend to creep in (Huffcutt, Reference Huffcutt2011). This problem is especially acute in the most commonly used selection method, the employment interview, where there is room for interviewers to focus on job-irrelevant factors and ask questions that tend to confirm preexisting biases (Rivera, Reference Rivera2012). One of the most effective ways to combat bias in interviews is to employ standardized and structured questions that evaluate all applicants on the same job-relevant factors (McCarthy, Van Iddekinge, and Campion, Reference McCarthy, Van Iddekinge and Campion2010). Other recommendations include increasing the diversity of interviewers, ensuring that other selection tools such as preemployment tests are not biased, and validating the selection process against job performance.
Development
The final component of the talent pipeline involves identifying high-potential individuals to receive targeted developmental experiences and subsequent promotions to higher levels of responsibility (Ready, Conger, and Hill, Reference Ready, Conger and Hill2010). Unfortunately, it has long been known that employees from underrepresented groups tend to be rated lower in potential for promotion, even after controlling for several relevant factors such as experience and education (e.g., Landau, Reference Landau1995). Commonly used tools such as the nine-blocker, which rates employees on the dimensions of current performance and future potential, leave room for subjective bias (Brook, Reference Brook2014). Furthermore, talent review meetings, where decisions are made regarding who has potential for future promotions, can favor individuals who have strong advocates or “sponsors” who can speak on the candidate’s behalf, and underrepresented groups are less likely to have such advocates (Thomas and Gabarro, Reference Thomas and Gabarro1999). There is also anecdotal evidence suggesting that minorities are more likely to be rated as “ready later” or “ready in three years” when managers are asked to indicate the promotion potential of their people (SMU Latino Leadership Initiative, 2015).
This reality points to the need for a reevaluation of the processes by which companies identify high-potential employees. A good starting point would be to establish clear and objective criteria for judging potential (see McCall, Reference McCall1998). Utilizing tools such as standardized assessments, or performance in special assignments where clear performance measures are established, would help put all candidates for promotion on equal footing. It is also clear that the role and availability of sponsors in the talent review process needs to be revisited. At one extreme, one could require that everyone being discussed be represented by someone who knows his or her performance and skill set. At the other extreme, decisions about who should be considered a high potential could be based completely on “objective” criteria such as job performance ratings and assessment scores. Beyond these mechanical changes, it is imperative that Hispanics, like other employees, have sponsors who know them well and are credible enough in the organization to open doors to developmental opportunities.
Once high-potential employees are identified, they should be assigned to developmental experiences that prepare them for the next promotion and beyond. Unfortunately, organizations seldom have a clear understanding of the developmental value of different assignments and experiences as well as the prerequisites for success in a given position (McCauley et al., Reference McCauley, Ruderman, Ohlott and Morrow1994). Fortunately, research has identified the following experiences as particularly developmental for managerial and executive roles (McCall, Reference McCall2010; McCall, Lombardo, & Morrison, Reference McCall and Morrison1988):
Initial supervisory assignment – Understand the difference between doing and leading
Task force assignments – Learn to influence without formal authority and understand different points of view
Line to staff switches – How to create value in a completely different area while gaining exposure to corporate strategy and culture
Leading a turnaround – Being decisive and courageous while making difficult choices
Starting something new – Being innovative and resourceful while pulling together a new team
Certain bosses – Working under good and bad bosses gives exposure to what works and what doesn’t
Overcoming hardships – Teaches resilience, resourcefulness, and courage
There is ample evidence linking these developmental experiences with promotion rates and leadership performance (McCall et al., Reference McCall and Morrison1988). Therefore, fair processes should be established to determine which employees are exposed to them.
Evaluation
To ensure that they are tapping into this fast-growing talent pool, it is imperative that organizations develop metrics to evaluate the effectiveness of their efforts. Without deliberate attention to evaluation, good intentions will fail to result in meaningful outcomes because managers and leaders are not held accountable for making significant progress. Unfortunately, the ultimate evaluation will come when organizations that don’t track their progress wake up one day and realize that they don’t have the talent they need to execute their plans.
When devising an evaluation regime, the metrics chosen should focus on the entire employment life cycle from recruitment to separation (see Edwards, Scott, & Raju, Reference Edwards, Scott and Raju2003). In terms of recruitment, evaluation should first focus on how the organization is perceived as a place of employment by potential employees, or what is commonly referred to as its employment brand (Sartain & Schumann, Reference Sartain and Schumann2006). Whether collected through focus groups, formal surveys, or reading comments on social media sites, this data can serve as an early warning system for subsequent difficulties in recruiting from diverse talent pools. Other recruitment metrics include tracking the numbers, quality, and composition of applicants identified from different sources (word of mouth, referrals, social media, job boards, etc.).
Selection metrics should focus on the performance of different applicant groups on interviews, tests, and other valid processes used to make hiring decisions. Any differences between groups can signal potential biases and should be investigated. It is important that all selection tools be validated against unbiased criteria based on an analysis of the drivers of job performance. The composition of resulting hires should also be compared to that of the general population as well as the applicant pool to check for adverse impact (i.e., differential selection rates by demographic subgroup).
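The adverse impact comparison described above is commonly operationalized with the “four-fifths rule” from the U.S. Uniform Guidelines on Employee Selection Procedures: a subgroup whose selection rate falls below 80% of the highest subgroup’s rate is treated as prima facie evidence of adverse impact. The sketch below is purely illustrative; the group names, applicant and hire counts, and the 0.8 threshold default are hypothetical assumptions, not data from this chapter.

```python
# Illustrative four-fifths-rule check for adverse impact.
# All group names and counts below are invented for the example.

def selection_rates(applicants, hires):
    """Selection rate (hires / applicants) for each demographic group."""
    return {g: hires[g] / applicants[g] for g in applicants}

def adverse_impact_flags(applicants, hires, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold`
    (four-fifths by default) of the highest group's rate."""
    rates = selection_rates(applicants, hires)
    top = max(rates.values())
    return {g: (rate / top) < threshold for g, rate in rates.items()}

# Hypothetical applicant pool and hires
applicants = {"Group A": 200, "Group B": 150}
hires = {"Group A": 50, "Group B": 18}

print(selection_rates(applicants, hires))      # Group A: 0.25, Group B: 0.12
print(adverse_impact_flags(applicants, hires))  # Group B is flagged
```

A flagged group is a signal for investigation, not proof of bias; as the text notes, any such difference should prompt scrutiny of the selection tools and their validation against job performance.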
Development efforts can be evaluated in several ways. First, lists such as those of “high potentials” and succession plans should be examined to see if they systematically exclude particular groups. If so, then the process and criteria used to place employees on those lists should be scrutinized. Second, organizations should document access to developmental experiences such as leadership programs, high-profile assignments, and mentorship opportunities. Finally, the progress of employees throughout the organization should be tracked using metrics such as time in position, last promotion, and the composition of the leadership pipeline at all levels including the C-suite.
To detect leaks in the leadership pipeline, organizations should also collect and evaluate their turnover rates. This data should be analyzed not only by demographic group but also by dimensions such as voluntary/involuntary, avoidable/unavoidable, and functional/dysfunctional turnover. This last dimension refers to how easily departing employees can be replaced (e.g., whether their skills are in high demand) and how strategically relevant they are to the future of the organization (e.g., whether they work in fast-growing divisions). Using the various metrics described in the preceding text, a group of companies recognized the need to develop a comprehensive program to accelerate the development of their Latino talent. This program is described in the following text.
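The turnover segmentation just described amounts to a simple cross-tabulation of exit records against headcount. The sketch below is hypothetical; the record fields, group labels, and counts are invented solely to illustrate the breakdown by demographic group and exit type.

```python
# Illustrative turnover breakdown by demographic group and exit type.
# Record structure and all numbers are hypothetical.
from collections import Counter

def turnover_breakdown(records, headcount_by_group):
    """Turnover rate per (group, exit type) pair, relative to
    each group's total headcount."""
    counts = Counter((r["group"], r["type"]) for r in records)
    return {
        (group, exit_type): n / headcount_by_group[group]
        for (group, exit_type), n in counts.items()
    }

# Hypothetical exit records and headcounts
records = [
    {"group": "Hispanic", "type": "voluntary"},
    {"group": "Hispanic", "type": "voluntary"},
    {"group": "Other", "type": "involuntary"},
]
headcount = {"Hispanic": 20, "Other": 50}

print(turnover_breakdown(records, headcount))
```

In practice, the same cross-tabulation would be repeated along the other dimensions in the text (avoidable/unavoidable, functional/dysfunctional) to locate where in the pipeline the costliest leaks occur.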
An Illustrative Case: The Corporate Executive Development Program
A group of Fortune 1000 companies examined the career paths of various demographic groups and found that Latinos were reaching middle management faster than other groups but tended to stall there. Recognizing the untenable nature of this dynamic, these corporations, through their affiliation with the National Hispanic Corporate Council (NHCC), developed a blueprint for a program that could accelerate the careers of these Latino leaders and propel them into the ranks of vice president and beyond. This blueprint identified a core set of competencies that these organizations felt were critical for success at the executive level. The competencies included in the CEDP are described in Table 15.1.
Table 15.1 Competencies targeted by CEDP
| Competency | Description |
|---|---|
| Self-Awareness | Understanding one’s behavioral tendencies, responses, strengths, and weaknesses. |
| Understanding Hispanic Cultural Scripts | Knowledge of cultural elements of Latino culture that impact behavior and perceptions of that behavior. |
| Developing and Deploying Social Capital | Building mutually beneficial relationships that advance personal, professional, and organizational goals. |
| Adaptability | Ability to assess situational demands along with the flexibility to adapt behavior to match those demands. |
| Developing and Leading Teams | Understanding the characteristics of successful teams, managing the stages of team development, and avoiding common team dysfunctions. |
| Building Trust and Influence | Understanding the impact of trust on speed and cost of execution, applying critical behaviors that establish and build trust, and repairing trust when broken. |
| Understanding and Leveraging Differences | Sensitivity and appreciation for different perspectives and approaches. |
| Holding Oneself and Others Accountable | Skills in holding courageous conversations with anyone in the organization regardless of positional authority. |
| Developing Others through Coaching and Mentoring | Understanding the role of coaching and mentoring on organizational performance and the ability to establish strategic mentoring relationships. |
| Thinking Globally and Strategically | Understanding the drivers of global economic activity and the implications for organizational performance. |
| Leading Change and Innovation | Knowledge of the phases of organizational and individual change and ability to lead others through change. |
| Navigating Corporate Culture | Having political skills and the ability to read the “unspoken rules” of an organization. |
After an exhaustive nationwide search, the NHCC chose the SMU Cox School of Business as a partner to help bring their blueprint to life. Several factors led to this selection including a history and reputation for successfully partnering with corporations to develop executive education programs, a commitment to Latino leadership development, our Dallas location at the forefront of the demographic changes taking place across the country, and the increasing prominence of the Dallas-Fort Worth area as the central business district of the United States. With the blueprint as a guide, SMU Cox was tasked with designing and executing a transformative experience that could have a measurable impact on the careers of participants within their organizations. The result was the Corporate Executive Development Program (CEDP).
Participants
The target audience for the CEDP is mid-level functional or general Latino managers from Fortune 1000 companies with potential for advancement into executive positions. Participants in the program are nominated by their supervisors as well as by talent managers from their human resources departments. They have different levels of acculturation and come from (or are descended from) a wide range of Latin American countries, including Brazil. The sponsoring organization pays the full cost of the program and associated travel expenses. The program is organized into consecutive cohorts, with the first one selected in 2011. Since then, approximately 168 Latino professionals have completed the CEDP over six cohorts. These participants represent all functional areas including marketing, finance, operations, and human resources and come from throughout the United States and beyond.
Program Design
The CEDP is composed of four broad elements that work in concert to increase its relevance and impact. These include: (1) formal classroom instruction, (2) assessment and feedback, (3) exposure to highly seasoned Hispanic executives, and (4) an action learning team project. The program is divided into three phases that take place over a nine-month period.
The formal instruction component of the program employs an inside-out structure that starts with individual-level concepts and culminates with an organizational-level perspective. Each of the three phases consists of four days of instruction on the SMU campus. The three instructional phases of the program are: (1) Leading with Authenticity, (2) Leading High Performance Teams, and (3) Becoming a Corporate Leader.
Each phase contains specific sessions led by highly experienced instructors utilizing several proven pedagogical methods such as cases, role plays, group activities, hands-on exercises, and simulations. The goal of Phase 1 is to increase the participants’ level of self-awareness, adaptability, intercultural competence, and emotional intelligence. Phase 2 focuses on group and interpersonal skills such as developing trust, building teams, holding others accountable, driving innovation, and coaching others. Finally, Phase 3 is aimed at corporate-wide competencies such as developing social capital, navigating corporate culture, leading change, and understanding the global economic landscape. Table 15.2 summarizes the topics covered in each of the CEDP phases.
Table 15.2 CEDP instructional phases and modules
| Instructional Phase | Modules |
|---|---|
| Phase 1: Leading with Authenticity | Self-awareness; adaptability; intercultural competence; emotional intelligence |
| Phase 2: Leading High Performance Teams | Developing trust; building teams; holding others accountable; driving innovation; coaching others |
| Phase 3: Becoming a Corporate Leader | Developing social capital; navigating corporate culture; leading change; understanding the global economic landscape |
A unique aspect of the CEDP is the presence of three executive advisors who are current or recently retired Hispanic corporate executives. These advisors perform several roles throughout the program. First, they are present during all instructional sessions to provide a linkage between the content presented and application to a corporate environment, as well as to inject a Hispanic cultural lens to the discussion. For example, during a discussion on accountability, the advisors could observe that, in their experience, Hispanics tend to have difficulty saying no to requests and may become overcommitted and let some obligations slip through the cracks. That observation would spur a further conversation around cultural scripts and how they can impact one’s behavior and others’ perceptions.
In addition to having access to executive advisors, CEDP participants also have the opportunity to receive and act upon feedback throughout the program. During Phase 1, participants are given the results of a 360-degree assessment instrument tapping into the program competencies described in the preceding text. This assessment is completed by supervisor(s), peers, and direct reports from their organization. Program participants work with their assigned executive advisors to make sense of the results and develop an action plan for closing any critical gaps and identifying key strengths. Throughout the program, executive advisors are also in a position to give participants direct feedback regarding their behavior and performance in class sessions and the team project. The use of multiple feedback methods and opportunities enriches the experience, builds self-awareness, and sets participants up for meaningful development.
Another element of the CEDP that helps drive individual development and skill application is the capstone team project. The purpose of the project is to give participants the opportunity to apply the program content within a team context while developing an enterprise-wide perspective. The project leverages the perspectives and knowledge of the participants, who come from a variety of industry and functional backgrounds. The project teams (ranging in size from five to seven depending on the size of the overall cohort) are given the challenge to develop a venture that is financially viable but also addresses a social need. To date, team projects have tackled issues such as childhood obesity, education, texting and driving, fair trade, and environmental sustainability, to name a few.
Program participants work on their team projects while on the SMU campus and remotely when they are between program phases. During Phase 1 of the program, teams complete a team charter outlining expectations of each other as well as their team values. An executive advisor is assigned to each team to monitor team progress and provide candid feedback. This feedback can range from encouraging specific team members to apply program concepts during team meetings to pointing out behaviors they observe that can impact the participant’s effectiveness and career progression. In our experience, teams take the project very seriously and devote a large amount of time to creating a quality deliverable. Before they leave campus at the conclusion of Phase 1, project teams agree on their communication method and frequency and set a schedule for working on their project. Each team’s executive advisor participates in these virtual meetings to provide feedback and general guidance. Upon returning to campus for Phase 2, the teams give a brief update on their progress that includes the general topic or issue they chose to tackle as well as the approach taken. During Phase 3, teams present their final projects to their classmates, company sponsors, and other program guests. These presentations are followed by a graduation ceremony that caps their nine-month journey.
Program Impact
By all measures, the CEDP has been a huge success. First, and perhaps most importantly, approximately 70% of CEDP participants have received opportunities to move into positions with significant increases in scope and responsibility (including promotions to vice president and beyond). Feedback from corporate sponsors has also been positive, indicating that the program provides them with another tool for developing their high-potential Latino professionals as well as for attracting and retaining top talent. Participants consistently give the program high marks in ratings of individual sessions as well as the overall experience. Preliminary research conducted with CEDP participants also shows that the program has helped them improve in the competencies identified by the original blueprint. For example, participants have reported greater awareness of the role that culture plays in their behavior, higher levels of self-awareness, more intentional development of their social capital, and more confidence in leading their teams.
Future Research
Although we know a great deal regarding the importance of leadership development for moving individuals into higher levels of responsibility and scope in organizations, there are still areas that require further study (see DeRue & Myers, 2014, for a comprehensive review). We know, for example, that experience is one of the primary sources of learning for leaders, that some situations are more developmental than others, and that certain personal characteristics are associated with higher degrees of learning (McCall, 2010). However, DeRue and Myers (2014) argue that future research must broaden its focus from an individualistic (leader) perspective to include followers and the organization.
For Latino leadership development, this means that, in addition to ensuring that high-potential Latinos are afforded development opportunities, non-Latinos in the organization may need development that challenges their potentially narrow views of what a leader looks and sounds like. Thus, it is not just Latinos who need to raise their awareness of traditional Latino cultural scripts but also their supervisors and potential followers. This will help ensure that proper attributions are made for specific behaviors and that untested assumptions about leadership potential or performance do not go unchallenged.
A second area of future research concerns the balance between developing leaders who align with an organization’s current goals and the need for leaders who provide a unique perspective that can lead to innovations in new products and services. Changing customer expectations as well as demographic and technological shifts call for different ways of doing things. This suggests that organizations need to be more open to different leadership styles as well as to individuals with different backgrounds. Research is needed to identify how an organization successfully manages this transition as well as the culture, policies, and practices that reinforce such a shift.
Research is also needed to explore the impact of culturally relevant leadership development. As described, the CEDP relies heavily on contextualizing leadership behaviors within Latino culture. Participants report that this aspect of the program helps them build self-awareness and become more intentional about how they behave and how they are perceived. Future studies should examine this process and the factors that determine who benefits most. In addition, it is important to identify the most relevant cultural factors to incorporate into development experiences. Finally, it would be interesting to determine whether contextualizing leadership development within participants’ culture helps them internalize a leader identity (DeRue, Ashford, & Cotton, 2009).
Day and Sin (2011) documented the existence of different leadership development trajectories across individuals. This line of research should be pursued further, as it is consistent with the finding at some SMU Latino Leadership Initiative partner companies that Latinos tended to reach middle management at significantly higher rates but then stalled there. It is possible that the factors that help propel an individual through different career stages vary. Identifying these factors will go a long way toward ensuring that all sources of leadership talent within organizations are developed.
Conclusion
Organizations are only as effective as the quality of the human capital they possess. Thus, it is critical that no source of talent goes untapped, especially in our rapidly changing business environment. With the increase in the number of Latinos in the labor force, organizations need to ensure that they are attracting, selecting, and developing this mission-critical talent if they are to remain competitive. The techniques outlined in this chapter as well as the specific approach developed in the CEDP can help organizations do just that.