
Trial Advocacy Competence: The Judicial Perspective

Published online by Cambridge University Press: 20 November 2018


Abstract

The competence of attorneys, and more particularly the competence of trial lawyers, is currently an issue of considerable controversy both within the judiciary and within the legal profession itself. This article addresses the issue of attorneys' courtroom performance as observed by judges. Drawing on data from a survey administered to state and federal judges sitting in trial courts of general jurisdiction, this article explores the relationship of demographic and judicial characteristics to various judicial evaluations of advocacy competence. While it provides important insights into the present state of trial advocacy competence, it also constitutes a systematic examination of the judiciary's perceptions of the trial bar.

Type: Research Article

Copyright © American Bar Foundation, 1978


References

1 Burger, Warren E., The Special Skills of Advocacy: Are Specialized Training and Certification of Advocates Essential to Our System of Justice? 42 Fordham L. Rev. 227 (1973).

2 Burger, Warren E., Remarks on Trial Advocacy: A Proposition, 7 Washburn L.J. 15 (1967).

3 See the following articles by Irving R. Kaufman: The Trial Lawyer: The Legal Profession's Greatest Asset, 50 A.B.A.J. 25 (1964); The Court Needs a Friend in Court, 60 A.B.A.J. 175 (1974); Advocacy as Craft-Law School Is More Than a "Paper Chase," 60 A.B.A.J. 802 (1974); and Does the Judge Have a Right to Qualified Counsel? 61 A.B.A.J. 569 (1975).

4 Bazelon, David L., The Defective Assistance of Counsel, 42 U. Cin. L. Rev. 1 (1973). Traditionally, appellate courts have been resistant to ineffective representation claims as the bases for overturning criminal convictions. Richard Brady, with Rory Albert, Ineffective Representation as a Basis for Relief from Conviction: Principles for Appellate Review, 13 Colum. J. of L. & Social Prob. 1 (1977). Recently, a California District Court of Appeals ordered a new trial in the multiple murder conviction of Juan Corona on the ground that defense counsel had failed to represent the defendant competently. It should be noted, however, that an unusual conflict of interest situation was present in this case. As the appeals judges observed, "In actuality, in the instant case we are confronted with the unprecedented situation where trial counsel assumed a position virtually adverse to his client and totally unsupported by strategic or tactical considerations took deliberate steps to thwart the development of viable defenses available to the accused." N.Y. Times, May 9, 1978, at 1, col. 5.

5 Burger, supra note 1, at 234. More recently Chief Justice Burger told a British Commission that he had asked state and federal court judges about lawyers who appeared before them and had found that "the most pessimistic view was that only 25 percent of the lawyers appearing in our courts were really qualified to represent their clients properly." "Some judges placed it as high as 75 percent. Somewhere near the midway mark is probably correct, and it will vary to some extent from place to place." N.Y. Times, Dec. 1, 1977, at B9.

6 Clare, Robert L., Minimum Qualifications for Admission to the Federal Second Circuit, 6 ALI-ABA CLE Rev., Mar. 21, 1975, at 3.

7 Kaufman, Irving R., Assuring Qualified Advocates in Our Courts, Address to American Bar Association Annual Meeting, Honolulu, Aug. 15, 1974.

8 Smith, Chesterfield, The Legal Profession and Professional Competence, Address to National Conference of Bar Presidents, Houston, Texas, Feb. 2, 1974.

9 A. B. Conant, Jr., Judicial Survey on Quality of Advocacy: The Jury Returns, 12 Docket Call 1 (Spring 1977).

10 The ABA's General Practice Section, Litigation Section, Insurance, Negligence and Compensation Section, and Young Lawyers Division made financial contributions to fund this project, for which the American Bar Foundation and the author express deep appreciation. The principal source of funding, however, was the American Bar Endowment in its general support of the Foundation's program.

11 This focus on performance made such methods as testing advocates on their potential for adequate courtroom performance (e.g., whether the attorneys know how to adequately prepare a case for trial rather than whether they actually do so) inappropriate. The relationship of such tests to actual courtroom performance is unknown and could be the subject of an important ancillary study in itself.

12 The evaluation of courtroom performance of trial advocates necessarily involves some type of observer. Although the evaluations made by observers may be structured, they remain subjective. The subjective nature of human judgment is both a strength and a weakness. It permits evaluations over time of performances in trials that include very different materials and situations. However, other observers (or the same observer at another time) might evaluate the same performances differently.

13 See text at notes 1-4 supra. Even though most of the criticism of trial advocates has come from appellate judges, trial judges' firsthand observations seem a more promising source for determining the extent to which and ways in which incompetent trial advocacy manifests itself in the courtroom.

14 Even active trial advocates are likely to have a much smaller base of experience in observing other advocates.

15 Demographic variables are primarily descriptive attributes of the court. They include such things as the type of court, its geographic location, the size of the population served, and characteristics of the region it is located in, such as relative "legal density."

16 Respondent characteristics are principally personal attributes of the respondent, but the category also includes certain attributes of the respondent's individual court (i.e., not the type of court where the respondent sits but characteristics of the particular court). Thus, while the category mainly comprises things like respondents' legal experience, it does include items like the caseload of the judge's court. While demographic and respondent characteristics are analytically distinct, in some cases they are empirically interrelated.

17 Judgmental variables include the full range of opinions and evaluations provided by the judges. They are the focus of the study, the material to be "explained" in terms of the demographic or respondent characteristics.

18 The full survey questionnaire contained 32 questions. The questionnaire is available from the author on request.

19 Current lists of judges sitting in courts of general jurisdiction were obtained from the administrative office of the federal courts and from state court administrative offices. In a few cases where we could not reach the state court administrative office, the state bar association provided the list of judges for that state. One hundred state judges from Illinois who had been randomly selected to pretest the survey questionnaire were omitted from the survey population.

20 Twenty-nine of these responses were from federal judges and 56 from state judges.

21 Twenty-eight of the undeliverable group were state judges and three were federal judges.

22 These response rates do not include the 21 responses on which the question about court designation was not answered. Thus the reported response rates for state and federal judges are slightly underestimated.

23 The refusal rate was 1.7 percent for state judges and 5.8 percent for federal judges. The stated reasons for refusing to participate ranged from dislike or distrust of surveys to lack of time due to overwhelming caseloads.

24 Such a test examines whether respondents who hold some particular view or views were predisposed to respond to the survey in significantly greater numbers than those with other views.

25 It was possible to make such matched assignments for over 85 percent of the nonresponding judges in the random sample. It was not possible, however, to ascertain whether the personal contact was actually made in all cases.

26 About half of this 27 percent response rate was probably attributable to the personal contact aspect of our procedures with this group.

27 Respondents were asked whether the population of the area served by their courts was: under 10,000; at least 10,000, but less than 50,000; at least 50,000, but less than 100,000; at least 100,000, but less than 500,000; at least 500,000, but less than 1,000,000; at least 1,000,000.

28 The list of categories used:

a) Criminal felony

b) Criminal misdemeanor (nontraffic)

c) Juvenile

d) Traffic

e) Corporate (antitrust, securities, shareholder litigation, etc.)

f) Commercial (contracts, lien foreclosures, etc., but not including collections)

g) Collections

h) Personal injury and property damage

i) Products liability and other consumer problems (excluding collections)

j) Civil rights (including EEOC)

k) Condemnation and land titles

l) Domestic relations

m) Probate and guardianship

a. Excludes 99 judges who gave no information on caseload distribution or gave information that could not be categorized in percentages.

Note: Total percentage differs from 100 percent because of rounding.

29 Answers to this question about the competence of attorneys appearing before the judges in the year preceding the survey were expected to total 100 percent (representing the total reference group of attorneys for each judge). Unfortunately they did not. The following measures were adopted to deal with this inconsistency.

Two alternative models of the behavior of respondents were used to organize the data. The first model assumed that the five responses of each judge were essentially independent of each other, i.e., that the failure of the judges to present totals equal to 100 percent was a failure to understand the demand that they do so. The second model assumed that the five responses were directly related to each other and that the failure of judges to present a total of 100 percent was essentially an arithmetic error.

In the first model, a statistic was computed that represented the "predominance of competence," that is, the ratio of attorneys rated as meeting minimum standards of competence to attorneys not meeting minimum standards of competence. It deserves note that the fraction actually does represent "predominance of competence": a respondent who presented "competent" responses that summed to 100 percent and "noncompetent" responses that summed to 50 percent would be placed in the category that included 66.7 percent, indicating that two-thirds of his or her total response was in the competent range.

In the second model, the simple sum of the categories of minimal competence was computed. Thus, in the example used in the last paragraph, the respondent would be placed in a category that indicated that the sum of his or her "competent" responses was 100 percent. The fraction computed under the first model was used in two ways:

1. Frequency distributions of responses by decile and quintile categories were assembled using cutoff points determined by X/(100 - X), where X represents the least upper bound of a given category. These frequency distributions were used for description of the data presented here, as well as contingency table analysis in conjunction with other variables presented in the section "Analysis of Relationships Among Variables" infra.

2. The fraction was used for various statistical tests in its uncategorized form.

The sums computed under the second model were used in replications of the statistical tests that were performed using the first model's uncategorized fractions. The significant findings in the replications were the same as those found under the first model (i.e., no significant findings appeared under one model without appearing under the other).
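
For concreteness, the two scoring models of note 29 can be sketched as follows. This is a minimal illustration only; the function names and the example figures are ours, not part of the survey instrument.

```python
def predominance_of_competence(competent_pct, noncompetent_pct):
    """Model 1: share of a judge's total response falling in the
    'competent' categories, expressed as a percentage
    (competent responses of 100 and noncompetent of 50 -> 66.7)."""
    total = competent_pct + noncompetent_pct
    return 100.0 * competent_pct / total if total else None

def category_cutoff(x):
    """Ratio form of a decile/quintile cutoff whose least upper bound
    is X percent: X / (100 - X), as described in note 29."""
    return x / (100.0 - x)

def summed_competence(competent_pct):
    """Model 2: simple sum of the 'competent' category percentages."""
    return competent_pct

# The worked example from the text: competent responses summing to 100,
# noncompetent responses summing to 50.
print(predominance_of_competence(100, 50))  # ~66.7 under model 1
print(summed_competence(100))               # 100 under model 2
```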

30 If more than one attorney appeared for either side, the judges were directed to rate the chief counsel.

31 Interpretation of composite scores for defendant and plaintiff attorneys in terms of averages of individual scores is potentially less misleading than it might appear to be. Although a score of 10 may reflect various combinations of individual scores covering the full spectrum from "exceptionally competent" to "predominantly incompetent" (instead of five attorney scores of "competent"), this is actually relatively improbable. More than 80 percent of all individual attorney ratings were in the range of full competence described by the categories "exceptionally competent," "highly competent," and "competent." The proportion of "predominantly incompetent" responses was under 3 percent of all responses. This implies that variations from the average individual scores that correspond to composite scores of the 10 through 15 range are probably small, and an interpretation of the composite scores in terms of averages is therefore not likely to be misleading in the majority of cases.

32 If all responding judges had rated 10 attorney appearances, there would have been 14,420 attorney ratings. Of the 1,731 "missing" responses, 326 were responses that were not codable (e.g., multiple ratings of a single attorney appearing in a particular trial). The remaining 1,405 were omitted ratings.

33 If this is true, then ratings of attorney competence at the time the trial occurs should produce an even smaller proportion of negative ratings.

Note: Totals may differ from 100 percent because of rounding.

34 The rates of nonresponse and noncodable responses (which might be expected to rise in light of the limited experience many judges have with rural practitioners) were only slightly higher for this question than for the others, underscoring the necessity for caution in interpreting the results. Some judges' evaluations of rural attorneys may reflect general opinions formed in contact with other members of the judiciary and the legal profession rather than direct experience.

35 The 4.4 percent of judges who failed to answer the question or who provided answers that could not be coded were omitted from these calculations.

36 The questions were constructed without offering such an option to avoid biasing responses by introducing a specific reference to this form of judicial conduct. A primary result of this construction, however, is that unreported cases of judicial intervention might have contributed to lower estimates of prejudicial activity. These figures must be viewed with that fact in mind.

37 Twelve percent of respondents indicated that the criminal question was inapplicable; another 5 percent did not answer; and 0.5 percent provided answers that could not be coded.

38 Uncodable responses for the civil trials question totaled 5 percent; 4.9 percent omitted their responses altogether; and 11.0 percent indicated that the question was inapplicable.

39 Each factor listed (up to a maximum of 6 per respondent) was assigned to one of 28 coding categories that had been developed through analysis of pretest and main survey data. The coded factors were later collapsed into the 6 categories listed in table 8.

40 The combined preparation category includes responses coded in the following categories: preparation, organization, knowledge of facts of case, knowledge of law applicable to case, and knowledge of alternatives to sentencing.

41 The combined experience and training category includes responses coded in the following categories: education or training, experience, general knowledge of law, and knowledge of rules.

42 The combined presentation category includes responses coded in the following categories: presentation of case, argumentation, brevity, communication or oral skills (unspecified), general courtroom acuity, and client control.

43 The combined personal category includes responses coded in the following categories: diligence, courtroom etiquette, ethical behavior, personality, physical appearance, and punctuality.

44 The combined intellectual category includes responses coded in the following categories: analytical ability, ability to isolate real issues, intelligence, writing skills, and objectivity.

45 Although the respondents were asked to base their answers only on their experiences of incompetence in contested trials, a preliminary analysis indicated the possibility that the responding population had not universally followed these instructions. A check of the responses' validity was therefore implemented, dividing judges into two groups based on their responses to prior questions: first, those who had indicated both that they presided over contested trials and that they had seen some incompetent attorneys in contested trials, and second, those judges who answered the question on types of incompetence without meeting these criteria. Statistically significant differences between the mean responses of the two groups were found for each of the 13 items in the question. Consequently, the responses of the 152 judges who failed to meet the criteria of validity were excluded from further analysis.

46 Mean percentages from the categories do not reflect differences in the rate of response to the 13 items. While the response rate varied from item to item, a statistical analysis comparing these response rates with the mean response rate indicated that no item had a response rate significantly larger or smaller than the average. In light of this fact, response rates were not taken into account in further comparisons of average scores.

47 The mean percentage presented with each subtable is based on the ungrouped data, not the collapsed distributions.

48 The average or expected distribution was estimated by computing an average of the proportion of responses in each of the quintile categories across the 13 items. The relative proportions of this "average" distribution were then used as expected values and compared with each of the actual proportional distributions across the quintile categories in each of the 13 items. A χ2 (chi-square) statistic was computed (with 4 degrees of freedom) for each of the 13 items. Where significant χ2 values were found (at α = .05), the items were examined to determine where the greatest discrepancy from the estimated category proportions was to be found.
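
A minimal sketch of the comparison described in note 48, assuming the observed response counts per quintile are available for each of the 13 items; the data below are placeholders, not survey figures.

```python
import numpy as np
from scipy.stats import chisquare

# Placeholder data: observed response counts in the five quintile
# categories for each of the 13 items; real counts come from the survey.
rng = np.random.default_rng(0)
counts = rng.integers(20, 200, size=(13, 5))

# "Average" distribution: mean proportion of responses in each quintile
# across the 13 items, used as the expected pattern (note 48).
props = counts / counts.sum(axis=1, keepdims=True)
expected_props = props.mean(axis=0)

for i, observed in enumerate(counts):
    # Scale expected proportions to this item's total; 4 degrees of freedom.
    expected = expected_props * observed.sum()
    stat, p = chisquare(f_obs=observed, f_exp=expected)
    if p <= 0.05:
        print(f"item {i + 1}: chi2 = {stat:.1f}, p = {p:.4f}")
```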

49 The deviations of the distributions of "preparation" and "ethics" from the estimated average were not surprising. The high mean score of "preparation" reflects the concentration of responses in the highest category, while the low mean score of "ethics" results from a distribution skewed toward the lowest quintile.

50 The "inadequate knowledge of rules of evidence" and "inadequate preparation" categories were the exceptions, with a plurality of responses in the highest quintile (81-100 percent).

51 "Verbal rebuke in open court," "communication through friend," "complaint to firm," and "written rebuke" followed in decreasing order of frequency of affirmative responses, with a majority of the responses being negative in each of these categories. A total of 102 judges (or about 7 percent of respondents) indicated that they had used some "other" cautionary technique.

52 An additional question asked judges to indicate which methods had been used most frequently. "Instruction or advice in chambers" was mentioned far more frequently than any other method. Almost two-thirds of the responding judges (66 percent) indicated this as the most frequent choice. The method mentioned with second greatest frequency was "rebuke in chambers" (18 percent). "Instruction or advice in court" led the remaining categories, but all of these items were mentioned by less than 10 percent of the respondents.

53 The remaining 28 answers, which covered the range from 4 instances of discipline to 15 instances, represented less than 10 percent of the judges who provided answers that could be quantified. Five judges provided answers that could not be quantified but that indicated a relatively small number of instances of formal discipline.

54 A general discussion of the statistical techniques used throughout this section is contained in appendix A. The level of significance used in all tests was ≤ .05, i.e., a probability of error of 5 percent or less was accepted.

55 See appendix B.

56 See note 27 supra.

57 The population categories were: at least 11 million; at least 6 million, but less than 11 million; at least 3.4 million, but less than 6 million; and less than 3.4 million.

58 The regional groupings were: North East (Maine, New Hampshire, Vermont, Massachusetts, Rhode Island, Connecticut); Mid-Atlantic (New York, New Jersey, Pennsylvania); East North Central (Ohio, Indiana, Illinois, Michigan, Wisconsin); West North Central (Minnesota, Iowa, Missouri, North Dakota, South Dakota, Nebraska, Kansas); South Atlantic (Delaware, Maryland, District of Columbia, Virginia, West Virginia, North Carolina, South Carolina, Georgia, Florida); East South Central (Kentucky, Tennessee, Alabama, Mississippi); West South Central (Arkansas, Louisiana, Oklahoma, Texas); Mountain (Montana, Idaho, Wyoming, Colorado, New Mexico, Arizona, Utah, Nevada); Pacific (Washington, Oregon, California, Alaska, Hawaii).

59 The collapsed regional groupings were: North East (North East, Mid-Atlantic); North Central (East North Central, West North Central); South (South Atlantic, East South Central, West South Central); West (Mountain, Pacific).

60 "Legal density" was computed from data on population per lawyer by states contained in table 260 of the Statistical Abstract of the United States, 1972, at 159. The categories of legal density were rank-ordered quintiles. U.S. Bureau of the Census, Statistical Abstract of the United States: 1972 (93d ed. Washington, D.C.: Government Printing Office, 1972).

61 See appendix B.

62 The federal respondents exhibited several differences in respondent characteristics, including fewer cases and trials in the previous year, a greater number of years as members of the bar, and a larger number of years in private practice. There were no statistically significant differences in governmental, "in house," or teaching experience. In the larger context of the study, however, these distinctions are primarily objects of curiosity rather than important substantive points.

63 The full range of variables analyzed in other comparisons was again used. See appendix B.

64 The only significant differences were in respondent characteristics, where there were differences in the average number of years in the present judicial position, the number of years of total judicial experience, and the number of years of bar membership of respondents in both the state and the combined samples. Federal judges showed no significant differences in any of the variables analyzed.

65 Each question was analyzed by letting the three response choices define the subgroups to be compared. Differences in the average scores of the three subgroups on other variables were tested for statistical significance. The respondent characteristics and judgmental variables included the full list in appendix B.

66 Our lack of information about the strictness of standards used to evaluate competence, however, precludes testing this hypothesis.

67 The lowest average percentage of prejudicial attorneys for "no difference" respondents is consistent with similar relationships for prosecution/defense counsel and appointed/retained counsel comparisons.

68 Longer membership in the bar suggests that judges who chose "sole" practitioners became members of the bar at a time when a higher proportion of the bar was sole practitioners. We do not know, however, whether the judges in this group had been sole practitioners prior to taking the bench.

69 Again, the explanation suggested at note 66 supra may be plausible.

70 See p. 116 supra.

71 The sample was broken down into three groups based on the judges' choices on this item, and the full roster of judgmental variables and respondent characteristics used in the other analyses was studied. See appendix B.

72 The variables studied included:

a) Summary scores of the percentage of competent attorneys appearing before the respondents

b) PA and DA scores

c) Recommendations of measures to ensure the competence of the trial bar

d) Elements of judicial experience

e) The use of formal disciplinary measures against incompetent attorneys

73 Four series of tests were carried out, each series comparing average scores on several judgmental and respondent characteristics of judges who answered "no" to the specific remedial measure with the scores of those who answered "yes." The judgmental and respondent characteristics used are listed in appendix B.

74 This consistency in attitude cannot be used to impute causality, however. The assumption that receptiveness to certification or apprenticeship is generated by witnessing a less competent trial bar may be unjustified. The data support only the association between these variables; they do not show whether the association reflects an underlying causality or whether, for example, a more rigorous standard of judgment or a more generally negative view might generate both attitudes.

75 See appendix B.

76 See notes 40-44 supra.

77 See note 29 supra.

78 We performed statistical tests of both the probability that a statistically significant relationship exists and the magnitude of the relationship (how strongly scores on a "predictor" variable affect or "predict" scores on the predicted variable). It should be noted that the status as predicted and as predictor must be assigned a priori, based on the substantive logic of the study rather than on statistical considerations.
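
Note 78 distinguishes a test of significance from a measure of magnitude; appendix A describes the techniques actually used in the study. Purely as a generic illustration of that distinction, one common pairing is a chi-square test of independence with Cramér's V as the effect size. The contingency table below is placeholder data, not survey results.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Placeholder contingency table: rows = categories of the "predictor"
# variable, columns = categories of the predicted (judgmental) variable.
table = np.array([[30, 45, 25],
                  [20, 50, 30],
                  [10, 40, 50]])

chi2, p, dof, _ = chi2_contingency(table)

# Cramer's V: strength of the association, on a 0-1 scale.
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

print(f"significance: p = {p:.4f}; magnitude: V = {cramers_v:.2f}")
```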

79 See text at notes 7 and 8 supra.

80 See Steele, Eric H. & Nimmer, Raymond T., Lawyers, Clients, and Professional Regulation, 1976 A.B.F. Res. J. 917; F. Raymond Marks & Darlene Cathcart, Discipline Within the Legal Profession: Is It Self-Regulation? 1974 U. Ill. L.F. 193 (suggesting that reviewing performance might require a second agency, since existing disciplinary agencies may not be the proper models for approaching the problem of competence).

81 See p. 105 supra.