Choose Your Judges



How Choose Your Judges makes its recommendations

There are a number of different organizations and websites that seek to provide information to voters about judicial candidates, and each of them presents voters with different kinds of information. They can be broken down into three categories: openly partisan recommendations, independent candidate evaluations, and endorsement summaries. The first category is of little use to anyone seeking neutral, substantive information about judicial candidates—these sites have a stated goal of getting rid of one type of judge and electing or retaining another, and they tend to present skewed versions of a judge’s record in order to further those goals. Unless a voter happens to share the exact goals of the recommender, these recommendations are not likely to be very helpful. For an example, see California’s Judge Voter Guide.

Independent candidate evaluations are conducted by groups—usually state bar associations or local newspapers—that investigate each of the candidates they evaluate. The evaluators typically send out surveys for candidates to fill out regarding their qualifications and judicial philosophy, and they may also send out surveys to attorneys who have practiced in front of the candidate. In addition, these groups may interview each candidate in person. Frequently the recommendation will state whether the candidate is “well qualified,” “qualified,” “adequate,” or “unqualified.” The criteria used by these evaluators are politically neutral—that is, the candidates are evaluated on such characteristics as disposition, credentials, efficiency, and so on. An example of this kind of independent candidate evaluation is Colorado’s Know Your Judge.

Finally, endorsement summaries simply collect and disseminate the recommendations made by other groups. Some of these groups make openly partisan recommendations or represent special interests, such as the Black Women’s Bar Association or the Puerto Rican Bar Association. Others are neutral independent candidate evaluators, such as a state bar association or a local newspaper. These summaries can be very useful to a voter who agrees (or strongly disagrees) with the principles of a certain special interest group, since the voter can see how his or her favored group ranked each candidate. Two excellent examples of endorsement summaries are Chicago’s Vote for Judges and Washington state’s Voting for Judges.

Choose Your Judges is a unique website in that we do not conduct surveys about each candidate, nor do we simply rank a candidate as “qualified” or “unqualified.” Instead, we conduct our own research into a candidate’s qualifications and political ideology, and then provide each voter with a personalized recommendation based on the voter’s own preferences. This requires three steps. First, we must determine a judge’s political or ideological predispositions. In order to do so, we independently evaluate a candidate’s prior voting record in ten different categories in an attempt to determine patterns in the candidate’s voting on specific issues. Second, we must determine the voter’s preferences; for this, we ask each voter a series of questions (the quiz) and record their answers. Finally, we must combine the previous two pieces of data in order to provide a recommendation.

A. Evaluating voting patterns

Surveys of candidates are useful, but they can only do so much to illuminate a candidate’s true preferences. The judicial canons of ethics prohibit candidates from making certain statements during a campaign; for example, they are not allowed to say how they will vote on a certain case if it comes before them. But even within these limits, judicial candidates tend to provide very little information about their own beliefs and ideology. For example, a recent candidate for the Ohio Supreme Court was asked how his training, professional experience, and interests had prepared him for the position he was seeking, and he responded:

“I believe my training, professional experience and interest have prepared me well to serve as Justice of the Supreme Court of Ohio. My background is diverse and my personal life experiences will enable me to approach the work of the Court with a unique perspective, understanding and empathy.”

It is hard to imagine that this statement would provide anything even remotely useful to a voter who is trying to decide whether or not to vote for him. Unfortunately, these bland statements supporting fairness and dedication are the norm for judicial evaluation questionnaires. Choose Your Judges therefore adopted a different strategy: we examined and analyzed the candidate’s judicial voting record.

Although this is a novel idea for evaluating judicial candidates, there are many studies of judicial voting records in the political science field. Over the past fifty years there have been a number of political science articles reviewing the voting behavior of federal judges (who are unelected), categorizing their decisions based on outcome in different areas of law, and correlating their decisions with demographic variables or the party affiliation of the President who appointed them.

Choose Your Judges has adopted the methods—and some of the categories—of this political science research and applied them to state court judges who are involved in elections. If a candidate is a sitting judge—that is, if he or she is an incumbent or a lower court judge seeking a higher office—we gather data about that candidate’s voting patterns. The first step in gathering this information is to discard all unanimous opinions that the judge has participated in. In theory, unanimous opinions say little to nothing about a judge’s policy preferences or ideology, since it is likely that the law in the case was unambiguous and mandated a certain result. Thus, we record only the non-unanimous opinions that the judge has participated in, since in these cases the law was sufficiently ambiguous that a judge’s personal policy preferences could be detected—when looking at the same facts and the same law, some judges decided the case one way and some decided it the other way.
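
In rough form, this filtering step could be sketched as follows (the record structure and field names here are purely illustrative, not our actual database schema):

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """One appellate opinion a judge participated in (hypothetical record)."""
    case_id: str
    votes_majority: int   # judges joining the majority
    votes_dissent: int    # judges dissenting

def non_unanimous(opinions):
    """Keep only divided decisions; unanimous opinions tell us little about ideology."""
    return [op for op in opinions if op.votes_dissent > 0]
```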

After discarding the unanimous opinions, we examined hundreds of opinions for each judge, and categorized each opinion by subject matter: criminal procedure, substantive criminal law, medical malpractice, tax, and so on. We then looked for patterns in voting for each category—whether, in these close cases, a judge was more likely to vote in favor of the prosecutor rather than the criminal defendant, or the large corporate defendant rather than the individual plaintiff.

For every judicial candidate, we also gathered “pedigree” information and recorded it in our database. This essentially includes the candidate’s qualifications—whether the candidate has previously served as a judge; whether he or she has prior political experience; or whether he or she has ever worked as a prosecutor or defense attorney. It also includes the party affiliation of the candidate and some independent neutral evaluations of the candidate.

B. The quiz

Voters who come to our website are not directly presented with information about the judicial candidates. Instead, they are asked a series of questions about the type of judge they would prefer. The first section of the quiz deals with the pedigree information—for example, whether the voter would prefer a candidate who has prior experience as an elected official, or whether the voter would prefer a candidate who has worked as a prosecutor. The second section of the quiz deals with substantive issues—for example, whether the voter would prefer a candidate who tends to vote against large corporations in personal injury cases.

For each question, whether it has to do with qualifications or substantive decisions, we ask the user to tell us how important that issue is to them. We then match the voter’s answer to the information about each candidate in our database, and add (or subtract) a number from the candidate’s “score” based on whether the candidate’s background or voting pattern coincides with the voter’s preference (see section C below for details). At the end of the quiz, the website calculates the total score for each candidate and returns a recommendation based on the voter’s preferences.
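
The arithmetic behind this step is simple enough to sketch. The following example is illustrative only (the variable names and sample numbers are invented, not drawn from our actual data):

```python
def candidate_total(raw_scores, multipliers):
    """Total score for one candidate: each question's raw score (how well the
    candidate matches the voter on that question) weighted by the voter's
    strength-of-preference multiplier (-2 to +2). Unanswered questions count as 0."""
    return sum(raw_scores[q] * multipliers.get(q, 0) for q in raw_scores)

# A voter who cares strongly about prosecutorial experience but not about party:
print(candidate_total({"prosecutor_experience": 1.2, "party": -2.0},
                      {"prosecutor_experience": 2, "party": 0}))   # 2.4
```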

One challenge with this method is that judges—unlike legislators—do not make decisions purely on policy grounds or ideological preference. As noted above, this is why we only include non-unanimous decisions in our data set. However, it is important to communicate to the voters—most of whom, presumably, are not lawyers themselves—the fact that a judge’s policy preferences have only a limited influence on his or her decisions. For this reason, we begin the substantive section of the quiz with a brief explanation:

When an appellate judge reviews a lower court decision, he or she is usually bound by established legal principles which control the outcome of the case. However, if the legal question is a close one, or the law is ambiguous, a judge has the authority to interpret the law. Over the years certain patterns can be discerned in a judge’s voting record in these close cases. In the following types of cases, what side would you like your judge to vote on?

Another challenge with this method is choosing the right language to present the substantive issues to the voter; we must be careful to avoid value-laden words. Instead of asking “Would you prefer a judge who tends to protect the rights of a criminal defendant?” or “Would you prefer a judge who tends to support law enforcement?”, we opt for a neutral description of the type of case:

“Cases involving questions of criminal procedure (for example, police authority to search, defendant’s right to an attorney, Miranda issues).
Would you prefer a judge who tends to vote in favor of the prosecutor’s position or in favor of the defendant’s position?”

Finally, we face a problem when we have data about one candidate in a race but not the other. Frequently an incumbent judge is running for re-election against an attorney who has never been a judge and thus has not established any kind of voting record. Luckily, we can still gather pedigree information about each candidate, so the voter will have some information about the challenger. We also include substantive information about the incumbent so that the voter can determine how well his or her preferences match up with the incumbent’s voting record. In these cases, the quiz results will still provide a recommendation to the voter, but they will include a disclaimer that not all of the information about the challenger is available.

In spite of these challenges, we felt the quiz method was the most appropriate tool for assessing a voter’s preferences and providing him or her with recommendations. First, asking the user to take a quiz is more user-friendly and engaging than presenting the voter with a series of tables containing names and information about each candidate. Second, we hope that by requiring voters to answer the questions on the quiz, we are encouraging them to think critically about the reasons why they have specific preferences: for example, why they might prefer a judge with prior experience as a defense attorney over one who has held political office, or whether a judge should tend to defer to the legislature when deciding if a statute is constitutional. Third, the interactive nature of the quiz should lead to more accurate results than any other method—when the website learns exactly what the voter believes and how strongly he or she cares about each issue, the algorithm can make refined comparisons between candidates. And finally, the voter must answer the questions before knowing which candidates prefer which positions, thus removing any possible preconceptions they may have about a specific candidate or a specific political party. In this way, we can gather an accurate idea of what kind of candidate the voter prefers without the answers being tainted by the voter knowing which candidate has those qualifications or agrees with those positions. The results may end up surprising some voters, who would not expect their preferences to result in the given recommendation.

C. Processing the quiz

After we gather the voter’s preferences, we compare each of those preferences to each relevant candidate in our database. Every data comparison has a “raw score” depending on how well the candidate’s data matches the voter’s responses on the quiz. In addition, for every response, we use the voter’s “strength of preference” as a multiplier. Thus, if the user responds that a particular issue has a “strong positive influence,” or is “very important,” we multiply the raw score for that reply by 2; “minor positive” or “somewhat important” results in a multiplier of 1, and “not at all” is a multiplier of 0 (that is, the response will not affect the total score for that candidate). Likewise, a “minor negative influence” results in a multiplier of -1 (that is, we will subtract the raw score for this response from the candidate’s total score), and “strong negative” is a multiplier of -2. Finally, if the user fails to answer a specific question, the strength of preference is presumed to be zero, and the score of the candidate remains unchanged.
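
In code form, the strength-of-preference multiplier amounts to a simple lookup (the response labels below paraphrase the quiz wording; they are not the exact strings used on the site):

```python
# Strength-of-preference responses mapped to multipliers, as described above.
PREFERENCE_MULTIPLIER = {
    "strong positive / very important": 2,
    "minor positive / somewhat important": 1,
    "not at all": 0,
    "minor negative": -1,
    "strong negative": -2,
}

def multiplier_for(response):
    """Unanswered questions default to a multiplier of 0, leaving the score unchanged."""
    return PREFERENCE_MULTIPLIER.get(response, 0)
```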

In determining the raw score for each response, we had two primary considerations. First, we wanted to ensure that no one response was worth a disproportionate number of points—that is, each response should be equally weighted at first and only become more (or less) important if the voter indicates a strong (or negligible) strength of preference. Thus, in most cases the raw score for a response is equal to the number of standard deviations the candidate falls above or below the average for judicial candidates. In practice, this meant that almost all raw scores fell within the range of negative two to positive two. For example, when calculating a candidate’s prior practice experience, some candidates may have had only five years of prior experience, while others may have had twenty or more. Giving the more experienced attorney twenty extra points would skew the results by placing undue emphasis on this particular factor. In order to keep the raw scores within an acceptable range, therefore, we calculated the standard deviation for the amount of practice experience of all judicial candidates and divided each candidate’s years of experience by that number.

The second consideration had to do with the problem of “one-sided” data. In certain situations, data is available for only one candidate in a race. This may be because the opponent has never been a judge and therefore has no prior decisions to record, or (more frequently) because the election is a retention election, with only one candidate on the ballot. In these cases the voter’s preferences should still be matched up with the one candidate whose qualifications and preferences are known, but the candidate may deserve a negative raw score if his or her data is contrary to what the voter prefers. In other words, we cannot use zero as the lowest possible raw score for each response, because then a candidate would end up with a positive score even if his or her qualifications and preferences had twenty negative correlations and only one positive correlation with the voter’s preferences. In such a situation, the voter should clearly choose not to retain the candidate (or, if it is a contested election, the voter would most likely want to choose the opponent); therefore, the candidate’s total score should be less than zero. Thus, we tried to set the raw score at zero if a candidate had an “average” level of correlation with the voter’s preference. For example, the average candidate had fourteen years of practice experience before running for judicial office. Thus, a candidate with only five years of experience deserves a negative raw score for practice experience, since his or her five years is well below the average for judicial candidates. In this example, then, we must first subtract fourteen from the candidate’s years of practice experience, to ensure that candidates only get positive points for this factor if they are in fact above average in practice experience. (We then divide by the standard deviation, as described above, in order to reach a raw score within the appropriate range.) Thus, the formula we used to calculate the raw score in most cases was (# of years - average # of years) / (standard deviation of # of years).
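
The standardization formula can be written out as a short function. The example values below use the practice-experience figures given in the next section (an average of fourteen years and a standard deviation of 6.7); the function itself is only a sketch of the calculation described above:

```python
def standardized_raw_score(value, average, std_dev):
    """Raw score = (value - average) / standard deviation, so an average candidate
    scores 0 and nearly all scores fall between -2 and +2."""
    return (value - average) / std_dev

# Prior practice experience (average 14 years, standard deviation 6.7):
print(standardized_raw_score(5, 14, 6.7))    # about -1.34 (well below average)
print(standardized_raw_score(20, 14, 6.7))   # about  0.90 (above average)
```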

Some of the responses, however, did not fit into this formula—for example, party affiliation or prior political experience. For these variables, we assigned a number within the range of -2 to 2 in order to keep the numbers comparable. By keeping the raw scores for any one response within an acceptable range, and by allowing a voter to increase or decrease the raw score for any given response based on the strength of preference, we hoped to minimize the arbitrariness of these choices.

The following is an explanation of how we calculate the raw score for each response:

Pedigree information

(1) Party affiliation

The raw score is +2 points if there is a match between the user’s preference and the candidate’s affiliation, and -2 points if there is not.

(2) Prior practice experience

The average amount of practice experience for a judicial candidate is fourteen years. Therefore, any amount of practice experience above fourteen deserves a positive score, and any amount below fourteen deserves a negative score. The standard deviation for this variable is 6.7. Thus, the raw score for this variable is the total years of practice experience, minus fourteen, divided by 6.7.

(3) Prior judicial experience

The average amount of judicial experience for a judicial candidate is approximately twelve years. Therefore, any amount of judicial experience above twelve deserves a positive score, and any amount below twelve deserves a negative score. The standard deviation for this variable is 9.6. Thus, the raw score for this is the total years of judicial experience, minus twelve, divided by 9.6.

(4) Prior political experience

The vast majority of candidates have had no prior political experience, so there is no need to make provisions for a negative score, and there is no real way to calculate an average or a standard deviation. The raw score here is 2 if a candidate has held a statewide office (such as lieutenant governor or secretary of state); 1.5 for extensive time spent in a local office (defined as ten or more years as a mayor or city council member); 1 for moderate time in a local office (defined as 5-10 years); and .5 for 1-5 years in a local office.
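
As a sketch, this piecewise assignment might look like the following (we treat a boundary year, such as exactly five or ten years, as falling into the higher bracket; the text above does not specify this, so it is an assumption):

```python
def political_experience_score(held_statewide_office, years_in_local_office):
    """Piecewise raw score for prior political experience; never negative."""
    if held_statewide_office:
        return 2.0
    if years_in_local_office >= 10:   # extensive local experience (assumed inclusive boundary)
        return 1.5
    if years_in_local_office >= 5:    # moderate local experience
        return 1.0
    if years_in_local_office >= 1:    # limited local experience
        return 0.5
    return 0.0
```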

(5) Prior experience as a prosecutor

The average amount of experience as a prosecutor for a judicial candidate is approximately 2.8 years. The standard deviation for this variable is 5.0. Thus, the raw score for this variable is the total years of prosecutorial experience, minus 2.8, divided by 5.0.

(6) Prior experience as a defense attorney

The average amount of experience as a defense attorney for a judicial candidate is approximately 0.9 years. The standard deviation for this variable is 2.8. Thus, the raw score for this variable is the total years of defense experience, minus 0.9, divided by 2.8.

(7) Ranking of law school

We used the current rankings from U.S. News and World Report to determine the strength of the law school that the candidate attended: candidates got a “1” for third or fourth tier; a “2” for second tier; a “3” for first tier ranked 11-50; and a “4” for a top ten law school. We then calculated an average and a standard deviation for these scores as we did for most other variables. Most judicial candidates (perhaps surprisingly) attended a law school in the third or fourth tier; thus the average ranking was 1.6, and the standard deviation was one.
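
Put in terms of the standardization formula used elsewhere, the law school factor works out to a simple calculation (the tier-to-number conversion and the 1.6 average and standard deviation of 1 are taken from the description above):

```python
def law_school_raw_score(tier_score):
    """tier_score: 1 (third/fourth tier), 2 (second tier), 3 (ranked 11-50), 4 (top ten).
    Standardized against the candidate pool: average 1.6, standard deviation 1."""
    return (tier_score - 1.6) / 1.0

print(law_school_raw_score(4))   # 2.4 for a top-ten law school
print(law_school_raw_score(1))   # -0.6 for a third- or fourth-tier school
```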

(8) State bar association endorsements

State bars traditionally give a candidate a ranking along a certain spectrum: for example, not recommended, adequate, qualified, or highly qualified. The majority of judicial candidates receive a “qualified” ranking, so that ranking was set at zero for the raw score. A candidate receives a raw score of -2 if he or she is “not recommended,” a raw score of -1 if he or she is merely “adequate,” and a raw score of 1 if he or she is “highly qualified.”
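
As a sketch, this asymmetric mapping is just a small lookup table, anchored at zero for the most common ranking (the ranking labels below are generic; individual state bars phrase them differently):

```python
BAR_ENDORSEMENT_SCORE = {
    "not recommended": -2,
    "adequate": -1,
    "qualified": 0,          # the most common ranking anchors the scale at zero
    "highly qualified": 1,
}
```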

(9) Newspaper endorsements

A candidate receives a raw score of 2 if he or she is endorsed by the newspaper and a raw score of -2 if he or she is not endorsed.

Substantive legal questions

In our research, we recorded decisions in over twenty different categories; some decisions could be recorded in more than one category. Again, only non-unanimous cases were used, since those cases are most likely to show a political or ideological preference. In the end, some categories did not result in a sufficient number of cases to warrant inclusion in the database, and so the final category list included only ten categories:

(1) Criminal procedure (whether the procedural rights of a criminal defendant were violated)

(2) Substantive criminal law (whether the defendant’s actions were covered by the criminal statute in question or whether an imposed sentence was too severe)

(3) Personal injury/tort case against a corporation/company or public entity (not including medical malpractice cases).

(4) Medical malpractice (patient suing a doctor or hospital)

(5) Insurance case (insurance company is suing or being sued)

(6) Employment/housing discrimination (employer or landlord being sued for discrimination)

(7) Tax case (government body or taxing agency is suing or being sued over amount of taxes or assessment or other tax issue)

(8) Election/ballot access case (individual or group is suing the secretary of state challenging the secretary’s decision not to include a candidate or issue on the ballot)

(9) Activist – Judicial Review (did the candidate strike down a state law or agency regulation—or did he or she refuse to do so when other members of the court did)

(10) Activist – Stare Decisis (did the candidate overturn or ignore a prior precedent issued by his or her court— or did he or she refuse to do so when other members of the court did)

In each case, our researchers recorded which side the judicial candidate in question voted for, awarding the candidate one point if he or she ruled on one side and taking away a point if he or she ruled on the other. In the end, we added up the total number of cases in each category and the total “points” for the candidate’s rulings in those cases. A completely balanced voting record would result in a point total of zero—for example, if a candidate ruled in twenty non-unanimous medical malpractice cases and voted for the patient/plaintiff in ten and the doctor/defendant in the other ten, the candidate would have a point total of zero for that category. On the other hand, if the candidate voted for the doctor/defendant in eighteen of the cases and the patient/plaintiff in only two, the candidate would have a point total of sixteen (eighteen positive minus two negative).

Thus, for every candidate and each category, we ended up with two numbers: the total number of cases (the “count”) and the point total of the decisions (the “sum”). We then divided the sum by the count to measure the candidate’s political or ideological preference for that category. A large number (approaching 1 or negative 1) indicates a very strong ideological preference—in essence, in almost every case in which the candidate’s court was divided on a given issue, this candidate voted for the same side. In the medical malpractice example above, a judicial candidate who voted for the defendant/doctor in eighteen of the twenty non-unanimous medical malpractice cases would end up with a sum/count ratio of .8 ((18-2)/20).
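
The count, sum, and ratio can be computed directly from the list of recorded votes. The following sketch reproduces the medical malpractice example above (the +1/-1 encoding of each vote is illustrative):

```python
def category_ratio(votes):
    """votes: one +1 or -1 per non-unanimous case in this category.
    Returns (count, sum, sum/count)."""
    count = len(votes)
    total = sum(votes)
    return count, total, (total / count if count else 0.0)

# 18 votes for the doctor/defendant, 2 for the patient/plaintiff:
print(category_ratio([1] * 18 + [-1] * 2))   # (20, 16, 0.8)
```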

The final step was to convert the judge’s ratio into a raw score for the purposes of our quiz results algorithm. There was not enough data to compute an average score or a standard deviation, so we instead set thresholds to ensure that the raw scores would fall within the same range of negative two to positive two as the other variables. Thus, we set a raw score of 2 if the sum/count ratio was over .5 (meaning that the candidate ruled a certain way in over 75% of the non-unanimous cases on this issue); a raw score of 1.5 if the ratio was between .33 and .5; a raw score of 1 if it was between .2 and .33; and a raw score of .5 if it was less than .2 but greater than zero. As always, this raw score would be multiplied by two if the voter indicated that this issue was very important to him or her, and it would be multiplied by zero (that is, have no effect on the candidate’s total score) if the voter indicated that the issue was not at all important. Likewise, if the voter’s preferences were contrary to the indicated ideological preference of the candidate, the raw score would be multiplied by -1 or -2 (and thus subtracted from the candidate’s total score).
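
The conversion from ratio to raw score is a piecewise function. The sketch below applies the thresholds described above to the magnitude of the ratio and leaves the direction (which side the candidate favors) to be matched against the voter’s preference; exactly how boundary values and negative ratios are handled is our assumption, since the text above does not spell it out:

```python
def ideology_raw_score(ratio):
    """Convert a category's sum/count ratio into a raw score between 0.5 and 2."""
    r = abs(ratio)
    if r > 0.5:
        return 2.0    # ruled the same way in over 75% of close cases
    if r > 0.33:
        return 1.5
    if r > 0.2:
        return 1.0
    if r > 0:
        return 0.5
    return 0.0

print(ideology_raw_score(0.8))   # 2.0, per the medical malpractice example
```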

Results

For contested elections, the website calculates the total score for each candidate and recommends that the voter select the candidate with the higher score. If one of the candidates has less data than the other (for example, an incumbent judge is running against a challenger with no judicial experience and thus no record of judicial votes), the website adds a disclaimer to this effect.

For retention elections, the website calculates the total score for the candidate and notes whether the total score is positive or negative. If the score is negative (thus indicating that the candidate in question is worse than the average candidate in matching up with the voter’s preferences), the website recommends a vote against retention; if positive, the website recommends a vote in favor of retention.
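
The final recommendation logic can be sketched in a few lines (the function below is illustrative; ties and a score of exactly zero are handled here by an arbitrary choice that the site itself may make differently):

```python
def recommendation(scores, retention=False):
    """scores: candidate name -> total score. In a retention election there is a single
    candidate: a positive total suggests a yes vote, a negative total a no vote."""
    if retention:
        (name, score), = scores.items()
        return f"Vote {'for' if score >= 0 else 'against'} retaining {name}"
    best = max(scores, key=scores.get)
    return f"Vote for {best}"

print(recommendation({"Judge A": 3.2, "Challenger B": -0.5}))    # Vote for Judge A
print(recommendation({"Judge A": -1.1}, retention=True))         # Vote against retaining Judge A
```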


Choose Your Judges is a non-partisan organization dedicated to increasing voter awareness in judicial elections. If you have any comments or feedback about the site, please let us know.