One-run Records as a Basis for Managerial Evaluation
By Bill James
Recent comments by Rany Jazayerli and Joe Posnanski (try saying that three times rapidly) have led me to take an interest in the issue of what may be accurately said about one-run games. Joe and Rany were interested in what one-run records might have to tell us about individual managers.
My interest was a little different: I was focused on the underlying question of whether one-run won-lost records may actually be used in this way, to learn anything about individual managers. Essentially, I asked three other questions as a kind of pathway toward the general question, which is the fourth one below:
1) Is winning one-run games a valid trait of certain teams and/or certain managers?
2) What is the best way of establishing normal expectations for one-run records?
3) Are there any identifiable characteristics of teams which win their one-run games, as opposed to those that lose their one-run games?
4) Can one infer anything about a manager from his record in one-run games?
To study these and related questions, I compiled a data base consisting essentially of seasons since 1901 by all teams which still exist at this time ... that is, all teams since 1901 except the Federal League and those early American League teams that died out and were replaced. There were 1,984 teams in my study.
This data base was compiled essentially by copying and pasting from two sources -- the Sabermetric Encyclopedia, from Lee Sinins, and an article by Tom Ruane on the Baseball Think Factory, which contained one-run records for all teams from 1901 to 1997. I took those two, added a little other stuff, and had a massive wonderful spreadsheet, and a renewed appreciation for the marvels of modern computers. In 1985, it would have taken me three months to do this study, and I couldn't have done it a twentieth as well.
Anyway, the first question I had to ask is "What is the normal expectation for a team's one-run record, given their other characteristics?" In order to determine whether a team has a substandard record in one-run games, first of all we have to know what a normal record would be. How do we establish that?
Establishing normal expectations for one-run records
Tom Ruane, in his article "Looking for Clutch Performance in One-Run Games" establishes each team's "normal" one-run winning percentage sometimes based on their overall winning percentage, and at other points based on their record in games decided by multiple runs. He reports, for example, that teams with overall winning percentages of .375 to .425 had a winning percentage, in one-run games, of .442. That seems like a natural approach to the problem, and at first I was inclined to take this approach, but refine it by the consideration of additional factors.
Thinking more about it, however, I wondered if it might not be better to approach the expected one-run winning percentage not from the overall winning percentage, but from the ratio of runs scored to runs allowed. As soon as I got to that point, I was struck by a simple idea. Might it not be true, I wondered, that a team's ratio of runs scored to runs allowed is their expected winning percentage in one-run games?
Think about it: if all games were 1-0 games, then two things would clearly be true:
1) All games would be one-run games, and
2) Each team's winning percentage would be exactly its share of the runs scored -- runs scored, divided by runs scored plus runs allowed.
We all know that the ratio of overall wins to losses is essentially the same as the ratio of the square of their runs scored to the square of their runs allowed. But we also know that .600 teams don't play .600 ball in one-run games. They play something more like .550 ball in one-run games -- actually .560, according to Ruane. But if you assume that it is .550, then the ratio of runs scored to runs allowed would be about the same as the expected winning percentage in one-run games.
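The arithmetic behind that claim checks out; here is a quick sketch (the variable names are mine, not anything from the study):

```python
# Sketch: for a .600 team, the Pythagorean rule says wins/losses equals
# (runs scored / runs allowed) squared. Invert that to recover the run
# ratio, then express runs scored as a share of all runs in the games.
win_loss_ratio = 0.600 / 0.400            # a .600 team wins 1.5 games per loss
run_ratio = win_loss_ratio ** 0.5         # implied R/RA, about 1.22
run_share = run_ratio / (run_ratio + 1)   # R / (R + RA), about .550
```

So a .600 team's share of the runs scored in its games lands almost exactly on the .550 one-run figure.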
Bingo. Well, it's not a bull's-eye, but it's pretty close. If we assume that the expected winning percentage in one-run games is simply the ratio of runs scored to runs allowed, then there would be 112 teams in my study which had expected one-run winning percentages between .540 and .549. Those 112 teams had an actual record, in one-run games, of 2853-2385, a winning percentage of .545.
If all of the data was like that, then the statement that the ratio of runs scored to runs allowed is the expected winning percentage in one-run games would be categorically true. Unfortunately, the world is not that perfect. Teams with run/opposition run ratios of 1.10 to 1 tend to have won-lost ratios in one-run games of 1.09 to 1 -- in other words, overall, they tend to be a little bit closer to .500 than our formula expects them to be. The actual formula that works best for the expected winning percentage in one-run games is:
Expected one-run winning percentage = R^.865 / (R^.865 + RA^.865)

In other words, the familiar Pythagorean formula for making a winning percentage from runs scored and runs allowed, but with .865 replacing 2 as the power to which the runs are raised.
However, this formula is not markedly more accurate than simply saying that the expected winning percentage in one-run games is the same as the ratio of runs scored to runs allowed. If we projected the expected winning percentage for every team in the study based on the simplest assumption, we would have a gross error for all teams of 5414.3. Using the power .865 as in the formula above, we reduce the gross error to 5399.4 -- an improvement of .003 (three-tenths of one percent). Although I will go to the trouble of incorporating this .865 power in establishing expected winning percentages in one-run games, because it is easy for computers to do that, the reality is that it makes almost no difference.
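As a sketch of the two estimators being compared -- an illustration only, with a function name of my own invention:

```python
def expected_one_run_pct(runs, runs_allowed, power=0.865):
    """Expected winning percentage in one-run games, Pythagorean-style.

    With power=1 this reduces to the simple method: runs scored as a
    share of all runs in the team's games.
    """
    r, ra = runs ** power, runs_allowed ** power
    return r / (r + ra)

# 1974 Padres (541 runs scored, 830 allowed; figures from the table
# later in the article):
simple = expected_one_run_pct(541, 830, power=1)   # about .395
fitted = expected_one_run_pct(541, 830)            # about .408
```

As the text says, the two estimates rarely differ by much.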
It seems to me that this is a better way to establish a team's expected winning percentage than basing it on the team's overall winning percentage. Teams have played as many as 75 one-run games in a full season, and as few as 22. A formula which states the normal ratio between winning percentage and one-run winning percentage for one of these teams can't possibly work for the other one.
Suppose that you have two .600 teams ... one plays 70 one-run games and goes 35-35, and the other plays 30 one-run games and goes 15-15. To get to .600 overall, the team playing 70 one-run games has to play .675 baseball in its other games. To get to .600 overall, the team playing 30 one-run games needs only to play .621 ball. That's very different.
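That arithmetic can be spelled out; this sketch assumes a 162-game schedule, which reproduces the figures above to within a point of winning percentage (the function is my own illustration):

```python
def pct_needed_elsewhere(overall_pct, one_run_w, one_run_l, total_games=162):
    """Winning percentage a team must post outside its one-run games
    in order to finish the season at overall_pct."""
    total_wins = overall_pct * total_games
    other_games = total_games - (one_run_w + one_run_l)
    return (total_wins - one_run_w) / other_games

team_a = pct_needed_elsewhere(0.600, 35, 35)   # about .676
team_b = pct_needed_elsewhere(0.600, 15, 15)   # about .623
```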
Also, or so it seems to me, a team which is at .600 overall could be a great team which finished at .600 because they were lousy in one-run games, or they could be a so-so team which finished at .600 because they won their one-run games.
The 1974 Baltimore Orioles and the 1967 San Francisco Giants both finished 91-71. The Orioles, however, outscored their opposition by only 46 runs (658-612) and went just 51-50 in games decided by more than one run, but climbed to 91-71 by winning 40 one-run games (40-21). The Giants outscored their opponents by 101 runs (652-551) and went 63-42 in games decided by multiple runs, but just 28-29 in one-run games.
Tom Ruane argues (I think) that this one-run performance gap is essentially a fluke. However, assuming that Ruane's conclusion is correct, doesn't it follow that his grouping of teams by overall won-lost records must be incorrect? After all, if Ruane is correct, then these two teams are not really of the same caliber; they merely appear to be of the same caliber because one of them did really well in one-run games, and the other didn't. Ruane's own study, it seems to me, implicitly concludes that his method is not the best way to approach the issue.
Stated another way, the one-run won-lost record is a won-loss outcome. We don't normally project won-loss outcomes on the basis of other won-loss outcomes. We normally project wins and losses based on runs scored and runs allowed. And, since it is dead simple to do so, this seems to me to be the preferable method.
Is Winning One-Run Games a Valid Team Trait, or Just Something That Happens Sometimes?
What we really want to know here is whether winning one-run games is a persistent trait -- meaning that the same teams and same managers do it every year -- or a transient outcome, meaning that it's probably just luck.
Ruane concluded that "how a team does one year in close games is absolutely no use in predicting how it will do the next," and also cites a study in the 1997 Baseball Research Journal by Bob Boynton, in which Boynton had apparently reached the same conclusion, although I haven't seen that article.
My conclusion is slightly different. My conclusion is that winning a lot of one-run games has a persistence of zero (meaning that it appears to be luck) but that losing a lot of one-run games is not necessarily completely meaningless. It's mostly just bad luck, but it doesn't appear to me that it entirely disappears in the following season.
Here's what I did. First, I established the expected winning percentage in one-run games for every team in my data base, and then applied that to the number of one-run games that each team played. By so doing, I identified all of the teams which were five games better or five games worse than expected in one-run games.
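A sketch of that bookkeeping, using the .865 formula from earlier (the function name is mine; the Padres figures appear in the table later in the article):

```python
def one_run_margin(runs, runs_allowed, one_run_w, one_run_l, power=0.865):
    """Actual one-run wins minus expected one-run wins, where expected
    wins = expected one-run winning percentage times one-run decisions."""
    r, ra = runs ** power, runs_allowed ** power
    exp_pct = r / (r + ra)
    return one_run_w - exp_pct * (one_run_w + one_run_l)

# 1974 Padres: 31-16 in one-run games despite being outscored 830-541.
margin = one_run_margin(541, 830, 31, 16)   # about +11.8
```

A margin of +5.0 or better (or -5.0 or worse) put a team in the extreme groups studied here.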
In the following seasons, however, the teams which had been 5.0 or more games BETTER than expected were a collective 23.6 wins worse than expected. In other words, they had, as a group, no tendency whatsoever to be better than average in one-run games in the following season. The trait has a persistency of zero.
But there were 153 teams in my study which did 5.0 or more games WORSE than expected. In the aggregate, these teams were 990.6 wins worse than expected. In the following seasons, they were still 93.9 wins worse than expected.
In common language, my study suggests that you can't win more than your share of one-run games consistently, but you can lose more than your share, perhaps. It's not a HIGH rate of persistence -- 9% -- and it COULD be just a hiccup in the data. But it's a pretty healthy hiccup -- 94 wins is a pretty fair discrepancy in the data to be written off as luck.
Why did I reach a different conclusion from Ruane and Boynton? Well, first, my method is significantly different.
Ruane identifies the "best" one-run team of all time as the 1974 San Diego Padres, who went 60-102 overall, but an astonishing 31-16 in one-run games (29-86 otherwise), and the second-best one-run team of all time as the 1955 Kansas City A's (63-91 overall, 30-15 in one-run games.)
My study lists the same two teams first and second on the over-achievers list -- but then departs. After the top two, the rest of his top-five list and mine have no teams in common, and his list of the five worst one-run teams and mine are completely different as well, involving none of the same teams. Using a different method -- I believe a better method -- I simply reached a different result.
Second, I focused on extreme teams, the teams at the ends of the list, and ignored the middle of the chart. I'm not interested in how many teams may have gone +2 one year and -3 the next.
Studying the whole list, you could get such a large pile of chaff that you think you don't have any wheat at all. I think it is better to focus on the teams with strong tendencies in one season.
Detour: Visiting a couple of side issues
Another thing I was curious about was how well one could predict how many one-run games a team would play. It is intuitively obvious that a team which plays in a low-run environment -- a team that scores and allows 600 runs a season -- is going to play more one-run games than a team that plays in Coors Field in 2001. But what is that relationship?
The number of one-run games that a team plays can be predicted best, within the range of normal run results, by the formula .63 divided by the square root of runs per game in the team's context. In other words, if the team scores and allows 4.00 runs per game on average, then about 31.5% of their games will probably be one-run decisions -- .63, divided by 2. If a team scores and allows 5.00 runs per game, then about 28.2% of their games will probably be one-run decisions -- .63, divided by 2.236.
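As a sketch of that relationship (the function name is my own):

```python
import math

def one_run_game_share(runs_per_game):
    """Expected share of a team's decisions that are one-run games,
    per the .63-over-square-root-of-runs fit described above."""
    return 0.63 / math.sqrt(runs_per_game)

low_run = one_run_game_share(4.00)    # exactly .315
high_run = one_run_game_share(5.00)   # about .282
```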
If you think it through, you will realize that this formula probably would work even if teams scored a very, very low number of runs or a very, very high number of runs, but I won't get into that.
Overall, .30665 of all decisions have been one-run games, over the last hundred years. If you just projected that each team would have that percentage of their games as one-run games, you would have an average error of 5.45 games. Using the formula above reduces the average error to 4.71 -- an improvement of 14%.
Another issue that I always get into, at least briefly, is extreme teams -- overachieving teams, underachieving teams, teams that played many more or many fewer one-run games than we would expect. The following are the top ten "over-achieving" teams in one-run games:
Team  Lg  Year  Finish    W    L   PCT     R    G    OR  1-Run  Exp Pct  Exp W  Margin
SD    N   1974  6th      60  102  .370   541  162   830  31-16    .408    19.2    11.8
KC    A   1955  6th      63   91  .409   638  155   911  30-15    .424    19.1    10.9
Cin   N   1985  2nd      89   72  .553   677  162   666  39-18    .504    28.7    10.3
NY    N   1972  3rd      83   73  .532   528  156   578  33-15    .480    23.1     9.9
Det   A   1905  3rd      79   74  .516   511  154   608  32-16    .462    22.2     9.8
Cin   N   1961  1st      93   61  .604   708  154   653  34-14    .517    24.8     9.2
Bos   A   1953  4th      84   69  .549   656  153   632  35-16    .508    25.9     9.1
Was   A   1913  2nd      90   64  .584   596  155   568  32-13    .510    23.0     9.0
Pit   N   1959  4th      78   76  .506   651  155   680  36-19    .491    27.0     9.0
Bal   A   1970  1st     108   54  .667   792  162   574  40-15    .569    31.3     8.7
With a run ratio of 541-830, the 1974 Padres had an expected winning percentage, in one-run games, of .408. Given 47 one-run decisions, they could have expected to win 19.2. They exceeded that by 11.8 wins.
The Kansas City A's of 1955 are an interesting team, just because ... well, hell, the Kansas City A's are ALWAYS interesting. No, they're interesting because, if you take their runs scored and their runs allowed and just knock off the last digit of each, you get their won-lost record -- 638 runs, 63 wins, 911 runs allowed, 91 losses. This would never normally happen, of course, because a team which is outscored by such a huge margin would normally finish more like 51-103 than 63-91. But the A's outperformed expectations by 11 one-run games, 12 games overall.
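That 51-103 figure is just the standard Pythagorean expectation applied to the A's run totals; a quick sketch of the check:

```python
# 1955 Kansas City A's: 638 runs scored, 911 allowed, 154 decisions (63 + 91).
exp_pct = 638 ** 2 / (638 ** 2 + 911 ** 2)   # about .329
exp_wins = exp_pct * 154                     # about 50.7 -- roughly 51-103
```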
Which brings up another point: we all know that won-lost projection systems like the Pythagorean system don't work perfectly. What you may not know is that the majority of the deviation from expectation -- 70 to 80% of it -- is in the one-run games. Anyway, the ten worst teams in one-run games were:
Team  Lg  Year  Finish    W    L   PCT     R    G    OR  1-Run  Exp Pct  Exp W  Margin
Hou   N   1975  6th      64   97  .398   664  162   711  16-41    .485    27.7   -11.7
NY    A   1966  10th     70   89  .440   611  160   612  15-38    .500    26.5   -11.5
Bro   N   1913  6th      65   84  .436   595  152   613  14-36    .494    24.7   -10.7
Was   A   1919  7th      56   84  .400   533  142   571  14-36    .485    24.3   -10.3
Pit   N   1986  6th      64   98  .395   663  162   700  16-37    .488    25.9    -9.9
KC    A   1999  4th      64   97  .398   856  161   921  11-32    .484    20.8    -9.8
LA    N   1992  6th      63   99  .389   548  162   636  17-40    .468    26.7    -9.7
NY    A   1935  2nd      89   60  .597   818  149   632  15-29    .556    24.4    -9.4
Bro   N   1912  7th      58   95  .379   651  153   744  16-38    .471    25.4    -9.4
Cin   N   1937  8th      56   98  .364   612  155   707  14-36    .469    23.4    -9.4
Tony Muser escapes the distinction of being the worst one-run manager of all time by the aid of Bill Dahlen, a great player who managed the Brooklyn Dodgers from 1910 through 1913. In his four seasons they were negative 4.5, positive 3.6, negative 9.4 and negative 10.7, a total of 20.98 wins below expectation in his four seasons as a manager.
Tony Muser took over the Royals in mid-summer, 1997. Since 1998, the Royals have been +2.1, -9.8, -1.9 and -5.3, a total of -14.88.
The Houston Astros in 1971 played 75 one-run games, the most ever. The Astros played in a very low-run environment, and thus had an expectation of 54.1 one-run games -- a very high number. They exceeded that by 20-plus, which is also the largest excess over expectation of all time.
On the other end, the records for fewest number of one-run games are split. The 1936 St. Louis Browns played only 22 one-run games in a full season, the lowest total ever. However, the Browns played in a historically high-run environment -- 6.03 runs per team per game. Playing in a historically normal environment of 4.57 runs per game, the 2001 Montreal Expos played only 28 one-run games -- 19.7 fewer than expected. This is the largest shortfall in history.
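The Expos' shortfall follows from the frequency formula given earlier; a quick sketch of the arithmetic:

```python
import math

# 2001 Expos: 4.57 runs per team per game, 162 games, only 28 one-run games.
expected_one_run = 0.63 / math.sqrt(4.57) * 162   # about 47.7
shortfall = expected_one_run - 28                 # about 19.7
```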
Are there any identifiable characteristics of teams that win their one-run games, as opposed to those that lose their one-run games?
As long as you don't make a big deal out of it, yes. Teams which do well in one-run games have more or less all of the characteristics you would expect them to have, but only to a small extent.
My method here was to take all teams since 1950, and identify the top 50 and the bottom 50 teams by how they performed in one-run games, relative to expectations. I broke it off at 1950, because I didn't want to get back into the bad-data era. I then figured the average team stats for each group of 50 teams, and compared the two groups.
The 50 teams which did well in one-run games had more stolen bases (96-92 on average), more sacrifice bunts (71-67), more complete games (35-31), more saves (34-30), issued fewer walks (513-531), drew more walks (526-520) and had a better ERA (3.77 to 3.91).
The 50 teams which did poorly in one-run games hit more home runs (127-117), scored more runs (674-658), had a higher slugging percentage (.386-.380), a lower on-base percentage (.323-.325), used more relief pitchers (278-257), threw more wild pitches (47-44) and had more balks (8-7). They were more likely to play in hitters' parks (park factors 100.3 vs. 98.5).
I think that, generally, one would expect all of these things to be true -- one-run teams play one-run ball and have strong pitching. However, the degree to which these things are true is extremely minor. If you tried to project it backwards -- that is, take a team's characteristics and predict whether or not they would do well in one-run games -- you'd get nowhere, because the tendencies just aren't strong enough to work in that way.
Can one infer anything about a manager from his one-run record?
I would have guessed, going into this study, that the answer to that might be a flat "no", or, at least, an equivocal "no" (we can find no evidence within our study that playing well in one-run games is anything but a random occurrence, etc., etc., yada yada yada, snore.) I can't give you that answer, for two reasons:
1) There does seem to be some persistent tendency of teams to play poorly in one-run games, and
2) Teams which play well in one-run games do seem to have some identifiable characteristics, to a small degree.
But I will say this: I would be careful about drawing any such inferences. Tony Muser is -15 games in one-run decisions. I can't say that this IS just coincidence -- but it certainly could be. It's not an overwhelming number, in and of itself.
Rany began this discussion with a comment about Bobby Cox's relatively poor record in one-run games. Well, from 1990 through 2001 the Atlanta Braves scored 8,836 runs and allowed 7,409. This is a ratio of 1.19 to 1. In one-run games they went 297-256, a ratio of 1.16 to 1.
The Braves have missed their expected won-lost record in one-run games, over those twelve seasons, by 2.1 wins. Obviously, no conclusion of any kind can be drawn from such an occurrence. One-run games involve a huge amount of luck. This may be the only safe statement that can be made about them.