Post by dewey1972 on Jan 25, 2014 8:22:29 GMT -5
Reading through MLB.com's "best tools in the business," I came across this line about Billy Hamilton's speed: "Given the way Hamilton's speed plays on the diamond, it really merits a 90 on the 20-80 scouting scale." I understand that when someone has a crazy tool, people want a way to express just how good it is, but it always bugs me when someone talks about a 90 on the scouting scale.
My understanding of the scouting scale, and perhaps my understanding is just not accurate, is that it is based on standard deviations: a 50 is average, and each 10-point step represents one standard deviation. Again, my understanding is that average means an average major leaguer, which is a pretty high standard for most tools. But in a normal distribution, 68% of the population falls within one standard deviation of the mean. That means 68% of the hitters in major league baseball at any one time should be between a 40 and a 60 on the scouting scale for a given tool. Furthermore, 95% of the population falls within two standard deviations of the mean (a 30 and a 70). In the major leagues at any one time, there are about 390 hitters and 360 pitchers. That means about 370 hitters (390*.95) fall between a 30 and a 70, leaving just 10 above a 70 (and 10 below a 30).

Continuing, the range within three standard deviations of the mean contains 99.7% of the population. This is why the scale stops at 80 and why people generally don't look at what falls more than 3 standard deviations from the mean: we're talking about 1 out of 370 falling outside 3 standard deviations. If the baseball population is normally distributed, just one major leaguer at a time is either better than an 80 or worse than a 20, and one would assume just barely so. A 90 would mean being four standard deviations above the mean, something like 1 in 31,000. For reference, only 7726 players since 1900 have had 100 plate appearances in the major leagues, according to my quick sort of the Fangraphs leaderboards.
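If it helps to see the arithmetic laid out, here's a quick Python sketch using the standard library's NormalDist. To be clear, the 390-hitter figure and the idea that one 10-point grade step equals one standard deviation are my assumptions, not anything official.

from statistics import NormalDist

z = NormalDist()   # standard normal: mean 0, standard deviation 1
hitters = 390      # my rough estimate: ~13 position players x 30 teams

def share_within(k):
    """Fraction of a normal population within k standard deviations of the mean."""
    return z.cdf(k) - z.cdf(-k)

for k, low, high in [(1, 40, 60), (2, 30, 70), (3, 20, 80)]:
    frac = share_within(k)
    print(f"{low}-{high} (within {k} SD): {frac:.1%}, about {frac * hitters:.0f} of {hitters} hitters")

# How rare a 90 would be if it really means four standard deviations above average
p_above_4sd = 1 - z.cdf(4)
print(f"A 90 (+4 SD): about 1 in {1 / p_above_4sd:,.0f}")

That spits out roughly 266, 372, and 389 hitters in the 40-60, 30-70, and 20-80 bands, and about 1 in 31,600 for a 90, which is where my rough numbers above come from.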
Perhaps my problem is that I'm assuming the major league population is normally distributed and it's not. But it's hard for me to see how a player could actually be a 90, meaning he would be four standard deviations better than an average player. That gap is comparable to the difference between the top 10 players in the league at something (a 70) and the worst 10 (a 30). Could the difference between Hamilton and an average major leaguer really be the same as the difference between the 10 guys who hit 35 homers in a season and the 10 guys who hit 1? That's hard to imagine.
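To put a number on that top-10 comparison, here's another stdlib sketch with the same assumptions as above (again, all mine): it estimates where the 10th-best hitter out of roughly 390 would sit if the tool were normally distributed, which is the back-of-the-envelope reason "top 10 in the league" gets equated with a 70.

from statistics import NormalDist

z = NormalDist()
hitters = 390  # same rough estimate as before

# Treat the 10th-best hitter as the midpoint of the top 10,
# i.e. the (hitters - 9.5) / hitters percentile of the population.
percentile = (hitters - 9.5) / hitters
sds_above_avg = z.inv_cdf(percentile)
print(f"10th best of {hitters}: about +{sds_above_avg:.1f} SD, a grade of roughly {50 + 10 * sds_above_avg:.0f}")

# The 70-to-30 spread is four standard deviations, the same size gap
# a 90 claims to put between one player and a league-average 50.
print(f"70 to 30 spread: {(70 - 30) / 10:.0f} SD; 90 to 50 gap: {(90 - 50) / 10:.0f} SD")

That comes out to about +2 standard deviations for the 10th-best guy, i.e. roughly a 70, so the top-10-to-bottom-10 gap and the Hamilton-to-average gap would both be about four standard deviations wide.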
I'd love to hear other people's thoughts on this or to have people correct any misunderstandings of mine.