|
Post by Chris Hatfield on Feb 26, 2014 22:07:57 GMT -5
minors.mlblogs.com/2014/02/26/2129253/
Mayo averaged MLBPipeline, BA, BP, Law, and Hulet:
2. Xander Bogaerts
36. Jackie Bradley, Jr.
48. Henry Owens
56. Garin Cecchini
62. Blake Swihart
67. Mookie Betts
86. Allen Webster
87. Matt Barnes
117. Trey Ball
Seems about right to me.
|
|
|
Post by Oregon Norm on Feb 27, 2014 1:48:11 GMT -5
This is better. The next step would be to weight the projections for how well each does with the assigned slot given the history of the players who were placed there and how they've performed in the majors. That's not easy, but it is how you should weight that average. That's how Silver does his poll work.
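To make Norm's idea concrete: a minimal sketch of weighting each ranker by historical accuracy before averaging. The sources are taken from the thread, but the accuracy weights below are invented purely for illustration; a real version would derive them from how each publication's past ranking slots actually translated into major-league performance.

```python
# Hypothetical sketch: accuracy-weighted average of prospect ranks.
# The weights are ASSUMED for illustration only; in practice they would
# be fit from each ranker's historical track record.

weights = {"MLBPipeline": 1.0, "BA": 1.2, "BP": 0.9, "Law": 1.1, "Hulet": 0.8}

def weighted_avg_rank(player_ranks):
    """player_ranks: {source: rank} -> accuracy-weighted average rank."""
    num = sum(weights[s] * r for s, r in player_ranks.items())
    den = sum(weights[s] for s in player_ranks)
    return num / den

# e.g. a player ranked 36, 40, and 30 by three of the sources
print(weighted_avg_rank({"BA": 36, "BP": 40, "Law": 30}))
```

A more trusted source (higher weight) pulls the average toward its rank; a plain average is just the special case where every weight is 1.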
|
|
|
Post by soxfanatic on Feb 27, 2014 5:45:10 GMT -5
I have basically done the same, but mostly focusing on the Red Sox top 10. I've included Sickels and this very site as well. To add: I used a method that's a little flawed, as described by ericmvan earlier, but I think for a top 10 it's pretty usable. Basically it says that the #1 prospect is 10 times more valuable than the #10 prospect. In the attachment you can view my spreadsheet. Please let me know if you have any corrections. Top prospects.xlsx (10.18 KB)
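The slot-weighting scheme described above (the #1 prospect worth 10x the #10 prospect, scaling linearly in between) can be sketched roughly like this. The player names and per-source ranks below are invented for illustration; soxfanatic's actual spreadsheet may assign points differently.

```python
# Hypothetical sketch of a linear slot-weighted aggregation:
# slot #1 earns 10 points, slot #10 earns 1, so #1 is worth 10x #10.

def slot_points(rank, list_size=10):
    """Points awarded for a ranking slot: #1 -> list_size, #10 -> 1."""
    return list_size - rank + 1

def aggregate(rankings):
    """rankings: {source: {player: rank}} -> players sorted by total points."""
    totals = {}
    for ranks in rankings.values():
        for player, rank in ranks.items():
            totals[player] = totals.get(player, 0) + slot_points(rank)
    return sorted(totals.items(), key=lambda kv: -kv[1])

# Illustrative (made-up) top-10 fragments from two sources:
rankings = {
    "SourceA": {"Bogaerts": 1, "Owens": 2, "Cecchini": 3},
    "SourceB": {"Bogaerts": 1, "Cecchini": 2, "Owens": 4},
}
print(aggregate(rankings))
```

One caveat in line with ericmvan's earlier objection: a linear points scale assumes the value gap between adjacent slots is constant, which real prospect value curves almost certainly are not.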
|
|
|
Post by rjp313jr on Feb 27, 2014 9:29:47 GMT -5
This is better. The next step would be to weight the projections for how well each does with the assigned slot given the history of the players who were placed there and how they've performed in the majors. That's not easy, but it is how you should weight that average. That's how Silver does his poll work.

This makes sense in theory, but it assumes there is consistency from the lists year to year, both in terms of compilation and results. There's no predictive value in weighting the results of something that is all over the map.
|
|
|
Post by Chris Hatfield on Feb 27, 2014 13:28:17 GMT -5
This is better. The next step would be to weight the projections for how well each does with the assigned slot given the history of the players who were placed there and how they've performed in the majors. That's not easy, but it is how you should weight that average. That's how Silver does his poll work.

This makes sense in theory, but it assumes there is consistency from the lists year to year, both in terms of compilation and results. There's no predictive value in weighting the results of something that is all over the map.

This. Using the Sox list as a very rough example, Will Middlebrooks was once the system's top prospect. But based on his projection at that time, he'd probably be somewhere around 4th in the system as it stands today. Relative rankings are fun for discussion, but they are the RBIs of prospect projection: not value-less, but reliant probably more on outside factors than on the player. I'd be much more interested, down the line, in seeing how grading systems fare in terms of predictive value. I like rankings as a jumping-off point for discussion more than as a predictive tool.
|
|
|
Post by brianthetaoist on Feb 27, 2014 13:49:56 GMT -5
This makes sense in theory, but it assumes there is consistency from the lists year to year, both in terms of compilation and results. There's no predictive value in weighting the results of something that is all over the map.

This. Using the Sox list as a very rough example, Will Middlebrooks was once the system's top prospect. But based on his projection at that time, he'd probably be somewhere around 4th in the system as it stands today. Relative rankings are fun for discussion, but they are the RBIs of prospect projection: not value-less, but reliant probably more on outside factors than on the player. I'd be much more interested, down the line, in seeing how grading systems fare in terms of predictive value. I like rankings as a jumping-off point for discussion more than as a predictive tool.

Seems like there'd be much more variance like that in lists of individual systems rather than the overall Top-100. I mean, I suppose there is some variability in the overall quality of the minor leagues at any one time, but it's probably not that high. A bigger issue would be the changes in the methodology of the different publications and the variation in quality as the regimes at each change over. Still, though, I don't think an exercise like Norm's would be valueless. You run across the same issues with looking at polls across time and among pollsters in politics, but it still teaches you something to look at old polls (although polls are far more precise than prospect rankings, since they are at least based on some raw empiricism, then pushed through methodology and statistical analysis applied with more or less rigor depending on the pollster). So, whether it'd be worth the work is a whole other question ... I ain't volunteering to work on it, that's for sure.
|
|
|
Post by jimed14 on Feb 27, 2014 13:54:09 GMT -5
If everyone put up their scouting grades for each player, you could use that to determine accuracy.
|
|
|
Post by JackieWilsonsaid on Feb 27, 2014 14:08:52 GMT -5
I don't think overall rankings carry any meaning outside of a discussion trigger.
Position rankings have some value, but comparing players across development levels reduces any comparative value. Also, position rankings don't account for projected position switches.
It's all good winter fun, but that's about it.
|
|