Quoting: Jarvis
You missed the whole "trending" spot.
Cowan - Projection 2nd line / Trending: Prospect
Tkachuk: Projection First Line / Trending: Under NHL Contract
Really don't know how this is so difficult to understand.
1. Read the FAQ, which specifically says the opposite:
Quoting: FAQ
Players signed to NHL contracts are evaluated compared to full-time NHL players. Some players are still developing in the “AHL” or a pro circuit in Europe. All players signed to NHL contracts – whether they are in the league or outside the league – are compared the same way due to their contract status – “Under NHL Contract”
2. I was specifically talking about the attribute ratings. I will table my disagreements on singular assessments (like one guy’s skating or physicality); we all have our own views. But for all the ‘grading on a curve’ type explanations, these figures do not appear to follow anything like a normal distribution of scores. If one plotted every score on every attribute for every player, my guess is that it would appear as a broken dumbbell of figures.
3. If players are sorted on the main page by “overall rating,” it is going to mislead most readers into seeing it as a single category for all players. Instead of sorting by rating, it should be sorted by NHL vs Prospect…or actual rating vs peak estimate…L1 vs L2…anything that allows apples-to-apples comparison.
4. The listing of “under NHL contract” for NHL players is also misleading, because it is not what separates prospects from NHLers (as you know, many prospects are under contract).
5. The L1, L2, P1, P2 system is also challenging. The explanation reads as if a player’s line designation were a demographic (ie, a fixed characteristic like birthdate or righty/lefty). As players develop, they can easily move from one line to another. On many teams, line assignments change night to night. So are we supposed to check the depth chart each time we look at these ratings?
6. The weighting system is a challenge for me too. While I understand its necessity for coming up with an ‘overall score’, it raises a lot of questions that only the qualitative side of scouting can answer. Should prospects and NHLers carry similar weights? How about forwards and defensemen? How about a recently drafted sniper vs a fully developed physical dman?
7. There is too much subjectivity involved in assessing whether a player is a prospect or not, or a 1st liner vs a 2nd liner. Segmenting the pool of players into so many categories renders the comparative utility of these ratings almost nil.
8. There ought to be a better way to compare similar players. For instance, we should be able to screen by players of similar age, similar draft cohort…or even similar league (ie Juniors, NCAA, Euro). Most fans will want to see how their prospects are doing relative to other similar prospects…and right now, there is no way to do that (unless they are assigned the same developmental stage, same line projection…and have actually been scouted).
9. Not all teams have been scouted. There are numerous references to how players are scouted and measured relative to their peers. But if only half the league has assessments, it’s hard to read these properly (how can we know if the cup is half full if we don’t know how big the cup is?).
10. There ought to be a baseline for all players. For instance, amateur scouting departments will have a book on many prospects leading up to the draft. At a very minimum, they have information supplied by central scouting. As a player progresses through their development, the draft ratings can serve as a “starting line.” As it stands now, there is no baseline.
11. There is a weird break in the numbers used. It is highlighted that a different scale was used before and after September 2023. Why include anything from before? I appreciate that the methodological change was noted, but it leads me to question why the ‘old way’ was even published. Every player with an older report should get an updated one (or better yet, keep the 0-10 scale and apply it to all new reports).
12. Much of the CapFriendly brand is based on facts and figures. It is free of subjectivity. Transactions, cap figures, contract information, CBA rules. These are hard, incontrovertible and fixed. Tools such as the Trade Machine and ACGM can measure fans’ interpretations of them…but the facts are still the same. CapFriendly has become a hugely trusted resource based on its adherence to this. How many times have we read top hockey writers and observers say, “according to CapFriendly…”? This website has become a tent-pole for hockey fandom. Which is why this ‘scouting report’ exercise seems so off-brand for CF.
13. Instead of advertising one author as the scout, it would seem to me CF would be better served to either (a) steer clear of single-source, subjective content or (b) find a way to aggregate multiple sources into more of a consensus data point. (With such a large user base, it could even trust the wisdom of crowds to provide such information.)
14. I am sorry to be calling folks out on this. But I shouldn’t need a secret decoder ring to make sense of it all. Maybe I am the big dummy, but I am familiar enough with the scouting community and player reports to know that the simplest, most straightforward ones are the most useful.
15. I have said that these should be taken down. I know that a lot of work has gone into producing and publishing them, and I appreciate that. But there are too many issues here to ignore.
16. My recommendations would be:
(A) Hide them until reports on every player, using same methodology, are ready.
(B) Simplify the categories.
(C) Reduce the number of buckets players can fall into.
(D) Find a common measure for all players (ie “peak ratings” vs “current ratings”).
(E) Use multiple sources.
(F) Present as subjective content.
(G) Allow users to screen based on age, developmental stage or draft year.
(H) Add a tool that allows users to adjust their own weights (or even to dial the categories up and down).
(I) Eliminate the Line groupings (L1, L2, etc).
(J) Use examples in FAQ that point out what is being shared here.
(K) Explain scouting process in FAQ (how many games were factored into player reports? What were dates of those assessments? How many scouts contributed to score?).
(L) Create a forum for the scouts to interact with users (or with other scouts).
(M) Once the data has been fully populated, use it for macro assessments, such as measuring draft years against each other…or how well a team drafts and develops.
(N) Use a baseline.
(O) Show backdated information (ie player X is an 85-skater today, but was 65 at draft time).
(P) Release reports on a schedule (while actual scouting may take place at different times, the reports should be refreshed at least a few times a year, even if there is no change).
(Q) If they ever get in the way of CF core brand, abandon them.
That’s all. Merry Christmas.