05-24-2021, 01:32 PM | #18
WirryWoo (Forever Derbyless; Retired Staff, FFR Simfile Author)

Re: Poll: Which global skill rating system is best?

Quote:
Originally Posted by xXOpkillerXx View Post
Sadly, this is all but true. I thought I made the example simple enough to be understood by everyone, but I guess I failed to do that. Firstly, the experiment was 100% independent of rating system. It solely compared a "skill-specific" player to a "generalist" player, and made the claim that both should be rated equally. So not only did you poorly interpret this, you also gave a bad example which almost exactly demonstrates what is wrong with weighted:
The experiments provided were confusing because 1/3 + 2/3 + 3/3 != 1 and 2/3 + 2/3 + 2/3 != 1 (both sum to 2). Are these weightings assigned by the system, or some amount of skill that each player holds for each pattern? I assumed they were weightings, because we're debating whether a weighted system is better than the unweighted setting. If that's true, what I initially said applies.

If it's the latter, this goes to my other point about comparing skills. Specifically:

Quote:
Originally Posted by WirryWoo View Post
In terms of "not knowing what is favored (skillset specificity vs varied skillset) nor to what degree it is", this is where hyperparameters are created to set these rules. I created this alpha hyperparameter to simplify a lot of question marks that no one else in the community has collectively been able to address. In this case, do we define skill as being a jack of all trades and a master of none, or as being a one-trick pony who successfully maximizes their skill rating? I don't know... This alpha controls how conservative we want this system to be, since the answer to the previous question is highly community dependent and cannot be easily determined by the scores given to me. It's only sub-optimal because there is no objective criterion measuring the best way to define skill; that's practically impossible. The best I can do is give the community control to define that, however the hell they want... However optimal or not this approach is, it's the best we can do in an attempt to design a robust tentative model catered to FFR, at least until rhythm game skill determination and stepfile difficulty measurement are fully standardized across the entire rhythm game community (good luck getting that lmfao).
So what's the answer here? We cannot compare skills unless we have a solid definition of the word "skill", specifically of who is more skilled than whom. You responded:

Quote:
Originally Posted by xXOpkillerXx View Post
Suddenly, the skill rating ordering of those 4 players become the following (in a weighted system):

A ~= B ~= C > D

Whereas in an unweighted setting, this is what it'd look like:

A ~= B ~= C ~= D

This is mathematically unavoidable, and is the very definition of what I call unfair.
I agree with what is stated here, but this is only one dimension of the current problem. We can easily hyperfocus on this definition and make sure this equality condition is met. However, in the unweighted setting, there are a number of examples you need to "sacrifice" in order to fully make A ~= B ~= C ~= D work, including:

Quote:
Originally Posted by WirryWoo View Post
Player A: https://www.flashflashrevolution.com...me=Chloe_edz15 (Weighted: 0 (flagged as inconclusive), Unweighted: 7.83)
Player B: https://www.flashflashrevolution.com...ername=Soure97 (Weighted: 93.25, Unweighted: 74.67)
Player C: https://www.flashflashrevolution.com...=Guilhermeziat (Weighted: 87.7044, Unweighted: 52.17)
(there are more examples)
So when you make these sacrifices, do you really get A ~= B ~= C ~= D? This goes back to my other point about the unweighted system's flaw:

Quote:
Originally Posted by WirryWoo View Post
This is less of a problem to me than what I wrote previously, but one of the main drawbacks I see with the unweighted system is that it is forced to have this minimum requirement from the players to make the unweighted system work. Because of this forced requirement, you are requiring everyone who hasn't played 50 to 100 songs to play (ideally seriously) in order to be considered ranked and to improve the representation of the unweighted rankings. So there is a huge reliance on the players to play their part in making the unweighted system work. This isn't realistic in practice, and this is why I call the unweighted system much more favorable to "active players". The ones who are committed to contributing to the high scores will be the ones who make the unweighted setting work.
And realistically, encouraging many players to fix their ranks will not happen unless there is a strong incentive for them to do so.

So the following things will most likely have to happen if we move forward with unweighted:
• We change the definition of what "skill rating" means, because now we'll have a poorer definition of skill (examples provided above). People can easily say:
Quote:
Originally Posted by WirryWoo View Post
Seeing Myuka ranked as 100 would be very frustrating from the player's experience. Every now and then, you'll see a high D7 player post "I just beat Myuka's skill rating lmfaoooo!!" on the forums. Is this sort of the dynamic you want skill ratings to be on FFR? Yeah... I don't think so.
• We need to design a mechanism to toggle between when the Top 5 vs. the Top 100 average is more accurate, all based on sentiments like "this person is bsing, let's rely on Top 5". In most cases these calls are obvious, but in terms of model design, they are subjective.
• We need to create more and stronger incentives for players to play their high scores as optimally as possible.

But my counters to each point (in respective order):
• Having a skill rating that isn't truly reflective of skill is paradoxical. It will only be as reflective as the most active players make it. We can do better...
• Why rely on two models when we can rely on one? Introducing any external assessment to choose whether unweighted Top 5 or unweighted Top 100 makes more sense injects someone else's bias into the system. Do we want that to define skill ratings? I don't. I'd rather rely on my scores to make that determination for me.
• Why bother relying on the inactive players when we have the scores to help us define the skill ratings? We don't need them, and it's likely that they don't give any shits about us either lol.

It's clear that there are many issues with unweighted. The biggest reason is that the unweighted mechanism sits at one extreme end of all possible solutions (let's call this the "black" solution). The "white" solution would be the case where your skill rating is extremely weighted and is defined by the performance of your #1 score alone. Our current system is a "very very light grey" solution, and it's easy to see the issues of that weighted system; the current system understandably exhibits many of the flaws of a "white" solution. Specifically, they're all of your arguments against weighted, and they're for the most part valid. There are also many flaws in a "black" solution (e.g. the reasons I posted).

The best solution is one that trades off between the "white" and "black" solutions, where the pros of each extreme are emphasized and the cons of each extreme are minimized. This is why the problem clearly demands a "darker grey" solution: we can already see the issues of the "nearly white" solution (i.e. our current system).
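To make this spectrum concrete, here's a minimal sketch (illustrative only; the geometric falloff and the fake scores are assumptions for demonstration, not the actual model from my notebook) where a single decay parameter sweeps a rating between the two extremes:

```python
import numpy as np

def grey_rating(scores, decay):
    # decay = 1.0 reproduces the unweighted ("black") average;
    # decay near 0 approaches the #1-score-only ("white") extreme.
    scores = np.sort(np.asarray(scores, dtype=float))[::-1]  # best score first
    weights = decay ** np.arange(len(scores))                # geometric falloff
    return np.average(scores, weights=weights)

top100 = np.linspace(95, 40, 100)     # made-up top-100 scores
print(grey_rating(top100, 1.0))       # "black": plain average, 67.5
print(grey_rating(top100, 0.5))       # a "grey": top scores dominate
print(grey_rating(top100, 0.01))      # near-"white": essentially the #1 score
```

Every decay value strictly between the endpoints is a "grey"; the argument is over where on that dial to sit, not over which endpoint to pick.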

I've been advocating for this "grey" solution numerous times:

Quote:
Originally Posted by WirryWoo View Post
I get it. Our current weighted system does not do it well, but this doesn't necessarily translate to "any weighted solution cannot do that". It's a tradeoff between "improving representation of lower ranked files" and "rewarding performance for songs subjectively more challenging than what your current skill rating suggests", and in my opinion, that should be respected.
Quote:
Originally Posted by WirryWoo View Post
Back to (*), this is where the word "regulated" plays the biggest role in my statement. The current system highly favors the Top 10ish songs. That is not regulated, because you get rewarded way too much for scoring your #1 and pretty much nothing for scoring your #15. So what does "regulated" mean here? It means we control the weights appropriately so that each song has a representative share in the skill rating metric, while keeping the metric reflective of the player's experience. Controlling the weights includes dealing with outliers like people not completing their Top 100 or half-assing their Top 100. This is why I proposed a linear progression of the weightings: although my satisfaction between my #20 and #15 will differ from someone else's satisfaction between their #20 and #15, the linear progression distributes the weights as consistently as possible without declaring #15 significantly more important than #20 (unlike our current system, which makes #1 more than 140 times as important as your #15 lmfao).
For some reason, you seem to view this as a binary choice between black and white. That assumes any two weighted options perform identically when compared against each other, i.e. that any weighted option I propose will yield no change compared to our current weighted system. Do you agree with that? I sure hope not. The Colab notebook I provided showed a ton of shifting and recalculation of skill ratings, so some things are definitely changing...

Here are a few examples:

Format: Username (Current, Regulated Weighted, Unweighted)

RadiantVibe (97.53, 92.2631, 90.91)
Andrew WCY (96.67, 94.0343, 93.01)

CammyGoesRawr (93.80, 88.0389, 86.59)
Hakulyte (93.25, 90.5301, 89.72)

Currently, both RadiantVibe and CammyGoesRawr are rewarded for their top scores much more than Andrew WCY and Hakulyte. You see this under "current".

Both regulated weighted and unweighted settings agree that Andrew WCY > RadiantVibe and Hakulyte > CammyGoesRawr. This is the regulated solution acknowledging the pros of the unweighted solution and factoring that into the overall calculations.

I'm happy to chat through more examples if two users want to do a comparative analysis...

However, the unweighted solution fails on the counterexamples provided above:

Quote:
Originally Posted by WirryWoo View Post
Player A: https://www.flashflashrevolution.com...me=Chloe_edz15 (Weighted: 0 (flagged as inconclusive), Unweighted: 7.83)
Player B: https://www.flashflashrevolution.com...ername=Soure97 (Weighted: 93.25, Unweighted: 74.67)
Player C: https://www.flashflashrevolution.com...=Guilhermeziat (Weighted: 87.7044, Unweighted: 52.17)
(there are more examples)
This is the regulated solution acknowledging the pros of the weighted solution. Do you now see how valuable it is to look at the "greys" rather than hyperfocusing on "black" and "white" as the only options?

For Zageron's case:

Zageron (61.80, 54.517, 43.31)

Clearly, the weighted solution suggests that Zageron can score at the competency of someone who can AAA around difficulty ~54, and lo and behold... he did (Rat Twist)! Was that a fluke run? I don't know. All the model did was capture the relevant signals in his Top 100 high scores.

If you still think ~54 is too generous for Zageron, this is where alpha comes into play. We can tweak alpha to make the algorithm more or less conservative. I give the developers and the community the power to define these standards in accordance with what they think is best for all players moving forward. As an individual, it's not my right to define this on behalf of the community. That's the value of where the alpha and head parameters come into play in the model.

The closer alpha is to 0, the more conservative the weighted model is, and therefore the more representative the top 15 is of your skill rating. The closer alpha is to 1, the less conservative the weighted model is, and therefore the more representative the top 100 is of your skill. Alpha is a tradeoff parameter between the overall value of the top 15 vs. the top 100 (in machine learning speak, think regularization). This alpha has to be set as one standard for all players, unless we devise a new algorithm estimating alpha per player as a function of their high scores.
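As a sketch of what that tradeoff can look like (this is one plausible reading, a plain convex blend; the actual notebook model may combine the two averages differently):

```python
import numpy as np

def blended_rating(scores, alpha):
    # Hypothetical reading of the alpha tradeoff: a convex blend between
    # the top-15 average (alpha = 0, conservative) and the top-100 average
    # (alpha = 1). Only meant to show one knob sweeping between two views.
    s = np.sort(np.asarray(scores, dtype=float))[::-1]
    return (1 - alpha) * s[:15].mean() + alpha * s[:100].mean()

scores = np.linspace(90, 30, 100)     # made-up top-100 scores
for a in (0.0, 0.5, 1.0):
    print(a, round(blended_rating(scores, a), 2))
```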

Quote:
Originally Posted by xXOpkillerXx View Post

2.1. Top X size
Say we put X at 50. A player would have their skill level biased toward a single skill (in any system) once they have achieved at least 25 scores

Now say a player's level should be based mostly on files that are ±5 levels around their skill level (assuming enough files provided in that range). We also know that files difficulty ranges between 1 and ~120 (for simplicity).

This means that every new file has a (5+5) / 120 = 8.333% chance of being in your range.

FFR releases files at a rate of ~4 files per week + an additional 80 files ish for events yearly. Per year, that's around 4 * 52 + 80 = 288 files, so lets make this 300 to account for events I might be forgetting (a higher number favors your argument). This means that every year, there are about 288 * 8.333% = 24 files in your specific range (assuming you stay at the same level).

To reach the necessary 25 skill-specific files, you would need a minimum of 1-ish year of content if all files were specifically biased toward your strong skill. If we consider the fact that there are various skills, lets say 5 (a bit less than etterna), that means every 5 years there is a high probability that some players will have enough files to still generate a biased skill rating despite it being a top 50.

However, we all know that there aren't that many extremely biased files after nearly 20 years of content. This simply shows that files are on a wide spectrum regarding what skill they test. That being said, we can see that in theory, we Should scale any system's top X size every 5 years or so, but that in practice, it's probably something closer to 50+ years. Why 50+ ? Because I dare you to find a handful of players with relatively optimal top 50 scores where at least 25 scores are clearly focused around a single skill.
"This means that every new file has a (5+5) / 120 = 8.333% chance of being in your range." This quote makes the false assumption of every new file having equal probability being rated between difficulty 1 to 120, which is honestly pretty silly and untrue. It also assumes that every batch has the same representation of difficulties but it's clear that just by looking at Official Tournaments, there's a higher bias for harder songs. (I also think a lot of stepartists have the self-interest to step harder songs in general, but this is just my personal sentiment that maybe many people can agree) Therefore this computation is inaccurate. It's realistically dependent on the player's skill and the contributing stepartist's submissions in the batch. Therefore your analysis to 50+ years is unreliable.

Instead, let's look at past data, where we know for a fact that songs were released in some sequential order. Before 2004, when files were hosted in the Legacy engine, there were a good number of opportunities to score well on songs requiring trilling: One Minute Waltz, Flight of the Bumblebee, and Runny Mornings (SGX Mix), debatably Molto Vivace. Players who excelled at trilling performed relatively well on these files and would have benefited highly from any system imposed. You can consider their performances "flukes" (similar to Zag's performance on Club and AIM Anthem). Due to the smaller number of files in the engine, a well designed skill rating system back then would have required something like a Top 10, regardless of whether it was weighted or unweighted.

Fast forward to today: we have La Campanella, Giselle, MAX Forever, and other trilly songs that I can't think of right off the top of my head, but I'm very confident there are at least 10 songs currently in the engine that emphasize trilling. This is equivalent to Zageron having more files similar to Club and AIM Anthem, and therefore more opportunities to fluke. A Top 10 would easily be filled with trilly songs, so there is a need to scale out. For trill files, that time span was less than 20 years. For other patterns, the length will vary depending on the previously mentioned variables (stepartist song submissions, batches, events, etc.).

Regardless of how long it takes for scalability to become a need, the main point is that there is some point in the future where we will need to revamp the system. Maybe the Top 100 will be too small a sample size, so we'll need a Top 150, or a Top 200, etc. in order to maintain the accuracy of skill ratings.

The issue posed for the unweighted setting is that it will be difficult to re-rank players who joined FFR in 2002 and then stopped playing the game. Your solution (which I personally characterize as "hacky") is to filter these players out so that their ratings don't get factored into the overall high scores. This goes back to my thoughts about the minimum requirement:

Quote:
Originally Posted by WirryWoo View Post
It's perfectly fine to enforce a minimum requirement in both settings (it probably is better in both cases, because it's ridiculous to assign a skill rating to having played one song). This is less of a problem to me than what I wrote previously, but one of the main drawbacks I see with the unweighted system is that it is forced to have this minimum requirement from the players to make the unweighted system work. Because of this forced requirement, you are requiring everyone who hasn't played 50 to 100 songs to play (ideally seriously) in order to be considered ranked and to improve the representation of the unweighted rankings. So there is a huge reliance on the players to play their part in making the unweighted system work. This isn't realistic in practice, and this is why I call the unweighted system much more favorable to "active players". The ones who are committed to contributing to the high scores will be the ones who make the unweighted setting work.

The weighted system I designed is a lot more lenient about the minimum requirement (we are free to choose this requirement independently of the model's development). You can choose any reasonable minimum requirement for each player to satisfy, and regardless of whether that requirement is met, the model attempts to find the best representation of skill using the weighted setting. Those who don't meet the minimum requirement are simply excluded from the high scores via a defined conditional filter (e.g. don't show a username in the high scores if they haven't played 50 or 100 songs).
When you filter these players out, this also changes the definition of "skill rating". Imagine if Usain Bolt did not participate in the Summer Olympics this year but attended four years ago. Has his skill changed? Maybe, maybe not. The point is, he should still roughly have the skills to perform at the Olympic level if he were to attend. Your suggestion is to mark him as "no skill due to not participating", whereas my solution respects his skill given his past performance and tries to acknowledge it despite not seeing his performance this summer. Which one better measures skill?

Quote:
Originally Posted by xXOpkillerXx View Post
Although you seem to very much consider the long term effects of new content, you don't really address the short term effects. In a weighted system, where bias is significantly greater than in an unweighted system, every single new file that is biased enough toward one skill will create more unfairness in its specific difficulty range.

In order to have some fairness, you'd need enough of these biased files in each specific skill for anyone to fill optimal 25 scores with any random combination of these files in their difficulty range. Mathematically, this means you need:

25 (majority of 50) * 5 (number of skills) * 12 (minimum number of distinct difficulty ranges in a 1-120 system) = 1500 skill specific files
(assuming perfect distribution between skills and difficulty ranges, which is even more unrealistic)

This number will take Far longer to achieve than the 50+ years needed to make top X size a serious concern (when X >= 50).
The short term effects are addressed by the regulated system's goal of accounting for the pros and cons of both the weighted and unweighted settings. The results above speak for themselves.

Quote:
Originally Posted by xXOpkillerXx View Post
2.3. Scaling conclusion
It honestly doesn't seem adequate to focus too much on scaling issues, as both systems would be fine for a very long time. My problem with weighted, however, is that it will forever be unfair.
This is under your definition of "fair". There are multiple definitions to consider, as mentioned previously, but you're hyperfocused on one, trying to patch the faults of the unweighted setting by filtering inactive players, relying on their involvement, etc.

My definition of fair incorporates the fairness criteria offered by both the weighted and unweighted settings and trades off between the two, generalizing that definition of "fairness" without relying on anything except the scores given to me. In my opinion, that is what defines a skill rating.

It's also important to focus on building the most scalable solution possible. The earlier, the better. Otherwise, we'll be having this conversation again when the hacky solution fails.

Quote:
Originally Posted by xXOpkillerXx View Post
3. On rewarding top scores, irrelevant of rating system
I am very aware of the fact that many of you can't accept seeing players with a few great scores being ranked too low due to unoptimal top X. I agree that this is subjective and that every player has their right to assign as much importance they want to that flaw. For that reason, I will propose a slight change in FFR's design to hopefully fix that.

Do keep in mind that although I suggest this new idea to complement an unweighted skill rating system, I also believe it should be implemented even if a weighted system is chosen.

3.1. The suggestion
Some of you may or may not have noticed that, in a player's leaderboard page, their Top 5 unweighted average and their Top 100 unweighted averages can already be seen. This is essentially the first step of what I think is a great step forward.

A Top 5 metric fully embraces skillset bias and fluke scores, as these are inevitable over time for a non-negligible number of players. Not only does it suffer no scaling issues, it also takes into account All players, retired or not, since the very beginning of FFR. This metric basically reflects the current weighted top 15, but removes the unnecessary weights and simplifies the process.

A Top 50 (or Top 100) metric would do everything I've been arguing for, which is maximized fairness and simplicity of outliers.
My biggest issue with this is that you are now injecting your own personal bias into the skill rating. Specifically, you're making the conscious decision of answering the question "when should I rely on Top 5 vs. Top 50/100?" That is a choice you make, not the model.

This is like conditionally choosing whether a chess player's Elo rating or their win percentage is more definitive of their skill. I disagree with this completely, because skill is measured by your performance and your scores, not by someone else's conscious decision between two different metrics.

Quote:
Originally Posted by xXOpkillerXx View Post
3.2. User friendliness
In all honesty, I despise the argument "But players might prefer a single number to represent their rating". First of all, we do have a unique number that represents one's solo level and it's called just that: Level. There is absolutely no reason to enforce a unique metric for player comparison, because I could just as easily say "But some players might prefer having different options to compete for", which is equally valid and subjective.
Isn't this what we're arguing about, though? Specifically, how is Level computed? Do we want the weighted or the unweighted setting to perform the computation of Level?

""But some players might prefer having different options to compete for", which is equally valid and subjective." Because choosing between two different options as the "official rating" is subjective from the modeling standpoint in the conversation of tracking skill rating, it's not valid because now you are comparing apples to oranges. This is a paradoxical statement. Do we want skill to be defined subjectively by someone's brief look at your level ranks or objectively by the scores that you produce? I personally prefer the latter.


Quote:
Originally Posted by xXOpkillerXx View Post
There is also the current issue of "how do I compute my skill rating ?", something that can be seen pretty often in either discord or multiplayer. Then, some experienced player may decide to take their time to explain weights and stuff, and eventually it takes quite some time to compute manually anyway (if you want to see the effect of a potential change). This issue should definitely be less apparent with the proposition I make. We can expect people to ask "Why are there 2 ratings and what do they mean ?", but clearly it should not take longer to explain two simple (no weights) averages; I'd say it should even be shorter to explain tbh.
I agree that the unweighted setting is easier to explain, no matter how the weighted configuration is engineered. But do we care more about being transparent or about being more accurate in defining skill? This is a tradeoff we have to make, because the simpler the approach is, the more susceptible the model becomes to performing poorly on the outliers defined previously. The code I wrote is really simple to explain as well (not as simple as unweighted, but still relatively easy to understand), and it can easily be done with pictures.

Quote:
Originally Posted by xXOpkillerXx View Post
3.3. Appearance on the website and game
I think both metrics should have their respective leaderboard, and that everywhere that the current "Skill Rating" is listed should be split in 2 cells of Top 5 and Top 100. This involves a bit more development, but pretty minor changes afaik, as there is nothing drastically new to implement.
Terrible design idea in my opinion. A bystander will simply think you might as well have a Top 10, Top 100, Top 1000, etc. Do you see any high scores list that contains two different scoring metrics? No, and neither have I.

Two rating systems reserved for defining skill rating will not address the "I just beat Myuka's skill rating lmfaoooo!!" issue. The next natural question for someone new to the game is "which one is better?" Shouldn't the rating system be one centralized system that easily allows the user to make valid comparisons to the people surrounding them in the high scores? How does ranking even work in this case? lol

Quote:
Originally Posted by xXOpkillerXx View Post
I personally don't think it's ok to assume that a player's scores will on average match a linear curve when it comes to effort put into each of the scores. However I do agree that it can be great to reward the top scores. Therefore, the 2-metric system would not have that decay you suggest, but it would still give significant value to your top 5 scores.
I only created a linear progression because I want each song to make a representative contribution (with respect to its placement in your high scores) to the skill rating, without encoding any additional bias when comparing two songs. Specifically, the delta between your #1 and #2 weight percentages is the same as the delta between your #51 and #52 weight percentages.
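As a minimal sketch of that linear progression (assuming weights proportional to 100, 99, ..., 1 over a Top 100, which is one concrete instance of the scheme, not necessarily the exact notebook constants):

```python
import numpy as np

# Linear progression over a Top 100: weights proportional to 100, 99, ..., 1,
# normalized to sum to 1. The drop between any two adjacent ranks is identical.
w = np.arange(100, 0, -1, dtype=float)
w /= w.sum()

print(w[0] / w[14])                 # #1 vs #15: only ~1.16x, not >140x
print(w[0] - w[1], w[50] - w[51])   # identical delta between adjacent ranks
```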

Quote:
Originally Posted by Gradiant View Post
Confused with the difference here. With both of these systems, players aren't going to be listed if they haven't hit whatever minimum is in place.

Also in general, don't really like the 'people are going to have to play' argument against average system. What is the whole point of this game anyway but to play files to get scores they think are good? I mentioned this in the discord when op brought it up, but the token requirement for coactive is to AAA 50 different files in a day. So playing 50 different files just to best of ability not even AAA'ing shouldn't take longer than a day either. Don't think this is too much to ask for at all for the benefit of being on a leaderboard. And if they don't care enough about playing the game, then they're not listed in the leaderboard like the bolded part of that 2nd part in the quote.

Also thinking of games like moba's or stuff like starcraft where you go through placement matches before being ranked; those games a match could go anywhere from like 30min to an hour, compared to an ffr file being like 2 minutes or so. The times required for the placement matches would be similar to hitting whatever minimum number of files played to be on ffr's leaderboard.
The difference is that the unweighted setting has a huge dependency on these filters to work at all. In the weighted setting, the ratings are completely independent of the filters assigned: you can freely choose how to filter the scoreboard without relying on the weighted model.

You're right that it's not a huge ask; I get that. What I'm saying is that when either system calls for a revamp after more files get added to the engine, you will need to rely on everyone who previously met that requirement to keep the unweighted system working over time. For the weighted setting, you don't need their involvement at all, because the data is already there, so why not extract the best value out of that information? The unweighted setting is too restrictive to make use of the information provided by inactive players, and filtering them out of the scoreboard is only a hacky way of hiding the deficiencies in the unweighted setting.

I agree with needing a minimum requirement, similar to ranked queues in League, StarCraft, etc., in order to qualify for the high scores. I disagree with how dependent the unweighted system is on user activity (i.e. on player involvement to make the unweighted configuration more reliable and representative) to measure skill rating. You don't need any of that in the weighted setting.
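For what it's worth, the kind of filter I mean is trivial and sits entirely outside the rating model. A hypothetical sketch (the threshold and player data are made up):

```python
# Hypothetical minimum-requirement filter, independent of how the ratings
# themselves are computed: players under the threshold are hidden from the
# leaderboard but keep whatever rating the weighted model assigned them.
MIN_SONGS = 50  # assumed threshold; the post mentions 50 to 100

players = [
    {"name": "ActivePlayer", "rating": 74.2, "songs_played": 120},
    {"name": "NewPlayer",    "rating": 81.0, "songs_played": 12},
]

leaderboard = sorted(
    (p for p in players if p["songs_played"] >= MIN_SONGS),
    key=lambda p: p["rating"],
    reverse=True,
)
print([p["name"] for p in leaderboard])  # NewPlayer is hidden, not unrated
```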