Flash Flash Revolution

Flash Flash Revolution (http://www.flashflashrevolution.com/vbz/index.php)
-   FFR General Talk (http://www.flashflashrevolution.com/vbz/forumdisplay.php?f=14)
-   -   Entropy Gain for per-receptor NPS (http://www.flashflashrevolution.com/vbz/showthread.php?t=149402)

xXOpkillerXx 07-6-2018 06:34 PM

Entropy Gain for per-receptor NPS
 
I'm currently working on adding a few more metrics to the extended statistics of every file, with the help of PrawnSkunk to validate and integrate them into the website. I'm reaching out to everyone who has some knowledge of machine learning and maths.

The first stats I finished coding are the NPS (split just like the current total NPS by different timeframes like .3s, .5s, 1s, 2s, etc.) for individual receptors (left, down, up, right). My intuition is that two 4 NPS sections like [1,2,3,4] vs [1,1,1,1] have absolutely different difficulties, the latter being much more difficult. So, do you think that those + the total NPS would give a significant entropy gain (or any equivalent depending on the model) in computing the difficulties of the files?
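A minimal sketch of what I mean, in Python (assuming a chart is just a list of (time_seconds, column) note events with columns 0-3 = left/down/up/right; the names and window handling here are mine, not FFR's actual code):

```python
def max_nps_per_receptor(notes, window=1.0):
    """Max notes-per-second each receptor sees in any `window`-second span."""
    best = [0.0, 0.0, 0.0, 0.0]
    for col in range(4):
        times = sorted(t for t, c in notes if c == col)
        # Slide a window anchored at each note and count the notes inside it.
        for i, start in enumerate(times):
            count = sum(1 for t in times[i:] if t < start + window)
            best[col] = max(best[col], count / window)
    return best

# [1,1,1,1] in one second: all four notes land on the left receptor.
jack = [(0.00, 0), (0.25, 0), (0.50, 0), (0.75, 0)]
# [1,2,3,4] in one second: one note per receptor.
roll = [(0.00, 0), (0.25, 1), (0.50, 2), (0.75, 3)]

print(max_nps_per_receptor(jack))  # [4.0, 0.0, 0.0, 0.0]
print(max_nps_per_receptor(roll))  # [1.0, 1.0, 1.0, 1.0]
```

Both sections are 4 total NPS, but the per-receptor split separates them cleanly.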

Any ideas/questions appreciated.

Dinglesberry 07-6-2018 07:36 PM

Re: Entropy Gain for per-receptor NPS
 
Could you factor in the occurrence of certain types of notes during sequences? Just as an example, a 20 nps section of single note streams is probably much harder than a 20 nps section of dense js where every other note is a jump, so maybe you could find some ratio of single notes to jumps etc

Obviously 4 nps of repeated jacks is harder than 4 nps of a roll etc, but there are also things like rolly 20 nps streams generally being easier than 20 nps streams with lots of ohts

TheSaxRunner05 07-6-2018 07:40 PM

Re: Entropy Gain for per-receptor NPS
 
I'd be interested to see its results. I would think that there would be three basic pattern difficulties - NPS, jacks, and predominantly one-handed patterns. The coding would have to be able to read a song like club, which has a max NPS of only 16 but is considered a 75 currently. I would also think a song should get a bump in difficulty if it alternates between all three of those categories, or combines them, instead of just focusing on one. (I think it's part of the reason "Southern Cross" has seen such a drop in its recognized difficulty - modern stepcharts are much more likely to mix in a greater variety of complex patterns over just having speed).

There are certain charts I've always felt were underrated, and if you had a draft program sometime, I'd give you a short list to test.

xXOpkillerXx 07-6-2018 07:44 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by Dinglesberry (Post 4626900)
Could you factor in the occurrence of certain types of notes during sequences? Just as an example, a 20 nps section of single note streams is probably much harder than a 20 nps section of dense js where every other note is a jump, so maybe you could find some ratio of single notes to jumps etc

Hmmm, what you're saying is that [1, {2,3}, 4] in 1 second is harder than [1, 2, 3, 4], am I correct? Remember that if you have the same nps, the gap between {1} and {23} in js will be bigger than the one between {1}, {2} and {3} in pure stream. If you do believe that the former is more difficult, could you please elaborate on why?

Quote:

Originally Posted by Dinglesberry (Post 4626900)
Obviously 4 nps of repeated jacks is harder than 4 nps of a roll etc, but there's also things like 20 nps of streams that are rolly are generally easier than 20 nps of streams with lots of ohts

Those would also be taken into account with the per-receptor nps! For example, if you have a stream like [1, 2, 3, 4, 1, 2, 3, 4] over 2 seconds, all receptors will have the same max nps of 1. On the other hand, if you have [1, 2, 1, 2, 3, 4, 3, 4] over 2 seconds, all receptors will have a max nps of 2!
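To make those two numbers concrete, here's a rough sketch (Python, assuming (time_seconds, column) note events and an illustrative 1-second window; none of this is the actual site code):

```python
def receptor_and_total_nps(notes, window=1.0):
    """(per-receptor max nps, total max nps) over any `window`-second span.
    notes: (time_seconds, column) pairs, columns 0-3 = left/down/up/right."""
    def max_nps(times):
        times = sorted(times)
        best = 0.0
        for i, start in enumerate(times):
            count = sum(1 for t in times[i:] if t < start + window)
            best = max(best, count / window)
        return best
    per_receptor = [max_nps([t for t, c in notes if c == col]) for col in range(4)]
    total = max_nps([t for t, _ in notes])
    return per_receptor, total

# [1,2,3,4,1,2,3,4] over 2 seconds: each receptor sees 1 note per second.
spread = [(i * 0.25, i % 4) for i in range(8)]
# [1,2,1,2,3,4,3,4] over 2 seconds: each receptor sees 2 notes in one second.
paired = [(0.00, 0), (0.25, 1), (0.50, 0), (0.75, 1),
          (1.00, 2), (1.25, 3), (1.50, 2), (1.75, 3)]

print(receptor_and_total_nps(spread))  # ([1.0, 1.0, 1.0, 1.0], 4.0)
print(receptor_and_total_nps(paired))  # ([2.0, 2.0, 2.0, 2.0], 4.0)
```

Note the total max nps is identical for both streams; only the per-receptor split tells them apart.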

Thanks for your questions

xXOpkillerXx 07-6-2018 07:51 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by TheSaxRunner05 (Post 4626902)
I'd be interested to see its results. I would think that there would be three basic pattern difficulties - NPS, jacks, and predominantly one-handed patterns. The coding would have to be able to read a song like club, which has a max NPS of only 16 but is considered a 75 currently. I would also think a song should get a bump in difficulty if it alternates between all three of those categories, or combines them, instead of just focusing on one. (I think it's part of the reason "Southern Cross" has seen such a drop in its recognized difficulty - modern stepcharts are much more likely to mix in a greater variety of complex patterns over just having speed).

There are certain charts I've always felt were underrated, and if you had a draft program sometime, I'd give you a short list to test.

I don't have access to ffr's files so I'm a bit limited right now for the tests (I need to convert sm files, which I haven't done yet). I'll post an update if I get more stuff to test.

For the one-handed stuff, I was already thinking about adding the same kind of nps splits but for left hand and right hand, so all {1} or {2}, and all {3} or {4}. That way the one-handed trilling bias would be accounted for in the metrics, along with jumpjacks on a single hand.

EDIT: I kinda get what you mean with the alternating patterns, but I don't think I agree. Would you have any other examples of it so that I can check them out? A metric of variety in patterns sounds pretty hard to define mathematically, although not impossible; it could still be computed using some kind of normalized variance on the different nps metrics. For example, whether the nps-per-receptor has definite peaks vs a spread out progression vs constant nps, etc.
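Something like a coefficient of variation on a per-second nps series could be a starting point for that variety metric (very much a sketch; the numbers below are made up):

```python
import statistics

def nps_variety(nps_series):
    """Normalized spread of an nps time series: 0 for constant density,
    higher for charts with definite peaks."""
    mean = statistics.mean(nps_series)
    if mean == 0:
        return 0.0
    return statistics.pstdev(nps_series) / mean

constant = [12, 12, 12, 12, 12, 12]   # steady stream
spiky    = [4, 20, 4, 20, 4, 20]      # bursts separated by breaks
print(nps_variety(constant))  # 0.0
print(nps_variety(spiky))     # ~0.67
```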

TheSaxRunner05 07-6-2018 07:51 PM

Re: Entropy Gain for per-receptor NPS
 
What I think would be interesting is to get a few players together and make two charts with various patterns in them - one set of simpler patterns and one set with more complex patterns (jumpjacks and handstream, etc).

Have each player submit scores on different rates of the chart and plot the decline in scores as the rate increases until they reach a point of just mashing. Use math then to determine the relative difficulty of certain patterns over others.

Using this method, you'd be able to compare "160 BPM handstream vs 190 BPM Jumpstream" for example, or "jumpstream with mini-jacks vs jumpstream without them." Using multiple players will help reduce player ability bias.
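That experiment could boil down to something like this sketch: estimate, per player, the rate at which accuracy falls below a threshold, then compare pattern sets by that breakdown rate (the data and the 93% threshold here are made up for illustration):

```python
def breakdown_rate(points, threshold=0.93):
    """points: (rate, accuracy) pairs sorted by rate. Returns the
    interpolated rate at which accuracy first drops below `threshold`."""
    for (r1, a1), (r2, a2) in zip(points, points[1:]):
        if a1 >= threshold > a2:
            frac = (a1 - threshold) / (a1 - a2)
            return r1 + frac * (r2 - r1)
    return points[-1][0]  # accuracy never dropped below the threshold

# Hypothetical accuracy-vs-rate data for one player on two test charts.
handstream = [(1.0, 0.99), (1.1, 0.97), (1.2, 0.90), (1.3, 0.78)]
jumpstream = [(1.0, 0.99), (1.1, 0.98), (1.2, 0.96), (1.3, 0.88)]

print(breakdown_rate(handstream))  # breaks down around 1.16x
print(breakdown_rate(jumpstream))  # holds up longer, around 1.24x
```

Averaging the breakdown rates over several players would damp individual ability bias.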

Dinglesberry 07-6-2018 07:52 PM

Re: Entropy Gain for per-receptor NPS
 
I meant more so like, denser jumpstream patterns, like suppose for example you had

2 (13) 2 (14)

As a pattern in jumpstream - in order to achieve the same nps with streams, the patterns would have to be faster since you don't have the double notes, but equally "difficult" streams would just be like, a 4 note one hand trill based on the pattern of js etc, but that wouldn't make up the same nps

Also consider a situation, a pattern that's just a jumptrill (12) (34) (12)... Is arguably the same difficulty as (1) (4) (1) (4) or even (12) (1) (12)... Despite different nps

xXOpkillerXx 07-6-2018 08:03 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by TheSaxRunner05 (Post 4626906)
What I think would be interesting is to get a few players together and make two charts with various patterns in them - one set of simpler patterns and one set with more complex patterns (jumpjacks and handstream, etc).

Have each player submit scores on different rates of the chart and plot the decline in scores as the rate increases until they reach a point of just mashing. Use math then to determine the relative difficulty of certain patterns over others.

Using this method, you'd be able to compare "160 BPM handstream vs 190 BPM Jumpstream" for example, or "jumpstream with mini-jacks vs jumpstream without them." Using multiple players will help reduce player ability bias.

The idea is good but there's one critical point that makes it not work: you have to extract all the patterns from a file, at various speeds. This is far from a trivial task and I don't think I can achieve such a model tbh. Your concept kinda goes in the direction of a fully unsupervised model with very few attributes outputting some regression. It's not really doable I would think.

xXOpkillerXx 07-6-2018 08:15 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by Dinglesberry (Post 4626908)
I meant more so like, denser jumpstream patterns, like suppose for example you had

2 (13) 2 (14)

As a pattern in jumpstream - in order to achieve the same nps with streams, the patterns would have to be faster since you don't have the double notes, but equally "difficult" streams would just be like, a 4 note one hand trill based on the pattern of js etc, but that wouldn't make up the same nps

I don't think you can compare [2, {13}, 2, {14}] with [2, 1, 2, 1] at a faster speed, or I don't understand why you would? An equivalent stream would rather be [2, 1, 3, 2, 1, 4] or [2, 3, 1, 2, 4, 1] at a faster speed.

Quote:

Originally Posted by Dinglesberry (Post 4626908)
Also consider a situation, a pattern that's just a jumptrill 1. (12) (34) (12)... Is arguably the same difficulty as 2. (1) (4) (1) (4) or even 3. (12) (1) (12)... Despite different nps

Here, example 1. and 2. would have the same nps-per-receptor, even though their total nps would be different by a factor of 2. So yes, the difficulty would remain similar. Idk about 3.

One Winged Angel 07-6-2018 08:23 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by Dinglesberry (Post 4626900)
a 20 nps section of single note streams is probably much harder than a 20 nps section of dense js where every other note is a jump, so maybe you could find some ratio of single notes to jumps etc

I'd say the former is only marginally more difficult than the latter, at least at specifically 20nps. You're effectively comparing a 300bpm stream to 200bpm jumpstream with jumps every eighth. Putting aside patterning or any potential for stam drain and just trying to discern what's harder to maintain good PA on, I'd say the stream wouldn't be rated more than 5 points higher than the dense js, assuming the sections aren't drawn out for a long period of time.

At higher bpms the disparity in difficulty becomes a bit wider (e.g. 375 streaming pushes a speed threshold that 250 dense jumpstreaming doesn't quite match up with) but eventually you hit a point where they're both outside the realm of possibility to PA for almost everyone anyways (450 streaming vs 300 dense js etc.)

xXOpkillerXx 07-6-2018 08:34 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by One Winged Angel (Post 4626914)
At higher bpms the disparity in difficulty becomes a bit wider (e.g. 375 streaming pushes a speed threshold that 250 dense jumpstreaming doesn't quite match up with) but eventually you hit a point where they're both outside the realm of possibility to PA for almost everyone anyways (450 streaming vs 300 dense js etc.)

That still depends very much on the patterns. If the stream has for example [1, 2, 3, 2], the max nps on {2} will be high, and yes the difficulty will ramp up. The js can also be harder if it has anchors: [1, {23}, 1, {34}, 1, {24}], etc. I believe that judging the difficulty of files by their patterns, even if it seems intuitive, is the wrong way to go. Maybe I'm mistaken though, which is why this thread is up~

One Winged Angel 07-6-2018 08:42 PM

Re: Entropy Gain for per-receptor NPS
 
Yes with the latter part of my post you can assume the same as noted in the former, or better yet just a fairly equal note distribution for all four notes. Barring bullshit like anchors or one hand trills or patterning that trivializes the section almost entirely (like a giant roll), the stream would still likely have a slight edge in difficulty. I'd be willing to bet there's a considerably larger amount of D7+ players that can maintain better PA on 250 dense js over 375 streaming, despite equal nps.

edit:
Quote:

Originally Posted by xXOpkillerXx (Post 4626915)
That still depends very much on the patterns.

Quote:

Originally Posted by xXOpkillerXx (Post 4626915)
I believe that judging the difficulty of files by their patterns, even if it seems intuitive, is the wrong way to go.

op ily but i'm very confused (and i disagree wholeheartedly with the second quote; many difficulty algorithms largely rooted in nps to calc difficulty failed because they couldn't account for wild difficulty swings for charts at the same bpm due to incredibly lenient or extremely abrasive patterning)

I want to say patashu's TS difficulty calc took into account receptor nps and the results were super memey, but maybe you'll do it better (or maybe I'm mistaken and it just involved nps as a whole)

xXOpkillerXx 07-6-2018 09:04 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by One Winged Angel (Post 4626917)
Yes with the latter part of my post you can assume the same as noted in the former, or better yet just a fairly equal note distribution for all four notes. Barring bullshit like anchors or one hand trills or patterning that trivializes the section almost entirely (like a giant roll), the stream would still likely have a slight edge in difficulty. I'd be willing to bet there's a considerably larger amount of D7+ players that can maintain better PA on 250 dense js over 375 streaming, despite equal nps.

The thing is that you discard the "harder" and "easier" variations of the patterns we know (streams, jumpstreams, etc) to keep some kind of middle-ground difficulty, and you then generalize the comparison of said patterns by saying that overall, streams are harder than js at x nps. I'm not Totally refusing the idea, but the problem is that the middle ground you speak of can be composed of such a large number of combinations that the comparison becomes subjective and hardly defined.

Quote:

Originally Posted by One Winged Angel (Post 4626917)
op ily but i'm very confused (and i disagree wholeheartedly with the second quote; many difficulty algorithms largely rooted in nps to calc difficulty failed because they couldn't account for wild difficulty swings for charts at the same bpm due to incredibly lenient or extremely abrasive patterning)

It's ok to disagree ! My goal is to explain my ideas as best as I can, and discuss alternatives.

I just want to mention again that although I only talk about nps, there are Many metrics that can be extracted with that. What I mean by "judging with patterns" is any approach that tries to match hardcoded patterns in a file (kinda like a regex) and applies metrics to that; I believe it can never take into account every pattern and variation, as opposed to nps metrics that can model speed and hand bias in a way that encompasses all possibilities.

One Winged Angel 07-6-2018 09:12 PM

Re: Entropy Gain for per-receptor NPS
 
I'm not home anymore so cant post the response I want atm but I appreciate you striving to create something to tackle this problem rooted in objectivity (would love the same), just fearful of potentially poor results based on what others have tried to do in the past in a similar fashion

Also hi chooby I saw u infracted me but I cant open PMs on my phone but that's ok I probably deserved it ps I missed u

xXOpkillerXx 07-6-2018 09:16 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by One Winged Angel (Post 4626922)
I'm not home anymore so cant post the response I want atm but I appreciate you striving to create something to tackle this problem rooted in objectivity (would love the same), just fearful of potentially poor results based on what others have tried to do in the past in a similar fashion

I'll be waiting for your reply! I'm in no hurry and I still need to do Many things before I get to actually modeling anything. :)

EDIT: Even though I appreciate any comment about how x or y previous solution worked or not, since there are quite a bunch of ways to approach the problem, I'd prefer if details of the mentioned solutions are linked to or explained thoroughly. Otherwise, I can only guess stuff about the implementations and that would lead me nowhere most likely. More maths and machine learning arguments would be much more productive imo.

MinaciousGrace 07-7-2018 01:47 AM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by xXOpkillerXx (Post 4626921)
I just want to mention again that although I only talk about nps, there are Many metrics that can be extracted with that. What I mean by "judging with patterns" is any approach that tries to match hardcoded patterns in a file (kinda like a regex) and applies metrics to that; I believe it can never take into account every pattern and variation, as opposed to nps metrics that can model speed and hand bias in a way that encompasses all possibilities.

wrong

xXOpkillerXx 07-7-2018 02:06 AM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by MinaciousGrace (Post 4626942)
wrong

Well thank you for your detailed insight, I will consider your opinion !

No seriously provide information or don't post. Idc how much you know about it if you're gonna say yes/no. Ty :)

MinaciousGrace 07-7-2018 02:18 AM

Re: Entropy Gain for per-receptor NPS
 
i could explain myself but then i'd have to kill you

MinaciousGrace 07-7-2018 02:24 AM

Re: Entropy Gain for per-receptor NPS
 
man the only thing more cliche than that response would be if i had already written extensively on all of the relevant areas of discussion

then carefully organized said writing into a document that was made public

then spent thousands of hours doing practical implementation and testing of said thoughts

gosh that would really be the b side of a bollywood movie tier script

leonid 07-7-2018 02:41 AM

Re: Entropy Gain for per-receptor NPS
 
rong

xXOpkillerXx 07-7-2018 03:16 AM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by MinaciousGrace (Post 4626946)
man the only thing more cliche than that response would be if i had already written extensively on all of the relevant areas of discussion

then carefully organized said writing into a document that was made public

then spent thousands of hours doing practical implementation and testing of said thoughts

gosh that would really be the b side of a bollywood movie tier script

OR, you could post a link to said documentation, stop being an ass for absolutely no reason like you often are, and everything would've been cool~

You have yet to implement something that doesn't require so many bans on files, and how many times have I heard Etterna players say "wow this is nowhere near the rating I thought this would be worth". Now this thread is about model attributes, and if you don't feel like having a normal discussion about the various things that were mentioned so far, get lost man.

I will be fine with the link only. If you want to explain anything you feel would need closer attention, please go ahead.

MinaciousGrace 07-7-2018 03:55 AM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by xXOpkillerXx (Post 4626949)
You have yet to implement something that doesn't require so many bans on files, and how many times have I heard Etterna players say "wow this is nowhere near the rating I thought this would be worth". Now this thread is about model attributes, and if you don't feel like having a normal discussion about the various things that were mentioned so far, get lost man.

you do realize how ridiculously nonsensical this logic is right? i mean you clearly don't which is the essential problem here

im not here to help you; i did give you the information you needed to help yourself and explicitly rebuked your assessment of how patterns are unimportant and how nps metrics can be used in totality and if you stopped to think about it you would realize why ( SUPREME HINT: IT HAS TO DO WITH THE FACT THAT PATTERN CONFIGURATION HAS HIGHER POTENTIAL IMPACT ON DIFFICULTY THAN NPS )

im just here because its amusing to watch you get buttmad over my specific aversion to emotionally coddling you while giving you everything you need to figure shit out

my being an asshole has no bearing on your capacity to think about or understand things, but it's nice to see that you'll actively stymie your ability to do so just to spite me

MinaciousGrace 07-7-2018 04:04 AM

Re: Entropy Gain for per-receptor NPS
 
here's another free supreme hint:

define difficulty

e: supreme hint #3: if you can't articulate and understand a robust statistical definition of difficulty then you have no business going anywhere near machine learning or neural networks, although, not unironically, if you could you wouldn't be doing so in the first place

MinaciousGrace 07-7-2018 04:28 AM

Re: Entropy Gain for per-receptor NPS
 
supreme hint #4: ffr's difficulty is based on aaa rating which places greater influence on rating to specific/unique patterns, difficulty spikes, and generalized factors such as length, inevitably increasing overall variance particularly with non standard files and moreover increasing subjective variance when evaluating the accuracy of an estimated difficulty

supreme hint #5: supreme hint #4 should help you with #2 and #3

supreme hint #6: it's not that you're approaching the problem incorrectly because you're thinking of it incorrectly, it's that you haven't thought about it at all, you're trying to find answers to questions you didn't ask because you assume the answers will be self evident

they're not

MinaciousGrace 07-7-2018 04:44 AM

Re: Entropy Gain for per-receptor NPS
 
questions like, given a distribution of margin of error, is it more important to have an average as close to 0 as possible?

is it more important to minimize the outliers?

can you apportion relative importance?

i.e. is it more important to have roughly 80% of files within 5% but with the remaining 20% having 30%+ margins of error? or would it be preferable to have 95% of files within 7.5% and the remaining 5% within 10%? 15%?

given the option do we want an average closer to +(overrated) or -(underrated) 1%? why?

how do you examine and test for this? how do you go about eliciting results specific to your goals? how do you ensure that any methods employed don't produce undesirable effects on the results? are some undesirable effects worth a closer adherence to your goal?

how much do you account for human subjectivity when testing for this? think you're going to use neural networks and match it to a score base? wrong again you just exposed yourself to population bias which, going back to the previous point, exposes you to wild outliers (30%+) of players even if it fits well with most other players

you also have the least amount of data on the files you are most concerned with, which are the files that are the hardest and least played, because the files where there is the most player subjective agreement are the easy files that people have played to death over and over

how do you extrapolate existing player scorebases to new files?

do you apply neural networks to pattern configurations? how do you detect patterns? you already threw out the possibility of doing so, so that leaves you without that option. too bad

even if you didn't, how do you mathematically model pattern difficulty, how do you account for subjective player strengths given extremely specific patterns and extremely specific players? do you?

again, the same question but applied to specific patterns, is it more important to be generally accurate and leave open high error margins on outliers or sacrifice general accuracy in an attempt to account for the outliers as best as possible? how does the decision you make impact the overall correctness?

how do you deal with transitions? are transitions important? trick question, yes you fucking idiot

do you model stamina drain? how do you model stamina drain? physical? mental? ffr requires additional consideration for mental stamina drain because of the aaa difficulty goal. is that objectively stupid? yes, will it change? probably not

the answers to these questions will guide your specific implementation, none of which you have clearly bothered asking, which is the same predictable fallacy that everyone falls into

you're doing it ass backwards

stop trying to build the spaceship, figure out where you're going first

ps. it's possible to reverse engineer my entire calc from the last 4 posts so if you really can't get anything from them that's on you

pps. do you understand better now, my virulent disdain for all of you

ppps. in case im not done holding your hand enough

Quote:

Originally Posted by xXOpkillerXx (Post 4626887)
The first stats I finished coding are the NPS (split just like the current total nps by different timeframes like .3s, .5s, 1s, 2s, etc.) for individual receptors (left, down, up, right). So, do you think that those + the total NPS would give a significant entropy gain (or any equivalent depending on the model) in computing the difficulties of the files ?

no

you aren't going to reduce file difficulty to 2 prominent variables and even if you could i don't think you would be able to use that information to actually produce a single number and assuming you did you'd still be stuck with the inherent fallacy of using machine learning to produce values that you can't actually corroborate because of human subjectivity

xXOpkillerXx 07-7-2018 07:40 AM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by MinaciousGrace (Post 4626951)
you do realize how ridiculously nonsensical this logic is right? i mean you clearly don't which is the essential problem here

im not here to help you; i did give you the information you needed to help yourself and explicitly rebuked your assessment of how patterns are unimportant and how nps metrics can be used in totality and if you stopped to think about it you would realize why ( SUPREME HINT: IT HAS TO DO WITH THE FACT THAT PATTERN CONFIGURATION HAS HIGHER POTENTIAL IMPACT ON DIFFICULTY THAN NPS )

im just here because its amusing to watch you get buttmad over my specific aversion to emotionally coddling you while giving you everything you need to figure shit out

my being an asshole has no bearing on your capacity to think about or understand things, but it's nice to see that you'll actively stymie your ability to do so just to spite me

You can fantasize all you want thinking people get mad at you for supposedly knowing it all, but it doesn't change the fact that you're just an ass anyway. As for my understanding of things, only you could manage to think it would be affected by or correlated with how much of an ass you are. Guess what, that's wrong.

Now about the actual topic, I will get to most of your questions soon. If you expect me to know the exact results of my future tests, you'll be disappointed to learn that that's not how things work. The second paragraph in that quote is just air because you're basically saying: "nps is a bad metric for difficulty because patterns are a good metric". I'm not playing a game of guess-what-the-ass-is-trying-to-say; if you want to ask me any amount of questions on the subject, like you did in your latest post, I will gladly do my best to answer them and correct my assumptions if necessary. However, do not expect me to also assume/guess your unmentioned mathematical/logical definitions of concepts such as pattern, transition, standard file and difficulty. By arguing those, I expect you to have a rigorous definition for each of them. If that is the case, refer to my second reply to you: provide actual content (be it a link to something or an explanation). Otherwise, I will focus on your questions and rightly consider any criticism so far as devoid of credibility. If for you that means holding my hand, you can pat your own back for all I care. You can be helpful and nobody denies it, but nobody's begging you for anything here so you should probably give up on the condescending attitude.

xXOpkillerXx 07-7-2018 09:56 AM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by MinaciousGrace (Post 4626954)
questions like, given a distribution of margin of error, is it more important to have an average as close to 0 as possible?

is it more important to minimize the outliers?

can you apportion relative importance? i.e. is it more important to have roughly 80% of files within 5% but with the remaining 20% having 30%+ margins of error? or would it be preferable to have 95% of files within 7.5% and the remaining 5% within 10%? 15%?

given the option do we want an average closer to +(overrated) or -(underrated) 1%? why?

Prior to having done any modeling for the difficulty, my take is that it would initially be better to aim for a higher rate of very good guesses than to minimize the outliers' error. The reason is that I would then have information on what kind of files really don't fit my model. From those results I can make more accurate tweaks to the initial model, repeating the process until some threshold is attained. Only then would I maybe sacrifice overall accuracy if the payoff is good in terms of the number of files that are subjectively not far from expectation. Mind you, like I mentioned in earlier posts, I'm Not at the stage of implementation/tests; I cannot give you a detailed explanation of my plans because I have yet to see what primitives/attributes I can extract from the files (the reason for this thread).

Quote:

Originally Posted by MinaciousGrace (Post 4626954)
how do you examine and test for this? how do you go about eliciting results specific to your goals? how do you ensure that any methods employed don't produce undesirable effects on the results? are some undesirable effects worth a closer adherence to your goal?

how much do you account for human subjectivity when testing for this? think you're going to use neural networks and match it to a score base? wrong again you just exposed yourself to population bias which, going back to the previous point, exposes you to wild outliers (30%+) of players even if it fits well with most other players

Since this can only be an unsupervised problem if we want to keep some sort of numerical range as output (which I believe we obviously do), the results can only be trusted or not. FFR's difficulty spectrum still has flaws, but it's been worked on for a long time by expert players (OWA for example), so even though we don't want to use it as ground truth, it's still a good indication of how accurate the predictions are (even if it's not a set quantitative measurement). The prediction accuracy is definitely harder to judge when aiming for a precise fit to subjective expectation because it's unsupervised. It then seems wiser to get a close enough fit and formulate properly what explains the variations, so that the subjective opinions can be compared to what the model predicts, and if no common ground can be found, go back to tweaking the model and adjusting primitives.

Quote:

Originally Posted by MinaciousGrace (Post 4626954)
you also have the least amount of data on the files you are most concerned with, which are the files that are the hardest and least played, because the files where there is the most player subjective agreement are the easy files that people have played to death over and over

Although this is obviously a problem that many people mention, I still have ideas to try. Depending on what model turns out to be acceptable, if any, a study of the behavior of each primitive as difficulty ramps up can potentially be extrapolated to new data. I can't make any more assumptions before having fully defined my primitives first.

Quote:

Originally Posted by MinaciousGrace (Post 4626954)
how do you extrapolate existing player scorebases to new files?

I don't plan on using scores to estimate anything, but rather the existing difficulties for the ingame files.

Quote:

Originally Posted by MinaciousGrace (Post 4626954)
do you apply neural networks to pattern configurations? how do you detect patterns? you already threw out the possibility of doing so, so that leaves you without that option. too bad

I won't detect patterns in a hardcoded way. I will deal with densities and various nps change distributions to accommodate the very many ways a section can be difficult. For example, a high nps on a single receptor with fairly low nps on the other 3 receptors and minimal change can represent anything from runningmen to anchored jumpgluts or anchored polyrhythms. The representation of patterns is still there, but not as rigidly set in stone, since there are too many ways to mix patterns and very little possibility of staying on the objective side when explaining the resulting difficulty. There's no way I can imagine someone objectively discussing the difficulty of a runningman pattern with a minijack on every other anchored note. Patterns are friendly concepts for us to communicate about files with an easy mental visualisation; they are not a suitable difficulty metric.
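As a rough sketch of the kind of per-receptor density primitive described above (the sliding window size and the (time, receptor) note representation are my own illustrative assumptions, not the actual implementation):

```python
from collections import defaultdict

def per_receptor_nps(notes, window=1.0):
    """Peak notes-per-second for each receptor over a sliding window.

    notes: list of (time_in_seconds, receptor) pairs, receptor being one
    of "left", "down", "up", "right". The window size is an assumption;
    the thread mentions 0.3s, 0.5s, 1s, 2s variants.
    """
    by_receptor = defaultdict(list)
    for t, r in notes:
        by_receptor[r].append(t)
    peaks = {}
    for r, times in by_receptor.items():
        times.sort()
        best = 0
        start = 0
        for end in range(len(times)):
            # shrink the window from the left until it spans <= `window` seconds
            while times[end] - times[start] > window:
                start += 1
            best = max(best, end - start + 1)
        peaks[r] = best / window
    return peaks

# The [1,1,1,1] vs [1,2,3,4] intuition from the opening post: same total
# density, very different per-receptor peaks.
jack = [(0.0, "left"), (0.25, "left"), (0.5, "left"), (0.75, "left")]
roll = [(0.0, "left"), (0.25, "down"), (0.5, "up"), (0.75, "right")]
```

Here `per_receptor_nps(jack)` gives 4 nps on the left receptor, while `per_receptor_nps(roll)` gives only 1 nps on each receptor, even though the total nps is identical.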

Quote:

Originally Posted by MinaciousGrace (Post 4626954)
even if you didn't, how do you mathematically model pattern difficulty, how do you account for subjective player strengths given extremely specific patterns and extremely specific players? do you?

I don't model patterns. Strengths are player-specific and difficulty does not have anything to do with them, so no, I don't account for them. If a player is good at something, then so be it.

Quote:

Originally Posted by MinaciousGrace (Post 4626954)
again, the same question but applied to specific patterns, is it more important to be generally accurate and leave open high error margins on outliers or sacrifice general accuracy in an attempt to account for the outliers as best as possible? how does the decision you make impact the overall correctness?

I believe I have answered this in the above replies.

Quote:

Originally Posted by MinaciousGrace (Post 4626954)
how do you deal with transitions? are transitions important? trick question, yes you fucking idiot

You never defined transitions to begin with. However, I'd say I can deal with those using the nps change rate per receptor. For example, a roll into a jack will clearly show a drastic increase in one receptor's nps and a decrease on all the others. This applies to even the most bizarre patterns, since nps is a distribution over time and not a finite set of patterns.
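A minimal sketch of that idea, assuming notes as (time, receptor) pairs and a fixed comparison window (both my own choices): a roll followed by a jack shows up as a large positive nps change on one receptor and negative changes on the rest.

```python
def nps_change(notes, t0, t1, window=1.0):
    """Per-receptor NPS difference between two windows of equal length
    starting at t0 and t1. A sketch; the window size is arbitrary."""
    receptors = ("left", "down", "up", "right")

    def window_nps(start):
        counts = {r: 0 for r in receptors}
        for t, r in notes:
            if start <= t < start + window:
                counts[r] += 1
        return {r: counts[r] / window for r in receptors}

    before, after = window_nps(t0), window_nps(t1)
    return {r: after[r] - before[r] for r in receptors}

# One second of a 1234 roll, then one second of a left jack:
notes = [(i / 4, ("left", "down", "up", "right")[i % 4]) for i in range(4)]
notes += [(1.0 + i / 4, "left") for i in range(4)]
delta = nps_change(notes, 0.0, 1.0)
```

`delta` comes out as +3 nps on left and -1 nps on each other receptor: exactly the "drastic increase on one receptor, decrease on the others" signature, with no pattern detection involved.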

Quote:

Originally Posted by MinaciousGrace (Post 4626954)
do you model stamina drain? how do you model stamina drain? physical? mental? ffr requires additional consideration for mental stamina drain because of the aaa difficulty goal. is that objectively stupid? yes, will it change? probably not

This is one of the more interesting questions you've asked. Yes, FFR difficulty is judged based on the AAA, so there definitely has to be a primitive for song length or something similar. Average nps, mixed with the rest, can account for stamina drain, but that might need some tweaking too. I do believe the nps change rate is helpful here as well, because a constant nps sustained for a long time is more stamina-draining than shorter hard sections. In cases where it's subjectively hard to tell, other primitives like max nps will hopefully lead the model to an acceptable prediction.

Quote:

Originally Posted by MinaciousGrace (Post 4626954)
the answers to these questions will guide your specific implementation, none of which you have clearly bothered asking, which is the same predictable fallacy that everyone falls into

I've mentioned a few times that it's preferable to extract primitives first and then see what modeling can be done.

Quote:

Originally Posted by MinaciousGrace (Post 4626954)
you aren't going to reduce file difficulty to 2 prominent variables and even if you could i don't think you would be able to use that information to actually produce a single number and assuming you did you'd still be stuck with the inherent fallacy of using machine learning to produce values that you can't actually corroborate because of human subjectivity

By "2 prominent variables" I guess you meant any decently sized set of variables. As for the machine learning part, that's basically the whole foundation of unsupervised algorithms: the model gives you an output which is meant to be closely analysed to find information about your data and compared to your subjective expectations.



Sadly (not really, but w/e) you are banned, so I suppose you won't be able to reply to this soon. I would've gladly listened to your arguments as to why I'm wrong on certain points, because there's no way I can be right about all of that right off the bat. Hopefully you learn to have a respectful conversation/debate before you're unbanned, though.

EtienneSM 07-7-2018 11:20 AM

Re: Entropy Gain for per-receptor NPS
 
I read neural networks and FFR


why

xXOpkillerXx 07-7-2018 11:26 AM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by EtienneSM (Post 4626968)
I read neural networks and FFR


why

I was also curious why mina only mentioned those. You can do regression with them, but I really wonder if they're efficient at all in this context. Do you have any specific reason to totally discard them, though ?

dadcop2 07-7-2018 11:28 AM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by EtienneSM (Post 4626968)
I read neural networks and FFR


why

because i learned about them at a cursory level in my computer science 3 class and i HAVE to apply this concept here even if it doesn't !!!

AutotelicBrown 07-7-2018 12:08 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by xXOpkillerXx (Post 4626964)
I don't plan on using scores to estimate anything, but rather the existing difficulties for the ingame files.

This makes no sense if you are using those as ground truth in the first place.

Anyway, I don't think it's worth breaking down what you currently have if you haven't built a model in the first place. I guess it's fine to test around with some data and see what happens, but it'll make more sense to decide what data to extract after you decide what you are modeling.

On the neural networks topic, lack of useful data sucks but I think convolutional networks could work well to build difficulty curve graphs.

leonid 07-7-2018 12:19 PM

Re: Entropy Gain for per-receptor NPS
 
So I didn't read this convo but what do you think of showing % of players who played the file that passed/AA'd/AAA'd/etc it, SDVX style

xXOpkillerXx 07-7-2018 12:22 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by AutotelicBrown (Post 4627000)
This makes no sense if you are using those as ground truth in the first place.

Anyway, I don't think it's worth to break down what you currently have if you haven't built a model in the first place. I guess it's fine to test around with some data and see what happens but it'll make more sense to decide what data to extract after you decide what you are modeling in the first place.

On the neural networks topic, lack of useful data sucks but I think convolutional networks could work well to build difficulty curve graphs.

What do you mean by "if you are going to use those as ground truth" ? I said I'm going the unsupervised way; there is no ground truth in that, afaik. I plan on basing any estimation on difficulties, not scores. Sorry if I misunderstood your point.

The rest is all true. The goal of this thread was never so much to talk modeling as to discuss primitives.

xXOpkillerXx 07-7-2018 12:25 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by leonid (Post 4627011)
So I didn't read this convo but what do you think of showing % of players who played the file that passed/AA'd/AAA'd/etc it, SDVX style

I currently only have rights to provide stats on the songs/files; I have nothing on the users.

If you meant it as some kind of attribute for predicting difficulty, could you please explain your reasoning ? Otherwise, I'm sorry, I can't do that.

leonid 07-7-2018 12:29 PM

Re: Entropy Gain for per-receptor NPS
 
It gives a rough estimation of difficulty through general performances on the chart
Low % = Hard
High % = Easy
But you need a server to log all the user scores, users have to be online, and the chart needs a good enough number of players
Using a neural network is like assigning one person to judge all the difficulties (since it's supposed to map human brains and whatnot), but what if you disagree with that person

xXOpkillerXx 07-7-2018 12:40 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by leonid (Post 4627015)
It gives a rough estimation of difficulty through general performances on the chart
Low % = Hard
High % = Easy
But you need a server to log all the user scores, users have to be online, and the chart needs a good enough number of players
Using neural network is like assigning one person to judge all the difficulties (since it's supposed to map human brains and what not), but what if you disagree with that person

Yeah, just the fact that you need a massive amount of plays from various players across the level spectrum makes that unviable. Plus, the difficulty should be predicted before any plays are made (or else it's a bit pointless).

As for the neural net, I have no clue why people are all on it; I don't recall mentioning it in this thread. That being said, you wonder what happens if people don't agree with a neural net's output ? Well, if the vast majority agrees with the net, then those who disagree should try to see if they're biased because of their skillset and understand what led to that output. If only a minority agrees with the output, then the possibility of it being wrong is greater and the chart would need closer inspection to see why that is. That's just how things go when you have no predefined output class or labeled input.

RenegadeLucien 07-7-2018 01:01 PM

Re: Entropy Gain for per-receptor NPS
 
Just for the record, I've tried to produce a difficulty algorithm primarily based on "distance to last note on each arrow/hand".

I don't know if there's something inherently wrong with this approach or if I was just too inexperienced at programming to see it through to a satisfactory completion, but I was unable to get a result that was deemed usable by myself and the difficulty consultants with whom I discussed the results.

On the subject of neural nets, both myself and Trumpet63 have attempted to use neural nets on FFR's song difficulties using extended level stats. Trumpet got his neural net closer than mine (his had a mean difference of 2.4 points from the actual value whereas mine was 4-5 IIRC) but his used several features (such as note color) that could be cheesed by a clever stepfile artist to over/underrepresent the difficulty of their file (ex. if white notes = high diff, throw in a lot of white grace notes that function identically to jumps in practice.)

xXOpkillerXx 07-7-2018 01:07 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by RenegadeLucien (Post 4627036)
Just for the record, I've tried to produce a difficulty algorithm primarily based on "distance to last note on each arrow/hand".

I don't know if there's something inherently wrong with this approach or I was just too inexperienced at programming to see it through to a satisfactory completion but I was unable to get to a result that was deemed usable by myself and the difficulty consultants whom I discussed the results with.

On the subject of neural nets, both myself and Trumpet63 have attempted to use neural nets on FFR's song difficulties using extended level stats. Trumpet got his neural net closer than mine (his had a mean difference of 2.4 points from the actual value whereas mine was 4-5 IIRC) but his used several features (such as note color) that could be cheesed by a clever stepfile artist to over/underrepresent the difficulty of their file (ex. if white notes = high diff, throw in a lot of white grace notes that function identically to jumps in practice.)

Thanks for the information !

What metrics did you use in relation to that distance ? Was it min/max/avg/distribution/... ? Because, just like nps, it sounds like a solution that needs quite a few statistical values.

Yes note color is arbitrary.

RenegadeLucien 07-7-2018 01:20 PM

Re: Entropy Gain for per-receptor NPS
 
It was not pure NPS (NPS was included in the algorithm, but only as a small factor). It was more like "give each note a value based on how close it is to the next one on the next arrow/hand/overall, then sum everything, take the highest consecutive X notes, and add factors for stamina/consistency/NPS".

I did a bunch of playing around with the factors and scales, but I would always end up with either long streamy files being rated way too high or big spiky files being rated way too high (or both).
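A rough sketch of that recipe, restricted to the per-receptor part (the 1/gap weighting and the default gap for a receptor's first note are illustrative guesses of mine; the actual factors and scales were tuned differently):

```python
def note_values(notes, default_gap=2.0):
    """Value each note by closeness to the previous note on the same
    receptor: smaller gap -> higher value (here simply 1/gap).
    notes: (time_in_seconds, receptor) pairs."""
    last_seen = {}
    values = []
    for t, r in sorted(notes):
        gap = t - last_seen[r] if r in last_seen else default_gap
        values.append(1.0 / max(gap, 1e-3))  # cap so 0-framers stay finite
        last_seen[r] = t
    return values

def hardest_section(values, x):
    """Highest sum over any x consecutive note values."""
    if len(values) <= x:
        return sum(values)
    best = cur = sum(values[:x])
    for i in range(x, len(values)):
        cur += values[i] - values[i - x]  # slide the window one note forward
        best = max(best, cur)
    return best
```

With a left jack at 0s, 0.5s, 1.0s, `note_values` gives [0.5, 2.0, 2.0] and `hardest_section(values, 2)` picks out the 4.0 from the two fast notes, which is roughly the "highest consecutive X notes" step described above.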

xXOpkillerXx 07-7-2018 01:29 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by RenegadeLucien (Post 4627051)
It was not pure NPS (NPS was included in the algorithm but only a small factor). It was more like "give each note a value based on how close it is to the next one on the next arrow/hand/overall, then sum everything and take the highest consecutive X notes, add factors for stamina/consistency/NPS"

I did a bunch of playing around with the factors and scales but I would always end up with either long streamy files being rated way too high or big spiky files being rated way too high (or both.)

What do you think of the rate at which that distance value changes ? Maybe also confined to a certain timeframe and averaged over it ? If the rate is high, then you have a spiky/bursty section; if low, then the difficulty is pretty constant. Then, with the actual min/max distance, you can get a better idea of how drastic the spikes are or how fast the constant section is.

RenegadeLucien 07-7-2018 01:45 PM

Re: Entropy Gain for per-receptor NPS
 
I'd need to experiment with it to get a definitive answer. I can see the value in having something like that, but it would be difficult to separate actual spikes/bursts from just natural variance in patterns (take a staircase for example: there are gaps of 5 notes between every left arrow, but only 1 between (some) down or up arrows, so the down/up arrows look much harder than the left/right arrows), and this could produce odd results for a difficulty change rate value. I would probably have to look at average difficulty over a short period of notes and use that to determine the difficulty change rate.

AutotelicBrown 07-7-2018 04:26 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by xXOpkillerXx (Post 4627013)
What do you mean by "if you are going to use those as ground truth" ? I said I'm going the unsupervised way, there is no ground truth in that afaik. I plan on doing any estimation based on difficulties, not scores. Sorry if I misunderstood your point.

Sorry, I misread your original statement I quoted before. You can ignore that part.

xXOpkillerXx 07-7-2018 04:57 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by RenegadeLucien (Post 4627069)
I'd need to experiment with it to get a definitive answer. I can see the value in having something like that, but it would be difficult to separate actual spikes/bursts from just natural variance in patterns (take a staircase for example: there are gaps of 5 notes between every left arrow, but only 1 between (some) down or up arrows, so the down/up arrows look much harder than the left/right arrows), and this could produce odd results for a difficulty change rate value. I would probably have to look at average difficulty over a short period of notes and use that to determine the difficulty change rate.

I'm not sure; why would you want to separate those ? It would help me understand if you defined spikes as opposed to natural variance. Let's say we use the distance metric and focus on the up arrow of a long staircase: that receptor is now essentially receiving minijacks of 2 notes (separated by 1 note on right) every 4 notes. The rate of change can then be computed like this:

up, ,up, , , ,up, ,up
__, ,0 , , , ,-2, ,2

vs

ri, , , ,ri, , , ,ri, , , ,ri
_, , , ,0, , , ,0, , , ,0

(changes between 0 and 1 have been normalized to the opposite of their inverse: 0.5 => 2 => -2)

It takes a minimum of 3 notes to have a variation in distance. While it's true that the average is the same (0), you could maybe take the range between the minimum negative value (biggest deceleration) and the maximum positive value (biggest acceleration).

Deceleration doesn't affect difficulty; don't forget that this is a per-receptor metric. A file starts at 0 difficulty with 0 notes. If you put a jack at speed x and, after a few notes, its speed changes to x/2, the only problem is going from 0 speed to x speed, not from x to x/2. Gradual acceleration/deceleration isn't considered in this, but you can get a primitive for it using this same concept. So, for the staircase example, if we discard the negative values, we get a max range of 2 on up and down, and a max range of 0 on left and right. And you don't aggregate those in any way, because the min/max on each receptor is important.
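A sketch of that per-receptor rate-of-change primitive, with decelerations discarded (the (time, receptor) representation and the 1/gap rate are my own choices; the point is only that negative changes get thrown away):

```python
from collections import defaultdict

def max_acceleration(notes):
    """Largest positive change in per-receptor note rate, per receptor.

    Rate = 1/gap between consecutive notes on the same receptor;
    negative rate changes (decelerations) are discarded."""
    times = defaultdict(list)
    for t, r in sorted(notes):
        times[r].append(t)
    result = {}
    for r, ts in times.items():
        rates = [1.0 / (b - a) for a, b in zip(ts, ts[1:])]
        # keep only accelerations (positive changes in rate)
        accels = [b - a for a, b in zip(rates, rates[1:]) if b > a]
        result[r] = max(accels, default=0.0)
    return result
```

For a receptor that speeds up (gaps of 1s then 0.5s) the metric reports the acceleration; for a receptor hit at a constant rate it reports 0, no matter how fast it is, which matches the "a constant section isn't a spike" reading.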

Does that cover the type of example you had in mind, Renegade ?

RenegadeLucien 07-7-2018 05:23 PM

Re: Entropy Gain for per-receptor NPS
 
So, on spikes vs natural variance: what I mean by "spike", at least in the context of saying that my old algorithms would rate spiky files way too high, was files such as ABCDEath or TTE which have one disproportionately note-heavy section that overshadows everything else in the file. When I say "natural variance", I mean that some arrows in a long pattern like a stream, jumpstream, or staircase will be harder to hit than others.

What I'm trying to avoid is seeing a staircase, getting a max range of 2 on up or down as you described, and falsely claiming that the staircase is a spike when in reality it's just a staircase. Whatever metric is used to determine the rate would have to be able to tell the difference.

xXOpkillerXx 07-7-2018 06:55 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by RenegadeLucien (Post 4627104)
So, on spikes vs natural variance: what I mean by "spike", at least in the context of saying that my old algorithms would rate spiky files way too high, was files such as ABCDEath or TTE which have one disproportionately note-heavy section that overshadows everything else in the file. When I say "natural variance", I mean that some arrows in a long pattern like a stream, jumpstream, or staircase will be harder to hit than others.

What I'm trying to avoid is to see a staircase, get a max range of 2 on up or down as you described, and falsely claim that the staircase is a spike when in reality, it's just a staircase. Whatever metric that is used to determine the rate would have to be able to tell the difference.

There are a few different points in this so I'll try to dissect it as clearly as possible.

You have to have some mathematical definition of your concepts if they're not used for visualisation only. For example, a spike would be a sudden high density x of notes, at least per my understanding of your description. More formally, you could say it's any section with high acceleration (let's use a trivial number like 4). Also, btw, my metric isn't totally correct for another reason; I'll post a fix for it.

So you then have a trivial definition of a spike. With that, you want to avoid cases where the spike is short (i.e. in a staircase, the two ups or two downs) and constant (the staircase goes on for some time, like 2 measures). The reason it's trivial is first of all that there's an arbitrary threshold to set, and also that the length of said spike is not well bounded.

You mention TTE. Take TTE's fastest spot (a rolly burst like 123412341234) and remove everything before it. The acceleration from nothing to that is equal for each receptor, so min = max = x. Now take a staircase 123432123432 with the distance between two up arrows equal in this and in the roll (from a per-receptor perspective, that is most definitely fair). From nothing to it, 2*min = max = x. It would seem that both are identical; however, for the comparison to hold, the total nps of the spike will be lower on the staircase than on the roll (the number of notes between the fastest consecutive per-receptor notes being 1 for the staircase and 3 for the roll). Therefore, a distinction should be made naturally, but the spikiness (again, per-receptor!) will be the same according to the trivial definition.

EDIT:
Just to be extra clear, I'll point out that what you refer to as a spike, as we all know it, is easily defined when using all notes (not per-receptor): there's a quick increase and then decrease in the nps of the section, and that's it. That metric can be useful, but it's not what I was explaining/arguing in the previous few posts.

RenegadeLucien 07-7-2018 10:14 PM

Re: Entropy Gain for per-receptor NPS
 
Yeah I think we're talking about totally different concepts here. Per-receptor spikiness isn't something I ever really considered in my algorithm, at least not beyond "this note is really close to the last note for this receptor, therefore it should have a high value".

I can't think of any files off the top of my head where per-receptor spikiness plays a major factor in the difficulty of the file, so I can't judge how well the simple "this note is close" factor covers it. I do think such a metric would be valuable to have.

xXOpkillerXx 07-8-2018 11:16 AM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by RenegadeLucien (Post 4627131)
Yeah I think we're talking about totally different concepts here. Per-receptor spikiness isn't something I ever really considered in my algorithm, at least not beyond "this note is really close to the last note for this receptor, therefore it should have a high value".

I can't think of any files off the top of my head where per-receptor spikiness plays a major factor in the difficulty of the file, so I can't judge how well the simple "this note is close" factor covers it. I do think such a metric would be valuable to have.

Think Crowdpleaser, Death Piano's ending, RAN's trilly burst section, or even party4u v1's 0-framers for more intense examples. Basically any jacks (per-receptor) faster than their surrounding notes. If you take total nps only, Death Piano's ending roll and trill are the same difficulty, but obviously that's not the case. Per-receptor nps will clearly make the difference and rate the trill much higher than the roll.
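The trill-vs-roll point can be checked numerically; the timings below are made up for illustration (not Death Piano's actual chart):

```python
# 16 notes over one second, either as a trill on two receptors or as a
# roll across all four. Total NPS is identical (16); the per-receptor
# NPS is what separates them.
def max_receptor_nps(notes, span=1.0):
    """Highest per-receptor note count over the span, as NPS."""
    counts = {}
    for _, r in notes:
        counts[r] = counts.get(r, 0) + 1
    return max(counts.values()) / span

trill = [(i / 16, ("up", "down")[i % 2]) for i in range(16)]
roll = [(i / 16, ("left", "down", "up", "right")[i % 4]) for i in range(16)]

assert max_receptor_nps(trill) == 8.0  # 8 NPS on each trilled receptor
assert max_receptor_nps(roll) == 4.0   # only 4 NPS per receptor on the roll
```

So a total-NPS metric scores both sections 16, while the per-receptor view rates the trill twice as dense on its hottest receptor.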

xXOpkillerXx 07-8-2018 11:36 AM

Re: Entropy Gain for per-receptor NPS
 
Here's something I am wondering about per-hand stuff.

Is it safe to assume that any section with a high nps (x) on {3} and a lower nps on {4} is Harder than having x nps on both {3} and {4} ? Any counterexample is welcome.

More visually, I'm thinking that [34]4[34]4 is always harder than [34][34][34][34]. But only per-hand, so the same wouldn't apply with combinations of receptors like {2} and {3}, or {2} and {4}, etc. And by always I mean no matter what is before it, after it, and what's going on on the other receptors.

EDIT:
I will even go as far as claiming that if x is the nps on {3} and y is the nps on {4}, the peak of that per-hand difficulty is reached when x = 2y or 2x = y. When you lower the smaller nps, you get things like [34]44[34]44[34]44, and when you raise it, you get [34][34]4[34][34]4, both of which I would argue are objectively easier than [34]4[34]4[34]4.
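Just to make that claim concrete, here's a toy curve that peaks exactly at the 2:1 (or 1:2) ratio and falls off towards 1:1 and towards more extreme ratios; the Gaussian-in-log-space shape is purely my own illustration, not a fitted model:

```python
import math

def hand_ratio_factor(nps_a, nps_b):
    """Toy per-hand factor: maximal when one receptor's nps is exactly
    twice the other's. The shape is an arbitrary illustration of the
    2:1-peak claim above."""
    r = abs(math.log2(nps_a / nps_b))  # 0 at 1:1, 1 at 2:1 or 1:2
    return math.exp(-((r - 1.0) ** 2))
```

By construction `hand_ratio_factor(8, 4)` and `hand_ratio_factor(4, 8)` both hit the maximum of 1.0, while 1:1 and 4:1 sections score lower, matching the symmetry argued for here.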

RenegadeLucien 07-8-2018 12:42 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by xXOpkillerXx (Post 4627159)
If you take total nps only, Death Piano's ending roll and trill are the same difficulty but obviously it's not the case. Per-receptor nps will clearly make the difference and rate the trill much higher than the roll.

I wasn't talking about per-receptor NPS. Of course that is important, and my algorithm already covers it, since it measures the distance between notes on the same receptor (in fact, in its current state, it freaks out at the DP megatrill and rates it higher than Undici). I was talking about the per-receptor spikiness that you had been describing in your last few posts. I don't know if there are any files where per-receptor spikiness adds any difficulty that isn't accounted for by per-receptor NPS or the distance between two notes on the same receptor.

Quote:

Originally Posted by xXOpkillerXx
Is it safe to assume that any section with a high nps (x) on {3} and a lower nps on {4} is Harder than having x nps on both {3} and {4} ? Any counterexample is welcome.

Mostly, as long as the nps on 4 is high enough. I don't really think, say, a long 3 jack with one random {34} jump in the middle (ex. AIM Anthem) is harder than a long {34} jack.

xXOpkillerXx 07-8-2018 01:09 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by RenegadeLucien (Post 4627165)
I wasn't talking about per-receptor NPS. Of course that is important, and my algorithm already covers it since it measures the distance between notes on the same receptor (in fact, in its current state, it freaks out at the DP megatrill and rates it higher than Undici). I was talking about the per-receptor spikiness that you had been describing in your last few posts. I don't know if there are any files where per-receptor spikiness add any difficulty that isn't accounted for by per-receptor NPS or distance between two notes on the same receptor.



Mostly, as long as the nps on 4 is high enough. I don't really think, say, a long 3 jack with one random {34} jump (ex. AIM Anthem) in the middle is harder than a long {34} jack.

Ok, I get what you mean with that first paragraph. Indeed, there might not be a need for anything more than nps for per-receptor metrics. I will try to find an example where it would matter, but off the top of my head I don't see one either.

How high does your algo rate DP compared to Undici ? Those are really hard files and neither has been AAA'd yet. Right now, they're only 2 points apart from each other, and I would not consider the opposite ordering to be an error, because it's only a single file (it's better to look at the results as a whole first and then understand the difference between particular files; not having your complete results, I can only assume things).

As for your AIM example, it would make no sense to put a long jack and a long jack with 1 jump in it at the same exact difficulty on a real-number scale. The one with the jump has to be harder, even if by a very small amount.

leonid 07-8-2018 01:11 PM

Re: Entropy Gain for per-receptor NPS
 
Does it also address the fact that difficulty differs based on what your goal is? Charts can be trivial to AA but impossible to AAA, or there's some stupid minefield that makes it really hard to pass but once you survive it's a guaranteed AA, etc

xXOpkillerXx 07-8-2018 01:25 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by leonid (Post 4627168)
Does it also address the fact that difficulty differs based on what your goal is? Charts can be trivial to AA but impossible to AAA, or there's some stupid minefield that makes it really hard to pass but once you survive it's a guaranteed AA, etc

Right now I'm focusing solely on FFR. Since we use AAA equivalency, most of the difficulty will come from max values and length/stamina factors.

EDIT:
@leonid: stepmania is obviously different. What you describe as difficulty to AA, AAA, or pass are all very distinct values that may have a similar computing process but would each have their own specific primitives. You can't possibly have a single metric for overall difficulty when your definition of difficulty is an undefined combination of 3 distinct aspects; otherwise you end up with obviously biased results that are very hard to interpret. A fair comparison can be made with Etterna's calculator: if overall difficulty is some aggregate (like an avg or weighted avg) of the per-pattern difficulties (jack, stream, js, etc.), then it's no surprise that they have so many files to ban from leaderboards.

RenegadeLucien 07-8-2018 01:52 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by xXOpkillerXx (Post 4627167)
How high does your algo rate DP compared to Undici ? Those are really hard files and none have been AAA'd yet Right now, they're only 2 points apart from each other, and I would Not consider the opposite to be an error because it's only a single file (it's better to look at results as a whole first and then understand the difference between particular files, so not having your complete results, I can only assume things).

Way higher. I haven't scaled it to match the 1-120 (or 1-99) FFR scale, but Undici is given a value of 49.5 (for some comparisons, RATO is 46.4, Magical 8bit Tour is 41.2, La Camp is 39.4). DP is given 62.7.

Quote:

As for you AIM example, it would make no sense to put a long jack and a long jack with 1 jump in it at the same exact difficulty on a real numbers scale. The one with the jump Has to be harder, even if it's by a very small amount.

Yes, the 3 jack with the one 4 jump is harder than a 3 jack. But I question it being harder than a {34} jack. I don't know about you, but I can do faster 3 jacks than I can do {34} jacks.

xXOpkillerXx 07-8-2018 02:00 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by RenegadeLucien (Post 4627170)
Way higher. I haven't scaled it to match the 1-120 (or 1-99) FFR scale, but Undici is given a value of 49.5 (for some comparisons, RATO is 46.4, Magical 8bit Tour is 41.2, La Camp is 39.4). DP is given 62.7.



Yes, the 3 jack with the one 4 jump is harder than a 3 jack. But I question it being harder than a {34} jack. I don't know about you, but I can do faster 3 jacks than I can do {34} jacks.

Ok yes that is an odd result haha.

As for the jacks, the difficulty of the per-hand primitive would be symmetric on both sides of the 2:1 ratio, but the nps primitive would naturally make a [34] jumpjack harder than a 3 jack with a single [34] in it.

blanky! 07-8-2018 02:47 PM

Re: Entropy Gain for per-receptor NPS
 
I don't like difficulty being "one value".
It should vary in magnitude throughout the song, and there should be different kinds of difficulty.

Like, how do scores change if you're only slightly less good at hitting something than another player. Difficulty might not be that well, but if it means the difference between AAA'ing and good-rushing a difficult jumpstream, then the scores are highly sensitive to skill. Maybe that's a good measure? Change in score vs. change in skill in a certain direction? Dunno. Thoughts aren't fleshed out at all. Just food for thought.

RenegadeLucien 07-8-2018 04:08 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by blanky! (Post 4627175)
Like, how do scores change if you're only slightly less good at hitting something than another player. Difficulty might not be that well, but if it means the difference between AAA'ing and good-rushing a difficult jumpstream, then the scores are highly sensitive to skill. Maybe that's a good measure? Change in score vs. change in skill in a certain direction? Dunno. Thoughts aren't fleshed out at all. Just food for thought.

Thing is, for FFR specifically, this is a moot point, because FFR difficulties are based strictly on the difficulty of AAAing a song. It doesn't matter if a song is more prone to blackflags or 15g scores for people who are close to AAA'ing it; what matters is that the rating given to the song is higher than the rating these players can AAA. Though I'd question whether a player who consistently good-rushes a section of a song and ends up with 15g is close to AAA'ing it at all.

xXOpkillerXx 07-9-2018 02:53 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by blanky! (Post 4627175)
I don't like difficulty being "one value".
It should vary in magnitude throughout the song, and be different kinds of difficulty.

Like, how do scores change if you're only slightly less good at hitting something than another player? The difficulty rating might not differ by much, but if it means the difference between AAA'ing and good-rushing a difficult jumpstream, then the scores are highly sensitive to skill. Maybe that's a good measure? Change in score vs. change in skill in a certain direction? Dunno. Thoughts aren't fleshed out at all. Just food for thought.

Could you elaborate on that bold part, please? I'm not sure I understand what you're asking for.

Also, here's another idea, but I haven't found a way to make it fully non-arbitrary yet:
I can take the nps at every frame of a file (kinda like the nps generator) and figure out how long the file stays around its max nps. The only problem is that any threshold I pick is arbitrary, like "time during which nps is at most 2 nps away from max nps" or "time during which nps is higher than 95% of the max nps". Thoughts?
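A rough sketch of this max-NPS dwell-time idea, with the threshold expressed as a fraction of the peak so that the arbitrariness is isolated in a single parameter (all names, parameters, and the toy ramp file below are made up):

```python
def windowed_nps(note_times, window=1.0, step=0.1):
    """Sliding-window NPS over a file, sampled every `step` seconds."""
    end = max(note_times)
    samples = []
    t = 0.0
    while t <= end:
        n = sum(1 for nt in note_times if t <= nt < t + window)
        samples.append(n / window)
        t += step
    return samples

def time_near_max(note_times, frac=0.95):
    """Fraction of sampled windows whose NPS is within `frac` of the file's peak."""
    nps = windowed_nps(note_times)
    peak = max(nps)
    return sum(1 for v in nps if v >= frac * peak) / len(nps)

# A toy file whose note density ramps up toward the end: the choice of
# `frac` directly changes the answer, which is exactly the threshold problem.
ramp = [i * (1.0 - i / 400) for i in range(200)]
print(time_near_max(ramp, frac=0.95))
print(time_near_max(ramp, frac=0.80))
```

Loosening the fraction can only grow the dwell time, so one way around picking a single threshold would be to sweep `frac` and use the whole dwell-time curve (or its area) as the feature instead of one point on it.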

MinaciousGrace 08-22-2018 09:36 PM

Re: Entropy Gain for per-receptor NPS
 
so did you get far enough to realize you have no idea what you're doing yet or did you just spout a bunch of bullshit and then do nothing

i swear you people that think everything can be solved with machine learning are worse than the people who think blockchain makes everything better

xXOpkillerXx 08-22-2018 09:56 PM

Re: Entropy Gain for per-receptor NPS
 
I didn't want to be pushy towards Prawn because he seemed very busy already. I talked with him, and for tests on the whole song db I need to ask him every time. I have converted a few SM packs for basic tests, but since those aren't rated like FFR's files, I couldn't just accept/discard results (there were also some sketchy numbers with stuff like Beyond Bludgeonned, Big Black and Little piece of Heaven, stuff that extrapolation should still rate correctly). I still have formulas to try, but right now I'm on vacation and not focusing on FFR at all. I will most likely get back to it when my semester starts (early September).

You seem to be in quite the hurry to see results for an arrogant ass. I think even if I manage to get good results I'll hide them from you.

MinaciousGrace 08-23-2018 01:47 PM

Re: Entropy Gain for per-receptor NPS
 
so basically you're in total denial still

PrawnSkunk 08-23-2018 02:39 PM

Re: Entropy Gain for per-receptor NPS
 
Let's try page 4 again

xXOpkillerXx 08-23-2018 02:43 PM

Re: Entropy Gain for per-receptor NPS
 
Quote:

Originally Posted by MinaciousGrace (Post 4645055)
duh, i think im pretty clear in saying that i think you're full of shit, have no idea what you're talking about, are completely in denial of it, talked a big game and will never deliver, and will do so while you sit on your high horse like you've accomplished something, which you haven't, and never will

the idea that there's anything to "contribute" to the "topic" of this thread is also laughable

So basically I'm right when I say you're only here to bash on me, which you totally have the right to do; it's just that this shouldn't be the place for it. If you see nothing to contribute to the thread, why are you posting in it in the first place? Do you need some confirmation from others that your lack of belief in me is justified? With your ego, I don't think you do. So yes, what's your deal, dude?

I take no credit for anything I've said or done so far, and I gain pretty much... absolutely nothing from it, so I don't get what made you think I'm on my high horse. Some people in the thread (Renegade, for example) made me re-think some aspects of the problem, and heck, parts of your bigger post did too, so I don't think I'm in any denial here. Why would you even be slightly affected by whether I deliver or not (and hey, it's totally possible that I don't, who knows)? You'll be disappointed that I didn't succeed? Come on, we both know you'd just stroke your fat ego and say you told everyone you were right.

MinaciousGrace 08-23-2018 10:01 PM

Re: Entropy Gain for per-receptor NPS
 
i don't need you to fail to know i'm right

it'll just be its own prize considering you both solicited my advice while bashing my work while ignoring the fact that i have ample public contributions to the subject which would take less than 5 minutes to look up which you refused to do because of what, pride? butthurtedness?

not to mention the string of hilarious responses to all of my points which don't logically track in any dimension

i mean your premise is wrong and your response to every point was wrong and your entire frame of mind in approaching the subject is wrong so yes there is no helping you, it is entirely about watching you fail

Moria 08-23-2018 10:09 PM

Re: Entropy Gain for per-receptor NPS
 
ahuh

MinaciousGrace 12-18-2018 01:43 AM

Re: Entropy Gain for per-receptor NPS
 
merry christmas

flashflash account 12-18-2018 03:16 AM

Re: Entropy Gain for per-receptor NPS
 
You remind me of my sister

PixlSM 12-18-2018 04:46 PM

Re: Entropy Gain for per-receptor NPS
 
Merry christmas

DNAlei 12-20-2018 11:34 PM

Re: Entropy Gain for per-receptor NPS
 
Merry Christmas :)


All times are GMT -5. The time now is 07:34 PM.

Powered by vBulletin® Version 3.8.1
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright FlashFlashRevolution