Old 07-7-2018, 03:16 AM   #21
xXOpkillerXx
Forever OP
Simfile JudgeFFR Simfile AuthorD8 Godly KeysmasherFFR Veteran
 
xXOpkillerXx's Avatar
 
Join Date: Dec 2008
Location: Canada,Quebec
Age: 28
Posts: 4,171
Default Re: Entropy Gain for per-receptor NPS

Quote:
Originally Posted by MinaciousGrace View Post
man the only thing more cliche than that response would be if i had already written extensively on all of the relevant areas of discussion

then carefully organized said writing into a document that was made public

then spent thousands of hours doing practical implementation of testing of said thoughts

gosh that would really be the b side of a bollywood movie tier script
OR, you could post a link to said documentation, stop being an ass for absolutely no reason like you often are, and everything would've been cool~

You have yet to implement something that doesn't require so many bans on files, and I've lost count of how many times I've heard Etterna players say "wow, this is nowhere near the rating I thought it was worth". Now this thread is about model attributes, and if you don't feel like having a normal discussion about the various things that have been mentioned so far, get lost man.

I will be fine with the link only. If you want to explain anything you feel would need closer attention, please go ahead.
xXOpkillerXx is offline   Reply With Quote
Old 07-7-2018, 03:55 AM   #22
MinaciousGrace
FFR Player
D7 Elite Keysmasher
 
MinaciousGrace's Avatar
 
Join Date: Dec 2007
Location: nima
Posts: 4,278
Default Re: Entropy Gain for per-receptor NPS

Quote:
Originally Posted by xXOpkillerXx View Post
You have yet to implement something that doesn't require so many bans on files, and I've lost count of how many times I've heard Etterna players say "wow, this is nowhere near the rating I thought it was worth". Now this thread is about model attributes, and if you don't feel like having a normal discussion about the various things that have been mentioned so far, get lost man.
you do realize how ridiculously nonsensical this logic is right? i mean you clearly don't which is the essential problem here

im not here to help you; i did give you the information you needed to help yourself and explicitly rebuked your assessment of how patterns are unimportant and how nps metrics can be used in totality and if you stopped to think about it you would realize why ( SUPREME HINT: IT HAS TO DO WITH THE FACT THAT PATTERN CONFIGURATION HAS HIGHER POTENTIAL IMPACT ON DIFFICULTY THAN NPS )

im just here because its amusing to watch you get buttmad over my specific aversion to emotionally coddling you while giving you everything you need to figure shit out

my being an asshole has no bearing on your capacity to think about or understand things, but it's nice to see that you'll actively stymie your ability to do so just to spite me

Last edited by MinaciousGrace; 07-7-2018 at 04:02 AM..
MinaciousGrace is offline   Reply With Quote
Old 07-7-2018, 04:04 AM   #23
MinaciousGrace
FFR Player
D7 Elite Keysmasher
 
MinaciousGrace's Avatar
 
Join Date: Dec 2007
Location: nima
Posts: 4,278
Default Re: Entropy Gain for per-receptor NPS

here's another free supreme hint:

define difficulty

e: supreme hint #3: if you can't articulate and understand a robust statistical definition of difficulty then you have no business going anywhere near machine learning or neural networks, although, not unironically, if you could you wouldn't be doing so in the first place

Last edited by MinaciousGrace; 07-7-2018 at 04:17 AM..
MinaciousGrace is offline   Reply With Quote
Old 07-7-2018, 04:28 AM   #24
MinaciousGrace
FFR Player
D7 Elite Keysmasher
 
MinaciousGrace's Avatar
 
Join Date: Dec 2007
Location: nima
Posts: 4,278
Default Re: Entropy Gain for per-receptor NPS

supreme hint #4: ffr's difficulty is based on aaa rating, which gives greater rating influence to specific/unique patterns, difficulty spikes, and generalized factors such as length, inevitably increasing overall variance, particularly with non-standard files, and moreover increasing subjective variance when evaluating the accuracy of an estimated difficulty

supreme hint #5: supreme hint #4 should help you with #2 and #3

supreme hint #6: it's not that you're approaching the problem incorrectly because you're thinking of it incorrectly, it's that you haven't thought about it at all, you're trying to find answers to questions you didn't ask because you assume the answers will be self evident

they're not
MinaciousGrace is offline   Reply With Quote
Old 07-7-2018, 04:44 AM   #25
MinaciousGrace
FFR Player
D7 Elite Keysmasher
 
MinaciousGrace's Avatar
 
Join Date: Dec 2007
Location: nima
Posts: 4,278
Default Re: Entropy Gain for per-receptor NPS

questions like, given a distribution of margin of error, is it more important to have an average as close to 0 as possible?

is it more important to minimize the outliers?

can you apportion relative importance?

i.e. is it more important to have roughly 80% of files within 5% but with the remaining 20% having 30%+ margins of error? or would it be preferable to have 95% of files within 7.5% and the remaining 5% within 10%? 15%?

given the option do we want an average closer to +(overrated) or -(underrated) 1%? why?
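
To put rough numbers on that tradeoff, here is a tiny illustrative calculation; the per-band averages (2.5%, 35%, 3.75%, 8.75%) are assumed midpoints, not real data:

Code:
# Scenario A: 80% of files within 5% error, the other 20% at 30%+
# Scenario B: 95% within 7.5%, the remaining 5% within 10%
# Assuming errors sit near the middle of each band:
mean_err_a = 0.80 * 2.5 + 0.20 * 35.0   # ~9.0% mean absolute error
mean_err_b = 0.95 * 3.75 + 0.05 * 8.75  # ~4.0% mean absolute error
# B looks far better on the average, but A is tighter on the files it does fit;
# which distribution of error is "better" is exactly the judgment call being asked about.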

how do you examine and test for this? how do you go about eliciting results specific to your goals? how do you ensure that any methods employed don't produce undesirable effects on the results? are some undesirable effects worth a closer adherence to your goal?

how much do you account for human subjectivity when testing for this? think you're going to use neural networks and match it to a score base? wrong again you just exposed yourself to population bias which, going back to the previous point, exposes you to wild outliers (30%+) of players even if it fits well with most other players

you also have the least amount of data on the files you are most concerned with, which are the files that are the hardest and least played, because the files where there is the most player subjective agreement are the easy files that people have played to death over and over

how do you extrapolate existing player scorebases to new files?

do you apply neural networks to pattern configurations? how do you detect patterns? you already threw out the possibility of doing so, so that leaves you without that option. too bad

even if you didn't, how do you mathematically model pattern difficulty, how do you account for subjective player strengths given extremely specific patterns and extremely specific players? do you?

again, the same question but applied to specific patterns, is it more important to be generally accurate and leave open high error margins on outliers or sacrifice general accuracy in an attempt to account for the outliers as best as possible? how does the decision you make impact the overall correctness?

how do you deal with transitions? are transitions important? trick question, yes you fucking idiot

do you model stamina drain? how do you model stamina drain? physical? mental? ffr requires additional consideration for mental stamina drain because of the aaa difficulty goal. is that objectively stupid? yes, will it change? probably not

the answers to these questions will guide your specific implementation, none of which you have clearly bothered asking, which is the same predictable fallacy that everyone falls into

you're doing it ass backwards

stop trying to build the spaceship, figure out where you're going first

ps. it's possible to reverse engineer my entire calc from the last 4 posts so if you really can't get anything from them that's on you

pps. do you understand better now, my virulent disdain for all of you

ppps. in case im not done holding your hand enough

Quote:
Originally Posted by xXOpkillerXx View Post
The first stats I finished coding are the NPS (split just like the current total nps by different timeframes like .3s, .5s, 1s, 2s, etc.) for individual receptors (left, down, up, right). So, do you think that those + the total NPS would give a significant entropy gain (or any equivalent depending on the model) in computing the difficulties of the files ?
no

you aren't going to reduce file difficulty to 2 prominent variables and even if you could i don't think you would be able to use that information to actually produce a single number and assuming you did you'd still be stuck with the inherent fallacy of using machine learning to produce values that you can't actually corroborate because of human subjectivity

Last edited by MinaciousGrace; 07-7-2018 at 05:13 AM..
MinaciousGrace is offline   Reply With Quote
Old 07-7-2018, 07:40 AM   #26
xXOpkillerXx
Forever OP
Simfile JudgeFFR Simfile AuthorD8 Godly KeysmasherFFR Veteran
 
xXOpkillerXx's Avatar
 
Join Date: Dec 2008
Location: Canada,Quebec
Age: 28
Posts: 4,171
Default Re: Entropy Gain for per-receptor NPS

Quote:
Originally Posted by MinaciousGrace View Post
you do realize how ridiculously nonsensical this logic is right? i mean you clearly don't which is the essential problem here

im not here to help you; i did give you the information you needed to help yourself and explicitly rebuked your assessment of how patterns are unimportant and how nps metrics can be used in totality and if you stopped to think about it you would realize why ( SUPREME HINT: IT HAS TO DO WITH THE FACT THAT PATTERN CONFIGURATION HAS HIGHER POTENTIAL IMPACT ON DIFFICULTY THAN NPS )

im just here because its amusing to watch you get buttmad over my specific aversion to emotionally coddling you while giving you everything you need to figure shit out

my being an asshole has no bearing on your capacity to think about or understand things, but it's nice to see that you'll actively stymie your ability to do so just to spite me
You can fantasize all you want thinking people get mad at you for supposedly knowing it all, but it doesn't change the fact that you're just an ass anyway. As for my understanding of things, only you could manage to think it would be affected by or correlated with how much of an ass you are. Guess what, that's wrong.

Now, about the actual topic: I will get to most of your questions soon. If you expect me to know the exact results of my future tests, you'll be disappointed to learn that that's not how things work. The second paragraph in that quote is just air, because you're basically saying "nps is a bad metric for difficulty because patterns are a good metric". I'm not playing a game of guessing what the ass is trying to say; if you want to ask me any number of questions on the subject, like you did in your latest post, I will gladly do my best to answer them and correct my assumptions if necessary. However, do not expect me to also assume/guess your unstated mathematical/logical definitions of concepts such as pattern, transition, standard file and difficulty. Since you argue from those, I expect you to have a rigorous definition for each of them. If that is the case, refer to my second reply to you: provide actual content (be it a link to something or an explanation). Otherwise, I will focus on your questions and rightly consider any criticism so far as devoid of credibility. If for you that means holding my hand, you can pat your own back for all I care. You can be helpful and nobody denies it, but nobody's begging you for anything here, so you should probably give up on the condescending attitude.
xXOpkillerXx is offline   Reply With Quote
Old 07-7-2018, 09:56 AM   #27
xXOpkillerXx
Forever OP
Simfile JudgeFFR Simfile AuthorD8 Godly KeysmasherFFR Veteran
 
xXOpkillerXx's Avatar
 
Join Date: Dec 2008
Location: Canada,Quebec
Age: 28
Posts: 4,171
Default Re: Entropy Gain for per-receptor NPS

Quote:
Originally Posted by MinaciousGrace View Post
questions like, given a distribution of margin of error, is it more important to have an average as close to 0 as possible?

is it more important to minimize the outliers?

can you apportion relative importance? i.e. is it more important to have roughly 80% of files within 5% but with the remaining 20% having 30%+ margins of error? or would it be preferable to have 95% of files within 7.5% and the remaining 5% within 10%? 15%?

given the option do we want an average closer to +(overrated) or -(underrated) 1%? why?
Not having done any difficulty modeling yet, my take on that is that it would initially be better to aim for a high rate of very good guesses than to minimize the outliers' error. The reason is that this would give me information on what kinds of files really don't fit my model. From those results I can then make more accurate tweaks to the initial model, and repeat the process until some acceptable threshold is reached. Only then would I maybe sacrifice overall accuracy, if the payoff is good in terms of the number of files that end up subjectively not far from expectation. Mind you, like I mentioned in earlier posts, I'm not at the implementation/testing stage; I cannot give you a detailed explanation of my plans because I have yet to see what primitives/attributes I can extract from the files (the reason for this thread).
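
A minimal sketch of that loop, purely illustrative: compute how many files land within a chosen tolerance of their expected difficulty, and surface the worst outliers to inspect before the next tweak. The data layout and the 5% tolerance are assumptions, not anyone's actual code.

Code:
def fit_report(files, predicted, reference, close_pct=5.0, n_worst=10):
    """files: names; predicted/reference: difficulty values for the same files."""
    errors = sorted(
        (abs(p - r) / r * 100.0, name)
        for name, p, r in zip(files, predicted, reference)
    )
    close_rate = sum(1 for err, _ in errors if err <= close_pct) / len(errors)
    return close_rate, errors[-n_worst:]   # the files that really don't fit the model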

Quote:
Originally Posted by MinaciousGrace View Post
how do you examine and test for this? how do you go about eliciting results specific to your goals? how do you ensure that any methods employed don't produce undesirable effects on the results? are some undesirable effects worth a closer adherence to your goal?

how much do you account for human subjectivity when testing for this? think you're going to use neural networks and match it to a score base? wrong again you just exposed yourself to population bias which, going back to the previous point, exposes you to wild outliers (30%+) of players even if it fits well with most other players
Since this can only be an unsupervised problem if we want to keep some sort of numerical range as output (which I believe we obviously do), the results can only be trusted or not. FFR's difficulty spectrum still has flaws, but it's been worked on for a long time by expert players (OWA for example), so even though we don't want to use it as ground truth, it's still a good indication of how accurate the predictions are (even if it's not a set quantitative measurement). The prediction accuracy is definitely harder to judge when aiming for a precise fit to subjective expectation, because it's unsupervised. It then seems wiser to get a close enough fit and to formulate properly what explains the variations, so that the subjective opinions can be compared to what the model predicts, and, if no common ground can be found, go back to tweaking the model and adjusting primitives.
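
A rough sketch of that sanity check, assuming scipy is available and that predictions have already been mapped onto the existing difficulty scale; the in-game ratings are used only as a reference point, not as ground truth:

Code:
from scipy.stats import spearmanr

def compare_to_ingame(predicted, ingame, tolerance=3):
    """predicted/ingame: difficulty values for the same files, in the same order."""
    rho, _ = spearmanr(predicted, ingame)              # ordering agreement only
    close = sum(abs(p - g) <= tolerance for p, g in zip(predicted, ingame))
    return {"rank_correlation": rho, "within_tolerance": close / len(ingame)}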

Quote:
Originally Posted by MinaciousGrace View Post
you also have the least amount of data on the files you are most concerned with, which are the files that are the hardest and least played, because the files where there is the most player subjective agreement are the easy files that people have played to death over and over
Although this is obviously a problem that many people mention, I still have ideas to try. Depending on what model turns out to be acceptable, if any, a study of how each primitive behaves as difficulty ramps up can potentially be extrapolated to new data. I can't make any more assumptions before having fully defined my primitives.

Quote:
Originally Posted by MinaciousGrace View Post
how do you extrapolate existing player scorebases to new files?
I don't plan on using scores to estimate anything, but rather the existing difficulties for the ingame files.

Quote:
Originally Posted by MinaciousGrace View Post
do you apply neural networks to pattern configurations? how do you detect patterns? you already threw out the possibility of doing so, so that leaves you without that option. too bad
I won't detect patterns in a hardcoded way. I will deal with densities and various nps change distributions to accommodate the very many ways a section can be difficult. For example, a high nps on a single receptor with fairly low nps on the 3 other receptors, with minimal change, can represent anything from runningmen to anchored jumpgluts or anchored polyrhythms. The representation of patterns is still there, but not rigidly set in stone, since there are too many ways to mix patterns and very little chance of staying on the objective side when explaining the resulting difficulty. There's no way I can imagine someone objectively discussing the difficulty of a runningman pattern with a minijack on every other anchored note. Patterns are friendly concepts for communicating about files with an easy mental visualisation; they are not a suitable difficulty metric.
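
A minimal sketch of what those per-receptor density primitives could look like, assuming a chart is available as (time in seconds, receptor index) pairs; the function name, representation, and window sizes are illustrative only:

Code:
from bisect import bisect_left, bisect_right

def per_receptor_nps(notes, window=1.0, step=0.5):
    """notes: list of (time_sec, col) with col in 0..3. Returns an NPS series per receptor."""
    times = {c: sorted(t for t, col in notes if col == c) for c in range(4)}
    end = max(t for t, _ in notes)
    series = {c: [] for c in range(4)}
    t = 0.0
    while t <= end:
        for c in range(4):
            count = bisect_right(times[c], t + window) - bisect_left(times[c], t)
            series[c].append(count / window)
        t += step
    return series

# Running this with several windows (0.3, 0.5, 1, 2 s) and summarizing each series
# (min/max/mean/percentiles) gives the kind of distribution-based primitives described above.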

Quote:
Originally Posted by MinaciousGrace View Post
even if you didn't, how do you mathematically model pattern difficulty, how do you account for subjective player strengths given extremely specific patterns and extremely specific players? do you?
I don't model patterns. Strengths are objective and difficulty does not have anything to do with them, so no, I don't account for them. If a player is good at something, then so be it.

Quote:
Originally Posted by MinaciousGrace View Post
again, the same question but applied to specific patterns, is it more important to be generally accurate and leave open high error margins on outliers or sacrifice general accuracy in an attempt to account for the outliers as best as possible? how does the decision you make impact the overall correctness?
I believe I have answered this in the above replies.

Quote:
Originally Posted by MinaciousGrace View Post
how do you deal with transitions? are transitions important? trick question, yes you fucking idiot
You never defined transitions to begin with. However, I'd say I can deal with those through the nps change rate per receptor. For example, a roll into a jack will clearly show a drastic increase in one receptor's nps and a decrease on all the others. This applies to even the most bizarre patterns, since nps is a distribution over time and not a finite set of patterns.
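
A sketch of that change-rate idea, reusing the per-receptor series from the earlier sketch; the scoring rule (a simultaneous rise on some receptors and fall on others) is a guess at one way to flag a transition, not a tested formula:

Code:
def transition_scores(series, step=0.5):
    """series: {receptor: [nps per window]}. Returns a change score per window boundary."""
    scores = []
    for i in range(1, len(series[0])):
        deltas = [(series[c][i] - series[c][i - 1]) / step for c in range(4)]
        rise = sum(d for d in deltas if d > 0)     # receptors speeding up
        fall = -sum(d for d in deltas if d < 0)    # receptors slowing down
        scores.append(min(rise, fall))             # high only when both happen at once
    return scores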

Quote:
Originally Posted by MinaciousGrace View Post
do you model stamina drain? how do you model stamina drain? physical? mental? ffr requires additional consideration for mental stamina drain because of the aaa difficulty goal. is that objectively stupid? yes, will it change? probably not
This is, I would say, one of the more interesting questions you've asked. Yes, FFR difficulty is judged based on AAA, so there definitely has to be a primitive for song length or something similar. Average nps, mixed with the rest, can account for stamina drain, but that might need some tweaking too. I do believe the nps change rate is helpful here as well, because a constant nps over a long time is more stamina-draining than shorter hard sections. In cases where it's subjectively hard to tell, other primitives like max nps will hopefully lead the model to an acceptable prediction.
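
A small sketch of possible stamina primitives along those lines: total chart length plus the longest stretch where overall NPS stays near its peak. The 75% threshold is a placeholder, not a tuned value:

Code:
def stamina_primitives(total_nps, step=0.5, frac=0.75):
    """total_nps: overall NPS per window. Returns (length_sec, longest_sustained_sec)."""
    threshold = frac * max(total_nps)
    longest = run = 0
    for v in total_nps:
        run = run + 1 if v >= threshold else 0
        longest = max(longest, run)
    return len(total_nps) * step, longest * step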

Quote:
Originally Posted by MinaciousGrace View Post
the answers to these questions will guide your specific implementation, none of which you have clearly bothered asking, which is the same predictable fallacy that everyone falls into
I've mentioned a few times that it's preferable to extract primitives first and then see what modeling can be done.

Quote:
Originally Posted by MinaciousGrace View Post
you aren't going to reduce file difficulty to 2 prominent variables and even if you could i don't think you would be able to use that information to actually produce a single number and assuming you did you'd still be stuck with the inherent fallacy of using machine learning to produce values that you can't actually corroborate because of human subjectivity
By "2 prominent variables" I guess you meant any decently sized quantity of variables. As for the machine learning part, that's basically the whole foundation behind unsupervised algorithms: the model gives you an output which is meant to be closely analysed to find information about your data and compare it to your subjective expectations.



Sadly (not really, but w/e) you are banned, so you won't be able to reply to this soon, I suppose. I would've gladly listened to your arguments as to why I'm wrong on certain points, because there is no way I'm right about all of that right off the bat. Hopefully you learn to have a respectful conversation/debate before you're unbanned, though.

Last edited by xXOpkillerXx; 07-7-2018 at 09:57 AM..
xXOpkillerXx is offline   Reply With Quote
Old 07-7-2018, 11:20 AM   #28
EtienneSM
FFR Player
D8 Godly Keysmasher
 
Join Date: Jan 2013
Age: 26
Posts: 1,724
Send a message via Skype™ to EtienneSM
Default Re: Entropy Gain for per-receptor NPS

I read neural networks and FFR


why
__________________
Quality quotes:

Quote:
Originally Posted by KgZ View Post
enjoy having every guy ask if they can get some love on their weiner
Quote:
Originally Posted by Izzy View Post
I also like the nps scale. The standard ITG scale for harder files is blown out of proportion and no longer makes sense.
Quote:
Originally Posted by kommisar View Post
nps is still a better idea for ratings
Quote:
Originally Posted by klimtkiller View Post
there is 1 tip for people going to college. When you're in college, you'll be 16, which is the age where (where i live) you can get laid lawfully. basically, get laid asap when they look the best.
Quote:
Originally Posted by Rapta View Post
My logic is that the brain processes in 60 FPS so I play 60 FPS.
EtienneSM is offline   Reply With Quote
Old 07-7-2018, 11:26 AM   #29
xXOpkillerXx
Forever OP
Simfile JudgeFFR Simfile AuthorD8 Godly KeysmasherFFR Veteran
 
xXOpkillerXx's Avatar
 
Join Date: Dec 2008
Location: Canada,Quebec
Age: 28
Posts: 4,171
Default Re: Entropy Gain for per-receptor NPS

Quote:
Originally Posted by EtienneSM View Post
I read neural networks and FFR


why
I was also curious why mina only mentioned those. You can do regression with them, but I really wonder whether they're efficient at all in this context. Do you have any specific reason to totally discard them, though?
xXOpkillerXx is offline   Reply With Quote
Old 07-7-2018, 11:28 AM   #30
dadcop2
FFR Player
 
Join Date: Jan 2016
Posts: 229
Default Re: Entropy Gain for per-receptor NPS

Quote:
Originally Posted by EtienneSM View Post
I read neural networks and FFR


why
because i learned about them at a cursory level in my computer science 3 class and i HAVE to apply this concept here even if it doesn't apply !!!
dadcop2 is offline   Reply With Quote
Old 07-7-2018, 12:08 PM   #31
AutotelicBrown
Under the scarlet moon
FFR Simfile AuthorD7 Elite KeysmasherFFR Veteran
 
AutotelicBrown's Avatar
 
Join Date: Jan 2014
Age: 31
Posts: 921
Default Re: Entropy Gain for per-receptor NPS

Quote:
Originally Posted by xXOpkillerXx View Post
I don't plan on using scores to estimate anything, but rather the existing difficulties for the ingame files.
This makes no sense if you are using those as ground truth in the first place.

Anyway, I don't think it's worth breaking down what you currently have if you haven't built a model yet. I guess it's fine to test around with some data and see what happens, but it'll make more sense to decide what data to extract after you decide what you are modeling in the first place.

On the neural networks topic, the lack of useful data sucks, but I think convolutional networks could work well for building difficulty curve graphs.
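
A very rough sketch of what that could look like (PyTorch assumed, architecture and sizes invented for illustration): a small 1D convolutional net over the per-receptor NPS sequence that outputs one value per window, i.e. a difficulty curve rather than a single rating.

Code:
import torch.nn as nn

class DifficultyCurveNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(4, 16, kernel_size=5, padding=2),   # 4 input channels = 4 receptors
            nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=5, padding=2),   # 1 output channel = per-window difficulty
        )

    def forward(self, x):                # x: (batch, 4, num_windows)
        return self.net(x).squeeze(1)    # -> (batch, num_windows) difficulty curve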
AutotelicBrown is offline   Reply With Quote
Old 07-7-2018, 12:19 PM   #32
leonid
I am leonid
Retired StaffFFR Simfile AuthorFFR Music ProducerD7 Elite KeysmasherFFR Veteran
 
leonid's Avatar
 
Join Date: Oct 2008
Location: MOUNTAIN VIEW
Age: 34
Posts: 8,080
Default Re: Entropy Gain for per-receptor NPS

So I didn't read this convo but what do you think of showing % of players who played the file that passed/AA'd/AAA'd/etc it, SDVX style
__________________



Proud member of Team No
leonid is offline   Reply With Quote
Old 07-7-2018, 12:22 PM   #33
xXOpkillerXx
Forever OP
Simfile JudgeFFR Simfile AuthorD8 Godly KeysmasherFFR Veteran
 
xXOpkillerXx's Avatar
 
Join Date: Dec 2008
Location: Canada,Quebec
Age: 28
Posts: 4,171
Default Re: Entropy Gain for per-receptor NPS

Quote:
Originally Posted by AutotelicBrown View Post
This makes no sense if you are using those as ground truth in the first place.

Anyway, I don't think it's worth breaking down what you currently have if you haven't built a model yet. I guess it's fine to test around with some data and see what happens, but it'll make more sense to decide what data to extract after you decide what you are modeling in the first place.

On the neural networks topic, the lack of useful data sucks, but I think convolutional networks could work well for building difficulty curve graphs.
What do you mean by "if you are using those as ground truth"? I said I'm going the unsupervised way; there is no ground truth in that, afaik. I plan on basing any estimation on the existing difficulties, not on scores. Sorry if I misunderstood your point.

The rest is all true. The goal of this thread was never so much to talk about modeling as to discuss primitives.
xXOpkillerXx is offline   Reply With Quote
Old 07-7-2018, 12:25 PM   #34
xXOpkillerXx
Forever OP
Simfile JudgeFFR Simfile AuthorD8 Godly KeysmasherFFR Veteran
 
xXOpkillerXx's Avatar
 
Join Date: Dec 2008
Location: Canada,Quebec
Age: 28
Posts: 4,171
Default Re: Entropy Gain for per-receptor NPS

Quote:
Originally Posted by leonid View Post
So I didn't read this convo but what do you think of showing % of players who played the file that passed/AA'd/AAA'd/etc it, SDVX style
I currently only have the rights to provide stats on the songs/files; I have nothing on the users.

If you meant it as some kind of attribute for predicting difficulty, could you please explain your reasoning? Otherwise, I'm sorry, I can't do that.
xXOpkillerXx is offline   Reply With Quote
Old 07-7-2018, 12:29 PM   #35
leonid
I am leonid
Retired StaffFFR Simfile AuthorFFR Music ProducerD7 Elite KeysmasherFFR Veteran
 
leonid's Avatar
 
Join Date: Oct 2008
Location: MOUNTAIN VIEW
Age: 34
Posts: 8,080
Default Re: Entropy Gain for per-receptor NPS

It gives a rough estimation of difficulty through general performances on the chart
Low % = Hard
High % = Easy
But you need a server to log all the user scores, users have to be online, and the chart needs a good enough number of players
Using a neural network is like assigning one person to judge all the difficulties (since it's supposed to map human brains and whatnot), but what if you disagree with that person
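
A toy sketch of that readout, assuming per-chart score records existed (the replies below note FFR doesn't expose them for this purpose): the AAA rate is flipped so that a low rate reads as high difficulty.

Code:
def aaa_rate_difficulty(plays):
    """plays: list of {"player_id": ..., "aaa": bool} records for one chart."""
    if not plays:
        return None            # brand-new file: no signal at all, the main weakness here
    aaa_rate = sum(p["aaa"] for p in plays) / len(plays)
    return 1.0 - aaa_rate      # low AAA rate -> high value, on a crude 0..1 scale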
__________________



Proud member of Team No
leonid is offline   Reply With Quote
Old 07-7-2018, 12:40 PM   #36
xXOpkillerXx
Forever OP
Simfile JudgeFFR Simfile AuthorD8 Godly KeysmasherFFR Veteran
 
xXOpkillerXx's Avatar
 
Join Date: Dec 2008
Location: Canada,Quebec
Age: 28
Posts: 4,171
Default Re: Entropy Gain for per-receptor NPS

Quote:
Originally Posted by leonid View Post
It gives a rough estimation of difficulty through general performances on the chart
Low % = Hard
High % = Easy
But you need a server to log all the user scores, users have to be online, and the chart needs a good enough number of players
Using a neural network is like assigning one person to judge all the difficulties (since it's supposed to map human brains and whatnot), but what if you disagree with that person
Yeah, just the fact that you need a massive amount of plays from various players across the level spectrum makes that unviable. Plus, the difficulty should be predicted before any plays are made (or else it's a bit pointless).

As for the neural net, I have no clue why people are all on it; I don't recall mentioning it in this thread. That being said, you wonder what happens if people don't agree with a neural net's output? Well, if the vast majority agrees with the net, then those who disagree should try to see whether they're biased because of their skillset and understand what led to that output. If only a minority agrees with the output, then the possibility of it being wrong is greater, and the chart would need closer inspection to see why that is the case. That's just how things go when you have no predefined output classes or labeled input.
xXOpkillerXx is offline   Reply With Quote
Old 07-7-2018, 01:01 PM   #37
RenegadeLucien
FFR Veteran
Skill Rating Designer
Retired StaffFFR Veteran
 
RenegadeLucien's Avatar
 
Join Date: Jan 2016
Age: 27
Posts: 282
Default Re: Entropy Gain for per-receptor NPS

Just for the record, I've tried to produce a difficulty algorithm primarily based on "distance to last note on each arrow/hand".

I don't know if there's something inherently wrong with this approach or if I was just too inexperienced at programming to see it through to a satisfactory completion, but I was unable to get a result that was deemed usable by myself and the difficulty consultants I discussed the results with.

On the subject of neural nets, both myself and Trumpet63 have attempted to use neural nets on FFR's song difficulties using extended level stats. Trumpet got his neural net closer than mine (his had a mean difference of 2.4 points from the actual value whereas mine was 4-5, IIRC), but his used several features (such as note color) that could be cheesed by a clever stepfile artist to over- or underrepresent the difficulty of their file (e.g. if white notes = high diff, throw in a lot of white grace notes that function identically to jumps in practice).
__________________


RenegadeLucien is offline   Reply With Quote
Old 07-7-2018, 01:07 PM   #38
xXOpkillerXx
Forever OP
Simfile JudgeFFR Simfile AuthorD8 Godly KeysmasherFFR Veteran
 
xXOpkillerXx's Avatar
 
Join Date: Dec 2008
Location: Canada,Quebec
Age: 28
Posts: 4,171
Default Re: Entropy Gain for per-receptor NPS

Quote:
Originally Posted by RenegadeLucien View Post
Just for the record, I've tried to produce a difficulty algorithm primarily based on "distance to last note on each arrow/hand".

I don't know if there's something inherently wrong with this approach or if I was just too inexperienced at programming to see it through to a satisfactory completion, but I was unable to get a result that was deemed usable by myself and the difficulty consultants I discussed the results with.

On the subject of neural nets, both myself and Trumpet63 have attempted to use neural nets on FFR's song difficulties using extended level stats. Trumpet got his neural net closer than mine (his had a mean difference of 2.4 points from the actual value whereas mine was 4-5, IIRC), but his used several features (such as note color) that could be cheesed by a clever stepfile artist to over- or underrepresent the difficulty of their file (e.g. if white notes = high diff, throw in a lot of white grace notes that function identically to jumps in practice).
Thanks for the information !

What metrics did you use in relation with that distance ? Was it min/max/avg/distribution/... ? Because just like nps, it sounds like a solution that needs quite a few statistical values.

Yes note color is arbitrary.
xXOpkillerXx is offline   Reply With Quote
Old 07-7-2018, 01:20 PM   #39
RenegadeLucien
FFR Veteran
Skill Rating Designer
Retired StaffFFR Veteran
 
RenegadeLucien's Avatar
 
Join Date: Jan 2016
Age: 27
Posts: 282
Default Re: Entropy Gain for per-receptor NPS

It was not pure NPS (NPS was included in the algorithm but only a small factor). It was more like "give each note a value based on how close it is to the next one on the next arrow/hand/overall, then sum everything and take the highest consecutive X notes, add factors for stamina/consistency/NPS"

I did a bunch of playing around with the factors and scales but I would always end up with either long streamy files being rated way too high or big spiky files being rated way too high (or both.)
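
A loose reconstruction of that recipe (one reading of it, not RenegadeLucien's actual code; every constant is a placeholder): weight each note by the gap to the previous note on the same arrow, take the hardest run of consecutive notes, then nudge by overall NPS.

Code:
def spike_based_difficulty(notes, run_len=100, nps_weight=0.1):
    """notes: chronologically sorted (time_sec, col) pairs. Returns one difficulty-ish number."""
    last_seen = {}
    values = []
    for t, col in notes:
        gap = t - last_seen.get(col, t - 2.0)      # unseen column: assume a generous 2 s gap
        values.append(1.0 / max(gap, 0.01))         # closer notes on the same arrow = harder
        last_seen[col] = t
    hardest_run = max(sum(values[i:i + run_len])
                      for i in range(max(1, len(values) - run_len + 1)))
    length = (notes[-1][0] - notes[0][0]) or 1.0
    return hardest_run / run_len + nps_weight * (len(notes) / length)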
__________________



Last edited by RenegadeLucien; 07-7-2018 at 01:21 PM..
RenegadeLucien is offline   Reply With Quote
Old 07-7-2018, 01:29 PM   #40
xXOpkillerXx
Forever OP
Simfile JudgeFFR Simfile AuthorD8 Godly KeysmasherFFR Veteran
 
xXOpkillerXx's Avatar
 
Join Date: Dec 2008
Location: Canada,Quebec
Age: 28
Posts: 4,171
Default Re: Entropy Gain for per-receptor NPS

Quote:
Originally Posted by RenegadeLucien View Post
It was not pure NPS (NPS was included in the algorithm but only a small factor). It was more like "give each note a value based on how close it is to the next one on the next arrow/hand/overall, then sum everything and take the highest consecutive X notes, add factors for stamina/consistency/NPS"

I did a bunch of playing around with the factors and scales but I would always end up with either long streamy files being rated way too high or big spiky files being rated way too high (or both.)
What do you think of the rate at which that distance value changes? Maybe also confined to a certain timeframe and averaged over it? If the rate is high, you have a spiky/bursty section; if it's low, the difficulty is pretty constant. Then, with the actual min/max distance, you can get a better idea of how drastic the spikes are or how fast the constant section is.
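
For what it's worth, a quick sketch of that framing, with the per-note gap values assumed to come from a metric like the one above; the window size is arbitrary.

Code:
def gap_change_profile(gaps, window=16):
    """gaps: per-note gap values in chart order. Returns (mean change, min gap, max gap) per window."""
    profile = []
    for i in range(0, len(gaps) - window + 1, window):
        chunk = gaps[i:i + window]
        changes = [abs(b - a) for a, b in zip(chunk, chunk[1:])]
        # high mean change -> bursty/spiky section; low -> steady stream,
        # with min/max saying how sharp the spike or how fast the stream is
        profile.append((sum(changes) / len(changes), min(chunk), max(chunk)))
    return profile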
xXOpkillerXx is offline   Reply With Quote