07-7-2018, 03:16 AM | #21 | |
✘ Forever OP✘
Join Date: Dec 2008
Location: Canada,Quebec
Age: 29
Posts: 4,172
|
Re: Entropy Gain for per-receptor NPS
Quote:
You have yet to implement something that doesn't require so many bans on files, and how many times have I heard Etterna players say "wow, this is nowhere near the rating I thought this was worth". Now, this thread is about model attributes, and if you don't feel like having a normal discussion about the various things that were mentioned so far, get lost man. I will be fine with the link only. If you want to explain anything you feel would need closer attention, please go ahead. |
|
07-7-2018, 03:55 AM | #22 | |
FFR Player
Join Date: Dec 2007
Location: nima
Posts: 4,278
|
Re: Entropy Gain for per-receptor NPS
Quote:
im not here to help you; i did give you the information you needed to help yourself and explicitly rebuked your assessment of how patterns are unimportant and how nps metrics can be used in totality, and if you stopped to think about it you would realize why ( SUPREME HINT: IT HAS TO DO WITH THE FACT THAT PATTERN CONFIGURATION HAS HIGHER POTENTIAL IMPACT ON DIFFICULTY THAN NPS )

im just here because its amusing to watch you get buttmad over my specific aversion to emotionally coddling you while giving you everything you need to figure shit out

my being an asshole has no bearing on your capacity to think about or understand things, but it's nice to see that you'll actively stymie your ability to do so just to spite me

Last edited by MinaciousGrace; 07-7-2018 at 04:02 AM.. |
|
07-7-2018, 04:04 AM | #23 |
FFR Player
Join Date: Dec 2007
Location: nima
Posts: 4,278
|
Re: Entropy Gain for per-receptor NPS
here's another free supreme hint:
define difficulty

e: supreme hint #3: if you can't articulate and understand a robust statistical definition of difficulty then you have no business going anywhere near machine learning or neural networks, although, not unironically, if you could you wouldn't be doing so in the first place

Last edited by MinaciousGrace; 07-7-2018 at 04:17 AM.. |
07-7-2018, 04:28 AM | #24 |
FFR Player
Join Date: Dec 2007
Location: nima
Posts: 4,278
|
Re: Entropy Gain for per-receptor NPS
supreme hint #4: ffr's difficulty is based on aaa rating which places greater influence on rating to specific/unique patterns, difficulty spikes, and generalized factors such as length, inevitably increasing overall variance particularly with non standard files and moreover increasing subjective variance when evaluating the accuracy of an estimated difficulty
supreme hint #5: supreme hint #4 should help you with #2 and #3

supreme hint #6: it's not that you're approaching the problem incorrectly because you're thinking of it incorrectly, it's that you haven't thought about it at all. you're trying to find answers to questions you didn't ask because you assume the answers will be self evident

they're not |
07-7-2018, 04:44 AM | #25 | |
FFR Player
Join Date: Dec 2007
Location: nima
Posts: 4,278
|
Re: Entropy Gain for per-receptor NPS
questions like, given a distribution of margin of error, is it more important to have an average as close to 0 as possible?
is it more important to minimize the outliers? can you apportion relative importance? i.e. is it more important to have roughly 80% of files within 5% but with the remaining 20% having 30%+ margins of error? or would it be preferable to have 95% of files within 7.5% and the remaining 5% within 10%? 15%?

given the option, do we want an average closer to +1% (overrated) or -1% (underrated)? why?

how do you examine and test for this? how do you go about eliciting results specific to your goals? how do you ensure that any methods employed don't produce undesirable effects on the results? are some undesirable effects worth a closer adherence to your goal? how much do you account for human subjectivity when testing for this?

think you're going to use neural networks and match it to a score base? wrong again. you just exposed yourself to population bias which, going back to the previous point, exposes you to wild outliers (30%+) of players even if it fits well with most other players. you also have the least amount of data on the files you are most concerned with, which are the files that are the hardest and least played, because the files where there is the most player subjective agreement are the easy files that people have played to death over and over.

how do you extrapolate existing player scorebases to new files? do you apply neural networks to pattern configurations? how do you detect patterns? you already threw out the possibility of doing so, so that leaves you without that option. too bad.

even if you didn't, how do you mathematically model pattern difficulty? how do you account for subjective player strengths given extremely specific patterns and extremely specific players? do you?

again, the same question but applied to specific patterns: is it more important to be generally accurate and leave open high error margins on outliers, or sacrifice general accuracy in an attempt to account for the outliers as best as possible?

how does the decision you make impact the overall correctness? how do you deal with transitions? are transitions important? trick question, yes you fucking idiot.

do you model stamina drain? how do you model stamina drain? physical? mental? ffr requires additional consideration for mental stamina drain because of the aaa difficulty goal. is that objectively stupid? yes. will it change? probably not.

the answers to these questions will guide your specific implementation, none of which you have clearly bothered asking, which is the same predictable fallacy that everyone falls into. you're doing it ass backwards. stop trying to build the spaceship, figure out where you're going first.

ps. it's possible to reverse engineer my entire calc from the last 4 posts so if you really can't get anything from them that's on you

pps. do you understand better now, my virulent disdain for all of you

ppps. in case im not done holding your hand enough Quote:
you aren't going to reduce file difficulty to 2 prominent variables, and even if you could i don't think you would be able to use that information to actually produce a single number. and assuming you did, you'd still be stuck with the inherent fallacy of using machine learning to produce values that you can't actually corroborate because of human subjectivity.

Last edited by MinaciousGrace; 07-7-2018 at 05:13 AM.. |
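The error-distribution questions above (average bias vs. outliers, tolerance bands) can be made concrete with a short sketch. The consensus ratings, predictions, and tolerance value below are invented purely for illustration.

```python
# Summarize how a difficulty estimator's errors are distributed: mean
# signed error (bias), mean absolute error, share of files inside a
# tolerance band, and the single worst outlier. All numbers are made up.

def error_report(predicted, agreed, tol=0.05):
    """Relative-error summary of estimates vs. consensus difficulties."""
    errors = [(p - a) / a for p, a in zip(predicted, agreed)]
    return {
        "mean": sum(errors) / len(errors),                 # + means overrated on average
        "mae": sum(abs(e) for e in errors) / len(errors),  # typical error magnitude
        "within_tol": sum(abs(e) <= tol for e in errors) / len(errors),
        "worst": max(abs(e) for e in errors),              # the outlier question
    }

agreed    = [10, 25, 40, 55, 70, 85, 100]   # hypothetical consensus difficulties
predicted = [11, 24, 42, 53, 72, 80, 131]   # estimator with one bad hard-file outlier

report = error_report(predicted, agreed, tol=0.075)
# 5 of 7 files land within 7.5%, but the hardest file misses by 31% --
# exactly the average-vs-outlier trade-off being asked about.
```

Whether that 31% outlier is acceptable in exchange for tight accuracy elsewhere is a goal decision the code cannot make; it only measures the trade-off.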
|
07-7-2018, 07:40 AM | #26 | |
✘ Forever OP✘
Join Date: Dec 2008
Location: Canada,Quebec
Age: 29
Posts: 4,172
|
Re: Entropy Gain for per-receptor NPS
Quote:
Now, about the actual topic: I will get to most of your questions soon. If you expect me to know the exact results of my future tests, you'll be disappointed to learn that that's not how things work. The second paragraph in that quote is just air, because you're basically saying: "nps is a bad metric for difficulty because patterns are a good metric". I'm not playing a game of guess-what-the-ass-is-trying-to-say; if you want to ask me any number of questions on the subject, like you did in your latest post, I will gladly do my best to answer them and correct my assumptions if necessary. However, do not expect me to also assume/guess your unmentioned mathematical/logical definitions of concepts such as pattern, transition, standard file, and difficulty. Since you argue from those, I expect you have a rigorous definition for each of them. If that is the case, refer to my second reply to you: provide actual content (be it a link to something or an explanation). Otherwise, I will focus on your questions and rightly consider any criticism so far as devoid of credibility. If for you that means holding my hand, you can pat your own back for all I care. You can be helpful and nobody denies it, but nobody's begging you for anything here, so you should probably give up on the condescending attitude. |
|
07-7-2018, 09:56 AM | #27 | |||||||||||
✘ Forever OP✘
Join Date: Dec 2008
Location: Canada,Quebec
Age: 29
Posts: 4,172
|
Re: Entropy Gain for per-receptor NPS
Quote:
Sadly (not really, but w/e) you are banned, so you won't be able to reply to this soon, I suppose. I would've gladly listened to your arguments as to why I'm wrong on certain points, because there is no way I can be right on all of that right off the bat. Hopefully you learn to have a respectful conversation/debate before you're unbanned, though. Last edited by xXOpkillerXx; 07-7-2018 at 09:57 AM.. |
07-7-2018, 11:20 AM | #28 | |||
FFR Player
|
Re: Entropy Gain for per-receptor NPS
I read neural networks and FFR
why
__________________
|
07-7-2018, 11:26 AM | #29 |
✘ Forever OP✘
Join Date: Dec 2008
Location: Canada,Quebec
Age: 29
Posts: 4,172
|
Re: Entropy Gain for per-receptor NPS
|
07-7-2018, 11:28 AM | #30 |
FFR Player
Join Date: Jan 2016
Posts: 229
|
Re: Entropy Gain for per-receptor NPS
|
07-7-2018, 12:08 PM | #31 | |
Under the scarlet moon
Join Date: Jan 2014
Age: 31
Posts: 921
|
Re: Entropy Gain for per-receptor NPS
Quote:
Anyway, I don't think it's worth breaking down what you currently have if you haven't built a model in the first place. I guess it's fine to test around with some data and see what happens, but it'll make more sense to decide what data to extract after you decide what you are modeling in the first place. On the neural networks topic, lack of useful data sucks, but I think convolutional networks could work well for building difficulty curve graphs. |
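The "difficulty curve graph" idea above can be sketched without any ML framework: run a 1D convolution over a per-second NPS sequence so short spikes blend into a curve. A real convolutional network would learn its kernels from data; the hand-picked kernel and NPS values here are invented for illustration.

```python
# Smooth a per-second NPS sequence with a single fixed 1D convolution
# kernel to get a rough "difficulty curve". A convolutional network
# would stack learned kernels instead; this only shows the shape of
# the computation. NPS values are invented.

def conv1d_same(signal, kernel):
    """1D convolution with zero padding; output length == input length."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal))]

nps = [2, 2, 3, 8, 9, 8, 3, 2, 2]   # a short spike mid-file
kernel = [0.25, 0.5, 0.25]          # smoothing kernel (weights sum to 1)
curve = conv1d_same(nps, kernel)    # peak stays at the spike, edges soften
```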
|
07-7-2018, 12:19 PM | #32 |
I am leonid
Join Date: Oct 2008
Location: MOUNTAIN VIEW
Age: 35
Posts: 8,080
|
Re: Entropy Gain for per-receptor NPS
So I didn't read this convo but what do you think of showing % of players who played the file that passed/AA'd/AAA'd/etc it, SDVX style
|
07-7-2018, 12:22 PM | #33 | |
✘ Forever OP✘
Join Date: Dec 2008
Location: Canada,Quebec
Age: 29
Posts: 4,172
|
Re: Entropy Gain for per-receptor NPS
Quote:
The rest is all true. The goal of this thread was never so much to talk about modeling as to discuss primitives. |
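On the primitives the thread title itself points at, here is one possible reading of per-receptor NPS plus an entropy measure; the chart representation (a list of (time, column) pairs) and the 4-key layout are assumptions for illustration, not FFR's actual internals.

```python
# Two candidate primitives: notes-per-second split out by receptor
# (column), and the Shannon entropy of how notes are spread across
# receptors. An even spread over 4 columns gives log2(4) = 2 bits;
# a single-column jack gives 0. Chart format is a hypothetical
# time-sorted (time, column) list.
import math

def per_receptor_nps(notes, n_receptors=4):
    """Notes per second for each column over the chart's full span."""
    times = [t for t, _ in notes]
    duration = (max(times) - min(times)) or 1.0
    counts = [0] * n_receptors
    for _, col in notes:
        counts[col] += 1
    return [c / duration for c in counts]

def receptor_entropy(notes, n_receptors=4):
    """Shannon entropy (bits) of the note distribution across receptors."""
    counts = [0] * n_receptors
    for _, col in notes:
        counts[col] += 1
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

# 16 notes rolling evenly across all four columns, 0.125s apart:
notes = [(i * 0.125, i % 4) for i in range(16)]
```

Entropy alone obviously can't separate a roll from a trill of the same spread, which is why it would be one primitive among several rather than a difficulty measure by itself.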
|
07-7-2018, 12:25 PM | #34 | |
✘ Forever OP✘
Join Date: Dec 2008
Location: Canada,Quebec
Age: 29
Posts: 4,172
|
Re: Entropy Gain for per-receptor NPS
Quote:
If you meant it as some kind of attribute to predict difficulty, could you please explain your reasoning? Otherwise, I'm sorry, I can't do that. |
|
07-7-2018, 12:29 PM | #35 |
I am leonid
Join Date: Oct 2008
Location: MOUNTAIN VIEW
Age: 35
Posts: 8,080
|
Re: Entropy Gain for per-receptor NPS
It gives a rough estimation of difficulty through general performances on the chart
Low % = Hard
High % = Easy

But you need a server to log all the user scores, users have to be online, and the chart needs a good enough number of players.

Using a neural network is like assigning one person to judge all the difficulties (since it's supposed to map human brains and what not), but what if you disagree with that person |
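The SDVX-style clear-rate idea above is straightforward once scores are logged server-side. The accuracy thresholds and the score sample below are hypothetical, not FFR's actual grade cutoffs.

```python
# Per-chart clear-rate stats from logged scores: the fraction of plays
# reaching each accuracy threshold. Low percentages suggest a hard
# chart. Thresholds and the score sample are invented for illustration.

THRESHOLDS = {"pass": 0.60, "AA": 0.93, "AAA": 0.99}

def clear_rates(scores, thresholds=THRESHOLDS):
    """Fraction of plays at or above each accuracy cutoff."""
    n = len(scores)
    return {grade: sum(s >= cut for s in scores) / n
            for grade, cut in thresholds.items()}

scores = [0.55, 0.72, 0.88, 0.94, 0.97, 0.995, 1.0, 0.91]
rates = clear_rates(scores)
# With this sample: 87.5% pass, 50% AA, 25% AAA.
```

As the post notes, the weak point is sample size: a chart with five logged plays gives rates too noisy to rank against one with five thousand.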
07-7-2018, 12:40 PM | #36 | |
✘ Forever OP✘
Join Date: Dec 2008
Location: Canada,Quebec
Age: 29
Posts: 4,172
|
Re: Entropy Gain for per-receptor NPS
Quote:
As for the neural net, I have no clue why people are all on it; I don't recall mentioning it in this thread. That being said, you wonder what happens if people don't agree with a neural net's output? Well, if the vast majority agrees with the net, then those who disagree should try to see if they're biased because of their skillset and understand what led to that output. If only a minority agrees with the output, then the possibility of it being wrong is greater and the chart would need closer inspection to see why that is the case. It's just how things go when you have no predefined output class or labeled input. |
|
07-7-2018, 01:01 PM | #37 |
FFR Veteran
Skill Rating Designer
Join Date: Jan 2016
Age: 28
Posts: 282
|
Re: Entropy Gain for per-receptor NPS
Just for the record, I've tried to produce a difficulty algorithm primarily based on "distance to last note on each arrow/hand".
I don't know if there's something inherently wrong with this approach or if I was just too inexperienced at programming to see it through to a satisfactory completion, but I was unable to get a result that was deemed usable by myself and the difficulty consultants with whom I discussed the results. On the subject of neural nets, both Trumpet63 and I have attempted to use neural nets on FFR's song difficulties using extended level stats. Trumpet got his neural net closer than mine (his had a mean difference of 2.4 points from the actual value whereas mine was 4-5 IIRC), but his used several features (such as note color) that could be cheesed by a clever stepfile artist to over/underrepresent the difficulty of their file (ex. if white notes = high diff, throw in a lot of white grace notes that function identically to jumps in practice).
__________________
|
07-7-2018, 01:07 PM | #38 | |
✘ Forever OP✘
Join Date: Dec 2008
Location: Canada,Quebec
Age: 29
Posts: 4,172
|
Re: Entropy Gain for per-receptor NPS
Quote:
What metrics did you use in relation to that distance? Was it min/max/avg/distribution/...? Because just like nps, it sounds like a solution that needs quite a few statistical values. Yes, note color is arbitrary. |
|
07-7-2018, 01:20 PM | #39 |
FFR Veteran
Skill Rating Designer
Join Date: Jan 2016
Age: 28
Posts: 282
|
Re: Entropy Gain for per-receptor NPS
It was not pure NPS (NPS was included in the algorithm, but only as a small factor). It was more like "give each note a value based on how close it is to the next one on the next arrow/hand/overall, then sum everything and take the highest consecutive X notes, add factors for stamina/consistency/NPS".
I did a bunch of playing around with the factors and scales, but I would always end up with either long streamy files being rated way too high or big spiky files being rated way too high (or both).
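The approach described above can be sketched roughly as follows. The 1/gap weighting, the window size, and the chart format are stand-ins for illustration, not the actual factors or scales that were tuned.

```python
# Rough sketch of a "distance to last note on each arrow" scorer: each
# note is worth 1/gap, where gap is the time since the previous note on
# its column, and the chart's headline number is the best sum over X
# consecutive notes. Weighting and window size are invented stand-ins.

def note_scores(notes):
    """notes: time-sorted (time, column) pairs. Tighter same-column gaps
    score higher; the first note on a column scores 0."""
    last_seen = {}
    scores = []
    for t, col in notes:
        gap = t - last_seen[col] if col in last_seen else None
        scores.append(1.0 / gap if gap else 0.0)
        last_seen[col] = t
    return scores

def hardest_window(scores, x=4):
    """Highest score sum over any x consecutive notes."""
    return max(sum(scores[i:i + x]) for i in range(len(scores) - x + 1))

# Two-column trill that accelerates at the end:
notes = [(0.0, 0), (0.2, 1), (0.4, 0), (0.6, 1), (0.7, 0), (0.8, 1)]
difficulty = hardest_window(note_scores(notes))
```

Stamina/consistency/NPS factors would then be layered on top of the windowed sum, as the post describes; that layering is where the streamy-vs-spiky balancing problem shows up.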
__________________
Last edited by RenegadeLucien; 07-7-2018 at 01:21 PM.. |
07-7-2018, 01:29 PM | #40 | |
✘ Forever OP✘
Join Date: Dec 2008
Location: Canada,Quebec
Age: 29
Posts: 4,172
|
Re: Entropy Gain for per-receptor NPS
Quote:
|
|