#11
FFR Player
Join Date: Dec 2007
Location: nima
Posts: 4,278
questions like: given a distribution of margin of error, is it more important to have an average as close to 0 as possible, or is it more important to minimize the outliers? can you apportion relative importance? i.e., is it better to have roughly 80% of files within 5% while the remaining 20% carry 30%+ margins of error, or would it be preferable to have 95% of files within 7.5% and the remaining 5% within 10%? 15%? given the option, do we want an average closer to +1% (overrated) or -1% (underrated)? why?

how do you examine and test for this? how do you go about eliciting results specific to your goals? how do you ensure that any methods you employ don't produce undesirable effects on the results? are some undesirable effects worth a closer adherence to your goal? how much do you account for human subjectivity when testing for this?

think you're going to use neural networks and fit them to a scorebase? wrong again, you just exposed yourself to population bias, which, going back to the previous point, leaves you with wild outliers (30%+) for some players even if the fit is good for most others. you also have the least data on the files you are most concerned with, the hardest and least played ones, because the files with the most subjective agreement among players are the easy files people have played to death over and over. how do you extrapolate existing player scorebases to new files?

do you apply neural networks to pattern configurations? how do you detect patterns? you already threw out the possibility of doing so, so that leaves you without that option. too bad. even if you hadn't, how do you mathematically model pattern difficulty? how do you account for subjective player strengths given extremely specific patterns and extremely specific players? do you? again, the same question applied to specific patterns: is it more important to be generally accurate and leave open high error margins on outliers, or to sacrifice general accuracy in an attempt to account for the outliers as best as possible? how does that decision impact overall correctness?

how do you deal with transitions? are transitions important? trick question, yes you fucking idiot.

do you model stamina drain? how do you model stamina drain? physical? mental? ffr requires additional consideration for mental stamina drain because of the aaa difficulty goal. is that objectively stupid? yes. will it change? probably not.

the answers to these questions will guide your specific implementation, and you clearly haven't bothered asking any of them, which is the same predictable fallacy everyone falls into. you're doing it ass backwards: stop trying to build the spaceship, figure out where you're going first.

ps. it's possible to reverse engineer my entire calc from the last 4 posts, so if you really can't get anything from them that's on you

pps. do you understand better now, my virulent disdain for all of you

ppps. in case im not done holding your hand enough

Quote:
you aren't going to reduce file difficulty to 2 prominent variables, and even if you could i don't think you would be able to use that information to actually produce a single number, and assuming you did you'd still be stuck with the inherent fallacy of using machine learning to produce values that you can't actually corroborate because of human subjectivity

Last edited by MinaciousGrace; 07-7-2018 at 05:13 AM.
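To make the error-distribution tradeoff raised above concrete, here is a minimal sketch (not the author's calc) of how one might score a calculator's output against perceived difficulty: signed mean error vs. outlier behavior, plus a loss whose exponent decides how hard outliers are punished. The file ratings, the 5% cutoff, and the outlier_power knob are all invented for illustration.

```python
# Hypothetical sketch: comparing plain mean absolute error against an
# outlier-weighted loss for a difficulty calc. All numbers are made up.
import numpy as np

def error_stats(predicted, perceived):
    """Margin of error per file, as a signed fraction of perceived difficulty."""
    err = (np.asarray(predicted) - np.asarray(perceived)) / np.asarray(perceived)
    return {
        "mean_signed": err.mean(),                   # near 0 => no systemic over/underrating
        "mean_abs": np.abs(err).mean(),              # general accuracy
        "p95_abs": np.percentile(np.abs(err), 95),   # how bad the tail gets
        "frac_within_5pct": (np.abs(err) < 0.05).mean(),
    }

def weighted_loss(predicted, perceived, outlier_power=1.0):
    """outlier_power=1 ~ care about average error; >1 ~ punish outliers harder."""
    err = np.abs((np.asarray(predicted) - np.asarray(perceived)) / np.asarray(perceived))
    return (err ** outlier_power).mean()

# toy example: two calcs with similar average error but very different tails
perceived = np.array([20, 35, 50, 65, 80, 95], dtype=float)
calc_a = perceived * np.array([1.02, 0.98, 1.03, 0.97, 1.30, 0.70])  # wild outliers
calc_b = perceived * np.array([1.07, 0.93, 1.06, 0.94, 1.08, 0.92])  # uniformly mediocre

for name, calc in (("A", calc_a), ("B", calc_b)):
    print(name, error_stats(calc, perceived),
          "L1:", round(weighted_loss(calc, perceived, 1.0), 4),
          "L2:", round(weighted_loss(calc, perceived, 2.0), 4))
```

Which of those numbers you optimize is exactly the goal-setting decision the post says has to come before any implementation.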
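On the pattern-detection question, here is one naive approach sketched under assumptions: classify short windows of a 4-key chart by per-column repetition and chord density. The window contents, thresholds, and category names are hypothetical, not taken from the posts above.

```python
# Hypothetical sketch of naive pattern detection over a 4-key note chart.
from collections import Counter

def classify_window(notes):
    """notes: list of (time_seconds, column) within one short window."""
    if not notes:
        return "empty"
    cols = Counter(col for _, col in notes)
    times = Counter(t for t, _ in notes)
    chord_frac = sum(1 for c in times.values() if c > 1) / len(times)
    max_col_frac = max(cols.values()) / len(notes)
    if max_col_frac > 0.5:
        return "jack"        # one column dominates => repeated same-finger hits
    if chord_frac > 0.3:
        return "jumpstream"  # many simultaneous notes mixed into the run
    return "stream"

# usage on a toy window: column 2 hit repeatedly => classified as a jack
window = [(0.00, 2), (0.12, 2), (0.24, 2), (0.36, 1), (0.48, 2)]
print(classify_window(window))  # -> "jack"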
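And on stamina drain, a minimal leaky-bucket sketch: effort above a sustainable rate accumulates fatigue, rest bleeds it off, and accumulated fatigue inflates the perceived difficulty of later sections. The base_rate, gain, and recovery constants and the multiplicative fatigue adjustment are assumptions for illustration only; they say nothing about how the author's calc actually models it.

```python
# Hypothetical leaky-bucket stamina model over per-interval note density.
def stamina_adjusted_difficulty(interval_nps, base_rate=8.0, gain=0.08, recovery=0.05):
    """interval_nps: notes-per-second for each time slice of a file."""
    fatigue = 0.0
    adjusted = []
    for nps in interval_nps:
        load = nps - base_rate
        if load > 0:
            fatigue += gain * load                          # working above sustainable rate builds fatigue
        else:
            fatigue = max(0.0, fatigue + recovery * load)   # rest recovers, floored at 0
        adjusted.append(nps * (1.0 + fatigue))              # same pattern feels harder when fatigued
    return adjusted

# usage: a burst, a rest, then a longer drain section
print(stamina_adjusted_difficulty([12, 12, 12, 4, 4, 14, 14, 14, 14]))
```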