08-27-2019, 07:54 AM
#29
✘ Forever OP✘
Re: Scoring Data in Brutal Difficulty Range
Here's my take on the difficulty factors of individual notes:
-Local one-hand complexity: how difficult it is to hit that note given the past and future X notes on the same hand (future notes are needed to account for readability). I haven't finalized this, but generally the difficulty rises first with the spacing of notes (the fewer frames between notes, the harder, in a non-linear fashion: 1-framers are much harder than 2-framers, while 30 and 31 frames are about equally hard), then with the transition type (at the same speed, a jump into a single note is always harder than a minijack or a jumpjack, and single-to-single patterns like 12, 34, 21, or 43 get a special weight depending on whether they can easily be hit as a jump). A time-based Gaussian window over each note gave decent results with a window of about 1 second (30 frames) or less on each side and a low standard deviation (< 1.0).
-Global 2-hands complexity: a distribution over the two 1-hand complexities at each timestep. For example, a very hard section on one hand paired with a very simple one on the other could be easier than medium difficulty on both hands at the same time. This needs refinement for polyrhythms and a more well-defined formulation (covering the general case and edge cases).
-Note time: a factor based purely on where the note sits in time. It has to be formulated so that a low-complexity note 5 minutes in cannot be rated harder than a high-complexity note 1 minute in; it is easy to define once the complexities are defined. This accounts for focus loss and partly for stamina.
-Note stamina: a large, past-only time window over the aggregated factors above. This essentially accounts for breaks in a song: it's easier to hit a hard section right after a break than in the middle of a long stream.
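To make the window ideas above concrete, here's a minimal sketch of two of the factors: a symmetric Gaussian window for local complexity and a past-only window for stamina. The function names, window sizes, and the notion of a per-note "base cost" are my own placeholder assumptions, not the finalized setup described in the post:

```python
import math

def gaussian_window_difficulty(note_times, base_costs, window_s=1.0, sigma=0.5):
    """Local complexity sketch: for each note, sum the base costs of
    nearby notes (past AND future, within window_s seconds on each side),
    weighted by a Gaussian so that tightly packed notes raise difficulty
    non-linearly while distant notes contribute almost nothing."""
    out = []
    for t in note_times:
        total = 0.0
        for t2, cost in zip(note_times, base_costs):
            dt = t2 - t
            if abs(dt) <= window_s:
                total += cost * math.exp(-(dt * dt) / (2.0 * sigma * sigma))
        out.append(total)
    return out

def stamina_factor(note_times, local_diff, window_s=10.0):
    """Stamina sketch: a past-only average of recent local difficulty,
    so a hard section right after a break scores lower than the same
    section in the middle of a stream."""
    out = []
    for t in note_times:
        recent = [d for t2, d in zip(note_times, local_diff)
                  if t - window_s < t2 <= t]
        out.append(sum(recent) / len(recent))  # window always includes the note itself
    return out
```

With uniform base costs, a note inside a dense cluster comes out harder than an isolated note after a gap, which matches the intended non-linear spacing behaviour.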
This gives a difficulty number to each note of a file, which then makes it possible to compute a different AAA-equiv formula for each file. The overall difficulty of a file would then not be a single number, but rather a distribution over raw goods count. Nothing forbids us from computing and showing the difficulty for a specific count (AAA difficulty, 10g difficulty, 20g difficulty, etc).
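One simple way to read a "difficulty at k goods" off the per-note numbers (my own assumption for illustration, the post doesn't commit to a formula): with k goods allowed, a player can effectively "spend" them on the k hardest notes, so the rating is driven by the (k+1)-th hardest note. AAA difficulty (0 goods) is then just the hardest note:

```python
def difficulty_at_goods(note_difficulties, goods_allowed):
    """Hypothetical aggregation: the k hardest notes are forgiven,
    so the file's difficulty at k goods is the (k+1)-th hardest note."""
    ranked = sorted(note_difficulties, reverse=True)
    k = min(goods_allowed, len(ranked) - 1)
    return ranked[k]

per_note = [3.2, 7.5, 5.1, 9.8, 4.0]
print(difficulty_at_goods(per_note, 0))   # AAA difficulty: hardest note -> 9.8
print(difficulty_at_goods(per_note, 2))   # 2g difficulty -> 5.1
```

Evaluating this at several counts (0g, 10g, 20g, ...) yields exactly the kind of difficulty-vs-goods distribution described above, rather than a single number per file.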
I already have a pretty good setup to compute these, so I'm posting this here to gather more opinions on the aggregation part and the various factors (Gaussian parameters, complexity, etc).
@rob if you'd prefer this in another thread, let me know.
Last edited by xXOpkillerXx; 08-27-2019 at 08:10 AM..