Entropy Gain for per-receptor NPS
I'm currently working on adding a few more metrics to the extended statistics of every file, with help from PrawnSkunk to validate them and integrate them into the website. I'm reaching out to everyone who has some knowledge of machine learning and maths.
The first stats I finished coding are the NPS (split, just like the current total NPS, into different timeframes like .3s, .5s, 1s, 2s, etc.) for individual receptors (left, down, up, right). My intuition is that two 4 NPS sections like [1,2,3,4] vs [1,1,1,1] have very different difficulties, the latter being much harder. So, do you think that those, plus the total NPS, would give a significant entropy gain (or any equivalent, depending on the model) in computing the difficulties of the files? Any ideas/questions appreciated. |
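To make the idea concrete, here is a minimal sketch of how such a per-receptor NPS split could be computed; the `(time, column)` note representation and the sliding-window peak are assumptions for illustration, not the actual site code:

```python
from collections import defaultdict

def per_receptor_nps(notes, window=1.0):
    """Peak notes-per-second for each receptor over a sliding window.

    `notes` is a list of (time_in_seconds, column) pairs, with column in
    {0, 1, 2, 3} for left/down/up/right.  Returns {column: peak_nps}.
    """
    by_col = defaultdict(list)
    for t, col in notes:
        by_col[col].append(t)
    peaks = {}
    for col, times in by_col.items():
        times.sort()
        best, lo = 0, 0
        for hi in range(len(times)):
            while times[hi] - times[lo] > window:
                lo += 1                    # shrink the window from the left
            best = max(best, hi - lo + 1)  # most notes inside one window
        peaks[col] = best / window
    return peaks

# [1,1,1,1]: four notes on one receptor inside one second -> 4 nps there
print(per_receptor_nps([(0.0, 0), (0.25, 0), (0.5, 0), (0.75, 0)]))  # {0: 4.0}
# [1,2,3,4]: same total nps, spread across receptors -> 1 nps each
print(per_receptor_nps([(0.0, 0), (0.25, 1), (0.5, 2), (0.75, 3)]))
```

The two calls illustrate the intuition above: identical total NPS, very different per-receptor peaks.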
Re: Entropy Gain for per-receptor NPS
Could you factor in the occurrence of certain types of notes during sequences? Just as an example, a 20 NPS section of single-note streams is probably much harder than a 20 NPS section of dense JS where every other note is a jump, so maybe you could find some ratio of single notes to jumps, etc.
Obviously 4 NPS of repeated jacks is harder than 4 NPS of a roll, etc., but there are also things like 20 NPS of rolly streams generally being easier than 20 NPS of streams with lots of OHTs |
Re: Entropy Gain for per-receptor NPS
I'd be interested to see its results. I would think that there would be three basic pattern difficulties: NPS, jacks, and predominantly one-handed patterns. The coding would have to be able to read a song like club, which has a max NPS of only 16 but is considered a 75 currently. I would also think a song should get a bump in difficulty if it alternates between all three of those categories, or combines them, instead of just focusing on one. (I think it's part of the reason "Southern Cross" has seen such a drop in its recognized difficulty - modern stepcharts are much more likely to mix in a variety of complex patterns over just having speed.)
There are certain charts I've always felt were underrated, and if you had a draft program sometime, I'd give you a short list to test. |
Re: Entropy Gain for per-receptor NPS
Quote:
Quote:
Thanks for your questions |
Re: Entropy Gain for per-receptor NPS
Quote:
For the one handed, I was already thinking about adding the same kind of NPS splits but per hand: all {1} or {2}, and all {3} or {4}. That way, the one-handed trilling bias would be accounted for in the metrics, along with jumpjacks on a single hand. EDIT: I kinda get what you mean with the alternating patterns, but I don't think I agree. Would you have any other examples of it so that I can check them out? A metric of variety in patterns sounds pretty hard to define mathematically, although not impossible; it would still be computed by using some kind of normalized variance on the different NPS metrics. For example, whether the NPS-per-receptor has definite peaks vs. spread-out progression vs. constant NPS, etc. |
Re: Entropy Gain for per-receptor NPS
What I think would be interesting is to get a few players together and make two charts with various patterns in them - one set of simpler patterns and one set with more complex patterns (jumpjacks and handstream, etc).
Have each player submit scores on different rates of the chart and plot the decline in scores as the rate increases, until they reach the point of just mashing. Then use math to determine the relative difficulty of certain patterns over others. Using this method, you'd be able to compare "160 BPM handstream vs 190 BPM jumpstream", for example, or "jumpstream with mini-jacks vs jumpstream without them." Using multiple players will help reduce player ability bias. |
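As a sketch of how the "plot the decline and use math" step could work without full curve fitting, here is one way to reduce each pattern's rate/score curve to a single comparable number; the data points and the 50% threshold are invented for illustration:

```python
def rate_at_threshold(scores, threshold=0.5):
    """Interpolate the rate at which score first drops below `threshold`.

    `scores` is a list of (rate, normalized_score) pairs sorted by rate,
    with scores in [0, 1].  Returns None if the score never crosses.
    """
    for (r0, s0), (r1, s1) in zip(scores, scores[1:]):
        if s0 >= threshold > s1:
            # linear interpolation between the two surrounding samples
            return r0 + (s0 - threshold) * (r1 - r0) / (s0 - s1)
    return None

# hypothetical averaged player data: (rate, normalized score)
simple_set  = [(1.0, 0.99), (1.2, 0.95), (1.4, 0.70), (1.6, 0.30)]
complex_set = [(1.0, 0.97), (1.2, 0.80), (1.4, 0.40), (1.6, 0.10)]

# the pattern set whose curve collapses at a lower rate is the harder one
print(rate_at_threshold(simple_set))   # ≈ 1.5
print(rate_at_threshold(complex_set))  # ≈ 1.35
```

Averaging this breakdown rate over several players would give the relative pattern difficulty described above while damping individual ability bias.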
Re: Entropy Gain for per-receptor NPS
I meant more so like, denser jumpstream patterns, like suppose for example you had
2 (13) 2 (14) as a pattern in jumpstream. In order to achieve the same NPS with streams, the patterns would have to be faster since you don't have the double notes, but equally "difficult" streams would just be, like, a 4-note one-hand trill based on the pattern of JS, etc., and that wouldn't make up the same NPS. Also consider a pattern that's just a jumptrill: (12) (34) (12)... is arguably the same difficulty as (1) (4) (1) (4), or even (12) (1) (12)..., despite different NPS |
Re: Entropy Gain for per-receptor NPS
Quote:
At higher bpms the disparity in difficulty becomes a bit wider (e.g. 375 streaming pushes a speed threshold that 250 dense jumpstreaming doesn't quite match up with) but eventually you hit a point where they're both outside the realm of possibility to PA for almost everyone anyways (450 streaming vs 300 dense js etc.) |
Re: Entropy Gain for per-receptor NPS
Yes, with the latter part of my post you can assume the same as noted in the former, or better yet just a fairly equal note distribution across all four receptors. Barring bullshit like anchors or one-hand trills or patterning that trivializes the section almost entirely (like a giant roll), the stream would still likely have a slight edge in difficulty. I'd be willing to bet there's a considerably larger number of D7+ players that can maintain better PA on 250 dense JS over 375 streaming, despite equal NPS.
edit: Quote:
Quote:
I want to say patashu's TS difficulty calc took into account receptor nps and the results were super memey, but maybe you'll do it better (or maybe I'm mistaken and it just involved nps as a whole) |
Re: Entropy Gain for per-receptor NPS
Quote:
Quote:
I just want to mention again that although I only talk about NPS, there are Many metrics that can be extracted from that. What I mean by "judging with patterns" is any approach that tries to match hardcoded patterns in a file (kinda like a regex) and applies metrics to that; I believe it can never take into account every pattern and variation, as opposed to NPS metrics, which can model speed and hand bias in a way that encompasses all possibilities. |
Re: Entropy Gain for per-receptor NPS
I'm not home anymore, so I can't post the response I want atm, but I appreciate you striving to create something rooted in objectivity to tackle this problem (would love the same); I'm just fearful of potentially poor results, based on what others have tried to do in a similar fashion in the past.
Also hi chooby I saw u infracted me but I cant open PMs on my phone but that's ok I probably deserved it ps I missed u |
Re: Entropy Gain for per-receptor NPS
Quote:
EDIT: Even though I appreciate any comment about how this or that previous solution worked or not, since there are quite a few ways to approach the problem, I'd prefer if details of the mentioned solutions are linked to or explained thoroughly. Otherwise, I can only guess at the implementations, and that would most likely lead me nowhere. More maths and machine learning arguments would be much more productive imo. |
Re: Entropy Gain for per-receptor NPS
Quote:
No seriously provide information or don't post. Idc how much you know about it if you're gonna say yes/no. Ty :) |
Re: Entropy Gain for per-receptor NPS
i could explain myself but then i'd have to kill you
|
Re: Entropy Gain for per-receptor NPS
man the only thing more cliche than that response would be if i had already written extensively on all of the relevant areas of discussion
then carefully organized said writing into a document that was made public, then spent thousands of hours doing practical implementation and testing of said thoughts. gosh, that would really be the b-side of a bollywood movie tier script |
Re: Entropy Gain for per-receptor NPS
rong
|
Re: Entropy Gain for per-receptor NPS
Quote:
You have yet to implement something that doesn't require so many bans on files, and I've lost count of how many times I've heard Etterna players say "wow, this is nowhere near the rating I thought this would be worth". Now, this thread is about model attributes, and if you don't feel like having a normal discussion about the various things that were mentioned so far, get lost, man. I will be fine with the link only. If you want to explain anything you feel needs closer attention, please go ahead. |
Re: Entropy Gain for per-receptor NPS
Quote:
im not here to help you; i did give you the information you needed to help yourself, and explicitly rebuked your assessment of how patterns are unimportant and how nps metrics can be used in totality, and if you stopped to think about it you would realize why ( SUPREME HINT: IT HAS TO DO WITH THE FACT THAT PATTERN CONFIGURATION HAS HIGHER POTENTIAL IMPACT ON DIFFICULTY THAN NPS )

im just here because its amusing to watch you get buttmad over my specific aversion to emotionally coddling you while giving you everything you need to figure shit out. my being an asshole has no bearing on your capacity to think about or understand things, but it's nice to see that you'll actively stymie your ability to do so just to spite me |
Re: Entropy Gain for per-receptor NPS
here's another free supreme hint:
define difficulty

e: supreme hint #3: if you can't articulate and understand a robust statistical definition of difficulty then you have no business going anywhere near machine learning or neural networks, although, not unironically, if you could, you wouldn't be doing so in the first place |
Re: Entropy Gain for per-receptor NPS
supreme hint #4: ffr's difficulty is based on aaa rating which places greater influence on rating to specific/unique patterns, difficulty spikes, and generalized factors such as length, inevitably increasing overall variance particularly with non standard files and moreover increasing subjective variance when evaluating the accuracy of an estimated difficulty
supreme hint #5: supreme hint #4 should help you with #2 and #3

supreme hint #6: it's not that you're approaching the problem incorrectly because you're thinking of it incorrectly, it's that you haven't thought about it at all. you're trying to find answers to questions you didn't ask because you assume the answers will be self evident. they're not |
Re: Entropy Gain for per-receptor NPS
questions like, given a distribution of margin of error, is it more important to have an average as close to 0 as possible?
is it more important to minimize the outliers? can you apportion relative importance? i.e. is it more important to have roughly 80% of files within 5% but with the remaining 20% having 30%+ margins of error? or would it be preferable to have 95% of files within 7.5% and the remaining 5% within 10%? 15%?

given the option, do we want an average closer to + (overrated) or - (underrated) 1%? why? how do you examine and test for this? how do you go about eliciting results specific to your goals? how do you ensure that any methods employed don't produce undesirable effects on the results? are some undesirable effects worth a closer adherence to your goal? how much do you account for human subjectivity when testing for this?

think you're going to use neural networks and match it to a score base? wrong again. you just exposed yourself to population bias which, going back to the previous point, exposes you to wild outliers (30%+) of players even if it fits well with most other players. you also have the least amount of data on the files you are most concerned with, which are the files that are the hardest and least played, because the files where there is the most player subjective agreement are the easy files that people have played to death over and over.

how do you extrapolate existing player scorebases to new files? do you apply neural networks to pattern configurations? how do you detect patterns? you already threw out the possibility of doing so, so that leaves you without that option. too bad. even if you didn't, how do you mathematically model pattern difficulty? how do you account for subjective player strengths given extremely specific patterns and extremely specific players? do you? again, the same question but applied to specific patterns: is it more important to be generally accurate and leave open high error margins on outliers, or sacrifice general accuracy in an attempt to account for the outliers as best as possible?

how does the decision you make impact the overall correctness? how do you deal with transitions? are transitions important? trick question, yes you fucking idiot.

do you model stamina drain? how do you model stamina drain? physical? mental? ffr requires additional consideration for mental stamina drain because of the aaa difficulty goal. is that objectively stupid? yes. will it change? probably not.

the answers to these questions will guide your specific implementation, none of which you have clearly bothered asking, which is the same predictable fallacy that everyone falls into. you're doing it ass backwards. stop trying to build the spaceship, figure out where you're going first.

ps. it's possible to reverse engineer my entire calc from the last 4 posts so if you really can't get anything from them that's on you
pps. do you understand better now, my virulent disdain for all of you
ppps. in case im not done holding your hand enough Quote:
you aren't going to reduce file difficulty to 2 prominent variables, and even if you could, i don't think you would be able to use that information to actually produce a single number, and assuming you did, you'd still be stuck with the inherent fallacy of using machine learning to produce values that you can't actually corroborate because of human subjectivity |
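To make the tradeoff in those error-margin questions concrete, here is a small sketch that summarizes a margin-of-error distribution both ways (average closeness vs. tail behavior); the two error lists are invented to show how one calculator can win on one criterion and lose on the other:

```python
def error_profile(errors):
    """Summarize a margin-of-error distribution: mean signed error,
    mean absolute error, and the share of files within a tolerance."""
    n = len(errors)
    return {
        "mean": sum(errors) / n,
        "mae": sum(abs(e) for e in errors) / n,
        "within_5pct": sum(abs(e) <= 0.05 for e in errors) / n,
        "within_10pct": sum(abs(e) <= 0.10 for e in errors) / n,
    }

# hypothetical calculators: A is usually very close but has wild outliers,
# B is a bit worse on typical files but never far off
errors_a = [0.01, -0.02, 0.03, -0.01, 0.35, -0.40]
errors_b = [0.06, -0.07, 0.05, -0.06, 0.08, -0.09]
print(error_profile(errors_a))
print(error_profile(errors_b))
```

A wins on "within 5%", B wins on mean absolute error and on bounding the worst case, which is exactly the kind of preference that has to be decided before any model is fit.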
Re: Entropy Gain for per-receptor NPS
Quote:
Now, about the actual topic: I will get to most of your questions soon. If you expect me to know the exact results of my future tests, you'll be disappointed to learn that that's not how things work. The second paragraph in that quote is just air, because you're basically saying: "nps is a bad metric for difficulty because patterns are a good metric". I'm not playing a game of guess-what-the-ass-is-trying-to-say; if you want to ask me any number of questions on the subject, like you did in your latest post, I will gladly do my best to answer them and correct my assumptions if necessary. However, do not expect me to also assume/guess your unmentioned mathematical/logical definitions of concepts such as pattern, transition, standard file, and difficulty. By arguing those, I expect you to have a rigorous definition for each of them. If that is the case, refer to my second reply to you: provide actual content (be it a link to something or an explanation). Otherwise, I will focus on your questions and rightly consider any criticism so far as devoid of credibility. If for you that means holding my hand, you can pat your own back for all I care. You can be helpful and nobody denies it, but nobody's begging you for anything here, so you should probably give up on the condescending attitude. |
Re: Entropy Gain for per-receptor NPS
Quote:
Sadly (not really, but w/e) you are banned, so you won't be able to reply to this soon, I suppose. I would've gladly listened to your arguments as to why I'm wrong on certain points, because there is no way I can be right on all of that right off the bat. Hopefully you learn to have a respectful conversation/debate before you're unbanned, though. |
Re: Entropy Gain for per-receptor NPS
I read neural networks and FFR
why |
Re: Entropy Gain for per-receptor NPS
Quote:
Anyway, I don't think it's worth breaking down what you currently have if you haven't built a model in the first place. I guess it's fine to test around with some data and see what happens, but it'll make more sense to decide what data to extract after you decide what you are modeling in the first place. On the neural networks topic, the lack of useful data sucks, but I think convolutional networks could work well to build difficulty curve graphs. |
Re: Entropy Gain for per-receptor NPS
So I didn't read this convo but what do you think of showing % of players who played the file that passed/AA'd/AAA'd/etc it, SDVX style
|
Re: Entropy Gain for per-receptor NPS
Quote:
The rest is all true. The goal of this thread was never to talk modeling so much but rather discuss primitives. |
Re: Entropy Gain for per-receptor NPS
Quote:
If you meant it as some kind of attribute to predict difficulty, could you please explain your reasoning? Otherwise, I'm sorry, I can't do that. |
Re: Entropy Gain for per-receptor NPS
It gives a rough estimation of difficulty through general performances on the chart
Low % = hard, high % = easy.
But you need a server to log all the user scores, users have to be online, and the chart needs a good enough number of players.
Using a neural network is like assigning one person to judge all the difficulties (since it's supposed to map human brains and whatnot), but what if you disagree with that person? |
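A sketch of that idea, with a prior added so a chart with only a handful of plays doesn't get an extreme rating; the logit mapping and the prior weight are arbitrary choices for illustration:

```python
import math

def passrate_difficulty(passes, plays, prior_rate=0.5, prior_weight=10):
    """Estimate difficulty from the fraction of players who hit the goal
    (pass/AA/AAA): low rate -> high difficulty.  A pseudo-count prior
    keeps barely-played charts from getting extreme ratings."""
    rate = (passes + prior_rate * prior_weight) / (plays + prior_weight)
    # map the rate in (0, 1) to an unbounded score: harder = more positive
    return -math.log(rate / (1.0 - rate))

print(passrate_difficulty(900, 1000))  # easy: most players reach the goal
print(passrate_difficulty(5, 1000))    # hard: almost nobody does
print(passrate_difficulty(0, 3))       # barely played: pulled toward neutral
```

This inherits the population bias mentioned earlier in the thread: the rate reflects who chose to play the chart, not an absolute difficulty.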
Re: Entropy Gain for per-receptor NPS
Quote:
As for the neural net, I have no clue why people are all on it; I don't recall mentioning it in this thread. That being said, you wonder what happens if people don't agree with a neural net's output? Well, if the vast majority agrees with the net, then those who disagree should try to see if they're biased because of their skillset and understand what led to that output. If only a minority agrees with the output, then the possibility of it being wrong is greater, and the chart would need closer inspection to see why that is the case. It's just how things go when you have no predefined output class or labeled input. |
Re: Entropy Gain for per-receptor NPS
Just for the record, I've tried to produce a difficulty algorithm primarily based on "distance to last note on each arrow/hand".
I don't know if there's something inherently wrong with this approach or I was just too inexperienced at programming to see it through to a satisfactory completion but I was unable to get to a result that was deemed usable by myself and the difficulty consultants whom I discussed the results with. On the subject of neural nets, both myself and Trumpet63 have attempted to use neural nets on FFR's song difficulties using extended level stats. Trumpet got his neural net closer than mine (his had a mean difference of 2.4 points from the actual value whereas mine was 4-5 IIRC) but his used several features (such as note color) that could be cheesed by a clever stepfile artist to over/underrepresent the difficulty of their file (ex. if white notes = high diff, throw in a lot of white grace notes that function identically to jumps in practice.) |
Re: Entropy Gain for per-receptor NPS
Quote:
What metrics did you use in relation to that distance? Was it min/max/avg/distribution/...? Because just like NPS, it sounds like a solution that needs quite a few statistical values. Yes, note color is arbitrary. |
Re: Entropy Gain for per-receptor NPS
It was not pure NPS (NPS was included in the algorithm but only a small factor). It was more like "give each note a value based on how close it is to the next one on the next arrow/hand/overall, then sum everything and take the highest consecutive X notes, add factors for stamina/consistency/NPS"
I did a bunch of playing around with the factors and scales but I would always end up with either long streamy files being rated way too high or big spiky files being rated way too high (or both.) |
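A minimal sketch of that per-note closeness idea; the weights, the gap floor, and the hand split (columns 0,1 = left hand, 2,3 = right) are guesses for illustration, not the original algorithm's constants:

```python
def note_values(notes):
    """Value each note by its closeness to the previous note on the same
    receptor and on the same hand.  `notes` is a list of (time, column)
    pairs sorted by time, with columns 0-3 as left/down/up/right."""
    last_col, last_hand, values = {}, {}, []
    for t, col in notes:
        hand = col // 2
        v = 0.0
        if col in last_col:
            v += 1.0 / max(t - last_col[col], 1e-3)    # same-receptor gap
        if hand in last_hand:
            v += 0.5 / max(t - last_hand[hand], 1e-3)  # same-hand gap
        values.append(v)
        last_col[col], last_hand[hand] = t, t
    return values

def peak_window(values, x=4):
    """Sum of the highest-valued run of x consecutive notes."""
    return max(sum(values[i:i + x]) for i in range(len(values) - x + 1))

# a one-hand trill scores higher than an alternating-hand stream at
# identical spacing, matching the per-hand intuition earlier in the thread
trill  = [(i * 0.1, i % 2) for i in range(8)]                # 0101...
stream = [(i * 0.1, [0, 2, 1, 3][i % 4]) for i in range(8)]  # 0213...
print(peak_window(note_values(trill)), peak_window(note_values(stream)))
```

The "highest consecutive X notes" step is what lets one disproportionate burst dominate, which matches the spiky-files failure mode described above.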
Re: Entropy Gain for per-receptor NPS
I'd need to experiment with it to get a definitive answer. I can see the value in having something like that, but it would be difficult to separate actual spikes/bursts from just natural variance in patterns (take a staircase for example: there are gaps of 5 notes between every left arrow, but only 1 between (some) down or up arrows, so the down/up arrows look much harder than the left/right arrows, and this could produce odd results for a difficulty change rate value). Would probably have to look at average difficulty over a short period of notes and use that to determine the difficulty change rate.
|
Re: Entropy Gain for per-receptor NPS
Quote:
up, ,up, , , ,up, ,up
__, , 0, , , ,-2, , 2

vs

ri, , , ,ri, , , ,ri, , , ,ri
_, , , , 0, , , , 0, , , , 0

(changes between 0 and 1 have been normalized to the opposite of their inverse: 0.5 => 2 => -2)

It takes a minimum of 3 notes to have a variation in distance. While it's true that the average is the same (0), you could maybe take the range between the minimum negative value (biggest deceleration) and the maximum positive value (biggest acceleration). Deceleration doesn't affect difficulty; don't forget that this is a per-receptor metric. A file starts at 0 difficulty with 0 notes. If you put a jack at speed x, and after a few notes its speed changes to x/2, the only problem is going from 0 speed to x speed, not from x to x/2. Gradual acceleration/deceleration isn't considered in this, but you can get a primitive for it using this same concept.

So, for the example of the staircase, if we discard the negative values, we get a max range of 2 on up and down, and a max range of 0 on left and right. And you don't aggregate those in any way, because the min/max on each receptor is important. Does that cover the type of example you had in mind, Renegade? |
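That normalization can be stated in a few lines; here is a sketch (one receptor's hit times in, signature out) reproducing the staircase example above, with beat units chosen arbitrarily:

```python
def gap_change_signature(times):
    """Per-receptor acceleration signature.  For each consecutive pair
    of gaps between hits: a speed-up by factor k maps to +k, a
    slow-down by factor k maps to -k, and an unchanged gap maps to 0
    (the "0.5 => 2 => -2"-style normalization described above)."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    out = [0.0]  # the first gap has no predecessor to compare against
    for g0, g1 in zip(gaps, gaps[1:]):
        r = g1 / g0
        if abs(r - 1.0) < 1e-9:
            out.append(0.0)      # steady
        elif r > 1.0:
            out.append(-r)       # deceleration: gap grew by factor r
        else:
            out.append(1.0 / r)  # acceleration: gap shrank by factor 1/r
    return out

# the up arrows of the staircase example: hits at beats 0, 2, 6, 8
print(gap_change_signature([0, 2, 6, 8]))   # [0.0, -2.0, 2.0]
# the right arrows: evenly spaced hits at beats 0, 4, 8, 12
print(gap_change_signature([0, 4, 8, 12]))  # [0.0, 0.0, 0.0]
```

Discarding the negatives and taking the per-receptor max then gives the "range of 2 on up and down, 0 on left and right" result described above.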
Re: Entropy Gain for per-receptor NPS
So, on spikes vs natural variance: what I mean by "spike", at least in the context of saying that my old algorithms would rate spiky files way too high, was files such as ABCDEath or TTE which have one disproportionately note-heavy section that overshadows everything else in the file. When I say "natural variance", I mean that some arrows in a long pattern like a stream, jumpstream, or staircase will be harder to hit than others.
What I'm trying to avoid is to see a staircase, get a max range of 2 on up or down as you described, and falsely claim that the staircase is a spike when in reality, it's just a staircase. Whatever metric that is used to determine the rate would have to be able to tell the difference. |
Re: Entropy Gain for per-receptor NPS
Quote:
You have to have some mathematical definition of your concepts if they're not used for visualisation only. For example, a spike would be a sudden high density x of notes, at least to my understanding of your description. In a more formal way, you could say it's any section with high acceleration (let's use a trivial number like 4). Also, btw, my metric isn't totally correct for another reason; I'll post a fix to it.

So you then have a trivial definition of a spike. With that, you want to avoid cases where the spike is short (i.e. in a staircase, the two ups or two downs) and constant (the staircase goes on for some time, like 2 measures). The reason it's trivial is, first of all, because there is a trivial threshold to set, and also because the length of said spike is not well bounded.

You mention TTE. Take TTE's fastest spot (a rolly burst like 123412341234) and remove everything before it. The acceleration from nothing to that is equal for each receptor, so min = max = x. Now take a staircase 123432123432 with the distance between two up arrows being equal in this and the roll (from a per-receptor perspective, that is most definitely fair). From nothing to it, 2*min = max = x. It would seem that both are identical; however, for the comparison to hold, the total NPS of the spike will be lower on the staircase than on the roll (the number of notes between the fastest consecutive notes per-receptor being 1 for the staircase and 3 for the roll). Therefore, a distinction Should be made naturally, but the spikiness (again, per-receptor!) will be the same according to the trivial definition.

EDIT: Just to be extra clear, I'll point out that what you refer to as a spike, as we all know it, is easily defined when using all notes (not per-receptor). There's a quick increase and decrease in the NPS of the section, and that's it. That metric can be useful, but it's not what I was explaining/arguing in the previous few posts. |
Re: Entropy Gain for per-receptor NPS
Yeah I think we're talking about totally different concepts here. Per-receptor spikiness isn't something I ever really considered in my algorithm, at least not beyond "this note is really close to the last note for this receptor, therefore it should have a high value".
I can't think of any files off the top of my head where per-receptor spikiness plays a major factor in the difficulty of the file, so I can't judge how well the simple "this note is close" factor covers it. I do think such a metric would be valuable to have. |
Re: Entropy Gain for per-receptor NPS
Here's something I am wondering about per-hand stuff.
Is it safe to assume that any section with a high nps (x) on {3} and a lower nps on {4} is Harder than having x nps on both {3} and {4} ? Any counterexample is welcome. More visually, I'm thinking that [34]4[34]4 is always harder than [34][34][34][34]. But only per-hand, so the same wouldn't apply with combinations of receptors like {2} and {3}, or {2} and {4}, etc. And by always I mean no matter what is before it, after it, and what's going on on the other receptors. EDIT: I will even go as far as claiming that if x is the nps on {3} and y is the nps on {4}, the peak of that per-hand difficulty is reached when x = 2y or 2x = y. When you lower small nps, you get things like [34]44[34]44[34]44, and when you raise it, you get [34][34]4[34][34]4, both of which I would argue are objectively easier than [34]4[34]4[34]4. |
Re: Entropy Gain for per-receptor NPS
Quote:
How high does your algo rate DP compared to Undici? Those are really hard files and neither has been AAA'd yet. Right now, they're only 2 points apart from each other, and I would Not consider the opposite to be an error, because it's only a single file (it's better to look at results as a whole first and then understand the difference between particular files; not having your complete results, I can only assume things). As for your AIM example, it would make no sense to put a long jack and a long jack with 1 jump in it at the same exact difficulty on a real-numbers scale. The one with the jump Has to be harder, even if it's by a very small amount. |
Re: Entropy Gain for per-receptor NPS
Does it also address the fact that difficulty differs based on what your goal is? Charts can be trivial to AA but impossible to AAA, or there's some stupid minefield that makes it really hard to pass but once you survive it's a guaranteed AA, etc
|
Re: Entropy Gain for per-receptor NPS
Quote:
EDIT: @leonid: stepmania is different, obviously. What you describe as difficulty to AA, AAA, or pass are all very distinct values that may have a similar computing process but would have their own specific primitives. You can't possibly have a single metric for overall difficulty when your definition of difficulty is an undefined combination of 3 distinct aspects; otherwise you end up with obviously biased results that are very hard to interpret. A fair comparison can be made with Etterna's calculator: if overall difficulty is some aggregate (like avg or weighted avg) of the per-pattern difficulties (jack, stream, js, etc.), then it's not a surprise that they have so many files to ban from leaderboards. |
Re: Entropy Gain for per-receptor NPS
Quote:
As for the jacks, the difficulty of per-hand would be symmetric on both sides of the 2:1 ratio, but the NPS primitive would naturally make a [34] jumpjack harder than a 3 jack with a single [34] in it. |
Re: Entropy Gain for per-receptor NPS
I don't like difficulty being "one value".
It should vary in magnitude throughout the song, and be different kinds of difficulty. Like, how do scores change if you're only slightly less good at hitting something than another player? The difficulty might not be that high, but if it means the difference between AAA'ing and good-rushing a difficult jumpstream, then the scores are highly sensitive to skill. Maybe that's a good measure? Change in score vs. change in skill in a certain direction? Dunno. Thoughts aren't fleshed out at all. Just food for thought. |
Re: Entropy Gain for per-receptor NPS
Quote:
Also, here's another idea, but I haven't found a way to make it fully non-trivial yet: I can take the NPS at every frame of a file (kinda like the NPS generator) and figure out how long the file stays around its max NPS. The only problem with that is I can't just pick a random threshold like "time during which NPS is at most 2 NPS away from max NPS" or "time during which NPS is higher than 95% of the max NPS". Thoughts? |
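One way to sidestep picking a single magic threshold is to make it a parameter and look at the whole curve. A sketch, with the window size, step, and the example files all made up:

```python
def time_near_peak(note_times, window=1.0, frac=0.9, step=0.1):
    """Fraction of sampled window positions whose windowed nps is within
    `frac` of the file's peak windowed nps.  `frac` is exactly the
    arbitrary threshold discussed above; sweep it to get a curve
    instead of committing to one value."""
    if not note_times:
        return 0.0
    start, end = min(note_times), max(note_times)
    samples = []
    t = start
    while t <= end:
        samples.append(sum(t <= n < t + window for n in note_times) / window)
        t += step
    peak = max(samples)
    return sum(s >= frac * peak for s in samples) / len(samples)

# a file with one dense burst vs. a file with constant density
spiky  = [i * 0.25 for i in range(9)] + [3.0, 4.0, 5.0, 6.0]
steady = [i * 0.5 for i in range(13)]
print(time_near_peak(spiky), time_near_peak(steady))
```

The steady file spends most of its length near its own peak while the spiky one doesn't, so the metric separates the two shapes without hardcoding an absolute NPS cutoff.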
Re: Entropy Gain for per-receptor NPS
so did you get far enough to realize you have no idea what you're doing yet or did you just spout a bunch of bullshit and then do nothing
i swear you people that think everything can be solved with machine learning are worse than the people who think blockchain makes everything better |
Re: Entropy Gain for per-receptor NPS
I didn't want to be pushy towards prawn because he seemed very busy already. I talked with him, and for tests on the whole song db I need to ask him every time. I have converted a few SM packs for basic tests, but since those aren't rated like FFR, I couldn't just accept/discard results (there also were some sketchy numbers with stuff like Beyond Bludgeonned, Big Black and Little Piece of Heaven, stuff that extrapolation should still rate correctly). I still have formulas to try, but right now I'm on vacation and not focusing on FFR at all. I will most likely get back to it when my semester starts (early September).
You seem to be in quite the hurry to see results for an arrogant ass. I think even if I manage to get good results I'll hide them from you. |
Re: Entropy Gain for per-receptor NPS
so basically you're in total denial still
|
Re: Entropy Gain for per-receptor NPS
Let's try page 4 again
|
Re: Entropy Gain for per-receptor NPS
Quote:
I take no credit or whatever for anything I've said or done so far and gain pretty much huh.. absolutely nothing from it, so I don't get what made you think I'm on my high horse. Some people in the thread (Renegade for example) made me re-think some aspects of the problem, and heck parts of your bigger post did too, so I don't think I'm in any denial here. Why would you even be slightly affected by the fact that I deliver or not (which hey it's totally possible that I don't, who knows), you'll be disappointed that I didn't succeed ? Come on we both know you'd just stroke your fat ego and say you told everyone you were right. |
Re: Entropy Gain for per-receptor NPS
i don't need you to fail to know i'm right
it'll just be its own prize considering you both solicited my advice while bashing my work while ignoring the fact that i have ample public contributions to the subject which would take less than 5 minutes to look up which you refused to do because of what, pride? butthurtedness? not to mention the string of hilarious responses to all of my points which don't logically track in any dimension i mean your premise is wrong and your response to every point was wrong and your entire frame of mind in approaching the subject is wrong so yes there is no helping you, it is entirely about watching you fail |
Re: Entropy Gain for per-receptor NPS
ahuh
|
Re: Entropy Gain for per-receptor NPS
merry christmas
|
Re: Entropy Gain for per-receptor NPS
You remind me of my sister
|
Re: Entropy Gain for per-receptor NPS
Merry christmas
|
Re: Entropy Gain for per-receptor NPS
Merry Christmas :)
|