08-2-2005, 02:55 AM | #1 |
auauauau
|
Someone explain this to me.
|
08-2-2005, 10:00 AM | #2 |
FFR Player
|
HYPOTHESIS: "Audio can be represented well as a combination of rising and falling pitches modelled with cubic splines."
-Alright, we'll start here, I'm going to break this one down. Cubic Spline- Well we know that a Spline is " Any of a series of projections on a shaft that fit into slots on a corresponding shaft, enabling both to rotate together." Basically the hypothesis is saying that the Audio will be placed into a cubic spline, that will enable us to hear the drastic changes in the pitches/frequencies. CONCEPT: "The waveform will first be broken up into its various frequencies. Each frequency will be scanned individually for local minima and maxima. Nodes will be placed at each minima and maxima, and the intermediate data will be fit by a bezier spline to minimize the mean-squared error. " -They are first going to seperate the frequencies, basically pick them apart so that they can reassemble them to their needs, and then they're going to scan them, test them in a machine that is going to read the lowest end and highest end of each frequency. Then they will place them back, but this time putting them in order. They will then subtract the mean of the frequencies to find the error. ISSUES: "Discrete Fourier transforms produce aliasing ripples that wouldn't be compressed well using splines (Even when the transform uses good overlapping windows.) For this to be effective, we need a window size independent transform that still extracts local frequency information. " -In other words, the frequency has aliased ripples which they are having difficulty compressing using a spline. *See definition above*. This is because the transform that they are equpped with, and using for this, is too small. It cannot recieve the waves that they need it to, therefore not make the frequency range as they would like it. I know that I'm right in what I've told you, however I can't guarantee that is all of the information to give. If someone has more, I hope they'll add it. Why would you need to know about that anyways?
__________________
.so what. -Skooter- .drama makes life boring. |
08-2-2005, 11:09 AM | #3 |
auauauau
|
I was just curious.
|
08-2-2005, 12:23 PM | #4 |
FFR Player
|
Ever listened to a DJ set and suddenly the bass sounds crackly and whatnot, even though you know your speakers aren't up that loud? Basically it says that if you push 6" of sound through a 4" tube, you're getting a truncated wave, therefore losing some sound. To see an example of that, re-record a wav file with the recording bar in the red, make it sound ugly, dirty, poor... then open something like SoundForge or a wave analyser and you'll see the sine, cosine, whatever kind of wave it is, get leveled at its peaks. This is where noise reduction and sound reconstruction come into play. It's pretty complicated to do correctly, but it can be done.
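A quick sketch of that "recording bar in the red" effect, if it helps (the 0.6 limit and the test tone are just made-up numbers for illustration): any sample louder than the limit the hardware can represent gets flattened at the limit, which is exactly the levelled-off peaks you'd see in SoundForge.

```python
import math

def clip(sample, limit=0.6):
    """Hard-clip a sample into [-limit, +limit], like an overdriven input."""
    return max(-limit, min(limit, sample))

# one cycle of a sine wave that's louder than the input can handle
wave = [math.sin(2 * math.pi * i / 64) for i in range(64)]
clipped = [clip(s) for s in wave]
```

Everything between the clipped plateaus survives untouched; only the peaks are gone, and that's the information the reconstruction step has to guess back.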
But this is just a large-scale example. To make the article more understandable, think big first, then work down to small, and you can begin to see how it works. |
08-2-2005, 02:15 PM | #5 |
FFR Player
|
Wow, now I feel stupid.
__________________
|
08-3-2005, 12:07 AM | #6 |
FFR Player
Join Date: Jun 2003
Posts: 61
|
His hypothesis says that you should be able to compress sound by describing it as smooth curves instead of storing every raw sample. By recording only where each frequency rises and falls over time, you can compress the audio by a lot more. But the problem under the "issues" heading is that the analysis that splits the sound into those frequencies adds ripples of its own, and the spline fit would have to change accordingly.
__________________
Zomgwtfbbqpwn. |