Old 04-12-2013, 03:02 PM   #19
reuben_tate
Default Re: Teach me CS stuff for interviews

responses in bold
Quote:
Originally Posted by Reincarnate View Post
Code:
static long Fibonacci(int n) {
    if (n==1||n==2) return 1;
    return Fibonacci(n-2)+Fibonacci(n-1);
}
Without memoization, I'd say O(2^n). I know there's something called the Master Theorem that can be used for linear recurrences but I haven't learned it yet.

With memoization, O(n)

Yeah, O(2^n) is good. You can kind of figure it out by noticing that each call of the function recursively calls the same function twice.

So say fib(4) took n steps to complete. If you calculate fib(5), you'd have to call fib(4) (which is n steps) and fib(3) (which is about n/2 steps, since about half the work in calling fib(4) is in calling fib(3)). So in total, it'll take about 3n/2 steps to complete fib(5), compared to the n steps it took for fib(4). We see that the algorithm took about 1.5 times as many steps when we increased the input size by 1. That's analogous to O((1.5)^n) growth, and 2^n is a safe upper bound on that. (If you go in depth with the fib(n) algorithm, you'll find that a tight bound is O(phi^n) where phi is the golden ratio, so about O(1.618^n), but that takes a lot more analysis to derive.)
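And just to illustrate what memoization buys you here, a quick sketch (the class name and the HashMap cache are mine, not from the quiz):

```java
import java.util.HashMap;
import java.util.Map;

public class Fib {
    // Cache of already-computed values. Each n gets computed once and then
    // looked up, so the recursion does O(n) total work instead of O(2^n).
    private static final Map<Integer, Long> memo = new HashMap<>();

    static long fibonacci(int n) {
        if (n == 1 || n == 2) return 1;
        Long cached = memo.get(n);
        if (cached != null) return cached;
        long result = fibonacci(n - 2) + fibonacci(n - 1);
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        // Finishes instantly; the naive version would take billions of calls.
        System.out.println(fibonacci(50));
    }
}
```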


Code:
for (int i=0,j=0; i<n; i++)
    for (double d=1; d<n; d*=2)
        j++;
O(n log n)

yup
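If you want to convince yourself numerically, here's the same loop with j exposed as a counter (class and method names are mine): the outer loop runs n times and the doubling inner loop runs about log2(n) times, so j lands near n*log2(n).

```java
public class LoopCount {
    // Returns the number of times the inner-loop body of the quoted
    // snippet executes: n outer iterations, each with a d=1,2,4,...<n
    // doubling inner loop, i.e. roughly log2(n) inner iterations.
    static long count(int n) {
        long j = 0;
        for (int i = 0; i < n; i++)
            for (double d = 1; d < n; d *= 2)
                j++;
        return j;
    }

    public static void main(String[] args) {
        // For n=1024 the inner loop runs 10 times (d = 1..512),
        // so j = 1024 * 10 = 10240.
        System.out.println(count(1024));
    }
}
```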


Code:
int[] a=new int[n];
for (int i=4;i<n;i++) {
    for (int j=i-3,sum=a[i-4];j<=i;j++)
        sum+=a[j];
}
what the fuck this code lol
uhh, O((n-4)*4) = O(4n-16) = O(4n) = O(n)?

it's ok, that's what I thought when I saw that code too lmfao. Analysis is good though :)
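To make the "constant-width inner loop" point concrete, here's the snippet instrumented with a counter (names mine): the inner loop always runs exactly 4 times (j from i-3 to i), so the total work is 4*(n-4), which is linear.

```java
public class FixedWindow {
    // Counts inner-loop iterations of the quoted snippet. The window
    // j = i-3 .. i has fixed width 4 regardless of n, so the total is
    // exactly 4*(n-4): O(n), matching the analysis above.
    static long innerIterations(int n) {
        int[] a = new int[n];
        long count = 0;
        for (int i = 4; i < n; i++)
            for (int j = i - 3, sum = a[i - 4]; j <= i; j++) {
                sum += a[j];
                count++;
            }
        return count;
    }
}
```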

Code:
private static long gcd(long n,long r) {
    if (r==0) return n;
    return gcd(r,n%r);
}
mofuggin euclidean algo ftw
and yet I have no idea its runtime. hmmm.
I don't know what the runtime contribution is like for the modulus operation, which in itself can be pretty expensive IME. I'll ignore it.
Overall though I'll just say log n, feels right but I don't know how to really prove that rigorously

This one is a bit tricky. For the sake of the quiz, I believe it was assumed that % runs in constant time. Anyway, you can kind of derive this from the division algorithm, which states that for any two numbers a and b (with b > 0), there exist unique q and r such that a = bq+r where 0<=r<b. Say a > b but b is at least half of a. Then b can only go into a once, so a = b*1+r, which gives r = a-b < a-(1/2)a = (1/2)a; so r is less than (1/2)a whenever b is greater than half of a. Now consider the case where b is at most half of a: r can only take values less than b, so r is still less than (1/2)a. You can then conclude that the remainder is always less than (1/2)a. Thus you can kind of see how you're at least "halving" your work with each step of the recursion.

That was a sloppy, half-assed proof I just came up with off the top of my head (I'm hoping I didn't slip up with some assumption I shouldn't have made). There's a more rigorous proof in one of my textbooks, but it's a bit confusing and convoluted.
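Here's a quick brute-force check of the halving claim, if you don't trust my sloppy proof (this sketch and its names are mine, not from the textbook):

```java
public class HalvingCheck {
    // Exhaustively verifies the claim above: for every 0 < b <= a up to
    // the limit, the remainder a % b is strictly less than a/2. This is
    // why gcd(a, b) -> gcd(b, a % b) shrinks its arguments geometrically,
    // giving logarithmic depth.
    static boolean holds(int limit) {
        for (int a = 1; a <= limit; a++)
            for (int b = 1; b <= a; b++)
                if (2 * (a % b) >= a) return false; // counterexample found
        return true;
    }

    public static void main(String[] args) {
        System.out.println(holds(2000)); // no counterexample below 2000
    }
}
```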



Edit: mofuggin Euclidean algorithm ftw \o/

EDIT2:

ok, I found what I was looking for, for that gcd one.

So, by the Euclidean algorithm, you follow this procedure to find gcd(a,b) (assume a>b>0):

a = b*q_1+r_1, 0<r_1<|b|
b = r_1*q_2+r_2, 0<r_2<r_1
r_1 = r_2*q_3+r_3, 0<r_3<r_2
...
r_(n-2) = r_(n-1)*q_n+r_n, 0< r_n < r_(n-1)
r_(n-1) = r_n*q_(n+1), r_n = gcd(a,b)

It's obvious that the above process takes about n steps if you take each line as a step (really n plus or minus 1 or 2, but for the sake of big-O, we can just say n for simplification).

Note that in the above process, q_i >= 1 for all i. Also, q_(n+1) >= 2: if q_(n+1) = 1, that would imply r_(n-1) = r_n, which contradicts r_n < r_(n-1). And since all q_i >= 1, q_(n+1) can't be less than 1 either. So it must be at least 2. From that we have the following:

r_n >= 1 = f_2 (the 2nd Fibonacci number)
r_(n-1) >= 2*r_n >= 2 = f_3
r_(n-2) >= r_(n-1)+r_n >= (2)+(1) >= 3 = f_4
...
r_1 >= r_2+r_3 >= f_n+f_(n-1) = f_(n+1)
b >= r_1+r_2 >= f_(n+1)+f_n = f_(n+2)

So we have that b >= f_(n+2). If you know some of your properties of Fibonacci numbers, you'll find that:
f_n > [(1+sqrt(5))/2]^(n-2) (this would be a whole other proof to show this which I don't feel like going through atm >.<)

Therefore b >= f_(n+2) > [(1+sqrt(5))/2]^n.

Take the log base (1+sqrt(5))/2 of both sides and change base: n < log(b)/log[(1+sqrt(5))/2] ≈ 4.79*log(b), so n < 5*log(b) (log is base 10).

Since n is the number of steps, we have that the big O is O(5*log(b)), which is just O(log b).
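You can check the bound numerically, too. Here's an iterative step counter for the gcd from the quote (names mine), fed consecutive Fibonacci numbers, which the derivation above says should be the worst case:

```java
public class GcdSteps {
    // Counts the recursion steps of gcd(n, r) -> gcd(r, n % r),
    // written iteratively so we can return the count.
    static int steps(long n, long r) {
        int count = 0;
        while (r != 0) {
            long tmp = n % r;
            n = r;
            r = tmp;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        long a = 1134903170L; // fib(45)
        long b = 701408733L;  // fib(44) -- consecutive Fibonaccis: worst case
        int n = steps(a, b);
        // Compare against the bound n < 5*log10(b) derived above.
        System.out.println(n + " steps, bound " + 5 * Math.log10(b));
    }
}
```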

EDIT3: The above proof is known as Lamé's Theorem.
Last edited by reuben_tate; 04-12-2013 at 04:12 PM..