Teach me CS stuff for interviews
So I may be interviewing at a place to make a jump to the realm of CS/analytics type stuff, but I also suck at CS interviews because I haven't ever bothered to use stuff like suffix trees / implementing my own linked lists / hashtables / basic sorts / etc. I pretty much just do what I need the program to do and look stuff up if I don't know it.
Sooo, any advice would be welcome. What sort of shit should I know how to do cold / can you explain the basics in a simple way / offer good resources / etc? |
Re: Teach me CS stuff for interviews
disclaimer: I'm not actually a CS major
Quote:
Quote:
- reversing a linked list
- choosing and implementing a sorting algorithm (apparently Quicksort reigns supreme)
The types of questions would largely depend on where you're applying and what they're looking for, I would think. Quote:
http://teachingtree.co/ |
Re: Teach me CS stuff for interviews
I'm not even sure what I should be studying -- I just figure stuff like home-rolled linked lists, hash tables, suffix trees, AVL trees, binary search trees, red/black trees, insertion/selection/merge/bubble/shell/heap/quick/radix sort and their runtimes blahblah, graph theory, stacks and queues, priority queues, Dijkstra/A*, breadth- and depth-first search, blahblahblah. I have no clue honestly. I'm just listing off a bunch of stuff I don't really know how to do from scratch without looking it up to see the general idea.
Basically I suck at the basics but I am good at solving problems / getting stuff working well in practice when I need to.
edit: Looks like an interesting site -- thanks |
Re: Teach me CS stuff for interviews
yup. Just gotta get through the interview, and then looking stuff up as needed should be fine.
The fact that you can rattle off all of those topics should be encouraging. Are you looking for a refresher on the general ideas, or would translating the ideas into code be the bigger concern?
edit: leonid posted. /thread. mega helpful |
Re: Teach me CS stuff for interviews
Read Introduction to Algorithms
You should know what data structures to use for certain problems right away, and their efficiencies in Big-O notation.
Master basic sorting algorithms; know the efficiencies for best, average, and worst cases in Big-O.
You should be able to write simple code on a whiteboard.
Know how to code in C (or C++), Java, and Python.
Learn some OOP; know how to sketch starter classes to solve complex problems. Learn about inheritance, encapsulation, etc.
Read aloud anything that goes through your mind while solving a problem. If you are on the right track, the interviewer will mark that positively. If you aren't, the interviewer might add subtle hints to help you out.
If in doubt, shout "HASH TABLES" and you will probably be right.
If you have mentioned your previous work on your resume, the interviewer might ask you about it. Be prepared to briefly talk about it. If it's too old for you to recall, just get rid of it (interviewers hate long-ass resumes, i.e. more than two pages). |
Re: Teach me CS stuff for interviews
Why would you want to know about counterstrike if you're being interviewed
|
Re: Teach me CS stuff for interviews
Quote:
Resume: less than one page.
I have Intro to Algorithms (third edition, nickname is CLRS iirc?) but it's so effing verbose and 10000x longer than it needs to be. Having a hard time figuring out how to study from it. Feels like it could be condensed to maybe 100 pages max.
Ask some random questions (tomorrow or sth) maybe, and I'll try to answer without using external sources |
Re: Teach me CS stuff for interviews
anyone?
|
Re: Teach me CS stuff for interviews
Quote:
Sorry I'm not being useful, I'm not a CS major. |
Re: Teach me CS stuff for interviews
A singly linked list has an integer as the data in each node. Write a function to delete a node from the list, given an integer as the argument. (Assume all the nodes have unique integers.)
question blatantly stolen from here |
Re: Teach me CS stuff for interviews
Yeahhhh, last time I wrote linked lists was in like, high school. Exactly the sort of thing I'd just look up / copy / paste / edit to my needs if I needed to use such a thing.
But I'd say something like: let LinkedList be the name of the class which implements the linked list.
edit: (removed earlier code for being stupid)
edit2: You know what, I am going to start over. god damn it. Code:
void LinkedList::remove(int value) { // 'delete' is a reserved word in C++, so the method can't use that name |
Re: Teach me CS stuff for interviews
Here's what I did for my 'linked list from scratch' when I had to do it last year. Pretty sure it's at least 90% correct, I haven't looked it over recently or anything, this is just a copypaste lol
http://puu.sh/2yI5D |
Re: Teach me CS stuff for interviews
One way to impress: if they ask what the best sorting algorithm is, tell them that the best comparison-based sorting algorithms run in O(n*logn), but there exist non-comparison-based sorting algorithms that run in linear time (the most popular being radix sort). There's also counting sort, which runs in O(n+k) for keys in a range of size k, but the constants involved are big enough that it only beats the nlogn algorithms when the key range is small relative to a huge data set. These non-comparison-based algorithms were only briefly acknowledged in my cs class (our professor just let us know of their existence), so someone can correct me if my details are wrong.
|
Re: Teach me CS stuff for interviews
Quote:
Quote:
Interestingly enough, Quicksort runs in O(n^2) in the worst case. Big-O doesn't always tell the whole story. |
Re: Teach me CS stuff for interviews
My understanding is that the right sort depends on 1. the framework, 2. the size of the data, 3. other assumptions about that data
For example, if you know your list will be close to sorted, insertion sort or bubble sort may not be a bad idea, because for a sorted list they run in O(n) time. It's just that for randomized lists they tend to run in O(n^2) aka O(shit) time for average and worst cases. If you know your dataset is massive, then clearly you wouldn't want something that runs in O(n^2) time.
Also depends on memory -- some require n extra memory, others only constant memory, so if you're under constraints there, that matters too. Or maybe if writes are more expensive speed-wise in your framework, you wouldn't use insertion sort, which is moving stuff around all the time. Or if comparisons are more expensive, you wouldn't bother with something like selection sort. Quicksort and radix sort I don't know too much about yet.
EDIT: danceguy: I do know (IIRC) that quicksort's n^2 runtime is the worst case. On average and in the best case I believe it's n log n time. The n^2 runtime is rare since it'd require that your elements are pretty much in the exact order needed to completely mess with the recursive pivoting at each substep -- although I guess this depends on how you choose your pivot? |
Re: Teach me CS stuff for interviews
Quote:
|
Re: Teach me CS stuff for interviews
Also, simply memorizing the time efficiencies of common algorithms may not be good enough; you should be able to look at code and do some analysis to determine the Big-O yourself. Your interviewer may hand you some code and ask what the worst-case time efficiency is.
These are from a simple review quiz I had on my first day of class. If you can do these, you should be fine.
Find the Big-O of the following functions / code fragments:
Code:
static long Fibonacci(int n) {
Code:
for (int i=0,j=0; i<n; i++)
Code:
int[] a=new int[n];
Code:
private static long gcd(long n,long r) { |
Re: Teach me CS stuff for interviews
Code:
static long Fibonacci(int n) {
With memoization, O(n)
Code:
for (int i=0,j=0; i<n; i++)
Code:
int[] a=new int[n];
uhh, O((n-4)*4) = O(4n-16) = O(4n) = O(n)?
Code:
private static long gcd(long n,long r) {
and yet I have no idea its runtime. hmmm. I don't know what the runtime contribution is like for the modulus operation, which in itself can be pretty expensive IME. I'll ignore it. Overall, though, I'll just say log n; it feels right, but I don't know how to prove that rigorously. |
Re: Teach me CS stuff for interviews
responses in bold
Quote:
EDIT2: ok, I found what I was looking for, for that gcd one. So, by the Euclidean algorithm, you follow this procedure to find the gcd of (a,b) (assume a > b > 0):
a = b*q_1 + r_1, 0 < r_1 < |b|
b = r_1*q_2 + r_2, 0 < r_2 < r_1
r_1 = r_2*q_3 + r_3, 0 < r_3 < r_2
...
r_(n-2) = r_(n-1)*q_n + r_n, 0 < r_n < r_(n-1)
r_(n-1) = r_n*q_(n+1), r_n = gcd(a,b)
It's obvious that the above process takes about n steps if you take each line as a step (or n +/- 1 or 2 or something, but for the sake of big-O we can just say n for simplification). Note that for the above process, q_i >= 1 for all i. Also, q_(n+1) >= 2, because r_n < r_(n-1): if q_(n+1) = 1, that would imply r_n = r_(n-1), which we can't let happen, and we already established that all q_i >= 1, so q_(n+1) must be at least 2. From that we have the following:
r_(n-1) >= 2*r_n >= 2 = f_3 (the 3rd Fibonacci number)
r_(n-2) >= r_(n-1) + r_n >= 2 + 1 = 3 = f_4
...
r_1 >= r_2 + r_3 >= f_n + f_(n-1) = f_(n+1)
b >= r_1 + r_2 >= f_(n+1) + f_n = f_(n+2)
So we have that b >= f_(n+2). If you know some of your properties of Fibonacci numbers, you'll find that:
f_n > [(1+sqrt(5))/2]^(n-2)
(this would be a whole other proof to show, which I don't feel like going through atm >.<). Plugging in n+2 for n, that gives b >= f_(n+2) > [(1+sqrt(5))/2]^n. You do some fancy manipulation and change-of-base crap and you end up with n < 5*log(b) (log is base 10). Since n is the number of steps, the big O is O(5*log(b)) = O(log b).
EDIT3: The above proof is known as Lamé's Theorem.
Re: Teach me CS stuff for interviews
I probably would have failed that gcd question had I been asked in a live interview. I would have said log n just because it "felt that way" to me based on my experience, and how modular arithmetic generally cuts things down hardcore, but I think I would have screwed up trying to prove it.
I think I could only do it by talking about it in steps of 2, to go through a full cycle. So if I am talking about gcd(a,b), after the first step I call gcd(b, a%b), and then once again, so I am really talking about the transformation from (a, b) to (b, a%b) to (a%b, b % (a%b)).
So in other words, analyzing the effect of (a, b) -> (a%b, b % (a%b)).
We know the modulus operation n % m = r is the same as n = m*k + r, so:
a % b = R is the same as a = b*K + R, i.e. R = a - b*K
then b % (a%b) = r is b = (a%b)*k + r = (a - b*K)*k + r
r = b - (a - b*K)*k = b - a*k + b*K*k = b*(1 + K*k) - a*k
Now let's say this isn't a trivial case, so K and k are at least 1 each:
r = b*(1+1) - a = 2b - a
a + r = 2b
b = (a+r)/2
and so around this point I would start braining myself with a car battery trying to make the argument that after two iterations of the GCD algorithm, b is on the order of half of what a was or something, and therefore overall it's log a time or something. I have no idea if that math even checks out. |
Powered by vBulletin® Version 3.8.1
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright FlashFlashRevolution