
Tuesday, August 30, 2011

Problem:
Given an array of integers, how do we pick two numbers such that their sum is closest to zero?
Solution: 

  • Sort the array by comparing absolute values.
  • Traverse the sorted array once; the pair whose sum is closest to zero is always some pair of adjacent elements in this order, so keep the adjacent pair with the minimum absolute sum.

Complexity: O(n log n)
Code:
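A possible C sketch of this approach; the helper name closest_to_zero and the sample input are illustrative only.

#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* Compare by absolute value, for qsort. */
static int abs_cmp(const void *p, const void *q)
{
    int x = abs(*(const int *)p), y = abs(*(const int *)q);
    return (x > y) - (x < y);
}

/* Fills *a and *b with the pair whose sum is closest to zero.
 * After sorting by |value|, the best pair is always adjacent. */
void closest_to_zero(int arr[], int n, int *a, int *b)
{
    qsort(arr, n, sizeof arr[0], abs_cmp);
    int best = INT_MAX;
    for (int i = 0; i + 1 < n; i++) {
        int s = abs(arr[i] + arr[i + 1]);
        if (s < best) {
            best = s;
            *a = arr[i];
            *b = arr[i + 1];
        }
    }
}

int main(void)
{
    int arr[] = {60, -10, 70, -80, 85};
    int a, b;
    closest_to_zero(arr, 5, &a, &b);
    printf("%d %d\n", a, b);   /* prints -80 85 */
    return 0;
}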



Sunday, August 28, 2011

a1a2a3....anb1b2b3....bn to a1b1a2b2a3b3....anbn

Problem:
Convert a1a2a3....anb1b2b3.....bn to
a1b1a2b2a3b3.....anbn
in O(n) time and O(1) space
Solution:

Follow the displacement cycles of the index mapping: in 1-based indexing over 2n elements, the element at index i moves to index 2i-1 if i <= n, and to index 2(i-n) if i > n. The trace below shows this for a b c d e 1 2 3 4 5 (n = 5); the bracketed element on the left is the one currently held, and the bracket on the right shows the index computation.

[b]       a b c d e 1 2 3 4 5      [start index = 2; 2<=5, index=2*2-1=3]
[c]       a b b d e 1 2 3 4 5      [3<=5, index=3*2-1=5]
[e]       a b b d c 1 2 3 4 5      [5<=5, index=5*2-1=9]
[4]       a b b d c 1 2 3 e 5      [9>5, index=(9-5)*2=8]
[3]       a b b d c 1 2 4 e 5      [8>5, index=(8-5)*2=6]
[1]       a b b d c 3 2 4 e 5      [6>5, index=(6-5)*2=2]
          a 1 b d c 3 2 4 e 5      [index = start index = 2, cycle closed; next start index = 4]
[d]       a 1 b d c 3 2 4 e 5      [4<=5, index=4*2-1=7]
[2]       a 1 b d c 3 d 4 e 5      [7>5, index=(7-5)*2=4]
          a 1 b 2 c 3 d 4 e 5      [index = start index = 4, cycle closed; done]

Complexity: O(n) time O(1) space
Code:
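A possible C sketch of the cycle-following move traced above. For clarity it marks visited positions in an auxiliary array, so it uses O(n) extra space rather than the O(1) claimed; a strictly O(1)-space version additionally needs to pick the cycle start positions analytically instead of remembering which positions were visited. The array contents and helper names are illustrative only.

#include <stdio.h>
#include <stdlib.h>

/* Index mapping from the trace (1-based): position i goes to 2*i-1 if i <= n,
 * and to 2*(i-n) if i > n. The visited[] flags are an assumption made for
 * clarity; they cost O(n) extra space, unlike the O(1) goal stated above. */
void interleave(char a[], int n)                 /* a holds 2*n elements */
{
    int total = 2 * n;
    char *visited = calloc(total + 1, 1);        /* 1-based flags */

    for (int start = 2; start <= total; start++) {
        if (visited[start])
            continue;
        int i = start;
        char held = a[i - 1];
        do {                                      /* follow one cycle */
            int next = (i <= n) ? 2 * i - 1 : 2 * (i - n);
            char tmp = a[next - 1];
            a[next - 1] = held;
            held = tmp;
            visited[i] = 1;
            i = next;
        } while (i != start);
    }
    free(visited);
}

int main(void)
{
    char a[] = {'a','b','c','d','e','1','2','3','4','5'};
    interleave(a, 5);
    for (int i = 0; i < 10; i++)
        putchar(a[i]);                            /* prints a1b2c3d4e5 */
    putchar('\n');
    return 0;
}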



Thursday, August 18, 2011

|a-b| + |b-c| + |c-a|

Problem:

Given n arrays, find n numbers, one from each array, such that the sum of their pairwise absolute differences is minimum. For example, if there are three arrays
A = {4, 10, 15, 20}
B = {1, 13, 29}
C = {5, 14, 28}
find three numbers a, b, c, where a ∈ A, b ∈ B, c ∈ C, such that |a-b| + |b-c| + |c-a| is minimum. Here the answer is a = 15, b = 13, and c = 14.


Solution:
Let min_dif = INT_MAX.
1. Sort the n arrays A, B, C, ...
2. Keep one index per array, starting at the first element, and find the minimum and maximum among the current elements A[i], B[j], C[k], ...
3. Take the difference max - min. (For three numbers, |a-b| + |b-c| + |c-a| = 2*(max - min), so minimizing max - min is enough.)
4. If the difference is less than min_dif, update min_dif and save the current n values.
5. Advance the index of the array that holds the minimum element.
6. Repeat these steps until the end of at least one array is reached.


Complexity: O(S*n) for the scan after sorting, where S is the total number of elements and n is the number of arrays; sorting adds O(S log S).
Code:
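A possible C sketch for the three-array example from the problem. It assumes the arrays are already sorted (step 1) and relies on the identity |a-b| + |b-c| + |c-a| = 2*(max - min) for three numbers.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int A[] = {4, 10, 15, 20}, B[] = {1, 13, 29}, C[] = {5, 14, 28};
    int na = 4, nb = 3, nc = 3;
    int i = 0, j = 0, k = 0;
    int min_dif = INT_MAX, best_a = 0, best_b = 0, best_c = 0;

    while (i < na && j < nb && k < nc) {
        int lo = A[i] < B[j] ? (A[i] < C[k] ? A[i] : C[k]) : (B[j] < C[k] ? B[j] : C[k]);
        int hi = A[i] > B[j] ? (A[i] > C[k] ? A[i] : C[k]) : (B[j] > C[k] ? B[j] : C[k]);
        if (hi - lo < min_dif) {               /* |a-b|+|b-c|+|c-a| = 2*(hi-lo) */
            min_dif = hi - lo;
            best_a = A[i]; best_b = B[j]; best_c = C[k];
        }
        /* advance the array that currently holds the minimum element */
        if (A[i] == lo)      i++;
        else if (B[j] == lo) j++;
        else                 k++;
    }
    printf("a=%d b=%d c=%d\n", best_a, best_b, best_c);   /* prints a=15 b=13 c=14 */
    return 0;
}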


Wednesday, August 10, 2011

Diameter of a binary tree

Problem:
Find the diameter of a binary tree.
Solution:

  1. Do a post-order traversal.
  2. At each node, if Left Max path length + Right Max path length > Max, then
  3. Max = Left Max path length + Right Max path length
  4. Return 1 + MAX(Left Max path length, Right Max path length) as the height of the node; Max is the diameter.

Complexity: O(n), where n is the number of nodes
Code:
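A possible C sketch of the post-order computation; the node layout, helper names, and sample tree are assumptions, and the diameter here is counted in edges.

#include <stdio.h>
#include <stdlib.h>

struct node { int data; struct node *left, *right; };

static struct node *new_node(int d)
{
    struct node *n = malloc(sizeof *n);
    n->data = d; n->left = n->right = NULL;
    return n;
}

/* Post-order: returns the height (in nodes) of the subtree rooted at t
 * and updates *diameter with the largest left-height + right-height seen. */
static int height(struct node *t, int *diameter)
{
    if (t == NULL)
        return 0;
    int lh = height(t->left, diameter);
    int rh = height(t->right, diameter);
    if (lh + rh > *diameter)
        *diameter = lh + rh;
    return 1 + (lh > rh ? lh : rh);
}

int tree_diameter(struct node *root)            /* diameter measured in edges */
{
    int diameter = 0;
    height(root, &diameter);
    return diameter;
}

int main(void)
{
    /*        1
     *       / \
     *      2   3
     *     / \
     *    4   5        longest path 4-2-1-3 has 3 edges */
    struct node *root = new_node(1);
    root->left = new_node(2);
    root->right = new_node(3);
    root->left->left = new_node(4);
    root->left->right = new_node(5);
    printf("%d\n", tree_diameter(root));        /* prints 3 */
    return 0;
}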

Tuesday, August 9, 2011

CLRS (Page 85): Problems 4-2 : Finding the missing integer

Problem:

An array A[1...n] contains all the integers from 0 to n except one. It would be easy to determine the missing
integer in O(n) time by using an auxiliary array B[0...n] to record which numbers appear in A. In this
problem, however, we cannot access an entire integer in A with a single operation. The elements of A are
represented in binary, and the only operation we can use to access them is "fetch the j-th bit of A[i]," which
takes constant time.
Show that if we use only this operation, we can still determine the missing integer in O(n) time.

Solution:

Our convention is to call the least significant bit (LSB) the 0-th bit. The number of bits required to represent n is ⌈lg(n+1)⌉, so the most significant bit (MSB) of n is its (⌈lg(n+1)⌉ - 1)-th bit.

Among the integers from 0 to n, exactly 2^(⌈lg(n+1)⌉ - 1) of them are < 2^(⌈lg(n+1)⌉ - 1), and looking at the (⌈lg(n+1)⌉ - 1)-th bit of any non-negative number ≤ n tells us whether that number is < 2^(⌈lg(n+1)⌉ - 1). So we look at the (⌈lg(n+1)⌉ - 1)-th bit of every number in A[1...n] and populate two lists of references (pointers, indices, handles, etc.) during these first n lookups: references to the numbers whose (⌈lg(n+1)⌉ - 1)-th bit is 0 go into one list, the rest into the other. The size of the first list is the count of numbers < 2^(⌈lg(n+1)⌉ - 1). The missing integer is < 2^(⌈lg(n+1)⌉ - 1) iff this count is 2^(⌈lg(n+1)⌉ - 1) - 1. In that case we look for the missing integer in the first list; otherwise we look in the second list. Either way, we have determined the (⌈lg(n+1)⌉ - 1)-th bit of the missing integer in n bit lookups.

We then recurse on the appropriate list. The fact that we now search a list of references to numbers instead of a list of numbers is handled as follows. The initial problem comes with the list of all the references, and the shift of the search range in the subproblem is handled by treating every (⌈lg(n+1)⌉ - 1)-th bit as 0. Since we never read those bits again, we do not have to perform this modification explicitly.



find_missing_integer ( list_of_references, n )
    if ( n is 0 )
        print all remaining lower bits of missing number are 0
        return
    if ( n is 1 )
        print 0-th bit of missing number is (1 - the number in the list)
        return
    left_list . initialize ()
    right_list . initialize ()
    msb_position <- ceiling( lg(n+1) ) - 1
    forall number in list_of_references
        if ( number [ msb_position ] is 0 )
            left_list . insert ( number )
        else
            right_list . insert ( number )
    if ( left_list . size () < power_of_two ( msb_position ) )
        print msb_position-th bit of missing number is 0
        find_missing_integer ( left_list, power_of_two ( msb_position ) - 1 )
    else
        print msb_position-th bit of missing number is 1
        find_missing_integer ( right_list, n - power_of_two ( msb_position ) )


Complexity: O(n), since the recurrence is T(n) = T(n/2) + Θ(n).
Code:
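A possible C sketch of this procedure. It follows the same bit-by-bit narrowing iteratively rather than recursively; the sample array, the MAXN bound, and the fetch_bit() helper that simulates the "fetch the j-th bit of A[i]" operation are assumptions made so the sketch can run on its own.

#include <stdio.h>

#define MAXN 64                                /* enough for this demo */

static int A[] = {5, 0, 2, 7, 6, 1, 3};        /* integers 0..7 with 4 missing, n = 7 */
static int N = 7;

/* The only access to A the problem allows: the j-th bit of A[i]. */
static int fetch_bit(int i, int j) { return (A[i] >> j) & 1; }

int find_missing(void)
{
    int list[MAXN], tmp[MAXN];                 /* references = indices into A */
    int size = N;                              /* current subproblem size */
    int missing = 0;

    for (int i = 0; i < N; i++)
        list[i] = i;

    while (size > 0) {
        int msb = 0;                           /* msb = ceil(lg(size+1)) - 1 */
        while ((2 << msb) <= size)
            msb++;
        int half = 1 << msb;                   /* 2^msb numbers of the range are < 2^msb */

        int zeros = 0, ones = 0;               /* split references by their msb-th bit */
        for (int k = 0; k < size; k++) {
            if (fetch_bit(list[k], msb) == 0)
                tmp[zeros++] = list[k];
            else
                tmp[size - 1 - ones++] = list[k];
        }
        if (zeros < half) {                    /* the missing number's msb-th bit is 0 */
            for (int k = 0; k < zeros; k++)
                list[k] = tmp[k];
            size = half - 1;
        } else {                               /* the missing number's msb-th bit is 1 */
            missing |= half;
            for (int k = 0; k < ones; k++)
                list[k] = tmp[size - 1 - k];
            size -= half;
        }
    }
    return missing;
}

int main(void)
{
    printf("missing = %d\n", find_missing());  /* prints missing = 4 */
    return 0;
}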

CLRS Exercises 5.1-2

Problem:

Describe an implementation of the procedure RANDOM(a, b) that only makes calls to
RANDOM(0, 1). What is the expected running time of your procedure, as a function of a and
b?

Solution:

  1. n = ceil(lg(b - a + 1))
  2. decimal_number = 0
  3. do
  4. call RANDOM(0, 1) n times to get n random bits
  5. decimal_number = the integer value of those n bits
  6. while decimal_number > b - a
  7. return a + decimal_number

Complexity: expected Θ(lg(b - a + 1)); each attempt makes Θ(lg(b - a + 1)) calls to RANDOM(0, 1) and is accepted with probability greater than 1/2, so the expected number of attempts is constant.
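A possible C sketch of this procedure; random01() (built on rand()) is only a stand-in for RANDOM(0, 1), and the demo range is illustrative.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Stand-in for RANDOM(0, 1); assumed to return an unbiased random bit. */
static int random01(void) { return rand() & 1; }

/* RANDOM(a, b): rejection sampling over ceil(lg(b-a+1)) unbiased bits. */
int random_ab(int a, int b)
{
    int range = b - a + 1;
    int n = 0;
    while ((1 << n) < range)
        n++;                               /* n = ceil(lg(range)) */

    int x;
    do {
        x = 0;
        for (int i = 0; i < n; i++)
            x = (x << 1) | random01();     /* assemble n random bits */
    } while (x > b - a);                   /* reject values outside the range */
    return a + x;
}

int main(void)
{
    srand((unsigned)time(NULL));
    for (int i = 0; i < 10; i++)
        printf("%d ", random_ab(3, 10));
    printf("\n");
    return 0;
}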

Monday, August 8, 2011

CLRS Exercises 5.1-3:

Problem:

Suppose that you want to output 0 with probability 1/2 and 1 with probability 1/2. At your
disposal is a procedure BIASED-RANDOM, that outputs either 0 or 1. It outputs 1 with some
probability p and 0 with probability 1 - p, where 0 < p < 1, but you do not know what p is.
Give an algorithm that uses BIASED-RANDOM as a subroutine, and returns an unbiased
answer, returning 0 with probability 1/2 and 1 with probability 1/2. What is the expected
running time of your algorithm as a function of p?

Solution:

  1. Start a loop
  2. a = BIASED-RANDOM(); b = BIASED-RANDOM()
  3. if (a == 0 && b == 1) return 0;
  4. else if (a == 1 && b == 0) return 1;
  5. else continue;

Complexity: each iteration returns with probability 2p(1 - p), so the expected number of iterations is 1/(2p(1 - p)) and the expected running time is Θ(1/(p(1 - p))).
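A possible C sketch of this procedure (the von Neumann trick); biased_random() with p hard-coded to 0.7 is only a stand-in for BIASED-RANDOM, and the algorithm itself never uses p.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Stand-in for BIASED-RANDOM: returns 1 with some unknown probability p.
 * (p is fixed here only so the sketch runs; the algorithm never uses it.) */
static int biased_random(void) { return (rand() % 100) < 70; }

/* Two biased flips: the (0,1) and (1,0) outcomes are equally likely,
 * each with probability p*(1-p), so keeping only those gives a fair bit. */
int unbiased_random(void)
{
    for (;;) {
        int a = biased_random(), b = biased_random();
        if (a == 0 && b == 1) return 0;
        if (a == 1 && b == 0) return 1;
        /* (0,0) and (1,1): discard and try again */
    }
}

int main(void)
{
    srand((unsigned)time(NULL));
    int ones = 0, trials = 100000;
    for (int i = 0; i < trials; i++)
        ones += unbiased_random();
    printf("fraction of 1s: %f\n", (double)ones / trials);   /* close to 0.5 */
    return 0;
}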

Expand a random range from 1-5 to 1-7

Problem:
Given a function which produces a random integer in the range 1 to 5, write a function which produces a random integer in the range 1 to 7.
Solution:

int i;
do
{
  i = 5 * (rand5() - 1) + rand5();  // i is now uniformly random between 1 and 25
} while(i > 21);
// i is now uniformly random between 1 and 21
return i % 7 + 1;  // result is now uniformly random between 1 and 7

Complexity: expected O(1); each attempt is accepted with probability 21/25, so the expected number of rand5() calls is constant.

Sunday, August 7, 2011

CLRS - Problem 2.1

Problem:

Although merge sort runs in Θ(n lg n) worst-case time and insertion sort runs in Θ(n^2) worst-case
time, the constant factors in insertion sort make it faster for small n. Thus, it makes sense
to use insertion sort within merge sort when subproblems become sufficiently small. Consider
a modification to merge sort in which n/k sublists of length k are sorted using insertion sort
and then merged using the standard merging mechanism, where k is a value to be determined.
a. Show that the n/k sublists, each of length k, can be sorted by insertion sort in Θ(nk)
worst-case time.
                  Time to sort one sublist of length k using insertion sort = Θ(k^2)
                  Time to sort n/k sublists, each of length k = Θ((n/k) * k^2) = Θ(nk)
                                                           
b. Show that the sublists can be merged in Θ(n lg(n/k)) worst-case time.
                  The merge tree over the n/k sorted sublists has height lg(n/k),
                  and each level of merging touches all n elements, so merging takes Θ(n lg(n/k)).
c. Given that the modified algorithm runs in Θ(nk + n lg(n/k)) worst-case time, what is
the largest asymptotic (Θ-notation) value of k as a function of n for which the modified
algorithm has the same asymptotic running time as standard merge sort?
If k = n,
it becomes Θ(n^2 + n lg 1) = Θ(n^2).

If k = 1,
it becomes Θ(n + n lg n) = Θ(n lg n).

If k = lg n,
it becomes Θ(n lg n + n lg(n/lg n)) = Θ(n lg n + n lg n - n lg(lg n)) = Θ(2n lg n - n lg(lg n)) = Θ(n lg n).

Any k growing faster than lg n makes the nk term grow faster than n lg n, so the largest value is k = Θ(lg n).
d. How should k be chosen in practice?

In practice, k should be chosen empirically, based on the constant factors of the two sorts on the given machine: pick the largest sublist length for which insertion sort is still faster than merge sort.
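A possible C sketch of the modified algorithm from this problem: merge sort that hands sublists of length at most K to insertion sort. The cutoff K = 4 and the sample input are placeholders; as part (d) says, the real cutoff should be tuned empirically.

#include <stdio.h>
#include <string.h>

#define K 4   /* placeholder cutoff below which insertion sort is used */

static void insertion_sort(int a[], int lo, int hi)        /* sorts a[lo..hi] */
{
    for (int i = lo + 1; i <= hi; i++) {
        int key = a[i], j = i - 1;
        while (j >= lo && a[j] > key) { a[j + 1] = a[j]; j--; }
        a[j + 1] = key;
    }
}

static void merge(int a[], int lo, int mid, int hi)
{
    int tmp[hi - lo + 1];
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp, k * sizeof(int));
}

static void hybrid_merge_sort(int a[], int lo, int hi)
{
    if (hi - lo + 1 <= K) {              /* small subproblem: insertion sort */
        insertion_sort(a, lo, hi);
        return;
    }
    int mid = lo + (hi - lo) / 2;
    hybrid_merge_sort(a, lo, mid);
    hybrid_merge_sort(a, mid + 1, hi);
    merge(a, lo, mid, hi);
}

int main(void)
{
    int a[] = {9, 4, 7, 1, 8, 2, 6, 3, 5, 0};
    hybrid_merge_sort(a, 0, 9);
    for (int i = 0; i < 10; i++) printf("%d ", a[i]);
    printf("\n");
    return 0;
}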



Thursday, August 4, 2011

CLRS Exercises 2.3-7

Problem:

Describe a Θ(n lg n)-time algorithm that, given a set S of n integers and another integer x,
determines whether or not there exist two elements in S whose sum is exactly x.

Solution:

  1. Merge sort the elements, which takes O(n log n).
  2. Keep two pointers, i at the beginning and j at the end of the sorted array.
  3. If a[i] + a[j] < x, then i++.
  4. Else if a[i] + a[j] > x, then j--.
  5. Else we have found the pair; print a[i] and a[j].
  6. This scan is O(n) in the worst case.
  7. Alternatively, after the merge sort, for each i do a binary search for x - a[i] in a[i+1 ... n].
  8. Each binary search is O(log n), so the total worst-case complexity is still O(n log n).

Complexity: O(n log n)
Code:
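A possible C sketch of the two-pointer variant (steps 2-6); the helper names and sample values are illustrative.

#include <stdio.h>
#include <stdlib.h>

static int int_cmp(const void *p, const void *q)
{
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);
}

/* Returns 1 and fills *u, *v if two elements of s sum to exactly x, else 0. */
int has_pair_with_sum(int s[], int n, int x, int *u, int *v)
{
    qsort(s, n, sizeof s[0], int_cmp);     /* O(n lg n) */
    int i = 0, j = n - 1;
    while (i < j) {                        /* O(n) two-pointer scan */
        int sum = s[i] + s[j];
        if (sum == x) { *u = s[i]; *v = s[j]; return 1; }
        if (sum < x) i++;
        else         j--;
    }
    return 0;
}

int main(void)
{
    int s[] = {8, 4, 1, 6, 3}, u, v;
    if (has_pair_with_sum(s, 5, 10, &u, &v))
        printf("%d + %d = 10\n", u, v);    /* prints 4 + 6 = 10 */
    return 0;
}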



Wednesday, August 3, 2011

CLRS - Exercises 2.3-6

Problem:

Observe that the while loop of lines 5-7 of the INSERTION-SORT procedure in Section 2.1
uses a linear search to scan (backward) through the sorted subarray A[1..j-1]. Can we use a
binary search (see Exercise 2.3-5) instead to improve the overall worst-case running time of
insertion sort to Θ(n lg n)?

Solution:
No. Binary search finds the correct position in O(log n), but shifting the elements to make room for the key still takes O(n) time per insertion, so the overall worst-case running time remains O(n^2).
Complexity:
Code:
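A possible C sketch illustrating the point: binary search reduces the comparisons, but the shift (the memmove below) still moves O(n) elements per insertion, so the worst case stays Θ(n^2). Names and sample input are illustrative.

#include <stdio.h>
#include <string.h>

/* Insertion sort using binary search to locate the insert position. */
void binary_insertion_sort(int a[], int n)
{
    for (int j = 1; j < n; j++) {
        int key = a[j];
        int lo = 0, hi = j;              /* find the first position with a[pos] > key */
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] <= key) lo = mid + 1;
            else               hi = mid;
        }
        memmove(a + lo + 1, a + lo, (j - lo) * sizeof(int));   /* still an O(n) shift */
        a[lo] = key;
    }
}

int main(void)
{
    int a[] = {5, 2, 4, 6, 1, 3};
    binary_insertion_sort(a, 6);
    for (int i = 0; i < 6; i++) printf("%d ", a[i]);   /* prints 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}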

CLRS Exercises 2.3-5 Binary search

Problem:

On a sorted sequence, we can compare the midpoint against the key and eliminate half of the sequence from further consideration. Binary search is an algorithm that repeats this
procedure, halving the size of the remaining portion of the sequence each time. Write
pseudocode, either iterative or recursive, for binary search. Argue that the worst-case running
time of binary search is Θ(lg n).

Solution:

  1. If there are 2^i elements, then the height of the comparison tree is i + 1.
  2. At each level we do a single comparison of the key with the middle element, which takes O(1) time.
  3. So for any n, the number of levels is at most ⌊lg n⌋ + 1.
  4. Thus, the worst-case running time is Θ(lg n).

Complexity: Θ(lg n).
Code:
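A possible iterative C version; the sample array is illustrative.

#include <stdio.h>

/* Iterative binary search: returns the index of key in the sorted array a,
 * or -1 if it is not present. Each iteration halves the remaining range. */
int binary_search(const int a[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key)      return mid;
        else if (a[mid] < key)  lo = mid + 1;
        else                    hi = mid - 1;
    }
    return -1;
}

int main(void)
{
    int a[] = {1, 3, 5, 7, 9, 11};
    printf("%d\n", binary_search(a, 6, 7));    /* prints 3 */
    printf("%d\n", binary_search(a, 6, 4));    /* prints -1 */
    return 0;
}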


CLRS Exercises 2.3-4 Recursive insertion sort

Problem:

Insertion sort can be expressed as a recursive procedure as follows. In order to sort A[1..n],
we recursively sort A[1..n-1] and then insert A[n] into the sorted array A[1..n-1]. Write a
recurrence for the running time of this recursive version of insertion sort.

Solution:

  1. Initially the array size is n.
  2. Call the insertion sort function recursively until the size becomes 1.
  3. Then insert the last element into the sorted prefix, as shown in the code.
T(n) = T(n-1) + Θ(n), which solves to Θ(n^2).

Complexity: Θ(n^2) worst-case time
Code:
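A possible C sketch of the recursive formulation; the sample input is illustrative.

#include <stdio.h>

/* Recursive insertion sort: sort a[0..n-2] recursively, then insert a[n-1]
 * into the sorted prefix. Recurrence: T(n) = T(n-1) + Θ(n) = Θ(n^2). */
void recursive_insertion_sort(int a[], int n)
{
    if (n <= 1)
        return;
    recursive_insertion_sort(a, n - 1);
    int key = a[n - 1], j = n - 2;
    while (j >= 0 && a[j] > key) {
        a[j + 1] = a[j];
        j--;
    }
    a[j + 1] = key;
}

int main(void)
{
    int a[] = {5, 2, 4, 6, 1, 3};
    recursive_insertion_sort(a, 6);
    for (int i = 0; i < 6; i++) printf("%d ", a[i]);   /* prints 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}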


Monday, August 1, 2011

Problem:
You have two numbers represented by linked lists, where each node contains a single digit. The digits are stored in reverse order, such that the 1's digit is at the head of
the list. Write a function that adds the two numbers and returns the sum as a linked
list.
EXAMPLE
Input: (3 -> 1 -> 5) + (5 -> 9 -> 2)
Output: 8 -> 0 -> 8
Solution:
  1. Keep a pointer at the head of each list.
  2. Add the data of the two current nodes along with the carry.
  3. Create a new result node holding SUM % 10.
  4. Assign SUM / 10 to the carry.
  5. Repeat until either list is exhausted.
  6. Then keep adding the carry through the remaining list until its end; if a carry is still left, append one more node for it.
Complexity: O(max(m, n)), where m and n are the lengths of the two lists
Code:
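A possible C sketch; the node layout and helper names are assumptions.

#include <stdio.h>
#include <stdlib.h>

struct node { int data; struct node *next; };

static struct node *push(struct node *head, int d)   /* prepend a digit */
{
    struct node *n = malloc(sizeof *n);
    n->data = d;
    n->next = head;
    return n;
}

/* Digits are stored least-significant first, so we can add left to right,
 * carrying as we go; a leftover carry becomes a final node. */
struct node *add_lists(struct node *a, struct node *b)
{
    struct node *result = NULL, **tail = &result;
    int carry = 0;
    while (a || b || carry) {
        int sum = carry + (a ? a->data : 0) + (b ? b->data : 0);
        carry = sum / 10;
        *tail = malloc(sizeof **tail);
        (*tail)->data = sum % 10;
        (*tail)->next = NULL;
        tail = &(*tail)->next;
        if (a) a = a->next;
        if (b) b = b->next;
    }
    return result;
}

int main(void)
{
    /* 513 as 3 -> 1 -> 5 and 295 as 5 -> 9 -> 2 */
    struct node *a = push(push(push(NULL, 5), 1), 3);
    struct node *b = push(push(push(NULL, 2), 9), 5);
    for (struct node *p = add_lists(a, b); p; p = p->next)
        printf("%d -> ", p->data);                     /* prints 8 -> 0 -> 8 -> */
    printf("NULL\n");
    return 0;
}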



Problem:
  Write code to remove duplicates from an unsorted linked list
Solution:
  1. Keep two pointers, ptr1 and ptr2.
  2. For each node that ptr1 visits, scan with ptr2 from the head up to the node before ptr1 to check whether ptr1's data is already present.
  3. If so, unlink and remove the ptr1 node.
  4. Return the new list, which contains no duplicates.
Complexity: O(n^2)
Code:
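A possible C sketch of the two-pointer approach described above; the node layout, helpers, and sample values are assumptions.

#include <stdio.h>
#include <stdlib.h>

struct node { int data; struct node *next; };

static struct node *push(struct node *head, int d)
{
    struct node *n = malloc(sizeof *n);
    n->data = d;
    n->next = head;
    return n;
}

/* For each node ptr1, scan the prefix head..prev for its value;
 * if the value was seen before, unlink and free ptr1. O(n^2) time, O(1) space. */
void remove_duplicates(struct node *head)
{
    if (!head) return;
    struct node *prev = head, *ptr1 = head->next;
    while (ptr1) {
        int seen = 0;
        for (struct node *ptr2 = head; ptr2 != ptr1; ptr2 = ptr2->next)
            if (ptr2->data == ptr1->data) { seen = 1; break; }
        if (seen) {
            prev->next = ptr1->next;
            free(ptr1);
            ptr1 = prev->next;
        } else {
            prev = ptr1;
            ptr1 = ptr1->next;
        }
    }
}

int main(void)
{
    struct node *list = NULL;
    int vals[] = {3, 1, 4, 1, 3, 2};
    for (int i = 5; i >= 0; i--) list = push(list, vals[i]);
    remove_duplicates(list);
    for (struct node *p = list; p; p = p->next) printf("%d ", p->data);   /* prints 3 1 4 2 */
    printf("\n");
    return 0;
}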





Problem:
You have two very large binary trees: T1, with millions of nodes, and T2, with hundreds of nodes. Create an algorithm to decide if T2 is a subtree of T1.
Solution:
  1. Traverse the tree T1.
  2. For each node of T1, call a method to do the following (step 3):
  3. Starting at that node, traverse T1 and T2 together until T2 is exhausted or the data of the corresponding nodes differ.
  4. Return true if some starting node matches all of T2; otherwise return false.
 
 Complexity: O(n*m) in the worst case, where n and m are the numbers of nodes in T1 and T2
Code:
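A possible C sketch; the node layout and the sample trees are assumptions. Per step 3, match() reports success as soon as T2 is exhausted.

#include <stdio.h>
#include <stdlib.h>

struct node { int data; struct node *left, *right; };

static struct node *new_node(int d)
{
    struct node *n = malloc(sizeof *n);
    n->data = d; n->left = n->right = NULL;
    return n;
}

/* Do the trees rooted at a and b match until b (T2) runs out? */
static int match(const struct node *a, const struct node *b)
{
    if (b == NULL) return 1;                 /* T2 exhausted: match */
    if (a == NULL) return 0;
    if (a->data != b->data) return 0;
    return match(a->left, b->left) && match(a->right, b->right);
}

/* Try to match T2 starting at every node of T1: O(n*m) in the worst case. */
int is_subtree(const struct node *t1, const struct node *t2)
{
    if (t2 == NULL) return 1;                /* an empty tree is a subtree */
    if (t1 == NULL) return 0;
    if (match(t1, t2)) return 1;
    return is_subtree(t1->left, t2) || is_subtree(t1->right, t2);
}

int main(void)
{
    struct node *t1 = new_node(1);
    t1->left = new_node(2);
    t1->right = new_node(3);
    t1->left->left = new_node(4);
    struct node *t2 = new_node(2);
    t2->left = new_node(4);
    printf("%d\n", is_subtree(t1, t2));      /* prints 1 */
    return 0;
}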