Binary Heap

A binary heap is an array object that can be viewed as a complete binary tree. Each node of the tree corresponds to an element of the array. Length[A] is the number of elements in the array; Heap-Size[A] is the number of elements of the heap stored within array A. The root of the tree is A[1], and given the index i of a node, the indices of its parent, left child, and right child can be computed:

PARENT (i)
    return floor (i/2)
LEFT (i)
    return 2i
RIGHT (i)
    return 2i + 1

For example, suppose the heap stores 20 at the root: the index of 20 is 1. To find the index of its left child, we calculate 1*2 = 2, which takes us (correctly) to 14. Going right from index 2, we calculate 2*2 + 1 = 5, which takes us (again, correctly) to 6. Now, 4's index is 7; to go to its parent, we calculate floor(7/2) = 3, which takes us to 17.

Heap Property: a binary heap can be classified as a max-heap or a min-heap.

1. Max-Heap: in a max-heap, for every node i other than the root, the value of the node is less than or equal to the value of its parent:

A[PARENT (i)] ≥ A[i]

Thus, the largest element in a max-heap is stored at the root.

2. Min-Heap: in a min-heap, the value of the node is greater than or equal to the value of its parent:

A[PARENT (i)] ≤ A[i]

Thus, the smallest element in a min-heap is stored at the root.

Heapify Method:

1. Maintaining the Heap Property: Heapify is a procedure for manipulating the heap data structure. It is given an array A and an index i into the array. The subtrees rooted at the children of A[i] are heaps, but node A[i] itself may violate the heap property, i.e. A[i] < A[2i] or A[i] < A[2i+1]. The procedure MAX-HEAPIFY moves A[i] down so that the subtree rooted at i becomes a heap.

MAX-HEAPIFY (A, i)
 1. l ← LEFT (i)
 2. r ← RIGHT (i)
 3. if l ≤ heap-size[A] and A[l] > A[i]
 4.     then largest ← l
 5.     else largest ← i
 6. if r ≤ heap-size[A] and A[r] > A[largest]
 7.     then largest ← r
 8. if largest ≠ i
 9.     then exchange A[i] ↔ A[largest]
10.          MAX-HEAPIFY (A, largest)

Analysis: the maximum number of levels an element can move down is Θ(log n). At each level, we do a constant number of comparisons, which is O(1). The total time for heapify is thus O(log n).

Building a Heap:

BUILD-MAX-HEAP (array A, int n)
1. for i ← floor(n/2) down to 1
2.     do MAX-HEAPIFY (A, i, n)

HEAP-SORT Algorithm:

HEAP-SORT (A)
1. BUILD-MAX-HEAP (A)
2. for i ← length[A] down to 2
3.     do exchange A[1] ↔ A[i]
4.        heap-size[A] ← heap-size[A] - 1
5.        MAX-HEAPIFY (A, 1)
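The pseudocode above can be sketched in Python. Note the 0-indexed adaptation (the children of index i become 2i+1 and 2i+2, and the parent is (i-1)//2); the function names are ours, not from the text:

```python
def max_heapify(a, i, heap_size):
    # Restore the max-heap property at index i, assuming both subtrees are already heaps.
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < heap_size and a[left] > a[largest]:
        largest = left
    if right < heap_size and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)

def build_max_heap(a):
    # Heapify from the last internal node down to the root.
    for i in range(len(a) // 2 - 1, -1, -1):
        max_heapify(a, i, len(a))

def heap_sort(a):
    build_max_heap(a)
    for end in range(len(a) - 1, 0, -1):
        a[0], a[end] = a[end], a[0]   # move the current maximum to its final place
        max_heapify(a, 0, end)        # shrink the heap and restore the property

data = [20, 14, 17, 8, 6, 9, 4, 1]
heap_sort(data)
print(data)  # [1, 4, 6, 8, 9, 14, 17, 20]
```

Because a max-heap keeps the largest element at the root, repeatedly swapping the root to the end of the shrinking heap yields an ascending sort in place.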

Quick sort

Quick sort is an algorithm of the Divide & Conquer type.

Divide: rearrange the elements and split the array into two sub-arrays around a pivot element, so that each element in the left sub-array is less than or equal to the pivot and each element in the right sub-array is greater than the pivot.
Conquer: recursively sort the two sub-arrays.
Combine: nothing remains to be done; once the sub-arrays are sorted, the whole array is sorted.

Algorithm:

QUICKSORT (array A, int m, int n)
1 if (n > m)
2 then
3     i ← a random index from [m, n]
4     swap A[i] with A[m]
5     o ← PARTITION (A, m, n)
6     QUICKSORT (A, m, o - 1)
7     QUICKSORT (A, o + 1, n)

Partition Algorithm: the partition algorithm rearranges the sub-array in place.

PARTITION (array A, int m, int n)
1 x ← A[m]
2 o ← m
3 for p ← m + 1 to n
4     do if (A[p] < x)
5         then o ← o + 1
6              swap A[o] with A[p]
7 swap A[m] with A[o]
8 return o

Example of Quick Sort:

44  33  11  55  77  90  40  60  99  22  88

Let 44 be the pivot element, scanning from right to left. Compare 44 with the right-side elements; if a right-side element is smaller than 44, swap it with 44. As 22 is smaller than 44, swap them:

22  33  11  55  77  90  40  60  99  44  88

Now compare 44 with the left-side elements; if a left-side element is greater than 44, swap them. As 55 is greater than 44, swap them:

22  33  11  44  77  90  40  60  99  55  88

Recursively repeating steps 1 and 2, we continue until 44 separates two lists, one to its left and one to its right. Swap 44 with 40:

22  33  11  40  77  90  44  60  99  55  88

Swap 44 with 77:

22  33  11  40  44  90  77  60  99  55  88

Now every element to the left of 44 is smaller than 44 and every element to its right is greater, giving two sub-lists. These sub-lists are sorted by the same process, and the sorted sub-lists placed side by side give the fully sorted list.

Worst Case Analysis: the worst case occurs when the items are already in sorted order and we try to sort them again; the partitions are then maximally unbalanced.

Recurrence:

T(n) = T(1) + T(n-1) + n

where T(1) is the time for the pivot element's (trivial) sub-list, T(n-1) is the time for the remaining elements, and n is the number of comparisons required to find the exact position of the pivot. If we compare the first element (the pivot) with every other element, placing the pivot takes on the order of n comparisons when there are n items.

Stopping condition: T(1) = 0, because when only one element is left, no comparison is required. Expanding the recurrence:

T(n) = T(n-1) + n
     = [T(n-2) + (n-1)] + n
     = [T(n-3) + (n-2)] + (n-1) + n
     ...
     = T(1) + 2 + 3 + ... + (n-1) + n
     = 0 + (2 + 3 + ... + n)
     = n(n+1)/2 - 1

The worst-case complexity of quick sort is therefore T(n) = O(n²).

Randomized Quick Sort [Average Case]:

Generally, we take the first element of the list as the pivot. In the average case, every element is equally likely to end up as the pivot. For example, in the list p1, p2, p3, p4, ..., pn: if p1 is the pivot, we get two sub-lists costing T(0) and T(n-1); if p2 is the pivot, T(1) and T(n-2); if p3 is the pivot, T(2) and T(n-3); and so on. In general, if the k-th smallest element is the pivot, the two sub-lists cost T(k-1) and T(n-k).

The pivot itself does n + 1 comparisons in the partition step, and since we are doing the average case we average over all n choices of pivot:

T(n) = (n+1) + (1/n) Σ_{k=1..n} [T(k-1) + T(n-k)]
     = (n+1) + (2/n) [T(0) + T(1) + T(2) + ... + T(n-1)]

Multiplying both sides by n:

n T(n) = n(n+1) + 2 [T(0) + T(1) + T(2) + ... + T(n-1)]      ...... (eq 1)

Put n = n-1 in eq 1:

(n-1) T(n-1) = (n-1)n + 2 [T(0) + T(1) + T(2) + ... + T(n-2)]      ...... (eq 2)

Subtracting eq 2 from eq 1:

n T(n) - (n-1) T(n-1) = n(n+1) - n(n-1) + 2 T(n-1)
n T(n) = [2 + (n-1)] T(n-1) + 2n
n T(n) = (n+1) T(n-1) + 2n      ...... (eq 3)

Dividing eq 3 by n(n+1):

T(n)/(n+1) = T(n-1)/n + 2/(n+1)

Repeatedly putting n = n-1, n-2, ... in this relation and substituting back (telescoping), we get

T(n)/(n+1) = T(1)/2 + 2 [1/3 + 1/4 + ... + 1/(n+1)]

The bracketed sum is a harmonic-type series, which is O(log n); multiplying back by (n+1) gives

T(n) = O(n log n)

which is the average-case complexity of quick sort for sorting n elements.

3. Quick Sort [Best Case]: in any sorting, the best case is the only case in which we make no comparison between elements, and that happens only when there is a single element to sort.
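The QUICKSORT and PARTITION pseudocode above can be sketched in Python (0-indexed; function and variable names follow the pseudocode but are our adaptation):

```python
import random

def partition(a, m, n):
    # Partition a[m..n] around the pivot x = a[m]; return the pivot's final index.
    x = a[m]
    o = m
    for p in range(m + 1, n + 1):
        if a[p] < x:
            o += 1
            a[o], a[p] = a[p], a[o]
    a[m], a[o] = a[o], a[m]
    return o

def quicksort(a, m, n):
    if n > m:
        i = random.randint(m, n)       # randomized pivot choice
        a[i], a[m] = a[m], a[i]
        o = partition(a, m, n)
        quicksort(a, m, o - 1)
        quicksort(a, o + 1, n)

data = [44, 33, 11, 55, 77, 90, 40, 60, 99, 22, 88]
quicksort(data, 0, len(data) - 1)
print(data)  # [11, 22, 33, 40, 44, 55, 60, 77, 88, 90, 99]
```

The random pivot swap on line 3 of the pseudocode is what turns the worst-case O(n²) behaviour on already-sorted input into the expected O(n log n) cost derived above.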

Stable Sorting

A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in the sorted output as they appear in the unsorted input array. Some sorting algorithms are stable by nature, such as insertion sort, merge sort, and bubble sort; others, such as quick sort and heap sort, are not stable.

Another definition of stable sorting: a stable sort is one which preserves the original order of the input set wherever the comparison algorithm does not distinguish between two or more items. A stable sort guarantees that the original order of data having the same rank is preserved in the output.

In-Place Sorting Algorithm:

An in-place sorting algorithm directly modifies the list it receives as input instead of creating a new list that is then modified. Such an algorithm uses only a small amount of extra space to manipulate the input set. In other words, the output is placed in the correct position while the algorithm is still executing, which means the input is overwritten by the desired output at run time. An in-place sorting algorithm updates its input only through replacement or swapping of elements. An algorithm which is not in-place is sometimes called not-in-place or out-of-place. An in-place algorithm may use only a constant amount of extra space, counting everything including function calls and pointers; usually this space is O(log n).

Note:
Bubble sort, insertion sort, and selection sort are in-place sorting algorithms, because only swapping of elements within the input array is required.
Bubble sort and insertion sort can be implemented as stable algorithms, but selection sort cannot (without significant modifications).
Merge sort is a stable algorithm but not an in-place algorithm; it requires extra array storage.
Quick sort is not stable but is an in-place algorithm.
Heap sort is an in-place algorithm but is not stable.
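A minimal illustration of stability, using Python's built-in sorted(), which is guaranteed to be stable (the records and labels are our example data):

```python
# Records are (key, label) pairs; the keys 1 and 2 each appear twice.
records = [(2, 'a'), (1, 'b'), (2, 'c'), (1, 'd')]

# sorted() is stable, so records with equal keys keep their input order:
# 'b' stays before 'd', and 'a' stays before 'c'.
stable = sorted(records, key=lambda r: r[0])
print(stable)  # [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')]
```

An unstable sort would be free to emit (2, 'c') before (2, 'a'), since it only compares the keys; stability is exactly the promise that such reorderings of equal-key items never happen.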

Binary Search

1. In the binary search technique, we search for an element in a sorted array by repeatedly dividing the search interval in half.
2. Firstly, we take the whole array as the interval.
3. If the item to be searched is less than the item in the middle of the interval, we discard the second half of the list and recursively repeat the process on the first half, calculating the new middle.
4. If the item to be searched is greater than the item in the middle of the interval, we discard the first half and work recursively on the second half, calculating the new middle.
5. Repeat until the value is found or the interval is empty.

Analysis:

Input: an array A of size n, already sorted in ascending or descending order.
Output: the position of the search item in the sorted array of size n.
Logic: let T(n) = the number of comparisons of an item with n elements in a sorted array.

Set BEG = 1 and END = n.
Find mid = floor((BEG + END)/2).
Compare the search item with the mid item.

Case 1: item = A[mid]; then LOC = mid. This is the best case, and T(n) = 1.
Case 2: item ≠ A[mid]; then we split the array into two equal parts of size n/2, find the midpoint of the selected half, and compare again with the search element, repeating the same process until the search element is found. This gives the recurrence

T(n) = T(n/2) + 1      ...... (Equation 1)

{one comparison of the search element with the mid element, plus the time to search the selected half of the array}

Unrolling the recurrence, T(n) = T(n/2^k) + k. Eventually only one element is left, i.e. n/2^k = 1, so k = log2 n; that last term contributes the final comparison, equal to 1. Hence T(n) = O(log2 n).
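The procedure above can be sketched in Python (0-indexed, ascending order assumed; names are ours):

```python
def binary_search(a, item):
    # Iterative binary search on a sorted (ascending) list.
    beg, end = 0, len(a) - 1
    while beg <= end:
        mid = (beg + end) // 2
        if a[mid] == item:
            return mid            # found: LOC = mid
        elif item < a[mid]:
            end = mid - 1         # discard the second half
        else:
            beg = mid + 1         # discard the first half
    return -1                     # interval empty: not found

a = [11, 22, 33, 44, 55, 66, 77]
print(binary_search(a, 55))  # 4
print(binary_search(a, 50))  # -1
```

Each iteration halves the interval, so at most about log2 n iterations run, matching the T(n) = T(n/2) + 1 recurrence.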

Lower Bound Theory

Lower bound theory is based on calculating the minimum time required to execute an algorithm; this minimum is known as the lower bound. Lower bound theory uses a number of methods/techniques to find the lower bound.

Concept/Aim: the main aim is to calculate the minimum number of comparisons required to execute an algorithm.

Techniques: the techniques used by lower bound theory are:
1. Comparison trees
2. Oracle and adversary argument
3. State space method

1. Comparison trees:

In a comparison sort, we use only comparisons between elements to gain order information about an input sequence (a1, a2, ..., an). Given ai, aj from (a1, a2, ..., an), we perform one of the comparisons

ai < aj    less than
ai ≤ aj    less than or equal to
ai > aj    greater than
ai ≥ aj    greater than or equal to
ai = aj    equal to

to determine their relative order. If we assume all elements are distinct, then '=' is excluded, and the comparisons ≤, ≥, <, > are all equivalent, so we need only consider ai ≤ aj.

Consider sorting three numbers a1, a2, and a3. There are 3! = 6 possible orderings:

(a1, a2, a3), (a1, a3, a2), (a2, a1, a3), (a2, a3, a1), (a3, a1, a2), (a3, a2, a1)

A comparison-based algorithm defines a decision tree.

Decision Tree: a decision tree is a full binary tree that shows the comparisons between elements executed by a sorting algorithm operating on an input of a given size; control, data movement, and all other operations of the algorithm are ignored. For an input array of length n, the decision tree must have n! leaves, one for each possible ordering of the input. At each internal node a comparison ai ≤ aj is made: the left subtree is the true branch (ai ≤ aj) and the right subtree is the false branch (ai > aj). (Fig: decision tree for comparing a1, a2, and a3.)

A binary tree of height h has at most 2^h leaves, so

n! ≤ 2^h

Taking logs on both sides gives h ≥ log2(n!), a lower bound on the number of comparisons any comparison sort must make.

Comparison tree for Binary Search:

Example: suppose we have a list of items at the following positions:

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14

and the midpoints at the last level are:

2, 4, 6, 8, 10, 12, 14

We consider all the midpoints, level by level, and build a tree of these stepwise midpoints; the internal nodes of this tree are the positions actually compared.

Step 1: the maximum number of internal nodes up to level k is 2^k - 1. For example, 2^k - 1 = 2^3 - 1 = 8 - 1 = 7, where k = level = 3.
Step 2: the number of internal nodes in the comparison tree is n (one per list position).
Step 3: from Step 1 and Step 2 we get n ≤ 2^k - 1; here 14 ≤ 15.
Step 4: hence n + 1 ≤ 2^k; the number of internal nodes is always less than 2^k in binary search.
Step 5: taking logs of n + 1 ≤ 2^k gives log2(n + 1) ≤ k log2 2, i.e. k ≥ log2(n + 1).
Step 6: T(n) = k.
Step 7: T(n) ≥ log2(n + 1).

This is the minimum number of comparisons needed to search among n terms using binary search.

2. Oracle and adversary argument:

Another technique for obtaining lower bounds consists of making use of an "oracle". Given some model of computation, such as comparison trees, the oracle tells us the outcome of each comparison. In order to derive a good lower bound, the oracle tries its best to cause the algorithm to work as hard as possible: it chooses, as the outcome of the next comparison, the result that forces the most work to be needed to determine the final answer. By keeping track of the work that is done, a worst-case lower bound for the problem can be derived.

Example (merging problem): given the sets A(1:m) and B(1:n), where the elements of A and of B are sorted.
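As a numeric sanity check of the two bounds above (helper names are ours): at least ⌈log2(n + 1)⌉ comparisons for binary search over n items, and at least ⌈log2(n!)⌉ comparisons for any comparison sort of n items:

```python
import math

def min_comparisons_search(n):
    # Binary search lower bound: k >= log2(n + 1).
    return math.ceil(math.log2(n + 1))

def min_comparisons_sort(n):
    # Comparison-sort lower bound: h >= log2(n!).
    return math.ceil(math.log2(math.factorial(n)))

print(min_comparisons_search(14))  # 4, since log2(15) ~ 3.91
print(min_comparisons_sort(3))     # 3, since log2(3!) = log2(6) ~ 2.58
```

For the 14-item example in the text, ⌈log2 15⌉ = 4 matches the four levels of midpoints in the comparison tree.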
Consider lower bounds for algorithms that combine these two sets to give a single sorted set. Assume that all of the m+n elements are distinct and that A(1) < A(2) < ... < A(m) and B(1) < B(2) < ... < B(n). Elementary combinatorics tells us that there are C(m+n, n) ways that the A's and B's can merge together while still preserving the ordering within A and within B. Thus, if we use comparison trees as our model for merging algorithms, then there will be C(m+n, n) external nodes, and therefore at least log2 C(m+n, m) comparisons are needed by any comparison-based merging algorithm. If we let MERGE(m, n) be the minimum number of comparisons needed to merge m items with n items, then we have the inequality

log2 C(m+n, m) ≤ MERGE(m, n) ≤ m + n - 1

The upper bound and the lower bound can get quite far apart as m gets much smaller than n.

3. State Space Method:

1. The state space method is a set of rules that shows the possible states (n-tuples) that an algorithm can assume from a given state after a single comparison.
2. Once the state transitions are given, it is possible to derive lower bounds by arguing that the final state cannot be reached using fewer transitions.
3. Example problem: given n distinct items, find the winner (largest) and the loser (smallest).
4. Aim: count each change of state; that is the aim of the state space method.
5. In this approach, we count the number of comparisons by counting the number of changes in state.
6. We analyze the problem of finding the smallest and the biggest item using the state space method.
7. State: a state is a collection of attributes.
8. With this we work out two kinds of problems: finding the largest and smallest element of an array, and finding the largest and second-largest element of an array.
9. Think of the items as teams in a knockout tournament. With 8 teams, finding the largest item (the overall winner, say team A) takes 7 comparisons. For the second-largest item, we then consider only the teams that lost their match directly to team A, say B, D, and E, and play them off against each other. In general, if n is the total number of items:
   comparisons to find the biggest item = n - 1
   further comparisons to find the second-biggest item = ⌈log2 n⌉ - 1
10. The number of comparisons is equal to the number of changes of state during the execution of the algorithm.

Example: describe the computation by the state (A, B, C, D), where
A = number of items that have not yet been compared,
B = number of items that have won but never lost (candidate ultimate winner),
C = number of items that have lost but never won (candidate ultimate loser),
D = number of items that have sometimes lost and sometimes won.

Phase 1: the items are compared in pairs. Which side of each pair wins is irrelevant to the count, so we may assume either. If there are 8 teams, there are 4 matches (state changes) in this phase; for n teams there are n/2 state changes. After Phase 1, the C-state holds the 4 losing teams (say B, D, F, H), which can never be the overall winner.

Phase 2: the Phase 1 winners play among themselves to find the team that never loses at all. With 8 teams this takes 3 more state changes; for n teams, n/2 - 1.

Phase 3: the teams in the C-state play among themselves, moving downward in the structure to find the team that never wins at all. Here H is the team that never wins at all; by this we fulfil our second aim. With 8 teams this again takes 3 state changes; for n teams, n/2 - 1.

Note: the total of all state values is always equal to n.

Adding the states of all the phases:

Phase 1 + Phase 2 + Phase 3 = n/2 + (n/2 - 1) + (n/2 - 1) = 3n/2 - 2

so with 8 teams, 4 + 3 + 3 = 10 comparisons are needed (as a minimum) to find both the team that never loses and the team that never wins.

The lower bound L(n) is a property of the particular problem, i.e. of the sorting problem or of matrix multiplication, not of any particular algorithm solving that problem. Lower bound theory says that no algorithm can carry out the activity in fewer than L(n) time units for arbitrary inputs, i.e. every comparison-based sorting algorithm must take at least L(n) time in the worst case. L(n) is the greatest such bound that holds over all conceivable algorithms.

Trivial lower bounds are obtained by counting the number of elements in the problem's input that must be examined and the number of output items that must be produced.

Lower bound theory is the method used to establish that a given algorithm is the most efficient possible. This is done by discovering a function g(n) that is a lower bound on the time that any algorithm must take to solve the given problem. If we then have an algorithm whose computing time is of the same order as g(n), we know that asymptotically we cannot do better.

If f(n) is the time for some algorithm, then we write f(n) = Ω(g(n)) to mean that g(n) is a lower bound of f(n). Formally: there exist positive constants c and n0 such that |f(n)| ≥ c|g(n)| for all n > n0. Besides developing lower bounds to within a constant factor, we also try to determine more exact bounds whenever possible.

Deriving good lower bounds is more challenging than designing efficient algorithms. This is because a lower bound states a fact about all possible algorithms for solving a problem; generally we cannot enumerate and analyze all these algorithms, so lower bound proofs are often hard to obtain.
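The merging bounds discussed earlier, log2 C(m+n, m) ≤ MERGE(m, n) ≤ m + n - 1, can be evaluated numerically; this sketch (names ours) uses Python's math.comb:

```python
import math

def merge_bounds(m, n):
    # Lower bound: any comparison-based merge of sorted lists of sizes m and n
    # needs at least ceil(log2(C(m+n, m))) comparisons.
    # Upper bound: the standard two-finger merge uses at most m + n - 1.
    lower = math.ceil(math.log2(math.comb(m + n, m)))
    upper = m + n - 1
    return lower, upper

print(merge_bounds(4, 4))   # (7, 7): equal sizes, bounds coincide
print(merge_bounds(1, 10))  # (4, 10): bounds drift apart as m << n
```

This makes the closing remark concrete: for m = n the two bounds meet, while for m much smaller than n the gap between them grows.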

An algorithm must have the following properties

Correctness: it should produce the output according to the requirements of the algorithm.
Finiteness: the algorithm must terminate after a finite number of instructions have been executed.
Absence of ambiguity: each step must be well defined, having only one interpretation.
Definition of sequence: each step must have a uniquely defined preceding and succeeding step; the first step and the last step must be noted.
Input/Output: the number and type of required inputs and results must be stated.
Feasibility: it must be feasible to execute each instruction.
Flexibility: it should be possible to make changes to the algorithm without great effort.
Efficiency: efficiency is always measured in terms of the time and space required to implement the algorithm; the algorithm should use as little running time and memory space as possible within the limits of acceptable development time.
Independence: an algorithm should specify the inputs, the outputs, and how the outputs are derived, without reference to the language in which it will be implemented. Therefore, we can say that an algorithm is independent of language.

Analyzing Algorithm Control Structure

To analyze a programming code or algorithm, we must note that each instruction affects the overall performance of the algorithm, and therefore each instruction must be analyzed separately to analyze overall performance. However, some algorithm control structures are present in every programming code and have a specific asymptotic analysis.

Some algorithm control structures are:
1. Sequencing
2. If-then-else
3. for loop
4. while loop

1. Sequencing: suppose our algorithm consists of two parts, A and B, where A takes time tA and B takes time tB. By the sequence rule, the total computation time is tA + tB; by the maximum rule, this is Θ(max(tA, tB)).

Example: suppose tA = O(n) and tB = θ(n²). Then the total computation time can be calculated as

Computation time = tA + tB = Θ(max(tA, tB)) = Θ(max(O(n), θ(n²))) = θ(n²)
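The sequencing and maximum rules can be illustrated with a step-counting sketch (counting basic operations rather than wall-clock time; the functions are our illustration):

```python
def part_a(n):
    # O(n) part: performs exactly n basic steps.
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def part_b(n):
    # Theta(n^2) part: performs exactly n * n basic steps.
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

n = 1000
total = part_a(n) + part_b(n)   # sequence rule: costs add
# The n^2 term dominates the sum, so total is Theta(max(n, n^2)) = Theta(n^2).
print(total, n * n)  # 1001000 1000000
```

For n = 1000 the linear part contributes only 0.1% of the total, which is why the maximum rule lets us discard it asymptotically.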

Asymptotic Analysis of algorithms (Growth of function)

Asymptotic notation: the word asymptotic means approaching a value or curve arbitrarily closely (i.e., as some sort of limit is taken).

Asymptotic analysis: it is a technique of representing limiting behavior. The methodology has applications across science; in computer science it is used in the analysis of algorithms to study their performance when applied to very large input datasets.

Resources for an algorithm are usually expressed as a function of the input size. Often this function is messy and complicated to work with, so to study the growth of a function efficiently we reduce it to its important part. Let f(n) = an² + bn + c. In this function, the n² term dominates when n gets sufficiently large. Dominant terms are what we are interested in: when reducing a function, we ignore all constants and coefficients and look at the highest-order term in n.

The simplest example is the function f(n) = n² + 3n; the term 3n becomes insignificant compared to n² when n is very large. The function f(n) is said to be asymptotically equivalent to n² as n → ∞, written symbolically as f(n) ~ n².

Asymptotic notations are used to write the fastest and slowest possible running times for an algorithm, also referred to as the 'best case' and 'worst case' scenarios respectively. In asymptotic notation we derive the complexity in terms of the size of the input (for example, in terms of n). These notations are important because, without running the algorithm, we can estimate its complexity.

Why is asymptotic notation important?
1. It gives a simple characterization of an algorithm's efficiency.
2. It allows the performance of various algorithms to be compared.

Asymptotic Notations: asymptotic notation is a way of comparing functions that ignores constant factors and small input sizes. Three notations are used to express the running-time complexity of an algorithm:

1. Big-oh notation: big-oh is the formal method of expressing the upper bound of an algorithm's running time; it is a measure of the longest amount of time the algorithm can take. The function f(n) = O(g(n)) [read as "f of n is big-oh of g of n"] if and only if there exist positive constants c and n0 such that

f(n) ≤ c·g(n) for all n > n0
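A quick numeric check of the big-oh definition for the example f(n) = n² + 3n, using the witness constants c = 2 and n0 = 3 (our choice; any larger constants also work):

```python
def f(n):
    return n * n + 3 * n

# Witnesses for f(n) = O(n^2): c = 2, n0 = 3,
# since n^2 + 3n <= 2n^2  <=>  3n <= n^2  <=>  n >= 3.
c, n0 = 2, 3
for n in range(n0 + 1, 1000):
    assert f(n) <= c * n * n
print("f(n) <= 2*n^2 holds for all tested n > 3")
```

Picking explicit constants like this is exactly what the definition demands: one fixed pair (c, n0) must work for every n beyond n0, not a different pair per n.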

Complexity of Algorithm

It is very convenient to classify algorithms based on the relative amount of time or space they require, and to specify the growth of the time/space requirement as a function of the input size.

Time complexity: the running time of a program as a function of the size of the input.

Space complexity: some forms of analysis can be done based on how much space an algorithm needs to complete its task. Space complexity analysis was critical in the early days of computing, when storage space on the computer was limited. From this point of view, algorithms are divided into those that need extra space to do their work and those that work in place. Nowadays the problem of space rarely occurs, because space on computers (internal or external) is usually sufficient.

Broadly, we perform the following types of analysis:
Worst case: f(n), defined by the maximum number of steps taken on any instance of size n.
Best case: f(n), defined by the minimum number of steps taken on any instance of size n.
Average case: f(n), defined by the average number of steps taken over all instances of size n.
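A small sketch (ours) making best and worst case concrete, counting the comparisons performed by a linear search:

```python
def linear_search_steps(a, item):
    # Count comparisons until item is found (or the list is exhausted).
    steps = 0
    for x in a:
        steps += 1
        if x == item:
            return steps
    return steps

a = [5, 3, 8, 1, 9]
print(linear_search_steps(a, 5))   # 1: best case, item is the first element
print(linear_search_steps(a, 9))   # 5: worst case, item is the last element
```

For a successful search, averaging over all positions gives about (n + 1) / 2 comparisons, which is the average-case figure sitting between these two extremes.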

Iteration Methods

Recurrence Relation

A recurrence is an equation or inequality that describes a function in terms of its values on smaller inputs. To solve a recurrence relation means to obtain a function defined on the natural numbers that satisfies the recurrence. For example, the worst-case running time T(n) of the MERGE-SORT procedure is described by the recurrence

T(n) = θ(1)              if n = 1
     = 2T(n/2) + θ(n)    if n > 1

There are four methods for solving recurrences:
1. Substitution method
2. Iteration method
3. Recursion tree method
4. Master method

1. Substitution Method: the substitution method consists of two main steps:
1. Guess the solution.
2. Use mathematical induction to find the boundary condition and show that the guess is correct.

Example 1: solve the recurrence T(n) = T(n/2) + 1 by the substitution method. We have to show that it is asymptotically bounded by O(log n).

Solution: for T(n) = O(log n) we have to show that, for some constant c,

T(n) ≤ c log n

Put this into the given recurrence equation:

T(n) ≤ c log(n/2) + 1
     = c log n - c log 2 + 1
     ≤ c log n          for c ≥ 1 (logarithms base 2)

Thus T(n) = O(log n).

Example 2: consider the recurrence T(n) = 2T(n/2) + n, n > 1. Find an asymptotic bound on T.

Solution: we guess the solution is O(n log n). Thus, for a constant c,

T(n) ≤ c n log n

Put this into the given recurrence equation:

T(n) ≤ 2c(n/2) log(n/2) + n
     = c n log n - c n log 2 + n
     = c n log n - n(c log 2 - 1)
     ≤ c n log n        for c ≥ 1 (logarithms base 2)

Thus T(n) = O(n log n).

2. Iteration Method: expand the recurrence and express it as a summation of terms of n and the initial condition.

Example 1: consider the recurrence

T(n) = 1          if n = 1
     = 2T(n-1)    if n > 1

Solution:

T(n) = 2T(n-1)
     = 2[2T(n-2)] = 2² T(n-2)
     = 2²[2T(n-3)] = 2³ T(n-3)
     = 2³[2T(n-4)] = 2⁴ T(n-4)      (Eq. 1)

Repeating the procedure i times:

T(n) = 2^i T(n-i)

Put n - i = 1, i.e. i = n - 1, in (Eq. 1):

T(n) = 2^(n-1) T(1) = 2^(n-1) · 1      {T(1) = 1, given}
     = 2^(n-1)

Example 2: consider the recurrence T(n) = T(n-1) + 1, with T(1) = θ(1).

Solution:

T(n) = T(n-1) + 1
     = (T(n-2) + 1) + 1 = T(n-2) + 2
     = (T(n-3) + 1) + 2 = T(n-3) + 3
     = T(n-4) + 4
     = T(n-5) + 5
     ...
     = T(n-k) + k

Putting k = n - 1, T(n-k) = T(1) = θ(1), so

T(n) = θ(1) + (n-1) = 1 + n - 1 = n = θ(n)
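The closed forms obtained by the iteration method can be checked numerically; this sketch (ours) evaluates the recurrence of Example 1 directly and compares it with 2^(n-1):

```python
def t_rec(n):
    # Direct evaluation of T(n) = 2*T(n-1), T(1) = 1.
    return 1 if n == 1 else 2 * t_rec(n - 1)

# Compare against the closed form T(n) = 2^(n-1) derived by iteration.
for n in range(1, 15):
    assert t_rec(n) == 2 ** (n - 1)
print("T(n) = 2^(n-1) verified for n = 1..14")
```

The same cross-check works for Example 2: evaluating T(n) = T(n-1) + 1 with T(1) = 1 gives exactly n, matching the θ(n) result.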

LIVING PEACEFULLY

To live peacefully, one should start by installing peace within oneself: charity begins at home. Then one can spread peace to the family, to the organization where one works, and then to the world, including the environment. Only those who are at peace can spread peace; you cannot gift an article you do not possess. The essence of oriental philosophy is that one should not fight for peace, which is an oxymoron. War or peace can be won only by peace, and not by wars!

One should adopt the following means to live peacefully in the world.

Nurture:
Order in one's life (self-regulation, discipline, and duty).
Pure thoughts in one's soul (loving others, blessing others, being friendly, and not criticizing or hurting others by thought, word, or deed).
Creativity in one's head (useful and constructive).
Beauty in one's heart (love, service, happiness, and peace).

Get:
Good health/body (physical strength for service, and to enjoy the academic environment in the institution).
Help the needy with head, heart, and hands (charity); service to the poor is considered holier than service to God. Do not hurt or torture others physically, verbally, or mentally.

THEORIES ABOUT RIGHT ACTION

The main objectives of studying right action are:
To understand the distinction between a theory of the Right and a theory of the Good.
To understand utilitarianism, ethical egoism, and consequentialism.
To know how rule utilitarianism differs from act utilitarianism.

Utilitarianism is the moral philosophy that puts general well-being at the centre of things. It concentrates upon benevolence, or solidarity, or identification with the pleasure and pain or welfare of people as a whole. The good is identified with the greatest happiness of the greatest number, and the aim of action is to advance the good (this is known as the principle of utility). We should always do whatever will produce the greatest possible balance of happiness over unhappiness for everyone who will be affected by our action. Utilitarianism is often summed up as doing 'the greatest good for the greatest number'.

Theories of right action are philosophical concepts concerned with human nature and with people's rights and duties to lead a life of ethical values. The concepts mainly focus on an individual person's actions and their consequences. Different versions of right action were introduced by different ethicists during the eighteenth-century Enlightenment era: utilitarianism, rights ethics, and duty ethics.

Our task here is to define the concept of right action. We may have different perspectives on and understandings of the concept. After having learnt the concepts of utilitarianism, liberty rights, welfare rights, and duty ethics, we can theorize the concept of right action as follows:
Right action is action that is in accordance with the law.
Right action considers the good consequences of the action.
Right action is action that benefits all: students, teachers, society, industry, etc.
Right action is action whose consequences do not violate the moral rules.