Abstract Data Type (ADT):
- A data type can be considered abstract when it is defined in terms of the operations that can be performed on it, while its implementation is hidden.
Definition:
- A set of data and associated operations that are precisely specified, independent of any particular implementation.
Example: Built-in primitive types, e.g., int, char, float.
- A data structure is a specialized format for organizing, processing, retrieving and storing data.
- A data structure is a particular way of organizing data in a computer so that it can be used effectively.
Properties of Data Structure:
- Every data structure is used to organize a large amount of data.
- Every data structure follows a particular principle.
Types of Data Structures:
- There are two types of data structures: primitive and non-primitive.
- Primitive data structures are the built-in data types, whereas non-primitive data structures are built from primitive data types.
Linear Data Structure:
- If a data structure organizes data sequentially, it is a linear data structure.
- A linear data structure has its data elements arranged in a sequential manner, and each element is connected to its previous and next element.
- This connection makes it possible to traverse a linear data structure in a single level and in a single run.
Example: Array, stack, queue, linked list (a linked-list sketch follows below).
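As a minimal sketch (assuming a singly linked list as the running example, with made-up values), the following C fragment shows how each element links to the next so the structure can be traversed in a single run:

#include <stdio.h>

/* A node of a singly linked list: each element points to the next,
   so the whole structure can be traversed in a single run. */
struct Node {
    int data;
    struct Node *next;
};

int main(void) {
    /* Build the list 1 -> 2 -> 3 on the stack for simplicity */
    struct Node c = {3, NULL};
    struct Node b = {2, &c};
    struct Node a = {1, &b};

    /* Single-pass traversal: each element is visited exactly once */
    for (struct Node *p = &a; p != NULL; p = p->next)
        printf("%d ", p->data);
    printf("\n");
    return 0;
}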
Non Linear Data Structure:
- If a data structure organizes data in a non-sequential order, it is called a non-linear data structure.
- A non-linear data structure has no set sequence connecting all its elements, and each element can have multiple paths to other elements.
Example:
- Tree
- Graph
Fig: Non-linear data structure
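As a minimal sketch (the node layout and values here are illustrative assumptions), the following C fragment shows a binary tree node, where each element can branch to multiple other elements:

#include <stdio.h>

/* A binary tree node: unlike a linear structure, each element can
   branch to more than one other element (left and right children). */
struct TreeNode {
    int data;
    struct TreeNode *left;
    struct TreeNode *right;
};

/* Traversal must follow multiple paths: recurse left, then right */
void inorder(const struct TreeNode *t) {
    if (t == NULL) return;
    inorder(t->left);
    printf("%d ", t->data);
    inorder(t->right);
}

int main(void) {
    struct TreeNode l = {1, NULL, NULL}, r = {3, NULL, NULL};
    struct TreeNode root = {2, &l, &r};
    inorder(&root);   /* prints: 1 2 3 */
    printf("\n");
    return 0;
}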
Static Data Structure:
- It is an organization or collection of data in memory that is fixed in size.
- As a result, the maximum size must be known in advance, because the memory cannot be reallocated later.
Example: Arrays and other built-in data types; a printed newspaper is a real-world analogy.
Dynamic Data Structure:
- A dynamic data structure can change its size: the structure can grow or shrink at run time as needed.
- It is flexible, consuming additional memory when needed and freeing memory when possible, for improved efficiency.
Example: Linked list, web pages
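A minimal C sketch contrasting the two, assuming a fixed-size array for the static case and malloc/realloc for the dynamic case:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Static: capacity fixed at compile time; it cannot grow later */
    int fixed[9];
    fixed[0] = 10;
    printf("static capacity: 9 (first element: %d)\n", fixed[0]);

    /* Dynamic: capacity chosen at run time, grown later with realloc */
    int n = 4;
    int *dyn = malloc(n * sizeof *dyn);
    if (dyn == NULL) return 1;
    for (int i = 0; i < n; i++) dyn[i] = i;

    /* Grow the block to hold twice as many elements */
    int *bigger = realloc(dyn, 2 * n * sizeof *dyn);
    if (bigger == NULL) { free(dyn); return 1; }
    dyn = bigger;
    printf("dynamic capacity grown from %d to %d\n", n, 2 * n);

    free(dyn);
    return 0;
}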
Persistent Data Structure:
- A persistent data structure is one where multiple versions are simultaneously accessible: after an update, both the old and the new version can be used.
- This data structure always preserves the previous version of itself on modification.
- Such data structures are immutable.
- A data structure is fully persistent if every version can be both accessed and modified.
Example: Tuples and strings (as in Python)
Ephemeral Data Structure:
- An ephemeral data structure (EDS) is one for which only one version is available at a time: after an update, the structure as it existed before the update is lost.
- In imperative languages, most data structures are ephemeral.
- Much of the data stored on computers, such as data in RAM and caches, is temporary and is therefore referred to as ephemeral.
- Such temporary, transient data may be deleted as often as every few hours.
- Such data structures are mutable.
Example: Lists, sets, and dictionaries (as in Python)
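A minimal C sketch contrasting an ephemeral in-place update with a persistent-style copy-on-write update; the helper persistent_set is a made-up name for illustration:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Persistent-style update: copy first, then modify the copy,
   so the old version remains accessible afterwards. */
int *persistent_set(const int *old, size_t n, size_t i, int value) {
    int *fresh = malloc(n * sizeof *fresh);
    if (fresh == NULL) return NULL;
    memcpy(fresh, old, n * sizeof *fresh);
    fresh[i] = value;
    return fresh;            /* the caller now holds both versions */
}

int main(void) {
    int a[3] = {1, 2, 3};

    /* Ephemeral update: modify in place; the previous value is lost */
    a[0] = 99;

    /* Persistent update: the original array is preserved unchanged */
    int *v2 = persistent_set(a, 3, 1, 42);
    if (v2 == NULL) return 1;
    printf("old version: %d %d %d\n", a[0], a[1], a[2]);    /* 99 2 3  */
    printf("new version: %d %d %d\n", v2[0], v2[1], v2[2]); /* 99 42 3 */
    free(v2);
    return 0;
}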
Performance Analysis:
- There may be multiple algorithms that solve the same problem; when we have more than one, we need to select one.
- The performance of an algorithm means predicting the resources the algorithm requires to perform its task.
- Performance analysis helps us select the best algorithm to solve a problem.
- Performance analysis of an algorithm is the process of calculating the space and time required by that algorithm.
Complexity:
- The complexity of an algorithm is a function f(n) that measures the time and space used by the algorithm in terms of the input size n.
- Algorithm complexity is commonly represented with the O(f) notation.
Complexity has two types:
1. Time complexity
2. Space complexity
Space Complexity:
The total amount of computer memory required by an algorithm to complete its execution is called the space complexity of that algorithm.
When we design an algorithm to solve a problem, it needs some computer memory to complete its execution. For any algorithm, memory is required for the following purposes:
- To store program instructions.
- To store constant values.
- To store variable values.
- And a few other things, such as function calls, jump statements, etc.
Generally, when a program is under execution, it uses computer memory for three purposes (a small annotated example follows the list):
Instruction Space: It is the amount of memory used to store the compiled version of the instructions.
Environmental Stack: It is the amount of memory used to store information of partially executed functions at the time of function call.
Data Space: It is the amount of memory used to store all the variables and constants.
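A small C example, annotated with where each of the three kinds of memory is used (the function and the sizes are illustrative):

#include <stdio.h>

#define MAX 100          /* constant value: stored in data space */

/* sum() needs data space for its parameters and locals, and one
   environment-stack frame each time it is called. */
int sum(const int a[], int n) {
    int total = 0;                 /* variable value: data space */
    for (int i = 0; i < n; i++)    /* loop counter: data space   */
        total += a[i];
    return total;
}

int main(void) {
    int a[MAX];                    /* data space: MAX integers   */
    for (int i = 0; i < 10; i++) a[i] = i;
    /* The compiled instructions of sum() and main() occupy
       instruction space; calling sum() pushes one stack frame
       onto the environmental stack. */
    printf("%d\n", sum(a, 10));
    return 0;
}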
Time Complexity:
The time complexity of an algorithm is the total amount of time required by an algorithm to complete its execution.
Generally, the running time of an algorithm depends on the following:
1. Whether it runs on a single-processor or multi-processor machine.
2. Whether it is a 32-bit or 64-bit machine.
3. The read and write speed of the machine.
4. The time the machine needs to perform arithmetic operations, logical operations, assignments, return statements, and input of data.
Constant Time Complexity:
If a program requires a fixed amount of time for all input values, its time complexity is said to be Constant Time Complexity.
Linear Time Complexity:
If the amount of time required by an algorithm grows in proportion to the size of the input, its time complexity is said to be Linear Time Complexity.
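A minimal C sketch of both cases (the function names are illustrative): first_element runs in constant time regardless of n, while sum must touch all n elements:

#include <stdio.h>

/* Constant time, O(1): one array access regardless of n */
int first_element(const int a[]) {
    return a[0];
}

/* Linear time, O(n): the work grows in proportion to n */
int sum(const int a[], int n) {
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];
    return total;
}

int main(void) {
    int a[5] = {4, 8, 15, 16, 23};
    printf("O(1): %d\n", first_element(a));
    printf("O(n): %d\n", sum(a, 5));
    return 0;
}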
Asymptotic Notation:
- Asymptotic notations are mathematical expressions used to represent the complexity of an algorithm.
Types of Asymptotic Notation:
- Big - Oh (O)
- Big - Omega (Ω)
- Big - Theta (Θ)
Big - Oh Notation (O):
- Big - Oh notation is used to define the upper bound of an algorithm in terms of time complexity.
- That means Big - Oh notation always indicates the maximum time required by an algorithm over all input values.
- In other words, Big - Oh notation describes the worst case of an algorithm's time complexity.
Definition:
Consider a function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If there exist constants C > 0 and n0 >= 1 such that f(n) <= C g(n) for all n >= n0, then we can represent f(n) as O(g(n)):
f(n) = O(g(n))
Consider the following graph, drawn for the values of f(n) and C g(n), with the input size n on the X-axis and the time required on the Y-axis.
In the graph, after a particular input value n0, C g(n) is always greater than f(n), which indicates the algorithm's upper bound.
Example:
Suppose we create an array of size 9:
int a[9];
Case 1:
Suppose we search for the number 9 in the array: 9 is located at index 8, the last position, so the search must traverse all n elements, which takes O(n) time.
Case 2:
Suppose we search for the number 55 in the array: if 55 is not present, all n elements are examined before the search fails, so the worst case is again O(n).
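A hedged C sketch of the linear search behind both cases; the original notes do not list the array's contents, so sample values are assumed (with 9 placed at index 8, as stated above):

#include <stdio.h>

/* Linear search: returns the index of key, or -1 if it is absent.
   Worst case (key at the end, or missing): n comparisons, O(n). */
int linear_search(const int a[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

int main(void) {
    /* sample contents; the original notes do not show the values */
    int a[9] = {12, 5, 30, 54, 7, 21, 3, 18, 9};
    printf("index of 9:  %d\n", linear_search(a, 9, 9));   /* last slot: n comparisons    */
    printf("index of 55: %d\n", linear_search(a, 9, 55));  /* absent: still n comparisons */
    return 0;
}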
Example:
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as O(g(n)), it must satisfy f(n) <= C g(n) for some constants C > 0 and n0 >= 1:
f(n) <= C g(n)
3n + 2 <= C n
Check at n = 2 with C = 4: f(2) = 3(2) + 2 = 8 and C g(2) = 4(2) = 8, so 8 <= 8.
The condition holds for C = 4 and all n >= 2 (that is, n0 = 2).
By using Big - Oh notation we can represent the time complexity as follows:
3n + 2 = O(n)
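A quick numeric check of the inequality 3n + 2 <= 4n for the first few values of n >= 2 (a sanity check, not a proof):

#include <stdio.h>

/* Verify that 3n + 2 <= 4n for every n >= 2,
   i.e. f(n) <= C*g(n) with C = 4 and n0 = 2. */
int main(void) {
    for (int n = 2; n <= 10; n++)
        printf("n=%2d: f(n)=%2d <= C*g(n)=%2d\n", n, 3*n + 2, 4*n);
    return 0;
}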
Big - Omega Notation (Ω):
- Big - Omega notation is used to define the lower bound of an algorithm in terms of time complexity.
- That means Big - Omega notation always indicates the minimum time required by an algorithm over all input values.
- In other words, Big - Omega notation describes the best case of an algorithm's time complexity.
Definition:
Consider a function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If there exist constants C > 0 and n0 >= 1 such that f(n) >= C g(n) for all n >= n0, then we can represent f(n) as Ω(g(n)):
f(n) = Ω(g(n))
Example:
Suppose we create an array of size 9:
int a[9];
Case 1:
Suppose we search for the number 12 in the array: 12 is located at index 0, so only one comparison is needed; the best case takes constant time, Ω(1).
Case 2:
Suppose we search for the number 45 in the array: wherever (or whether) it is found, at least one comparison is always required, and that minimum is the lower bound.
Consider the following graph, drawn for the values of f(n) and C g(n), with the input size n on the X-axis and the time required on the Y-axis.
In the graph, after a particular input value n0, C g(n) is always less than f(n), which indicates the algorithm's lower bound.
Example:
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Ω(g(n)), it must satisfy f(n) >= C g(n) for some constants C > 0 and n0 >= 1:
f(n) >= C g(n)
⇒3n + 2 >= C n
The condition holds for C = 1 and all n >= 1 (that is, n0 = 1).
By using Big - Omega notation we can represent the time complexity as follows:
3n + 2 = Ω(n)
Big - Theta Notation (Θ):
- Big - Theta notation is used to define the tight bound of an algorithm in terms of time complexity.
- That means Big - Theta notation encloses the time required by an algorithm between a lower and an upper constant multiple of g(n).
- This tight bound is commonly used to describe the average case of an algorithm's time complexity.
Definition:
Consider a function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If there exist constants C1 > 0, C2 > 0 and n0 >= 1 such that C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, then we can represent f(n) as Θ(g(n)):
f(n) = Θ(g(n))
Example:
Suppose we create an array of size 9:
int a[9];
Case 1:
Suppose we search for the number 54 in the array: 54 is located at index 3, near the middle, so the search takes 4 comparisons; on average an element is found after about n/2 comparisons, which is Θ(n).
Case 2:
Suppose we search for the number 67 in the array: the number of comparisons again falls somewhere between the best case (one comparison) and the worst case (n comparisons).
Consider the following graph, drawn for the values of f(n), C1 g(n) and C2 g(n), with the input size n on the X-axis and the time required on the Y-axis.
In the graph, after a particular input value n0, C1 g(n) is always less than f(n) and C2 g(n) is always greater than f(n), which indicates the algorithm's tight bound.
Example:
Consider the following f(n) and g(n)...
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Θ(g(n)), it must satisfy C1 g(n) <= f(n) <= C2 g(n) for some constants C1 > 0, C2 > 0 and n0 >= 1:
C1 g(n) <= f(n) <= C2 g(n)
⇒C1 n <= 3n + 2 <= C2 n
Check at n = 2 with C1 = 1 and C2 = 4: C1 g(2) = 1(2) = 2, f(2) = 3(2) + 2 = 8, and C2 g(2) = 4(2) = 8, so 2 <= 8 <= 8.
The condition holds for C1 = 1, C2 = 4 and all n >= 2 (that is, n0 = 2).
By using Big - Theta notation we can represent the time complexity as follows:
3n + 2 = Θ(n).
Analysis of Programming Constructs:
Algorithmic Strategies:
- General approaches to constructing efficient solutions to problems.
- Some methods provide templates suited to solving a broad range of diverse problems.
- Although more than one technique may be applicable to a specific problem, an algorithm constructed with one approach is often clearly superior to equivalent solutions built using alternative techniques.
There are different types of Algorithmic Strategies:
1. Divide and Conquer
2. Greedy method
3. Backtracking
4. Branch and bound
5. Dynamic programming
Divide and Conquer:
Divide and conquer breaks a problem into subproblems that are similar to the original problem, recursively solves the subproblems, and finally combines their solutions to solve the original problem.
Divide: Break the original problem into a set of subproblems.
Conquer: Solve every subproblem individually, recursively.
Combine: Put together the solutions of the subproblems to get the solution to the whole problem.
The following well-known algorithms are based on the Divide & Conquer approach (a binary search sketch follows the list):
- Binary Search
- Merge sort
- Maximum and Minimum Problem
- Quick sort
- Tower of Hanoi
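A minimal C sketch of binary search as a divide-and-conquer instance: each call divides the sorted range in half and conquers one half recursively; no combine step is needed (the array values are assumed):

#include <stdio.h>

/* Binary search on a sorted array: divide the range in half,
   conquer one half recursively, nothing to combine. O(log n). */
int binary_search(const int a[], int lo, int hi, int key) {
    if (lo > hi) return -1;            /* range empty: not found */
    int mid = lo + (hi - lo) / 2;      /* divide                 */
    if (a[mid] == key) return mid;
    if (a[mid] < key)                  /* conquer the right half */
        return binary_search(a, mid + 1, hi, key);
    return binary_search(a, lo, mid - 1, key);  /* or the left half */
}

int main(void) {
    int a[8] = {2, 5, 8, 12, 16, 23, 38, 56};
    printf("index of 23: %d\n", binary_search(a, 0, 7, 23)); /* 5  */
    printf("index of 7:  %d\n", binary_search(a, 0, 7, 7));  /* -1 */
    return 0;
}

The recursion mirrors the divide/conquer/combine template directly; an iterative version would avoid the call-stack cost.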
Greedy method:
- A greedy algorithm makes the choice that seems best at the moment.
- That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.
- A greedy algorithm has only one shot at computing the optimal solution: it never goes back and reverses a decision (a fractional knapsack sketch follows the example list below).
Example:
- Minimum Spanning Tree
- Fractional Knapsack Problem
- Graph vertex cover
- Huffman tree
- Job Sequencing problem
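A minimal C sketch of the greedy strategy on the fractional knapsack problem; the item values and weights are assumed, and the items are taken as pre-sorted by value/weight ratio for brevity:

#include <stdio.h>

/* Fractional knapsack: repeatedly take the item with the best
   value/weight ratio (the locally optimal choice) until the
   capacity is exhausted, taking a fraction of the last item. */
int main(void) {
    /* (value, weight) pairs, pre-sorted by ratio in descending order */
    double value[]  = {60, 100, 120};
    double weight[] = {10,  20,  30};
    double capacity = 50.0, total = 0.0;

    for (int i = 0; i < 3 && capacity > 0; i++) {
        if (weight[i] <= capacity) {        /* take the whole item */
            total += value[i];
            capacity -= weight[i];
        } else {                            /* take a fraction     */
            total += value[i] * (capacity / weight[i]);
            capacity = 0;
        }
    }
    printf("maximum value: %.1f\n", total); /* prints: 240.0 */
    return 0;
}

For the fractional variant this greedy choice is provably optimal; for the 0/1 knapsack problem it is not, which is why that variant needs dynamic programming.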