What is Amortized Analysis?
Amortized analysis is a method for analyzing the cost of operations on a data structure, averaging the expensive operations out over time. It is common for a data structure to have one costly operation that is performed only rarely, yet a naive worst-case analysis makes the whole structure look expensive because of that one operation.
Amortized worst-case analysis fixes this by averaging the costly operations over the whole sequence: identify the most expensive sequence of operations the structure can perform, then average the costs across that sequence.
Example of Amortized Analysis
Dynamic arrays allow items to be appended in O(1) amortized time. A fixed-size array, however, cannot always insert in constant time: when the array is full, it must first double its size and copy the existing elements over, and only then insert the new element.
Let ci denote the cost of the i-th insertion into the dynamic array.
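As a concrete reference point, here is a minimal sketch of such a dynamic array in Python (the class and field names are illustrative, not from any particular library):

```python
class DynamicArray:
    """Minimal sketch of a dynamic array that doubles its capacity
    when full (illustrative, not a library implementation)."""

    def __init__(self):
        self.capacity = 1                    # allocated length of the backing array
        self.size = 0                        # number of elements actually stored
        self.data = [None] * self.capacity

    def append(self, item):
        if self.size == self.capacity:
            # Full: double the capacity and copy every existing element.
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
            self.data = new_data
        self.data[self.size] = item
        self.size += 1
```

Appending 10 elements, for instance, leaves size = 10 and capacity = 16: the capacity only ever doubles, and the expensive copying happens only at those doubling steps.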
Here are the main methods used for amortized analysis:
- The aggregate method
- The potential (or physicist’s) method
- The accounting (or banker’s) method
Consider the cost ci of the i-th insertion, together with the table size si after that insertion:

i : 1  2  3  4  5  6  7  8  9   10
si: 1  2  4  4  8  8  8  8  16  16
ci: 1  2  3  1  5  1  1  1  9   1

Alternatively, ci = 1 + di, where di is the cost of doubling the table at step i: di = i − 1 when i − 1 is an exact power of 2, and di = 0 otherwise. Now, all the 1’s sum to n, and the di also sum to O(n). That is,

Σ1≤i≤n ci = n + Σ0≤j≤m 2^j ≤ n + 2n,

where m = ⌊log₂(n − 1)⌋, and both terms on the right-hand side of the inequality are O(n).
The total running time of n insertions is therefore O(n).
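The derivation above can be checked with a short simulation, assuming the usual cost model of one unit per element written or copied (the function name is ours):

```python
def insertion_costs(n):
    """Actual cost c_i of each of n insertions into a doubling array,
    counting 1 unit per element written or copied (illustrative model)."""
    costs = []
    capacity, size = 1, 0
    for _ in range(n):
        if size == capacity:
            costs.append(size + 1)   # copy `size` elements, then write the new one
            capacity *= 2
        else:
            costs.append(1)          # free slot: just write the element
        size += 1
    return costs

costs = insertion_costs(1000)
assert costs[:5] == [1, 2, 3, 1, 5]   # expensive steps where i - 1 is a power of 2
assert sum(costs) <= 3 * 1000         # total actual time is O(n)
```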
Potential (Physicist’s) Method
Suppose Φ (read “Phi”) is a potential function on the states of a data structure, with the following properties:
- Φ(h0) = 0, where h0 is the data structure’s initial state.
- Φ(ht) ≥ 0 for all states ht.
Now define the amortized time of an operation as
c + Φ(h’) − Φ(h),
where c is the actual cost of the operation and h and h’ are the states of the data structure before and after the operation. Ideally, Φ should be defined so that the amortized time of each operation is small: the change in potential should be positive for low-cost operations and negative for high-cost operations.
Consider a sequence of n operations with actual times c0, c1, c2, …, cn−1, taking the data structure from its initial state h0 through states h1, h2, …, hn. The total amortized time telescopes:
(c0 + Φ(h1) − Φ(h0)) + (c1 + Φ(h2) − Φ(h1)) + … + (cn−1 + Φ(hn) − Φ(hn−1))
= c0 + c1 + … + cn−1 + Φ(hn) − Φ(h0)
= c0 + c1 + … + cn−1 + Φ(hn).
Thus the total amortized time overestimates the total actual time by Φ(hn), which by assumption is nonnegative. In this sense, the total amortized time is always an upper bound on the total actual time.
For the dynamically resizable array with doubling, we can use the potential function
Φ(h) = 2n − m,
where n is the current number of elements,
and m is the current length of the array.
Start with an array of length 0; inserting the first element allocates an array of length 1, and after that the array size is doubled whenever more space is needed. Then Φ(h0) = 0 and Φ(ht) ≥ 0 for all t; the latter inequality holds because the array always contains at least half as many elements as its length.
We now want to show that adding an element takes amortized constant time. There are two cases.
- If n < m, the actual cost is 1; n increases by 1 and m does not change, so the potential increases by 2. The amortized running time is 1 + 2 = 3.
- If n = m, the array is doubled. The actual time is n + 1 (copy n elements, then insert), and the potential drops from 2n − n = n to 2(n + 1) − 2n = 2. The amortized running time is n + 1 + (2 − n) = 3.
So in both cases, the amortized time is O(1).
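The two cases can also be verified mechanically. The sketch below (assuming the same doubling array and cost model as above; names are ours) computes c + Φ(h’) − Φ(h) for every insertion. Only the very first insertion differs slightly, with amortized cost 2 rather than 3, because growing from length 0 to length 1 leaves Φ = 2·1 − 1 = 1 rather than 2:

```python
def phi(n, m):
    # Potential function for the doubling array: 2n - m.
    return 2 * n - m

def amortized_costs(steps):
    """Amortized cost c + phi(after) - phi(before) of each insertion
    (illustrative sketch of the doubling array described above)."""
    n, m = 0, 0                      # empty structure: phi = 0
    out = []
    for _ in range(steps):
        before = phi(n, m)
        if n == m:                   # table full (covers the first insertion too)
            cost = n + 1             # copy n elements, then write the new one
            m = 2 * m if m > 0 else 1
        else:
            cost = 1                 # free slot: just write the element
        n += 1
        out.append(cost + phi(n, m) - before)
    return out

# First insertion: amortized 2; every later insertion: amortized exactly 3.
assert amortized_costs(100) == [2] + [3] * 99
```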
The name comes from physics: the potential function plays the role of stored energy. The trick is to choose a potential function under which the cheap operations save up enough “energy” to pay for the expensive operations later.
Accounting (Banker’s) Method
The aggregate method bounds the overall running time of a sequence of operations directly. By contrast, the accounting method charges each operation some number of time units, possibly different from its actual cost; the sum of the individual charges is then an upper bound on the total actual cost. Think of it as maintaining a bank account. Low-cost operations are charged a little more than they actually cost, and the surplus is deposited into the account; the savings later cover the deficit incurred by the high-cost operations. In this way the cost of the high-cost operations is spread over a more extended period. The charges must be set large enough that the bank balance always stays nonnegative, yet small enough that the total charge does not wildly overestimate the actual cost of the sequence.
A charge does not mean an operation actually takes that much time; the accounting is simply a bookkeeping device that makes the analysis easier.
Let c’i be the charge for the i-th operation and ci its true cost. We require that
Σ1≤i≤n ci ≤ Σ1≤i≤n c’i
for all n, i.e., the bank balance never goes negative. The total amortized time Σ1≤i≤n c’i of a sequence of n operations then bounds the total true time Σ1≤i≤n ci.
Here is how this works for the extensible array. Say inserting an element costs one unit, and moving an element when the table is doubled also costs one unit. A charge of one unit per insertion is not enough, as the moving costs can never be recouped; a charge of 2 units per insertion is not enough either, but a charge of 3 turns out to be:
Let bi denote the bank balance after the i-th insertion; with a charge of 3, bi stays nonnegative throughout.
Let m refer to the m-th inserted element. The three units charged to m are spent as follows:
- One unit pays for inserting m into the table immediately.
- One unit pays for moving m the first time the table grows after m is inserted.
- One unit is donated to element m − 2^k, where 2^k is the largest power of 2 not exceeding m; that element is moved again the first time the table grows after m is inserted.
The first time an element is moved, the move is paid for by one of its own time units; all subsequent moves are paid for by donations made by elements inserted later.
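A quick simulation of the bank account (same cost model as before; the function name is ours) confirms that a charge of 3 units per insertion keeps the balance nonnegative, while a charge of 2 does not:

```python
def bank_balance(n, charge):
    """Bank balance after each of n insertions when every insertion is
    charged `charge` units (cost model: 1 unit per element written or
    copied; illustrative sketch)."""
    balances = []
    balance, capacity, size = 0, 1, 0
    for _ in range(n):
        balance += charge
        if size == capacity:
            balance -= size + 1      # pay to copy `size` elements, then insert
            capacity *= 2
        else:
            balance -= 1             # pay for a plain insertion
        size += 1
        balances.append(balance)
    return balances

# 3 units per insertion keep the account nonnegative throughout...
assert all(b >= 0 for b in bank_balance(1000, charge=3))
# ...but 2 units per insertion eventually drive it below zero.
assert any(b < 0 for b in bank_balance(1000, charge=2))
```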
We could do slightly better by charging just 1 unit for the first insertion and 3 units for each subsequent insertion, since there is no element to copy during the first insertion.
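The cheaper charging scheme can be checked the same way (again an illustrative sketch under the same cost model):

```python
def balances_first_charge_one(n):
    """Bank balances when the first insertion is charged 1 unit and every
    later insertion 3 units (same cost model as above; sketch)."""
    balance, capacity, size = 0, 1, 0
    balances = []
    for i in range(n):
        balance += 1 if i == 0 else 3
        if size == capacity:
            balance -= size + 1      # copy `size` elements, then insert
            capacity *= 2
        else:
            balance -= 1             # plain insertion
        size += 1
        balances.append(balance)
    return balances

# The balance still never goes negative, so n insertions cost at most 3n - 2.
assert all(b >= 0 for b in balances_first_charge_one(1000))
```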