Stanford Algorithm Specialization Solutions

Here is a record of my solutions to the Stanford Algorithm Specialization on Coursera. All code is written in Python. The contents are the same as those on my GitHub:

Course 1: Divide and Conquer, Sorting and Searching, and Randomized Algorithms

Week 1: Quiz

Question 1
Question 2
Question 3
Question 4
Question 5

Week 1: Programming Assignment

In this programming assignment you will implement one or more of the integer multiplication algorithms described in lecture.

To get the most out of this assignment, your program should restrict itself to multiplying only pairs of single-digit numbers. You can implement the grade-school algorithm if you want, but to get the most out of the assignment you’ll want to implement recursive integer multiplication and/or Karatsuba’s algorithm.

So: what’s the product of the following two 64-digit numbers?



[TIP: before submitting, first test the correctness of your program on some small test cases of your own devising. Then post your best test cases to the discussion forums to help your fellow students!]

[Food for thought: the number of digits in each input number is a power of 2. Does this make your life easier? Does it depend on which algorithm you’re implementing?]

The numeric answer should be typed in the space below. So if your answer is 1198233847, then just type 1198233847 in the space provided without any spaces, commas, or other punctuation marks.

(We do not require you to submit your code, so feel free to use any programming language you want — just type the final numeric answer in the following space.)


def karatsuba(x, y):
    # Base case: single-digit factors
    if x < 10 or y < 10:
        return x * y
    n = max(len(str(x)), len(str(y)))
    half = n // 2
    a, b = divmod(x, 10 ** half)   # x = a * 10^half + b
    c, d = divmod(y, 10 ** half)   # y = c * 10^half + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    adbc = karatsuba(a + b, c + d) - ac - bd  # ad + bc with one recursive call
    return (10 ** (2 * half)) * ac + (10 ** half) * adbc + bd

x = 3141592653589793238462643383279502884197169399375105820974944592
y = 2718281828459045235360287471352662497757247093699959574966967627
print(karatsuba(x, y))

Week 2: Quiz

Week 2: Programming Assignment

Course 2: Graph Search, Shortest Paths, and Data Structures

Week 1: Quiz

Week 1: Programming Assignment

Course 4: Shortest Paths Revisited, NP-Complete Problems and What To Do About Them

Week 1: Quiz

Week 1: Programming Assignment

Week 4: Breadth-first Search (BFS)

Breadth-first search (BFS) appears in interviews a lot. If an interview question is about graphs, 99% of the time it is testing BFS. It is simple and versatile, just like binary search, so it is worth spending some time to master it. BFS can be used for tree traversal and for finding shortest paths in an undirected graph. It uses a queue to keep track of elements that are waiting to be searched. In Python, we use collections.deque for this purpose.
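
For example, here is a minimal sketch (the graph and the function name are my own, for illustration) of using collections.deque to find the shortest path length in an unweighted, undirected graph:

from collections import deque

def bfs_shortest_path(graph, start, target):
    """Return the least number of edges from start to target, or -1."""
    queue = deque([(start, 0)])   # (node, distance from start)
    visited = {start}             # mark nodes when enqueued, not when popped
    while queue:
        node, dist = queue.popleft()
        if node == target:
            return dist
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, dist + 1))
    return -1

# adjacency list of a small undirected graph
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A', 'D'],
    'D': ['B', 'C', 'E'],
    'E': ['D'],
}
print(bfs_shortest_path(graph, 'A', 'E'))  # 3

Marking nodes as visited when they are enqueued (rather than when they are popped) prevents the same node from entering the queue twice.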

Week 1: Greedy Algorithm

Greedy algorithms are very unlikely to appear in a coding interview, so you don’t have to study this topic too hard. Your task this week is just to memorize the following solutions, and you are good to go.

1. Majority Element

LeetCode #169: Majority Element

Given an array of size n, find the majority element. The majority element is the element that appears more than ⌊ n/2 ⌋ times.

You may assume that the array is non-empty and the majority element always exist in the array.

Example 1:

Input: [3,2,3]
Output: 3

Example 2:

Input: [2,2,1,1,1,2,2]
Output: 2


The main idea is that if we eliminate every pair consisting of a majority element and a non-majority element, the majority element will still be the majority in the new list, since it appears more than ⌊ n/2 ⌋ times. The algorithm is the following: suppose the first element in nums is the majority element, and initialize count to 1. Loop through the rest of nums. If the next element is the same, increment count; otherwise, decrement count. If count drops to 0, assume the next element is the majority element, and keep looping. The code is here:


class Solution:
    def majorityElement(self, nums: List[int]) -> int:
        majority = nums[0]
        count = 1
        for i in range(1, len(nums)):
            if count == 0:
                # pick the current element as the new candidate
                majority = nums[i]
                count = 1
            elif majority == nums[i]:
                count += 1
            else:
                count -= 1
        return majority


This algorithm uses a for loop that runs n - 1 times, and all operations inside the loop take O(1) time, so the overall running time is O(n). It is also an in-place algorithm, and thus takes O(1) space.

2. Create Maximum Number

LeetCode #321: Create Maximum Number

Given two arrays of length m and n with digits 0-9 representing two numbers. Create the maximum number of length k <= m + n from digits of the two. The relative order of the digits from the same array must be preserved. Return an array of the k digits.

Note: You should try to optimize your time and space complexity.

Example 1:

nums1 = [3, 4, 6, 5]
nums2 = [9, 1, 2, 5, 8, 3]
k = 5
Output: [9, 8, 6, 5, 3]

Example 2:

nums1 = [6, 7]
nums2 = [6, 0, 4]
k = 5
Output: [6, 7, 6, 0, 4]

Example 3:

nums1 = [3, 9]
nums2 = [8, 9]
k = 3
Output: [9, 8, 9]
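
These notes stop at the examples, so here is a sketch of the standard greedy approach (the helper names are my own): for each split i, take the lexicographically largest length-i subsequence of nums1 and the largest length-(k - i) subsequence of nums2 using a monotonic stack, merge the two greedily, and keep the largest candidate.

def max_number(nums1, nums2, k):
    def best_subsequence(nums, t):
        # monotonic stack: keep the lexicographically largest subsequence of length t
        stack, drop = [], len(nums) - t
        for x in nums:
            while drop and stack and stack[-1] < x:
                stack.pop()
                drop -= 1
            stack.append(x)
        return stack[:t]

    def merge(a, b):
        # always take the head of the lexicographically larger remainder
        result = []
        while a or b:
            if a > b:
                result.append(a.pop(0))
            else:
                result.append(b.pop(0))
        return result

    best = []
    for i in range(max(0, k - len(nums2)), min(k, len(nums1)) + 1):
        candidate = merge(best_subsequence(nums1, i), best_subsequence(nums2, k - i))
        best = max(best, candidate)
    return best

print(max_number([3, 4, 6, 5], [9, 1, 2, 5, 8, 3], 5))  # [9, 8, 6, 5, 3]

Note that merging must compare whole remainders (a > b), not just the heads, to break ties such as [6, 0, 4] vs [6, 7] correctly.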

Week 3: Two Pointers

As its name suggests, in a two-pointers style algorithm, we set up two pointers at the beginning. These two pointers can move in opposite directions or in the same direction, and they may have different speeds, i.e. slow and fast pointers. Two-pointer algorithms tend to have worse time complexity but O(1) space complexity. If a follow-up question asks you to solve a problem “in-place”, that is, with O(1) space complexity, you should keep two pointers in mind.

1. Opposite-direction Two Pointers:

A very simple example is to reverse a list:

# suppose s is a list of characters
def reverse(s):
    start, end = 0, len(s) - 1
    while start < end:
        s[start], s[end] = s[end], s[start]  # swap the two ends
        start += 1
        end -= 1

Here, start and end are two pointers, and they move toward each other in the list. That’s why it is called “opposite-direction”. Another example is to determine whether a string is a palindrome, without using the slice operator:

def is_palindrome(s):
    start, end = 0, len(s) - 1
    while start < end:
        if s[start] != s[end]:
            return False
        start += 1
        end -= 1
    return True

One famous question regarding opposite-direction two pointers is Two Sum:

LeetCode #1. Two Sum

Given an array of integers, return indices of the two numbers such that they add up to a specific target.

You may assume that each input would have exactly one solution, and you may not use the same element twice.


Given nums = [2, 7, 11, 15], target = 9, 

Because nums[0] + nums[1] = 2 + 7 = 9,
return [0, 1].

Using Hash Table (O(n) time and O(n) space):

This question has a very elegant solution, which uses a hash map to track every number that we have seen, together with its index:

class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        seen = {}  # value -> index
        for i, num in enumerate(nums):
            if target - num in seen:
                return [seen[target - num], i]
            seen[num] = i

The time complexity is O(n) since each hash map lookup takes O(1). The space complexity is also O(n), for constructing the hash map. Here comes a follow-up question: what if we are required to write an O(1) space complexity algorithm? Now we need to use two pointers. The algorithm is the following:

Using Two Pointers (O(n log n) time and O(1) space):

  1. Sort nums.
  2. Create two pointers representing an index at 0 and an index at len(nums) - 1.
  3. Sum the elements at the pointers.
    • If they produce the desired sum, return the pointer indices.
    • Otherwise, if the sum is less than the target, increment the left pointer
    • Otherwise, decrement the right pointer.
  4. Go back to step 3 unless the pointers are pointing to the same element, in which case return failure.
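
The steps above can be sketched as follows (the function name is my own). Since sorting destroys the original indices, this sketch pairs each value with its index first, which itself costs O(n) extra space; if the interviewer only asks for the values, sorting in place keeps the extra space at O(1):

def two_sum_sorted(nums, target):
    # pair each value with its original index, then sort by value
    pairs = sorted((num, i) for i, num in enumerate(nums))
    left, right = 0, len(pairs) - 1
    while left < right:
        total = pairs[left][0] + pairs[right][0]
        if total == target:
            return sorted([pairs[left][1], pairs[right][1]])
        elif total < target:
            left += 1
        else:
            right -= 1
    return []

print(two_sum_sorted([2, 7, 11, 15], 9))  # [0, 1]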

2. Same-direction Two Pointers:

Let’s start with a “slow and fast pointers” question:

LeetCode #26. Remove Duplicates from Sorted Array

Given a sorted array nums, remove the duplicates in-place such that each element appear only once and return the new length.

Do not allocate extra space for another array, you must do this by modifying the input array in-place with O(1) extra memory.

Example 1:

Given nums = [1,1,2],

Your function should return length = 2, with the first two elements of nums being 1 and 2 respectively.

It doesn't matter what you leave beyond the returned length.

Example 2:

Given nums = [0,0,1,1,1,2,2,3,3,4],

Your function should return length = 5, with the first five elements of nums being modified to 0, 1, 2, 3, and 4 respectively.

It doesn't matter what values are set beyond the returned length.


Since we are required to build an in-place solution, we can’t simply use a set to do this job. Two pointers is suitable in such scenario. The algorithm can be summarized as these steps:

  1. Initialize a pointer i to index 0.
  2. Initialize a pointer j to loop through the array starting from the second element.
  3. Compare the elements pointed to by i and j: if they are equal, there is a pair of duplicates, so just skip it; if they are not equal, increment i, and then copy the element at index j to index i. Note that at this point, i has already been incremented.
  4. At the end, the slice from nums[0] to nums[i] is an array with all duplicates removed. We simply return the length of this slice, which is i + 1.

If you are still confused about the updating procedure, you can look at a test case, for example, [0, 0, 1, 1, 1, 2, 2, 3], and follow above algorithm manually. Here is the code:

class Solution:
    def removeDuplicates(self, nums: List[int]) -> int:
        if len(nums) == 0:
            return 0
        i = 0
        for j in range(1, len(nums)):
            if (nums[j] != nums[i]):
                i += 1
                nums[i] = nums[j]
        return i + 1

Week 2: Binary Search

In a sorted list, binary search is a good method to search for an item. Each time we pick a “middle element” and compare it to our target. If it is smaller than the target, we keep doing binary search on its right side; if it is larger than the target, we keep doing binary search on its left side. Note that each time we get rid of about half of the list, so overall it is an O(log n) algorithm. If an interview question asks you to implement an O(log n) algorithm, most likely it is asking for a binary search. Let’s start with the most classical question on binary search:

Classical Binary Search Question:

Take a look at LeetCode #704:

LeetCode #704. Binary Search

Given a sorted (in ascending order) integer array nums of n elements and a target value, write a function to search target in nums. If target exists, then return its index, otherwise return -1.

Example 1:

Input: nums = [-1,0,3,5,9,12], target = 9
Output: 4
Explanation: 9 exists in nums and its index is 4

Example 2:

Input: nums = [-1,0,3,5,9,12], target = 2
Output: -1
Explanation: 2 does not exist in nums so return -1


  1. You may assume that all elements in nums are unique.
  2. n will be in the range [1, 10000].
  3. The value of each element in nums will be in the range [-9999, 9999].

In this case, using the classical version of binary search is perfectly fine:

class Solution:
    def search(self, nums: List[int], target: int) -> int:
        start = 0
        end = len(nums) - 1
        while start <= end:
            mid = (start + end) // 2
            if nums[mid] == target:
                return mid
            elif target < nums[mid]:
                end = mid - 1
            else:
                start = mid + 1
        return -1

However, for other types of questions, this version of binary search might not work. For example, let’s take a look at the next question:

When Classical Version doesn’t work:

Take a look at Leetcode #34:

LeetCode #34. Find First and Last Position of Element in Sorted Array

Given an array of integers nums sorted in ascending order, find the starting and ending position of a given targetvalue.

Your algorithm’s runtime complexity must be in the order of O(log n).

If the target is not found in the array, return [-1, -1].

Example 1:

Input: nums = [5,7,7,8,8,10], target = 8
Output: [3,4]

Example 2:

Input: nums = [5,7,7,8,8,10], target = 6
Output: [-1,-1]

Very likely your first attempt would fail, and it is extremely frustrating to debug. The while loop can easily become an infinite loop, and that will give you a “Time Limit Exceeded” error. To fix this, let’s use the following template.

A Better Binary Search Template:

This template code comes from here. Let’s see what is different:

class Solution:
    # @param nums: The integer array
    # @param target: Target number to find
    # @return the first position of target in nums, position start from 0 
    def binarySearch(self, nums, target):
        if len(nums) == 0:
            return -1

        start, end = 0, len(nums) - 1
        while start + 1 < end:
            mid = (start + end) // 2
            if nums[mid] < target:
                start = mid
            elif nums[mid] == target:
                end = mid
            else:
                end = mid
        if nums[start] == target:
            return start
        if nums[end] == target:
            return end
        return -1

There are two major points that make this code a nice template:

1. We want to use while start + 1 < end for the while loop. This is because start < end and start <= end may give us an infinite loop when finding the last position of target. Consider the following case:

Input: nums = [1, 1], target = 1
Output: [0,1]

If we write start < end, note that start will stay 0 forever as the loop runs. This makes while start < end or while start <= end always evaluate to True, and thus gives us an infinite loop. Using while start + 1 < end gets rid of this potential error.

2. We want to use start = mid and end = mid, instead of start = mid + 1 and end = mid - 1. In some special cases, such as finding the last position of target, when target == nums[mid] we can’t write +1 and -1, because we may miss some solutions. So for simplicity, just write start = mid and end = mid for any binary search question. This leaves a couple of extra elements to check after the loop, which makes essentially no difference from the standard O(log n).

Solution to LeetCode #34:

Here is a solution to LeetCode #34 using above template:

class Solution:
    def searchRange(self, nums: List[int], target: int) -> List[int]:
        return [self.first_position(nums, target), self.last_position(nums, target)]

    def first_position(self, nums, target):
        if len(nums) == 0:
            return -1

        start, end = 0, len(nums) - 1
        while start + 1 < end:
            mid = (start + end) // 2
            if nums[mid] < target:
                start = mid
            elif nums[mid] == target:
                end = mid
            else:
                end = mid
        if nums[start] == target:
            return start
        elif nums[end] == target:
            return end
        return -1

    def last_position(self, nums, target):
        if len(nums) == 0:
            return -1

        start, end = 0, len(nums) - 1
        while start + 1 < end:
            mid = (start + end) // 2
            if nums[mid] < target:
                start = mid
            elif nums[mid] == target:
                start = mid
            else:
                end = mid
        if nums[end] == target:
            return end
        elif nums[start] == target:
            return start
        return -1

The only two differences between first_position and last_position:

  1. When nums[mid] == target, first_position sets end = mid and last_position sets start = mid. This is very intuitive. For example, in first_position, you can’t just return mid when you find the target. You should set end to mid, so that you can keep searching the elements before it, just to see if the target occurs anywhere before the element you are looking at. Similar reasoning holds for last_position, too.
  2. After the while loop, first_position checks nums[start] first, and last_position checks nums[end] first, for a very obvious reason.

Some Examples:

Using the template, binary search questions can be really easy to solve. For example, consider this question:

LeetCode #162. Find Peak Element

A peak element is an element that is greater than its neighbors.

Given an input array nums, where nums[i] ≠ nums[i+1], find a peak element and return its index.

The array may contain multiple peaks, in that case return the index to any one of the peaks is fine.

You may imagine that nums[-1] = nums[n] = -∞.

Example 1:

Input: nums = [1,2,3,1]
Output: 2
Explanation: 3 is a peak element and your function should return the index number 2.

Example 2:

Input: nums = [1,2,1,3,5,6,4]
Output: 1 or 5 
Explanation: Your function can return either index number 1 where the peak element is 2, 
             or index number 5 where the peak element is 6.


Your solution should be in logarithmic complexity.


class Solution:
    def findPeakElement(self, nums: List[int]) -> int:
        if len(nums) == 0:
            return -1
        if len(nums) == 1:
            return 0
        start, end = 0, len(nums) - 1
        while start + 1 < end:
            mid = (start + end) // 2
            if nums[mid] > nums[mid + 1]:
                end = mid
            else:
                start = mid
        if nums[start] > nums[end]:
            return start
        return end

LeetCode Time!

Let’s do all the problems on this page:

Ad Hoc Multi-Input Functional Encryption (aMIFE)


  1. Functional Encryption: Definitions and Challenges
  2. Multi-Input Functional Encryption
  3. Ad Hoc Multi-Input Functional Encryption


In this paper, the authors proved two main results:

  1. aMIFE for general functionalities can be constructed from standard MIFE + general FE + function re-runnable two-round MPC.
  2. As a special case, aMIFE for the inner product functionality can be constructed from a standard assumption: Learning With Errors (LWE).

To figure out how it is achieved, we need to start from the building blocks:

Functional Encryption (FE):

Consider a cloud image storage service. Law enforcement may require the cloud to search for images containing a particular face. To protect users’ privacy (if there is still any), the cloud needs a restricted secret key that decrypts only the target image, and reveals only the target face in the clear while keeping all other parts of the image blurred. In this scenario, the secret key only reveals a function of the plaintext image. Traditional public key cryptography is an “all or nothing” scheme: either we can decrypt and read the entire plaintext, or we cannot decrypt it and thus learn nothing about the plaintext. This is where functional encryption (FE) comes to the rescue. Just as in traditional encryption, in FE, ciphertexts can be generated with an encryption key. However, each decryption key is associated with a function f, and decrypting an encryption of m using this key yields not m but f(m). Informally, the security definition of FE requires that nothing more than f(m) can be learned from the encryption of m and the decryption key for f.

Multi-Input Functional Encryption (MIFE):

Note that FE only works for computing a function f on a single ciphertext. Moving one step further, what if we want to compute a function defined over multiple plaintexts, given their ciphertexts each encrypted under a different key? Now we need to introduce MIFE. In MIFE, decryption keys allow computing joint functions of the different plaintexts underlying multiple ciphertexts. That is, decryption takes a key for a function f and ciphertexts c_1, ... , c_n encrypting m_1, ... , m_n, and outputs f(m_1, ... , m_n). The interaction pattern of MIFE can be summarized in the following steps:

  1. Suppose there are l users involved. This number l is counted and sent to a trusted global setup. The global setup outputs a master secret key MSK, and a set of l encryption keys \{EK_1, EK_2, ..., EK_l\}, one for each user.
  2. The global setup picks a function f, and generates a decryption key DK_f using MSK and f. The subscript indicates this decryption key is associated with f.
  3. The global setup publishes the public parameters to all users, and distributes each encryption key EK_i to user with index i. Note that this step can be done in parallel with step 2.
  4. Each user encrypts their plaintext m_i upon receiving encryption key EK_i to get a ciphertext c_i.
  5. All users send their ciphertexts to the aggregator, who now computes the result y = f(m_1, m_2, ..., m_l) using the decryption key DK_f.


The motivation of aMIFE comes from these two drawbacks of the standard MIFE presented above:

  1. In MIFE, encryption and decryption keys are generated via a global setup procedure run by an external entity, called the “key authority”. This key authority is able to decrypt all the ciphertexts gathered from users. It also picks which function f is to be computed. Therefore, it must be trusted to maintain the security of this scheme.
  2. In MIFE, the function arity must be declared at setup time. It does not support a dynamic setting in which users can join or leave.


Where is the challenging part in improving these two drawbacks?


For eliminating the key authority, a natural idea is to use MPC to replace it. However, naively using MPC as setup would introduce interaction between users, which MIFE doesn’t allow. Hence we can try Two-round MPC.

Two-round MPC:

Here is how it works:

  1. Round One: Assume the number of users n and the function to be computed f of arity n is predetermined. Each user feeds their index i and plaintext m into the first round MPC, and get the first round message \rho^{(1)} and a secret s.
  2. Round Two: All users publish their first round messages \rho^{(1)}. Using all these first round messages, together with their own secret s, each user computes the second round message \rho^{(2)}.
  3. Compute Result: All users send their second round messages to an aggregator, and the aggregator computes the final result y.

However, naively using a two-round MPC does not suffice, as it introduces two issues:

  1. This template publishes fresh public parameters PP for each DK_f, but MIFE requires publishing PP only once.
  2. EK_i is not known given just the first round MPC messages, so users are not able to encrypt.

Solution to 1: Function re-runnable two-round MPC:

We require the first round MPC message to be sent only once and reused for all subsequent second round messages. This property is called “function re-runnable”. Moreover, we require the number of users to be declared during the second round of MPC, instead of at the time the first round message is sent. This property is called “function delayed”.

Solution to 2: Delaying Encryption:

We allow the users to delay encryption until EK_i is known. During the setup, each user runs an FE scheme to get an FE master key. This master key is used to compute the first round MPC message. Additionally, each user encrypts their input m_i using FE.Encrypt.

aMIFE Interaction Pattern:

We want to let each user run a local setup, which replaces the trusted global setup in standard MIFE. In addition, we want the number of users to be undetermined in the setup procedure, so that it allows users to join or leave in the computation process. For simplicity, we present the aMIFE in the private-key setting:

  1. Each user i runs a local setup to generate some public parameters and a private encryption key EK_i.
  2. Each user publishes their public parameters, and encrypts their input x using the private encryption key only, nothing else.
  3. Using their private encryption keys, users can issue partial decryption keys to the aggregator. Each partial decryption key is generated from the public parameters of the l - 1 other users.
  4. If these l - 1 other users also issue the matching partial encryption keys for f to the aggregator, then the aggregator can decrypt the ciphertexts c_1, c_2, ..., c_l using all partial decryption keys PDK_{1,f}, PDK_{2,f}, ..., PDK_{l,f}. The final result is y = f(m_1, m_2, ..., m_l)

But how to replace the global setup using local setup? The idea is to nest FE, MIFE and Two Round MPC together.

aMIFE for General Functionalities:

Now we are ready to present the complete aMIFE scheme for general functionalities:

  1. Local Setup: Each user runs an FE scheme to generate an FE master secret key MSK_{FE}. This MSK_{FE} is fed into the first round MPC to get the first round protocol message \rho^{(1)} and a secret s. Set the public parameter PP = \rho^{(1)} and the master secret key MSK = (MSK_{FE}, s). Note that the global setup in standard MIFE is bypassed using FE and the first round MPC.
  2. Key Generation: Suppose l users are chosen to participate in the computation; these l users publish their public parameters and master secret keys. Each user runs the second round MPC to generate a second round message \rho^{(2)}, which is the partial decryption key PDK_i.
  3. Encryption:
  4. Decryption:

“Ad hoc Friendliness”:

We say a MIFE scheme is “ad hoc friendly” if it satisfies the following three conditions:

  1. Decentralized Setup: The encryption keys EK_i and the partial master secret key MSK_i for each user can be generated locally.
  2. Local Encryption: Each user is able to encrypt using their encryption key EK_i and plaintext x, nothing else. The encryption process is fully independent of the number of users n.
  3. Piecewise Master Secret Key: If we restrict the master secret key MSK = \{MSK_1, MSK_2, ..., MSK_n\} to some subset S with |S| = l, the subset has the same distribution as a master secret key generated for functions of arity l.

Oblivious RAM Done Right (5): Lower Bound

Written by Chenyang Yu, for CS 282B final presentation.


  1. Simons Institute Talk: “Oblivious RAM” by Boyle
  2. “Software protection and simulation on oblivious RAMs” by Goldreich and Ostrovsky, which we will refer to as [GO96]
  3. “Is There an Oblivious RAM Lower Bound?” by Boyle and Naor
  4. Simons Institute Talk: “Yes, There is an Oblivious RAM Lower Bound!” by Larsen
  5. “Yes, There is an Oblivious RAM Lower Bound!” by Larsen and Nielsen

In particular, number 5 is the paper that I am presenting (picked from CRYPTO 2018).


Online: The content of the operation sequence is given to the ORAM one at a time.

Offline: The ORAM is given the entire operation sequence ahead of simulation time.

Bandwidth overhead: The multiplicative factor of extra memory blocks that must be accessed to hide the operation sequence. The formula will be given later. When we say the “upper bound” or “lower bound” of an ORAM, we always mean a bound on the amortized bandwidth overhead.

Balls in bins: The data items are modeled as “balls”, CPU registers and server-side memory are modeled as “bins”, and the set of allowed data operations consists only of moving balls between bins, in other words, shuffling.

Active: Allows the server to perform untrusted computation on behalf of the user.

Passive: Doesn't allow the server to perform untrusted computation on behalf of the user.


ORAM was first introduced for hiding memory access patterns in a software protection context. The main performance metric of an ORAM is its bandwidth overhead. In lectures, we studied the square root solution, the hierarchical solution, and so on, in great detail. Those constructions were built to push down the upper bound. Today, we are going to discuss the lower bound. [GO96] already gave an \Omega(log n) lower bound. Can this lower bound be strengthened, or does it still hold once we relax its assumptions?


There are three directions to think about for further exploring lower bound:

  1. The [GO96] lower bound only applies to “balls in bins” algorithms, so we may consider removing this assumption. Boyle and Naor showed that removing it would imply super-linear lower bounds for sorting circuits, a long-standing open problem. To overcome this barrier, they proposed the notion of an “online” ORAM. They also argued that most implementations of ORAM fall into the “online” category.
  2. The [GO96] lower bound holds even for offline ORAM. Since most practical constructions of ORAM are online, we may hope to find a better lower bound that holds for online ORAM only.
  3. The [GO96] lower bound crucially relies on the statistical closeness of the access pattern distributions for any two inputs. In many cases, statistical security might be stronger than necessary, so we may consider relaxing this assumption to computational security, which is sufficient in real world applications. Note that in [GO96], the authors pointed out that in practice one can implement such an ORAM using PRFs, so that result is only computationally secure. But the lower bound was proved assuming access to a random oracle, so the proof itself relies on statistical closeness. Hence by “relaxing this assumption”, I mean for proving the lower bound, not for the implementation.

Altogether, in this paper, we are going to prove a lower bound for non-“balls in bins” ORAMs that are online, passive and only computationally secure. That is, the ORAM algorithm can be arbitrary, the adversary must be efficient, and the bound holds only for online ORAM.

Definitions and Notations:

An online ORAM supports the following two operations:

  1. write(a, data): Store data in the block of address a, where a \in [n] and data \in \{0,1\}^{r}.
  2. read(a): Return the contents of the block of address a.

We refer to blocks at the server as “memory cells”, where each cell contains w bits. We refer to blocks in the ORAM array as “entries”, where each entry contains r bits. Note that this r is just the length of data as defined above. We also let the client memory have m bits. Note that if after N accesses the ORAM makes M probes, then the bandwidth overhead is Mw/Nr.
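
To make the formula concrete, here is a quick numeric sanity check (the parameter values are made up for illustration):

# Bandwidth overhead = Mw / (Nr), with hypothetical parameters
N = 1000     # logical ORAM operations performed
M = 20000    # physical memory probes made while serving them
w = 64       # bits per server memory cell
r = 64       # bits per ORAM array entry

overhead = (M * w) / (N * r)
print(overhead)  # 20.0, i.e. a 20x blowup over reading the data directly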

Let y be an operation sequence: y := (op_{1}, ... , op_{M}), where op_{i} \in \{write, read\}. Let A(y) := (A(op_{1}), A(op_{2}), ... , A(op_{M})) denote the memory access pattern to the server from the online ORAM, where each A(op_{i}) represents the list of addresses of the memory cells accessed at the server while processing operation op_{i}.

An online ORAM is secure in the following sense: for two distinct operation sequences y and z with length M, A(y) and A(z) are computationally indistinguishable.

Main Theorem (informal):

Theorem: In such setting, any online ORAM must have an expected amortized bandwidth overhead of \Omega(log(nr/m)) on sequences of \Theta(n) operations.

Comment: Note that this theorem holds in the random oracle model, requiring only computational indistinguishability, holds for any w and allows for arbitrary representations of the data in memory. For the natural setting of parameters r \leq m \leq n^{1-\epsilon} for arbitrary small constant \epsilon > 0, the lower bound simplifies to \Omega(log n).

Proof Strategy:

Note that the above definition is very close to the definition of an oblivious data structure, which is defined as follows:

Definition: In the array maintenance problem, we must maintain any array B of n r-bit entries under the following two operations:

  1. write(a, data): Set the contents of B[a] to data, where a \in [n] and data \in \{0,1\}^{r}.
  2. read(a): Return the contents of B[a].

Hence the idea is to prove a lower bound for oblivious data structures solving the array maintenance problem, and then use the same technique to prove the same lower bound for online ORAM.


Data structure lower bounds are typically proved in the cell probe model of Yao. The model is very similar to the standard RAM model, except that computation is free and we only pay for memory accesses. To meet our goal, we will define an “oblivious cell probe model”. Based on this model, we will use the “information transfer” method, which comes from the paper “Logarithmic Lower Bounds in the Cell-Probe Model”.

The first page of the handout contains a diagram of this model. Each operation can be a read or a write. Each operation op makes read and write probes to the server memory. The sequence of probes made during an operation op is called the access pattern of op and is written A(op). Words in the server memory are called cells; each cell is w bits. The ORAM is restricted to m bits of storage between two operations. During an operation it can use unlimited storage.