Clean Code Handbook: LeetCode 50 Common Interview Questions | LeetCode
Data structure
Array/string
- Transform & simplify combinatorial iterations
- Representation
- #13 Longest Palindromic Substring
Instead of representing the substring with starting and ending indices, representing it by its center index and length is much more efficient, improving runtime from O(n³) to O(n²).
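A minimal Java sketch of the center-based representation (the usual expand-around-center solution; names follow the standard LeetCode signature):

```java
// Each of the 2n - 1 centers (a character or a gap between two characters)
// is expanded outward for as long as the substring stays a palindrome.
class Solution {
    public String longestPalindrome(String s) {
        int bestStart = 0, bestLen = 0;
        for (int center = 0; center < 2 * s.length() - 1; center++) {
            int left = center / 2, right = left + center % 2;
            while (left >= 0 && right < s.length() && s.charAt(left) == s.charAt(right)) {
                left--;
                right++;
            }
            int len = right - left - 1;          // palindrome is s[left+1 .. right-1]
            if (len > bestLen) {
                bestLen = len;
                bestStart = left + 1;
            }
        }
        return s.substring(bestStart, bestStart + bestLen);
    }
}
```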
- Proxy items
- #2 Two Sum II - Input Array Is Sorted
If the array is sorted, then a[i−1]+a[j] ≤ a[i]+a[j] ≤ a[i]+a[j+1]. Thus a[i]+a[j] can serve as a proxy for a whole batch of other pairs during the iteration, improving the runtime to O(n).
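A short Java sketch of the resulting two-pointer scan (standard LeetCode signature, 1-based result indices):

```java
class Solution {
    public int[] twoSum(int[] numbers, int target) {
        int i = 0, j = numbers.length - 1;
        while (i < j) {
            int sum = numbers[i] + numbers[j];
            if (sum == target) return new int[]{i + 1, j + 1};
            if (sum < target) i++;   // numbers[i] cannot pair with any j' <= j, so advance i
            else j--;                // numbers[j] cannot pair with any i' >= i, so retreat j
        }
        return new int[]{-1, -1};    // no solution
    }
}
```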
- Iteration folding
- #1 Two Sum
Since + is commutative, half of the iteration steps can be skipped: in each iteration only the previously seen items are relevant. Therefore a hash table of the previous items can be built up across iterations to accelerate the later ones, improving runtime from O(n²) to O(n).
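In Java, roughly (standard LeetCode signature):

```java
import java.util.HashMap;
import java.util.Map;

class Solution {
    public int[] twoSum(int[] nums, int target) {
        Map<Integer, Integer> seen = new HashMap<>(); // value -> index of an earlier item
        for (int i = 0; i < nums.length; i++) {
            Integer j = seen.get(target - nums[i]);   // does an earlier item complete the pair?
            if (j != null) return new int[]{j, i};
            seen.put(nums[i], i);
        }
        return new int[]{-1, -1}; // no solution
    }
}
```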
- #3 Two Sum III - Data Structure Design
- #6 Reverse Words in a String
Iterating from head to tail requires 2 passes, while iterating from tail to head leads to a 1-pass algorithm.
- #7 Reverse Words in a String II
By first reversing the whole string, the order of the words already matches the desired output, so the word-level reversal can then be done in place. This further improves space complexity from O(n) to O(1).
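A small Java sketch of the two-step reversal, assuming the in-place `char[]` variant of the problem with single spaces between words:

```java
class Solution {
    public void reverseWords(char[] s) {
        reverse(s, 0, s.length - 1);              // 1. reverse the whole string
        int start = 0;
        for (int i = 0; i <= s.length; i++) {     // 2. reverse each word back in place
            if (i == s.length || s[i] == ' ') {
                reverse(s, start, i - 1);
                start = i + 1;
            }
        }
    }

    private void reverse(char[] s, int i, int j) {
        while (i < j) { char t = s[i]; s[i++] = s[j]; s[j--] = t; }
    }
}
```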
- Problem split
- #10 Longest Substring Without Repeating Characters
When a substring contains repeating characters, every larger substring containing it is also invalid, so the repeating characters partition the combinatorial space. Considering that the longest valid substring ending at each position can also be deduced from the location of the previous occurrence of the repeated character, the runtime can be improved from the O(n³) brute-force search to O(n).
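A sketch of the resulting one-pass sliding window in Java (standard LeetCode signature):

```java
import java.util.HashMap;
import java.util.Map;

class Solution {
    public int lengthOfLongestSubstring(String s) {
        Map<Character, Integer> lastSeen = new HashMap<>();
        int best = 0, start = 0;                        // current window is s[start .. i]
        for (int i = 0; i < s.length(); i++) {
            Integer p = lastSeen.get(s.charAt(i));
            if (p != null && p >= start) start = p + 1; // drop everything up to the repeat
            lastSeen.put(s.charAt(i), i);
            best = Math.max(best, i - start + 1);
        }
        return best;
    }
}
```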
- Edge case analysis
Linked list
- Dummy head trick
Initializing a dummy head first can make your code simpler: in the first and every later iteration you are always appending new nodes to an existing node, which avoids embedding conditional initialization logic in the iteration loop (see the sketch after #23 below).
- #23 Merge K Sorted Linked Lists
Brute-force merging can be implemented by merging 2 lists at a time sequentially, leading to O(nk²) runtime. A divide-and-conquer scheme improves the runtime to O(nk log k).
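A Java sketch combining the dummy-head trick with divide-and-conquer merging (`ListNode` is the usual definition for these problems; helper names are mine):

```java
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

class Solution {
    public ListNode mergeKLists(ListNode[] lists) {
        if (lists == null || lists.length == 0) return null;
        return mergeRange(lists, 0, lists.length - 1);
    }

    // Divide and conquer: each list is merged O(log k) times instead of O(k) times.
    private ListNode mergeRange(ListNode[] lists, int lo, int hi) {
        if (lo == hi) return lists[lo];
        int mid = (lo + hi) / 2;
        return mergeTwo(mergeRange(lists, lo, mid), mergeRange(lists, mid + 1, hi));
    }

    // Dummy head: new nodes are always appended to an existing node,
    // so no special case is needed for the first element.
    private ListNode mergeTwo(ListNode a, ListNode b) {
        ListNode dummy = new ListNode(0), tail = dummy;
        while (a != null && b != null) {
            if (a.val <= b.val) { tail.next = a; a = a.next; }
            else { tail.next = b; b = b.next; }
            tail = tail.next;
        }
        tail.next = (a != null) ? a : b;
        return dummy.next;
    }
}
```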
- #24 Copy List with Random Pointer
Deep copying an ordinary linked list is simple, whereas deep copying a linked list with an additional random pointer is tricky, since retrieving an item in the linked list costs O(n) runtime, leading to O(n²) runtime in total.
This can be improved to O(n) runtime and O(n) space by constructing a hash map from the original items to the deep-copied items.
The space complexity can be further improved to O(1) by first inserting each deep-copied item right after its original, so the hash map can be replaced with `p.next`. Then split the deep-copied items from the original ones.
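A Java sketch of the O(1)-extra-space approach (`Node` is the usual definition for this problem):

```java
class Node {
    int val;
    Node next, random;
    Node(int val) { this.val = val; }
}

class Solution {
    public Node copyRandomList(Node head) {
        if (head == null) return null;
        // 1. Insert each copy right after its original: A -> A' -> B -> B' -> ...
        for (Node p = head; p != null; p = p.next.next) {
            Node copy = new Node(p.val);
            copy.next = p.next;
            p.next = copy;
        }
        // 2. The copy of p.random is simply p.random.next.
        for (Node p = head; p != null; p = p.next.next) {
            if (p.random != null) p.next.random = p.random.next;
        }
        // 3. Split the interleaved list back into originals and copies.
        Node copyHead = head.next;
        for (Node p = head; p != null; p = p.next) {
            Node copy = p.next;
            p.next = copy.next;
            if (copy.next != null) copy.next = copy.next.next;
        }
        return copyHead;
    }
}
```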
Binary tree
- Recursion
All nodes of a binary tree share similar structure, which makes recursion a particularly suitable choice for solving tree-related problems.
One thing to note is that recursion consumes stack space, which should be accounted for when calculating the space complexity.
- Information feedforward
- #25 Validate Binary Search Tree
Validating that every node is greater/less than all nodes of its left/right subtree costs O(n²) runtime with the brute-force approach. With recursion, the max & min bound information can be fed forward down to each node for verification, improving runtime to O(n).
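A Java sketch of feeding the bounds forward (using `long` bounds is my assumption, to sidestep nodes whose value equals Integer.MIN_VALUE or MAX_VALUE):

```java
class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

class Solution {
    public boolean isValidBST(TreeNode root) {
        return valid(root, Long.MIN_VALUE, Long.MAX_VALUE);
    }

    // Every node must lie strictly inside the (low, high) bounds passed down from its ancestors.
    private boolean valid(TreeNode node, long low, long high) {
        if (node == null) return true;
        if (node.val <= low || node.val >= high) return false;
        return valid(node.left, low, node.val) && valid(node.right, node.val, high);
    }
}
```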
- Information feedback
- Divide-and-conquer
- #29 Convert Sorted Array to Balanced Binary Search Tree*
- #31 Binary Tree Maximum Path Sum
Finding the maximum path sum seems to have a giant combinatorial solution space, which makes even the brute-force approach unimaginable: it's on the order of O(∑ᵢ₌₁ⁿ C(n, i)). Divide-and-conquer improves it to O(n), with the max path sum starting at each node as the feedback information (feeding back 0 when that branch should not be included in the path).
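A Java sketch of the divide-and-conquer recursion (field and helper names are mine):

```java
class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

class Solution {
    private int best = Integer.MIN_VALUE;

    public int maxPathSum(TreeNode root) {
        downwardSum(root);
        return best;
    }

    // Feeds back the best "one-armed" path sum starting at this node (clamped at 0),
    // while the best full path bending at this node updates the shared answer.
    private int downwardSum(TreeNode node) {
        if (node == null) return 0;
        int left = Math.max(0, downwardSum(node.left));   // feed back 0 if a branch only hurts
        int right = Math.max(0, downwardSum(node.right));
        best = Math.max(best, node.val + left + right);   // path bending at this node
        return node.val + Math.max(left, right);          // only one arm may continue upward
    }
}
```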
- Depth-first traversal
In-order depth-first traversal of a binary search tree happens to visit the nodes in sorted order.
- #25 Validate Binary Search Tree
If the tree is a binary search tree, in-order depth-first traversal produces a sorted list. Thus depth-first traversal can be used for verification in O(n) runtime.
- #30 Convert Sorted List to Balanced Binary Search Tree*
First acquire the length of the linked list; then the start, mid, and end indices of the tree & subtrees can be traversed depth-first, which happens to access the nodes of the linked list in order.
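A Java sketch of this in-order construction (helper names are mine):

```java
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

class Solution {
    private ListNode current;

    public TreeNode sortedListToBST(ListNode head) {
        int length = 0;
        for (ListNode p = head; p != null; p = p.next) length++;
        current = head;
        return build(0, length - 1);
    }

    // The recursion builds subtrees for index ranges; because it is an in-order
    // (depth-first) walk, it consumes the list's nodes exactly in sorted order.
    private TreeNode build(int lo, int hi) {
        if (lo > hi) return null;
        int mid = (lo + hi) / 2;
        TreeNode left = build(lo, mid - 1);         // build the left subtree first ...
        TreeNode root = new TreeNode(current.val);  // ... so this node is the next list item
        current = current.next;
        root.left = left;
        root.right = build(mid + 1, hi);
        return root;
    }
}
```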
- Breadth-first traversal
Breadth-first traversal can be implemented with a queue: push the root onto the queue. Then, at each iteration, poll a node from the front of the queue and push all of its children onto the queue.
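A Java sketch of queue-based level-order traversal:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

class Solution {
    public List<List<Integer>> levelOrder(TreeNode root) {
        List<List<Integer>> levels = new ArrayList<>();
        if (root == null) return levels;
        Queue<TreeNode> queue = new ArrayDeque<>();
        queue.add(root);
        while (!queue.isEmpty()) {
            int size = queue.size();               // nodes currently in the queue form one level
            List<Integer> level = new ArrayList<>();
            for (int i = 0; i < size; i++) {
                TreeNode node = queue.poll();
                level.add(node.val);
                if (node.left != null) queue.add(node.left);
                if (node.right != null) queue.add(node.right);
            }
            levels.add(level);
        }
        return levels;
    }
}
```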
- Tree structure editing
This kind of problem is much more difficult than the others, as it changes the tree structure at each step. Bear in mind that attaching a node A as another node B's subtree deletes neither the link between A and its former parent nor A's links to its own subtrees. Be very careful!
- #32 Binary Tree Upside Down
- Top-down approach: At each iteration, first keep copies of the current node's left and right subtrees, then edit the current node: attach the previously kept right node as the current node's left child and the previously edited node as the current node's right child, accumulatively constructing the upside-down tree. Then set the kept left node as the current node and launch another iteration.
- Bottom-up approach: Constructing the upside-down tree by recursion is much more intuitive, as in the sketch below:
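A Java sketch of the bottom-up recursion (standard LeetCode signature):

```java
class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

class Solution {
    public TreeNode upsideDownBinaryTree(TreeNode root) {
        if (root == null || root.left == null) return root;
        TreeNode newRoot = upsideDownBinaryTree(root.left); // leftmost node becomes the new root
        root.left.left = root.right;   // old right sibling becomes the new left child
        root.left.right = root;        // old parent becomes the new right child
        root.left = null;              // clear the old links, as warned above
        root.right = null;
        return newRoot;
    }
}
```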

Stack
- #39 Min Stack
When we pop a stack, its state changes. To access a property of earlier states, e.g. the minimum item, we can record the stack's state in another stack whenever we push (or, more generally, whenever we change its state).
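A Java sketch of the two-stack design:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// minStack.peek() is always the minimum of the current main-stack contents,
// because every push records the minimum of the state it creates.
class MinStack {
    private final Deque<Integer> stack = new ArrayDeque<>();
    private final Deque<Integer> minStack = new ArrayDeque<>();

    public void push(int x) {
        stack.push(x);
        minStack.push(minStack.isEmpty() ? x : Math.min(x, minStack.peek()));
    }

    public void pop() {
        stack.pop();
        minStack.pop();   // popping restores the previously recorded minimum
    }

    public int top() { return stack.peek(); }

    public int getMin() { return minStack.peek(); }
}
```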
- Last-In-First-Out (LIFO)
In some problems, we only need to look at the most recently seen elements to determine what to do next, e.g. parsing arithmetic notation and parentheses. In such cases, the stack's LIFO property can be very useful.
Algorithm
Dynamic programming
The most important and challenging step of using dynamic programming is transforming the problem so that a dynamic programming formula can be formulated.
- #42 Climbing Stairs
Set f(n) as the number of ways you can climb to the n-th step. Then f(n) = f(n−1) + f(n−2), i.e. the Fibonacci sequence.
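A short Java sketch with O(1) space:

```java
class Solution {
    public int climbStairs(int n) {
        int prev = 1, curr = 1;            // f(0) = f(1) = 1
        for (int i = 2; i <= n; i++) {
            int next = prev + curr;        // f(i) = f(i-1) + f(i-2)
            prev = curr;
            curr = next;
        }
        return curr;
    }
}
```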
- #43 Unique Paths
Set f(r,c) as the number of paths from cell (r,c) to the bottom-right cell. Then f(r,c) = f(r+1,c) + f(r,c+1). The calculation can be implemented with top-down recursion (from the start cell), ideally with memoization to improve its efficiency, but the bottom-up implementation (from the target cell) is considerably more efficient.
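A Java sketch of the bottom-up table, compressed to a single row:

```java
class Solution {
    public int uniquePaths(int m, int n) {
        int[] f = new int[n];               // f[c] holds f(r, c) for the row being processed
        f[n - 1] = 1;                       // the target cell itself
        for (int r = m - 1; r >= 0; r--) {
            for (int c = n - 2; c >= 0; c--) {
                f[c] = f[c] + f[c + 1];     // f(r, c) = f(r+1, c) + f(r, c+1)
            }
        }
        return f[0];
    }
}
```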
- #45 Maximum Sum Subarray
Set f(k) as the maximum sum of a subarray ending at index k, and let M = f(k−1). If M + A[k] > A[k], then f(k) is obviously M + A[k].
If that is not true, is it possible to append A[k] to some other subarray ending at index k−1 to get a larger sum? Luckily, the answer is no: M + A[k] ≤ A[k] implies M ≤ 0, and every subarray ending at index k−1 has sum ≤ M ≤ 0. Therefore, in this case, f(k)'s subarray should be A[k] itself.
In summary, f(k) = max(f(k−1) + A[k], A[k]).
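A Java sketch of the recurrence, keeping only the previous f value and the best answer so far:

```java
class Solution {
    public int maxSubArray(int[] A) {
        int endingHere = A[0], best = A[0];
        for (int k = 1; k < A.length; k++) {
            endingHere = Math.max(endingHere + A[k], A[k]);  // f(k)
            best = Math.max(best, endingHere);
        }
        return best;
    }
}
```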
- #46 Maximum Product Subarray
The difficulty here is that when A[k] is negative, f(k−1)·A[k] could become the minimum subarray product, while g(k−1)·A[k], where g(k−1) is the minimum product of a subarray ending at index k−1, becomes the maximum. Therefore both f(k) and g(k) need to be tracked, and the dynamic programming formulas are: f(k) = max(f(k−1)·A[k], A[k], g(k−1)·A[k]) and g(k) = min(f(k−1)·A[k], A[k], g(k−1)·A[k]).
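A Java sketch tracking both the maximum and the minimum product:

```java
class Solution {
    public int maxProduct(int[] A) {
        int maxEnding = A[0], minEnding = A[0], best = A[0];
        for (int k = 1; k < A.length; k++) {
            int prevMax = maxEnding;  // keep f(k-1) before it is overwritten
            maxEnding = Math.max(Math.max(prevMax * A[k], minEnding * A[k]), A[k]);
            minEnding = Math.min(Math.min(prevMax * A[k], minEnding * A[k]), A[k]);
            best = Math.max(best, maxEnding);
        }
        return best;
    }
}
```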
- Memoization
- #47 Coins in a Line
The possible moves of you and your opponent are overwhelming. One critical idea is to assume that both you and your opponent always make the optimal move. Set P(i,j) as the maximum money one can take from the i-th to the j-th coins. When you are selecting between the i-th and the j-th coins, your opponent will respond so as to leave you the worse of the remaining options, and your opponent reasons in exactly the same way. Therefore:
P(i,j) = max(A[i] + min(P(i+2,j), P(i+1,j−1)), A[j] + min(P(i+1,j−1), P(i,j−2)))
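A Java sketch of a memoized implementation of this recurrence (the class and method names here are hypothetical, not an official problem signature):

```java
class Solution {
    // P[i][j]: the most money you can collect from coins i..j when it is your turn.
    public int maxMoney(int[] A) {
        int n = A.length;
        Integer[][] memo = new Integer[n][n];
        return P(A, 0, n - 1, memo);
    }

    private int P(int[] A, int i, int j, Integer[][] memo) {
        if (i > j) return 0;
        if (i == j) return A[i];
        if (memo[i][j] != null) return memo[i][j];
        // After your pick, the opponent also plays optimally, leaving you the worse option.
        int pickLeft = A[i] + Math.min(P(A, i + 2, j, memo), P(A, i + 1, j - 1, memo));
        int pickRight = A[j] + Math.min(P(A, i + 1, j - 1, memo), P(A, i, j - 2, memo));
        return memo[i][j] = Math.max(pickLeft, pickRight);
    }
}
```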
Binary search
Following is a basic template of binary search:
```java
int L = 0, R = A.length - 1;
while (L < R) {
    int M = (L + R) / 2;
    // TODO: Implement conditional checks.
}
```
It's very subtle to avoid infinite loops: make sure either `L` or `R` is updated in every iteration! For example, when `M` is acquired via the floor division `(L + R) / 2`, then use either `L = M + 1` or `R = M`.
- #48 Search Insert Position
When the target is within the range of the list's minimum/maximum, then `L`/`R` must have been updated by the time the loop ends with `L == R`. Therefore `A[L-1] < target` and `A[L+1] > target`, and the insertion location can be easily deduced.
It's a little bit tricky to understand the case when the target is less/greater than the minimum/maximum, since in this case `L`/`R` could remain unchanged and the size relationships above don't hold. However, the insertion location must be `L` or `L+1`, exactly because `L` or `R` is never changed in this case.
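A Java sketch following the template above, finding the first index whose value is >= target:

```java
class Solution {
    public int searchInsert(int[] A, int target) {
        if (A.length == 0 || target > A[A.length - 1]) return A.length; // insert at the end
        int L = 0, R = A.length - 1;
        while (L < R) {
            int M = (L + R) / 2;          // floor division, so M < R
            if (A[M] < target) L = M + 1;
            else R = M;                   // either L or R moves every iteration
        }
        return L;                         // A[L] is the first element >= target
    }
}
```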
- #49 Find Minimum in Sorted Rotated Array
Misc
Math
- #17 Reverse Integer
Deal with overflow/underflow.
P.S. For a 32-bit `int`, since 1 bit is used for storing the sign, the maximum value is 2³¹ − 1 = 2,147,483,647 and the minimum value is −2³¹ = −2,147,483,648.
- #18 Plus One
Edge case: 999.
- #19 Palindrome Number
Digit parsing tricks.
Bit manipulation
- Tweaking commutative operation
When a commutative (and associative) operation is applied to a chain of values, these values can be rearranged arbitrarily and still give the same result. This property is particularly useful.
- #33 Single Number
XOR detects differences between 2 bits, so identical bits cancel to 0. When XOR-ing all of the numbers together, since XOR is commutative, we can group the repeating numbers together, and each group becomes 0. Therefore, the result will be the single number itself.
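In Java, roughly:

```java
class Solution {
    public int singleNumber(int[] nums) {
        int result = 0;
        for (int x : nums) result ^= x;  // a ^ a = 0 and a ^ 0 = a, and order does not matter
        return result;
    }
}
```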
- #34 Single Number II
The same idea as #33 is used, but the commutative operation becomes: for each bit position, add all the bits together and take the result mod 3.
This can be implemented with a 32-length `int` array. However, a more efficient approach uses 3 bitmasks `ones`, `twos`, and `threes` to represent the bits that have appeared 1, 2, and 3 times so far. The manipulation of these bitmasks requires some fascinating use of logic operations.
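A Java sketch of the bitmask version (the per-bit counting array works the same way, just less compactly):

```java
class Solution {
    public int singleNumber(int[] nums) {
        int ones = 0, twos = 0;
        for (int x : nums) {
            twos |= ones & x;         // bits now seen twice
            ones ^= x;                // bits now seen once (toggled off once seen twice)
            int threes = ones & twos; // bits seen three times ...
            ones &= ~threes;          // ... are reset in both masks
            twos &= ~threes;
        }
        return ones;                  // bits seen a number of times that is 1 (mod 3)
    }
}
```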
Misc