For more details see data structures and algorithms.

Topic list

- Overview
- Union-Find Algorithms
- Stacks and Queues
- Sorting Algorithms
- Advanced Topics in Sorting
- Priority Queues
- Symbol Tables
- Binary Search Trees
- Balanced Trees
- Hashing
- Undirected Graphs
- Directed Graphs
- Minimum Spanning Trees
- Shortest Paths
- Geometric Algorithms
- Search and Intersection
- Radix Sorts
- Tries
- Data Compression
- Pattern Matching
- Linear Programming
- Reductions
- Combinatorial Search

**Stacks and queues**

For more details see stacks and queues.

An array is a *random access* data structure, where each element can be accessed directly and in constant time. A typical illustration of random access is a book: each page can be opened independently of the others. Random access is critical to many algorithms, for example **binary search**.

A linked list is a *sequential access* data structure, where each element can be accessed only in a particular order. A typical illustration of sequential access is a roll of paper or tape: all prior material must be unrolled in order to get to the data you want. In this note we consider a subcase of sequential data structures, so-called **limited access data structures**.

**Stacks**

A stack is a container of objects that are inserted and removed according to the last-in first-out (LIFO) principle. In a pushdown stack only two operations are allowed: **push** an item onto the stack, and **pop** an item off the stack. A stack is a limited access data structure: elements can be added and removed only at the top. **push** adds an item to the top of the stack; **pop** removes the item from the top. A helpful analogy is a stack of books: you can remove only the top book, and you can add a new book only on top.

A stack is a **recursive** data structure. Here is a structural definition of a stack:

- a stack is either empty, or
- it consists of a top and the rest, which is a stack.
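As a minimal sketch (a hypothetical class written for this note, not a library API), a stack can be built on top of a Python list, with the end of the list serving as the top:

```python
class Stack:
    """A LIFO stack: items are pushed onto and popped from the top."""

    def __init__(self):
        self._items = []  # the end of the list is the top of the stack

    def push(self, item):
        """Add an item to the top of the stack."""
        self._items.append(item)

    def pop(self):
        """Remove and return the item at the top of the stack."""
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items


s = Stack()
s.push("bottom book")
s.push("top book")
print(s.pop())  # the most recently added item comes off first: "top book"
```

Both `push` and `pop` touch only the top of the list, so each runs in (amortized) constant time.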

**Queues**

A queue is a container of objects (a linear collection) that are inserted and removed according to the first-in first-out (FIFO) principle. An excellent example of a queue is a line of students in the food court of the UC. New additions to the line are made at the back of the queue, while removal (or serving) happens at the front. In a queue only two operations are allowed: **enqueue** and **dequeue**. Enqueue means to insert an item at the back of the queue; dequeue means to remove the front item.

The difference between stacks and queues is in removing: in a stack we remove the item most recently added; in a queue, we remove the item least recently added.
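A queue can be sketched the same way (again a hypothetical minimal class). Using `collections.deque` keeps removal from the front at constant time, which a plain list would not:

```python
from collections import deque


class Queue:
    """A FIFO queue: enqueue at the back, dequeue from the front."""

    def __init__(self):
        self._items = deque()

    def enqueue(self, item):
        """Insert an item at the back of the queue."""
        self._items.append(item)

    def dequeue(self):
        """Remove and return the item at the front of the queue."""
        if not self._items:
            raise IndexError("dequeue from empty queue")
        return self._items.popleft()

    def is_empty(self):
        return not self._items


q = Queue()
q.enqueue("first student")
q.enqueue("second student")
print(q.dequeue())  # the least recently added item is served first
```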

**Sorting algorithms**

Sorting algorithms are often classified by:

- Computational complexity (worst, average, and best behavior) of element comparisons in terms of the size of the list (*n*). For typical serial sorting algorithms good behavior is O(*n* log *n*), with parallel sorts in O(log² *n*), and bad behavior is O(*n*²). (See Big O notation.) Ideal behavior for a serial sort is O(*n*), but this is not possible in the average case; optimal parallel sorting is O(log *n*). Comparison-based sorting algorithms, which evaluate the elements of the list via an abstract key comparison operation, need at least Ω(*n* log *n*) comparisons for most inputs.
- Computational complexity of swaps (for "in place" algorithms).
- Memory usage (and use of other computer resources). In particular, some sorting algorithms are "in place". Strictly, an in-place sort needs only O(1) memory beyond the items being sorted; sometimes O(log *n*) additional memory is considered "in place".
- Recursion: some algorithms are either recursive or non-recursive, while others may be both (e.g., merge sort).
- Stability: stable sorting algorithms maintain the relative order of records with equal keys (i.e., values).
- Whether or not they are a comparison sort. A comparison sort examines the data only by comparing two elements with a comparison operator.
- General method: insertion, exchange, selection, merging, *etc.* Exchange sorts include bubble sort and quicksort; selection sorts include shaker sort and heapsort.
- Whether the algorithm is serial or parallel. The remainder of this discussion concentrates almost exclusively on serial algorithms and assumes serial operation.
- Adaptability: whether or not the presortedness of the input affects the running time. Algorithms that take this into account are known to be adaptive.
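Stability is easy to see concretely. Python's built-in `sorted` (Timsort) is a stable sort, so records that compare equal on the sort key keep their original relative order (the records below are made up for illustration):

```python
# Records of (name, grade); Alice and Bob share grade 2.
records = [("Dave", 1), ("Alice", 2), ("Carol", 3), ("Bob", 2)]

# Sorting by grade alone: a stable sort keeps Alice before Bob,
# because Alice appeared before Bob in the input.
by_grade = sorted(records, key=lambda r: r[1])
print(by_grade)
# [('Dave', 1), ('Alice', 2), ('Bob', 2), ('Carol', 3)]
```

An unstable sort (e.g., a typical heapsort) would be free to emit Bob before Alice, since their keys are equal.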

See the complete list of sorting algorithms and their Big-O complexity.

Big-O cheatsheet

**Hashing**

A hash function is any function that can be used to map digital data of arbitrary size to digital data of fixed size, with slight differences in input data producing very big differences in output data. The values returned by a hash function are called hash values, hash codes, hash sums, or simply hashes. One practical use is a data structure called a hash table, widely used in computer software for rapid data lookup. Hash functions accelerate table or database lookup by detecting duplicated records in a large file. An example is finding similar stretches in DNA sequences. They are also useful in cryptography. A cryptographic hash function allows one to easily verify that some input data matches a stored hash value, but makes it hard to reconstruct the data from the hash alone. This principle is used by the PGP algorithm for data validation and by many password checking systems.
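The verification use case can be sketched with Python's standard `hashlib` (SHA-256 here; the password string is of course made up, and real password storage would add salting and key stretching):

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Map data of arbitrary size to a fixed-size (256-bit) hash value."""
    return hashlib.sha256(data).hexdigest()


stored = sha256_hex(b"secret password")

# Easy to verify that candidate input matches the stored hash...
print(sha256_hex(b"secret password") == stored)  # True

# ...while a slight difference in input produces a very different hash,
# and reconstructing the input from the hash alone is infeasible.
print(sha256_hex(b"Secret password") == stored)  # False
```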

**Linear programming**

Linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine function defined on this polyhedron. A linear programming algorithm finds a point in the polyhedron where this function has the smallest (or largest) value if such a point exists.

In a linear programming problem, a series of linear constraints produces a convex feasible region of possible values for those variables. In the two-variable case this region is in the shape of a convex simple polygon.
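Because the optimum of a bounded, feasible LP is attained at a vertex of the feasible polygon, a two-variable instance can be solved by brute force: intersect every pair of constraint boundaries and keep the best feasible point. The toy instance below is made up for this note; real solvers use the simplex method or interior-point methods instead.

```python
from itertools import combinations

# Maximize 3x + 2y subject to:  x + y <= 4,  x <= 2,  x >= 0,  y >= 0.
# Each constraint is stored as (a1, a2, b) meaning a1*x + a2*y <= b.
constraints = [
    (1.0, 1.0, 4.0),   # x + y <= 4
    (1.0, 0.0, 2.0),   # x <= 2
    (-1.0, 0.0, 0.0),  # -x <= 0, i.e. x >= 0
    (0.0, -1.0, 0.0),  # -y <= 0, i.e. y >= 0
]


def objective(x, y):
    return 3.0 * x + 2.0 * y


def feasible(x, y, eps=1e-9):
    return all(a1 * x + a2 * y <= b + eps for a1, a2, b in constraints)


# Enumerate vertices: intersections of pairs of constraint boundaries.
best = None
for (a1, a2, b1), (c1, c2, b2) in combinations(constraints, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        continue  # parallel boundaries: no unique intersection point
    x = (b1 * c2 - b2 * a2) / det  # Cramer's rule for the 2x2 system
    y = (a1 * b2 - c1 * b1) / det
    if feasible(x, y) and (best is None or objective(x, y) > best[0]):
        best = (objective(x, y), x, y)

print(best)  # (10.0, 2.0, 2.0): maximum value 10 at the vertex (2, 2)
```

This is exponential in the number of constraints and only works in two variables, but it makes the geometry concrete: the candidates are exactly the corners of the convex polygon.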

**Reductions**

In computability theory and computational complexity theory, a **reduction** is an algorithm for transforming one problem into another. A reduction from one problem to another may be used to show that the second problem is at least as difficult as the first. The mathematical structure generated on a set of problems by the reductions of a particular type generally forms a preorder, whose equivalence classes may be used to define degrees of unsolvability and complexity classes.

Intuitively, problem A is reducible to problem B if an algorithm for solving problem B efficiently (if it existed) could also be used as a subroutine to solve problem A efficiently. When this is true, solving A cannot be harder than solving B. We write A ≤m B, usually with a subscript on the ≤ to indicate the type of reduction being used (m: mapping reduction, p: polynomial reduction).
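A small illustration of this idea: element distinctness (problem A, "does a list contain a duplicate?") reduces to sorting (problem B). After sorting, any duplicates sit next to each other, so one linear scan over the sorted output solves A:

```python
def has_duplicate(items):
    """Solve element distinctness by reduction to sorting:
    sort the list (the subroutine for problem B), then check
    whether any adjacent pair of the sorted output is equal."""
    b_output = sorted(items)  # call the solver for problem B
    return any(x == y for x, y in zip(b_output, b_output[1:]))


print(has_duplicate([3, 1, 4, 1, 5]))  # True: 1 appears twice
print(has_duplicate([2, 7, 1, 8]))     # False: all elements distinct
```

The reduction shows element distinctness is no harder than sorting plus a linear scan, so any efficient sorting algorithm immediately yields an efficient algorithm for it.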