# Competitive Programming Topics

Feb 02, 2022 · dsa, cp

The core of competitive programming is inventing efficient algorithms that solve well-defined computational problems. Algorithm design requires problem-solving and mathematical skills. Often the solution to a problem is a combination of well-known methods and new insights.

## Simulation Problems

The main difficulty in a simulation problem is not figuring out how to solve it conceptually, but writing code that follows the (usually complex) set of rules the problem statement describes. The challenge is writing this code quickly and accurately.

It is called a simulation problem because your code simulates whatever situation the problem describes (e.g. a card game with no choices).
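As a toy illustration (a hypothetical game, not taken from any particular problem set): suppose each round both players reveal their next card and the higher card wins the round. The simulation just follows that rule verbatim.

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Simulate a hypothetical no-choices card game: each round both
// players reveal their next card, and the strictly higher card
// wins the round. Returns the number of rounds won by player A.
int simulateGame(const vector<int>& a, const vector<int>& b) {
    int winsA = 0;
    for (size_t i = 0; i < a.size() && i < b.size(); i++) {
        if (a[i] > b[i]) winsA++;  // follow the stated rule exactly
    }
    return winsA;
}
```

The code contains no algorithmic insight at all; the entire difficulty is translating the rules into code without mistakes.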

## Complete Search

Complete search is a general method that can be used to solve almost any algorithm problem. The idea is to generate all possible solutions to the problem using brute force, and then select the best solution or count the number of solutions, depending on the problem.

Complete search is a good technique if there is enough time to go through all the solutions, because the search is usually easy to implement and it always gives the correct answer. If complete search is too slow, other techniques, such as greedy algorithms or dynamic programming, may be needed.
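A minimal sketch of complete search: iterate over all 2^n subsets of an input array (here via bitmasks) and count those with a given sum. The function name and problem are illustrative, not from the text above.

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Complete search: examine every subset of v (2^n candidates,
// each encoded as a bitmask) and count those with the target sum.
int countSubsetsWithSum(const vector<int>& v, int target) {
    int n = v.size(), count = 0;
    for (int mask = 0; mask < (1 << n); mask++) {
        int sum = 0;
        for (int i = 0; i < n; i++)
            if (mask & (1 << i)) sum += v[i];
        if (sum == target) count++;
    }
    return count;
}
```

With n elements this performs on the order of n · 2^n work, so it is only feasible for small n (roughly n ≤ 20 in typical time limits), but it is easy to write and trivially correct.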

## Constructive Algorithms

In almost all constructive algorithm problems there are many possible solutions, and usually any output that satisfies the conditions is accepted.

Since there are multiple solutions, you have to find a pattern that can be generated programmatically, and implement it. These problems are usually ad-hoc, with little knowledge of data structures and algorithms needed. People with strong mathematical creativity especially shine in this category of problems.

The normal strategy for these problems is to manually solve a few example inputs, and try to generalize your thinking process.

## Greedy Algorithms

A greedy algorithm constructs a solution to the problem by always making a choice that looks the best at the moment. A greedy algorithm never takes back its choices, but directly constructs the final solution. For this reason, greedy algorithms are usually very efficient.

The difficulty in designing a greedy algorithm is finding a greedy strategy that always produces an optimal solution: the locally optimal choices must also lead to a globally optimal solution. It is often difficult to prove that a greedy algorithm works.
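The classic example is coin change with a canonical coin system: always take the largest coin that still fits. A minimal sketch, assuming the usual {1, 5, 10, 25} denominations:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using namespace std;

// Greedy coin change: repeatedly take the largest coin that fits.
// This is optimal for canonical systems like {1, 5, 10, 25},
// but NOT for arbitrary coin systems.
int minCoinsGreedy(vector<int> coins, int amount) {
    sort(coins.rbegin(), coins.rend());  // largest coin first
    int used = 0;
    for (int c : coins) {
        used += amount / c;  // take as many of this coin as possible
        amount %= c;
    }
    return used;
}
```

This illustrates the proof burden: with coins {1, 3, 4} and amount 6, the same greedy strategy picks 4 + 1 + 1 (three coins) while the optimum is 3 + 3 (two coins), so the greedy choice must be justified for each coin system.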

## Dynamic Programming

Dynamic programming is a technique that combines the correctness of complete search with the efficiency of greedy algorithms. Dynamic programming can be applied if the problem can be divided into overlapping subproblems that can be solved independently.

There are two uses for dynamic programming:

• Finding an optimal solution: We want to find a solution that is as large as possible or as small as possible.
• Counting the number of solutions: We want to calculate the total number of possible solutions.
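Coin change illustrates both uses. A sketch (the function names are my own): one function finds the optimal (fewest-coins) solution, correctly handling coin systems where greedy fails, and the other counts the number of ordered ways to form the amount.

```cpp
#include <algorithm>
#include <cassert>
#include <climits>
#include <vector>
using namespace std;

// Optimal solution: dp[x] = fewest coins summing to x.
// Correct for any coin system, unlike the greedy approach.
int minCoinsDP(const vector<int>& coins, int amount) {
    vector<int> dp(amount + 1, INT_MAX);
    dp[0] = 0;  // zero coins make amount 0
    for (int x = 1; x <= amount; x++)
        for (int c : coins)
            if (c <= x && dp[x - c] != INT_MAX)
                dp[x] = min(dp[x], dp[x - c] + 1);
    return dp[amount];  // INT_MAX if the amount is unreachable
}

// Counting solutions: ways[x] = number of ordered coin
// sequences that sum to x.
long long countWays(const vector<int>& coins, int amount) {
    vector<long long> ways(amount + 1, 0);
    ways[0] = 1;  // the empty sequence
    for (int x = 1; x <= amount; x++)
        for (int c : coins)
            if (c <= x) ways[x] += ways[x - c];
    return ways[amount];
}
```

Both functions run in O(amount · |coins|) time, while complete search over all coin sequences would be exponential.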

## Data Structures

A data structure is a way to store data in the memory of a computer. It is important to choose an appropriate data structure for a problem, because each data structure has its own advantages and disadvantages. The crucial question is: which operations are efficient in the chosen data structure?

It is a good idea to use the standard library whenever possible, because it saves a lot of time. All the basic data structures are available in the C++ STL, such as std::stack, std::queue, std::deque (double-ended queue), std::set, std::map, and std::priority_queue.

• Priority Queue: An abstract data type similar to a regular queue or stack, except that each element additionally has a priority associated with it, and elements are served in priority order.

There are many more advanced data structures that are used in contests:

• Binary Indexed Tree: Also called a Fenwick tree, a data structure that can efficiently update elements and calculate prefix sums in an array of numbers.
• Segment Tree: A data structure that allows answering range queries over an array efficiently, while still being flexible enough to allow modifying the array.
• Disjoint Set: Also called Union-Find, a data structure that stores a collection of disjoint (non-overlapping) sets and supports merging two sets and finding which set an element belongs to.
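As one example of these advanced structures, here is a minimal binary indexed (Fenwick) tree sketch: both the point update and the prefix-sum query run in O(log n), using the standard `i & -i` trick to step between tree nodes.

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Binary indexed (Fenwick) tree over positions 1..n.
// add(i, d)      : a[i] += d            in O(log n)
// prefixSum(i)   : a[1] + ... + a[i]    in O(log n)
struct Fenwick {
    vector<long long> t;
    Fenwick(int n) : t(n + 1, 0) {}
    void add(int i, long long delta) {
        for (; i < (int)t.size(); i += i & -i) t[i] += delta;
    }
    long long prefixSum(int i) {
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
};
```

A range sum a[l..r] is then `prefixSum(r) - prefixSum(l - 1)`, which is the typical way the structure is used in contests.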

## Graphs

Many programming problems can be solved by modeling the problem as a graph problem and using an appropriate graph algorithm. A typical example of a graph is a network of roads and cities in a country. Sometimes, though, the graph is hidden in the problem and it may be difficult to detect it.

• Graph Traversal: Two fundamental graph algorithms are depth-first search and breadth-first search. Both algorithms are given a starting node in the graph, and they visit all nodes that can be reached from the starting node. The difference in the algorithms is the order in which they visit the nodes.
• Finding a Shortest Path: An important problem that has many practical applications. For example, a natural problem related to a road network is to calculate the shortest possible length of a route between two cities, given the lengths of the roads.
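A minimal sketch of breadth-first search on an unweighted graph stored as adjacency lists: because BFS visits nodes in order of increasing distance from the start, the step at which a node is first reached is its shortest-path distance in edges.

```cpp
#include <cassert>
#include <queue>
#include <vector>
using namespace std;

// BFS from `start` on an unweighted graph given as adjacency lists.
// Returns the distance (edge count) to every node, or -1 if a node
// is unreachable from `start`.
vector<int> bfsDistances(const vector<vector<int>>& adj, int start) {
    vector<int> dist(adj.size(), -1);
    queue<int> q;
    dist[start] = 0;
    q.push(start);
    while (!q.empty()) {
        int u = q.front(); q.pop();
        for (int v : adj[u])
            if (dist[v] == -1) {      // not visited yet
                dist[v] = dist[u] + 1;
                q.push(v);
            }
    }
    return dist;
}
```

Depth-first search has the same skeleton with a stack (or recursion) in place of the queue; it visits the same nodes but in a different order, so it does not compute shortest paths.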

## Mathematics

A majority of the competitive programming problems you'll encounter involve some mathematical logic or trick.

• Number theory - A branch of mathematics that studies integers, for example finding prime numbers and factors, and solving integer equations.
• Combinatorics - Studies methods for counting combinations of objects. Usually, the goal is to find a way to count the combinations efficiently without generating each combination separately.
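A standard number-theory building block is the sieve of Eratosthenes, sketched here: it marks all primes up to n by crossing out the multiples of each prime, in roughly O(n log log n) time.

```cpp
#include <cassert>
#include <vector>
using namespace std;

// Sieve of Eratosthenes: isPrime[x] is true iff x is prime, 0 <= x <= n.
vector<bool> sieve(int n) {
    vector<bool> isPrime(n + 1, true);
    isPrime[0] = false;
    if (n >= 1) isPrime[1] = false;
    for (long long i = 2; i * i <= n; i++)
        if (isPrime[i])
            for (long long j = i * i; j <= n; j += i)
                isPrime[j] = false;   // cross out multiples of i
    return isPrime;
}
```

Precomputing a sieve once and answering many primality queries from it is a common pattern in contest problems.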

## Bit Manipulation

In programming, an n-bit integer is internally stored as a binary number consisting of n bits. For example, the C++ type int is a 32-bit type on most platforms, which means that every int value consists of 32 bits.

One of the most useful low-level optimisations is bit manipulation: using the bits of an integer to represent a set. Not only can it produce a large constant-factor improvement in both speed and memory use, it can often simplify the code at the same time.

Many algorithms can be optimized using bit operations. Such optimizations do not change the time complexity of the algorithm, but they may have a large impact on the actual running time of the code.

### Bit Operations

• AND (x & y): Produces a number that has one bits in the positions where both x and y have a one bit.
• OR (x | y): Produces a number that has one bits in the positions where at least one of x and y has a one bit.
• XOR (x ^ y): Produces a number that has one bits in the positions where exactly one of x and y has a one bit.
• NOT (~x): Produces a number where every bit of x has been inverted.

### Bit shifts

• Left Shift (x << k): Appends k zero bits to the number, which is equivalent to multiplying x by 2^k.
• Right Shift (x >> k): Removes the last k bits from the number, which for non-negative x is equivalent to dividing x by 2^k and rounding down.
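Combining these operations lets a single int act as a set of the elements 0..31. A minimal sketch (the helper names are my own):

```cpp
#include <cassert>

// Treat an int bitmask as a set of small integers 0..31.
int insertElem(int s, int x) { return s | (1 << x); }   // add x
int removeElem(int s, int x) { return s & ~(1 << x); }  // remove x
bool contains(int s, int x)  { return s & (1 << x); }   // is x in s?

// Set size = number of one bits (GCC/Clang builtin).
int setSize(int s) { return __builtin_popcount(s); }
```

Union, intersection, and difference of two such sets are then just `a | b`, `a & b`, and `a & ~b`, each a single machine instruction.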