#### Become a professional iOS developer

##### Get started with iOS 13 and Swift 5

Sign up for my iOS development course to learn iOS development with Swift 5, and start your professional iOS career.

Big O notation is a topic that can make your head spin, especially if you’re new to algorithms.

You use big O notation to describe how efficiently an algorithm runs compared to other algorithms. It gives you an idea of how much more work an algorithm has to do as the size of its input grows.

In this article you’ll learn:

- What big O notation is
- Why it’s relevant for iOS developers
- What the different time complexities mean
- How you can assess the time complexity of an algorithm

Sound interesting? Let’s dive in.

- What Is Big O Notation?
- Constant Time With O(1)
- Linear Time With O(n)
- Quadratic Time With O(n²)
- Logarithmic Time With O(log n)
- Why Big O Notation?
- Further Reading

#### What Is Big O Notation?

Imagine you’re building a music app, and your boss wants you to code an algorithm that makes music recommendations. Kinda like how Amazon shows you “Products You Might Also Like”, but for a few million music tracks across a few dozen genres.

You dauntlessly go to work. You build an algorithm that determines a likeness score for every music track compared to every other track. For each track, you select 5 other tracks with a likeness score that exceeds a certain threshold.

So far so good! Your algorithm works fine comparing a few of your own favourite tracks, and predicts with some certainty other tracks you might also like. Awesome!

And then you try to run those millions of songs through the algorithm… Your computer’s CPU shoots to 100%, its memory is at capacity, and the whole algorithm grinds to a halt. At this rate, it’ll take weeks for the algorithm to finish!

That’s where *time complexity* and *big O notation* come in. Big O notation is used to classify algorithms based on how their running time grows relative to the size of the input for the algorithm.

Let’s break that down:

*Algorithms* are computer programs that take input and produce output. Data goes in, the algorithm does some work, and a result comes out. Examples are sorting data alphabetically, or the recommendation algorithm described earlier.

*Classification* means sorting things into groups. Some algorithms go in group A, because all algorithms in group A share a particular characteristic. Other algorithms go in group B, because they share another characteristic.

Big O notation classifies algorithms in these groups, ordered from best to worst performance:

- *O(1)*, called *constant time*
- *O(log n)*, called *logarithmic time*
- *O(n)*, called *linear time*
- *O(n²)*, called *quadratic time*

(There are more groups, but let’s not get into those today.)

Check out the graph of the above time complexities, below. You can judge by the steepness of some of the lines how quickly an algorithm’s runtime grows, as its input size grows.

*Image CC BY-SA 4.0 by Cmglee*

Big O notation says nothing about how much actual time, i.e. seconds, it takes for an algorithm to do its work. It says something about the *time* relative to the size of the input. As the input for an algorithm gets bigger, how fast does the runtime grow?

When we talk about big O notation, we also consider two things:

- How fast the runtime grows for arbitrarily large input sizes
- The worst case scenario for an algorithm

If you’re searching an array for a particular value, you might find it at your first guess. That doesn’t mean that the algorithm is fast, just that you got lucky. What’s more interesting is what happens to the time it takes to run an algorithm when the size of the input gets extremely large, and what happens if we have to process all that data. That’s why we’re looking for the worst case scenario.

You can also use big O notation to express the additional *space* needed to run an algorithm. Just like time complexity, space complexity describes the additional space needed as the size of the input grows.
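To make that concrete, here’s a small sketch (the function names are mine, for illustration, not from the article): a version that returns a new array needs *O(n)* extra space, while an in-place version needs only *O(1)* extra space.

```
// O(n) extra space: the result array grows with the input.
func doubled(_ numbers: [Int]) -> [Int] {
    var result: [Int] = []
    for number in numbers {
        result.append(number * 2)
    }
    return result
}

// O(1) extra space: doubling in place needs no additional
// storage, no matter how large the input array is.
func doubleInPlace(_ numbers: inout [Int]) {
    for index in numbers.indices {
        numbers[index] *= 2
    }
}
```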

Let’s take a look at a few examples.

#### Constant Time With O(1)

The simplest of time complexities is *constant time*. An algorithm that runs in *O(1)* will always execute in the same time (or space), regardless of the size of the input.

Here’s an example:

```
let names = ["Arthur", "Ford", "Zaphod", "Trillian", "Marvin", "Eddie"]
let name = names[5]  // same cost no matter how long the array is
```

The above code gets an item from an array by using an index number. It doesn’t matter if this array has 10 items or 10,000; it always takes the same amount of time to get an array item by its index number. The complexity of looking up a value in an array by index is *O(1)*.

Functions with a constant time complexity are important in software development, because *O(1)* algorithms execute efficiently regardless of the size of the input. Common operations like getting an item from an array by index, or pushing a value onto a stack, are all *O(1)* operations.
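As a minimal sketch of that last point, here’s an array-backed stack (the `Stack` type and its method names are illustrative, not from the article). Pushing appends to the end of the array, which runs in (amortized) *O(1)*: the cost doesn’t depend on how many items the stack already holds.

```
// A minimal stack backed by an array.
struct Stack<Element> {
    private var storage: [Element] = []

    // O(1) amortized: appending doesn't depend on the stack's size.
    mutating func push(_ element: Element) {
        storage.append(element)
    }

    // O(1): removes and returns the last element, or nil if empty.
    mutating func pop() -> Element? {
        storage.popLast()
    }
}
```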

What’s important to point out here is that an algorithm can still be considered slow if it takes a lot of time to execute. But that’s not what big O notation is about! We’re talking efficiency here *relative* to the size of the input, not performance as in “this algorithm takes 5 minutes to run.”


#### Linear Time With O(n)

What’s more interesting is algorithms that execute in *linear time*. This means that the time the algorithm takes to complete grows linearly with the size of the input.

Big O notation for a linear time algorithm is *O(n)*. The `n` here is the size of the input, so the time it takes to process the input is proportional to that input’s size.

If `n = 100`, then *O(100) = 100*. But we don’t actually put those numbers in; the “big O” *O(n)* notation is just a way to describe this class of time complexity.

An *O(n)* algorithm that processes an array with 100 items has to do ten times as much work as the same algorithm processing an array with 10 items.

This is an algorithm that calculates the sum of an array of integers:

```
let numbers = [8, 5, 2, 10, 17, 2, 11]
var sum = 0

for number in numbers {
    sum += number
}

print(sum) // 55
```

The algorithm iterates once over every item in the array. So, as we add more numbers, the work the algorithm needs to do also grows. At what rate does it grow? At a *linear* rate, so its complexity is *O(n)*.

A good example of an algorithm that runs in *O(n)* is finding an item in an array, whose index is unknown. In the worst case scenario, you have to check every item in the array to find what you’re looking for.
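Here’s what such a lookup could look like in Swift; `linearSearch(for:in:)` is a hypothetical helper, written out for illustration. In the worst case (the target is last, or missing), the loop visits all `n` items, which is what makes it *O(n)*.

```
// Linear search: O(n) in the worst case, because every item
// may need to be checked before the target is found.
func linearSearch(for target: Int, in numbers: [Int]) -> Int? {
    for (index, number) in numbers.enumerated() {
        if number == target {
            return index  // found it
        }
    }
    return nil  // target isn't in the array
}
```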

#### Quadratic Time With O(n²)

This is where we *really* have to do some work. For algorithms that run in quadratic time, with *O(n²)*, the time it takes to run them grows directly proportional to the square of the size of the input.

Going back to our music recommendation app, this is where we ran into trouble. Because we’re checking every song against every other song, we’ll have to check every song again if we only add one song to the dataset!

Here’s what that would look like:

```
for songA in songs {
    for songB in songs {
        let similarity = compare(songA, songB)
    }
}
```

This algorithm might run OK for 25 songs, because that’s `25 x 25` iterations. But imagine what happens when you add millions of songs: that’s `1,000,000 x 1,000,000` iterations. That’s a ton of work, and an algorithm with pretty poor performance.

In most scenarios, and for large data sets, algorithms with quadratic time complexities take a lot of time to execute. So, you’ll either need a lot of resources and time, or you need to come up with a better algorithm.

A good example of an *O(n²)* algorithm is *bubble sort*, which repeatedly steps through an array and compares every pair of adjacent items.
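Bubble sort is a classic example of an *O(n²)* algorithm: it repeatedly loops over the array, swapping adjacent items that are out of order. A minimal sketch, with a hypothetical `bubbleSort(_:)` function written for illustration:

```
// Bubble sort: O(n²), because the nested loops compare
// (almost) every pair of items in the array.
func bubbleSort(_ numbers: [Int]) -> [Int] {
    var sorted = numbers
    for i in 0..<sorted.count {
        for j in 0..<sorted.count - i - 1 {
            if sorted[j] > sorted[j + 1] {
                sorted.swapAt(j, j + 1)  // swap out-of-order neighbours
            }
        }
    }
    return sorted
}
```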

#### Logarithmic Time With O(log n)

Algorithms with logarithmic time complexities are an odd bunch. Unlike constant, linear and quadratic time, logarithmic time is hard to grasp intuitively.

An algorithm that runs in *O(log n)* reduces the amount of work it needs to do as it progresses through the data set. The more work it does, the less work it still has to do. This produces a growth curve that is logarithmic.

A good example of a logarithmic time complexity is *binary search*. With binary search we’re looking for a certain number in a sorted array of numbers.

Here’s how it goes:

- Start with the middle number in the array. Is it bigger or smaller than the number we’re looking for? This tells us whether the number we’re looking for is *left* or *right* of that middle number.
- Remove the part of the array the number is *not* in, i.e. the left or the right part. This effectively cuts the remaining work in half!
- Go back to step one. Take the middle number of what’s left, compare it with the target number, and divide in half again. We’ve reduced the amount of work again!

See how the amount of work we need to do halves with every step the algorithm takes? This means that if we double the amount of data the algorithm needs to process, it only takes one extra step to cut it in half again. If you plot that on a line, you’ll get a logarithmic growth curve, or *O(log n)*.
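The steps above can be sketched in Swift as follows; `binarySearch(for:in:)` is a hypothetical helper name, written for illustration. Each pass through the loop discards half of the remaining range, which is what makes it *O(log n)*.

```
// Binary search: O(log n). Requires a sorted array.
func binarySearch(for target: Int, in numbers: [Int]) -> Int? {
    var lowerBound = 0
    var upperBound = numbers.count - 1

    while lowerBound <= upperBound {
        // Start with the middle number of the remaining range.
        let middle = (lowerBound + upperBound) / 2

        if numbers[middle] == target {
            return middle                // found it
        } else if numbers[middle] < target {
            lowerBound = middle + 1      // discard the left half
        } else {
            upperBound = middle - 1      // discard the right half
        }
    }

    return nil  // target isn't in the array
}
```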

#### Why Big O Notation?

Why is it important to know about big O notation? It’s a relevant discussion for a few reasons:

- Big O notation informs how we think about efficient algorithms
- Big O notation affects what algorithms you decide to use in your work
- Big O notation is a common topic in coding interviews
- (And it’s fun!)

So, why is big O notation useful? Because it informs the way we think about computation. A powerful, fast CPU will run an algorithm faster than a slower CPU – that’s obvious. But as our data set gets larger, an inefficient algorithm will take more and more time to run. That’s where big O notation comes in, because it can show you how much more work an algorithm has to do as the size of the input grows. You can’t just measure that in CPU speed.

Big O notation also helps you make decisions. When you’re asked to solve a problem and you determine it can only be done in *O(n²)*, you can decide to look for an algorithm with better performance. Likewise, big O notation helps you understand the trade-offs between the algorithms and data structures you could choose.

Don’t just learn about big O notation because it’s part of coding interviews. That doesn’t help anyone, and you’d just forget about it once you’re hired (or not). Big O notation can help you brush up your computer science skills, because it’s a good path into learning more about algorithms and computation.


Big O notation can make your head spin, especially because it feels so abstract. The cool thing about *O(1)*, *O(n)*, *O(log n)* and *O(n²)* is that after a while, you start to see them everywhere.

Apple’s documentation for iOS and Swift often mentions the time complexity of operations. You can find out that getting an item from an `Array` by index runs in *O(1)*, and that `sort(_:)` uses a hybrid sorting algorithm that runs in *O(n log n)*. And now you know exactly what that means!

#### Further Reading

Want to learn more? Apple’s Swift documentation on collection types is a good place to see time complexity noted in practice, and any introductory computer science text on algorithms and data structures covers big O notation in more depth.
