# Introduction

A class was given two quizzes. Upon surveying the students in the class, most said that the second quiz was "harder" than the first. What makes one test harder than another? Here are some possible answers:
• There were questions that took a very long time: they required many calculations and a lot of thinking. In short, they were time consuming.
• There were questions that required a lot of writing: the answers were easy but needed a lot of paper to write down. In short, they were space consuming.
Problems to be solved on a computer are not very different when it comes to comparing them. They are either unsolvable or solvable, and if they are solvable, the algorithms needed to solve them may differ in how much time and how much space they need. Thus, the difficulty of a solvable problem can be measured or estimated by figuring out how much time and how much space will be needed to solve it. These attributes are called the space and time complexity of an algorithm. We will talk about these later, but first we must introduce the notion of problem size.

# Problem Size

When one writes a program, it is usually intended to solve a number of similar problems. For example, when one writes a program to compute the average test scores for a particular class, the same program could be used to average out the scores of other tests in the same course, or even in other courses. The only difference between the different uses of that program may be the "size" of the class. Averaging out the test scores for a 200-student class involves more work than averaging out the test scores for a 100-student class. The number of students in the class (i.e. the number of test scores that must be averaged out) is an example of a problem size.

To evaluate the complexity of a program is to study how the space/time requirements of the program change with the problem size.

# Space Complexity

Unlike humans, computers don't use "paper"; they use "memory". Thus, the amount of space a computer needs to solve a problem (through the execution of a program) is measured by the least number of bits (or bytes) it has to use.

Let's look at an example. Suppose we want to write a program to compute the sum of a list of numbers. The program goes into a loop (for a predetermined number of iterations equal to the size of the list, e.g. 100 or 200) asking the user to enter a number each time and adding these numbers up. What is the space complexity of that program?
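The summing program described above can be sketched as follows. This is a minimal sketch: in an interactive setting the numbers would come from the user (e.g. via `input()`), but here they are taken from a list so the example is self-contained.

```python
def sum_numbers(numbers):
    total = 0          # running sum, initially 0
    for x in numbers:  # one iteration per number in the list
        total = total + x
    return total

print(sum_numbers([3, 1, 4, 1, 5]))  # -> 14
```

Note that the function body is the same no matter how long the list is; only the number of loop iterations changes.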

There are four simple steps that one should follow:

• Identify the parameter(s) that determine the problem size,
• Find out how much space (i.e. memory) is needed for a particular size,
• Find out how much space (i.e. memory) is needed for twice the size considered earlier,
• Repeat step 3 many times until you establish a relationship between the size of the problem and its space requirements. That relationship is the space complexity of that program.
Now, let's apply the above steps to the example at hand.
• The problem size is obviously the size of the list of numbers to be added.
• Let's assume that we have a list of 10 numbers. Obviously, we need a variable where the numbers are to be entered (one at a time), and a variable (initially 0) where a running sum is to be kept. Thus we need two variables.
• Let's assume that we have a list of 20 numbers (twice as before). Obviously, it doesn't matter; we still need only two variables.
• Let's assume that we have a list of 40 numbers (twice as before). Obviously, it still doesn't matter; we still need only two variables.
• Obviously, we may conclude that no matter how many numbers we want to add, we still need a constant number of variables, thus we say that the relationship between problem size and space (i.e. the space complexity of the program) is constant.
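The conclusion above can be checked directly in code. In the sketch below, a hypothetical summing function uses exactly two working variables (the running sum and the current number) besides its input parameter, and Python's `co_nlocals` attribute, which reports the number of local variable slots a function was compiled with, confirms that this count is fixed: it is the same whether we sum 10, 20, or 40 numbers.

```python
def sum_numbers(numbers):
    total = 0          # variable 1: the running sum
    for x in numbers:  # variable 2: the current number
        total = total + x
    return total

# The number of local variable slots is decided at compile time,
# so it cannot depend on the size of the input list.
print(sum_numbers.__code__.co_nlocals)  # -> 3 (numbers, total, x)
```

The count of 3 includes the parameter `numbers` itself; the point is that it is a constant, independent of the problem size.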

# Time Complexity

Again, let's look at the same example. Suppose we want to write a program to compute the sum of a list of numbers. The program goes into a loop (for a predetermined number of iterations equal to the size of the list, e.g. 100 or 200) asking the user to enter a number each time and adding these numbers up. What is the time complexity of that program?

There are four simple steps that one should follow:

• Identify the parameter(s) that determine the problem size,
• Find out how much time (approximately) is needed for a particular size,
• Find out how much time is needed for twice the size considered earlier,
• Repeat step 3 many times until you establish a relationship between the size of the problem and its time requirements. That relationship is the time complexity of that program.
Now, let's apply the above steps to the example at hand.
• The problem size is obviously the size of the list of numbers to be added.
• Let's assume that we have a list of 10 numbers, and that it takes one microsecond to do one addition. Then we need 10 microseconds to add the 10 numbers, since each iteration performs one addition.
• Now let's assume that we have a list of 20 numbers (twice as before). Obviously, now we need 20 microseconds to finish adding all 20 numbers.
• Now let's assume that we have a list of 40 numbers (twice as before). Obviously, now we need 40 microseconds to finish adding all 40 numbers.
• Obviously, we may conclude that the time we need to compute the total of a list of numbers grows in proportion to the size of that list. In other words, doubling the size of the list doubles the time needed to add the list. We call this a linear relationship. Thus the time complexity of the program is linear.
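Real clock time varies from machine to machine, so a sketch can make the same point by counting additions instead of measuring microseconds (following the assumption above that each iteration costs one addition). The function below is hypothetical, written only to make the count visible.

```python
def count_additions(numbers):
    total = 0
    additions = 0
    for x in numbers:
        total = total + x
        additions += 1   # one addition of a list element per iteration
    return total, additions

_, a10 = count_additions(list(range(10)))
_, a20 = count_additions(list(range(20)))
_, a40 = count_additions(list(range(40)))
print(a10, a20, a40)  # -> 10 20 40: doubling the size doubles the work
```

Whatever the cost of a single addition turns out to be on a given machine, the total work scales with the size of the list, which is exactly the linear relationship described above.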

# Some Common Relationships

In the above analysis we have encountered two types of relationships: constant and linear. What other possibilities exist? There are many; here we focus on some of the most important and common ones.

[ ... To be Continued ... ]

Created on: April 13, 1995.