Linear time vs. quadratic time

You usually reason about an algorithm in terms of its input size n (for example, when the input is an array or a list). A linear solution to a problem is an algorithm whose execution time scales linearly with n, i.e. roughly x*n + y, where x and y are real constants. Here n appears with a highest exponent of 1: n = n^1.

With a quadratic solution, n appears in a term with 2 as the highest exponent, e.g. x*n^2 + y*n + z.

For large n, the execution time of the linear solution grows much more slowly than that of the quadratic one.
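To make this concrete, here is a minimal sketch (the function names contains and has_duplicate are my own, not from the answer): one function does a single pass over the input, the other compares every pair of elements.

def contains(items, target):
    # Linear: the loop body runs once per element, so roughly x*n + y steps.
    for item in items:
        if item == target:
            return True
    return False

def has_duplicate(items):
    # Quadratic: the nested loops examine every pair of elements,
    # roughly x*n^2 + y*n + z steps.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

Doubling the input size roughly doubles the work for contains, but roughly quadruples it for has_duplicate.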

For more information, look up Big O notation.


You do not specify, but since you mention a solution, it is possible you are asking about quadratic and linear convergence. In that case: if you have an iterative algorithm that generates a sequence of approximations to a limit, then the convergence is quadratic when you can show that

 e(n+1) <= c * e(n)^2

for some positive constant c, where e(n) is the error of the approximation at iteration n. That is to say, the error at iteration n+1 is bounded by a constant times the square of the error at iteration n. For a fuller introduction and more general convergence-rate definitions, see http://en.wikipedia.org/wiki/Rate_of_convergence
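As a quick sketch of what quadratic convergence looks like in practice (this example is mine, not from the answer above): Newton's method for computing sqrt(2) roughly squares the error on each iteration.

import math

# Newton's method for f(x) = x^2 - 2; the root is sqrt(2).
target = math.sqrt(2.0)
x = 1.0                          # initial guess
for n in range(1, 6):
    x = (x + 2.0 / x) / 2.0      # Newton step
    print(n, abs(x - target))    # error e(n); note e(n+1) ~ c * e(n)^2

The number of correct digits roughly doubles on each iteration until floating-point precision is reached.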


They must be referring to run-time complexity, usually expressed in Big O notation. This is an extremely large topic to tackle. I would start with the Wikipedia article: https://en.wikipedia.org/wiki/Big_O_notation

When I was researching this topic, one of the things I learned to do was to graph the runtime of my algorithm with different sized data sets. When you graph the results, you will notice that the line or curve can be classified into one of several orders of growth.
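A rough sketch of that idea (my_algorithm here is just a placeholder for whatever you want to measure): time the function at several input sizes, then plot size against runtime.

import time

def my_algorithm(data):
    # Placeholder: substitute the algorithm you want to measure.
    return sorted(data)

for n in [1000, 10000, 100000]:
    data = list(range(n, 0, -1))
    start = time.perf_counter()
    my_algorithm(data)
    elapsed = time.perf_counter() - start
    print(n, elapsed)            # plot these (n, elapsed) points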

Understanding how to classify the runtime complexity of an algorithm gives you a framework for understanding how it will scale in terms of time or memory, and the ability to loosely compare and classify algorithms against each other.

I'm no expert but this helped me get started down the rabbit hole.

Here are some typical orders of growth:

  • O(1) - constant time
  • O(log n) - logarithmic
  • O(n) - linear time
  • O(n^2) - quadratic
  • O(2^n) - exponential
  • O(n!) - factorial
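To get a feel for how differently these classes grow, here is a small sketch (my own, purely illustrative) that prints each function's value at a few sizes:

import math

# Value of each growth function at a few sizes: 1, log n, n, n^2, 2^n, n!
for n in [5, 10, 20]:
    print(n, 1, round(math.log2(n), 1), n, n**2, 2**n, math.factorial(n))

Even at n = 20, the exponential and factorial terms dwarf the others, which is why the order of growth dominates everything else for large inputs.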

If the Wikipedia article is difficult to swallow, I highly recommend watching some lectures on the subject on iTunes University and looking into the topics of algorithm analysis, big-O notation, data structures, and even operation counting.

Good luck!


A method is linear when the time it takes increases linearly with the number of elements involved. For example, a for loop that prints each element of a sequence is roughly linear:

for x in range(10):
    print(x)

because if we loop over range(100) instead of range(10), it will take roughly 10 times longer to run. You will very often see that written as O(N), meaning that the time or computational effort to run the algorithm is proportional to N.

Now, let's say we want to print the pairs produced by two nested for loops:

for x in range(10):
    for y in range(10):
        print(x, y)

For every x, the loop over y runs 10 times. As a result, the whole thing goes through 10x10=100 prints (you can see them just by running the code). If I use 100 instead of 10, the method will do 100x100=10000 prints. In other words, the method goes as O(N*N) or O(N²), because every time you increase the number of elements, the computational effort or time increases as the square of that number.
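If you want to verify this without counting print lines by hand, here is a small sketch (my own) that counts the iterations instead of printing them:

for n in [10, 100]:
    count = 0
    for x in range(n):
        for y in range(n):
            count += 1
    print(n, count)              # prints 10 100, then 100 10000

Going from n = 10 to n = 100 multiplies the count by 100, i.e. the work grows as the square of n.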