Why isn't integer factorization in complexity P, when you can factorize n in O(√n) steps?

Let $k$ denote the length of the input value $n$.

Since $k=\log_2n$, a complexity of $O(\sqrt{n})$ is equivalent to $O(\sqrt{2^k})$.

Since $\sqrt{2^k}=2^{k/2}$, this complexity is obviously exponential in terms of input-length.
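To make this concrete, here is a minimal sketch of trial division (the function name and step counter are my own, for illustration). It factors $n$ in $O(\sqrt{n})$ arithmetic steps, and the step count roughly squares every time the bit length $k$ doubles, i.e. it grows like $2^{k/2}$:

```python
def trial_division(n):
    """Factor n by trial division, counting loop iterations.

    Takes O(sqrt(n)) steps, which is O(2^(k/2)) in the bit
    length k = log2(n) of the input -- exponential in k.
    """
    factors = []
    steps = 0
    d = 2
    while d * d <= n:
        steps += 1
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors, steps

# The work grows like 2^(k/2): exponential in the input length k.
for k in (16, 24, 32):
    n = (1 << k) - 1  # a k-bit number
    factors, steps = trial_division(n)
    print(f"k={k}: {steps} steps, factors {factors}")
```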


Strictly speaking, it is invalid to ask whether an abstract problem like integer factoring is in P. When deciding whether an algorithm runs in polynomial time, we don't ask about $n$, the number being factored (or equivalent); we ask about the length of the input. The length of the input depends on exactly how you've chosen to represent it. So instead, we have to ask whether integer factoring with a particular representation is in P.

Integer factoring with the numbers represented in binary is (as far as we know) not in P. In this case, the length of the input is $\log_2 n$.

Integer factoring with the numbers represented in unary is in P. In this case, the length of the input is $n$.

Integer factoring with the number represented as a list of the numbers from $1$ to $n$ is in P. In this case, the length of the input is $O(n \log n)$.
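The three representations above give wildly different input lengths for the same $n$. A quick sketch (helper names are my own) that computes each length:

```python
def binary_length(n):
    # Binary needs floor(log2 n) + 1 bits.
    return n.bit_length()

def unary_length(n):
    # Unary writes n as n repeated marks.
    return n

def list_length(n):
    # Listing 1..n, each in binary, costs O(n log n) bits.
    return sum(i.bit_length() for i in range(1, n + 1))

n = 1_000_000
print(binary_length(n))  # 20
print(unary_length(n))   # 1000000
print(list_length(n))    # roughly n * log2(n) bits
```

An algorithm polynomial in the unary or list length can be exponential in the binary length, which is exactly why the choice of representation decides membership in P.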

Why the length of the input? That's just the definition of P.

Typically, when talking about P and NP, we assume that you are using a reasonable representation. What makes a representation reasonable? That's left somewhat underspecified; there is no strict definition. But representing numbers in unary, or as lists of numbers, is clearly not reasonable, as it uses an exponential amount of space compared to binary.

Ok, what about the maximum of $n$ numbers?

The obvious way to represent this would be using $k$ bits for each number, giving $nk$ bits total. That puts the problem in P. In order for the problem to end up outside of P, the length would have to be something more like $O(\log n)$. But a representation that short couldn't hold the numbers in the list. Whatever problem was being solved there, it would not recognizably be finding the maximum of a list.
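With the $nk$-bit representation, a single linear scan already solves the problem in time polynomial in the input length. A minimal sketch (the function name is my own):

```python
def list_max(numbers):
    """Find the maximum by one linear scan.

    With n numbers of k bits each, the input is n*k bits and
    the scan does O(n) comparisons of k-bit values -- linear
    in the input length, so comfortably in P.
    """
    best = numbers[0]
    for x in numbers[1:]:
        if x > best:
            best = x
    return best

print(list_max([17, 3, 42, 8]))  # 42
```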

To some degree it comes down to this: in the integer factorization problem, $n$ can be represented with $\log n$ bits without losing information, so we assume that it is. The same can't be done for the maximum-of-a-list problem.