why "make" before "make install"

There are times I want to compile my code changes but not deploy them. For instance, if I'm hacking on the Asterisk C code base and want to make sure my changes still compile, I'll save and run make. However, I don't want to deploy those changes, because I'm not done coding.

For me, running make is just a way to make sure I don't end up with so many compile errors in my code that I have trouble locating them. Perhaps more experienced C programmers don't have that problem, but for me, compiling often limits the number of changes since the last build that could have trashed it, and that makes debugging easier.

Lastly, this also gives me a stopping point. If I want to go to lunch, I know that someone can restart the application in its currently working state without having to come find me, since only make install would copy the binaries over to the actual application folder.

There may very well be other reasons, but this is my reason for embracing the fact that the two commands are separated. As others have said, if you want them combined, you can combine them using your shell.
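For example, the shell's && operator runs the second command only if the first one succeeds, so a broken build never gets installed:

    make && sudo make install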


A lot of software these days will do the right thing with only make install. In those that won't, it's because the install target doesn't depend on the compiled binaries. So most people use make && make install, or a variation thereof, just to be safe.
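As a sketch of the failure mode (the file and target names here are invented): with an install rule like the one below, make install on its own just copies whatever binary is already sitting in the build directory, stale or missing, without rebuilding it first:

    # hypothetical Makefile fragment; recipe lines must start with a tab
    myprog: myprog.c
    	cc -o myprog myprog.c

    install:
    	cp myprog /usr/local/bin/myprog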


make without parameters reads ./Makefile (or ./makefile) and builds its first target. By convention, this is the all target, but not necessarily. make install builds the target named install. By convention, this takes the results of make all and installs them on the current computer.
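A minimal sketch of that convention (file names invented):

    all: hello            # first target, so a bare 'make' builds this

    hello: hello.c
    	cc -o hello hello.c

    install:              # only runs when requested by name
    	install -m 755 hello /usr/local/bin/hello

With this Makefile, make and make all do the same thing, and make install copies the result into place.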

Not everybody needs make install. For example, if you build a web app to be deployed on a different server, or if you use a cross-compiler (e.g. you build an Android application on a Linux machine), it makes no sense to run make install.
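For instance, a cross-build might look like this (the toolchain prefix and device name are made-up examples); installing into the build machine's /usr/local would be pointless, so you copy the artifact to the target by hand instead:

    make CC=aarch64-linux-gnu-gcc
    scp hello target-device:/usr/local/bin/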

In most cases, the single line ./configure && make all install will be equivalent to the three-step process you describe, but this depends on the product and on your specific needs, and again, it is only a convention.
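Spelled out, assuming an autoconf-style project (and noting that make processes the goals you name in order, so all runs before install):

    # three steps
    ./configure
    make
    make install

    # one line
    ./configure && make all install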


When you run make, you're telling it to follow a set of build steps for a particular target. When make is called with no parameters, it builds the first target, which usually just compiles the project. make install builds the install target, which usually does nothing more than copy binaries into their destinations.
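A small sketch of that mapping (file names invented):

    hello: hello.c        # first target: what a bare 'make' runs
    	cc -o hello hello.c

    install: hello        # builds hello first if needed, then copies it
    	install -m 755 hello /usr/local/bin/hello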

Frequently, the install target depends upon the compilation target, so you can get the same results by just running make install. However, I can see at least one good reason to do them in separate steps: privilege separation.

Ordinarily, when you install your software, it goes into locations where ordinary users do not have write access (like /usr/bin and /usr/local/bin). Often, then, you end up actually having to run make and then sudo make install, as the install step requires a privilege escalation. This is a "Good Thing™": it allows your software to be compiled as a normal user (which actually makes a difference for some projects), limits the scope of potential damage from a badly behaved build procedure, and obtains root privileges only for the install step.
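In practice that looks like this; and if you'd rather avoid root entirely, an autoconf-style --prefix pointing somewhere you own works too:

    # compile as an unprivileged user
    make
    # escalate only for the step that writes to /usr/local
    sudo make install

    # alternative: install into a directory you own, no sudo needed
    ./configure --prefix="$HOME/.local" && make && make install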