PHP exec() performance

@Philipp, since you have SSH access and your server allows exec(), I will assume that you also have full root access to the machine.

Recommended for single file processing

Having root access to the machine means that you can raise the memory_limit setting in /etc/php5/php.ini.

Even without direct access to /etc/php5/php.ini, you could check whether your server supports overriding php.ini directives by creating a new php.ini file in your project's directory.
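For example, a one-line php.ini placed in the project directory (this only takes effect under CGI/FastCGI-style setups that scan for per-directory php.ini files; 256M is just an example value):

```ini
; hypothetical per-directory override -- verify your SAPI honors it
memory_limit = 256M
```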

Even if those overrides are not permitted, you can change the memory settings from .htaccess, provided AllowOverride is set to All.
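A minimal .htaccess fragment for that case (this works only when PHP runs as an Apache module, i.e. mod_php; 256M is an example value):

```apacheconf
# php_value only works under mod_php, not CGI/FastCGI
php_value memory_limit 256M
```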

Yet another way to change the memory limit is at runtime, using ini_set('memory_limit', '256M'); (note the 'M' suffix: a bare integer such as 256 would be interpreted as 256 bytes).
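A minimal runtime sketch (256M is an example value; pick whatever your conversion actually needs):

```php
<?php
// Raise the limit at runtime, before the memory-hungry image work.
// Note the 'M' suffix: a bare integer would be read as bytes.
ini_set('memory_limit', '256M');
echo ini_get('memory_limit'); // prints: 256M
```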

Recommended for batch file processing

Running convert via exec() really only pays off if you don't need a result back from exec() and can let the command run asynchronously:

exec('convert --your-convert-options > /dev/null 2>/dev/null &');

The approach above is useful when you're batch processing many files, don't want to wait for them to finish, and don't need confirmation that each one has been processed.
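A quick way to see the effect of the trailing & and the redirections, using sleep as a stand-in for a long-running convert:

```php
<?php
// With "> /dev/null 2>&1 &" the shell backgrounds the command,
// so exec() returns without waiting for it to finish.
$start = microtime(true);
exec('sleep 2 > /dev/null 2>&1 &');
$elapsed = microtime(true) - $start;
echo $elapsed < 1 ? "returned immediately" : "blocked for {$elapsed}s";
```

Without the redirections, exec() would wait for the command's output stream to close even with the &, so all three parts matter.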

Performance notes

Using the code above to make exec() run asynchronously for a single file costs more processor time and more memory than using GD/Imagick within PHP. The work happens in a separate process that does not block the PHP process (which makes the site feel faster to visitors), but the memory consumption is still there, and once you're handling many concurrent connections, it will count.


I've been programming computers for over 56 years, but this is the first time I've encountered a bug like this. I spent nearly a week trying to understand the 7x slowdown when executing a Perl program from PHP via exec() versus running the same program directly at the command line. As part of this effort, I also pored over every discussion of this issue I could find on the web. Here's what I found:

(1) This is a bug that was first reported in 2002 and has not been fixed in the 11 years since.

(2) The bug is related to the way Apache interacts with PHP, so both of those projects pass the buck to the other.

(3) The bug is the same with exec, system, or any of the alternatives.

(4) The bug doesn't depend on whether the exec'd program is Perl, a native executable, or anything else.

(5) The bug is the same on UNIX and Windows.

(6) The bug has nothing to do with ImageMagick or with images in general. I encountered it in a completely different setting.

(7) The bug has nothing to do with startup times for fork, shell, bash, or the like.

(8) The bug is not fixed by changing the owner of the Apache service.

(9) I'm not sure, but I think it relates to vastly increased overhead in calling subroutines.

When I encountered this problem, I had a Perl program that executed in 40 seconds directly but took 304 seconds through exec(). My ultimate solution was to optimize the program so that it ran in 0.5 seconds directly, or 3.5 seconds through exec(). So I never did solve the underlying problem.


This is not a PHP bug and has nothing to do with Apache/Nginx or any webserver.

I had the same issue lately and had a look at the PHP source code to check the exec() implementation.

Essentially, PHP's exec() calls libc's popen() function.

The offender here is C's popen(), which seems to be very slow. A quick Google search for "c popen slow" will turn up many questions like yours.

I also found that someone implemented a function called popen_noshell() in C to overcome this performance problem:

https://blog.famzah.net/2009/11/20/a-much-faster-popen-and-system-implementation-for-linux/

Here is a screenshot showing the speed difference between popen() and popen_noshell():

[Screenshot: benchmark comparing popen() and popen_noshell() run times and CPU usage]

PHP's exec() uses the regular popen(), the one on the right of the screenshot above. As you can see, the system CPU time consumed by C's popen() is very high.

I see 2 solutions to this problem:

  1. Create a PHP extension that implements popen_noshell
  2. Request that the PHP team create a new set of functions popen_noshell(), exec_noshell(), etc., which I guess is unlikely to happen...
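As a side note, since PHP 7.4 there is a built-in way to skip the shell entirely: proc_open() accepts the command as an array, in which case the program is executed directly rather than through /bin/sh. A minimal sketch (using /bin/echo as a stand-in for a real command):

```php
<?php
// Passing an array (PHP >= 7.4) execs the program directly --
// no intermediate shell, so none of popen()'s shell overhead.
$proc = proc_open(['/bin/echo', 'hello'], [1 => ['pipe', 'w']], $pipes);
$out = stream_get_contents($pipes[1]);
fclose($pipes[1]);
proc_close($proc);
echo trim($out); // prints: hello
```

This is close in spirit to popen_noshell(), without needing a custom extension.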

Additional note:

While searching for this, I discovered the PHP function with the same name as C's: popen()

It is interesting because one can execute an external command asynchronously with: pclose(popen('your command', 'r'));

Which essentially has the same effect as exec('your command &');


When you log in to a Unix machine, either at the keyboard, or over ssh, you create a new instance of a shell. The shell is usually something like /bin/sh or /bin/bash. The shell allows you to execute commands.

When you use exec(), it also creates a new instance of a shell. That instance executes the commands you sent to it, and then exits.

When you create a new shell instance, it has its own environment variables. So if you do this:

exec('env MAGICK_THREAD_LIMIT=1');
exec('/usr/local/bin/convert 1.pdf -density 200 -quality 85% 1.jpg');

Then you create two shells, and the setting made in the first shell never reaches the second. To get the environment variable into the second shell, you need something like this:

exec('env MAGICK_THREAD_LIMIT=1; /usr/local/bin/convert 1.pdf -density 200 -quality 85% 1.jpg');
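Another option, if you'd rather not build the prefix into the command string: putenv() modifies the environment of the PHP process itself, which the child shell then inherits. A sketch (echoing the variable back as a stand-in for running convert):

```php
<?php
// Variables set with putenv() are inherited by processes started
// from this PHP process, including the shell that exec() spawns.
putenv('MAGICK_THREAD_LIMIT=1');
exec('echo "$MAGICK_THREAD_LIMIT"', $lines);
echo $lines[0]; // prints: 1
```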

Now, if you think the shell itself may be the problem because it takes too long to start, test it with something that you know takes almost no time:

$starttime = microtime(true);
exec('echo hi');
$endtime = microtime(true);
$time_taken = $endtime - $starttime;
echo "Spawning a shell took {$time_taken} seconds\n";

If that already takes noticeable time, you know to look for some way to make the shell start faster.

Hope this helps!