Temporarily disabling RAM to mimic a lower spec machine?

There's no need to take out RAM, create a RAM disk or use a VM. Simply boot the OS with the maxmem= kernel flag, which exists for exactly this purpose and has been around for decades.
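Depending on your OS version (and whether System Integrity Protection allows writing boot-args), setting the flag in NVRAM may also work as an alternative to editing the plist below:

sudo nvram boot-args="maxmem=2048"

It can be removed again with sudo nvram -d boot-args.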

Open Terminal as a user with sudo rights and enter

sudo nano /Library/Preferences/SystemConfiguration/com.apple.Boot.plist

After entering your password, change

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
        <key>Kernel Flags</key>
        <string></string>
</dict>
</plist>

to

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
        <key>Kernel Flags</key>
        <string>maxmem=2048</string>
</dict>
</plist>

and write the changes to disk with Ctrl-O and quit nano with Ctrl-X.

Restart your Mac to apply the changes.
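To check that the cap is in effect after the reboot, you can query the amount of memory the kernel sees; with maxmem=2048 this should report roughly 2 GiB (hw.memsize is given in bytes):

sysctl hw.memsize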

To revert the changes, remove maxmem=2048 with nano again.


Just create a RAM disk with a size of 2 GiB to reduce the RAM available to the system and running applications.

To get the number of blocks needed to create such a disk, multiply the RAM disk size in MiB by 2048 (hdiutil sizes ram:// devices in 512-byte blocks, and one MiB contains 2048 of them). In your example that's 2048 × 2048 = 4194304.
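For other sizes, the same calculation can be done with shell arithmetic (SIZE_MIB is just a placeholder for the desired RAM disk size in MiB):

SIZE_MIB=2048
echo $((SIZE_MIB * 2048))    # prints 4194304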

Then open Terminal and enter:

diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nomount ram://4194304`

You will get a message similar to this one:

Started erase on disk9  
Unmounting disk  
Erasing  
Initialized /dev/rdisk9 as a 2 GB HFS Plus volume  
Mounting disk  
Finished erase on disk9 RAM Disk  
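Optionally, you can verify that the volume was mounted with the expected size:

df -h /Volumes/RAM\ Disk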

Then use dd with the path to the volume to fill the disk with random data:

dd if=/dev/random of=/Volumes/RAM\ Disk/random.dat bs=1024k

The command writes 1 MiB chunks of random data to the file random.dat on the RAM Disk volume until the volume is filled to capacity (dd then stops with a "No space left on device" error, which is expected).

This should artificially reduce your available RAM by ~2 GiB until you unmount the RAM Disk or restart your Mac.
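When you're done testing, detach the RAM disk to hand the memory back to the system (replace disk9 with the identifier shown in your diskutil output):

hdiutil detach /dev/disk9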

After some testing this doesn't seem to work as reliably as on older systems. The reason is the new memory management in the latest systems (10.9 and up): the memory used by the RAM Disk shouldn't be swapped out, but depending on the quality of the random data the file might be compressed a little. You may increase the RAM Disk size by 5-10% to ~2.1 GiB to get a more realistic picture.
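For example, adding roughly 5% to the 2048 MiB disk gives about 2150 MiB, i.e. 2150 × 2048 = 4403200 blocks (the exact figure is not critical):

diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nomount ram://4403200`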


If you want to do this on 10.5-10.8, the following command seems sufficient to get a reliable result (to get the disk identifier, check the output of the diskutil erasevolume command above):

dd if=/dev/zero of=/dev/rdisk9 bs=1m

Yes - use the memory_pressure tool to apply real memory pressure to the system.

It's not a perfect analogy to removing a memory module, since the virtual memory tuning still knows there is 4 GB of RAM, and the -p percent_free argument won't allocate a constant amount of RAM but will instead keep the system close to X percent free.

It should still allow you to see very quickly whether your workload is comfortable on a system with 2 GB of RAM, even with the imperfect analogy.
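For example, to experiment with the percent-free target described above (the value 50 is only illustrative; check man memory_pressure on your release for the exact options, and stop the tool with Ctrl-C when you're done):

memory_pressure          # without arguments, prints current memory statistics and the pressure level
memory_pressure -p 50    # try to keep the system close to 50 percent free memory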

If you can physically remove the module, you can simulate things first and get a benchmark, then make the hardware change if you need to verify that the simulation is accurate.