SSAS Model Refresh - Not enough memory to complete this operation Error

TL;DR: Add more memory, reduce the size of your model(s), and/or move either the SQL Server service or the SQL Server Analysis Services service to a different server (i.e. scale out).

Longer Explanation: We went through this exercise a few months back with our Tabular SSAS production server, and actually reached out to Microsoft for "formal" recommendations as our Infrastructure team was being stingy with RAM (which I can understand as it's not exactly cheap). Just for clarity's sake, the error we ran into was as follows:

The operation has been cancelled because there is not enough memory available for the application. If using a 32-bit version of the product, consider upgrading to the 64-bit version or increasing the amount of memory available on the machine.

Our server was originally set up with 64GB of memory and was hosting 2 SSAS models totaling 40GB in size. No other SQL Server services were hosted on this machine. Some days our models would process without issue, but most days they would fail. We would reboot the server and then maybe they would succeed... if the wind was just right and the stars and planets all aligned.

Unlike Multidimensional (MOLAP/ROLAP/HOLAP) models, Tabular models in the default in-memory (VertiPaq) mode are loaded entirely into memory. If the model(s) cannot be loaded entirely into memory, you run into problems.
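
As a general pointer (this is not from our Microsoft ticket): you can see how much memory a deployed model and its objects actually consume by querying the $SYSTEM.DISCOVER_OBJECT_MEMORY_USAGE DMV against the SSAS instance from a query window in SSMS.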

Sadly, Microsoft's documentation breaks down when it comes to memory recommendations, as I cannot find any formal document providing anything other than the "minimum" levels needed just to run the service. From the support ticket we filed, Microsoft's recommendations were as follows:

For a model of size X, provision between 2X and 10X of RAM on the SSAS server for use by the SSAS service. The exact multiple is influenced by the following factors:

  • Cube processing requires 2X - 3X RAM for full processing, which includes a shadow copy of the model held in memory while the new copy is built.
  • The number of users/reports connected to the cube also increases RAM requirements, at times up to 10X, depending on the number of reports, query volume, etc. Users/reports can generate DAX queries that perform calculations or memory materialization (which causes the engine to build an intermediate, uncompressed result and can push memory consumption higher than expected).
  • The number of models being processed at the same time also increases the memory footprint required.
  • Enable the VertiPaqPagingPolicy if the setting is disabled, so SSAS can use the OS paging file for additional memory, at the cost of processing and query performance.
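
To put that in perspective with our own numbers: our two models totaled roughly 40GB, so the 2X - 3X processing guideline alone implies 80-120GB of RAM just for full processing, before any query load is factored in. Our original 64GB was below even the low end of that range, which explains the intermittent failures.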

What we ended up doing was increasing the amount of RAM on our server, which solved our issues for the time being. The only other real alternative "solutions" are to limit the amount of data you need in your model(s) or to scale out your deployment (i.e. move services to other servers).

What I suspect is happening in your case is that your SSAS service is running out of memory because your SQL Server service is hosted on the same server. Basically, you need to either segregate these services from one another or have enough RAM on the server to let them run in parallel. I would highly suggest moving your SSAS services to a different server if possible, but licensing challenges may get in the way, in which case be sure to have enough RAM for both.

Other things you can fiddle with are the configuration settings in the msmdsrv.ini file, but in our scenario these didn't make any significant difference to the eventual outcome of running out of memory.
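
For reference, the memory-related settings in that file include VertiPaqPagingPolicy plus the memory limit properties (LowMemoryLimit, TotalMemoryLimit, HardMemoryLimit and VertiPaqMemoryLimit); for the limit properties, values between 1 and 100 are read as percentages of total physical memory. Treat these as general pointers rather than a tuning recipe, since tweaking them did not save us from having to add RAM.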


I had a similar problem, and until we started using Azure Analysis Services I had to find a workaround with SSAS Standard Edition, which can only allocate 16GB of memory regardless of whether your on-prem server has 64GB.

If you are using Enterprise Edition, I suggest looking into creating partitions in the tables of your data model and only refreshing the partitions whose records have been updated recently (a partition-level refresh example follows the script below). Otherwise, if you are running Standard Edition, refresh your model in two or more parts. For example, put 19 tables in one job and the rest in another, depending on the size of your tables, and try to balance the load between the jobs.

{
  "refresh": {
    "type": "full",
    "objects": [
      { "database": "AdventureWorks", "table": "A" },
      { "database": "AdventureWorks", "table": "B" },
      { "database": "AdventureWorks", "table": "C" },
      { "database": "AdventureWorks", "table": "D" },
      { "database": "AdventureWorks", "table": "E" },
      ...
    ]
  }
}
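
If you do go the Enterprise Edition partitioning route, the same TMSL refresh command can also target individual partitions instead of whole tables. A minimal sketch, assuming a hypothetical table named "FactSales" with a partition named "FactSales Current Year":

{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "AdventureWorks",
        "table": "FactSales",
        "partition": "FactSales Current Year"
      }
    ]
  }
}

Only the named partition is reprocessed, which keeps the processing memory overhead closer to the size of that partition than to the size of the whole table.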

I think a better solution would be to use one of the S-tier plans on Azure Analysis Services, depending on the size of your data model, and then use table partitions.