Does the size of the core file reflect the memory usage when the application crashed?

Yes, the core file represents a dump of the whole virtual address space the process was using when the crash happened. A 32-bit process therefore cannot produce a core file larger than 4 GB.

Under Solaris, you can use several commands located in /usr/proc/bin to extract information from the core file (a sample session follows the list). In particular:

  • file core : will confirm the core file was produced by your executable
  • pstack core : will show where the process crashed (the stack trace at the time of the dump)
  • pmap core : will show the memory map, i.e. the size of each address range the process used
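
A minimal session sketch, assuming a crashed binary and a core file named core in the current directory (myapp is a hypothetical name):

    $ file core                      # confirm which executable produced the dump
    $ /usr/proc/bin/pstack core      # stack trace of each thread at crash time
    $ /usr/proc/bin/pmap core        # address ranges and their sizes; their total
                                     # should roughly match the core file size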

Among other things, the coreadm command lets you limit the set of data saved in a core file. By default everything is saved:
stack + heap + shm + ism + dism + text + data + rodata + anon + shanon + ctf
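
For example, a sketch of per-process tuning (the PID 1234 is hypothetical; check coreadm(1M) for the exact option set on your Solaris release):

    $ coreadm 1234                    # show the current core-file settings for PID 1234
    $ coreadm -P stack+heap 1234      # per-process: dump only the stack and heap segments
    $ coreadm -P default 1234         # restore the default content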


From the manpage (http://linux.die.net/man/5/core):

The default action of certain signals is to cause a process to terminate and produce a core dump file, a disk file containing an image of the process's memory at the time of termination.
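
As a quick illustration (a sketch; ./myapp is a hypothetical binary, and the core file's name or location may differ depending on /proc/sys/kernel/core_pattern):

    $ ulimit -c unlimited      # remove the shell's limit on core file size
    $ ./myapp                  # hypothetical program that dies on SIGSEGV
    Segmentation fault (core dumped)
    $ ls -lh core              # the file size reflects the dumped memory segments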

If the file is larger than you expected, keep in mind that your application may be multi-threaded (each thread gets its own stack) or may map shared data; both end up in the dump.

Also:

Since kernel 2.6.23, the Linux-specific /proc/PID/coredump_filter file can be used to control which memory segments are written to the core dump file in the event that a core dump is performed for the process with the corresponding process ID.

Reading and adjusting this filter can tell you which kinds of memory segments the application uses and which of them are contributing to the core file's size.
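
A minimal sketch, assuming a target process with the hypothetical PID 1234 (the bitmask values, and the default of 0x33, are documented in core(5)):

    $ cat /proc/1234/coredump_filter            # current bitmask, printed in hex
    00000033
    $ echo 0x03 > /proc/1234/coredump_filter    # dump only anonymous private and
                                                # anonymous shared mappings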