What is a "core" file, and does it count against my quota?

A core file is created when a program crashes with a run-time error such as a segmentation fault. It is an image of the memory the program was using, and a debugger such as gdb can read it to find out the state of the program at the moment of the error.
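For example, if a program called "myprog" crashed and left a file named "core" in the current directory, you could examine it like this (the program name here is just an example):

gdb myprog core

Once inside gdb, the "bt" (backtrace) command shows which function the program was executing when it crashed.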

Core files *are* counted against your quota. They usually contain large "holes", however, so they take up less disk space than they appear to. Compare the output of "ls -l", which shows the apparent length of the file, with "ls -s", which shows the disk space actually used.

For example, suppose you opened a random-access file for writing, wrote the first record and the 100,000th record, and then closed it. The system uses only a couple of disk blocks to hold the file, even though it is, in some sense, 100,000 records long.
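You can see the same effect by creating a sparse file by hand with dd and comparing the two listings (the filename "sparse.dat" is just an example):

dd if=/dev/zero of=sparse.dat bs=1 count=1 seek=1000000
ls -l sparse.dat
ls -s sparse.dat

Here "ls -l" reports a file of roughly a megabyte, while "ls -s" shows that only a block or two of disk space is actually allocated to it.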

The default .cshrc contains a line that reads "limit coredumpsize 0". This sets the maximum size of a core file to zero, which prevents large core files from using up your quota and disrupting your work.
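You can check the current setting from a csh or tcsh prompt with

limit coredumpsize

which should report a limit of 0 kbytes if the line is in effect.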

When you are using X Windows, core dumps are not limited if the program that crashed was started directly by your .xsession or by the window manager, since those programs never run under a shell that reads your .cshrc. To limit these core files as well, put the command

ulimit -c 0

into your .xsession file. Do not remove the limit command from your .cshrc; it is still needed when you log in to the system normally.
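You can verify the setting from an sh-style shell with

ulimit -c

which prints 0 when core dumps are disabled.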

If you do want core files for debugging, remove or comment out the limit lines in your .cshrc and .xsession. Note that a core file may be overwritten at any time by a newer one, so rename any core file you intend to keep. Otherwise, core files can safely be deleted.
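If you only need core files occasionally, you can instead raise the limit by hand for your current shell and rename any core file you want to keep (the name "core.myprog" is just an example):

limit coredumpsize unlimited
mv core core.myprog

In an sh-style shell the equivalent of the first command is "ulimit -c unlimited".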

© Computing and Educational Technology Services