XFS FAQ
Info from: main XFS faq at SGI
Many thanks to earlier maintainers of this document - Thomas Graichen and Seth Mos.
Q: Where can I find documentation about XFS?
The SGI XFS project page http://oss.sgi.com/projects/xfs/ is the definitive reference. It contains pointers to whitepapers, books, articles, etc.
You could also join the XFS mailing list or the #xfs IRC channel on irc.freenode.net.
Q: Where can I find documentation about ACLs?
Andreas Gruenbacher maintains the Extended Attribute and POSIX ACL documentation for Linux at http://acl.bestbits.at/
The acl(5) manual page is also quite extensive.
Q: Where can I find information about the internals of XFS?
An SGI XFS Training course (training/index.html) aimed at developers, triage and support staff, and serious users has been in development. Parts of the course are clearly still incomplete, but there is enough content to be useful to a broad range of users.
Barry Naujok has documented the XFS on-disk format (papers/xfs_filesystem_structure.doc), which is a very useful reference.
Q: What partition type should I use for XFS on Linux?
Linux native filesystem (83).
Q: What mount options does XFS have?
There are a number of mount options influencing XFS filesystems - refer to the mount(8) manual page or the documentation in the kernel source tree itself (Documentation/filesystems/xfs.txt)
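For illustration, a mount invocation with a couple of options might look like this (the device, mount point and option values here are only placeholders - pick options that suit your workload):
# mount -t xfs -o noatime,logbufs=8 /dev/sdb1 /mnt/data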
Q: Is there any relation between the XFS utilities and the kernel version?
No, there is no relation. Newer utilities tend to mainly have fixes and checks the previous versions might not have. New features are also added in a backward compatible way - if they are enabled via mkfs, an incapable (old) kernel will recognize that it does not understand the new feature, and refuse to mount the filesystem.
Q: Does it run on platforms other than i386?
XFS runs on all of the platforms that Linux supports. It is more tested on the more common platforms, especially the i386 family. It's also well tested on the IA64 platform, since that's the platform SGI Linux products use.
Q: Quota: Do quotas work on XFS?
Yes.
To use quotas with XFS, you need to enable XFS quota support when you configure your kernel. You also need to specify quota support when mounting. You can get the Linux quota utilities at their sourceforge website http://sourceforge.net/projects/linuxquota/ or use xfs_quota(8).
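As a rough sketch of what this looks like in practice (the device, mount point, user name and limits below are placeholders for this example):
# mount -t xfs -o uquota /dev/sdb1 /home
# xfs_quota -x -c 'limit bsoft=500m bhard=600m alice' /home
# xfs_quota -x -c 'report -h' /home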
Q: Quota: What's project quota?
Project quota is a quota mechanism in XFS that can be used to implement a form of directory tree quota, where a specified directory and all of the files and subdirectories below it (i.e. a tree) can be restricted to using a subset of the available space in the filesystem.
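A minimal project quota setup might look roughly like the following (the project name "website", the project ID 42, the device and the paths are made up for this example):
# mount -t xfs -o prjquota /dev/sdb1 /srv
# echo "42:/srv/projects/website" >> /etc/projects
# echo "website:42" >> /etc/projid
# xfs_quota -x -c 'project -s website' /srv
# xfs_quota -x -c 'limit -p bhard=10g website' /srv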
Q: Quota: Can group quota and project quota be used at the same time?
No, project quota cannot be used with group quota at the same time. On the other hand user quota and project quota can be used simultaneously.
Q: Quota: Does unmounting a filesystem mounted with prjquota (project quota) and remounting it with grpquota (group quota) remove previously set project quota limits (and vice versa)?
To be answered.
Q: Are there any dump/restore tools for XFS?
xfsdump(8) and xfsrestore(8) are fully supported. The tape format is the same as on IRIX, so tapes are interchangeable between operating systems.
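For example, to dump a filesystem mounted on /home to tape and restore it again (the tape device and mount point are placeholders; xfsdump can also write to a regular file given with -f):
# xfsdump -l 0 -f /dev/st0 /home
# xfsrestore -f /dev/st0 /home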
Q: Does LILO work with XFS?
This depends on where you install LILO.
Yes, for MBR (Master Boot Record) installations.
No, for root partition installations because the XFS superblock is written at block zero, where LILO would be installed. This is to maintain compatibility with the IRIX on-disk format, and will not be changed.
Q: Does GRUB work with XFS?
There is native XFS filesystem support for GRUB starting with version 0.91 and onward. Unfortunately, GRUB used to make incorrect assumptions about being able to read a block device image while a filesystem is mounted and actively being written to, which could cause intermittent problems when using XFS. This has reportedly since been fixed, and the 0.97 version (at least) of GRUB is apparently stable.
Q: Can XFS be used for a root filesystem?
Yes.
Q: Will I be able to use my IRIX XFS filesystems on Linux?
Yes. The on-disk format of XFS is the same on IRIX and Linux. Obviously, you should back up your data before trying to move it between systems. Filesystems must be "clean" when moved (i.e. unmounted). If you plan to use IRIX filesystems on Linux, keep the following points in mind:
- the kernel needs to have SGI partition support enabled;
- there is no XLV support in Linux, so you cannot read IRIX filesystems which use the XLV volume manager;
- not all blocksizes available on IRIX are available on Linux - only blocksizes less than or equal to the pagesize of the architecture (4k for i386, ppc, ...; 8k for alpha, sparc, ...) are possible for now;
- make sure that the directory format is version 2 on the IRIX filesystems (this is the default since IRIX 6.5.5) - Linux can only read v2 directories.
Q: Is there a way to make an XFS filesystem larger or smaller?
You can NOT make an XFS filesystem smaller online. The only way to shrink is to do a complete dump, mkfs and restore.
An XFS filesystem may be enlarged by using xfs_growfs(8).
If using partitions, you need to have free space after the partition to do so: remove the partition, recreate it larger with the exact same starting point, then run xfs_growfs to make the filesystem larger. Note - editing partition tables is a dangerous pastime, so back up your filesystem before doing so.
Using XFS filesystems on top of a volume manager makes this a lot easier.
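As a sketch, assuming the underlying device (partition or logical volume) has already been enlarged and the filesystem is mounted at a hypothetical /mnt/data:
# xfs_growfs /mnt/data
This grows the filesystem to fill the device; with -D you can instead give an explicit new size in filesystem blocks.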
Q: What information should I include when reporting a problem?
Things to include are which version of XFS you are using (and, for a CVS version, the checkout date) and which version of the kernel. If you have problems with userland packages, please report the version of the package you are using.
If the problem relates to a particular filesystem, the output from the xfs_info(8) command and any mount(8) options in use will also be useful to the developers.
If you experience an oops, please run it through ksymoops so that it can be interpreted.
If you have a filesystem that cannot be repaired, make sure you have xfsprogs 2.9.0 or later and run xfs_metadump(8) to capture the metadata (which obfuscates filenames and attributes to protect your privacy) and make the dump available for someone to analyse.
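For example, with the filesystem unmounted (the device name and output file are placeholders):
# xfs_metadump /dev/sdXXX /tmp/sdXXX.metadump
# bzip2 /tmp/sdXXX.metadump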
Q: Mounting a XFS filesystem does not work - what is wrong?
If mount prints an error message something like:
mount: /dev/hda5 has wrong major or minor number
you either do not have XFS compiled into the kernel (or you forgot to load the modules) or you did not use the "-t xfs" option on mount or the "xfs" option in /etc/fstab.
If you get something like:
mount: wrong fs type, bad option, bad superblock on /dev/sda1, or too many mounted file systems
Refer to your system log file (/var/log/messages) for a detailed diagnostic message from the kernel.
Q: Does the filesystem have an undelete capability?
There is no undelete in XFS. Always keep backups.
Q: How can I backup a XFS filesystem and ACLs?
You can back up an XFS filesystem with utilities like xfsdump(8), or with standard tar(1) for ordinary files. If you want to back up ACLs you will need to use xfsdump; it is the only tool at the moment that supports backing up extended attributes. xfsdump can also be integrated with amanda(8).
Q: I see applications returning error 990 or "Structure needs cleaning", what is wrong?
The error 990 stands for EFSCORRUPTED which usually means XFS has detected a filesystem metadata problem and has shut the filesystem down to prevent further damage. Also, since about June 2006, we converted from EFSCORRUPTED/990 over to using EUCLEAN, "Structure needs cleaning."
The cause can be pretty much anything, unfortunately - filesystem, virtual memory manager, volume manager, device driver, or hardware.
There should be a detailed console message when this initially happens. The messages have important information giving hints to developers as to the earliest point that a problem was detected. It is there to protect your data.
Q: Why do I see binary NULLS in some files after recovery when I unplugged the power?
Update: This issue has been addressed with a CVS fix on the 29th March 2007 and merged into mainline on 8th May 2007 for 2.6.22-rc1.
XFS journals metadata updates, not data updates. After a crash you are supposed to get a consistent filesystem which looks like the state sometime shortly before the crash, NOT what the in memory image looked like the instant before the crash.
Since XFS does not write data out immediately unless you tell it to with fsync, an O_SYNC or O_DIRECT open (the same is true of other filesystems), you are looking at an inode which was flushed out, but whose data was not. Typically you'll find that the inode is not taking any space since all it has is a size but no extents allocated (try examining the file with the xfs_bmap(8) command).
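For example (the path is a placeholder), a file in this state will typically show no allocated extents:
# xfs_bmap -v /path/to/affected/file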
Q: What is the problem with the write cache on journaled filesystems?
Many drives use a write back cache in order to speed up the performance of writes. However, there are conditions such as power failure when the write cache memory is never flushed to the actual disk. Further, the drive can de-stage data from the write cache to the platters in any order that it chooses. This causes problems for XFS and journaled filesystems in general because they rely on knowing when a write has completed to the disk. They need to know that the log information has made it to disk before allowing metadata to go to disk. When the metadata makes it to disk then the transaction can effectively be deleted from the log resulting in movement of the tail of the log and thus freeing up some log space. So if the writes never make it to the physical disk, then the ordering is violated and the log and metadata can be lost, resulting in filesystem corruption.
With hard disk cache sizes of currently (Jan 2009) up to 32MB, that can be a lot of valuable information. In a RAID with 8 such disks this adds up to 256MB, and the chance of having filesystem metadata in the cache is so high that you have a very high chance of big data losses on a power outage.
With a single hard disk and barriers turned on (on=default), the drive write cache is flushed before and after a barrier is issued. A power failure "only" loses data in the cache, but no essential ordering is violated and corruption will not occur.
With a RAID controller with battery backed controller cache and the cache in write back mode, you should turn off barriers - they are unnecessary in this case, and if the controller honors the cache flushes, they will be harmful to performance. But then you *must* disable the individual hard disk write caches in order to keep the filesystem intact after a power failure. The method for doing this is different for each RAID controller. See the section about RAID controllers below.
Q: How can I tell if I have the disk write cache enabled?
For SCSI/SATA:
- Look in dmesg(8) output for a driver line, such as:
"SCSI device sda: drive cache: write back" - # sginfo -c /dev/sda | grep -i 'write cache'
For PATA/SATA (although for SATA this only works on a recent kernel with ATA command passthrough):
- # hdparm -I /dev/sda
and look under "Enabled Supported" for "Write cache"
For RAID controllers:
- See the section about RAID controllers below
Q: How can I address the problem with the disk write cache?
Disabling the disk write back cache.
For SATA/PATA(IDE) (although for SATA this only works on a recent kernel with ATA command passthrough):
- # hdparm -W0 /dev/sda
  # hdparm -W0 /dev/hda
- # blktool /dev/sda wcache off
# blktool /dev/hda wcache off
For SCSI:
- Using sginfo(8), which is a little tedious. It takes 3 steps. For example:
- # sginfo -c /dev/sda
  which gives a list of attribute names and values
- # sginfo -cX /dev/sda
  which gives an array of cache values which you must match up with the names from step 1, e.g.
  0 0 0 1 0 1 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0
- # sginfo -cXR /dev/sda 0 0 0 1 0 0 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0
  which allows you to reset the values of the cache attributes (note the one value that changes - the write cache setting)
- # sginfo -c /dev/sda
  run again to verify the new settings
For RAID controllers:
- See the section about RAID controllers below
This setting is persistent for a SCSI disk. However, for a SATA/PATA disk it needs to be redone after every reset, as the drive will revert to its default of the write cache enabled. A reset can happen after a reboot or on error recovery of the drive. This makes it rather difficult to guarantee that the write cache stays disabled.
Using an external log.
Some people have considered the idea of using an external log on a separate drive with the write cache disabled and the rest of the file system on another disk with the write cache enabled. However, that will not solve the problem. For example, the tail of the log is moved when we are notified that a metadata write is completed to disk and we won't be able to guarantee that if the metadata is on a drive with the write cache enabled.
In fact using an external log will disable XFS' write barrier support.
Write barrier support.
Write barrier support is enabled by default in XFS since kernel version 2.6.17. It is disabled by mounting the filesystem with "nobarrier". Barrier support will flush the write back cache at the appropriate times (such as on XFS log writes). This is generally the recommended solution, however, you should check the system logs to ensure it was successful. Barriers will be disabled and reported in the log if any of the 3 scenarios occurs:
- "Disabling barriers, not supported with external log device"
- "Disabling barriers, not supported by the underlying device"
- "Disabling barriers, trial barrier write failed"
If the filesystem is mounted with an external log device then we currently don't support flushing to the data and log devices (this may change in the future). If the driver tells the block layer that the device does not support write cache flushing with the write cache enabled then it will report that the device doesn't support it. And finally we will actually test out a barrier write on the superblock and test its error state afterwards, reporting if it fails.
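One quick way to check whether barriers were disabled is to search the kernel log for the messages listed above, e.g.:
# dmesg | grep -i barrier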
Q. Should barriers be enabled with storage which has a persistent write cache?
Many hardware RAID controllers have a persistent write cache which is preserved across power failures, interface resets, system crashes, etc. Using write barriers in this instance is not recommended and will in fact lower performance. Therefore, it is recommended to turn off barrier support and mount the filesystem with "nobarrier". But take care about the hard disk write cache, which should be off.
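For example, such a filesystem could be mounted like this (the device and mount point are placeholders; the option can equally go into /etc/fstab):
# mount -t xfs -o nobarrier /dev/sdb1 /data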
Q. Which settings does my RAID controller need ?
It's hard to tell because there are so many controllers. Please consult your RAID controller documentation to determine how to change these settings, but we try to give an overview here:
Real RAID controllers (not the ones found onboard mainboards) normally have a battery backed cache which is used for buffering writes to improve speed. Even if the controller cache is battery backed, the individual hard disk write caches need to be turned off, as they are not protected from a power failure and will simply lose all their contents in that case.
- onboard RAID controllers: there are so many different types that it's hard to tell. Generally, these controllers have no cache of their own, but leave the hard disk write caches on. That can lead to the bad situation that, after a power failure with RAID-1 when only parts of the disk caches have been written, the controller doesn't even see that the disks are out of sync: the disks can reorder cached blocks and might have saved the superblock info but then lost different data contents. So, turn off the disk write caches before using the RAID function.
- 3ware: /cX/uX set cache=off, see http://www.3ware.com/support/UserDocs/CLIGuide-9.5.1.1.pdf , page 86
- Adaptec: allows setting individual drives cache
  arcconf setcache <disk> wb|wt
  ("wb" = write back, which means write cache on; "wt" = write through, which means write cache off. So "wt" should be chosen.)
- Areca: In archttp under "System Controls" -> "System Configuration" there's the option "Disk Write Cache Mode" (defaults "Auto")
"Off": disk write cache is turned off
"On": disk write cache is enabled, this is not save for your data but fast
"Auto": If you use a BBM (battery backup module, which you really should use if you care about your data), the controller automatically turns disk writes off, to protect your data. In case no BBM is attached, the controller switches to "On", because neither controller cache nor disk cache is save so you don't seem to care about your data and just want high speed (which you get then).
That's a very sensible default so you can let it "Auto" or enforce "Off" to be sure.
- LSI MegaRAID: allows setting individual disks cache:
MegaCli -AdpCacheFlush -aN|-a0,1,2|-aALL -EnDskCache|DisDskCache
- Xyratex: from the docs: "Write cache includes the disk drive cache and controller cache." So that means you can only set the drive caches and the controller cache together. To protect your data, turn it off, but write performance will suffer badly since the controller write cache is disabled as well.
Q: Which settings are best with virtualization like VMware, XEN, qemu?
The biggest problem is that these products seem to also virtualize disk writes in a way that even barriers no longer work, which means even an fsync is not reliable. Tests confirm that by unplugging the power from such a system, even with a RAID controller with battery backed cache and the hard disk caches turned off (which is safe on a normal host), you can destroy a database within the virtual machine (client, domU, or whatever you call it).
In qemu you can specify cache=off on the line specifying the virtual disk. For other products this information is missing.
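For illustration, on qemu versions current at the time of writing this might look like the following (the image path is a placeholder and the exact syntax depends on the qemu version):
qemu -drive file=/var/lib/vm/guest.img,cache=off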
Q: What is the issue with directory corruption in Linux 2.6.17?
In the Linux kernel 2.6.17 release a subtle bug was accidentally introduced into the XFS directory code by some "sparse" endian annotations. This bug was sufficiently uncommon (it only affects a certain type of format change, in Node or B-Tree format directories, and only in certain situations) that it was not detected during our regular regression testing, but it has been observed in the wild by a number of people now.
Update: the fix is included in 2.6.17.7 and later kernels.
To add insult to injury, xfs_repair(8) is currently not correcting these directories on detection of this corrupt state either. This xfs_repair issue is actively being worked on, and a fixed version will be available shortly.
Update: a fixed xfs_repair is now available; version 2.8.10 or later of the xfsprogs package contains the fixed version.
No other kernel versions are affected. However, using a corrupt filesystem on other kernels can still result in the filesystem being shutdown if the problem has not been rectified (on disk), making it seem like other kernels are affected.
The xfs_check tool, or xfs_repair -n, should be able to detect any directory corruption.
Until a fixed xfs_repair binary is available, one can make use of the xfs_db(8) command to mark the problem directory for removal (see the example below). A subsequent xfs_repair invocation will remove the directory and move all contents into "lost+found", named by inode number (see second example on how to map inode number to directory entry name, which needs to be done _before_ removing the directory itself). The inode number of the corrupt directory is included in the shutdown report issued by the kernel on detection of directory corruption. Using that inode number, this is how one would ensure it is removed:
# xfs_db -x /dev/sdXXX
xfs_db> inode NNN
xfs_db> print
core.magic = 0x494e
core.mode = 040755
core.version = 2
core.format = 3 (btree)
...
xfs_db> write core.mode 0
xfs_db> quit
A subsequent xfs_repair will clear the directory, and add new entries (named by inode number) in lost+found.
The easiest way to map inode numbers to full paths is via xfs_ncheck(8):
# xfs_ncheck -i 14101 -i 14102 /dev/sdXXX
  14101 full/path/mumble_fratz_foo_bar_1495
  14102 full/path/mumble_fratz_foo_bar_1494
Should this not work, we can manually map inode numbers in B-Tree format directory by taking the following steps:
# xfs_db -x /dev/sdXXX
xfs_db> inode NNN
xfs_db> print
core.magic = 0x494e
...
next_unlinked = null
u.bmbt.level = 1
u.bmbt.numrecs = 1
u.bmbt.keys[1] = [startoff] 1:[0]
u.bmbt.ptrs[1] = 1:3628
xfs_db> fsblock 3628
xfs_db> type bmapbtd
xfs_db> print
magic = 0x424d4150
level = 0
numrecs = 19
leftsib = null
rightsib = null
recs[1-19] = [startoff,startblock,blockcount,extentflag]
        1:[0,3088,4,0] 2:[4,3128,8,0] 3:[12,3308,4,0] 4:[16,3360,4,0]
        5:[20,3496,8,0] 6:[28,3552,8,0] 7:[36,3624,4,0] 8:[40,3633,4,0]
        9:[44,3688,8,0] 10:[52,3744,4,0] 11:[56,3784,8,0] 12:[64,3840,8,0]
        13:[72,3896,4,0] 14:[33554432,3092,4,0] 15:[33554436,3488,8,0]
        16:[33554444,3629,4,0] 17:[33554448,3748,4,0] 18:[33554452,3900,4,0]
        19:[67108864,3364,4,0]
At this point we are looking at the extents that hold all of the directory information. There are three types of extent here: the data blocks (extents 1 through 13 above), then the leaf blocks (extents 14 through 18), then the freelist blocks (extent 19 above). The jumps in the first field (start offset) indicate our progression through each of the three types. For recovering file names, we are only interested in the data blocks, so we can now feed those offset numbers into the xfs_db dblock command. So, for the fifth extent - 5:[20,3496,8,0] - listed above:
...
xfs_db> dblock 20
xfs_db> print
dhdr.magic = 0x58443244
dhdr.bestfree[0].offset = 0
dhdr.bestfree[0].length = 0
dhdr.bestfree[1].offset = 0
dhdr.bestfree[1].length = 0
dhdr.bestfree[2].offset = 0
dhdr.bestfree[2].length = 0
du[0].inumber = 13937
du[0].namelen = 25
du[0].name = "mumble_fratz_foo_bar_1595"
du[0].tag = 0x10
du[1].inumber = 13938
du[1].namelen = 25
du[1].name = "mumble_fratz_foo_bar_1594"
du[1].tag = 0x38
...
So, here we can see that inode number 13938 matches up with name "mumble_fratz_foo_bar_1594". Iterate through all the extents, and extract all the name-to-inode-number mappings you can, as these will be useful when looking at "lost+found" (once xfs_repair has removed the corrupt directory).
Q: Why does my > 2TB XFS partition disappear when I reboot ?
Strictly speaking this is not an XFS problem.
To support > 2TB partitions you need two things: a kernel that supports large block devices (CONFIG_LBD=y) and a partition table format that can hold large partitions. The default DOS partition tables don't. The best partition format for > 2TB partitions is the EFI GPT format (CONFIG_EFI_PARTITION=y).
Without CONFIG_LBD=y you can't even create the filesystem, but without CONFIG_EFI_PARTITION=y it works fine until you reboot at which point the partition will disappear. Note that you need to enable the CONFIG_PARTITION_ADVANCED option before you can set CONFIG_EFI_PARTITION=y.
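For example, a > 2TB device could be labelled and partitioned with GPT using parted (the device name is a placeholder; double-check the target device before writing a new label):
# parted /dev/sdb mklabel gpt
# parted /dev/sdb mkpart primary xfs 0% 100%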
Q: Why do I receive No space left on device after xfs_growfs?
After growing an XFS filesystem, df(1) may show enough free space, but attempts to write to the filesystem result in -ENOSPC. To fix this, Dave Chinner advised:
The only way to fix this is to move data around to free up space below 1TB. Find your oldest data (i.e. that was around before even the first grow) and move it off the filesystem (move, not copy). Then if you copy it back on, the data blocks will end up above 1TB and that should leave you with plenty of space for inodes below 1TB. A complete dump and restore will also fix the problem ;)
Also, you can add 'inode64' to your mount options to allow inodes to live above 1TB.
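For example (the device and mount point are placeholders; the option can also be added to the fstab entry):
# mount -t xfs -o inode64 /dev/sdb1 /data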