<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://xfs.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Christian</id>
	<title>xfs.org - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://xfs.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Christian"/>
	<link rel="alternate" type="text/html" href="https://xfs.org/index.php/Special:Contributions/Christian"/>
	<updated>2026-04-20T15:42:59Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.3</generator>
	<entry>
		<id>https://xfs.org/index.php?title=Talk:XFS_Status_Updates&amp;diff=2302</id>
		<title>Talk:XFS Status Updates</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Talk:XFS_Status_Updates&amp;diff=2302"/>
		<updated>2011-04-03T20:54:08Z</updated>

		<summary type="html">&lt;p&gt;Christian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Someone&#039;s annoyingly dyslexic:&lt;br /&gt;
&amp;quot;quite&amp;quot; == totally/actually;&lt;br /&gt;
&amp;quot;quiet&amp;quot; == low level of noise/silent.&lt;br /&gt;
: (Hopefully) fixed. Why didn&#039;t you? -- [[User:Christian|chris_goe]] 20:54, 3 April 2011 (UTC)&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Status_Updates&amp;diff=2301</id>
		<title>XFS Status Updates</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Status_Updates&amp;diff=2301"/>
		<updated>2011-04-03T20:53:25Z</updated>

		<summary type="html">&lt;p&gt;Christian: quiet/quite fixed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== XFS status update for February 2011 ==&lt;br /&gt;
&lt;br /&gt;
February saw the stabilization of the Linux 2.6.38 tree, with just two&lt;br /&gt;
small XFS fixes going into Linus&#039; tree, and the XFS development tree&lt;br /&gt;
has been similarly quiet with just a few cleanups, and the delaylog option&lt;br /&gt;
was promoted to the default operation mode.  A few more patches for the 2.6.39&lt;br /&gt;
merge window have been posted and/or discussed on the mailing list, but February&lt;br /&gt;
was a rather quiet month in general.&lt;br /&gt;
&lt;br /&gt;
On the user space side xfsprogs saw a few bug fixes, and a speedup for&lt;br /&gt;
phase 2 of xfs_repair, xfsdump saw a bug fix and support for pruning the&lt;br /&gt;
inventory by session ID, and xfstests saw its usual stream of bug fixes&lt;br /&gt;
as well as two new test cases.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for January 2011 ==&lt;br /&gt;
&lt;br /&gt;
On the 4th of January we saw the release of Linux 2.6.37, which contains a&lt;br /&gt;
large XFS update:&lt;br /&gt;
&lt;br /&gt;
    67 files changed, 1424 insertions(+), 1524 deletions(-)&lt;br /&gt;
&lt;br /&gt;
User visible changes are the new XFS_IOC_ZERO_RANGE ioctl, which allows&lt;br /&gt;
converting already allocated space into unwritten extents that return&lt;br /&gt;
zeros on a read, and support for 32-bit wide project IDs.  The other large&lt;br /&gt;
item is a set of changes to improve metadata scalability even further,&lt;br /&gt;
touching the buffer cache, inode lookup and other parts of the&lt;br /&gt;
filesystem driver.&lt;br /&gt;
&lt;br /&gt;
After that the XFS development tree for 2.6.38 was merged into mainline,&lt;br /&gt;
with an even larger set of changes.  Notable items include support for the&lt;br /&gt;
FITRIM ioctl to discard unused space on SSDs and thinly provisioned storage&lt;br /&gt;
systems, a buffer LRU scheme to improve hit rates for metadata, an&lt;br /&gt;
overhaul of the log subsystem locking, dramatically improving scalability&lt;br /&gt;
in that area, and much smarter handling of preallocations, especially&lt;br /&gt;
for files closed and reopened frequently, e.g. by the NFS server.&lt;br /&gt;
&lt;br /&gt;
User space development has been very quiet, with just a few fixes committed&lt;br /&gt;
to the xfstests repository, although various additional patches for xfsprogs&lt;br /&gt;
and xfstests that haven&#039;t been committed yet were discussed on the mailing list.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for December 2010 ==&lt;br /&gt;
&lt;br /&gt;
The release process of the Linux 2.6.37 kernel with its large XFS updates&lt;br /&gt;
was in its final days in December, which explains why we only saw a single&lt;br /&gt;
one-liner regression fix for XFS in Linus&#039; tree.  The XFS development tree&lt;br /&gt;
finally saw some updates when the writeback updates and some small cleanups&lt;br /&gt;
to the allocator and log recovery code were merged, but the large metadata&lt;br /&gt;
scalability updates that have been posted to the list multiple times are&lt;br /&gt;
still missing.  In addition to this on-going work the list also saw patches&lt;br /&gt;
that fix smaller issues, which are also still waiting to be merged.&lt;br /&gt;
&lt;br /&gt;
On the userspace side xfsprogs and xfsdump development has been quiet, with&lt;br /&gt;
no commits to either repository in December, although a large series of&lt;br /&gt;
updates to the metadump command has been reposted near the end of the month.&lt;br /&gt;
The xfstests repository saw a new regression test for a btrfs problem,&lt;br /&gt;
and various updates to existing tests.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for November 2010 ==&lt;br /&gt;
&lt;br /&gt;
From looking at the kernel git commits, November looked like a pretty&lt;br /&gt;
slow month, with just two handfuls of fixes going into the release candidates&lt;br /&gt;
for Linux 2.6.37, and none at all going into the development tree.&lt;br /&gt;
But in this case git statistics didn&#039;t tell the whole story - there&lt;br /&gt;
was a lot of activity on patches for the next merge window on the list.&lt;br /&gt;
The focus in November was still on metadata scalability, with various&lt;br /&gt;
patchsets that improve parallel creates and unlinks further, and also&lt;br /&gt;
improve 8-way dbench throughput by 30%.  In addition to that there&lt;br /&gt;
were patches to improve preallocation for NFS servers, to simplify&lt;br /&gt;
the writeback code, and to replace the XFS-internal percpu counters&lt;br /&gt;
for free space with the generic kernel percpu counters, which just needed&lt;br /&gt;
a small improvement.&lt;br /&gt;
&lt;br /&gt;
On the user space side we saw the release of xfsprogs 3.1.4, which&lt;br /&gt;
contains various accumulated bug fixes and Debian packaging updates.&lt;br /&gt;
The xfsdump tree saw a large update to speed up restore by using&lt;br /&gt;
mmap for an internal database and to remove the limitation of ~214&lt;br /&gt;
million directory entries per dump file.  The xfstests test suite&lt;br /&gt;
saw three new testcases and various fixes, including support for the&lt;br /&gt;
hfsplus filesystem.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for October 2010 ==&lt;br /&gt;
&lt;br /&gt;
Near the end of the month we finally saw the release of Linux 2.6.36.&lt;br /&gt;
Just a single fix made it into mainline this month, showing that the&lt;br /&gt;
preceding stabilization period has worked very well.&lt;br /&gt;
&lt;br /&gt;
Linux 2.6.36 has been another impressive release for XFS, seeing&lt;br /&gt;
various performance improvements in the new delayed logging code,&lt;br /&gt;
for direct I/O and the sync system call, a few bug fixes, and lots&lt;br /&gt;
of cleanups, resulting in a net removal of over 2000 lines of code:&lt;br /&gt;
&lt;br /&gt;
        89 files changed, 1998 insertions(+), 4279 deletions(-)&lt;br /&gt;
&lt;br /&gt;
The merge window for Linux 2.6.37 opened just a few days after the&lt;br /&gt;
release of Linux 2.6.36 and already contains another large XFS update&lt;br /&gt;
at the end of October.  Highlights of the XFS tree merged into 2.6.37-rc1&lt;br /&gt;
are another large set of metadata scalability patches, support for 32-bit&lt;br /&gt;
wide project IDs, and support for the new XFS_IOC_ZERO_RANGE ioctl,&lt;br /&gt;
which allows punching a hole and converting it to an unwritten extent&lt;br /&gt;
in a single atomic operation.&lt;br /&gt;
&lt;br /&gt;
The metadata scalability changes improve 8-way fs_mark creation of 50 million files&lt;br /&gt;
by over 15% and removal of those files by over 100%, with further&lt;br /&gt;
improvements expected by the next round of XFS metadata scalability&lt;br /&gt;
and VFS scalability improvements targeted at Linux 2.6.38.&lt;br /&gt;
&lt;br /&gt;
On the user space side October was a rather quiet month for xfsprogs, which&lt;br /&gt;
only saw the addition of 32-bit project ID handling, and a fix for&lt;br /&gt;
parsing the mount table in fsr when used together with disk encryption&lt;br /&gt;
tools.  A few patches for xfsdump were posted on the list, but none&lt;br /&gt;
was applied, leaving the majority of the user space activity to&lt;br /&gt;
xfstests, which saw very active development.  Various patches went&lt;br /&gt;
into xfstests to improve portability to filesystems with a limited&lt;br /&gt;
feature set, and to move more filters to generic code.  In addition&lt;br /&gt;
various cleanups to test cases and test programs were applied.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for September 2010 ==&lt;br /&gt;
&lt;br /&gt;
Mainline activity has been rather low in September, with only&lt;br /&gt;
two more fixes going into the 2.6.36 release candidates after the&lt;br /&gt;
large merge activity in August.  Development for the next merge&lt;br /&gt;
window has been more active.  The largest item was the inclusion&lt;br /&gt;
of the metadata scalability patch series, which provides very large&lt;br /&gt;
speedups for parallel metadata operations.  In addition a new&lt;br /&gt;
ioctl to punch holes and convert the hole to an unwritten extent&lt;br /&gt;
was added, and a small number of cleanups also made it into the tree.&lt;br /&gt;
&lt;br /&gt;
Patches to add support for 32-bit wide project IDs and for&lt;br /&gt;
using group and project quotas concurrently were posted to the list&lt;br /&gt;
and discussed but not yet included.&lt;br /&gt;
&lt;br /&gt;
Userspace development has been rather quiet again, with a single fix&lt;br /&gt;
committed to xfsprogs and xfsdump each.  The xfstests test suite grew&lt;br /&gt;
a new test case and received a few additional fixes.  Last but not least&lt;br /&gt;
the [http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide//tmp/en-US/html/index.html XFS Users Guide]&lt;br /&gt;
was updated with various factual corrections and spelling fixes.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for August 2010 ==&lt;br /&gt;
&lt;br /&gt;
On the first of August we finally saw the release of Linux 2.6.35,&lt;br /&gt;
which includes a large XFS update.  The most prominent feature in&lt;br /&gt;
Linux 2.6.35 is the new delayed logging code which provides massive&lt;br /&gt;
speedups for metadata-intensive workloads, but there has been&lt;br /&gt;
a large amount of other fixes and cleanups, leading to the following&lt;br /&gt;
diffstat:&lt;br /&gt;
&lt;br /&gt;
         67 files changed, 4426 insertions(+), 3835 deletions(-)&lt;br /&gt;
&lt;br /&gt;
Given the early release of Linux 2.6.35 the merge window for the&lt;br /&gt;
next release fully fell into the month of August.  The XFS updates&lt;br /&gt;
for Linux 2.6.36 include various additional performance improvements&lt;br /&gt;
in the delayed logging code, for direct I/O writes and for avoiding&lt;br /&gt;
synchronous transactions, as well as various fixes and a large amount&lt;br /&gt;
of cleanups, including the removal of the remaining dead DMAPI&lt;br /&gt;
code.&lt;br /&gt;
&lt;br /&gt;
On the userspace side we saw the 3.1.3 release of xfsprogs, which includes&lt;br /&gt;
various smaller fixes, support for the new XFS_IOC_ZERO_RANGE ioctl and&lt;br /&gt;
Debian packaging updates.  The xfstests package saw one new test case&lt;br /&gt;
and a couple of smaller patches, and xfsdump has not seen any updates at&lt;br /&gt;
all.&lt;br /&gt;
&lt;br /&gt;
The XMLified versions of the XFS users guide, training labs and filesystem&lt;br /&gt;
structure documentation are now available as on-the-fly generated HTML on&lt;br /&gt;
the xfs.org website and can be found at [[XFS_Papers_and_Documentation|Papers &amp;amp; Documentation]].&lt;br /&gt;
&lt;br /&gt;
== XFS status update for July 2010 ==&lt;br /&gt;
&lt;br /&gt;
July saw three more release candidates for the Linux 2.6.35 kernel, which&lt;br /&gt;
included a relatively large number of XFS updates.  There were two security&lt;br /&gt;
fixes, a small one to prevent swapext from operating on write-only file&lt;br /&gt;
descriptors, and a much larger one to properly validate inode numbers&lt;br /&gt;
coming from NFS clients or userspace applications using the bulkstat or&lt;br /&gt;
the open-by-handle interfaces.  In addition to that another relatively&lt;br /&gt;
large patch fixes the way inodes get reclaimed in the background, and&lt;br /&gt;
avoids inode caches growing out of bounds.&lt;br /&gt;
&lt;br /&gt;
In the meantime the code for the Linux 2.6.36 release got its final touches&lt;br /&gt;
before the expected opening of the merge window, with a few more last&lt;br /&gt;
minute fixes and cleanups merged.  The most notable one is a patch series&lt;br /&gt;
that fixes in-memory corruption when concurrently accessing unwritten&lt;br /&gt;
extents using the in-kernel AIO code.&lt;br /&gt;
&lt;br /&gt;
The userspace side was still quite slow, but saw a bit more activity&lt;br /&gt;
than in June.  In xfsprogs the xfs_db code grew two bug fixes, as did&lt;br /&gt;
the xfs_io tool.  The xfstests package saw one new test case and&lt;br /&gt;
various fixes to existing code.  Last but not least a few patches&lt;br /&gt;
affecting the build system for all userspace tools were committed.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for June 2010 ==&lt;br /&gt;
&lt;br /&gt;
The month of June saw a few important bug fixes for the Linux 2.6.35&lt;br /&gt;
release candidates.  That includes ensuring that files used for the&lt;br /&gt;
swapext ioctl are writable by the user, and doing proper validation&lt;br /&gt;
of inodes coming from untrusted sources, such as NFS exporting and&lt;br /&gt;
the open by handle system calls.  The main work however has been&lt;br /&gt;
focused on development for the Linux 2.6.36 merge window, including&lt;br /&gt;
merging various patches that have been out on the mailing list&lt;br /&gt;
for a long time.  Highlights include further performance improvements&lt;br /&gt;
for sync heavy metadata workloads, stack space reduction in the&lt;br /&gt;
writeback path and improvements of the XFS tracing infrastructure.&lt;br /&gt;
Also after some discussion the remaining hooks for DMAPI are going&lt;br /&gt;
to be dropped in mainline.   As a replacement a tree containing&lt;br /&gt;
full DMAPI support with a slightly cleaner XFS interaction will be&lt;br /&gt;
hosted by SGI.&lt;br /&gt;
&lt;br /&gt;
On the userspace side June was a rather slow month, with no updates&lt;br /&gt;
to xfsprogs and xfsdump at all, and just one new test case and a cleanup&lt;br /&gt;
applied to xfstests.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for May 2010 ==&lt;br /&gt;
&lt;br /&gt;
In May 2010 we saw the long awaited release of Linux 2.6.34, which includes&lt;br /&gt;
a large XFS update.  The most important feature appearing in 2.6.34 was the&lt;br /&gt;
new inode and quota flushing code, which leads to much better I/O patterns&lt;br /&gt;
for metadata-intensive workloads.  Additionally support for synchronous NFS&lt;br /&gt;
exports has been improved to give much better performance, and performance&lt;br /&gt;
for the fsync, fdatasync and sync system calls has been improved slightly.&lt;br /&gt;
A bug when resizing extremely busy filesystems has been fixed, which required&lt;br /&gt;
extensive modification to the data structure used for looking up the&lt;br /&gt;
per-allocation group data.  Last but not least there was a steady flow of&lt;br /&gt;
minor bug fixes and cleanups, leading to the following diffstat from&lt;br /&gt;
2.6.33 to 2.6.34:&lt;br /&gt;
&lt;br /&gt;
  86 files changed, 3209 insertions(+), 3178 deletions(-)&lt;br /&gt;
&lt;br /&gt;
Meanwhile active development aimed at the 2.6.35 merge window progressed.  The&lt;br /&gt;
major feature for this window is the merge of the delayed logging code,&lt;br /&gt;
which adds a new logging mode that dramatically reduces the bandwidth&lt;br /&gt;
required for log I/O.  See the &lt;br /&gt;
[http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs-delayed-logging-design.txt;h=96d0df28bed323d5596fc051b0ffb96ed8e3c8df;hb=HEAD documentation] for details.  Testers&lt;br /&gt;
for this new code are welcome.&lt;br /&gt;
&lt;br /&gt;
In userland xfsprogs saw the long awaited 3.1.2 release, which can be&lt;br /&gt;
considered a bug fix release for xfs_repair, xfs_fsr and mkfs.xfs.  After&lt;br /&gt;
the release a few more fixes were merged into the development tree.&lt;br /&gt;
The xfstests package saw various new tests, including many tests to&lt;br /&gt;
exercise the quota code, and a few fixes to existing tests.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for April 2010 ==&lt;br /&gt;
&lt;br /&gt;
In April 2.6.34 still was in the release candidate phase, with&lt;br /&gt;
a handful of XFS fixes making it into mainline.  Development for&lt;br /&gt;
the 2.6.35 merge window went ahead full steam at the same time.&lt;br /&gt;
&lt;br /&gt;
While a fair number of patches hit the development tree these were&lt;br /&gt;
largely cleanups, with the real development activity happening on&lt;br /&gt;
the mailing list.  There was another round of patches and following&lt;br /&gt;
discussion on the scalable busy extent tracking and delayed logging&lt;br /&gt;
features mentioned last month.  They are expected to be merged in&lt;br /&gt;
May and queued up for the Linux 2.6.35 merge window.  Last but not least&lt;br /&gt;
April saw a large number of XFS fixes backported to the 2.6.32 and&lt;br /&gt;
2.6.33 -stable series.&lt;br /&gt;
&lt;br /&gt;
In user land xfsprogs has seen few but important updates, preparing&lt;br /&gt;
for a new release next month.  The xfs_repair tool saw a fix to&lt;br /&gt;
correctly enable the lazy superblock counters on an existing&lt;br /&gt;
filesystem, and xfs_fsr saw updates to better deal with dynamic&lt;br /&gt;
attribute forks.  Last but not least a port to Debian GNU/kFreeBSD&lt;br /&gt;
got merged. The xfstests test suite saw two new test cases and various&lt;br /&gt;
smaller fixes.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for March 2010 ==&lt;br /&gt;
&lt;br /&gt;
The merge window for Linux 2.6.34 closed in the first week of March,&lt;br /&gt;
with the important XFS features already landing in February.  Not&lt;br /&gt;
surprisingly the XFS merge activity in March has been rather slow,&lt;br /&gt;
with only about a dozen bug fix patches making it into Linus&#039;&lt;br /&gt;
tree in that time.&lt;br /&gt;
&lt;br /&gt;
On the other hand, development for the 2.6.35 merge window has&lt;br /&gt;
been very active.  Most importantly there was a lot of work on the&lt;br /&gt;
transaction and log subsystems.  Starting with a large patchset to&lt;br /&gt;
clean up and refactor the transaction subsystem and to introduce more&lt;br /&gt;
flexible I/O containers in the low-level logging code, work is&lt;br /&gt;
progressing toward a new, more efficient logging implementation.  While&lt;br /&gt;
this preparatory work has already been merged in the development tree,&lt;br /&gt;
the actual delayed logging implementation still needs more work after&lt;br /&gt;
the initial public posting.  The delayed logging implementation, which&lt;br /&gt;
is loosely modeled after the journaling mode in the ext3/4&lt;br /&gt;
and reiserfs filesystems, allows accumulating multiple asynchronous&lt;br /&gt;
transactions in memory instead of possibly writing them out&lt;br /&gt;
many times.  Using the new delayed logging mechanism, I/O bandwidth&lt;br /&gt;
used for the log decreases by orders of magnitude and performance&lt;br /&gt;
on metadata intensive workloads increases massively.&lt;br /&gt;
&lt;br /&gt;
In addition to that a new version of the discard (aka TRIM) support&lt;br /&gt;
has been posted, this time entirely contained in kernel space&lt;br /&gt;
and without the need of a userspace utility to drive it.  Last but&lt;br /&gt;
not least the usual steady stream of cleanups and bug fixes has not&lt;br /&gt;
ceased this month either.&lt;br /&gt;
&lt;br /&gt;
Besides the usual flow of fixes and new test cases in the xfstests&lt;br /&gt;
test suite, development on the userspace side has been rather slow.&lt;br /&gt;
Xfsprogs has only seen a single fix for SMP locking in xfs_repair&lt;br /&gt;
and support for building on Debian GNU/kFreeBSD, and xfsdump&lt;br /&gt;
has seen no commit at all.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for February 2010 ==&lt;br /&gt;
&lt;br /&gt;
February saw the release of the Linux 2.6.33 kernel, which includes&lt;br /&gt;
a large XFS update.  The biggest user-visible change in Linux 2.6.33&lt;br /&gt;
is that XFS now supports the generic Linux trace event infrastructure,&lt;br /&gt;
which allows tracing lots of XFS behavior with a normal production&lt;br /&gt;
kernel build.  Apart from this Linux 2.6.33 has been mostly a bug-fix&lt;br /&gt;
release, fixing various user reported bugs in previous releases.&lt;br /&gt;
The total diffstat for XFS in Linux 2.6.33 looks like:&lt;br /&gt;
&lt;br /&gt;
  84 files changed, 3023 insertions(+), 3550 deletions(-)&lt;br /&gt;
&lt;br /&gt;
In addition to that the merge window for Linux 2.6.34 opened and the&lt;br /&gt;
first merge of the XFS tree made it into Linus&#039; tree.  Unlike Linux&lt;br /&gt;
2.6.33 this merge window includes major feature work.  The most&lt;br /&gt;
important change for users is a new algorithm for inode and quota&lt;br /&gt;
writeback that leads to better I/O locality and improved metadata&lt;br /&gt;
performance.  The second big change is a rewrite of the per-allocation&lt;br /&gt;
group data lookup which fixes a long-standing problem in the code&lt;br /&gt;
to grow a live filesystem and will also ease future filesystem&lt;br /&gt;
shrinking support.  Not merged through the XFS tree, but of great&lt;br /&gt;
importance for embedded users is a new API that allows XFS to properly&lt;br /&gt;
flush cache lines on its log and large directory buffers, making&lt;br /&gt;
XFS work properly on architectures with virtually indexed caches,&lt;br /&gt;
such as parisc and various arm and mips variants.  Last but not&lt;br /&gt;
least there is an above-average amount of cleanups that went into&lt;br /&gt;
Linus&#039; tree in this cycle.&lt;br /&gt;
&lt;br /&gt;
There have been more patches on the mailing list that haven&#039;t made&lt;br /&gt;
it to Linus&#039; tree yet, including an optimized implementation of&lt;br /&gt;
fdatasync(2) and massive speedups for metadata workloads on&lt;br /&gt;
NFS exported XFS filesystems.&lt;br /&gt;
&lt;br /&gt;
On the userspace side February has been a relatively quiet month.&lt;br /&gt;
Led by xfstests, only a moderate amount of fixes made it into&lt;br /&gt;
the respective trees.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for January 2010 ==&lt;br /&gt;
&lt;br /&gt;
January saw additional release candidates of the Linux 2.6.33 kernel,&lt;br /&gt;
including a couple of bug fixes for XFS.  In the meantime the XFS tree&lt;br /&gt;
has been growing a large number of patches destined for the Linux 2.6.34&lt;br /&gt;
merge window: a large rework of the handling of per-AG data, support for&lt;br /&gt;
the quota netlink interface, better power saving behavior of the&lt;br /&gt;
XFS kernel threads, and of course various cleanups.&lt;br /&gt;
&lt;br /&gt;
A large patch series to replace the current asynchronous inode writeback&lt;br /&gt;
with a new scheme that uses the delayed write buffers was posted to&lt;br /&gt;
the list.  The new scheme, which achieves better I/O locality by&lt;br /&gt;
dispatching metadata I/O from a single place, has been discussed&lt;br /&gt;
extensively and is expected to be merged in February.&lt;br /&gt;
&lt;br /&gt;
On the userspace side January saw the 3.1.0 and 3.1.1 releases of xfsprogs,&lt;br /&gt;
as well as the 3.0.4 release of xfsdump.  The biggest changes in xfsprogs&lt;br /&gt;
3.1.0 were optimizations in xfs_repair that lead to a much lower memory&lt;br /&gt;
usage, and optional use of the blkid library for filesystem detection&lt;br /&gt;
and retrieving storage topology information.  The 3.1.1 release contained&lt;br /&gt;
various important bug fixes for these changes and various improvements to&lt;br /&gt;
the build system.  The major features of xfsdump 3.0.4 were fixes for&lt;br /&gt;
time stamp handling on 64-bit systems.&lt;br /&gt;
&lt;br /&gt;
The xfstests package also saw lots of activity, including various new testcases&lt;br /&gt;
and an improved build system.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for December 2009 ==&lt;br /&gt;
&lt;br /&gt;
December finally saw the long awaited release of Linux 2.6.32, which for&lt;br /&gt;
XFS is mostly a bug fix release, with the major changes being various&lt;br /&gt;
improvements to the sync path, including working around the grub boot loader&#039;s&lt;br /&gt;
expectation that metadata be on disk after a sync()&lt;br /&gt;
system call.  Together with a refactoring of the inode allocator this&lt;br /&gt;
gives a nice diffstat for this kernel release:&lt;br /&gt;
&lt;br /&gt;
 46 files changed, 767 insertions(+), 1048 deletions(-)&lt;br /&gt;
&lt;br /&gt;
In the meantime development for the 2.6.33 release has been going strong.  The&lt;br /&gt;
new event tracing code that allows observing the inner workings of XFS&lt;br /&gt;
in production systems has finally been merged, with another patch to&lt;br /&gt;
reduce the size of the tracing code by using new upstream kernel features&lt;br /&gt;
posted for review.  Also a large patch series has been posted which&lt;br /&gt;
changes per-AG data to be looked up by a radix tree instead of the&lt;br /&gt;
existing array.  This works around possible deadlocks and use-after-free&lt;br /&gt;
issues during growfs, and prepares for removing a global (shared)&lt;br /&gt;
lock from the free space allocators.  In addition to that a wide range&lt;br /&gt;
of fixes has been posted and applied.&lt;br /&gt;
&lt;br /&gt;
Work on the userspace packages has been just as busy.  In mkfs.xfs the&lt;br /&gt;
lazy superblock counter feature has now been enabled by default for the&lt;br /&gt;
upcoming xfsprogs 3.1.0 release, which will require kernel 2.6.22 for&lt;br /&gt;
the default mkfs invocation.  Also for mkfs.xfs a patch was posted&lt;br /&gt;
to correct the automatic detection of 4 kilobyte sector drives, which&lt;br /&gt;
are expected to show up in large quantities in the real world soon.  The&lt;br /&gt;
norepair mode in xfs_repair has been enhanced with additional freespace&lt;br /&gt;
btree correction checks from xfs_db and is now identical to xfs_check in&lt;br /&gt;
filesystem consistency checking coverage.  A temporary file permission&lt;br /&gt;
problem has been fixed in xfs_fsr, and the libhandle library has been&lt;br /&gt;
fixed to better deal with symbolic links.  In xfs_io a few commands&lt;br /&gt;
that were added years ago have finally been wired up to actually be&lt;br /&gt;
usable.  And last but not least xfsdump saw a fix to the time stamp&lt;br /&gt;
handling in the backup format and some usability and documentation&lt;br /&gt;
improvements to xfsinvutil.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for November 2009 ==&lt;br /&gt;
&lt;br /&gt;
November was a relatively slow month for XFS development.  The XFS tree&lt;br /&gt;
that is destined for the Linux 2.6.33 merge window saw a few fixes and&lt;br /&gt;
cleanups applied to it, and a few important fixes still made it into the&lt;br /&gt;
last Linux 2.6.32 release candidates.  A few more patches including a&lt;br /&gt;
final version of the event tracing support for XFS were posted but not&lt;br /&gt;
reviewed yet.&lt;br /&gt;
&lt;br /&gt;
On the userspace side there has been a fair amount of xfsprogs activity.&lt;br /&gt;
The repair speedup patches have finally been merged into the main development&lt;br /&gt;
branch and a couple of other fixes to the various utilities made it in, too.&lt;br /&gt;
The xfstests test suite saw another new regression test and a build&lt;br /&gt;
system fix up.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for October 2009 ==&lt;br /&gt;
In October we saw the Linux 2.6.32 merge window with a major XFS update.&lt;br /&gt;
This update includes a refactoring of the inode allocator which also&lt;br /&gt;
allows for speedups for very large filesystems, major sync fixes, updates&lt;br /&gt;
to the fsync and O_SYNC handling which merge the two code paths into a single,&lt;br /&gt;
more efficient one, a workaround for the VFS time stamp behavior,&lt;br /&gt;
and of course various smaller fixes.  A couple of additional fixes have been&lt;br /&gt;
queued up for the next merge window.&lt;br /&gt;
&lt;br /&gt;
On the userspace side there has been healthy activity on xfsprogs:  mkfs can&lt;br /&gt;
now discard unused sectors on SSDs and thinly provisioned storage devices and&lt;br /&gt;
use the more generic libblkid for topology information and filesystem detection&lt;br /&gt;
instead of the older libdisk, and the build system gained some updates to&lt;br /&gt;
make the source package generation simpler and shared between different package&lt;br /&gt;
types.  A patch has been posted to the list but not yet committed to add symbol&lt;br /&gt;
versioning to the libhandle library to make future ABI additions easier.&lt;br /&gt;
The xfstests package only saw some minor activity with a new test case&lt;br /&gt;
and small build system fixes.&lt;br /&gt;
&lt;br /&gt;
New minor releases of xfsprogs and xfsdump were tagged but not formally&lt;br /&gt;
released after additional discussion.  Instead a new major xfsprogs release&lt;br /&gt;
is planned for next month.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for September 2009 ==&lt;br /&gt;
&lt;br /&gt;
In September the Linux 2.6.31 kernel was finally released, including another&lt;br /&gt;
last minute XFS fix for the swapext (defragmentation) compat ioctl handler.&lt;br /&gt;
The final patch from 2.6.30 to 2.6.31 shows the following impressive diffstat&lt;br /&gt;
for XFS:&lt;br /&gt;
&lt;br /&gt;
   55 files changed, 1476 insertions(+), 2269 deletions(-)&lt;br /&gt;
&lt;br /&gt;
The 2.6.32 merge window started with a large XFS merge that included changes&lt;br /&gt;
to the inode allocator, and a few smaller fixes.  New versions of the sync&lt;br /&gt;
and time stamp fixes as well as the event tracing support have been posted&lt;br /&gt;
in September but not yet merged into the XFS development tree and/or mainline.&lt;br /&gt;
&lt;br /&gt;
On the userspace side a large patch series to reduce the memory usage in&lt;br /&gt;
xfs_repair to acceptable levels was posted, but not yet merged.  A new xfs_df&lt;br /&gt;
shell script to measure on-disk space usage was posted but not yet&lt;br /&gt;
merged pending some minor review comments and a missing man page.  In addition&lt;br /&gt;
we saw the usual amount of smaller fixes and cleanups.&lt;br /&gt;
&lt;br /&gt;
Also this month Felix Blyakher resigned from his post as XFS maintainer and handed off to Alex Elder.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for August 2009 ==&lt;br /&gt;
&lt;br /&gt;
In August the Linux 2.6.31 kernel was still in the release candidate&lt;br /&gt;
stage, but a couple of important XFS fixes made it in time for the release,&lt;br /&gt;
including a fix for the inode cache races with NFS workloads that have&lt;br /&gt;
plagued us for a long time.&lt;br /&gt;
&lt;br /&gt;
The list saw various patches destined for the Linux 2.6.32 merge window,&lt;br /&gt;
including a merge of the fsync and O_SYNC handling code to address various&lt;br /&gt;
issues with the latter, a workaround for deficits in the timestamp handling&lt;br /&gt;
interface between the VFS and filesystems, a repost of the sync improvements&lt;br /&gt;
patch series and various smaller patches.&lt;br /&gt;
&lt;br /&gt;
August also saw the minor 3.0.3 release of xfsprogs which collects smaller&lt;br /&gt;
fixes to the various tools and most importantly a fix to allow xfsprogs to&lt;br /&gt;
work again on SPARC and other strict-alignment architectures, which had&lt;br /&gt;
regressed a few releases ago.  The xfstests repository saw a few new test&lt;br /&gt;
cases and various small improvements.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for July 2009 ==&lt;br /&gt;
&lt;br /&gt;
As a traditional summer vacation month, July did not see a lot of XFS&lt;br /&gt;
activity.  The mainline 2.6.31 kernel made it to the 5th release candidate,&lt;br /&gt;
but besides a few kernel-wide patches touching XFS the only activity was&lt;br /&gt;
two small patches fixing a bug in FIEMAP and working around writeback&lt;br /&gt;
performance problems in the VM.&lt;br /&gt;
&lt;br /&gt;
A few more patches were posted to the list but haven&#039;t been merged yet.&lt;br /&gt;
Two big patch series deal with theoretically possible deadlocks due to&lt;br /&gt;
locks taken in reclaim contexts, which are now detected by lockdep.&lt;br /&gt;
&lt;br /&gt;
The pace on the userspace side has been slow.  There have been a couple&lt;br /&gt;
of fixes to xfs_repair and xfs_db, and xfstests grew a few more testcases.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for June 2009 ==&lt;br /&gt;
&lt;br /&gt;
On June 9th we finally saw the release of Linux 2.6.30.  For XFS&lt;br /&gt;
this release mostly contains the improved ENOSPC handling, but also&lt;br /&gt;
various smaller bugfixes and lots of cleanups.  The code size of XFS&lt;br /&gt;
decreased again by 500 lines of code in this release.&lt;br /&gt;
&lt;br /&gt;
The Linux 2.6.31 merge window opened in the middle of the month and some big XFS&lt;br /&gt;
changes have been pushed: A removal of the quotaops&lt;br /&gt;
infrastructure which simplifies the quota implementation, the switch&lt;br /&gt;
from XFS&#039;s own Posix ACL implementation to the generic one shared&lt;br /&gt;
by various other filesystems which also supports in-memory caching of&lt;br /&gt;
ACLs and another incremental refactoring of the sync code.&lt;br /&gt;
&lt;br /&gt;
A patch to better track dirty inodes and work around issues in the&lt;br /&gt;
way the VFS updates the access time stamp on inodes has been reposted&lt;br /&gt;
and discussed. Another patch converting the existing XFS tracing&lt;br /&gt;
infrastructure to use the ftrace event tracer has been posted.&lt;br /&gt;
&lt;br /&gt;
On the userspace side there have been a few updates to xfsprogs, including&lt;br /&gt;
some repair fixes and a new fallocate command for xfs_io.  There were&lt;br /&gt;
major updates for xfstests:  The existing aio-dio-regress testsuite has&lt;br /&gt;
been merged into xfstests, and various changes went into the tree to make&lt;br /&gt;
xfstests better suitable for use with other filesystems.&lt;br /&gt;
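&lt;br /&gt;
The new fallocate command in xfs_io drives the same space preallocation that applications can request directly. As a minimal sketch of that idea, using Python&#039;s generic posix_fallocate(3) wrapper on a throwaway temp file rather than xfs_io itself (the sizes and file are just examples):&lt;br /&gt;

```python
import os
import tempfile

# Preallocate 1 MiB, the same operation that
# "xfs_io -c 'falloc 0 1m' file" performs via the fallocate interface.
fd, path = tempfile.mkstemp()
try:
    os.posix_fallocate(fd, 0, 1024 * 1024)  # offset 0, length 1 MiB
    size = os.fstat(fd).st_size             # file now spans the range
    print(size)  # 1048576
finally:
    os.close(fd)
    os.unlink(path)
```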
&lt;br /&gt;
The attr and acl projects, which have traditionally been hosted&lt;br /&gt;
as part of the XFS userspace utilities have now been split into a separate&lt;br /&gt;
project maintained by Andreas Gruenbacher, who has been doing most of&lt;br /&gt;
the work on it, and moved to the Savannah hosting platform.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for May 2009 ==&lt;br /&gt;
&lt;br /&gt;
In May Linux 2.6.30 was getting close to being released, and various&lt;br /&gt;
important XFS fixes made it in during the last release candidates.&lt;br /&gt;
In the meantime some big patch series to rework the sync code and&lt;br /&gt;
the inode allocator have been posted for the next merge window.&lt;br /&gt;
&lt;br /&gt;
On the userspace side xfsprogs and xfsdump 3.0.1 were finally released,&lt;br /&gt;
quickly followed by 3.0.2 releases with updated Debian packaging.&lt;br /&gt;
After that various small patches that were held back made it into xfsprogs.&lt;br /&gt;
A patch has been posted to add the xfs_reno tool, which allows moving&lt;br /&gt;
inodes around to fit into the 32-bit inode number space; this is also one&lt;br /&gt;
central aspect of future online shrinking support.&lt;br /&gt;
&lt;br /&gt;
There has been major activity on xfstests including adding generic&lt;br /&gt;
filesystems support to allow running tests that aren&#039;t XFS-specific on&lt;br /&gt;
any Linux filesystem.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for April 2009 ==&lt;br /&gt;
&lt;br /&gt;
In April development for Linux 2.6.30 was in full motion.  A patchset to correct flushing of delayed allocations with near full filesystems has been committed in early April, as well as various smaller fixes. A patch series to improve the behavior of sys_sync has been posted but is waiting for VFS changes queued for Linux 2.6.31.&lt;br /&gt;
&lt;br /&gt;
On the userspace side the xfsprogs and xfsdump 3.0.1 releases have slipped into May again after a lot of last-minute build system updates.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for March 2009 ==&lt;br /&gt;
&lt;br /&gt;
Linux 2.6.29 has been released which includes major XFS updates like the&lt;br /&gt;
new generic btree code, a fully functional 32bit compat ioctl implementation&lt;br /&gt;
and the new combined XFS and Linux inode.  (See previous status reports&lt;br /&gt;
for more details). A patch series to improve correctness and performance&lt;br /&gt;
has been posted but not yet applied.  Various minor fixes and cleanups&lt;br /&gt;
have been sent to Linus for 2.6.30 which looks like it will be a minor&lt;br /&gt;
release for XFS after the big churn in 2.6.29.&lt;br /&gt;
&lt;br /&gt;
On the userspace side a lot of time has been spent on fixing and improving the&lt;br /&gt;
build system shared by the various XFS utilities as well as various smaller&lt;br /&gt;
improvements leading to the xfsprogs and xfsdump 3.0.1 releases which are&lt;br /&gt;
still outstanding.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for February 2009 ==&lt;br /&gt;
&lt;br /&gt;
In February various smaller fixes have been sent to Linus for 2.6.29,&lt;br /&gt;
including a revert of the faster vmap APIs which don&#039;t seem to be quite&lt;br /&gt;
ready yet on the VM side.  At the same time various patches have been&lt;br /&gt;
queued up for 2.6.30, with another big batch pending.  There also has&lt;br /&gt;
been a repost of the CRC patch series, including support for a new,&lt;br /&gt;
larger inode core.&lt;br /&gt;
&lt;br /&gt;
SGI released various bits of work in progress from former employees&lt;br /&gt;
that will be extremely helpful for the future development of XFS;&lt;br /&gt;
thanks a lot to Mark Goodwin for making this happen.&lt;br /&gt;
&lt;br /&gt;
On the userspace side the long awaited 3.0.0 releases of xfsprogs and&lt;br /&gt;
xfsdump finally happened early in the month, accompanied by a 2.2.9&lt;br /&gt;
release of the dmapi userspace.  There have been some issues with packaging&lt;br /&gt;
so a new minor release might follow soon.&lt;br /&gt;
&lt;br /&gt;
The xfs_irecover tool has been relicensed so that it can be merged into&lt;br /&gt;
the GPLv2 codebase of xfsprogs, but the actual integration work hasn&#039;t&lt;br /&gt;
happened yet.&lt;br /&gt;
&lt;br /&gt;
Important bits of XFS documentation that have been available on the XFS&lt;br /&gt;
website in PDF form have been released in the document source form under&lt;br /&gt;
the Creative Commons license so that they can be updated as a community&lt;br /&gt;
effort, and checked into a public git tree.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for January 2009 ==&lt;br /&gt;
&lt;br /&gt;
January has been an extremely busy month on the userspace front.  Many&lt;br /&gt;
small and medium-sized updates went into xfsprogs, xfstests and, to a lesser&lt;br /&gt;
extent, xfsdump.  xfsprogs and xfsdump are ramping up for a 3.0.0&lt;br /&gt;
release in early February which will include the first major re-sync&lt;br /&gt;
with the kernel code in libxfs, a cleanup of the exported library interfaces&lt;br /&gt;
and the move of two tools (xfs_fsr and xfs_estimate) from the xfsdump&lt;br /&gt;
package to xfsprogs.  After this the xfsprogs package will contain all&lt;br /&gt;
tools that use internal libxfs interfaces which fortunately equates to those&lt;br /&gt;
needed for normal administration.  The xfsdump package now only contains&lt;br /&gt;
the xfsdump/xfsrestore tools needed for backing up and restoring XFS&lt;br /&gt;
filesystems.  In addition it grew a fix to support dump/restore on systems&lt;br /&gt;
with a 64k page size.  A large number of acl/attr package patches was&lt;br /&gt;
posted to the list, but pending a possible split of these packages from the&lt;br /&gt;
XFS project they have not been processed yet.&lt;br /&gt;
&lt;br /&gt;
On the kernel side the big excitement in January was an in-memory corruption&lt;br /&gt;
introduced in the btree refactoring which hit people running 32bit platforms&lt;br /&gt;
without support for large block devices.  This issue was fixed and pushed&lt;br /&gt;
to the 2.6.29 development tree after a long collaborative debugging effort&lt;br /&gt;
at linux.conf.au.  Besides that about a dozen minor fixes were pushed to&lt;br /&gt;
2.6.29 and the first batch of misc patches for the 2.6.30 release cycle&lt;br /&gt;
was sent out.&lt;br /&gt;
&lt;br /&gt;
At the end of December the SGI group in Melbourne, which the previous&lt;br /&gt;
XFS maintainer and some other developers worked for, was closed down;&lt;br /&gt;
they will be missed greatly.  As a result maintainership has been passed&lt;br /&gt;
on in a way that has been slightly controversial in the community, and the&lt;br /&gt;
first patchsets of work in progress in Melbourne have been posted to the list&lt;br /&gt;
to be picked up by others.&lt;br /&gt;
&lt;br /&gt;
The xfs.org wiki has gotten a little facelift on its front page making it&lt;br /&gt;
a lot easier to read.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for December 2008 ==&lt;br /&gt;
&lt;br /&gt;
On Christmas Eve the 2.6.28 mainline kernel was released, with only minor XFS&lt;br /&gt;
bug fixes over 2.6.27.&lt;br /&gt;
&lt;br /&gt;
On the development side December has been a busy but unspectacular month.&lt;br /&gt;
A lot of misc fixes and improvements have been sent out, tested and committed,&lt;br /&gt;
especially on the userland side.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for November 2008 ==&lt;br /&gt;
&lt;br /&gt;
The mainline kernel is now at 2.6.28-rc6 and includes a small number of&lt;br /&gt;
XFS fixes.  There have been no updates to the XFS development tree during&lt;br /&gt;
November.  With no new regressions, the large number of changes that&lt;br /&gt;
missed 2.6.28 has thus stabilized and is ready for 2.6.29.  In the meantime&lt;br /&gt;
kernel-side development has been slow, with the only major patch set&lt;br /&gt;
being a wide number of fixes to the compatibility for 32 bit ioctls on&lt;br /&gt;
a 64 bit kernel.&lt;br /&gt;
&lt;br /&gt;
In the meantime there has been a large number of commits to the user space&lt;br /&gt;
tree, which mostly consists of smaller fixes.  xfsprogs is getting close&lt;br /&gt;
to its 3.0.0 release, which will be the first full resync with the&lt;br /&gt;
kernel sources since the year 2005.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for October 2008 ==&lt;br /&gt;
&lt;br /&gt;
Linux 2.6.27 was released with all the bits covered in last month&#039;s report.  It&lt;br /&gt;
did however miss two important fixes for regressions that a few people hit.&lt;br /&gt;
2.6.27.3 or later are recommended for use with XFS.&lt;br /&gt;
&lt;br /&gt;
In the meantime the generic btree implementation, the sync reorganization,&lt;br /&gt;
and, after a lot of merge pain, the XFS and VFS inode unification hit the&lt;br /&gt;
development tree during the time allocated for the merge window.  No XFS&lt;br /&gt;
updates other than the two regression fixes also in 2.6.27.3 have made it&lt;br /&gt;
into mainline as of 2.6.28-rc3.&lt;br /&gt;
&lt;br /&gt;
The only new feature on the list in October is support for the fiemap&lt;br /&gt;
interface that has been added to the VFS during the 2.6.28 merge window.&lt;br /&gt;
However there was a lot of patch traffic consisting of fixes and respun&lt;br /&gt;
versions of previously known patches.  There still is a large backlog of&lt;br /&gt;
patches on the list that has not been applied to the development tree yet.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for September 2008 ==&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.27 still not released but only making slow progress from 2.6.27-rc5 to 2.6.27-rc8, XFS changes in mainline have been minimal in September, with only about half a dozen bug fix patches.&lt;br /&gt;
&lt;br /&gt;
In the meantime the generic btree patch set has been committed to the development tree, but not many other updates yet. On the user space side xfsprogs 2.10.1 has been released on September 5th with a number of important bug fixes. Following the release of xfsprogs 2.10.1 open season for development of the user space code has started. The first full update of the shared kernel / user space code in libxfs since 2005 has been committed. In addition to that the number of headers installed for the regular devel package has been reduced to the required minimum, and support for checking the source code for endianness errors using sparse has been added.&lt;br /&gt;
&lt;br /&gt;
The patch sets to unify the XFS and Linux inode structures, and rewrite various bits of the sync code have seen various iterations on the XFS list, but haven&#039;t been committed yet. A first set of patches implementing CRCs for various metadata structures has been posted to the list.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for August 2008 ==&lt;br /&gt;
&lt;br /&gt;
With the 2.6.27-rc5 release the 2.6.27 cycle is nearing its end. The major XFS feature in 2.6.27-rc5 is support for case-insensitive file names. At this point it is still limited to 7bit ASCII file names, with updates for utf8 file names expected to follow later. In addition to that 2.6.27-rc5 fixes a long-standing problem with non-EABI arm compilers which pack some XFS data structures wrongly. Besides this 2.6.27-rc5 also contains various cleanups, most notably the removal of the last bhv_vnode_t instances, and most uses of semaphores. As usual the diffstat for XFS from 2.6.26 to 2.6.27-rc5 is negative:&lt;br /&gt;
&lt;br /&gt;
       100 files changed, 3819 insertions(+), 4409 deletions(-)&lt;br /&gt;
&lt;br /&gt;
On the user space front a new minor xfsprogs version is about to be released containing various fixes, including the user space part of the arm packing fix.&lt;br /&gt;
&lt;br /&gt;
Work in progress on the XFS mailing list includes a large patch set to unify the alloc, inobt and bmap btree implementations into a single one that supports arbitrarily pluggable key and record formats. These btree changes are the first major preparation for adding CRC checks to all metadata structures in XFS. There is also an even larger patch set to unify the XFS and Linux inode structures and perform all inode writeback from the btree instead of an inode cache in XFS.&lt;br /&gt;
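&lt;br /&gt;
The pluggable-operations idea behind the unified btree can be sketched in a few lines. This is only an illustrative toy in Python, not the kernel code: the per-btree "ops" object here just supplies a key-extraction function, standing in for the much richer ops vector the real patch set introduces (all names below are invented for the example):&lt;br /&gt;

```python
from bisect import bisect_left

class GenericBtreeOps:
    """Per-btree-type operations; one generic lookup serves all
    btree flavors once the key handling is made pluggable."""
    def __init__(self, key_of):
        self.key_of = key_of  # how to extract the key from a record

def btree_lookup(records, ops, key):
    """Generic exact-match lookup over sorted records."""
    keys = [ops.key_of(r) for r in records]
    i = bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return records[i]
    return None

# Two different "btrees" share the same lookup code:
alloc_ops = GenericBtreeOps(key_of=lambda rec: rec["startblock"])
inobt_ops = GenericBtreeOps(key_of=lambda rec: rec["startino"])

frees = [{"startblock": 8, "len": 4}, {"startblock": 32, "len": 16}]
inodes = [{"startino": 128, "count": 64}, {"startino": 256, "count": 64}]
print(btree_lookup(frees, alloc_ops, 32)["len"])      # 16
print(btree_lookup(inodes, inobt_ops, 256)["count"])  # 64
```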
&lt;br /&gt;
== Updates before 2008 ==&lt;br /&gt;
&lt;br /&gt;
News up to 2007 can be found on a separate page: [[OLD_News]]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Status_Updates&amp;diff=2104</id>
		<title>XFS Status Updates</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Status_Updates&amp;diff=2104"/>
		<updated>2010-09-03T21:27:23Z</updated>

		<summary type="html">&lt;p&gt;Christian: url wikified&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== XFS status update for August 2010 ==&lt;br /&gt;
&lt;br /&gt;
On the first of August we finally saw the release of Linux 2.6.35,&lt;br /&gt;
which includes a large XFS update.  The most prominent feature in&lt;br /&gt;
Linux 2.6.35 is the new delayed logging code which provides massive&lt;br /&gt;
speedups for metadata-intensive workloads, but there has been&lt;br /&gt;
a large amount of other fixes and cleanups, leading to the following&lt;br /&gt;
diffstat:&lt;br /&gt;
&lt;br /&gt;
         67 files changed, 4426 insertions(+), 3835 deletions(-)&lt;br /&gt;
&lt;br /&gt;
Given the early release of Linux 2.6.35 the merge window for the&lt;br /&gt;
next release fully fell into the month of August.  The XFS updates&lt;br /&gt;
for Linux 2.6.36 include various additional performance improvements&lt;br /&gt;
in the delayed logging code, for direct I/O writes and for avoiding&lt;br /&gt;
synchronous transactions, as well as various fixes and a large amount&lt;br /&gt;
of cleanups, including the removal of the remaining dead DMAPI&lt;br /&gt;
code.&lt;br /&gt;
&lt;br /&gt;
On the userspace side we saw the 3.1.3 release of xfsprogs, which includes&lt;br /&gt;
various smaller fixes, support for the new XFS_IOC_ZERO_RANGE ioctl and&lt;br /&gt;
Debian packaging updates.  The xfstests package saw one new test case&lt;br /&gt;
and a couple of smaller patches, and xfsdump has not seen any updates at&lt;br /&gt;
all.&lt;br /&gt;
&lt;br /&gt;
The XMLified versions of the XFS users guide, training labs and filesystem&lt;br /&gt;
structure documentation are now available as on-the-fly generated HTML on&lt;br /&gt;
the xfs.org website and can be found at [[XFS_Papers_and_Documentation|Papers &amp;amp; Documentation]].&lt;br /&gt;
&lt;br /&gt;
== XFS status update for July 2010 ==&lt;br /&gt;
&lt;br /&gt;
July saw three more release candidates for the Linux 2.6.35 kernel, which&lt;br /&gt;
included a relatively large number of XFS updates.  There were two security&lt;br /&gt;
fixes, a small one to prevent swapext from operating on write-only file&lt;br /&gt;
descriptors, and a much larger one to properly validate inode numbers&lt;br /&gt;
coming from NFS clients or userspace applications using the bulkstat or&lt;br /&gt;
the open-by-handle interfaces.  In addition to that another relatively&lt;br /&gt;
large patch fixes the way inodes get reclaimed in the background, and&lt;br /&gt;
avoids inode caches growing out of bounds.&lt;br /&gt;
&lt;br /&gt;
In the meantime the code for Linux 2.6.36 got its last touches before&lt;br /&gt;
the expected opening of the merge window, by merging a few more last&lt;br /&gt;
minute fixes and cleanups.  The most notable one is a patch series&lt;br /&gt;
that fixes in-memory corruption when concurrently accessing unwritten&lt;br /&gt;
extents using the in-kernel AIO code.&lt;br /&gt;
&lt;br /&gt;
The userspace side was still quite slow, but saw a bit more activity&lt;br /&gt;
than in June.  In xfsprogs the xfs_db code grew two bug fixes, as did&lt;br /&gt;
the xfs_io tool.  The xfstests package saw one new test case and&lt;br /&gt;
various fixes to existing code.  Last but not least a few patches&lt;br /&gt;
affecting the build system for all userspace tools were committed.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for June 2010 ==&lt;br /&gt;
&lt;br /&gt;
The month of June saw a few important bug fixes for the Linux 2.6.35&lt;br /&gt;
release candidates.  That includes ensuring that files used for the&lt;br /&gt;
swapext ioctl are writable to the user, and doing proper validation&lt;br /&gt;
of inodes coming from untrusted sources, such as NFS exporting and&lt;br /&gt;
the open by handle system calls.  The main work however has been&lt;br /&gt;
focused on development for the Linux 2.6.36 merge window, including&lt;br /&gt;
merging various patches that have been out on the mainline list&lt;br /&gt;
for a long time.  Highlights include further performance improvements&lt;br /&gt;
for sync heavy metadata workloads, stack space reduction in the&lt;br /&gt;
writeback path and improvements of the XFS tracing infrastructure.&lt;br /&gt;
Also after some discussion the remaining hooks for DMAPI are going&lt;br /&gt;
to be dropped in mainline.   As a replacement a tree containing&lt;br /&gt;
full DMAPI support with a slightly cleaner XFS interaction will be&lt;br /&gt;
hosted by SGI.&lt;br /&gt;
&lt;br /&gt;
On the userspace side June was a rather slow month, with no updates&lt;br /&gt;
to xfsprogs and xfsdump at all, and just one new test case and a cleanup&lt;br /&gt;
applied to xfstests.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for May 2010 ==&lt;br /&gt;
&lt;br /&gt;
In May 2010 we saw the long awaited release of Linux 2.6.34, which includes&lt;br /&gt;
a large XFS update.  The most important feature appearing in 2.6.34 was the&lt;br /&gt;
new inode and quota flushing code, which leads to much better I/O patterns&lt;br /&gt;
for metadata-intensive workloads.  Additionally support for synchronous NFS&lt;br /&gt;
exports has been improved to give much better performance, and performance&lt;br /&gt;
for the fsync, fdatasync and sync system calls has been improved slightly.&lt;br /&gt;
A bug when resizing extremely busy filesystems has been fixed, which required&lt;br /&gt;
extensive modification to the data structure used for looking up the&lt;br /&gt;
per-allocation group data.  Last but not least there was a steady flow of&lt;br /&gt;
minor bug fixes and cleanups, leading to the following diffstat from&lt;br /&gt;
2.6.33 to 2.6.34:&lt;br /&gt;
&lt;br /&gt;
  86 files changed, 3209 insertions(+), 3178 deletions(-)&lt;br /&gt;
&lt;br /&gt;
Meanwhile active development aimed at the 2.6.35 merge window progressed.  The&lt;br /&gt;
major feature for this window is the merge of the delayed logging code,&lt;br /&gt;
which adds a new logging mode that dramatically reduces the bandwidth&lt;br /&gt;
required for log I/O.  See the &lt;br /&gt;
[http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs-delayed-logging-design.txt;h=96d0df28bed323d5596fc051b0ffb96ed8e3c8df;hb=HEAD documentation] for details.  Testers&lt;br /&gt;
for this new code are welcome.&lt;br /&gt;
&lt;br /&gt;
In userland xfsprogs saw the long awaited 3.1.2 release, which can be&lt;br /&gt;
considered a bug fix release for xfs_repair, xfs_fsr and mkfs.xfs.  After&lt;br /&gt;
the release a few more fixes were merged into the development tree.&lt;br /&gt;
The xfstests package saw various new tests, including many tests to&lt;br /&gt;
exercise the quota code, and a few fixes to existing tests.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for April 2010 ==&lt;br /&gt;
&lt;br /&gt;
In April 2.6.34 was still in the release candidate phase, with&lt;br /&gt;
a handful of XFS fixes making it into mainline.  Development for&lt;br /&gt;
the 2.6.35 merge window went ahead full steam at the same time.&lt;br /&gt;
&lt;br /&gt;
While a fair amount of patches hit the development tree these were&lt;br /&gt;
largely cleanups, with the real development activity happening on&lt;br /&gt;
the mailing list.  There was another round of patches and following&lt;br /&gt;
discussion on the scalable busy extent tracking and delayed logging&lt;br /&gt;
features mentioned last month.  They are expected to be merged in&lt;br /&gt;
May and queue up for the Linux 2.6.35 window.  Last but not least&lt;br /&gt;
April saw a large number of XFS fixes backported to the 2.6.32 and&lt;br /&gt;
2.6.33 -stable series.&lt;br /&gt;
&lt;br /&gt;
In user land xfsprogs has seen few but important updates, preparing&lt;br /&gt;
for a new release next month.  The xfs_repair tool saw a fix to&lt;br /&gt;
correctly enable the lazy superblock counters on an existing&lt;br /&gt;
filesystem, and xfs_fsr saw updates to better deal with dynamic&lt;br /&gt;
attribute forks.  Last but not least a port to Debian GNU/kFreeBSD&lt;br /&gt;
got merged. The xfstests test suite saw two new test cases and various&lt;br /&gt;
smaller fixes.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for March 2010 ==&lt;br /&gt;
&lt;br /&gt;
The merge window for Linux 2.6.34 closed in the first week of March,&lt;br /&gt;
with the important XFS features already landing in February.  Not&lt;br /&gt;
surprisingly the XFS merge activity in March has been rather slow,&lt;br /&gt;
with only about a dozen bug fix patches making it towards Linus&#039;&lt;br /&gt;
tree in that time.&lt;br /&gt;
&lt;br /&gt;
On the other hand, development for the 2.6.35 merge window has&lt;br /&gt;
been very active.  Most importantly there was a lot of work on the&lt;br /&gt;
transaction and log subsystems.  Starting with a large patchset to&lt;br /&gt;
clean up and refactor the transaction subsystem and introduce more&lt;br /&gt;
flexible I/O containers in the low-level logging code, work is&lt;br /&gt;
progressing towards a new, more efficient logging implementation.  While&lt;br /&gt;
this preparatory work has already been merged in the development tree,&lt;br /&gt;
the actual delayed logging implementation still needs more work after&lt;br /&gt;
the initial public posting.  The delayed logging implementation, which&lt;br /&gt;
is roughly modeled after the journaling mode in the ext3/4&lt;br /&gt;
and reiserfs filesystems, allows accumulating multiple asynchronous&lt;br /&gt;
transactions in memory instead of possibly writing them out&lt;br /&gt;
many times.  Using the new delayed logging mechanism the I/O bandwidth&lt;br /&gt;
used for the log decreases by orders of magnitude and performance&lt;br /&gt;
on metadata intensive workloads increases massively.&lt;br /&gt;
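&lt;br /&gt;
As a rough illustration of why accumulating transactions in memory saves so much log bandwidth (a toy model, not the actual XFS code): if the same metadata item is relogged many times, a delayed scheme writes only its final state at checkpoint time, where a naive scheme writes every commit. All class and item names here are invented for the example:&lt;br /&gt;

```python
class NaiveLog:
    """Writes every transaction immediately: one log write per commit."""
    def __init__(self):
        self.writes = 0
    def commit(self, item, value):
        self.writes += 1  # each commit hits the log device

class DelayedLog:
    """Accumulates dirty items in memory; a checkpoint writes each
    item once with its latest value, however often it was relogged."""
    def __init__(self):
        self.dirty = {}
        self.writes = 0
    def commit(self, item, value):
        self.dirty[item] = value  # relogging just overwrites in memory
    def checkpoint(self):
        self.writes += len(self.dirty)
        self.dirty.clear()

naive, delayed = NaiveLog(), DelayedLog()
for i in range(1000):            # a metadata-intensive burst that
    naive.commit("inode42", i)   # modifies the same item repeatedly
    delayed.commit("inode42", i)
delayed.checkpoint()
print(naive.writes, delayed.writes)  # 1000 1
```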
&lt;br /&gt;
In addition to that a new version of the discard (aka TRIM) support&lt;br /&gt;
has been posted, this time entirely contained in kernel space&lt;br /&gt;
and without the need of a userspace utility to drive it.  Last but&lt;br /&gt;
not least the usual steady stream of cleanups and bug fixes has not&lt;br /&gt;
ceased this month either.&lt;br /&gt;
&lt;br /&gt;
Besides the usual flow of fixes and new test cases in the xfstests&lt;br /&gt;
test suite, development on the userspace side has been rather slow.&lt;br /&gt;
Xfsprogs has only seen a single fix for SMP locking in xfs_repair&lt;br /&gt;
and support for building on Debian GNU/kFreeBSD, and xfsdump&lt;br /&gt;
has seen no commit at all.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for February 2010 ==&lt;br /&gt;
&lt;br /&gt;
February saw the release of the Linux 2.6.33 kernel, which includes&lt;br /&gt;
a large XFS update.  The biggest user-visible change in Linux 2.6.33&lt;br /&gt;
is that XFS now supports the generic Linux trace event infrastructure,&lt;br /&gt;
which allows tracing lots of XFS behavior with a normal production&lt;br /&gt;
kernel build.  Apart from this Linux 2.6.33 has been mostly a bug-fix&lt;br /&gt;
release, fixing various user reported bugs in previous releases.&lt;br /&gt;
The total diffstat for XFS in Linux 2.6.33 looks like:&lt;br /&gt;
&lt;br /&gt;
  84 files changed, 3023 insertions(+), 3550 deletions(-)&lt;br /&gt;
&lt;br /&gt;
In addition to that the merge window for Linux 2.6.34 opened and the&lt;br /&gt;
first merge of the XFS tree made it into Linus&#039; tree.  Unlike Linux&lt;br /&gt;
2.6.33 this merge window includes major feature work.  The most&lt;br /&gt;
important change for users is a new algorithm for inode and quota&lt;br /&gt;
writeback that leads to better I/O locality and improved metadata&lt;br /&gt;
performance.  The second big change is a rewrite of the per-allocation&lt;br /&gt;
group data lookup which fixes a long-standing problem in the code&lt;br /&gt;
to grow a live filesystem and will also ease future filesystem&lt;br /&gt;
shrinking support.  Not merged through the XFS tree, but of great&lt;br /&gt;
importance for embedded users, is a new API that allows XFS to properly&lt;br /&gt;
flush cache lines on its log and large directory buffers, making&lt;br /&gt;
XFS work properly on architectures with virtually indexed caches,&lt;br /&gt;
such as parisc and various arm and mips variants.  Last but not&lt;br /&gt;
least there is an above average amount of cleanups that went into&lt;br /&gt;
Linus&#039; tree in this cycle.&lt;br /&gt;
&lt;br /&gt;
There have been more patches on the mailing list that haven&#039;t made&lt;br /&gt;
it to Linus&#039; tree yet, including an optimized implementation of&lt;br /&gt;
fdatasync(2) and massive speedups for metadata workloads on&lt;br /&gt;
NFS exported XFS filesystems.&lt;br /&gt;
&lt;br /&gt;
On the userspace side February has been a relatively quiet month.&lt;br /&gt;
Led by xfstests, only a moderate amount of fixes made it into&lt;br /&gt;
the respective trees.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for January 2010 ==&lt;br /&gt;
&lt;br /&gt;
January saw additional release candidates of the Linux 2.6.33 kernel,&lt;br /&gt;
including a couple of bug fixes for XFS.  In the meantime the XFS tree&lt;br /&gt;
has been growing a large number of patches destined for the Linux 2.6.34&lt;br /&gt;
merge window: a large rework of the handling of per-AG data, support for&lt;br /&gt;
the quota netlink interface, better power saving behavior of the&lt;br /&gt;
XFS kernel threads, and of course various cleanups.&lt;br /&gt;
&lt;br /&gt;
A large patch series to replace the current asynchronous inode writeback&lt;br /&gt;
with a new scheme that uses the delayed write buffers was posted to&lt;br /&gt;
the list.  The new scheme, which achieves better I/O locality by&lt;br /&gt;
dispatching metadata I/O from a single place, has been discussed&lt;br /&gt;
extensively and is expected to be merged in February.&lt;br /&gt;
&lt;br /&gt;
On the userspace side January saw the 3.1.0 and 3.1.1 releases of xfsprogs,&lt;br /&gt;
as well as the 3.0.4 release of xfsdump.  The biggest changes in xfsprogs&lt;br /&gt;
3.1.0 were optimizations in xfs_repair that lead to a much lower memory&lt;br /&gt;
usage, and optional use of the blkid library for filesystem detection&lt;br /&gt;
and retrieving storage topology information.  The 3.1.1 release contained&lt;br /&gt;
various important bug fixes for these changes and various improvements to&lt;br /&gt;
the build system.  The major features of xfsdump 3.0.4 were fixes for&lt;br /&gt;
time stamp handling on 64-bit systems.&lt;br /&gt;
&lt;br /&gt;
The xfstests package also saw lots of activity, including various new testcases&lt;br /&gt;
and an improved build system.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for December 2009 ==&lt;br /&gt;
&lt;br /&gt;
December finally saw the long awaited release of Linux 2.6.32, which for&lt;br /&gt;
XFS is mostly a bug fix release, with the major changes being various&lt;br /&gt;
improvements to the sync path, including working around the grub boot&lt;br /&gt;
loader&#039;s expectation that metadata is on disk after a sync()&lt;br /&gt;
system call.  Together with a refactoring of the inode allocator this&lt;br /&gt;
gives a nice diffstat for this kernel release:&lt;br /&gt;
&lt;br /&gt;
 46 files changed, 767 insertions(+), 1048 deletions(-)&lt;br /&gt;
&lt;br /&gt;
In the meantime development for 2.6.33 has been going strong.  The&lt;br /&gt;
new event tracing code that allows observing the inner workings of XFS&lt;br /&gt;
in production systems has finally been merged, with another patch to&lt;br /&gt;
reduce the size of the tracing code by using new upstream kernel features&lt;br /&gt;
posted for review.  Also a large patch series has been posted which&lt;br /&gt;
changes per-AG data to be looked up by a radix tree instead of the&lt;br /&gt;
existing array.  This works around possible deadlocks and&lt;br /&gt;
use-after-free issues during growfs, and prepares for removing a global (shared)&lt;br /&gt;
lock from the free space allocators.  In addition to that a wide range&lt;br /&gt;
of fixes has been posted and applied.&lt;br /&gt;
&lt;br /&gt;
Work on the userspace packages has been just as busy.  In mkfs.xfs the&lt;br /&gt;
lazy superblock counter feature has now been enabled by default for the&lt;br /&gt;
upcoming xfsprogs 3.1.0 release, which will require kernel 2.6.22 for&lt;br /&gt;
the default mkfs invocation.  Also for mkfs.xfs a patch was posted&lt;br /&gt;
to correct the automatic detection of 4 kilobyte sector drives, which&lt;br /&gt;
are expected to show up in large quantities in the real world soon.  The&lt;br /&gt;
norepair mode in xfs_repair has been enhanced with additional freespace&lt;br /&gt;
btree correction checks from xfs_db and is now identical to xfs_check in&lt;br /&gt;
filesystem consistency checking coverage.  A temporary file permission&lt;br /&gt;
problem has been fixed in xfs_fsr, and the libhandle library has been&lt;br /&gt;
fixed to better deal with symbolic links.  In xfs_io a few commands&lt;br /&gt;
that were added years ago have finally been wired up to actually be&lt;br /&gt;
usable.  And last but not least xfsdump saw a fix to the time stamp&lt;br /&gt;
handling in the backup format and some usability and documentation&lt;br /&gt;
improvements to xfsinvutil.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for November 2009 ==&lt;br /&gt;
&lt;br /&gt;
November was a relatively slow month for XFS development.  The XFS tree&lt;br /&gt;
that is destined for the Linux 2.6.33 merge window saw a few fixes and&lt;br /&gt;
cleanups applied to it, and a few important fixes still made it into the&lt;br /&gt;
last Linux 2.6.32 release candidates.  A few more patches including a&lt;br /&gt;
final version of the event tracing support for XFS were posted but not&lt;br /&gt;
reviewed yet.&lt;br /&gt;
&lt;br /&gt;
On the userspace side there has been a fair amount of xfsprogs activity.&lt;br /&gt;
The repair speedup patches have finally been merged into the main development&lt;br /&gt;
branch and a couple of other fixes to the various utilities made it in, too.&lt;br /&gt;
The xfstests test suite saw another new regression test and a build&lt;br /&gt;
system fixup.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for October 2009 ==&lt;br /&gt;
&lt;br /&gt;
In October we saw the Linux 2.6.32 merge window with a major XFS update.&lt;br /&gt;
This update includes a refactoring of the inode allocator which also&lt;br /&gt;
allows for speedups for very large filesystems, major sync fixes, updates&lt;br /&gt;
to the fsync and O_SYNC handling which merge the two code paths into a single&lt;br /&gt;
and more efficient one, a workaround for the VFS time stamp behavior,&lt;br /&gt;
and of course various smaller fixes.  A couple of additional fixes have been&lt;br /&gt;
queued up for the next merge window.&lt;br /&gt;
&lt;br /&gt;
On the userspace side there has been healthy activity on xfsprogs:  mkfs can&lt;br /&gt;
now discard unused sectors on SSDs and thinly provisioned storage devices and&lt;br /&gt;
use the more generic libblkid for topology information and filesystem detection&lt;br /&gt;
instead of the older libdisk, and the build system gained some updates to&lt;br /&gt;
make the source package generation simpler and shared between different package&lt;br /&gt;
types.  A patch has been sent out to the list but not yet committed to add symbol&lt;br /&gt;
versioning to the libhandle library to make future ABI additions easier.&lt;br /&gt;
The xfstests package only saw some minor activity with a new test case&lt;br /&gt;
and small build system fixes.&lt;br /&gt;
&lt;br /&gt;
New minor releases of xfsprogs and xfsdump were tagged but not formally&lt;br /&gt;
released after additional discussion.  Instead a new major xfsprogs release&lt;br /&gt;
is planned for next month.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for September 2009 ==&lt;br /&gt;
&lt;br /&gt;
In September the Linux 2.6.31 kernel was finally released, including another&lt;br /&gt;
last minute XFS fix for the swapext (defragmentation) compat ioctl handler.&lt;br /&gt;
The final patch from 2.6.30 to 2.6.31 shows the following impressive diffstat&lt;br /&gt;
for XFS:&lt;br /&gt;
&lt;br /&gt;
   55 files changed, 1476 insertions(+), 2269 deletions(-)&lt;br /&gt;
&lt;br /&gt;
The 2.6.32 merge window started with a large XFS merge that included changes&lt;br /&gt;
to the inode allocator, and a few smaller fixes.  New versions of the sync&lt;br /&gt;
and time stamp fixes as well as the event tracing support have been posted&lt;br /&gt;
in September but not yet merged into the XFS development tree and/or mainline.&lt;br /&gt;
&lt;br /&gt;
On the userspace side a large patch series to reduce the memory usage in&lt;br /&gt;
xfs_repair to acceptable levels was posted, but not yet merged.  A new xfs_df&lt;br /&gt;
shell script to measure on-disk space usage was posted but not yet&lt;br /&gt;
merged, pending some minor review comments and a missing man page.  In addition&lt;br /&gt;
we saw the usual amount of smaller fixes and cleanups.&lt;br /&gt;
&lt;br /&gt;
Also this month Felix Blyakher resigned from his post as XFS maintainer and handed off to Alex Elder.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for August 2009 ==&lt;br /&gt;
&lt;br /&gt;
In August the Linux 2.6.31 kernel has still been in the release candidate&lt;br /&gt;
stage, but a couple of important XFS fixes made it in time for the release,&lt;br /&gt;
including a fix for the inode cache races with NFS workloads that have&lt;br /&gt;
plagued us for a long time.&lt;br /&gt;
&lt;br /&gt;
The list saw various patches destined for the Linux 2.6.32 merge window,&lt;br /&gt;
including a merge of the fsync and O_SYNC handling code to address various&lt;br /&gt;
issues with the latter, a workaround for deficits in the timestamp handling&lt;br /&gt;
interface between the VFS and filesystems, a repost of the sync improvements&lt;br /&gt;
patch series and various smaller patches.&lt;br /&gt;
&lt;br /&gt;
August also saw the minor 3.0.3 release of xfsprogs, which collects smaller&lt;br /&gt;
fixes to the various tools and most importantly a fix to allow xfsprogs to&lt;br /&gt;
work again on SPARC and other architectures with strict alignment&lt;br /&gt;
requirements, which had regressed a few releases ago.  The xfstests&lt;br /&gt;
repository saw a few new test cases and various small improvements.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for July 2009 ==&lt;br /&gt;
&lt;br /&gt;
As a traditional summer vacation month July has not seen a lot of XFS&lt;br /&gt;
activity.  The mainline 2.6.31 kernel made it to the 5th release candidate&lt;br /&gt;
but besides a few kernel-wide patches touching XFS the only activity were&lt;br /&gt;
two small patches fixing a bug in FIEMAP and working around writeback&lt;br /&gt;
performance problems in the VM.&lt;br /&gt;
&lt;br /&gt;
A few more patches were posted to the list but haven&#039;t been merged yet.&lt;br /&gt;
Two big patch series deal with theoretically possible deadlocks due to&lt;br /&gt;
locks taken in reclaim contexts, which are now detected by lockdep.&lt;br /&gt;
&lt;br /&gt;
The pace on the userspace side has been slow.  There have been a couple&lt;br /&gt;
of fixes to xfs_repair and xfs_db, and xfstests grew a few more testcases.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for June 2009 ==&lt;br /&gt;
&lt;br /&gt;
On June 9th we finally saw the release of Linux 2.6.30.  For XFS&lt;br /&gt;
this release mostly contains the improved ENOSPC handling, but also&lt;br /&gt;
various smaller bugfixes and lots of cleanups.  The code size of XFS&lt;br /&gt;
decreased again by 500 lines of code in this release.&lt;br /&gt;
&lt;br /&gt;
The Linux 2.6.31 merge window opened in the middle of the month and some big XFS&lt;br /&gt;
changes have been pushed: A removal of the quotaops&lt;br /&gt;
infrastructure which simplifies the quota implementation, the switch&lt;br /&gt;
from XFS&#039;s own Posix ACL implementation to the generic one shared&lt;br /&gt;
by various other filesystems which also supports in-memory caching of&lt;br /&gt;
ACLs and another incremental refactoring of the sync code.&lt;br /&gt;
&lt;br /&gt;
A patch to better track dirty inodes and work around issues in the&lt;br /&gt;
way the VFS updates the access time stamp on inodes has been reposted&lt;br /&gt;
and discussed. Another patch converting the existing XFS tracing&lt;br /&gt;
infrastructure to use the ftrace event tracer has been posted.&lt;br /&gt;
&lt;br /&gt;
On the userspace side there have been a few updates to xfsprogs, including&lt;br /&gt;
some repair fixes and a new fallocate command for xfs_io.  There were&lt;br /&gt;
major updates for xfstests:  The existing aio-dio-regress testsuite has&lt;br /&gt;
been merged into xfstests, and various changes went into the tree to make&lt;br /&gt;
xfstests better suited for use with other filesystems.&lt;br /&gt;
&lt;br /&gt;
The attr and acl projects, which have traditionally been hosted&lt;br /&gt;
as part of the XFS userspace utilities have now been split into a separate&lt;br /&gt;
project maintained by Andreas Gruenbacher, who has been doing most of&lt;br /&gt;
the work on it, and moved to the Savannah hosting platform.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for May 2009 ==&lt;br /&gt;
&lt;br /&gt;
In May Linux 2.6.30 was getting close to being released, and various&lt;br /&gt;
important XFS fixes made it in during the late release candidates.&lt;br /&gt;
In the meantime some big patch series to rework the sync code and&lt;br /&gt;
the inode allocator have been posted for the next merge window.&lt;br /&gt;
&lt;br /&gt;
On the userspace side xfsprogs and xfsdump 3.0.1 were finally released,&lt;br /&gt;
quickly followed by 3.0.2 releases with updated Debian packaging.&lt;br /&gt;
After that various small patches that were held back made it into xfsprogs.&lt;br /&gt;
A patch to add the xfs_reno tool, which allows moving inodes around to&lt;br /&gt;
fit into the 32 bit inode number space, has been posted; this is also one&lt;br /&gt;
central aspect of future online shrinking support.&lt;br /&gt;
&lt;br /&gt;
There has been major activity on xfstests including adding generic&lt;br /&gt;
filesystems support to allow running tests that aren&#039;t XFS-specific on&lt;br /&gt;
any Linux filesystem.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for April 2009 ==&lt;br /&gt;
&lt;br /&gt;
In April development for Linux 2.6.30 was in full swing.  A patchset to correct flushing of delayed allocations on near-full filesystems was committed in early April, as well as various smaller fixes.  A patch series to improve the behavior of sys_sync has been posted but is waiting for VFS changes queued for Linux 2.6.31.&lt;br /&gt;
&lt;br /&gt;
On the userspace side xfsprogs and xfsdump 3.0.1 have managed to slip their release dates into May again after a lot of last-minute build system updates.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for March 2009 ==&lt;br /&gt;
&lt;br /&gt;
Linux 2.6.29 has been released which includes major XFS updates like the&lt;br /&gt;
new generic btree code, a fully functional 32bit compat ioctl implementation&lt;br /&gt;
and the new combined XFS and Linux inode.  (See previous status reports&lt;br /&gt;
for more details). A patch series to improve correctness and performance&lt;br /&gt;
has been posted but not yet applied.  Various minor fixes and cleanups&lt;br /&gt;
have been sent to Linus for 2.6.30 which looks like it will be a minor&lt;br /&gt;
release for XFS after the big churn in 2.6.29.&lt;br /&gt;
&lt;br /&gt;
On the userspace side a lot of time has been spent on fixing and improving the&lt;br /&gt;
build system shared by the various XFS utilities as well as various smaller&lt;br /&gt;
improvements leading to the xfsprogs and xfsdump 3.0.1 releases which are&lt;br /&gt;
still outstanding.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for February 2009 ==&lt;br /&gt;
&lt;br /&gt;
In February various smaller fixes have been sent to Linus for 2.6.29,&lt;br /&gt;
including a revert of the faster vmap APIs which don&#039;t seem to be quite&lt;br /&gt;
ready yet on the VM side.  At the same time various patches have been&lt;br /&gt;
queued up for 2.6.30, with another big batch pending.  There also has&lt;br /&gt;
been a repost of the CRC patch series, including support for a new,&lt;br /&gt;
larger inode core.&lt;br /&gt;
&lt;br /&gt;
SGI released various bits of work in progress from former employees&lt;br /&gt;
that will be extremely helpful for the future development of XFS,&lt;br /&gt;
thanks a lot to Mark Goodwin for making this happen.&lt;br /&gt;
&lt;br /&gt;
On the userspace side the long awaited 3.0.0 releases of xfsprogs and&lt;br /&gt;
xfsdump finally happened early in the month, accompanied by a 2.2.9&lt;br /&gt;
release of the dmapi userspace.  There have been some issues with packaging&lt;br /&gt;
so a new minor release might follow soon.&lt;br /&gt;
&lt;br /&gt;
The xfs_irecover tool has been relicensed so that it can be merged into&lt;br /&gt;
the GPLv2 codebase of xfsprogs, but the actual integration work hasn&#039;t&lt;br /&gt;
happened yet.&lt;br /&gt;
&lt;br /&gt;
Important bits of XFS documentation that have been available on the XFS&lt;br /&gt;
website in PDF form have been released in the document source form under&lt;br /&gt;
the Creative Commons license so that they can be updated as a community&lt;br /&gt;
effort, and checked into a public git tree.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for January 2009 ==&lt;br /&gt;
&lt;br /&gt;
January has been an extremely busy month on the userspace front.  Many&lt;br /&gt;
smaller and medium updates went into xfsprogs, xfstests and to a lesser&lt;br /&gt;
extent xfsdump.  xfsprogs and xfsdump are ramping up for getting a 3.0.0&lt;br /&gt;
release out in early February which will include the first major re-sync&lt;br /&gt;
with the kernel code in libxfs, a cleanup of the exported library interfaces&lt;br /&gt;
and the move of two tools (xfs_fsr and xfs_estimate) from the xfsdump&lt;br /&gt;
package to xfsprogs.  After this the xfsprogs package will contain all&lt;br /&gt;
tools that use internal libxfs interfaces which fortunately equates to those&lt;br /&gt;
needed for normal administration.  The xfsdump package now only contains&lt;br /&gt;
the xfsdump/xfsrestore tools needed for backing up and restoring XFS&lt;br /&gt;
filesystems.  In addition it grew a fix to support dump/restore on systems&lt;br /&gt;
with a 64k page size.  A large number of acl/attr package patches was&lt;br /&gt;
posted to the list, but pending a possible split of these packages from the&lt;br /&gt;
XFS project these weren&#039;t processed yet.&lt;br /&gt;
&lt;br /&gt;
On the kernel side the big excitement in January was an in-memory corruption&lt;br /&gt;
introduced in the btree refactoring which hit people running 32bit platforms&lt;br /&gt;
without support for large block devices.  This issue was fixed and pushed&lt;br /&gt;
to the 2.6.29 development tree after a long collaborative debugging effort&lt;br /&gt;
at linux.conf.au.  Besides that about a dozen minor fixes were pushed to&lt;br /&gt;
2.6.29 and the first batch of misc patches for the 2.6.30 release cycle&lt;br /&gt;
was sent out.&lt;br /&gt;
&lt;br /&gt;
At the end of December the SGI group in Melbourne which the previous&lt;br /&gt;
XFS maintainer and some other developers worked for has been closed down&lt;br /&gt;
and they will be missed greatly.  As a result maintainership has been passed&lt;br /&gt;
on in a way that has been slightly controversial in the community, and the&lt;br /&gt;
first patchset of work in progress in Melbourne have been posted to the list&lt;br /&gt;
to be picked up by others.&lt;br /&gt;
&lt;br /&gt;
The xfs.org wiki has gotten a little facelift on its front page, making it&lt;br /&gt;
a lot easier to read.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for December 2008 ==&lt;br /&gt;
&lt;br /&gt;
On Christmas Eve the 2.6.28 mainline kernel was released, with only minor XFS&lt;br /&gt;
bug fixes over 2.6.27.&lt;br /&gt;
&lt;br /&gt;
On the development side December has been a busy but unspectacular month.&lt;br /&gt;
A lot of misc fixes and improvements have been sent out, tested and committed,&lt;br /&gt;
especially on the userland side.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for November 2008 ==&lt;br /&gt;
&lt;br /&gt;
The mainline kernel is now at 2.6.28-rc6 and includes a small number of&lt;br /&gt;
XFS fixes.  There have been no updates to the XFS development tree during&lt;br /&gt;
November.  With no new regressions, the large number of changes that&lt;br /&gt;
missed 2.6.28 has thus stabilized and is ready for 2.6.29.  In the meantime&lt;br /&gt;
kernel-side development has been slow, with the only major patch set&lt;br /&gt;
being a wide number of fixes to the compatibility for 32 bit ioctls on&lt;br /&gt;
a 64 bit kernel.&lt;br /&gt;
&lt;br /&gt;
In the meantime there has been a large number of commits to the user space&lt;br /&gt;
tree, which mostly consist of smaller fixes.  xfsprogs is getting close&lt;br /&gt;
to have the 3.0.0 release which will be the first full resync with the&lt;br /&gt;
kernel sources since the year 2005.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for October 2008 ==&lt;br /&gt;
&lt;br /&gt;
Linux 2.6.27 was released with all the bits covered in last month&#039;s report.  It&lt;br /&gt;
did however miss two important fixes for regressions that a few people hit.&lt;br /&gt;
2.6.27.3 or later are recommended for use with XFS.&lt;br /&gt;
&lt;br /&gt;
In the meantime the generic btree implementation, the sync reorganization&lt;br /&gt;
and after a lot of merge pain the XFS and VFS inode unification hit the&lt;br /&gt;
development tree during the time allocated for the merge window.  No XFS&lt;br /&gt;
updates other than the two regression fixes also in 2.6.27.3 have made it&lt;br /&gt;
into mainline as of 2.6.28-rc3.&lt;br /&gt;
&lt;br /&gt;
The only new feature on the list in October is support for the fiemap&lt;br /&gt;
interface that has been added to the VFS during the 2.6.28 merge window.&lt;br /&gt;
However there was a lot of patch traffic consisting of fixes and respun&lt;br /&gt;
versions of previously known patches.  There still is a large backlog of&lt;br /&gt;
patches on the list that is not applied to the development tree yet.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for September 2008 ==&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.27 still not released but only making slow progress from 2.6.27-rc5 to 2.6.27-rc8, XFS changes in mainline have been minimal in September, with only about half a dozen bug fix patches.&lt;br /&gt;
&lt;br /&gt;
In the meantime the generic btree patch set has been committed to the development tree, but not many other updates yet. On the user space side xfsprogs 2.10.1 was released on September 5th with a number of important bug fixes. Following the release of xfsprogs 2.10.1, open season for development of the user space code has started. The first full update of the shared kernel / user space code in libxfs since 2005 has been committed. In addition the number of headers installed for the regular devel package has been reduced to the required minimum, and support for checking the source code for endianness errors using sparse has been added.&lt;br /&gt;
&lt;br /&gt;
The patch sets to unify the XFS and Linux inode structures, and rewrite various bits of the sync code have seen various iterations on the XFS list, but haven&#039;t been committed yet. A first set of patches implementing CRCs for various metadata structures has been posted to the list.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for August 2008 ==&lt;br /&gt;
&lt;br /&gt;
With the 2.6.27-rc5 release the 2.6.27 cycle is nearing its end. The major XFS feature in 2.6.27-rc5 is support for case-insensitive file names. At this point it is still limited to 7bit ASCII file names, with updates for utf8 file names expected to follow later. In addition to that 2.6.27-rc5 fixes a long-standing problem with non-EABI ARM compilers, which pack some XFS data structures wrongly. Besides this 2.6.27-rc5 also contains various cleanups, most notably the removal of the last bhv_vnode_t instances and most uses of semaphores. As usual the diffstat for XFS from 2.6.26 to 2.6.27-rc5 is negative:&lt;br /&gt;
&lt;br /&gt;
       100 files changed, 3819 insertions(+), 4409 deletions(-)&lt;br /&gt;
&lt;br /&gt;
On the user space front a new minor xfsprogs version is about to be released containing various fixes, including the user space part of the ARM packing fix.&lt;br /&gt;
&lt;br /&gt;
Work in progress on the XFS mailing list includes a large patch set to unify the alloc, inobt and bmap btree implementations into a single one that supports arbitrarily pluggable key and record formats. These btree changes are the first major preparation for adding CRC checks to all metadata structures in XFS. There is also an even larger patch set to unify the XFS and Linux inode structures and perform all inode writeback from the Linux inode instead of a separate inode cache in XFS.&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User:Christian&amp;diff=2100</id>
		<title>User:Christian</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User:Christian&amp;diff=2100"/>
		<updated>2010-08-25T20:40:41Z</updated>

		<summary type="html">&lt;p&gt;Christian: -&amp;gt; ckujau&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[User:Ckujau]]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Rpm_for_RedHat&amp;diff=2079</id>
		<title>XFS Rpm for RedHat</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Rpm_for_RedHat&amp;diff=2079"/>
		<updated>2010-04-27T02:08:51Z</updated>

		<summary type="html">&lt;p&gt;Christian: -&amp;gt; http://wiki.centos.org/AdditionalResources/Repositories/CentOSPlus&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The [http://www.centos.org CentOS] project has prebuilt xfs kernel modules as well as xfsprogs rpms for CentOS4 and CentOS5 in the [http://wiki.centos.org/AdditionalResources/Repositories/CentOSPlus CentOSPlus] repository.&lt;br /&gt;
&lt;br /&gt;
These originated from Eric&#039;s RPMS, but they are quite old, have known bugs at this point, and are not well maintained.  Newer versions of RHEL5 have brought better options for xfs use, at least as far as kernelspace:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;pre&amp;gt;* Fri Apr 03 2009 Don Zickus &amp;lt;dzickus@redhat.com&amp;gt; [2.6.18-138.el5]&lt;br /&gt;
- [fs] xfs: misc upstream fixes (Eric Sandeen ) [470845]&lt;br /&gt;
- [fs] xfs: fix compat ioctls (Eric Sandeen ) [470845]&lt;br /&gt;
- [fs] xfs: new aops interface (Eric Sandeen ) [470845]&lt;br /&gt;
- [fs] xfs: backport to rhel5.4 kernel (Eric Sandeen ) [470845]&lt;br /&gt;
- [fs] xfs:  update to 2.6.28.6 codebase (Eric Sandeen ) [470845]&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Aside from the RHEL5.4 xfs.ko in the x86_64 kernel - when in doubt, using the Centos RPMs is probably the best approach for now.  These can be used with RHEL as well, but of course this is not supported by Red Hat.&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2078</id>
		<title>XFS FAQ</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2078"/>
		<updated>2010-04-27T02:07:57Z</updated>

		<summary type="html">&lt;p&gt;Christian: Filesystem performance tweaking with XFS on Linux&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Info from: [http://oss.sgi.com/projects/xfs/faq.html main XFS faq at SGI]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Many thanks to earlier maintainers of this document - Thomas Graichen and Seth Mos.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about XFS? ==&lt;br /&gt;
&lt;br /&gt;
The SGI XFS project page http://oss.sgi.com/projects/xfs/ is the definitive reference. It contains pointers to whitepapers, books, articles, etc.&lt;br /&gt;
&lt;br /&gt;
You could also join the [[XFS_email_list_and_archives|XFS mailing list]] or the &#039;&#039;&#039;&amp;lt;nowiki&amp;gt;#xfs&amp;lt;/nowiki&amp;gt;&#039;&#039;&#039; IRC channel on &#039;&#039;irc.freenode.net&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about ACLs? ==&lt;br /&gt;
&lt;br /&gt;
Andreas Gruenbacher maintains the Extended Attribute and POSIX ACL documentation for Linux at http://acl.bestbits.at/&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;acl(5)&#039;&#039;&#039; manual page is also quite extensive.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find information about the internals of XFS? ==&lt;br /&gt;
&lt;br /&gt;
An [http://oss.sgi.com/projects/xfs/training/ SGI XFS Training course] aimed at developers, triage and support staff, and serious users has been in development. Parts of the course are clearly still incomplete, but there is enough content to be useful to a broad range of users.&lt;br /&gt;
&lt;br /&gt;
Barry Naujok has documented the [http://oss.sgi.com/projects/xfs/papers/xfs_filesystem_structure.pdf XFS ondisk format] which is a very useful reference.&lt;br /&gt;
&lt;br /&gt;
== Q: What partition type should I use for XFS on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Linux native filesystem (83).&lt;br /&gt;
&lt;br /&gt;
== Q: What mount options does XFS have? ==&lt;br /&gt;
&lt;br /&gt;
There are a number of mount options influencing XFS filesystems - refer to the &#039;&#039;&#039;mount(8)&#039;&#039;&#039; manual page or the documentation in the kernel source tree itself ([http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs.txt;hb=HEAD Documentation/filesystems/xfs.txt])&lt;br /&gt;
&lt;br /&gt;
== Q: Is there any relation between the XFS utilities and the kernel version? ==&lt;br /&gt;
&lt;br /&gt;
No, there is no relation. Newer utilities mainly contain fixes and checks that previous versions might not have. New features are also added in a backward compatible way - if they are enabled via mkfs, an incapable (old) kernel will recognize that it does not understand the new feature, and refuse to mount the filesystem.&lt;br /&gt;
&lt;br /&gt;
== Q: Does it run on platforms other than i386? ==&lt;br /&gt;
&lt;br /&gt;
XFS runs on all of the platforms that Linux supports. It is more tested on the more common platforms, especially the i386 family. It&#039;s also well tested on the IA64 platform, since that&#039;s the platform SGI Linux products use.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Do quotas work on XFS? ==&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
To use quotas with XFS, you need to enable XFS quota support when you configure your kernel. You also need to specify quota support when mounting. You can get the Linux quota utilities at their sourceforge website [http://sourceforge.net/projects/linuxquota/  http://sourceforge.net/projects/linuxquota/] or use &#039;&#039;&#039;xfs_quota(8)&#039;&#039;&#039;.&lt;br /&gt;
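A minimal sketch of typical usage with &#039;&#039;&#039;xfs_quota(8)&#039;&#039;&#039; (the device, mount point and user name below are hypothetical placeholders):&lt;br /&gt;

```shell
# Hypothetical example - /dev/sdb1, /mnt/data and someuser are placeholders.
# Mount an XFS filesystem with user quota accounting and enforcement:
mount -t xfs -o uquota /dev/sdb1 /mnt/data

# Set soft/hard block limits for a user (expert mode, -x):
xfs_quota -x -c 'limit bsoft=10g bhard=12g someuser' /mnt/data

# Show current usage and limits in human readable form:
xfs_quota -x -c 'report -h' /mnt/data
```

The available mount options and subcommands are documented in the &#039;&#039;&#039;xfs_quota(8)&#039;&#039;&#039; manual page.&lt;br /&gt;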
&lt;br /&gt;
== Q: Quota: What&#039;s project quota? ==&lt;br /&gt;
&lt;br /&gt;
Project quota is a quota mechanism in XFS that can be used to implement a form of directory tree quota, where a specified directory and all of the files and subdirectories below it (i.e. a tree) can be restricted to using a subset of the available space in the filesystem.&lt;br /&gt;
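As an illustration, a hedged sketch of setting up such a directory tree quota (the paths, the project ID and the project name are made up for the example):&lt;br /&gt;

```shell
# Hypothetical sketch - the paths, ID 42 and name webproj are placeholders.
# Map a directory tree to a numeric project ID and give the ID a name:
echo '42:/mnt/data/projects/web' >> /etc/projects
echo 'webproj:42' >> /etc/projid

# On a filesystem mounted with the prjquota option, initialize the tree
# and limit it to 10 GiB:
xfs_quota -x -c 'project -s webproj' /mnt/data
xfs_quota -x -c 'limit -p bhard=10g webproj' /mnt/data
```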
&lt;br /&gt;
== Q: Quota: Can group quota and project quota be used at the same time? ==&lt;br /&gt;
&lt;br /&gt;
No, project quota cannot be used with group quota at the same time. On the other hand user quota and project quota can be used simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Does unmounting a filesystem mounted with prjquota (project quota) and mounting it again with grpquota (group quota) remove the prjquota limits previously set (and vice versa)? ==&lt;br /&gt;
&lt;br /&gt;
To be answered.&lt;br /&gt;
&lt;br /&gt;
== Q: Are there any dump/restore tools for XFS? ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039; and &#039;&#039;&#039;xfsrestore(8)&#039;&#039;&#039; are fully supported. The tape format is the same as on IRIX, so tapes are interchangeable between operating systems.&lt;br /&gt;
&lt;br /&gt;
== Q: Does LILO work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
This depends on where you install LILO.&lt;br /&gt;
&lt;br /&gt;
Yes, for MBR (Master Boot Record) installations.&lt;br /&gt;
&lt;br /&gt;
No, for root partition installations because the XFS superblock is written at block zero, where LILO would be installed. This is to maintain compatibility with the IRIX on-disk format, and will not be changed.&lt;br /&gt;
&lt;br /&gt;
== Q: Does GRUB work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
There is native XFS filesystem support for GRUB starting with version 0.91 and onward. Unfortunately, GRUB used to make incorrect assumptions about being able to read a block device image while a filesystem is mounted and actively being written to, which could cause intermittent problems when using XFS. This has reportedly since been fixed, and the 0.97 version (at least) of GRUB is apparently stable.&lt;br /&gt;
&lt;br /&gt;
== Q: Can XFS be used for a root filesystem? ==&lt;br /&gt;
&lt;br /&gt;
Yes, with one caveat: Linux does not support an external XFS journal for the root filesystem via the &amp;quot;rootflags=&amp;quot; kernel parameter. To use an external journal for the root filesystem in Linux, an init ramdisk must mount the root filesystem with explicit &amp;quot;logdev=&amp;quot; specified. [http://gus3.typepad.com/i_am_therefore_i_think/2008/07/scratching-an-i.html More information here.]&lt;br /&gt;
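For example, the mount that such an init ramdisk would have to perform might look like the following sketch (both device names are placeholders):&lt;br /&gt;

```shell
# Inside the initrd - /dev/sda1 (data) and /dev/sdb1 (external log) are
# placeholders.  rootflags=logdev=... on the kernel command line does not
# work, so the initrd mounts the root filesystem itself:
mount -t xfs -o logdev=/dev/sdb1 /dev/sda1 /new_root
```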
&lt;br /&gt;
== Q: Will I be able to use my IRIX XFS filesystems on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Yes. The on-disk format of XFS is the same on IRIX and Linux. Obviously, you should back up your data before trying to move it between systems. Filesystems must be &amp;quot;clean&amp;quot; when moved (i.e. unmounted). If you plan to use IRIX filesystems on Linux keep the following points in mind: the kernel needs to have SGI partition support enabled; there is no XLV support in Linux, so you are unable to read IRIX filesystems which use the XLV volume manager; also not all blocksizes available on IRIX are available on Linux (only blocksizes less than or equal to the pagesize of the architecture are possible for now: 4k for i386, ppc, ...; 8k for alpha, sparc, ...). Make sure that the directory format is version 2 on the IRIX filesystems (this is the default since IRIX 6.5.5); Linux can only read v2 directories.&lt;br /&gt;
&lt;br /&gt;
== Q: Is there a way to make a XFS filesystem larger or smaller? ==&lt;br /&gt;
&lt;br /&gt;
You can &#039;&#039;NOT&#039;&#039; make an XFS filesystem smaller online. The only way to shrink it is to do a complete dump, mkfs and restore.&lt;br /&gt;
&lt;br /&gt;
An XFS filesystem may be enlarged by using &#039;&#039;&#039;xfs_growfs(8)&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
If using partitions, you need to have free space after the partition to do so. Remove the partition and recreate it larger with the &#039;&#039;exact same&#039;&#039; starting point, then run &#039;&#039;&#039;xfs_growfs&#039;&#039;&#039; to enlarge the filesystem. Note - editing partition tables is a dangerous pastime, so back up your filesystem before doing so.&lt;br /&gt;
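A sketch of the grow step itself (the mount point and size are hypothetical); note that &#039;&#039;&#039;xfs_growfs&#039;&#039;&#039; operates on a &#039;&#039;mounted&#039;&#039; filesystem:&lt;br /&gt;

```shell
# Hypothetical example - /mnt/data is a placeholder mount point.
# Grow the data section to fill all available space on the device:
xfs_growfs /mnt/data

# Or grow it to an explicit size given in filesystem blocks:
xfs_growfs -D 26214400 /mnt/data
```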
&lt;br /&gt;
Using XFS filesystems on top of a volume manager makes this a lot easier.&lt;br /&gt;
&lt;br /&gt;
== Q: What information should I include when reporting a problem? ==&lt;br /&gt;
&lt;br /&gt;
Things to include are what version of XFS you are using (if this is a CVS version, of what date) and the version of the kernel. If you have problems with userland packages, please report the version of the package you are using.&lt;br /&gt;
&lt;br /&gt;
If the problem relates to a particular filesystem, the output from the &#039;&#039;&#039;xfs_info(8)&#039;&#039;&#039; command and any &#039;&#039;&#039;mount(8)&#039;&#039;&#039; options in use will also be useful to the developers.&lt;br /&gt;
&lt;br /&gt;
If you experience an oops, please run it through &#039;&#039;&#039;ksymoops&#039;&#039;&#039; so that it can be interpreted.&lt;br /&gt;
&lt;br /&gt;
If you have a filesystem that cannot be repaired, make sure you have xfsprogs 2.9.0 or later and run &#039;&#039;&#039;xfs_metadump(8)&#039;&#039;&#039; to capture the metadata (which obfuscates filenames and attributes to protect your privacy) and make the dump available for someone to analyse.&lt;br /&gt;
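A sketch of capturing such a dump; the device and output paths are placeholders, and the -V version flag is assumed to behave like other xfsprogs tools. The version check mirrors the 2.9.0 requirement above.

```shell
# Hedged sketch: verify xfsprogs is >= 2.9.0, then capture an obfuscated
# metadata dump for analysis. /dev/sdX1 and the output path are placeholders.
version_ge() {  # true if dotted version $1 >= $2
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -t. -k1,1n -k2,2n -k3,3n | head -n1)" = "$2" ]
}

have=$(xfs_metadump -V 2>/dev/null | sed 's/[^0-9.]//g')  # -V flag assumed
if version_ge "${have:-0}" 2.9.0; then
    xfs_metadump /dev/sdX1 /tmp/fs.metadump   # filesystem must be unmounted
    gzip /tmp/fs.metadump                     # compress before uploading
fi
```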
&lt;br /&gt;
== Q: Mounting an XFS filesystem does not work - what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
If mount prints an error message something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
     mount: /dev/hda5 has wrong major or minor number&lt;br /&gt;
&lt;br /&gt;
you either do not have XFS compiled into the kernel (or you forgot to load the modules) or you did not use the &amp;quot;-t xfs&amp;quot; option on mount or the &amp;quot;xfs&amp;quot; option in &amp;lt;tt&amp;gt;/etc/fstab&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you get something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 mount: wrong fs type, bad option, bad superblock on /dev/sda1,&lt;br /&gt;
        or too many mounted file systems&lt;br /&gt;
&lt;br /&gt;
Refer to your system log file (&amp;lt;tt&amp;gt;/var/log/messages&amp;lt;/tt&amp;gt;) for a detailed diagnostic message from the kernel.&lt;br /&gt;
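One small helper, as a sketch, for digging the XFS diagnostic out of whichever log source your distro uses (the helper name is made up here):

```shell
# Hypothetical helper: filter a log stream down to XFS-related lines.
xfs_msgs() { grep -i 'xfs'; }

# Typical uses (log locations vary by distro):
#   dmesg | xfs_msgs | tail -n 20
#   xfs_msgs < /var/log/messages | tail -n 20
```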
&lt;br /&gt;
== Q: Does the filesystem have an undelete capability? ==&lt;br /&gt;
&lt;br /&gt;
There is no undelete in XFS. However, at least some XFS driver implementations do not wipe inode information completely, so there is a chance to recover files with specialized commercial software like [http://www.ufsexplorer.com/rdr_xfs.php Raise Data Recovery for XFS].&lt;br /&gt;
Such an implementation also does not re-use directory entries immediately, so there is a chance to get back recently deleted files even with their real names.&lt;br /&gt;
&lt;br /&gt;
This applies to most recent Linux distributions, as well as to most popular NAS boxes that use embedded Linux and the XFS filesystem.&lt;br /&gt;
&lt;br /&gt;
In any case, the best protection is to always keep backups.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I back up an XFS filesystem and ACLs? ==&lt;br /&gt;
&lt;br /&gt;
You can back up an XFS filesystem with &#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039;, or with standard &#039;&#039;&#039;tar(1)&#039;&#039;&#039; for regular files. If you want to back up ACLs and EAs as well, you will need &#039;&#039;&#039;xfsdump&#039;&#039;&#039;, [http://www.bacula.org/en/dev-manual/Current_State_Bacula.html Bacula] (&amp;gt; version 3.1.4) or [http://rsync.samba.org/ rsync] (&amp;gt;= version 3.0.0). &#039;&#039;&#039;xfsdump&#039;&#039;&#039; can also be integrated with [http://www.amanda.org/ amanda(8)].&lt;br /&gt;
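As a dry-run sketch of the two approaches (paths are placeholders; the run helper just prints each command, so nothing executes until you swap echo for the real thing):

```shell
# Hedged dry-run: print the backup commands instead of executing them.
run() { echo "+ $*"; }   # change 'echo "+ $*"' to '"$@"' to really run

run rsync -aAX /data/ /backup/data/              # rsync >= 3.0.0: -A keeps
                                                 # ACLs, -X keeps xattrs/EAs
run xfsdump -l 0 -f /backup/data.xfsdump /data   # level-0 xfsdump to a file
```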
&lt;br /&gt;
== Q: I see applications returning error 990 or &amp;quot;Structure needs cleaning&amp;quot;, what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
The error 990 stands for [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=blob;f=fs/xfs/linux-2.6/xfs_linux.h#l145 EFSCORRUPTED] which usually means XFS has detected a filesystem metadata problem and has shut the filesystem down to prevent further damage. Also, since about June 2006, we [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=commit;h=da2f4d679c8070ba5b6a920281e495917b293aa0 converted from EFSCORRUPTED/990 over to using EUCLEAN], &amp;quot;Structure needs cleaning.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The cause can be pretty much anything, unfortunately - filesystem, virtual memory manager, volume manager, device driver, or hardware.&lt;br /&gt;
&lt;br /&gt;
There should be a detailed console message when this initially happens. The messages have important information giving hints to developers as to the earliest point that a problem was detected. It is there to protect your data.&lt;br /&gt;
&lt;br /&gt;
You can use xfs_check and xfs_repair to remedy the problem (with the filesystem unmounted).&lt;br /&gt;
&lt;br /&gt;
== Q: Why do I see binary NULLS in some files after recovery when I unplugged the power? ==&lt;br /&gt;
&lt;br /&gt;
Update: This issue has been addressed with a CVS fix on the 29th March 2007 and merged into mainline on 8th May 2007 for 2.6.22-rc1.&lt;br /&gt;
&lt;br /&gt;
XFS journals metadata updates, not data updates. After a crash you are supposed to get a consistent filesystem which looks like the state sometime shortly before the crash, NOT what the in memory image looked like the instant before the crash.&lt;br /&gt;
&lt;br /&gt;
Since XFS does not write data out immediately unless you tell it to with fsync, an O_SYNC or O_DIRECT open (the same is true of other filesystems), you are looking at an inode which was flushed out, but whose data was not. Typically you&#039;ll find that the inode is not taking any space since all it has is a size but no extents allocated (try examining the file with the &#039;&#039;&#039;xfs_bmap(8)&#039;&#039;&#039; command).&lt;br /&gt;
&lt;br /&gt;
== Q: What is the problem with the write cache on journaled filesystems? ==&lt;br /&gt;
&lt;br /&gt;
Many drives use a write back cache in order to speed up the performance of writes.  However, there are conditions such as power failure when the write cache memory is never flushed to the actual disk.  Further, the drive can de-stage data from the write cache to the platters in any order that it chooses.  This causes problems for XFS and journaled filesystems in general because they rely on knowing when a write has completed to the disk. They need to know that the log information has made it to disk before allowing metadata to go to disk.  When the metadata makes it to disk then the transaction can effectively be deleted from the log resulting in movement of the tail of the log and thus freeing up some log space. So if the writes never make it to the physical disk, then the ordering is violated and the log and metadata can be lost, resulting in filesystem corruption.&lt;br /&gt;
&lt;br /&gt;
With hard disk cache sizes of currently (Jan 2009) up to 32MB, that can be a lot of valuable information. In a RAID with 8 such disks this adds up to 256MB, and the chance of filesystem metadata sitting in those caches is so high that a power outage carries a very high risk of big data losses.&lt;br /&gt;
&lt;br /&gt;
With a single hard disk and barriers turned on (the default), the drive write cache is flushed before and after a barrier is issued. A powerfail then &amp;quot;only&amp;quot; loses data in the cache, but no essential ordering is violated and corruption will not occur.&lt;br /&gt;
&lt;br /&gt;
With a RAID controller whose cache is battery backed and set to write back mode, you should turn off barriers - they are unnecessary in this case, and if the controller honors the cache flushes, they will be harmful to performance. But then you *must* disable the individual hard disk write caches in order to keep the filesystem intact after a power failure. The method for doing this is different for each RAID controller; see the section about RAID controllers below.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I tell if I have the disk write cache enabled? ==&lt;br /&gt;
&lt;br /&gt;
For SCSI/SATA:&lt;br /&gt;
&lt;br /&gt;
* Look in dmesg(8) output for a driver line, such as:&amp;lt;br /&amp;gt; &amp;quot;SCSI device sda: drive cache: write back&amp;quot;&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# sginfo -c /dev/sda | grep -i &#039;write cache&#039; &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For PATA/SATA (although for SATA this only works on a recent kernel with ATA command passthrough):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -I /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; and look under &amp;quot;Enabled Supported&amp;quot; for &amp;quot;Write cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
== Q: How can I address the problem with the disk write cache? ==&lt;br /&gt;
&lt;br /&gt;
=== Disabling the disk write back cache. ===&lt;br /&gt;
&lt;br /&gt;
For SATA/PATA(IDE): (although for SATA this only works on a recent kernel with ATA command passthrough):&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -W0 /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # hdparm -W0 /dev/hda&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# blktool /dev/sda wcache off&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # blktool /dev/hda wcache off&lt;br /&gt;
&lt;br /&gt;
For SCSI:&lt;br /&gt;
&lt;br /&gt;
* Using sginfo(8), which is a little tedious&amp;lt;br /&amp;gt; It takes three steps. For example:&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -c /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives a list of attribute names and values&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cX /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives an array of cache values which you must match up with from step 1, e.g.&amp;lt;br /&amp;gt; 0 0 0 1 0 1 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cXR /dev/sda 0 0 0 1 0 0 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; allows you to reset the value of the cache attributes.&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
This setting is persistent for a SCSI disk. For a SATA/PATA disk, however, it needs to be redone after every drive reset, as the drive reverts to its default of write cache enabled. A reset can happen after a reboot or on error recovery of the drive, which makes it rather difficult to guarantee that the write cache stays disabled.&amp;lt;br /&amp;gt;&lt;br /&gt;
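One hedged way to reapply the setting automatically is a udev rule; the file name and match below are illustrative, and the hdparm path may differ on your distro:

```
# /etc/udev/rules.d/99-wcache-off.rules  (hypothetical file name)
# Re-disable the write cache whenever a whole SATA/PATA disk (re)appears.
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", RUN+="/sbin/hdparm -W0 /dev/%k"
```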
&lt;br /&gt;
=== Using an external log. ===&lt;br /&gt;
&lt;br /&gt;
Some people have considered the idea of using an external log on a separate drive with the write cache disabled and the rest of the file system on another disk with the write cache enabled. However, that will &#039;&#039;&#039;not&#039;&#039;&#039; solve the problem. For example, the tail of the log is moved when we are notified that a metadata write is completed to disk and we won&#039;t be able to guarantee that if the metadata is on a drive with the write cache enabled.&lt;br /&gt;
&lt;br /&gt;
In fact using an external log will disable XFS&#039; write barrier support.&lt;br /&gt;
&lt;br /&gt;
=== Write barrier support. ===&lt;br /&gt;
&lt;br /&gt;
Write barrier support is enabled by default in XFS since kernel version 2.6.17. It can be disabled by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. Barrier support flushes the write back cache at the appropriate times (such as on XFS log writes). This is generally the recommended solution; however, you should check the system logs to ensure it was enabled successfully. Barriers will be disabled, and this reported in the log, if any of the following three scenarios occurs:&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported with external log device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported by the underlying device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, trial barrier write failed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
If the filesystem is mounted with an external log device, then flushing both the data and log devices is currently unsupported (this may change in the future). If the driver tells the block layer that the device has its write cache enabled but does not support cache flushing, barriers are disabled with the &amp;quot;not supported by the underlying device&amp;quot; message. And finally, XFS actually tests a barrier write on the superblock and checks its error state afterwards, reporting if it fails.&lt;br /&gt;
&lt;br /&gt;
== Q. Should barriers be enabled with storage which has a persistent write cache? ==&lt;br /&gt;
&lt;br /&gt;
Many hardware RAID controllers have a persistent write cache which is preserved across power failures, interface resets, system crashes, etc. Using write barriers in this case is not recommended and will in fact lower performance. Therefore, it is recommended to turn off barrier support by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. But take care that the hard disk write caches are off.&lt;br /&gt;
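For example, as an /etc/fstab sketch (device and mount point are placeholders; only do this when the controller cache is battery backed and the individual disk caches are off):

```
/dev/sdb1  /data  xfs  nobarrier  0 0
```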
&lt;br /&gt;
== Q. Which settings does my RAID controller need? ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s hard to tell because there are so many controllers. Please consult your RAID controller documentation to determine how to change these settings, but we try to give an overview here:&lt;br /&gt;
&lt;br /&gt;
Real RAID controllers (not those found onboard mainboards) normally have a battery backed cache (or an [http://en.wikipedia.org/wiki/Electric_double-layer_capacitor ultracapacitor] + flash memory &amp;quot;[http://www.tweaktown.com/articles/2800/adaptec_zero_maintenance_cache_protection_explained/ zero maintenance cache]&amp;quot;) which is used to buffer writes and improve speed. Even if the controller cache is battery backed, the individual hard disk write caches need to be turned off, as they are not protected from a powerfail and will simply lose all contents in that case.&lt;br /&gt;
&lt;br /&gt;
* onboard RAID controllers: there are so many different types that it&#039;s hard to tell. Generally, these controllers have no cache of their own but leave the hard disk write caches on. That can lead to the bad situation that, after a powerfail with RAID-1 where only parts of the disk caches were written out, the controller does not even see that the disks are out of sync: the disks can reorder cached blocks, so both may have saved the superblock info but lost different data contents. So, turn off the disk write caches before using the RAID function.&lt;br /&gt;
&lt;br /&gt;
* 3ware: /cX/uX set cache=off, see http://www.3ware.com/support/UserDocs/CLIGuide-9.5.1.1.pdf , page 86&lt;br /&gt;
&lt;br /&gt;
* Adaptec: allows setting individual drives cache&lt;br /&gt;
arcconf setcache &amp;lt;disk&amp;gt; wb|wt&lt;br /&gt;
wb = write back, which means write cache on; wt = write through, which means write cache off. So &amp;quot;wt&amp;quot; should be chosen.&lt;br /&gt;
&lt;br /&gt;
* Areca: In archttp under &amp;quot;System Controls&amp;quot; -&amp;gt; &amp;quot;System Configuration&amp;quot; there&#039;s the option &amp;quot;Disk Write Cache Mode&amp;quot; (defaults &amp;quot;Auto&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Off&amp;quot;: disk write cache is turned off&lt;br /&gt;
&lt;br /&gt;
&amp;quot;On&amp;quot;: disk write cache is enabled; this is fast but not safe for your data&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Auto&amp;quot;: if you use a BBM (battery backup module, which you really should use if you care about your data), the controller automatically turns the disk write caches off to protect your data. If no BBM is attached, the controller switches to &amp;quot;On&amp;quot;, because then neither the controller cache nor the disk cache is safe anyway and you apparently just want high speed (which you get).&lt;br /&gt;
&lt;br /&gt;
That is a very sensible default, so you can leave it on &amp;quot;Auto&amp;quot; or enforce &amp;quot;Off&amp;quot; to be sure.&lt;br /&gt;
&lt;br /&gt;
* LSI MegaRAID: allows setting individual disks cache:&lt;br /&gt;
MegaCli -AdpCacheFlush -aN|-a0,1,2|-aALL -EnDskCache|DisDskCache&lt;br /&gt;
&lt;br /&gt;
* Xyratex: from the docs: &amp;quot;Write cache includes the disk drive cache and controller cache.&amp;quot; That means you can only set the drive caches and the unit cache together. To protect your data, turn it off, but write performance will suffer badly, as the controller write cache is disabled as well.&lt;br /&gt;
&lt;br /&gt;
== Q: Which settings are best with virtualization like VMware, XEN, qemu? ==&lt;br /&gt;
&lt;br /&gt;
The biggest problem is that those products seem to virtualize disk&lt;br /&gt;
writes in a way that even barriers no longer work, which means even an&lt;br /&gt;
fsync is not reliable. Tests confirm that unplugging the power from&lt;br /&gt;
such a system can destroy a database inside the virtual machine (guest,&lt;br /&gt;
domU, whatever you call it) - even with a battery backed RAID controller&lt;br /&gt;
cache and the hard disk caches turned off, a setup which is safe on a&lt;br /&gt;
normal host.&lt;br /&gt;
&lt;br /&gt;
In qemu you can specify cache=off on the option specifying the virtual&lt;br /&gt;
disk. For other products this information is missing.&lt;br /&gt;
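For example (the image path is a placeholder; older qemu releases spelled the option cache=off, while newer ones use cache=none, which opens the image with O_DIRECT and bypasses the host page cache):

```
qemu-system-x86_64 -drive file=/images/vm.img,cache=none
```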
&lt;br /&gt;
== Q: What is the issue with directory corruption in Linux 2.6.17? ==&lt;br /&gt;
&lt;br /&gt;
In the Linux kernel 2.6.17 release a subtle bug was accidentally introduced into the XFS directory code by some &amp;quot;sparse&amp;quot; endian annotations. This bug was sufficiently uncommon (it only affects a certain type of format change, in Node or B-Tree format directories, and only in certain situations) that it was not detected during our regular regression testing, but it has been observed in the wild by a number of people now.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: the fix is included in 2.6.17.7 and later kernels.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To add insult to injury, &#039;&#039;&#039;xfs_repair(8)&#039;&#039;&#039; is currently not correcting these directories on detection of this corrupt state either. This &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; issue is actively being worked on, and a fixed version will be available shortly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; is now available; version 2.8.10 or later of the xfsprogs package contains the fixed version.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
No other kernel versions are affected. However, using a corrupt filesystem on other kernels can still result in the filesystem being shutdown if the problem has not been rectified (on disk), making it seem like other kernels are affected.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;xfs_check&#039;&#039;&#039; tool, or &#039;&#039;&#039;xfs_repair -n&#039;&#039;&#039;, should be able to detect any directory corruption.&lt;br /&gt;
&lt;br /&gt;
Until a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; binary is available, one can make use of the &#039;&#039;&#039;xfs_db(8)&#039;&#039;&#039; command to mark the problem directory for removal (see the example below). A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; invocation will remove the directory and move all contents into &amp;quot;lost+found&amp;quot;, named by inode number (see second example on how to map inode number to directory entry name, which needs to be done _before_ removing the directory itself). The inode number of the corrupt directory is included in the shutdown report issued by the kernel on detection of directory corruption. Using that inode number, this is how one would ensure it is removed:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 core.mode = 040755&lt;br /&gt;
 core.version = 2&lt;br /&gt;
 core.format = 3 (btree)&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; write core.mode 0&lt;br /&gt;
 xfs_db&amp;amp;gt; quit&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; will clear the directory, and add new entries (named by inode number) in lost+found.&lt;br /&gt;
&lt;br /&gt;
The easiest way to map inode numbers to full paths is via &#039;&#039;&#039;xfs_ncheck(8)&#039;&#039;&#039;&amp;lt;nowiki&amp;gt;: &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_ncheck -i 14101 -i 14102 /dev/sdXXX&lt;br /&gt;
       14101 full/path/mumble_fratz_foo_bar_1495&lt;br /&gt;
       14102 full/path/mumble_fratz_foo_bar_1494&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Should this not work, we can manually map inode numbers in a B-Tree format directory by taking the following steps:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 ...&lt;br /&gt;
 next_unlinked = null&lt;br /&gt;
 u.bmbt.level = 1&lt;br /&gt;
 u.bmbt.numrecs = 1&lt;br /&gt;
 u.bmbt.keys[1] = [startoff] 1:[0]&lt;br /&gt;
 u.bmbt.ptrs[1] = 1:3628&lt;br /&gt;
 xfs_db&amp;amp;gt; fsblock 3628&lt;br /&gt;
 xfs_db&amp;amp;gt; type bmapbtd&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 magic = 0x424d4150&lt;br /&gt;
 level = 0&lt;br /&gt;
 numrecs = 19&lt;br /&gt;
 leftsib = null&lt;br /&gt;
 rightsib = null&lt;br /&gt;
 recs[1-19] = [startoff,startblock,blockcount,extentflag]&lt;br /&gt;
        1:[0,3088,4,0] 2:[4,3128,8,0] 3:[12,3308,4,0] 4:[16,3360,4,0]&lt;br /&gt;
        5:[20,3496,8,0] 6:[28,3552,8,0] 7:[36,3624,4,0] 8:[40,3633,4,0]&lt;br /&gt;
        9:[44,3688,8,0] 10:[52,3744,4,0] 11:[56,3784,8,0]&lt;br /&gt;
        12:[64,3840,8,0] 13:[72,3896,4,0] 14:[33554432,3092,4,0]&lt;br /&gt;
        15:[33554436,3488,8,0] 16:[33554444,3629,4,0]&lt;br /&gt;
        17:[33554448,3748,4,0] 18:[33554452,3900,4,0]&lt;br /&gt;
        19:[67108864,3364,4,0]&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point we are looking at the extents that hold all of the directory information. There are three types of extent here: the data blocks (extents 1 through 13 above), the leaf blocks (extents 14 through 18), and the freelist blocks (extent 19). The jumps in the first field (start offset) indicate the transition between the three types. For recovering file names, we are only interested in the data blocks, so we can now feed those offset numbers into the &#039;&#039;&#039;xfs_db&#039;&#039;&#039; dblock command. So, for the fifth extent - 5:[20,3496,8,0] - listed above:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; dblock 20&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 dhdr.magic = 0x58443244&lt;br /&gt;
 dhdr.bestfree[0].offset = 0&lt;br /&gt;
 dhdr.bestfree[0].length = 0&lt;br /&gt;
 dhdr.bestfree[1].offset = 0&lt;br /&gt;
 dhdr.bestfree[1].length = 0&lt;br /&gt;
 dhdr.bestfree[2].offset = 0&lt;br /&gt;
 dhdr.bestfree[2].length = 0&lt;br /&gt;
 du[0].inumber = 13937&lt;br /&gt;
 du[0].namelen = 25&lt;br /&gt;
 du[0].name = &amp;quot;mumble_fratz_foo_bar_1595&amp;quot;&lt;br /&gt;
 du[0].tag = 0x10&lt;br /&gt;
 du[1].inumber = 13938&lt;br /&gt;
 du[1].namelen = 25&lt;br /&gt;
 du[1].name = &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;&lt;br /&gt;
 du[1].tag = 0x38&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
So, here we can see that inode number 13938 matches up with name &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;. Iterate through all the extents, and extract all the name-to-inode-number mappings you can, as these will be useful when looking at &amp;quot;lost+found&amp;quot; (once &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; has removed the corrupt directory).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Q: Why does my &amp;gt; 2TB XFS partition disappear when I reboot? ==&lt;br /&gt;
&lt;br /&gt;
Strictly speaking this is not an XFS problem.&lt;br /&gt;
&lt;br /&gt;
To support &amp;gt; 2TB partitions you need two things: a kernel that supports large block devices (&amp;lt;tt&amp;gt;CONFIG_LBD=y&amp;lt;/tt&amp;gt;) and a partition table format that can hold large partitions.  The default DOS partition tables don&#039;t.  The best partition format for&lt;br /&gt;
&amp;gt; 2TB partitions is the EFI GPT format (&amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Without CONFIG_LBD=y you can&#039;t even create the filesystem, but without &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt; it works fine until you reboot at which point the partition will disappear.  Note that you need to enable the &amp;lt;tt&amp;gt;CONFIG_PARTITION_ADVANCED&amp;lt;/tt&amp;gt; option before you can set &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;.&lt;br /&gt;
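The three options above as a kernel .config fragment (option names as given in the FAQ; note that CONFIG_LBD was renamed CONFIG_LBDAF in later kernels):

```
CONFIG_LBD=y
CONFIG_PARTITION_ADVANCED=y
CONFIG_EFI_PARTITION=y
```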
&lt;br /&gt;
== Q: Why do I receive &amp;lt;tt&amp;gt;No space left on device&amp;lt;/tt&amp;gt; after &amp;lt;tt&amp;gt;xfs_growfs&amp;lt;/tt&amp;gt;? ==&lt;br /&gt;
&lt;br /&gt;
After [http://oss.sgi.com/pipermail/xfs/2009-January/039828.html growing an XFS filesystem], df(1) may show enough free space, but attempts to write to the filesystem result in -ENOSPC. To fix this, [http://oss.sgi.com/pipermail/xfs/2009-January/039835.html Dave Chinner advised]:&lt;br /&gt;
&lt;br /&gt;
  The only way to fix this is to move data around to free up space&lt;br /&gt;
  below 1TB. Find your oldest data (i.e. that was around before even&lt;br /&gt;
  the first grow) and move it off the filesystem (move, not copy).&lt;br /&gt;
  Then if you copy it back on, the data blocks will end up above 1TB&lt;br /&gt;
  and that should leave you with plenty of space for inodes below 1TB.&lt;br /&gt;
  &lt;br /&gt;
  A complete dump and restore will also fix the problem ;)&lt;br /&gt;
&lt;br /&gt;
Also, you can add &#039;inode64&#039; to your mount options to allow inodes to live above 1TB.&lt;br /&gt;
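As an /etc/fstab sketch (device and mount point are placeholders; note that inode64 only affects newly allocated inodes):

```
/dev/sdb1  /data  xfs  inode64  0 0
```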
&lt;br /&gt;
== Q: Does using noatime and/or nodiratime at mount time give any performance benefit on XFS (or does not using them decrease performance)? ==&lt;br /&gt;
&lt;br /&gt;
See: [http://everything2.com/index.pl?node_id=1479435 Filesystem performance tweaking with XFS on Linux]&lt;br /&gt;
&lt;br /&gt;
== Q: How do I get around a bad inode that repair is unable to clean up? ==&lt;br /&gt;
&lt;br /&gt;
The trick is to go in with xfs_db and mark the inode as deleted, which will cause repair to clean it up and finish the remove process.&lt;br /&gt;
&lt;br /&gt;
  xfs_db -x -c &#039;inode XXX&#039; -c &#039;write core.nextents 0&#039; -c &#039;write core.size 0&#039; /dev/hdXX&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2077</id>
		<title>XFS FAQ</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2077"/>
		<updated>2010-04-27T01:36:34Z</updated>

		<summary type="html">&lt;p&gt;Christian: links fixed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Info from: [http://oss.sgi.com/projects/xfs/faq.html main XFS faq at SGI]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Many thanks to earlier maintainers of this document - Thomas Graichen and Seth Mos.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about XFS? ==&lt;br /&gt;
&lt;br /&gt;
The SGI XFS project page http://oss.sgi.com/projects/xfs/ is the definitive reference. It contains pointers to whitepapers, books, articles, etc.&lt;br /&gt;
&lt;br /&gt;
You could also join the [[XFS_email_list_and_archives|XFS mailing list]] or the &#039;&#039;&#039;&amp;lt;nowiki&amp;gt;#xfs&amp;lt;/nowiki&amp;gt;&#039;&#039;&#039; IRC channel on &#039;&#039;irc.freenode.net&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about ACLs? ==&lt;br /&gt;
&lt;br /&gt;
Andreas Gruenbacher maintains the Extended Attribute and POSIX ACL documentation for Linux at http://acl.bestbits.at/&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;acl(5)&#039;&#039;&#039; manual page is also quite extensive.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find information about the internals of XFS? ==&lt;br /&gt;
&lt;br /&gt;
An [http://oss.sgi.com/projects/xfs/training/ SGI XFS Training course] aimed at developers, triage and support staff, and serious users has been in development. Parts of the course are clearly still incomplete, but there is enough content to be useful to a broad range of users.&lt;br /&gt;
&lt;br /&gt;
Barry Naujok has documented the [http://oss.sgi.com/projects/xfs/papers/xfs_filesystem_structure.pdf XFS ondisk format] which is a very useful reference.&lt;br /&gt;
&lt;br /&gt;
== Q: What partition type should I use for XFS on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Linux native filesystem (83).&lt;br /&gt;
&lt;br /&gt;
== Q: What mount options does XFS have? ==&lt;br /&gt;
&lt;br /&gt;
There are a number of mount options influencing XFS filesystems - refer to the &#039;&#039;&#039;mount(8)&#039;&#039;&#039; manual page or the documentation in the kernel source tree itself ([http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs.txt;hb=HEAD Documentation/filesystems/xfs.txt])&lt;br /&gt;
&lt;br /&gt;
== Q: Is there any relation between the XFS utilities and the kernel version? ==&lt;br /&gt;
&lt;br /&gt;
No, there is no relation. Newer utilities mainly contain fixes and checks that previous versions might not have. New features are also added in a backward compatible way - if they are enabled via mkfs, an incapable (old) kernel will recognize that it does not understand the new feature and refuse to mount the filesystem.&lt;br /&gt;
&lt;br /&gt;
== Q: Does it run on platforms other than i386? ==&lt;br /&gt;
&lt;br /&gt;
XFS runs on all of the platforms that Linux supports. It is more tested on the more common platforms, especially the i386 family. It is also well tested on the IA64 platform, since that is the platform SGI Linux products use.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Do quotas work on XFS? ==&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
To use quotas with XFS, you need to enable XFS quota support when you configure your kernel. You also need to specify quota support when mounting. You can get the Linux quota utilities at their sourceforge website [http://sourceforge.net/projects/linuxquota/  http://sourceforge.net/projects/linuxquota/] or use &#039;&#039;&#039;xfs_quota(8)&#039;&#039;&#039;.&lt;br /&gt;
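As an /etc/fstab sketch (device and mount point are placeholders; XFS also accepts the shorter uquota/gquota/pquota spellings, and the quota types can be combined subject to the project/group restriction below):

```
/dev/sdb1  /home  xfs  usrquota,grpquota  0 0
```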
&lt;br /&gt;
== Q: Quota: What&#039;s project quota? ==&lt;br /&gt;
&lt;br /&gt;
Project quota is a quota mechanism in XFS that can be used to implement a form of directory tree quota, where a specified directory and all of the files and subdirectories below it (i.e. a tree) can be restricted to using a subset of the available space in the filesystem.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Can group quota and project quota be used at the same time? ==&lt;br /&gt;
&lt;br /&gt;
No, project quota cannot be used with group quota at the same time. On the other hand user quota and project quota can be used simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Does unmounting a filesystem with prjquota (project quota) enabled and mounting it again with grpquota (group quota) remove the prjquota limits previously set (and vice versa)? ==&lt;br /&gt;
&lt;br /&gt;
To be answered.&lt;br /&gt;
&lt;br /&gt;
== Q: Are there any dump/restore tools for XFS? ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039; and &#039;&#039;&#039;xfsrestore(8)&#039;&#039;&#039; are fully supported. The tape format is the same as on IRIX, so tapes are interchangeable between operating systems.&lt;br /&gt;
&lt;br /&gt;
== Q: Does LILO work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
This depends on where you install LILO.&lt;br /&gt;
&lt;br /&gt;
Yes, for MBR (Master Boot Record) installations.&lt;br /&gt;
&lt;br /&gt;
No, for root partition installations because the XFS superblock is written at block zero, where LILO would be installed. This is to maintain compatibility with the IRIX on-disk format, and will not be changed.&lt;br /&gt;
&lt;br /&gt;
== Q: Does GRUB work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
There is native XFS filesystem support for GRUB starting with version 0.91 and onward. Unfortunately, GRUB used to make incorrect assumptions about being able to read a block device image while a filesystem is mounted and actively being written to, which could cause intermittent problems when using XFS. This has reportedly since been fixed, and the 0.97 version (at least) of GRUB is apparently stable.&lt;br /&gt;
&lt;br /&gt;
== Q: Can XFS be used for a root filesystem? ==&lt;br /&gt;
&lt;br /&gt;
Yes, with one caveat: Linux does not support an external XFS journal for the root filesystem via the &amp;quot;rootflags=&amp;quot; kernel parameter. To use an external journal for the root filesystem in Linux, an init ramdisk must mount the root filesystem with explicit &amp;quot;logdev=&amp;quot; specified. [http://gus3.typepad.com/i_am_therefore_i_think/2008/07/scratching-an-i.html More information here.]&lt;br /&gt;
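As a sketch, the mount performed from inside the init ramdisk would look something like this (the device names are hypothetical):&lt;br /&gt;

```shell
# rootflags=logdev=... does not work, so the initrd must mount the
# root filesystem itself, naming the external log device explicitly.
mount -t xfs -o logdev=/dev/sdb1 /dev/sda2 /sysroot
```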
&lt;br /&gt;
== Q: Will I be able to use my IRIX XFS filesystems on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Yes. The on-disk format of XFS is the same on IRIX and Linux. Obviously, you should back up your data before trying to move it between systems. Filesystems must be &amp;quot;clean&amp;quot; when moved (i.e. unmounted). If you plan to use IRIX filesystems on Linux, keep the following points in mind: the kernel needs to have SGI partition support enabled; there is no XLV support in Linux, so you are unable to read IRIX filesystems which use the XLV volume manager; and not all blocksizes available on IRIX are available on Linux (for now, only blocksizes less than or equal to the pagesize of the architecture are possible: 4k for i386, ppc, ...; 8k for alpha, sparc, ...). Make sure that the directory format is version 2 on the IRIX filesystems (this is the default since IRIX 6.5.5); Linux can only read v2 directories.&lt;br /&gt;
&lt;br /&gt;
== Q: Is there a way to make an XFS filesystem larger or smaller? ==&lt;br /&gt;
&lt;br /&gt;
You can &#039;&#039;NOT&#039;&#039; make an XFS filesystem smaller, online or offline. The only way to shrink is to do a complete dump, mkfs and restore.&lt;br /&gt;
&lt;br /&gt;
An XFS filesystem may be enlarged by using &#039;&#039;&#039;xfs_growfs(8)&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
If using partitions, you need to have free space immediately after the partition to do so. Delete the partition and recreate it larger with the &#039;&#039;exact same&#039;&#039; starting point, then run &#039;&#039;&#039;xfs_growfs&#039;&#039;&#039; to enlarge the filesystem. Note: editing partition tables is a dangerous pastime, so back up your filesystem before doing so.&lt;br /&gt;
&lt;br /&gt;
Using XFS filesystems on top of a volume manager makes this a lot easier.&lt;br /&gt;
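With LVM, for example, growing comes down to two commands (the volume and mount point names are hypothetical):&lt;br /&gt;

```shell
# Grow the logical volume by 50 GiB...
lvextend -L +50G /dev/vg0/data

# ...then grow the mounted XFS filesystem to fill the enlarged volume.
xfs_growfs /mnt/data
```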
&lt;br /&gt;
== Q: What information should I include when reporting a problem? ==&lt;br /&gt;
&lt;br /&gt;
Things to include are the version of XFS you are using (if it is a CVS version, the checkout date) and the version of the kernel. If you have problems with userland packages, please report the version of the package you are using.&lt;br /&gt;
&lt;br /&gt;
If the problem relates to a particular filesystem, the output from the &#039;&#039;&#039;xfs_info(8)&#039;&#039;&#039; command and any &#039;&#039;&#039;mount(8)&#039;&#039;&#039; options in use will also be useful to the developers.&lt;br /&gt;
&lt;br /&gt;
If you experience an oops, please run it through &#039;&#039;&#039;ksymoops&#039;&#039;&#039; so that it can be interpreted.&lt;br /&gt;
&lt;br /&gt;
If you have a filesystem that cannot be repaired, make sure you have xfsprogs 2.9.0 or later and run &#039;&#039;&#039;xfs_metadump(8)&#039;&#039;&#039; to capture the metadata (which obfuscates filenames and attributes to protect your privacy) and make the dump available for someone to analyse.&lt;br /&gt;
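For example (the device and output paths are placeholders):&lt;br /&gt;

```shell
# Capture the (obfuscated) metadata of the unmounted filesystem.
xfs_metadump /dev/sda1 /tmp/sda1.metadump

# A developer can later recreate a sparse filesystem image from the
# dump with xfs_mdrestore for analysis.
xfs_mdrestore /tmp/sda1.metadump /tmp/sda1.img
```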
&lt;br /&gt;
== Q: Mounting an XFS filesystem does not work - what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
If mount prints an error message something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
     mount: /dev/hda5 has wrong major or minor number&lt;br /&gt;
&lt;br /&gt;
you either do not have XFS compiled into the kernel (or you forgot to load the modules) or you did not use the &amp;quot;-t xfs&amp;quot; option on mount or the &amp;quot;xfs&amp;quot; option in &amp;lt;tt&amp;gt;/etc/fstab&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you get something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 mount: wrong fs type, bad option, bad superblock on /dev/sda1,&lt;br /&gt;
        or too many mounted file systems&lt;br /&gt;
&lt;br /&gt;
refer to your system log file (&amp;lt;tt&amp;gt;/var/log/messages&amp;lt;/tt&amp;gt;) for a detailed diagnostic message from the kernel.&lt;br /&gt;
&lt;br /&gt;
== Q: Does the filesystem have an undelete capability? ==&lt;br /&gt;
&lt;br /&gt;
There is no undelete in XFS. However, at least some XFS driver implementations do not wipe inode information completely, so there is a chance to recover files with specialized commercial software like [http://www.ufsexplorer.com/rdr_xfs.php Raise Data Recovery for XFS].&lt;br /&gt;
Such implementations also do not re-use directory entries immediately, so there is a chance to get back recently deleted files even with their real names.&lt;br /&gt;
&lt;br /&gt;
This applies to most recent Linux distributions, as well as to most popular NAS boxes that use embedded Linux and the XFS filesystem.&lt;br /&gt;
&lt;br /&gt;
In any case, it is best to always keep backups.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I back up an XFS filesystem and ACLs? ==&lt;br /&gt;
&lt;br /&gt;
You can back up an XFS filesystem with utilities like &#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039; and standard &#039;&#039;&#039;tar(1)&#039;&#039;&#039; for standard files. If you want to preserve ACLs and EAs, you will need to use &#039;&#039;&#039;xfsdump&#039;&#039;&#039;, [http://www.bacula.org/en/dev-manual/Current_State_Bacula.html Bacula] (&amp;gt; version 3.1.4) or [http://rsync.samba.org/ rsync] (&amp;gt;= version 3.0.0). &#039;&#039;&#039;xfsdump&#039;&#039;&#039; can also be integrated with [http://www.amanda.org/ amanda(8)].&lt;br /&gt;
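As a sketch, an rsync invocation that preserves ACLs and extended attributes (rsync &amp;gt;= 3.0.0; the paths are hypothetical):&lt;br /&gt;

```shell
# -a: archive mode, -A: preserve ACLs, -X: preserve extended attributes.
rsync -aAX /srcdir/ /backup/srcdir/
```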
&lt;br /&gt;
== Q: I see applications returning error 990 or &amp;quot;Structure needs cleaning&amp;quot;, what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
The error 990 stands for [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=blob;f=fs/xfs/linux-2.6/xfs_linux.h#l145 EFSCORRUPTED] which usually means XFS has detected a filesystem metadata problem and has shut the filesystem down to prevent further damage. Also, since about June 2006, we [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=commit;h=da2f4d679c8070ba5b6a920281e495917b293aa0 converted from EFSCORRUPTED/990 over to using EUCLEAN], &amp;quot;Structure needs cleaning.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The cause can be pretty much anything, unfortunately - filesystem, virtual memory manager, volume manager, device driver, or hardware.&lt;br /&gt;
&lt;br /&gt;
There should be a detailed console message when this initially happens. The messages have important information giving hints to developers as to the earliest point that a problem was detected. It is there to protect your data.&lt;br /&gt;
&lt;br /&gt;
You can use &#039;&#039;&#039;xfs_check&#039;&#039;&#039; and &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; to remedy the problem (with the filesystem unmounted).&lt;br /&gt;
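A typical sequence (the device name is a placeholder):&lt;br /&gt;

```shell
umount /dev/sda1          # the filesystem must not be mounted
xfs_repair -n /dev/sda1   # -n: only report problems, modify nothing
xfs_repair /dev/sda1      # actually repair the filesystem
```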
&lt;br /&gt;
== Q: Why do I see binary NULLS in some files after recovery when I unplugged the power? ==&lt;br /&gt;
&lt;br /&gt;
Update: This issue has been addressed with a CVS fix on the 29th March 2007 and merged into mainline on 8th May 2007 for 2.6.22-rc1.&lt;br /&gt;
&lt;br /&gt;
XFS journals metadata updates, not data updates. After a crash you are supposed to get a consistent filesystem which looks like the state sometime shortly before the crash, NOT what the in memory image looked like the instant before the crash.&lt;br /&gt;
&lt;br /&gt;
Since XFS does not write data out immediately unless you tell it to with fsync, an O_SYNC or O_DIRECT open (the same is true of other filesystems), you are looking at an inode which was flushed out, but whose data was not. Typically you&#039;ll find that the inode is not taking any space since all it has is a size but no extents allocated (try examining the file with the &#039;&#039;&#039;xfs_bmap(8)&#039;&#039;&#039; command).&lt;br /&gt;
&lt;br /&gt;
== Q: What is the problem with the write cache on journaled filesystems? ==&lt;br /&gt;
&lt;br /&gt;
Many drives use a write back cache in order to speed up the performance of writes.  However, there are conditions such as power failure when the write cache memory is never flushed to the actual disk.  Further, the drive can de-stage data from the write cache to the platters in any order that it chooses.  This causes problems for XFS and journaled filesystems in general because they rely on knowing when a write has completed to the disk. They need to know that the log information has made it to disk before allowing metadata to go to disk.  When the metadata makes it to disk then the transaction can effectively be deleted from the log resulting in movement of the tail of the log and thus freeing up some log space. So if the writes never make it to the physical disk, then the ordering is violated and the log and metadata can be lost, resulting in filesystem corruption.&lt;br /&gt;
&lt;br /&gt;
With hard disk cache sizes of currently (Jan 2009) up to 32MB, that can be a lot of valuable information. In a RAID with 8 such disks this adds up to 256MB, and the chance of having filesystem metadata in the cache is so high that a power outage carries a very high risk of major data loss.&lt;br /&gt;
&lt;br /&gt;
With a single hard disk and barriers turned on (on=default), the drive write cache is flushed before and after a barrier is issued.  A powerfail &amp;quot;only&amp;quot; loses data in the cache but no essential ordering is violated, and corruption will not occur.&lt;br /&gt;
&lt;br /&gt;
With a RAID controller with a battery-backed cache running in write-back mode, you should turn off barriers: they are unnecessary in this case, and if the controller honors the cache flushes, they will be harmful to performance. But then you *must* disable the individual hard disk write caches in order to keep the filesystem intact after a power failure. The method for doing this is different for each RAID controller. See the section about RAID controllers below.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I tell if I have the disk write cache enabled? ==&lt;br /&gt;
&lt;br /&gt;
For SCSI/SATA:&lt;br /&gt;
&lt;br /&gt;
* Look in dmesg(8) output for a driver line, such as:&amp;lt;br /&amp;gt; &amp;quot;SCSI device sda: drive cache: write back&amp;quot;&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# sginfo -c /dev/sda | grep -i &#039;write cache&#039; &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For PATA/SATA (although for SATA this only works on a recent kernel with ATA command passthrough):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -I /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; and look under &amp;quot;Enabled Supported&amp;quot; for &amp;quot;Write cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
== Q: How can I address the problem with the disk write cache? ==&lt;br /&gt;
&lt;br /&gt;
=== Disabling the disk write back cache. ===&lt;br /&gt;
&lt;br /&gt;
For SATA/PATA(IDE): (although for SATA this only works on a recent kernel with ATA command passthrough):&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -W0 /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # hdparm -W0 /dev/hda&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# blktool /dev/sda wcache off&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # blktool /dev/hda wcache off&lt;br /&gt;
&lt;br /&gt;
For SCSI:&lt;br /&gt;
&lt;br /&gt;
* Using sginfo(8) which is a little tedious&amp;lt;br /&amp;gt; It takes 3 steps. For example:&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -c /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives a list of attribute names and values&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cX /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives an array of cache values which you must match up with from step 1, e.g.&amp;lt;br /&amp;gt; 0 0 0 1 0 1 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cXR /dev/sda 0 0 0 1 0 0 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; allows you to reset the value of the cache attributes.&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
This setting is persistent for a SCSI disk. However, for a SATA/PATA disk it needs to be redone after every reset, as the drive reverts to its default of the write cache enabled. A reset can happen on reboot or during error recovery of the drive, which makes it rather difficult to guarantee that the write cache stays disabled.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Using an external log. ===&lt;br /&gt;
&lt;br /&gt;
Some people have considered the idea of using an external log on a separate drive with the write cache disabled and the rest of the file system on another disk with the write cache enabled. However, that will &#039;&#039;&#039;not&#039;&#039;&#039; solve the problem. For example, the tail of the log is moved when we are notified that a metadata write is completed to disk and we won&#039;t be able to guarantee that if the metadata is on a drive with the write cache enabled.&lt;br /&gt;
&lt;br /&gt;
In fact using an external log will disable XFS&#039; write barrier support.&lt;br /&gt;
&lt;br /&gt;
=== Write barrier support. ===&lt;br /&gt;
&lt;br /&gt;
Write barrier support has been enabled by default in XFS since kernel version 2.6.17. It can be disabled by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. Barrier support flushes the write back cache at the appropriate times (such as on XFS log writes). This is generally the recommended solution; however, you should check the system logs to ensure it was successful. Barriers will be disabled, and a message reported in the log, if any of the following three scenarios occurs:&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported with external log device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported by the underlying device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, trial barrier write failed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
If the filesystem is mounted with an external log device then we currently don&#039;t support flushing to the data and log devices (this may change in the future). If the driver tells the block layer that the device does not support write cache flushing with the write cache enabled then it will report that the device doesn&#039;t support it. And finally we will actually test out a barrier write on the superblock and test its error state afterwards, reporting if it fails.&lt;br /&gt;
&lt;br /&gt;
== Q. Should barriers be enabled with storage which has a persistent write cache? ==&lt;br /&gt;
&lt;br /&gt;
Many hardware RAID controllers have a persistent write cache which is preserved across power failures, interface resets, system crashes, etc. Using write barriers in this case is not recommended and will in fact lower performance. Therefore, it is recommended to turn off barrier support by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. But take care that the individual hard disk write caches are switched off.&lt;br /&gt;
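For example (the device and mount point are placeholders):&lt;br /&gt;

```shell
# Disable write barriers for an XFS filesystem on a LUN whose RAID
# controller cache is battery backed (disk write caches turned off).
mount -t xfs -o nobarrier /dev/sdc1 /data
```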
&lt;br /&gt;
== Q. Which settings does my RAID controller need ? ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s hard to tell because there are so many controllers. Please consult your RAID controller documentation to determine how to change these settings, but we try to give an overview here:&lt;br /&gt;
&lt;br /&gt;
Real RAID controllers (not those found onboard mainboards) normally have a battery-backed cache (or an [http://en.wikipedia.org/wiki/Electric_double-layer_capacitor ultracapacitor] + flash memory &amp;quot;[http://www.tweaktown.com/articles/2800/adaptec_zero_maintenance_cache_protection_explained/ zero maintenance cache]&amp;quot;) which is used for buffering writes to improve speed. Even if the controller cache is battery backed, the individual hard disk write caches need to be turned off, as they are not protected against a power failure and will simply lose all their contents in that case.&lt;br /&gt;
&lt;br /&gt;
* Onboard RAID controllers: there are so many different types that it&#039;s hard to tell. Generally, these controllers have no cache of their own but leave the hard disk write cache on. That can lead to the bad situation that, after a power failure with RAID-1 where only parts of the disk caches have been written, the controller doesn&#039;t even see that the disks are out of sync: the disks can reorder cached blocks and might have saved the superblock info but lost different data contents. So turn off the disk write caches before using the RAID function.&lt;br /&gt;
&lt;br /&gt;
* 3ware: /cX/uX set cache=off, see http://www.3ware.com/support/UserDocs/CLIGuide-9.5.1.1.pdf , page 86&lt;br /&gt;
&lt;br /&gt;
* Adaptec: allows setting individual drives cache&lt;br /&gt;
arcconf setcache &amp;lt;disk&amp;gt; wb|wt&lt;br /&gt;
wb=write back, which means write cache on, wt=write through, which means write cache off. So &amp;quot;wt&amp;quot; should be chosen.&lt;br /&gt;
&lt;br /&gt;
* Areca: In archttp under &amp;quot;System Controls&amp;quot; -&amp;gt; &amp;quot;System Configuration&amp;quot; there&#039;s the option &amp;quot;Disk Write Cache Mode&amp;quot; (defaults &amp;quot;Auto&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Off&amp;quot;: disk write cache is turned off&lt;br /&gt;
&lt;br /&gt;
&amp;quot;On&amp;quot;: disk write cache is enabled; this is fast but not safe for your data&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Auto&amp;quot;: If you use a BBM (battery backup module, which you really should use if you care about your data), the controller automatically turns the disk write caches off to protect your data. If no BBM is attached, the controller switches to &amp;quot;On&amp;quot;, because then neither the controller cache nor the disk cache is safe anyway, so it assumes you just want high speed (which you get).&lt;br /&gt;
&lt;br /&gt;
That&#039;s a very sensible default, so you can leave it at &amp;quot;Auto&amp;quot; or enforce &amp;quot;Off&amp;quot; to be sure.&lt;br /&gt;
&lt;br /&gt;
* LSI MegaRAID: allows setting individual disks cache:&lt;br /&gt;
MegaCli -AdpCacheFlush -aN|-a0,1,2|-aALL -EnDskCache|DisDskCache&lt;br /&gt;
&lt;br /&gt;
* Xyratex: from the docs: &amp;quot;Write cache includes the disk drive cache and controller cache.&amp;quot; This means you can only set the drive caches and the unit cache together. To protect your data, turn it off, but write performance will suffer badly because the controller write cache is disabled as well.&lt;br /&gt;
&lt;br /&gt;
== Q: Which settings are best with virtualization like VMware, XEN, qemu? ==&lt;br /&gt;
&lt;br /&gt;
The biggest problem is that these products seem to virtualize disk&lt;br /&gt;
writes in a way that even barriers no longer work, which means even&lt;br /&gt;
an fsync is not reliable. Tests confirm that unplugging the power from&lt;br /&gt;
such a system can destroy a database within the virtual machine (guest,&lt;br /&gt;
domU, or whatever you call it), even with a RAID controller with&lt;br /&gt;
battery-backed cache and the hard disk caches turned off (which is&lt;br /&gt;
safe on a normal host).&lt;br /&gt;
&lt;br /&gt;
In qemu you can specify cache=off in the option that defines the virtual&lt;br /&gt;
disk. For the other products this information is missing.&lt;br /&gt;
&lt;br /&gt;
== Q: What is the issue with directory corruption in Linux 2.6.17? ==&lt;br /&gt;
&lt;br /&gt;
In the Linux kernel 2.6.17 release a subtle bug was accidentally introduced into the XFS directory code by some &amp;quot;sparse&amp;quot; endian annotations. This bug was sufficiently uncommon (it only affects a certain type of format change, in Node or B-Tree format directories, and only in certain situations) that it was not detected during our regular regression testing, but it has been observed in the wild by a number of people now.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: the fix is included in 2.6.17.7 and later kernels.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To add insult to injury, &#039;&#039;&#039;xfs_repair(8)&#039;&#039;&#039; is currently not correcting these directories on detection of this corrupt state either. This &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; issue is actively being worked on, and a fixed version will be available shortly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; is now available; version 2.8.10 or later of the xfsprogs package contains the fixed version.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
No other kernel versions are affected. However, using a corrupt filesystem on other kernels can still result in the filesystem being shutdown if the problem has not been rectified (on disk), making it seem like other kernels are affected.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;xfs_check&#039;&#039;&#039; tool, or &#039;&#039;&#039;xfs_repair -n&#039;&#039;&#039;, should be able to detect any directory corruption.&lt;br /&gt;
&lt;br /&gt;
Until a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; binary is available, one can make use of the &#039;&#039;&#039;xfs_db(8)&#039;&#039;&#039; command to mark the problem directory for removal (see the example below). A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; invocation will remove the directory and move all contents into &amp;quot;lost+found&amp;quot;, named by inode number (see second example on how to map inode number to directory entry name, which needs to be done _before_ removing the directory itself). The inode number of the corrupt directory is included in the shutdown report issued by the kernel on detection of directory corruption. Using that inode number, this is how one would ensure it is removed:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 core.mode = 040755&lt;br /&gt;
 core.version = 2&lt;br /&gt;
 core.format = 3 (btree)&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; write core.mode 0&lt;br /&gt;
 xfs_db&amp;amp;gt; quit&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; will clear the directory, and add new entries (named by inode number) in lost+found.&lt;br /&gt;
&lt;br /&gt;
The easiest way to map inode numbers to full paths is via &#039;&#039;&#039;xfs_ncheck(8)&#039;&#039;&#039;&amp;lt;nowiki&amp;gt;: &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_ncheck -i 14101 -i 14102 /dev/sdXXX&lt;br /&gt;
       14101 full/path/mumble_fratz_foo_bar_1495&lt;br /&gt;
       14102 full/path/mumble_fratz_foo_bar_1494&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Should this not work, we can manually map inode numbers in a B-Tree format directory by taking the following steps:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 ...&lt;br /&gt;
 next_unlinked = null&lt;br /&gt;
 u.bmbt.level = 1&lt;br /&gt;
 u.bmbt.numrecs = 1&lt;br /&gt;
 u.bmbt.keys[1] = [startoff] 1:[0]&lt;br /&gt;
 u.bmbt.ptrs[1] = 1:3628&lt;br /&gt;
 xfs_db&amp;amp;gt; fsblock 3628&lt;br /&gt;
 xfs_db&amp;amp;gt; type bmapbtd&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 magic = 0x424d4150&lt;br /&gt;
 level = 0&lt;br /&gt;
 numrecs = 19&lt;br /&gt;
 leftsib = null&lt;br /&gt;
 rightsib = null&lt;br /&gt;
 recs[1-19] = [startoff,startblock,blockcount,extentflag]&lt;br /&gt;
        1:[0,3088,4,0] 2:[4,3128,8,0] 3:[12,3308,4,0] 4:[16,3360,4,0]&lt;br /&gt;
        5:[20,3496,8,0] 6:[28,3552,8,0] 7:[36,3624,4,0] 8:[40,3633,4,0]&lt;br /&gt;
        9:[44,3688,8,0] 10:[52,3744,4,0] 11:[56,3784,8,0]&lt;br /&gt;
        12:[64,3840,8,0] 13:[72,3896,4,0] 14:[33554432,3092,4,0]&lt;br /&gt;
        15:[33554436,3488,8,0] 16:[33554444,3629,4,0]&lt;br /&gt;
        17:[33554448,3748,4,0] 18:[33554452,3900,4,0]&lt;br /&gt;
        19:[67108864,3364,4,0]&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point we are looking at the extents that hold all of the directory information. There are three types of extent here, we have the data blocks (extents 1 through 13 above), then the leaf blocks (extents 14 through 18), then the freelist blocks (extent 19 above). The jumps in the first field (start offset) indicate our progression through each of the three types. For recovering file names, we are only interested in the data blocks, so we can now feed those offset numbers into the &#039;&#039;&#039;xfs_db&#039;&#039;&#039; dblock command. So, for the fifth extent - 5:[20,3496,8,0] - listed above:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; dblock 20&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 dhdr.magic = 0x58443244&lt;br /&gt;
 dhdr.bestfree[0].offset = 0&lt;br /&gt;
 dhdr.bestfree[0].length = 0&lt;br /&gt;
 dhdr.bestfree[1].offset = 0&lt;br /&gt;
 dhdr.bestfree[1].length = 0&lt;br /&gt;
 dhdr.bestfree[2].offset = 0&lt;br /&gt;
 dhdr.bestfree[2].length = 0&lt;br /&gt;
 du[0].inumber = 13937&lt;br /&gt;
 du[0].namelen = 25&lt;br /&gt;
 du[0].name = &amp;quot;mumble_fratz_foo_bar_1595&amp;quot;&lt;br /&gt;
 du[0].tag = 0x10&lt;br /&gt;
 du[1].inumber = 13938&lt;br /&gt;
 du[1].namelen = 25&lt;br /&gt;
 du[1].name = &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;&lt;br /&gt;
 du[1].tag = 0x38&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
So, here we can see that inode number 13938 matches up with name &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;. Iterate through all the extents, and extract all the name-to-inode-number mappings you can, as these will be useful when looking at &amp;quot;lost+found&amp;quot; (once &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; has removed the corrupt directory).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Q: Why does my &amp;gt; 2TB XFS partition disappear when I reboot ? ==&lt;br /&gt;
&lt;br /&gt;
Strictly speaking this is not an XFS problem.&lt;br /&gt;
&lt;br /&gt;
To support &amp;gt; 2TB partitions you need two things: a kernel that supports large block devices (&amp;lt;tt&amp;gt;CONFIG_LBD=y&amp;lt;/tt&amp;gt;) and a partition table format that can hold large partitions.  The default DOS partition tables don&#039;t.  The best partition format for&lt;br /&gt;
&amp;gt; 2TB partitions is the EFI GPT format (&amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Without CONFIG_LBD=y you can&#039;t even create the filesystem, but without &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt; it works fine until you reboot at which point the partition will disappear.  Note that you need to enable the &amp;lt;tt&amp;gt;CONFIG_PARTITION_ADVANCED&amp;lt;/tt&amp;gt; option before you can set &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;.&lt;br /&gt;
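As a sketch, creating a GPT label and a single large partition with parted (the device name is hypothetical):&lt;br /&gt;

```shell
# Replace the partition table with GPT and create one partition
# spanning the whole disk, then make an XFS filesystem on it.
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 0% 100%
mkfs.xfs /dev/sdb1
```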
&lt;br /&gt;
== Q: Why do I receive &amp;lt;tt&amp;gt;No space left on device&amp;lt;/tt&amp;gt; after &amp;lt;tt&amp;gt;xfs_growfs&amp;lt;/tt&amp;gt;? ==&lt;br /&gt;
&lt;br /&gt;
After [http://oss.sgi.com/pipermail/xfs/2009-January/039828.html growing an XFS filesystem], df(1) may show enough free space, but attempts to write to the filesystem result in -ENOSPC. To fix this, [http://oss.sgi.com/pipermail/xfs/2009-January/039835.html Dave Chinner advised]:&lt;br /&gt;
&lt;br /&gt;
  The only way to fix this is to move data around to free up space&lt;br /&gt;
  below 1TB. Find your oldest data (i.e. that was around before even&lt;br /&gt;
  the first grow) and move it off the filesystem (move, not copy).&lt;br /&gt;
  Then if you copy it back on, the data blocks will end up above 1TB&lt;br /&gt;
  and that should leave you with plenty of space for inodes below 1TB.&lt;br /&gt;
  &lt;br /&gt;
  A complete dump and restore will also fix the problem ;)&lt;br /&gt;
&lt;br /&gt;
Also, you can add &#039;inode64&#039; to your mount options to allow inodes to live above 1TB.&lt;br /&gt;
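For example (the device and mount point are placeholders; on older kernels inode64 cannot be enabled via remount, so a full unmount and mount may be needed):&lt;br /&gt;

```shell
umount /data
mount -t xfs -o inode64 /dev/sdb1 /data
```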
&lt;br /&gt;
== Q: Does using noatime and/or nodiratime at mount time give any performance benefit on XFS (or does not using them decrease performance)? ==&lt;br /&gt;
See: http://everything2.com/index.pl?node_id=1479435&lt;br /&gt;
&lt;br /&gt;
== Q: How to get around a bad inode that repair is unable to clean up ==&lt;br /&gt;
&lt;br /&gt;
The trick is to go in with xfs_db and mark the inode as deleted, which will cause repair to clean it up and finish the removal process.&lt;br /&gt;
&lt;br /&gt;
  xfs_db -x -c &#039;inode XXX&#039; -c &#039;write core.nextents 0&#039; -c &#039;write core.size 0&#039; /dev/hdXX&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Runtime_Stats&amp;diff=2076</id>
		<title>Runtime Stats</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Runtime_Stats&amp;diff=2076"/>
		<updated>2010-04-27T01:20:49Z</updated>

		<summary type="html">&lt;p&gt;Christian: minor formatting fixes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page describes the information available from &amp;lt;tt&amp;gt;/proc/fs/xfs/stat&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
__TOC__ &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Overview ==&lt;br /&gt;
Being an advanced filesystem, XFS exposes some internal statistics to userspace, which can be helpful for debugging, understanding I/O characteristics, and optimizing performance. The data is available in /proc/fs/xfs/stat as a dump of variable values grouped by the type of information they hold.&lt;br /&gt;
&lt;br /&gt;
== Output example ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
extent_alloc 4260849 125170297 4618726 131131897&lt;br /&gt;
abt 29491162 337391304 11257328 11133039&lt;br /&gt;
blk_map 381213360 115456141 10903633 69612322 7448401 507596777 0&lt;br /&gt;
bmbt 771328 6236258 602114 86646&lt;br /&gt;
dir 21253907 6921870 6969079 779205554&lt;br /&gt;
trans 126946406 38184616 6342392&lt;br /&gt;
ig 17754368 2019571 102 15734797 0 15672217 3962470&lt;br /&gt;
log 129491915 3992515264 458018 153771989 127040250&lt;br /&gt;
push_ail 171473415 0 6896837 3324292 8069877 65884 1289485 0 22535 7337&lt;br /&gt;
xstrat 4140059 0&lt;br /&gt;
rw 1595677950 1046884251&lt;br /&gt;
attr 194724197 0 7 0&lt;br /&gt;
icluster 20772185 2488203 13909520&lt;br /&gt;
vnodes 62578 15959666 0 0 15897088 15897088 15897088 0&lt;br /&gt;
buf 2090581631 1972536890 118044776 225145 9486625 0 0 2000152616 809762&lt;br /&gt;
xpc 6908312903680 67735504884757 19760115252482&lt;br /&gt;
debug 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Fields table ==&lt;br /&gt;
Numbers shown in output example above are presented as table of value names. Cells with &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;red background&amp;lt;/span&amp;gt; lack meaningful description and should be edited.&lt;br /&gt;
{| style=&amp;quot;white-space:nowrap; text-align:center; border-spacing: 2px; border: 1px solid #666666; font-family: Verdana, Cursor;font-size: 10px;font-weight: bold;&amp;quot; &lt;br /&gt;
 |-style=&amp;quot;background-color: #f0f0f0;&amp;quot;&lt;br /&gt;
 | [[#extent_alloc|extent_alloc - Extent Allocation]]&lt;br /&gt;
 | xs_allocx&lt;br /&gt;
 | xs_allocb&lt;br /&gt;
 | xs_freex&lt;br /&gt;
 | xs_freeb&lt;br /&gt;
 |- style=&amp;quot;background-color: #bfbfff;&amp;quot;&lt;br /&gt;
 | [[#abt|abt - Allocation Btree]]&lt;br /&gt;
 | xs_abt_lookup&lt;br /&gt;
 | xs_abt_compare&lt;br /&gt;
 | xs_abt_insrec&lt;br /&gt;
 | xs_abt_delrec&lt;br /&gt;
 |-style=&amp;quot;background-color: #f0f0f0;&amp;quot;&lt;br /&gt;
 | [[#blk_map|blk_map - Block Mapping]]&lt;br /&gt;
 | xs_blk_mapr&lt;br /&gt;
 | xs_blk_mapw&lt;br /&gt;
 | xs_blk_unmap&lt;br /&gt;
 | xs_add_exlist&lt;br /&gt;
 | xs_del_exlist&lt;br /&gt;
 | xs_look_exlist&lt;br /&gt;
 | xs_cmp_exlist&lt;br /&gt;
 |-style=&amp;quot;background-color: #bfbfff;&amp;quot;&lt;br /&gt;
 | [[#bmbt|bmbt - Block Map Btree]]&lt;br /&gt;
 | xs_bmbt_lookup&lt;br /&gt;
 | xs_bmbt_compare&lt;br /&gt;
 | xs_bmbt_insrec&lt;br /&gt;
 | xs_bmbt_delrec&lt;br /&gt;
 |-style=&amp;quot;background-color: #f0f0f0;&amp;quot;&lt;br /&gt;
 | [[#dir|dir - Directory Operations]]&lt;br /&gt;
 | xs_dir_lookup&lt;br /&gt;
 | xs_dir_create&lt;br /&gt;
 | xs_dir_remove&lt;br /&gt;
 | xs_dir_getdents&lt;br /&gt;
 |-style=&amp;quot;background-color: #bfbfff;&amp;quot;&lt;br /&gt;
 | [[#trans|trans - Transactions]]&lt;br /&gt;
 | xs_trans_sync&lt;br /&gt;
 | xs_trans_async&lt;br /&gt;
 | xs_trans_empty&lt;br /&gt;
 |-style=&amp;quot;background-color: #f0f0f0;&amp;quot;&lt;br /&gt;
 | [[#ig|ig - Inode Operations]]&lt;br /&gt;
 | xs_ig_attempts&lt;br /&gt;
 | xs_ig_found&lt;br /&gt;
 | xs_ig_frecycle&lt;br /&gt;
 | xs_ig_missed&lt;br /&gt;
 | xs_ig_dup&lt;br /&gt;
 | xs_ig_reclaims&lt;br /&gt;
 | xs_ig_attrchg&lt;br /&gt;
 |-style=&amp;quot;background-color: #bfbfff;&amp;quot;&lt;br /&gt;
 | [[#log|log - Log Operations]]&lt;br /&gt;
 | xs_log_writes&lt;br /&gt;
 | xs_log_blocks&lt;br /&gt;
 | xs_log_noiclogs&lt;br /&gt;
 | xs_log_force&lt;br /&gt;
 |  &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xs_log_force_sleep&amp;lt;/span&amp;gt;&lt;br /&gt;
 |-style=&amp;quot;background-color: #f0f0f0;&amp;quot;&lt;br /&gt;
 | [[#push_ail|push_ail - Tail-Pushing Stats]]&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xs_try_logspace&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xs_sleep_logspace&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xs_push_ail&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xs_push_ail_success&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xs_push_ail_pushbuf&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xs_push_ail_pinned&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xs_push_ail_locked&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xs_push_ail_flushing&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xs_push_ail_restarts&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xs_push_ail_flush&amp;lt;/span&amp;gt;&lt;br /&gt;
 |-style=&amp;quot;background-color: #bfbfff;&amp;quot;&lt;br /&gt;
 | [[#xstrat|xstrat - IoMap Write Convert]]&lt;br /&gt;
 | xs_xstrat_quick&lt;br /&gt;
 | xs_xstrat_split&lt;br /&gt;
 |-style=&amp;quot;background-color: #f0f0f0;&amp;quot;&lt;br /&gt;
 | [[#rw|rw - Read/Write Stats]]&lt;br /&gt;
 | xs_write_calls&lt;br /&gt;
 | xs_read_calls&lt;br /&gt;
 |-style=&amp;quot;background-color: #bfbfff;&amp;quot;&lt;br /&gt;
 | [[#attr|attr - Attribute Operations]]&lt;br /&gt;
 | xs_attr_get&lt;br /&gt;
 | xs_attr_set&lt;br /&gt;
 | xs_attr_remove&lt;br /&gt;
 | xs_attr_list&lt;br /&gt;
 |-style=&amp;quot;background-color: #f0f0f0;&amp;quot;&lt;br /&gt;
 | [[#icluster|icluster - Inode Clustering]]&lt;br /&gt;
 | xs_iflush_count&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xs_icluster_flushcnt&amp;lt;/span&amp;gt;&lt;br /&gt;
 | xs_icluster_flushinode&lt;br /&gt;
 |-style=&amp;quot;background-color: #bfbfff;&amp;quot;&lt;br /&gt;
 | [[#vnodes|vnodes - Vnode Statistics]]&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;vn_active&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;vn_alloc&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;vn_get&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;vn_hold&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;vn_rele&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;vn_reclaim&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;vn_remove&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;vn_free&amp;lt;/span&amp;gt;&lt;br /&gt;
 |-style=&amp;quot;background-color: #f0f0f0;&amp;quot;&lt;br /&gt;
 | [[#buf|buf - Buf Statistics]]&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xb_get&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xb_create&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xb_get_locked&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xb_get_locked_waited&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xb_busy_locked&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xb_miss_locked&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xb_page_retries&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xb_page_found&amp;lt;/span&amp;gt;&lt;br /&gt;
 | &amp;lt;span style=&amp;quot;background-color: red;&amp;quot;&amp;gt;xb_get_read&amp;lt;/span&amp;gt;&lt;br /&gt;
 |-style=&amp;quot;background-color: #bfbfff;&amp;quot;&lt;br /&gt;
 | [[#xpc|xpc - eXtended Precision Counters]]&lt;br /&gt;
 | xs_xstrat_bytes&lt;br /&gt;
 | xs_write_bytes&lt;br /&gt;
 | xs_read_bytes&lt;br /&gt;
 |}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Field descriptions ==&lt;br /&gt;
&lt;br /&gt;
=== extent_alloc - Extent Allocation ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;extent_alloc&amp;quot;&amp;gt;&lt;br /&gt;
* xs_allocx (xfs.allocs.alloc_extent) &lt;br /&gt;
** Number of file system extents allocated over all XFS filesystems. &lt;br /&gt;
* xs_allocb (xfs.allocs.alloc_block)&lt;br /&gt;
** Number of file system blocks allocated over all XFS filesystems. &lt;br /&gt;
* xs_freex (xfs.allocs.free_extent) &lt;br /&gt;
** Number of file system extents freed over all XFS filesystems. &lt;br /&gt;
* xs_freeb (xfs.allocs.free_block) &lt;br /&gt;
** Number of file system blocks freed over all XFS filesystems. &lt;br /&gt;
&lt;br /&gt;
=== abt - Allocation Btree ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;abt&amp;quot;&amp;gt;&lt;br /&gt;
* xs_abt_lookup (xfs.alloc_btree.lookup)&lt;br /&gt;
** Number of lookup operations in XFS filesystem allocation btrees.&lt;br /&gt;
* xs_abt_compare (xfs.alloc_btree.compare)&lt;br /&gt;
** Number of compares in XFS filesystem allocation btree lookups.&lt;br /&gt;
* xs_abt_insrec (xfs.alloc_btree.insrec)&lt;br /&gt;
** Number of extent records inserted into XFS filesystem allocation btrees.&lt;br /&gt;
* xs_abt_delrec (xfs.alloc_btree.delrec)&lt;br /&gt;
** Number of extent records deleted from XFS filesystem allocation btrees.&lt;br /&gt;
&lt;br /&gt;
=== blk_map - Block Mapping ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;blk_map&amp;quot;&amp;gt;&lt;br /&gt;
* xs_blk_mapr (xfs.block_map.read_ops)&lt;br /&gt;
** Number of block map for read operations performed on XFS files.&lt;br /&gt;
* xs_blk_mapw (xfs.block_map.write_ops)&lt;br /&gt;
** Number of block map for write operations performed on XFS files.&lt;br /&gt;
* xs_blk_unmap (xfs.block_map.unmap)&lt;br /&gt;
** Number of block unmap (delete) operations performed on XFS files.&lt;br /&gt;
* xs_add_exlist (xfs.block_map.add_exlist)&lt;br /&gt;
** Number of extent list insertion operations for XFS files.&lt;br /&gt;
* xs_del_exlist (xfs.block_map.del_exlist)&lt;br /&gt;
** Number of extent list deletion operations for XFS files.&lt;br /&gt;
* xs_look_exlist (xfs.block_map.look_exlist)&lt;br /&gt;
** Number of extent list lookup operations for XFS files.&lt;br /&gt;
* xs_cmp_exlist (xfs.block_map.cmp_exlist)&lt;br /&gt;
** Number of extent list comparisons in XFS extent list lookups.&lt;br /&gt;
&lt;br /&gt;
=== bmbt - Block Map Btree ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;bmbt&amp;quot;&amp;gt;&lt;br /&gt;
* xs_bmbt_lookup (xfs.bmap_btree.lookup)&lt;br /&gt;
** Number of block map btree lookup operations on XFS files.&lt;br /&gt;
* xs_bmbt_compare (xfs.bmap_btree.compare)&lt;br /&gt;
** Number of block map btree compare operations in XFS block map lookups.&lt;br /&gt;
* xs_bmbt_insrec (xfs.bmap_btree.insrec)&lt;br /&gt;
** Number of block map btree records inserted for XFS files.&lt;br /&gt;
* xs_bmbt_delrec (xfs.bmap_btree.delrec)&lt;br /&gt;
** Number of block map btree records deleted for XFS files.&lt;br /&gt;
&lt;br /&gt;
=== dir - Directory Operations ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;dir&amp;quot;&amp;gt;&lt;br /&gt;
* xs_dir_lookup (xfs.dir_ops.lookup)&lt;br /&gt;
** This is a count of the number of file name directory lookups in XFS filesystems. It counts only those lookups which miss in the operating system&#039;s directory name lookup cache and must search the real directory structure for the name in question.  The count is incremented once for each level of a pathname search that results in a directory lookup.&lt;br /&gt;
* xs_dir_create (xfs.dir_ops.create)&lt;br /&gt;
** This is the number of times a new directory entry was created in XFS filesystems. Each time that a new file, directory, link, symbolic link, or special file is created in the directory hierarchy the count is incremented.&lt;br /&gt;
* xs_dir_remove (xfs.dir_ops.remove)&lt;br /&gt;
** This is the number of times an existing directory entry was removed in XFS filesystems. Each time that a file, directory, link, symbolic link, or special file is removed from the directory hierarchy the count is incremented.&lt;br /&gt;
* xs_dir_getdents (xfs.dir_ops.getdents)&lt;br /&gt;
** This is the number of times the XFS directory getdents operation was performed. The getdents operation is used by programs to read the contents of directories in a filesystem-independent fashion.  This count corresponds exactly to the number of times the getdents(2) system call was successfully used on an XFS directory.&lt;br /&gt;
&lt;br /&gt;
=== trans - Transactions ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;trans&amp;quot;&amp;gt;&lt;br /&gt;
* xs_trans_sync (xfs.transactions.sync)&lt;br /&gt;
** This is the number of meta-data transactions which waited to be committed to the on-disk log before allowing the process performing the transaction to continue. These transactions are slower and more expensive than asynchronous transactions, because they cause the in-memory log buffers to be forced to disk more often and they wait for the completion of the log buffer writes. Synchronous transactions include file truncations and all directory updates when the file system is mounted with the &#039;wsync&#039; option.&lt;br /&gt;
* xs_trans_async (xfs.transactions.async)&lt;br /&gt;
** This is the number of meta-data transactions which did not wait to be committed to the on-disk log before allowing the process performing the transaction to continue. These transactions are faster and more efficient than synchronous transactions, because they commit their data to the in memory log buffers without forcing those buffers to be written to disk. This allows multiple asynchronous transactions to be committed to disk in a single log buffer write. Most transactions used in XFS file systems are asynchronous.&lt;br /&gt;
* xs_trans_empty (xfs.transactions.empty)&lt;br /&gt;
** This is the number of meta-data transactions which did not actually change anything. These are transactions which were started for some purpose, but in the end it turned out that no change was necessary.&lt;br /&gt;
&lt;br /&gt;
=== ig - Inode Operations ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;ig&amp;quot;&amp;gt;&lt;br /&gt;
* xs_ig_attempts (xfs.inode_ops.ig_attempts)&lt;br /&gt;
** This is the number of times the operating system looked for an XFS inode in the inode cache. Whether the inode was found in the cache or needed to be read in from the disk is not indicated here, but this can be computed from the ig_found and ig_missed counts.&lt;br /&gt;
* xs_ig_found (xfs.inode_ops.ig_found)&lt;br /&gt;
** This is the number of times the operating system looked for an XFS inode in the inode cache and found it. The closer this count is to the ig_attempts count the better the inode cache is performing.&lt;br /&gt;
* xs_ig_frecycle (xfs.inode_ops.ig_frecycle)&lt;br /&gt;
** This is the number of times the operating system looked for an XFS inode in the inode cache and saw that it was there but was unable to use the in memory inode because it was being recycled by another process.&lt;br /&gt;
* xs_ig_missed (xfs.inode_ops.ig_missed)&lt;br /&gt;
** This is the number of times the operating system looked for an XFS inode in the inode cache and the inode was not there. The further this count is from the ig_attempts count the better.&lt;br /&gt;
* xs_ig_dup (xfs.inode_ops.ig_dup)&lt;br /&gt;
** This is the number of times the operating system looked for an XFS inode in the inode cache and found that it was not there but upon attempting to add the inode to the cache found that another process had already inserted it.&lt;br /&gt;
* xs_ig_reclaims (xfs.inode_ops.ig_reclaims)&lt;br /&gt;
** This is the number of times the operating system recycled an XFS inode from the inode cache in order to use the memory for that inode for another purpose. Inodes are recycled in order to keep the inode cache from growing without bound. If the reclaim rate is high it may be beneficial to raise the vnode_free_ratio kernel tunable variable to increase the size of the inode cache.&lt;br /&gt;
* xs_ig_attrchg (xfs.inode_ops.ig_attrchg)&lt;br /&gt;
** This is the number of times the operating system explicitly changed the attributes of an XFS inode. For example, this could be to change the inode&#039;s owner, the inode&#039;s size, or the inode&#039;s timestamps.&lt;br /&gt;
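The cache hit rate mentioned above can be computed directly from these counters. A sketch using the sample &#039;ig&#039; line from the example output above (field order as in the table; the numbers are merely illustrative):&lt;br /&gt;

```python
# ig fields: attempts, found, frecycle, missed, dup, reclaims, attrchg
ig = [17754368, 2019571, 102, 15734797, 0, 15672217, 3962470]
attempts, found, missed = ig[0], ig[1], ig[3]
hit_ratio = found / attempts   # closer to 1.0 means a better-performing cache
print(round(hit_ratio, 3))
```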
&lt;br /&gt;
=== log - Log Operations ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;log&amp;quot;&amp;gt;&lt;br /&gt;
* xs_log_writes (xfs.log.writes)&lt;br /&gt;
** This variable counts the number of log buffer writes going to the physical log partitions of all XFS filesystems. Log data traffic is proportional to the level of meta-data updating. Log buffer writes get generated when they fill up or external syncs occur.&lt;br /&gt;
* xs_log_blocks (xfs.log.blocks)&lt;br /&gt;
** This variable counts (in 512-byte units) the amount of information written to the physical log partitions of all XFS filesystems. Log data traffic is proportional to the level of meta-data updating. The rate at which log data gets written depends on the size of the internal log buffers and disk write speed. Therefore, filesystems with very high meta-data updating may need to stripe the log partition or put the log partition on a separate drive.&lt;br /&gt;
* xs_log_noiclogs (xfs.log.noiclogs)&lt;br /&gt;
** This variable keeps track of the times when a logged transaction cannot get any log buffer space. When this occurs, all of the internal log buffers are busy flushing their data to the physical on-disk log.&lt;br /&gt;
* xs_log_force (xfs.log.force)&lt;br /&gt;
** The number of times the in-core log is forced to disk.  It is equivalent to the number of successful calls to the function xfs_log_force().&lt;br /&gt;
* xs_log_force_sleep (xfs.log.force_sleep)&lt;br /&gt;
** Value exported from the xs_log_force_sleep field of struct xfsstats.&lt;br /&gt;
&lt;br /&gt;
=== push_ail - Tail-Pushing Stats ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;push_ail&amp;quot;&amp;gt;&lt;br /&gt;
* xs_try_logspace (xfs.log_tail.try_logspace)&lt;br /&gt;
** Value from the xs_try_logspace field of struct xfsstats. &lt;br /&gt;
* xs_sleep_logspace (xfs.log_tail.sleep_logspace)&lt;br /&gt;
** Value from the xs_sleep_logspace field of struct xfsstats.&lt;br /&gt;
* xs_push_ail (xfs.log_tail.push_ail.pushes)&lt;br /&gt;
** The number of times the tail of the AIL is moved forward.  It is equivalent to the number of successful calls to the function xfs_trans_push_ail(). &lt;br /&gt;
* xs_push_ail_success (xfs.log_tail.push_ail.success)&lt;br /&gt;
** Value from xs_push_ail_success field of struct xfsstats.&lt;br /&gt;
* xs_push_ail_pushbuf (xfs.log_tail.push_ail.pushbuf)&lt;br /&gt;
** Value from xs_push_ail_pushbuf field of struct xfsstats.&lt;br /&gt;
* xs_push_ail_pinned (xfs.log_tail.push_ail.pinned)&lt;br /&gt;
** Value from xs_push_ail_pinned field of struct xfsstats.&lt;br /&gt;
* xs_push_ail_locked (xfs.log_tail.push_ail.locked)&lt;br /&gt;
** Value from xs_push_ail_locked field of struct xfsstats.&lt;br /&gt;
* xs_push_ail_flushing (xfs.log_tail.push_ail.flushing)&lt;br /&gt;
** Value from xs_push_ail_flushing field of struct xfsstats.&lt;br /&gt;
* xs_push_ail_restarts (xfs.log_tail.push_ail.restarts)&lt;br /&gt;
** Value from xs_push_ail_restarts field of struct xfsstats.&lt;br /&gt;
* xs_push_ail_flush (xfs.log_tail.push_ail.flush)&lt;br /&gt;
** Value from xs_push_ail_flush field of struct xfsstats.&lt;br /&gt;
&lt;br /&gt;
=== xstrat - IoMap Write Convert ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;xstrat&amp;quot;&amp;gt;&lt;br /&gt;
* xs_xstrat_quick (xfs.xstrat.quick)&lt;br /&gt;
** This is the number of buffers flushed out by the XFS flushing daemons which are written to contiguous space on disk. The buffers handled by the XFS daemons are delayed allocation buffers, so this count gives an indication of the success of the XFS daemons in allocating contiguous disk space for the data being flushed to disk.&lt;br /&gt;
* xs_xstrat_split (xfs.xstrat.split)&lt;br /&gt;
** This is the number of buffers flushed out by the XFS flushing daemons which are written to non-contiguous space on disk. The buffers handled by the XFS daemons are delayed allocation buffers, so this count gives an indication of the failure of the XFS daemons in allocating contiguous disk space for the data being flushed to disk. Large values in this counter indicate that the file system has become fragmented.&lt;br /&gt;
&lt;br /&gt;
=== rw - Read/Write Stats ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;rw&amp;quot;&amp;gt;&lt;br /&gt;
* xs_write_calls&lt;br /&gt;
** This is the number of write(2) system calls made to files in XFS file systems.&lt;br /&gt;
* xs_read_calls&lt;br /&gt;
** This is the number of read(2) system calls made to files in XFS file systems.&lt;br /&gt;
&lt;br /&gt;
=== attr - Attribute Operations ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;attr&amp;quot;&amp;gt;&lt;br /&gt;
* xs_attr_get&lt;br /&gt;
** The number of &amp;quot;get&amp;quot; operations performed on extended file attributes within XFS filesystems.  The &amp;quot;get&amp;quot; operation retrieves the value of an extended attribute.&lt;br /&gt;
* xs_attr_set&lt;br /&gt;
** The number of &amp;quot;set&amp;quot; operations performed on extended file attributes within XFS filesystems.  The &amp;quot;set&amp;quot; operation creates and sets the value of an extended attribute.&lt;br /&gt;
* xs_attr_remove&lt;br /&gt;
** The number of &amp;quot;remove&amp;quot; operations performed on extended file attributes within XFS filesystems.  The &amp;quot;remove&amp;quot; operation deletes an extended attribute.&lt;br /&gt;
* xs_attr_list&lt;br /&gt;
** The number of &amp;quot;list&amp;quot; operations performed on extended file attributes within XFS filesystems.  The &amp;quot;list&amp;quot; operation retrieves the set of extended attributes associated with a file.&lt;br /&gt;
&lt;br /&gt;
=== icluster - Inode Clustering ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;icluster&amp;quot;&amp;gt;&lt;br /&gt;
* xs_iflush_count&lt;br /&gt;
** This is the number of calls to xfs_iflush, which is invoked when an inode is flushed (for example by bdflush or tail pushing). xfs_iflush searches for other inodes in the same cluster which are dirty and flushable.&lt;br /&gt;
* xs_icluster_flushcnt&lt;br /&gt;
** Value from xs_icluster_flushcnt field of struct xfsstats.&lt;br /&gt;
* xs_icluster_flushinode&lt;br /&gt;
** This is the number of times that the inode clustering was not able to flush anything but the one inode it was called with.&lt;br /&gt;
&lt;br /&gt;
=== vnodes - Vnode Statistics ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;vnodes&amp;quot;&amp;gt;&lt;br /&gt;
* vn_active&lt;br /&gt;
** Number of vnodes not on free lists.&lt;br /&gt;
* vn_alloc&lt;br /&gt;
** Number of times vn_alloc called.&lt;br /&gt;
* vn_get&lt;br /&gt;
** Number of times vn_get called.&lt;br /&gt;
* vn_hold&lt;br /&gt;
** Number of times vn_hold called.&lt;br /&gt;
* vn_rele&lt;br /&gt;
** Number of times vn_rele called.&lt;br /&gt;
* vn_reclaim&lt;br /&gt;
**  Number of times vn_reclaim called.&lt;br /&gt;
* vn_remove&lt;br /&gt;
** Number of times vn_remove called.&lt;br /&gt;
&lt;br /&gt;
=== buf - Buf Statistics ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;buf&amp;quot;&amp;gt;&lt;br /&gt;
* xb_get&lt;br /&gt;
* xb_create&lt;br /&gt;
* xb_get_locked&lt;br /&gt;
* xb_get_locked_waited&lt;br /&gt;
* xb_busy_locked&lt;br /&gt;
* xb_miss_locked&lt;br /&gt;
* xb_page_retries&lt;br /&gt;
* xb_page_found&lt;br /&gt;
* xb_get_read&lt;br /&gt;
&lt;br /&gt;
=== xpc - eXtended Precision Counters ===&lt;br /&gt;
&amp;lt;span id=&amp;quot;xpc&amp;quot;&amp;gt;&lt;br /&gt;
* xs_xstrat_bytes&lt;br /&gt;
** This is a count of bytes of file data flushed out by the XFS flushing daemons.&lt;br /&gt;
* xs_write_bytes&lt;br /&gt;
** This is a count of bytes written via write(2) system calls to files in XFS file systems. It can be used in conjunction with the write_calls count to calculate the average size of the write operations to files in XFS file systems.&lt;br /&gt;
* xs_read_bytes&lt;br /&gt;
** This is a count of bytes read via read(2) system calls to files in XFS file systems. It can be used in conjunction with the read_calls count to calculate the average size of the read operations to files in XFS file systems.&lt;br /&gt;
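The average operation sizes described above follow from dividing the xpc byte counters by the corresponding rw call counters. A sketch using the sample &#039;rw&#039; and &#039;xpc&#039; lines from the example output above:&lt;br /&gt;

```python
# rw: write calls, read calls; xpc: xstrat bytes, write bytes, read bytes
write_calls, read_calls = 1595677950, 1046884251
write_bytes, read_bytes = 67735504884757, 19760115252482
avg_write = write_bytes / write_calls   # average bytes per write(2) call
avg_read = read_bytes / read_calls      # average bytes per read(2) call
print(int(avg_write), int(avg_read))
```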
&lt;br /&gt;
&lt;br /&gt;
== NOTES ==&lt;br /&gt;
Many of these statistics are monotonically increasing counters and are therefore subject to counter overflow (the final three listed above are 64-bit values; all others are 32-bit values). As such, they are of limited value in this raw form. If you are interested in monitoring throughput (e.g. bytes read or written per second) or other rates of change, you will be better served by investigating the PCP package more thoroughly; it contains a number of performance analysis tools which can help in this regard.&lt;br /&gt;
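A rate is conventionally obtained by sampling a counter twice and dividing the delta by the interval, allowing for a single wraparound of the 32-bit counters. A hedged sketch (the function name and sample values are illustrative, not from any tool):&lt;br /&gt;

```python
# Rate of a monotonically increasing counter, correcting for one wraparound.
def counter_rate(prev, curr, interval_s, width=32):
    if curr >= prev:
        delta = curr - prev
    else:                          # the counter wrapped between samples
        delta = curr + 2 ** width - prev
    return delta / interval_s

print(counter_rate(10, 110, 10.0))          # steady counter, no wrap
print(counter_rate(4294967290, 100, 10.0))  # wrapped 32-bit counter
```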
&lt;br /&gt;
== External links ==&lt;br /&gt;
# Linux kernel sources: [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=blob;f=fs/xfs/linux-2.6/xfs_stats.h;hb=HEAD xfs_stats.h]&lt;br /&gt;
# [http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/xfsmisc/xfs_stats.pl?rev=1.7;content-type=text%2Fplain xfs_stats.pl] - script to parse and display xfs statistics&lt;br /&gt;
# Developers on [irc://irc.freenode.org/xfs irc]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Status/2008-August&amp;diff=2075</id>
		<title>Status/2008-August</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Status/2008-August&amp;diff=2075"/>
		<updated>2010-04-27T01:12:14Z</updated>

		<summary type="html">&lt;p&gt;Christian: double redirect :-\&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[XFS_status_update_for_August_2008]]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Mediawiki/mediawiki/index.php&amp;diff=2074</id>
		<title>Mediawiki/mediawiki/index.php</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Mediawiki/mediawiki/index.php&amp;diff=2074"/>
		<updated>2010-04-27T01:10:38Z</updated>

		<summary type="html">&lt;p&gt;Christian: -&amp;gt; Main_Page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Main_Page]]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Mediawiki/index.php&amp;diff=2073</id>
		<title>Mediawiki/index.php</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Mediawiki/index.php&amp;diff=2073"/>
		<updated>2010-04-27T01:10:01Z</updated>

		<summary type="html">&lt;p&gt;Christian: -&amp;gt; Main_Page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Main_Page]]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Papers_and_Documentation&amp;diff=2072</id>
		<title>XFS Papers and Documentation</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Papers_and_Documentation&amp;diff=2072"/>
		<updated>2010-04-27T01:03:00Z</updated>

		<summary type="html">&lt;p&gt;Christian: Runtime_Stats de-orphaned&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* Someone managed to document &amp;lt;tt&amp;gt;/proc/fs/xfs/stat&amp;lt;/tt&amp;gt;: [[Runtime_Stats|Runtime_Stats]]&lt;br /&gt;
&lt;br /&gt;
The XFS team has been working on a training course, aimed at developers, support staff, and experienced users, that explores the internals and ondisk format of XFS.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS Overview and Internals&#039;&#039; [[http://oss.sgi.com/projects/xfs/training/index.html Index]]&lt;br /&gt;
&lt;br /&gt;
Barry Naujok has documented most of the XFS ondisk format, including examples on how to traverse the structure and diagnose ondisk problems:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS Filesystem Structure&#039;&#039; [[http://oss.sgi.com/projects/xfs/papers/xfs_filesystem_structure.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
The October 2009 issue of the USENIX ;login: magazine published an article about XFS targeted at system administrators:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS: The big storage file system for Linux&#039;&#039; [[http://oss.sgi.com/projects/xfs/papers/hellwig.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the Ottawa Linux Symposium (July 2006), Dave Chinner presented a paper on filesystem scalability in Linux 2.6 kernels:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;High Bandwidth Filesystems on Large Systems&#039;&#039; (July 2006) [[http://oss.sgi.com/projects/xfs/papers/ols2006/ols-2006-paper.pdf paper]] [[http://oss.sgi.com/projects/xfs/papers/ols2006/ols-2006-presentation.pdf presentation]]&lt;br /&gt;
&lt;br /&gt;
At linux.conf.au 2008 Dave Chinner gave a presentation about xfs_repair that he co-authored with Barry Naujok:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Fixing XFS Filesystems Faster&#039;&#039; [[http://mirror.linux.org.au/pub/linux.conf.au/2008/slides/135-fixing_xfs_faster.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
In July 2006, SGI storage marketing updated the XFS datasheet:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Open Source XFS for Linux&#039;&#039; [[http://oss.sgi.com/projects/xfs/datasheet.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At UKUUG 2003, Christoph Hellwig presented a talk on XFS:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS for Linux&#039;&#039; (July 2003) [[http://oss.sgi.com/projects/xfs/papers/ukuug2003.pdf pdf]] [[http://verein.lst.de/~hch/talks/ukuug2003/ html]]&lt;br /&gt;
&lt;br /&gt;
Originally published in Proceedings of the FREENIX Track: 2002 Usenix Annual Technical Conference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Filesystem Performance and Scalability in Linux 2.4.17&#039;&#039; (June 2002) [[http://oss.sgi.com/projects/xfs/papers/filesystem-perf-tm.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the Ottawa Linux Symposium, an updated presentation on porting XFS to Linux was given:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Porting XFS to Linux&#039;&#039; (July 2000) [[http://oss.sgi.com/projects/xfs/papers/ols2000/ols-xfs.htm html]]&lt;br /&gt;
&lt;br /&gt;
At the Atlanta Linux Showcase, SGI presented the following paper on the port of XFS to Linux:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Porting the SGI XFS File System to Linux&#039;&#039; (October 1999) [[http://oss.sgi.com/projects/xfs/papers/als/als.ps ps]] [[http://oss.sgi.com/projects/xfs/papers/als/als.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the 6th Linux Kongress &amp;amp; the Linux Storage Management Workshop (LSMW) in Germany in September 1999, SGI had a few presentations including the following:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;SGI&#039;s port of XFS to Linux&#039;&#039; (September 1999) [[http://oss.sgi.com/projects/xfs/papers/linux_kongress/index.htm html]]&lt;br /&gt;
* &#039;&#039;Overview of DMF&#039;&#039; (September 1999) [[http://oss.sgi.com/projects/xfs/papers/DMF-over/index.htm html]]&lt;br /&gt;
&lt;br /&gt;
At the LinuxWorld Conference &amp;amp; Expo in August 1999, SGI published:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;An Open Source XFS data sheet&#039;&#039; (August 1999) [[http://oss.sgi.com/projects/xfs/papers/xfs_GPL.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
From the 1996 USENIX conference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;An XFS white paper&#039;&#039; [[http://oss.sgi.com/projects/xfs/papers/xfs_usenix/index.html html]]&lt;br /&gt;
&lt;br /&gt;
=== Other historical articles, press-releases, etc ===&lt;br /&gt;
&lt;br /&gt;
* IBM&#039;s &#039;&#039;Advanced Filesystem Implementor&#039;s Guide&#039;&#039; has a chapter &#039;&#039;Introducing XFS&#039;&#039; [[http://www-106.ibm.com/developerworks/library/l-fs9.html html]]&lt;br /&gt;
&lt;br /&gt;
* An editorial titled &#039;&#039;Tired of fscking? Try a journaling filesystem!&#039;&#039;, Freshmeat (February 2001) [[http://freshmeat.net/articles/view/212/ html]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Who gives a fsck about filesystems&#039;&#039; provides an overview of the Linux 2.4 filesystems [[http://www.linuxuser.co.uk/articles/issue6/lu6-All_you_need_to_know_about-Filesystems.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Journal File Systems&#039;&#039; in issue 55 of &#039;&#039;Linux Gazette&#039;&#039; provides a comparison of journaled filesystems.&lt;br /&gt;
&lt;br /&gt;
* The original XFS beta release announcement was published in &#039;&#039;Linux Today&#039;&#039; (September 2000) [[http://linuxtoday.com/news_story.php3?ltsn=2000-09-26-017-04-OS-SW html]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS: It&#039;s worth the wait&#039;&#039; was published on &#039;&#039;EarthWeb&#039;&#039; (July 2000) [[http://networking.earthweb.com/netos/oslin/article/0,,12284_623661,00.html html]]&lt;br /&gt;
&lt;br /&gt;
* An &#039;&#039;IRIX-XFS data sheet&#039;&#039; (July 1999) [[http://oss.sgi.com/projects/xfs/papers/IRIX_xfs_data_sheet.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;Getting Started with XFS&#039;&#039; book (1994) [[http://oss.sgi.com/projects/xfs/papers/getting_started_with_xfs.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* Original &#039;&#039;XFS design documents&#039;&#039; (1993) ([http://oss.sgi.com/projects/xfs/design_docs/xfsdocs93_ps/ ps], [http://oss.sgi.com/projects/xfs/design_docs/xfsdocs93_pdf/ pdf])&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Main_Page&amp;diff=2071</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Main_Page&amp;diff=2071"/>
		<updated>2010-04-27T00:53:20Z</updated>

		<summary type="html">&lt;p&gt;Christian: omit the (obvious) Welcome header (and unnecessary scrolling)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- Welcome &lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#C5C5FF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
Welcome to XFS.org. This site is set up to help with the XFS file system.&amp;lt;/div&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;vertical-align:top&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Information --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#E2EAFF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
== Information about XFS ==&lt;br /&gt;
&lt;br /&gt;
* [http://oss.sgi.com/projects/xfs Main sgi xfs website]&lt;br /&gt;
* [[XFS FAQ]]&lt;br /&gt;
* [[XFS Status Updates]]&lt;br /&gt;
* [[XFS Papers and Documentation]]&lt;br /&gt;
* [[Linux Distributions shipping XFS]]&lt;br /&gt;
* [[XFS Rpm for RedHat]]&lt;br /&gt;
* [[XFS Companies]]&lt;br /&gt;
* [[OLD News]]&lt;br /&gt;
* [http://oss.sgi.com/projects/xfs/training/index.html Link to XFS training material]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/XFS Wikipedia xfs page, good detailed information.]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Consulting --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#fffff0; align:right; &amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Professional XFS Consulting Services == &lt;br /&gt;
&lt;br /&gt;
[[Consulting Resources]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;50%&amp;quot; style=&amp;quot;vertical-align:top&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Developers --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#F8F8FF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
== XFS Developer Resources ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS email list and archives]]&lt;br /&gt;
* [http://oss.sgi.com/projects/xfs Main sgi xfs website]&lt;br /&gt;
* [http://oss.sgi.com/bugzilla/buglist.cgi?product=XFS&amp;amp;bug_status=NEW&amp;amp;bug_status=ASSIGNED&amp;amp;bug_status=REOPENED Bugzilla @ oss.sgi.com]&lt;br /&gt;
* [http://bugzilla.kernel.org/buglist.cgi?product=File+System&amp;amp;component=XFS&amp;amp;bug_status=NEW&amp;amp;bug_status=ASSIGNED&amp;amp;bug_status=REOPENED Bugzilla @ kernel.org]&lt;br /&gt;
* [[Getting the latest source code]]&lt;br /&gt;
* [[Unfinished work]]&lt;br /&gt;
* [[Shrinking Support]]&lt;br /&gt;
* [[Ideas for XFS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{#meta: | u+4/+rib+YG96TifD0SN88xS84YSDm2cl61IU7ZIk9g= | verify-v1 }}&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_email_list_and_archives&amp;diff=2068</id>
		<title>XFS email list and archives</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_email_list_and_archives&amp;diff=2068"/>
		<updated>2010-04-01T06:49:34Z</updated>

		<summary type="html">&lt;p&gt;Christian: s/www/old/&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== XFS email list ==&lt;br /&gt;
Patches, comments, requests and questions should go to [mailto:xfs@oss.sgi.com xfs@oss.sgi.com]&lt;br /&gt;
&lt;br /&gt;
The list archives on oss.sgi.com are available [http://oss.sgi.com/archives/xfs here] and [http://oss.sgi.com/pipermail/xfs here] (pipermail).&lt;br /&gt;
&lt;br /&gt;
Other archives include:&lt;br /&gt;
&lt;br /&gt;
* [http://old.nabble.com/Xfs-f1029.html Nabble]&lt;br /&gt;
* [http://www.opensubscriber.com/messages/xfs@oss.sgi.com/topic.html OpenSubscriber]&lt;br /&gt;
* [http://archives.free.net.ph/list/linux-xfs.html archives.free.net.ph]&lt;br /&gt;
* [http://news.gmane.org/group/gmane.comp.file-systems.xfs.general Gmane]&lt;br /&gt;
&lt;br /&gt;
== Subscribing to the list ==&lt;br /&gt;
&lt;br /&gt;
The easiest method is to use the [http://oss.sgi.com/mailman/listinfo/xfs mailman web interface].&lt;br /&gt;
&lt;br /&gt;
Subscribing is also possible by sending an email with the body:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;subscribe&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
to [mailto:xfs-request@oss.sgi.com?body=subscribe xfs-request@oss.sgi.com]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Main_Page&amp;diff=2064</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Main_Page&amp;diff=2064"/>
		<updated>2010-02-08T04:31:32Z</updated>

		<summary type="html">&lt;p&gt;Christian: bugzilla urls are now linking to actual xfs bugs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- Welcome --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#C5C5FF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
Welcome to XFS.org. This site is set up to help with the XFS file system.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;vertical-align:top&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Information --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#E2EAFF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
== Information about XFS ==&lt;br /&gt;
&lt;br /&gt;
* [http://oss.sgi.com/projects/xfs Main sgi xfs website]&lt;br /&gt;
* [[XFS FAQ]]&lt;br /&gt;
* [[XFS Status Updates]]&lt;br /&gt;
* [[XFS Papers and Documentation]]&lt;br /&gt;
* [[Linux Distributions shipping XFS]]&lt;br /&gt;
* [[XFS Rpm for RedHat]]&lt;br /&gt;
* [[XFS Companies]]&lt;br /&gt;
* [[OLD News]]&lt;br /&gt;
* [http://oss.sgi.com/projects/xfs/training/index.html Link to XFS training material]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/XFS Wikipedia xfs page, good detailed information.]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Consulting --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#fffff0; align:right; &amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Professional XFS Consulting Services == &lt;br /&gt;
&lt;br /&gt;
[[Consulting Resources]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;50%&amp;quot; style=&amp;quot;vertical-align:top&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Developers --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#F8F8FF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
== XFS Developer Resources ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS email list and archives]]&lt;br /&gt;
* [http://oss.sgi.com/projects/xfs Main sgi xfs website]&lt;br /&gt;
* [http://oss.sgi.com/bugzilla/buglist.cgi?product=XFS&amp;amp;bug_status=NEW&amp;amp;bug_status=ASSIGNED&amp;amp;bug_status=REOPENED Bugzilla @ oss.sgi.com]&lt;br /&gt;
* [http://bugzilla.kernel.org/buglist.cgi?product=File+System&amp;amp;component=XFS&amp;amp;bug_status=NEW&amp;amp;bug_status=ASSIGNED&amp;amp;bug_status=REOPENED Bugzilla @ kernel.org]&lt;br /&gt;
* [[Getting the latest source code]]&lt;br /&gt;
* [[Unfinished work]]&lt;br /&gt;
* [[Shrinking Support]]&lt;br /&gt;
* [[Ideas for XFS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{#meta: | u+4/+rib+YG96TifD0SN88xS84YSDm2cl61IU7ZIk9g= | verify-v1 }}&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Getting_the_latest_source_code&amp;diff=2062</id>
		<title>Getting the latest source code</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Getting_the_latest_source_code&amp;diff=2062"/>
		<updated>2010-01-02T21:50:49Z</updated>

		<summary type="html">&lt;p&gt;Christian: dependencies added&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt; XFS Released/Stable source &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Mainline kernels&#039;&#039;&#039;&lt;br /&gt;
:XFS has been maintained in the official Linux kernel [http://www.kernel.org/ kernel trees] starting with [http://lkml.org/lkml/2003/12/8/35 Linux 2.4] and is frequently updated with the latest stable fixes and features from the SGI XFS development team.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Vendor kernels&#039;&#039;&#039;&lt;br /&gt;
:All modern Linux distributions include support for XFS. SGI actively works with [http://www.suse.com/  SUSE] to provide a supported version of XFS in that distribution.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;XFS userspace&#039;&#039;&#039;&lt;br /&gt;
:SGI also provides [ftp://oss.sgi.com/projects/xfs source code tarballs] of the XFS userspace tools. These tarballs form the basis of the xfsprogs packages found in Linux distributions.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt; Development and bleeding edge Development &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS git howto]]&lt;br /&gt;
&lt;br /&gt;
Note: there are also [http://git.kernel.org/ XFS git repositories on kernel.org] for external (i.e. non-SGI) contributors. SGI periodically pulls those into [http://oss.sgi.com/cgi-bin/gitweb.cgi oss.sgi.com]. This also means that one or the other may be a bit more current.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Current XFS kernel source ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=summary xfs]&lt;br /&gt;
 $ git clone git://oss.sgi.com/xfs/xfs&lt;br /&gt;
&lt;br /&gt;
=== XFS user space tools ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfsprogs.git;a=summary xfsprogs]&lt;br /&gt;
 $ git clone git://oss.sgi.com/xfs/cmds/xfsprogs&lt;br /&gt;
&lt;br /&gt;
A few packages are needed to compile &amp;lt;tt&amp;gt;xfsprogs&amp;lt;/tt&amp;gt;, depending on your package manager:&lt;br /&gt;
  * &amp;lt;tt&amp;gt;apt-get install libtool automake gettext uuid-dev&amp;lt;/tt&amp;gt; &lt;br /&gt;
  * &amp;lt;tt&amp;gt;yum install libtool automake gettext uuid-devel&amp;lt;/tt&amp;gt;&lt;br /&gt;
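With those packages installed, a typical build of &amp;lt;tt&amp;gt;xfsprogs&amp;lt;/tt&amp;gt; from a fresh clone looks roughly like this (a sketch - the exact targets can vary between releases, so check the INSTALL file in the tree):&lt;br /&gt;

```shell
# Rough sketch of building xfsprogs from a fresh git clone; steps may
# differ between releases - check the INSTALL file shipped in the tree.
cd xfsprogs
./configure               # some trees run this automatically from 'make'
make
sudo make install         # installs the tools and man pages
```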
&lt;br /&gt;
=== XFS dump ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfsdump.git;a=summary xfsdump]&lt;br /&gt;
 $ git clone git://oss.sgi.com/xfs/cmds/xfsdump&lt;br /&gt;
&lt;br /&gt;
=== XFS tests ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfstests.git;a=summary xfstests]&lt;br /&gt;
 $ git clone git://oss.sgi.com/xfs/cmds/xfstests&lt;br /&gt;
&lt;br /&gt;
=== DMAPI user space tools ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/dmapi.git;a=summary dmapi]&lt;br /&gt;
 $ git clone git://oss.sgi.com/xfs/cmds/dmapi&lt;br /&gt;
&lt;br /&gt;
=== git-cvsimport generated trees ===&lt;br /&gt;
&lt;br /&gt;
The Git trees are automatically mirrored copies of the CVS trees, generated with [http://www.kernel.org/pub/software/scm/git/docs/git-cvsimport.html git-cvsimport].&lt;br /&gt;
Since git-cvsimport uses [http://www.cobite.com/cvsps/ cvsps] to recreate the atomic commits (ptools &amp;quot;mods&amp;quot;), it is easier to see the entire change that was committed using git.&lt;br /&gt;
&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=archive/xfs-import.git;a=summary linux-2.6-xfs-from-cvs]&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=archive/xfs-cmds.git;a=summary xfs-cmds]&lt;br /&gt;
&lt;br /&gt;
Before building in the &amp;lt;tt&amp;gt;xfsdump&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;dmapi&amp;lt;/tt&amp;gt; directories (after building &amp;lt;tt&amp;gt;xfsprogs&amp;lt;/tt&amp;gt;), you will need to run:&lt;br /&gt;
  # cd xfsprogs&lt;br /&gt;
  # make install-dev&lt;br /&gt;
to create &amp;lt;tt&amp;gt;/usr/include/xfs&amp;lt;/tt&amp;gt; and install appropriate files there.&lt;br /&gt;
&lt;br /&gt;
Before building in the xfstests directory, you will need to run:&lt;br /&gt;
  # cd xfsprogs&lt;br /&gt;
  # make install-qa&lt;br /&gt;
to install a somewhat larger set of files in &amp;lt;tt&amp;gt;/usr/include/xfs&amp;lt;/tt&amp;gt;.&lt;br /&gt;
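Putting the steps above together, a full build of the userspace trees might look like this (a sketch; the trees are assumed to be cloned side by side):&lt;br /&gt;

```shell
# Assumed layout: xfsprogs, xfsdump and xfstests cloned as sibling
# directories. xfsprogs must be built and its headers installed first.
cd xfsprogs
make
sudo make install-dev     # headers for xfsdump and dmapi in /usr/include/xfs
sudo make install-qa      # larger header set needed by xfstests
cd ../xfsdump
make
cd ../xfstests
make
```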
&lt;br /&gt;
== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt;XFS cvs trees &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
The cvs trees were created using a script that converted SGI&#039;s internal&lt;br /&gt;
ptools repository to a cvs repository, so the cvs trees were considered read-only.&lt;br /&gt;
&lt;br /&gt;
At this point all new development is managed in the git trees, so the cvs trees&lt;br /&gt;
are no longer active in terms of current development and should only be used&lt;br /&gt;
for reference.&lt;br /&gt;
&lt;br /&gt;
* [[XFS CVS howto]]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2051</id>
		<title>XFS FAQ</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2051"/>
		<updated>2009-10-14T17:23:23Z</updated>

		<summary type="html">&lt;p&gt;Christian: acls, xattrs...even rsync can do it :)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Info from: [http://oss.sgi.com/projects/xfs/faq.html main XFS faq at SGI]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Many thanks to earlier maintainers of this document - Thomas Graichen and Seth Mos.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about XFS? ==&lt;br /&gt;
&lt;br /&gt;
The SGI XFS project page http://oss.sgi.com/projects/xfs/ is the definitive reference. It contains pointers to whitepapers, books, articles, etc.&lt;br /&gt;
&lt;br /&gt;
You could also join the [[XFS_email_list_and_archives|XFS mailing list]] or the &#039;&#039;&#039;&amp;lt;nowiki&amp;gt;#xfs&amp;lt;/nowiki&amp;gt;&#039;&#039;&#039; IRC channel on &#039;&#039;irc.freenode.net&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about ACLs? ==&lt;br /&gt;
&lt;br /&gt;
Andreas Gruenbacher maintains the Extended Attribute and POSIX ACL documentation for Linux at http://acl.bestbits.at/&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;acl(5)&#039;&#039;&#039; manual page is also quite extensive.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find information about the internals of XFS? ==&lt;br /&gt;
&lt;br /&gt;
An [training/index.html SGI XFS Training course] aimed at developers, triage and support staff, and serious users has been in development. Parts of the course are clearly still incomplete, but there is enough content to be useful to a broad range of users.&lt;br /&gt;
&lt;br /&gt;
Barry Naujok has documented the [papers/xfs_filesystem_structure.doc XFS ondisk format] which is a very useful reference.&lt;br /&gt;
&lt;br /&gt;
== Q: What partition type should I use for XFS on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Linux native filesystem (83).&lt;br /&gt;
&lt;br /&gt;
== Q: What mount options does XFS have? ==&lt;br /&gt;
&lt;br /&gt;
There are a number of mount options influencing XFS filesystems - refer to the &#039;&#039;&#039;mount(8)&#039;&#039;&#039; manual page or the documentation in the kernel source tree itself ([http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs.txt;hb=HEAD Documentation/filesystems/xfs.txt])&lt;br /&gt;
&lt;br /&gt;
== Q: Is there any relation between the XFS utilities and the kernel version? ==&lt;br /&gt;
&lt;br /&gt;
No, there is no relation. Newer utilities mainly contain fixes and checks that previous versions might not have. New features are also added in a backward-compatible way - if they are enabled via mkfs, an incapable (old) kernel will recognize that it does not understand the new feature and refuse to mount the filesystem.&lt;br /&gt;
&lt;br /&gt;
== Q: Does it run on platforms other than i386? ==&lt;br /&gt;
&lt;br /&gt;
XFS runs on all of the platforms that Linux supports. It is most heavily tested on the more common platforms, especially the i386 family. It&#039;s also well tested on the IA64 platform, since that&#039;s the platform SGI&#039;s Linux products use.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Do quotas work on XFS? ==&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
To use quotas with XFS, you need to enable XFS quota support when you configure your kernel. You also need to specify quota support when mounting. You can get the Linux quota utilities at their sourceforge website [http://sourceforge.net/projects/linuxquota/  http://sourceforge.net/projects/linuxquota/] or use &#039;&#039;&#039;xfs_quota(8)&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: What&#039;s project quota? ==&lt;br /&gt;
&lt;br /&gt;
Project quota is a quota mechanism in XFS that can be used to implement a form of directory tree quota, where a specified directory and all of the files and subdirectories below it (i.e. a tree) can be restricted to using a subset of the available space in the filesystem.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Can group quota and project quota be used at the same time? ==&lt;br /&gt;
&lt;br /&gt;
No, project quota and group quota cannot be used at the same time. User quota and project quota, on the other hand, can be used simultaneously.&lt;br /&gt;
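As an illustration, a project quota can be set up with &#039;&#039;&#039;xfs_quota(8)&#039;&#039;&#039; roughly as follows (the device name, directory and project ID are examples):&lt;br /&gt;

```shell
# Mount with project quota accounting enabled (example device).
mount -o prjquota /dev/sdb1 /mnt
# Tag the directory tree with project ID 42, then cap it at 10 GiB.
xfs_quota -x -c 'project -s -p /mnt/data 42' /mnt
xfs_quota -x -c 'limit -p bhard=10g 42' /mnt
# Show per-project usage and limits.
xfs_quota -x -c 'report -p' /mnt
```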
&lt;br /&gt;
== Q: Quota: Does unmounting a prjquota (project quota) enabled filesystem and mounting it again with grpquota (group quota) remove the prjquota limits previously set on the filesystem (and vice versa)? ==&lt;br /&gt;
&lt;br /&gt;
To be answered.&lt;br /&gt;
&lt;br /&gt;
== Q: Are there any dump/restore tools for XFS? ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039; and &#039;&#039;&#039;xfsrestore(8)&#039;&#039;&#039; are fully supported. The tape format is the same as on IRIX, so tapes are interchangeable between operating systems.&lt;br /&gt;
&lt;br /&gt;
== Q: Does LILO work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
This depends on where you install LILO.&lt;br /&gt;
&lt;br /&gt;
Yes, for MBR (Master Boot Record) installations.&lt;br /&gt;
&lt;br /&gt;
No, for root partition installations because the XFS superblock is written at block zero, where LILO would be installed. This is to maintain compatibility with the IRIX on-disk format, and will not be changed.&lt;br /&gt;
&lt;br /&gt;
== Q: Does GRUB work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
There is native XFS filesystem support for GRUB starting with version 0.91 and onward. Unfortunately, GRUB used to make incorrect assumptions about being able to read a block device image while a filesystem is mounted and actively being written to, which could cause intermittent problems when using XFS. This has reportedly since been fixed, and the 0.97 version (at least) of GRUB is apparently stable.&lt;br /&gt;
&lt;br /&gt;
== Q: Can XFS be used for a root filesystem? ==&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
== Q: Will I be able to use my IRIX XFS filesystems on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Yes. The on-disk format of XFS is the same on IRIX and Linux. Obviously, you should back up your data before trying to move it between systems. Filesystems must be &amp;quot;clean&amp;quot; when moved (i.e. unmounted). If you plan to use IRIX filesystems on Linux, keep the following points in mind: the kernel needs to have SGI partition support enabled; there is no XLV support in Linux, so you cannot read IRIX filesystems which use the XLV volume manager; also, not all blocksizes available on IRIX are available on Linux (only blocksizes less than or equal to the pagesize of the architecture are possible for now: 4k for i386, ppc, ...; 8k for alpha, sparc, ...). Make sure that the directory format is version 2 on the IRIX filesystems (this is the default since IRIX 6.5.5); Linux can only read v2 directories.&lt;br /&gt;
&lt;br /&gt;
== Q: Is there a way to make an XFS filesystem larger or smaller? ==&lt;br /&gt;
&lt;br /&gt;
You can &#039;&#039;NOT&#039;&#039; make an XFS partition smaller online. The only way to shrink is to do a complete dump, mkfs and restore.&lt;br /&gt;
&lt;br /&gt;
An XFS filesystem may be enlarged by using &#039;&#039;&#039;xfs_growfs(8)&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
If using partitions, you need free space after the partition in question to do so. Remove the partition, recreate it larger with the &#039;&#039;exact same&#039;&#039; starting point, then run &#039;&#039;&#039;xfs_growfs&#039;&#039;&#039; to enlarge the filesystem. Note - editing partition tables is a dangerous pastime, so back up your filesystem before doing so.&lt;br /&gt;
&lt;br /&gt;
Using XFS filesystems on top of a volume manager makes this a lot easier.&lt;br /&gt;
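For example, with LVM the whole operation reduces to two commands (the volume and mount point names are examples; note that &#039;&#039;&#039;xfs_growfs&#039;&#039;&#039; operates on a mounted filesystem):&lt;br /&gt;

```shell
# Grow the logical volume by 10 GiB, then grow the filesystem into it.
lvextend -L +10G /dev/vg0/data
xfs_growfs /mnt/data    # with no size argument, grows to fill the device
```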
&lt;br /&gt;
== Q: What information should I include when reporting a problem? ==&lt;br /&gt;
&lt;br /&gt;
Things to include are the version of XFS you are using (if it is a CVS version, the checkout date) and the version of the kernel. If you have problems with userland packages, please report the version of the package you are using.&lt;br /&gt;
&lt;br /&gt;
If the problem relates to a particular filesystem, the output from the &#039;&#039;&#039;xfs_info(8)&#039;&#039;&#039; command and any &#039;&#039;&#039;mount(8)&#039;&#039;&#039; options in use will also be useful to the developers.&lt;br /&gt;
&lt;br /&gt;
If you experience an oops, please run it through &#039;&#039;&#039;ksymoops&#039;&#039;&#039; so that it can be interpreted.&lt;br /&gt;
&lt;br /&gt;
If you have a filesystem that cannot be repaired, make sure you have xfsprogs 2.9.0 or later and run &#039;&#039;&#039;xfs_metadump(8)&#039;&#039;&#039; to capture the metadata (which obfuscates filenames and attributes to protect your privacy) and make the dump available for someone to analyse.&lt;br /&gt;
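A metadata dump for such a report can be captured roughly like this (the device name is an example):&lt;br /&gt;

```shell
umount /dev/sdb1
xfs_metadump /dev/sdb1 /tmp/sdb1.metadump   # metadata only, names obfuscated
bzip2 /tmp/sdb1.metadump                    # compress before making it available
```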
&lt;br /&gt;
== Q: Mounting an XFS filesystem does not work - what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
If mount prints an error message something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
     mount: /dev/hda5 has wrong major or minor number&lt;br /&gt;
&lt;br /&gt;
you either do not have XFS compiled into the kernel (or you forgot to load the modules) or you did not use the &amp;quot;-t xfs&amp;quot; option on mount or the &amp;quot;xfs&amp;quot; option in &amp;lt;tt&amp;gt;/etc/fstab&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you get something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 mount: wrong fs type, bad option, bad superblock on /dev/sda1,&lt;br /&gt;
        or too many mounted file systems&lt;br /&gt;
&lt;br /&gt;
Refer to your system log file (&amp;lt;tt&amp;gt;/var/log/messages&amp;lt;/tt&amp;gt;) for a detailed diagnostic message from the kernel.&lt;br /&gt;
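A quick checklist for such mount failures (adjust the device name; these are the usual commands):&lt;br /&gt;

```shell
grep xfs /proc/filesystems    # is XFS support present in the running kernel?
modprobe xfs                  # load the module if XFS is built as one
mount -t xfs /dev/hda5 /mnt   # mount with an explicit filesystem type
dmesg | tail                  # the kernel prints the detailed diagnostic here
```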
&lt;br /&gt;
== Q: Does the filesystem have an undelete capability? ==&lt;br /&gt;
&lt;br /&gt;
There is no undelete in XFS. However, at least some XFS driver implementations do not wipe file information nodes completely, so there is a chance to recover files with specialized commercial software like [http://www.ufsexplorer.com/rdr_xfs.php Raise Data Recovery for XFS].&lt;br /&gt;
Such XFS driver implementations also do not re-use directory entries immediately, so there is a chance to get back recently deleted files even with their real names.&lt;br /&gt;
&lt;br /&gt;
This applies to most recent Linux distributions, as well as to most popular NAS boxes that use embedded Linux and the XFS file system.&lt;br /&gt;
&lt;br /&gt;
In any case, the best policy is to always keep backups.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I back up an XFS filesystem and ACLs? ==&lt;br /&gt;
&lt;br /&gt;
You can back up an XFS filesystem with utilities like &#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039;, or with standard &#039;&#039;&#039;tar(1)&#039;&#039;&#039; for regular files. If you want to back up ACLs and EAs, you will need to use &#039;&#039;&#039;xfsdump&#039;&#039;&#039;, [http://www.bacula.org/en/dev-manual/Current_State_Bacula.html Bacula] (&amp;gt; version 3.1.4) or [http://rsync.samba.org/ rsync] (&amp;gt;= version 3.0.0). &#039;&#039;&#039;xfsdump&#039;&#039;&#039; can also be integrated with [http://www.amanda.org/ amanda(8)].&lt;br /&gt;
&lt;br /&gt;
== Q: I see applications returning error 990 or &amp;quot;Structure needs cleaning&amp;quot;, what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
The error 990 stands for [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=blob;f=fs/xfs/linux-2.6/xfs_linux.h#l145 EFSCORRUPTED] which usually means XFS has detected a filesystem metadata problem and has shut the filesystem down to prevent further damage. Also, since about June 2006, we [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=commit;h=da2f4d679c8070ba5b6a920281e495917b293aa0 converted from EFSCORRUPTED/990 over to using EUCLEAN], &amp;quot;Structure needs cleaning.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The cause can be pretty much anything, unfortunately - filesystem, virtual memory manager, volume manager, device driver, or hardware.&lt;br /&gt;
&lt;br /&gt;
There should be a detailed console message when this initially happens. The messages have important information giving hints to developers as to the earliest point that a problem was detected. It is there to protect your data.&lt;br /&gt;
&lt;br /&gt;
You can use xfs_check and xfs_repair to remedy the problem (with the file system unmounted).&lt;br /&gt;
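A cautious repair session might look like this (the device name is an example):&lt;br /&gt;

```shell
umount /dev/sdb1              # the filesystem must not be mounted
xfs_check /dev/sdb1           # read-only consistency check
xfs_repair -n /dev/sdb1       # dry run: report problems, change nothing
xfs_repair /dev/sdb1          # actually repair
```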
&lt;br /&gt;
== Q: Why do I see binary NULLS in some files after recovery when I unplugged the power? ==&lt;br /&gt;
&lt;br /&gt;
Update: This issue has been addressed with a CVS fix on the 29th March 2007 and merged into mainline on 8th May 2007 for 2.6.22-rc1.&lt;br /&gt;
&lt;br /&gt;
XFS journals metadata updates, not data updates. After a crash you are supposed to get a consistent filesystem which looks like the state sometime shortly before the crash, NOT what the in memory image looked like the instant before the crash.&lt;br /&gt;
&lt;br /&gt;
Since XFS does not write data out immediately unless you tell it to with fsync, an O_SYNC or O_DIRECT open (the same is true of other filesystems), you are looking at an inode which was flushed out, but whose data was not. Typically you&#039;ll find that the inode is not taking any space since all it has is a size but no extents allocated (try examining the file with the &#039;&#039;&#039;xfs_bmap(8)&#039;&#039;&#039; command).&lt;br /&gt;
&lt;br /&gt;
== Q: What is the problem with the write cache on journaled filesystems? ==&lt;br /&gt;
&lt;br /&gt;
Many drives use a write back cache in order to speed up the performance of writes.  However, there are conditions such as power failure when the write cache memory is never flushed to the actual disk.  Further, the drive can de-stage data from the write cache to the platters in any order that it chooses.  This causes problems for XFS and journaled filesystems in general because they rely on knowing when a write has completed to the disk. They need to know that the log information has made it to disk before allowing metadata to go to disk.  When the metadata makes it to disk then the transaction can effectively be deleted from the log resulting in movement of the tail of the log and thus freeing up some log space. So if the writes never make it to the physical disk, then the ordering is violated and the log and metadata can be lost, resulting in filesystem corruption.&lt;br /&gt;
&lt;br /&gt;
With hard disk cache sizes of up to 32MB currently (Jan 2009), that can be a lot of valuable information. In a RAID with 8 such disks this adds up to 256MB, and the chance of having filesystem metadata in the cache is so high that a power outage carries a very high risk of major data loss.&lt;br /&gt;
&lt;br /&gt;
With a single hard disk and barriers turned on (on=default), the drive write cache is flushed before and after a barrier is issued. A power failure &amp;quot;only&amp;quot; loses data in the cache, but no essential ordering is violated, and corruption will not occur.&lt;br /&gt;
&lt;br /&gt;
With a RAID controller whose cache is battery-backed and in write-back mode, you should turn off barriers - they are unnecessary in this case, and if the controller honors the cache flushes, they will be harmful to performance. But then you *must* disable the individual hard disk write caches in order to keep the filesystem intact after a power failure. The method for doing this is different for each RAID controller. See the section about RAID controllers below.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I tell if I have the disk write cache enabled? ==&lt;br /&gt;
&lt;br /&gt;
For SCSI/SATA:&lt;br /&gt;
&lt;br /&gt;
* Look in dmesg(8) output for a driver line, such as:&amp;lt;br /&amp;gt; &amp;quot;SCSI device sda: drive cache: write back&amp;quot;&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# sginfo -c /dev/sda | grep -i &#039;write cache&#039; &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For PATA/SATA (although for SATA this only works on a recent kernel with ATA command passthrough):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -I /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; and look under &amp;quot;Enabled Supported&amp;quot; for &amp;quot;Write cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
== Q: How can I address the problem with the disk write cache? ==&lt;br /&gt;
&lt;br /&gt;
=== Disabling the disk write back cache. ===&lt;br /&gt;
&lt;br /&gt;
For SATA/PATA(IDE): (although for SATA this only works on a recent kernel with ATA command passthrough):&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -W0 /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # hdparm -W0 /dev/hda&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# blktool /dev/sda wcache off&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # blktool /dev/hda wcache off&lt;br /&gt;
&lt;br /&gt;
For SCSI:&lt;br /&gt;
&lt;br /&gt;
* Using sginfo(8) which is a little tedious&amp;lt;br /&amp;gt; It takes 3 steps. For example:&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -c /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives a list of attribute names and values&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cX /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives an array of cache values which you must match up with from step 1, e.g.&amp;lt;br /&amp;gt; 0 0 0 1 0 1 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cXR /dev/sda 0 0 0 1 0 0 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; allows you to reset the value of the cache attributes.&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
For a SCSI disk, this disabling is persistent. For a SATA/PATA disk, however, it needs to be redone after every reset, since the drive falls back to its default of the write cache enabled. A reset can happen after a reboot or on error recovery of the drive, which makes it rather difficult to guarantee that the write cache stays disabled.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Using an external log. ===&lt;br /&gt;
&lt;br /&gt;
Some people have considered the idea of using an external log on a separate drive with the write cache disabled, with the rest of the file system on another disk with the write cache enabled. However, that will &#039;&#039;&#039;not&#039;&#039;&#039; solve the problem. For example, the tail of the log is moved when we are notified that a metadata write has completed to disk, and we cannot guarantee that if the metadata is on a drive with the write cache enabled.&lt;br /&gt;
&lt;br /&gt;
In fact, using an external log will disable XFS&#039;s write barrier support.&lt;br /&gt;
&lt;br /&gt;
=== Write barrier support. ===&lt;br /&gt;
&lt;br /&gt;
Write barrier support has been enabled by default in XFS since kernel version 2.6.17. It can be disabled by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. Barrier support flushes the write-back cache at the appropriate times (such as on XFS log writes). This is generally the recommended solution; however, you should check the system logs to ensure it was successful. Barriers will be disabled, and reported in the log, if any of these three scenarios occurs:&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported with external log device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported by the underlying device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, trial barrier write failed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
If the filesystem is mounted with an external log device, we currently don&#039;t support flushing both the data and log devices (this may change in the future). If the driver tells the block layer that the device does not support write cache flushing while the write cache is enabled, it will be reported that the device doesn&#039;t support barriers. And finally, XFS will actually attempt a trial barrier write to the superblock and check its error state afterwards, reporting if it fails.&lt;br /&gt;
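As a quick sanity check after mounting, the kernel log can be searched for the messages listed above. The sketch below matches against a sample log line rather than live dmesg output, so the surrounding message text is only illustrative:

```shell
#!/bin/sh
# Sketch: detect whether XFS reported disabled write barriers.
# In practice you would pipe `dmesg` output in; a sample line stands
# in for the kernel log here.
log='Filesystem "sda1": Disabling barriers, trial barrier write failed'
if printf '%s\n' "$log" | grep -q 'Disabling barriers'; then
    echo 'WARNING: XFS disabled write barriers'
fi
```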
&lt;br /&gt;
== Q. Should barriers be enabled with storage which has a persistent write cache? ==&lt;br /&gt;
&lt;br /&gt;
Many hardware RAID controllers have a persistent write cache which is preserved across power failures, interface resets, system crashes, etc. In this case using write barriers is not recommended and will in fact lower performance; therefore, it is recommended to turn off barrier support by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. But take care that the individual hard disk write caches are turned off.&lt;br /&gt;
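For example, an /etc/fstab entry for a filesystem on such an array might look like the following (device name and mount point are placeholders):

```
# hypothetical /etc/fstab entry: barriers off, relying on the
# controller's battery-backed cache (disk write caches must be off)
/dev/sdb1   /data   xfs   nobarrier   0  0
```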
&lt;br /&gt;
== Q. Which settings does my RAID controller need? ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s hard to say in general because there are so many different controllers. Please consult your RAID controller documentation to determine how to change these settings, but here is an overview:&lt;br /&gt;
&lt;br /&gt;
Real RAID controllers (not those found onboard mainboards) normally have a battery-backed cache (or an [http://en.wikipedia.org/wiki/Electric_double-layer_capacitor ultracapacitor] + flash memory &amp;quot;[http://www.tweaktown.com/articles/2800/adaptec_zero_maintenance_cache_protection_explained/ zero maintenance cache]&amp;quot;) which is used for buffering writes to improve speed. Even if the controller cache is battery backed, the individual hard disk write caches need to be turned off, as they are not protected from a power failure and will simply lose all their contents in that case.&lt;br /&gt;
&lt;br /&gt;
* Onboard RAID controllers: there are so many different types that it&#039;s hard to generalize. Usually these controllers have no cache of their own but leave the hard disk write caches on. That can lead to a bad situation: after a power failure with RAID-1, when only parts of the disk caches have been written out, the controller doesn&#039;t even see that the disks are out of sync, since the disks may reorder cached blocks and might both have saved the superblock info but then lost different data contents. So, turn off the disk write caches before using the RAID function.&lt;br /&gt;
&lt;br /&gt;
* 3ware: /cX/uX set cache=off, see http://www.3ware.com/support/UserDocs/CLIGuide-9.5.1.1.pdf , page 86&lt;br /&gt;
&lt;br /&gt;
* Adaptec: allows setting the write cache of individual drives:&lt;br /&gt;
arcconf setcache &amp;lt;disk&amp;gt; wb|wt&lt;br /&gt;
wb = write back, which means write cache on; wt = write through, which means write cache off. So &amp;quot;wt&amp;quot; should be chosen.&lt;br /&gt;
&lt;br /&gt;
* Areca: In archttp under &amp;quot;System Controls&amp;quot; -&amp;gt; &amp;quot;System Configuration&amp;quot; there&#039;s the option &amp;quot;Disk Write Cache Mode&amp;quot; (defaults to &amp;quot;Auto&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Off&amp;quot;: disk write cache is turned off&lt;br /&gt;
&lt;br /&gt;
&amp;quot;On&amp;quot;: disk write cache is enabled; this is fast but not safe for your data&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Auto&amp;quot;: If you use a BBM (battery backup module, which you really should use if you care about your data), the controller automatically turns the disk write caches off to protect your data. If no BBM is attached, the controller switches to &amp;quot;On&amp;quot;, because then neither the controller cache nor the disk caches are safe anyway, so it assumes you don&#039;t care about your data and just want high speed (which you then get).&lt;br /&gt;
&lt;br /&gt;
That&#039;s a very sensible default, so you can leave it at &amp;quot;Auto&amp;quot; or enforce &amp;quot;Off&amp;quot; to be sure.&lt;br /&gt;
&lt;br /&gt;
* LSI MegaRAID: allows setting the write cache of individual disks:&lt;br /&gt;
MegaCli -AdpCacheFlush -aN|-a0,1,2|-aALL -EnDskCache|DisDskCache&lt;br /&gt;
&lt;br /&gt;
* Xyratex: from the docs: &amp;quot;Write cache includes the disk drive cache and controller cache.&amp;quot; That means the drive caches and the controller cache can only be set together. To protect your data, turn it off, but write performance will suffer badly because the controller write cache is disabled as well.&lt;br /&gt;
&lt;br /&gt;
== Q: Which settings are best with virtualization like VMware, XEN, qemu? ==&lt;br /&gt;
&lt;br /&gt;
The biggest problem is that these products seem to virtualize disk &lt;br /&gt;
writes in a way that even barriers no longer work, which means even &lt;br /&gt;
an fsync is not reliable. Tests confirm that by unplugging the power from &lt;br /&gt;
such a system you can destroy a database within the virtual machine (guest, &lt;br /&gt;
domU, or whatever you call it), even with a RAID controller with battery-backed &lt;br /&gt;
cache and the hard disk caches turned off (a combination which is safe on a normal host).&lt;br /&gt;
&lt;br /&gt;
In qemu you can specify cache=off in the option that defines the virtual &lt;br /&gt;
disk. For the other products this information is still missing.&lt;br /&gt;
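For example, a qemu invocation with caching of the virtual disk disabled might look like this (the image path and memory size are placeholders; on newer qemu versions the equivalent option value is cache=none):

```
# hypothetical qemu command line; cache=off disables host caching of the image
qemu -drive file=/var/lib/vm/guest.img,cache=off -m 512
```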
&lt;br /&gt;
== Q: What is the issue with directory corruption in Linux 2.6.17? ==&lt;br /&gt;
&lt;br /&gt;
In the Linux kernel 2.6.17 release a subtle bug was accidentally introduced into the XFS directory code by some &amp;quot;sparse&amp;quot; endian annotations. This bug was sufficiently uncommon (it only affects a certain type of format change, in Node or B-Tree format directories, and only in certain situations) that it was not detected during our regular regression testing, but it has been observed in the wild by a number of people now.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: the fix is included in 2.6.17.7 and later kernels.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To add insult to injury, &#039;&#039;&#039;xfs_repair(8)&#039;&#039;&#039; is currently not correcting these directories on detection of this corrupt state either. This &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; issue is actively being worked on, and a fixed version will be available shortly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; is now available; version 2.8.10 or later of the xfsprogs package contains the fixed version.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
No other kernel versions are affected. However, using a corrupt filesystem on other kernels can still result in the filesystem being shut down if the problem has not been rectified (on disk), making it seem like other kernels are affected.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;xfs_check&#039;&#039;&#039; tool, or &#039;&#039;&#039;xfs_repair -n&#039;&#039;&#039;, should be able to detect any directory corruption.&lt;br /&gt;
&lt;br /&gt;
Until a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; binary is available, one can make use of the &#039;&#039;&#039;xfs_db(8)&#039;&#039;&#039; command to mark the problem directory for removal (see the first example below). A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; invocation will then remove the directory and move all of its contents into &amp;quot;lost+found&amp;quot;, named by inode number (see the second example for how to map inode numbers to directory entry names, which needs to be done &#039;&#039;before&#039;&#039; removing the directory itself). The inode number of the corrupt directory is included in the shutdown report issued by the kernel on detection of directory corruption. Using that inode number, this is how one would ensure it is removed:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 core.mode = 040755&lt;br /&gt;
 core.version = 2&lt;br /&gt;
 core.format = 3 (btree)&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; write core.mode 0&lt;br /&gt;
 xfs_db&amp;amp;gt; quit&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; will clear the directory, and add new entries (named by inode number) in lost+found.&lt;br /&gt;
&lt;br /&gt;
The easiest way to map inode numbers to full paths is via &#039;&#039;&#039;xfs_ncheck(8)&#039;&#039;&#039;&amp;lt;nowiki&amp;gt;: &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_ncheck -i 14101 -i 14102 /dev/sdXXX&lt;br /&gt;
       14101 full/path/mumble_fratz_foo_bar_1495&lt;br /&gt;
       14102 full/path/mumble_fratz_foo_bar_1494&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Should this not work, we can manually map inode numbers in a B-Tree format directory by taking the following steps:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 ...&lt;br /&gt;
 next_unlinked = null&lt;br /&gt;
 u.bmbt.level = 1&lt;br /&gt;
 u.bmbt.numrecs = 1&lt;br /&gt;
 u.bmbt.keys[1] = [startoff] 1:[0]&lt;br /&gt;
 u.bmbt.ptrs[1] = 1:3628&lt;br /&gt;
 xfs_db&amp;amp;gt; fsblock 3628&lt;br /&gt;
 xfs_db&amp;amp;gt; type bmapbtd&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 magic = 0x424d4150&lt;br /&gt;
 level = 0&lt;br /&gt;
 numrecs = 19&lt;br /&gt;
 leftsib = null&lt;br /&gt;
 rightsib = null&lt;br /&gt;
 recs[1-19] = [startoff,startblock,blockcount,extentflag]&lt;br /&gt;
        1:[0,3088,4,0] 2:[4,3128,8,0] 3:[12,3308,4,0] 4:[16,3360,4,0]&lt;br /&gt;
        5:[20,3496,8,0] 6:[28,3552,8,0] 7:[36,3624,4,0] 8:[40,3633,4,0]&lt;br /&gt;
        9:[44,3688,8,0] 10:[52,3744,4,0] 11:[56,3784,8,0]&lt;br /&gt;
        12:[64,3840,8,0] 13:[72,3896,4,0] 14:[33554432,3092,4,0]&lt;br /&gt;
        15:[33554436,3488,8,0] 16:[33554444,3629,4,0]&lt;br /&gt;
        17:[33554448,3748,4,0] 18:[33554452,3900,4,0]&lt;br /&gt;
        19:[67108864,3364,4,0]&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point we are looking at the extents that hold all of the directory information. There are three types of extent here: the data blocks (extents 1 through 13 above), the leaf blocks (extents 14 through 18), and the freelist blocks (extent 19 above). The jumps in the first field (start offset) indicate the transition between the three types. For recovering file names we are only interested in the data blocks, so we can now feed those offset numbers into the &#039;&#039;&#039;xfs_db&#039;&#039;&#039; dblock command. So, for the fifth extent - 5:[20,3496,8,0] - listed above:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; dblock 20&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 dhdr.magic = 0x58443244&lt;br /&gt;
 dhdr.bestfree[0].offset = 0&lt;br /&gt;
 dhdr.bestfree[0].length = 0&lt;br /&gt;
 dhdr.bestfree[1].offset = 0&lt;br /&gt;
 dhdr.bestfree[1].length = 0&lt;br /&gt;
 dhdr.bestfree[2].offset = 0&lt;br /&gt;
 dhdr.bestfree[2].length = 0&lt;br /&gt;
 du[0].inumber = 13937&lt;br /&gt;
 du[0].namelen = 25&lt;br /&gt;
 du[0].name = &amp;quot;mumble_fratz_foo_bar_1595&amp;quot;&lt;br /&gt;
 du[0].tag = 0x10&lt;br /&gt;
 du[1].inumber = 13938&lt;br /&gt;
 du[1].namelen = 25&lt;br /&gt;
 du[1].name = &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;&lt;br /&gt;
 du[1].tag = 0x38&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
So, here we can see that inode number 13938 matches up with name &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;. Iterate through all the extents, and extract all the name-to-inode-number mappings you can, as these will be useful when looking at &amp;quot;lost+found&amp;quot; (once &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; has removed the corrupt directory).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Q: Why does my &amp;gt; 2TB XFS partition disappear when I reboot? ==&lt;br /&gt;
&lt;br /&gt;
Strictly speaking this is not an XFS problem.&lt;br /&gt;
&lt;br /&gt;
To support &amp;gt; 2TB partitions you need two things: a kernel that supports large block devices (&amp;lt;tt&amp;gt;CONFIG_LBD=y&amp;lt;/tt&amp;gt;) and a partition table format that can hold large partitions.  The default DOS partition tables don&#039;t.  The best partition format for&lt;br /&gt;
&amp;gt; 2TB partitions is the EFI GPT format (&amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Without CONFIG_LBD=y you can&#039;t even create the filesystem, but without &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt; it works fine until you reboot at which point the partition will disappear.  Note that you need to enable the &amp;lt;tt&amp;gt;CONFIG_PARTITION_ADVANCED&amp;lt;/tt&amp;gt; option before you can set &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;.&lt;br /&gt;
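Putting the options above together, the relevant kernel configuration fragment looks like this:

```
# .config fragment for >2TB partition support (2.6.x option names)
CONFIG_LBD=y                 # large block device support
CONFIG_PARTITION_ADVANCED=y  # must be enabled before EFI_PARTITION
CONFIG_EFI_PARTITION=y       # EFI GPT partition table support
```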
&lt;br /&gt;
== Q: Why do I receive &amp;lt;tt&amp;gt;No space left on device&amp;lt;/tt&amp;gt; after &amp;lt;tt&amp;gt;xfs_growfs&amp;lt;/tt&amp;gt;? ==&lt;br /&gt;
&lt;br /&gt;
After [http://oss.sgi.com/pipermail/xfs/2009-January/039828.html growing a XFS filesystem], df(1) would show enough free space but attempts to write to the filesystem result in -ENOSPC. To fix this, [http://oss.sgi.com/pipermail/xfs/2009-January/039835.html Dave Chinner advised]:&lt;br /&gt;
&lt;br /&gt;
  The only way to fix this is to move data around to free up space&lt;br /&gt;
  below 1TB. Find your oldest data (i.e. that was around before even&lt;br /&gt;
  the first grow) and move it off the filesystem (move, not copy).&lt;br /&gt;
  Then if you copy it back on, the data blocks will end up above 1TB&lt;br /&gt;
  and that should leave you with plenty of space for inodes below 1TB.&lt;br /&gt;
  &lt;br /&gt;
  A complete dump and restore will also fix the problem ;)&lt;br /&gt;
&lt;br /&gt;
Also, you can add &#039;inode64&#039; to your mount options to allow inodes to live above 1TB.&lt;br /&gt;
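For example, via /etc/fstab (device name and mount point are placeholders):

```
# hypothetical /etc/fstab entry allowing inode allocation above 1TB
/dev/sdc1   /bigfs   xfs   inode64   0  0
```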
&lt;br /&gt;
== Q: Does using noatime and/or nodiratime at mount time give any performance benefit on XFS (or does not using them cause a performance decrease)? ==&lt;br /&gt;
See: http://everything2.com/index.pl?node_id=1479435&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User:Christian&amp;diff=2039</id>
		<title>User:Christian</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User:Christian&amp;diff=2039"/>
		<updated>2009-07-21T06:11:19Z</updated>

		<summary type="html">&lt;p&gt;Christian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[mailto:lists___nospam@nerdbynature.de me]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User:Christian&amp;diff=2038</id>
		<title>User:Christian</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User:Christian&amp;diff=2038"/>
		<updated>2009-07-21T06:10:40Z</updated>

		<summary type="html">&lt;p&gt;Christian: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[mailto:lists___nospam@nerdbynature.de Christian]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Getting_the_latest_source_code&amp;diff=2035</id>
		<title>Getting the latest source code</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Getting_the_latest_source_code&amp;diff=2035"/>
		<updated>2009-07-19T17:37:59Z</updated>

		<summary type="html">&lt;p&gt;Christian: cvs imported trees were moved to archive&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt; XFS Released/Stable source &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Mainline kernels&#039;&#039;&#039;&lt;br /&gt;
:XFS has been maintained in the official Linux kernel [http://www.kernel.org/ kernel trees] starting with [http://lkml.org/lkml/2003/12/8/35 Linux 2.4] and is frequently updated with the latest stable fixes and features from the SGI XFS development team.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Vendor kernels&#039;&#039;&#039;&lt;br /&gt;
:All modern Linux distributions include support for XFS. SGI actively works with [http://www.suse.com/  SUSE] to provide a supported version of XFS in that distribution.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;XFS userspace&#039;&#039;&#039;&lt;br /&gt;
:SGI also provides [ftp://oss.sgi.com/projects/xfs source code tarballs] of the XFS userspace tools. These tarballs form the basis of the xfsprogs packages found in Linux distributions.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt; Development and bleeding edge Development &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS git howto]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Current XFS kernel source ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=summary xfs]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/xfs&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== XFS user space tools ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfsprogs.git;a=summary xfsprogs]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/cmds/xfsprogs&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== XFS dump ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfsdump.git;a=summary xfsdump]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/cmds/xfsdump&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== XFS tests ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfstests.git;a=summary xfstests]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/cmds/xfstests&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== DMAPI user space tools ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/dmapi.git;a=summary dmapi]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/cmds/dmapi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== git-cvsimport generated trees ===&lt;br /&gt;
&lt;br /&gt;
The Git trees are automatically mirrored copies of the CVS trees, created with git-cvsimport.&lt;br /&gt;
Since git-cvsimport uses the tool cvsps to recreate the atomic commits of a ptools&lt;br /&gt;
&amp;quot;mod&amp;quot;, it is easier to see the entire change that was committed using git.&lt;br /&gt;
&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=archive/xfs-import.git;a=summary linux-2.6-xfs-from-cvs]&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=archive/xfs-cmds.git;a=summary xfs-cmds]&lt;br /&gt;
&lt;br /&gt;
Before building in the &amp;lt;tt&amp;gt;xfsdump&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;dmapi&amp;lt;/tt&amp;gt; directories (after building &amp;lt;tt&amp;gt;xfsprogs&amp;lt;/tt&amp;gt;), you will need to run:&lt;br /&gt;
  # cd xfsprogs&lt;br /&gt;
  # make install-dev&lt;br /&gt;
to create &amp;lt;tt&amp;gt;/usr/include/xfs&amp;lt;/tt&amp;gt; and install appropriate files there.&lt;br /&gt;
&lt;br /&gt;
Before building in the xfstests directory, you will need to run:&lt;br /&gt;
  # cd xfsprogs&lt;br /&gt;
  # make install-qa&lt;br /&gt;
to install a somewhat larger set of files in &amp;lt;tt&amp;gt;/usr/include/xfs&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt;XFS cvs trees &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
The CVS trees were created using a script that converted SGI&#039;s internal&lt;br /&gt;
ptools repository to a CVS repository, so the CVS trees were considered read-only.&lt;br /&gt;
&lt;br /&gt;
At this point all new development is being managed in the Git trees, so the CVS trees&lt;br /&gt;
are no longer active in terms of current development and should only be used&lt;br /&gt;
for reference.&lt;br /&gt;
&lt;br /&gt;
* [[XFS CVS howto]]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Getting_the_latest_source_code&amp;diff=2034</id>
		<title>Getting the latest source code</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Getting_the_latest_source_code&amp;diff=2034"/>
		<updated>2009-07-19T17:31:45Z</updated>

		<summary type="html">&lt;p&gt;Christian: formatting fixes&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt; XFS Released/Stable source &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Mainline kernels&#039;&#039;&#039;&lt;br /&gt;
:XFS has been maintained in the official Linux kernel [http://www.kernel.org/ kernel trees] starting with [http://lkml.org/lkml/2003/12/8/35 Linux 2.4] and is frequently updated with the latest stable fixes and features from the SGI XFS development team.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Vendor kernels&#039;&#039;&#039;&lt;br /&gt;
:All modern Linux distributions include support for XFS. SGI actively works with [http://www.suse.com/  SUSE] to provide a supported version of XFS in that distribution.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;XFS userspace&#039;&#039;&#039;&lt;br /&gt;
:SGI also provides [ftp://oss.sgi.com/projects/xfs source code tarballs] of the XFS userspace tools. These tarballs form the basis of the xfsprogs packages found in Linux distributions.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt; Development and bleeding edge Development &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS git howto]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Current XFS kernel source ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=summary xfs]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/xfs&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== XFS user space tools ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfsprogs.git;a=summary xfsprogs]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/cmds/xfsprogs&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== XFS dump ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfsdump.git;a=summary xfsdump]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/cmds/xfsdump&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== XFS tests ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfstests.git;a=summary xfstests]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/cmds/xfstests&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== DMAPI user space tools ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/dmapi.git;a=summary dmapi]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/cmds/dmapi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== git-cvsimport generated trees ===&lt;br /&gt;
&lt;br /&gt;
The Git trees are automatically mirrored copies of the CVS trees, created with git-cvsimport.&lt;br /&gt;
Since git-cvsimport uses the tool cvsps to recreate the atomic commits of a ptools&lt;br /&gt;
&amp;quot;mod&amp;quot;, it is easier to see the entire change that was committed using git.&lt;br /&gt;
&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=linux-2.6-xfs-from-cvs/.git;a=summary linux-2.6-xfs-from-cvs]&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs-cmds/.git;a=summary xfs-cmds]&lt;br /&gt;
&lt;br /&gt;
Before building in the &amp;lt;tt&amp;gt;xfsdump&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;dmapi&amp;lt;/tt&amp;gt; directories (after building &amp;lt;tt&amp;gt;xfsprogs&amp;lt;/tt&amp;gt;), you will need to run:&lt;br /&gt;
  # cd xfsprogs&lt;br /&gt;
  # make install-dev&lt;br /&gt;
to create &amp;lt;tt&amp;gt;/usr/include/xfs&amp;lt;/tt&amp;gt; and install appropriate files there.&lt;br /&gt;
&lt;br /&gt;
Before building in the xfstests directory, you will need to run:&lt;br /&gt;
  # cd xfsprogs&lt;br /&gt;
  # make install-qa&lt;br /&gt;
to install a somewhat larger set of files in &amp;lt;tt&amp;gt;/usr/include/xfs&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt;XFS cvs trees &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
The CVS trees were created using a script that converted SGI&#039;s internal&lt;br /&gt;
ptools repository to a CVS repository, so the CVS trees were considered read-only.&lt;br /&gt;
&lt;br /&gt;
At this point all new development is being managed in the Git trees, so the CVS trees&lt;br /&gt;
are no longer active in terms of current development and should only be used&lt;br /&gt;
for reference.&lt;br /&gt;
&lt;br /&gt;
* [[XFS CVS howto]]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2033</id>
		<title>XFS FAQ</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2033"/>
		<updated>2009-07-17T01:15:00Z</updated>

		<summary type="html">&lt;p&gt;Christian: ultracapacitor, zero maintenance cache explained&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Info from: [http://oss.sgi.com/projects/xfs/faq.html main XFS faq at SGI]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Many thanks to earlier maintainers of this document - Thomas Graichen and Seth Mos.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about XFS? ==&lt;br /&gt;
&lt;br /&gt;
The SGI XFS project page http://oss.sgi.com/projects/xfs/ is the definitive reference. It contains pointers to whitepapers, books, articles, etc.&lt;br /&gt;
&lt;br /&gt;
You could also join the [[XFS_email_list_and_archives|XFS mailing list]] or the &#039;&#039;&#039;&amp;lt;nowiki&amp;gt;#xfs&amp;lt;/nowiki&amp;gt;&#039;&#039;&#039; IRC channel on &#039;&#039;irc.freenode.net&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about ACLs? ==&lt;br /&gt;
&lt;br /&gt;
Andreas Gruenbacher maintains the Extended Attribute and POSIX ACL documentation for Linux at http://acl.bestbits.at/&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;acl(5)&#039;&#039;&#039; manual page is also quite extensive.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find information about the internals of XFS? ==&lt;br /&gt;
&lt;br /&gt;
An [training/index.html SGI XFS Training course] aimed at developers, triage and support staff, and serious users has been in development. Parts of the course are clearly still incomplete, but there is enough content to be useful to a broad range of users.&lt;br /&gt;
&lt;br /&gt;
Barry Naujok has documented the [papers/xfs_filesystem_structure.doc XFS on-disk format], which is a very useful reference.&lt;br /&gt;
&lt;br /&gt;
== Q: What partition type should I use for XFS on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Linux native filesystem (83).&lt;br /&gt;
&lt;br /&gt;
== Q: What mount options does XFS have? ==&lt;br /&gt;
&lt;br /&gt;
There are a number of mount options influencing XFS filesystems - refer to the &#039;&#039;&#039;mount(8)&#039;&#039;&#039; manual page or the documentation in the kernel source tree itself ([http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs.txt;hb=HEAD Documentation/filesystems/xfs.txt])&lt;br /&gt;
&lt;br /&gt;
== Q: Is there any relation between the XFS utilities and the kernel version? ==&lt;br /&gt;
&lt;br /&gt;
No, there is no relation. Newer utilities tend to mainly have fixes and checks the previous versions might not have. New features are also added in a backward compatible way - if they are enabled via mkfs, an incapable (old) kernel will recognize that it does not understand the new feature, and refuse to mount the filesystem.&lt;br /&gt;
&lt;br /&gt;
== Q: Does it run on platforms other than i386? ==&lt;br /&gt;
&lt;br /&gt;
XFS runs on all of the platforms that Linux supports. It is more heavily tested on the more common platforms, especially the i386 family. It&#039;s also well tested on the IA64 platform, since that&#039;s the platform SGI&#039;s Linux products use.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Do quotas work on XFS? ==&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
To use quotas with XFS, you need to enable XFS quota support when you configure your kernel. You also need to specify quota support when mounting. You can get the Linux quota utilities at their sourceforge website [http://sourceforge.net/projects/linuxquota/  http://sourceforge.net/projects/linuxquota/] or use &#039;&#039;&#039;xfs_quota(8)&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: What&#039;s project quota? ==&lt;br /&gt;
&lt;br /&gt;
Project quota is a quota mechanism in XFS that can be used to implement a form of directory tree quota, where a specified directory and all of the files and subdirectories below it (i.e. a tree) can be restricted to using a subset of the available space in the filesystem.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Can group quota and project quota be used at the same time? ==&lt;br /&gt;
&lt;br /&gt;
No, project quota cannot be used with group quota at the same time. On the other hand user quota and project quota can be used simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Does unmounting a filesystem with prjquota (project quota) enabled and mounting it again with grpquota (group quota) remove the prjquota limits previously set on the filesystem (and vice versa)? ==&lt;br /&gt;
&lt;br /&gt;
To be answered.&lt;br /&gt;
&lt;br /&gt;
== Q: Are there any dump/restore tools for XFS? ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039; and &#039;&#039;&#039;xfsrestore(8)&#039;&#039;&#039; are fully supported. The tape format is the same as on IRIX, so tapes are interchangeable between operating systems.&lt;br /&gt;
&lt;br /&gt;
== Q: Does LILO work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
This depends on where you install LILO.&lt;br /&gt;
&lt;br /&gt;
Yes, for MBR (Master Boot Record) installations.&lt;br /&gt;
&lt;br /&gt;
No, for root partition installations because the XFS superblock is written at block zero, where LILO would be installed. This is to maintain compatibility with the IRIX on-disk format, and will not be changed.&lt;br /&gt;
&lt;br /&gt;
== Q: Does GRUB work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
There is native XFS filesystem support for GRUB starting with version 0.91 and onward. Unfortunately, GRUB used to make incorrect assumptions about being able to read a block device image while a filesystem is mounted and actively being written to, which could cause intermittent problems when using XFS. This has reportedly since been fixed, and the 0.97 version (at least) of GRUB is apparently stable.&lt;br /&gt;
&lt;br /&gt;
== Q: Can XFS be used for a root filesystem? ==&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
== Q: Will I be able to use my IRIX XFS filesystems on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Yes. The on-disk format of XFS is the same on IRIX and Linux. Obviously, you should back up your data before trying to move it between systems. Filesystems must be &amp;quot;clean&amp;quot; when moved (i.e. unmounted). If you plan to use IRIX filesystems on Linux, keep the following points in mind: the kernel needs to have SGI partition support enabled; there is no XLV support in Linux, so you are unable to read IRIX filesystems which use the XLV volume manager; and not all blocksizes available on IRIX are available on Linux (for now, only blocksizes less than or equal to the pagesize of the architecture are possible: 4k for i386, ppc, ...; 8k for alpha, sparc, ...). Make sure that the directory format is version 2 on the IRIX filesystems (this is the default since IRIX 6.5.5); Linux can only read v2 directories.&lt;br /&gt;
&lt;br /&gt;
== Q: Is there a way to make a XFS filesystem larger or smaller? ==&lt;br /&gt;
&lt;br /&gt;
You can &#039;&#039;NOT&#039;&#039; make an XFS filesystem smaller online. The only way to shrink one is a complete dump, mkfs and restore.&lt;br /&gt;
&lt;br /&gt;
An XFS filesystem may be enlarged by using &#039;&#039;&#039;xfs_growfs(8)&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
If using partitions, you need free space immediately after the partition to do so. Delete the partition and recreate it larger with the &#039;&#039;exact same&#039;&#039; starting point, then run &#039;&#039;&#039;xfs_growfs&#039;&#039;&#039; to enlarge the filesystem. Note - editing partition tables is a dangerous pastime, so back up your filesystem before doing so.&lt;br /&gt;
&lt;br /&gt;
Using XFS filesystems on top of a volume manager makes this a lot easier.&lt;br /&gt;
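As a commented sketch of the sequence above - nothing below touches a disk, it only prints the steps; the device /dev/sdb1 and mount point /mnt/data are hypothetical placeholders:

```shell
# Illustrative only: the grow-in-place sequence. The variables are
# hypothetical placeholders; substitute your real device and mount point.
DEV=/dev/sdb1
MNT=/mnt/data
echo "1. Back up the filesystem on $DEV."
echo "2. Record the exact starting sector of $DEV (fdisk -l, or parted: unit s print)."
echo "3. Delete the partition and recreate it larger with the SAME starting sector."
echo "4. With $DEV mounted on $MNT, run: xfs_growfs $MNT"
```

Note that xfs_growfs takes the mount point, not the device, and by default grows the filesystem to fill the enlarged partition.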
&lt;br /&gt;
== Q: What information should I include when reporting a problem? ==&lt;br /&gt;
&lt;br /&gt;
Things to include are the version of XFS you are using (and, if it is a CVS version, the checkout date) and the version of the kernel. If you have problems with userland packages, please report the version of the package you are using.&lt;br /&gt;
&lt;br /&gt;
If the problem relates to a particular filesystem, the output from the &#039;&#039;&#039;xfs_info(8)&#039;&#039;&#039; command and any &#039;&#039;&#039;mount(8)&#039;&#039;&#039; options in use will also be useful to the developers.&lt;br /&gt;
&lt;br /&gt;
If you experience an oops, please run it through &#039;&#039;&#039;ksymoops&#039;&#039;&#039; so that it can be interpreted.&lt;br /&gt;
&lt;br /&gt;
If you have a filesystem that cannot be repaired, make sure you have xfsprogs 2.9.0 or later and run &#039;&#039;&#039;xfs_metadump(8)&#039;&#039;&#039; to capture the metadata (which obfuscates filenames and attributes to protect your privacy) and make the dump available for someone to analyse.&lt;br /&gt;
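The details above can be collected in one step. A minimal sketch - xfs_repair or mounted XFS filesystems may be absent on a given system, so each query is allowed to fail quietly:

```shell
# Gather the basics for an XFS bug report. Commands that may be
# unavailable (xfs_repair) or may find nothing are allowed to fail.
REPORT="Kernel: $(uname -r)"
REPORT="$REPORT
xfsprogs: $(xfs_repair -V 2>/dev/null || echo not installed)"
REPORT="$REPORT
XFS mounts: $(grep ' xfs ' /proc/mounts 2>/dev/null || echo none found)"
echo "$REPORT"
```

On a machine with the affected filesystem mounted, you would also append the output of xfs_info for it and the mount options in use.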
&lt;br /&gt;
== Q: Mounting a XFS filesystem does not work - what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
If mount prints an error message something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
     mount: /dev/hda5 has wrong major or minor number&lt;br /&gt;
&lt;br /&gt;
you either do not have XFS compiled into the kernel (or the module is not loaded), or you did not use the &amp;quot;-t xfs&amp;quot; option on mount or the &amp;quot;xfs&amp;quot; type in &amp;lt;tt&amp;gt;/etc/fstab&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you get something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 mount: wrong fs type, bad option, bad superblock on /dev/sda1,&lt;br /&gt;
        or too many mounted file systems&lt;br /&gt;
&lt;br /&gt;
Refer to your system log file (&amp;lt;tt&amp;gt;/var/log/messages&amp;lt;/tt&amp;gt;) for a detailed diagnostic message from the kernel.&lt;br /&gt;
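A quick way to rule out the first cause - the running kernel not knowing about XFS at all - is to look it up in /proc/filesystems (a sketch):

```shell
# /proc/filesystems lists every filesystem type the running kernel can
# mount right now (built in, or provided by an already loaded module).
if grep -qw xfs /proc/filesystems; then
    echo "XFS support is present"
else
    echo "XFS support is missing: build it into the kernel, or run: modprobe xfs"
fi
```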
&lt;br /&gt;
== Q: Does the filesystem have an undelete capability? ==&lt;br /&gt;
&lt;br /&gt;
There is no undelete in XFS. However, at least some XFS driver implementations do not wipe file information nodes completely, so there is a chance of recovering files with specialized commercial software such as [http://www.ufsexplorer.com/rdr_xfs.php Raise Data Recovery for XFS].&lt;br /&gt;
Such implementations also do not re-use directory entries immediately, so there is a chance of getting back recently deleted files under their real names.&lt;br /&gt;
&lt;br /&gt;
This applies to most recent Linux distributions, as well as to most popular NAS boxes that use embedded Linux and the XFS filesystem.&lt;br /&gt;
&lt;br /&gt;
In any case, the best protection is to keep backups.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I backup a XFS filesystem and ACLs? ==&lt;br /&gt;
&lt;br /&gt;
You can back up an XFS filesystem with utilities like &#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039; and standard &#039;&#039;&#039;tar(1)&#039;&#039;&#039; for ordinary files. If you want to back up ACLs you will need to use &#039;&#039;&#039;xfsdump&#039;&#039;&#039;; it is currently the only tool that supports backing up extended attributes. &#039;&#039;&#039;xfsdump&#039;&#039;&#039; can also be integrated with &#039;&#039;&#039;amanda(8)&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Q: I see applications returning error 990 or &amp;quot;Structure needs cleaning&amp;quot;, what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
The error 990 stands for [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=blob;f=fs/xfs/linux-2.6/xfs_linux.h#l145 EFSCORRUPTED] which usually means XFS has detected a filesystem metadata problem and has shut the filesystem down to prevent further damage. Also, since about June 2006, we [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=commit;h=da2f4d679c8070ba5b6a920281e495917b293aa0 converted from EFSCORRUPTED/990 over to using EUCLEAN], &amp;quot;Structure needs cleaning.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The cause can be pretty much anything, unfortunately - filesystem, virtual memory manager, volume manager, device driver, or hardware.&lt;br /&gt;
&lt;br /&gt;
There should be a detailed console message when this first happens. These messages contain important information that gives developers hints about the earliest point at which the problem was detected. The shutdown itself is there to protect your data.&lt;br /&gt;
&lt;br /&gt;
You can use &#039;&#039;&#039;xfs_check&#039;&#039;&#039; and &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; to remedy the problem (with the filesystem unmounted).&lt;br /&gt;
&lt;br /&gt;
== Q: Why do I see binary NULLS in some files after recovery when I unplugged the power? ==&lt;br /&gt;
&lt;br /&gt;
Update: This issue has been addressed with a CVS fix on the 29th March 2007 and merged into mainline on 8th May 2007 for 2.6.22-rc1.&lt;br /&gt;
&lt;br /&gt;
XFS journals metadata updates, not data updates. After a crash you are supposed to get a consistent filesystem which looks like the state sometime shortly before the crash, NOT what the in memory image looked like the instant before the crash.&lt;br /&gt;
&lt;br /&gt;
Since XFS does not write data out immediately unless you tell it to with fsync, an O_SYNC or O_DIRECT open (the same is true of other filesystems), you are looking at an inode which was flushed out, but whose data was not. Typically you&#039;ll find that the inode is not taking any space since all it has is a size but no extents allocated (try examining the file with the &#039;&#039;&#039;xfs_bmap(8)&#039;&#039;&#039; command).&lt;br /&gt;
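An application that needs its data on stable storage at a known point has to ask for it explicitly. A small sketch using GNU dd, whose conv=fsync flag calls fsync() on the output file before exiting (the file path is arbitrary):

```shell
# Write a file and force its data blocks to disk before relying on them.
# Without the fsync, the data could still be sitting only in the page cache.
echo "important data" | dd of=/tmp/xfs_faq_demo.txt conv=fsync 2>/dev/null
cat /tmp/xfs_faq_demo.txt
# prints: important data
```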
&lt;br /&gt;
== Q: What is the problem with the write cache on journaled filesystems? ==&lt;br /&gt;
&lt;br /&gt;
Many drives use a write back cache in order to speed up the performance of writes.  However, there are conditions such as power failure when the write cache memory is never flushed to the actual disk.  Further, the drive can de-stage data from the write cache to the platters in any order that it chooses.  This causes problems for XFS and journaled filesystems in general because they rely on knowing when a write has completed to the disk. They need to know that the log information has made it to disk before allowing metadata to go to disk.  When the metadata makes it to disk then the transaction can effectively be deleted from the log resulting in movement of the tail of the log and thus freeing up some log space. So if the writes never make it to the physical disk, then the ordering is violated and the log and metadata can be lost, resulting in filesystem corruption.&lt;br /&gt;
&lt;br /&gt;
With hard disk cache sizes of currently (Jan 2009) up to 32MB, that can be a lot of valuable information. In a RAID with 8 such disks this adds up to 256MB, and the chance of filesystem metadata sitting in the cache is so high that a power outage carries a very real risk of major data loss.&lt;br /&gt;
&lt;br /&gt;
With a single hard disk and barriers turned on (on=default), the drive write cache is flushed before and after a barrier is issued. A power failure then &amp;quot;only&amp;quot; loses data in the cache; no essential ordering is violated, and corruption will not occur.&lt;br /&gt;
&lt;br /&gt;
With a RAID controller whose cache is battery backed and set to write-back mode, you should turn off barriers - they are unnecessary in this case, and if the controller honors the cache flushes they will harm performance. But then you *must* disable the individual hard disk write caches in order to keep the filesystem intact after a power failure. The method for doing this differs for each RAID controller; see the section about RAID controllers below.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I tell if I have the disk write cache enabled? ==&lt;br /&gt;
&lt;br /&gt;
For SCSI/SATA:&lt;br /&gt;
&lt;br /&gt;
* Look in dmesg(8) output for a driver line, such as:&amp;lt;br /&amp;gt; &amp;quot;SCSI device sda: drive cache: write back&amp;quot;&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# sginfo -c /dev/sda | grep -i &#039;write cache&#039; &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For PATA/SATA (although for SATA this only works on a recent kernel with ATA command passthrough):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -I /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; and look under &amp;quot;Enabled Supported&amp;quot; for &amp;quot;Write cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
== Q: How can I address the problem with the disk write cache? ==&lt;br /&gt;
&lt;br /&gt;
=== Disabling the disk write back cache. ===&lt;br /&gt;
&lt;br /&gt;
For SATA/PATA (IDE) (although for SATA this only works on a recent kernel with ATA command passthrough):&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -W0 /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # hdparm -W0 /dev/hda&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# blktool /dev/sda wcache off&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # blktool /dev/hda wcache off&lt;br /&gt;
&lt;br /&gt;
For SCSI:&lt;br /&gt;
&lt;br /&gt;
* Using sginfo(8) which is a little tedious&amp;lt;br /&amp;gt; It takes 3 steps. For example:&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -c /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives a list of attribute names and values&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cX /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives an array of cache values which you must match up with from step 1, e.g.&amp;lt;br /&amp;gt; 0 0 0 1 0 1 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cXR /dev/sda 0 0 0 1 0 0 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; allows you to reset the value of the cache attributes.&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
For a SCSI disk this setting is persistent. For a SATA/PATA disk, however, it must be repeated after every reset, since the drive falls back to its default of write cache enabled - and a reset can happen after a reboot or during error recovery. This makes it rather difficult to guarantee that the write cache stays disabled.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Using an external log. ===&lt;br /&gt;
&lt;br /&gt;
Some people have considered the idea of using an external log on a separate drive with the write cache disabled and the rest of the file system on another disk with the write cache enabled. However, that will &#039;&#039;&#039;not&#039;&#039;&#039; solve the problem. For example, the tail of the log is moved when we are notified that a metadata write is completed to disk and we won&#039;t be able to guarantee that if the metadata is on a drive with the write cache enabled.&lt;br /&gt;
&lt;br /&gt;
In fact using an external log will disable XFS&#039; write barrier support.&lt;br /&gt;
&lt;br /&gt;
=== Write barrier support. ===&lt;br /&gt;
&lt;br /&gt;
Write barrier support has been enabled by default in XFS since kernel version 2.6.17. It is disabled by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. Barrier support flushes the write back cache at the appropriate times (such as on XFS log writes). This is generally the recommended solution; however, you should check the system logs to ensure it was successful. Barriers will be disabled, and this reported in the log, if any of these 3 scenarios occurs:&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported with external log device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported by the underlying device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, trial barrier write failed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
If the filesystem is mounted with an external log device then we currently don&#039;t support flushing to the data and log devices (this may change in the future). If the driver tells the block layer that the device does not support write cache flushing with the write cache enabled then it will report that the device doesn&#039;t support it. And finally we will actually test out a barrier write on the superblock and test its error state afterwards, reporting if it fails.&lt;br /&gt;
&lt;br /&gt;
== Q. Should barriers be enabled with storage which has a persistent write cache? ==&lt;br /&gt;
&lt;br /&gt;
Many hardware RAID controllers have a persistent write cache that is preserved across power failures, interface resets and system crashes. Using write barriers in this case is not recommended and will in fact lower performance; mount the filesystem with &amp;quot;nobarrier&amp;quot; instead. The individual hard disk write caches should still be turned off, however.&lt;br /&gt;
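For example, on an array with a battery backed cache the option can be set permanently in /etc/fstab (the device name and mount point below are hypothetical):

```
# /etc/fstab - example entry, assuming the array appears as /dev/sdb1
/dev/sdb1  /data  xfs  defaults,nobarrier  0 0
```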
&lt;br /&gt;
== Q. Which settings does my RAID controller need? ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s hard to tell because there are so many controllers. Please consult your RAID controller documentation to determine how to change these settings, but we try to give an overview here:&lt;br /&gt;
&lt;br /&gt;
Real RAID controllers (not those found onboard mainboards) normally have a battery backed cache (or an [http://en.wikipedia.org/wiki/Electric_double-layer_capacitor ultracapacitor] + flash memory &amp;quot;[http://www.tweaktown.com/articles/2800/adaptec_zero_maintenance_cache_protection_explained/ zero maintenance cache]&amp;quot;) which is used to buffer writes and improve speed. Even if the controller cache is battery backed, the individual hard disk write caches need to be turned off, as they are not protected from a power failure and will simply lose all contents in that case.&lt;br /&gt;
&lt;br /&gt;
* Onboard RAID controllers: there are so many different types that it is hard to generalize. Typically these controllers have no cache of their own but leave the hard disk write caches on. That can lead to a bad situation: after a power failure with RAID-1, when only parts of the disk caches have been written out, the controller does not even notice that the disks are out of sync - the disks can reorder cached blocks, so both might have saved the superblock information but lost different data contents. So, turn off the disk write caches before using the RAID function.&lt;br /&gt;
&lt;br /&gt;
* 3ware: /cX/uX set cache=off, see http://www.3ware.com/support/UserDocs/CLIGuide-9.5.1.1.pdf , page 86&lt;br /&gt;
&lt;br /&gt;
* Adaptec: allows setting individual drives&#039; caches&lt;br /&gt;
arcconf setcache &amp;lt;disk&amp;gt; wb|wt&lt;br /&gt;
wb=write back (write cache on); wt=write through (write cache off). So &amp;quot;wt&amp;quot; should be chosen.&lt;br /&gt;
&lt;br /&gt;
* Areca: In archttp under &amp;quot;System Controls&amp;quot; -&amp;gt; &amp;quot;System Configuration&amp;quot; there&#039;s the option &amp;quot;Disk Write Cache Mode&amp;quot; (defaults &amp;quot;Auto&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Off&amp;quot;: disk write cache is turned off&lt;br /&gt;
&lt;br /&gt;
&amp;quot;On&amp;quot;: disk write cache is enabled; this is fast but not safe for your data&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Auto&amp;quot;: if a BBM (battery backup module - which you really should use if you care about your data) is present, the controller automatically turns the disk write caches off to protect your data. If no BBM is attached, the controller switches to &amp;quot;On&amp;quot;: since neither the controller cache nor the disk cache is safe at that point, it assumes you favor speed over safety (which is what you get).&lt;br /&gt;
&lt;br /&gt;
That is a very sensible default, so you can leave it on &amp;quot;Auto&amp;quot; or enforce &amp;quot;Off&amp;quot; to be sure.&lt;br /&gt;
&lt;br /&gt;
* LSI MegaRAID: allows setting individual disks&#039; caches:&lt;br /&gt;
MegaCli -AdpCacheFlush -aN|-a0,1,2|-aALL -EnDskCache|DisDskCache&lt;br /&gt;
&lt;br /&gt;
* Xyratex: from the docs: &amp;quot;Write cache includes the disk drive cache and controller cache.&amp;quot; That means the drive caches and the unit cache can only be set together. To protect your data, turn it off - but write performance will suffer badly, as the controller write cache is disabled as well.&lt;br /&gt;
&lt;br /&gt;
== Q: Which settings are best with virtualization like VMware, XEN, qemu? ==&lt;br /&gt;
&lt;br /&gt;
The biggest problem is that those products seem to virtualize disk&lt;br /&gt;
writes in a way that defeats even barriers, which means even an fsync&lt;br /&gt;
is not reliable. Tests confirm that unplugging the power from such a&lt;br /&gt;
system can destroy a database inside the virtual machine (guest, domU,&lt;br /&gt;
whatever you call it) - even with a battery backed RAID controller cache&lt;br /&gt;
and the hard disk caches turned off, a setup that is safe on a normal host.&lt;br /&gt;
&lt;br /&gt;
In qemu you can specify cache=off on the option defining the virtual&lt;br /&gt;
disk. For other products this information is missing.&lt;br /&gt;
&lt;br /&gt;
== Q: What is the issue with directory corruption in Linux 2.6.17? ==&lt;br /&gt;
&lt;br /&gt;
In the Linux kernel 2.6.17 release a subtle bug was accidentally introduced into the XFS directory code by some &amp;quot;sparse&amp;quot; endian annotations. This bug was sufficiently uncommon (it only affects a certain type of format change, in Node or B-Tree format directories, and only in certain situations) that it was not detected during our regular regression testing, but it has been observed in the wild by a number of people now.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: the fix is included in 2.6.17.7 and later kernels.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To add insult to injury, &#039;&#039;&#039;xfs_repair(8)&#039;&#039;&#039; is currently not correcting these directories on detection of this corrupt state either. This &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; issue is actively being worked on, and a fixed version will be available shortly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; is now available; version 2.8.10 or later of the xfsprogs package contains the fixed version.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
No other kernel versions are affected. However, using a corrupt filesystem on other kernels can still result in the filesystem being shut down if the problem has not been rectified (on disk), making it seem as though other kernels are affected.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;xfs_check&#039;&#039;&#039; tool, or &#039;&#039;&#039;xfs_repair -n&#039;&#039;&#039;, should be able to detect any directory corruption.&lt;br /&gt;
&lt;br /&gt;
Until a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; binary is available, one can make use of the &#039;&#039;&#039;xfs_db(8)&#039;&#039;&#039; command to mark the problem directory for removal (see the example below). A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; invocation will remove the directory and move all contents into &amp;quot;lost+found&amp;quot;, named by inode number (see second example on how to map inode number to directory entry name, which needs to be done _before_ removing the directory itself). The inode number of the corrupt directory is included in the shutdown report issued by the kernel on detection of directory corruption. Using that inode number, this is how one would ensure it is removed:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 core.mode = 040755&lt;br /&gt;
 core.version = 2&lt;br /&gt;
 core.format = 3 (btree)&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; write core.mode 0&lt;br /&gt;
 xfs_db&amp;amp;gt; quit&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; will clear the directory, and add new entries (named by inode number) in lost+found.&lt;br /&gt;
&lt;br /&gt;
The easiest way to map inode numbers to full paths is via &#039;&#039;&#039;xfs_ncheck(8)&#039;&#039;&#039;&amp;lt;nowiki&amp;gt;: &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_ncheck -i 14101 -i 14102 /dev/sdXXX&lt;br /&gt;
       14101 full/path/mumble_fratz_foo_bar_1495&lt;br /&gt;
       14102 full/path/mumble_fratz_foo_bar_1494&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Should this not work, we can manually map inode numbers in a B-Tree format directory by taking the following steps:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 ...&lt;br /&gt;
 next_unlinked = null&lt;br /&gt;
 u.bmbt.level = 1&lt;br /&gt;
 u.bmbt.numrecs = 1&lt;br /&gt;
 u.bmbt.keys[1] = [startoff] 1:[0]&lt;br /&gt;
 u.bmbt.ptrs[1] = 1:3628&lt;br /&gt;
 xfs_db&amp;amp;gt; fsblock 3628&lt;br /&gt;
 xfs_db&amp;amp;gt; type bmapbtd&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 magic = 0x424d4150&lt;br /&gt;
 level = 0&lt;br /&gt;
 numrecs = 19&lt;br /&gt;
 leftsib = null&lt;br /&gt;
 rightsib = null&lt;br /&gt;
 recs[1-19] = [startoff,startblock,blockcount,extentflag]&lt;br /&gt;
        1:[0,3088,4,0] 2:[4,3128,8,0] 3:[12,3308,4,0] 4:[16,3360,4,0]&lt;br /&gt;
        5:[20,3496,8,0] 6:[28,3552,8,0] 7:[36,3624,4,0] 8:[40,3633,4,0]&lt;br /&gt;
        9:[44,3688,8,0] 10:[52,3744,4,0] 11:[56,3784,8,0]&lt;br /&gt;
        12:[64,3840,8,0] 13:[72,3896,4,0] 14:[33554432,3092,4,0]&lt;br /&gt;
        15:[33554436,3488,8,0] 16:[33554444,3629,4,0]&lt;br /&gt;
        17:[33554448,3748,4,0] 18:[33554452,3900,4,0]&lt;br /&gt;
        19:[67108864,3364,4,0]&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point we are looking at the extents that hold all of the directory information. There are three types of extent here: the data blocks (extents 1 through 13 above), the leaf blocks (extents 14 through 18), and the freelist blocks (extent 19 above). The jumps in the first field (start offset) indicate our progression through each of the three types. For recovering file names, we are only interested in the data blocks, so we can now feed those offset numbers into the &#039;&#039;&#039;xfs_db&#039;&#039;&#039; dblock command. So, for the fifth extent - 5:[20,3496,8,0] - listed above:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; dblock 20&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 dhdr.magic = 0x58443244&lt;br /&gt;
 dhdr.bestfree[0].offset = 0&lt;br /&gt;
 dhdr.bestfree[0].length = 0&lt;br /&gt;
 dhdr.bestfree[1].offset = 0&lt;br /&gt;
 dhdr.bestfree[1].length = 0&lt;br /&gt;
 dhdr.bestfree[2].offset = 0&lt;br /&gt;
 dhdr.bestfree[2].length = 0&lt;br /&gt;
 du[0].inumber = 13937&lt;br /&gt;
 du[0].namelen = 25&lt;br /&gt;
 du[0].name = &amp;quot;mumble_fratz_foo_bar_1595&amp;quot;&lt;br /&gt;
 du[0].tag = 0x10&lt;br /&gt;
 du[1].inumber = 13938&lt;br /&gt;
 du[1].namelen = 25&lt;br /&gt;
 du[1].name = &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;&lt;br /&gt;
 du[1].tag = 0x38&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
So, here we can see that inode number 13938 matches up with name &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;. Iterate through all the extents, and extract all the name-to-inode-number mappings you can, as these will be useful when looking at &amp;quot;lost+found&amp;quot; (once &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; has removed the corrupt directory).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Q: Why does my &amp;gt; 2TB XFS partition disappear when I reboot ? ==&lt;br /&gt;
&lt;br /&gt;
Strictly speaking this is not an XFS problem.&lt;br /&gt;
&lt;br /&gt;
To support &amp;gt; 2TB partitions you need two things: a kernel that supports large block devices (&amp;lt;tt&amp;gt;CONFIG_LBD=y&amp;lt;/tt&amp;gt;) and a partition table format that can hold large partitions.  The default DOS partition tables don&#039;t.  The best partition format for&lt;br /&gt;
&amp;gt; 2TB partitions is the EFI GPT format (&amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Without CONFIG_LBD=y you can&#039;t even create the filesystem, but without &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt; it works fine until you reboot at which point the partition will disappear.  Note that you need to enable the &amp;lt;tt&amp;gt;CONFIG_PARTITION_ADVANCED&amp;lt;/tt&amp;gt; option before you can set &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Q: Why do I receive &amp;lt;tt&amp;gt;No space left on device&amp;lt;/tt&amp;gt; after &amp;lt;tt&amp;gt;xfs_growfs&amp;lt;/tt&amp;gt;? ==&lt;br /&gt;
&lt;br /&gt;
After [http://oss.sgi.com/pipermail/xfs/2009-January/039828.html growing an XFS filesystem], df(1) may show plenty of free space, but attempts to write to the filesystem result in -ENOSPC. To fix this, [http://oss.sgi.com/pipermail/xfs/2009-January/039835.html Dave Chinner advised]:&lt;br /&gt;
&lt;br /&gt;
  The only way to fix this is to move data around to free up space&lt;br /&gt;
  below 1TB. Find your oldest data (i.e. that was around before even&lt;br /&gt;
  the first grow) and move it off the filesystem (move, not copy).&lt;br /&gt;
  Then if you copy it back on, the data blocks will end up above 1TB&lt;br /&gt;
  and that should leave you with plenty of space for inodes below 1TB.&lt;br /&gt;
  &lt;br /&gt;
  A complete dump and restore will also fix the problem ;)&lt;br /&gt;
&lt;br /&gt;
Also, you can add &#039;inode64&#039; to your mount options to allow inodes to live above 1TB.&lt;br /&gt;
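The option can be made permanent in /etc/fstab (the device name and mount point below are hypothetical):

```
# /etc/fstab - example entry allowing inode allocation above 1TB
/dev/sdb1  /data  xfs  defaults,inode64  0 0
```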
&lt;br /&gt;
== Q: Does mounting with noatime and/or nodiratime give any performance benefit on XFS (or does omitting them cost performance)? ==&lt;br /&gt;
See: http://everything2.com/index.pl?node_id=1479435&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User:Christian&amp;diff=2013</id>
		<title>User:Christian</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User:Christian&amp;diff=2013"/>
		<updated>2009-06-06T22:59:31Z</updated>

		<summary type="html">&lt;p&gt;Christian: New page: lists at nerdbynature dot de&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;lists at nerdbynature dot de&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Ideas_for_XFS&amp;diff=2012</id>
		<title>Ideas for XFS</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Ideas_for_XFS&amp;diff=2012"/>
		<updated>2009-06-06T04:07:55Z</updated>

		<summary type="html">&lt;p&gt;Christian: dunno who put the ida on this page, but I think s/he refers to http://oss.sgi.com/pipermail/xfs/2009-May/041379.html&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Future Directions for XFS&lt;br /&gt;
&lt;br /&gt;
Dave Chinner ideas:&lt;br /&gt;
&lt;br /&gt;
* [[ Improving inode Caching ]]&lt;br /&gt;
&lt;br /&gt;
* [[ Improving Metadata Performance By Reducing Journal Overhead ]]&lt;br /&gt;
&lt;br /&gt;
* [[ Reliable Detection and Repair of Metadata Corruption ]]&lt;br /&gt;
&lt;br /&gt;
Other ideas:&lt;br /&gt;
&lt;br /&gt;
* [[ Splitting project quota support from group quota support ]]&lt;br /&gt;
* [[ Assigning project quota to a linux container ]]&lt;br /&gt;
* [[ Support discarding of unused sectors ]]&lt;br /&gt;
* Superblock flag for when 64-bit inodes are present (see [http://oss.sgi.com/pipermail/xfs/2009-May/041379.html])&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_email_list_and_archives&amp;diff=2010</id>
		<title>XFS email list and archives</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_email_list_and_archives&amp;diff=2010"/>
		<updated>2009-05-29T03:06:03Z</updated>

		<summary type="html">&lt;p&gt;Christian: &amp;quot;The email interface is also available&amp;quot;?&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== XFS email list ==&lt;br /&gt;
Patches, comments, requests and questions should go to [mailto:xfs@oss.sgi.com xfs@oss.sgi.com]&lt;br /&gt;
&lt;br /&gt;
The list archives on oss.sgi.com are available [http://oss.sgi.com/archives/xfs here] and [http://oss.sgi.com/pipermail/xfs here] (pipermail).&lt;br /&gt;
&lt;br /&gt;
Other archives include:&lt;br /&gt;
&lt;br /&gt;
* [http://www.nabble.com/Xfs-f1029.html Nabble]&lt;br /&gt;
* [http://www.opensubscriber.com/messages/xfs@oss.sgi.com/topic.html OpenSubscriber]&lt;br /&gt;
* [http://archives.free.net.ph/list/linux-xfs.html archives.free.net.ph]&lt;br /&gt;
* [http://news.gmane.org/group/gmane.comp.file-systems.xfs.general Gmane]&lt;br /&gt;
&lt;br /&gt;
== Subscribing to the list ==&lt;br /&gt;
&lt;br /&gt;
The easiest method is to use the [http://oss.sgi.com/mailman/listinfo/xfs mailman web interface].&lt;br /&gt;
&lt;br /&gt;
Subscribing is also possible by sending an email with the body:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;subscribe&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
to [mailto:xfs-request@oss.sgi.com?body=subscribe xfs-request@oss.sgi.com]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Companies&amp;diff=2006</id>
		<title>XFS Companies</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Companies&amp;diff=2006"/>
		<updated>2009-05-13T01:48:55Z</updated>

		<summary type="html">&lt;p&gt;Christian: s/D0/DØ/&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== These are companies that either use XFS or have a product that utilizes XFS . ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Info gathered from: [http://oss.sgi.com/projects/xfs/users.html XFS Users] on [http://oss.sgi.com/ oss.sgi.com]&lt;br /&gt;
&lt;br /&gt;
== [http://www.sdss.org/ The Sloan Digital Sky Survey] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Sloan Digital Sky Survey is an ambitious effort to map one-quarter of the sky at optical and very-near infrared wavelengths and take spectra of 1 million extra-galactic objects. The estimated amount of data that will be acquired over the 5 year lifespan of the project is 15TB, however, the total amount of storage space required for object informational databases, corrected frames, and reduced spectra will be several factors more than this. The goal is to have all the data online and available to the collaborators at all times. To accomplish this goal we are using commodity, off the shelf (COTS) Intel servers with EIDE disks configured as RAID50 arrays using XFS. Currently, 14 machines are in production accounting for over 18TB. By the scheduled end of the survey in 2005, 50TB of XFS disks will be online serving SDSS data to collaborators and the public.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;For complete details and status of the project please see [http://www.sdss.org/ http://www.sdss.org]. For details of the storage systems, see the [http://home.fnal.gov/~yocum/storageServerTechnicalNote.html SDSS Storage Server Technical Note].&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www-d0.fnal.gov/  The DØ Experiment at Fermilab] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;At the DØ experiment at the Fermi National Accelerator Laboratory we have a ~150 node cluster of desktop machines all using the SGI-patched kernel. Every large disk (&amp;amp;gt;40Gb) or disk array in the cluster uses XFS including 4x640Gb disk servers and several 60-120Gb disks/arrays. Originally we chose reiserfs as our journaling filesystem, however, this was a disaster. We need to export these disks via NFS and this seemed perpetually broken in 2.4 series kernels. We switched to XFS and have been very happy. The only inconvenience is that it is not included in the standard kernel. The SGI guys are very prompt in their support of new kernels, but it is still an extra step which should not be necessary.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.ciprico.com/pDiMeda.shtml  Ciprico DiMeda NAS Solutions] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Ciprico DiMeda line of Network Attached Storage solutions combine the ease of connectivity of NAS with the SAN like performance levels required for digital media applications. The DiMeda 3600 provides high availability and high performance through dual NAS servers and redundant, scalable Fibre Channel RAID storage. The DiMeda 1700 provides high performance files services at a low price by using the latest Serial ATA RAID technology. All DiMeda systems are based on Linux and use XFS as the filesystem. We tested a number of filesystem alternatives and XFS was chosen because it provided the highest performance in digital media applications and the journaling feature ensures rapid failover in our dual node fault tolerant configurations.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.quantum.com/Products/NAS+Servers/Guardian+14000/Default.htm  The Quantum Guardian 14000] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Quantum Guardian 14000, the latest Network Attached Storage (NAS) solution from Quantum, delivers 1.4TB of enterprise-class storage for less than $25,000. The Guardian 14000 is a Linux-based device which utilizes XFS to provide a highly reliable journaling filesystem with simultaneous support for Windows, UNIX, Linux and Macintosh environments. As a dedicated appliance optimized for fast, reliable file sharing, the Guardian 14000 combines the simplicity of NAS with a robust feature set designed for the most demanding enterprise environments. Support for tools such as Active Directory Service (ADS), UNIX Network Information Service (NIS) and Simple Network Management Protocol (SNMP) provides ease of management and seamless integration. Hardware redundancy, Snapshots and StorageCare on-site service ensure security for business-critical data.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.bigstorage.com/products_approach_overview.html  BigStorage K2~NAS] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;At BigStorage we pride ourselves on tailoring our NAS systems to meet our customer&#039;s needs; with the help of XFS we are able to provide them with the most reliable Journaling Filesystem technology available. Our open systems approach, which allows for cross-platform integration, gives our customers the flexibility to grow with their data requirements. In addition, BigStorage offers a variety of other features including total hardware redundancy, snapshotting, replication and backups directly from the unit. All of our products include BigStorage&#039;s 24/7 LiveResponse support. With LiveResponse, we keep our team of experienced technical experts on call 24 hours a day, every day, to ensure that your storage investment remains online, all the time.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.echostar.com  Echostar DishPVR 721] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Echostar uses the XFS filesystem for its latest generation of satellite receivers, the DP721. Echostar chose XFS for its performance, stability and unique set of features.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS allowed us to meet a demanding requirement of recording two mpeg2 streams to the internal hard drive while simultaneously viewing a third pre-recorded stream. In addition, XFS allowed us to withstand unexpected power loss without filesystem corruption or user interaction.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We tested several other filesystems, but XFS emerged as the clear winner.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.sun.com/hardware/serverappliances/raq550/  Sun Cobalt RaQ 550] ==&lt;br /&gt;
&lt;br /&gt;
From the [http://www.sun.com/hardware/serverappliances/raq550/features.html features] page:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS is a journaling file system capable of quick fail over recovery after unexpected interruptions. XFS is an important feature for mission-critical applications as it ensures data integrity and dramatically reduces startup time by avoiding FSCK delay.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://pingu.salk.edu/  Center for Cytometry and Molecular Imaging at the Salk Institute] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I run the Center for Cytometry and Molecular Imaging at the Salk Institute in La Jolla, CA. We&#039;re a core facility for the Institute, offering flow cytometry, basic and deconvolution microscopy, phosphorimaging (radioactivity imaging) and fluorescent imaging.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I&#039;m currently in the process of migrating our data server to Linux/XFS. Our web server currently uses Linux/XFS. We have about 60 Gb on the data server which has a 100Gb SCSI RAID 5 array. This is a bit restrictive for our microscopists so in order that they can put more data online, I&#039;m adding another machine, also running Linux/XFS, with about 420 Gb of IDE-RAID5, based on Adaptec controllers....&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Servers are configured with quota and run Samba, NFS, and Netatalk for connectivity to the mixed bag of computers we have around here. I use the CVS XFS tree most of the time. I have not seen any problems in the several months I have been testing.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://coltex.nl/ Coltex Retail Group BV] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Coltex Retail group BV in the Netherlands uses Red Hat Linux with XFS for their main database server which collects the data from over 240 clothing retail stores throughout the Netherlands. Coltex depends on the availability of the server for over 100 employees in the main office for retrieval of logistical and sales figures. The database size is roughly 10GB, containing both historical and current data.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The entire production and logistical system depends on the availability of the system and downtime would mean a significant financial penalty. The speed and reliability of the XFS filesystem which has a proven track record and mature tools to go with it is fundamental to the availability of the system.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS has saved us a lot of time during testing and implementation. A long filesystems check is no longer needed when bad things happen when they do. The increased speed of our database system which is based on Progress 9.1C is also a nice benefit to this filesystem.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.dkp.com/ DKP Effects] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;re a 3D computer graphics/post-production house. We&#039;ve currently got four fileservers using XFS under Linux online - three 350GB servers and one 800GB server. The servers are under fairly heavy load - network load to and from the dual NICs on the box is basically maxed out 18 hours a day - and we do have occasional lockups and drive failures. Thanks to Linux SW RAID5 and XFS, though, we haven&#039;t had any data loss, or significant down time.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.epigenomics.com/ Epigenomics] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We currently have several IDE-to-SCSI-RAID systems with XFS in production. The largest has a capacity of 1.5TB, the other 2 have 430GB each.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Data stored on these filesystems is on the one hand &amp;quot;normal&amp;quot; home directories and corporate documents and on the other hand scientific data for our laboratory and IT department.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.incyte.com/ Incyte Genomics] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I&#039;m currently in the process of slowly converting 21 clusters totaling 2300+ processors over to XFS.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;These machines are running a fairly stock RH7.1+XFS. The application is our own custom scheduler for doing genomic research. We have one of the worlds largest sequencing labs which generates a tremendous amount of raw data. Vast amounts of CPU cycles must be applied to it to turn it into useful data we can then sell access to.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Currently, a minority of these machines are running XFS, but as I can get downtime on the clusters I am upgrading them to 7.1+XFS. When I&#039;m done, it&#039;ll be about 10TB of XFS goodness... across 9G disks mostly.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.monmouth.edu/ Monmouth University] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve replaced our NetApp filer (80GB, $40,000). NetApp ONTAP software [runs on NetApp filers] is basically an NFS and CIFS server with their own proprietary filesystem. We were quickly running out of space and our annual budget almost depleted. What were we to do?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;With an off-the-shelf Dell 4400 series server and 300GB of disks ($8,000 total), we were able to run Linux and Samba to emulate a NetApp filer.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS allowed us to manage 300GB of data with absolutely no downtime (now going on 79 days) since implementation. Gone are the days of fearing the fsck of 300GB.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.astro.wisc.edu  The University of Wisconsin Astronomy Department] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;At the University of Wisconsin Astronomy Department we have been using Linux XFS since the first release. We currently have 31 Linux boxes running XFS on all filesystems with about 2.6 TB of disk space on these machines. We use XFS primarily on our data reduction systems, but we also use it on our web server and on one of the remote observing machines at the WIYN 3.5m Telescope at Kitt Peak (http://www.noao.edu/wiyn/wiyn.html).&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We will likely be using Linux XFS at least in part on the GLIMPSE program (http://www.astro.wisc.edu/sirtf/) which will likely require several TB of disk space to process the data.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.amoa.org/ The Austin Museum of Art] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Austin Museum of Art has two file servers running RedHat 7.2_XFS upgraded from RedHat 7.1_XFS. Our webserver runs Domino on top of RedHat 7.3_XFS and we&#039;re getting about 70% better performance than the Domino server running on Windows 2000 Server. We&#039;re moving our workstations away from Windows and Microsoft Office to an LTSP server running on RedHat 7.3_XFS.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve become solely dependent on XFS for all of our data systems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.tecmath.com/ tecmath AG] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We use a production server with a 270 GB RAID 5 (hardware) disk array. It is based on a Suse 7.2 distribution, but with a standard 2.4.12 kernel with XFS and LVM patches. The server provides NFS to 8 Unix clients as well as Samba to about 80 PCs. The machine also runs Bind 9, Apache, Exim, DHCP, POP3, MySQL. I have tried out different configurations with ReiserFS, but I didn&#039;t manage to find a stable configuration with respect to NFS. Since I converted all disks to XFS some 3 months ago, we never had any filesystem-related problems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.theiqgroup.com/ The IQ Group] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Here at the IQ Group, Inc. we use XFS for all our production and development servers.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Our OS of choice is Slackware Linux 8.0. Our hardware of choice is Dell and VALinux servers.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;As for applications, we run the standard Unix/Linux apps like Sendmail, Apache, BIND, DHCP, iptables, etc.; as well as Oracle 9i and Arkeia.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve been running XFS across the board for about 3 months now without a hitch (so far).&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Size-wise, our biggest server is about 40 GB, but that will be increasing substantially in the near future.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Our production servers are collocated so a journaled FS was a must. Reboots are quick and no human interaction is required like with a bad fsck on ext2. Additionally, our database servers gain additional integrity and robustness.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We originally chose XFS over ReiserFS and ext3 because of its age (it&#039;s been in production on SGI boxes for probably longer than all the other journaling FS&#039;s combined) and its speed appeared comparable as well.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.artsit.usyd.edu.au  Arts IT Unit, Sydney University] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I&#039;ve got XFS on a &#039;production&#039; file server. The machine could have up to 500 people logged in, but typically less than 200. Most are Mac users, connected via NetAtalk for &#039;personal files&#039;, although there are shared areas for admin units. Probably about 30-40 Windows users (Samba). It&#039;s the file server for an academic faculty at a university.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Hardware RAID, via Mylex dual channel controller with 4 drives, Intel Tupelo MB, Intel &#039;SC5000&#039; server chassis with redundant power and hot-swap scsi bays. The system boots off a non RAID single 9gb UW-scsi drive.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Only system &#039;crash&#039; was caused by someone accidentally unplugging it, just before we put it into production. It was back in full operation within 5 minutes. Without journaling, the fsck would have taken well over an hour. In day to day use it has run well.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://structbio.vanderbilt.edu/comp/  Vanderbilt University Center for Structural Biology] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I run a high-performance computing center for Structural Biology research at Vanderbilt University. We use XFS extensively, and have been since the late prerelease versions. I&#039;ve had nothing but good experiences with it.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We began using XFS in our search for a good solution for our RAID fileservers. We had such good experiences with it on these systems that we&#039;ve begun putting it on the root/usr/var partitions of every Linux system we run here. I even have it on my laptop these days. XFS in combination with the 2.4 NFS3 implementation performs very well for us, and we have great uptimes on these systems (Our 750GB ArenaII setup is at 143 days right now).&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;All told, we&#039;ve got about 1.2TB of XFS filesystems spinning right now. It&#039;s spread out across maybe a dozen or so filesystems and will continue to increase as we are growing fast and that&#039;s all we use now. Next up is putting it on our 17-node Linux cluster, which will bring that up to 1.5TB spread across 30 filesystems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I, for one, would LOVE to see XFS make it into the kernel tree. From my perspectives, it&#039;s one of the best things to happen to Linux in the 7 years I&#039;ve been using/administering it.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== 2008 Update ====&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve since moved our main home directories to a proprietary NAS, but continue to use XFS on 10TB of LVM storage for doing backup-to-disk from the same NAS&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www-cdf.fnal.gov/  CDF Experiment at Fermi National Lab] ==&lt;br /&gt;
&lt;br /&gt;
CDF, an elementary particle physics experiment at Fermi National Lab, is using XFS for all our cache disks.&lt;br /&gt;
&lt;br /&gt;
The usage model is that we have a PB tape archive (2 STK silos) as permanent storage. In front of this archive we are deploying a roughly 100TB disk cache system. The cache is made up of 50 2TB file servers based on cheap commodity hardware (3ware-based hardware RAID using IDE drives). The data is then processed by a cluster of 300 dual-CPU Linux PCs. The cache software is dCache, a DESY/FNAL product.&lt;br /&gt;
&lt;br /&gt;
The whole system is used by more than 300 active users from all over the world for batch processing for their physics data analysis.&lt;br /&gt;
&lt;br /&gt;
== [http://www.get2chip.com  Get2Chip, Inc.] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We are using XFS on 3 production file servers with approximately 1.5T of data. Quite impressive, especially when we had a power outage and all three servers shut down. All servers came back up in minutes with no problems! We are looking at creating two more servers that would manage 2+ TB of data store.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.lando.co.za  Lando International Group Technologies] ==&lt;br /&gt;
&lt;br /&gt;
Lando International Group Technologies is the home of:&lt;br /&gt;
&lt;br /&gt;
* [www.lando.co.za Lando Technologies Africa (Pty) Ltd] - Internet Service Provider&lt;br /&gt;
* [www.lbsd.net Linux Based Systems Design] (Article 21). Not-For-Profit company established to provide free Linux distributions and programs.&lt;br /&gt;
* Cell Park South Africa (Pty) Ltd. RSA Pat Appln 2001/10406. Collecting parking fees by means of cell phone SMS or voice.&lt;br /&gt;
* Read Plus Education (Pty) Ltd. Software based reading skills training and testing for ages 4 to 100.&lt;br /&gt;
* Mobivan. Mobile office including Internet access, fax, copying, printing, telephone, collection and delivery services, legal services, pre-paid phone and electricity services, bill payment, email, secretarial services, training facilities and management services.&lt;br /&gt;
* Lando International Marketing Agency. Direct marketing services, design and supply of promotional material, consulting, sourcing of capital and other funding.&lt;br /&gt;
* Illico. Software development and systems analysis on most platforms.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Throughout these companies, we use the XFS filesystem with [http://idms.lbsd.net IDMS Linux] on high-end Intel servers, with an average of 100 GB storage each. XFS stores our customer and user data, including credit card details, mail, routing tables, etc.. We have not had one problem since the release of the first XFS patch.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.fcb-wilkens.com  Foote, Cone, &amp;amp;amp; Belding] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We are an advertisement company in Germany, and the use of the XFS filesystem is a story of success for us. In our Hamburg office, we have two file servers having a 420 Gig RAID in XFS format serving (almost) all our data to about 180 Macs and about 30 PCs using Samba and Netatalk. Some of the data is used in our offices in Frankfurt and Berlin, and in fact the Berlin office is just getting its own 250 Gig fileserver (using XFS) right now.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The general success with XFS has led us to switch over all our Linux servers to run on XFS as well (with the exception of two systems that are tied to tight specifications configuration wise). XFS, even the old 1.0 version, has happily taken on various abuse - broken SCSI controllers, broken RAID systems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.moving-picture.co.uk/  Moving Picture Company] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We here at MPC use XFS/RedHat 7.2 on all of our graphics-workstations and file-servers. More info can be found in an [http://www.linuxuser.co.uk/articles/issue20/lu20-Linux_at_work-In_the_picture.pdf  article] LinuxUser magazine did on us recently.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.coremetrics.com/  Coremetrics, Inc.] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We are currently using XFS for 25+ production web-servers, ~900GB Oracle db servers, with potentially 15+ more servers by mid 2003, with ~900GB+ databases. All XFS installed.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Also, our dev environment, except for the Sun boxes which all are being migrated to X86 in the aforementioned server additions, plus the dev Sun boxes as well, are all x86 dual proc servers running Oracle, application servers, or web services as needed. All servers run XFS from images we&#039;ve got on our SystemImager servers.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;All production back-end servers are connected via FC1 or FC2 to a SAN containing ~13TB of raw storage, which will soon be converted from VxFS to XFS with the migration of Oracle to our x86 platforms.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://evolt.org Evolt.org] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;evolt.org, a world community for web developers promoting the mutual free exchange of ideas, skills and experiences, has had a great deal of success using XFS. Our primary webserver which serves 100K hosts/month, primary Oracle database with ~25Gb of data, and free member hosting for 1000 users haven&#039;t had a minute of downtime since XFS has been installed. Performance has been spectacular and maintenance a breeze.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font size=&amp;quot;-1&amp;quot;&amp;gt; &#039;&#039;All testimonials on this page represent the views of the submitters, and references to other products and companies should not be construed as an endorsement by either the organizations profiled, or by SGI. All trademarks (r) their respective owners.&#039;&#039; &amp;lt;/font&amp;gt;&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Current_events&amp;diff=2001</id>
		<title>Current events</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Current_events&amp;diff=2001"/>
		<updated>2009-04-17T16:23:51Z</updated>

		<summary type="html">&lt;p&gt;Christian: Redirecting to XFS Status Updates&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[XFS_Status_Updates]]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Current_events&amp;diff=2000</id>
		<title>Current events</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Current_events&amp;diff=2000"/>
		<updated>2009-04-17T16:22:27Z</updated>

		<summary type="html">&lt;p&gt;Christian: Redirecting to XFS.org:Current events&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[XFS.org:Current_events]]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Getting_the_latest_source_code&amp;diff=1997</id>
		<title>Getting the latest source code</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Getting_the_latest_source_code&amp;diff=1997"/>
		<updated>2009-04-16T00:54:40Z</updated>

		<summary type="html">&lt;p&gt;Christian: XFS merged in 2.4 announcement added&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt; XFS Released/Stable source &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Mainline kernels&#039;&#039;&#039;&amp;lt;br /&amp;gt; XFS has been maintained in the official Linux kernel [http://www.kernel.org/ kernel trees] starting with [http://lkml.org/lkml/2003/12/8/35 Linux 2.4] and is frequently updated with the latest stable fixes and features from the SGI XFS development team.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Vendor kernels&#039;&#039;&#039;&amp;lt;br /&amp;gt; All modern Linux distributions include support for XFS. SGI actively works with [http://www.suse.com/  SUSE] to provide a supported version of XFS in that distribution.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;XFS userspace&#039;&#039;&#039;&amp;lt;br /&amp;gt; SGI also provides [ftp://oss.sgi.com/projects/xfs source code tarballs] of the XFS userspace tools. These tarballs form the basis of the xfsprogs packages found in Linux distributions.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt; Development and bleeding edge source &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
[[XFS git howto]]&lt;br /&gt;
&lt;br /&gt;
Development git trees&lt;br /&gt;
&lt;br /&gt;
Current XFS kernel source&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=summary xfs]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/xfs&amp;lt;/pre&amp;gt;&lt;br /&gt;
XFS user space tools&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfsprogs.git;a=summary xfsprogs]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/cmds/xfsprogs&amp;lt;/pre&amp;gt;&lt;br /&gt;
XFS dump&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfsdump.git;a=summary xfsdump]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/cmds/xfsdump&amp;lt;/pre&amp;gt;&lt;br /&gt;
XFS tests&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfstests.git;a=summary xfstests]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/cmds/xfstests&amp;lt;/pre&amp;gt;&lt;br /&gt;
DMAPI user space tools&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/dmapi.git;a=summary dmapi]&lt;br /&gt;
&amp;lt;pre&amp;gt;$ git clone git://oss.sgi.com/xfs/cmds/dmapi&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Git trees are automatically mirrored copies of the cvs trees, created using git-cvsimport.&lt;br /&gt;
Since git-cvsimport utilizes the tool cvsps to recreate the atomic ptools&lt;br /&gt;
commits, or &amp;quot;mods&amp;quot;, it is easier to see the entire change that was committed using git.&lt;br /&gt;
&lt;br /&gt;
git-cvsimport generated trees.&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=linux-2.6-xfs-from-cvs/.git;a=summary linux-2.6-xfs-from-cvs]&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs-cmds/.git;a=summary xfs-cmds]&lt;br /&gt;
&lt;br /&gt;
Before building in the xfsdump or dmapi directories (after building xfsprogs), you will need to run:&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd xfsprogs&lt;br /&gt;
# make install-dev&amp;lt;/pre&amp;gt;&lt;br /&gt;
to create /usr/include/xfs and install appropriate files there.&lt;br /&gt;
&lt;br /&gt;
Before building in the xfstests directory, you will need to run:&lt;br /&gt;
&amp;lt;pre&amp;gt;# cd xfsprogs&lt;br /&gt;
# make install-qa&amp;lt;/pre&amp;gt;&lt;br /&gt;
to install a somewhat larger set of files in /usr/include/xfs.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt;XFS cvs trees &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
The cvs trees were created using a script that converted SGI&#039;s internal&lt;br /&gt;
ptools repository to a cvs repository, so the cvs trees were considered read-only.&lt;br /&gt;
&lt;br /&gt;
At this point all new development is managed in the git trees, so the cvs trees&lt;br /&gt;
are no longer active in terms of current development and should only be used&lt;br /&gt;
for reference.&lt;br /&gt;
&lt;br /&gt;
[[XFS CVS howto]]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Status_Updates&amp;diff=1979</id>
		<title>XFS Status Updates</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Status_Updates&amp;diff=1979"/>
		<updated>2009-03-08T02:27:14Z</updated>

		<summary type="html">&lt;p&gt;Christian: s/be updates/be updated/&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== XFS status update for February 2009 ==&lt;br /&gt;
&lt;br /&gt;
In February various smaller fixes have been sent to Linus for 2.6.29,&lt;br /&gt;
including a revert of the faster vmap APIs which don&#039;t seem to be quite&lt;br /&gt;
ready yet on the VM side.  At the same time various patches have been&lt;br /&gt;
queued up for 2.6.30, with another big batch pending.  There also has&lt;br /&gt;
been a repost of the CRC patch series, including support for a new,&lt;br /&gt;
larger inode core.&lt;br /&gt;
&lt;br /&gt;
SGI released various bits of work in progress from former employees&lt;br /&gt;
that will be extremely helpful for the future development of XFS;&lt;br /&gt;
thanks a lot to Mark Goodwin for making this happen.&lt;br /&gt;
&lt;br /&gt;
On the userspace side the long awaited 3.0.0 releases of xfsprogs and&lt;br /&gt;
xfsdump finally happened early in the month, accompanied by a 2.2.9&lt;br /&gt;
release of the dmapi userspace.  There have been some issues with packaging&lt;br /&gt;
so a new minor release might follow soon.&lt;br /&gt;
&lt;br /&gt;
The xfs_irecover tool has been relicensed so that it can be merged into&lt;br /&gt;
the GPLv2 codebase of xfsprogs, but the actual integration work hasn&#039;t&lt;br /&gt;
happened yet.&lt;br /&gt;
&lt;br /&gt;
Important bits of XFS documentation that have been available on the XFS&lt;br /&gt;
website in PDF form have been released in the document source form under&lt;br /&gt;
the Creative Commons license so that they can be updated as a community&lt;br /&gt;
effort, and checked into a public git tree.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for January 2009 ==&lt;br /&gt;
&lt;br /&gt;
January has been an extremely busy month on the userspace front.  Many&lt;br /&gt;
smaller and medium updates went into xfsprogs, xfstests and to a lesser&lt;br /&gt;
extent xfsdump.  xfsprogs and xfsdump are ramping up for getting a 3.0.0&lt;br /&gt;
release out in early February which will include the first major re-sync&lt;br /&gt;
with the kernel code in libxfs, a cleanup of the exported library interfaces&lt;br /&gt;
and the move of two tools (xfs_fsr and xfs_estimate) from the xfsdump&lt;br /&gt;
package to xfsprogs.  After this the xfsprogs package will contain all&lt;br /&gt;
tools that use internal libxfs interfaces which fortunately equates to those&lt;br /&gt;
needed for normal administration.  The xfsdump package now only contains&lt;br /&gt;
the xfsdump/xfsrestore tools needed for backing up and restoring XFS&lt;br /&gt;
filesystems.  In addition it grew a fix to support dump/restore on systems&lt;br /&gt;
with a 64k page size.  A large number of acl/attr package patches was&lt;br /&gt;
posted to the list, but pending a possible split of these packages from the&lt;br /&gt;
XFS project these weren&#039;t processed yet.&lt;br /&gt;
&lt;br /&gt;
On the kernel side the big excitement in January was an in-memory corruption&lt;br /&gt;
introduced in the btree refactoring which hit people running 32bit platforms&lt;br /&gt;
without support for large block devices.  This issue was fixed and pushed&lt;br /&gt;
to the 2.6.29 development tree after a long collaborative debugging effort&lt;br /&gt;
at linux.conf.au.  Besides that about a dozen minor fixes were pushed to&lt;br /&gt;
2.6.29 and the first batch of misc patches for the 2.6.30 release cycle&lt;br /&gt;
was sent out.&lt;br /&gt;
&lt;br /&gt;
At the end of December the SGI group in Melbourne, which the previous&lt;br /&gt;
XFS maintainer and some other developers worked for, was closed down,&lt;br /&gt;
and they will be missed greatly.  As a result maintainership has been passed&lt;br /&gt;
on in a way that has been slightly controversial in the community, and the&lt;br /&gt;
first patchsets of work in progress from Melbourne have been posted to the list&lt;br /&gt;
to be picked up by others.&lt;br /&gt;
&lt;br /&gt;
The xfs.org wiki front page has received a little facelift, making it&lt;br /&gt;
a lot easier to read.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for December 2008 ==&lt;br /&gt;
&lt;br /&gt;
On Christmas Eve the 2.6.28 mainline kernel was released, with only minor XFS&lt;br /&gt;
bug fixes over 2.6.27.&lt;br /&gt;
&lt;br /&gt;
On the development side December has been a busy but unspectacular month.&lt;br /&gt;
A lot of misc fixes and improvements have been sent out, tested and committed,&lt;br /&gt;
especially on the userland side.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for November 2008 ==&lt;br /&gt;
&lt;br /&gt;
The mainline kernel is now at 2.6.28-rc6 and includes a small number of&lt;br /&gt;
XFS fixes.  There have been no updates to the XFS development tree during&lt;br /&gt;
November.  With no new regressions, the large number of changes that&lt;br /&gt;
missed 2.6.28 has thus stabilized, ready for 2.6.29.  In the meantime&lt;br /&gt;
kernel-side development has been slow, with the only major patch set&lt;br /&gt;
being a broad set of fixes to the compatibility of 32 bit ioctls on&lt;br /&gt;
a 64 bit kernel.&lt;br /&gt;
&lt;br /&gt;
In the meantime there has been a large number of commits to the user space&lt;br /&gt;
tree, most of which consist of smaller fixes.  xfsprogs is getting close&lt;br /&gt;
to its 3.0.0 release, which will be the first full resync with the&lt;br /&gt;
kernel sources since 2005.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for October 2008 ==&lt;br /&gt;
&lt;br /&gt;
Linux 2.6.27 was released with all the bits covered in last month&#039;s report.  It&lt;br /&gt;
did however miss two important fixes for regressions that a few people hit.&lt;br /&gt;
2.6.27.3 or later are recommended for use with XFS.&lt;br /&gt;
&lt;br /&gt;
In the meantime the generic btree implementation, the sync reorganization&lt;br /&gt;
and, after a lot of merge pain, the XFS and VFS inode unification hit the&lt;br /&gt;
development tree during the time allocated for the merge window.  No XFS&lt;br /&gt;
updates other than the two regression fixes also in 2.6.27.3 have made it&lt;br /&gt;
into mainline as of 2.6.28-rc3.&lt;br /&gt;
&lt;br /&gt;
The only new feature on the list in October is support for the fiemap&lt;br /&gt;
interface that has been added to the VFS during the 2.6.28 merge window.&lt;br /&gt;
However there was a lot of patch traffic consisting of fixes and respun&lt;br /&gt;
versions of previously known patches.  There still is a large backlog of&lt;br /&gt;
patches on the list that has not been applied to the development tree yet.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for September 2008 ==&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.27 still not released and only making slow progress from 2.6.27-rc5 to 2.6.27-rc8, XFS changes in mainline have been minimal in September, with only about half a dozen bug fix patches.&lt;br /&gt;
&lt;br /&gt;
In the meantime the generic btree patch set has been committed to the development tree, but not many other updates have landed yet. On the user space side xfsprogs 2.10.1 was released on September 5th with a number of important bug fixes. Following the release of xfsprogs 2.10.1 open season for development of the user space code has started. The first full update of the shared kernel / user space code in libxfs since 2005 has been committed. In addition to that the number of headers installed for the regular devel package has been reduced to the required minimum and support for checking the source code for endianness errors using sparse has been added.&lt;br /&gt;
&lt;br /&gt;
The patch sets to unify the XFS and Linux inode structures, and rewrite various bits of the sync code have seen various iterations on the XFS list, but haven&#039;t been committed yet. A first set of patches implementing CRCs for various metadata structures has been posted to the list.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for August 2008 ==&lt;br /&gt;
&lt;br /&gt;
With the 2.6.27-rc5 release the 2.6.27 cycle is nearing its end. The major XFS feature in 2.6.27-rc5 is support for case-insensitive file names. At this point it is still limited to 7bit ASCII file names, with updates for utf8 file names expected to follow later. In addition to that 2.6.27-rc5 fixes a long-standing problem with non-EABI ARM compilers, which pack some XFS data structures incorrectly. Besides this 2.6.27-rc5 also contains various cleanups, most notably the removal of the last bhv_vnode_t instances, and most uses of semaphores. As usual the diffstat for XFS from 2.6.26 to 2.6.27-rc5 is negative:&lt;br /&gt;
&lt;br /&gt;
       100 files changed, 3819 insertions(+), 4409 deletions(-)&lt;br /&gt;
&lt;br /&gt;
On the user space front a new minor xfsprogs version is about to be released containing various fixes including the user space part of arm packing fix.&lt;br /&gt;
&lt;br /&gt;
Work in progress on the XFS mailing list includes a large patch set to unify the alloc, inobt and bmap btree implementations into a single one that supports arbitrarily pluggable key and record formats. These btree changes are the first major preparation for adding CRC checks to all metadata structures in XFS. There is also an even larger patch set to unify the XFS and Linux inode structures, and perform all inode write back from the btree uses instead of an inode cache in XFS.&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_FAQ&amp;diff=1967</id>
		<title>XFS FAQ</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_FAQ&amp;diff=1967"/>
		<updated>2009-02-13T10:14:59Z</updated>

		<summary type="html">&lt;p&gt;Christian: links to git repo added&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Info from: [http://oss.sgi.com/projects/xfs/faq.html main XFS faq at SGI]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Many thanks to earlier maintainers of this document - Thomas Graichen and Seth Mos.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about XFS? ==&lt;br /&gt;
&lt;br /&gt;
The SGI XFS project page http://oss.sgi.com/projects/xfs/ is the definitive reference. It contains pointers to whitepapers, books, articles, etc.&lt;br /&gt;
&lt;br /&gt;
You could also join the [[XFS_email_list_and_archives|XFS mailing list]] or the &#039;&#039;&#039;&amp;lt;nowiki&amp;gt;#xfs&amp;lt;/nowiki&amp;gt;&#039;&#039;&#039; IRC channel on &#039;&#039;irc.freenode.net&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about ACLs? ==&lt;br /&gt;
&lt;br /&gt;
Andreas Gruenbacher maintains the Extended Attribute and POSIX ACL documentation for Linux at http://acl.bestbits.at/&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;acl(5)&#039;&#039;&#039; manual page is also quite extensive.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find information about the internals of XFS? ==&lt;br /&gt;
&lt;br /&gt;
An [training/index.html SGI XFS Training course] aimed at developers, triage and support staff, and serious users has been in development. Parts of the course are clearly still incomplete, but there is enough content to be useful to a broad range of users.&lt;br /&gt;
&lt;br /&gt;
Barry Naujok has documented the [papers/xfs_filesystem_structure.doc XFS ondisk format] which is a very useful reference.&lt;br /&gt;
&lt;br /&gt;
== Q: What partition type should I use for XFS on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Linux native filesystem (83).&lt;br /&gt;
&lt;br /&gt;
== Q: What mount options does XFS have? ==&lt;br /&gt;
&lt;br /&gt;
There are a number of mount options influencing XFS filesystems - refer to the &#039;&#039;&#039;mount(8)&#039;&#039;&#039; manual page or the documentation in the kernel source tree itself ([http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs.txt;hb=HEAD Documentation/filesystems/xfs.txt])&lt;br /&gt;
&lt;br /&gt;
== Q: Is there any relation between the XFS utilities and the kernel version? ==&lt;br /&gt;
&lt;br /&gt;
No, there is no relation. Newer utilities tend to mainly have fixes and checks the previous versions might not have. New features are also added in a backward compatible way - if they are enabled via mkfs, an incapable (old) kernel will recognize that it does not understand the new feature, and refuse to mount the filesystem.&lt;br /&gt;
&lt;br /&gt;
== Q: Does it run on platforms other than i386? ==&lt;br /&gt;
&lt;br /&gt;
XFS runs on all of the platforms that Linux supports. It is more tested on the more common platforms, especially the i386 family. It&#039;s also well tested on the IA64 platform since that&#039;s the platform SGI&#039;s Linux products use.&lt;br /&gt;
&lt;br /&gt;
== Q: Do quotas work on XFS? ==&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
To use quotas with XFS, you need to enable XFS quota support when you configure your kernel. You also need to specify quota support when mounting. You can get the Linux quota utilities at their sourceforge website [http://sourceforge.net/projects/linuxquota/  http://sourceforge.net/projects/linuxquota/] or use &#039;&#039;&#039;xfs_quota(8)&#039;&#039;&#039;.&lt;br /&gt;
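To have quota support enabled at every boot, the quota mount options can be put in /etc/fstab. A minimal sketch (the device and mount point here are hypothetical examples, not from this FAQ):

```
# /etc/fstab -- enable user and group quotas on an XFS filesystem
/dev/sdb1  /home  xfs  uquota,gquota  0 0
```

Limits and usage can then be inspected with xfs_quota(8).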
&lt;br /&gt;
== Q: Are there any dump/restore tools for XFS? ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039; and &#039;&#039;&#039;xfsrestore(8)&#039;&#039;&#039; are fully supported. The tape format is the same as on IRIX, so tapes are interchangeable between operating systems.&lt;br /&gt;
&lt;br /&gt;
== Q: Does LILO work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
This depends on where you install LILO.&lt;br /&gt;
&lt;br /&gt;
Yes, for MBR (Master Boot Record) installations.&lt;br /&gt;
&lt;br /&gt;
No, for root partition installations because the XFS superblock is written at block zero, where LILO would be installed. This is to maintain compatibility with the IRIX on-disk format, and will not be changed.&lt;br /&gt;
&lt;br /&gt;
== Q: Does GRUB work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
There is native XFS filesystem support for GRUB starting with version 0.91 and onward. Unfortunately, GRUB used to make incorrect assumptions about being able to read a block device image while a filesystem is mounted and actively being written to, which could cause intermittent problems when using XFS. This has reportedly since been fixed, and the 0.97 version (at least) of GRUB is apparently stable.&lt;br /&gt;
&lt;br /&gt;
== Q: Can XFS be used for a root filesystem? ==&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
== Q: Will I be able to use my IRIX XFS filesystems on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Yes. The on-disk format of XFS is the same on IRIX and Linux. Obviously, you should back up your data before trying to move it between systems. Filesystems must be &amp;quot;clean&amp;quot; when moved (i.e. unmounted). If you plan to use IRIX filesystems on Linux, keep the following points in mind: the kernel needs to have SGI partition support enabled; there is no XLV support in Linux, so you are unable to read IRIX filesystems which use the XLV volume manager; and not all blocksizes available on IRIX are available on Linux (only blocksizes less than or equal to the pagesize of the architecture are possible for now: 4k for i386, ppc, ...; 8k for alpha, sparc, ...). Make sure that the directory format is version 2 on the IRIX filesystems (this is the default since IRIX 6.5.5), as Linux can only read v2 directories.&lt;br /&gt;
&lt;br /&gt;
== Q: Is there a way to make a XFS filesystem larger or smaller? ==&lt;br /&gt;
&lt;br /&gt;
You can &#039;&#039;NOT&#039;&#039; make an XFS partition smaller online. The only way to shrink is to do a complete dump, mkfs and restore.&lt;br /&gt;
&lt;br /&gt;
An XFS filesystem may be enlarged by using &#039;&#039;&#039;xfs_growfs(8)&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
If using partitions, you need to have free space after this partition to do so. Remove the partition and recreate it larger with the &#039;&#039;exact same&#039;&#039; starting point, then run &#039;&#039;&#039;xfs_growfs&#039;&#039;&#039; to grow the filesystem into the new space. Note - editing partition tables is a dangerous pastime, so back up your filesystem before doing so.&lt;br /&gt;
&lt;br /&gt;
Using XFS filesystems on top of a volume manager makes this a lot easier.&lt;br /&gt;
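The partition-based procedure above can be sketched as a command sequence. The device name and mount point are hypothetical, and the commands touch a real disk, so this is an illustration rather than something to run verbatim; with a volume manager only the last step is needed:

```shell
# Grow /dev/sdb1 (mounted on /mnt/data) into free space after it.
# 1. Enlarge the partition, keeping the SAME starting point -- back up first!
#    (with fdisk: delete and recreate; newer parted versions have resizepart)
parted /dev/sdb resizepart 1 100%

# 2. Grow the filesystem; XFS grows online, so it must stay mounted.
xfs_growfs /mnt/data
```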
&lt;br /&gt;
== Q: What information should I include when reporting a problem? ==&lt;br /&gt;
&lt;br /&gt;
Things to include are which version of XFS you are using (if this is a CVS version, of what date) and the version of the kernel. If you have problems with userland packages, please report the version of the package you are using.&lt;br /&gt;
&lt;br /&gt;
If the problem relates to a particular filesystem, the output from the &#039;&#039;&#039;xfs_info(8)&#039;&#039;&#039; command and any &#039;&#039;&#039;mount(8)&#039;&#039;&#039; options in use will also be useful to the developers.&lt;br /&gt;
&lt;br /&gt;
If you experience an oops, please run it through &#039;&#039;&#039;ksymoops&#039;&#039;&#039; so that it can be interpreted.&lt;br /&gt;
&lt;br /&gt;
If you have a filesystem that cannot be repaired, make sure you have xfsprogs 2.9.0 or later and run &#039;&#039;&#039;xfs_metadump(8)&#039;&#039;&#039; to capture the metadata (which obfuscates filenames and attributes to protect your privacy) and make the dump available for someone to analyse.&lt;br /&gt;
&lt;br /&gt;
== Q: Mounting an XFS filesystem does not work - what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
If mount prints an error message something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
     mount: /dev/hda5 has wrong major or minor number&lt;br /&gt;
&lt;br /&gt;
you either do not have XFS compiled into the kernel (or you forgot to load the modules) or you did not use the &amp;quot;-t xfs&amp;quot; option on mount or the &amp;quot;xfs&amp;quot; option in &amp;lt;tt&amp;gt;/etc/fstab&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you get something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 mount: wrong fs type, bad option, bad superblock on /dev/sda1,&lt;br /&gt;
        or too many mounted file systems&lt;br /&gt;
&lt;br /&gt;
Refer to your system log file (&amp;lt;tt&amp;gt;/var/log/messages&amp;lt;/tt&amp;gt;) for a detailed diagnostic message from the kernel.&lt;br /&gt;
&lt;br /&gt;
== Q: Does the filesystem have an undelete capability? ==&lt;br /&gt;
&lt;br /&gt;
There is no undelete in XFS. Always keep backups.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I back up an XFS filesystem and ACLs? ==&lt;br /&gt;
&lt;br /&gt;
You can back up an XFS filesystem with utilities like &#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039;, or with standard &#039;&#039;&#039;tar(1)&#039;&#039;&#039; for regular files. If you want to back up ACLs you will need to use &#039;&#039;&#039;xfsdump&#039;&#039;&#039;, which is at the moment the only tool that supports backing up extended attributes. &#039;&#039;&#039;xfsdump&#039;&#039;&#039; can also be integrated with &#039;&#039;&#039;amanda(8)&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Q: I see applications returning error 990 or &amp;quot;Structure needs cleaning&amp;quot;, what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
The error 990 stands for [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=blob;f=fs/xfs/linux-2.6/xfs_linux.h#l145 EFSCORRUPTED] which usually means XFS has detected a filesystem metadata problem and has shut the filesystem down to prevent further damage. Also, since about June 2006, we [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=commit;h=da2f4d679c8070ba5b6a920281e495917b293aa0 converted from EFSCORRUPTED/990 over to using EUCLEAN], &amp;quot;Structure needs cleaning.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The cause can be pretty much anything, unfortunately - filesystem, virtual memory manager, volume manager, device driver, or hardware.&lt;br /&gt;
&lt;br /&gt;
There should be a detailed console message when this initially happens. The messages have important information giving hints to developers as to the earliest point that a problem was detected. It is there to protect your data.&lt;br /&gt;
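The mapping from the &quot;Structure needs cleaning&quot; message back to an errno value can be checked from a shell; the one-liner below uses python3 purely as a convenient errno lookup table (it is not part of any XFS tooling):

```shell
# EUCLEAN is errno 117 on Linux; the C library renders it as
# "Structure needs cleaning" -- the error applications see after a shutdown.
python3 -c 'import errno, os; print(errno.EUCLEAN, os.strerror(errno.EUCLEAN))'
```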
&lt;br /&gt;
== Q: Why do I see binary NULLS in some files after recovery when I unplugged the power? ==&lt;br /&gt;
&lt;br /&gt;
Update: This issue has been addressed with a CVS fix on the 29th March 2007 and merged into mainline on 8th May 2007 for 2.6.22-rc1.&lt;br /&gt;
&lt;br /&gt;
XFS journals metadata updates, not data updates. After a crash you are supposed to get a consistent filesystem which looks like the state sometime shortly before the crash, NOT what the in memory image looked like the instant before the crash.&lt;br /&gt;
&lt;br /&gt;
Since XFS does not write data out immediately unless you tell it to with fsync, an O_SYNC or O_DIRECT open (the same is true of other filesystems), you are looking at an inode which was flushed out, but whose data was not. Typically you&#039;ll find that the inode is not taking any space since all it has is a size but no extents allocated (try examining the file with the &#039;&#039;&#039;xfs_bmap(8)&#039;&#039;&#039; command).&lt;br /&gt;
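The need for an explicit flush can be demonstrated from a shell: GNU dd&#039;s conv=fsync flag forces the file data to disk before dd exits, which is the behaviour an application gets by calling fsync itself. A minimal sketch:

```shell
# Write a file and force its data (not just metadata) to disk before
# relying on it surviving a crash.  conv=fsync calls fsync(2) on the
# output file before dd exits.
tmp=$(mktemp)
printf 'important data' | dd of="$tmp" conv=fsync status=none
cat "$tmp"
rm -f "$tmp"
```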
&lt;br /&gt;
== Q: What is the problem with the write cache on journaled filesystems? ==&lt;br /&gt;
&lt;br /&gt;
Many drives use a write back cache in order to speed up the performance of writes.  However, there are conditions such as power failure when the write cache memory is never flushed to the actual disk.  Further, the drive can de-stage data from the write cache to the platters in any order that it chooses.  This causes problems for XFS and journaled filesystems in general because they rely on knowing when a write has completed to the disk. They need to know that the log information has made it to disk before allowing metadata to go to disk.  When the metadata makes it to disk then the transaction can effectively be deleted from the log resulting in movement of the tail of the log and thus freeing up some log space. So if the writes never make it to the physical disk, then the ordering is violated and the log and metadata can be lost, resulting in filesystem corruption.&lt;br /&gt;
&lt;br /&gt;
With hard disk cache sizes of currently (Jan 2009) up to 32MB, that can be a lot of valuable information.  In a RAID with 8 such disks this adds up to 256MB, and the chance of having filesystem metadata in the cache is so high that a power outage is very likely to cause major data loss.&lt;br /&gt;
&lt;br /&gt;
With a single hard disk and barriers turned on (on=default), the drive write cache is flushed before and after a barrier is issued.  A powerfail &amp;quot;only&amp;quot; loses data in the cache but no essential ordering is violated, and corruption will not occur.&lt;br /&gt;
&lt;br /&gt;
With a RAID controller with a battery backed controller cache and the cache in write back mode, you should turn off barriers - they are unnecessary in this case, and if the controller honors the cache flushes, they will be harmful to performance.  But then you *must* disable the individual hard disk write caches in order to keep the filesystem intact after a power failure. The method for doing this is different for each RAID controller. See the section about RAID controllers below.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I tell if I have the disk write cache enabled? ==&lt;br /&gt;
&lt;br /&gt;
For SCSI/SATA:&lt;br /&gt;
&lt;br /&gt;
* Look in dmesg(8) output for a driver line, such as:&amp;lt;br /&amp;gt; &amp;quot;SCSI device sda: drive cache: write back&amp;quot;&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# sginfo -c /dev/sda | grep -i &#039;write cache&#039; &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For PATA/SATA (although for SATA this only works on a recent kernel with ATA command passthrough):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -I /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; and look under &amp;quot;Enabled Supported&amp;quot; for &amp;quot;Write cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
== Q: How can I address the problem with the disk write cache? ==&lt;br /&gt;
&lt;br /&gt;
=== Disabling the disk write back cache. ===&lt;br /&gt;
&lt;br /&gt;
For SATA/PATA (IDE), although for SATA this only works on a recent kernel with ATA command passthrough:&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -W0 /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # hdparm -W0 /dev/hda&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# blktool /dev/sda wcache off&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # blktool /dev/hda wcache off&lt;br /&gt;
&lt;br /&gt;
For SCSI:&lt;br /&gt;
&lt;br /&gt;
* Using sginfo(8) which is a little tedious&amp;lt;br /&amp;gt; It takes 3 steps. For example:&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -c /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives a list of attribute names and values&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cX /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives an array of cache values which you must match up with from step 1, e.g.&amp;lt;br /&amp;gt; 0 0 0 1 0 1 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cXR /dev/sda 0 0 0 1 0 0 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; allows you to reset the value of the cache attributes.&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
For a SCSI disk this setting is persistent. However, for a SATA/PATA disk it needs to be redone after every reset, as the drive will fall back to its default of write cache enabled. A reset can happen after a reboot or on error recovery of the drive, which makes it rather difficult to guarantee that the write cache stays disabled.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Using an external log. ===&lt;br /&gt;
&lt;br /&gt;
Some people have considered the idea of using an external log on a separate drive with the write cache disabled and the rest of the file system on another disk with the write cache enabled. However, that will &#039;&#039;&#039;not&#039;&#039;&#039; solve the problem. For example, the tail of the log is moved when we are notified that a metadata write is completed to disk and we won&#039;t be able to guarantee that if the metadata is on a drive with the write cache enabled.&lt;br /&gt;
&lt;br /&gt;
In fact using an external log will disable XFS&#039; write barrier support.&lt;br /&gt;
&lt;br /&gt;
=== Write barrier support. ===&lt;br /&gt;
&lt;br /&gt;
Write barrier support is enabled by default in XFS since 2.6.17. It is disabled by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. Barrier support will flush the write back cache at the appropriate times (such as on XFS log writes). This is generally the recommended solution, however, you should check the system logs to ensure it was successful. Barriers will be disabled and reported in the log if any of these three scenarios occurs:&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported with external log device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported by the underlying device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, trial barrier write failed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
If the filesystem is mounted with an external log device then we currently don&#039;t support flushing to the data and log devices (this may change in the future). If the driver tells the block layer that the device does not support write cache flushing with the write cache enabled then it will report that the device doesn&#039;t support it. And finally we will actually test out a barrier write on the superblock and test its error state afterwards, reporting if it fails.&lt;br /&gt;
&lt;br /&gt;
== Q. Should barriers be enabled with storage which has a persistent write cache? ==&lt;br /&gt;
&lt;br /&gt;
Many hardware RAID controllers have a persistent write cache which is preserved across power failures, interface resets, system crashes, etc. Using write barriers in this instance is not recommended and will in fact lower performance. Therefore, it is recommended to turn off barrier support by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. But take care that the individual hard disk write caches are off.&lt;br /&gt;
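As an illustration, a fstab entry for a filesystem on battery-backed RAID storage might look like the following (device and mount point are hypothetical; remember to also disable the individual disk write caches on the controller):

```
# /etc/fstab -- battery-backed controller cache, disk write caches OFF
/dev/sdc1  /srv/data  xfs  nobarrier  0 0
```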
&lt;br /&gt;
== Q. Which settings does my RAID controller need? ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s hard to tell because there are so many controllers. Please consult your RAID controller documentation to determine how to change these settings, but we try to give an overview here:&lt;br /&gt;
&lt;br /&gt;
Real RAID controllers (not those found onboard mainboards) normally have a battery backed cache which is used for buffering writes to improve speed. Even though the controller cache is battery backed, the individual hard disk write caches need to be turned off, as they are not protected from a powerfail and will just lose all their contents in that case.&lt;br /&gt;
&lt;br /&gt;
* onboard RAID controllers: there are so many different types it&#039;s hard to tell. Generally, those controllers have no cache of their own, but leave the hard disk write caches on. That can lead to the bad situation that, after a powerfail with RAID-1 where only parts of the disk caches have been written, the controller doesn&#039;t even see that the disks are out of sync: the disks can reorder cached blocks and might both have saved the superblock info but lost different data contents. So, turn off disk write caches before using the RAID function.&lt;br /&gt;
&lt;br /&gt;
* 3ware: /cX/uX set cache=off, see http://www.3ware.com/support/UserDocs/CLIGuide-9.5.1.1.pdf , page 86&lt;br /&gt;
&lt;br /&gt;
* Adaptec: allows setting individual drives cache&lt;br /&gt;
arcconf setcache &amp;lt;disk&amp;gt; wb|wt&lt;br /&gt;
wb=write back, which means write cache on, wt=write through, which means write cache off. So &amp;quot;wt&amp;quot; should be chosen.&lt;br /&gt;
&lt;br /&gt;
* Areca: In archttp under &amp;quot;System Controls&amp;quot; -&amp;gt; &amp;quot;System Configuration&amp;quot; there&#039;s the option &amp;quot;Disk Write Cache Mode&amp;quot; (defaults &amp;quot;Auto&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Off&amp;quot;: disk write cache is turned off&lt;br /&gt;
&lt;br /&gt;
&amp;quot;On&amp;quot;: disk write cache is enabled; this is fast, but not safe for your data&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Auto&amp;quot;: If you use a BBM (battery backup module, which you really should use if you care about your data), the controller automatically turns the disk write caches off to protect your data. In case no BBM is attached, the controller switches to &amp;quot;On&amp;quot;, because neither the controller cache nor the disk cache is safe anyway, so you apparently don&#039;t care about your data and just want high speed (which you then get).&lt;br /&gt;
&lt;br /&gt;
That&#039;s a very sensible default, so you can leave it at &amp;quot;Auto&amp;quot; or enforce &amp;quot;Off&amp;quot; to be sure.&lt;br /&gt;
&lt;br /&gt;
* LSI MegaRAID: allows setting individual disks cache:&lt;br /&gt;
MegaCli -AdpCacheFlush -aN|-a0,1,2|-aALL -EnDskCache|DisDskCache&lt;br /&gt;
&lt;br /&gt;
* Xyratex: from the docs: &amp;quot;Write cache includes the disk drive cache and controller cache.&amp;quot; That means you can only set the drive caches and the unit caches together. To protect your data, turn it off; but write performance will suffer badly, as the controller write cache is disabled as well.&lt;br /&gt;
&lt;br /&gt;
== Q: What is the issue with directory corruption in Linux 2.6.17? ==&lt;br /&gt;
&lt;br /&gt;
In the Linux kernel 2.6.17 release a subtle bug was accidentally introduced into the XFS directory code by some &amp;quot;sparse&amp;quot; endian annotations. This bug was sufficiently uncommon (it only affects a certain type of format change, in Node or B-Tree format directories, and only in certain situations) that it was not detected during our regular regression testing, but it has been observed in the wild by a number of people now.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: the fix is included in 2.6.17.7 and later kernels.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To add insult to injury, &#039;&#039;&#039;xfs_repair(8)&#039;&#039;&#039; is currently not correcting these directories on detection of this corrupt state either. This &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; issue is actively being worked on, and a fixed version will be available shortly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; is now available; version 2.8.10 or later of the xfsprogs package contains the fixed version.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
No other kernel versions are affected. However, using a corrupt filesystem on other kernels can still result in the filesystem being shutdown if the problem has not been rectified (on disk), making it seem like other kernels are affected.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;xfs_check&#039;&#039;&#039; tool, or &#039;&#039;&#039;xfs_repair -n&#039;&#039;&#039;, should be able to detect any directory corruption.&lt;br /&gt;
&lt;br /&gt;
Until a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; binary is available, one can make use of the &#039;&#039;&#039;xfs_db(8)&#039;&#039;&#039; command to mark the problem directory for removal (see the example below). A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; invocation will remove the directory and move all contents into &amp;quot;lost+found&amp;quot;, named by inode number (see second example on how to map inode number to directory entry name, which needs to be done _before_ removing the directory itself). The inode number of the corrupt directory is included in the shutdown report issued by the kernel on detection of directory corruption. Using that inode number, this is how one would ensure it is removed:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 core.mode = 040755&lt;br /&gt;
 core.version = 2&lt;br /&gt;
 core.format = 3 (btree)&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; write core.mode 0&lt;br /&gt;
 xfs_db&amp;amp;gt; quit&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; will clear the directory, and add new entries (named by inode number) in lost+found.&lt;br /&gt;
&lt;br /&gt;
The easiest way to map inode numbers to full paths is via &#039;&#039;&#039;xfs_ncheck(8)&#039;&#039;&#039;&amp;lt;nowiki&amp;gt;: &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_ncheck -i 14101 -i 14102 /dev/sdXXX&lt;br /&gt;
       14101 full/path/mumble_fratz_foo_bar_1495&lt;br /&gt;
       14102 full/path/mumble_fratz_foo_bar_1494&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Should this not work, we can manually map inode numbers in a B-Tree format directory by taking the following steps:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 ...&lt;br /&gt;
 next_unlinked = null&lt;br /&gt;
 u.bmbt.level = 1&lt;br /&gt;
 u.bmbt.numrecs = 1&lt;br /&gt;
 u.bmbt.keys[1] = [startoff] 1:[0]&lt;br /&gt;
 u.bmbt.ptrs[1] = 1:3628&lt;br /&gt;
 xfs_db&amp;amp;gt; fsblock 3628&lt;br /&gt;
 xfs_db&amp;amp;gt; type bmapbtd&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 magic = 0x424d4150&lt;br /&gt;
 level = 0&lt;br /&gt;
 numrecs = 19&lt;br /&gt;
 leftsib = null&lt;br /&gt;
 rightsib = null&lt;br /&gt;
 recs[1-19] = [startoff,startblock,blockcount,extentflag]&lt;br /&gt;
        1:[0,3088,4,0] 2:[4,3128,8,0] 3:[12,3308,4,0] 4:[16,3360,4,0]&lt;br /&gt;
        5:[20,3496,8,0] 6:[28,3552,8,0] 7:[36,3624,4,0] 8:[40,3633,4,0]&lt;br /&gt;
        9:[44,3688,8,0] 10:[52,3744,4,0] 11:[56,3784,8,0]&lt;br /&gt;
        12:[64,3840,8,0] 13:[72,3896,4,0] 14:[33554432,3092,4,0]&lt;br /&gt;
        15:[33554436,3488,8,0] 16:[33554444,3629,4,0]&lt;br /&gt;
        17:[33554448,3748,4,0] 18:[33554452,3900,4,0]&lt;br /&gt;
        19:[67108864,3364,4,0]&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point we are looking at the extents that hold all of the directory information. There are three types of extent here: the data blocks (extents 1 through 13 above), the leaf blocks (extents 14 through 18), and the freelist blocks (extent 19 above). The jumps in the first field (the start offset) mark the transition from one type to the next. For recovering file names, only the data blocks are of interest, so those start offsets can now be fed into the &#039;&#039;&#039;xfs_db&#039;&#039;&#039; dblock command. For the fifth extent - 5:[20,3496,8,0] - listed above:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; dblock 20&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 dhdr.magic = 0x58443244&lt;br /&gt;
 dhdr.bestfree[0].offset = 0&lt;br /&gt;
 dhdr.bestfree[0].length = 0&lt;br /&gt;
 dhdr.bestfree[1].offset = 0&lt;br /&gt;
 dhdr.bestfree[1].length = 0&lt;br /&gt;
 dhdr.bestfree[2].offset = 0&lt;br /&gt;
 dhdr.bestfree[2].length = 0&lt;br /&gt;
 du[0].inumber = 13937&lt;br /&gt;
 du[0].namelen = 25&lt;br /&gt;
 du[0].name = &amp;quot;mumble_fratz_foo_bar_1595&amp;quot;&lt;br /&gt;
 du[0].tag = 0x10&lt;br /&gt;
 du[1].inumber = 13938&lt;br /&gt;
 du[1].namelen = 25&lt;br /&gt;
 du[1].name = &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;&lt;br /&gt;
 du[1].tag = 0x38&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
Here we can see that inode number 13938 corresponds to the name &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;. Iterate through all the data extents and extract every name-to-inode-number mapping you can; these will be useful when looking at &amp;quot;lost+found&amp;quot; (once &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; has removed the corrupt directory).&lt;br /&gt;
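Collecting these mappings by hand quickly gets tedious. If the &#039;&#039;&#039;xfs_db&#039;&#039;&#039; output for each data block is saved to a file, a small awk script can pull out the pairs. This is only a sketch: the capture file name (dblock.txt) is hypothetical, and the script assumes the &amp;quot;du[N].inumber&amp;quot; / &amp;quot;du[N].name&amp;quot; line format shown above.&lt;br /&gt;

```shell
# Sketch: extract inode-number/name pairs from saved "xfs_db ... print"
# output of a directory data block.  The sample input below is the du[N]
# output captured in the session above; in practice, save the real output
# instead (e.g. xfs_db -r -c 'inode NNN' -c 'dblock 20' -c print /dev/sdXXX).
cat > dblock.txt <<'EOF'
du[0].inumber = 13937
du[0].namelen = 25
du[0].name = "mumble_fratz_foo_bar_1595"
du[0].tag = 0x10
du[1].inumber = 13938
du[1].namelen = 25
du[1].name = "mumble_fratz_foo_bar_1594"
du[1].tag = 0x38
EOF

# Remember the inode number from each du[N].inumber line, then emit it
# together with the (unquoted) name from the matching du[N].name line.
awk -F' = ' '
  /^ *du\[[0-9]+\]\.inumber/ { ino = $2 }
  /^ *du\[[0-9]+\]\.name = / { gsub(/"/, "", $2); print ino "\t" $2 }
' dblock.txt
```

Run this once per data block (one dblock/print pair each), and concatenate the results into your master name-to-inode list.&lt;br /&gt;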
&lt;br /&gt;
&lt;br /&gt;
== Q: Why does my &amp;gt; 2TB XFS partition disappear when I reboot? ==&lt;br /&gt;
&lt;br /&gt;
Strictly speaking this is not an XFS problem.&lt;br /&gt;
&lt;br /&gt;
To support &amp;gt; 2TB partitions you need two things: a kernel that supports large block devices (&amp;lt;tt&amp;gt;CONFIG_LBD=y&amp;lt;/tt&amp;gt;) and a partition table format that can hold large partitions.  The default DOS partition tables don&#039;t.  The best partition format for&lt;br /&gt;
&amp;gt; 2TB partitions is the EFI GPT format (&amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Without &amp;lt;tt&amp;gt;CONFIG_LBD=y&amp;lt;/tt&amp;gt; you can&#039;t even create the filesystem; without &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt; everything works fine until you reboot, at which point the partition disappears.  Note that you need to enable the &amp;lt;tt&amp;gt;CONFIG_PARTITION_ADVANCED&amp;lt;/tt&amp;gt; option before you can set &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;.&lt;br /&gt;
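Put together, the kernel configuration amounts to the following fragment (a sketch for the kernel versions discussed here; on later kernels the CONFIG_LBD option was renamed and eventually made unconditional, so check your kernel&#039;s own options):&lt;br /&gt;

```shell
# Kernel .config fragment for > 2TB partitions (sketch; option names
# as used by the kernel versions discussed above)
CONFIG_LBD=y
CONFIG_PARTITION_ADVANCED=y
CONFIG_EFI_PARTITION=y
```

A GPT label can then be written with, for example, &#039;&#039;&#039;parted&#039;&#039;&#039;: &amp;lt;tt&amp;gt;parted /dev/sdXXX mklabel gpt&amp;lt;/tt&amp;gt;.&lt;br /&gt;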
&lt;br /&gt;
== Q: Why do I receive &amp;lt;tt&amp;gt;No space left on device&amp;lt;/tt&amp;gt; after &amp;lt;tt&amp;gt;xfs_growfs&amp;lt;/tt&amp;gt;? ==&lt;br /&gt;
&lt;br /&gt;
After [http://oss.sgi.com/pipermail/xfs/2009-January/039828.html growing an XFS filesystem], df(1) may show plenty of free space, but attempts to write to the filesystem fail with -ENOSPC. To fix this, [http://oss.sgi.com/pipermail/xfs/2009-January/039835.html Dave Chinner advised]:&lt;br /&gt;
&lt;br /&gt;
  The only way to fix this is to move data around to free up space&lt;br /&gt;
  below 1TB. Find your oldest data (i.e. that was around before even&lt;br /&gt;
  the first grow) and move it off the filesystem (move, not copy).&lt;br /&gt;
  Then if you copy it back on, the data blocks will end up above 1TB&lt;br /&gt;
  and that should leave you with plenty of space for inodes below 1TB.&lt;br /&gt;
  &lt;br /&gt;
  A complete dump and restore will also fix the problem ;)&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Main_Page&amp;diff=1957</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Main_Page&amp;diff=1957"/>
		<updated>2009-01-28T00:08:43Z</updated>

		<summary type="html">&lt;p&gt;Christian: WTF is &amp;quot;Gerson Chicarelli&amp;quot;? http://en.wikipedia.org/w/index.php?title=Namespace_Routing_Language&amp;amp;diff=262819379&amp;amp;oldid=175167226&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- Welcome --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#C5C5FF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
Welcome to XFS.org. This site is set up to help with the XFS file system.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;vertical-align:top&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Information --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#E2EAFF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
== Information about XFS ==&lt;br /&gt;
&lt;br /&gt;
* [http://oss.sgi.com/projects/xfs Main sgi xfs website]&lt;br /&gt;
* [[XFS FAQ]]&lt;br /&gt;
* [[XFS Status Updates]]&lt;br /&gt;
* [[XFS Papers and Documentation]]&lt;br /&gt;
* [[Linux Distributions shipping XFS]]&lt;br /&gt;
* [[XFS Rpm for RedHat]]&lt;br /&gt;
* [[XFS Companies]]&lt;br /&gt;
* [[OLD News]]&lt;br /&gt;
* [http://oss.sgi.com/projects/xfs/training/index.html Link to XFS training material]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/XFS Wikipedia xfs page, good detailed information.]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Consulting --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#fffff0; align:right; &amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Professional XFS Consulting Services == &lt;br /&gt;
&lt;br /&gt;
[[Consulting Resources]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;50%&amp;quot; style=&amp;quot;vertical-align:top&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Developers --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#F8F8FF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
== XFS Developer Resources ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS email list and archives]]&lt;br /&gt;
* [http://oss.sgi.com/projects/xfs Main sgi xfs website]&lt;br /&gt;
* [http://oss.sgi.com/bugzilla/ Bugzilla @ oss.sgi.com]&lt;br /&gt;
* [http://bugzilla.kernel.org/ Bugzilla @ kernel.org]&lt;br /&gt;
* [[Getting the latest source code]]&lt;br /&gt;
* [[Unfinished work]]&lt;br /&gt;
* [[Shrinking Support]]&lt;br /&gt;
* [[Ideas for XFS from Dave Chinner]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_status_update_for_September_2008&amp;diff=1955</id>
		<title>XFS status update for September 2008</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_status_update_for_September_2008&amp;diff=1955"/>
		<updated>2009-01-23T10:08:58Z</updated>

		<summary type="html">&lt;p&gt;Christian: page is orphaned anyway....could be deleted&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[XFS_Status_Updates#XFS_status_update_for_September_2008]]&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Mediawiki/mediawiki/index.php&amp;diff=1954</id>
		<title>Mediawiki/mediawiki/index.php</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Mediawiki/mediawiki/index.php&amp;diff=1954"/>
		<updated>2009-01-23T10:07:35Z</updated>

		<summary type="html">&lt;p&gt;Christian: spam removed, page could be deleted though...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Mediawiki/index.php&amp;diff=1953</id>
		<title>Mediawiki/index.php</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Mediawiki/index.php&amp;diff=1953"/>
		<updated>2009-01-23T10:06:26Z</updated>

		<summary type="html">&lt;p&gt;Christian: spam removed, page could be deleted though...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Christian</name></author>
	</entry>
</feed>