<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://xfs.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ckujau</id>
	<title>xfs.org - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://xfs.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Ckujau"/>
	<link rel="alternate" type="text/html" href="https://xfs.org/index.php/Special:Contributions/Ckujau"/>
	<updated>2026-04-20T10:32:23Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.42.3</generator>
	<entry>
		<id>https://xfs.org/index.php?title=Shrinking_Support&amp;diff=3008</id>
		<title>Shrinking Support</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Shrinking_Support&amp;diff=3008"/>
		<updated>2019-03-10T21:41:39Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: mod_speling; https upgrades&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Currently XFS Filesystems can&#039;t be shrunk.&lt;br /&gt;
&lt;br /&gt;
To support shrinking XFS filesystems, a few things need to be implemented, based on a list by Dave Chinner [https://marc.info/?l=linux-xfs&amp;amp;m=118091640624488]:&lt;br /&gt;
&lt;br /&gt;
* A way to check that enough free space is available for the shrink&lt;br /&gt;
&lt;br /&gt;
* An ioctl or similar interface to prevent new allocations from a given allocation group.&lt;br /&gt;
&lt;br /&gt;
* A variant of the xfs_reno tool to support moving inodes out of filesystem areas that go away.&lt;br /&gt;
&lt;br /&gt;
* A variant of the xfs_fsr tool to support moving data out of the filesystem areas that go away.&lt;br /&gt;
&lt;br /&gt;
* Some way to move orphan metadata out of the AGs truncated off&lt;br /&gt;
&lt;br /&gt;
* A transaction to shrink the filesystem.&lt;br /&gt;
&lt;br /&gt;
At that point, we&#039;ll have a &amp;quot;working&amp;quot; shrink that will allow&lt;br /&gt;
shrinking to only 50% of the original size because the log &lt;br /&gt;
(in the middle of the filesystem) will&lt;br /&gt;
get in the way.  To fix that, we&#039;ll need to implement transactions&lt;br /&gt;
to move the log...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Available pieces ==&lt;br /&gt;
&lt;br /&gt;
* A script from Ruben Porras to check if enough free space is available to support shrinking [https://marc.info/?l=linux-xfs&amp;amp;m=118581682117599].&lt;br /&gt;
&lt;br /&gt;
* A patch from Ruben Porras to allow / disallow allocation from an allocation group [https://marc.info/?l=linux-xfs&amp;amp;m=118302806818420] plus userspace support for setting / clearing it [https://marc.info/?l=linux-xfs&amp;amp;m=118881137031101]&lt;br /&gt;
&lt;br /&gt;
* The xfs_fsr tool in xfsprogs&lt;br /&gt;
&lt;br /&gt;
* The xfs_reno tool, see [[Unfinished_work#The_xfs_reno_tool]]&lt;br /&gt;
&lt;br /&gt;
* An untested patch from Dave Chinner for an xfs_swap_inodes ioctl that allows not just defragmenting extents but moving the whole inode [https://marc.info/?l=linux-xfs&amp;amp;m=119552278931942], and a patch from Ruben Porras to make xfs_reno use it [https://marc.info/?l=linux-xfs&amp;amp;m=119582841808985]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User:Ckujau&amp;diff=2990</id>
		<title>User:Ckujau</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User:Ckujau&amp;diff=2990"/>
		<updated>2016-05-29T00:24:16Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: dot&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; [mailto:xfswiki_at_nerdbynature_dot_de Christian Kujau]&lt;br /&gt;
&lt;br /&gt;
* [[/maintenance/]]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_email_list_and_archives&amp;diff=2986</id>
		<title>XFS email list and archives</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_email_list_and_archives&amp;diff=2986"/>
		<updated>2016-02-01T10:20:50Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: 404s removed; Spinics added&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== XFS email list ==&lt;br /&gt;
Patches, comments, requests and questions should go to [mailto:xfs@oss.sgi.com xfs@oss.sgi.com]&lt;br /&gt;
&lt;br /&gt;
The list archives on oss.sgi.com are available [http://oss.sgi.com/archives/xfs here] (MHonArc) and [http://oss.sgi.com/pipermail/xfs here] (mailman).&lt;br /&gt;
&lt;br /&gt;
Other archives include:&lt;br /&gt;
&lt;br /&gt;
* [http://news.gmane.org/group/gmane.comp.file-systems.xfs.general Gmane]&lt;br /&gt;
* [https://www.spinics.net/lists/xfs/ Spinics]&lt;br /&gt;
* [http://www.opensubscriber.com/messages/xfs@oss.sgi.com/topic.html OpenSubscriber]&lt;br /&gt;
&lt;br /&gt;
== Subscribing to the list ==&lt;br /&gt;
&lt;br /&gt;
The easiest method is to use the [http://oss.sgi.com/mailman/listinfo/xfs mailman web interface].&lt;br /&gt;
&lt;br /&gt;
Subscribing is also possible by sending an email with the body:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;subscribe&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
to [mailto:xfs-request@oss.sgi.com?body=subscribe xfs-request@oss.sgi.com]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Papers_and_Documentation&amp;diff=2980</id>
		<title>XFS Papers and Documentation</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Papers_and_Documentation&amp;diff=2980"/>
		<updated>2015-10-15T18:17:26Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: s/Image/File/&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Primary XFS Documentation ===&lt;br /&gt;
&lt;br /&gt;
The XFS documentation started by SGI has been converted to docbook/[https://fedorahosted.org/publican/ Publican] format.  The material is suitable for experienced users as well as developers and support staff.  The XML source is available in a [http://git.kernel.org/?p=fs/xfs/xfsdocs-xml-dev.git;a=summary git repository] and builds of the documentation are available here:&lt;br /&gt;
&lt;br /&gt;
* [http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide//tmp/en-US/html/index.html XFS User Guide]&lt;br /&gt;
&lt;br /&gt;
* [http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure//tmp/en-US/html/index.html XFS File System Structure]&lt;br /&gt;
** [http://sites.google.com/site/kandamotohiro/xfs Japanese translation] is also available.&lt;br /&gt;
&lt;br /&gt;
* [http://xfs.org/docs/xfsdocs-xml-dev/XFS_Labs/tmp/en-US/html/index.html XFS Training Labs]&lt;br /&gt;
&lt;br /&gt;
* (Original versions of this material are still available at [http://oss.sgi.com/projects/xfs/training/index.html XFS Overview and Internals (html)] and [http://oss.sgi.com/projects/xfs/papers/xfs_filesystem_structure.pdf XFS Filesystem Structure (pdf)])&lt;br /&gt;
&lt;br /&gt;
The format of &amp;lt;tt&amp;gt;/proc/fs/xfs/stat&amp;lt;/tt&amp;gt; has also been documented:&lt;br /&gt;
* [[Runtime_Stats|Runtime_Stats]]&lt;br /&gt;
&lt;br /&gt;
=== Papers, Presentations, Etc ===&lt;br /&gt;
&lt;br /&gt;
At the linux.conf.au 2012 event, Dave Chinner presented a talk on filesystem metadata scalability:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS - Recent and Future Adventures in Filesystem Scalability&#039;&#039; [[http://www.youtube.com/watch?v=FegjLbCnoBw Video]] [ [[:File:Xfs-scalability-lca2012.pdf|Presentation Slides]] ]&lt;br /&gt;
&lt;br /&gt;
The October 2009 issue of the USENIX ;login: magazine published an article about XFS targeted at system administrators:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS: The big storage file system for Linux&#039;&#039; [[http://oss.sgi.com/projects/xfs/papers/hellwig.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the Ottawa Linux Symposium (July 2006), Dave Chinner presented a paper on filesystem scalability in Linux 2.6 kernels:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;High Bandwidth Filesystems on Large Systems&#039;&#039; (July 2006) [[http://oss.sgi.com/projects/xfs/papers/ols2006/ols-2006-paper.pdf paper]] [[http://oss.sgi.com/projects/xfs/papers/ols2006/ols-2006-presentation.pdf presentation]]&lt;br /&gt;
&lt;br /&gt;
At linux.conf.au 2008 Dave Chinner gave a presentation about xfs_repair that he co-authored with Barry Naujok:&lt;br /&gt;
&lt;br /&gt;
* Fixing XFS Filesystems Faster [[http://mirror.linux.org.au/pub/linux.conf.au/2008/slides/135-fixing_xfs_faster.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
In July 2006, SGI storage marketing updated the XFS datasheet:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Open Source XFS for Linux&#039;&#039; [[http://oss.sgi.com/projects/xfs/datasheet.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At UKUUG 2003, Christoph Hellwig presented a talk on XFS:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS for Linux&#039;&#039; (July 2003) [[http://oss.sgi.com/projects/xfs/papers/ukuug2003.pdf pdf]] [[http://verein.lst.de/~hch/talks/ukuug2003/ html]]&lt;br /&gt;
&lt;br /&gt;
Originally published in Proceedings of the FREENIX Track: 2002 Usenix Annual Technical Conference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Filesystem Performance and Scalability in Linux 2.4.17&#039;&#039; (June 2002) [[http://oss.sgi.com/projects/xfs/papers/filesystem-perf-tm.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the Ottawa Linux Symposium, an updated presentation on porting XFS to Linux was given:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Porting XFS to Linux&#039;&#039; (July 2000) [[http://oss.sgi.com/projects/xfs/papers/ols2000/ols-xfs.htm html]]&lt;br /&gt;
&lt;br /&gt;
At the Atlanta Linux Showcase, SGI presented the following paper on the port of XFS to Linux:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Porting the SGI XFS File System to Linux&#039;&#039; (October 1999) [[http://oss.sgi.com/projects/xfs/papers/als/als.ps ps]] [[http://oss.sgi.com/projects/xfs/papers/als/als.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the 6th Linux Kongress &amp;amp; the Linux Storage Management Workshop (LSMW) in Germany in September 1999, SGI had a few presentations, including the following:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;SGI&#039;s port of XFS to Linux&#039;&#039; (September 1999) [[http://oss.sgi.com/projects/xfs/papers/linux_kongress/index.htm html]]&lt;br /&gt;
* &#039;&#039;Overview of DMF&#039;&#039; (September 1999) [[http://oss.sgi.com/projects/xfs/papers/DMF-over/index.htm html]]&lt;br /&gt;
&lt;br /&gt;
At the LinuxWorld Conference &amp;amp; Expo in August 1999, SGI published:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;An Open Source XFS data sheet&#039;&#039; (August 1999) [[http://oss.sgi.com/projects/xfs/papers/xfs_GPL.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
From the 1996 USENIX conference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;An XFS white paper&#039;&#039; [[http://oss.sgi.com/projects/xfs/papers/xfs_usenix/index.html html]]&lt;br /&gt;
&lt;br /&gt;
=== Other historical articles, press-releases, etc ===&lt;br /&gt;
&lt;br /&gt;
* IBM&#039;s &#039;&#039;Advanced Filesystem Implementor&#039;s Guide&#039;&#039; has a chapter &#039;&#039;Introducing XFS&#039;&#039; [[http://www-106.ibm.com/developerworks/library/l-fs9.html html]]&lt;br /&gt;
&lt;br /&gt;
* An editorial titled &#039;&#039;Tired of fscking? Try a journaling filesystem!&#039;&#039;, Freshmeat (February 2001) [[http://freshmeat.net/articles/view/212/ html]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Who gives a fsck about filesystems&#039;&#039; provides an overview of the Linux 2.4 filesystems [[http://www.linuxuser.co.uk/articles/issue6/lu6-All_you_need_to_know_about-Filesystems.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Journal File Systems&#039;&#039; in issue 55 of &#039;&#039;Linux Gazette&#039;&#039; provides a comparison of journaled filesystems.&lt;br /&gt;
&lt;br /&gt;
* The original XFS beta release announcement was published in &#039;&#039;Linux Today&#039;&#039; (September 2000) [[http://linuxtoday.com/news_story.php3?ltsn=2000-09-26-017-04-OS-SW html]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS: It&#039;s worth the wait&#039;&#039; was published on &#039;&#039;EarthWeb&#039;&#039; (July 2000) [[http://networking.earthweb.com/netos/oslin/article/0,,12284_623661,00.html html]]&lt;br /&gt;
&lt;br /&gt;
* An &#039;&#039;IRIX-XFS data sheet&#039;&#039; (July 1999) [[http://oss.sgi.com/projects/xfs/papers/IRIX_xfs_data_sheet.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;Getting Started with XFS&#039;&#039; book (1994) [[http://oss.sgi.com/projects/xfs/papers/getting_started_with_xfs.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* Original &#039;&#039;XFS design documents&#039;&#039; (1993) ([http://oss.sgi.com/projects/xfs/design_docs/xfsdocs93_ps/ ps], [http://oss.sgi.com/projects/xfs/design_docs/xfsdocs93_pdf/ pdf])&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User:Ckujau&amp;diff=2976</id>
		<title>User:Ckujau</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User:Ckujau&amp;diff=2976"/>
		<updated>2015-04-14T16:57:50Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: rot13&amp;#039;ed, again :)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;  [mailto:xfswiki@nerdbynature.de Christian Kujau]&lt;br /&gt;
&lt;br /&gt;
* [[/maintenance/]]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User:Ckujau/maintenance&amp;diff=2975</id>
		<title>User:Ckujau/maintenance</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User:Ckujau/maintenance&amp;diff=2975"/>
		<updated>2015-04-14T16:52:56Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: maintenance page update&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [[Special:BrokenRedirects]]&lt;br /&gt;
* [[Special:DeadendPages]]&lt;br /&gt;
* [[Special:DoubleRedirects]]&lt;br /&gt;
* [[Special:Lonelypages]]&lt;br /&gt;
* [[Special:UncategorizedCategories]]&lt;br /&gt;
* [[Special:UncategorizedFiles]]&lt;br /&gt;
* [[Special:UncategorizedPages]]&lt;br /&gt;
* [[Special:UncategorizedTemplates]]&lt;br /&gt;
* [[Special:UnusedCategories]]&lt;br /&gt;
* [[Special:UnusedFiles]]&lt;br /&gt;
* [[Special:UnusedTemplates]]&lt;br /&gt;
* [[Special:WantedCategories]]&lt;br /&gt;
* [[Special:WantedFiles]]&lt;br /&gt;
* [[Special:WantedPages]]&lt;br /&gt;
* [[Special:WantedTemplates]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;small&amp;gt;&amp;lt;div align=right&amp;gt;v{{CURRENTVERSION}}/{{CONTENTLANGUAGE}}&amp;lt;/div&amp;gt;&amp;lt;/small&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Getting_the_latest_source_code&amp;diff=2959</id>
		<title>Getting the latest source code</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Getting_the_latest_source_code&amp;diff=2959"/>
		<updated>2015-01-23T02:05:39Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: s/i sno/is no/&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt; XFS Released/Stable source &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Mainline kernels&#039;&#039;&#039;&lt;br /&gt;
:XFS has been maintained in the official Linux kernel [http://www.kernel.org/ kernel trees] starting with [http://lkml.org/lkml/2003/12/8/35 Linux 2.4] and is frequently updated with the latest stable fixes and features from the XFS development team.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Vendor kernels&#039;&#039;&#039;&lt;br /&gt;
:All modern Linux distributions include support for XFS. &lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;XFS userspace&#039;&#039;&#039;&lt;br /&gt;
:[ftp://oss.sgi.com/projects/xfs source code tarballs] of the xfs userspace tools. These tarballs form the basis of the xfsprogs packages found in Linux distributions.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt; Development and bleeding-edge development &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS git howto]]&lt;br /&gt;
&lt;br /&gt;
=== Current XFS kernel source ===&lt;br /&gt;
&lt;br /&gt;
* [https://git.kernel.org/cgit/linux/kernel/git/dgc/linux-xfs.git/ xfs]&lt;br /&gt;
&lt;br /&gt;
 $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs.git &lt;br /&gt;
&lt;br /&gt;
Note: the old kernel tree on [http://oss.sgi.com/cgi-bin/gitweb.cgi oss.sgi.com] is no longer kept up to date with the master tree on kernel.org.&lt;br /&gt;
&lt;br /&gt;
=== XFS user space tools ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfsprogs.git;a=summary xfsprogs]&lt;br /&gt;
&lt;br /&gt;
 git clone git://oss.sgi.com/xfs/cmds/xfsprogs&lt;br /&gt;
&lt;br /&gt;
A few packages are needed to compile &amp;lt;tt&amp;gt;xfsprogs&amp;lt;/tt&amp;gt;, depending on your package manager:&lt;br /&gt;
&lt;br /&gt;
 apt-get install libtool automake gettext libblkid-dev uuid-dev&lt;br /&gt;
 yum     install libtool automake gettext libblkid-devel libuuid-devel&lt;br /&gt;
&lt;br /&gt;
=== XFS dump ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfsdump.git;a=summary xfsdump]&lt;br /&gt;
 $ git clone git://oss.sgi.com/xfs/cmds/xfsdump&lt;br /&gt;
&lt;br /&gt;
=== XFS tests ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfstests.git;a=summary xfstests]&lt;br /&gt;
 $ git clone git://oss.sgi.com/xfs/cmds/xfstests&lt;br /&gt;
&lt;br /&gt;
=== DMAPI user space tools ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/dmapi.git;a=summary dmapi]&lt;br /&gt;
 $ git clone git://oss.sgi.com/xfs/cmds/dmapi&lt;br /&gt;
&lt;br /&gt;
=== git-cvsimport generated trees ===&lt;br /&gt;
&lt;br /&gt;
The Git trees are automatically mirrored copies of the CVS trees, generated with [http://www.kernel.org/pub/software/scm/git/docs/git-cvsimport.html git-cvsimport].&lt;br /&gt;
Since git-cvsimport uses the [http://www.cobite.com/cvsps/ cvsps] tool to recreate the atomic commits (&amp;quot;mods&amp;quot;) of ptools, it is easier to see the entire change that was committed using git.&lt;br /&gt;
&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=archive/xfs-import.git;a=summary linux-2.6-xfs-from-cvs]&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=archive/xfs-cmds.git;a=summary xfs-cmds]&lt;br /&gt;
&lt;br /&gt;
Before building in the &amp;lt;tt&amp;gt;xfsdump&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;dmapi&amp;lt;/tt&amp;gt; directories (after building &amp;lt;tt&amp;gt;xfsprogs&amp;lt;/tt&amp;gt;), you will need to run:&lt;br /&gt;
  # cd xfsprogs&lt;br /&gt;
  # make install-dev&lt;br /&gt;
to create &amp;lt;tt&amp;gt;/usr/include/xfs&amp;lt;/tt&amp;gt; and install appropriate files there.&lt;br /&gt;
&lt;br /&gt;
Before building in the xfstests directory, you will need to run:&lt;br /&gt;
  # cd xfsprogs&lt;br /&gt;
  # make install-qa&lt;br /&gt;
to install a somewhat larger set of files in &amp;lt;tt&amp;gt;/usr/include/xfs&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt;XFS cvs trees &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
The cvs trees were created using a script that converted SGI&#039;s internal&lt;br /&gt;
ptools repository to a cvs repository, so the cvs trees were considered read-only.&lt;br /&gt;
&lt;br /&gt;
All new development is now managed in the git trees, so the cvs trees&lt;br /&gt;
are no longer active and should only be used&lt;br /&gt;
for reference.&lt;br /&gt;
&lt;br /&gt;
* [[XFS CVS howto]]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Getting_the_latest_source_code&amp;diff=2945</id>
		<title>Getting the latest source code</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Getting_the_latest_source_code&amp;diff=2945"/>
		<updated>2014-05-21T18:23:35Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: formatting snafu fixed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt; XFS Released/Stable source &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Mainline kernels&#039;&#039;&#039;&lt;br /&gt;
:XFS has been maintained in the official Linux kernel [http://www.kernel.org/ kernel trees] starting with [http://lkml.org/lkml/2003/12/8/35 Linux 2.4] and is frequently updated with the latest stable fixes and features from the SGI XFS development team.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;Vendor kernels&#039;&#039;&#039;&lt;br /&gt;
:All modern Linux distributions include support for XFS. SGI actively works with [http://www.suse.com/  SUSE] to provide a supported version of XFS in that distribution.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;&#039;XFS userspace&#039;&#039;&#039;&lt;br /&gt;
:SGI also provides [ftp://oss.sgi.com/projects/xfs source code tarballs] of the xfs userspace tools. These tarballs form the basis of the xfsprogs packages found in Linux distributions.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt; Development and bleeding-edge development &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS git howto]]&lt;br /&gt;
&lt;br /&gt;
Note: there are also [http://git.kernel.org/?s=xfs XFS git repositories on kernel.org] for external (i.e. non-SGI) contributors. SGI periodically pulls those in to [http://oss.sgi.com/cgi-bin/gitweb.cgi oss.sgi.com]. This also means that one or the other may be a bit more current.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Current XFS kernel source ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=summary xfs]&lt;br /&gt;
 $ git clone git://oss.sgi.com/xfs/xfs&lt;br /&gt;
&lt;br /&gt;
=== XFS user space tools ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfsprogs.git;a=summary xfsprogs]&lt;br /&gt;
&lt;br /&gt;
 git clone git://oss.sgi.com/xfs/cmds/xfsprogs&lt;br /&gt;
&lt;br /&gt;
A few packages are needed to compile &amp;lt;tt&amp;gt;xfsprogs&amp;lt;/tt&amp;gt;, depending on your package manager:&lt;br /&gt;
&lt;br /&gt;
 apt-get install libtool automake gettext libblkid-dev uuid-dev&lt;br /&gt;
 yum     install libtool automake gettext libblkid-devel libuuid-devel&lt;br /&gt;
&lt;br /&gt;
=== XFS dump ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfsdump.git;a=summary xfsdump]&lt;br /&gt;
 $ git clone git://oss.sgi.com/xfs/cmds/xfsdump&lt;br /&gt;
&lt;br /&gt;
=== XFS tests ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/xfstests.git;a=summary xfstests]&lt;br /&gt;
 $ git clone git://oss.sgi.com/xfs/cmds/xfstests&lt;br /&gt;
&lt;br /&gt;
=== DMAPI user space tools ===&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/cmds/dmapi.git;a=summary dmapi]&lt;br /&gt;
 $ git clone git://oss.sgi.com/xfs/cmds/dmapi&lt;br /&gt;
&lt;br /&gt;
=== git-cvsimport generated trees ===&lt;br /&gt;
&lt;br /&gt;
The Git trees are automatically mirrored copies of the CVS trees, generated with [http://www.kernel.org/pub/software/scm/git/docs/git-cvsimport.html git-cvsimport].&lt;br /&gt;
Since git-cvsimport uses the [http://www.cobite.com/cvsps/ cvsps] tool to recreate the atomic commits (&amp;quot;mods&amp;quot;) of ptools, it is easier to see the entire change that was committed using git.&lt;br /&gt;
&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=archive/xfs-import.git;a=summary linux-2.6-xfs-from-cvs]&lt;br /&gt;
* [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=archive/xfs-cmds.git;a=summary xfs-cmds]&lt;br /&gt;
&lt;br /&gt;
Before building in the &amp;lt;tt&amp;gt;xfsdump&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;dmapi&amp;lt;/tt&amp;gt; directories (after building &amp;lt;tt&amp;gt;xfsprogs&amp;lt;/tt&amp;gt;), you will need to run:&lt;br /&gt;
  # cd xfsprogs&lt;br /&gt;
  # make install-dev&lt;br /&gt;
to create &amp;lt;tt&amp;gt;/usr/include/xfs&amp;lt;/tt&amp;gt; and install appropriate files there.&lt;br /&gt;
&lt;br /&gt;
Before building in the xfstests directory, you will need to run:&lt;br /&gt;
  # cd xfsprogs&lt;br /&gt;
  # make install-qa&lt;br /&gt;
to install a somewhat larger set of files in &amp;lt;tt&amp;gt;/usr/include/xfs&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;font face=&amp;quot;ARIAL NARROW,HELVETICA&amp;quot;&amp;gt;XFS cvs trees &amp;lt;/font&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
The cvs trees were created using a script that converted SGI&#039;s internal&lt;br /&gt;
ptools repository to a cvs repository, so the cvs trees were considered read-only.&lt;br /&gt;
&lt;br /&gt;
All new development is now managed in the git trees, so the cvs trees&lt;br /&gt;
are no longer active and should only be used&lt;br /&gt;
for reference.&lt;br /&gt;
&lt;br /&gt;
* [[XFS CVS howto]]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User_talk:Cattelan&amp;diff=2826</id>
		<title>User talk:Cattelan</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User_talk:Cattelan&amp;diff=2826"/>
		<updated>2012-10-19T19:07:55Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: reply&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== XFS_IOCORE_R ==&lt;br /&gt;
&lt;br /&gt;
To Developers , &lt;br /&gt;
I have read about the new member named xfs_extdelta that is passed to various XFS internal routines, e.g. xfs_bmapi. In the 2.4 versions it is simply passed as NULL. Can anyone provide info on where to initialize it, and whether passing NULL has any adverse effect? &lt;br /&gt;
&lt;br /&gt;
XFS_IOCORE_RT is not used in the 2.6 version. If I pass XFS_IOCORE_EXCL instead of this flag, will it be OK, or will it cause a crash or other adverse effects? Or is there an alternative that would sort out these two problems? &lt;br /&gt;
&lt;br /&gt;
Regards &lt;br /&gt;
Anshul Kundra &lt;br /&gt;
HCL TECHNOLOGIES &lt;br /&gt;
ERS&lt;br /&gt;
: Has been answered on the [http://www.spinics.net/lists/xfs/msg09007.html mailinglist] -- [[User:Ckujau|Ckujau]] 23:39, 16 February 2012 (UTC)&lt;br /&gt;
&lt;br /&gt;
== XFS File Inode number is changing using the utilities  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To Developers,&lt;br /&gt;
&lt;br /&gt;
I have seen a different behaviour in XFS  &lt;br /&gt;
&lt;br /&gt;
Suppose I have a file with inode number &amp;quot;131&amp;quot;. I have noticed that the inode number changed without the file being deleted: every time we change the data of the file, the inode number changes. The complete description of the test is as follows. &lt;br /&gt;
&lt;br /&gt;
Steps are as follows:&lt;br /&gt;
&lt;br /&gt;
1) I have created a file using &amp;quot;dd&amp;quot; of size 100MB:&lt;br /&gt;
&lt;br /&gt;
#dd if=/dev/zero of=xfs.img bs=1M count=100&lt;br /&gt;
&lt;br /&gt;
2) Created a loopback device over the image:&lt;br /&gt;
#losetup /dev/loop1 xfs.img&lt;br /&gt;
&lt;br /&gt;
3) Created file system:&lt;br /&gt;
#mkfs.xfs /dev/loop1 &lt;br /&gt;
&lt;br /&gt;
4) Mounted:&lt;br /&gt;
#mount /dev/loop1 /mnt/xfs_mnt &lt;br /&gt;
&lt;br /&gt;
5) Please check the mount output:&lt;br /&gt;
# mount&lt;br /&gt;
&lt;br /&gt;
/dev/sdb2 on / type ext3 (rw,acl,user_xattr)&lt;br /&gt;
proc on /proc type proc (rw)&lt;br /&gt;
sysfs on /sys type sysfs (rw)&lt;br /&gt;
debugfs on /sys/kernel/debug type debugfs (rw)&lt;br /&gt;
devtmpfs on /dev type devtmpfs (rw,mode=0755)&lt;br /&gt;
tmpfs on /dev/shm type tmpfs (rw,mode=1777)&lt;br /&gt;
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)&lt;br /&gt;
fusectl on /sys/fs/fuse/connections type fusectl (rw)&lt;br /&gt;
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)&lt;br /&gt;
/dev/loop0 on /mnt/mount_test type xfs (rw)&lt;br /&gt;
/dev/loop1 on /mnt/xfs_mnt type xfs (rw)&lt;br /&gt;
&lt;br /&gt;
6) Created a file using &amp;quot;touch&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# touch kundra.txt&lt;br /&gt;
&lt;br /&gt;
7) Checking the file and its inode number:&lt;br /&gt;
&lt;br /&gt;
# ls -li &lt;br /&gt;
total 0&lt;br /&gt;
131 -rw-r--r-- 1 root root 0 2012-10-20 01:41 kundra.txt&lt;br /&gt;
&lt;br /&gt;
8) I have written some data using the vim editor, I can&#039;t provide snapshot of vim on the list:&lt;br /&gt;
&lt;br /&gt;
#vim kundra.txt&lt;br /&gt;
&lt;br /&gt;
9) Now I checked the inode number using the &amp;quot;ls -li&amp;quot;&lt;br /&gt;
# ls -li&lt;br /&gt;
&lt;br /&gt;
total 4&lt;br /&gt;
133 -rw-r--r-- 1 root root 19 2012-10-20 01:43 kundra.txt&lt;br /&gt;
&lt;br /&gt;
Please note that the inode number (from &amp;quot;131&amp;quot; to &amp;quot;133&amp;quot;) and the total value (from &amp;quot;0&amp;quot; to &amp;quot;4&amp;quot;) in the filesystem changed. I assume the reason may be the small size of the filesystem, but it is showing unexpected behaviour. &lt;br /&gt;
&lt;br /&gt;
Please provide some explanation of this issue; I am working on SLES: &lt;br /&gt;
&lt;br /&gt;
# cat /etc/issue&lt;br /&gt;
&lt;br /&gt;
Welcome to SUSE Linux Enterprise Server 11 SP1  (x86_64) - Kernel \r (\l).&lt;br /&gt;
&lt;br /&gt;
# uname -a &lt;br /&gt;
Linux linux-sles 2.6.32.19-0.6-default #1 SMP Fri Aug 31 01:37:50 IST 2012 x86_64 x86_64 x86_64 GNU/Linux&lt;br /&gt;
&lt;br /&gt;
Thanks &amp;amp; Best Regards  &lt;br /&gt;
Anshul Kundra&lt;br /&gt;
: Anshul, as I have suggested [[User:Anshul.kundra|earlier]]: please ask questions on the [[XFS_email_list_and_archives|mailing lists]]. Also, your question [https://encrypted.google.com/search?hl=en&amp;amp;q=vi%20inode%20change has been answered many times] already. -- [[User:Ckujau|Ckujau]] ([[User talk:Ckujau|talk]]) 19:07, 19 October 2012 (UTC)&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Xfs.org_talk:Anshul.kundra&amp;diff=2824</id>
		<title>Xfs.org talk:Anshul.kundra</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Xfs.org_talk:Anshul.kundra&amp;diff=2824"/>
		<updated>2012-10-15T17:37:28Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: namespace cleanup, moved back to User:Anshul.kundra&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Talk:XFS_Status_Updates&amp;diff=2464</id>
		<title>Talk:XFS Status Updates</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Talk:XFS_Status_Updates&amp;diff=2464"/>
		<updated>2012-03-19T21:43:35Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= quite/quiet =&lt;br /&gt;
&lt;br /&gt;
Someone&#039;s annoyingly dyslexic:&lt;br /&gt;
&amp;quot;quite&amp;quot; == totally/actually;&lt;br /&gt;
&amp;quot;quiet&amp;quot; == low level of noise/silent.&lt;br /&gt;
: (Hopefully) fixed. Why didn&#039;t you? -- [[User:Christian|chris_goe]] 20:54, 3 April 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
Teach a man to fish, and all that ;)&lt;br /&gt;
&lt;br /&gt;
= Combined status page =&lt;br /&gt;
&lt;br /&gt;
Out of curiosity: why was the status page split up? Was it too long to be edited? Or, asked another way: is there a need for a combined status page, with &#039;&#039;&#039;all&#039;&#039;&#039; the statuses on one page, as before? This could be achieved through article inclusion, e.g.&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;{{:XFS_status_update_for_2012}}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;{{:XFS_status_update_for_2011}}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
:This would include the statuses from 2012 and 2011 on a single page, yet each year would still have its own page. -- [[User:Ckujau|Ckujau]] 21:43, 19 March 2012 (UTC)&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=FITRIM/discard&amp;diff=2434</id>
		<title>FITRIM/discard</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=FITRIM/discard&amp;diff=2434"/>
		<updated>2012-03-17T03:23:54Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: SPAM: Undo revision 2433 by Vegasseo (Talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Purpose ==&lt;br /&gt;
&lt;br /&gt;
FITRIM is a mounted filesystem feature to discard (or &amp;quot;[https://en.wikipedia.org/wiki/TRIM trim]&amp;quot;) blocks which are not in use by the filesystem. This is useful for solid-state drives (SSDs) and thinly-provisioned storage.&lt;br /&gt;
&lt;br /&gt;
== Requirements ==&lt;br /&gt;
&lt;br /&gt;
#The block device underneath the filesystem must support discard (TRIM) operations.&lt;br /&gt;
#The kernel must include TRIM support and XFS must include FITRIM support (this has been true for Linux since v2.6.38, Jan 18 2011)&lt;br /&gt;
#Realtime discard mode requires a v3.0 or later kernel&lt;br /&gt;
&lt;br /&gt;
This can be verified by viewing /sys/block/&amp;lt;dev&amp;gt;/queue/discard_max_bytes -- if the value is zero, your device doesn&#039;t support discard operations.&lt;br /&gt;
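As an illustrative sketch of that check (the helper name and the example device "sda" are assumptions, not part of this page; a missing sysfs attribute is treated as no discard support):

```shell
# Interpret a discard_max_bytes reading: non-zero means the device
# advertises discard support.
check_discard() {
    if [ "$1" -gt 0 ]; then
        echo "supported"
    else
        echo "unsupported"
    fi
}

# "sda" is an example device name; adjust for your system.  A missing
# attribute file is treated as 0 (no discard support).
dev=sda
max=$(cat "/sys/block/$dev/queue/discard_max_bytes" 2>/dev/null || echo 0)
echo "$dev: $(check_discard "$max")"
```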
&lt;br /&gt;
== Modes of Operation ==&lt;br /&gt;
&lt;br /&gt;
* Realtime discard -- As files are removed, the filesystem issues discard requests automatically&lt;br /&gt;
* Batch Mode -- A user procedure that trims all or portions of the filesystem&lt;br /&gt;
&lt;br /&gt;
=== Realtime discard ===&lt;br /&gt;
&lt;br /&gt;
This mode issues discard requests automatically as files are removed from the filesystem.  No other command or process is required.&lt;br /&gt;
&lt;br /&gt;
There can be a severe performance penalty for enabling realtime discard. (4)&lt;br /&gt;
&lt;br /&gt;
Realtime discard is selected by adding the filesystem option &amp;lt;code&amp;gt;discard&amp;lt;/code&amp;gt; while mounting.&lt;br /&gt;
&lt;br /&gt;
This can be done by the following examples:&lt;br /&gt;
&lt;br /&gt;
# placing &amp;lt;code&amp;gt;discard&amp;lt;/code&amp;gt; in your /etc/fstab for the filesystem: &amp;lt;code&amp;gt;/dev/sda1 /mountpoint xfs defaults,discard 0 1&amp;lt;/code&amp;gt;&lt;br /&gt;
# mount options: &amp;lt;code&amp;gt;mount -o discard /dev/sda1 /mountpoint&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Batch Mode ===&lt;br /&gt;
&lt;br /&gt;
This mode requires user intervention.  This intervention is in the form of the command &amp;lt;code&amp;gt;fstrim&amp;lt;/code&amp;gt;.  It has been included in [https://en.wikipedia.org/wiki/Util-linux util-linux-ng] since about Nov 26, 2010.&lt;br /&gt;
&lt;br /&gt;
Usage example:&lt;br /&gt;
&amp;lt;code&amp;gt;fstrim /mountpoint&amp;lt;/code&amp;gt;&lt;br /&gt;
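Since fstrim itself requires root, a sketch of enumerating candidate XFS mount points can stop at printing the commands to run; the helper name and the awk filter over /proc/mounts are assumptions for illustration, not part of this page:

```shell
# Emit an "fstrim -v MOUNTPOINT" command line for every XFS entry in an
# mtab-style listing (field 2 = mount point, field 3 = filesystem type).
list_xfs_trim_cmds() {
    awk '$3 == "xfs" { print "fstrim -v " $2 }' "$1"
}

# On a live Linux system, list the trim commands for all mounted XFS
# filesystems (piping the output to a root shell would execute them).
list_xfs_trim_cmds /proc/mounts 2>/dev/null || true
```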
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
# FITRIM description - Lukas Czerner &amp;lt;lczerner at redhat.com&amp;gt; http://patchwork.xfs.org/patch/1490/&lt;br /&gt;
# Block requirements - Dave Chinner &amp;lt;david at fromorbit.com&amp;gt; http://oss.sgi.com/pipermail/xfs/2011-October/053379.html&lt;br /&gt;
# util-linux-ng addition - Karel Zak &amp;lt;kzak@xxxxxxxxxx&amp;gt; http://www.spinics.net/lists/util-linux-ng/msg03646.&lt;br /&gt;
# Online TRIM/discard performance impact - http://oss.sgi.com/pipermail/xfs/2011-November/053841.html&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=File:Xfs-scalability-lca2012.pdf&amp;diff=2423</id>
		<title>File:Xfs-scalability-lca2012.pdf</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=File:Xfs-scalability-lca2012.pdf&amp;diff=2423"/>
		<updated>2012-02-16T23:48:09Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: +title, date&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;  Title: XFS: Adventures in Metadata Scalability&lt;br /&gt;
 Author: Dave Chinner&lt;br /&gt;
   Date: 2012-01-18&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Papers_and_Documentation&amp;diff=2422</id>
		<title>XFS Papers and Documentation</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Papers_and_Documentation&amp;diff=2422"/>
		<updated>2012-02-16T23:45:01Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: link wikified&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Primary XFS Documentation ===&lt;br /&gt;
&lt;br /&gt;
The XFS documentation started by SGI has been converted to docbook/[https://fedorahosted.org/publican/ Publican] format.  The material is suitable for experienced users as well as developers and support staff.  The XML source is available in a [http://git.kernel.org/?p=fs/xfs/xfsdocs-xml-dev.git;a=summary git repository] and builds of the documentation are available here:&lt;br /&gt;
&lt;br /&gt;
* [http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide//tmp/en-US/html/index.html XFS User Guide]&lt;br /&gt;
&lt;br /&gt;
* [http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure//tmp/en-US/html/index.html XFS File System Structure]&lt;br /&gt;
** [http://sites.google.com/site/kandamotohiro/xfs Japanese translation] is also available.&lt;br /&gt;
&lt;br /&gt;
* [http://xfs.org/docs/xfsdocs-xml-dev/XFS_Labs/tmp/en-US/html/index.html XFS Training Labs]&lt;br /&gt;
&lt;br /&gt;
* (Original versions of this material are still available at [http://oss.sgi.com/projects/xfs/training/index.html XFS Overview and Internals (html)] and [http://oss.sgi.com/projects/xfs/papers/xfs_filesystem_structure.pdf XFS Filesystem Structure (pdf)])&lt;br /&gt;
&lt;br /&gt;
The format of &amp;lt;tt&amp;gt;/proc/fs/xfs/stat&amp;lt;/tt&amp;gt; also has been documented:&lt;br /&gt;
* [[Runtime_Stats|Runtime_Stats]]&lt;br /&gt;
&lt;br /&gt;
=== Papers, Presentations, Etc ===&lt;br /&gt;
&lt;br /&gt;
At the linux.conf.au 2012 event, Dave Chinner presented a talk on filesystem metadata scalability:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS - Recent and Future Adventures in Filesystem Scalability&#039;&#039; [[http://www.youtube.com/watch?v=FegjLbCnoBw Video]] [ [[:Image:Xfs-scalability-lca2012.pdf|Presentation Slides]] ]&lt;br /&gt;
&lt;br /&gt;
The October 2009 issue of the USENIX ;login: magazine published an article about XFS targeted at system administrators:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS: The big storage file system for Linux&#039;&#039; [[http://oss.sgi.com/projects/xfs/papers/hellwig.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the Ottawa Linux Symposium (July 2006), Dave Chinner presented a paper on filesystem scalability in Linux 2.6 kernels:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;High Bandwidth Filesystems on Large Systems&#039;&#039; (July 2006) [[http://oss.sgi.com/projects/xfs/papers/ols2006/ols-2006-paper.pdf paper]] [[http://oss.sgi.com/projects/xfs/papers/ols2006/ols-2006-presentation.pdf presentation]]&lt;br /&gt;
&lt;br /&gt;
At linux.conf.au 2008 Dave Chinner gave a presentation about xfs_repair that he co-authored with Barry Naujok:&lt;br /&gt;
&lt;br /&gt;
* Fixing XFS Filesystems Faster [[http://mirror.linux.org.au/pub/linux.conf.au/2008/slides/135-fixing_xfs_faster.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
In July 2006, SGI storage marketing updated the XFS datasheet:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Open Source XFS for Linux&#039;&#039; [[http://oss.sgi.com/projects/xfs/datasheet.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At UKUUG 2003, Christoph Hellwig presented a talk on XFS:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS for Linux&#039;&#039; (July 2003) [[http://oss.sgi.com/projects/xfs/papers/ukuug2003.pdf pdf]] [[http://verein.lst.de/~hch/talks/ukuug2003/ html]]&lt;br /&gt;
&lt;br /&gt;
Originally published in Proceedings of the FREENIX Track: 2002 Usenix Annual Technical Conference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Filesystem Performance and Scalability in Linux 2.4.17&#039;&#039; (June 2002) [[http://oss.sgi.com/projects/xfs/papers/filesystem-perf-tm.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the Ottawa Linux Symposium, an updated presentation on porting XFS to Linux was given:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Porting XFS to Linux&#039;&#039; (July 2000) [[http://oss.sgi.com/projects/xfs/papers/ols2000/ols-xfs.htm html]]&lt;br /&gt;
&lt;br /&gt;
At the Atlanta Linux Showcase, SGI presented the following paper on the port of XFS to Linux:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Porting the SGI XFS File System to Linux&#039;&#039; (October 1999) [[http://oss.sgi.com/projects/xfs/papers/als/als.ps ps]] [[http://oss.sgi.com/projects/xfs/papers/als/als.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the 6th Linux Kongress &amp;amp; the Linux Storage Management Workshop (LSMW) in Germany in September, 1999, SGI had a few presentations including the following:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;SGI&#039;s port of XFS to Linux&#039;&#039; (September 1999) [[http://oss.sgi.com/projects/xfs/papers/linux_kongress/index.htm html]]&lt;br /&gt;
* &#039;&#039;Overview of DMF&#039;&#039; (September 1999) [[http://oss.sgi.com/projects/xfs/papers/DMF-over/index.htm html]]&lt;br /&gt;
&lt;br /&gt;
At the LinuxWorld Conference &amp;amp; Expo in August 1999, SGI published:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;An Open Source XFS data sheet&#039;&#039; (August 1999) [[http://oss.sgi.com/projects/xfs/papers/xfs_GPL.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
From the 1996 USENIX conference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;An XFS white paper&#039;&#039; [[http://oss.sgi.com/projects/xfs/papers/xfs_usenix/index.html html]]&lt;br /&gt;
&lt;br /&gt;
=== Other historical articles, press-releases, etc ===&lt;br /&gt;
&lt;br /&gt;
* IBM&#039;s &#039;&#039;Advanced Filesystem Implementor&#039;s Guide&#039;&#039; has a chapter &#039;&#039;Introducing XFS&#039;&#039; [[http://www-106.ibm.com/developerworks/library/l-fs9.html html]]&lt;br /&gt;
&lt;br /&gt;
* An editorial titled &#039;&#039;Tired of fscking? Try a journaling filesystem!&#039;&#039;, Freshmeat (February 2001) [[http://freshmeat.net/articles/view/212/ html]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Who gives a fsck about filesystems&#039;&#039; provides an overview of the Linux 2.4 filesystems [[http://www.linuxuser.co.uk/articles/issue6/lu6-All_you_need_to_know_about-Filesystems.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Journal File Systems&#039;&#039; in issue 55 of &#039;&#039;Linux Gazette&#039;&#039; provides a comparison of journaled filesystems.&lt;br /&gt;
&lt;br /&gt;
* The original XFS beta release announcement was published in &#039;&#039;Linux Today&#039;&#039; (September 2000) [[http://linuxtoday.com/news_story.php3?ltsn=2000-09-26-017-04-OS-SW html]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS: It&#039;s worth the wait&#039;&#039; was published on &#039;&#039;EarthWeb&#039;&#039; (July 2000) [[http://networking.earthweb.com/netos/oslin/article/0,,12284_623661,00.html html]]&lt;br /&gt;
&lt;br /&gt;
* An &#039;&#039;IRIX-XFS data sheet&#039;&#039; (July 1999) [[http://oss.sgi.com/projects/xfs/papers/IRIX_xfs_data_sheet.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;Getting Started with XFS&#039;&#039; book (1994) [[http://oss.sgi.com/projects/xfs/papers/getting_started_with_xfs.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* Original &#039;&#039;XFS design documents&#039;&#039; (1993) ([http://oss.sgi.com/projects/xfs/design_docs/xfsdocs93_ps/ ps], [http://oss.sgi.com/projects/xfs/design_docs/xfsdocs93_pdf/ pdf])&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User_talk:Cattelan&amp;diff=2421</id>
		<title>User talk:Cattelan</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User_talk:Cattelan&amp;diff=2421"/>
		<updated>2012-02-16T23:39:25Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: see http://www.spinics.net/lists/xfs/msg09007.html&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== XFS_IOCORE_R ==&lt;br /&gt;
&lt;br /&gt;
To the developers,&lt;br /&gt;
I have read about the new member xfs_extdelta that is passed to various XFS-internal routines, e.g. xfs_bmapi.  In the 2.4 versions it is simply passed as NULL.  Can anyone provide info on where it should be initialized, and whether passing NULL has any adverse effects?&lt;br /&gt;
&lt;br /&gt;
XFS_IOCORE_RT is not used in the 2.6 versions, so would it be OK to pass XFS_IOCORE_EXCL instead of this flag, or would that cause a crash or other adverse effects?  Or is there an alternative that solves both problems?&lt;br /&gt;
&lt;br /&gt;
Regards &lt;br /&gt;
Anshul Kundra &lt;br /&gt;
HCL TECHNOLOGIES &lt;br /&gt;
ERS&lt;br /&gt;
: Has been answered on the [http://www.spinics.net/lists/xfs/msg09007.html mailinglist] -- [[User:Ckujau|Ckujau]] 23:39, 16 February 2012 (UTC)&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User_talk:59.151.53.100&amp;diff=2419</id>
		<title>User talk:59.151.53.100</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User_talk:59.151.53.100&amp;diff=2419"/>
		<updated>2012-02-16T23:29:03Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: spam!&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{delete}}&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Status_Updates&amp;diff=2415</id>
		<title>XFS Status Updates</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Status_Updates&amp;diff=2415"/>
		<updated>2012-02-09T17:39:16Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: s/quite/quiet/&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== XFS status update for December 2011 ==&lt;br /&gt;
&lt;br /&gt;
December saw further stabilization of the Linux 3.2 release candidates.&lt;br /&gt;
For XFS that meant two important fixes for sync() data integrity, to&lt;br /&gt;
work around issues introduced in the VFS sync code in the past few&lt;br /&gt;
kernel releases.  These fixes have also been backported to the 3.0-stable&lt;br /&gt;
release.&lt;br /&gt;
&lt;br /&gt;
Development for the next merge window continued at a fast pace, although&lt;br /&gt;
only a relatively small number of patches was merged into the development&lt;br /&gt;
tree for the Linux 3.3 window.  The most interesting topic in December&lt;br /&gt;
probably was further development of the SEEK_DATA /  SEEK_HOLE support,&lt;br /&gt;
including defining the exact semantics in presence of unwritten extents&lt;br /&gt;
and proper test coverage.&lt;br /&gt;
&lt;br /&gt;
On the user space side December was fairly quiet, with about a handful of&lt;br /&gt;
fixes committed to xfsprogs, two new test cases and a couple of fixes in&lt;br /&gt;
xfstests, and no activity in xfsdump.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for November 2011 ==&lt;br /&gt;
&lt;br /&gt;
November saw stabilization of the Linux 3.2 release candidates, including&lt;br /&gt;
a few fixes for XFS.  In addition a lot of bug fixes were backported to&lt;br /&gt;
the 3.0 long term stable and 3.1-stable releases for users not on&lt;br /&gt;
bleeding edge kernels.&lt;br /&gt;
&lt;br /&gt;
At the same time development for Linux 3.3 went on at a fast pace, although&lt;br /&gt;
no patches were merged into the development tree yet.  The highlights are:&lt;br /&gt;
&lt;br /&gt;
 - further versions of the patches to log all file size updates instead of&lt;br /&gt;
   relying on the flaky VM writeback code for them&lt;br /&gt;
 - an initial version of SEEK_HOLE/SEEK_DATA support&lt;br /&gt;
 - removal of the old non-delaylog logging code, and cleanups resulting&lt;br /&gt;
   from the removal&lt;br /&gt;
 - large updates for the quota code&lt;br /&gt;
&lt;br /&gt;
Userspace development was even more busy:&lt;br /&gt;
&lt;br /&gt;
Xfsprogs saw the rushed 3.1.7 release which contains Debian packaging fixes,&lt;br /&gt;
a Polish translation update and an xfs_repair fix.  In the meantime a lot of&lt;br /&gt;
xfs_repair fixes were posted but mostly not reviewed and committed yet.&lt;br /&gt;
&lt;br /&gt;
Xfsdump grew support for using pthreads to write backup streams to multiple&lt;br /&gt;
tapes in parallel, and SGI_XFSDUMP_SKIP_FILE which has been deprecated in&lt;br /&gt;
favor of the nodump flag has finally been removed.&lt;br /&gt;
Xfstests saw an enormous amount of updates.  The fsstress tool saw major&lt;br /&gt;
updates to exercise even more system calls, and found numerous bugs in&lt;br /&gt;
all major Linux filesystems, additional ENOSPC tests, a new test for&lt;br /&gt;
btrfs-specific functionality and the usual amount of bug fixes and small&lt;br /&gt;
cleanups.  Also a series to clean up the very large filesystem testing,&lt;br /&gt;
including extending the support to ext4 was posted but not committed yet.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for October 2011 ==&lt;br /&gt;
&lt;br /&gt;
October finally saw the delayed release of Linux 3.1, which is a fairly&lt;br /&gt;
boring release as far as XFS is concerned.  In addition to a few bug fixes&lt;br /&gt;
and cleanups, the biggest item is an XFS-internal reorganization of the&lt;br /&gt;
source files, dropping all subdirectories under fs/xfs.&lt;br /&gt;
&lt;br /&gt;
Due to the long Linux 3.1 release cycle development for 3.3 has already&lt;br /&gt;
started full steam in October while adding a few more small optimizations&lt;br /&gt;
and fixes to the development tree for Linux 3.2, and merging that tree&lt;br /&gt;
into mainline.&lt;br /&gt;
&lt;br /&gt;
Notable items for Linux 3.2 are speedup for parallel O_DIRECT reads and&lt;br /&gt;
writes on high IOPS devices, optimizations for fsync(2) on directories&lt;br /&gt;
and sync(2) latency, as well as further small improvements for metadata&lt;br /&gt;
performance on highly parallel workloads.&lt;br /&gt;
&lt;br /&gt;
On the user space side xfsprogs saw a few more xfs_repair fixes, as well&lt;br /&gt;
as some updates of mount point handling for the xfs_quota tools, which&lt;br /&gt;
together with the updates from the last months was published in form&lt;br /&gt;
of the xfsprogs 3.1.6 release.  This was accompanied by an xfsdump&lt;br /&gt;
3.0.6 release, which does not include any new updates in October, but&lt;br /&gt;
lots of work from the previous month.  Xfstests saw two additional&lt;br /&gt;
test cases and various fixes, and its first versioned release ever.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for September 2011 ==&lt;br /&gt;
&lt;br /&gt;
September saw further release candidates of Linux 3.1, which had been&lt;br /&gt;
completely uneventful with just a single small regression fix being&lt;br /&gt;
merged.&lt;br /&gt;
&lt;br /&gt;
In the meantime development for the Linux 3.2 kernel went on with the merge&lt;br /&gt;
of a large series that completely refactors the XFS-internal xfs_bmapi&lt;br /&gt;
interfaces for simpler interfaces and less stack usage, as well as various&lt;br /&gt;
smaller cleanups and fixes.&lt;br /&gt;
&lt;br /&gt;
September also was a very busy month for userspace development. In xfsprogs&lt;br /&gt;
we saw various error handling fixes to libxcmd, libxfs, mkfs.xfs, xfs_quota&lt;br /&gt;
and xfs_repair, xfsdump saw a few smaller changes finishing up the large&lt;br /&gt;
work done in August.  Xfstests saw 4 new test cases contributed from&lt;br /&gt;
various developers, and the usual handful of bug fixes.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for August 2011 ==&lt;br /&gt;
&lt;br /&gt;
August saw further release candidates of Linux 3.1, which had been quiet&lt;br /&gt;
for XFS except for the bulk renaming of many XFS source files so that all&lt;br /&gt;
source files are now located directly underneath the fs/xfs/ directory.&lt;br /&gt;
&lt;br /&gt;
A lot of development for the Linux 3.2 kernel series was going on,&lt;br /&gt;
including an overhaul of the data I/O completion handler, further buffer&lt;br /&gt;
cache speedups and cleanups, a major refactoring around xfs_bmapi, quota&lt;br /&gt;
locking changes, and optimization for direct I/O on high IOP solid state&lt;br /&gt;
devices.&lt;br /&gt;
&lt;br /&gt;
On the userspace side xfsdump saw a lot of cleanups in preparation for porting&lt;br /&gt;
the multithreaded dump and restore code from IRIX.  Xfsprogs saw a few fixes&lt;br /&gt;
to the xfs_quota tool and mkfs.xfs as well as a man page update.  This month&lt;br /&gt;
xfstests did not see any new test cases, but it got the usual amount of fixes&lt;br /&gt;
and grew support for jfs and NFS v4.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for July 2011 ==&lt;br /&gt;
&lt;br /&gt;
July finally saw the release of Linux 3.0, including a relatively small XFS&lt;br /&gt;
update:&lt;br /&gt;
&lt;br /&gt;
  37 files changed, 1168 insertions(+), 847 deletions(-)&lt;br /&gt;
&lt;br /&gt;
The primary news in this release is a complete rework of the busy extent&lt;br /&gt;
tracking, which speeds up allocation-heavy multithreaded workloads.  This&lt;br /&gt;
feature also allowed adding support for discard at transaction&lt;br /&gt;
commit time using the discard mount option.  While the implementation of&lt;br /&gt;
discards in XFS is state of the art, it should be considered mostly a&lt;br /&gt;
technology preview until various efficiency issues in the block layer&lt;br /&gt;
discard support are sorted out.  Another important feature visible to&lt;br /&gt;
users is that XFS now supports using external logs even when using volatile&lt;br /&gt;
write caches, although the implementation is not fully optimized yet.&lt;br /&gt;
The rest of the changes consists of the usual pile of bug fixes and a&lt;br /&gt;
relatively small set of cleanups.  After the release of Linux 3.0 the&lt;br /&gt;
merge window for Linux 3.1 also fell mostly into July.   The XFS merge&lt;br /&gt;
for 3.1 included further speedups for the AIL code, and a huge amount&lt;br /&gt;
of cleanups.&lt;br /&gt;
&lt;br /&gt;
On the userspace side the biggest item was the merge of the libxfs resync&lt;br /&gt;
with the Linux 2.6.39 kernel code.  In addition to that xfsprogs saw small&lt;br /&gt;
xfs_repair updates, xfstests saw various fixes to fsx, and various build&lt;br /&gt;
system fixes were committed to all userspace repositories.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for June 2011 ==&lt;br /&gt;
&lt;br /&gt;
In June we saw more release candidates for Linux 3.0, which contain a few&lt;br /&gt;
XFS fixes, but no major updates.  No updates were committed to the XFS&lt;br /&gt;
development tree for Linux 3.1 either, although the mailing list has been&lt;br /&gt;
rather busy with updates for that merge window.&lt;br /&gt;
&lt;br /&gt;
On the user space side the xfsprogs and xfsdump repositories didn&#039;t see&lt;br /&gt;
any updates, while xfstests has been rather busy with a lot of fixes&lt;br /&gt;
to various test cases.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for May 2011 ==&lt;br /&gt;
&lt;br /&gt;
May finally saw the release of Linux 2.6.39, which was a little calmer&lt;br /&gt;
than usual for XFS, and only contains about half the amount of changes&lt;br /&gt;
we are used to seeing:&lt;br /&gt;
&lt;br /&gt;
  58 files changed, 1660 insertions(+), 1912 deletions(-)&lt;br /&gt;
&lt;br /&gt;
The most visible change is an overhaul of the XFS-internal interfaces&lt;br /&gt;
to print kernel messages, which makes all messages from XFS look slightly&lt;br /&gt;
different from before by always providing information about which device&lt;br /&gt;
these messages relate to.  In addition to that, support for the RT subvolume,&lt;br /&gt;
which had been broken for a while, has been resurrected, the XFS buffer cache&lt;br /&gt;
switched away from using the Linux pagecache to improve performance on&lt;br /&gt;
metadata intensive workloads, and all but one of the XFS kernel threads have&lt;br /&gt;
been switched to the new concurrent managed workqueue infrastructure that&lt;br /&gt;
is present in more recent Linux 2.6 releases.&lt;br /&gt;
&lt;br /&gt;
In the meantime development for the release now known as Linux 3.0 went&lt;br /&gt;
ahead full steam up to the merge of the XFS tree into Linux 3.0-rc1. News&lt;br /&gt;
in that release contain support for vastly improved busy extent tracking,&lt;br /&gt;
support for online discard (aka TRIM) and the usual amount of bug fixes.&lt;br /&gt;
&lt;br /&gt;
On the user space side xfsprogs saw a fix for a corner case in xfs_repair,&lt;br /&gt;
and xfstests saw a few bug fixes as well as a new test case to test&lt;br /&gt;
btrfs-specific functionality.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for April 2011 ==&lt;br /&gt;
&lt;br /&gt;
April saw further stabilization work on the Linux 2.6.39 kernel, including&lt;br /&gt;
a number of XFS bug fixes.  Most importantly, a series of patches fixed various&lt;br /&gt;
OOM problems due to bad interactions between the generic writeback code&lt;br /&gt;
and XFS inode reclaim, but there also were other patches for various smaller&lt;br /&gt;
issues.  In the meantime the XFS development tree saw the addition of the&lt;br /&gt;
optimized busy extent tracking, which allows large speedups for multi-threaded&lt;br /&gt;
metadata-heavy workloads, and lays the groundwork for discard support on&lt;br /&gt;
transaction commit, and a few other smaller patches.&lt;br /&gt;
&lt;br /&gt;
On the user space side the xfsprogs and xfsdump repositories saw a very quiet &lt;br /&gt;
month with no applied patches, although a few were posted and discussed on &lt;br /&gt;
the mailing list. The xfstests repository on the other hand saw new test&lt;br /&gt;
cases exercising the xfs_metadump functionality as well as fixes to existing&lt;br /&gt;
tests.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for March 2011 ==&lt;br /&gt;
&lt;br /&gt;
March saw the release of Linux 2.6.38, which included a sizable XFS update.&lt;br /&gt;
The most prominent new features of XFS in Linux 2.6.38 are support for the&lt;br /&gt;
FITRIM ioctl that allows discarding unused space on the filesystem&lt;br /&gt;
periodically, better handling of persistent preallocations especially on&lt;br /&gt;
NFS servers, and further scalability improvements in the buffer cache&lt;br /&gt;
and log code.  In addition to that the release includes a wide range&lt;br /&gt;
of fixes and cleanups to the code base.  The diff stat for XFS in the&lt;br /&gt;
Linux 2.6.38 release is:&lt;br /&gt;
&lt;br /&gt;
  57 files changed, 2964 insertions(+), 2528 deletions(-)&lt;br /&gt;
&lt;br /&gt;
Which means the XFS code base actually had a minor growth in code size&lt;br /&gt;
this time around.  In the second half of March the XFS development&lt;br /&gt;
tree got merged into Linus&#039; tree for Linux 2.6.39.  Linux 2.6.39 is going&lt;br /&gt;
to be a rather quiet release for XFS, mostly concentrating on settling&lt;br /&gt;
the large changes that went into the last releases and smaller cleanups.&lt;br /&gt;
The only user visible change will be that the delaylog option which&lt;br /&gt;
improves metadata performance and scalability is now turned on by default,&lt;br /&gt;
and a couple of fixes that make the realtime subvolume support usable&lt;br /&gt;
again.&lt;br /&gt;
&lt;br /&gt;
On the user space side both xfsprogs and xfsdump saw new releases in March.&lt;br /&gt;
The xfsprogs 3.1.5 release contains various smaller updates to xfs_repair,&lt;br /&gt;
xfs_metadump and xfs_quota, as well as support for the new generic hole&lt;br /&gt;
punching in the fallocate system call in the xfs_io tool.  The xfsdump 3.0.5&lt;br /&gt;
release now supports up to 4 billion directory entries, has much better&lt;br /&gt;
performance for large dumps, and some improvements to the inventory code&lt;br /&gt;
and dumping of quota information, as well as long overdue updates to the&lt;br /&gt;
build system.  The xfstests repository has seen various build system&lt;br /&gt;
improvements, better FIEMAP testing, falloc support for fsx and a few&lt;br /&gt;
cleanups.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for February 2011 ==&lt;br /&gt;
&lt;br /&gt;
February saw the stabilization of the Linux 2.6.38 tree, with just two&lt;br /&gt;
small XFS fixes going into Linus&#039; tree, and the XFS development tree&lt;br /&gt;
has been similarly quiet with just a few cleanups, and the delaylog option&lt;br /&gt;
propagated to the default operation mode.  A few more patches for the 2.6.39&lt;br /&gt;
merge window have been posted and/or discussed on the mailing list, but February&lt;br /&gt;
was a rather quiet month in general.&lt;br /&gt;
&lt;br /&gt;
On the user space side xfsprogs saw a few bug fixes, and a speedup for&lt;br /&gt;
phase2 of xfs_repair, xfsdump saw a bug fix and support for pruning the&lt;br /&gt;
inventory by session id, and xfstests saw its usual stream of bug fixes&lt;br /&gt;
as well as two new test cases.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for January 2011 ==&lt;br /&gt;
&lt;br /&gt;
On the 4th of January we saw the release of Linux 2.6.37, which contains a&lt;br /&gt;
large XFS update:&lt;br /&gt;
&lt;br /&gt;
    67 files changed, 1424 insertions(+), 1524 deletions(-)&lt;br /&gt;
&lt;br /&gt;
User visible changes are the new XFS_IOC_ZERO_RANGE ioctl, which allows&lt;br /&gt;
converting already allocated space into unwritten extents that return&lt;br /&gt;
zeros on a read, and support for 32-bit wide project IDs.  The other large&lt;br /&gt;
item is a set of changes to improve metadata scalability even further,&lt;br /&gt;
through changes to the buffer cache, inode lookup and other parts of the&lt;br /&gt;
filesystem driver.&lt;br /&gt;
&lt;br /&gt;
After that the XFS development tree for 2.6.38 was merged into mainline,&lt;br /&gt;
with an even larger set of changes.  Notable items include support for the&lt;br /&gt;
FITRIM ioctl to discard unused space on SSDs and thinly provisioned storage&lt;br /&gt;
systems, a buffer LRU scheme to improve hit rates for metadata, an&lt;br /&gt;
overhaul of the log subsystem locking, dramatically improving scalability&lt;br /&gt;
in that area, and much smarter handling of preallocations, especially&lt;br /&gt;
for files closed and reopened frequently, e.g. by the NFS server.&lt;br /&gt;
&lt;br /&gt;
User space development has been very quiet, with just a few fixes committed&lt;br /&gt;
to the xfstests repository, although various additional patches for xfsprogs&lt;br /&gt;
and xfstests that haven&#039;t been committed yet were discussed on the mailing list.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for December 2010 ==&lt;br /&gt;
&lt;br /&gt;
The release process of the Linux 2.6.37 kernel with its large XFS updates&lt;br /&gt;
was in its final days in December, which explains why we only saw a single&lt;br /&gt;
one-liner regression fix for XFS in Linus&#039; tree.  The XFS development tree&lt;br /&gt;
finally saw some updates when the writeback updates and some small cleanups&lt;br /&gt;
to the allocator and log recovery code were merged, but the large metadata&lt;br /&gt;
scalability updates that have been posted to the list multiple times are&lt;br /&gt;
still missing.  In addition to this on-going work the list also saw patches&lt;br /&gt;
that fix smaller issues, which are also still waiting to be merged.&lt;br /&gt;
&lt;br /&gt;
On the userspace side xfsprogs and xfsdump development has been quiet, with&lt;br /&gt;
no commits to either repository in December, although a large series of&lt;br /&gt;
updates to the metadump command has been reposted near the end of the month.&lt;br /&gt;
The xfstests repository saw a new regression test for a btrfs problem,&lt;br /&gt;
and various updates to existing tests.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for November 2010 ==&lt;br /&gt;
&lt;br /&gt;
From looking at the kernel git commits November looked like a pretty&lt;br /&gt;
slow month with just two handfuls of fixes going into the release candidates&lt;br /&gt;
for Linux 2.6.37, and none at all going into the development tree.&lt;br /&gt;
But in this case git statistics didn&#039;t tell the whole story - there&lt;br /&gt;
was a lot of activity on patches for the next merge window on the list.&lt;br /&gt;
The focus in November was still on metadata scalability, with various&lt;br /&gt;
patchsets that improve parallel creates and unlinks again, and also&lt;br /&gt;
improve 8-way dbench throughput by 30%.  In addition to that there&lt;br /&gt;
were patches to improve preallocation for NFS servers, to simplify&lt;br /&gt;
the writeback code, and to replace the XFS-internal percpu counters&lt;br /&gt;
for free space with the generic kernel percpu counters, which just needed&lt;br /&gt;
a small improvement.&lt;br /&gt;
&lt;br /&gt;
On the user space side we saw the release of xfsprogs 3.1.4, which&lt;br /&gt;
contains various accumulated bug fixes and Debian packaging updates.&lt;br /&gt;
The xfsdump tree saw a large update to speed up restore by using&lt;br /&gt;
mmap for an internal database and remove the limitation of ~ 214&lt;br /&gt;
million directory entries per dump file.  The xfstests test suite&lt;br /&gt;
saw three new testcases and various fixes, including support for the&lt;br /&gt;
hfsplus filesystem.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for October 2010 ==&lt;br /&gt;
&lt;br /&gt;
Near the end of the month we finally saw the release of Linux 2.6.36.&lt;br /&gt;
Just a single fix made it into mainline in this month, showing that the&lt;br /&gt;
stabilization period before has worked very well.&lt;br /&gt;
&lt;br /&gt;
Linux 2.6.36 has been another impressive release for XFS, seeing&lt;br /&gt;
various performance improvements in the new delayed logging code,&lt;br /&gt;
for direct I/O and the sync system call, a few bug fixes, and lots&lt;br /&gt;
of cleanups, resulting in a net removal of over 2000 lines of code:&lt;br /&gt;
&lt;br /&gt;
        89 files changed, 1998 insertions(+), 4279 deletions(-)&lt;br /&gt;
&lt;br /&gt;
The merge window for Linux 2.6.37 opened just a few days after the&lt;br /&gt;
release of Linux 2.6.36 and already contains another large XFS update&lt;br /&gt;
at the end of October.  Highlights of the XFS tree merged into 2.6.37-rc1&lt;br /&gt;
are another large set of metadata scalability patches, support for 32-bit&lt;br /&gt;
wide project IDs, and support for the new XFS_IOC_ZERO_RANGE ioctl,&lt;br /&gt;
which allows punching a hole and converting it to an unwritten extent&lt;br /&gt;
in a single atomic operation.&lt;br /&gt;
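The effect of the new ioctl can be sketched with a toy model (plain Python for&lt;br /&gt;
illustration only; this is not the kernel implementation, and all names below are&lt;br /&gt;
invented): zeroing a range only flips extent state, so reads return zeros without&lt;br /&gt;
any zero-filled data blocks ever being written:&lt;br /&gt;

```python
# Toy extent-state model (illustration only, not XFS code): a file is
# a map from block number to extent state. A ZERO_RANGE-style operation
# flips a range to "unwritten" in one metadata update, so reads return
# zeros without writing zero-filled data blocks.

BLOCK = 4096

def read_block(extents, data, blkno):
    # Reads from unwritten (or absent) extents return zeros.
    if extents.get(blkno) == "written":
        return data[blkno]
    return b"\x00" * BLOCK

def zero_range_blocks(extents, start_blk, nblks):
    # The ioctl-like path: one state change per block, no data I/O.
    for b in range(start_blk, start_blk + nblks):
        extents[b] = "unwritten"

# A file with two written blocks:
extents = {0: "written", 1: "written"}
data = {0: b"A" * BLOCK, 1: b"B" * BLOCK}

zero_range_blocks(extents, 0, 2)
print(read_block(extents, data, 0) == b"\x00" * BLOCK)   # True
```
&lt;br /&gt;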
&lt;br /&gt;
The metadata scalability changes improve 8-way fs_mark of 50 million files&lt;br /&gt;
by over 15% and removal of those files by over 100%, with further&lt;br /&gt;
improvements expected by the next round of XFS metadata scalability&lt;br /&gt;
and VFS scalability improvements targeted at Linux 2.6.38.&lt;br /&gt;
&lt;br /&gt;
On the user space side October was a rather quiet month for xfsprogs, which&lt;br /&gt;
only saw the addition of 32-bit project ID handling, and a fix for&lt;br /&gt;
parsing the mount table in fsr when used together with disk encryption&lt;br /&gt;
tools.  A few patches for xfsdump were posted on the list, but none&lt;br /&gt;
was applied, leaving the majority of the user space activity to&lt;br /&gt;
xfstests, which saw very active development.  Various patches went&lt;br /&gt;
into xfstests to improve portability to filesystems with a limited&lt;br /&gt;
feature set, and to move more filters to generic code.  In addition&lt;br /&gt;
various cleanups to test cases in test programs were applied.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for September 2010 ==&lt;br /&gt;
&lt;br /&gt;
Mainline activity has been rather low in September, with only&lt;br /&gt;
two more fixes going into the 2.6.36 release candidates after the&lt;br /&gt;
large merge activity in August.  Development for the next merge&lt;br /&gt;
window has been more active.  The largest item was the inclusion&lt;br /&gt;
of the metadata scalability patch series, which provides very large&lt;br /&gt;
speedups for parallel metadata operations.  In addition a new&lt;br /&gt;
ioctl to punch holes and convert the hole to an unwritten extent&lt;br /&gt;
was added and a small number of cleanups also made it into the tree.&lt;br /&gt;
&lt;br /&gt;
Patches to add support for 32-bit wide project IDs and&lt;br /&gt;
using group and project quotas concurrently were posted to the list&lt;br /&gt;
and discussed but not yet included.&lt;br /&gt;
&lt;br /&gt;
Userspace development has been rather quiet again, with a single fix&lt;br /&gt;
committed to xfsprogs and xfsdump each.  The xfstests test suite grew&lt;br /&gt;
a new test case and received a few additional fixes.  Last but not least&lt;br /&gt;
the [http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide//tmp/en-US/html/index.html XFS Users Guide]&lt;br /&gt;
was updated with various factual corrections and spelling fixes.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for August 2010 ==&lt;br /&gt;
&lt;br /&gt;
On the first of August we finally saw the release of Linux 2.6.35,&lt;br /&gt;
which includes a large XFS update.  The most prominent feature in&lt;br /&gt;
Linux 2.6.35 is the new delayed logging code which provides massive&lt;br /&gt;
speedups for metadata-intensive workloads, but there has been&lt;br /&gt;
a large amount of other fixes and cleanups, leading to the following&lt;br /&gt;
diffstat:&lt;br /&gt;
&lt;br /&gt;
         67 files changed, 4426 insertions(+), 3835 deletions(-)&lt;br /&gt;
&lt;br /&gt;
Given the early release of Linux 2.6.35 the merge window for the&lt;br /&gt;
next release fully fell into the month of August.  The XFS updates&lt;br /&gt;
for Linux 2.6.36 include various additional performance improvements&lt;br /&gt;
in the delayed logging code, for direct I/O writes and for avoiding&lt;br /&gt;
synchronous transactions, as well as various fixes and a large amount&lt;br /&gt;
of cleanups, including the removal of the remaining dead DMAPI&lt;br /&gt;
code.&lt;br /&gt;
&lt;br /&gt;
On the userspace side we saw the 3.1.3 release of xfsprogs, which includes&lt;br /&gt;
various smaller fixes, support for the new XFS_IOC_ZERO_RANGE ioctl and&lt;br /&gt;
Debian packaging updates.  The xfstests package saw one new test case&lt;br /&gt;
and a couple of smaller patches, and xfsdump has not seen any updates at&lt;br /&gt;
all.&lt;br /&gt;
&lt;br /&gt;
The XMLified versions of the XFS users guide, training labs and filesystem&lt;br /&gt;
structure documentation are now available as on the fly generated html on&lt;br /&gt;
the xfs.org website and can be found at [[XFS_Papers_and_Documentation|Papers &amp;amp; Documentation]].&lt;br /&gt;
&lt;br /&gt;
== XFS status update for July 2010 ==&lt;br /&gt;
&lt;br /&gt;
July saw three more release candidates for the Linux 2.6.35 kernel, which&lt;br /&gt;
included a relatively large number of XFS updates.  There were two security&lt;br /&gt;
fixes, a small one to prevent swapext to operate on write-only file&lt;br /&gt;
descriptors, and a much larger one to properly validate inode numbers&lt;br /&gt;
coming from NFS clients or userspace applications using the bulkstat or&lt;br /&gt;
the open-by-handle interfaces.  In addition to that another relatively&lt;br /&gt;
large patch fixes the way inodes get reclaimed in the background, and&lt;br /&gt;
avoids inode caches growing out of bounds.&lt;br /&gt;
&lt;br /&gt;
In the meantime the code for Linux 2.6.36 got its last touches before&lt;br /&gt;
the expected opening of the merge window, by merging a few more last&lt;br /&gt;
minute fixes and cleanups.  The most notable one is a patch series&lt;br /&gt;
that fixes in-memory corruption when concurrently accessing unwritten&lt;br /&gt;
extents using the in-kernel AIO code.&lt;br /&gt;
&lt;br /&gt;
The userspace side was still quite slow, but saw a bit more activity&lt;br /&gt;
than in June.  In xfsprogs the xfs_db code grew two bug fixes, as did&lt;br /&gt;
the xfs_io tool.  The xfstests package saw one new test case and&lt;br /&gt;
various fixes to existing code.  Last but not least a few patches&lt;br /&gt;
affecting the build system for all userspace tools were committed.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for June 2010 ==&lt;br /&gt;
&lt;br /&gt;
The month of June saw a few important bug fixes for the Linux 2.6.35&lt;br /&gt;
release candidates.  That includes ensuring that files used for the&lt;br /&gt;
swapext ioctl are writable to the user, and doing proper validation&lt;br /&gt;
of inodes coming from untrusted sources, such as NFS exporting and&lt;br /&gt;
the open by handle system calls.  The main work however has been&lt;br /&gt;
focused on development for the Linux 2.6.36 merge window, including&lt;br /&gt;
merging various patches that have been out on the mailing list&lt;br /&gt;
for a long time.  Highlights include further performance improvements&lt;br /&gt;
for sync heavy metadata workloads, stack space reduction in the&lt;br /&gt;
writeback path and improvements of the XFS tracing infrastructure.&lt;br /&gt;
Also after some discussion the remaining hooks for DMAPI are going&lt;br /&gt;
to be dropped in mainline.   As a replacement a tree containing&lt;br /&gt;
full DMAPI support with a slightly cleaner XFS interaction will be&lt;br /&gt;
hosted by SGI.&lt;br /&gt;
&lt;br /&gt;
On the userspace side June was a rather slow month, with no updates&lt;br /&gt;
to xfsprogs and xfsdump at all, and just one new test case and a cleanup&lt;br /&gt;
applied to xfstests.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for May 2010 ==&lt;br /&gt;
&lt;br /&gt;
In May 2010 we saw the long awaited release of Linux 2.6.34, which includes&lt;br /&gt;
a large XFS update.  The most important feature appearing in 2.6.34 was the&lt;br /&gt;
new inode and quota flushing code, which leads to much better I/O patterns&lt;br /&gt;
for metadata-intensive workloads.  Additionally support for synchronous NFS&lt;br /&gt;
exports has been improved to give much better performance, and performance&lt;br /&gt;
for the fsync, fdatasync and sync system calls has been improved slightly.&lt;br /&gt;
A bug when resizing extremely busy filesystems has been fixed, which required&lt;br /&gt;
extensive modification to the data structure used for looking up the&lt;br /&gt;
per-allocation group data.  Last but not least there was a steady flow of&lt;br /&gt;
minor bug fixes and cleanups, leading to the following diffstat from&lt;br /&gt;
2.6.33 to 2.6.34:&lt;br /&gt;
&lt;br /&gt;
  86 files changed, 3209 insertions(+), 3178 deletions(-)&lt;br /&gt;
&lt;br /&gt;
Meanwhile active development aimed at the 2.6.35 merge window progressed.  The&lt;br /&gt;
major feature for this window is the merge of the delayed logging code,&lt;br /&gt;
which adds a new logging mode that dramatically reduces the bandwidth&lt;br /&gt;
required for log I/O.  See the &lt;br /&gt;
[http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs-delayed-logging-design.txt;h=96d0df28bed323d5596fc051b0ffb96ed8e3c8df;hb=HEAD documentation] for details.  Testers&lt;br /&gt;
for this new code are welcome.&lt;br /&gt;
&lt;br /&gt;
In userland xfsprogs saw the long awaited 3.1.2 release, which can be&lt;br /&gt;
considered a bug fix release for xfs_repair, xfs_fsr and mkfs.xfs.  After&lt;br /&gt;
the release a few more fixes were merged into the development tree.&lt;br /&gt;
The xfstests package saw various new tests, including many tests to&lt;br /&gt;
exercise the quota code, and a few fixes to existing tests.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for April 2010 ==&lt;br /&gt;
&lt;br /&gt;
In April 2.6.34 still was in the release candidate phase, with&lt;br /&gt;
a handful of XFS fixes making it into mainline.  Development for&lt;br /&gt;
the 2.6.35 merge window went ahead full steam at the same time.&lt;br /&gt;
&lt;br /&gt;
While a fair amount of patches hit the development tree these were&lt;br /&gt;
largely cleanups, with the real development activity happening on&lt;br /&gt;
the mailing list.  There was another round of patches and following&lt;br /&gt;
discussion on the scalable busy extent tracking and delayed logging&lt;br /&gt;
features mentioned last month.  They are expected to be merged in&lt;br /&gt;
May and queue up for the Linux 2.6.35 window.  Last but not least&lt;br /&gt;
April saw a large number of XFS fixes backported to the 2.6.32 and&lt;br /&gt;
2.6.33 -stable series.&lt;br /&gt;
&lt;br /&gt;
In user land xfsprogs has seen few but important updates, preparing&lt;br /&gt;
for a new release next month.  The xfs_repair tool saw a fix to&lt;br /&gt;
correctly enable the lazy superblock counters on an existing&lt;br /&gt;
filesystem, and xfs_fsr saw updates to better deal with dynamic&lt;br /&gt;
attribute forks.  Last but not least a port to Debian GNU/kFreeBSD&lt;br /&gt;
got merged. The xfstests test suite saw two new test cases and various&lt;br /&gt;
smaller fixes.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for March 2010 ==&lt;br /&gt;
&lt;br /&gt;
The merge window for Linux 2.6.34 closed in the first week of March,&lt;br /&gt;
with the important XFS features already landing in February.  Not&lt;br /&gt;
surprisingly the XFS merge activity in March has been rather slow,&lt;br /&gt;
with only about a dozen bug fix patches making it towards Linus&#039;&lt;br /&gt;
tree in that time.&lt;br /&gt;
&lt;br /&gt;
On the other hand development for the 2.6.35 merge window has&lt;br /&gt;
been very active.  Most importantly there was a lot of work on the&lt;br /&gt;
transaction and log subsystems.  Starting with a large patchset to&lt;br /&gt;
clean up and refactor the transaction subsystem and to introduce more&lt;br /&gt;
flexible I/O containers in the low-level logging code, work is&lt;br /&gt;
progressing towards a new, more efficient logging implementation.  While&lt;br /&gt;
this preparatory work has already been merged in the development tree,&lt;br /&gt;
the actual delayed logging implementation still needs more work after&lt;br /&gt;
the initial public posting.  The delayed logging implementation, which&lt;br /&gt;
is loosely modeled after the journaling mode in the ext3/4&lt;br /&gt;
and reiserfs filesystems, allows accumulating multiple asynchronous&lt;br /&gt;
transactions in memory instead of possibly writing them out&lt;br /&gt;
many times.  Using the new delayed logging mechanism the I/O bandwidth&lt;br /&gt;
used for the log decreases by orders of magnitude and performance&lt;br /&gt;
on metadata intensive workloads increases massively.&lt;br /&gt;
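The bandwidth saving is easy to see in a toy model (plain Python for illustration&lt;br /&gt;
only; this is not the kernel code, and the item names are invented): if many&lt;br /&gt;
transactions repeatedly dirty the same metadata items, writing the accumulated&lt;br /&gt;
state once per checkpoint replaces one log write per commit:&lt;br /&gt;

```python
# Toy model of delayed logging (illustration only, not XFS code).
# Each transaction dirties some metadata items. An eager log writes
# every dirty item at commit time; a delayed log accumulates the
# latest version of each item in memory and writes it once per
# checkpoint, so repeatedly-modified items hit the log only once.

def eager_log_writes(transactions):
    """Number of item writes when every commit writes its items."""
    return sum(len(items) for items in transactions)

def delayed_log_writes(transactions):
    """Number of item writes when commits are batched in memory and
    only the final version of each item is written at checkpoint."""
    pending = set()
    for items in transactions:
        pending.update(items)   # relogging replaces the in-memory copy
    return len(pending)

# 1000 transactions all touching the same two items, e.g. an inode
# and its directory block:
txns = [("inode 42", "dirblock 7") for _ in range(1000)]
print(eager_log_writes(txns))    # 2000 item writes
print(delayed_log_writes(txns))  # 2 item writes
```
&lt;br /&gt;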
&lt;br /&gt;
In addition to that a new version of the discard (aka TRIM) support&lt;br /&gt;
has been posted, this time entirely contained in kernel space&lt;br /&gt;
and without the need of a userspace utility to drive it.  Last but&lt;br /&gt;
not least the usual steady stream of cleanups and bug fixes has not&lt;br /&gt;
ceased this month either.&lt;br /&gt;
&lt;br /&gt;
Besides the usual flow of fixes and new test cases in the xfstests&lt;br /&gt;
test suite development on the userspace side has been rather slow.&lt;br /&gt;
Xfsprogs has only seen a single fix for SMP locking in xfs_repair&lt;br /&gt;
and support for building on Debian GNU/kFreeBSD, and xfsdump&lt;br /&gt;
has seen no commit at all.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for February 2010 ==&lt;br /&gt;
&lt;br /&gt;
February saw the release of the Linux 2.6.33 kernel, which includes&lt;br /&gt;
a large XFS update.  The biggest user-visible change in Linux 2.6.33&lt;br /&gt;
is that XFS now supports the generic Linux trace event infrastructure,&lt;br /&gt;
which allows tracing lots of XFS behavior with a normal production&lt;br /&gt;
kernel build.  Apart from that Linux 2.6.33 has been mostly a bug-fix&lt;br /&gt;
release, fixing various user reported bugs in previous releases.&lt;br /&gt;
The total diffstat for XFS in Linux 2.6.33 looks like:&lt;br /&gt;
&lt;br /&gt;
  84 files changed, 3023 insertions(+), 3550 deletions(-)&lt;br /&gt;
&lt;br /&gt;
In addition to that the merge window for Linux 2.6.34 opened and the&lt;br /&gt;
first merge of the XFS tree made it into Linus&#039; tree.  Unlike Linux&lt;br /&gt;
2.6.33 this merge window includes major feature work.  The most&lt;br /&gt;
important change for users is a new algorithm for inode and quota&lt;br /&gt;
writeback that leads to better I/O locality and improved metadata&lt;br /&gt;
performance.  The second big change is a rewrite of the per-allocation&lt;br /&gt;
group data lookup which fixes a long-standing problem in the code&lt;br /&gt;
to grow a live filesystem and will also ease future filesystem&lt;br /&gt;
shrinking support.  Not merged through the XFS tree, but of great&lt;br /&gt;
importance for embedded users is a new API that allows XFS to properly&lt;br /&gt;
flush cache lines on its log and large directory buffers, making&lt;br /&gt;
XFS work properly on architectures with virtually indexed caches,&lt;br /&gt;
such as parisc and various arm and mips variants.  Last but not&lt;br /&gt;
least there is an above average amount of cleanups that went into&lt;br /&gt;
Linus&#039; tree in this cycle.&lt;br /&gt;
&lt;br /&gt;
There have been more patches on the mailing list that haven&#039;t made&lt;br /&gt;
it to Linus&#039; tree yet, including an optimized implementation of&lt;br /&gt;
fdatasync(2) and massive speedups for metadata workloads on&lt;br /&gt;
NFS exported XFS filesystems.&lt;br /&gt;
&lt;br /&gt;
On the userspace side February has been a relatively quiet month.&lt;br /&gt;
Led by xfstests only a moderate amount of fixes made it into&lt;br /&gt;
the respective trees.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for January 2010 ==&lt;br /&gt;
&lt;br /&gt;
January saw additional release candidates of the Linux 2.6.33 kernel,&lt;br /&gt;
including a couple of bug fixes for XFS.  In the meantime the XFS tree&lt;br /&gt;
has been growing a large number of patches destined for the Linux 2.6.34&lt;br /&gt;
merge window: a large rework of the handling of per-AG data, support for&lt;br /&gt;
the quota netlink interface, better power saving behavior of the&lt;br /&gt;
XFS kernel threads, and of course various cleanups.&lt;br /&gt;
&lt;br /&gt;
A large patch series to replace the current asynchronous inode writeback&lt;br /&gt;
with a new scheme that uses the delayed write buffers was posted to&lt;br /&gt;
the list.  The new scheme, which achieves better I/O locality by&lt;br /&gt;
dispatching metadata I/O from a single place, has been discussed&lt;br /&gt;
extensively and is expected to be merged in February.&lt;br /&gt;
&lt;br /&gt;
On the userspace side January saw the 3.1.0 and 3.1.1 releases of xfsprogs,&lt;br /&gt;
as well as the 3.0.4 release of xfsdump.  The biggest changes in xfsprogs&lt;br /&gt;
3.1.0 were optimizations in xfs_repair that lead to a much lower memory&lt;br /&gt;
usage, and optional use of the blkid library for filesystem detection&lt;br /&gt;
and retrieving storage topology information.  The 3.1.1 release contained&lt;br /&gt;
various important bug fixes for these changes and various improvements to&lt;br /&gt;
the build system.  The major features of xfsdump 3.0.4 were fixes for&lt;br /&gt;
time stamp handling on 64-bit systems.&lt;br /&gt;
&lt;br /&gt;
The xfstests package also saw lots of activity, including various new testcases&lt;br /&gt;
and an improved build system.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for December 2009 ==&lt;br /&gt;
&lt;br /&gt;
December finally saw the long awaited release of Linux 2.6.32, which for&lt;br /&gt;
XFS is mostly a bug fix release, with the major changes being various&lt;br /&gt;
improvements to the sync path, including working around the grub boot&lt;br /&gt;
loader&#039;s expectation that metadata is on disk after a sync()&lt;br /&gt;
system call.  Together with a refactoring of the inode allocator this&lt;br /&gt;
gives a nice diffstat for this kernel release:&lt;br /&gt;
&lt;br /&gt;
 46 files changed, 767 insertions(+), 1048 deletions(-)&lt;br /&gt;
&lt;br /&gt;
In the meantime development for 2.6.33 has been going strong.  The&lt;br /&gt;
new event tracing code that allows observing the inner workings of XFS&lt;br /&gt;
in production systems has finally been merged, with another patch to&lt;br /&gt;
reduce the size of the tracing code by using new upstream kernel features&lt;br /&gt;
posted for review.  Also a large patch series has been posted which&lt;br /&gt;
changes per-AG data to be looked up by a radix tree instead of the&lt;br /&gt;
existing array.  This works around possible deadlocks and use after&lt;br /&gt;
free issues during growfs, and prepares for removing a global (shared)&lt;br /&gt;
lock from the free space allocators.  In addition to that a wide range&lt;br /&gt;
of fixes has been posted and applied.&lt;br /&gt;
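The problem the radix tree solves can be sketched as a toy model (plain Python&lt;br /&gt;
for illustration only; not kernel code, and the class names are invented): a&lt;br /&gt;
fixed per-AG array must be reallocated wholesale when growfs adds AGs, which is&lt;br /&gt;
what raced with concurrent lookups, while a sparse index lets new AGs be&lt;br /&gt;
inserted without relocating existing entries:&lt;br /&gt;

```python
# Toy model (illustration only): per-AG lookup via a fixed array
# vs. a sparse index (a dict stands in for the kernel radix tree).

class PerAgArray:
    def __init__(self, agcount):
        self.ags = [{"agno": i} for i in range(agcount)]

    def grow(self, new_agcount):
        # The old array is replaced wholesale -- in C, lookups still
        # holding a pointer to it can see stale or freed memory.
        new = [{"agno": i} for i in range(new_agcount)]
        new[: len(self.ags)] = self.ags
        self.ags = new

class PerAgIndex:
    def __init__(self, agcount):
        self.ags = {i: {"agno": i} for i in range(agcount)}

    def grow(self, new_agcount):
        # Existing entries never move; new AGs are simply inserted.
        for i in range(len(self.ags), new_agcount):
            self.ags[i] = {"agno": i}

idx = PerAgIndex(4)
before = idx.ags[2]
idx.grow(8)
print(idx.ags[2] is before)   # True: entry 2 was never relocated
```
&lt;br /&gt;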
&lt;br /&gt;
Work on the userspace packages has been just as busy.  In mkfs.xfs the&lt;br /&gt;
lazy superblock counter feature has now been enabled by default for the&lt;br /&gt;
upcoming xfsprogs 3.1.0 release, which will require kernel 2.6.22 for&lt;br /&gt;
the default mkfs invocation.  Also for mkfs.xfs a patch was posted&lt;br /&gt;
to correct the automatic detection of 4 kilobyte sector drives which&lt;br /&gt;
are expected to show up in large quantities in the real world soon.  The&lt;br /&gt;
norepair mode in xfs_repair has been enhanced with additional freespace&lt;br /&gt;
btree correction checks from xfs_db and is now identical to xfs_check in&lt;br /&gt;
filesystem consistency checking coverage.  A temporary file permission&lt;br /&gt;
problem has been fixed in xfs_fsr, and the libhandle library has been&lt;br /&gt;
fixed to better deal with symbolic links.  In xfs_io a few commands&lt;br /&gt;
that were added years ago have finally been wired up to actually be&lt;br /&gt;
usable.  And last but not least xfsdump saw a fix to the time stamp&lt;br /&gt;
handling in the backup format and some usability and documentation&lt;br /&gt;
improvements to xfsinvutil.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for November 2009 ==&lt;br /&gt;
&lt;br /&gt;
November was a relatively slow month for XFS development.  The XFS tree&lt;br /&gt;
that is destined for the Linux 2.6.33 merge window saw a few fixes and&lt;br /&gt;
cleanups applied to it, and a few important fixes still made it into the&lt;br /&gt;
last Linux 2.6.32 release candidates.  A few more patches including a&lt;br /&gt;
final version of the event tracing support for XFS were posted but not&lt;br /&gt;
reviewed yet.&lt;br /&gt;
&lt;br /&gt;
On the userspace side there has been a fair amount of xfsprogs activity.&lt;br /&gt;
The repair speedup patches have finally been merged into the main development&lt;br /&gt;
branch and a couple of other fixes to the various utilities made it in, too.&lt;br /&gt;
The xfstests test suite saw another new regression test suite and a build&lt;br /&gt;
system fix up.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for October 2009 ==&lt;br /&gt;
In October we saw the Linux 2.6.32 merge window with a major XFS update.&lt;br /&gt;
This update includes a refactoring of the inode allocator which also&lt;br /&gt;
allows for speedups for very large filesystems, major sync fixes, updates&lt;br /&gt;
to the fsync and O_SYNC handling which merge the two code paths into a single&lt;br /&gt;
and more efficient one, a workaround for the VFS time stamp behavior,&lt;br /&gt;
and of course various smaller fixes.  A couple of additional fixes have been&lt;br /&gt;
queued up for the next merge window.&lt;br /&gt;
&lt;br /&gt;
On the userspace side there has been a healthy activity on xfsprogs:  mkfs can&lt;br /&gt;
now discard unused sectors on SSDs and thinly provisioned storage devices and&lt;br /&gt;
use the more generic libblkid for topology information and filesystems detection&lt;br /&gt;
instead of the older libdisk, and the build system gained some updates to&lt;br /&gt;
make the source package generation simpler and shared for different package&lt;br /&gt;
types.  A patch has been posted to the list but not yet committed to add symbol&lt;br /&gt;
versioning to the libhandle library to make future ABI additions easier.&lt;br /&gt;
The xfstests package only saw some minor activity with a new test case&lt;br /&gt;
and small build system fixes.&lt;br /&gt;
&lt;br /&gt;
New minor releases of xfsprogs and xfsdump were tagged but not formally&lt;br /&gt;
released after additional discussion.  Instead a new major xfsprogs release&lt;br /&gt;
is planned for next month.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for September 2009 ==&lt;br /&gt;
&lt;br /&gt;
In September the Linux 2.6.31 kernel was finally released, including another&lt;br /&gt;
last minute XFS fix for the swapext (defragmentation) compat ioctl handler.&lt;br /&gt;
The final patch from 2.6.30 to 2.6.31 shows the following impressive diffstat&lt;br /&gt;
for XFS:&lt;br /&gt;
&lt;br /&gt;
   55 files changed, 1476 insertions(+), 2269 deletions(-)&lt;br /&gt;
&lt;br /&gt;
The 2.6.32 merge window started with a large XFS merge that included changes&lt;br /&gt;
to the inode allocator, and a few smaller fixes.  New versions of the sync&lt;br /&gt;
and time stamp fixes as well as the event tracing support have been posted&lt;br /&gt;
in September but not yet merged into the XFS development tree and/or mainline.&lt;br /&gt;
&lt;br /&gt;
On the userspace side a large patch series to reduce the memory usage in&lt;br /&gt;
xfs_repair to acceptable levels was posted, but not yet merged.  A new xfs_df&lt;br /&gt;
shell script to measure on-disk space usage was posted but not yet&lt;br /&gt;
merged pending some minor review comments and a missing man page.  In addition&lt;br /&gt;
we saw the usual amount of smaller fixes and cleanups.&lt;br /&gt;
&lt;br /&gt;
Also this month Felix Blyakher resigned from his post as XFS maintainer and handed off to Alex Elder.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for August 2009 ==&lt;br /&gt;
&lt;br /&gt;
In August the Linux 2.6.31 kernel has still been in the release candidate&lt;br /&gt;
stage, but a couple of important XFS fixes made it in time for the release,&lt;br /&gt;
including a fix for the inode cache races with NFS workloads that have&lt;br /&gt;
plagued us for a long time.&lt;br /&gt;
&lt;br /&gt;
The list saw various patches destined for the Linux 2.6.32 merge window,&lt;br /&gt;
including a merge of the fsync and O_SYNC handling code to address various&lt;br /&gt;
issues with the latter, a workaround for deficits in the timestamp handling&lt;br /&gt;
interface between the VFS and filesystems, a repost of the sync improvements&lt;br /&gt;
patch series and various smaller patches.&lt;br /&gt;
&lt;br /&gt;
August also saw the minor 3.0.3 release of xfsprogs which collects smaller&lt;br /&gt;
fixes to the various tools and most importantly a fix to allow xfsprogs to&lt;br /&gt;
work again on SPARC and other architectures with strict alignment&lt;br /&gt;
requirements, which regressed a few releases ago.  The xfstests repository&lt;br /&gt;
saw a few new test cases and various small improvements.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for July 2009 ==&lt;br /&gt;
&lt;br /&gt;
As a traditional summer vacation month July has not seen a lot of XFS&lt;br /&gt;
activity.  The mainline 2.6.31 kernel made it to the 5th release candidate&lt;br /&gt;
but besides a few kernel-wide patches touching XFS the only activity were&lt;br /&gt;
two small patches fixing a bug in FIEMAP and working around writeback&lt;br /&gt;
performance problems in the VM.&lt;br /&gt;
&lt;br /&gt;
A few more patches were posted to the list but haven&#039;t been merged yet.&lt;br /&gt;
Two big patch series deal with theoretically possible deadlocks due to&lt;br /&gt;
locks taken in reclaim contexts, which are now detected by lockdep.&lt;br /&gt;
&lt;br /&gt;
The pace on the userspace side has been slow.  There have been a couple&lt;br /&gt;
of fixes to xfs_repair and xfs_db, and xfstests grew a few more testcases.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for June 2009 ==&lt;br /&gt;
&lt;br /&gt;
On June 9th we finally saw the release of Linux 2.6.30.  For XFS&lt;br /&gt;
this release mostly contains the improved ENOSPC handling, but also&lt;br /&gt;
various smaller bugfixes and lots of cleanups.  The code size of XFS&lt;br /&gt;
decreased again by 500 lines of code in this release.&lt;br /&gt;
&lt;br /&gt;
The Linux 2.6.31 merge window opened in the middle of the month and some big XFS&lt;br /&gt;
changes have been pushed: A removal of the quotaops&lt;br /&gt;
infrastructure which simplifies the quota implementation, the switch&lt;br /&gt;
from XFS&#039;s own Posix ACL implementation to the generic one shared&lt;br /&gt;
by various other filesystems which also supports in-memory caching of&lt;br /&gt;
ACLs and another incremental refactoring of the sync code.&lt;br /&gt;
&lt;br /&gt;
A patch to better track dirty inodes and work around issues in the&lt;br /&gt;
way the VFS updates the access time stamp on inodes has been reposted&lt;br /&gt;
and discussed. Another patch converting the existing XFS tracing&lt;br /&gt;
infrastructure to use the ftrace event tracer has been posted.&lt;br /&gt;
&lt;br /&gt;
On the userspace side there have been a few updates to xfsprogs, including&lt;br /&gt;
some repair fixes and a new fallocate command for xfs_io.  There were&lt;br /&gt;
major updates for xfstests:  The existing aio-dio-regress testsuite has&lt;br /&gt;
been merged into xfstests, and various changes went into the tree to make&lt;br /&gt;
xfstests better suited for use with other filesystems.&lt;br /&gt;
&lt;br /&gt;
The attr and acl projects, which have traditionally been hosted&lt;br /&gt;
as part of the XFS userspace utilities, have now been split into a separate&lt;br /&gt;
project maintained by Andreas Gruenbacher, who has been doing most of&lt;br /&gt;
the work on it, and moved to the Savannah hosting platform.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for May 2009 ==&lt;br /&gt;
&lt;br /&gt;
In May Linux 2.6.30 was getting close to being released, and various&lt;br /&gt;
important XFS fixes made it during the latest release candidates.&lt;br /&gt;
In the meantime some big patch series to rework the sync code and&lt;br /&gt;
the inode allocator have been posted for the next merge window.&lt;br /&gt;
&lt;br /&gt;
On the userspace side xfsprogs and xfsdump 3.0.1 were finally released,&lt;br /&gt;
quickly followed by 3.0.2 releases with updated Debian packaging.&lt;br /&gt;
After that various small patches that were held back made it into xfsprogs.&lt;br /&gt;
A patch to add the xfs_reno tool, which allows moving inodes around to&lt;br /&gt;
fit into the 32 bit inode number space, has been posted; this is also one&lt;br /&gt;
central aspect of future online shrinking support.&lt;br /&gt;
&lt;br /&gt;
There has been major activity on xfstests including adding generic&lt;br /&gt;
filesystems support to allow running tests that aren&#039;t XFS-specific on&lt;br /&gt;
any Linux filesystem.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for April 2009 ==&lt;br /&gt;
&lt;br /&gt;
In April development for Linux 2.6.30 was in full motion.  A patchset to correct flushing of delayed allocations with near full filesystems has been committed in early April, as well as various smaller fixes. A patch series to improve the behavior of sys_sync has been posted but is waiting for VFS changes queued for Linux 2.6.31.&lt;br /&gt;
&lt;br /&gt;
On the userspace side xfsprogs and xfsdump 3.0.1 have managed to slip their release dates into May again after a lot of last-minute build system updates.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for March 2009 ==&lt;br /&gt;
&lt;br /&gt;
Linux 2.6.29 has been released which includes major XFS updates like the&lt;br /&gt;
new generic btree code, a fully functional 32bit compat ioctl implementation&lt;br /&gt;
and the new combined XFS and Linux inode.  (See previous status reports&lt;br /&gt;
for more details). A patch series to improve correctness and performance&lt;br /&gt;
has been posted but not yet applied.  Various minor fixes and cleanups&lt;br /&gt;
have been sent to Linus for 2.6.30 which looks like it will be a minor&lt;br /&gt;
release for XFS after the big churn in 2.6.29.&lt;br /&gt;
&lt;br /&gt;
On the userspace side a lot of time has been spent on fixing and improving the&lt;br /&gt;
build system shared by the various XFS utilities as well as various smaller&lt;br /&gt;
improvements leading to the xfsprogs and xfsdump 3.0.1 releases which are&lt;br /&gt;
still outstanding.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for February 2009 ==&lt;br /&gt;
&lt;br /&gt;
In February various smaller fixes have been sent to Linus for 2.6.29,&lt;br /&gt;
including a revert of the faster vmap APIs which don&#039;t seem to be quite&lt;br /&gt;
ready yet on the VM side.  At the same time various patches have been&lt;br /&gt;
queued up for 2.6.30, with another big batch pending.  There also has&lt;br /&gt;
been a repost of the CRC patch series, including support for a new,&lt;br /&gt;
larger inode core.&lt;br /&gt;
&lt;br /&gt;
SGI released various bits of work in progress from former employees&lt;br /&gt;
that will be extremely helpful for the future development of XFS,&lt;br /&gt;
thanks a lot to Mark Goodwin for making this happen.&lt;br /&gt;
&lt;br /&gt;
On the userspace side the long awaited 3.0.0 releases of xfsprogs and&lt;br /&gt;
xfsdump finally happened early in the month, accompanied by a 2.2.9&lt;br /&gt;
release of the dmapi userspace.  There have been some issues with packaging&lt;br /&gt;
so a new minor release might follow soon.&lt;br /&gt;
&lt;br /&gt;
The xfs_irecover tool has been relicensed so that it can be merged into&lt;br /&gt;
the GPLv2 codebase of xfsprogs, but the actual integration work hasn&#039;t&lt;br /&gt;
happened yet.&lt;br /&gt;
&lt;br /&gt;
Important bits of XFS documentation that have been available on the XFS&lt;br /&gt;
website in PDF form have been released in document source form under&lt;br /&gt;
the Creative Commons license so that they can be updated as a community&lt;br /&gt;
effort, and checked into a public git tree.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for January 2009 ==&lt;br /&gt;
&lt;br /&gt;
January has been an extremely busy month on the userspace front.  Many&lt;br /&gt;
smaller and medium updates went into xfsprogs, xfstests and to a lesser&lt;br /&gt;
extent xfsdump.  xfsprogs and xfsdump are ramping up for getting a 3.0.0&lt;br /&gt;
release out in early February which will include the first major re-sync&lt;br /&gt;
with the kernel code in libxfs, a cleanup of the exported library interfaces&lt;br /&gt;
and the move of two tools (xfs_fsr and xfs_estimate) from the xfsdump&lt;br /&gt;
package to xfsprogs.  After this the xfsprogs package will contain all&lt;br /&gt;
tools that use internal libxfs interfaces which fortunately equates to those&lt;br /&gt;
needed for normal administration.  The xfsdump package now only contains&lt;br /&gt;
the xfsdump/xfsrestore tools needed for backing up and restoring XFS&lt;br /&gt;
filesystems.  In addition it grew a fix to support dump/restore on systems&lt;br /&gt;
with a 64k page size.  A large number of acl/attr package patches was&lt;br /&gt;
posted to the list but, pending a possible split of these packages from the&lt;br /&gt;
XFS project, they have not been processed yet.&lt;br /&gt;
&lt;br /&gt;
On the kernel side the big excitement in January was an in-memory corruption&lt;br /&gt;
introduced in the btree refactoring which hit people running 32bit platforms&lt;br /&gt;
without support for large block devices.  This issue was fixed and pushed&lt;br /&gt;
to the 2.6.29 development tree after a long collaborative debugging effort&lt;br /&gt;
at linux.conf.au.  Besides that about a dozen minor fixes were pushed to&lt;br /&gt;
2.6.29 and the first batch of misc patches for the 2.6.30 release cycle&lt;br /&gt;
was sent out.&lt;br /&gt;
&lt;br /&gt;
At the end of December the SGI group in Melbourne, which the previous&lt;br /&gt;
XFS maintainer and some other developers worked for, was closed down;&lt;br /&gt;
they will be missed greatly.  As a result maintainership has been passed&lt;br /&gt;
on in a way that has been slightly controversial in the community, and the&lt;br /&gt;
first patchsets of work in progress from Melbourne have been posted to the&lt;br /&gt;
list to be picked up by others.&lt;br /&gt;
&lt;br /&gt;
The xfs.org wiki has gotten a little facelift on its front page, making it&lt;br /&gt;
a lot easier to read.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for December 2008 ==&lt;br /&gt;
&lt;br /&gt;
On Christmas Eve the 2.6.28 mainline kernel was released, with only minor XFS&lt;br /&gt;
bug fixes over 2.6.27.&lt;br /&gt;
&lt;br /&gt;
On the development side December has been a busy but unspectacular month.&lt;br /&gt;
A lot of misc fixes and improvements have been sent out, tested and committed,&lt;br /&gt;
especially on the userspace side.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for November 2008 ==&lt;br /&gt;
&lt;br /&gt;
The mainline kernel is now at 2.6.28-rc6 and includes a small number of&lt;br /&gt;
XFS fixes.  There have been no updates to the XFS development tree during&lt;br /&gt;
November.  With no new regressions, the large number of changes that&lt;br /&gt;
missed 2.6.28 has thus stabilized and is ready for 2.6.29.  In the meantime&lt;br /&gt;
kernel-side development has been slow, with the only major patch set&lt;br /&gt;
being a wide number of fixes to the compatibility for 32 bit ioctls on&lt;br /&gt;
a 64 bit kernel.&lt;br /&gt;
&lt;br /&gt;
In the meantime there has been a large number of commits to the user space&lt;br /&gt;
tree, which mostly consist of smaller fixes.  xfsprogs is getting close&lt;br /&gt;
to its 3.0.0 release, which will be the first full resync with the&lt;br /&gt;
kernel sources since the year 2005.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for October 2008 ==&lt;br /&gt;
&lt;br /&gt;
Linux 2.6.27 was released with all the bits covered in last month&#039;s report.  It&lt;br /&gt;
did however miss two important fixes for regressions that a few people hit.&lt;br /&gt;
2.6.27.3 or later is recommended for use with XFS.&lt;br /&gt;
&lt;br /&gt;
In the meantime the generic btree implementation, the sync reorganization&lt;br /&gt;
and after a lot of merge pain the XFS and VFS inode unification hit the&lt;br /&gt;
development tree during the time allocated for the merge window.  No XFS&lt;br /&gt;
updates other than the two regression fixes also in 2.6.27.3 have made it&lt;br /&gt;
into mainline as of 2.6.28-rc3.&lt;br /&gt;
&lt;br /&gt;
The only new feature on the list in October is support for the fiemap&lt;br /&gt;
interface that has been added to the VFS during the 2.6.28 merge window.&lt;br /&gt;
However there was a lot of patch traffic consisting of fixes and respun&lt;br /&gt;
versions of previously known patches.  There is still a large backlog of&lt;br /&gt;
patches on the list that has not been applied to the development tree yet.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for September 2008 ==&lt;br /&gt;
&lt;br /&gt;
With Linux 2.6.27 still not released and only making slow progress from 2.6.27-rc5 to 2.6.27-rc8, XFS changes in mainline have been minimal in September, with only about half a dozen bug-fix patches.&lt;br /&gt;
&lt;br /&gt;
In the meantime the generic btree patch set has been committed to the development tree, but not many other updates yet. On the user space side xfsprogs 2.10.1 was released on September 5th with a number of important bug fixes. Following the release of xfsprogs 2.10.1 open season for development of the user space code has started. The first full update of the shared kernel / user space code in libxfs since 2005 has been committed. In addition to that the number of headers installed for the regular devel package has been reduced to the required minimum and support for checking the source code for endianness errors using sparse has been added.&lt;br /&gt;
&lt;br /&gt;
The patch sets to unify the XFS and Linux inode structures, and rewrite various bits of the sync code have seen various iterations on the XFS list, but haven&#039;t been committed yet. A first set of patches implementing CRCs for various metadata structures has been posted to the list.&lt;br /&gt;
&lt;br /&gt;
== XFS status update for August 2008 ==&lt;br /&gt;
&lt;br /&gt;
With the 2.6.27-rc5 release the 2.6.27 cycle is nearing its end. The major XFS feature in 2.6.27-rc5 is support for case-insensitive file names. At this point it is still limited to 7bit ASCII file names, with updates for utf8 file names expected to follow later. In addition to that 2.6.27-rc5 fixes a long-standing problem with non-EABI ARM compilers, which pack some XFS data structures wrongly. Besides this 2.6.27-rc5 also contains various cleanups, most notably the removal of the last bhv_vnode_t instances, and most uses of semaphores. As usual the diffstat for XFS from 2.6.26 to 2.6.27-rc5 is negative:&lt;br /&gt;
&lt;br /&gt;
       100 files changed, 3819 insertions(+), 4409 deletions(-)&lt;br /&gt;
&lt;br /&gt;
On the user space front a new minor xfsprogs version is about to be released containing various fixes, including the user space part of the ARM packing fix.&lt;br /&gt;
&lt;br /&gt;
Work in progress on the XFS mailing list includes a large patch set to unify the alloc, inobt and bmap btree implementations into a single one that supports arbitrarily pluggable key and record formats. These btree changes are the first major preparation for adding CRC checks to all metadata structures in XFS. There is also an even larger patch set to unify the XFS and Linux inode structures and perform all inode writeback from the btree instead of an inode cache in XFS.&lt;br /&gt;
&lt;br /&gt;
== Updates before 2008 ==&lt;br /&gt;
&lt;br /&gt;
News up to 2007 can be found on a separate page: [[OLD_News]]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Companies&amp;diff=2413</id>
		<title>XFS Companies</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Companies&amp;diff=2413"/>
		<updated>2012-02-03T19:59:44Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: 404 fixed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= These are companies that either use XFS or have a product that utilizes XFS. =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Info gathered from: [http://oss.sgi.com/projects/xfs/users.html XFS Users] on [http://oss.sgi.com/ oss.sgi.com]&lt;br /&gt;
&lt;br /&gt;
== [http://www.dell.com/ Dell&#039;s HPC NFS Storage Solution] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Dell NFS Storage Solution (NSS) is a unique new storage solution providing cost-effective NFS storage as an appliance. Designed to scale from 20 TB installations up to 80 TB of usable space, the NSS is delivered as a fully configured, ready-to-go storage solution and is available with full hardware and software support from Dell. ... XFS was chosen for the NSS because XFS is capable of scaling beyond 16 TB and provides good performance for a broad range of applications.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
More information is available in [http://i.dell.com/sites/content/business/solutions/hpcc/en/Documents/Dell-NSS-NFS-Storage-solution-final.pdf this solution guide].&lt;br /&gt;
&lt;br /&gt;
== [http://www.kernel.org/ The Linux Kernel Archives] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;A bit more than a year ago (as of October 2008) kernel.org, in an ever increasing need to squeeze more performance out of its machines, made the leap of migrating the primary mirror machines (mirrors.kernel.org) to XFS.  We cite a number of reasons including fscking 5.5T of disk is long and painful, we were hitting various cache issues, and we were seeking better performance out of our file system.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;After initial tests looked positive we made the jump, and have been quite happy with the results.  With an instant increase in performance and throughput, as well as the worst xfs_check we&#039;ve ever seen taking 10 minutes, we were quite happy.  Subsequently we&#039;ve moved all primary mirroring file-systems to XFS, including www.kernel.org , and mirrors.kernel.org.  With an average constant movement of about 400mbps around the world, and with peaks into the 3.1gbps range serving thousands of users simultaneously it&#039;s been a file system that has taken the brunt we can throw at it and held up spectacularly.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.sdss.org/ The Sloan Digital Sky Survey] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Sloan Digital Sky Survey is an ambitious effort to map one-quarter of the sky at optical and very-near infrared wavelengths and take spectra of 1 million extra-galactic objects. The estimated amount of data that will be acquired over the 5 year lifespan of the project is 15TB, however, the total amount of storage space required for object informational databases, corrected frames, and reduced spectra will be several factors more than this. The goal is to have all the data online and available to the collaborators at all times. To accomplish this goal we are using commodity, off the shelf (COTS) Intel servers with EIDE disks configured as RAID50 arrays using XFS. Currently, 14 machines are in production accounting for over 18TB. By the scheduled end of the survey in 2005, 50TB of XFS disks will be online serving SDSS data to collaborators and the public.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;For complete details and status of the project please see [http://www.sdss.org/ http://www.sdss.org]. For details of the storage systems, see the [http://web.archive.org/web/20100228003734/http://home.fnal.gov/~yocum/storageServerTechnicalNote.html SDSS Storage Server Technical Note] (Dan Yocum, Fermi National Lab, November 9 2001, archived).&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www-d0.fnal.gov/  The DØ Experiment at Fermilab] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;At the DØ experiment at the Fermi National Accelerator Laboratory we have a ~150 node cluster of desktop machines all using the SGI-patched kernel. Every large disk (&amp;amp;gt;40Gb) or disk array in the cluster uses XFS including 4x640Gb disk servers and several 60-120Gb disks/arrays. Originally we chose reiserfs as our journaling filesystem, however, this was a disaster. We need to export these disks via NFS and this seemed perpetually broken in 2.4 series kernels. We switched to XFS and have been very happy. The only inconvenience is that it is not included in the standard kernel. The SGI guys are very prompt in their support of new kernels, but it is still an extra step which should not be necessary.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.ciprico.com/pDiMeda.shtml  Ciprico DiMeda NAS Solutions] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Ciprico DiMeda line of Network Attached Storage solutions combine the ease of connectivity of NAS with the SAN like performance levels required for digital media applications. The DiMeda 3600 provides high availability and high performance through dual NAS servers and redundant, scalable Fibre Channel RAID storage. The DiMeda 1700 provides high performance files services at a low price by using the latest Serial ATA RAID technology. All DiMeda systems are based on Linux and use XFS as the filesystem. We tested a number of filesystem alternatives and XFS was chosen because it provided the highest performance in digital media applications and the journaling feature ensures rapid failover in our dual node fault tolerant configurations.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.quantum.com/Products/NAS+Servers/Guardian+14000/Default.htm  The Quantum Guardian™ 14000] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Quantum Guardian™ 14000, the latest Network Attached Storage (NAS) solution from Quantum, delivers 1.4TB of enterprise-class storage for less than $25,000. The Guardian 14000 is a Linux-based device which utilizes XFS to provide a highly reliable journaling filesystem with simultaneous support for Windows, UNIX, Linux and Macintosh environments. As a dedicated appliance optimized for fast, reliable file sharing, the Guardian 14000 combines the simplicity of NAS with a robust feature set designed for the most demanding enterprise environments. Support for tools such as Active Directory Service (ADS), UNIX Network Information Service (NIS) and Simple Network Management Protocol (SNMP) provides ease of management and seamless integration. Hardware redundancy, Snapshots and StorageCare™ on-site service ensure security for business-critical data.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.bigstorage.com/products_approach_overview.html  BigStorage K2~NAS] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;At BigStorage we pride ourselves on tailoring our NAS systems to meet our customer&#039;s needs, with the help of XFS we are able to provide them with the most reliable Journaling Filesystem technology available. Our open systems approach, which allows for cross-platform integration, gives our customers the flexibility to grow with their data requirements. In addition, BigStorage offers a variety of other features including total hardware redundancy, snapshotting, replication and backups directly from the unit. All of our products include BigStorage&#039;s 24/7 LiveResponse™ support. With LiveResponse™, we keep our team of experienced technical experts on call 24 hours a day, every day, to ensure that your storage investment remains online, all the time.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.echostar.com  Echostar DishPVR 721] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Echostar uses the XFS filesystem for its latest generation of satellite receivers, the DP721. Echostar chose XFS for its performance, stability and unique set of features.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS allowed us to meet a demanding requirement of recording two mpeg2 streams to the internal hard drive while simultaneously viewing a third pre-recorded stream. In addition, XFS allowed us to withstand unexpected power loss without filesystem corruption or user interaction.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We tested several other filesystems, but XFS emerged as the clear winner.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.sun.com/hardware/serverappliances/raq550/  Sun Cobalt RaQ™ 550] ==&lt;br /&gt;
&lt;br /&gt;
From the [http://www.sun.com/hardware/serverappliances/raq550/features.html features] page:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS is a journaling file system capable of quick fail over recovery after unexpected interruptions. XFS is an important feature for mission-critical applications as it ensures data integrity and dramatically reduces startup time by avoiding FSCK delay.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://pingu.salk.edu/  Center for Cytometry and Molecular Imaging at the Salk Institute] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I run the Center for Cytometry and Molecular Imaging at the Salk Institute in La Jolla, CA. We&#039;re a core facility for the Institute, offering flow cytometry, basic and deconvolution microscopy, phosphorimaging (radioactivity imaging) and fluorescent imaging.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I&#039;m currently in the process of migrating our data server to Linux/XFS. Our web server currently uses Linux/XFS. We have about 60 Gb on the data server which has a 100Gb SCSI RAID 5 array. This is a bit restrictive for our microscopists so in order that they can put more data online, I&#039;m adding another machine, also running Linux/XFS, with about 420 Gb of IDE-RAID5, based on Adaptec controllers....&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Servers are configured with quota and run Samba, NFS, and Netatalk for connectivity to the mixed bag of computers we have around here. I use the CVS XFS tree most of the time. I have not seen any problems in the several months I have been testing.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://coltex.nl/ Coltex Retail Group BV] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Coltex Retail group BV in the Netherlands uses Red Hat Linux with XFS for their main database server which collects the data from over 240 clothing retail stores throughout the Netherlands. Coltex depends on the availability of the server for over 100 employees in the main office for retrieval of logistical and sales figures. The database size is roughly 10GB large containing both historical and current data.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The entire production and logistical system depends on the availability of the system and downtime would mean a significant financial penalty. The speed and reliability of the XFS filesystem which has a proven track record and mature tools to go with it is fundamental to the availability of the system.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS has saved us a lot of time during testing and implementation. A long filesystem check is no longer needed when bad things happen. The increased speed of our database system which is based on Progress 9.1C is also a nice benefit to this filesystem.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.dkp.com/ DKP Effects] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;re a 3D computer graphics/post-production house. We&#039;ve currently got four fileservers using XFS under Linux online - three 350GB servers and one 800GB server. The servers are under fairly heavy load - network load to and from the dual NICs on the box is basically maxed out 18 hours a day - and we do have occasional lockups and drive failures. Thanks to Linux SW RAID5 and XFS, though, we haven&#039;t had any data loss, or significant down time.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.epigenomics.com/ Epigenomics] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We currently have several IDE-to-SCSI-RAID systems with XFS in production. The largest has a capacity of 1.5TB, the other 2 have 430GB each.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Data stored on these filesystems is on the one hand &amp;quot;normal&amp;quot; home directories and corporate documents and on the other hand scientific data for our laboratory and IT department.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.incyte.com/ Incyte Genomics] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I&#039;m currently in the process of slowly converting 21 clusters totaling 2300+ processors over to XFS.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;These machines are running a fairly stock RH7.1+XFS. The application is our own custom scheduler for doing genomic research. We have one of the world&#039;s largest sequencing labs which generates a tremendous amount of raw data. Vast amounts of CPU cycles must be applied to it to turn it into useful data we can then sell access to.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Currently, a minority of these machines are running XFS, but as I can get downtime on the clusters I am upgrading them to 7.1+XFS. When I&#039;m done, it&#039;ll be about 10TB of XFS goodness... across 9G disks mostly.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.monmouth.edu/ Monmouth University] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve replaced our NetApp filer (80GB, $40,000). NetApp ONTAP software [runs on NetApp filers] is basically an NFS and CIFS server with their own proprietary filesystem. We were quickly running out of space and our annual budget almost depleted. What were we to do?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;With an off-the-shelf Dell 4400 series server and 300GB of disks ($8,000 total), we were able to run Linux and Samba to emulate a NetApp filer.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS allowed us to manage 300GB of data with absolutely no downtime (now going on 79 days) since implementation. Gone are the days of fearing the fsck of 300GB.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.astro.wisc.edu  The University of Wisconsin Astronomy Department] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;At the University of Wisconsin Astronomy Department we have been using Linux XFS since the first release. We currently have 31 Linux boxes running XFS on all filesystems with about 2.6 TB of disk space on these machines. We use XFS primarily on our data reduction systems, but we also use it on our web server and on one of the remote observing machines at the WIYN 3.5m Telescope at Kitt Peak (http://www.noao.edu/wiyn/wiyn.html).&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We will likely be using Linux XFS at least in part on the GLIMPSE program (http://www.astro.wisc.edu/sirtf/) which will likely require several TB of disk space to process the data.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.amoa.org/ The Austin Museum of Art] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Austin Museum of Art has two file servers running RedHat 7.2_XFS upgraded from RedHat 7.1_XFS. Our webserver runs Domino on top of RedHat 7.3_XFS and we&#039;re getting about 70% better performance than the Domino server running on Windows 2000 Server. We&#039;re moving our workstations away from Windows and Microsoft Office to an LTSP server running on RedHat 7.3_XFS.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve become solely dependent on XFS for all of our data systems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.tecmath.com/ tecmath AG] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We use a production server with a 270 GB RAID 5 (hardware) disk array. It is based on a Suse 7.2 distribution, but with a standard 2.4.12 kernel with XFS and LVM patches. The server provides NFS to 8 Unix clients as well as Samba to about 80 PCs. The machine also runs Bind 9, Apache, Exim, DHCP, POP3, MySQL. I have tried out different configurations with ReiserFS, but I didn&#039;t manage to find a stable configuration with respect to NFS. Since I converted all disks to XFS some 3 months ago, we never had any filesystem-related problems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.theiqgroup.com/ The IQ Group] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Here at the IQ Group, Inc. we use XFS for all our production and development servers.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Our OS of choice is Slackware Linux 8.0. Our hardware of choice is Dell and VALinux servers.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;As for applications, we run the standard Unix/Linux apps like Sendmail, Apache, BIND, DHCP, iptables, etc.; as well as Oracle 9i and Arkeia.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve been running XFS across the board for about 3 months now without a hitch (so far).&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Size-wise, our biggest server is about 40 GB, but that will be increasing substantially in the near future.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Our production servers are collocated so a journaled FS was a must. Reboots are quick and no human interaction is required like with a bad fsck on ext2. Additionally, our database servers gain additional integrity and robustness.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We originally chose XFS over ReiserFS and ext3 because of its age (it&#039;s been in production on SGI boxes for probably longer than all the other journaling FS&#039;s combined) and its speed appeared comparable as well.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.artsit.usyd.edu.au  Arts IT Unit, Sydney University] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I&#039;ve got XFS on a &#039;production&#039; file server. The machine could have up to 500 people logged in, but typically less than 200. Most are Mac users, connected via NetAtalk for &#039;personal files&#039;, although there are shared areas for admin units. Probably about 30-40 windows users. (Samba) It&#039;s the file server for an Academic faculty at a University.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Hardware RAID, via Mylex dual channel controller with 4 drives, Intel Tupelo MB, Intel &#039;SC5000&#039; server chassis with redundant power and hot-swap scsi bays. The system boots off a non RAID single 9gb UW-scsi drive.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Only system &#039;crash&#039; was caused by someone accidentally unplugging it, just before we put it into production. It was back in full operation within 5 minutes. Without journaling, the fsck would have taken well over an hour. In day to day use it has run well.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://structbio.vanderbilt.edu/comp/  Vanderbilt University Center for Structural Biology] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I run a high-performance computing center for Structural Biology research at Vanderbilt University. We use XFS extensively, and have been since the late prerelease versions. I&#039;ve had nothing but good experiences with it.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We began using XFS in our search for a good solution for our RAID fileservers. We had such good experiences with it on these systems that we&#039;ve begun putting it on the root/usr/var partitions of every Linux system we run here. I even have it on my laptop these days. XFS in combination with the 2.4 NFS3 implementation performs very well for us, and we have great uptimes on these systems (Our 750GB ArenaII setup is at 143 days right now).&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;All told, we&#039;ve got about 1.2TB of XFS filesystems spinning right now. It&#039;s spread out across maybe a dozen or so filesystems and will continue to increase as we are growing fast and that&#039;s all we use now. Next up is putting it on our 17-node Linux cluster, which will bring that up to 1.5TB spread across 30 filesystems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I, for one, would LOVE to see XFS make it into the kernel tree. From my perspective, it&#039;s one of the best things to happen to Linux in the 7 years I&#039;ve been using/administering it.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== 2008 Update ====&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve since moved our main home directories to a proprietary NAS, but continue to use XFS on 10TB of LVM storage for doing backup-to-disk from the same NAS.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www-cdf.fnal.gov/  CDF Experiment at Fermi National Lab] ==&lt;br /&gt;
&lt;br /&gt;
CDF, an elementary particle physics experiment at Fermi National Lab, is using XFS for all our cache disks.&lt;br /&gt;
&lt;br /&gt;
The usage model is that we have a PB tape archive (2 STK silos) as permanent storage. In front of this archive we are deploying a roughly 100TB disk cache system. The cache is made up of 50 2TB file servers based on cheap commodity hardware (3ware-based hardware RAID using IDE drives). The data is then processed by a cluster of 300 dual-CPU Linux PCs. The cache software is dCache, a DESY/FNAL product.&lt;br /&gt;
&lt;br /&gt;
The whole system is used by more than 300 active users from all over the world for batch processing for their physics data analysis.&lt;br /&gt;
&lt;br /&gt;
== [http://www.get2chip.com  Get2Chip, Inc.] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We are using XFS on 3 production file servers with approximately 1.5T of data. Quite impressive especially when we had a power outage and all three servers shut down. All servers came back up in minutes with no problems! We are looking at creating two more servers that would manage 2+ TB of data store.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.lando.co.za  Lando International Group Technologies] ==&lt;br /&gt;
&lt;br /&gt;
Lando International Group Technologies is the home of:&lt;br /&gt;
&lt;br /&gt;
* [http://www.lando.co.za Lando Technologies Africa (Pty) Ltd] - Internet Service Provider&lt;br /&gt;
* [http://www.lbsd.net Linux Based Systems Design] (Article 21). Not-For-Profit company established to provide free Linux distributions and programs.&lt;br /&gt;
* Cell Park South Africa (Pty) Ltd. RSA Pat Appln 2001/10406. Collecting parking fees by means of cell phone SMS or voice.&lt;br /&gt;
* Read Plus Education (Pty) Ltd. Software based reading skills training and testing for ages 4 to 100.&lt;br /&gt;
* Mobivan. Mobile office including Internet access, fax, copying, printing, telephone, collection and delivery services, legal services, pre-paid phone and electricity services, bill payment, email, secretarial services, training facilities and management services.&lt;br /&gt;
* Lando International Marketing Agency. Direct marketing services, design and supply of promotional material, consulting, sourcing of capital and other funding.&lt;br /&gt;
* Illico. Software development and systems analysis on most platforms.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Throughout these companies, we use the XFS filesystem with [http://idms.lbsd.net IDMS Linux] on high-end Intel servers, with an average of 100 GB storage each. XFS stores our customer and user data, including credit card details, mail, routing tables, etc. We have not had one problem since the release of the first XFS patch.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.fcb-wilkens.com  Foote, Cone, &amp;amp;amp; Belding] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We are an advertisement company in Germany, and the use of the XFS filesystem is a story of success for us. In our Hamburg office, we have two file servers having a 420 Gig RAID in XFS format serving (almost) all our data to about 180 Macs and about 30 PCs using Samba and Netatalk. Some of the data is used in our offices in Frankfurt and Berlin, and in fact the Berlin office is just getting its own 250 Gig fileserver (using XFS) right now.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The general success with XFS has led us to switch over all our Linux servers to run on XFS as well (with the exception of two systems that are tied to tight specifications configuration wise). XFS, even the old 1.0 version, has happily taken on various abuse - broken SCSI controllers, broken RAID systems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.moving-picture.co.uk/  Moving Picture Company] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We here at MPC use XFS/RedHat 7.2 on all of our graphics-workstations and file-servers. More info can be found in an [http://www.linuxuser.co.uk/articles/issue20/lu20-Linux_at_work-In_the_picture.pdf  article] LinuxUser magazine did on us recently.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.coremetrics.com/  Coremetrics, Inc.] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We are currently using XFS for 25+ production web-servers, ~900GB Oracle db servers, with potentially 15+ more servers by mid 2003, with ~900GB+ databases. All XFS installed.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Also, our dev environment, except for the Sun boxes which all are being migrated to X86 in the aforementioned server additions, plus the dev Sun boxes as well, are all x86 dual proc servers running Oracle, application servers, or web services as needed. All servers run XFS from images we&#039;ve got on our SystemImager servers.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;All production back-end servers are connected via FC1 or FC2 to a SAN containing ~13TB of raw storage, which will soon be converted from VxFS to XFS with the migration of Oracle to our x86 platforms.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://evolt.org Evolt.org] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;evolt.org, a world community for web developers promoting the mutual free exchange of ideas, skills and experiences, has had a great deal of success using XFS. Our primary webserver which serves 100K hosts/month, primary Oracle database with ~25Gb of data, and free member hosting for 1000 users haven&#039;t had a minute of downtime since XFS has been installed. Performance has been spectacular and maintenance a breeze.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font size=&amp;quot;-1&amp;quot;&amp;gt; &#039;&#039;All testimonials on this page represent the views of the submitters, and references to other products and companies should not be construed as an endorsement by either the organizations profiled, or by SGI. All trademarks (r) their respective owners.&#039;&#039; &amp;lt;/font&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Papers_and_Documentation&amp;diff=2409</id>
		<title>XFS Papers and Documentation</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Papers_and_Documentation&amp;diff=2409"/>
		<updated>2012-01-30T03:47:51Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: s/XFSÂ/XFS/&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Primary XFS Documentation ===&lt;br /&gt;
&lt;br /&gt;
The XFS documentation started by SGI has been converted to docbook/[https://fedorahosted.org/publican/ Publican] format.  The material is suitable for experienced users as well as developers and support staff.  The XML source is available in a [http://git.kernel.org/?p=fs/xfs/xfsdocs-xml-dev.git;a=summary git repository] and builds of the documentation are available here:&lt;br /&gt;
&lt;br /&gt;
* [http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide//tmp/en-US/html/index.html XFS User Guide]&lt;br /&gt;
&lt;br /&gt;
* [http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure//tmp/en-US/html/index.html XFS File System Structure]&lt;br /&gt;
** [http://sites.google.com/site/kandamotohiro/xfs Japanese translation] is also available.&lt;br /&gt;
&lt;br /&gt;
* [http://xfs.org/docs/xfsdocs-xml-dev/XFS_Labs/tmp/en-US/html/index.html XFS Training Labs]&lt;br /&gt;
&lt;br /&gt;
* (Original versions of this material are still available at [http://oss.sgi.com/projects/xfs/training/index.html XFS Overview and Internals (html)] and [http://oss.sgi.com/projects/xfs/papers/xfs_filesystem_structure.pdf XFS Filesystem Structure (pdf)])&lt;br /&gt;
&lt;br /&gt;
The format of &amp;lt;tt&amp;gt;/proc/fs/xfs/stat&amp;lt;/tt&amp;gt; also has been documented:&lt;br /&gt;
* [[Runtime_Stats|Runtime_Stats]]&lt;br /&gt;
&lt;br /&gt;
=== Papers, Presentations, Etc ===&lt;br /&gt;
&lt;br /&gt;
At the linux.conf.au 2012 event, Dave Chinner presented a talk on filesystem metadata scalability:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS - Recent and Future Adventures in Filesystem Scalability&#039;&#039; [[http://www.youtube.com/watch?v=FegjLbCnoBw Video]] [[http://xfs.org/images/d/d1/Xfs-scalability-lca2012.pdf Presentation Slides]]&lt;br /&gt;
&lt;br /&gt;
The October 2009 issue of the USENIX ;login: magazine published an article about XFS targeted at system administrators:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS: The big storage file system for Linux&#039;&#039; [[http://oss.sgi.com/projects/xfs/papers/hellwig.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the Ottawa Linux Symposium (July 2006), Dave Chinner presented a paper on filesystem scalability in Linux 2.6 kernels:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;High Bandwidth Filesystems on Large Systems&#039;&#039; (July 2006) [[http://oss.sgi.com/projects/xfs/papers/ols2006/ols-2006-paper.pdf paper]] [[http://oss.sgi.com/projects/xfs/papers/ols2006/ols-2006-presentation.pdf presentation]]&lt;br /&gt;
&lt;br /&gt;
At linux.conf.au 2008 Dave Chinner gave a presentation about xfs_repair that he co-authored with Barry Naujok:&lt;br /&gt;
&lt;br /&gt;
* Fixing XFS Filesystems Faster [[http://mirror.linux.org.au/pub/linux.conf.au/2008/slides/135-fixing_xfs_faster.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
In July 2006, SGI storage marketing updated the XFS datasheet:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Open Source XFS for Linux&#039;&#039; [[http://oss.sgi.com/projects/xfs/datasheet.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At UKUUG 2003, Christoph Hellwig presented a talk on XFS:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS for Linux&#039;&#039; (July 2003) [[http://oss.sgi.com/projects/xfs/papers/ukuug2003.pdf pdf]] [[http://verein.lst.de/~hch/talks/ukuug2003/ html]]&lt;br /&gt;
&lt;br /&gt;
Originally published in Proceedings of the FREENIX Track: 2002 Usenix Annual Technical Conference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Filesystem Performance and Scalability in Linux 2.4.17&#039;&#039; (June 2002) [[http://oss.sgi.com/projects/xfs/papers/filesystem-perf-tm.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the Ottawa Linux Symposium, an updated presentation on porting XFS to Linux was given:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Porting XFS to Linux&#039;&#039; (July 2000) [[http://oss.sgi.com/projects/xfs/papers/ols2000/ols-xfs.htm html]]&lt;br /&gt;
&lt;br /&gt;
At the Atlanta Linux Showcase, SGI presented the following paper on the port of XFS to Linux:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Porting the SGI XFS File System to Linux&#039;&#039; (October 1999) [[http://oss.sgi.com/projects/xfs/papers/als/als.ps ps]] [[http://oss.sgi.com/projects/xfs/papers/als/als.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the 6th Linux Kongress &amp;amp;amp; the Linux Storage Management Workshop (LSMW) in Germany in September, 1999, SGI had a few presentations including the following:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;SGI&#039;s port of XFS to Linux&#039;&#039; (September 1999) [[http://oss.sgi.com/projects/xfs/papers/linux_kongress/index.htm html]]&lt;br /&gt;
* &#039;&#039;Overview of DMF&#039;&#039; (September 1999) [[http://oss.sgi.com/projects/xfs/papers/DMF-over/index.htm html]]&lt;br /&gt;
&lt;br /&gt;
At the LinuxWorld Conference &amp;amp;amp; Expo in August 1999, SGI published:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;An Open Source XFS data sheet&#039;&#039; (August 1999) [[http://oss.sgi.com/projects/xfs/papers/xfs_GPL.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
From the 1996 USENIX conference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;An XFS white paper&#039;&#039; [[http://oss.sgi.com/projects/xfs/papers/xfs_usenix/index.html html]]&lt;br /&gt;
&lt;br /&gt;
=== Other historical articles, press-releases, etc ===&lt;br /&gt;
&lt;br /&gt;
* IBM&#039;s &#039;&#039;Advanced Filesystem Implementor&#039;s Guide&#039;&#039; has a chapter &#039;&#039;Introducing XFS&#039;&#039; [[http://www-106.ibm.com/developerworks/library/l-fs9.html html]]&lt;br /&gt;
&lt;br /&gt;
* An editorial titled &#039;&#039;Tired of fscking? Try a journaling filesystem!&#039;&#039;, Freshmeat (February 2001) [[http://freshmeat.net/articles/view/212/ html]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Who gives a fsck about filesystems&#039;&#039; provides an overview of the Linux 2.4 filesystems [[http://www.linuxuser.co.uk/articles/issue6/lu6-All_you_need_to_know_about-Filesystems.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Journal File Systems&#039;&#039; in issue 55 of &#039;&#039;Linux Gazette&#039;&#039; provides a comparison of journaled filesystems.&lt;br /&gt;
&lt;br /&gt;
* The original XFS beta release announcement was published in &#039;&#039;Linux Today&#039;&#039; (September 2000) [[http://linuxtoday.com/news_story.php3?ltsn=2000-09-26-017-04-OS-SW html]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS: It&#039;s worth the wait&#039;&#039; was published on &#039;&#039;EarthWeb&#039;&#039; (July 2000) [[http://networking.earthweb.com/netos/oslin/article/0,,12284_623661,00.html html]]&lt;br /&gt;
&lt;br /&gt;
* An &#039;&#039;IRIX-XFS data sheet&#039;&#039; (July 1999) [[http://oss.sgi.com/projects/xfs/papers/IRIX_xfs_data_sheet.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;Getting Started with XFS&#039;&#039; book (1994) [[http://oss.sgi.com/projects/xfs/papers/getting_started_with_xfs.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* Original &#039;&#039;XFS design documents&#039;&#039; (1993) ([http://oss.sgi.com/projects/xfs/design_docs/xfsdocs93_ps/ ps], [http://oss.sgi.com/projects/xfs/design_docs/xfsdocs93_pdf/ pdf])&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Ideas_for_XFS&amp;diff=2405</id>
		<title>Ideas for XFS</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Ideas_for_XFS&amp;diff=2405"/>
		<updated>2012-01-27T00:29:23Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: url fixed, bogus article link removed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Future Directions for XFS =&lt;br /&gt;
&lt;br /&gt;
Dave Chinner ideas:&lt;br /&gt;
&lt;br /&gt;
* [[Improving inode Caching]]&lt;br /&gt;
&lt;br /&gt;
* [[Improving Metadata Performance By Reducing Journal Overhead]]&lt;br /&gt;
&lt;br /&gt;
* [[Reliable Detection and Repair of Metadata Corruption]]&lt;br /&gt;
&lt;br /&gt;
Other ideas:&lt;br /&gt;
&lt;br /&gt;
* [[Splitting project quota support from group quota support]]&lt;br /&gt;
&lt;br /&gt;
* [[Assigning project quota to a linux container]]&lt;br /&gt;
&lt;br /&gt;
* [[Support discarding of unused sectors]] (status: completed)&lt;br /&gt;
&lt;br /&gt;
* Superblock flag for when 64-bit inodes are present (see [http://oss.sgi.com/pipermail/xfs/2009-May/041379.html xfs: regarding the inode64 mount option])&lt;br /&gt;
&lt;br /&gt;
* Wishlist: Please integrate &#039;&#039;xfs_irecover&#039;&#039; or provide [http://www.who.is.free.fr/wiki/doku.php?id=recover inode recovery feature]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2404</id>
		<title>XFS FAQ</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2404"/>
		<updated>2012-01-27T00:26:22Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: url fixed, bogus article links removed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Info from: [http://oss.sgi.com/projects/xfs/faq.html main XFS faq at SGI]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Many thanks to earlier maintainers of this document - Thomas Graichen and Seth Mos.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about XFS? ==&lt;br /&gt;
&lt;br /&gt;
The SGI XFS project page http://oss.sgi.com/projects/xfs/ is the definitive reference. It contains pointers to whitepapers, books, articles, etc.&lt;br /&gt;
&lt;br /&gt;
You could also join the [[XFS_email_list_and_archives|XFS mailing list]] or the &#039;&#039;&#039;&amp;lt;nowiki&amp;gt;#xfs&amp;lt;/nowiki&amp;gt;&#039;&#039;&#039; IRC channel on &#039;&#039;irc.freenode.net&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about ACLs? ==&lt;br /&gt;
&lt;br /&gt;
Andreas Gruenbacher maintains the Extended Attribute and POSIX ACL documentation for Linux at http://acl.bestbits.at/&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;acl(5)&#039;&#039;&#039; manual page is also quite extensive.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find information about the internals of XFS? ==&lt;br /&gt;
&lt;br /&gt;
An [http://oss.sgi.com/projects/xfs/training/ SGI XFS Training course] aimed at developers, triage and support staff, and serious users has been in development. Parts of the course are clearly still incomplete, but there is enough content to be useful to a broad range of users.&lt;br /&gt;
&lt;br /&gt;
Barry Naujok has documented the [http://oss.sgi.com/projects/xfs/papers/xfs_filesystem_structure.pdf XFS on-disk format], which is a very useful reference.&lt;br /&gt;
&lt;br /&gt;
== Q: What partition type should I use for XFS on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Linux native filesystem (83).&lt;br /&gt;
&lt;br /&gt;
== Q: What mount options does XFS have? ==&lt;br /&gt;
&lt;br /&gt;
There are a number of mount options influencing XFS filesystems - refer to the &#039;&#039;&#039;mount(8)&#039;&#039;&#039; manual page or the documentation in the kernel source tree itself ([http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs.txt;hb=HEAD Documentation/filesystems/xfs.txt])&lt;br /&gt;
&lt;br /&gt;
== Q: Is there any relation between the XFS utilities and the kernel version? ==&lt;br /&gt;
&lt;br /&gt;
No, there is no relation. Newer utilities mainly contain fixes and checks that previous versions might not have. New features are also added in a backward-compatible way - if they are enabled via mkfs, an incapable (old) kernel will recognize that it does not understand the new feature and refuse to mount the filesystem.&lt;br /&gt;
&lt;br /&gt;
== Q: Does it run on platforms other than i386? ==&lt;br /&gt;
&lt;br /&gt;
XFS runs on all of the platforms that Linux supports. It is more tested on the more common platforms, especially the i386 family. It is also well tested on the IA64 platform, since that is the platform SGI Linux products use.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Do quotas work on XFS? ==&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
To use quotas with XFS, you need to enable XFS quota support when you configure your kernel. You also need to specify quota support when mounting. You can get the Linux quota utilities at their sourceforge website [http://sourceforge.net/projects/linuxquota/  http://sourceforge.net/projects/linuxquota/] or use &#039;&#039;&#039;xfs_quota(8)&#039;&#039;&#039;.&lt;br /&gt;
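The commands below sketch a typical user quota setup with &#039;&#039;&#039;xfs_quota(8)&#039;&#039;&#039;; the device, mount point, and user name are placeholders, not values from this FAQ:&lt;br /&gt;

```shell
# Mount with user quota accounting and enforcement enabled
# (/dev/sdb1, /srv/data and someuser are hypothetical names):
mount -o uquota /dev/sdb1 /srv/data

# Report current usage and limits for all users:
xfs_quota -x -c 'report -u' /srv/data

# Set a 5 GB soft and 6 GB hard block limit for one user:
xfs_quota -x -c 'limit -u bsoft=5g bhard=6g someuser' /srv/data
```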
&lt;br /&gt;
== Q: Quota: What&#039;s project quota? ==&lt;br /&gt;
&lt;br /&gt;
Project quota is a quota mechanism in XFS that can be used to implement a form of directory tree quota, where a specified directory and all of the files and subdirectories below it (i.e. a tree) can be restricted to using a subset of the available space in the filesystem.&lt;br /&gt;
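As a sketch of how a directory tree can be placed under project quota (the project id, project name, and paths here are made-up examples):&lt;br /&gt;

```shell
# Requires the filesystem to be mounted with the prjquota option.
# Map a (hypothetical) project id and name to a directory tree:
echo '42:/srv/fs/webdata' | tee -a /etc/projects
echo 'webdata:42' | tee -a /etc/projid

# Mark the tree as belonging to the project, then cap it at 10 GB:
xfs_quota -x -c 'project -s webdata' /srv/fs
xfs_quota -x -c 'limit -p bhard=10g webdata' /srv/fs
```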
&lt;br /&gt;
== Q: Quota: Can group quota and project quota be used at the same time? ==&lt;br /&gt;
&lt;br /&gt;
No, project quota cannot be used with group quota at the same time. On the other hand user quota and project quota can be used simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Does unmounting a filesystem mounted with prjquota (project quota) and remounting it with grpquota (group quota) remove previously set prjquota limits (and vice versa)? ==&lt;br /&gt;
&lt;br /&gt;
To be answered.&lt;br /&gt;
&lt;br /&gt;
== Q: Are there any dump/restore tools for XFS? ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039; and &#039;&#039;&#039;xfsrestore(8)&#039;&#039;&#039; are fully supported. The tape format is the same as on IRIX, so tapes are interchangeable between operating systems.&lt;br /&gt;
&lt;br /&gt;
== Q: Does LILO work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
This depends on where you install LILO.&lt;br /&gt;
&lt;br /&gt;
Yes, for MBR (Master Boot Record) installations.&lt;br /&gt;
&lt;br /&gt;
No, for root partition installations because the XFS superblock is written at block zero, where LILO would be installed. This is to maintain compatibility with the IRIX on-disk format, and will not be changed.&lt;br /&gt;
&lt;br /&gt;
== Q: Does GRUB work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
There is native XFS filesystem support for GRUB starting with version 0.91 and onward. Unfortunately, GRUB used to make incorrect assumptions about being able to read a block device image while a filesystem is mounted and actively being written to, which could cause intermittent problems when using XFS. This has reportedly since been fixed, and the 0.97 version (at least) of GRUB is apparently stable.&lt;br /&gt;
&lt;br /&gt;
== Q: Can XFS be used for a root filesystem? ==&lt;br /&gt;
&lt;br /&gt;
Yes, with one caveat: Linux does not support an external XFS journal for the root filesystem via the &amp;quot;rootflags=&amp;quot; kernel parameter. To use an external journal for the root filesystem in Linux, an init ramdisk must mount the root filesystem with explicit &amp;quot;logdev=&amp;quot; specified. [http://mindplusplus.wordpress.com/2008/07/27/scratching-an-i.html More information here.]&lt;br /&gt;
&lt;br /&gt;
== Q: Will I be able to use my IRIX XFS filesystems on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Yes. The on-disk format of XFS is the same on IRIX and Linux. Obviously, you should back up your data before trying to move it between systems. Filesystems must be &amp;quot;clean&amp;quot; when moved (i.e. unmounted). If you plan to use IRIX filesystems on Linux, keep the following points in mind: the kernel needs to have SGI partition support enabled; there is no XLV support in Linux, so you are unable to read IRIX filesystems which use the XLV volume manager; and not all blocksizes available on IRIX are available on Linux (for now, only blocksizes less than or equal to the pagesize of the architecture are possible: 4k for i386, ppc, ...; 8k for alpha, sparc, ...). Make sure that the directory format is version 2 on the IRIX filesystems (this is the default since IRIX 6.5.5); Linux can only read v2 directories.&lt;br /&gt;
&lt;br /&gt;
== Q: Is there a way to make an XFS filesystem larger or smaller? ==&lt;br /&gt;
&lt;br /&gt;
You can &#039;&#039;NOT&#039;&#039; make an XFS partition smaller online. The only way to shrink is to do a complete dump, mkfs and restore.&lt;br /&gt;
&lt;br /&gt;
An XFS filesystem may be enlarged by using &#039;&#039;&#039;xfs_growfs(8)&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
If using partitions, you need free space after the partition in question to do so: remove the partition, recreate it larger with the &#039;&#039;exact same&#039;&#039; starting point, and then run &#039;&#039;&#039;xfs_growfs&#039;&#039;&#039; to enlarge the filesystem. Note - editing partition tables is a dangerous pastime, so back up your filesystem before doing so.&lt;br /&gt;
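A minimal sketch of growing a mounted filesystem (the mount point is a placeholder):&lt;br /&gt;

```shell
# -n only prints the current geometry without changing anything:
xfs_growfs -n /srv/data

# Grow the filesystem to fill all available space on the device:
xfs_growfs /srv/data

# Or grow to an explicit size, given in filesystem blocks (-D):
xfs_growfs -D 26214400 /srv/data
```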
&lt;br /&gt;
Using XFS filesystems on top of a volume manager makes this a lot easier.&lt;br /&gt;
&lt;br /&gt;
== Q: What information should I include when reporting a problem? ==&lt;br /&gt;
&lt;br /&gt;
Things to include are the version of XFS you are using (if it is a CVS version, also the checkout date) and the version of the kernel. If you have problems with userland packages, please report the version of the package you are using.&lt;br /&gt;
&lt;br /&gt;
If the problem relates to a particular filesystem, the output from the &#039;&#039;&#039;xfs_info(8)&#039;&#039;&#039; command and any &#039;&#039;&#039;mount(8)&#039;&#039;&#039; options in use will also be useful to the developers.&lt;br /&gt;
&lt;br /&gt;
If you experience an oops, please run it through &#039;&#039;&#039;ksymoops&#039;&#039;&#039; so that it can be interpreted.&lt;br /&gt;
&lt;br /&gt;
If you have a filesystem that cannot be repaired, make sure you have xfsprogs 2.9.0 or later and run &#039;&#039;&#039;xfs_metadump(8)&#039;&#039;&#039; to capture the metadata (which obfuscates filenames and attributes to protect your privacy) and make the dump available for someone to analyse.&lt;br /&gt;
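For example (the device and output paths are placeholders), a metadata dump can be captured and compressed like this:&lt;br /&gt;

```shell
# Run against the unmounted device; filenames and attribute
# values are obfuscated by default:
xfs_metadump /dev/sdb1 /tmp/sdb1.metadump

# The dump contains only metadata, and usually compresses well:
gzip /tmp/sdb1.metadump
```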
&lt;br /&gt;
== Q: Mounting an XFS filesystem does not work - what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
If mount prints an error message like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
     mount: /dev/hda5 has wrong major or minor number&lt;br /&gt;
&lt;br /&gt;
you either do not have XFS compiled into the kernel (or you forgot to load the modules) or you did not use the &amp;quot;-t xfs&amp;quot; option on mount or the &amp;quot;xfs&amp;quot; option in &amp;lt;tt&amp;gt;/etc/fstab&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you get something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 mount: wrong fs type, bad option, bad superblock on /dev/sda1,&lt;br /&gt;
        or too many mounted file systems&lt;br /&gt;
&lt;br /&gt;
Refer to your system log file (&amp;lt;tt&amp;gt;/var/log/messages&amp;lt;/tt&amp;gt;) for a detailed diagnostic message from the kernel.&lt;br /&gt;
&lt;br /&gt;
== Q: Does the filesystem have an undelete capability? ==&lt;br /&gt;
&lt;br /&gt;
There is no undelete in XFS (so far).&lt;br /&gt;
&lt;br /&gt;
However, at least some XFS driver implementations do not wipe file information nodes completely, so there is a chance to recover files with specialized commercial closed-source software like [http://www.ufsexplorer.com/rdr_xfs.php Raise Data Recovery for XFS].&lt;br /&gt;
&lt;br /&gt;
Such implementations also do not re-use directory entries immediately, so there is a chance to get back recently deleted files even with their real names.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;xfs_irecover&#039;&#039; or &#039;&#039;xfsr&#039;&#039; may help too, [http://www.who.is.free.fr/wiki/doku.php?id=recover this site] has a few links.&lt;br /&gt;
&lt;br /&gt;
This applies to most recent Linux distributions (versions?), as well as to most popular NAS boxes that use embedded Linux and the XFS filesystem.&lt;br /&gt;
&lt;br /&gt;
In any case, the best protection is to always keep backups.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I back up an XFS filesystem and ACLs? ==&lt;br /&gt;
&lt;br /&gt;
You can back up an XFS filesystem with utilities like &#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039;, or with standard &#039;&#039;&#039;tar(1)&#039;&#039;&#039; for regular files. If you want to back up ACLs and EAs as well, you will need to use &#039;&#039;&#039;xfsdump&#039;&#039;&#039;, [http://www.bacula.org/en/dev-manual/Current_State_Bacula.html Bacula] (&amp;gt; version 3.1.4), or [http://rsync.samba.org/ rsync] (&amp;gt;= version 3.0.0). &#039;&#039;&#039;xfsdump&#039;&#039;&#039; can also be integrated with [http://www.amanda.org/ amanda(8)].&lt;br /&gt;
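As a sketch of the two approaches (all paths and labels are placeholders):&lt;br /&gt;

```shell
# Full (level 0) xfsdump of a mounted filesystem to a file,
# ACLs and EAs included:
xfsdump -l 0 -L nightly -M media0 -f /backup/data.xfsdump /srv/data

# rsync 3.0.0 or later can carry ACLs (-A) and extended attributes (-X):
rsync -aAX /srv/data/ /backup/data/
```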
&lt;br /&gt;
== Q: I see applications returning error 990 or &amp;quot;Structure needs cleaning&amp;quot;, what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
The error 990 stands for [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=blob;f=fs/xfs/linux-2.6/xfs_linux.h#l145 EFSCORRUPTED] which usually means XFS has detected a filesystem metadata problem and has shut the filesystem down to prevent further damage. Also, since about June 2006, we [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=commit;h=da2f4d679c8070ba5b6a920281e495917b293aa0 converted from EFSCORRUPTED/990 over to using EUCLEAN], &amp;quot;Structure needs cleaning.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The cause can be pretty much anything, unfortunately - filesystem, virtual memory manager, volume manager, device driver, or hardware.&lt;br /&gt;
&lt;br /&gt;
There should be a detailed console message when this initially happens. The messages have important information giving hints to developers as to the earliest point that a problem was detected. It is there to protect your data.&lt;br /&gt;
&lt;br /&gt;
You can use xfs_check and xfs_repair to remedy the problem (with the file system unmounted).&lt;br /&gt;
&lt;br /&gt;
== Q: Why do I see binary NULLS in some files after recovery when I unplugged the power? ==&lt;br /&gt;
&lt;br /&gt;
Update: This issue has been addressed with a CVS fix on the 29th March 2007 and merged into mainline on 8th May 2007 for 2.6.22-rc1.&lt;br /&gt;
&lt;br /&gt;
XFS journals metadata updates, not data updates. After a crash you are supposed to get a consistent filesystem which looks like the state sometime shortly before the crash, NOT what the in memory image looked like the instant before the crash.&lt;br /&gt;
&lt;br /&gt;
Since XFS does not write data out immediately unless you tell it to with fsync, an O_SYNC or O_DIRECT open (the same is true of other filesystems), you are looking at an inode which was flushed out, but whose data was not. Typically you&#039;ll find that the inode is not taking any space since all it has is a size but no extents allocated (try examining the file with the &#039;&#039;&#039;xfs_bmap(8)&#039;&#039;&#039; command).&lt;br /&gt;
&lt;br /&gt;
== Q: What is the problem with the write cache on journaled filesystems? ==&lt;br /&gt;
&lt;br /&gt;
Many drives use a write back cache in order to speed up the performance of writes.  However, there are conditions such as power failure when the write cache memory is never flushed to the actual disk.  Further, the drive can de-stage data from the write cache to the platters in any order that it chooses.  This causes problems for XFS and journaled filesystems in general because they rely on knowing when a write has completed to the disk. They need to know that the log information has made it to disk before allowing metadata to go to disk.  When the metadata makes it to disk then the transaction can effectively be deleted from the log resulting in movement of the tail of the log and thus freeing up some log space. So if the writes never make it to the physical disk, then the ordering is violated and the log and metadata can be lost, resulting in filesystem corruption.&lt;br /&gt;
&lt;br /&gt;
With hard disk cache sizes of currently (Jan 2009) up to 32MB, that can be a lot of valuable information. In a RAID with 8 such disks this adds up to 256MB, and the chance of having filesystem metadata in the cache is so high that a power outage carries a very high risk of major data loss.&lt;br /&gt;
&lt;br /&gt;
With a single hard disk and barriers turned on (on by default), the drive write cache is flushed before and after a barrier is issued. A power failure &amp;quot;only&amp;quot; loses data in the cache, but no essential ordering is violated, and corruption will not occur.&lt;br /&gt;
&lt;br /&gt;
With a RAID controller with a battery-backed controller cache operating in write-back mode, you should turn off barriers - they are unnecessary in this case, and if the controller honors the cache flushes, they will hurt performance. But then you *must* disable the individual hard disk write caches in order to keep the filesystem intact after a power failure. The method for doing this is different for each RAID controller. See the section about RAID controllers below.&lt;br /&gt;
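As a sketch (the device and mount point names are placeholders), barriers can be disabled per mount or in /etc/fstab:&lt;br /&gt;

```shell
# Only safe with a battery-backed write-back controller cache
# and the individual disk write caches disabled:
mount -o nobarrier /dev/sdc1 /srv/raid

# or persistently via an /etc/fstab entry:
# /dev/sdc1  /srv/raid  xfs  nobarrier  0  2
```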
&lt;br /&gt;
== Q: How can I tell if I have the disk write cache enabled? ==&lt;br /&gt;
&lt;br /&gt;
For SCSI/SATA:&lt;br /&gt;
&lt;br /&gt;
* Look in dmesg(8) output for a driver line, such as:&amp;lt;br /&amp;gt; &amp;quot;SCSI device sda: drive cache: write back&amp;quot;&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# sginfo -c /dev/sda | grep -i &#039;write cache&#039; &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For PATA/SATA (although for SATA this only works on a recent kernel with ATA command passthrough):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -I /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; and look under &amp;quot;Enabled Supported&amp;quot; for &amp;quot;Write cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
== Q: How can I address the problem with the disk write cache? ==&lt;br /&gt;
&lt;br /&gt;
=== Disabling the disk write back cache. ===&lt;br /&gt;
&lt;br /&gt;
For SATA/PATA(IDE) (although for SATA this only works on a recent kernel with ATA command passthrough):&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -W0 /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # hdparm -W0 /dev/hda&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# blktool /dev/sda wcache off&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # blktool /dev/hda wcache off&lt;br /&gt;
&lt;br /&gt;
For SCSI:&lt;br /&gt;
&lt;br /&gt;
* Using sginfo(8) which is a little tedious&amp;lt;br /&amp;gt; It takes 3 steps. For example:&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -c /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives a list of attribute names and values&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cX /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives an array of cache values which you must match up with from step 1, e.g.&amp;lt;br /&amp;gt; 0 0 0 1 0 1 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cXR /dev/sda 0 0 0 1 0 0 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; allows you to reset the value of the cache attributes.&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
For a SCSI disk, this disabling is persistent. For a SATA/PATA disk, however, it needs to be repeated after every reset, as the drive reverts to its default of write cache enabled - and a reset can happen after a reboot or on error recovery of the drive. This makes it rather difficult to guarantee that the write cache stays disabled.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Using an external log. ===&lt;br /&gt;
&lt;br /&gt;
Some people have considered the idea of using an external log on a separate drive with the write cache disabled, with the rest of the file system on another disk with the write cache enabled. However, that will &#039;&#039;&#039;not&#039;&#039;&#039; solve the problem. For example, the tail of the log is moved forward when we are notified that a metadata write has completed to disk, and we cannot guarantee that if the metadata is on a drive with the write cache enabled.&lt;br /&gt;
&lt;br /&gt;
In fact using an external log will disable XFS&#039; write barrier support.&lt;br /&gt;
&lt;br /&gt;
=== Write barrier support. ===&lt;br /&gt;
&lt;br /&gt;
Write barrier support has been enabled by default in XFS since kernel version 2.6.17. It can be disabled by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. Barrier support will flush the write back cache at the appropriate times (such as on XFS log writes). This is generally the recommended solution; however, you should check the system logs to ensure it was successful. Barriers will be disabled, and this will be reported in the log, if any of these three scenarios occurs:&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported with external log device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported by the underlying device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, trial barrier write failed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
If the filesystem is mounted with an external log device, flushing both the data and log devices is currently not supported (this may change in the future). If the driver tells the block layer that the device does not support write cache flushing, barriers are reported as unsupported by the underlying device. Finally, XFS actually tests a barrier write on the superblock and checks its error state afterwards, reporting if it fails.&lt;br /&gt;
&lt;br /&gt;
== Q. Should barriers be enabled with storage which has a persistent write cache? ==&lt;br /&gt;
&lt;br /&gt;
Many hardware RAID controllers have a persistent write cache which is preserved across power failures, interface resets, and system crashes. Using write barriers in this case is not recommended and will in fact lower performance. Therefore, it is recommended to turn off barrier support by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. But take care that the individual hard disk write caches are off.&lt;br /&gt;
&lt;br /&gt;
== Q. Which settings does my RAID controller need ? ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s hard to give specific advice because there are so many controllers. Please consult your RAID controller&#039;s documentation to determine how to change these settings, but we try to give an overview here:&lt;br /&gt;
&lt;br /&gt;
Real RAID controllers (not the ones found onboard mainboards) normally have a battery-backed cache (or an [http://en.wikipedia.org/wiki/Electric_double-layer_capacitor ultracapacitor] + flash memory &amp;quot;[http://www.tweaktown.com/articles/2800/adaptec_zero_maintenance_cache_protection_explained/ zero maintenance cache]&amp;quot;) which is used for buffering writes to improve speed. Even if the controller cache is battery backed, the individual hard disk write caches need to be turned off, as they are not protected from a power failure and will simply lose all their contents in that case.&lt;br /&gt;
&lt;br /&gt;
* Onboard RAID controllers: there are so many different types that it&#039;s hard to generalize. Typically these controllers have no cache of their own, but leave the hard disk write caches on. That can lead to a bad situation: after a power failure with RAID-1, when only parts of the disk caches have been written out, the controller doesn&#039;t even see that the disks are out of sync, because the disks can reorder cached blocks and might both have saved the superblock info but then lost different data contents. So, turn off the disk write caches before using the RAID function.&lt;br /&gt;
&lt;br /&gt;
* 3ware: /cX/uX set cache=off, see http://www.3ware.com/support/UserDocs/CLIGuide-9.5.1.1.pdf , page 86&lt;br /&gt;
&lt;br /&gt;
* Adaptec: allows setting the cache of individual drives:&lt;br /&gt;
arcconf setcache &amp;lt;disk&amp;gt; wb|wt&lt;br /&gt;
wb=write back, which means write cache on; wt=write through, which means write cache off. So &amp;quot;wt&amp;quot; should be chosen.&lt;br /&gt;
&lt;br /&gt;
* Areca: In archttp under &amp;quot;System Controls&amp;quot; -&amp;gt; &amp;quot;System Configuration&amp;quot; there&#039;s the option &amp;quot;Disk Write Cache Mode&amp;quot; (defaults &amp;quot;Auto&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Off&amp;quot;: disk write cache is turned off&lt;br /&gt;
&lt;br /&gt;
&amp;quot;On&amp;quot;: disk write cache is enabled, this is not safe for your data but fast&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Auto&amp;quot;: If you use a BBM (battery backup module, which you really should use if you care about your data), the controller automatically turns the disk write caches off to protect your data. If no BBM is attached, the controller switches to &amp;quot;On&amp;quot;, because then neither the controller cache nor the disk cache is safe, so you apparently don&#039;t care about your data and just want high speed (which you then get).&lt;br /&gt;
&lt;br /&gt;
That&#039;s a very sensible default, so you can leave it at &amp;quot;Auto&amp;quot; or enforce &amp;quot;Off&amp;quot; to be sure.&lt;br /&gt;
&lt;br /&gt;
* LSI MegaRAID: allows setting individual disks cache:&lt;br /&gt;
 MegaCli -AdpCacheFlush -aN|-a0,1,2|-aALL                          # flushes the controller cache&lt;br /&gt;
 MegaCli -LDGetProp -Cache    -LN|-L0,1,2|-LAll -aN|-a0,1,2|-aALL  # shows the controller cache settings&lt;br /&gt;
 MegaCli -LDGetProp -DskCache -LN|-L0,1,2|-LAll -aN|-a0,1,2|-aALL  # shows the disk cache settings (for all phys. disks in logical disk)&lt;br /&gt;
 MegaCli -LDSetProp -EnDskCache|DisDskCache  -LN|-L0,1,2|-LAll  -aN|-a0,1,2|-aALL # set disk cache setting&lt;br /&gt;
&lt;br /&gt;
* Xyratex: from the docs: &amp;quot;Write cache includes the disk drive cache and controller cache.&amp;quot; That means you can only set the drive caches and the unit caches together. To protect your data, turn it off, but write performance will suffer badly since the controller write cache is disabled as well.&lt;br /&gt;
&lt;br /&gt;
== Q: Which settings are best with virtualization like VMware, XEN, qemu? ==&lt;br /&gt;
&lt;br /&gt;
The biggest problem is that these products seem to virtualize disk writes in a way that even barriers no longer work, which means even an fsync is not reliable. Tests confirm that by unplugging the power from such a system you can destroy a database within the virtual machine (guest, domU, or whatever you call it), even with a RAID controller with battery-backed cache and the hard disk caches turned off (a configuration that is safe on a normal host).&lt;br /&gt;
&lt;br /&gt;
In qemu you can specify cache=off in the option that defines the virtual disk. For the other products this information is missing.&lt;br /&gt;
&lt;br /&gt;
== Q: What is the issue with directory corruption in Linux 2.6.17? ==&lt;br /&gt;
&lt;br /&gt;
In the Linux kernel 2.6.17 release a subtle bug was accidentally introduced into the XFS directory code by some &amp;quot;sparse&amp;quot; endian annotations. This bug was sufficiently uncommon (it only affects a certain type of format change, in Node or B-Tree format directories, and only in certain situations) that it was not detected during our regular regression testing, but it has been observed in the wild by a number of people now.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: the fix is included in 2.6.17.7 and later kernels.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To add insult to injury, &#039;&#039;&#039;xfs_repair(8)&#039;&#039;&#039; is currently not correcting these directories on detection of this corrupt state either. This &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; issue is actively being worked on, and a fixed version will be available shortly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; is now available; version 2.8.10 or later of the xfsprogs package contains the fixed version.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
No other kernel versions are affected. However, using a corrupt filesystem on other kernels can still result in the filesystem being shut down if the problem has not been rectified (on disk), making it seem like other kernels are affected.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;xfs_check&#039;&#039;&#039; tool, or &#039;&#039;&#039;xfs_repair -n&#039;&#039;&#039;, should be able to detect any directory corruption.&lt;br /&gt;
&lt;br /&gt;
Until a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; binary is available, one can make use of the &#039;&#039;&#039;xfs_db(8)&#039;&#039;&#039; command to mark the problem directory for removal (see the example below). A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; invocation will remove the directory and move all contents into &amp;quot;lost+found&amp;quot;, named by inode number (see second example on how to map inode number to directory entry name, which needs to be done _before_ removing the directory itself). The inode number of the corrupt directory is included in the shutdown report issued by the kernel on detection of directory corruption. Using that inode number, this is how one would ensure it is removed:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 core.mode = 040755&lt;br /&gt;
 core.version = 2&lt;br /&gt;
 core.format = 3 (btree)&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; write core.mode 0&lt;br /&gt;
 xfs_db&amp;amp;gt; quit&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; will clear the directory, and add new entries (named by inode number) in lost+found.&lt;br /&gt;
&lt;br /&gt;
The easiest way to map inode numbers to full paths is via &#039;&#039;&#039;xfs_ncheck(8)&#039;&#039;&#039;&amp;lt;nowiki&amp;gt;: &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_ncheck -i 14101 -i 14102 /dev/sdXXX&lt;br /&gt;
       14101 full/path/mumble_fratz_foo_bar_1495&lt;br /&gt;
       14102 full/path/mumble_fratz_foo_bar_1494&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Should this not work, we can manually map inode numbers in a B-Tree format directory by taking the following steps:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 ...&lt;br /&gt;
 next_unlinked = null&lt;br /&gt;
 u.bmbt.level = 1&lt;br /&gt;
 u.bmbt.numrecs = 1&lt;br /&gt;
 u.bmbt.keys[1] = [startoff] 1:[0]&lt;br /&gt;
 u.bmbt.ptrs[1] = 1:3628&lt;br /&gt;
 xfs_db&amp;amp;gt; fsblock 3628&lt;br /&gt;
 xfs_db&amp;amp;gt; type bmapbtd&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 magic = 0x424d4150&lt;br /&gt;
 level = 0&lt;br /&gt;
 numrecs = 19&lt;br /&gt;
 leftsib = null&lt;br /&gt;
 rightsib = null&lt;br /&gt;
 recs[1-19] = [startoff,startblock,blockcount,extentflag]&lt;br /&gt;
        1:[0,3088,4,0] 2:[4,3128,8,0] 3:[12,3308,4,0] 4:[16,3360,4,0]&lt;br /&gt;
        5:[20,3496,8,0] 6:[28,3552,8,0] 7:[36,3624,4,0] 8:[40,3633,4,0]&lt;br /&gt;
        9:[44,3688,8,0] 10:[52,3744,4,0] 11:[56,3784,8,0]&lt;br /&gt;
        12:[64,3840,8,0] 13:[72,3896,4,0] 14:[33554432,3092,4,0]&lt;br /&gt;
        15:[33554436,3488,8,0] 16:[33554444,3629,4,0]&lt;br /&gt;
        17:[33554448,3748,4,0] 18:[33554452,3900,4,0]&lt;br /&gt;
        19:[67108864,3364,4,0]&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point we are looking at the extents that hold all of the directory information. There are three types of extent here: the data blocks (extents 1 through 13 above), then the leaf blocks (extents 14 through 18), then the freelist blocks (extent 19 above). The jumps in the first field (start offset) indicate our progression through each of the three types. For recovering file names, we are only interested in the data blocks, so we can now feed those offset numbers into the &#039;&#039;&#039;xfs_db&#039;&#039;&#039; dblock command. So, for the fifth extent - 5:[20,3496,8,0] - listed above:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; dblock 20&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 dhdr.magic = 0x58443244&lt;br /&gt;
 dhdr.bestfree[0].offset = 0&lt;br /&gt;
 dhdr.bestfree[0].length = 0&lt;br /&gt;
 dhdr.bestfree[1].offset = 0&lt;br /&gt;
 dhdr.bestfree[1].length = 0&lt;br /&gt;
 dhdr.bestfree[2].offset = 0&lt;br /&gt;
 dhdr.bestfree[2].length = 0&lt;br /&gt;
 du[0].inumber = 13937&lt;br /&gt;
 du[0].namelen = 25&lt;br /&gt;
 du[0].name = &amp;quot;mumble_fratz_foo_bar_1595&amp;quot;&lt;br /&gt;
 du[0].tag = 0x10&lt;br /&gt;
 du[1].inumber = 13938&lt;br /&gt;
 du[1].namelen = 25&lt;br /&gt;
 du[1].name = &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;&lt;br /&gt;
 du[1].tag = 0x38&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
So, here we can see that inode number 13938 matches up with name &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;. Iterate through all the extents, and extract all the name-to-inode-number mappings you can, as these will be useful when looking at &amp;quot;lost+found&amp;quot; (once &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; has removed the corrupt directory).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Q: Why does my &amp;gt; 2TB XFS partition disappear when I reboot ? ==&lt;br /&gt;
&lt;br /&gt;
Strictly speaking this is not an XFS problem.&lt;br /&gt;
&lt;br /&gt;
To support &amp;gt; 2TB partitions you need two things: a kernel that supports large block devices (&amp;lt;tt&amp;gt;CONFIG_LBD=y&amp;lt;/tt&amp;gt;) and a partition table format that can hold large partitions.  The default DOS partition tables don&#039;t.  The best partition format for&lt;br /&gt;
&amp;gt; 2TB partitions is the EFI GPT format (&amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Without CONFIG_LBD=y you can&#039;t even create the filesystem, but without &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt; it works fine until you reboot at which point the partition will disappear.  Note that you need to enable the &amp;lt;tt&amp;gt;CONFIG_PARTITION_ADVANCED&amp;lt;/tt&amp;gt; option before you can set &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;.&lt;br /&gt;
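&lt;br /&gt;
Summarized as a kernel configuration fragment (option names taken directly from the text above, for the kernel versions this FAQ was written against):&lt;br /&gt;

```
# Kernel options needed for a partition over 2TB to survive a reboot
CONFIG_LBD=y                 # large block device support
CONFIG_PARTITION_ADVANCED=y  # must be enabled before EFI partition support can be set
CONFIG_EFI_PARTITION=y       # EFI GPT partition table support
```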
&lt;br /&gt;
== Q: Why do I receive &amp;lt;tt&amp;gt;No space left on device&amp;lt;/tt&amp;gt; after &amp;lt;tt&amp;gt;xfs_growfs&amp;lt;/tt&amp;gt;? ==&lt;br /&gt;
&lt;br /&gt;
After [http://oss.sgi.com/pipermail/xfs/2009-January/039828.html growing a XFS filesystem], df(1) would show enough free space but attempts to write to the filesystem result in -ENOSPC. To fix this, [http://oss.sgi.com/pipermail/xfs/2009-January/039835.html Dave Chinner advised]:&lt;br /&gt;
&lt;br /&gt;
  The only way to fix this is to move data around to free up space&lt;br /&gt;
  below 1TB. Find your oldest data (i.e. that was around before even&lt;br /&gt;
  the first grow) and move it off the filesystem (move, not copy).&lt;br /&gt;
  Then if you copy it back on, the data blocks will end up above 1TB&lt;br /&gt;
  and that should leave you with plenty of space for inodes below 1TB.&lt;br /&gt;
  &lt;br /&gt;
  A complete dump and restore will also fix the problem ;)&lt;br /&gt;
&lt;br /&gt;
Also, you can add &#039;inode64&#039; to your mount options to allow inodes to live above 1TB.&lt;br /&gt;
&lt;br /&gt;
Example: [https://www.centos.org/modules/newbb/viewtopic.php?topic_id=30703&amp;amp;forum=38 No space left on device on xfs filesystem with 7.7TB free]&lt;br /&gt;
&lt;br /&gt;
== Q: Is using noatime or/and nodiratime at mount time giving any performance benefits in xfs (or not using them performance decrease)? ==&lt;br /&gt;
&lt;br /&gt;
The default atime behaviour is relatime, which has almost no overhead compared to noatime but still maintains sane atime values. All Linux filesystems use this as the default now (since around 2.6.30), but XFS has used relatime-like behaviour since 2006, so no-one should really need to ever use noatime on XFS for performance reasons. &lt;br /&gt;
&lt;br /&gt;
Also, noatime implies nodiratime, so there is never a need to specify nodiratime when noatime is also specified.&lt;br /&gt;
&lt;br /&gt;
== Q: How to get around a bad inode repair is unable to clean up ==&lt;br /&gt;
&lt;br /&gt;
The trick is to go in with xfs_db and mark the inode as deleted, which will cause repair to clean it up and finish the removal process.&lt;br /&gt;
&lt;br /&gt;
  xfs_db -x -c &#039;inode XXX&#039; -c &#039;write core.nextents 0&#039; -c &#039;write core.size 0&#039; /dev/hdXX&lt;br /&gt;
&lt;br /&gt;
== Q: How to calculate the correct sunit,swidth values for optimal performance ==&lt;br /&gt;
&lt;br /&gt;
XFS allows you to optimize for a given RAID stripe unit (stripe size) and stripe width (number of data disks) via mount options.&lt;br /&gt;
&lt;br /&gt;
These options can sometimes be autodetected (for example with md raid, a recent enough kernel (&amp;gt;= 2.6.32) and xfsprogs (&amp;gt;= 3.1.1) built with libblkid support), but manual calculation is needed for most hardware raids.&lt;br /&gt;
&lt;br /&gt;
The calculation of these values is quite simple:&lt;br /&gt;
&lt;br /&gt;
  su = &amp;lt;RAID controller&#039;s stripe size in BYTES (or KiBytes when used with k)&amp;gt;&lt;br /&gt;
  sw = &amp;lt;# of data disks (don&#039;t count parity disks)&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So if your RAID controller has a stripe size of 64KB, and you have a RAID-6 with 8 disks, use&lt;br /&gt;
&lt;br /&gt;
  su = 64k&lt;br /&gt;
  sw = 6 (RAID-6 of 8 disks has 6 data disks)&lt;br /&gt;
&lt;br /&gt;
A RAID stripe size of 256KB with a RAID-10 over 16 disks should use&lt;br /&gt;
&lt;br /&gt;
  su = 256k&lt;br /&gt;
  sw = 8 (RAID-10 of 16 disks has 8 data disks)&lt;br /&gt;
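&lt;br /&gt;
Putting the numbers above into actual commands - a sketch; the device name /dev/sdX is a placeholder, and echo only prints the mkfs.xfs command instead of running it:&lt;br /&gt;

```shell
# RAID-6 of 8 disks: 2 parity disks, so sw = number of data disks = 6
disks=8
parity=2
sw=$(( disks - parity ))
echo mkfs.xfs -d su=64k,sw=${sw} /dev/sdX

# RAID-10 of 16 disks: half the disks hold mirror copies, so sw = 8
disks=16
sw=$(( disks / 2 ))
echo mkfs.xfs -d su=256k,sw=${sw} /dev/sdX
```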
&lt;br /&gt;
Alternatively, you can use &amp;quot;sunit&amp;quot; instead of &amp;quot;su&amp;quot; and &amp;quot;swidth&amp;quot; instead of &amp;quot;sw&amp;quot; but then sunit/swidth values need to be specified in &amp;quot;number of 512B sectors&amp;quot;!&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;tt&amp;gt;xfs_info&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;mkfs.xfs&amp;lt;/tt&amp;gt; interpret sunit and swidth as being specified in units of 512B sectors, but that is unfortunately not the unit they are reported in: both tools report them in multiples of the filesystem block size (bsize), not in 512B sectors.&lt;br /&gt;
&lt;br /&gt;
Assume for example: swidth 1024 specified on the mkfs.xfs command line (i.e. 1024 512B sectors) and a block size of 4096 (the bsize reported in the mkfs.xfs output). You should then see swidth 128 reported by mkfs.xfs, since 128 * 4096 == 1024 * 512.&lt;br /&gt;
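&lt;br /&gt;
The conversion in this example can be checked with a little shell arithmetic (a sketch of the numbers above):&lt;br /&gt;

```shell
# swidth was specified as 1024 sectors of 512 bytes each;
# mkfs.xfs reports it back in units of the 4096-byte block size.
sectors=1024
bsize=4096
blocks=$(( sectors * 512 / bsize ))
echo ${blocks}    # prints 128, matching the mkfs.xfs output
```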
&lt;br /&gt;
When creating an XFS filesystem on top of LVM on top of a hardware raid, please use the same sunit/swidth values as when creating an XFS filesystem directly on top of the hardware raid.&lt;br /&gt;
&lt;br /&gt;
== Q: Why doesn&#039;t NFS-exporting subdirectories of inode64-mounted filesystem work? ==&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;tt&amp;gt;fsid&amp;lt;/tt&amp;gt; type encodes only 32-bit of the inode number for subdirectory exports.  However, exporting the root of the filesystem works, or using one of the non-default &amp;lt;tt&amp;gt;fsid&amp;lt;/tt&amp;gt; types (&amp;lt;tt&amp;gt;fsid=uuid&amp;lt;/tt&amp;gt; in &amp;lt;tt&amp;gt;/etc/exports&amp;lt;/tt&amp;gt; with recent &amp;lt;tt&amp;gt;nfs-utils&amp;lt;/tt&amp;gt;) should work as well. (Thanks, Christoph!)&lt;br /&gt;
&lt;br /&gt;
== Q: What is the inode64 mount option for? ==&lt;br /&gt;
&lt;br /&gt;
By default, with 32bit inodes, XFS places inodes only in the first 1TB of a disk. If you have a disk with 100TB, all inodes will be stuck in the first TB. This can lead to strange things like &amp;quot;disk full&amp;quot; when you still have plenty of free space, but there&#039;s no more room in the first TB to create a new inode. Also, performance sucks.&lt;br /&gt;
&lt;br /&gt;
To get around this, use the inode64 mount option for filesystems &amp;gt;1TB. Inodes will then be placed in the location where their data is, minimizing disk seeks.&lt;br /&gt;
&lt;br /&gt;
Beware that some old programs might have problems reading 64bit inodes, especially over NFS. Your editor has used inode64 for over a year with recent (openSUSE 11.1 and higher) distributions using NFS and Samba without any corruption, so distributions of that vintage appear to be recent enough.&lt;br /&gt;
&lt;br /&gt;
== Q: Can I just try the inode64 option to see if it helps me? ==&lt;br /&gt;
&lt;br /&gt;
Starting from kernel 2.6.35, you can try it and then switch back. Older kernels have a bug leading to strange problems if you later mount without inode64 again: for example, you can&#039;t access files &amp;amp; dirs that have been created with an inode number &amp;gt;32bit anymore.&lt;br /&gt;
&lt;br /&gt;
== Q: Performance: mkfs.xfs -n size=64k option ==&lt;br /&gt;
&lt;br /&gt;
Asking the implications of that mkfs option on the XFS mailing list, Dave Chinner explained it this way:&lt;br /&gt;
&lt;br /&gt;
Inodes are not stored in the directory structure, only the directory entry name and the inode number. Hence the amount of space used by a&lt;br /&gt;
directory entry is determined by the length of the name.&lt;br /&gt;
&lt;br /&gt;
There is extra overhead to allocate large directory blocks (16 pages instead of one, to begin with, then there&#039;s the vmap overhead, etc), so for small directories smaller block sizes are faster for create and unlink operations.&lt;br /&gt;
&lt;br /&gt;
For empty directories, operations on 4k block sized directories consume roughly 50% less CPU than 64k block size directories. The 4k block size directories consume less CPU out to roughly 1.5 million entries, where the two are roughly equal. At directory sizes of 10 million entries, 64k directory block operations consume about 15% of the CPU that 4k directory block operations consume.&lt;br /&gt;
&lt;br /&gt;
In terms of lookups, the 64k block directory will take less IO but consume more CPU for a given lookup. Hence it depends on your IO latency and whether directory readahead can hide that latency as to which will be faster. e.g. For SSDs, CPU usage might be the limiting factor, not the IO. Right now I don&#039;t have any numbers on what the difference might be - I&#039;m getting 1 billion inode population issues worked out first before I start on measuring cold cache lookup times on 1 billion files....&lt;br /&gt;
&lt;br /&gt;
== Q: I want to tune my XFS filesystems for &amp;lt;something&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
The standard answer you will get to this question is this: use the defaults.&lt;br /&gt;
&lt;br /&gt;
There are few workloads where using non-default mkfs.xfs or mount options make much sense. In general, the default values already used are optimised for best performance in the first place. mkfs.xfs will detect the difference between single disk and MD/DM RAID setups and change the default values it uses to  configure the filesystem appropriately.&lt;br /&gt;
&lt;br /&gt;
There are a lot of &amp;quot;XFS tuning guides&amp;quot; that Google will find for you - most are old, out of date and full of misleading or just plain incorrect information. Don&#039;t expect that tuning your filesystem for optimal bonnie++ numbers will mean your workload will go faster. You should only consider changing the defaults if either: a) you know from experience that your workload causes XFS a specific problem that can be worked around via a configuration change, or b) your workload is demonstrating bad performance when using the default configurations. In this case, you need to understand why your application is causing bad performance before you start tweaking XFS configurations.&lt;br /&gt;
&lt;br /&gt;
In most cases, the only thing you need to consider for &amp;lt;tt&amp;gt;mkfs.xfs&amp;lt;/tt&amp;gt; is specifying the stripe unit and width for hardware RAID devices. For mount options, the only things that will change metadata performance considerably are the &amp;lt;tt&amp;gt;logbsize&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;delaylog&amp;lt;/tt&amp;gt; mount options. Increasing &amp;lt;tt&amp;gt;logbsize&amp;lt;/tt&amp;gt; reduces the number of journal IOs for a given workload, and &amp;lt;tt&amp;gt;delaylog&amp;lt;/tt&amp;gt; will reduce them even further. The trade-off for this increase in metadata performance is that more operations may be &amp;quot;missing&amp;quot; after recovery if the system crashes while actively making modifications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Q: Which factors influence the memory usage of xfs_repair? ==&lt;br /&gt;
&lt;br /&gt;
This is best explained with an example. The example filesystem is 16TB, but basically empty (look at icount).&lt;br /&gt;
&lt;br /&gt;
  # xfs_repair -n -vv -m 1 /dev/vda&lt;br /&gt;
  Phase 1 - find and verify superblock...&lt;br /&gt;
          - max_mem = 1024, icount = 64, imem = 0, dblock = 4294967296, dmem = 2097152&lt;br /&gt;
  Required memory for repair is greater that the maximum specified&lt;br /&gt;
  with the -m option. Please increase it to at least 2096.&lt;br /&gt;
  #&lt;br /&gt;
&lt;br /&gt;
xfs_repair is saying it needs at least 2096MB of RAM to repair the filesystem,&lt;br /&gt;
of which 2,097,152KB is needed for tracking free space. &lt;br /&gt;
(The -m 1 argument was telling xfs_repair to use only 1 MB of memory.)&lt;br /&gt;
&lt;br /&gt;
Now we add some inodes (50 million) to the filesystem (look at icount again), and the result is:&lt;br /&gt;
&lt;br /&gt;
  # xfs_repair -vv -m 1 /dev/vda&lt;br /&gt;
  Phase 1 - find and verify superblock...&lt;br /&gt;
          - max_mem = 1024, icount = 50401792, imem = 196882, dblock = 4294967296, dmem = 2097152&lt;br /&gt;
  Required memory for repair is greater that the maximum specified&lt;br /&gt;
  with the -m option. Please increase it to at least 2289.&lt;br /&gt;
&lt;br /&gt;
That is, it now needs at least another 200MB of RAM to run.&lt;br /&gt;
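&lt;br /&gt;
As a back-of-the-envelope check: the two reports above are consistent with roughly 4 bytes of tracking memory per inode (imem) and half a byte per filesystem block (dmem). These per-object costs are inferred from the example output, not from xfs_repair&#039;s documented internals:&lt;br /&gt;

```shell
# Values from the second xfs_repair report above
icount=50401792
dblock=4294967296
imem_kb=$(( icount * 4 / 1024 ))   # approx. 4 bytes per inode
dmem_kb=$(( dblock / 2 / 1024 ))   # approx. 0.5 bytes per block
echo imem=${imem_kb} dmem=${dmem_kb}   # imem=196882 dmem=2097152
```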
&lt;br /&gt;
The numbers reported by xfs_repair are the absolute minimum required and approximate at that;&lt;br /&gt;
more RAM than this may be required to complete successfully.&lt;br /&gt;
Also, if you only give xfs_repair the minimum required RAM, it will be slow;&lt;br /&gt;
for best repair performance, the more RAM you can give it the better.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Q: Why do some files on my filesystem show up as &amp;quot;?????????? ? ?      ?          ?                ? filename&amp;quot;? ==&lt;br /&gt;
&lt;br /&gt;
If ls -l shows you a listing like&lt;br /&gt;
&lt;br /&gt;
  # ?????????? ? ?      ?          ?                ? file1&lt;br /&gt;
    ?????????? ? ?      ?          ?                ? file2&lt;br /&gt;
    ?????????? ? ?      ?          ?                ? file3&lt;br /&gt;
    ?????????? ? ?      ?          ?                ? file4&lt;br /&gt;
&lt;br /&gt;
and errors like:&lt;br /&gt;
  # ls /pathtodir/&lt;br /&gt;
    ls: cannot access /pathtodir/file1: Invalid argument&lt;br /&gt;
    ls: cannot access /pathtodir/file2: Invalid argument&lt;br /&gt;
    ls: cannot access /pathtodir/file3: Invalid argument&lt;br /&gt;
    ls: cannot access /pathtodir/file4: Invalid argument&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
or even:&lt;br /&gt;
  # failed to stat /pathtodir/file1&lt;br /&gt;
&lt;br /&gt;
It is very probable that your filesystem must be mounted with inode64:&lt;br /&gt;
  # mount -oremount,inode64 /dev/diskpart /mnt/xfs&lt;br /&gt;
&lt;br /&gt;
should make it work ok again.&lt;br /&gt;
If it works, add the option to fstab.&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Main_Page&amp;diff=2400</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Main_Page&amp;diff=2400"/>
		<updated>2012-01-27T00:09:25Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: video presentation moved to XFS Papers and Documentation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;vertical-align:top&amp;quot; | &amp;lt;!-- Information --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#E2EAFF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Information about XFS ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS FAQ]]&lt;br /&gt;
* [[XFS Status Updates]]&lt;br /&gt;
* [[XFS Papers and Documentation]]&lt;br /&gt;
* [[Linux Distributions shipping XFS]]&lt;br /&gt;
* [[XFS Rpm for RedHat|XFS RPMs for RedHat]]&lt;br /&gt;
* [[XFS Companies]]&lt;br /&gt;
* [http://oss.sgi.com/projects/xfs SGI XFS website]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/XFS Wikipedia XFS page]&lt;br /&gt;
&lt;br /&gt;
== Professional XFS Consulting Services == &lt;br /&gt;
&lt;br /&gt;
[[Consulting Resources]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;50%&amp;quot; style=&amp;quot;vertical-align:top&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Developers --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#F8F8FF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
== XFS Developer Resources ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS email list and archives]]&lt;br /&gt;
* [http://oss.sgi.com/bugzilla/buglist.cgi?product=XFS&amp;amp;bug_status=NEW&amp;amp;bug_status=ASSIGNED&amp;amp;bug_status=REOPENED Bugzilla @ oss.sgi.com]&lt;br /&gt;
* [http://bugzilla.kernel.org/buglist.cgi?product=File+System&amp;amp;component=XFS&amp;amp;bug_status=NEW&amp;amp;bug_status=ASSIGNED&amp;amp;bug_status=REOPENED Bugzilla @ kernel.org]&lt;br /&gt;
* [[Getting the latest source code]]&lt;br /&gt;
* [[Unfinished work]]&lt;br /&gt;
* [[Shrinking Support]]&lt;br /&gt;
* [[Ideas for XFS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- features --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#F2F2F2; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Feature Highlights ==&lt;br /&gt;
&lt;br /&gt;
* [[FITRIM/discard]] - discard (or &amp;quot;trim&amp;quot;) blocks which are not in use by the filesystem&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{#meta: | u+4/+rib+YG96TifD0SN88xS84YSDm2cl61IU7ZIk9g= | verify-v1 }}&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Papers_and_Documentation&amp;diff=2399</id>
		<title>XFS Papers and Documentation</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Papers_and_Documentation&amp;diff=2399"/>
		<updated>2012-01-27T00:08:56Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: XFS: Recent and Future Adventures in Filesystem Scalability - Dave Chinner&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== Primary XFS Documentation ===&lt;br /&gt;
&lt;br /&gt;
The XFS documentation started by SGI has been converted to docbook/[https://fedorahosted.org/publican/ Publican] format.  The material is suitable for experienced users as well as developers and support staff.  The XML source is available in a [http://git.kernel.org/?p=fs/xfs/xfsdocs-xml-dev.git;a=summary git repository] and builds of the documentation are available here:&lt;br /&gt;
&lt;br /&gt;
* [http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide//tmp/en-US/html/index.html XFS User Guide]&lt;br /&gt;
&lt;br /&gt;
* [http://xfs.org/docs/xfsdocs-xml-dev/XFS_Filesystem_Structure//tmp/en-US/html/index.html XFS File System Structure]&lt;br /&gt;
** [http://sites.google.com/site/kandamotohiro/xfs Japanese translation] is also available.&lt;br /&gt;
&lt;br /&gt;
* [http://xfs.org/docs/xfsdocs-xml-dev/XFS_Labs/tmp/en-US/html/index.html XFS Training Labs]&lt;br /&gt;
&lt;br /&gt;
* (Original versions of this material are still available at [http://oss.sgi.com/projects/xfs/training/index.html XFS Overview and Internals (html)] and [http://oss.sgi.com/projects/xfs/papers/xfs_filesystem_structure.pdf XFS Filesystem Structure (pdf)])&lt;br /&gt;
&lt;br /&gt;
The format of &amp;lt;tt&amp;gt;/proc/fs/xfs/stat&amp;lt;/tt&amp;gt; has also been documented:&lt;br /&gt;
* [[Runtime_Stats|Runtime_Stats]]&lt;br /&gt;
&lt;br /&gt;
=== Papers, Presentations, Etc ===&lt;br /&gt;
&lt;br /&gt;
* [http://www.youtube.com/watch?v=FegjLbCnoBw Video &amp;quot;Recent and Future Adventures in Filesystem Scalability&amp;quot; - Dave Chinner ]&lt;br /&gt;
&lt;br /&gt;
The October 2009 issue of the USENIX ;login: magazine published an article about XFS targeted at system administrators:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS: The big storage file system for Linux&#039;&#039; [[http://oss.sgi.com/projects/xfs/papers/hellwig.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the Ottawa Linux Symposium (July 2006), Dave Chinner presented a paper on filesystem scalability in Linux 2.6 kernels:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;High Bandwidth Filesystems on Large Systems&#039;&#039; (July 2006) [[http://oss.sgi.com/projects/xfs/papers/ols2006/ols-2006-paper.pdf paper]] [[http://oss.sgi.com/projects/xfs/papers/ols2006/ols-2006-presentation.pdf presentation]]&lt;br /&gt;
&lt;br /&gt;
At linux.conf.au 2008 Dave Chinner gave a presentation about xfs_repair that he co-authored with Barry Naujok:&lt;br /&gt;
&lt;br /&gt;
* Fixing XFS Filesystems Faster [[http://mirror.linux.org.au/pub/linux.conf.au/2008/slides/135-fixing_xfs_faster.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
In July 2006, SGI storage marketing updated the XFS datasheet:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Open Source XFS for Linux&#039;&#039; [[http://oss.sgi.com/projects/xfs/datasheet.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At UKUUG 2003, Christoph Hellwig presented a talk on XFS:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS for Linux&#039;&#039; (July 2003) [[http://oss.sgi.com/projects/xfs/papers/ukuug2003.pdf pdf]] [[http://verein.lst.de/~hch/talks/ukuug2003/ html]]&lt;br /&gt;
&lt;br /&gt;
Originally published in Proceedings of the FREENIX Track: 2002 Usenix Annual Technical Conference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Filesystem Performance and Scalability in Linux 2.4.17&#039;&#039; (June 2002) [[http://oss.sgi.com/projects/xfs/papers/filesystem-perf-tm.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the Ottawa Linux Symposium, an updated presentation on porting XFS to Linux was given:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Porting XFS to Linux&#039;&#039; (July 2000) [[http://oss.sgi.com/projects/xfs/papers/ols2000/ols-xfs.htm html]]&lt;br /&gt;
&lt;br /&gt;
At the Atlanta Linux Showcase, SGI presented the following paper on the port of XFS to Linux:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Porting the SGI XFS File System to Linux&#039;&#039; (October 1999) [[http://oss.sgi.com/projects/xfs/papers/als/als.ps ps]] [[http://oss.sgi.com/projects/xfs/papers/als/als.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
At the 6th Linux Kongress &amp;amp; the Linux Storage Management Workshop (LSMW) in Germany in September 1999, SGI gave a few presentations, including the following:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;SGI&#039;s port of XFS to Linux&#039;&#039; (September 1999) [[http://oss.sgi.com/projects/xfs/papers/linux_kongress/index.htm html]]&lt;br /&gt;
* &#039;&#039;Overview of DMF&#039;&#039; (September 1999) [[http://oss.sgi.com/projects/xfs/papers/DMF-over/index.htm html]]&lt;br /&gt;
&lt;br /&gt;
At the LinuxWorld Conference &amp;amp; Expo in August 1999, SGI published:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;An Open Source XFS data sheet&#039;&#039; (August 1999) [[http://oss.sgi.com/projects/xfs/papers/xfs_GPL.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
From the 1996 USENIX conference:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;An XFS white paper&#039;&#039; [[http://oss.sgi.com/projects/xfs/papers/xfs_usenix/index.html html]]&lt;br /&gt;
&lt;br /&gt;
=== Other historical articles, press-releases, etc ===&lt;br /&gt;
&lt;br /&gt;
* IBM&#039;s &#039;&#039;Advanced Filesystem Implementor&#039;s Guide&#039;&#039; has a chapter &#039;&#039;Introducing XFS&#039;&#039; [[http://www-106.ibm.com/developerworks/library/l-fs9.html html]]&lt;br /&gt;
&lt;br /&gt;
* An editorial titled &#039;&#039;Tired of fscking? Try a journaling filesystem!&#039;&#039;, Freshmeat (February 2001) [[http://freshmeat.net/articles/view/212/ html]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Who gives a fsck about filesystems&#039;&#039; provides an overview of the Linux 2.4 filesystems [[http://www.linuxuser.co.uk/articles/issue6/lu6-All_you_need_to_know_about-Filesystems.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Journal File Systems&#039;&#039; in issue 55 of &#039;&#039;Linux Gazette&#039;&#039; provides a comparison of journaled filesystems.&lt;br /&gt;
&lt;br /&gt;
* The original XFS beta release announcement was published in &#039;&#039;Linux Today&#039;&#039; (September 2000) [[http://linuxtoday.com/news_story.php3?ltsn=2000-09-26-017-04-OS-SW html]]&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;XFS: It&#039;s worth the wait&#039;&#039; was published on &#039;&#039;EarthWeb&#039;&#039; (July 2000) [[http://networking.earthweb.com/netos/oslin/article/0,,12284_623661,00.html html]]&lt;br /&gt;
&lt;br /&gt;
* An &#039;&#039;IRIX-XFS data sheet&#039;&#039; (July 1999) [[http://oss.sgi.com/projects/xfs/papers/IRIX_xfs_data_sheet.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;Getting Started with XFS&#039;&#039; book (1994) [[http://oss.sgi.com/projects/xfs/papers/getting_started_with_xfs.pdf pdf]]&lt;br /&gt;
&lt;br /&gt;
* Original &#039;&#039;XFS design documents&#039;&#039; (1993) ([http://oss.sgi.com/projects/xfs/design_docs/xfsdocs93_ps/ ps], [http://oss.sgi.com/projects/xfs/design_docs/xfsdocs93_pdf/ pdf])&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Status/2008-August&amp;diff=2395</id>
		<title>Status/2008-August</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Status/2008-August&amp;diff=2395"/>
		<updated>2012-01-15T18:21:43Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: double redirect fixed (sorry)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[XFS_Status_Updates#XFS_status_update_for_August_2008]]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Status_August_2008&amp;diff=2394</id>
		<title>XFS Status August 2008</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Status_August_2008&amp;diff=2394"/>
		<updated>2012-01-15T18:21:39Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: double redirect fixed (sorry)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[XFS_Status_Updates#XFS_status_update_for_August_2008]]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_status_update_for_August_2008&amp;diff=2393</id>
		<title>XFS status update for August 2008</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_status_update_for_August_2008&amp;diff=2393"/>
		<updated>2012-01-15T18:21:36Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: double redirect fixed (sorry)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[XFS_Status_Updates#XFS_status_update_for_August_2008]]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_status_update_for_August_2008&amp;diff=2392</id>
		<title>XFS status update for August 2008</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_status_update_for_August_2008&amp;diff=2392"/>
		<updated>2012-01-15T18:18:03Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: -&amp;gt; Current_events#XFS_status_update_for_August_2008&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Current_events#XFS_status_update_for_August_2008]]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Status_August_2008&amp;diff=2391</id>
		<title>XFS Status August 2008</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Status_August_2008&amp;diff=2391"/>
		<updated>2012-01-15T18:17:27Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: -&amp;gt; Current_events#XFS_status_update_for_August_2008&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Current_events#XFS_status_update_for_August_2008]]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2384</id>
		<title>XFS FAQ</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2384"/>
		<updated>2011-12-27T21:48:07Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: Undo revision 2383 by ColleenGrimes (Talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Info from: [http://oss.sgi.com/projects/xfs/faq.html main XFS faq at SGI]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Many thanks to earlier maintainers of this document - Thomas Graichen and Seth Mos.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about XFS? ==&lt;br /&gt;
&lt;br /&gt;
The SGI XFS project page http://oss.sgi.com/projects/xfs/ is the definitive reference. It contains pointers to whitepapers, books, articles, etc.&lt;br /&gt;
&lt;br /&gt;
You could also join the [[XFS_email_list_and_archives|XFS mailing list]] or the &#039;&#039;&#039;&amp;lt;nowiki&amp;gt;#xfs&amp;lt;/nowiki&amp;gt;&#039;&#039;&#039; IRC channel on &#039;&#039;irc.freenode.net&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about ACLs? ==&lt;br /&gt;
&lt;br /&gt;
Andreas Gruenbacher maintains the Extended Attribute and POSIX ACL documentation for Linux at http://acl.bestbits.at/&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;acl(5)&#039;&#039;&#039; manual page is also quite extensive.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find information about the internals of XFS? ==&lt;br /&gt;
&lt;br /&gt;
An [http://oss.sgi.com/projects/xfs/training/ SGI XFS Training course] aimed at developers, triage and support staff, and serious users has been in development. Parts of the course are clearly still incomplete, but there is enough content to be useful to a broad range of users.&lt;br /&gt;
&lt;br /&gt;
Barry Naujok has documented the [http://oss.sgi.com/projects/xfs/papers/xfs_filesystem_structure.pdf XFS ondisk format] which is a very useful reference.&lt;br /&gt;
&lt;br /&gt;
== Q: What partition type should I use for XFS on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Use the Linux native filesystem partition type (83).&lt;br /&gt;
&lt;br /&gt;
== Q: What mount options does XFS have? ==&lt;br /&gt;
&lt;br /&gt;
There are a number of mount options influencing XFS filesystems - refer to the &#039;&#039;&#039;mount(8)&#039;&#039;&#039; manual page or the documentation in the kernel source tree itself ([http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs.txt;hb=HEAD Documentation/filesystems/xfs.txt])&lt;br /&gt;
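As a quick sketch (device name, mount point and option values here are illustrative examples, not recommendations):

```shell
# Mount an XFS filesystem with a couple of common options
# (noatime disables access-time updates; logbsize sets the log buffer size).
mount -t xfs -o noatime,logbsize=256k /dev/sda7 /mnt/data

# The equivalent /etc/fstab entry would look like:
#   /dev/sda7  /mnt/data  xfs  noatime,logbsize=256k  0 0
```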
&lt;br /&gt;
== Q: Is there any relation between the XFS utilities and the kernel version? ==&lt;br /&gt;
&lt;br /&gt;
No, there is no relation. Newer utilities mainly contain fixes and checks that previous versions might not have. New features are also added in a backward-compatible way - if they are enabled via mkfs, an incapable (old) kernel will recognize that it does not understand the new feature and refuse to mount the filesystem.&lt;br /&gt;
&lt;br /&gt;
== Q: Does it run on platforms other than i386? ==&lt;br /&gt;
&lt;br /&gt;
XFS runs on all of the platforms that Linux supports. It is more tested on the more common platforms, especially the i386 family. It&#039;s also well tested on the IA64 platform, since that&#039;s the platform SGI&#039;s Linux products use.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Do quotas work on XFS? ==&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
To use quotas with XFS, you need to enable XFS quota support when you configure your kernel. You also need to specify quota support when mounting. You can get the Linux quota utilities at their sourceforge website [http://sourceforge.net/projects/linuxquota/  http://sourceforge.net/projects/linuxquota/] or use &#039;&#039;&#039;xfs_quota(8)&#039;&#039;&#039;.&lt;br /&gt;
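A minimal sketch of the steps above (device, mount point, user name and limit values are hypothetical):

```shell
# Mount with user quota accounting and enforcement enabled.
mount -t xfs -o uquota /dev/sdb1 /home

# Set a 5GB soft / 6GB hard block limit for user "alice" (example values).
xfs_quota -x -c 'limit bsoft=5g bhard=6g alice' /home

# Report current usage and limits in human-readable form.
xfs_quota -x -c 'report -h' /home
```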
&lt;br /&gt;
== Q: Quota: What&#039;s project quota? ==&lt;br /&gt;
&lt;br /&gt;
Project quota is a quota mechanism in XFS that can be used to implement a form of directory tree quota, where a specified directory and all of the files and subdirectories below it (i.e. a tree) can be restricted to using a subset of the available space in the filesystem.&lt;br /&gt;
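A sketch of setting one up (paths, project name and project ID are made up for illustration):

```shell
# Mount with project quota enabled.
mount -t xfs -o prjquota /dev/sdc1 /srv

# Map a project ID to the directory tree and give the project a name.
echo "42:/srv/website" >> /etc/projects
echo "website:42" >> /etc/projid

# Initialize the tree and limit it to 10GB of space.
xfs_quota -x -c 'project -s website' /srv
xfs_quota -x -c 'limit -p bhard=10g website' /srv
```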
&lt;br /&gt;
== Q: Quota: Can group quota and project quota be used at the same time? ==&lt;br /&gt;
&lt;br /&gt;
No, project quota and group quota cannot be used at the same time. User quota and project quota, however, can be used simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Does unmounting a filesystem mounted with prjquota (project quota) and remounting it with grpquota (group quota) remove previously set prjquota limits (and vice versa)? ==&lt;br /&gt;
&lt;br /&gt;
To be answered.&lt;br /&gt;
&lt;br /&gt;
== Q: Are there any dump/restore tools for XFS? ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039; and &#039;&#039;&#039;xfsrestore(8)&#039;&#039;&#039; are fully supported. The tape format is the same as on IRIX, so tapes are interchangeable between operating systems.&lt;br /&gt;
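A minimal sketch (paths and labels are examples; xfsdump prompts for session and media labels unless -L/-M are given):

```shell
# Level 0 (full) dump of the filesystem mounted at /data into a file.
xfsdump -l 0 -L datadump -M media0 -f /backup/data.xfsdump /data

# Restore the whole dump into a destination filesystem.
xfsrestore -f /backup/data.xfsdump /data
```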
&lt;br /&gt;
== Q: Does LILO work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
This depends on where you install LILO.&lt;br /&gt;
&lt;br /&gt;
Yes, for MBR (Master Boot Record) installations.&lt;br /&gt;
&lt;br /&gt;
No, for root partition installations because the XFS superblock is written at block zero, where LILO would be installed. This is to maintain compatibility with the IRIX on-disk format, and will not be changed.&lt;br /&gt;
&lt;br /&gt;
== Q: Does GRUB work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
There is native XFS filesystem support in GRUB from version 0.91 onward. Unfortunately, GRUB used to make incorrect assumptions about being able to read a block device image while a filesystem is mounted and actively being written to, which could cause intermittent problems when using XFS. This has reportedly since been fixed, and the 0.97 version (at least) of GRUB is apparently stable.&lt;br /&gt;
&lt;br /&gt;
== Q: Can XFS be used for a root filesystem? ==&lt;br /&gt;
&lt;br /&gt;
Yes, with one caveat: Linux does not support an external XFS journal for the root filesystem via the &amp;quot;rootflags=&amp;quot; kernel parameter. To use an external journal for the root filesystem in Linux, an init ramdisk must mount the root filesystem with explicit &amp;quot;logdev=&amp;quot; specified. [http://mindplusplus.wordpress.com/2008/07/27/scratching-an-i.html More information here.]&lt;br /&gt;
&lt;br /&gt;
== Q: Will I be able to use my IRIX XFS filesystems on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Yes. The on-disk format of XFS is the same on IRIX and Linux. Obviously, you should back up your data before trying to move it between systems. Filesystems must be &amp;quot;clean&amp;quot; when moved (i.e. unmounted). If you plan to use IRIX filesystems on Linux, keep the following points in mind: the kernel needs to have SGI partition support enabled; there is no XLV support in Linux, so you are unable to read IRIX filesystems which use the XLV volume manager; and not all blocksizes available on IRIX are available on Linux (only blocksizes less than or equal to the pagesize of the architecture: 4k for i386, ppc, ... 8k for alpha, sparc, ...). Make sure that the directory format is version 2 on the IRIX filesystems (this is the default since IRIX 6.5.5), as Linux can only read v2 directories.&lt;br /&gt;
&lt;br /&gt;
== Q: Is there a way to make a XFS filesystem larger or smaller? ==&lt;br /&gt;
&lt;br /&gt;
You can &#039;&#039;NOT&#039;&#039; make an XFS filesystem smaller online. The only way to shrink it is to do a complete dump, mkfs and restore.&lt;br /&gt;
&lt;br /&gt;
An XFS filesystem may be enlarged by using &#039;&#039;&#039;xfs_growfs(8)&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
If using partitions, you need free space after the partition in question. Remove the partition and recreate it larger, with the &#039;&#039;exact same&#039;&#039; starting point, then run &#039;&#039;&#039;xfs_growfs&#039;&#039;&#039; to enlarge the filesystem. Note - editing partition tables is a dangerous pastime, so back up your filesystem before doing so.&lt;br /&gt;
&lt;br /&gt;
Using XFS filesystems on top of a volume manager makes this a lot easier.&lt;br /&gt;
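For example, with the filesystem on an LVM logical volume (volume and mount point names are illustrative):

```shell
# Enlarge the underlying logical volume first.
lvextend -L +50G /dev/vg0/data

# Grow the filesystem to fill the device; note that xfs_growfs works on a
# *mounted* filesystem and takes the mount point as its argument.
xfs_growfs /mnt/data

# Confirm the new geometry.
xfs_info /mnt/data
```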
&lt;br /&gt;
== Q: What information should I include when reporting a problem? ==&lt;br /&gt;
&lt;br /&gt;
Things to include are which version of XFS you are using (if it is a CVS version, of what date) and the version of the kernel. If you have problems with userland packages, please report the version of the package you are using.&lt;br /&gt;
&lt;br /&gt;
If the problem relates to a particular filesystem, the output from the &#039;&#039;&#039;xfs_info(8)&#039;&#039;&#039; command and any &#039;&#039;&#039;mount(8)&#039;&#039;&#039; options in use will also be useful to the developers.&lt;br /&gt;
&lt;br /&gt;
If you experience an oops, please run it through &#039;&#039;&#039;ksymoops&#039;&#039;&#039; so that it can be interpreted.&lt;br /&gt;
&lt;br /&gt;
If you have a filesystem that cannot be repaired, make sure you have xfsprogs 2.9.0 or later and run &#039;&#039;&#039;xfs_metadump(8)&#039;&#039;&#039; to capture the metadata (which obfuscates filenames and attributes to protect your privacy) and make the dump available for someone to analyse.&lt;br /&gt;
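A sketch of gathering that information (device and mount point are examples):

```shell
uname -r                     # kernel version
xfs_repair -V                # xfsprogs version
xfs_info /mnt/data           # filesystem geometry
grep xfs /proc/mounts        # mount options in use

# For an unrepairable filesystem (unmounted), capture the obfuscated metadata:
xfs_metadump /dev/sdb1 /tmp/sdb1.metadump
```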
&lt;br /&gt;
== Q: Mounting an XFS filesystem does not work - what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
If mount prints an error message something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
     mount: /dev/hda5 has wrong major or minor number&lt;br /&gt;
&lt;br /&gt;
you either do not have XFS compiled into the kernel (or you forgot to load the modules) or you did not use the &amp;quot;-t xfs&amp;quot; option on mount or the &amp;quot;xfs&amp;quot; option in &amp;lt;tt&amp;gt;/etc/fstab&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you get something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 mount: wrong fs type, bad option, bad superblock on /dev/sda1,&lt;br /&gt;
        or too many mounted file systems&lt;br /&gt;
&lt;br /&gt;
Refer to your system log file (&amp;lt;tt&amp;gt;/var/log/messages&amp;lt;/tt&amp;gt;) for a detailed diagnostic message from the kernel.&lt;br /&gt;
&lt;br /&gt;
== Q: Does the filesystem have an undelete capability? ==&lt;br /&gt;
&lt;br /&gt;
There is no [[undelete]] in XFS (so far).&lt;br /&gt;
&lt;br /&gt;
However, at least some XFS driver implementations do not wipe file information nodes completely, so there is a chance to recover files with specialized commercial closed-source software like [http://www.ufsexplorer.com/rdr_xfs.php Raise Data Recovery for XFS].&lt;br /&gt;
&lt;br /&gt;
Such implementations also do not re-use directory entries immediately, so there is a chance to get back recently deleted files even with their real names.&lt;br /&gt;
&lt;br /&gt;
[[xfs_irecover]] or [[xfsr]] may help too (http://rzr.online.fr/q/recover provides a few links).&lt;br /&gt;
&lt;br /&gt;
This applies to most recent Linux distributions (versions?), as well as to most popular NAS boxes that use embedded Linux and the XFS filesystem.&lt;br /&gt;
&lt;br /&gt;
In any case, the best protection is to always keep backups.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I back up an XFS filesystem and ACLs? ==&lt;br /&gt;
&lt;br /&gt;
You can back up an XFS filesystem with utilities like &#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039; or standard &#039;&#039;&#039;tar(1)&#039;&#039;&#039; for ordinary files. If you want to back up ACLs and EAs as well, you will need &#039;&#039;&#039;xfsdump&#039;&#039;&#039;, [http://www.bacula.org/en/dev-manual/Current_State_Bacula.html Bacula] (&amp;gt; version 3.1.4) or [http://rsync.samba.org/ rsync] (&amp;gt;= version 3.0.0). &#039;&#039;&#039;xfsdump&#039;&#039;&#039; can also be integrated with [http://www.amanda.org/ amanda(8)].&lt;br /&gt;
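For illustration (paths are hypothetical; the tar flags require a reasonably recent GNU tar):

```shell
# xfsdump preserves ACLs and extended attributes natively.
xfsdump -l 0 -f /backup/home.xfsdump /home

# rsync 3.0.0+ can carry ACLs (-A) and extended attributes (-X).
rsync -aAX /home/ /backup/home/

# GNU tar can store them too, with explicit flags.
tar --acls --xattrs -cf /backup/home.tar /home
```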
&lt;br /&gt;
== Q: I see applications returning error 990 or &amp;quot;Structure needs cleaning&amp;quot;, what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
The error 990 stands for [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=blob;f=fs/xfs/linux-2.6/xfs_linux.h#l145 EFSCORRUPTED] which usually means XFS has detected a filesystem metadata problem and has shut the filesystem down to prevent further damage. Also, since about June 2006, we [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=commit;h=da2f4d679c8070ba5b6a920281e495917b293aa0 converted from EFSCORRUPTED/990 over to using EUCLEAN], &amp;quot;Structure needs cleaning.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The cause can be pretty much anything, unfortunately - filesystem, virtual memory manager, volume manager, device driver, or hardware.&lt;br /&gt;
&lt;br /&gt;
There should be a detailed console message when this initially happens. The messages have important information giving hints to developers as to the earliest point that a problem was detected. It is there to protect your data.&lt;br /&gt;
&lt;br /&gt;
You can use xfs_check and xfs_repair to remedy the problem (with the file system unmounted).&lt;br /&gt;
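A sketch of that sequence (device name is an example):

```shell
umount /dev/sdb1         # the filesystem must not be mounted
xfs_check /dev/sdb1      # read-only consistency check
xfs_repair -n /dev/sdb1  # dry run: report what would be fixed, change nothing
xfs_repair /dev/sdb1     # actually repair the filesystem
```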
&lt;br /&gt;
== Q: Why do I see binary NULLS in some files after recovery when I unplugged the power? ==&lt;br /&gt;
&lt;br /&gt;
Update: This issue has been addressed with a CVS fix on the 29th March 2007 and merged into mainline on 8th May 2007 for 2.6.22-rc1.&lt;br /&gt;
&lt;br /&gt;
XFS journals metadata updates, not data updates. After a crash you are supposed to get a consistent filesystem which looks like the state sometime shortly before the crash, NOT what the in memory image looked like the instant before the crash.&lt;br /&gt;
&lt;br /&gt;
Since XFS does not write data out immediately unless you tell it to with fsync, an O_SYNC or O_DIRECT open (the same is true of other filesystems), you are looking at an inode which was flushed out, but whose data was not. Typically you&#039;ll find that the inode is not taking any space since all it has is a size but no extents allocated (try examining the file with the &#039;&#039;&#039;xfs_bmap(8)&#039;&#039;&#039; command).&lt;br /&gt;
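For example (file paths are hypothetical):

```shell
# A file whose size was flushed but whose data never was shows up as a hole:
xfs_bmap -v /data/app.log

# Applications must request stable storage explicitly, e.g. with fsync;
# GNU dd can do that from the shell:
dd if=/dev/zero of=/data/critical.dat bs=4k count=1 conv=fsync
```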
&lt;br /&gt;
== Q: What is the problem with the write cache on journaled filesystems? ==&lt;br /&gt;
&lt;br /&gt;
Many drives use a write back cache in order to speed up the performance of writes.  However, there are conditions such as power failure when the write cache memory is never flushed to the actual disk.  Further, the drive can de-stage data from the write cache to the platters in any order that it chooses.  This causes problems for XFS and journaled filesystems in general because they rely on knowing when a write has completed to the disk. They need to know that the log information has made it to disk before allowing metadata to go to disk.  When the metadata makes it to disk then the transaction can effectively be deleted from the log resulting in movement of the tail of the log and thus freeing up some log space. So if the writes never make it to the physical disk, then the ordering is violated and the log and metadata can be lost, resulting in filesystem corruption.&lt;br /&gt;
&lt;br /&gt;
With hard disk cache sizes of currently (Jan 2009) up to 32MB, that can be a lot of valuable information. In a RAID with 8 such disks this adds up to 256MB, and the chance of having filesystem metadata in the cache is so high that there is a very high risk of big data losses on a power outage.&lt;br /&gt;
&lt;br /&gt;
With a single hard disk and barriers turned on (on=default), the drive write cache is flushed before and after a barrier is issued.  A powerfail &amp;quot;only&amp;quot; loses data in the cache but no essential ordering is violated, and corruption will not occur.&lt;br /&gt;
&lt;br /&gt;
With a RAID controller with a battery-backed cache operating in write-back mode, you should turn off barriers - they are unnecessary in this case, and if the controller honors the cache flushes they will be harmful to performance. But then you *must* disable the individual hard disk write caches in order to keep the filesystem intact after a power failure. The method for doing this is different for each RAID controller. See the section about RAID controllers below.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I tell if I have the disk write cache enabled? ==&lt;br /&gt;
&lt;br /&gt;
For SCSI/SATA:&lt;br /&gt;
&lt;br /&gt;
* Look in dmesg(8) output for a driver line, such as:&amp;lt;br /&amp;gt; &amp;quot;SCSI device sda: drive cache: write back&amp;quot;&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# sginfo -c /dev/sda | grep -i &#039;write cache&#039; &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For PATA/SATA (although for SATA this only works on a recent kernel with ATA command passthrough):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -I /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; and look under &amp;quot;Enabled Supported&amp;quot; for &amp;quot;Write cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
== Q: How can I address the problem with the disk write cache? ==&lt;br /&gt;
&lt;br /&gt;
=== Disabling the disk write back cache. ===&lt;br /&gt;
&lt;br /&gt;
For SATA/PATA (IDE) (although for SATA this only works on a recent kernel with ATA command passthrough):&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -W0 /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # hdparm -W0 /dev/hda&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# blktool /dev/sda wcache off&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # blktool /dev/hda wcache off&lt;br /&gt;
&lt;br /&gt;
For SCSI:&lt;br /&gt;
&lt;br /&gt;
* Using sginfo(8) which is a little tedious&amp;lt;br /&amp;gt; It takes 3 steps. For example:&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;# sginfo -c /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives a list of attribute names and values&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;# sginfo -cX /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives an array of cache values which you must match up with the names from step 1, e.g.&amp;lt;br /&amp;gt; 0 0 0 1 0 1 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;# sginfo -cXR /dev/sda 0 0 0 1 0 0 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which resets the values of the cache attributes.&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
This setting is persistent for a SCSI disk. For a SATA/PATA disk, however, it needs to be re-applied after every reset, as the drive will revert to its default of write cache enabled. A reset can happen after a reboot or on error recovery of the drive, which makes it rather difficult to guarantee that the write cache stays disabled.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Using an external log. ===&lt;br /&gt;
&lt;br /&gt;
Some people have considered the idea of using an external log on a separate drive with the write cache disabled and the rest of the file system on another disk with the write cache enabled. However, that will &#039;&#039;&#039;not&#039;&#039;&#039; solve the problem. For example, the tail of the log is moved when we are notified that a metadata write is completed to disk and we won&#039;t be able to guarantee that if the metadata is on a drive with the write cache enabled.&lt;br /&gt;
&lt;br /&gt;
In fact using an external log will disable XFS&#039; write barrier support.&lt;br /&gt;
&lt;br /&gt;
=== Write barrier support. ===&lt;br /&gt;
&lt;br /&gt;
Write barrier support has been enabled by default in XFS since kernel version 2.6.17. It can be disabled by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. Barrier support flushes the write-back cache at the appropriate times (such as on XFS log writes). This is generally the recommended solution; however, you should check the system logs to ensure it was successful. Barriers will be disabled, and a message logged, if any of these three scenarios occurs:&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported with external log device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported by the underlying device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, trial barrier write failed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
If the filesystem is mounted with an external log device, then we currently do not support flushing both the data and log devices (this may change in the future). If the driver tells the block layer that the device does not support write cache flushing while the write cache is enabled, then barriers will be reported as unsupported by the device. And finally, we actually test a barrier write on the superblock and check its error state afterwards, reporting if it fails.&lt;br /&gt;
&lt;br /&gt;
== Q. Should barriers be enabled with storage which has a persistent write cache? ==&lt;br /&gt;
&lt;br /&gt;
Many hardware RAID controllers have a persistent write cache which is preserved across power failures, interface resets, system crashes, etc. Using write barriers in this case is not recommended and will in fact lower performance. Therefore, mount the filesystem with &amp;quot;nobarrier&amp;quot; to turn barrier support off. But take care that the individual hard disk write caches are turned off.&lt;br /&gt;
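A sketch (device and mount point are examples):

```shell
# Only on storage whose write cache is persistent (battery/flash backed):
mount -t xfs -o nobarrier /dev/sdd1 /mnt/array

# Check the system log for any barrier-related messages afterwards.
dmesg | grep -i barrier
```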
&lt;br /&gt;
== Q. Which settings does my RAID controller need ? ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s hard to tell because there are so many controllers. Please consult your RAID controller documentation to determine how to change these settings, but we try to give an overview here:&lt;br /&gt;
&lt;br /&gt;
Real RAID controllers (not those found onboard mainboards) normally have a battery-backed cache (or an [http://en.wikipedia.org/wiki/Electric_double-layer_capacitor ultracapacitor] + flash memory &amp;quot;[http://www.tweaktown.com/articles/2800/adaptec_zero_maintenance_cache_protection_explained/ zero maintenance cache]&amp;quot;) which is used to buffer writes and improve speed. Even if the controller cache is battery-backed, the individual hard disk write caches need to be turned off, as they are not protected from a power failure and will simply lose all contents in that case.&lt;br /&gt;
&lt;br /&gt;
* Onboard RAID controllers: there are so many different types that it is hard to generalize. Usually these controllers have no cache of their own but leave the hard disk write cache on. That can lead to a bad situation: after a power failure with RAID-1, when only parts of the disk caches have been written, the controller cannot even see that the disks are out of sync - the disks may reorder cached blocks and might have saved the superblock info but lost different data contents. So, turn off the disk write caches before using the RAID function.&lt;br /&gt;
&lt;br /&gt;
* 3ware: /cX/uX set cache=off, see http://www.3ware.com/support/UserDocs/CLIGuide-9.5.1.1.pdf , page 86&lt;br /&gt;
&lt;br /&gt;
* Adaptec: allows setting each drive&#039;s cache individually:&lt;br /&gt;
arcconf setcache &amp;lt;disk&amp;gt; wb|wt&lt;br /&gt;
wb = write back (write cache on), wt = write through (write cache off). So &amp;quot;wt&amp;quot; should be chosen.&lt;br /&gt;
&lt;br /&gt;
* Areca: In archttp under &amp;quot;System Controls&amp;quot; -&amp;gt; &amp;quot;System Configuration&amp;quot; there&#039;s the option &amp;quot;Disk Write Cache Mode&amp;quot; (defaults &amp;quot;Auto&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Off&amp;quot;: disk write cache is turned off&lt;br /&gt;
&lt;br /&gt;
&amp;quot;On&amp;quot;: disk write cache is enabled, this is not safe for your data but fast&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Auto&amp;quot;: If you use a BBM (battery backup module, which you really should use if you care about your data), the controller automatically turns the disk write caches off to protect your data. If no BBM is attached, the controller switches to &amp;quot;On&amp;quot;, because then neither the controller cache nor the disk cache is safe anyway, so it assumes you just want high speed (which you then get).&lt;br /&gt;
&lt;br /&gt;
That&#039;s a very sensible default, so you can leave it on &amp;quot;Auto&amp;quot; or enforce &amp;quot;Off&amp;quot; to be sure.&lt;br /&gt;
&lt;br /&gt;
* LSI MegaRAID: allows setting individual disks cache:&lt;br /&gt;
 MegaCli -AdpCacheFlush -aN|-a0,1,2|-aALL                          # flushes the controller cache&lt;br /&gt;
 MegaCli -LDGetProp -Cache    -LN|-L0,1,2|-LAll -aN|-a0,1,2|-aALL  # shows the controller cache settings&lt;br /&gt;
 MegaCli -LDGetProp -DskCache -LN|-L0,1,2|-LAll -aN|-a0,1,2|-aALL  # shows the disk cache settings (for all phys. disks in logical disk)&lt;br /&gt;
 MegaCli -LDSetProp -EnDskCache|DisDskCache  -LN|-L0,1,2|-LAll  -aN|-a0,1,2|-aALL # set disk cache setting&lt;br /&gt;
&lt;br /&gt;
* Xyratex: from the docs: &amp;quot;Write cache includes the disk drive cache and controller cache.&amp;quot; That means you can only set the drive caches and the unit cache together. To protect your data, turn it off, but write performance will suffer badly since the controller write cache is disabled as well.&lt;br /&gt;
&lt;br /&gt;
== Q: Which settings are best with virtualization like VMware, XEN, qemu? ==&lt;br /&gt;
&lt;br /&gt;
The biggest problem is that those products seem to virtualize disk &lt;br /&gt;
writes in a way that even barriers no longer work, which means even &lt;br /&gt;
an fsync is not reliable. Tests confirm that by unplugging the power from &lt;br /&gt;
such a system you can destroy a database within the virtual machine &lt;br /&gt;
(guest, domU, whatever you call it), even with a RAID controller with &lt;br /&gt;
battery-backed cache and hard disk caches turned off (which is safe on a normal host).&lt;br /&gt;
&lt;br /&gt;
In qemu you can specify cache=none (spelled cache=off in older versions) on the &lt;br /&gt;
option specifying the virtual disk. For other products, information is missing.&lt;br /&gt;
&lt;br /&gt;
== Q: What is the issue with directory corruption in Linux 2.6.17? ==&lt;br /&gt;
&lt;br /&gt;
In the Linux kernel 2.6.17 release a subtle bug was accidentally introduced into the XFS directory code by some &amp;quot;sparse&amp;quot; endian annotations. This bug was sufficiently uncommon (it only affects a certain type of format change, in Node or B-Tree format directories, and only in certain situations) that it was not detected during our regular regression testing, but it has been observed in the wild by a number of people now.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: the fix is included in 2.6.17.7 and later kernels.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To add insult to injury, &#039;&#039;&#039;xfs_repair(8)&#039;&#039;&#039; is currently not correcting these directories on detection of this corrupt state either. This &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; issue is actively being worked on, and a fixed version will be available shortly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; is now available; version 2.8.10 or later of the xfsprogs package contains the fixed version.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
No other kernel versions are affected. However, using a corrupt filesystem on other kernels can still result in the filesystem being shut down if the problem has not been rectified (on disk), making it seem like other kernels are affected.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;xfs_check&#039;&#039;&#039; tool, or &#039;&#039;&#039;xfs_repair -n&#039;&#039;&#039;, should be able to detect any directory corruption.&lt;br /&gt;
&lt;br /&gt;
Until a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; binary is available, one can use the &#039;&#039;&#039;xfs_db(8)&#039;&#039;&#039; command to mark the problem directory for removal (see the first example below). A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; invocation will remove the directory and move all of its contents into &amp;quot;lost+found&amp;quot;, named by inode number (see the second example on how to map inode numbers to directory entry names, which needs to be done _before_ removing the directory itself). The inode number of the corrupt directory is included in the shutdown report issued by the kernel on detection of directory corruption. Using that inode number, this is how one would ensure it is removed:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 core.mode = 040755&lt;br /&gt;
 core.version = 2&lt;br /&gt;
 core.format = 3 (btree)&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; write core.mode 0&lt;br /&gt;
 xfs_db&amp;amp;gt; quit&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; will clear the directory, and add new entries (named by inode number) in lost+found.&lt;br /&gt;
&lt;br /&gt;
The easiest way to map inode numbers to full paths is via &#039;&#039;&#039;xfs_ncheck(8)&#039;&#039;&#039;&amp;lt;nowiki&amp;gt;: &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_ncheck -i 14101 -i 14102 /dev/sdXXX&lt;br /&gt;
       14101 full/path/mumble_fratz_foo_bar_1495&lt;br /&gt;
       14102 full/path/mumble_fratz_foo_bar_1494&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Should this not work, we can manually map inode numbers in a B-Tree format directory by taking the following steps:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 ...&lt;br /&gt;
 next_unlinked = null&lt;br /&gt;
 u.bmbt.level = 1&lt;br /&gt;
 u.bmbt.numrecs = 1&lt;br /&gt;
 u.bmbt.keys[1] = [startoff] 1:[0]&lt;br /&gt;
 u.bmbt.ptrs[1] = 1:3628&lt;br /&gt;
 xfs_db&amp;amp;gt; fsblock 3628&lt;br /&gt;
 xfs_db&amp;amp;gt; type bmapbtd&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 magic = 0x424d4150&lt;br /&gt;
 level = 0&lt;br /&gt;
 numrecs = 19&lt;br /&gt;
 leftsib = null&lt;br /&gt;
 rightsib = null&lt;br /&gt;
 recs[1-19] = [startoff,startblock,blockcount,extentflag]&lt;br /&gt;
        1:[0,3088,4,0] 2:[4,3128,8,0] 3:[12,3308,4,0] 4:[16,3360,4,0]&lt;br /&gt;
        5:[20,3496,8,0] 6:[28,3552,8,0] 7:[36,3624,4,0] 8:[40,3633,4,0]&lt;br /&gt;
        9:[44,3688,8,0] 10:[52,3744,4,0] 11:[56,3784,8,0]&lt;br /&gt;
        12:[64,3840,8,0] 13:[72,3896,4,0] 14:[33554432,3092,4,0]&lt;br /&gt;
        15:[33554436,3488,8,0] 16:[33554444,3629,4,0]&lt;br /&gt;
        17:[33554448,3748,4,0] 18:[33554452,3900,4,0]&lt;br /&gt;
        19:[67108864,3364,4,0]&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point we are looking at the extents that hold all of the directory information. There are three types of extent here: the data blocks (extents 1 through 13 above), the leaf blocks (extents 14 through 18), and the freelist blocks (extent 19 above). The jumps in the first field (start offset) indicate our progression through each of the three types. For recovering file names, we are only interested in the data blocks, so we can now feed those offset numbers into the &#039;&#039;&#039;xfs_db&#039;&#039;&#039; dblock command. So, for the fifth extent - 5:[20,3496,8,0] - listed above:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; dblock 20&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 dhdr.magic = 0x58443244&lt;br /&gt;
 dhdr.bestfree[0].offset = 0&lt;br /&gt;
 dhdr.bestfree[0].length = 0&lt;br /&gt;
 dhdr.bestfree[1].offset = 0&lt;br /&gt;
 dhdr.bestfree[1].length = 0&lt;br /&gt;
 dhdr.bestfree[2].offset = 0&lt;br /&gt;
 dhdr.bestfree[2].length = 0&lt;br /&gt;
 du[0].inumber = 13937&lt;br /&gt;
 du[0].namelen = 25&lt;br /&gt;
 du[0].name = &amp;quot;mumble_fratz_foo_bar_1595&amp;quot;&lt;br /&gt;
 du[0].tag = 0x10&lt;br /&gt;
 du[1].inumber = 13938&lt;br /&gt;
 du[1].namelen = 25&lt;br /&gt;
 du[1].name = &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;&lt;br /&gt;
 du[1].tag = 0x38&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
So, here we can see that inode number 13938 matches up with name &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;. Iterate through all the extents, and extract all the name-to-inode-number mappings you can, as these will be useful when looking at &amp;quot;lost+found&amp;quot; (once &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; has removed the corrupt directory).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Q: Why does my &amp;gt; 2TB XFS partition disappear when I reboot? ==&lt;br /&gt;
&lt;br /&gt;
Strictly speaking this is not an XFS problem.&lt;br /&gt;
&lt;br /&gt;
To support &amp;gt; 2TB partitions you need two things: a kernel that supports large block devices (&amp;lt;tt&amp;gt;CONFIG_LBD=y&amp;lt;/tt&amp;gt;) and a partition table format that can hold large partitions. The default DOS partition tables don&#039;t. The best partition format for &amp;gt; 2TB partitions is the EFI GPT format (&amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Without CONFIG_LBD=y you can&#039;t even create the filesystem, but without &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt; it works fine until you reboot at which point the partition will disappear.  Note that you need to enable the &amp;lt;tt&amp;gt;CONFIG_PARTITION_ADVANCED&amp;lt;/tt&amp;gt; option before you can set &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;.&lt;br /&gt;
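Taken together, the relevant kernel configuration fragment (option names as used in kernels of that era) looks like this:&lt;br /&gt;

```
CONFIG_LBD=y
CONFIG_PARTITION_ADVANCED=y
CONFIG_EFI_PARTITION=y
```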
&lt;br /&gt;
== Q: Why do I receive &amp;lt;tt&amp;gt;No space left on device&amp;lt;/tt&amp;gt; after &amp;lt;tt&amp;gt;xfs_growfs&amp;lt;/tt&amp;gt;? ==&lt;br /&gt;
&lt;br /&gt;
After [http://oss.sgi.com/pipermail/xfs/2009-January/039828.html growing an XFS filesystem], df(1) shows enough free space but attempts to write to the filesystem fail with -ENOSPC. To fix this, [http://oss.sgi.com/pipermail/xfs/2009-January/039835.html Dave Chinner advised]:&lt;br /&gt;
&lt;br /&gt;
  The only way to fix this is to move data around to free up space&lt;br /&gt;
  below 1TB. Find your oldest data (i.e. that was around before even&lt;br /&gt;
  the first grow) and move it off the filesystem (move, not copy).&lt;br /&gt;
  Then if you copy it back on, the data blocks will end up above 1TB&lt;br /&gt;
  and that should leave you with plenty of space for inodes below 1TB.&lt;br /&gt;
  &lt;br /&gt;
  A complete dump and restore will also fix the problem ;)&lt;br /&gt;
&lt;br /&gt;
Also, you can add &#039;inode64&#039; to your mount options to allow inodes to live above 1TB.&lt;br /&gt;
&lt;br /&gt;
Example: [https://www.centos.org/modules/newbb/viewtopic.php?topic_id=30703&amp;amp;forum=38 No space left on device on xfs filesystem with 7.7TB free]&lt;br /&gt;
&lt;br /&gt;
== Q: Is using noatime or/and nodiratime at mount time giving any performance benefits in xfs (or not using them performance decrease)? ==&lt;br /&gt;
&lt;br /&gt;
The default atime behaviour is relatime, which has almost no overhead compared to noatime but still maintains sane atime values. All Linux filesystems use this as the default now (since around 2.6.30), but XFS has used relatime-like behaviour since 2006, so no-one should really need to ever use noatime on XFS for performance reasons. &lt;br /&gt;
&lt;br /&gt;
Also, noatime implies nodiratime, so there is never a need to specify nodiratime when noatime is also specified.&lt;br /&gt;
&lt;br /&gt;
== Q: How to get around a bad inode repair is unable to clean up ==&lt;br /&gt;
&lt;br /&gt;
The trick is to go in with xfs_db and mark the inode as deleted, which will cause repair to clean it up and finish the removal process.&lt;br /&gt;
&lt;br /&gt;
  xfs_db -x -c &#039;inode XXX&#039; -c &#039;write core.nextents 0&#039; -c &#039;write core.size 0&#039; /dev/hdXX&lt;br /&gt;
&lt;br /&gt;
== Q: How to calculate the correct sunit,swidth values for optimal performance ==&lt;br /&gt;
&lt;br /&gt;
XFS allows you to optimize for a given RAID stripe unit (stripe size) and stripe width (number of data disks) via mkfs or mount options.&lt;br /&gt;
&lt;br /&gt;
These options can sometimes be autodetected (for example with md RAID and a recent enough kernel (&amp;gt;= 2.6.32) and xfsprogs (&amp;gt;= 3.1.1) built with libblkid support), but manual calculation is needed for most hardware RAIDs.&lt;br /&gt;
&lt;br /&gt;
The calculation of these values is quite simple:&lt;br /&gt;
&lt;br /&gt;
  su = &amp;lt;RAID controllers stripe size in BYTES (or KiBytes when used with k)&amp;gt;&lt;br /&gt;
  sw = &amp;lt;# of data disks (don&#039;t count parity disks)&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So if your RAID controller has a stripe size of 64KB, and you have a RAID-6 with 8 disks, use&lt;br /&gt;
&lt;br /&gt;
  su = 64k&lt;br /&gt;
  sw = 6 (RAID-6 of 8 disks has 6 data disks)&lt;br /&gt;
&lt;br /&gt;
A RAID stripe size of 256KB with a RAID-10 over 16 disks should use&lt;br /&gt;
&lt;br /&gt;
  su = 256k&lt;br /&gt;
  sw = 8 (RAID-10 of 16 disks has 8 data disks)&lt;br /&gt;
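The arithmetic for the RAID-6 example above can be sketched in shell; the mkfs invocation in the comment is illustrative only (substitute your actual device):&lt;br /&gt;

```shell
# RAID-6 dedicates two disks' worth of capacity to parity,
# so the number of data disks is the total minus two.
DISKS=8
PARITY=2
SW=$((DISKS - PARITY))
echo "$SW"    # 6 data disks

# Hypothetical mkfs call for a 64KB stripe unit (do not run
# against a device in use):
#   mkfs.xfs -d su=64k,sw=$SW /dev/sdX
```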
&lt;br /&gt;
Alternatively, you can use &amp;quot;sunit&amp;quot; instead of &amp;quot;su&amp;quot; and &amp;quot;swidth&amp;quot; instead of &amp;quot;sw&amp;quot; but then sunit/swidth values need to be specified in &amp;quot;number of 512B sectors&amp;quot;!&lt;br /&gt;
&lt;br /&gt;
Note that &amp;lt;tt&amp;gt;xfs_info&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;mkfs.xfs&amp;lt;/tt&amp;gt; interpret sunit and swidth as being specified in units of 512B sectors; that&#039;s unfortunately not the unit they&#039;re reported in, however.&lt;br /&gt;
&amp;lt;tt&amp;gt;xfs_info&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;mkfs.xfs&amp;lt;/tt&amp;gt; report them in multiples of your basic block size (bsize) and not in 512B sectors.&lt;br /&gt;
&lt;br /&gt;
Assume for example: swidth 1024 (specified at mkfs.xfs command line; so 1024 of 512B sectors) and block size of 4096 (bsize reported by mkfs.xfs at output). You should see swidth 128 (reported by mkfs.xfs at output). 128 * 4096 == 1024 * 512.&lt;br /&gt;
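The unit conversion in that example can be checked with shell arithmetic:&lt;br /&gt;

```shell
# swidth passed to mkfs.xfs: 1024 units of 512-byte sectors
SECTORS=1024
BSIZE=4096          # filesystem block size reported by mkfs.xfs
# mkfs.xfs reports swidth back in filesystem blocks:
BLOCKS=$((SECTORS * 512 / BSIZE))
echo "$BLOCKS"      # 128
```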
&lt;br /&gt;
When creating an XFS filesystem on top of LVM on top of hardware RAID, please use the same sunit/swidth values as when creating the XFS filesystem directly on top of the hardware RAID.&lt;br /&gt;
&lt;br /&gt;
== Q: Why doesn&#039;t NFS-exporting subdirectories of inode64-mounted filesystem work? ==&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;tt&amp;gt;fsid&amp;lt;/tt&amp;gt; type encodes only 32-bit of the inode number for subdirectory exports.  However, exporting the root of the filesystem works, or using one of the non-default &amp;lt;tt&amp;gt;fsid&amp;lt;/tt&amp;gt; types (&amp;lt;tt&amp;gt;fsid=uuid&amp;lt;/tt&amp;gt; in &amp;lt;tt&amp;gt;/etc/exports&amp;lt;/tt&amp;gt; with recent &amp;lt;tt&amp;gt;nfs-utils&amp;lt;/tt&amp;gt;) should work as well. (Thanks, Christoph!)&lt;br /&gt;
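For illustration, a hypothetical &amp;lt;tt&amp;gt;/etc/exports&amp;lt;/tt&amp;gt; fragment (paths, client spec, and the fsid number are made up):&lt;br /&gt;

```
# Exporting the filesystem root: the default fsid type works.
/srv/xfs        client.example.com(rw,no_subtree_check)
# Exporting a subdirectory: force a non-default fsid type,
# e.g. a small unique number, so the filehandle does not
# depend on the (possibly 64-bit) inode number.
/srv/xfs/data   client.example.com(rw,no_subtree_check,fsid=17)
```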
&lt;br /&gt;
== Q: What is the inode64 mount option for? ==&lt;br /&gt;
&lt;br /&gt;
By default, with 32bit inodes, XFS places inodes only in the first 1TB of a disk. If you have a disk with 100TB, all inodes will be stuck in the first TB. This can lead to strange things like &amp;quot;disk full&amp;quot; when you still have plenty of space free, but there&#039;s no more room in the first TB to create a new inode. Performance also suffers.&lt;br /&gt;
&lt;br /&gt;
To work around this, use the inode64 mount option for filesystems &amp;gt;1TB. Inodes will then be placed close to where their data is, minimizing disk seeks.&lt;br /&gt;
&lt;br /&gt;
Beware that some old programs might have problems reading 64bit inodes, especially over NFS. Your editor has used inode64 for over a year with recent distributions (openSUSE 11.1 and higher) using NFS and Samba without any corruption, so those should be recent enough.&lt;br /&gt;
&lt;br /&gt;
== Q: Can I just try the inode64 option to see if it helps me? ==&lt;br /&gt;
&lt;br /&gt;
Starting with kernel 2.6.35, you can try it and then switch back. Older kernels have a bug leading to strange problems if you mount without inode64 again - for example, you can no longer access files &amp;amp; dirs that were created with an inode number &amp;gt;32 bits.&lt;br /&gt;
&lt;br /&gt;
== Q: Performance: mkfs.xfs -n size=64k option ==&lt;br /&gt;
&lt;br /&gt;
Asking the implications of that mkfs option on the XFS mailing list, Dave Chinner explained it this way:&lt;br /&gt;
&lt;br /&gt;
Inodes are not stored in the directory structure, only the directory entry name and the inode number. Hence the amount of space used by a directory entry is determined by the length of the name.&lt;br /&gt;
&lt;br /&gt;
There is extra overhead to allocate large directory blocks (16 pages instead of one, to begin with, then there&#039;s the vmap overhead, etc), so for small directories smaller block sizes are faster for create and unlink operations.&lt;br /&gt;
&lt;br /&gt;
For empty directories, operations on 4k block size directories consume roughly 50% less CPU than 64k block size directories. The 4k block size directories consume less CPU out to roughly 1.5 million entries, where the two are roughly equal. At directory sizes of 10 million entries, 64k directory block operations consume about 15% of the CPU that 4k directory block operations consume.&lt;br /&gt;
&lt;br /&gt;
In terms of lookups, the 64k block directory will take less IO but consume more CPU for a given lookup. Hence it depends on your IO latency and whether directory readahead can hide that latency as to which will be faster. e.g. For SSDs, CPU usage might be the limiting factor, not the IO. Right now I don&#039;t have any numbers on what the difference might be - I&#039;m getting 1 billion inode population issues worked out first before I start on measuring cold cache lookup times on 1 billion files....&lt;br /&gt;
&lt;br /&gt;
== Q: I want to tune my XFS filesystems for &amp;lt;something&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
The standard answer you will get to this question is this: use the defaults.&lt;br /&gt;
&lt;br /&gt;
There are few workloads where using non-default mkfs.xfs or mount options makes much sense. In general, the default values already used are optimised for best performance in the first place. mkfs.xfs will detect the difference between single disk and MD/DM RAID setups and change the default values it uses to configure the filesystem appropriately.&lt;br /&gt;
&lt;br /&gt;
There are a lot of &amp;quot;XFS tuning guides&amp;quot; that Google will find for you - most are old, out of date and full of misleading or just plain incorrect information. Don&#039;t expect that tuning your filesystem for optimal bonnie++ numbers will mean your workload will go faster. You should only consider changing the defaults if either: a) you know from experience that your workload causes XFS a specific problem that can be worked around via a configuration change, or b) your workload is demonstrating bad performance when using the default configurations. In this case, you need to understand why your application is causing bad performance before you start tweaking XFS configurations.&lt;br /&gt;
&lt;br /&gt;
In most cases, the only thing you need to consider for &amp;lt;tt&amp;gt;mkfs.xfs&amp;lt;/tt&amp;gt; is specifying the stripe unit and width for hardware RAID devices. For mount options, the only things that will change metadata performance considerably are the &amp;lt;tt&amp;gt;logbsize&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;delaylog&amp;lt;/tt&amp;gt; mount options. Increasing &amp;lt;tt&amp;gt;logbsize&amp;lt;/tt&amp;gt; reduces the number of journal IOs for a given workload, and &amp;lt;tt&amp;gt;delaylog&amp;lt;/tt&amp;gt; will reduce them even further. The trade-off for this increase in metadata performance is that more operations may be &amp;quot;missing&amp;quot; after recovery if the system crashes while actively making modifications.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Q: Which factors influence the memory usage of xfs_repair? ==&lt;br /&gt;
&lt;br /&gt;
This is best explained with an example. The example filesystem is 16TB, but basically empty (look at icount).&lt;br /&gt;
&lt;br /&gt;
  # xfs_repair -n -vv -m 1 /dev/vda&lt;br /&gt;
  Phase 1 - find and verify superblock...&lt;br /&gt;
          - max_mem = 1024, icount = 64, imem = 0, dblock = 4294967296, dmem = 2097152&lt;br /&gt;
  Required memory for repair is greater that the maximum specified&lt;br /&gt;
  with the -m option. Please increase it to at least 2096.&lt;br /&gt;
  #&lt;br /&gt;
&lt;br /&gt;
xfs_repair is saying it needs at least 2096MB of RAM to repair the filesystem,&lt;br /&gt;
of which 2,097,152KB is needed for tracking free space.&lt;br /&gt;
(The -m 1 argument was telling xfs_repair to use only 1MB of memory.)&lt;br /&gt;
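Note the units: dmem is printed in KiB while -m takes MiB. The free-space figure converts as follows; the remainder of the 2096MB minimum is other bookkeeping overhead:&lt;br /&gt;

```shell
DMEM_KB=2097152              # dmem as printed by xfs_repair (KiB)
DMEM_MB=$((DMEM_KB / 1024))  # convert to MiB, the unit -m uses
echo "$DMEM_MB"              # 2048 MiB just for free space tracking
```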
&lt;br /&gt;
Now we add some inodes (50 million) to the filesystem (look at icount again), and the result is:&lt;br /&gt;
&lt;br /&gt;
  # xfs_repair -vv -m 1 /dev/vda&lt;br /&gt;
  Phase 1 - find and verify superblock...&lt;br /&gt;
          - max_mem = 1024, icount = 50401792, imem = 196882, dblock = 4294967296, dmem = 2097152&lt;br /&gt;
  Required memory for repair is greater that the maximum specified&lt;br /&gt;
  with the -m option. Please increase it to at least 2289.&lt;br /&gt;
&lt;br /&gt;
That is, it now needs at least another 200MB of RAM to run.&lt;br /&gt;
&lt;br /&gt;
The numbers reported by xfs_repair are the absolute minimum required and approximate at that;&lt;br /&gt;
more RAM than this may be required to complete successfully.&lt;br /&gt;
Also, if you only give xfs_repair the minimum required RAM, it will be slow;&lt;br /&gt;
for best repair performance, the more RAM you can give it the better.&lt;br /&gt;
&lt;br /&gt;
== Q: Why do some files on my filesystem show as &amp;quot;?????????? ? ?      ?          ?                ? filename&amp;quot;? ==&lt;br /&gt;
&lt;br /&gt;
If ls -l shows you a listing like&lt;br /&gt;
&lt;br /&gt;
  # ?????????? ? ?      ?          ?                ? file1&lt;br /&gt;
    ?????????? ? ?      ?          ?                ? file2&lt;br /&gt;
    ?????????? ? ?      ?          ?                ? file3&lt;br /&gt;
    ?????????? ? ?      ?          ?                ? file4&lt;br /&gt;
&lt;br /&gt;
and errors like:&lt;br /&gt;
  # ls /pathtodir/&lt;br /&gt;
    ls: cannot access /pathtodir/file1: Invalid argument&lt;br /&gt;
    ls: cannot access /pathtodir/file2: Invalid argument&lt;br /&gt;
    ls: cannot access /pathtodir/file3: Invalid argument&lt;br /&gt;
    ls: cannot access /pathtodir/file4: Invalid argument&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
or even:&lt;br /&gt;
  # failed to stat /pathtodir/file1&lt;br /&gt;
&lt;br /&gt;
It is very probable that your filesystem must be mounted with inode64:&lt;br /&gt;
  # mount -oremount,inode64 /dev/diskpart /mnt/xfs&lt;br /&gt;
&lt;br /&gt;
should make it work ok again.&lt;br /&gt;
If it works, add the option to fstab.&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2377</id>
		<title>XFS FAQ</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_FAQ&amp;diff=2377"/>
		<updated>2011-10-29T19:27:17Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: possible vandalism?&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Info from: [http://oss.sgi.com/projects/xfs/faq.html main XFS faq at SGI]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Many thanks to earlier maintainers of this document - Thomas Graichen and Seth Mos.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about XFS? ==&lt;br /&gt;
&lt;br /&gt;
The SGI XFS project page http://oss.sgi.com/projects/xfs/ is the definitive reference. It contains pointers to whitepapers, books, articles, etc.&lt;br /&gt;
&lt;br /&gt;
You could also join the [[XFS_email_list_and_archives|XFS mailing list]] or the &#039;&#039;&#039;&amp;lt;nowiki&amp;gt;#xfs&amp;lt;/nowiki&amp;gt;&#039;&#039;&#039; IRC channel on &#039;&#039;irc.freenode.net&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find documentation about ACLs? ==&lt;br /&gt;
&lt;br /&gt;
Andreas Gruenbacher maintains the Extended Attribute and POSIX ACL documentation for Linux at http://acl.bestbits.at/&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;acl(5)&#039;&#039;&#039; manual page is also quite extensive.&lt;br /&gt;
&lt;br /&gt;
== Q: Where can I find information about the internals of XFS? ==&lt;br /&gt;
&lt;br /&gt;
An [http://oss.sgi.com/projects/xfs/training/ SGI XFS Training course] aimed at developers, triage and support staff, and serious users has been in development. Parts of the course are clearly still incomplete, but there is enough content to be useful to a broad range of users.&lt;br /&gt;
&lt;br /&gt;
Barry Naujok has documented the [http://oss.sgi.com/projects/xfs/papers/xfs_filesystem_structure.pdf XFS ondisk format] which is a very useful reference.&lt;br /&gt;
&lt;br /&gt;
== Q: What partition type should I use for XFS on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Linux native filesystem (83).&lt;br /&gt;
&lt;br /&gt;
== Q: What mount options does XFS have? ==&lt;br /&gt;
&lt;br /&gt;
There are a number of mount options influencing XFS filesystems - refer to the &#039;&#039;&#039;mount(8)&#039;&#039;&#039; manual page or the documentation in the kernel source tree itself ([http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=blob;f=Documentation/filesystems/xfs.txt;hb=HEAD Documentation/filesystems/xfs.txt])&lt;br /&gt;
&lt;br /&gt;
== Q: Is there any relation between the XFS utilities and the kernel version? ==&lt;br /&gt;
&lt;br /&gt;
No, there is no relation. Newer utilities tend to mainly have fixes and checks the previous versions might not have. New features are also added in a backward compatible way - if they are enabled via mkfs, an incapable (old) kernel will recognize that it does not understand the new feature, and refuse to mount the filesystem.&lt;br /&gt;
&lt;br /&gt;
== Q: Does it run on platforms other than i386? ==&lt;br /&gt;
&lt;br /&gt;
XFS runs on all of the platforms that Linux supports. It is more tested on the more common platforms, especially the i386 family. It&#039;s also well tested on the IA64 platform since that&#039;s the platform SGI Linux products use.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Do quotas work on XFS? ==&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
To use quotas with XFS, you need to enable XFS quota support when you configure your kernel. You also need to specify quota support when mounting. You can get the Linux quota utilities at their sourceforge website [http://sourceforge.net/projects/linuxquota/  http://sourceforge.net/projects/linuxquota/] or use &#039;&#039;&#039;xfs_quota(8)&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: What&#039;s project quota? ==&lt;br /&gt;
&lt;br /&gt;
Project quota is a quota mechanism in XFS that can be used to implement a form of directory tree quota, where a specified directory and all of the files and subdirectories below it (i.e. a tree) can be restricted to using a subset of the available space in the filesystem.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Can group quota and project quota be used at the same time? ==&lt;br /&gt;
&lt;br /&gt;
No, project quota cannot be used with group quota at the same time. On the other hand user quota and project quota can be used simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Q: Quota: Does unmounting a prjquota (project quota) enabled fs and mounting it again with grpquota (group quota) remove previously set prjquota limits (and vice versa)? ==&lt;br /&gt;
&lt;br /&gt;
To be answered.&lt;br /&gt;
&lt;br /&gt;
== Q: Are there any dump/restore tools for XFS? ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039; and &#039;&#039;&#039;xfsrestore(8)&#039;&#039;&#039; are fully supported. The tape format is the same as on IRIX, so tapes are interchangeable between operating systems.&lt;br /&gt;
&lt;br /&gt;
== Q: Does LILO work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
This depends on where you install LILO.&lt;br /&gt;
&lt;br /&gt;
Yes, for MBR (Master Boot Record) installations.&lt;br /&gt;
&lt;br /&gt;
No, for root partition installations because the XFS superblock is written at block zero, where LILO would be installed. This is to maintain compatibility with the IRIX on-disk format, and will not be changed.&lt;br /&gt;
&lt;br /&gt;
== Q: Does GRUB work with XFS? ==&lt;br /&gt;
&lt;br /&gt;
There is native XFS filesystem support for GRUB starting with version 0.91 and onward. Unfortunately, GRUB used to make incorrect assumptions about being able to read a block device image while a filesystem is mounted and actively being written to, which could cause intermittent problems when using XFS. This has reportedly since been fixed, and the 0.97 version (at least) of GRUB is apparently stable.&lt;br /&gt;
&lt;br /&gt;
== Q: Can XFS be used for a root filesystem? ==&lt;br /&gt;
&lt;br /&gt;
Yes, with one caveat: Linux does not support an external XFS journal for the root filesystem via the &amp;quot;rootflags=&amp;quot; kernel parameter. To use an external journal for the root filesystem in Linux, an init ramdisk must mount the root filesystem with explicit &amp;quot;logdev=&amp;quot; specified. [http://mindplusplus.wordpress.com/2008/07/27/scratching-an-i.html More information here.]&lt;br /&gt;
&lt;br /&gt;
== Q: Will I be able to use my IRIX XFS filesystems on Linux? ==&lt;br /&gt;
&lt;br /&gt;
Yes. The on-disk format of XFS is the same on IRIX and Linux. Obviously, you should back up your data before trying to move it between systems. Filesystems must be &amp;quot;clean&amp;quot; when moved (i.e. unmounted). If you plan to use IRIX filesystems on Linux, keep the following points in mind: the kernel needs to have SGI partition support enabled; there is no XLV support in Linux, so you are unable to read IRIX filesystems which use the XLV volume manager; also, not all blocksizes available on IRIX are available on Linux (only blocksizes less than or equal to the page size of the architecture: 4k for i386, ppc, ...; 8k for alpha, sparc, ...). Make sure that the directory format is version 2 on the IRIX filesystems (this is the default since IRIX 6.5.5); Linux can only read v2 directories.&lt;br /&gt;
&lt;br /&gt;
== Q: Is there a way to make a XFS filesystem larger or smaller? ==&lt;br /&gt;
&lt;br /&gt;
You can &#039;&#039;NOT&#039;&#039; make an XFS filesystem smaller online. The only way to shrink it is to do a complete dump, mkfs and restore.&lt;br /&gt;
&lt;br /&gt;
An XFS filesystem may be enlarged by using &#039;&#039;&#039;xfs_growfs(8)&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
If using partitions, you need free space after the partition in question. Remove the partition and recreate it larger with the &#039;&#039;exact same&#039;&#039; starting point, then run &#039;&#039;&#039;xfs_growfs&#039;&#039;&#039; to make the filesystem larger. Note: editing partition tables is a dangerous pastime, so back up your filesystem before doing so.&lt;br /&gt;
&lt;br /&gt;
Using XFS filesystems on top of a volume manager makes this a lot easier.&lt;br /&gt;
&lt;br /&gt;
== Q: What information should I include when reporting a problem? ==&lt;br /&gt;
&lt;br /&gt;
Things to include are the version of XFS you are using (if it is a CVS version, of what date) and the version of the kernel. If you have problems with userland packages, please report the version of the package you are using.&lt;br /&gt;
&lt;br /&gt;
If the problem relates to a particular filesystem, the output from the &#039;&#039;&#039;xfs_info(8)&#039;&#039;&#039; command and any &#039;&#039;&#039;mount(8)&#039;&#039;&#039; options in use will also be useful to the developers.&lt;br /&gt;
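As a sketch, the basics can be gathered with a few standard commands (the mount point below is a placeholder):&lt;br /&gt;

```shell
# Gather basic system information for an XFS bug report.
uname -r                 # kernel version
# The following assume xfsprogs is installed and /mnt/data is the
# filesystem in question (placeholder path):
#   xfs_repair -V             # xfsprogs version
#   xfs_info /mnt/data        # filesystem geometry
#   grep ' xfs ' /proc/mounts # mount options in use
```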
&lt;br /&gt;
If you experience an oops, please run it through &#039;&#039;&#039;ksymoops&#039;&#039;&#039; so that it can be interpreted.&lt;br /&gt;
&lt;br /&gt;
If you have a filesystem that cannot be repaired, make sure you have xfsprogs 2.9.0 or later and run &#039;&#039;&#039;xfs_metadump(8)&#039;&#039;&#039; to capture the metadata (which obfuscates filenames and attributes to protect your privacy) and make the dump available for someone to analyse.&lt;br /&gt;
&lt;br /&gt;
== Q: Mounting an XFS filesystem does not work - what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
If mount prints an error message something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
     mount: /dev/hda5 has wrong major or minor number&lt;br /&gt;
&lt;br /&gt;
you either do not have XFS compiled into the kernel (or you forgot to load the modules) or you did not use the &amp;quot;-t xfs&amp;quot; option on mount or the &amp;quot;xfs&amp;quot; option in &amp;lt;tt&amp;gt;/etc/fstab&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
If you get something like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 mount: wrong fs type, bad option, bad superblock on /dev/sda1,&lt;br /&gt;
        or too many mounted file systems&lt;br /&gt;
&lt;br /&gt;
Refer to your system log file (&amp;lt;tt&amp;gt;/var/log/messages&amp;lt;/tt&amp;gt;) for a detailed diagnostic message from the kernel.&lt;br /&gt;
&lt;br /&gt;
== Q: Does the filesystem have an undelete capability? ==&lt;br /&gt;
&lt;br /&gt;
There is no [[undelete]] in XFS (so far).&lt;br /&gt;
&lt;br /&gt;
However, at least some XFS driver implementations do not wipe file information nodes completely, so there is a chance to recover files with specialized commercial closed-source software like [http://www.ufsexplorer.com/rdr_xfs.php Raise Data Recovery for XFS].&lt;br /&gt;
&lt;br /&gt;
Such implementations also do not re-use directory entries immediately, so there is a chance to get back recently deleted files even with their real names.&lt;br /&gt;
&lt;br /&gt;
[[xfs_irecover]] or [[xfsr]] may help too (http://rzr.online.fr/q/recover provides a few links).&lt;br /&gt;
&lt;br /&gt;
This applies to most recent Linux distributions (versions?), as well as to most popular NAS boxes that use embedded Linux and the XFS filesystem.&lt;br /&gt;
&lt;br /&gt;
In any case, it is best to always keep backups.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I backup a XFS filesystem and ACLs? ==&lt;br /&gt;
&lt;br /&gt;
You can back up an XFS filesystem with utilities like &#039;&#039;&#039;xfsdump(8)&#039;&#039;&#039; and standard &#039;&#039;&#039;tar(1)&#039;&#039;&#039;. If you want to back up ACLs and EAs as well, you will need to use &#039;&#039;&#039;xfsdump&#039;&#039;&#039;, [http://www.bacula.org/en/dev-manual/Current_State_Bacula.html Bacula] (&amp;gt; version 3.1.4) or [http://rsync.samba.org/ rsync] (&amp;gt;= version 3.0.0). &#039;&#039;&#039;xfsdump&#039;&#039;&#039; can also be integrated with [http://www.amanda.org/ amanda(8)].&lt;br /&gt;
&lt;br /&gt;
== Q: I see applications returning error 990 or &amp;quot;Structure needs cleaning&amp;quot;, what is wrong? ==&lt;br /&gt;
&lt;br /&gt;
The error 990 stands for [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=blob;f=fs/xfs/linux-2.6/xfs_linux.h#l145 EFSCORRUPTED] which usually means XFS has detected a filesystem metadata problem and has shut the filesystem down to prevent further damage. Also, since about June 2006, we [http://oss.sgi.com/cgi-bin/gitweb.cgi?p=xfs/xfs.git;a=commit;h=da2f4d679c8070ba5b6a920281e495917b293aa0 converted from EFSCORRUPTED/990 over to using EUCLEAN], &amp;quot;Structure needs cleaning.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The cause can be pretty much anything, unfortunately - filesystem, virtual memory manager, volume manager, device driver, or hardware.&lt;br /&gt;
&lt;br /&gt;
There should be a detailed console message when this initially happens. The messages have important information giving hints to developers as to the earliest point that a problem was detected. It is there to protect your data.&lt;br /&gt;
&lt;br /&gt;
You can use &#039;&#039;&#039;xfs_check&#039;&#039;&#039; and &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; to remedy the problem (with the filesystem unmounted).&lt;br /&gt;
&lt;br /&gt;
== Q: Why do I see binary NULLS in some files after recovery when I unplugged the power? ==&lt;br /&gt;
&lt;br /&gt;
Update: This issue has been addressed with a CVS fix on the 29th March 2007 and merged into mainline on 8th May 2007 for 2.6.22-rc1.&lt;br /&gt;
&lt;br /&gt;
XFS journals metadata updates, not data updates. After a crash you are supposed to get a consistent filesystem which looks like the state sometime shortly before the crash, NOT what the in memory image looked like the instant before the crash.&lt;br /&gt;
&lt;br /&gt;
Since XFS does not write data out immediately unless you tell it to with fsync, an O_SYNC or O_DIRECT open (the same is true of other filesystems), you are looking at an inode which was flushed out, but whose data was not. Typically you&#039;ll find that the inode is not taking any space since all it has is a size but no extents allocated (try examining the file with the &#039;&#039;&#039;xfs_bmap(8)&#039;&#039;&#039; command).&lt;br /&gt;
&lt;br /&gt;
== Q: What is the problem with the write cache on journaled filesystems? ==&lt;br /&gt;
&lt;br /&gt;
Many drives use a write back cache in order to speed up the performance of writes.  However, there are conditions such as power failure when the write cache memory is never flushed to the actual disk.  Further, the drive can de-stage data from the write cache to the platters in any order that it chooses.  This causes problems for XFS and journaled filesystems in general because they rely on knowing when a write has completed to the disk. They need to know that the log information has made it to disk before allowing metadata to go to disk.  When the metadata makes it to disk then the transaction can effectively be deleted from the log resulting in movement of the tail of the log and thus freeing up some log space. So if the writes never make it to the physical disk, then the ordering is violated and the log and metadata can be lost, resulting in filesystem corruption.&lt;br /&gt;
&lt;br /&gt;
With hard disk cache sizes of currently (Jan 2009) up to 32MB, that can be a lot of valuable information. In a RAID with 8 such disks this adds up to 256MB, and the chance of having filesystem metadata in the cache is so high that you have a very high chance of big data losses on a power outage.&lt;br /&gt;
&lt;br /&gt;
With a single hard disk and barriers turned on (on=default), the drive write cache is flushed before and after a barrier is issued.  A powerfail &amp;quot;only&amp;quot; loses data in the cache but no essential ordering is violated, and corruption will not occur.&lt;br /&gt;
&lt;br /&gt;
With a RAID controller with battery-backed controller cache in write-back mode, you should turn off barriers - they are unnecessary in this case, and if the controller honors the cache flushes, they will be harmful to performance. But then you *must* disable the individual hard disk write caches in order to keep the filesystem intact after a power failure. The method for doing this is different for each RAID controller. See the section about RAID controllers below.&lt;br /&gt;
&lt;br /&gt;
== Q: How can I tell if I have the disk write cache enabled? ==&lt;br /&gt;
&lt;br /&gt;
For SCSI/SATA:&lt;br /&gt;
&lt;br /&gt;
* Look in dmesg(8) output for a driver line, such as:&amp;lt;br /&amp;gt; &amp;quot;SCSI device sda: drive cache: write back&amp;quot;&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# sginfo -c /dev/sda | grep -i &#039;write cache&#039; &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For PATA/SATA (although for SATA this only works on a recent kernel with ATA command passthrough):&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -I /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; and look under &amp;quot;Enabled Supported&amp;quot; for &amp;quot;Write cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
== Q: How can I address the problem with the disk write cache? ==&lt;br /&gt;
&lt;br /&gt;
=== Disabling the disk write back cache. ===&lt;br /&gt;
&lt;br /&gt;
For SATA/PATA (IDE), although for SATA this only works on a recent kernel with ATA command passthrough:&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# hdparm -W0 /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # hdparm -W0 /dev/hda&lt;br /&gt;
* &amp;lt;nowiki&amp;gt;# blktool /dev/sda wcache off&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; # blktool /dev/hda wcache off&lt;br /&gt;
&lt;br /&gt;
For SCSI:&lt;br /&gt;
&lt;br /&gt;
* Using sginfo(8) which is a little tedious&amp;lt;br /&amp;gt; It takes 3 steps. For example:&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -c /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives a list of attribute names and values&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cX /dev/sda&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; which gives an array of cache values which you must match up with from step 1, e.g.&amp;lt;br /&amp;gt; 0 0 0 1 0 1 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&lt;br /&gt;
*# &amp;lt;nowiki&amp;gt;#sginfo -cXR /dev/sda 0 0 0 1 0 0 0 0 0 0 65535 0 65535 65535 1 0 0 0 3 0 0&amp;lt;/nowiki&amp;gt;&amp;lt;br /&amp;gt; allows you to reset the value of the cache attributes.&lt;br /&gt;
&lt;br /&gt;
For RAID controllers:&lt;br /&gt;
&lt;br /&gt;
* See the section about RAID controllers below&lt;br /&gt;
&lt;br /&gt;
For a SCSI disk this setting is persistent. For a SATA/PATA disk, however, it must be re-applied after every reset, as the drive reverts to its default of write cache enabled; a reset can happen after a reboot or on error recovery of the drive. This makes it rather difficult to guarantee that the write cache stays disabled.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Using an external log. ===&lt;br /&gt;
&lt;br /&gt;
Some people have considered the idea of using an external log on a separate drive with the write cache disabled and the rest of the file system on another disk with the write cache enabled. However, that will &#039;&#039;&#039;not&#039;&#039;&#039; solve the problem. For example, the tail of the log is moved when we are notified that a metadata write is completed to disk and we won&#039;t be able to guarantee that if the metadata is on a drive with the write cache enabled.&lt;br /&gt;
&lt;br /&gt;
In fact using an external log will disable XFS&#039; write barrier support.&lt;br /&gt;
&lt;br /&gt;
=== Write barrier support. ===&lt;br /&gt;
&lt;br /&gt;
Write barrier support is enabled by default in XFS since kernel version 2.6.17. It can be disabled by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. Barrier support flushes the write-back cache at the appropriate times (such as on XFS log writes). This is generally the recommended solution; however, you should check the system logs to ensure it was successful. Barriers will be disabled, and this reported in the log, if any of these 3 scenarios occurs:&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported with external log device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, not supported by the underlying device&amp;quot;&lt;br /&gt;
* &amp;quot;Disabling barriers, trial barrier write failed&amp;quot;&lt;br /&gt;
&lt;br /&gt;
If the filesystem is mounted with an external log device then we currently don&#039;t support flushing to the data and log devices (this may change in the future). If the driver tells the block layer that the device does not support write cache flushing with the write cache enabled then it will report that the device doesn&#039;t support it. And finally we will actually test out a barrier write on the superblock and test its error state afterwards, reporting if it fails.&lt;br /&gt;
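As a sketch, one can grep the kernel log for these messages after mounting; the sample line below is only illustrative of the message format (on a live system, pipe &amp;lt;tt&amp;gt;dmesg&amp;lt;/tt&amp;gt; instead):&lt;br /&gt;

```shell
# Check whether XFS disabled barriers at mount time.
# On a live system: dmesg | grep 'Disabling barriers'
sample='Filesystem "sda1": Disabling barriers, trial barrier write failed'
printf '%s\n' "$sample" | grep -c 'Disabling barriers'   # prints 1
```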
&lt;br /&gt;
== Q. Should barriers be enabled with storage which has a persistent write cache? ==&lt;br /&gt;
&lt;br /&gt;
Many hardware RAID controllers have a persistent write cache which is preserved across power failures, interface resets, system crashes, etc. Using write barriers in this case is not recommended and will in fact lower performance. Therefore, turn off barrier support by mounting the filesystem with &amp;quot;nobarrier&amp;quot;. But take care that the individual hard disk write caches are turned off.&lt;br /&gt;
&lt;br /&gt;
== Q. Which settings does my RAID controller need? ==&lt;br /&gt;
&lt;br /&gt;
It&#039;s hard to tell because there are so many controllers. Please consult your RAID controller documentation to determine how to change these settings, but we try to give an overview here:&lt;br /&gt;
&lt;br /&gt;
Real RAID controllers (not those found onboard mainboards) normally have a battery-backed cache (or an [http://en.wikipedia.org/wiki/Electric_double-layer_capacitor ultracapacitor] + flash memory &amp;quot;[http://www.tweaktown.com/articles/2800/adaptec_zero_maintenance_cache_protection_explained/ zero maintenance cache]&amp;quot;) which is used for buffering writes to improve speed. Even if the controller cache is battery backed, the individual hard disk write caches need to be turned off, as they are not protected from a powerfail and will simply lose all contents in that case.&lt;br /&gt;
&lt;br /&gt;
* onboard RAID controllers: there are so many different types that it is hard to generalize. Usually these controllers have no cache of their own but leave the hard disk write caches on. That can lead to a bad situation after a powerfail with RAID-1: if only parts of the disk caches have been written, the controller does not even see that the disks are out of sync, as the disks can reorder cached blocks and might have saved the superblock info but lost different data contents. So turn off the disk write caches before using the RAID function.&lt;br /&gt;
&lt;br /&gt;
* 3ware: /cX/uX set cache=off, see http://www.3ware.com/support/UserDocs/CLIGuide-9.5.1.1.pdf , page 86&lt;br /&gt;
&lt;br /&gt;
* Adaptec: allows setting the individual drive caches:&lt;br /&gt;
arcconf setcache &amp;lt;disk&amp;gt; wb|wt&lt;br /&gt;
wb=write back, which means write cache on, wt=write through, which means write cache off. So &amp;quot;wt&amp;quot; should be chosen.&lt;br /&gt;
&lt;br /&gt;
* Areca: In archttp under &amp;quot;System Controls&amp;quot; -&amp;gt; &amp;quot;System Configuration&amp;quot; there&#039;s the option &amp;quot;Disk Write Cache Mode&amp;quot; (defaults &amp;quot;Auto&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Off&amp;quot;: disk write cache is turned off&lt;br /&gt;
&lt;br /&gt;
&amp;quot;On&amp;quot;: disk write cache is enabled, this is not safe for your data but fast&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Auto&amp;quot;: if a BBM (battery backup module, which you really should use if you care about your data) is attached, the controller automatically turns the disk write caches off to protect your data. If no BBM is attached, the controller switches to &amp;quot;On&amp;quot;, because then neither the controller cache nor the disk caches are safe, so it assumes you do not care about your data and just want high speed (which you then get).&lt;br /&gt;
&lt;br /&gt;
That is a very sensible default, so you can leave it at &amp;quot;Auto&amp;quot; or enforce &amp;quot;Off&amp;quot; to be sure.&lt;br /&gt;
&lt;br /&gt;
* LSI MegaRAID: allows setting individual disks cache:&lt;br /&gt;
 MegaCli -AdpCacheFlush -aN|-a0,1,2|-aALL                          # flushes the controller cache&lt;br /&gt;
 MegaCli -LDGetProp -Cache    -LN|-L0,1,2|-LAll -aN|-a0,1,2|-aALL  # shows the controller cache settings&lt;br /&gt;
 MegaCli -LDGetProp -DskCache -LN|-L0,1,2|-LAll -aN|-a0,1,2|-aALL  # shows the disk cache settings (for all phys. disks in logical disk)&lt;br /&gt;
 MegaCli -LDSetProp -EnDskCache|DisDskCache  -LN|-L0,1,2|-LAll  -aN|-a0,1,2|-aALL # set disk cache setting&lt;br /&gt;
&lt;br /&gt;
* Xyratex: from the docs: &amp;quot;Write cache includes the disk drive cache and controller cache.&amp;quot; That means you can only set the drive caches and the unit cache together. To protect your data, turn it off, but write performance will suffer badly because the controller write cache is disabled as well.&lt;br /&gt;
&lt;br /&gt;
== Q: Which settings are best with virtualization like VMware, XEN, qemu? ==&lt;br /&gt;
&lt;br /&gt;
The biggest problem is that these products seem to also virtualize disk writes in a way that even barriers no longer work, which means even an fsync is not reliable. Tests confirm that by unplugging the power from such a system - even with a RAID controller with battery-backed cache and the hard disk caches turned off, which is safe on a normal host - you can destroy a database within the virtual machine (guest, domU, or whatever you call it).&lt;br /&gt;
&lt;br /&gt;
In qemu you can specify cache=off in the option that defines the virtual disk. For other products this information is missing.&lt;br /&gt;
&lt;br /&gt;
== Q: What is the issue with directory corruption in Linux 2.6.17? ==&lt;br /&gt;
&lt;br /&gt;
In the Linux kernel 2.6.17 release a subtle bug was accidentally introduced into the XFS directory code by some &amp;quot;sparse&amp;quot; endian annotations. This bug was sufficiently uncommon (it only affects a certain type of format change, in Node or B-Tree format directories, and only in certain situations) that it was not detected during our regular regression testing, but it has been observed in the wild by a number of people now.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: the fix is included in 2.6.17.7 and later kernels.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
To add insult to injury, &#039;&#039;&#039;xfs_repair(8)&#039;&#039;&#039; is currently not correcting these directories on detection of this corrupt state either. This &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; issue is actively being worked on, and a fixed version will be available shortly.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Update: a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; is now available; version 2.8.10 or later of the xfsprogs package contains the fixed version.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
No other kernel versions are affected. However, using a corrupt filesystem on other kernels can still result in the filesystem being shutdown if the problem has not been rectified (on disk), making it seem like other kernels are affected.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;xfs_check&#039;&#039;&#039; tool, or &#039;&#039;&#039;xfs_repair -n&#039;&#039;&#039;, should be able to detect any directory corruption.&lt;br /&gt;
&lt;br /&gt;
Until a fixed &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; binary is available, one can make use of the &#039;&#039;&#039;xfs_db(8)&#039;&#039;&#039; command to mark the problem directory for removal (see the example below). A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; invocation will remove the directory and move all contents into &amp;quot;lost+found&amp;quot;, named by inode number (see second example on how to map inode number to directory entry name, which needs to be done _before_ removing the directory itself). The inode number of the corrupt directory is included in the shutdown report issued by the kernel on detection of directory corruption. Using that inode number, this is how one would ensure it is removed:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 core.mode = 040755&lt;br /&gt;
 core.version = 2&lt;br /&gt;
 core.format = 3 (btree)&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; write core.mode 0&lt;br /&gt;
 xfs_db&amp;amp;gt; quit&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A subsequent &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; will clear the directory, and add new entries (named by inode number) in lost+found.&lt;br /&gt;
&lt;br /&gt;
The easiest way to map inode numbers to full paths is via &#039;&#039;&#039;xfs_ncheck(8)&#039;&#039;&#039;&amp;lt;nowiki&amp;gt;: &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_ncheck -i 14101 -i 14102 /dev/sdXXX&lt;br /&gt;
       14101 full/path/mumble_fratz_foo_bar_1495&lt;br /&gt;
       14102 full/path/mumble_fratz_foo_bar_1494&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Should this not work, we can manually map inode numbers in a B-Tree format directory by taking the following steps:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&lt;br /&gt;
 # xfs_db -x /dev/sdXXX&lt;br /&gt;
 xfs_db&amp;amp;gt; inode NNN&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 core.magic = 0x494e&lt;br /&gt;
 ...&lt;br /&gt;
 next_unlinked = null&lt;br /&gt;
 u.bmbt.level = 1&lt;br /&gt;
 u.bmbt.numrecs = 1&lt;br /&gt;
 u.bmbt.keys[1] = [startoff] 1:[0]&lt;br /&gt;
 u.bmbt.ptrs[1] = 1:3628&lt;br /&gt;
 xfs_db&amp;amp;gt; fsblock 3628&lt;br /&gt;
 xfs_db&amp;amp;gt; type bmapbtd&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 magic = 0x424d4150&lt;br /&gt;
 level = 0&lt;br /&gt;
 numrecs = 19&lt;br /&gt;
 leftsib = null&lt;br /&gt;
 rightsib = null&lt;br /&gt;
 recs[1-19] = [startoff,startblock,blockcount,extentflag]&lt;br /&gt;
        1:[0,3088,4,0] 2:[4,3128,8,0] 3:[12,3308,4,0] 4:[16,3360,4,0]&lt;br /&gt;
        5:[20,3496,8,0] 6:[28,3552,8,0] 7:[36,3624,4,0] 8:[40,3633,4,0]&lt;br /&gt;
        9:[44,3688,8,0] 10:[52,3744,4,0] 11:[56,3784,8,0]&lt;br /&gt;
        12:[64,3840,8,0] 13:[72,3896,4,0] 14:[33554432,3092,4,0]&lt;br /&gt;
        15:[33554436,3488,8,0] 16:[33554444,3629,4,0]&lt;br /&gt;
        17:[33554448,3748,4,0] 18:[33554452,3900,4,0]&lt;br /&gt;
        19:[67108864,3364,4,0]&lt;br /&gt;
 &amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At this point we are looking at the extents that hold all of the directory information. There are three types of extent here, we have the data blocks (extents 1 through 13 above), then the leaf blocks (extents 14 through 18), then the freelist blocks (extent 19 above). The jumps in the first field (start offset) indicate our progression through each of the three types. For recovering file names, we are only interested in the data blocks, so we can now feed those offset numbers into the &#039;&#039;&#039;xfs_db&#039;&#039;&#039; dblock command. So, for the fifth extent - 5:[20,3496,8,0] - listed above:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ...&lt;br /&gt;
 xfs_db&amp;amp;gt; dblock 20&lt;br /&gt;
 xfs_db&amp;amp;gt; print&lt;br /&gt;
 dhdr.magic = 0x58443244&lt;br /&gt;
 dhdr.bestfree[0].offset = 0&lt;br /&gt;
 dhdr.bestfree[0].length = 0&lt;br /&gt;
 dhdr.bestfree[1].offset = 0&lt;br /&gt;
 dhdr.bestfree[1].length = 0&lt;br /&gt;
 dhdr.bestfree[2].offset = 0&lt;br /&gt;
 dhdr.bestfree[2].length = 0&lt;br /&gt;
 du[0].inumber = 13937&lt;br /&gt;
 du[0].namelen = 25&lt;br /&gt;
 du[0].name = &amp;quot;mumble_fratz_foo_bar_1595&amp;quot;&lt;br /&gt;
 du[0].tag = 0x10&lt;br /&gt;
 du[1].inumber = 13938&lt;br /&gt;
 du[1].namelen = 25&lt;br /&gt;
 du[1].name = &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;&lt;br /&gt;
 du[1].tag = 0x38&lt;br /&gt;
 ...&lt;br /&gt;
&lt;br /&gt;
So, here we can see that inode number 13938 matches up with name &amp;quot;mumble_fratz_foo_bar_1594&amp;quot;. Iterate through all the extents, and extract all the name-to-inode-number mappings you can, as these will be useful when looking at &amp;quot;lost+found&amp;quot; (once &#039;&#039;&#039;xfs_repair&#039;&#039;&#039; has removed the corrupt directory).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Q: Why does my &amp;gt; 2TB XFS partition disappear when I reboot? ==&lt;br /&gt;
&lt;br /&gt;
Strictly speaking this is not an XFS problem.&lt;br /&gt;
&lt;br /&gt;
To support &amp;gt; 2TB partitions you need two things: a kernel that supports large block devices (&amp;lt;tt&amp;gt;CONFIG_LBD=y&amp;lt;/tt&amp;gt;) and a partition table format that can hold large partitions. The default DOS partition tables cannot. The best partition format for &amp;gt; 2TB partitions is the EFI GPT format (&amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
Without CONFIG_LBD=y you can&#039;t even create the filesystem, but without &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt; it works fine until you reboot at which point the partition will disappear.  Note that you need to enable the &amp;lt;tt&amp;gt;CONFIG_PARTITION_ADVANCED&amp;lt;/tt&amp;gt; option before you can set &amp;lt;tt&amp;gt;CONFIG_EFI_PARTITION=y&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
== Q: Why do I receive &amp;lt;tt&amp;gt;No space left on device&amp;lt;/tt&amp;gt; after &amp;lt;tt&amp;gt;xfs_growfs&amp;lt;/tt&amp;gt;? ==&lt;br /&gt;
&lt;br /&gt;
After [http://oss.sgi.com/pipermail/xfs/2009-January/039828.html growing an XFS filesystem], df(1) would show enough free space, but attempts to write to the filesystem result in -ENOSPC. To fix this, [http://oss.sgi.com/pipermail/xfs/2009-January/039835.html Dave Chinner advised]:&lt;br /&gt;
&lt;br /&gt;
  The only way to fix this is to move data around to free up space&lt;br /&gt;
  below 1TB. Find your oldest data (i.e. that was around before even&lt;br /&gt;
  the first grow) and move it off the filesystem (move, not copy).&lt;br /&gt;
  Then if you copy it back on, the data blocks will end up above 1TB&lt;br /&gt;
  and that should leave you with plenty of space for inodes below 1TB.&lt;br /&gt;
  &lt;br /&gt;
  A complete dump and restore will also fix the problem ;)&lt;br /&gt;
&lt;br /&gt;
Also, you can add &#039;inode64&#039; to your mount options to allow inodes to live above 1TB.&lt;br /&gt;
&lt;br /&gt;
Example: [https://www.centos.org/modules/newbb/viewtopic.php?topic_id=30703&amp;amp;forum=38 No space left on device on xfs filesystem with 7.7TB free]&lt;br /&gt;
&lt;br /&gt;
== Q: Does using noatime and/or nodiratime at mount time give any performance benefit on XFS (or does not using them cause a performance decrease)? ==&lt;br /&gt;
&lt;br /&gt;
The default atime behaviour is relatime, which has almost no overhead compared to noatime but still maintains sane atime values. All Linux filesystems use this as the default now (since around 2.6.30), but XFS has used relatime-like behaviour since 2006, so no-one should really need to ever use noatime on XFS for performance reasons. &lt;br /&gt;
&lt;br /&gt;
Also, noatime implies nodiratime, so there is never a need to specify nodiratime when noatime is also specified.&lt;br /&gt;
&lt;br /&gt;
== Q: How do I get around a bad inode that repair is unable to clean up? ==&lt;br /&gt;
&lt;br /&gt;
The trick is to go in with &#039;&#039;&#039;xfs_db&#039;&#039;&#039; and mark the inode as deleted, which will cause repair to clean it up and finish the removal process.&lt;br /&gt;
&lt;br /&gt;
  xfs_db -x -c &#039;inode XXX&#039; -c &#039;write core.nextents 0&#039; -c &#039;write core.size 0&#039; /dev/hdXX&lt;br /&gt;
&lt;br /&gt;
== Q: How to calculate the correct sunit,swidth values for optimal performance ==&lt;br /&gt;
&lt;br /&gt;
XFS allows you to optimize for a given RAID stripe unit (stripe size) and stripe width (number of data disks) via mkfs and mount options.&lt;br /&gt;
&lt;br /&gt;
These options can sometimes be autodetected (for example with md RAID, a recent enough kernel (&amp;gt;= 2.6.32) and xfsprogs (&amp;gt;= 3.1.1) built with libblkid support), but manual calculation is needed for most hardware RAIDs.&lt;br /&gt;
&lt;br /&gt;
The calculation of these values is quite simple:&lt;br /&gt;
&lt;br /&gt;
  su = &amp;lt;RAID controllers stripe size in BYTES (or KiBytes when used with k)&amp;gt;&lt;br /&gt;
  sw = &amp;lt;# of data disks (don&#039;t count parity disks)&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So if your RAID controller has a stripe size of 64KB, and you have a RAID-6 with 8 disks, use&lt;br /&gt;
&lt;br /&gt;
  su = 64k&lt;br /&gt;
  sw = 6 (RAID-6 of 8 disks has 6 data disks)&lt;br /&gt;
&lt;br /&gt;
A RAID stripe size of 256KB with a RAID-10 over 16 disks should use&lt;br /&gt;
&lt;br /&gt;
  su = 256k&lt;br /&gt;
  sw = 8 (RAID-10 of 16 disks has 8 data disks)&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can use &amp;quot;sunit&amp;quot; instead of &amp;quot;su&amp;quot; and &amp;quot;swidth&amp;quot; instead of &amp;quot;sw&amp;quot; but then sunit/swidth values need to be specified in &amp;quot;number of 512B sectors&amp;quot;!&lt;br /&gt;
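As a sketch, the su/sw to sunit/swidth conversion can be checked with shell arithmetic, using the RAID-6 values from the example above (the device name in the comments is a placeholder):&lt;br /&gt;

```shell
# RAID-6 example from above: 64 KiB stripe size, 8 disks, 6 of them data disks.
su_bytes=$((64 * 1024))   # stripe unit in bytes
sw=6                      # number of data disks
sunit=$((su_bytes / 512)) # sunit is expressed in 512B sectors
swidth=$((sunit * sw))    # swidth likewise, spanning all data disks
echo "sunit=$sunit swidth=$swidth"
# Equivalent mkfs.xfs invocations (device is a placeholder):
#   mkfs.xfs -d su=64k,sw=6 /dev/sdX
#   mkfs.xfs -d sunit=128,swidth=768 /dev/sdX
```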
&lt;br /&gt;
Note that &amp;lt;tt&amp;gt;xfs_info&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;mkfs.xfs&amp;lt;/tt&amp;gt; accept sunit and swidth in units of 512B sectors, but unfortunately report them in multiples of the basic block size (bsize), not in 512B sectors.&lt;br /&gt;
&lt;br /&gt;
For example, assume swidth=1024 is specified on the mkfs.xfs command line (i.e. 1024 512B sectors) and the block size is 4096 (the bsize reported in the mkfs.xfs output). You should then see swidth 128 reported in the output, since 128 * 4096 == 1024 * 512.&lt;br /&gt;
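This relationship can be verified with a little shell arithmetic, using the numbers from the example above:&lt;br /&gt;

```shell
# swidth as given to mkfs.xfs in 512B sectors, bsize as reported by mkfs.xfs.
swidth_sectors=1024
bsize=4096
# mkfs.xfs/xfs_info report the value in filesystem blocks:
reported=$(( swidth_sectors * 512 / bsize ))
echo "$reported"   # 128, matching the reported swidth
```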
&lt;br /&gt;
When creating an XFS filesystem on top of LVM on top of a hardware RAID, please use the same sunit/swidth values as when creating an XFS filesystem directly on top of the hardware RAID.&lt;br /&gt;
&lt;br /&gt;
== Q: Why doesn&#039;t NFS-exporting subdirectories of inode64-mounted filesystem work? ==&lt;br /&gt;
&lt;br /&gt;
The default &amp;lt;tt&amp;gt;fsid&amp;lt;/tt&amp;gt; type encodes only 32 bits of the inode number for subdirectory exports. However, exporting the root of the filesystem works, and using one of the non-default &amp;lt;tt&amp;gt;fsid&amp;lt;/tt&amp;gt; types (&amp;lt;tt&amp;gt;fsid=uuid&amp;lt;/tt&amp;gt; in &amp;lt;tt&amp;gt;/etc/exports&amp;lt;/tt&amp;gt; with recent &amp;lt;tt&amp;gt;nfs-utils&amp;lt;/tt&amp;gt;) should work as well. (Thanks, Christoph!)&lt;br /&gt;
&lt;br /&gt;
== Q: What is the inode64 mount option for? ==&lt;br /&gt;
&lt;br /&gt;
By default, with 32bit inodes, XFS places inodes only in the first 1TB of a disk. If you have a disk with 100TB, all inodes will be stuck in the first TB. This can lead to strange things like &amp;quot;disk full&amp;quot; when you still have plenty of space free, but there is no more room in the first TB to create a new inode. Also, performance suffers.&lt;br /&gt;
&lt;br /&gt;
To get around this, use the inode64 mount option for filesystems &amp;gt;1TB. Inodes will then be placed in the location where their data is, minimizing disk seeks.&lt;br /&gt;
&lt;br /&gt;
Beware that some old programs might have problems reading 64bit inodes, especially over NFS. Your editor used inode64 for over a year with recent (openSUSE 11.1 and higher) distributions using NFS and Samba without any corruption, so those appear to be recent enough.&lt;br /&gt;
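As a sketch, a hypothetical &amp;lt;tt&amp;gt;/etc/fstab&amp;lt;/tt&amp;gt; entry enabling inode64 could look like this (device and mount point are placeholders):&lt;br /&gt;

```
/dev/vg0/bigdata  /data  xfs  defaults,inode64  0 0
```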
&lt;br /&gt;
== Q: Can I just try the inode64 option to see if it helps me? ==&lt;br /&gt;
&lt;br /&gt;
Starting with kernel 2.6.35, you can try it and then switch back. Older kernels have a bug leading to strange problems if you later mount without inode64 again: for example, you can no longer access files &amp;amp; directories that were created with an inode number &amp;gt;32 bits.&lt;br /&gt;
&lt;br /&gt;
== Q: Performance: mkfs.xfs -n size=64k option ==&lt;br /&gt;
&lt;br /&gt;
When asked on the XFS mailing list about the implications of that mkfs option, Dave Chinner explained it this way:&lt;br /&gt;
&lt;br /&gt;
Inodes are not stored in the directory structure, only the directory entry name and the inode number. Hence the amount of space used by a directory entry is determined by the length of the name.&lt;br /&gt;
&lt;br /&gt;
There is extra overhead to allocate large directory blocks (16 pages instead of one, to begin with, then there&#039;s the vmap overhead, etc), so for small directories smaller block sizes are faster for create and unlink operations.&lt;br /&gt;
&lt;br /&gt;
For empty directories, operations on 4k block size directories consume roughly 50% less CPU than 64k block size directories. The 4k block size directories consume less CPU out to roughly 1.5 million entries, where the two are roughly equal. At directory sizes of 10 million entries, 64k directory block operations consume about 15% of the CPU that 4k directory block operations consume.&lt;br /&gt;
&lt;br /&gt;
In terms of lookups, the 64k block directory will take less IO but consume more CPU for a given lookup. Hence it depends on your IO latency and whether directory readahead can hide that latency as to which will be faster. e.g. For SSDs, CPU usage might be the limiting factor, not the IO. Right now I don&#039;t have any numbers on what the difference might be - I&#039;m getting 1 billion inode population issues worked out first before I start on measuring cold cache lookup times on 1 billion files....&lt;br /&gt;
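For reference, the directory block size under discussion is set at mkfs time and cannot be changed afterwards. A minimal sketch, with /dev/sdX as a placeholder device (per the numbers above, only worth considering for directories with millions of entries):&lt;br /&gt;

```shell
# 64k directory blocks; mkfs.xfs accepts size=64k or size=65536.
mkfs.xfs -n size=64k /dev/sdX
```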
&lt;br /&gt;
== Q: I want to tune my XFS filesystems for &amp;lt;something&amp;gt; ==&lt;br /&gt;
&lt;br /&gt;
The standard answer you will get to this question is this: use the defaults.&lt;br /&gt;
&lt;br /&gt;
There are few workloads where using non-default mkfs.xfs or mount options makes much sense. In general, the default values are already optimised for best performance in the first place. mkfs.xfs will detect the difference between single disk and MD/DM RAID setups and change the default values it uses to configure the filesystem appropriately.&lt;br /&gt;
&lt;br /&gt;
There are a lot of &amp;quot;XFS tuning guides&amp;quot; that Google will find for you - most are old, out of date and full of misleading or just plain incorrect information. Don&#039;t expect that tuning your filesystem for optimal bonnie++ numbers will mean your workload will go faster. You should only consider changing the defaults if either: a) you know from experience that your workload causes XFS a specific problem that can be worked around via a configuration change, or b) your workload is demonstrating bad performance when using the default configurations. In this case, you need to understand why your application is causing bad performance before you start tweaking XFS configurations.&lt;br /&gt;
&lt;br /&gt;
In most cases, the only thing you need to consider for &amp;lt;tt&amp;gt;mkfs.xfs&amp;lt;/tt&amp;gt; is specifying the stripe unit and width for hardware RAID devices. For mount options, the only things that will change metadata performance considerably are the &amp;lt;tt&amp;gt;logbsize&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;delaylog&amp;lt;/tt&amp;gt; mount options. Increasing &amp;lt;tt&amp;gt;logbsize&amp;lt;/tt&amp;gt; reduces the number of journal IOs for a given workload, and &amp;lt;tt&amp;gt;delaylog&amp;lt;/tt&amp;gt; will reduce them even further. The trade-off for this increase in metadata performance is that more operations may be &amp;quot;missing&amp;quot; after recovery if the system crashes while actively making modifications.&lt;br /&gt;
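As a concrete sketch of those mount options (/dev/sdX and /mnt are placeholders; delaylog became the default in later kernels, so it only needs specifying on older ones):&lt;br /&gt;

```shell
# Bigger log buffers mean fewer journal IOs; delaylog batches
# metadata changes in memory before committing them to the log.
mount -o logbsize=256k,delaylog /dev/sdX /mnt
```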
&lt;br /&gt;
&lt;br /&gt;
== Q: Which factors influence the memory usage of xfs_repair? ==&lt;br /&gt;
&lt;br /&gt;
This is best explained with an example. The example filesystem is 16TB, but basically empty (look at icount).&lt;br /&gt;
&lt;br /&gt;
  # xfs_repair -n -vv -m 1 /dev/vda&lt;br /&gt;
  Phase 1 - find and verify superblock...&lt;br /&gt;
          - max_mem = 1024, icount = 64, imem = 0, dblock = 4294967296, dmem = 2097152&lt;br /&gt;
  Required memory for repair is greater that the maximum specified&lt;br /&gt;
  with the -m option. Please increase it to at least 2096.&lt;br /&gt;
  #&lt;br /&gt;
&lt;br /&gt;
xfs_repair is saying it needs at least 2096MB of RAM to repair the filesystem,&lt;br /&gt;
of which 2,097,152KB is needed for tracking free space. &lt;br /&gt;
(The -m 1 argument was telling xfs_repair to use only 1 MB of memory.)&lt;br /&gt;
&lt;br /&gt;
Now we add some inodes (50 million) to the filesystem (look at icount again), and the result is:&lt;br /&gt;
&lt;br /&gt;
  # xfs_repair -vv -m 1 /dev/vda&lt;br /&gt;
  Phase 1 - find and verify superblock...&lt;br /&gt;
          - max_mem = 1024, icount = 50401792, imem = 196882, dblock = 4294967296, dmem = 2097152&lt;br /&gt;
  Required memory for repair is greater that the maximum specified&lt;br /&gt;
  with the -m option. Please increase it to at least 2289.&lt;br /&gt;
&lt;br /&gt;
That is, it now needs at least another 200MB of RAM to run.&lt;br /&gt;
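The two outputs line up with simple per-object estimates. Note this is inferred from the numbers above, not a documented formula: dmem works out to 4 bits per filesystem block and imem to 4 bytes per inode, both reported in KB:&lt;br /&gt;

```shell
dblock=4294967296   # filesystem blocks, from the xfs_repair output
icount=50401792     # inode count, from the second run

# 4 bits per block  = 1 KB per 2048 blocks
dmem=$(( dblock / 2048 ))
# 4 bytes per inode = 1 KB per 256 inodes
imem=$(( icount / 256 ))

echo "dmem=${dmem}KB imem=${imem}KB"
```

That reproduces dmem = 2097152 and imem = 196882 exactly; the minimum xfs_repair actually demands (2096MB, then 2289MB) appears to add roughly another 50MB of working overhead on top.&lt;br /&gt;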
&lt;br /&gt;
The numbers reported by xfs_repair are the absolute minimum required and approximate at that;&lt;br /&gt;
more RAM than this may be required to complete successfully.&lt;br /&gt;
Also, if you only give xfs_repair the minimum required RAM, it will be slow;&lt;br /&gt;
for best repair performance, the more RAM you can give it the better.&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User_talk:J3gum&amp;diff=2369</id>
		<title>User talk:J3gum</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User_talk:J3gum&amp;diff=2369"/>
		<updated>2011-10-13T19:31:10Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: /* De-pretty/complicate the page? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== De-pretty/complicate the page? ==&lt;br /&gt;
&lt;br /&gt;
Hi! I&#039;ve seen your [http://xfs.org/index.php?title=Main_Page&amp;amp;diff=2356&amp;amp;oldid=2256 edit] to &amp;quot;De-pretty/complicate the page&amp;quot; and I don&#039;t understand why you did that? Don&#039;t we want our wiki to be pretty too? I don&#039;t think the page was overly complicated, just a few boxes so that one does not have to scroll down, as the page gets longer and longer. -- [[User:Ckujau|Ckujau]] 06:02, 12 October 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
:First, I apologize if I&#039;ve overstepped.  That was not my intention.  I did try to add [http://xfs.org/index.php?title=Main_Page&amp;amp;diff=2357&amp;amp;oldid=2356 a link and header] before the edit to which you referred.  When added, the system generated table of contents overlapped the &amp;quot;custom box divisions&amp;quot;.  I added these two edits as two separate edits in case someone wanted to reimplement the &amp;quot;prettiness.&amp;quot;  I do believe these &amp;quot;prettiness&amp;quot; additions raise the bar for entry, that is not needed, for people trying to add content to the page. -- [[User:J3gum|J3gum]] 14:21, 12 October 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
::No apology needed. I appreciate your addition and I agree that every formatting syntax (Mediawiki-syntax or HTML) in between content can confuse people who just want to add content. But the [[Main_Page]] is somewhat special, IMHO. It&#039;s an overview of the most important pages and I think we need to keep it somewhat organized and not every page needs to be on the front page. Otherwise we could just redirect to [[Special:AllPages]] :-) Every other page on this wiki needs only little formatting (just bullet points (&amp;quot;*&amp;quot;) and paragraphs (&amp;quot;==&amp;quot;)). I&#039;d like to re-pretty the [[Main_Page]] again and keep [{{SERVER}}/index.php?title=Main_Page&amp;amp;action=watch watching] the page so if another addition mangles the &amp;quot;prettiness&amp;quot; I&#039;ll gladly fix it. Thanks. -- [[User:Ckujau|Ckujau]] 20:59, 12 October 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
:::Yes, please add the &amp;quot;prettiness&amp;quot; back in. -- [[User:J3gum|J3gum]] 13:47, 13 October 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
:::Please do not randomly edit front pages.  If you think the format should be different please bring it up on the mailinglist.  I really dislike the new look, but I&#039;ll give you the chance of getting more opinion on the mailinglist before simply reverting it. -- [[User:Hch|Hch]] 10:42, 13 October 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
::::Please revert it, or better yet, add one in that doesn&#039;t break with the &amp;quot;auto-contents&amp;quot; menu.  Either way is fine with me.  -- [[User:J3gum|J3gum]] 13:47, 13 October 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
::::: Done. -- [[User:Ckujau|Ckujau]] 19:31, 13 October 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
== What change policy ==&lt;br /&gt;
&lt;br /&gt;
Hch, you said, &amp;quot;bring it up on the mailinglist&amp;quot;. If you want people to use your change policy, it may be a good idea to put your change policy in the description area on your homepage.  Perhaps it should even have its own page.  It&#039;s pretty hard to guess the policy as a first-time user. -- [[User:J3gum|J3gum]] 13:47, 13 October 2011 (UTC)&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Main_Page&amp;diff=2368</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Main_Page&amp;diff=2368"/>
		<updated>2011-10-13T19:26:12Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: prettyness re-added, as per discussion&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
{| width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;vertical-align:top&amp;quot; | &amp;lt;!-- Information --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#E2EAFF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Information about XFS ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS FAQ]]&lt;br /&gt;
* [[XFS Status Updates]]&lt;br /&gt;
* [[XFS Papers and Documentation]]&lt;br /&gt;
* [[Linux Distributions shipping XFS]]&lt;br /&gt;
* [[XFS Rpm for RedHat|XFS RPMs for RedHat]]&lt;br /&gt;
* [[XFS Companies]]&lt;br /&gt;
* [http://oss.sgi.com/projects/xfs SGI XFS website]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/XFS Wikipedia XFS page]&lt;br /&gt;
&lt;br /&gt;
== Professional XFS Consulting Services == &lt;br /&gt;
&lt;br /&gt;
[[Consulting Resources]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;50%&amp;quot; style=&amp;quot;vertical-align:top&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Developers --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#F8F8FF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
== XFS Developer Resources ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS email list and archives]]&lt;br /&gt;
* [http://oss.sgi.com/bugzilla/buglist.cgi?product=XFS&amp;amp;bug_status=NEW&amp;amp;bug_status=ASSIGNED&amp;amp;bug_status=REOPENED Bugzilla @ oss.sgi.com]&lt;br /&gt;
* [http://bugzilla.kernel.org/buglist.cgi?product=File+System&amp;amp;component=XFS&amp;amp;bug_status=NEW&amp;amp;bug_status=ASSIGNED&amp;amp;bug_status=REOPENED Bugzilla @ kernel.org]&lt;br /&gt;
* [[Getting the latest source code]]&lt;br /&gt;
* [[Unfinished work]]&lt;br /&gt;
* [[Shrinking Support]]&lt;br /&gt;
* [[Ideas for XFS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- features --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#F2F2F2; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Feature Highlights ==&lt;br /&gt;
&lt;br /&gt;
* [[FITRIM/discard]] - discard (or &amp;quot;trim&amp;quot;) blocks which are not in use by the filesystem&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{#meta: | u+4/+rib+YG96TifD0SN88xS84YSDm2cl61IU7ZIk9g= | verify-v1 }}&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User:Ckujau&amp;diff=2365</id>
		<title>User:Ckujau</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User:Ckujau&amp;diff=2365"/>
		<updated>2011-10-12T21:06:14Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: rot13 :-)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;  rot13: ksfjvxv@areqolangher.qr&lt;br /&gt;
&lt;br /&gt;
* [[/maintenance/]]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User_talk:J3gum&amp;diff=2364</id>
		<title>User talk:J3gum</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User_talk:J3gum&amp;diff=2364"/>
		<updated>2011-10-12T20:59:11Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: /* De-pretty/complicate the page? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== De-pretty/complicate the page? ==&lt;br /&gt;
&lt;br /&gt;
Hi! I&#039;ve seen your [http://xfs.org/index.php?title=Main_Page&amp;amp;diff=2356&amp;amp;oldid=2256 edit] to &amp;quot;De-pretty/complicate the page&amp;quot; and I don&#039;t understand why you did that? Don&#039;t we want our wiki to be pretty too? I don&#039;t think the page was overly complicated, just a few boxes so that one does not have to scroll down, as the page gets longer and longer. -- [[User:Ckujau|Ckujau]] 06:02, 12 October 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
:First, I apologize if I&#039;ve overstepped.  That was not my intention.  I did try to add [http://xfs.org/index.php?title=Main_Page&amp;amp;diff=2357&amp;amp;oldid=2356 a link and header] before the edit to which you referred.  When added, the system generated table of contents overlapped the &amp;quot;custom box divisions&amp;quot;.  I added these two edits as two separate edits in case someone wanted to reimplement the &amp;quot;prettiness.&amp;quot;  I do believe these &amp;quot;prettiness&amp;quot; additions raise the bar for entry, that is not needed, for people trying to add content to the page. -- [[User:J3gum|J3gum]] 14:21, 12 October 2011 (UTC)&lt;br /&gt;
&lt;br /&gt;
::No apology needed. I appreciate your addition and I agree that every formatting syntax (Mediawiki-syntax or HTML) in between content can confuse people who just want to add content. But the [[Main_Page]] is somewhat special, IMHO. It&#039;s an overview of the most important pages and I think we need to keep it somewhat organized and not every page needs to be on the front page. Otherwise we could just redirect to [[Special:AllPages]] :-) Every other page on this wiki needs only little formatting (just bullet points (&amp;quot;*&amp;quot;) and paragraphs (&amp;quot;==&amp;quot;)). I&#039;d like to re-pretty the [[Main_Page]] again and keep [{{SERVER}}/index.php?title=Main_Page&amp;amp;action=watch watching] the page so if another addition mangles the &amp;quot;prettiness&amp;quot; I&#039;ll gladly fix it. Thanks. -- [[User:Ckujau|Ckujau]] 20:59, 12 October 2011 (UTC)&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User_talk:J3gum&amp;diff=2362</id>
		<title>User talk:J3gum</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User_talk:J3gum&amp;diff=2362"/>
		<updated>2011-10-12T06:02:26Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: New page: == De-pretty/complicate the page? ==  Hi! I&amp;#039;ve seen your [http://xfs.org/index.php?title=Main_Page&amp;amp;diff=2356&amp;amp;oldid=2256 edit] to &amp;quot;De-pretty/complicate the page&amp;quot; and I don&amp;#039;t understand why ...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== De-pretty/complicate the page? ==&lt;br /&gt;
&lt;br /&gt;
Hi! I&#039;ve seen your [http://xfs.org/index.php?title=Main_Page&amp;amp;diff=2356&amp;amp;oldid=2256 edit] to &amp;quot;De-pretty/complicate the page&amp;quot; and I don&#039;t understand why you did that? Don&#039;t we want our wiki to be pretty too? I don&#039;t think the page was overly complicated, just a few boxes so that one does not have to scroll down, as the page gets longer and longer. -- [[User:Ckujau|Ckujau]] 06:02, 12 October 2011 (UTC)&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Category:Template&amp;diff=2307</id>
		<title>Category:Template</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Category:Template&amp;diff=2307"/>
		<updated>2011-04-04T17:06:27Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: New page: Templates&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Templates&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Template:Delete&amp;diff=2306</id>
		<title>Template:Delete</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Template:Delete&amp;diff=2306"/>
		<updated>2011-04-04T17:05:44Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: category changed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;text-align:center; background:#ffaaaa&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;This page has been labelled for deletion.&#039;&#039;&#039;&amp;lt;br/ &amp;gt;The given reason is: &#039;&#039;{{{1|specify reason}}}&#039;&#039;.&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;small&amp;gt;This normally means the page title is a bad one or the content may have been spam or a trojan trap. Useful content may have been moved somewhere better.&lt;br /&gt;
&lt;br /&gt;
If a page title is vaguely meaningful, the page should perhaps be a redirect, or hold a small summary for historical interest instead.&lt;br /&gt;
 &lt;br /&gt;
The page should now be empty apart from this message. Remember to check [[Special:Whatlinkshere/{{FULLPAGENAME}}|if anything links here]] and [{{fullurl:{{FULLPAGENAME}}|action=history}} the page history] before deleting. Please fix [[Special:Whatlinkshere/{{NAMESPACE}}:{{PAGENAME}}|any pages linking to here]].&lt;br /&gt;
&amp;lt;/small&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;includeonly&amp;gt;[[Category:Labelled for deletion]]&amp;lt;/includeonly&amp;gt;&lt;br /&gt;
&amp;lt;noinclude&amp;gt;&lt;br /&gt;
== About this template - Procedure for deleting wiki pages ==&lt;br /&gt;
Use this template with the syntax: &lt;br /&gt;
 &amp;lt;nowiki&amp;gt;{{delete|reason}}&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use it to delete a page, when a page definitely and unequivocally should be deleted:&lt;br /&gt;
: check the links to the page with &amp;quot;Special:WhatLinksHere&amp;quot;, and correct the links in all linked pages&lt;br /&gt;
: &#039;&#039;&#039;Replace the entire contents of the page&#039;&#039;&#039; with &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;{{delete|reason}}&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
: give a reason for the deletion in the &#039;Summary&#039; field when you make this edit&lt;br /&gt;
: sign it with &amp;lt;code&amp;gt;&amp;lt;nowiki&amp;gt;--~~~~&amp;lt;/nowiki&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
See also [http://www.mediawiki.org/wiki/Help:Deleting_a_page MediaWiki Help:Deleting a page]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;If the page title still holds some meaning&#039;&#039;&#039; but the contents of the page are mostly out-of-date or otherwise deserves deleting...  well then fix it! Edit the page. Delete the content. Write something different in its place. Don&#039;t use this template.&lt;br /&gt;
&lt;br /&gt;
All the pages get listed in [[:Category:Labelled for deletion]]. It will help the [[Special:listadmins|wiki sysops]] to deal with all these, if this delete template is only used in cases where the community has agreed upon the decision to delete, or where individuals believe (in good faith) that there could be no reason for anyone to disagree with the deletion.&lt;br /&gt;
&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;noinclude&amp;gt;[[Category:Template]]&amp;lt;/noinclude&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User:Ckujau/maintenance&amp;diff=2305</id>
		<title>User:Ckujau/maintenance</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User:Ckujau/maintenance&amp;diff=2305"/>
		<updated>2011-04-04T16:44:55Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: maintainer added&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [[Special:BrokenRedirects]]&lt;br /&gt;
* [[Special:DoubleRedirects]]&lt;br /&gt;
* [[Special:Lonelypages]]&lt;br /&gt;
* [[Special:Uncategorizedpages]]&lt;br /&gt;
* [[Special:Uncategorizedtemplates]]&lt;br /&gt;
* [[Special:Unusedcategories]]&lt;br /&gt;
* [[Special:Wantedcategories]]&lt;br /&gt;
* [[Special:Wantedpages]]&lt;br /&gt;
&amp;lt;!-- * [[Special:WantedFiles]] (since [http://www.mediawiki.org/wiki/Release_notes/1.14 1.14]) --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User:Ckujau&amp;diff=2304</id>
		<title>User:Ckujau</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User:Ckujau&amp;diff=2304"/>
		<updated>2011-04-04T16:44:34Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: maintenance added&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;  [mailto:xfswiki@nerdbynature.de Christian Kujau]&lt;br /&gt;
&lt;br /&gt;
* [[/maintenance/]]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Companies&amp;diff=2117</id>
		<title>XFS Companies</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Companies&amp;diff=2117"/>
		<updated>2010-10-03T08:30:41Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: s/Â/™/g&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== These are companies that either use XFS or have a product that utilizes XFS . ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Info gathered from: [http://oss.sgi.com/projects/xfs/users.html XFS Users] on [http://oss.sgi.com/ oss.sgi.com]&lt;br /&gt;
&lt;br /&gt;
== [http://www.kernel.org/ The Linux Kernel Archives] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;A bit more than a year ago (as of October 2008) kernel.org, in an ever increasing need to squeeze more performance out of its machines, made the leap of migrating the primary mirror machines (mirrors.kernel.org) to XFS.  We cite a number of reasons including: fscking 5.5T of disk is long and painful, we were hitting various cache issues, and we were seeking better performance out of our file system.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;After initial tests looked positive we made the jump, and have been quite happy with the results.  With an instant increase in performance and throughput, as well as the worst xfs_check we&#039;ve ever seen taking 10 minutes, we were quite happy.  Subsequently we&#039;ve moved all primary mirroring file-systems to XFS, including www.kernel.org , and mirrors.kernel.org.  With an average constant movement of about 400mbps around the world, and with peaks into the 3.1gbps range serving thousands of users simultaneously it&#039;s been a file system that has taken the brunt we can throw at it and held up spectacularly.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.sdss.org/ The Sloan Digital Sky Survey] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Sloan Digital Sky Survey is an ambitious effort to map one-quarter of the sky at optical and very-near infrared wavelengths and take spectra of 1 million extra-galactic objects. The estimated amount of data that will be acquired over the 5 year lifespan of the project is 15TB, however, the total amount of storage space required for object informational databases, corrected frames, and reduced spectra will be several factors more than this. The goal is to have all the data online and available to the collaborators at all times. To accomplish this goal we are using commodity, off the shelf (COTS) Intel servers with EIDE disks configured as RAID50 arrays using XFS. Currently, 14 machines are in production accounting for over 18TB. By the scheduled end of the survey in 2005, 50TB of XFS disks will be online serving SDSS data to collaborators and the public.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;For complete details and status of the project please see [http://www.sdss.org/ http://www.sdss.org]. For details of the storage systems, see the [http://home.fnal.gov/~yocum/storageServerTechnicalNote.html SDSS Storage Server Technical Note].&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www-d0.fnal.gov/  The DØ Experiment at Fermilab] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;At the DØ experiment at the Fermi National Accelerator Laboratory we have a ~150 node cluster of desktop machines all using the SGI-patched kernel. Every large disk (&amp;amp;gt;40Gb) or disk array in the cluster uses XFS including 4x640Gb disk servers and several 60-120Gb disks/arrays. Originally we chose reiserfs as our journaling filesystem, however, this was a disaster. We need to export these disks via NFS and this seemed perpetually broken in 2.4 series kernels. We switched to XFS and have been very happy. The only inconvenience is that it is not included in the standard kernel. The SGI guys are very prompt in their support of new kernels, but it is still an extra step which should not be necessary.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.ciprico.com/pDiMeda.shtml  Ciprico DiMeda NAS Solutions] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Ciprico DiMeda line of Network Attached Storage solutions combine the ease of connectivity of NAS with the SAN like performance levels required for digital media applications. The DiMeda 3600 provides high availability and high performance through dual NAS servers and redundant, scalable Fibre Channel RAID storage. The DiMeda 1700 provides high performance files services at a low price by using the latest Serial ATA RAID technology. All DiMeda systems are based on Linux and use XFS as the filesystem. We tested a number of filesystem alternatives and XFS was chosen because it provided the highest performance in digital media applications and the journaling feature ensures rapid failover in our dual node fault tolerant configurations.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.quantum.com/Products/NAS+Servers/Guardian+14000/Default.htm  The Quantum Guardian™ 14000] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Quantum Guardian™ 14000, the latest Network Attached Storage (NAS) solution from Quantum, delivers 1.4TB of enterprise-class storage for less than $25,000. The Guardian 14000 is a Linux-based device which utilizes XFS to provide a highly reliable journaling filesystem with simultaneous support for Windows, UNIX, Linux and Macintosh environments. As a dedicated appliance optimized for fast, reliable file sharing, the Guardian 14000 combines the simplicity of NAS with a robust feature set designed for the most demanding enterprise environments. Support for tools such as Active Directory Service (ADS), UNIX Network Information Service (NIS) and Simple Network Management Protocol (SNMP) provides ease of management and seamless integration. Hardware redundancy, Snapshots and StorageCare™ on-site service ensure security for business-critical data.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.bigstorage.com/products_approach_overview.html  BigStorage K2~NAS] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;At BigStorage we pride ourselves on tailoring our NAS systems to meet our customer&#039;s needs, with the help of XFS we are able to provide them with the most reliable Journaling Filesystem technology available. Our open systems approach, which allows for cross-platform integration, gives our customers the flexibility to grow with their data requirements. In addition, BigStorage offers a variety of other features including total hardware redundancy, snapshotting, replication and backups directly from the unit. All of our products include BigStorage&#039;s 24/7 LiveResponse™ support. With LiveResponse™, we keep our team of experienced technical experts on call 24 hours a day, every day, to ensure that your storage investment remains online, all the time.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.echostar.com  Echostar DishPVR 721] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Echostar uses the XFS filesystem for its latest generation of satellite receivers, the DP721. Echostar chose XFS for its performance, stability and unique set of features.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS allowed us to meet a demanding requirement of recording two mpeg2 streams to the internal hard drive while simultaneously viewing a third pre-recorded stream. In addition, XFS allowed us to withstand unexpected power loss without filesystem corruption or user interaction.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We tested several other filesystems, but XFS emerged as the clear winner.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.sun.com/hardware/serverappliances/raq550/  Sun Cobalt RaQ™ 550] ==&lt;br /&gt;
&lt;br /&gt;
From the [http://www.sun.com/hardware/serverappliances/raq550/features.html features] page:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS is a journaling file system capable of quick fail over recovery after unexpected interruptions. XFS is an important feature for mission-critical applications as it ensures data integrity and dramatically reduces startup time by avoiding FSCK delay.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://pingu.salk.edu/  Center for Cytometry and Molecular Imaging at the Salk Institute] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I run the Center for Cytometry and Molecular Imaging at the Salk Institute in La Jolla, CA. We&#039;re a core facility for the Institute, offering flow cytometry, basic and deconvolution microscopy, phosphorimaging (radioactivity imaging) and fluorescent imaging.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I&#039;m currently in the process of migrating our data server to Linux/XFS. Our web server currently uses Linux/XFS. We have about 60 Gb on the data server which has a 100Gb SCSI RAID 5 array. This is a bit restrictive for our microscopists so in order that they can put more data online, I&#039;m adding another machine, also running Linux/XFS, with about 420 Gb of IDE-RAID5, based on Adaptec controllers....&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Servers are configured with quota and run Samba, NFS, and Netatalk for connectivity to the mixed bag of computers we have around here. I use the CVS XFS tree most of the time. I have not seen any problems in the several months I have been testing.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://coltex.nl/ Coltex Retail Group BV] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Coltex Retail Group BV in the Netherlands uses Red Hat Linux with XFS for their main database server which collects the data from over 240 clothing retail stores throughout the Netherlands. Coltex depends on the availability of the server for over 100 employees in the main office for retrieval of logistical and sales figures. The database is roughly 10GB, containing both historical and current data.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The entire production and logistical system depends on the availability of the system and downtime would mean a significant financial penalty. The speed and reliability of the XFS filesystem which has a proven track record and mature tools to go with it is fundamental to the availability of the system.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS has saved us a lot of time during testing and implementation. A long filesystem check is no longer needed when bad things happen, as they do. The increased speed of our database system, which is based on Progress 9.1C, is also a nice benefit of this filesystem.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.dkp.com/ DKP Effects] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;re a 3D computer graphics/post-production house. We&#039;ve currently got four fileservers using XFS under Linux online - three 350GB servers and one 800GB server. The servers are under fairly heavy load - network load to and from the dual NICs on the box is basically maxed out 18 hours a day - and we do have occasional lockups and drive failures. Thanks to Linux SW RAID5 and XFS, though, we haven&#039;t had any data loss, or significant down time.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.epigenomics.com/ Epigenomics] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We currently have several IDE-to-SCSI-RAID systems with XFS in production. The largest has a capacity of 1.5TB, the other 2 have 430GB each.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Data stored on these filesystems is on the one hand &amp;quot;normal&amp;quot; home directories and corporate documents and on the other hand scientific data for our laboratory and IT department.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.incyte.com/ Incyte Genomics] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I&#039;m currently in the process of slowly converting 21 clusters totaling 2300+ processors over to XFS.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;These machines are running a fairly stock RH7.1+XFS. The application is our own custom scheduler for doing genomic research. We have one of the world&#039;s largest sequencing labs, which generates a tremendous amount of raw data. Vast amounts of CPU cycles must be applied to it to turn it into useful data we can then sell access to.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Currently, a minority of these machines are running XFS, but as I can get downtime on the clusters I am upgrading them to 7.1+XFS. When I&#039;m done, it&#039;ll be about 10TB of XFS goodness... across 9G disks mostly.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.monmouth.edu/ Monmouth University] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve replaced our NetApp filer (80GB, $40,000). NetApp ONTAP software [runs on NetApp filers] is basically an NFS and CIFS server with their own proprietary filesystem. We were quickly running out of space and our annual budget was almost depleted. What were we to do?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;With an off-the-shelf Dell 4400 series server and 300GB of disks ($8,000 total), we were able to run Linux and Samba to emulate a NetApp filer.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS allowed us to manage 300GB of data with absolutely no downtime (now going on 79 days) since implementation. Gone are the days of fearing the fsck of 300GB.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.astro.wisc.edu  The University of Wisconsin Astronomy Department] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;At the University of Wisconsin Astronomy Department we have been using Linux XFS since the first release. We currently have 31 Linux boxes running XFS on all filesystems with about 2.6 TB of disk space on these machines. We use XFS primarily on our data reduction systems, but we also use it on our web server and on one of the remote observing machines at the WIYN 3.5m Telescope at Kitt Peak (http://www.noao.edu/wiyn/wiyn.html).&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We will likely be using Linux XFS at least in part on the GLIMPSE program (http://www.astro.wisc.edu/sirtf/) which will likely require several TB of disk space to process the data.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.amoa.org/ The Austin Museum of Art] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Austin Museum of Art has two file servers running RedHat 7.2_XFS upgraded from RedHat 7.1_XFS. Our webserver runs Domino on top of RedHat 7.3_XFS and we&#039;re getting about 70% better performance than the Domino server running on Windows 2000 Server. We&#039;re moving our workstations away from Windows and Microsoft Office to an LTSP server running on RedHat 7.3_XFS.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve become solely dependent on XFS for all of our data systems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.tecmath.com/ tecmath AG] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We use a production server with a 270 GB RAID 5 (hardware) disk array. It is based on a SuSE 7.2 distribution, but with a standard 2.4.12 kernel with XFS and LVM patches. The server provides NFS to 8 Unix clients as well as Samba to about 80 PCs. The machine also runs Bind 9, Apache, Exim, DHCP, POP3, and MySQL. I have tried out different configurations with ReiserFS, but I didn&#039;t manage to find a stable configuration with respect to NFS. Since I converted all disks to XFS some 3 months ago, we have not had any filesystem-related problems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.theiqgroup.com/ The IQ Group] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Here at the IQ Group, Inc. we use XFS for all our production and development servers.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Our OS of choice is Slackware Linux 8.0. Our hardware of choice is Dell and VALinux servers.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;As for applications, we run the standard Unix/Linux apps like Sendmail, Apache, BIND, DHCP, iptables, etc.; as well as Oracle 9i and Arkeia.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve been running XFS across the board for about 3 months now without a hitch (so far).&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Size-wise, our biggest server is about 40 GB, but that will be increasing substantially in the near future.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Our production servers are colocated, so a journaled FS was a must. Reboots are quick and no human interaction is required, unlike with a bad fsck on ext2. Additionally, our database servers gain additional integrity and robustness.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We originally chose XFS over ReiserFS and ext3 because of its age (it&#039;s been in production on SGI boxes for probably longer than all the other journaling FSs combined) and its speed appeared comparable as well.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.artsit.usyd.edu.au  Arts IT Unit, Sydney University] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I&#039;ve got XFS on a &#039;production&#039; file server. The machine could have up to 500 people logged in, but typically fewer than 200. Most are Mac users, connected via Netatalk for &#039;personal files&#039;, although there are shared areas for admin units. There are probably about 30-40 Windows users (via Samba). It&#039;s the file server for an academic faculty at a university.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Hardware RAID, via a Mylex dual-channel controller with 4 drives, an Intel Tupelo MB, and an Intel &#039;SC5000&#039; server chassis with redundant power and hot-swap SCSI bays. The system boots off a single non-RAID 9GB UW-SCSI drive.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The only system &#039;crash&#039; was caused by someone accidentally unplugging it, just before we put it into production. It was back in full operation within 5 minutes. Without journaling, the fsck would have taken well over an hour. In day-to-day use it has run well.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://structbio.vanderbilt.edu/comp/  Vanderbilt University Center for Structural Biology] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I run a high-performance computing center for Structural Biology research at Vanderbilt University. We use XFS extensively, and have been since the late prerelease versions. I&#039;ve had nothing but good experiences with it.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We began using XFS in our search for a good solution for our RAID fileservers. We had such good experiences with it on these systems that we&#039;ve begun putting it on the root/usr/var partitions of every Linux system we run here. I even have it on my laptop these days. XFS in combination with the 2.4 NFS3 implementation performs very well for us, and we have great uptimes on these systems (Our 750GB ArenaII setup is at 143 days right now).&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;All told, we&#039;ve got about 1.2TB of XFS filesystems spinning right now. It&#039;s spread out across maybe a dozen or so filesystems and will continue to increase as we are growing fast and that&#039;s all we use now. Next up is putting it on our 17-node Linux cluster, which will bring that up to 1.5TB spread across 30 filesystems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I, for one, would LOVE to see XFS make it into the kernel tree. From my perspective, it&#039;s one of the best things to happen to Linux in the 7 years I&#039;ve been using/administering it.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== 2008 Update ====&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve since moved our main home directories to a proprietary NAS, but continue to use XFS on 10TB of LVM storage for doing backup-to-disk from the same NAS.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www-cdf.fnal.gov/  CDF Experiment at Fermi National Lab] ==&lt;br /&gt;
&lt;br /&gt;
CDF, an elementary particle physics experiment at Fermi National Lab, is using XFS for all our cache disks.&lt;br /&gt;
&lt;br /&gt;
The usage model is that we have a PB tape archive (2 STK silos) as permanent storage. In front of this archive we are deploying a roughly 100TB disk cache system. The cache is made up of 50 2TB file servers based on cheap commodity hardware (3ware-based hardware RAID using IDE drives). The data is then processed by a cluster of 300 dual-CPU Linux PCs. The cache software is dCache, a DESY/FNAL product.&lt;br /&gt;
&lt;br /&gt;
The whole system is used by more than 300 active users from all over the world for batch processing for their physics data analysis.&lt;br /&gt;
&lt;br /&gt;
== [http://www.get2chip.com  Get2Chip, Inc.] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We are using XFS on 3 production file servers with approximately 1.5T of data. Quite impressive, especially when we had a power outage and all three servers shut down. All servers came back up in minutes with no problems! We are looking at creating two more servers that would manage 2+ TB of data store.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.lando.co.za  Lando International Group Technologies] ==&lt;br /&gt;
&lt;br /&gt;
Lando International Group Technologies is the home of:&lt;br /&gt;
&lt;br /&gt;
* [www.lando.co.za Lando Technologies Africa (Pty) Ltd] - Internet Service Provider&lt;br /&gt;
* [www.lbsd.net Linux Based Systems Design] (Article 21). Not-For-Profit company established to provide free Linux distributions and programs.&lt;br /&gt;
* Cell Park South Africa (Pty) Ltd. RSA Pat Appln 2001/10406. Collecting parking fees by means of cell phone SMS or voice.&lt;br /&gt;
* Read Plus Education (Pty) Ltd. Software based reading skills training and testing for ages 4 to 100.&lt;br /&gt;
* Mobivan. Mobile office including Internet access, fax, copying, printing, telephone, collection and delivery services, legal services, pre-paid phone and electricity services, bill payment email, secretarial services, training facilities and management services.&lt;br /&gt;
* Lando International Marketing Agency. Direct marketing services, design and supply of promotional material, consulting, sourcing of capital and other funding.&lt;br /&gt;
* Illico. Software development and systems analysis on most platforms.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Throughout these companies, we use the XFS filesystem with [http://idms.lbsd.net IDMS Linux] on high-end Intel servers, with an average of 100 GB storage each. XFS stores our customer and user data, including credit card details, mail, routing tables, etc. We have not had one problem since the release of the first XFS patch.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.fcb-wilkens.com  Foote, Cone, &amp;amp; Belding] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We are an advertisement company in Germany, and the use of the XFS filesystem is a story of success for us. In our Hamburg office, we have two file servers with a 420 Gig RAID in XFS format serving (almost) all our data to about 180 Macs and about 30 PCs using Samba and Netatalk. Some of the data is used in our offices in Frankfurt and Berlin, and in fact the Berlin office is just getting its own 250 Gig fileserver (using XFS) right now.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The general success with XFS has led us to switch over all our Linux servers to run on XFS as well (with the exception of two systems that are tied to tight specifications configuration wise). XFS, even the old 1.0 version, has happily taken on various abuse - broken SCSI controllers, broken RAID systems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.moving-picture.co.uk/  Moving Picture Company] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We here at MPC use XFS/RedHat 7.2 on all of our graphics-workstations and file-servers. More info can be found in an [http://www.linuxuser.co.uk/articles/issue20/lu20-Linux_at_work-In_the_picture.pdf  article] LinuxUser magazine did on us recently.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.coremetrics.com/  Coremetrics, Inc.] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We are currently using XFS for 25+ production web-servers, ~900GB Oracle db servers, with potentially 15+ more servers by mid 2003, with ~900GB+ databases. All XFS installed.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Also, our dev environment (except for the Sun boxes, which are all being migrated to x86 in the aforementioned server additions) consists of x86 dual-proc servers running Oracle, application servers, or web services as needed. All servers run XFS from images we&#039;ve got on our SystemImager servers.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;All production back-end servers are connected via FC1 or FC2 to a SAN containing ~13TB of raw storage, which will soon be converted from VxFS to XFS with the migration of Oracle to our x86 platforms.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://evolt.org Evolt.org] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;evolt.org, a world community for web developers promoting the mutual free exchange of ideas, skills and experiences, has had a great deal of success using XFS. Our primary webserver which serves 100K hosts/month, primary Oracle database with ~25Gb of data, and free member hosting for 1000 users haven&#039;t had a minute of downtime since XFS has been installed. Performance has been spectacular and maintenance a breeze.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font size=&amp;quot;-1&amp;quot;&amp;gt; &#039;&#039;All testimonials on this page represent the views of the submitters, and references to other products and companies should not be construed as an endorsement by either the organizations profiled, or by SGI. All trademarks (r) their respective owners.&#039;&#039; &amp;lt;/font&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=XFS_Companies&amp;diff=2116</id>
		<title>XFS Companies</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=XFS_Companies&amp;diff=2116"/>
		<updated>2010-10-03T08:29:03Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: s/Â/™/&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== These are companies that either use XFS or have a product that utilizes XFS. ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Info gathered from: [http://oss.sgi.com/projects/xfs/users.html XFS Users] on [http://oss.sgi.com/ oss.sgi.com]&lt;br /&gt;
&lt;br /&gt;
== [http://www.kernel.org/ The Linux Kernel Archives] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;A bit more than a year ago (as of October 2008) kernel.org, in an ever-increasing need to squeeze more performance out of its machines, made the leap of migrating the primary mirror machines (mirrors.kernel.org) to XFS. We cite a number of reasons, including that fscking 5.5T of disk is long and painful, we were hitting various cache issues, and we were seeking better performance out of our file system.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;After initial tests looked positive we made the jump, and have been quite happy with the results. With an instant increase in performance and throughput, as well as the worst xfs_check we&#039;ve ever seen taking 10 minutes, we were quite happy. Subsequently we&#039;ve moved all primary mirroring file-systems to XFS, including www.kernel.org and mirrors.kernel.org. With an average constant movement of about 400mbps around the world, and with peaks into the 3.1gbps range serving thousands of users simultaneously, it&#039;s been a file system that has taken everything we can throw at it and held up spectacularly.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.sdss.org/ The Sloan Digital Sky Survey] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Sloan Digital Sky Survey is an ambitious effort to map one-quarter of the sky at optical and very-near infrared wavelengths and take spectra of 1 million extra-galactic objects. The estimated amount of data that will be acquired over the 5 year lifespan of the project is 15TB; however, the total amount of storage space required for object informational databases, corrected frames, and reduced spectra will be several factors more than this. The goal is to have all the data online and available to the collaborators at all times. To accomplish this goal we are using commodity, off the shelf (COTS) Intel servers with EIDE disks configured as RAID50 arrays using XFS. Currently, 14 machines are in production accounting for over 18TB. By the scheduled end of the survey in 2005, 50TB of XFS disks will be online serving SDSS data to collaborators and the public.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;For complete details and status of the project please see [http://www.sdss.org/ http://www.sdss.org]. For details of the storage systems, see the [http://home.fnal.gov/~yocum/storageServerTechnicalNote.html SDSS Storage Server Technical Note].&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www-d0.fnal.gov/  The DØ Experiment at Fermilab] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;At the DØ experiment at the Fermi National Accelerator Laboratory we have a ~150 node cluster of desktop machines all using the SGI-patched kernel. Every large disk (&amp;amp;gt;40Gb) or disk array in the cluster uses XFS including 4x640Gb disk servers and several 60-120Gb disks/arrays. Originally we chose reiserfs as our journaling filesystem, however, this was a disaster. We need to export these disks via NFS and this seemed perpetually broken in 2.4 series kernels. We switched to XFS and have been very happy. The only inconvenience is that it is not included in the standard kernel. The SGI guys are very prompt in their support of new kernels, but it is still an extra step which should not be necessary.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.ciprico.com/pDiMeda.shtml  Ciprico DiMeda NAS Solutions] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Ciprico DiMeda line of Network Attached Storage solutions combine the ease of connectivity of NAS with the SAN like performance levels required for digital media applications. The DiMeda 3600 provides high availability and high performance through dual NAS servers and redundant, scalable Fibre Channel RAID storage. The DiMeda 1700 provides high performance files services at a low price by using the latest Serial ATA RAID technology. All DiMeda systems are based on Linux and use XFS as the filesystem. We tested a number of filesystem alternatives and XFS was chosen because it provided the highest performance in digital media applications and the journaling feature ensures rapid failover in our dual node fault tolerant configurations.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.quantum.com/Products/NAS+Servers/Guardian+14000/Default.htm  The Quantum Guardian™ 14000] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Quantum Guardian™ 14000, the latest Network Attached Storage (NAS) solution from Quantum, delivers 1.4TB of enterprise-class storage for less than $25,000. The Guardian 14000 is a Linux-based device which utilizes XFS to provide a highly reliable journaling filesystem with simultaneous support for Windows, UNIX, Linux and Macintosh environments. As a dedicated appliance optimized for fast, reliable file sharing, the Guardian 14000 combines the simplicity of NAS with a robust feature set designed for the most demanding enterprise environments. Support for tools such as Active Directory Service (ADS), UNIX Network Information Service (NIS) and Simple Network Management Protocol (SNMP) provides ease of management and seamless integration. Hardware redundancy, Snapshots and StorageCare™ on-site service ensure security for business-critical data.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.bigstorage.com/products_approach_overview.html  BigStorage K2~NAS] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;At BigStorage we pride ourselves on tailoring our NAS systems to meet our customers&#039; needs; with the help of XFS we are able to provide them with the most reliable journaling filesystem technology available. Our open systems approach, which allows for cross-platform integration, gives our customers the flexibility to grow with their data requirements. In addition, BigStorage offers a variety of other features including total hardware redundancy, snapshotting, replication and backups directly from the unit. All of our products include BigStorage&#039;s 24/7 LiveResponse™ support. With LiveResponse™, we keep our team of experienced technical experts on call 24 hours a day, every day, to ensure that your storage investment remains online, all the time.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.echostar.com  Echostar DishPVR 721] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Echostar uses the XFS filesystem for its latest generation of satellite receivers, the DP721. Echostar chose XFS for its performance, stability and unique set of features.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS allowed us to meet a demanding requirement of recording two mpeg2 streams to the internal hard drive while simultaneously viewing a third pre-recorded stream. In addition, XFS allowed us to withstand unexpected power loss without filesystem corruption or user interaction.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We tested several other filesystems, but XFS emerged as the clear winner.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.sun.com/hardware/serverappliances/raq550/  Sun Cobalt RaQ™ 550] ==&lt;br /&gt;
&lt;br /&gt;
From the [http://www.sun.com/hardware/serverappliances/raq550/features.html features] page:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS is a journaling file system capable of quick fail over recovery after unexpected interruptions. XFS is an important feature for mission-critical applications as it ensures data integrity and dramatically reduces startup time by avoiding FSCK delay.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://pingu.salk.edu/  Center for Cytometry and Molecular Imaging at the Salk Institute] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I run the Center for Cytometry and Molecular Imaging at the Salk Institute in La Jolla, CA. We&#039;re a core facility for the Institute, offering flow cytometry, basic and deconvolution microscopy, phosphorimaging (radioactivity imaging) and fluorescent imaging.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I&#039;m currently in the process of migrating our data server to Linux/XFS. Our web server currently uses Linux/XFS. We have about 60 Gb on the data server which has a 100Gb SCSI RAID 5 array. This is a bit restrictive for our microscopists so in order that they can put more data online, I&#039;m adding another machine, also running Linux/XFS, with about 420 Gb of IDE-RAID5, based on Adaptec controllers....&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Servers are configured with quota and run Samba, NFS, and Netatalk for connectivity to the mixed bag of computers we have around here. I use the CVS XFS tree most of the time. I have not seen any problems in the several months I have been testing.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://coltex.nl/ Coltex Retail Group BV] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Coltex Retail Group BV in the Netherlands uses Red Hat Linux with XFS for their main database server, which collects the data from over 240 clothing retail stores throughout the Netherlands. Coltex depends on the availability of the server for over 100 employees in the main office for retrieval of logistical and sales figures. The database size is roughly 10GB, containing both historical and current data.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The entire production and logistical system depends on the availability of the system and downtime would mean a significant financial penalty. The speed and reliability of the XFS filesystem which has a proven track record and mature tools to go with it is fundamental to the availability of the system.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS has saved us a lot of time during testing and implementation. A long filesystem check is no longer needed when bad things happen, as they do. The increased speed of our database system, which is based on Progress 9.1C, is also a nice benefit of this filesystem.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.dkp.com/ DKP Effects] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;re a 3D computer graphics/post-production house. We&#039;ve currently got four fileservers using XFS under Linux online - three 350GB servers and one 800GB server. The servers are under fairly heavy load - network load to and from the dual NICs on the box is basically maxed out 18 hours a day - and we do have occasional lockups and drive failures. Thanks to Linux SW RAID5 and XFS, though, we haven&#039;t had any data loss, or significant down time.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.epigenomics.com/ Epigenomics] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We currently have several IDE-to-SCSI-RAID systems with XFS in production. The largest has a capacity of 1.5TB, the other 2 have 430GB each.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Data stored on these filesystems is on the one hand &amp;quot;normal&amp;quot; home directories and corporate documents and on the other hand scientific data for our laboratory and IT department.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.incyte.com/ Incyte Genomics] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I&#039;m currently in the process of slowly converting 21 clusters totaling 2300+ processors over to XFS.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;These machines are running a fairly stock RH7.1+XFS. The application is our own custom scheduler for doing genomic research. We have one of the world&#039;s largest sequencing labs, which generates a tremendous amount of raw data. Vast amounts of CPU cycles must be applied to it to turn it into useful data we can then sell access to.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Currently, a minority of these machines are running XFS, but as I can get downtime on the clusters I am upgrading them to 7.1+XFS. When I&#039;m done, it&#039;ll be about 10TB of XFS goodness... across 9G disks mostly.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.monmouth.edu/ Monmouth University] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve replaced our NetApp filer (80GB, $40,000). NetApp ONTAP software [runs on NetApp filers] is basically an NFS and CIFS server with their own proprietary filesystem. We were quickly running out of space and our annual budget was almost depleted. What were we to do?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;With an off-the-shelf Dell 4400 series server and 300GB of disks ($8,000 total), we were able to run Linux and Samba to emulate a NetApp filer.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;XFS allowed us to manage 300GB of data with absolutely no downtime (now going on 79 days) since implementation. Gone are the days of fearing the fsck of 300GB.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.astro.wisc.edu  The University of Wisconsin Astronomy Department] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;At the University of Wisconsin Astronomy Department we have been using Linux XFS since the first release. We currently have 31 Linux boxes running XFS on all filesystems with about 2.6 TB of disk space on these machines. We use XFS primarily on our data reduction systems, but we also use it on our web server and on one of the remote observing machines at the WIYN 3.5m Telescope at Kitt Peak (http://www.noao.edu/wiyn/wiyn.html).&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We will likely be using Linux XFS at least in part on the GLIMPSE program (http://www.astro.wisc.edu/sirtf/) which will likely require several TB of disk space to process the data.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.amoa.org/ The Austin Museum of Art] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The Austin Museum of Art has two file servers running RedHat 7.2_XFS upgraded from RedHat 7.1_XFS. Our webserver runs Domino on top of RedHat 7.3_XFS and we&#039;re getting about 70% better performance than the Domino server running on Windows 2000 Server. We&#039;re moving our workstations away from Windows and Microsoft Office to an LTSP server running on RedHat 7.3_XFS.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve become solely dependent on XFS for all of our data systems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.tecmath.com/ tecmath AG] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We use a production server with a 270 GB RAID 5 (hardware) disk array. It is based on a SuSE 7.2 distribution, but with a standard 2.4.12 kernel with XFS and LVM patches. The server provides NFS to 8 Unix clients as well as Samba to about 80 PCs. The machine also runs Bind 9, Apache, Exim, DHCP, POP3, and MySQL. I have tried out different configurations with ReiserFS, but I didn&#039;t manage to find a stable configuration with respect to NFS. Since I converted all disks to XFS some 3 months ago, we have not had any filesystem-related problems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.theiqgroup.com/ The IQ Group] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Here at the IQ Group, Inc. we use XFS for all our production and development servers.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Our OS of choice is Slackware Linux 8.0. Our hardware of choice is Dell and VALinux servers.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;As for applications, we run the standard Unix/Linux apps like Sendmail, Apache, BIND, DHCP, iptables, etc.; as well as Oracle 9i and Arkeia.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve been running XFS across the board for about 3 months now without a hitch (so far).&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Size-wise, our biggest server is about 40 GB, but that will be increasing substantially in the near future.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Our production servers are colocated, so a journaled FS was a must. Reboots are quick and no human interaction is required, unlike with a bad fsck on ext2. Additionally, our database servers gain additional integrity and robustness.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We originally chose XFS over ReiserFS and ext3 because of its age (it&#039;s been in production on SGI boxes for probably longer than all the other journaling FSs combined) and its speed appeared comparable as well.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.artsit.usyd.edu.au  Arts IT Unit, Sydney University] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I&#039;ve got XFS on a &#039;production&#039; file server. The machine could have up to 500 people logged in, but typically fewer than 200. Most are Mac users, connected via Netatalk for &#039;personal files&#039;, although there are shared areas for admin units. There are probably about 30-40 Windows users (Samba). It&#039;s the file server for an academic faculty at a university.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Hardware RAID, via a Mylex dual-channel controller with 4 drives, Intel Tupelo motherboard, Intel &#039;SC5000&#039; server chassis with redundant power and hot-swap SCSI bays. The system boots off a non-RAID single 9 GB UW-SCSI drive.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The only system &#039;crash&#039; was caused by someone accidentally unplugging it, just before we put it into production. It was back in full operation within 5 minutes. Without journaling, the fsck would have taken well over an hour. In day-to-day use it has run well.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://structbio.vanderbilt.edu/comp/  Vanderbilt University Center for Structural Biology] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I run a high-performance computing center for Structural Biology research at Vanderbilt University. We use XFS extensively, and have been since the late prerelease versions. I&#039;ve had nothing but good experiences with it.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We began using XFS in our search for a good solution for our RAID fileservers. We had such good experiences with it on these systems that we&#039;ve begun putting it on the root/usr/var partitions of every Linux system we run here. I even have it on my laptop these days. XFS in combination with the 2.4 NFS3 implementation performs very well for us, and we have great uptimes on these systems (Our 750GB ArenaII setup is at 143 days right now).&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;All told, we&#039;ve got about 1.2TB of XFS filesystems spinning right now. It&#039;s spread out across maybe a dozen or so filesystems and will continue to increase as we are growing fast and that&#039;s all we use now. Next up is putting it on our 17-node Linux cluster, which will bring that up to 1.5TB spread across 30 filesystems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;I, for one, would LOVE to see XFS make it into the kernel tree. From my perspective, it&#039;s one of the best things to happen to Linux in the 7 years I&#039;ve been using/administering it.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
==== 2008 Update ====&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We&#039;ve since moved our main home directories to a proprietary NAS, but continue to use XFS on 10TB of LVM storage for doing backup-to-disk from the same NAS.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www-cdf.fnal.gov/  CDF Experiment at Fermi National Lab] ==&lt;br /&gt;
&lt;br /&gt;
CDF, an elementary particle physics experiment at Fermi National Lab, is using XFS for all our cache disks.&lt;br /&gt;
&lt;br /&gt;
The usage model is that we have a PB tape archive (2 STK silos) as permanent storage. In front of this archive we are deploying a roughly 100TB disk cache system. The cache is made up of 50 2TB file servers based on cheap commodity hardware (3ware-based hardware RAID using IDE drives). The data is then processed by a cluster of 300 dual-CPU Linux PCs. The cache software is dCache, a DESY/FNAL product.&lt;br /&gt;
&lt;br /&gt;
The whole system is used by more than 300 active users from all over the world for batch processing for their physics data analysis.&lt;br /&gt;
&lt;br /&gt;
== [http://www.get2chip.com  Get2Chip, Inc.] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We are using XFS on 3 production file servers with approximately 1.5TB of data. Quite impressive, especially when we had a power outage and all three servers shut down. All servers came back up in minutes with no problems! We are looking at creating two more servers that would manage 2+ TB of data store.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.lando.co.za  Lando International Group Technologies] ==&lt;br /&gt;
&lt;br /&gt;
Lando International Group Technologies is the home of:&lt;br /&gt;
&lt;br /&gt;
* [http://www.lando.co.za Lando Technologies Africa (Pty) Ltd] - Internet Service Provider&lt;br /&gt;
* [http://www.lbsd.net Linux Based Systems Design] (Article 21). Not-for-profit company established to provide free Linux distributions and programs.&lt;br /&gt;
* Cell Park South Africa (Pty) Ltd. RSA Pat Appln 2001/10406. Collecting parking fees by means of cell phone SMS or voice.&lt;br /&gt;
* Read Plus Education (Pty) Ltd. Software based reading skills training and testing for ages 4 to 100.&lt;br /&gt;
* Mobivan. Mobile office including Internet access, fax, copying, printing, telephone, collection and delivery services, legal services, pre-paid phone and electricity services, bill payment, email, secretarial services, training facilities and management services.&lt;br /&gt;
* Lando International Marketing Agency. Direct marketing services, design and supply of promotional material, consulting, sourcing of capital and other funding.&lt;br /&gt;
* Illico. Software development and systems analysis on most platforms.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Throughout these companies, we use the XFS filesystem with [http://idms.lbsd.net IDMS Linux] on high-end Intel servers, with an average of 100 GB storage each. XFS stores our customer and user data, including credit card details, mail, routing tables, etc. We have not had one problem since the release of the first XFS patch.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.fcb-wilkens.com  Foote, Cone, &amp;amp;amp; Belding] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We are an advertisement company in Germany, and the use of the XFS filesystem is a story of success for us. In our Hamburg office, we have two file servers having a 420 Gig RAID in XFS format serving (almost) all our data to about 180 Macs and about 30 PCs using Samba and Netatalk. Some of the data is used in our offices in Frankfurt and Berlin, and in fact the Berlin office is just getting its own 250 Gig fileserver (using XFS) right now.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The general success with XFS has led us to switch over all our Linux servers to run on XFS as well (with the exception of two systems that are tied to tight specifications configuration-wise). XFS, even the old 1.0 version, has happily taken various abuse - broken SCSI controllers, broken RAID systems.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.moving-picture.co.uk/  Moving Picture Company] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We here at MPC use XFS/RedHat 7.2 on all of our graphics-workstations and file-servers. More info can be found in an [http://www.linuxuser.co.uk/articles/issue20/lu20-Linux_at_work-In_the_picture.pdf  article] LinuxUser magazine did on us recently.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://www.coremetrics.com/  Coremetrics, Inc.] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;We are currently using XFS for 25+ production web-servers, ~900GB Oracle db servers, with potentially 15+ more servers by mid 2003, with ~900GB+ databases. All XFS installed.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Also, our dev environment (except for the Sun boxes, which are all being migrated to x86 in the aforementioned server additions) consists of x86 dual-proc servers running Oracle, application servers, or web services as needed. All servers run XFS from images we&#039;ve got on our SystemImager servers.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;All production back-end servers are connected via FC1 or FC2 to a SAN containing ~13TB of raw storage, which will soon be converted from VxFS to XFS with the migration of Oracle to our x86 platforms.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== [http://evolt.org Evolt.org] ==&lt;br /&gt;
&lt;br /&gt;
&amp;quot;evolt.org, a world community for web developers promoting the mutual free exchange of ideas, skills and experiences, has had a great deal of success using XFS. Our primary webserver, which serves 100K hosts/month, primary Oracle database with ~25 GB of data, and free member hosting for 1000 users haven&#039;t had a minute of downtime since XFS has been installed. Performance has been spectacular and maintenance a breeze.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;font size=&amp;quot;-1&amp;quot;&amp;gt; &#039;&#039;All testimonials on this page represent the views of the submitters, and references to other products and companies should not be construed as an endorsement by either the organizations profiled, or by SGI. All trademarks (r) their respective owners.&#039;&#039; &amp;lt;/font&amp;gt;&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=Main_Page&amp;diff=2115</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=Main_Page&amp;diff=2115"/>
		<updated>2010-10-03T07:10:08Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: capitalize RPM :) we could also set up a redirect page, if really needed....&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!-- Welcome &lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#C5C5FF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
Welcome to XFS.org. This site is set up to help with the XFS file system.&amp;lt;/div&amp;gt;&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| width=&amp;quot;100%&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;vertical-align:top&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Information --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#E2EAFF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
== Information about XFS ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS FAQ]]&lt;br /&gt;
* [[XFS Status Updates]]&lt;br /&gt;
* [[XFS Papers and Documentation]]&lt;br /&gt;
* [[Linux Distributions shipping XFS]]&lt;br /&gt;
* [[XFS Rpm for RedHat|XFS RPMs for RedHat]]&lt;br /&gt;
* [[XFS Companies]]&lt;br /&gt;
* [http://oss.sgi.com/projects/xfs SGI XFS website]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/XFS Wikipedia XFS page]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Consulting --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#fffff0; align:right; &amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Professional XFS Consulting Services == &lt;br /&gt;
&lt;br /&gt;
[[Consulting Resources]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
| width=&amp;quot;50%&amp;quot; style=&amp;quot;vertical-align:top&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Developers --&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;margin:0; margin-top:10px; margin-right:10px; border:1px solid #dfdfdf; padding:0 1em 1em 1em; background-color:#F8F8FF; align:right;&amp;quot;&amp;gt;&lt;br /&gt;
== XFS Developer Resources ==&lt;br /&gt;
&lt;br /&gt;
* [[XFS email list and archives]]&lt;br /&gt;
* [http://oss.sgi.com/bugzilla/buglist.cgi?product=XFS&amp;amp;bug_status=NEW&amp;amp;bug_status=ASSIGNED&amp;amp;bug_status=REOPENED Bugzilla @ oss.sgi.com]&lt;br /&gt;
* [http://bugzilla.kernel.org/buglist.cgi?product=File+System&amp;amp;component=XFS&amp;amp;bug_status=NEW&amp;amp;bug_status=ASSIGNED&amp;amp;bug_status=REOPENED Bugzilla @ kernel.org]&lt;br /&gt;
* [[Getting the latest source code]]&lt;br /&gt;
* [[Unfinished work]]&lt;br /&gt;
* [[Shrinking Support]]&lt;br /&gt;
* [[Ideas for XFS]]&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{#meta: | u+4/+rib+YG96TifD0SN88xS84YSDm2cl61IU7ZIk9g= | verify-v1 }}&lt;br /&gt;
&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User:Ckujau&amp;diff=2102</id>
		<title>User:Ckujau</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User:Ckujau&amp;diff=2102"/>
		<updated>2010-08-25T20:43:11Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: email address corrected&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;  [mailto:xfswiki@nerdbynature.de Christian Kujau]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
	<entry>
		<id>https://xfs.org/index.php?title=User:Ckujau&amp;diff=2101</id>
		<title>User:Ckujau</title>
		<link rel="alternate" type="text/html" href="https://xfs.org/index.php?title=User:Ckujau&amp;diff=2101"/>
		<updated>2010-08-25T20:42:36Z</updated>

		<summary type="html">&lt;p&gt;Ckujau: New page:   [mailto:lists___nospam@nerdbynature.de Christian Kujau]&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;  [mailto:lists___nospam@nerdbynature.de Christian Kujau]&lt;/div&gt;</summary>
		<author><name>Ckujau</name></author>
	</entry>
</feed>