Does this even get the basics correct?

Story: UNIX file system fragmentation
phsolide

Jun 10, 2008
1:20 PM EDT
The "Fragmentation" in NTFS comes from storing the disk block numbers as "extents" (first block number, number of blocks) rather than as individual block numbers as Berkeley FFS and EXT2/EXT3, right?

I mean, SGI had "EFS" in the late 80s, which was basically FFS except that it stored blocks in extents rather than the block number/indirect blocks/doubly-indirect blocks scheme that FFS had at the time. SGI was dumb enough to include a nearly bug-free defragmenter, and a default cron entry that ran it once a week, rather than doing an MSFT and originally releasing NT without a working defrag, or even a documented defragmentation interface.

"Fragments" in FFS (and I believe EXT2) are the smaller-than-a-single-block pieces that might exist at the end of a file. Files bigger than a certain number of bytes are guaranteed to be "fragmented" in the NTFS sense.

"Fragments" in NTFS are files that didn't get allocated in contiguous runs of blocks, so the file header has to have more than 1 extent in it, right?

This article misses that distinction entirely, and thus seems to me to get the whole thing wrong.
