Block Reclamation
Space usage is monitored differently on a SAN than it is within a host's file system.
A SAN reports free space in terms of how many blocks have not been written to. These blocks are sometimes
referred to as “clean blocks”. The number of clean blocks is multiplied by the block size to provide a more
user-friendly space usage figure.
In contrast, host file systems report free space as the total capacity of a datastore or volume, less
the combined size of all files within the file system. When a file is deleted, free space increases instantly within the
host file system. However, in the majority of cases, deleting files on the host does not automatically notify the
SAN that those blocks can be freed, because the physical blocks remain in place after the deletion; only the
file system metadata is updated. This leads to a discrepancy between the amount of free space
reported within the file system and the amount of free space reported on the SAN. The
discrepancy is not limited to Nimble arrays; all block-storage SANs that use thin provisioning exhibit the
same behavior.
To work around this discrepancy, Windows, VMware, and Linux file systems implement
a feature that notifies the SAN when blocks are no longer in use by the host file system and can be freed. This feature
is called block unmap, or SCSI unmap.
Example:
Suppose 4 TB of data are written onto a Nimble volume mounted on a Windows 2008 R2 NTFS host, and
then 2 TB of data are deleted. When the files are deleted, the data blocks remain in place and the file system
metadata is updated to mark the blocks as no longer in use and available; however, the array continues
to see the blocks as in use because the data is still physically present on the volume. So, unless the host OS supports
SCSI unmap to inform the underlying storage target of the space freed on the file system, the storage
continues to report the data as still in use. In most cases, this is not a problem because the host file
system eventually reuses the deleted blocks for new data and the underlying storage does not report an
increase in utilized space; however, it can become a problem if the Nimble array is becoming full and the
space is needed for other volumes or snapshots.
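In that situation, a host that lacks automatic unmap support can still return the space manually. For example, on the Windows 2008 R2 host above, the Sysinternals sdelete utility can be used to zero the volume's free space (the drive letter d: is a placeholder for this illustration):

   sdelete -z d:

The array then reclaims the zeroed blocks through zero page detection, described below.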
Note For these features to work optimally, you must be running NimbleOS version 1.4.7.0 or later. SCSI
unmap is supported in NimbleOS 1.4.4.0 and later releases.
File systems support two methods for informing the storage of blocks that have been vacated on the file system: online
(periodic) discards and batched discards. In both methods, unused blocks are returned to the underlying
SAN storage by overwriting the unused file system blocks with zeroes.
Nimble arrays support "zero page detection" to reclaim previously provisioned space. The array will detect
the zeroes written by the host file system and reduce the reported storage space used on the fly.
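As a minimal illustration of how zero page detection can be used on a host that lacks unmap support, free space on a Linux volume can be overwritten with a temporary zero-filled file that is then deleted (the mount point /mnt/vol is a placeholder, and the dd command intentionally runs until the file system fills up):

   dd if=/dev/zero of=/mnt/vol/zerofill bs=1M
   sync
   rm /mnt/vol/zerofill

The array detects the zeroed blocks and reduces the reported space used; sdelete -z provides the equivalent behavior on Windows.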
Periodic (online) discards happen automatically; that is, no scheduled run is required to reclaim unused
space on the array. Batched discards, on the other hand, require that a user manually run a tool or command
to reclaim unused space.
Examples of file systems that support online discards:
• NTFS (Windows Server 2012, NimbleOS 1.4.4.0 and higher)
• VMFS6 (VMware ESXi 6.5 and higher)
• VMFS5 (VMware ESXi 5.0, removed in 5.0 Update 1 due to performance overhead)
• ext4 (version 2.6.27-5185-g8a0aba7 onwards)
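For the file systems above, online discards are enabled or verified on the host itself. As an illustration, on Windows Server 2012 and later the following command reports whether delete notifications (automatic unmap) are enabled; a value of 0 means unmap requests are sent to the array:

   fsutil behavior query DisableDeleteNotify

On ext4, online discards are enabled with the discard mount option (the device and mount point below are placeholders):

   mount -o discard /dev/sdb1 /mnt/data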
File systems that support batched discards:
• NTFS (Windows Server 2003-2008 R2, through use of "sdelete -z" utility)
• VMFS5 (VMware ESXi 5.0, Update 1 onwards, through use of "vmkfstools" utility)
• ext4 (Linux / v2.6.36-rc6-35-g7360d17)
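Each of these batched tools is run manually (or from a script) against a mounted file system or datastore. As an illustration, on an ESXi 5.x host the VMFS5 datastore is reclaimed from within its directory, where datastore1 and the percentage of free space to reclaim are placeholders:

   cd /vmfs/volumes/datastore1
   vmkfstools -y 60

On a Linux host, fstrim issues the batched discard for an ext4 mount (the mount point is a placeholder):

   fstrim -v /mnt/data

The Windows equivalent, sdelete -z, was shown in the example earlier in this section.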