Data deduplication finds and removes duplication within data on a volume while
ensuring that the data remains correct and complete. This makes it possible to store
more file data in less space on the volume.
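The idea above — store each unique piece of data once and record files as references to it — can be sketched in a few lines. This is a minimal illustration, not the Windows implementation: real deduplication engines use variable-size, content-defined chunking, whereas this sketch uses fixed-size chunks and SHA-256 hashes to keep it short.

```python
import hashlib

def deduplicate(data: bytes, chunk_size: int = 4) -> tuple[dict, list]:
    """Split data into fixed-size chunks and store each unique chunk once."""
    store: dict[str, bytes] = {}   # chunk hash -> chunk bytes (the "chunk store")
    refs: list[str] = []           # the file, recorded as a list of chunk hashes
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # duplicate chunks are stored only once
        refs.append(digest)
    return store, refs

def reassemble(store: dict, refs: list) -> bytes:
    """Rebuild the original data from its chunk references."""
    return b"".join(store[h] for h in refs)

store, refs = deduplicate(b"ABCDABCDABCDXYZ!")
# The repeated "ABCD" chunk is stored once but referenced three times,
# so four chunk references need only two stored chunks.
assert len(refs) == 4 and len(store) == 2
assert reassemble(store, refs) == b"ABCDABCDABCDXYZ!"
```

Because reassembly walks the reference list, the data stays correct and complete even though duplicates are physically stored only once.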
What Data Deduplication Does
After files are optimized, the volume contains the following:
- Chunk store (the optimized file data, stored as chunks packed into container files)
- Additional free space (because the optimized files and chunk store occupy much less space than they did before they were optimized)
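The space savings come from optimized files sharing chunks in the chunk store. A toy model can make the accounting concrete; the names here (`chunk_store`, `files`, the stub-as-list representation standing in for a reparse point) are illustrative, not real on-disk structures.

```python
# Toy model of an optimized volume: each file is reduced to a small stub
# (standing in for the reparse point) that lists the chunks holding its data.
chunk_store = {
    "h1": b"A" * 64,   # 64-byte chunk shared by both files
    "h2": b"B" * 64,
}
files = {
    "report.docx": ["h1", "h2"],
    "copy.docx":   ["h1", "h2"],   # duplicate content: same chunk references
}

# Logical size: what the files appear to contain when read back.
logical_size = sum(
    sum(len(chunk_store[h]) for h in refs) for refs in files.values()
)
# Physical size: what the chunk store actually occupies.
physical_size = sum(len(c) for c in chunk_store.values())
saved = logical_size - physical_size
assert (logical_size, physical_size, saved) == (256, 128, 128)
```

Two 128-byte files with identical content occupy 128 bytes physically; the other 128 bytes become the additional free space described above.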
When new files are added to the volume, they are not optimized right away. Only
files that have not been changed for a minimum amount of time are optimized.
(This minimum amount of time is set by user-configurable policy.)
The chunk store consists of one or more chunk store container files. New chunks are
appended to the current chunk store container. When its size reaches about 1 GB,
that container file is sealed and a new container file is created.
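The append-seal-rotate behavior described above can be sketched as a simple class. The size limit is scaled down from ~1 GB to 100 bytes so the example runs instantly; the class and method names are invented for the sketch.

```python
class ChunkContainer:
    """Append-only container; sealed once it reaches the size limit."""
    def __init__(self, limit: int) -> None:
        self.limit = limit
        self.size = 0
        self.sealed = False

    def has_room(self, n: int) -> bool:
        return not self.sealed and self.size + n <= self.limit

class ChunkStore:
    """Appends chunks to the current container, rotating when it fills up."""
    def __init__(self, limit: int = 100) -> None:  # 100 bytes stands in for ~1 GB
        self.limit = limit
        self.containers = [ChunkContainer(limit)]

    def append(self, chunk: bytes) -> int:
        current = self.containers[-1]
        if not current.has_room(len(chunk)):
            current.sealed = True                  # seal the full container...
            current = ChunkContainer(self.limit)
            self.containers.append(current)        # ...and start a new one
        current.size += len(chunk)
        return len(self.containers) - 1            # index of the container used

store = ChunkStore(limit=100)
for _ in range(5):
    store.append(b"x" * 30)   # 30-byte chunks: only three fit per container
assert len(store.containers) == 2
assert store.containers[0].sealed and not store.containers[1].sealed
```

After five 30-byte appends the first container holds 90 bytes and is sealed; the fourth and fifth chunks land in a fresh container that remains open for writes.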
When an optimized file is deleted from a deduplication-enabled volume, its reparse point is deleted, but its data chunks are not immediately removed from the chunk store. The deduplication garbage collection job later reclaims any chunks that are no longer referenced.
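Reclaiming unreferenced chunks is essentially a mark-and-sweep pass: mark every chunk still referenced by a surviving file, then sweep away the rest. A minimal sketch, with invented names:

```python
def collect_garbage(chunk_store: dict, files: dict) -> dict:
    """Drop chunks no longer referenced by any file (mark-and-sweep sketch)."""
    live = {h for refs in files.values() for h in refs}          # mark phase
    return {h: c for h, c in chunk_store.items() if h in live}   # sweep phase

chunk_store = {"h1": b"aa", "h2": b"bb", "h3": b"cc"}
files = {"kept.txt": ["h1", "h2"]}   # the file that referenced h3 was deleted
chunk_store = collect_garbage(chunk_store, files)
assert set(chunk_store) == {"h1", "h2"}   # h3 is reclaimed; live chunks survive
```

Deferring this work to a scheduled job keeps file deletion fast: removing a reparse point is cheap, and the reference scan runs later in bulk.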
Requirements for Data Deduplication
Data deduplication is supported only on the following:
- Windows Server operating systems, beginning with Windows Server 2012
- Cluster Shared Volume File System (CSVFS) for non-VDI workloads, or any workloads on Windows Server 2012

Data deduplication is not supported for the following:
- Files approaching or larger than 1 TB in size
- Encrypted files