
Why is the data size in Amazon S3 larger than the logical size of the volume when using FabricPool?

Category:
cloud-volumes-ontap-cvo
Specialty:
CORE
Applies to

  • Cloud Volumes ONTAP (CVO)
  • Amazon Web Services (AWS)
  • FabricPool

Answer

This behavior is by design and results from how objects are deleted and defragmented.

  • Storage efficiencies such as compression, deduplication, and compaction are preserved when data is tiered to the cloud, which reduces object storage and data transfer costs.
  • However, FabricPool does not delete individual blocks from the attached object store. Instead, FabricPool deletes an entire object only after a given percentage of the object's blocks are no longer referenced by ONTAP.

Example:

A 4 MB object containing 1,024 4 KB blocks is tiered to Amazon S3.
Defragmentation and deletion occur only when ONTAP references fewer than 205 of the 1,024 4 KB blocks (20%).
Once enough of the 1,024 blocks become unreferenced, the original 4 MB object is deleted and a new object is created.
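The arithmetic behind this example can be sketched as a small calculation. This is an illustrative model only, not ONTAP code; the function name and the 20% default are taken from this article:

```python
# Illustrative sketch of the unreclaimed-space check described above.
# An object becomes eligible for deletion/defragmentation only once
# fewer than 20% of its 4 KB blocks are still referenced by ONTAP.

OBJECT_SIZE = 4 * 1024 * 1024            # 4 MB object
BLOCK_SIZE = 4 * 1024                    # 4 KB blocks
BLOCKS_PER_OBJECT = OBJECT_SIZE // BLOCK_SIZE  # 1,024 blocks

def eligible_for_defrag(referenced_blocks: int, threshold: float = 0.20) -> bool:
    """True when the object may be deleted and its live blocks rewritten,
    i.e. fewer than threshold * 1,024 blocks are still referenced."""
    return referenced_blocks < threshold * BLOCKS_PER_OBJECT

print(BLOCKS_PER_OBJECT)          # 1024
print(eligible_for_defrag(205))   # False: 205 referenced blocks is still too many
print(eligible_for_defrag(204))   # True: fewer than 205 of 1,024 referenced
```

Until that threshold is crossed, the unreferenced blocks remain in the S3 object, which is why the object store can hold more data than the volume's logical size.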

Additional Information

This percentage, known as the unreclaimed space threshold, is customizable; the default for Amazon S3 is 20%.
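As a rough sketch, the threshold can be adjusted per attached object store from the ONTAP CLI at the advanced privilege level; the aggregate and object-store names below are placeholders, and the exact option name and accepted values should be verified against the documentation for your ONTAP release:

```
::> set advanced
::*> storage aggregate object-store modify -aggregate aggr1
      -object-store-name my-s3-store -unreclaimed-space-threshold 40%
```

Raising the threshold reclaims S3 space more aggressively at the cost of more object rewrites; lowering it reduces rewrite traffic but lets more unreferenced data accumulate in the object store.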

 

NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.