Why is the data size on Amazon S3 larger than the logical size of the volume when using FabricPool?

Applies to

  • Cloud Volumes ONTAP (CVO)
  • Amazon Web Services (AWS)
  • FabricPool

Answer

This is by design, as a result of how object deletion and defragmentation work.

  • Storage efficiencies such as compression, deduplication, and compaction are preserved when data is moved to the cloud tier, reducing object storage and data transfer costs.
  • However, FabricPool does not delete individual blocks from the attached object store. Instead, FabricPool deletes an entire object once a certain percentage of the blocks in that object are no longer referenced by ONTAP.

Example:

A 4 MB object tiered to Amazon S3 contains 1,024 4 KB blocks.
Defragmentation and deletion occur only when ONTAP references fewer than 205 of the 4 KB blocks (20% of 1,024).
Once enough of the 1,024 blocks are no longer referenced, the original 4 MB object is deleted and a new object is created.
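The threshold arithmetic in this example can be sketched in Python. The function name and defaults below are illustrative only, not an ONTAP API; they simply encode the rule that an object becomes eligible for deletion once the number of referenced blocks drops below the unreclaimed space threshold:

```python
def object_eligible_for_deletion(referenced_blocks: int,
                                 total_blocks: int = 1024,
                                 threshold_pct: int = 20) -> bool:
    """Return True if a tiered object can be defragmented and deleted.

    An object is eligible once the blocks still referenced by ONTAP
    fall below threshold_pct of the object's total blocks.
    """
    return referenced_blocks < total_blocks * threshold_pct / 100

# 20% of 1,024 blocks is 204.8, so 205 referenced blocks keep the
# object alive, while 204 or fewer make it eligible for deletion.
print(object_eligible_for_deletion(205))  # False
print(object_eligible_for_deletion(204))  # True
```

Raising the threshold (see Additional Information below) makes objects eligible sooner, trading lower S3 capacity usage for more defragmentation traffic.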

Additional Information

This percentage is the unreclaimed space threshold. It is customizable, and the default for Amazon S3 is 20%.

 

NetApp provides no representations or warranties regarding the accuracy or reliability or serviceability of any information or recommendations provided in this publication or with respect to any results that may be obtained by the use of the information or observance of any recommendations provided herein. The information in this document is distributed AS IS and the use of this information or the implementation of any recommendations or techniques herein is a customer's responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. This document and the information contained herein may be used solely in connection with the NetApp products discussed in this document.