Cinder LVM Oversubscription in thin provisioning

Oversubscription in thin provisioning
  • Cinder spec: Over Subscription in Thin Provisioning https://review.openstack.org/#/c/129342/12/specs/kilo/over-subscription-in-thin-provisioning.rst
  • cinder bp: Over subscription in thin provisioning https://blueprints.launchpad.net/cinder/+spec/over-subscription-in-thin-provisioning

The so-called oversubscription in thin provisioning is effectively an over-subscription ratio limit on a thin-provisioning storage pool, preventing the pool's apparent capacity from being inflated without bound. The corresponding configuration option is max_over_subscription_ratio, with a default value of 20.0. e.g.

[lvm-1]
volume_group = centos
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-1
iscsi_helper = tgtadm
iscsi_protocol = iscsi
lvm_max_over_subscription_ratio = 25.0

The feature takes effect in the Cinder scheduler filter capacity_filter. The code is as follows:

# cinder/scheduler/filters/capacity_filter.py

        # Only evaluate using max_over_subscription_ratio if
        # thin_provisioning_support is True. Check if the ratio of
        # provisioned capacity over total capacity has exceeded over
        # subscription ratio.
        if (thin and backend_state.thin_provisioning_support and
                backend_state.max_over_subscription_ratio >= 1):
            provisioned_ratio = ((backend_state.provisioned_capacity_gb +
                                  requested_size) / total)
            LOG.debug("Checking provisioning for request of %s GB. "
                      "Backend: %s", requested_size, backend_state)
            if provisioned_ratio > backend_state.max_over_subscription_ratio:
                msg_args = {
                    "provisioned_ratio": provisioned_ratio,
                    "oversub_ratio": backend_state.max_over_subscription_ratio,
                    "grouping": grouping,
                    "grouping_name": backend_state.backend_id,
                }
                LOG.warning(
                    "Insufficient free space for thin provisioning. "
                    "The ratio of provisioned capacity over total capacity "
                    "%(provisioned_ratio).2f has exceeded the maximum over "
                    "subscription ratio %(oversub_ratio).2f on %(grouping)s "
                    "%(grouping_name)s.", msg_args)
                return False

If provisioned_ratio > backend_state.max_over_subscription_ratio evaluates to True, the currently provisioned ratio already exceeds the limit set for oversubscription in thin provisioning, so the current Cinder backend can no longer create new volumes.
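
To make the check concrete, here is a minimal standalone sketch of the same arithmetic; all numbers are assumed for illustration and do not come from the source.

# Minimal sketch of the capacity_filter check above (assumed numbers).
total = 100.0                        # total_capacity_gb reported by the backend
provisioned_capacity_gb = 2490.0     # virtual capacity already provisioned
max_over_subscription_ratio = 25.0   # as configured in the [lvm-1] example
requested_size = 20.0                # size (GB) of the new volume request

provisioned_ratio = (provisioned_capacity_gb + requested_size) / total  # 25.1
print(provisioned_ratio > max_over_subscription_ratio)  # True -> backend filtered out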

So how is provisioned_ratio obtained? The code is as follows:

provisioned_ratio = ((backend_state.provisioned_capacity_gb + requested_size) / total)

First, let's be clear about what these variables mean and where their values come from:

# cinder/utils.py

    # provisioned_capacity_gb is the apparent total capacity of
    # all the volumes created on a backend, which is greater than
    # or equal to allocated_capacity_gb, which is the apparent
    # total capacity of all the volumes created on a backend
    # in Cinder. Using allocated_capacity_gb as the default of
    # provisioned_capacity_gb if it is not set.
    allocated_capacity_gb = capability.get('allocated_capacity_gb', 0)
    provisioned_capacity_gb = capability.get('provisioned_capacity_gb',
                                             allocated_capacity_gb)
    thin_provisioning_support = capability.get('thin_provisioning_support',
                                               False)
    total_capacity_gb = capability.get('total_capacity_gb', 0)
    free_capacity_gb = capability.get('free_capacity_gb', 0)
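
For context, these capability values come from the stats dictionary that a volume driver reports back to the scheduler. A minimal hedged sketch of such a dictionary follows; the keys match the ones read in cinder/utils.py above, while the values are made-up assumptions.

# Hypothetical capability values a backend might report (illustrative only).
stats = {
    'volume_backend_name': 'lvm-1',
    'total_capacity_gb': 100.0,
    'free_capacity_gb': 80.0,
    'allocated_capacity_gb': 20.0,    # provisioned through Cinder
    'provisioned_capacity_gb': 50.0,  # provisioned in total, >= allocated
    'thin_provisioning_support': True,
    'max_over_subscription_ratio': 25.0,
}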

Official description

  • total_capacity: This is an existing parameter already reported by the driver. It is the total physical capacity. Example: Assume backend A has a total physical capacity of 100G.

  • available_capacity: This is an existing parameter already reported by the driver. It is the real physical capacity available to be used. Example: Assume backend A has a total physical capacity of 100G. There are 10G thick luns and 20G thin luns (10G out of the 20G thin luns are written). In this case, available_capacity = 100 - 10 - 10 = 80G.

  • used_capacity: This parameter is calculated by the difference between total_capacity and available_capacity. It is used below for calculating used ratio.

  • volume_size: This is an existing parameter. It is the size of the volume to be provisioned.

  • provisioned_capacity: This is a new parameter. It is the apparent allocated space indicating how much capacity has been provisioned. Example: User A created 2x10G volumes in Cinder from backend A, and user B created 3x10G volumes from backend A directly, without using Cinder. Assume those are all the volumes provisioned on backend A. The total provisioned_capacity will be 50G and that is what the driver should be reporting.

  • allocated_capacity: This is an existing parameter. Cinder uses this to keep track of how much capacity has been allocated through Cinder. Example: Using the same example above for provisioned_capacity, the allocated_capacity will be 20G because that is what has been provisioned through Cinder. allocated_capacity is documented here to differentiate from the new parameter provisioned_capacity. (See the worked numbers right after this list.)
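
Putting the example above into numbers, here is a quick sketch of how the two values differ:

# Worked numbers from the spec example: user A creates 2 x 10G volumes
# through Cinder; user B creates 3 x 10G volumes directly on the backend.
cinder_volumes_gb = [10, 10]
external_volumes_gb = [10, 10, 10]

allocated_capacity_gb = sum(cinder_volumes_gb)                               # 20
provisioned_capacity_gb = sum(cinder_volumes_gb) + sum(external_volumes_gb)  # 50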

Brief summary

  • allocated_capacity_gb: the capacity actually allocated through Cinder.
  • provisioned_capacity_gb: the provisioned capacity, greater than or equal to allocated_capacity_gb.
    Note: "provisioned" is a storage-industry term for virtual capacity that has been logically handed out, whether thin- or thick-provisioned; it needs to be distinguished from the capacity actually allocated.
  • total_capacity_gb: the actual total capacity.
  • free_capacity_gb: the actual remaining capacity.

Meaning in the LVM driver

  • allocated_capacity_gb: total size of the Cinder volumes allocated through Cinder
  • free_capacity_gb: free space remaining in the VG
  • provisioned_capacity_gb: total size provisioned on the VG; because of sparse (thin) allocation, this value can be very large
  • total_capacity_gb: total size of the VG
  • max_over_subscription_ratio: the maximum over-subscription ratio

Now let's come back to the formula:

provisioned_ratio = ((backend_state.provisioned_capacity_gb + requested_size) / total)

provisioned_ratio is the ratio of capacity that has already been provisioned, whether thin or thick, i.e. exactly the ratio we do not want to grow too large. It should stay below max_over_subscription_ratio.
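
Rearranging the inequality makes the budget explicit: a request passes only while provisioned_capacity_gb + requested_size <= total * max_over_subscription_ratio. With the same assumed numbers as in the earlier sketch:

# The check rearranged as a virtual-capacity budget (assumed numbers).
total_capacity_gb = 100.0
max_over_subscription_ratio = 25.0
max_provisionable_gb = total_capacity_gb * max_over_subscription_ratio  # 2500.0 GB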

There is one caveat, though: when the LVM driver uses thin provisioning, provisioned_capacity_gb immediately equals the VG's total capacity. You will find that even before you have created any volume, provisioned_capacity_gb is already equal to the size of the VG. The code is as follows:

# cinder/volume/drivers/lvm.py
        if self.configuration.lvm_mirrors > 0:
            total_capacity = \
                self.vg.vg_mirror_size(self.configuration.lvm_mirrors)
            free_capacity = \
                self.vg.vg_mirror_free_space(self.configuration.lvm_mirrors)
            provisioned_capacity = round(
                float(total_capacity) - float(free_capacity), 2)
        elif self.configuration.lvm_type == 'thin':
            total_capacity = self.vg.vg_thin_pool_size
            free_capacity = self.vg.vg_thin_pool_free_space
            provisioned_capacity = self.vg.vg_provisioned_capacity
        else:
            total_capacity = self.vg.vg_size
            free_capacity = self.vg.vg_free_space
            provisioned_capacity = round(
                float(total_capacity) - float(free_capacity), 2)

This is because Cinder assumes that when you use LVM thin provisioning, the entire VG is thin-provisioned; mixing thin and thick in one VG is not supported, as the capacity could not be computed correctly otherwise. Therefore, when using LVM thin provisioning, be sure to dedicate a clean VG to the LVM backend; otherwise the capacity accounting will go wrong.
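
For reference, here is a hedged sketch of what a thin-provisioned LVM backend section on its own dedicated VG could look like; the names lvm-thin and cinder-thin-vg are illustrative assumptions.

[lvm-thin]
volume_group = cinder-thin-vg
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-thin
lvm_type = thin
lvm_max_over_subscription_ratio = 20.0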
