NVMe on RHEL7


Original article: https://www.dell.com/support/article/cn/zh/cnbsd1/sln312382/nvme-on-rhel7?lang=en

Posted on behalf of Lakshmi Narayanan Durairajan (Lakshmi_Narayanan_Du@dell.com)

What is NVMe?

NVM Express [NVMe], or Non-Volatile Memory Host Controller Interface Specification (NVMHCI), is a specification for accessing solid-state drives (SSDs) attached through the PCI Express (PCIe) bus. NVM is an acronym for non-volatile memory, as used in SSDs.

NVMe defines an optimized register interface, command set and feature set for PCIe SSDs. NVMe focuses on standardizing PCIe SSDs and improving their performance.

PCIe SSD devices designed based on the NVMe specification are NVMe-based PCIe SSDs.

For more details on NVMe, please refer to http://www.nvmexpress.org/. The NVMe devices currently used are NVMe 1.0c compliant.

In this blog we will be looking into RHEL 7 support for NVMe devices.

Currently Dell supports NVMe devices on RHEL 7 with an out-of-box [vendor-based] driver.

Following is the list of things that we will look into:

  • NVMe - Features Supported
  • NVMe Device : Listing the device and its Capabilities
  • Checking MaxPayLoad
  • NVMe Driver : List the driver information
  • NVMe Device Node and Naming conventions
  • Formatting with xfs and mounting the device
  • Using ledmon utility to manage backplane LEDs for NVMe device

NVMe- Features Supported

NVMe driver exposes the following features

  • Basic IO operations
  • Hot Plug
  • Boot Support [UEFI and Legacy]

The following table lists the RHEL 7 [out-of-box] driver supported features for NVMe on 12G and 13G machines.

Generation   Basic IO   Hot Plug   UEFI Boot   Legacy Boot
13G          Yes        Yes        Yes         No
12G          Yes        Yes        No          No

Table 1: RHEL 7 Driver Support

NVMe Device: Listing the device and its Capabilities

1) List the RHEL 7 OS information

[root@localhost ~]# uname -a

Linux localhost.localdomain 3.10.0-123.el7.x86_64 #1 SMP Mon May 5 11:16:57 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux 

2) Get the device details by using the lspci utility

a) We support Samsung-based NVMe drives. First, get the PCI slot id by using the following command

[root@localhost ~]# lspci | grep -i Samsung

45:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03)

47:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03)


b) The slot id will be listed as shown below [Fig 1]. Here "45:00.0" and "47:00.0" are the slots to which the drives are connected.

Figure 1: lspci listing the slot id

c) Use the slot id with the following lspci options to get the device details, capabilities and the corresponding driver

[root@localhost ~]# lspci -s 45:00.0 -v

45:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03) (prog-if 02)

Subsystem: Dell Express Flash NVMe XS1715 SSD 800GB

Physical Slot: 25

Flags: bus master, fast devsel, latency 0, IRQ 76

Memory at d47fc000 (64-bit, non-prefetchable) [size=16K]

Capabilities: [c0] Power Management version 3

Capabilities: [c8] MSI: Enable- Count=1/32 Maskable+ 64bit+

Capabilities: [e0] MSI-X: Enable+ Count=129 Masked-

Capabilities: [70] Express Endpoint, MSI 00

Capabilities: [40] Vendor Specific Information: Len=24 <?>

Capabilities: [100] Advanced Error Reporting

Capabilities: [180] #19

Capabilities: [150] Vendor Specific Information: ID=0001 Rev=1 Len=02c <?>

Kernel driver in use: nvme



The below [Fig 2] shows the Samsung NVMe device and the device details listed. It also shows the name of the driver, ‘nvme’ in this case, for this device.

Figure 2: lspci listing NVMe device details

Checking MaxPayLoad

Check the MaxPayload value by executing the following commands. It should be set to 256 bytes [Fig. 3].

[root@localhost home]# lspci | grep -i Samsung

45:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller 171X (rev 03) 

[root@localhost home]# lspci -vvv -s 45:00.0


Figure 3: MaxPayload set to 256 bytes
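Since the -vvv dump is long, the MaxPayload field is easy to miss; a small grep filter (a convenience sketch, not part of the original article) pulls out just that value. A sample DevCtl line stands in here for the live lspci output:

```shell
# Filter the negotiated MaxPayload value out of verbose lspci output.
# On a live system, pipe the real output in:
#   lspci -vvv -s 45:00.0 | grep -o 'MaxPayload [0-9]\+ bytes' | head -n 1
echo "RlxdOrd+ ExtTag- MaxPayload 256 bytes, MaxReadReq 512 bytes" \
  | grep -o 'MaxPayload [0-9]\+ bytes' | head -n 1
# → MaxPayload 256 bytes
```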


NVMe Driver: List the driver information

1) Use the modinfo command to list the driver details

[root@localhost ~]# modinfo nvme

filename: /lib/modules/3.10.0-123.el7.x86_64/extra/nvme/nvme.ko

version: 0.8-dell1.17

license: GPL

author: Samsung Electronics Corporation

srcversion: AB81DD9D63DD5DADDED9253

alias: pci:v0000144Dd0000A820sv*sd*bc*sc*i*

depends: 

vermagic: 3.10.0-123.el7.x86_64 SMP mod_unload modversions

parm: nvme_major:int

parm: use_threaded_interrupts:int 


The below [Fig 4] shows details of the NVMe driver nvme.ko 

Figure 4: Modinfo listing driver information


NVMe Device Node and Naming conventions

1) cat /proc/partitions displays the nvme device nodes.

a) Running the following command lists the nvme devices as nvme0n1 and nvme1n1

[root@localhost ~]# cat /proc/partitions

major minor #blocks name 

259 0 781412184 nvme0n1

8 0 1952448512 sda

8 1 512000 sda1

8 2 1951935488 sda2

11 0 1048575 sr0

253 0 52428800 dm-0

253 1 16523264 dm-1

253 2 1882980352 dm-2

259 3 390711384 nvme1n1 


Partition the device using any partitioning tool (fdisk, parted)

b) Executing the following command again lists the nvme devices along with their partitions

[root@localhost ~]# cat /proc/partitions

major minor #blocks name 

259 0 781412184 nvme0n1

259 1 390705068 nvme0n1p1

259 2 390706008 nvme0n1p2

8 0 1952448512 sda

8 1 512000 sda1

8 2 1951935488 sda2

11 0 1048575 sr0

253 0 52428800 dm-0

253 1 16523264 dm-1

253 2 1882980352 dm-2

259 3 390711384 nvme1n1

259 4 195354668 nvme1n1p1

259 5 195354712 nvme1n1p2 
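The nvme entries can be picked out of /proc/partitions with a short awk filter (a convenience sketch, not part of the original article). The here-document below stands in for the live file; on a real system read /proc/partitions directly:

```shell
# Print the name and size (in 1K blocks) of every NVMe node listed in
# /proc/partitions. NR > 2 skips the header and blank line; on a live
# system run:  awk 'NR > 2 && $4 ~ /^nvme/ {print $4, $3}' /proc/partitions
awk 'NR > 2 && $4 ~ /^nvme/ {print $4, $3}' <<'EOF'
major minor  #blocks  name

 259     0  781412184 nvme0n1
   8     0 1952448512 sda
 259     3  390711384 nvme1n1
EOF
```

For the sample input above this prints "nvme0n1 781412184" and "nvme1n1 390711384".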

Naming conventions:

The below [Fig 5] explains the naming convention of the device nodes

The number immediately after the string "nvme" is the device number, and the number after "n" is the namespace number (1 for these single-namespace drives)

Example:

nvme0n1 – Here the device number is 0

Partition numbers are appended after the device name with the prefix ‘p’

Example:

nvme0n1p1 – partition 1 of device 0

nvme0n1p2 – partition 2 of device 0

nvme1n1p1 – partition 1 of device 1

nvme1n1p2 – partition 2 of device 1 

Figure 5: Device node naming conventions
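The naming convention can be captured in a small bash helper (a hypothetical sketch, not part of the article; in NVMe naming the number after "n" is the namespace, which is 1 for these single-namespace drives):

```shell
# parse_nvme_name: split an NVMe block-device name into its device number,
# namespace number and (optional) partition number. Illustrative only.
parse_nvme_name() {
  local name=$1
  if [[ $name =~ ^nvme([0-9]+)n([0-9]+)(p([0-9]+))?$ ]]; then
    echo "device=${BASH_REMATCH[1]} namespace=${BASH_REMATCH[2]} partition=${BASH_REMATCH[4]:-none}"
  else
    echo "'$name' is not an NVMe device node name" >&2
    return 1
  fi
}

parse_nvme_name nvme0n1     # → device=0 namespace=1 partition=none
parse_nvme_name nvme1n1p2   # → device=1 namespace=1 partition=2
```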

Formatting with xfs and mounting the device


1) The following command formats partition 1 of nvme device 1 with xfs

[root@localhost ~]# mkfs.xfs /dev/nvme1n1p1

meta-data=/dev/nvme1n1p1         isize=256    agcount=4, agsize=12209667 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0
data     =                       bsize=4096   blocks=48838667, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=23847, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


2) Mount the device to a mount point and list the same 

[root@localhost ~]# mount /dev/nvme1n1p1 /mnt/

[root@localhost ~]# mount | grep -i nvme

/dev/nvme1n1p1 on /mnt type xfs (rw,relatime,seclabel,attr2,inode64,noquota) 
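To keep the filesystem mounted across reboots, an /etc/fstab entry can be added (a sketch using the device node and mount point from the example above; since nvme device numbers are not guaranteed to be stable across boots, a UUID= reference obtained from blkid is safer on production systems):

```
/dev/nvme1n1p1  /mnt  xfs  defaults  0 0
```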


Using ledmon utility to manage backplane LEDs for NVMe device

Ledmon and ledctl are two utilities for Linux that can be used to control LED status on drive backplanes. Normally drive backplane LEDs are controlled by a hardware RAID controller (PERC), but when using Software RAID on Linux (mdadm) for NVMe PCIE SSD, the ledmon daemon will monitor the status of the drive array and update the status of drive LEDs.

For further reading, check the link

https://www.dell.com/support/article/SLN310523/

Following are the steps to install and use the ledmon/ledctl utility

1) Installing OpenIPMI and ledmon/ledctl utilities:


Execute the following commands to install OpenIPMI and ledmon

[root@localhost ~]# yum install OpenIPMI

[root@localhost ~]# yum install ledmon-0.79-3.el7.x86_64.rpm 


2) Use the ledmon/ledctl utilities

If ledctl and ledmon are run concurrently, ledmon will eventually override the ledctl settings

a) Start ipmi and check its status as shown in [Fig. 6] using the following command

[root@localhost ~]# systemctl start ipmi

Figure 6: IPMI start and status

b) Start ledmon

[root@localhost ~]# ledmon

c) [Fig 7] shows the LED status after executing ledmon, for the working state of the device


Figure 7: LED status after ledmon run for working state of the device (green)

d) The below command will blink the drive LED [on the device node /dev/nvme0n1]

[root@localhost ~]# ledctl locate=/dev/nvme0n1

The below command will blink both drive LEDs [on the device nodes /dev/nvme0n1 and /dev/nvme1n1]

[root@localhost ~]# ledctl locate={ /dev/nvme0n1 /dev/nvme1n1 }

And the following command will turn off the locate LED

[root@localhost ~]# ledctl locate_off=/dev/nvme0n1
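To locate several drives in one pass, the commands above can be scripted. The sketch below only echoes the ledctl invocations (a dry run, since ledctl needs real backplane hardware and root privileges); remove the echo to execute them:

```shell
# Dry-run sketch: print the ledctl commands that would blink, then clear,
# the locate LED on each listed NVMe node. Drop 'echo' to run them for real.
nodes="/dev/nvme0n1 /dev/nvme1n1"
for dev in $nodes; do
  echo ledctl locate="$dev"
done
for dev in $nodes; do
  echo ledctl locate_off="$dev"
done
```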
