Many VMware vSphere architects and engineers design their vSphere clusters around specific CPU over-commitment (overbooking) ratios to define a level of service (SLA or OLA) and differentiate between compute tiers. They usually want to achieve something like

  • Tier 1 cluster (mission-critical applications) – 1:1 vCPU / pCPU ratio
  • Tier 2 cluster (business-critical applications) – 3:1 vCPU / pCPU ratio
  • Tier 3 cluster (supporting applications) – 5:1 vCPU / pCPU ratio
  • Tier 4 cluster (virtual desktops) – 10:1 vCPU / pCPU ratio

Before vSphere 6.5 we had to monitor it externally with vROps or another monitoring tool. Some time ago I blogged about how to achieve this with PowerCLI and Log Insight – ESXi host vCPU/pCPU reporting via PowerCLI to LogInsight.

vSphere 6.5 DRS introduced an additional option to set the maximum CPU over-commitment. It limits the number of vCPUs per pCPU in a particular DRS cluster. However, it is good to know that there are two different advanced DRS options (configuration parameters) for specifying the vCPU:pCPU ratio, and each setting behaves differently. See the table below.

DRS Advanced Option      Scope     Min-Max Value
MaxVcpusPerClusterPct    cluster   0% – 500%
MaxVCPUsPerCore          host      0 – 32
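To make the difference between the two scopes concrete, here is a minimal sketch in Python. This is my own simplified model of the admission check, not VMware's actual DRS code, and the function names are made up for illustration:

```python
# Simplified model of the two DRS over-commitment checks.
# Illustrative only; this is NOT VMware's actual admission-control logic.

def cluster_allows(total_vcpus: int, total_pcpus: int, max_pct: int) -> bool:
    """MaxVcpusPerClusterPct: the ratio is checked across the whole cluster."""
    return total_vcpus * 100 <= total_pcpus * max_pct

def host_allows(host_vcpus: int, host_cores: int, max_per_core: int) -> bool:
    """MaxVCPUsPerCore: the ratio is checked on every single ESXi host."""
    return host_vcpus <= host_cores * max_per_core

# 4-host cluster, 2 cores per host -> 8 pCPUs, MaxVcpusPerClusterPct = 100 (1:1)
print(cluster_allows(total_vcpus=8, total_pcpus=8, max_pct=100))    # True
print(cluster_allows(total_vcpus=10, total_pcpus=8, max_pct=100))   # False

# The cluster-wide check happily accepts all 8 vCPUs landing on one 2-core host,
# while a per-host 1:1 limit (MaxVCPUsPerCore = 1) would reject that placement.
print(host_allows(host_vcpus=8, host_cores=2, max_per_core=1))      # False
```

The key takeaway of the model: the cluster-wide percentage says nothing about how vCPUs are distributed across hosts, while the per-core option constrains each host individually.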

Setting these options via the GUI is a little bit tricky, so it is worth mentioning how the GUI setting “CPU Over-Commitment” maps to the DRS cluster advanced options.

In my lab, I have VCSA 6.5 U1c (build 7119157) so I did some tests.

If I set “CPU Over-Commitment” in the vSphere Web Client (Flash/Flex), it sets MaxVcpusPerClusterPct, so the setting applies to the whole vSphere cluster.

However, the vSphere Client (HTML5) sets MaxVCPUsPerCore, so the setting applies per ESXi host.

Therefore, it is good to know what you want to achieve, and to double-check the DRS advanced options afterwards.

See the different behavior in the screenshots below.

6.5 U1c (build 7119157) vSphere Web Client (Flash/Flex) sets MaxVcpusPerClusterPct
6.5 U1c (build 7119157) vSphere Client (HTML5) sets MaxVCPUsPerCore

So this is how it should work. Now let’s run some tests to understand the real behavior.

TEST 1: MaxVcpusPerClusterPct = 0 

Let’s set MaxVcpusPerClusterPct to 0, which allows a 0:1 vCPU/pCPU ratio. In other words, no VM can run in the cluster. And it works as expected. When I try to power on a VM, I get the error “The total number of virtual CPUs present or requested in virtual machines’ configuration has exceeded the limit on the host: 0”. The message is a little misleading because the rule is cluster-wide, not per host, but the enforcement works as expected.

Error when MaxVcpusPerClusterPct is set to 0

TEST 2: MaxVcpusPerClusterPct = 100 

Let’s set MaxVcpusPerClusterPct to 100%, which allows a 1:1 vCPU/pCPU ratio. I have a 4-node DRS cluster where each ESXi host has two cores (pCPUs), so I have 8 pCPUs available in the cluster. And I can indeed start only four VMs, because each has 2 vCPUs, so I can run up to 8 vCPUs in the DRS cluster.

Error when MaxVcpusPerClusterPct is set to 100

That is great, but it is worth mentioning that the VMs were started on a single ESXi host even though I have 4 ESXi hosts in the DRS cluster. So the vCPU/pCPU ratio is enforced per cluster but not per ESXi host: I ended up with 8 vCPUs on a single ESXi host that has just 2 pCPUs. But that is the expected behavior. So far so good.

TEST 3: MaxVcpusPerCore = 0 

MaxVcpusPerCore should solve the problem observed in the previous two tests, because it sets the vCPU/pCPU ratio per host, which is much better from a predictability point of view.

Let’s set MaxVcpusPerCore to 0. My expectation was that I would not be able to start any VM, but that was NOT the case. I was able to start many VMs and exceed the expected vCPU/pCPU ratio. This is unexpected behavior.

TEST 4: MaxVcpusPerCore = 1 

Let’s set MaxVcpusPerCore to 1. My expectation was that I would not be able to start more than 2 vCPUs per ESXi host, so only one VM with 2 vCPUs. Unfortunately, I was able to start many more vCPUs per ESXi host. This is again unexpected behavior.

TEST 5: MaxVcpusPerCore = 4 

I have been informed by a DRS engineer that the minimum value allowed by the host over-commitment ratio option (MaxVCPUsPerCore) is 4:1.

So, let’s set MaxVcpusPerCore to 4. I have prepared nine VMs with 4 vCPUs each, and my expectation is that I will not be able to start more than 8 vCPUs per ESXi host, so only two VMs with 4 vCPUs per ESXi host. And because I have 4 ESXi hosts in the cluster, I should be able to start a maximum of 8 VMs.
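The expected numbers for this test can be double-checked with a few lines of plain arithmetic on the lab values above:

```python
# TEST 5 arithmetic: 4 hosts, 2 cores each, MaxVcpusPerCore = 4, 4-vCPU VMs.
hosts = 4
cores_per_host = 2
max_vcpus_per_core = 4
vcpus_per_vm = 4

vcpu_budget_per_host = cores_per_host * max_vcpus_per_core  # 8 vCPUs per host
vms_per_host = vcpu_budget_per_host // vcpus_per_vm         # 2 VMs per host
max_vms = hosts * vms_per_host                              # 8 VMs in the cluster
print(vcpu_budget_per_host, vms_per_host, max_vms)          # 8 2 8
```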

Expected error message when MaxVcpusPerCore is set to 4 and vCPU:pCPU is over 4:1 per ESXi host.

UPDATE 2018-04-20: The issue with MaxVcpusPerCore is fixed in vSphere 6.5 U2. This is stated in the Release Notes: “The advanced vSphere DRS parameter MaxVcpusPerCore might not work as expected and the desired ratio of virtual CPUs per physical CPU or core will not take effect in configurations below 4:1. MaxVcpusPerCore supported ratios now start from 1:1.”
Please note that MaxVcpusPerCore does not support the ratio 0:1, in contrast with MaxVcpusPerClusterPct, where 0:1 is possible and effectively prevents any VM from running in the cluster.


The advanced DRS setting MaxVcpusPerClusterPct supports values between 0 and 500 and represents the percentage of vCPUs to pCPUs across the whole DRS cluster. If the value is higher than 500, enforcement does not work. So, the vCPU/pCPU ratio can be enforced between 0:1 and 5:1. A vCPU:pCPU ratio of 0:1 is a special setting where no VM can be powered on in the vSphere cluster. This is a slightly risky setting, but it can be used to put the whole cluster into a kind of “maintenance mode” and forbid anyone from running VMs there.

The advanced DRS setting MaxVcpusPerCore currently supports values between 0 and 32, but it works only with values between 4 and 32. This setting enforces the vCPU/pCPU ratio on each ESXi host within the DRS cluster. This means that the minimum vCPU/pCPU ratio configurable by this option is 4:1.

To be honest, I think MaxVcpusPerCore makes more sense for vSphere architects/designers than MaxVcpusPerClusterPct, because the vCPU/pCPU overbooking ratio defines CPU quality per ESXi host.

VMware is internally considering aligning the behavior of MaxVcpusPerClusterPct and MaxVcpusPerCore and allowing lower vCPU/pCPU ratios with MaxVcpusPerCore. I will track how this topic evolves. Stay tuned.


[root@www ~]# sed [-nefri] [action]
-n : silent mode. Normally sed prints every line of its input (STDIN) to the terminal; with -n, only the lines actually processed by a sed action are printed.
-e : run the sed action given directly on the command line;
-f : read the sed actions from a file; -f filename runs the sed actions stored in filename;
-r : use extended regular expression syntax for the actions (the default is basic regular expressions);
-i : modify the file in place instead of printing the result to the terminal.
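A small demonstration of -e versus -f (the file path and sample data are invented for the example):

```shell
#!/bin/sh
# -e takes the sed action on the command line; -f reads it from a file.
printf '2d\n' > /tmp/actions.sed                       # action file: delete line 2

printf 'one\ntwo\nthree\n' | sed -e '2d'               # prints: one, three
printf 'one\ntwo\nthree\n' | sed -f /tmp/actions.sed   # same result
```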

Action format: [n1[,n2]]function
n1, n2 : optional; they select the lines the action applies to. For example, to act only on lines 10 through 20, write '10,20[action]'.

a : append; the string after a is written on a new line (below the current line);
c : change; the string after c replaces all the lines between n1 and n2;
d : delete; since it just deletes, d usually takes no argument;
i : insert; the string after i is written on a new line (above the current line);
p : print the selected data; p is usually run together with sed -n;
s : substitute; performs a search-and-replace, usually together with a regular expression, e.g. 1,20s/old/new/g.
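The two most common actions can be tried directly on a pipe (sample data invented for the demo):

```shell
#!/bin/sh
# p is usually paired with -n so only the selected lines are printed;
# d removes the selected lines and prints the rest.
printf 'one\ntwo\nthree\nfour\n' | sed -n '2,3p'   # prints: two, three
printf 'one\ntwo\nthree\nfour\n' | sed '2d'        # prints: one, three, four
```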

sed -i operates on the text file directly, in place.

sed -i 's/old_string/new_string/' /home/1.txt
sed -i 's/old_string/new_string/g' /home/1.txt
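The difference between s/// and s///g is easy to see on one line of sample input:

```shell
#!/bin/sh
# Without g only the first match on each line is replaced; with g, every match.
echo 'aaa bbb aaa' | sed 's/aaa/XXX/'    # XXX bbb aaa
echo 'aaa bbb aaa' | sed 's/aaa/XXX/g'   # XXX bbb XXX
```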





Replace the first "d" on each line of /home/1.txt:

sed -i 's/d/7523/' /home/1.txt

Replace every "d" on each line:

sed -i 's/d/7523/g' /home/1.txt

Remove the leading "@" from lines that start with "@":

sed -i 's/^@//' file
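For example (sample lines invented), only lines that actually start with "@" are changed:

```shell
#!/bin/sh
# ^ anchors the match to the start of the line, so only a leading "@" is removed.
printf '@user1\nuser2\n@user3\n' | sed 's/^@//'   # user1, user2, user3
```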


Insert a new line above each line matching the pattern:

sed -i '/pattern/i new_text' file


Append a new line below each line matching the pattern:

sed -i '/pattern/a new_text' file


Delete each line matching the pattern:

sed -i '/字符串/d' file
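All three pattern-addressed forms side by side (GNU sed one-line syntax, as used above; sample data invented):

```shell
#!/bin/sh
# i inserts above, a appends below, d deletes the lines matching the pattern.
printf 'alpha\nbeta\ngamma\n' | sed '/beta/i NEW'   # NEW appears above beta
printf 'alpha\nbeta\ngamma\n' | sed '/beta/a NEW'   # NEW appears below beta
printf 'alpha\nbeta\ngamma\n' | sed '/beta/d'       # the beta line is gone
```

Note that the one-line `i text` / `a text` forms are a GNU sed extension; strictly POSIX sed requires `i\` followed by the text on the next line.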