Source: https://rvdnieuwendijk.com/2011/07/18/how-to-use-vmware-vsphere-powercli-to-find-a-virtual-machine-by-mac-address/

Sometimes you need to find a virtual machine by MAC address. This can be very time consuming if you have to do it by hand in the VMware vSphere Client. PowerCLI can do this task for you in only a few seconds. The script presented in this blog post retrieves the virtual machine that has a certain MAC address.

You can find the virtual machine with a certain MAC address by simply piping the PowerCLI Get-VM and Get-NetworkAdapter cmdlets together. For example, to find the virtual machine with MAC address “00:0c:29:1d:5c:ec” you can run the following PowerCLI command:

 Get-VM | `
Get-NetworkAdapter | `
Where-Object {$_.MacAddress -eq "00:0c:29:1d:5c:ec"} | `
Format-List -Property *

Figure 1. PowerCLI command to find a virtual machine with certain MAC address.

The PowerCLI command of figure 1 gives the following output:


Figure 2: Output of the command of figure 1.

In my environment, with about 550 virtual machines, this PowerCLI command takes about two minutes and twenty seconds to complete.

So I decided to build a PowerCLI advanced function called Get-VmByMacAddress that uses the VMware vSphere SDK, via the Get-View cmdlet, to find the virtual machine with a certain MAC address as fast as possible. The function uses PowerShell comment-based help, and the help contains some examples of how to use it.

function Get-VmByMacAddress {
<#
.SYNOPSIS
Retrieves the virtual machines with a certain MAC address on a vSphere server.
 
.DESCRIPTION
Retrieves the virtual machines with a certain MAC address on a vSphere server.
 
.PARAMETER MacAddress
Specify the MAC address of the virtual machines to search for.
 
.EXAMPLE
Get-VmByMacAddress -MacAddress 00:0c:29:1d:5c:ec,00:0c:29:af:41:5c
Retrieves the virtual machines with MAC addresses 00:0c:29:1d:5c:ec and 00:0c:29:af:41:5c.
 
.EXAMPLE
"00:0c:29:1d:5c:ec","00:0c:29:af:41:5c" | Get-VmByMacAddress
Retrieves the virtual machines with MAC addresses 00:0c:29:1d:5c:ec and 00:0c:29:af:41:5c.
 
.COMPONENT
VMware vSphere PowerCLI
 
.NOTES
Author:  Robert van den Nieuwendijk
Date:    18-07-2011
Version: 1.0
#>
 
  [CmdletBinding()]
  param(
    [parameter(Mandatory = $true,
               ValueFromPipeline = $true,
               ValueFromPipelineByPropertyName = $true)]
    [string[]] $MacAddress
  )

  begin {
    # $Regex contains the regular expression of a valid MAC address
    $Regex = "^[0-9A-Fa-f][0-9A-Fa-f]:[0-9A-Fa-f][0-9A-Fa-f]:[0-9A-Fa-f][0-9A-Fa-f]:[0-9A-Fa-f][0-9A-Fa-f]:[0-9A-Fa-f][0-9A-Fa-f]:[0-9A-Fa-f][0-9A-Fa-f]$"

    # Get all the virtual machines once, retrieving only the properties that are needed
    $VMsView = Get-View -ViewType VirtualMachine -Property Name,Guest.Net
  }

  process {
    ForEach ($Mac in $MacAddress) {
      # Check if the MAC address has a valid format
      if ($Mac -notmatch $Regex) {
        Write-Error "$Mac is not a valid MAC address. The MAC address should be in the format 99:99:99:99:99:99."
      }
      else {
        # Loop through all the virtual machines and filter them on MAC address
        $VMsView | ForEach-Object {
          $VMView = $_
          $VMView.Guest.Net | Where-Object {
            $_.MacAddress -eq $Mac
          } |
          Select-Object -Property @{N="VM";E={$VMView.Name}},
                                  MacAddress,
                                  IpAddress,
                                  Connected
        }
      }
    }
  }
}

Figure 3: Get-VmByMacAddress PowerCLI advanced function.

The Get-VmByMacAddress function gives the following output:


Figure 4: Output of the Get-VmByMacAddress PowerCLI function.

The Get-VmByMacAddress function took about 1.7 seconds to complete. That is about eighty times faster than the first script.
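If you want to reproduce the comparison in your own environment, you can wrap both approaches in Measure-Command. A minimal sketch, assuming the function above has been dot-sourced and you are already connected with Connect-VIServer:

# Time the straightforward pipeline from figure 1
Measure-Command {
  Get-VM | Get-NetworkAdapter |
    Where-Object { $_.MacAddress -eq "00:0c:29:1d:5c:ec" }
}

# Time the Get-View based advanced function
Measure-Command {
  Get-VmByMacAddress -MacAddress "00:0c:29:1d:5c:ec"
}

The TotalSeconds property of the two results gives you the numbers to compare.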

Adding an LVM cache to a data disk online.

The original disk /dev/sde1 is an 8 TB disk managed through LVM, and the volume already contains data.

Commands involved: lvm, pvcreate, vgcreate, lvcreate, vgextend, lvconvert.

# First attempt: cache and metadata LVs created in the SSD's own VG (vg_faster);
# this later triggers the VG mismatch error shown below
lvcreate -n cache -L 9G vg_faster
lvcreate -n meta -l 100%FREE vg_faster
# Fix: add the SSD partition to the data VG so the cache pool and the data LV share one VG
vgextend 8t_test /dev/disk/by-id/wwn-0x5e83a97ee1537d85-part5
# Create the cache pool on the SSD and attach it to the data LV (online)
lvcreate --type cache-pool --name cache -L 8G 8t_test /dev/disk/by-id/wwn-0x5e83a97ee1537d85-part5
lvconvert --type cache --cachepool 8t_test/cache 8t_test/8tall


root@HomeServer-Master:/opt/seafile# pvdisplay /dev/sde1  && vgdisplay 8t_test && lvs
--- Physical volume ---
PV Name /dev/sde1
VG Name 8t_test
PV Size <7.28 TiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 1907720
Free PE 0
Allocated PE 1907720
PV UUID noAsRL-SkDF-Zn2D-X8p6-UXD4-bXLV-kd27s6
--- Volume group ---
VG Name 8t_test
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size <7.29 TiB
PE Size 4.00 MiB
Total PE 1910279
Alloc PE / Size 1909774 / <7.29 TiB
Free PE / Size 505 / 1.97 GiB
VG UUID BLj8Le-q4vP-YaKb-fYvT-h8oY-Pa64-A1s6Lq
One or more devices used as PVs in VG ubuntu_vg have changed sizes.
  LV      VG         Attr        LSize    Pool     Origin         Data%  Meta%  Move Log Cpy%Sync Convert
  8tall   8t_test    Cwi-aoC---   <7.28t  [cache]  [8tall_corig]  1.64   8.85            0.00
  root    ubuntu_vg  -wi-ao----   23.28g
  swap_1  ubuntu_vg  -wi-ao----  952.00m

If the cache/metadata LVs created on the fast SSD and the data LV created on the slow disk are not in the same VG, you get the following error:

lvm cache VG name mismatch from position arg (8t_test) and option arg (vg_faster).

The fix is to add the SSD that holds the cache/metadata to the existing (data) VG with vgextend.

vgextend 8t_test /dev/disk/by-id/wwn-0x5e83a97ee1537d85-part5

Then create the cache pool from the SSD space:

lvcreate --type cache-pool --name cache -L 8G 8t_test /dev/disk/by-id/wwn-0x5e83a97ee1537d85-part5

or, to use all the remaining free space on the PV:

lvcreate --type cache-pool --name cache -l 100%FREE vg_name pv_name
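Once the cache pool exists in the same VG as the data LV, it still has to be attached to the data LV. A short sketch, using the names from this setup (8t_test/cache as the cache pool and 8t_test/8tall as the data LV):

# Attach the cache pool to the existing data LV; this is an online operation
# and the data on 8tall is preserved
lvconvert --type cache --cachepool 8t_test/cache 8t_test/8tall

# Check that the cached LV is active and the cache is filling up
lvs -a -o lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent

In the lvs output below, the 8tall LV now carries the C (cached) attribute, with [cache] as its pool and [8tall_corig] as the original data volume.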


root@HomeServer-Master:/mnt/8t# lvs
  LV      VG         Attr        LSize    Pool     Origin         Data%  Meta%  Move Log Cpy%Sync Convert
  8tall   8t_test    Cwi-aoC---   <7.28t  [cache]  [8tall_corig]  4.20   8.98            0.00
  root    ubuntu_vg  -wi-ao----   23.28g
  swap_1  ubuntu_vg  -wi-ao----  952.00m
root@HomeServer-Master:/mnt/8t#

root@HomeServer-Master:/mnt/8t# lvs -a
  LV               VG         Attr        LSize    Pool     Origin         Data%  Meta%  Move Log Cpy%Sync Convert
  8tall            8t_test    Cwi-aoC---   <7.28t  [cache]  [8tall_corig]  4.20   8.98            0.00
  [8tall_corig]    8t_test    owi-aoC---   <7.28t
  [cache]          8t_test    Cwi---C---    8.00g                          4.20   8.98            0.00
  [cache_cdata]    8t_test    Cwi-ao----    8.00g
  [cache_cmeta]    8t_test    ewi-ao----   12.00m
  [lvol0_pmspare]  8t_test    ewi-------   12.00m
  root             ubuntu_vg  -wi-ao----   23.28g
  swap_1           ubuntu_vg  -wi-ao----  952.00m
root@HomeServer-Master:/mnt/8t#


This setup uses the default installation of mysqld (MariaDB), where the root account authenticates via the unix_socket plugin by default.
When logging in through phpMyAdmin or similar tools, this causes an error and the login is refused.

The main command to change this:

update user set plugin="mysql_native_password";
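If you can still get in as root over the unix socket (for example with sudo mysql -u root), the same change can be made on a running server without restarting it. A minimal sketch; the WHERE clause limits the change to the root account instead of every user, and FLUSH PRIVILEGES reloads the grant tables so the new plugin takes effect immediately:

use mysql;
update user set plugin="mysql_native_password" where User='root';
flush privileges;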

After the change:

MariaDB [mysql]> select host,user,password,plugin from user limit 1;
+-----------+------+-------------------------------------------+-----------------------+
| host      | user | password                                  | plugin                |
+-----------+------+-------------------------------------------+-----------------------+
| localhost | root | *Cdddddddddd029230220C8A6F | mysql_native_password |
+-----------+------+-------------------------------------------+-----------------------+

What follows is just the complete procedure, kept for reference.

/etc/init.d/mysql stop
sudo killall mysqld_safe
sudo killall mysqld
# Start the server without the grant tables so root can log in without a password
sudo mysqld_safe --skip-grant-tables &
mysql -u root
# The following statements are entered inside the mysql client:
use mysql;
update user set password=PASSWORD("mynewpassword") where User='root';
update user set plugin="mysql_native_password";
quit;
# Restart the server normally so the changes take effect
/etc/init.d/mysql stop
sudo kill -9 $(pgrep mysql)
/etc/init.d/mysql start

The difference between NPIV and NPV

NPIV stands for N_Port ID Virtualization and is mainly a host-based solution, aimed at hypervisors such as VMware, MS Virtual Server and Xen. Imagine a server with a single HBA but many VMs running on VMware, each of which uses a different LUN on the back end; without NPIV this simply cannot be done.

NPV stands for N_Port Virtualization and is mainly a switch-based solution, used for example with the Palo card in Cisco UCS.

NPIV and NPV support virtualization and reduce management complexity

NPIV and NPV allow host and switch ports to be virtualized, which reduces management complexity in large or mixed SAN environments.
NPIV allows a single HBA (an N_Port) to register multiple WWPNs (World Wide Port Names) and N_Port IDs. This lets multiple virtual machines on a single host have their own N_Port IDs in the SAN, which can then be used for zoning and LUN (logical unit number) assignment. The only requirement is that the switch must also support NPIV.
NPV allows a switch port to connect to another switch as if it were an NPIV host. This makes the whole switch look like a single NPIV port, so SAN storage can be extended more easily without consuming extra domain IDs or adding management overhead. Some vendors also support encrypting these inter-switch links, which is useful for securing links between a campus network and a data center or across a metro network. Again, the only requirement is that the existing switch supports NPIV.

NPV

More than 80% of the switch products on the market today that claim to implement FCoE actually only implement the NPV function. NPIV (N_Port ID Virtualization) is a concept from FC: when a physical server hosts many virtual machines and each VM wants its own FC ID for independent communication, but there is only one FC HBA, NPIV solves this by assigning multiple FC IDs to a single N_Port, each paired with its own pWWN (port WWN) for differentiation and access control.

Once you understand NPIV, NPV is easy to understand: take the host-side N_Port described above and turn it into a standalone device that proxies FC ID registration for the servers behind it, and you have NPV (N_Port Virtualization). NPV does two things:

1. It first sends a FLOGI to the FC switch itself to register and obtain an FC ID.

2. It proxies the FLOGI requests that later arrive from the servers as FDISC requests to the FC switch, requesting additional FC IDs.

The benefit of NPV is that it does not consume a domain ID (an FC fabric supports only 239 usable domain IDs) while greatly increasing the number of servers that can be attached below an FC switch. The most common use of NPV in FC networks is in blade switches.
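For reference, on Cisco MDS/Nexus-style switches this split corresponds to two separate features. A minimal sketch, assuming an NX-OS-like CLI; treat the exact commands as an assumption and check the documentation for your platform and release:

! On the core FC/FCoE switch (the FCF side): allow multiple logins on one F_Port
feature npiv

! On the edge or blade switch that acts as the NPV proxy
feature npv
interface fc1/1
  ! uplink towards the core switch
  switchport mode NP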

People then turned their FCoE attention to the network between the NPV device and the servers.

Compared with NPV in native FC, NPV in FCoE has to do three extra things (compare with the FIP process described earlier):

1. Answer the node devices' queries about which VLAN carries FCoE.

2. Answer the node devices' FCF discovery requests, generating the MAC address used to impersonate an FCF from the FC ID it obtained from the FC switch during its own initialization.

3. Encapsulate and decapsulate the FCoE headers of the data frames forwarded between the CNA and the FC switch.

NPV is not an element defined in the FCoE standard, so vendors implement the details in their own ways. For example, they all bind the Ethernet interfaces facing the servers to the FC interfaces facing the FC switch, but the binding rules may differ. Likewise, when an FC interface fails, how the affected servers' paths are switched to other FC interfaces, whether the servers are notified so they re-register with FLOGI, and how long the notification wait timers are can all differ between implementations.

The advantages of NPV: first, it is easy to implement; the main tasks described above can now be handled by off-the-shelf silicon, so building a box is mostly packaging. Second, it is simple to deploy: there is no FCF to implement, no FC forwarding to handle, no FSPF to compute, and no domain ID to consume. Third, it scales easily: a small number of FC switch ports can connect a large number of servers.

Because the network between the NPV device and the servers is conventional Ethernet, an NPV switch must also support the lossless Ethernet techniques defined in the DCB standards.

Strictly speaking, an NPV switch is not an FCoE switch as defined in the FCoE standard, but it lets the access-layer switch share the Ethernet network towards the servers, reducing the number of physical NICs per server (without reducing the number of network channels at the operating-system level) and increasing the number of server nodes the FC network can attach, which suits large-scale server deployments in cloud computing.

Finally, a note on ENPV (Ethernet NPV), a concept proposed by Cisco: an NPV device is inserted between the servers and the FCoE switch (FCF) and still performs proxy-like work; it can snoop FIP, monitor the FIP registration process, learn the VLAN, FC ID, WWN and other information, and apply security controls to the traffic passing through.