A lot of VMware vSphere architects and engineers design their vSphere clusters around overcommitment ratios to define a level of service (SLA or OLA) and to differentiate between compute tiers. They usually want to achieve something like

  • Tier 1 cluster (mission-critical applications) – 1:1 vCPU / pCPU ratio
  • Tier 2 cluster (business-critical applications) – 3:1 vCPU / pCPU ratio
  • Tier 3 cluster (supporting applications) – 5:1 vCPU / pCPU ratio
  • Tier 4 cluster (virtual desktops) – 10:1 vCPU / pCPU ratio

Before vSphere 6.5 we had to monitor this externally with vROps or some other monitoring tool. Some time ago I blogged about how to achieve it with PowerCLI and Log Insight – ESXi host vCPU/pCPU reporting via PowerCLI to LogInsight.

vSphere 6.5 DRS introduced an additional option to set maximum CPU over-commitment. It limits the number of vCPUs per pCPU in a particular DRS cluster. However, it is good to know that there are two different advanced DRS options (configuration parameters) for specifying the vCPU:pCPU ratio, and each setting behaves differently. See the table below.

DRS Advanced Option      Scope     Min-Max Value
MaxVcpusPerClusterPct    cluster   0% – 500%
MaxVCPUsPerCore          host      0 – 32

It is worth mentioning that setting these additional options via the GUI is a little bit tricky. It is good to know how the GUI setting "CPU Over-Commitment" maps to the DRS cluster advanced options.

In my lab I have VCSA 6.5 U1c (build 7119157), so I ran some tests.

If I set "CPU Over-Commitment" in the vSphere Web Client (Flash/Flex), it sets MaxVcpusPerClusterPct, so the setting applies to the whole vSphere cluster.

However, the vSphere Client (HTML5) sets MaxVCPUsPerCore, so the setting applies per ESXi host.

Therefore, it is good to know what you want to achieve and to double-check the DRS advanced options.

See the different behavior in the screenshots below.

6.5 U1c (build 7119157) vSphere Web Client (Flash/Flex) sets MaxVcpusPerClusterPct
6.5 U1c (build 7119157) vSphere Client (HTML5) sets MaxVCPUsPerCore

So that is how it should work. Now let's run some tests to understand the real behavior.

TEST 1: MaxVcpusPerClusterPct = 0 

Let's set MaxVcpusPerClusterPct to 0, which says to allow a 0:1 vCPU/pCPU ratio. In other words, no VM can run in the cluster. And it works as expected. When I try to power on a VM I get the error "The total number of virtual CPUs present or requested in virtual machines' configuration has exceeded the limit on the host: 0". The message is a little bit misleading, because it should be a cluster-wide rule, but the enforcement works as expected.

Error when MaxVcpusPerClusterPct is set to 0

TEST 2: MaxVcpusPerClusterPct = 100 

Let's set MaxVcpusPerClusterPct to 100%, which says to allow a 1:1 vCPU/pCPU ratio. I have a 4-node DRS cluster where each ESXi host has two cores (pCPUs), so I have 8 pCPUs available in the cluster. And indeed I can start only four VMs, because each has 2 vCPUs, so I can run at most 8 vCPUs in the DRS cluster.

Error when MaxVcpusPerClusterPct is set to 100

That is great, but it is worth mentioning that the VMs were all started on a single ESXi host, even though I have 4 ESXi hosts in the DRS cluster. So the vCPU/pCPU ratio is enforced per cluster but not per ESXi host: I ended up with 8 vCPUs on a single ESXi host that has just 2 pCPUs. But that is the expected behavior. So far so good.

TEST 3: MaxVcpusPerCore = 0 

MaxVcpusPerCore should solve the problem observed in the previous two tests, because it sets the vCPU/pCPU ratio per host, which is much better from a predictability point of view.

Let's set MaxVcpusPerCore to 0. My expectation was that I would not be able to start any VM, but that was NOT the case. I was able to start a lot of VMs and exceed the expected vCPU/pCPU ratio. This is unexpected behavior.

TEST 4: MaxVcpusPerCore = 1 

Let's set MaxVcpusPerCore to 1. My expectation was that I would not be able to start more than 2 vCPUs per ESXi host, so only one VM with 2 vCPUs. Unfortunately, I was able to start many more vCPUs per ESXi host. This is again unexpected behavior.

TEST 5: MaxVcpusPerCore = 4 

I have been informed by a DRS engineer that the minimum value allowed by the host over-commitment ratio option (MaxVCPUsPerCore) is 4:1.

So, let's set MaxVcpusPerCore to 4. I have prepared nine VMs with 4 vCPUs each, and my expectation is that I will not be able to start more than 8 vCPUs per ESXi host, so only two VMs with 4 vCPUs per ESXi host. And because I have 4 ESXi hosts in the cluster, I should be able to start a maximum of 8 VMs.

Expected error message when MaxVcpusPerCore is set to 4 and vCPU:pCPU is over 4:1 per ESXi host.
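The expectation above is simple arithmetic; here is a quick shell sketch with the lab's numbers (4 hosts, 2 cores each, nine 4-vCPU test VMs):

```shell
hosts=4             # ESXi hosts in the DRS cluster
cores_per_host=2    # pCPUs (cores) per host
ratio=4             # MaxVcpusPerCore = 4
vcpus_per_vm=4      # each test VM has 4 vCPUs

max_vcpus_per_host=$((cores_per_host * ratio))           # 8 vCPUs per host
max_vms_per_host=$((max_vcpus_per_host / vcpus_per_vm))  # 2 VMs per host
max_vms_in_cluster=$((max_vms_per_host * hosts))         # 8 VMs in total

echo "$max_vcpus_per_host $max_vms_per_host $max_vms_in_cluster"
```

So of the nine prepared VMs, only eight should power on.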

UPDATE 2018-04-20: The issue with MaxVcpusPerCore is fixed in vSphere 6.5 U2. The release notes state: "The advanced vSphere DRS parameter MaxVcpusPerCore might not work as expected and the desired ratio of virtual CPUs per physical CPU or core will not take effect in configurations below 4:1. MaxVcpusPerCore supported ratios now start from 1:1."
Please note that MaxVcpusPerCore does not support the ratio 0:1, in contrast to MaxVcpusPerClusterPct, where 0:1 is possible and effectively prevents any VM from running in the cluster.


The advanced DRS setting MaxVcpusPerClusterPct supports values between 0 and 500 and represents the percentage of vCPUs to pCPUs across the whole DRS cluster. If the value is higher than 500, enforcement does not work. So the vCPU/pCPU percentage ratio can be enforced between 0:1 and 5:1. The ratio 0:1 is a special setting where no VM can be powered on in the vSphere cluster. This is a slightly risky setting, but it can be used to put the whole cluster into a kind of "maintenance mode" and forbid anyone from running VMs there.

The advanced DRS setting MaxVcpusPerCore currently accepts values between 0 and 32, but it only works with values between 4 and 32. This setting enforces the vCPU/pCPU ratio on each ESXi host within the DRS cluster. This means the minimum vCPU/pCPU ratio configurable with this option is 4:1.

To be honest, I think MaxVcpusPerCore makes more sense for vSphere architects/designers than MaxVcpusPerClusterPct, because the vCPU/pCPU overbooking ratio defines CPU quality per ESXi host.

VMware is internally considering aligning the behavior of MaxVcpusPerClusterPct and MaxVcpusPerCore and allowing lower vCPU/pCPU ratios with MaxVcpusPerCore. I will track how this topic evolves. Stay tuned.


[root@www ~]# sed [-nefr] [action]
-n : silent mode. Normally sed prints all input from STDIN to the terminal; with -n, only lines that were actually processed by a sed action are printed.
-e : perform the sed actions given directly on the command line;
-f : read sed actions from a file; -f filename runs the sed actions stored in filename;
-r : use extended regular expression syntax for the actions (the default is basic regular expressions);
-i : modify the file in place instead of printing the result to the terminal.

Action syntax: [n1[,n2]]function
n1, n2 : optional; they select the lines the action applies to. For example, to apply an action to lines 10 through 20, write "10,20[action]".

a : append; the string after a appears on a new line (below the current line);
c : change; the string after c replaces the lines between n1 and n2;
d : delete; d usually takes nothing after it;
i : insert; the string after i appears on a new line (above the current line);
p : print the selected data; p is usually combined with sed -n;
s : substitute; performs a replacement, usually together with a regular expression, e.g. 1,20s/old/new/g.

sed -i operates directly on the text file itself.

sed -i 's/old_string/new_string/' /home/1.txt
sed -i 's/old_string/new_string/g' /home/1.txt



#cat 1.txt


sed -i 's/d/7523/' /home/1.txt

sed -i 's/d/7523/g' /home/1.txt
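A quick self-contained demo of the difference between the first-match and global forms (GNU sed assumed for `-i` without a suffix; the file name and contents are just an example):

```shell
tmp=$(mktemp)
printf 'abcd d\n' > "$tmp"

# Without /g, only the first match on each line is replaced
sed -i 's/d/7523/' "$tmp"
first=$(cat "$tmp")    # abc7523 d

# With /g, every remaining match on the line is replaced too
sed -i 's/d/7523/g' "$tmp"
global=$(cat "$tmp")   # abc7523 7523

echo "$first"
echo "$global"
rm -f "$tmp"
```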

Strip the leading "@" from lines that begin with "@":

sed -i 's/^@//' file


Insert a new line before each line matching a pattern:

sed -i '/pattern/i new_line_text' file


Append a new line after each line matching a pattern:

sed -i '/pattern/a new_line_text' file


Delete every line containing a pattern:

sed -i '/pattern/d' file
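The insert, append, and delete actions above can be combined in one short demo (GNU sed assumed; file contents are arbitrary examples):

```shell
tmp=$(mktemp)
printf '@line1\nkeep\ndrop-me\n' > "$tmp"

sed -i 's/^@//' "$tmp"               # strip a leading "@"
sed -i '/keep/i before-line' "$tmp"  # insert a line before the match
sed -i '/keep/a after-line' "$tmp"   # append a line after the match
sed -i '/drop-me/d' "$tmp"           # delete matching lines

result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```

The file ends up as: line1, before-line, keep, after-line.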


Sometimes you need to find a virtual machine by MAC address. This can be very time-consuming if you have to do it by hand in the VMware vSphere Client. PowerCLI can do this task for you in only a few seconds. The script presented in this blog post retrieves the virtual machine that has a certain MAC address.

You can find the virtual machine with a certain MAC address by just using the PowerCLI Get-VM and Get-NetworkAdapter cmdlets and piping these together. E.g. to find the virtual machine with MAC address “00:0c:29:1d:5c:ec” you can give the following PowerCLI command:

 Get-VM | `
Get-NetworkAdapter | `
Where-Object {$_.MacAddress -eq "00:0c:29:1d:5c:ec"} | `
Format-List -Property *

Figure 1. PowerCLI command to find a virtual machine with certain MAC address.

The PowerCLI command of figure 1 gives the following output:


Figure 2: Output of the command of figure 1.

In my environment with about five hundred fifty virtual machines this PowerCLI command takes about two minutes and twenty seconds to return.

So I decided to build a PowerCLI advanced function called Get-VmByMacAddress that uses the VMware vSphere SDK to find the virtual machine with a certain MAC address as fast as possible. The function uses PowerShell comment-based help, and the help contains some examples of how to use the function.

function Get-VmByMacAddress {
  <#
  .SYNOPSIS
    Retrieves the virtual machines with a certain MAC address on a vSphere server.

  .DESCRIPTION
    Retrieves the virtual machines with a certain MAC address on a vSphere server.

  .PARAMETER MacAddress
    Specify the MAC address of the virtual machines to search for.

  .EXAMPLE
    Get-VmByMacAddress -MacAddress 00:0c:29:1d:5c:ec,00:0c:29:af:41:5c
    Retrieves the virtual machines with MAC addresses 00:0c:29:1d:5c:ec and 00:0c:29:af:41:5c.

  .EXAMPLE
    "00:0c:29:1d:5c:ec","00:0c:29:af:41:5c" | Get-VmByMacAddress
    Retrieves the virtual machines with MAC addresses 00:0c:29:1d:5c:ec and 00:0c:29:af:41:5c.

  .COMPONENT
    VMware vSphere PowerCLI

  .NOTES
    Author:  Robert van den Nieuwendijk
    Date:    18-07-2011
    Version: 1.0
  #>

  param(
    [parameter(Mandatory = $true,
               ValueFromPipeline = $true,
               ValueFromPipelineByPropertyName = $true)]
    [string[]] $MacAddress
  )

  begin {
    # $Regex contains the regular expression of a valid MAC address
    $Regex = "^[0-9A-Fa-f][0-9A-Fa-f]:[0-9A-Fa-f][0-9A-Fa-f]:[0-9A-Fa-f][0-9A-Fa-f]:[0-9A-Fa-f][0-9A-Fa-f]:[0-9A-Fa-f][0-9A-Fa-f]:[0-9A-Fa-f][0-9A-Fa-f]$"

    # Get all the virtual machines
    $VMsView = Get-View -ViewType VirtualMachine -Property Name,Guest.Net
  }

  process {
    ForEach ($Mac in $MacAddress) {
      # Check if the MAC address has a valid format
      if ($Mac -notmatch $Regex) {
        Write-Error "$Mac is not a valid MAC address. The MAC address should be in the format 99:99:99:99:99:99."
      }
      else {
        # Filter the virtual machines on MAC address
        $VMsView | ForEach-Object {
          $VMView = $_
          $VMView.Guest.Net |
            Where-Object { $_.MacAddress -eq $Mac } |
            Select-Object -Property @{N="VM";E={$VMView.Name}},MacAddress
        }
      }
    }
  }
}

Figure 3: Get-VmByMacAddress PowerCLI advanced function.

The Get-VmByMacAddress function gives the following output:


Figure 4: Output of the Get-VmByMacAddress PowerCLI function.

The Get-VmByMacAddress function took about 1.7 seconds to complete. That is about eighty times faster than the first script.




lvcreate -n cache -L 9G vg_faster
lvcreate -n meta -l 100%FREE vg_faster
vgextend 8t_test /dev/disk/by-id/wwn-0x5e83a97ee1537d85-part5
lvcreate --type cache-pool --name cache -L 8G 8t_test /dev/disk/by-id/wwn-0x5e83a97ee1537d85-part5
lvconvert --type cache --cachepool 8t_test/cache 8t_test/8tall

root@HomeServer-Master:/opt/seafile# pvdisplay /dev/sde1  && vgdisplay 8t_test && lvs
--- Physical volume ---
PV Name /dev/sde1
VG Name 8t_test
PV Size <7.28 TiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 1907720
Free PE 0
Allocated PE 1907720
PV UUID noAsRL-SkDF-Zn2D-X8p6-UXD4-bXLV-kd27s6
--- Volume group ---
VG Name 8t_test
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 7
VG Access read/write
VG Status resizable
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size <7.29 TiB
PE Size 4.00 MiB
Total PE 1910279
Alloc PE / Size 1909774 / <7.29 TiB
Free PE / Size 505 / 1.97 GiB
VG UUID BLj8Le-q4vP-YaKb-fYvT-h8oY-Pa64-A1s6Lq
One or more devices used as PVs in VG ubuntu_vg have changed sizes.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
8tall 8t_test Cwi-aoC--- <7.28t [cache] [8tall_corig] 1.64 8.85 0.00
root ubuntu_vg -wi-ao---- 23.28g
swap_1 ubuntu_vg -wi-ao---- 952.00m

If the cache/meta LVs created on the faster SSD and the data LV created on the slow disk are not in the same VG, you will get the following error:

lvm cache VG name mismatch from position arg (8t_test) and option arg (vg_faster).


vgextend 8t_test /dev/disk/by-id/wwn-0x5e83a97ee1537d85-part5


lvcreate --type cache-pool --name cache -L 8G 8t_test /dev/disk/by-id/wwn-0x5e83a97ee1537d85-part5

lvcreate --type cache-pool --name cache -l 100%FREE vg_name pv_name

root@HomeServer-Master:/mnt/8t# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
8tall 8t_test Cwi-aoC--- <7.28t [cache] [8tall_corig] 4.20 8.98 0.00
root ubuntu_vg -wi-ao---- 23.28g
swap_1 ubuntu_vg -wi-ao---- 952.00m

root@HomeServer-Master:/mnt/8t# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
8tall 8t_test Cwi-aoC--- <7.28t [cache] [8tall_corig] 4.20 8.98 0.00
[8tall_corig] 8t_test owi-aoC--- <7.28t
[cache] 8t_test Cwi---C--- 8.00g 4.20 8.98 0.00
[cache_cdata] 8t_test Cwi-ao---- 8.00g
[cache_cmeta] 8t_test ewi-ao---- 12.00m
[lvol0_pmspare] 8t_test ewi------- 12.00m
root ubuntu_vg -wi-ao---- 23.28g
swap_1 ubuntu_vg -wi-ao---- 952.00m

This setup uses the default mysqld (MariaDB) installation, where the root user's default authentication plugin is via unix socket.


update user set plugin="mysql_native_password";


MariaDB [mysql]> select host,user,password,plugin from user limit 1;
+-----------+------+----------------------------+-----------------------+
| host      | user | password                   | plugin                |
+-----------+------+----------------------------+-----------------------+
| localhost | root | *Cdddddddddd029230220C8A6F | mysql_native_password |
+-----------+------+----------------------------+-----------------------+


/etc/init.d/mysql stop
sudo killall mysqld_safe
sudo killall mysqld
sudo mysqld_safe --skip-grant-tables &
mysql -u root
use mysql;
update user set password=PASSWORD("mynewpassword") where User='root';
update user set plugin="mysql_native_password";
flush privileges;
/etc/init.d/mysql stop
sudo kill -9 $(pgrep mysql)
/etc/init.d/mysql start


NPIV stands for N_Port ID Virtualization.
Think of Virtual Server/Xen: imagine a server with a single HBA but multiple VMs running on VMware, each using a different LUN behind the fabric. Without NPIV this simply cannot be done.

NPV is short for N_Port Virtualization and is mainly a switch-based solution. It applies to the Palo card used in UCS.




  More than 80% of the switch products on the market today that claim to implement FCoE actually only implement NPV. NPIV (N_Port ID Virtualization) is a concept from FC. If a physical server hosts many virtual machines and each VM wants its own FC ID for independent communication, but there is only one FC HBA, NPIV solves this usage scenario: a single N_Port can be assigned multiple FC IDs, each paired with its own pWWN (private WWN) for identification and security control.

  Once you understand NPIV, NPV is easy: take the N_Port from the picture above, turn it into a standalone device that proxies FC ID registration for the servers behind it, and you have NPV (N_Port Virtualization). NPV does two things:

  1. It first registers itself with the FC switch via FLOGI to obtain an FC ID.

  2. It then proxies subsequent FLOGI requests from the servers as FDISC requests, asking the FC switch for additional FC IDs.

  The benefit of NPV is that it does not consume a Domain ID (each FC fabric supports at most 255 of them) while allowing more servers to be attached below the FC switch. The most common use of NPV in FC networks is in blade switches.




  2. It answers FCF discovery requests from node devices, generating the MAC address used by the impersonated FCF from the FC ID it obtained from the FC switch during initialization.

  3. Between the CNA and the FC switch, it encapsulates and decapsulates the FCoE headers of forwarded data frames.

  NPV is not an element defined in the FCoE standard, so vendors implement the details however they like. For example, they all bind the Ethernet ports facing the servers to the FC ports facing the FC switch, but the binding rules may differ. Likewise, when an FC port fails, how the servers' traffic is switched to other FC ports, whether servers are notified to re-register via FLOGI, and how long the notification timeouts are all vary by vendor.

  The advantages of NPV: first, it is easy to implement, since the main tasks described earlier can all be handled by off-the-shelf silicon, so vendors just package a box around it. Second, deployment is simple: no FCF to implement, no FC forwarding, no FSPF computation, and no Domain ID consumed. Finally, it scales well: a small number of FC switch ports can serve a large number of servers.



  As a supplement, ENPV (Ethernet NPV) is a concept proposed by Cisco: insert an NPV device between the servers and the FCoE switch (FCF), again doing proxy work. It can snoop FIP, monitor the FIP registration process, learn VLAN/FC ID/WWN information, and apply security controls to the traffic passing through.






We assume you already have a complete Seafile primary server running and now want to set up a backup server.


  1. Install the Seafile programs on the backup server.
  2. Configure Seafile synchronization between the primary and backup servers.
  3. Back up the database contents periodically with mysqldump.


Installing Seafile on the backup server

You can install Seafile on the backup server by following the official documentation. Since real-time synchronization is only available in version 5.1.0 and later, you must upgrade Seafile on the primary server to 5.1.0 or newer. After installation, do not start the seahub.sh service.

Note the following when installing Seafile on the backup server:

  • The database names (the ccnet, seafile, and seahub databases) should be the same as on the primary server.
  • You do not need to enable Pro features such as Office document preview, full-text search, or online file editing on the backup server.
  • You must not start the Seahub process on the backup server, which means the backup server normally does not serve end users.


Configuring Seafile real-time synchronization

On the primary server, add the following to seafile.conf (default path: /opt/seafile/conf/):

backup_url = http://backup-server
sync_token = c7a78c0210c2470e14a20a8244562ab8ad509734

On the backup server, add the following to seafile.conf (default path: /opt/seafile/conf/):

primary_url = http://primary-server
sync_token = c7a78c0210c2470e14a20a8244562ab8ad509734
sync_poll_interval = 3

  • backup_url: the address of the backup server; you can use http or https;
  • primary_url: the address of the primary server;
  • sync_token: a secret shared between the primary and backup servers, a 40-character SHA1 generated by the system administrator. You can generate a random token with uuidgen | openssl sha1;
  • sync_poll_interval: how often the backup server polls the primary server for all libraries, in hours. The default interval is 1 hour, meaning the backup server polls the primary server once an hour. If you have a large number of libraries, choose a larger polling interval.
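The manual suggests `uuidgen | openssl sha1` for generating the token; a sketch of an equivalent that needs only coreutils (any 40-character hex string works as sync_token):

```shell
# Generate a random 40-character SHA1 hex token for sync_token
token=$(head -c 32 /dev/urandom | sha1sum | cut -d' ' -f1)
echo "sync_token = $token"
```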

If you synchronize between the primary and backup servers over https, you must use the correct Seafile server package for your system. On CentOS, use the Seafile package without the "Ubuntu" suffix; on Debian or Ubuntu, use the package with the "Ubuntu" suffix. Otherwise you may run into CA errors on https requests.

After saving the configuration, restart the Seafile service on both the primary and backup servers. The backup server will start the backup process automatically on restart.

Note: do not start the Seahub process on the backup server.



Back up the MySQL data on the primary server with mysqldump:

mysqldump -u <user> -p<password> --databases \
--ignore-table=<seafile_db>.Repo \
--ignore-table=<seafile_db>.Branch \
--ignore-table=<seafile_db>.RepoHead \
<seafile_db> <ccnet_db> <seahub_db> > dbdump.sql

Replace <user> and <password> with your MySQL user and password, and <seafile_db>, <seahub_db>, and <ccnet_db> with your MySQL database names.

The three ignored tables hold the core library data and are synchronized in real time by the Seafile backup server. They are kept in the seafile database on the backup server, separately from the mysqldump process.

You should set up a crontab entry to run the mysqldump job periodically and automatically.
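A sketch of a wrapper script that could be scheduled via crontab; the user, password, database names, and backup path are placeholders, and the script only prints the command here instead of running it:

```shell
#!/bin/sh
# Hypothetical wrapper around the mysqldump command above.
DB_USER=backup           # placeholder MySQL user
DB_PASS=secret           # placeholder password
OUTDIR=/backup/databases # placeholder backup directory
STAMP=$(date +%Y-%m-%d-%H-%M-%S)

DUMP_CMD="mysqldump -u $DB_USER -p$DB_PASS --databases \
  --ignore-table=seafile_db.Repo \
  --ignore-table=seafile_db.Branch \
  --ignore-table=seafile_db.RepoHead \
  seafile_db ccnet_db seahub_db"

# Print the command instead of executing it, since the credentials are fake;
# in a real deployment you would run it and redirect to the dump file.
echo "$DUMP_CMD > $OUTDIR/dbdump-$STAMP.sql"
```

A matching crontab entry might look like `0 3 * * * /opt/seafile/db-backup.sh` (path assumed).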

If you want to back up the database tables (other than the three synchronized by Seafile) in a more real-time fashion, you can deploy a MySQL/MariaDB replication slave on another database server. The database running on the Seafile backup server cannot be used as the target of this replication; doing so would cause replication conflicts, because that database is also updated by the Seafile backup process.




  • Library data is backed up and managed by the Seafile backup server. Depending on the backup server's configuration, the data can be stored on external storage, object storage, or local disk.
  • The database tables fall into two parts:
    • The 3 core tables are backed up in real time into the MySQL database on the backup node.
    • The remaining tables are periodically dumped to a file with mysqldump. The dump files are stored somewhere other than the primary server. A status command is provided to check the backup status; the output looks like this:

# ./ status
Total number of libraries: xxx
Number of synchronized libraries: xxx
Number of libraries waiting for sync: xxx
Number of libraries syncing: xxx
Number of libraries failed to sync: xxx

List of syncing libraries:

List of libraries failed to sync:

You can also monitor the network traffic with iftop and check directory sizes with du -sh.


  • Some data on the primary server is corrupted. The corruption may be in the latest state or in the history. Since the backup process synchronizes the entire history, corruption in the history will cause the backup to fail.
  • The primary server ran seaf-fsck, which can revert a library to an older state.





  • Import the latest mysqldump file into the MySQL database of the Seafile backup server.
  • Enable the other Pro features on the Seafile backup server and start the seahub process with ./ start



mysql -u <user> -p<pass> < dbdump.sql

Replace <user> and <pass> with your MySQL username and password.

Step two: start the seahub process on the backup server

Copy the Seafile configuration from the primary server to the backup server, then start the seahub process on the backup server.

./ start




/usr/bin/mysqldump -h -u<user> -p<password> --opt ccnet_db > /backup/databases/`date +"%Y-%m-%d"`.ccnet_db.sql.`date +"%Y-%m-%d-%H-%M-%S"`&
/usr/bin/mysqldump -h -u<user> -p<password> --opt <user>_db > /backup/databases/`date +"%Y-%m-%d"`.<user>_db.sql.`date +"%Y-%m-%d-%H-%M-%S"`&
/usr/bin/mysqldump -h -u<user> -p<password> --opt seahub_db > /backup/databases/`date +"%Y-%m-%d"`.seahub_db.sql.`date +"%Y-%m-%d-%H-%M-%S"`&



This cracked version is for playing around only; do not use it for anything illegal. After all, Seafile is a program developed in China, and I encourage everyone who can to buy a legitimate license and support the growth of domestic software.

First of all, many thanks to the readers who reminded me about the Seafile version update. For various personal reasons I kept putting this off until today; there really has been a lot going on lately, but I finally squeezed out the time to update the crack.
This cracked version is 6.3.7, with a registration cap of 1000 users. Also, due to a job change, I temporarily shut down my physical server in early October 2018 (I forget the exact date), so all services running on it are unavailable (including ; I will bring the server back up once everything is settled).
To those asking for a keygen: analysis shows the license file uses RSA asymmetric encryption, and the program has a built-in public key for verification. Even with a keygen you would need to replace the embedded public key with your own, so in the end you would still need the patched installation files; a generated license cannot be used with the officially released files, unless you obtain the official private key.



It can be found in the public library Seafile at  (the server is shut down, so this download method no longer works)
(direct link: , password: server shut down, no longer available this way)
High-speed download (this is the only remaining download method)


Please refer to the official documentation for installation and upgrade instructions.