FastNetMon


Friday, June 10, 2016

11 reasons why you should avoid OpenVZ and Virtuozzo in any case!


A few words about me first. I'm the CTO of FastVPS Eesti OU with ~8 years of overall experience with containers and OpenVZ. I'm the author of the following tools for OpenVZ:

  1. Ploop userspace (a user-mode implementation of the ploop filesystem)
  2. A fast incremental difference algorithm for ploop images.
  3. A script for a TC-based traffic shaper for OpenVZ
I have also filed 175 bug and feature requests in the OpenVZ bug tracker.

On the operational side, we run a few hundred physical servers with OpenVZ (VZ6-based) at pretty high density, and I try to keep my hands dirty and dig into the details of every issue.

These are the technical issues you should expect when working with OpenVZ:

  1. Very frequent and repetitive kernel faults for various reasons, even on top-notch hardware with ECC memory, redundant power, and battery-backed RAIDs
  2. Huge disk space overuse (up to 30-40% in some cases) if you decide to use the ploop filesystem (that's true for VZ7 too).
  3. Fairly frequent storage faults with complete data corruption if you are using ploop. You can read more details about it here.
  4. If you want to get security updates in a timely manner, you should not rely on the OpenVZ team. You should use an external toolkit for applying patches to the running kernel (Ksplice, Kpatch, or KernelCare).
  5. You should expect a lot of issues that are already fixed in recent kernels, because the OpenVZ project relies on the already outdated RHEL 6 or RHEL 7 kernels (RHEL 7 is based on 3.10, while the current version is 4.6). You certainly know about Red Hat's huge backporting effort, but these kernels are really outdated, and you could get a significant speedup by moving to a recent vanilla kernel (as Oracle and Ubuntu already do). So please keep in mind that if you are running OpenVZ (VZ7), you are running an already outdated toolkit.
  6. You should expect significant changes without backward compatibility in the next major release of OpenVZ (for more details, read about prlctl and vzctl). At some nice moment you will have to rewrite all your integration code in billing, configuration management, and support control subsystems.
  7. Very poor QA and issues with pretty obvious things.
  8. Almost every upcoming systemd update in a container may require a new kernel from the OpenVZ / Virtuozzo team. Actually, these updates could break your running kernel.


Speaking about the overall experience with the project, I could share my two cents below:

  • You should not expect any openness in the decision-making of the OpenVZ project. That's true both for day-to-day issues (bug priorities) and for the helicopter view (strategic planning). It's still a commercial company and they will not discuss roadmaps with the community. If they decide to do something, they will do it even if the community hates the decision. Check the "simfs deprecation" conversation in the OpenVZ Users mailing list for details.
  • You need a very experienced system administrator who digs into the details of every issue and system instability.

We have discussed a lot of issues with the OpenVZ project itself. But I also want to highlight a few points about container-aware isolation technologies. We have two options if we want to isolate applications in containers:

  • OpenVZ (VZ6, VZ7)
  • Linux upstream containers (managed by LXC or Docker)

Unfortunately, Linux upstream containers cannot run a whole OS properly, and they are pretty insecure compared with OpenVZ. But if you want to isolate your own applications, I recommend using Docker, because it has a lot of benefits and lacks almost all of the OpenVZ issues.

But if you want to isolate whole operating systems (to offer a VPS service), I recommend KVM. But KVM has significant overhead compared with containers! Actually, that's only partially true, because Intel has done a lot of work to improve the performance of hardware-based isolation, and recent CPUs (Xeon E3, E5) have a lightning-fast KVM implementation! But please be careful and do not use RHEL 7 (remember? It's already outdated, and you could host more VMs per host with a recent kernel!); look at the vanilla kernel, the Oracle kernel, or another project that offers a recent and stable kernel.

Finally, I really like container isolation, and I will be happy when all the mentioned issues are fixed ;)

Thanks for your attention!

Friday, January 29, 2016

Fast-rdiff is an extremely fast implementation of the rdiff algorithm for OpenVZ ploop containers

I would like to share a project of mine, implemented at FastVPS.ru for a large-scale backup system.

This is a very fast (thanks to C++) implementation of the rdiff algorithm (delta difference, librsync, rdiff). Actually, rdiff should not be used in production due to significant performance issues: https://github.com/librsync/librsync/issues/6

We support only delta generation, and ONLY for files with 1MB blocks (we use fast-rdiff for OpenVZ ploop files).

We have no support for "rdiff patch", but the original rdiff can be used for this purpose. But please be careful! fast-rdiff uses the md4 algorithm, and you should explicitly specify it to the new rdiff with the option --hash=md4 (see the sketch below).
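A minimal sketch of the intended workflow with the stock rdiff (the file names here are placeholders, and fast-rdiff's own invocation should be taken from its README):

# 1. Build the signature of the previous image with md4, the algorithm fast-rdiff expects:
rdiff --hash=md4 signature old/root.hdd root.hdd.sig
# 2. Generate the delta with fast-rdiff (see its README for the exact invocation).
# 3. Apply the delta with the stock rdiff, since fast-rdiff only generates deltas:
rdiff patch old/root.hdd root.hdd.delta new/root.hdd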

https://github.com/FastVPSEestiOu/fast-rdiff

Friday, July 24, 2015

Effectiveness of ZFS usage for OpenVZ

This article is a detailed answer to the simfs vs ploop vs ZFS conversation on the OpenVZ mailing list.

Since I have finished very detailed tests of ZFS for VPS storage, I would like to share them with my dear community.

Source data (real clients, real data, different OS templates, different OS template versions, a server running for ~1 year, a full copy of a production server):
Size: 5,4T
Used: 3,5T
Avail: 1,7T
Usage: 69%
This data is from a hardware node (HWN) running OpenVZ with ploop.

Our internal toolkit fastvps_ploop_compacter shows the following details about this server:
Total wasted space due to ploop bugs: 205.1 Gb
Wasted space means the difference between the real data size and the ploop disk size, i.e. the ploop overhead.

gzip compression on ZFS

We enabled ZFS gzip compression and moved all the data to a newly created ZFS volume.

And we got the following result:
NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
data   8,16T  3,34T  4,82T  40%  1.00x  ONLINE  -

As you can see, we saved about 160 GB of data. The new ZFS size of this data is 3340 GB; the old size was 3500 GB.

lz4 compression on ZFS

The ZFS developers do not recommend gzip, so we will try lz4 compression as the best option (see the sketch below for how to enable it).
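A minimal sketch of turning it on, assuming the pool/dataset is named data as in the zpool output above:

zfs set compression=lz4 data
zfs get compression,compressratio data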

We copied the same data to a new ZFS storage with lz4 compression enabled and got the following result:
ALLOC: 2,72Tb
Wow! Amazing! We saved about 600 GB of data! Really!

ZFS deduplication

As you know, ZFS has another killer feature - deduplication! And it's the best option when we store so many containers built from a fixed set of distributions (Debian, Ubuntu, CentOS).

But please keep in mind that we disabled compression for this step!

We enabled deduplication on a new storage and copied all the production data to it.

When the data copy finished, we got these breathtaking results:

zpool list
NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
data   8,16T  2,53T  5,62T  31%  1.33x  ONLINE  -

We saved 840 GB of data with deduplication! We are really close to saving 1 TB!

For very curious readers, I can offer some internal data regarding ZFS dedup:
zdb -D data
DDT-sha256-zap-duplicate: 5100040 entries, size 581 on disk, 187 in core
DDT-sha256-zap-unique: 27983716 entries, size 518 on disk, 167 in core
dedup = 1.34, compress = 1.00, copies = 1.02, dedup * compress / copies = 1.31
ZFS compression and deduplication simultaneously

So, ZFS is amazingly flexible: we could use compression and deduplication at the same time and get even bigger storage savings. That test is your homework :) Please share your results here! A sketch of how to enable both is below.
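For reference, a minimal sketch of enabling both features on the same dataset (again assuming a dataset named data; remember that deduplication needs plenty of RAM for the dedup table):

zfs set compression=lz4 data
zfs set dedup=on data
zpool list data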

Conclusion

That's why ZFS is the only filesystem that is ready for the 21st century. And ext4 and its derivative filesystems should be avoided everywhere if possible.

So, it would be great if you helped the ZFS on Linux community with bug reporting and development!

Tuesday, June 3, 2014

Using custom parameters in an OpenVZ container config

Very often you need to add some parameters of your own to an OpenVZ container config - for example, a billing service identifier or something similar.

You really don't want to maintain separate config files, and there is a great way to avoid that! You can simply add them to the container config /etc/vz/conf/XXX.conf in the same form as the native vzctl parameters:

NETWORK_UPLOAD_SPEED="10mbps"
This will not cause any problems or errors; vzctl accepts them as if they were its own. The only inconvenience is that you need a wrapper for reading/editing these parameters, because vzctl/vzlist know nothing about them and cannot edit or display them (see the sketch below).
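Since the container config is an ordinary shell-style KEY="value" file, a minimal wrapper can simply source it. A sketch (the parameter name matches the example above; the CTID is passed as an argument):

#!/bin/bash
# Read a custom parameter from a container config; CTID is the first argument.
CTID=$1
source /etc/vz/conf/${CTID}.conf
echo "NETWORK_UPLOAD_SPEED for CT ${CTID}: ${NETWORK_UPLOAD_SPEED}"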

Sunday, December 1, 2013

Btrfs and OpenVZ/ploop


It's not supported, dammit:
vzctl create 88811 --layout ploop --private /mnt/88811
Creating image: /mnt/88811.tmp/root.hdd/root.hdd size=5153024K
Creating delta /mnt/88811.tmp/root.hdd/root.hdd bs=2048 size=10307584 sectors v2
Storing /mnt/88811.tmp/root.hdd/DiskDescriptor.xml
Opening delta /mnt/88811.tmp/root.hdd/root.hdd
Adding delta dev=/dev/ploop56655 img=/mnt/88811.tmp/root.hdd/root.hdd (rw)
Error in add_delta (ploop.c:1171): Can't add image /mnt/88811.tmp/root.hdd/root.hdd: unsupported underlying filesystem
Failed to create image: Error in add_delta (ploop.c:1171): Can't add image /mnt/88811.tmp/root.hdd/root.hdd: unsupported underlying filesystem [3]
Destroying container private area: /mnt/88811
Creation of container private area failed

Wednesday, November 20, 2013

Installing current Gentoo from stage3 on OpenVZ without any strange templates, only from stage3

0. Set our container number to avoid hardcoding it in the commands below:

export CTID=19484

1. Download stage3 into the OpenVZ template folder (you can get the current link from http://distfiles.gentoo.org/releases/amd64/autobuilds/latest-stage3-amd64.txt). Be careful - you must name the downloaded file gentoo-bla-bla-bla:
cd /vz/template/cache
wget http://distfiles.gentoo.org/releases/amd64/autobuilds/current-stage3/stage3-amd64-20131031.tar.bz2 -Ogentoo-stage3-amd64-20131031.tar.bz2

2. Create a container using the stage3 template.

vzctl create $CTID --hostname gentoo-forever.com --ipadd 78.47.xx.xx --ostemplate gentoo-stage3-amd64-20131031 --layout ploop --disk 10G

3. Start the container to remove udev (udev and OpenVZ are natural enemies):

vzctl start $CTID
vzctl exec $CTID 'rm -f /etc/init.d/udev'
vzctl exec  $CTID 'rm -f /etc/init.d/udev-mount'

4. Start the container for production use:
vzctl start $CTID
vzctl enter $CTID
5. Enable ssh:
/etc/init.d/sshd start

6. First steps in Gentoo - getting the portage tree!
 emerge --sync


Tuesday, November 19, 2013

How to mount multiple disks into an OpenVZ ploop container?

Hello, OpenVZ admin! :)

In many cases you may want the ability to mount additional ploop devices into an OpenVZ container (be careful! This guide works only with ploop), i.e. to add a separate disk as /data or /mnt. It's not so obvious, but I can help you. My test container number (CTID) is 19484; be careful, because this number will be different in your case!

First, you must select the path where you plan to store the additional disk images. In my case, it's the /storage folder on the hardware node.

1. We must create a subfolder for storing the additional disk data (image + description file). I recommend naming it after the CTID.

mkdir /storage
mkdir /storage/19484

2. Create a ploop device of the desired size in this folder.
ploop init -s 10G -f ploop1 -v 2 /storage/19484/additional_disk_image -t ext4

3. Please stop the container (this is needed because we manipulate the container root filesystem, which is dangerous):

vzctl stop 19484

4. We must add hook scripts that mount and unmount this disk under the container root mount point.

Please create the file /etc/vz/conf/19484.mount with the following content:
#!/bin/bash 
ploop mount -m /vz/root/$VEID/mnt  /storage/$VEID/DiskDescriptor.xml
And the file /etc/vz/conf/19484.umount with the following content (we must add exit 0 because on the first start vzctl tries to unmount the disk and we would get an error):

#!/bin/bash 
ploop umount /storage/$VEID/DiskDescriptor.xml
exit 0
Add the execute flag to both files:
chmod +x /etc/vz/conf/19484.mount /etc/vz/conf/19484.umount
5. Start the container:
vzctl start 19484
On the first startup you will get an error message because vzctl can't unmount the additional disk at startup (such a strange feature, really! What is the reason to unmount disks at startup?). It's OK, you can safely ignore it. On subsequent starts/restarts you will not get any errors.
Error in ploop_umount_image (ploop.c:1579): Image /storage/19484/additional_disk_image is not mounted
On the second and subsequent starts you will see no errors:
vzctl start 19484
Starting container...
Opening delta /vz/private/19484/root.hdd/root.hdd
Adding delta dev=/dev/ploop18662 img=/vz/private/19484/root.hdd/root.hdd (rw)
/dev/ploop18662p1: clean, 36570/144288 files, 228564/576251 blocks
Mounting /dev/ploop18662p1 at /vz/root/19484 fstype=ext4 data='balloon_ino=12,'
Opening delta /storage/19484/additional_disk_image
Adding delta dev=/dev/ploop50284 img=/storage/19484/additional_disk_image (rw)
Mounting /dev/ploop50284p1 at /vz/root/19484/mnt fstype=ext4 data='balloon_ino=12,'
Container is mounted
Adding IP address(es): 78.47.76.28
Setting CPU units: 1000
Container start in progress...
Check the new disk in the container:
vzctl exec 19484 'df -h'
Filesystem         Size  Used Avail Use% Mounted on
/dev/ploop18662p1  2.2G  858M  1.3G  41% /
/dev/ploop50284p1  9.9G  164M  9.2G   2% /mnt
none               136M  4.0K  136M   1% /dev
none                28M 1020K   27M   4% /run
none               5.0M     0  5.0M   0% /run/lock
none                52M     0   52M   0% /run/shm
none               100M     0  100M   0% /run/user
It's important: you don't need any quota setup, because ploop is a complete filesystem and it is limited by the standard Linux mechanics, just like a native hard drive.

Mounting additional disk as read only

If you want, you can mount the additional disk as read-only. You just need to add the -r flag to the ploop mount command, as in the sketch below.
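A minimal sketch of the read-only variant of the mount hook (same layout as the script above, only the -r flag is added):

#!/bin/bash
# /etc/vz/conf/19484.mount, read-only variant
ploop mount -r -m /vz/root/$VEID/mnt /storage/$VEID/DiskDescriptor.xml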

In the read-only case you will see something like this while the container starts:
vzctl start 19484
Starting container...
Opening delta /vz/private/19484/root.hdd/root.hdd
Adding delta dev=/dev/ploop18662 img=/vz/private/19484/root.hdd/root.hdd (rw)
/dev/ploop18662p1: clean, 36570/144288 files, 228582/576251 blocks (check in 4 mounts)
Mounting /dev/ploop18662p1 at /vz/root/19484 fstype=ext4 data='balloon_ino=12,'
Opening delta /storage/19484/additional_disk_image
Adding delta dev=/dev/ploop50284 img=/storage/19484/additional_disk_image (ro)
Mounting /dev/ploop50284p1 at /vz/root/19484/mnt fstype=ext4 data='balloon_ino=12,' ro
Container is mounted
Adding IP address(es): 78.47.76.28
Setting CPU units: 1000
Container start in progress...

Monday, November 18, 2013

OpenVZ DiskDescriptor.xml - what is what?

<?xml version="1.0"?>
<Parallels_disk_image>
  <Disk_Parameters>
    <Disk_size>8388608</Disk_size>
    <Cylinders>8322</Cylinders>
    <Heads>16</Heads>
    <Sectors>63</Sectors>
    <Padding>0</Padding>
  </Disk_Parameters>
  <StorageData>
    <Storage>
      <Start>0</Start>
      <End>8388608</End>
      <Blocksize>2048</Blocksize>
      <Image>
        <GUID>{5fbaabe3-6958-40ff-92a7-860e329aab41}</GUID>
        <Type>Compressed</Type>
        <File>root.hdd</File>
      </Image>
    </Storage>
  </StorageData>
  <Snapshots>
    <TopGUID>{5fbaabe3-6958-40ff-92a7-860e329aab41}</TopGUID>
    <Shot>
      <GUID>{5fbaabe3-6958-40ff-92a7-860e329aab41}</GUID>
      <ParentGUID>{00000000-0000-0000-0000-000000000000}</ParentGUID>
    </Shot>
  </Snapshots>
</Parallels_disk_image>

So what exactly is 5fbaabe3-6958-40ff-92a7-860e329aab41? It's not clear! It's not the GUID of the GPT table that ploop uses, because:

 cat /vz/private/19484/root.hdd/DiskDescriptor.xml|grep UID -i
        <GUID>{5fbaabe3-6958-40ff-92a7-860e329aab41}</GUID>
    <TopGUID>{5fbaabe3-6958-40ff-92a7-860e329aab41}</TopGUID>
      <GUID>{5fbaabe3-6958-40ff-92a7-860e329aab41}</GUID>

Let's go over all the disks and get the GUID from GPT:
for i in `find /dev/ploop*|egrep -v 'p1$'`;do sgdisk -p $i |grep GUID;done
Disk identifier (GUID): A821C1C0-DB6E-4F32-ABC5-D05B7CF9A1DE
Disk identifier (GUID): BC1A5685-36FE-47E4-96E8-240307329FAF
Disk identifier (GUID): AA41E391-374A-44E3-A46C-FF32435131F9
Disk identifier (GUID): 113F3E0A-AE6A-4DB7-B68A-FC3FA05230B6
Disk identifier (GUID): 747F78F7-2A98-49D0-8D7E-98910EB282D3
Disk identifier (GUID): 50D6C777-0FEA-401E-86D2-15F4AA853605
Disk identifier (GUID): 099821D1-9ABF-4183-879F-107602ADA5D7
Disk identifier (GUID): B1C96257-86A2-4A82-8391-8F7A30EA7BAE
Disk identifier (GUID): 61D8174B-0FF2-4E23-94E2-4CD8C887FEF9
Problem reading disk in BasicMBRData::ReadMBRData()!
Warning! Read error 22; strange behavior now likely!
Warning! Read error 22; strange behavior now likely!
Disk identifier (GUID): 47F93EC9-EAFD-4349-B82D-040B1C15517B
Disk identifier (GUID): C33B4D83-7D0E-45E1-B2EB-68EDE2BEBB7A
Disk identifier (GUID): 82B84F3A-0538-4ADB-9467-699E17E87DB9
And it's also not the UUID of the partition created on the ploop GPT disk:
for i in `find /dev/ploop*|egrep -v 'p1$'`;do blkid ${i}p1;done
/dev/ploop10267p1: UUID="bc34c9db-04f8-4c43-b180-210560e0bf50" TYPE="ext4"
/dev/ploop14204p1: UUID="c3b4888a-22a5-4025-b37f-d61904b16567" TYPE="ext4"
/dev/ploop18546p1: UUID="b2aa615b-8375-4a1f-8bf9-1756696d6718" TYPE="ext4"
/dev/ploop18662p1: UUID="b8ccbf07-85fe-44d3-bbc4-6593a3a0c872" TYPE="ext4"
/dev/ploop22367p1: UUID="88829ca9-e1b5-4499-94fe-46d987055368" TYPE="ext4"
/dev/ploop35145p1: UUID="b9370d59-fbad-4e15-b559-2c4af5f4a254" TYPE="ext4"
/dev/ploop38462p1: UUID="189bbe9e-20a2-4b09-8e61-1bf30a4a89f2" TYPE="ext4"
/dev/ploop41650p1: UUID="cb56f5a8-5e47-4ea4-b9bf-bd8e5b8578ff" TYPE="ext4"
/dev/ploop41997p1: UUID="4f53649c-683e-4a31-baea-87a446834f02" TYPE="ext4"
/dev/ploop57837p1: UUID="23028f1d-60de-4d7d-adf9-8466dbb1ff45" TYPE="ext4"
/dev/ploop57849p1: UUID="4f8bf47b-2259-4e60-9f69-bf4828740ee4" TYPE="ext4" 

Friday, November 1, 2013

Building resize2fs from the e2fsprogs suite

Why? To make ploop work on OpenVZ/CentOS 5. For CentOS 6, the OpenVZ repository ships a special package with a version of resize2fs newer than the system one:
rpm -ql e2fsprogs-resize2fs-static-1.42.3-3.el6.1.ovz.x86_64
/usr/libexec/resize2fs
It is linked statically against the e2fsprogs libraries, with no external dependencies beyond the system libraries:
ldd /usr/libexec/resize2fs
linux-vdso.so.1 =>  (0x00007fff29eec000)
libpthread.so.0 => /lib/libpthread.so.0 (0x00007f9fb7bee000)
libc.so.6 => /lib/libc.so.6 (0x00007f9fb785b000)
/lib64/ld-linux-x86-64.so.2 (0x00007f9fb7e15000)

Let's try to do the same, but for OpenVZ on CentOS 5:
yum install -y gcc make
cd /usr/src
wget -Oe2fsprogs-1.42.3.tar.gz 'http://downloads.sourceforge.net/project/e2fsprogs/e2fsprogs/v1.42.3/e2fsprogs-1.42.3.tar.gz?r=http%3A%2F%2Fsourceforge.net%2Fprojects%2Fe2fsprogs%2Ffiles%2Fe2fsprogs%2Fv1.42.3%2F&ts=1383316431&use_mirror=citylan'
tar -xf e2fsprogs-1.42.3.tar.gz
cd e2fsprogs-1.42.3/
./configure --prefix=/opt/e2fsprogs  --disable-debugfs --disable-defrag --disable-imager
make install

As a result, in the /opt/e2fsprogs folder we find a resize2fs linked in the same way:
ldd /opt/e2fsprogs/sbin/resize2fs
linux-vdso.so.1 =>  (0x00007fff7ad85000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003201800000)
libc.so.6 => /lib64/libc.so.6 (0x0000003201000000)
/lib64/ld-linux-x86-64.so.2 (0x0000003200c00000)
You can then copy it to a CentOS 5 node to make ploop work correctly on CentOS 5.

Just for testing, I copied it to another machine and it worked like a charm:
./resize2fs
resize2fs 1.42.3 (14-May-2012)
Usage: ./resize2fs [-d debug_flags] [-f] [-F] [-M] [-P] [-p] device [new_size]

But the outcome of the whole endeavor is that ploop on CentOS 5 is not particularly successful:
This filesystem will be automatically checked every 23 mounts or
180 days, whichever comes first.  Use tune4fs -c or -i to override.
tune2fs 1.39 (29-May-2006)
tune2fs: Filesystem has unsupported feature(s) while trying to open /dev/ploop26472p1
Couldn't find valid filesystem superblock.
Error in run_prg_rc (util.c:289): Command tune2fs -ouser_xattr,acl /dev/ploop26472p1  exited with code 1
Unmounting device /dev/ploop26472
Failed to create image: Error in run_prg_rc (util.c:289): Command tune2fs -ouser_xattr,acl /dev/ploop26472p1  exited with code 1 [24]
Destroying container private area: /vz/private/7777
Creation of container private area failed
You would have to change the tune2fs calls in ploop to tune4fs, because in CentOS 5 the packages and tools for ext2/3 and ext4 are separate.

Tuesday, October 29, 2013

Build vzctl from source code on CentOS 6

First, connect the OpenVZ repository, because ploop-devel is only available there.

Build it:
yum install -y automake libxml2-devel libtool ploop-devel libcgroup-devel git
cd /usr/src
git clone git://git.openvz.org/vzctl
cd vzctl
./autogen.sh
./configure --prefix=/opt/vzctl
make
make install
Voilà:
/opt/vzctl/sbin/vzctl --version
vzctl version 4.5.1-55.git.0da90c7

Wednesday, August 28, 2013

How to show a progress bar for copying the template into the container disk during container creation?

Install the pv package from EPEL (see the command below).
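A minimal sketch, assuming the EPEL repository is already connected on the hardware node:

yum install -y pv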

After that, container creation gets a progress bar:
Creating container private area (debian-6.0-x86_64)
 150MB 0:00:04 [32.8MB/s] [==============================================================================>] 100%          
Unmounting file system at /vz/root/7804

Monday, March 18, 2013

How to set the number of inodes for an ext4/ploop/OpenVZ filesystem manually?

The ploop code (ploop version 1.6) creates the filesystem as follows:

int make_fs(const char *device, const char *fstype)
{
       char part_device[64];
       char *argv[8];
       if (get_partition_device_name(device, part_device, sizeof(part_device)))
               return SYSEXIT_MKFS;
       argv[0] = "/sbin/mkfs";
       argv[1] = "-t";
       argv[2] = (char*)fstype;
       argv[3] = "-j";
       argv[4] = "-b4096";
       argv[5] = part_device;
       argv[6] = NULL;
       if (run_prg(argv))
               return SYSEXIT_MKFS;
       argv[0] = "/sbin/tune2fs";
       argv[1] =  "-ouser_xattr,acl";
       argv[2] = part_device;
       argv[3] = NULL;
       if (run_prg(argv))
               return SYSEXIT_MKFS;
       return 0;
}
It's easy to see that you cannot pass an extra parameter here, and ploop itself does not control the number of inodes in the filesystem it creates.

But mkfs.ext4 has the -N flag, which lets you set the number of inodes in the target filesystem explicitly:
-N number-of-inodes  Overrides the default calculation of the number of inodes that should be reserved for the  filesystem  (which  is  based  on  the  number of blocks and the bytes-per-inode ratio).  This allows the user to specify the number of desired inodes directly.

Thus, with a small patch to the ploop code, you can hack the system and create a filesystem with the desired number of inodes without breaking OpenVZ :) A sketch of the resulting invocation is below.
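A minimal sketch of what the patched make_fs() would effectively run (the partition device and the inode count here are placeholders):

/sbin/mkfs -t ext4 -j -b4096 -N 1000000 /dev/ploopXXXXXp1
/sbin/tune2fs -ouser_xattr,acl /dev/ploopXXXXXp1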

Monday, February 18, 2013

Determining which CPUs the processes of a given container run on

Before you start, you need to install this: http://www.stableit.ru/2009/12/openvz-vztop-vzps.html

After that, everything is done with a simple Bash command:

for i in `vzps -E 1023 --no-headers   |awk '{print $2}'`; do cat /proc/$i/stat|awk '{print $39}';done|sort | uniq -c
The output will look roughly like this:
      1 0
      3 1
      8 10
      7 11
      4 12
      3 13
      9 22
     11 23
      4 3
      2 4
      1 6
     12 8
     17 9
Note that the first column is the number of processes on the given CPU, and the second is the logical CPU number.

I also suggest a variation of the command that shows physical sockets (in a multi-processor system - several real, physical CPUs):

for i in `vzps -E 1023 --no-headers   |awk '{print $2}'`; do cpu=$(cat /proc/$i/stat|awk '{print $39}'); cat /sys/devices/system/cpu/cpu$cpu/topology/physical_package_id; done|sort | uniq -c
Its output looks similar:
     44 0
     38 1

Monday, November 12, 2012

What is the format of CTID/VEID in OpenVZ?

All the findings are based on this code: http://download.openvz.org/utils/vzctl/4.1/src/vzctl-4.1.tar.bz2

Everywhere that CTID/VEID is handled, the following type is used: envid_t, which in turn is declared in include/types.h like this:
typedef unsigned envid_t;
There you go. And unsigned, in turn, is:
typedef unsigned int            uint32_t;
Which gives the range from 0 to 4294967295. However, the range from 0 to 100 (inclusive at both ends) is reserved by OpenVZ for internal use:

Note that CT ID <= 100 are reserved for OpenVZ internal purposes.
So, a CTID/VEID can be any number from 101 to 4294967295.

Friday, October 26, 2012

OpenVZ and CentOS 6 - installing a debug kernel

Open the repository config:
vi /etc/yum.repos.d/openvz.repo

Find the openvz-kernel-rhel6-debuginfo block there and change it to enabled=1.

Install the debug kernel:
yum install -y vzkernel-debug

Activating kdump on CentOS 6 and OpenVZ


Install the userspace software:
yum install -y kexec-tools

Check that the userspace daemon is loaded at boot:
chkconfig --list|grep kdump
kdump           0:off 1:off 2:off 3:on 4:on 5:on 6:off


After that, make sure the "crashkernel=auto" option is present among the options of the running kernel (see the check below).
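A quick way to check the kernel option and to make sure the kdump service is enabled (standard commands, nothing OpenVZ-specific):

grep crashkernel /proc/cmdline
chkconfig kdump on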

Reboot to apply the changes:
shutdown -r now
Now, if the machine goes down with a kernel panic, we will find a log of what was happening to the kernel in the /var/crash/ folder.

When debugging problems with OpenVZ kernels, it's recommended to use the debug version of the kernel.

Source: http://bugreev.ru/blog:2011:11:21-_kernel_panic_-_%D0%B4%D0%B5%D0%B1%D0%B0%D0%B3_%D1%8F%D0%B4%D1%80%D0%B0