20160125

optimum block size?


Goal: change the block size from 1 MB to 4 MB (on the additional 11 GB VMFS partition created earlier)



Idea via the Linaro flash card survey:
https://wiki.linaro.org/WorkingGroups/KernelArchived/Projects/FlashCardSurvey?action=show&redirect=WorkingGroups/Kernel/Projects/FlashCardSurvey


USB Flash (current), from the survey table:

  Name:                SanDisk Cruzer Fit
  Size:                15,633,408 KB
  Allocation Unit:     4 MB
  Write Size Unit:     64 KB
  Page Size:           16 KB
  FAT Location:        --
  # open AUs linear:   27 * 1.33MB
  # open AUs random:   0
  Algorithm:           linear SLC (22 MB/s), linear MLC (5 MB/s)
  USB ID:              0781:5571
  Name (as reported):  SanDiskCruzer Fit 1.26 PQ:

SO:

The additional VMFS partition created in the previous post looked like this:

[root@localhost:~] ls -l /vmfs/volumes/
total 1792
drwxr-xr-x    1 root     root             8 Jan  1  1970 0b6c70ea-d8b9d184-1d7f-176591fde77e
drwxr-xr-x    1 root     root             8 Jan  1  1970 42d466e4-b369e7f8-dc81-643a620e5406
drwxr-xr-x    1 root     root             8 Jan  1  1970 5695083f-631345ec-de5c-101f744c0550
drwxr-xr-t    1 root     root          1680 Jan 23 20:10 56a3bdf4-65a64697-f032-101f744c0550
lrwxr-xr-x    1 root     root            35 Jan 25 11:09 NewDatastore -> 56a3bdf4-65a64697-f032-101f744c0550
[root@localhost:~] vmkfstools -P /vmfs/volumes/56a3bdf4-65a64697-f032-101f744c0550
VMFS-5.61 file system spanning 1 partitions.
File system label (if any): NewDatastore
Mode: public
Capacity 12348030976 (11776 file blocks * 1048576), 2641362944 (2519 blocks) avail, max supported file size 69201586814976
UUID: 56a3bdf4-65a64697-f032-101f744c0550
Partitions spanned (on "lvm"):
        mpx.vmhba32:C0:T0:L0:10
Is Native Snapshot Capable: YES
[root@localhost:~] vmkfstools -Ph -v10 /vmfs/volumes/56a3bdf4-65a64697-f032-101f744c0550
VMFS-5.61 file system spanning 1 partitions.
File system label (if any): NewDatastore
Mode: public
Capacity 11.5 GB, 2.5 GB available, file block size 1 MB, max supported file size 62.9 TB
Volume Creation Time: Sat Jan 23 17:52:52 2016
Files (max/free): 87052/86983
Ptr Blocks (max/free): 64512/64488
Sub Blocks (max/free): 32000/31982
Secondary Ptr Blocks (max/free): 256/256
File Blocks (overcommit/used/overcommit %): 0/9257/0
Ptr Blocks  (overcommit/used/overcommit %): 0/24/0
Sub Blocks  (overcommit/used/overcommit %): 0/18/0
Volume Metadata size: 715481088
UUID: 56a3bdf4-65a64697-f032-101f744c0550
Logical device: 56a3bded-b45b4fdb-fba7-101f744c0550
Partitions spanned (on "lvm"):
        mpx.vmhba32:C0:T0:L0:10
Is Native Snapshot Capable: YES
OBJLIB-LIB: ObjLib cleanup done.
WORKER: asyncOps=0 maxActiveOps=0 maxPending=0 maxCompleted=0
[root@localhost:~]
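The capacity figures in the `vmkfstools -P` output above are internally consistent: 11776 file blocks at the 1 MB (1048576 bytes) block size give the reported byte capacity. A quick shell cross-check:

```shell
#!/bin/sh
# Cross-check the vmkfstools -P capacity figures shown above:
# 11776 file blocks x 1048576 bytes/block should equal the byte capacity,
# and 2519 free blocks should equal the reported available bytes.
BLOCKS=11776
FREE_BLOCKS=2519
BLOCK_SIZE=1048576

echo $(( BLOCKS * BLOCK_SIZE ))        # -> 12348030976 (capacity)
echo $(( FREE_BLOCKS * BLOCK_SIZE ))   # -> 2641362944 (available)
```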



============================
BUT, per VMware KB 1003565:

http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1003565

VMFS-5 Size Limitations

With VMFS-5, we use a unified 1 MB block size which is no longer configurable, but we can address larger files than a VMFS-3 1 MB block size can due to enhancements to the VMFS file system. Therefore a 1 MB VMFS-3 block size is not the same as a 1 MB VMFS-5 block size regarding file sizes.

The limits that apply to VMFS-5 datastores are:
  • The maximum virtual disk (VMDK) size is 2 TB minus 512 B for ESXi 5.0 and 5.1. In ESXi 5.5, the size is increased to 62TB.
  • The maximum virtual-mode RDM (vRDM) size is 2 TB minus 512 B for ESXi 5.0 and 5.1. In ESXi 5.5, the size is increased to 62TB.
  • Physical-mode RDMs are supported up to 64 TB.
For more information about large VMDK support in vSphere 5.5, see Support for virtual machine disks larger than 2 TB in vSphere 5.5 (2058287).

In VMFS-5, very small files (that is, files smaller than 1 KB) will be stored in the file descriptor location in the metadata rather than using file blocks. Once the file size increases beyond 1 KB, sub-blocks are used. After one 8 KB sub-block is used, 1 MB file blocks are used. As VMFS-5 uses sub-blocks of 8 KB rather than 64 KB (as in VMFS-3), this reduces the amount of disk space being used by small files. For more information on VMFS-5, see vSphere 5 FAQ: VMFS-5 (2003813).
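The allocation tiers the KB describes (inline in the descriptor below 1 KB, one 8 KB sub-block, then 1 MB file blocks) can be sketched as a tiny calculation. This is a rough model built only from the thresholds quoted above, not measured VMFS behavior:

```shell
#!/bin/sh
# Rough sketch of the VMFS-5 small-file allocation tiers from the KB:
#   < 1 KB  -> stored inline in the file descriptor (0 data blocks)
#   <= 8 KB -> one 8 KB sub-block
#   larger  -> rounded up to whole 1 MB file blocks
alloc_for() {
    size=$1   # file size in bytes
    if [ "$size" -lt 1024 ]; then
        echo 0
    elif [ "$size" -le 8192 ]; then
        echo 8192
    else
        echo $(( (size + 1048575) / 1048576 * 1048576 ))
    fi
}

alloc_for 500       # -> 0 (lives in the descriptor)
alloc_for 4096      # -> 8192 (one sub-block)
alloc_for 1500000   # -> 2097152 (two 1 MB file blocks)
```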

=============

TARGET: BLOCK SIZE 4 MB

======

AGAIN:
reinstall ESXi (as created in the previous post)
AND:


Using username "root".
Using keyboard-interactive authentication.
Password:
The time and date of this login have been sent to the system logs.

VMware offers supported, powerful system administration tools.  Please
see www.vmware.com/go/sysadmintools for details.

The ESXi Shell can be disabled by an administrative user. See the
vSphere Security documentation for more information.
[root@esxi:~] esxcli storage core device list
mpx.vmhba32:C0:T0:L0
   Display Name: Local USB Direct-Access (mpx.vmhba32:C0:T0:L0)
   Has Settable Display Name: false
   Size: 15267
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0
   Vendor: SanDisk
   Model: Cruzer Fit
   Revision: 1.27
   SCSI Level: 2
   Is Pseudo: false
   Status: on
   Is RDM Capable: false
   Is Local: true
   Is Removable: true
   Is SSD: false
   Is VVOL PE: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: unknown
   Attached Filters:
   VAAI Status: unsupported
   Other UIDs: vml.0000000000766d68626133323a303a30
   Is Shared Clusterwide: false
   Is Local SAS Device: false
   Is SAS: false
   Is USB: true
   Is Boot USB Device: true
   Is Boot Device: true
   Device Max Queue Depth: 1
   No of outstanding IOs with competing worlds: 32
   Drive Type: unknown
   RAID Level: unknown
   Number of Physical Drives: unknown
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: NO PROTECTION
   Supported Guard Types: NO GUARD SUPPORT
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false
[root@esxi:~] partedUtil getptbl "/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0"
gpt
1946 255 63 31266816
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
[root@esxi:~] partedUtil setptbl "/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0" gpt "1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B 128" "5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 0" "6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 0" "7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 0" "8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 0" "9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 0" "10 7086080 31264767 AA31E02A400F11DB9590000C2911D1B8 0"
gpt
0 0 0 0
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 0
10 7086080 31264767 AA31E02A400F11DB9590000C2911D1B8 0
[root@esxi:~] partedUtil getptbl "/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0"
gpt
1946 255 63 31266816
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
10 7086080 31264767 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
[root@esxi:~] vmkfstools -C vmfs5 -b 4m -S Datastore4m
Invalid destination specification:
vmkfstools: Argument missing
[root@esxi:~] vmkfstools -C vmfs5 -b 4m -S Datastore4m /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:10
create fs deviceName:'/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:10', fsShortName:'vmfs5', fsName:'Datastore4m'
deviceFullPath:/dev/disks/mpx.vmhba32:C0:T0:L0:10 deviceFile:mpx.vmhba32:C0:T0:L0:10
Invalid fileBlockSize 4194304. VMFS 5 doesn't support file blocks greater than 1MB
Usage: vmkfstools -C [vmfs5|vfat] /vmfs/devices/disks/vml... or,
       vmkfstools -C [vmfs5|vfat] /vmfs/devices/disks/naa... or,
       vmkfstools -C [vmfs5|vfat] /vmfs/devices/disks/mpx.vmhbaA:T:L:P
Error: Invalid argument
[root@esxi:~] ls -l /vmfs/volumes/
total 768
drwxr-xr-x    1 root     root             8 Jan  1  1970 1e7934e3-47d70b38-1615-5dbf5e04c594
drwxr-xr-x    1 root     root             8 Jan  1  1970 56a610da-cc6be0e4-7bfa-101f744c0550
drwxr-xr-x    1 root     root             8 Jan  1  1970 f3716ce3-2e47c11d-b1ff-40e92ad34166
[root@esxi:~] df
Filesystem     Bytes      Used Available Use% Mounted on
vfat       261853184 169873408  91979776  65% /vmfs/volumes/1e7934e3-47d70b38-1615-5dbf5e04c594
vfat       261853184      8192 261844992   0% /vmfs/volumes/f3716ce3-2e47c11d-b1ff-40e92ad34166
vfat       299712512 211386368  88326144  71% /vmfs/volumes/56a610da-cc6be0e4-7bfa-101f744c0550
[root@esxi:~] vmkfstools -Ph -v10 /vmfs/volumes/1e7934e3-47d70b38-1615-5dbf5e04c594
Could not retrieve max file size: Inappropriate ioctl for device
vfat-0.04 file system spanning 1 partitions.
File system label (if any):
Mode: private
Capacity 249.7 MB, 87.7 MB available, file block size 4 KB, max supported file size 0 bytes
UUID: 1e7934e3-47d70b38-1615-5dbf5e04c594
Logical device: mpx.vmhba32:C0:T0:L0:5
Partitions spanned (on "disks"):
        mpx.vmhba32:C0:T0:L0:5
Is Native Snapshot Capable: NO
OBJLIB-LIB: ObjLib cleanup done.
WORKER: asyncOps=0 maxActiveOps=0 maxPending=0 maxCompleted=0
[root@esxi:~] vmkfstools -Ph -v10 /vmfs/volumes/56a610da-cc6be0e4-7bfa-101f744c0550
Could not retrieve max file size: Inappropriate ioctl for device
vfat-0.04 file system spanning 1 partitions.
File system label (if any):
Mode: private
Capacity 285.8 MB, 84.2 MB available, file block size 8 KB, max supported file size 0 bytes
UUID: 56a610da-cc6be0e4-7bfa-101f744c0550
Logical device: mpx.vmhba32:C0:T0:L0:8
Partitions spanned (on "disks"):
        mpx.vmhba32:C0:T0:L0:8
Is Native Snapshot Capable: NO
OBJLIB-LIB: ObjLib cleanup done.
WORKER: asyncOps=0 maxActiveOps=0 maxPending=0 maxCompleted=0
[root@esxi:~] vmkfstools -Ph -v10 /vmfs/volumes/f3716ce3-2e47c11d-b1ff-40e92ad34166
Could not retrieve max file size: Inappropriate ioctl for device
vfat-0.04 file system spanning 1 partitions.
File system label (if any):
Mode: private
Capacity 249.7 MB, 249.7 MB available, file block size 4 KB, max supported file size 0 bytes
UUID: f3716ce3-2e47c11d-b1ff-40e92ad34166
Logical device: mpx.vmhba32:C0:T0:L0:6
Partitions spanned (on "disks"):
        mpx.vmhba32:C0:T0:L0:6
Is Native Snapshot Capable: NO
OBJLIB-LIB: ObjLib cleanup done.
WORKER: asyncOps=0 maxActiveOps=0 maxPending=0 maxCompleted=0
[root@esxi:~] esxcfg-scsidevs -m
[root@esxi:~] vmkfstools -C vmfs3 -b 4m -S Datastore4m /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:10
create fs deviceName:'/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:10', fsShortName:'vmfs3', fsName:'Datastore4m'
Creation of VMFS-3 is not supported.

Usage: vmkfstools -C [vmfs5|vfat] /vmfs/devices/disks/vml... or,
       vmkfstools -C [vmfs5|vfat] /vmfs/devices/disks/naa... or,
       vmkfstools -C [vmfs5|vfat] /vmfs/devices/disks/mpx.vmhbaA:T:L:P
Error: vmkfstools failed: vmkernel is not loaded or call not implemented.

[root@esxi:~] vmkfstools -h
No valid command specified


OPTIONS FOR FILE SYSTEMS:

vmkfstools -C --createfs [vmfs5|vfat]
               -S --setfsname fsName
           -Z --spanfs span-partition
           -G --growfs grown-partition
   deviceName

           -P --queryfs -h --humanreadable
           -T --upgradevmfs
   vmfsPath
           -y --reclaimBlocks vmfsPath [--reclaimBlocksUnit #blocks]

OPTIONS FOR VIRTUAL DISKS:

vmkfstools -c --createvirtualdisk #[bBsSkKmMgGtT]
               -d --diskformat [zeroedthick
                               |thin
                               |eagerzeroedthick
                               ]
               -a --adaptertype [deprecated]
               -W --objecttype [file|vsan|vvol]
               --policyFile <fileName>
           -w --writezeros
           -j --inflatedisk
           -k --eagerzero
           -K --punchzero
           -U --deletevirtualdisk
           -E --renamevirtualdisk srcDisk
           -i --clonevirtualdisk srcDisk
               -d --diskformat [zeroedthick
                               |thin
                               |eagerzeroedthick
                               |rdm:<device>|rdmp:<device>
                               |2gbsparse]
               -W --object [file|vsan|vvol]
               --policyFile <fileName>
               -N --avoidnativeclone
           -X --extendvirtualdisk #[bBsSkKmMgGtT]
               [-d --diskformat eagerzeroedthick]
           -M --migratevirtualdisk
           -r --createrdm /vmfs/devices/disks/...
           -q --queryrdm
           -z --createrdmpassthru /vmfs/devices/disks/...
           -v --verbose #
           -g --geometry
           -x --fix [check|repair]
           -e --chainConsistent
           -Q --objecttype name/value pair
           --uniqueblocks childDisk
   vmfsPath

OPTIONS FOR DEVICES:

           -L --lock [reserve|release|lunreset|targetreset|busreset|readkeys|readresv
                     ] /vmfs/devices/disks/...
           -B --breaklock /vmfs/devices/disks/...

vmkfstools -H --help

[root@esxi:~] vmkfstools -v -C vmfs5 -S DatastoreDefvmfs5 /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:10
Extra arguments at the end of the command line.

OPTIONS FOR FILE SYSTEMS:

vmkfstools -C --createfs [vmfs5|vfat]
               -S --setfsname fsName
           -Z --spanfs span-partition
           -G --growfs grown-partition
   deviceName

           -P --queryfs -h --humanreadable
           -T --upgradevmfs
   vmfsPath
           -y --reclaimBlocks vmfsPath [--reclaimBlocksUnit #blocks]

OPTIONS FOR VIRTUAL DISKS:

vmkfstools -c --createvirtualdisk #[bBsSkKmMgGtT]
               -d --diskformat [zeroedthick
                               |thin
                               |eagerzeroedthick
                               ]
               -a --adaptertype [deprecated]
               -W --objecttype [file|vsan|vvol]
               --policyFile <fileName>
           -w --writezeros
           -j --inflatedisk
           -k --eagerzero
           -K --punchzero
           -U --deletevirtualdisk
           -E --renamevirtualdisk srcDisk
           -i --clonevirtualdisk srcDisk
               -d --diskformat [zeroedthick
                               |thin
                               |eagerzeroedthick
                               |rdm:<device>|rdmp:<device>
                               |2gbsparse]
               -W --object [file|vsan|vvol]
               --policyFile <fileName>
               -N --avoidnativeclone
           -X --extendvirtualdisk #[bBsSkKmMgGtT]
               [-d --diskformat eagerzeroedthick]
           -M --migratevirtualdisk
           -r --createrdm /vmfs/devices/disks/...
           -q --queryrdm
           -z --createrdmpassthru /vmfs/devices/disks/...
           -v --verbose #
           -g --geometry
           -x --fix [check|repair]
           -e --chainConsistent
           -Q --objecttype name/value pair
           --uniqueblocks childDisk
   vmfsPath

OPTIONS FOR DEVICES:

           -L --lock [reserve|release|lunreset|targetreset|busreset|readkeys|readresv
                     ] /vmfs/devices/disks/...
           -B --breaklock /vmfs/devices/disks/...

vmkfstools -H --help

[root@esxi:~] vmkfstools -C vmfs5 -S DatastoreDefvmfs5 /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:10
create fs deviceName:'/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:10', fsShortName:'vmfs5', fsName:'DatastoreDefvmfs5'
deviceFullPath:/dev/disks/mpx.vmhba32:C0:T0:L0:10 deviceFile:mpx.vmhba32:C0:T0:L0:10
ATS on device /dev/disks/mpx.vmhba32:C0:T0:L0:10: not supported
.
Checking if remote hosts are using this device as a valid file system. This may take a few seconds...
Creating vmfs5 file system on "mpx.vmhba32:C0:T0:L0:10" with blockSize 1048576 and volume label "DatastoreDefvmfs5".
Successfully created new volume: 56a64f12-82285a62-4f3f-101f744c0550
[root@esxi:~] vmkfstools -V
[root@esxi:~] ls -la /vmfs/volumes/
total 1796
drwxr-xr-x    1 root     root           512 Jan 25 16:39 .
drwxr-xr-x    1 root     root           512 Jan 25 15:07 ..
drwxr-xr-x    1 root     root             8 Jan  1  1970 1e7934e3-47d70b38-1615-5dbf5e04c594
drwxr-xr-x    1 root     root             8 Jan  1  1970 56a610da-cc6be0e4-7bfa-101f744c0550
drwxr-xr-t    1 root     root          1260 Jan 25 16:36 56a64f12-82285a62-4f3f-101f744c0550
lrwxr-xr-x    1 root     root            35 Jan 25 16:39 DatastoreDefvmfs5 -> 56a64f12-82285a62-4f3f-101f744c0550
drwxr-xr-x    1 root     root             8 Jan  1  1970 f3716ce3-2e47c11d-b1ff-40e92ad34166
[root@esxi:~] vmkfstools -Ph -v10 /vmfs/volumes/56a64f12-82285a62-4f3f-101f744c0550
VMFS-5.61 file system spanning 1 partitions.
File system label (if any): DatastoreDefvmfs5
Mode: public
Capacity 11.5 GB, 10.6 GB available, file block size 1 MB, max supported file size 62.9 TB
Volume Creation Time: Mon Jan 25 16:36:34 2016
Files (max/free): 87052/87044
Ptr Blocks (max/free): 64512/64496
Sub Blocks (max/free): 32000/32000
Secondary Ptr Blocks (max/free): 256/256
File Blocks (overcommit/used/overcommit %): 0/888/0
Ptr Blocks  (overcommit/used/overcommit %): 0/16/0
Sub Blocks  (overcommit/used/overcommit %): 0/0/0
Volume Metadata size: 715481088
UUID: 56a64f12-82285a62-4f3f-101f744c0550
Logical device: 56a64f0b-cd78de55-808f-101f744c0550
Partitions spanned (on "lvm"):
        mpx.vmhba32:C0:T0:L0:10
Is Native Snapshot Capable: YES
OBJLIB-LIB: ObjLib cleanup done.
WORKER: asyncOps=0 maxActiveOps=0 maxPending=0 maxCompleted=0
[root@esxi:~]

[root@esxi:~] df
Filesystem       Bytes      Used   Available Use% Mounted on
VMFS-5     12348030976 931135488 11416895488   8% /vmfs/volumes/DatastoreDefvmfs5
vfat         261853184 169873408    91979776  65% /vmfs/volumes/1e7934e3-47d70b38-1615-5dbf5e04c594
vfat         261853184      8192   261844992   0% /vmfs/volumes/f3716ce3-2e47c11d-b1ff-40e92ad34166
vfat         299712512 211386368    88326144  71% /vmfs/volumes/56a610da-cc6be0e4-7bfa-101f744c0550
[root@esxi:~] df -h
Filesystem   Size   Used Available Use% Mounted on
VMFS-5      11.5G 888.0M     10.6G   8% /vmfs/volumes/DatastoreDefvmfs5
vfat       249.7M 162.0M     87.7M  65% /vmfs/volumes/1e7934e3-47d70b38-1615-5dbf5e04c594
vfat       249.7M   8.0K    249.7M   0% /vmfs/volumes/f3716ce3-2e47c11d-b1ff-40e92ad34166
vfat       285.8M 201.6M     84.2M  71% /vmfs/volumes/56a610da-cc6be0e4-7bfa-101f744c0550
[root@esxi:~] fdisk -h

***
*** The fdisk command is deprecated: fdisk does not handle GPT partitions.  Please use partedUtil
***

/usr/lib/vmware/misc/bin/fdisk: invalid option -- 'h'
BusyBox v1.20.2 (2014-08-27 12:48:18 PDT) multi-call binary.

Usage: fdisk [-ul] [-C CYLINDERS] [-H HEADS] [-S SECTORS] [-b SSZ] DISK

Change partition table

        -u              Start and End are in sectors (instead of cylinders)
        -l              Show partition table for each DISK, then exit
        -b 2048         (for certain MO disks) use 2048-byte sectors
        -C CYLINDERS    Set number of cylinders/heads/sectors
        -H HEADS
        -S SECTORS

[root@esxi:~] fdisk -l /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0

***
*** The fdisk command is deprecated: fdisk does not handle GPT partitions.  Please use partedUtil
***

Found valid GPT with protective MBR; using GPT

Disk /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0: 31266816 sectors, 29.8M
Logical sector size: 512
Disk identifier (GUID): 29d21951-dae2-476f-8c15-b6e2891fd70c
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 31266782

Number  Start (sector)    End (sector)  Size       Code  Name
   1              64            8191        8128   0700
   5            8224          520191        499K   0700
   6          520224         1032191        499K   0700
   7         1032224         1257471        219K   0700
   8         1257504         1843199        571K   0700
   9         1843200         7086079       5120K   0700
  10         7086080        31264767       23.0M   0700
[root@esxi:~]


======

SO:
TARGET UNREACHABLE :( VMFS-5 accepts only the unified 1 MB block size, so a 4 MB block size cannot be set.

====
refs:

Partition alignment and block size in vSphere 5

https://pubs.vmware.com/vsphere-50/index.jsp#com.vmware.vsphere.storage.doc_50/GUID-A5D85C33-A510-4A3E-8FC7-93E6BA0A048F.html?resultof=%2522%2576%256d%256b%2566%2573%2574%256f%256f%256c%2573%2522%2520%2522%2576%256d%256b%2566%2573%2574%256f%256f%256c%2522%2520


=======

Reformatting the local VMFS partition's block size in ESX 4.x (post-installation) (1013210)

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1013210

Frequently Asked Questions on VMware vSphere 5.x for VMFS-5 (2003813)

http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2003813


Performing a rescan of the storage on an ESX/ESXi host (1003988)

http://kb.vmware.com/selfservice/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=1003988

To search for new VMFS datastores, run this command:

vmkfstools -V

Note: This command does not generate any output.

If a new datastore has been detected, it is mounted in /vmfs/volumes/ using its friendly name (if it has one) or its UUID.



With USB devices the scratch partition is not created on the device


=======

Installing ESXi 5.x on a supported USB flash drive or SD flash card (2004784)


http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2004784

.....

Limitation when installing on a USB flash drive or SD flash card:

When installing ESXi onto a SD flash card, if the drive contains less than 8 GB of space, this prevents the allocation of a scratch partition onto the flash device. With USB devices the scratch partition is not created on the device during installation and will need to be configured after.  For more information, see Creating a persistent scratch location for ESXi 4.x and 5.x (1033696). VMware recommends using a retail purchased USB flash drive of 16 GB or larger so that the "extra" flash cells can prolong the life of the boot media but high quality parts of 4 GB or larger are sufficient to hold the extended coredump partition. 

To work around this limitation:
  1. Connect to the ESXi host via SSH. For more information, see Using Tech Support Mode in ESXi 4.1 and ESXi 5.x (1017910).
  2. Back up the existing boot.cfg file, located in /bootbank/, using this command:

    cp /bootbank/boot.cfg /bootbank/boot.bkp

  3. Open the boot.cfg file using VI editor. For more information, see Editing files on an ESX host using vi or nano (1020302).
  4. Modify the following line:

    kernelopt=no-auto-partition

    to 

    kernelopt=autoPartition=TRUE skipPartitioningSsds=TRUE autoPartitionCreateUSBCoreDumpPartition=TRUE

  5. Save and close the boot.cfg file.
  6. Restart the ESXi host.
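The vi-based steps above can also be done non-interactively with `cp` and `sed`. A minimal sketch: it demonstrates the edit against a scratch copy of boot.cfg so it can be dry-run anywhere; on the host you would point BOOTCFG at /bootbank/boot.cfg (the path from the KB). Assumes a sed with `-i` support (GNU or BusyBox).

```shell
#!/bin/sh
# Sketch of steps 2-6 above against a scratch copy of boot.cfg.
BOOTCFG=${BOOTCFG:-./boot.cfg}

# Stand-in file for the dry run; on ESXi the real file already exists.
printf 'bootstate=0\nkernelopt= installerDiskDumpSlotSize=2560 no-auto-partition\n' > "$BOOTCFG"

# Step 2: back up the existing file.
cp "$BOOTCFG" "$BOOTCFG.bkp"

# Steps 3-4: replace the whole kernelopt= line instead of editing in vi.
sed -i 's/^kernelopt=.*/kernelopt=autoPartition=TRUE skipPartitioningSsds=TRUE autoPartitionCreateUSBCoreDumpPartition=TRUE/' "$BOOTCFG"

# Verify the change; steps 5-6 are then just saving (done) and rebooting.
grep '^kernelopt=' "$BOOTCFG"
```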


====
installed (ESXi reinstalled as above).

=====

Original /bootbank/boot.cfg, opened in vi:
bootstate=0
kernel=tboot.b00
title=Loading VMware ESXi
kernelopt= installerDiskDumpSlotSize=2560 no-auto-partition
modules=b.b00 --- jumpstrt.gz --- useropts.gz --- k.b00 --- chardevs.b00 --- a.b
build=6.0.0-3380124
updated=1
- /bootbank/boot.cfg 1/7 14%

######################
modified to:
######################
bootstate=0
kernel=tboot.b00
title=Loading VMware ESXi
kernelopt=autoPartition=TRUE skipPartitioningSsds=TRUE autoPartitionCreateUSBCoreDumpPartition=TRUE
modules=b.b00 --- jumpstrt.gz --- useropts.gz --- k.b00 --- chardevs.b00 --- a.b00 --- user.b00 --- uc_intel.b00 --- uc_amd.b00 --- sb
build=6.0.0-3380124
updated=1
I /bootbank/boot.cfg [Modified] 4/7 57%
=== (ESC, then :wq, then ENTER to save and quit vi)
[root@esxi:~] cat /bootbank/boot.cfg
bootstate=0
kernel=tboot.b00
title=Loading VMware ESXi
kernelopt=autoPartition=TRUE skipPartitioningSsds=TRUE autoPartitionCreateUSBCoreDumpPartition=TRUE
modules=b.b00 --- jumpstrt.gz --- useropts.gz --- k.b00 --- chardevs.b00 --- a.b00 --- user.b00 --- uc_intel.b00 --- uc_amd.b00 --- sb.v00 --- s.v00 --- mtip32xx.v00 --- ata_pata.v00 --- ata_pata.v01 --- ata_pata.v02 --- ata_pata.v03 --- ata_pata.v04 --- ata_pata.v05 --- ata_pata.v06 --- ata_pata.v07 --- block_cc.v00 --- ehci_ehc.v00 --- elxnet.v00 --- emulex_e.v00 --- weaselin.t00 --- esx_dvfi.v00 --- ima_qla4.v00 --- ipmi_ipm.v00 --- ipmi_ipm.v01 --- ipmi_ipm.v02 --- lpfc.v00 --- lsi_mr3.v00 --- lsi_msgp.v00 --- lsu_hp_h.v00 --- lsu_lsi_.v00 --- lsu_lsi_.v01 --- lsu_lsi_.v02 --- lsu_lsi_.v03 --- lsu_lsi_.v04 --- misc_cni.v00 --- misc_dri.v00 --- net_bnx2.v00 --- net_bnx2.v01 --- net_cnic.v00 --- net_e100.v00 --- net_e100.v01 --- net_enic.v00 --- net_forc.v00 --- net_igb.v00 --- net_ixgb.v00 --- net_mlx4.v00 --- net_mlx4.v01 --- net_nx_n.v00 --- net_tg3.v00 --- net_vmxn.v00 --- nmlx4_co.v00 --- nmlx4_en.v00 --- nmlx4_rd.v00 --- nvme.v00 --- ohci_usb.v00 --- qlnative.v00 --- rste.v00 --- sata_ahc.v00 --- sata_ata.v00 --- sata_sat.v00 --- sata_sat.v01 --- sata_sat.v02 --- sata_sat.v03 --- sata_sat.v04 --- scsi_aac.v00 --- scsi_adp.v00 --- scsi_aic.v00 --- scsi_bnx.v00 --- scsi_bnx.v01 --- scsi_fni.v00 --- scsi_hps.v00 --- scsi_ips.v00 --- scsi_meg.v00 --- scsi_meg.v01 --- scsi_meg.v02 --- scsi_mpt.v00 --- scsi_mpt.v01 --- scsi_mpt.v02 --- scsi_qla.v00 --- uhci_usb.v00 --- xhci_xhc.v00 --- xorg.v00 --- vsanheal.v00 --- imgdb.tgz --- state.tgz
build=6.0.0-3380124
updated=1
[root@esxi:~]

reboot
-- after the reboot, nothing changed :(



[root@esxi:~] df
Filesystem     Bytes      Used Available Use% Mounted on
vfat       261853184 169873408  91979776  65% /vmfs/volumes/1e7934e3-47d70b38-1615-5dbf5e04c594
vfat       261853184      8192 261844992   0% /vmfs/volumes/f3716ce3-2e47c11d-b1ff-40e92ad34166
vfat       299712512 211386368  88326144  71% /vmfs/volumes/56a610da-cc6be0e4-7bfa-101f744c0550
[root@esxi:~]

20160123

Creating a VMFS datastore on the same 16 GB USB flash drive :)


ESXi (VMware-VMvisor-Installer-201601001-3380124.x86_64.iso) is installed on a SanDisk Cruzer Fit 16 GB USB flash drive.



======

[root@localhost:~] esxcli storage core device list
mpx.vmhba32:C0:T0:L0
   Display Name: Local USB Direct-Access (mpx.vmhba32:C0:T0:L0)
   Has Settable Display Name: false
   Size: 15267
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0
   Vendor: SanDisk
   Model: Cruzer Fit
   Revision: 1.27
   SCSI Level: 2
   Is Pseudo: false
   Status: on
   Is RDM Capable: false
   Is Local: true
   Is Removable: true
   Is SSD: false
   Is VVOL PE: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: unknown
   Attached Filters:
   VAAI Status: unsupported
   Other UIDs: vml.0000000000766d68626133323a303a30
   Is Shared Clusterwide: false
   Is Local SAS Device: false
   Is SAS: false
   Is USB: true
   Is Boot USB Device: true
   Is Boot Device: true
   Device Max Queue Depth: 1
   No of outstanding IOs with competing worlds: 32
   Drive Type: unknown
   RAID Level: unknown
   Number of Physical Drives: unknown
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: NO PROTECTION
   Supported Guard Types: NO GUARD SUPPORT
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false

[root@localhost:~] ls -alh /vmfs/devices/disks
total 960812721
drwxr-xr-x    2 root     root         512 Jan 23 17:21 .
drwxr-xr-x   15 root     root         512 Jan 23 17:21 ..
-rw-------    1 root     root       14.9G Jan 23 17:21 mpx.vmhba32:C0:T0:L0
-rw-------    1 root     root        4.0M Jan 23 17:21 mpx.vmhba32:C0:T0:L0:1
-rw-------    1 root     root      250.0M Jan 23 17:21 mpx.vmhba32:C0:T0:L0:5
-rw-------    1 root     root      250.0M Jan 23 17:21 mpx.vmhba32:C0:T0:L0:6
-rw-------    1 root     root      110.0M Jan 23 17:21 mpx.vmhba32:C0:T0:L0:7
-rw-------    1 root     root      286.0M Jan 23 17:21 mpx.vmhba32:C0:T0:L0:8
-rw-------    1 root     root        2.5G Jan 23 17:21 mpx.vmhba32:C0:T0:L0:9
lrwxrwxrwx    1 root     root          20 Jan 23 17:21 vml.0000000000766d68626133323a303a30 -> mpx.vmhba32:C0:T0:L0
lrwxrwxrwx    1 root     root          22 Jan 23 17:21 vml.0000000000766d68626133323a303a30:1 -> mpx.vmhba32:C0:T0:L0:1
lrwxrwxrwx    1 root     root          22 Jan 23 17:21 vml.0000000000766d68626133323a303a30:5 -> mpx.vmhba32:C0:T0:L0:5
lrwxrwxrwx    1 root     root          22 Jan 23 17:21 vml.0000000000766d68626133323a303a30:6 -> mpx.vmhba32:C0:T0:L0:6
lrwxrwxrwx    1 root     root          22 Jan 23 17:21 vml.0000000000766d68626133323a303a30:7 -> mpx.vmhba32:C0:T0:L0:7
lrwxrwxrwx    1 root     root          22 Jan 23 17:21 vml.0000000000766d68626133323a303a30:8 -> mpx.vmhba32:C0:T0:L0:8
lrwxrwxrwx    1 root     root          22 Jan 23 17:21 vml.0000000000766d68626133323a303a30:9 -> mpx.vmhba32:C0:T0:L0:9


[root@localhost:~] partedUtil getptbl "/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0"
gpt
1946 255 63 31266816
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
[root@localhost:~]
[root@localhost:~] # add a last partition, leaving a ~1 MB gap at the end of the flash disk (the same gap Linux partitioning tools leave)
[root@localhost:~] # this flash disk has 31266816 sectors, so the last sector index is 31266815
[root@localhost:~] # 31266815 - 2048 = 31264767 -> end sector for the new partition
[root@localhost:~] # and according to the doc, for ESXi/ESX 4.1 and later, use the command:
[root@localhost:~] # partedUtil setptbl "/vmfs/devices/disks/DeviceName" DiskLabel ["partNum startSector endSector type/guid attribute"]*
[root@localhost:~] # see "Using the partedUtil command line utility on ESXi and ESX (1036609)"
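The end-sector arithmetic can be checked directly in the shell (512-byte sectors assumed, so a 2048-sector gap is 1 MB):

```shell
#!/bin/sh
# End sector for the new partition: the disk reports 31266816 sectors,
# so the last sector index is 31266815; leave a 2048-sector (1 MB at
# 512 bytes/sector) gap at the end of the disk.
TOTAL_SECTORS=31266816
GAP_SECTORS=2048

END_SECTOR=$(( TOTAL_SECTORS - 1 - GAP_SECTORS ))
echo "$END_SECTOR"    # -> 31264767, as used in the setptbl command below
```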
[root@localhost:~] partedUtil getptbl "/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0"
gpt
1946 255 63 31266816
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
[root@localhost:~] partedUtil setptbl "/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0" gpt "1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B 128" "5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 0" "6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 0" "7 1032224 1257471 0" "8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 0" "9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 0" "10 7086080 31264767 AA31E02A400F11DB9590000C2911D1B8 0"
Invalid number of tokens

Invalid partition information: 7 1032224 1257471 0

Invalid Partition information

[root@localhost:~] partedUtil setptbl "/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0" gpt "1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B 128" "5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 0" "6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 0" "7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 0" "8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 0" "9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 0" "10 7086080 31264767 AA31E02A400F11DB9590000C2911D1B8 0"
gpt
0 0 0 0
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 0
10 7086080 31264767 AA31E02A400F11DB9590000C2911D1B8 0
[root@localhost:~] partedUtil getptbl "/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0"
gpt
1946 255 63 31266816
1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128
5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
8 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
9 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
10 7086080 31264767 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
[root@localhost:~] #Partition       GUID                              Type (Hex)  Type (Decimal)
[root@localhost:~] #VMFS Datastore  AA31E02A400F11DB9590000C2911D1B8  0xFB        251
[root@localhost:~] #etc.
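The GUID-to-type mapping seen in the getptbl output can be sketched as a small lookup (on the host itself, `partedUtil showGuids` should print the full table; the `guid_type` helper below is hypothetical and covers only the GUIDs from this session):

```shell
# hypothetical lookup for the partition type GUIDs seen in this session
guid_type() {
  case "$1" in
    C12A7328F81F11D2BA4B00A0C93EC93B) echo systemPartition ;;
    EBD0A0A2B9E5443387C068B6B72699C7) echo linuxNative ;;
    9D27538040AD11DBBF97000C2911D1B8) echo vmkDiagnostic ;;
    AA31E02A400F11DB9590000C2911D1B8) echo vmfs ;;
    *) echo unknown ;;
  esac
}
guid_type AA31E02A400F11DB9590000C2911D1B8   # vmfs
```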
[root@localhost:~] vmkfstools -C vmfs5 -b 1m -S NewDatastore /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:10
create fs deviceName:'/vmfs/devices/disks/mpx.vmhba32:C0:T0:L0:10', fsShortName:'vmfs5', fsName:'NewDatastore'
deviceFullPath:/dev/disks/mpx.vmhba32:C0:T0:L0:10 deviceFile:mpx.vmhba32:C0:T0:L0:10
ATS on device /dev/disks/mpx.vmhba32:C0:T0:L0:10: not supported
.
Checking if remote hosts are using this device as a valid file system. This may take a few seconds...
Creating vmfs5 file system on "mpx.vmhba32:C0:T0:L0:10" with blockSize 1048576 and volume label "NewDatastore".
Successfully created new volume: 56a3bdf4-65a64697-f032-101f744c0550
[root@localhost:~] #the last command shows how to create the "NewDatastore" datastore on ESXi from the CLI
[root@localhost:~] #To create a new VMFS volume from the ESX/ESXi host command line:
[root@localhost:~] #Manually creating a VMFS volume using vmkfstools -C (1009829)
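The general form of the creation step can be sketched as a command builder (the `make_vmfs_cmd` helper is hypothetical; it only echoes the command rather than running it). Note that newly created VMFS5 volumes reportedly use a unified 1 MB block size, so larger `-b` values such as 4m were a VMFS3-era option:

```shell
# hypothetical helper: build (but do not run) the vmkfstools create command
# args: device partNum blockSize label
make_vmfs_cmd() {
  echo "vmkfstools -C vmfs5 -b $3 -S $4 $1:$2"
}
make_vmfs_cmd /vmfs/devices/disks/mpx.vmhba32:C0:T0:L0 10 1m NewDatastore
```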
[root@localhost:~] esxcli storage core adapter list
HBA Name  Driver       Link State  UID           Capabilities  Description
--------  -----------  ----------  ------------  ------------  -------------------------------------------------------------------------
vmhba0    ahci         link-n/a    sata.vmhba0                 (0000:00:1f.2) Intel Corporation Cougar Point 6 port SATA AHCI Controller
vmhba32   usb-storage  link-n/a    usb.vmhba32                 () USB
vmhba33   ahci         link-n/a    sata.vmhba33                (0000:00:1f.2) Intel Corporation Cougar Point 6 port SATA AHCI Controller
vmhba34   ahci         link-n/a    sata.vmhba34                (0000:00:1f.2) Intel Corporation Cougar Point 6 port SATA AHCI Controller
vmhba35   ahci         link-n/a    sata.vmhba35                (0000:00:1f.2) Intel Corporation Cougar Point 6 port SATA AHCI Controller
vmhba36   ahci         link-n/a    sata.vmhba36                (0000:00:1f.2) Intel Corporation Cougar Point 6 port SATA AHCI Controller
vmhba37   ahci         link-n/a    sata.vmhba37                (0000:00:1f.2) Intel Corporation Cougar Point 6 port SATA AHCI Controller
[root@localhost:~] esxcli storage core adapter rescan --adapter vmhba32
[root@localhost:~] vmkfstools -V
[root@localhost:~] #the last commands follow "Performing a rescan of the storage on an ESX/ESXi host" (1003988)
[root@localhost:~]


result:







======

20160121

MAAS: Metal As A Service


http://maas.ubuntu.com/docs/

MAAS: Metal As A Service

This is the documentation for the MAAS project.

Metal as a Service – MAAS – lets you treat physical servers like virtual machines in the cloud. Rather than having to manage each server individually, MAAS turns your bare metal into an elastic cloud-like resource.

What does that mean in practice? Tell MAAS about the machines you want it to manage and it will boot them, check the hardware's okay, and have them waiting for when you need them. You can then pull nodes up, tear them down and redeploy them at will; just as you can with virtual machines in the cloud.

When you're ready to deploy a service, MAAS gives Juju the nodes it needs to power that service. It's as simple as that: no need to manually provision, check and, afterwards, clean-up. As your needs change, you can easily scale services up or down. Need more power for your Hadoop cluster for a few hours? Simply tear down one of your Nova compute nodes and redeploy it to Hadoop. When you're done, it's just as easy to give the node back to Nova.

MAAS is ideal where you want the flexibility of the cloud, and the hassle-free power of Juju charms, but you need to deploy to bare metal.

http://maas.io/tour

Take a tour

MAAS is accessible through both a web UI and the CLI, and it also has a RESTful API. Take a tour to see all MAAS features, including the new features in the MAAS 1.9 release.

Discovery and node listing

Take action

Deploy OS

Node details

Configure node interfaces

Configure node storage devices

Machine event log

Commissioning output




#EXTRACT4 ubuntu server

server 202011 #EXTRACT4: enable the “partner” repository: https://askubuntu.com/questions/14629/how-do-i-enable-the-partner-repository sud...