I have an OpenShift cluster with 2 nodes (a master and a slave). I want to change the config file of my HAProxy router, so I chose to use a ConfigMap.
I followed this tutorial: https://docs.openshift.org/latest/install_config/install/deploy_router.html
The ConfigMap is created, but the pod doesn't want to restart; I get this error:
I0830 12:35:37.112924 1 router.go:161] Router is including routes in all namespaces
E0830 12:35:37.372029 1 ratelimiter.go:50] error reloading router: exit status 1
[ALERT] 242/123537 (28) : [/usr/sbin/haproxy.main()] No enabled listener found (check the keywords) ! Exiting.
After I delete the "livenessProbe" and "readinessProbe" in the rc I can access my router pod, but the config file is empty.
When I do "findmnt -o +PROPAGATION" in the pod I get this:
TARGET SOURCE FSTYPE OPTIONS PROPAGATION
/ /dev/mapper/docker-253:0-202065893-4b0b4dede29e355551067e03212ee75cd293545839a9e5014525b8fc8453e5e4[/rootfs]
xfs rw,relat private
|-/proc proc proc rw,nosui private
| |-/proc/bus proc[/bus] proc ro,nosui private
| |-/proc/fs proc[/fs] proc ro,nosui private
| |-/proc/irq proc[/irq] proc ro,nosui private
| |-/proc/sys proc[/sys] proc ro,nosui private
| |-/proc/sysrq-trigger proc[/sysrq-trigger] proc ro,nosui private
| |-/proc/kcore tmpfs[/null] tmpfs rw,nosui private
| `-/proc/timer_stats tmpfs[/null] tmpfs rw,nosui private
|-/dev tmpfs tmpfs rw,nosui private
| |-/dev/pts devpts devpts rw,nosui private
| |-/dev/mqueue mqueue mqueue rw,nosui private
| |-/dev/termination-log /dev/mapper/centos-root[/var/lib/origin/openshift.local.volumes/pods/3deedc57-6eae-11e6-8091-020000a17bb0/containers/router/58cbfd4d]
xfs rw,relat private,slave
| `-/dev/shm shm tmpfs rw,nosui private
|-/sys sysfs sysfs ro,nosui private
| `-/sys/fs/cgroup tmpfs tmpfs ro,nosui private
| |-/sys/fs/cgroup/systemd cgroup[/system.slice/docker-297a37b2903e3a3bcd64d74a4e0c8e71d90cf240377bbc4b778e73ebda53af0c.scope]
cgroup ro,nosui private,slave
| |-/sys/fs/cgroup/cpuacct,cpu cgroup[/system.slice/docker-297a37b2903e3a3bcd64d74a4e0c8e71d90cf240377bbc4b778e73ebda53af0c.scope]
cgroup ro,nosui private,slave
| |-/sys/fs/cgroup/cpuset cgroup[/system.slice/docker-297a37b2903e3a3bcd64d74a4e0c8e71d90cf240377bbc4b778e73ebda53af0c.scope]
cgroup ro,nosui private,slave
| |-/sys/fs/cgroup/net_cls cgroup[/system.slice/docker-297a37b2903e3a3bcd64d74a4e0c8e71d90cf240377bbc4b778e73ebda53af0c.scope]
cgroup ro,nosui private,slave
| |-/sys/fs/cgroup/memory cgroup[/system.slice/docker-297a37b2903e3a3bcd64d74a4e0c8e71d90cf240377bbc4b778e73ebda53af0c.scope]
cgroup ro,nosui private,slave
| |-/sys/fs/cgroup/blkio cgroup[/system.slice/docker-297a37b2903e3a3bcd64d74a4e0c8e71d90cf240377bbc4b778e73ebda53af0c.scope]
cgroup ro,nosui private,slave
| |-/sys/fs/cgroup/perf_event cgroup[/system.slice/docker-297a37b2903e3a3bcd64d74a4e0c8e71d90cf240377bbc4b778e73ebda53af0c.scope]
cgroup ro,nosui private,slave
| |-/sys/fs/cgroup/devices cgroup[/system.slice/docker-297a37b2903e3a3bcd64d74a4e0c8e71d90cf240377bbc4b778e73ebda53af0c.scope]
cgroup ro,nosui private,slave
| |-/sys/fs/cgroup/freezer cgroup[/system.slice/docker-297a37b2903e3a3bcd64d74a4e0c8e71d90cf240377bbc4b778e73ebda53af0c.scope]
cgroup ro,nosui private,slave
| `-/sys/fs/cgroup/hugetlb cgroup[/system.slice/docker-297a37b2903e3a3bcd64d74a4e0c8e71d90cf240377bbc4b778e73ebda53af0c.scope]
cgroup ro,nosui private,slave
|-/run/secrets /dev/mapper/centos-root[/var/lib/docker/containers/297a37b2903e3a3bcd64d74a4e0c8e71d90cf240377bbc4b778e73ebda53af0c/secrets]
xfs rw,relat private,slave
| `-/run/secrets/kubernetes.io/serviceaccount
tmpfs tmpfs ro,relat private,slave
|-/etc/hosts /dev/mapper/centos-root[/var/lib/docker/containers/56f5ea1e5e2fb9392b9cb3cfc6eecc43d42eb23f9769793e6b2e4f7250c7cf5c/hosts]
xfs rw,relat private
|-/etc/resolv.conf /dev/mapper/centos-root[/var/lib/docker/containers/56f5ea1e5e2fb9392b9cb3cfc6eecc43d42eb23f9769793e6b2e4f7250c7cf5c/resolv.conf]
xfs rw,relat private
|-/etc/hostname /dev/mapper/centos-root[/var/lib/docker/containers/56f5ea1e5e2fb9392b9cb3cfc6eecc43d42eb23f9769793e6b2e4f7250c7cf5c/hostname]
xfs rw,relat private
`-/var/lib/haproxy/conf/custom tmpfs tmpfs rw,relat private,slave
Any help? Thanks.
You need to use a template file for the openshift router.
This explains in detail what you need to do.
https://docs.openshift.org/latest/install_config/install/deploy_router.html#using-configmap-replace-template
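In outline, per that page, the flow is: dump the default template, customize it, wrap it in a ConfigMap, mount it into the router deployment, and point the router at the new template. A sketch (customrouter and the mount path are the doc's example names; verify against the page and your router's dc name):

```shell
# Wrap the customized template file in a ConfigMap
oc create configmap customrouter --from-file=haproxy-config.template

# Mount the ConfigMap into the router and point the router at the template
oc set volume dc/router --add --overwrite \
  --name=config-volume \
  --mount-path=/var/lib/haproxy/conf/custom \
  --source='{"configMap": { "name": "customrouter"}}'
oc set env dc/router \
  TEMPLATE_FILE=/var/lib/haproxy/conf/custom/haproxy-config.template
```

Note the mount path matches the /var/lib/haproxy/conf/custom mount already visible in your findmnt output.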
I am currently running this command on a Linux machine to get pods older than 1 day:
kubectl get pod | awk 'match($5,/[0-9]+d/) {print $1}'
I want to be able to run the same command in Powershell. How could I do it?
kubectl get pod output:
NAME READY STATUS RESTARTS AGE
pod-name 1/1 Running 0 2h3m
pod-name2 1/1 Running 0 1d2h
pod-name3 1/1 Running 0 4d4h
kubectl get pod | awk 'match($5,/[0-9]+d/) {print $1}' output:
pod-name2
pod-name3
You can use:
$long_running_pods=(kubectl get pod | Select-Object -Skip 1 | ConvertFrom-String -PropertyNames NAME, READY, STATUS, RESTARTS, AGE | Where-Object {$_.AGE -match "[1-9][0-9]*d[0-9]{1,2}h"})
$long_running_pods.NAME
This will give you all pods which have been running for more than one day.
Example:
$long_running_pods=('NAME READY STATUS RESTARTS AGE',
'pod-name 1/1 Running 0 1d2h',
'pod-name2 1/1 Running 0 0d0h' | Select-Object -Skip 1 | ConvertFrom-String -PropertyNames NAME, READY, STATUS, RESTARTS, AGE | Where-Object {$_.AGE -match "[1-9][0-9]*d[0-9]{1,2}h"})
$long_running_pods.NAME
will print:
pod-name
Awk might be a little more convenient in this case. In PowerShell you have to prevent the $split array from being unrolled in the pipeline before Where-Object; it turns out the $split variable assigned there can still be referenced inside the ForEach-Object.
' NAME READY STATUS RESTARTS AGE
pod-name 1/1 Running 0 2h3m
pod-name2 1/1 Running 0 1d2h
pod-name3 1/1 Running 0 4d4h' | set-content file
get-content file |
where-object { $split = -split $_; $split[4] -match '[0-9]+d' } |
foreach-object { $split[0] }
pod-name2
pod-name3
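For comparison, the original awk filter can be checked locally by feeding it the sample output on stdin (pod names as in the question):

```shell
printf '%s\n' \
  'NAME        READY   STATUS    RESTARTS   AGE' \
  'pod-name    1/1     Running   0          2h3m' \
  'pod-name2   1/1     Running   0          1d2h' \
  'pod-name3   1/1     Running   0          4d4h' |
awk 'match($5,/[0-9]+d/) {print $1}'
# prints:
# pod-name2
# pod-name3
```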
I am trying to build Yocto Zeus in Podman and am getting the below error. I noticed that the sigcontext.h header in ./recipe-sysroot/usr/include/ only has the 32-bit version, whereas the unistd.h file was copied under the asm-generic directory:
./recipe-sysroot/usr/include/asm/sigcontext-32.h
./recipe-sysroot/usr/include/asm-generic/unistd.h
| ../sysdeps/unix/sysv/linux/sys/syscall.h:24:10: fatal error: asm/unistd.h: No such file or directory
| 24 | #include <asm/unistd.h>
| | ^~~~~~~~~~~~~~
| compilation terminated.
| Traceback (most recent call last):
| File "../scripts/gen-as-const.py", line 120, in <module>
| main()
| File "../scripts/gen-as-const.py", line 116, in main
| consts = glibcextract.compute_c_consts(sym_data, args.cc)
| File "/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/git/scripts/glibcextract.py", line 62, in compute_c_consts
| subprocess.check_call(cmd, shell=True)
| File "/usr/lib/python3.5/subprocess.py", line 581, in check_call
| raise CalledProcessError(retcode, cmd)
| subprocess.CalledProcessError: Command 'arm-poky-linux-gnueabi-gcc -mthumb -mfpu=neon -mfloat-abi=hard -mcpu=cortex-a7 --sysroot=/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/recipe-sysroot -std=gnu11 -fgnu89-inline -O2 -pipe -g -feliminate-unused-debug-types -fmacro-prefix-map=/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0=/usr/src/debug/glibc/2.30-r0 -fdebug-prefix-map=/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0=/usr/src/debug/glibc/2.30-r0 -fdebug-prefix-map=/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/recipe-sysroot= -fdebug-prefix-map=/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/recipe-sysroot-native= -Wall -Wwrite-strings -Wundef -Werror -fmerge-all-constants -frounding-math -fno-stack-protector -Wstrict-prototypes -Wold-style-definition -fmath-errno -ftls-model=initial-exec -I../include -I/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi/csu -I/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi -I../sysdeps/unix/sysv/linux/arm -I../sysdeps/arm/nptl -I../sysdeps/unix/sysv/linux/include -I../sysdeps/unix/sysv/linux -I../sysdeps/nptl -I../sysdeps/pthread -I../sysdeps/gnu -I../sysdeps/unix/inet -I../sysdeps/unix/sysv -I../sysdeps/unix/arm -I../sysdeps/unix -I../sysdeps/posix -I../sysdeps/arm/armv7/multiarch -I../sysdeps/arm/armv7 -I../sysdeps/arm/armv6t2 -I../sysdeps/arm/armv6 -I../sysdeps/arm/include -I../sysdeps/arm -I../sysdeps/wordsize-32 -I../sysdeps/ieee754/flt-32 -I../sysdeps/ieee754/dbl-64 -I../sysdeps/ieee754 -I../sysdeps/generic -I.. -I../libio -I. 
-nostdinc -isystem /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/recipe-sysroot-native/usr/bin/arm-poky-linux-gnueabi/../../lib/arm-poky-linux-gnueabi/gcc/arm-poky-linux-gnueabi/9.2.0/include -isystem /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/recipe-sysroot-native/usr/bin/arm-poky-linux-gnueabi/../../lib/arm-poky-linux-gnueabi/gcc/arm-poky-linux-gnueabi/9.2.0/include-fixed -isystem /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/recipe-sysroot/usr/include -D_LIBC_REENTRANT -include /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi/libc-modules.h -DMODULE_NAME=libc -include ../include/libc-symbols.h -DTOP_NAMESPACE=glibc -DGEN_AS_CONST_HEADERS -MD -MP -MF /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi/tcb-offsets.h.dT -MT '/home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi/tcb-offsets.h.d /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi/tcb-offsets.h' -S -o /tmp/tmp2wx6srl6/test.s -x c - < /tmp/tmp2wx6srl6/test.c' returned non-zero exit status 1
| make[2]: *** [../Makerules:271: /home/dev/inode_zeus/build/tmp/work/cortexa7t2hf-neon-poky-linux-gnueabi/glibc/2.30-r0/build-arm-poky-linux-gnueabi/tcb-offsets.h] Error 1
| make[2]: *** Waiting for unfinished jobs....
| In file included from ../signal/signal.h:291,
| from ../include/signal.h:2,
| from ../misc/sys/param.h:28,
| from ../include/sys/param.h:1,
| from ../sysdeps/generic/hp-timing-common.h:39,
| from ../sysdeps/generic/hp-timing.h:25,
| from ../nptl/descr.h:27,
| from ../sysdeps/arm/nptl/tls.h:42,
| from ../sysdeps/unix/sysv/linux/arm/tls.h:23,
| from ../include/link.h:51,
| from ../include/dlfcn.h:4,
| from ../sysdeps/generic/ldsodefs.h:32,
| from ../sysdeps/arm/ldsodefs.h:38,
| from ../sysdeps/gnu/ldsodefs.h:46,
| from ../sysdeps/unix/sysv/linux/ldsodefs.h:25,
| from ../sysdeps/unix/sysv/linux/arm/ldsodefs.h:22,
| from <stdin>:2:
| ../sysdeps/unix/sysv/linux/bits/sigcontext.h:30:11: fatal error: asm/sigcontext.h: No such file or directory
| 30 | # include <asm/sigcontext.h>
| | ^~~~~~~~~~~~~~~~~~
| compilation terminated.
|
ERROR: Task (/home/dev/inode_zeus/sources/poky/meta/recipes-core/glibc/glibc_2.30.bb:do_compile) failed with exit code '1'
DEBUG: Teardown for bitbake-worker
NOTE: Tasks Summary: Attempted 437 tasks of which 430 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/home/dev/inode_zeus/sources/poky/meta/recipes-core/glibc/glibc_2.30.bb:do_compile
Please note that I am able to build the Jethro version using the Podman container, which runs Ubuntu 16.04.
But the Zeus build is failing. Can someone tell me why these errors are seen?
I was able to resolve the issue by mapping the Yocto build directory to a host directory.
The Yocto build then worked like a charm:
podman --storage-opt overlay.mount_program=/usr/bin/fuse-overlayfs --storage-opt overlay.mountopt=nodev,metacopy=on,noxattrs=1 run -it -v $PWD/my_yocto/build_output:/home/oibdev/yocto/build 4cbcb3842ed5
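The essential part is the -v bind mount; a minimal sketch using the image ID and paths from the command above (adjust both for your setup):

```shell
# Keep the Yocto build tree on the host instead of the container's overlay storage
mkdir -p "$PWD/my_yocto/build_output"
podman run -it \
  -v "$PWD/my_yocto/build_output:/home/oibdev/yocto/build" \
  4cbcb3842ed5
```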
I have master-slave (primary-standby) streaming replication set up on 2 physical nodes. Although the replication is working correctly and walsender and walreceiver both work fine, the files in the pg_wal folder on the slave node are not getting removed. This is a problem I have been facing every time I try to bring the slave node back after a crash. Here are the details of the problem:
postgresql.conf on master and slave/standby node
# Connection settings
# -------------------
listen_addresses = '*'
port = 5432
max_connections = 400
tcp_keepalives_idle = 0
tcp_keepalives_interval = 0
tcp_keepalives_count = 0
# Memory-related settings
# -----------------------
shared_buffers = 32GB # Physical memory 1/4
##DEBUG: mmap(1652555776) with MAP_HUGETLB failed, huge pages disabled: Cannot allocate memory
#huge_pages = try # on, off, or try
#temp_buffers = 16MB # depends on DB checklist
work_mem = 8MB # Need tuning
effective_cache_size = 64GB # Physical memory 1/2
maintenance_work_mem = 512MB
wal_buffers = 64MB
# WAL/Replication/HA settings
# --------------------
wal_level = logical
synchronous_commit = remote_write
archive_mode = on
archive_command = 'rsync -a %p /TPINFO01/wal_archive/%f'
#archive_command = ':'
max_wal_senders=5
hot_standby = on
restart_after_crash = off
wal_sender_timeout = 5000
wal_receiver_status_interval = 2
max_standby_streaming_delay = -1
max_standby_archive_delay = -1
hot_standby_feedback = on
random_page_cost = 1.5
max_wal_size = 5GB
min_wal_size = 200MB
checkpoint_completion_target = 0.9
checkpoint_timeout = 30min
# Logging settings
# ----------------
log_destination = 'csvlog,syslog'
logging_collector = on
log_directory = 'pg_log'
log_filename = 'postgresql_%Y%m%d.log'
log_truncate_on_rotation = off
log_rotation_age = 1h
log_rotation_size = 0
log_timezone = 'Japan'
log_line_prefix = '%t [%p]: [%l-1] %h:%u#%d:[PG]:CODE:%e '
log_statement = all
log_min_messages = info # DEBUG5
log_min_error_statement = info # DEBUG5
log_error_verbosity = default
log_checkpoints = on
log_lock_waits = on
log_temp_files = 0
log_connections = on
log_disconnections = on
log_duration = off
log_min_duration_statement = 1000
log_autovacuum_min_duration = 3000ms
track_functions = pl
track_activity_query_size = 8192
# Locale/display settings
# -----------------------
lc_messages = 'C'
lc_monetary = 'en_US.UTF-8' # ja_JP.eucJP
lc_numeric = 'en_US.UTF-8' # ja_JP.eucJP
lc_time = 'en_US.UTF-8' # ja_JP.eucJP
timezone = 'Asia/Tokyo'
bytea_output = 'escape'
# Auto vacuum settings
# -----------------------
autovacuum = on
autovacuum_max_workers = 3
autovacuum_vacuum_cost_limit = 200
auto_explain.log_min_duration = 10000
auto_explain.log_analyze = on
include '/var/lib/pgsql/tmp/rep_mode.conf' # added by pgsql RA
recovery.conf
primary_conninfo = 'host=xxx.xx.xx.xx port=5432 user=replica application_name=xxxxx keepalives_idle=60 keepalives_interval=5 keepalives_count=5'
restore_command = 'rsync -a /TPINFO01/wal_archive/%f %p'
recovery_target_timeline = 'latest'
standby_mode = 'on'
Result of pg_stat_replication on master/primary
select * from pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid | 8868
usesysid | 16420
usename | xxxxxxx
application_name | sub_xxxxxxx
client_addr | xx.xx.xxx.xxx
client_hostname |
client_port | 21110
backend_start | 2021-06-10 10:55:37.61795+09
backend_xmin |
state | streaming
sent_lsn | 97AC/589D93B8
write_lsn | 97AC/589D93B8
flush_lsn | 97AC/589D93B8
replay_lsn | 97AC/589D93B8
write_lag |
flush_lag |
replay_lag |
sync_priority | 0
sync_state | async
-[ RECORD 2 ]----+------------------------------
pid | 221533
usesysid | 3541624258
usename | replica
application_name | xxxxx
client_addr | xxx.xx.xx.xx
client_hostname |
client_port | 55338
backend_start | 2021-06-12 21:26:40.192443+09
backend_xmin | 72866358
state | streaming
sent_lsn | 97AC/589D93B8
write_lsn | 97AC/589D93B8
flush_lsn | 97AC/589D93B8
replay_lsn | 97AC/589D93B8
write_lag |
flush_lag |
replay_lag |
sync_priority | 1
sync_state | sync
Steps I followed to bring the standby node back after a crash:
On the master: select pg_start_backup('backup');
rsync the data folder and the wal_archive folder from the master/primary to the slave/standby
On the master: select pg_stop_backup();
Restart postgres on the slave/standby node.
This brought the slave/standby node back in sync with the master, and it has been working fine since then.
On the primary/master node the pg_wal folder gets its files removed after roughly 2 hours, but the files on the slave/standby node are not removed. On the standby, almost all of the files in pg_wal also have a <filename>.done entry in the archive_status folder.
I guess the problem can go away if I perform a switchover, but I still want to understand the reason why it is happening.
Please note, I am also trying to find answers to the following questions:
Which process writes the files to pg_wal on the slave/standby node? I am following this link:
https://severalnines.com/database-blog/postgresql-streaming-replication-deep-dive
Which parameter removes the files from the pg_wal folder on the standby node?
Do they need to go to wal_archive folder on the disk just like they go to wal_archive folder on the master node?
You didn't describe omitting pg_replslot during your rsync, as the docs recommend. If you didn't omit it, your replica now has a replication slot that is a clone of the one on the master. But since nothing ever connects to that slot on the replica and advances its cutoff, the WAL never gets released for recycling. To fix it, you just need to shut down the replica, remove that directory, restart it, and wait for the next restart point to finish.
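A rough sketch of that check and cleanup on the standby (assumes $PGDATA points at the standby's data directory; the slot listing comes from the pg_replication_slots view):

```shell
# On the standby: an inactive slot here is likely the clone copied by the rsync
psql -c "SELECT slot_name, active, restart_lsn FROM pg_replication_slots;"

# If one shows up, stop the server, remove the leftover slot directory, restart
pg_ctl -D "$PGDATA" stop -m fast
rm -rf "$PGDATA"/pg_replslot/*
pg_ctl -D "$PGDATA" start
```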
Do they need to go to wal_archive folder on the disk just like they go to wal_archive folder on the master node?
No, that is optional, not necessary. It only happens if you set archive_mode = always.
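If you did want the standby to archive into its own wal_archive folder, a minimal sketch mirroring the settings shown earlier would be (archive_mode = always instead of on is the only change):

```
# postgresql.conf on the standby -- optional standby-side archiving
archive_mode = always
archive_command = 'rsync -a %p /TPINFO01/wal_archive/%f'
```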
Is there any cmdlet-based way to find out whether a disk is fixed or removable, given code like this?
$disk = Get-Disk -Number 1
Get-DiskDriveType $disk
Where Get-DiskDriveType should return either Removable or Fixed.
Inventory Drive Types by Using PowerShell
https://blogs.technet.microsoft.com/heyscriptingguy/2014/09/10/inventory-drive-types-by-using-powershell
Two methods:
Get-Volume
DriveLetter FileSystemLabel FileSystem DriveType HealthStatus SizeRemaining Size
----------- ----------- ---------- --------- ---------- ---------- ----
C SSD NTFS Fixed Healthy 75.38 GB 148.53 GB
E HybridTe... NTFS Fixed Healthy 560.71 GB 931.39 GB
D FourTB_B... NTFS Fixed Healthy 1.5 TB 3.64 TB
F TwoTB_BU... NTFS Fixed Healthy 204.34 GB 1.82 TB
G USB3 NTFS Removable Healthy 6.73 GB 58.89 GB
Recovery NTFS Fixed Healthy 22.96 MB 300 MB
H CD-ROM Healthy 0 B 0 B
Or
$hash = @{
2 = "Removable disk"
3 = "Fixed local disk"
4 = "Network disk"
5 = "Compact disk"
}
Get-CimInstance Win32_LogicalDisk |
Select DeviceID, VolumeName,
@{LABEL='TypeDrive';EXPRESSION={$hash.item([int]$_.DriveType)}}
Get-Volume | Where-Object {$_.DriveType -eq 'removable'} | Get-Partition | Get-Disk | Where-Object {$_.Number -eq $diskNumber}
I am working on a Powershell script to monitor a SAN.
I successfully extracted a text file containing all the values from the system in Powershell with this code:
& "NaviSecCli.exe" -user xxxx -password xxxx -h host -f "C:\LUNstate.txt" lun -list
$Path = "C:\LUNstate.txt"
$Text = "Capacity \(GBs\)"
$Name = "^Name"
Get-Content $Path | Select-String -pattern $Text,$Name
This generates the following output:
Name: TEST-DATASTORE-1
User Capacity (GBs): 1536.000
Consumed Capacity (GBs): 955.112
Name: CV Snapshot Mountpoint
User Capacity (GBs): 1024.000
Consumed Capacity (GBs): 955.112
Now I can split the values through the colon, by putting the output into a variable:
$LUNArray = Get-Content $Path | Select-String -pattern $Text,$Name
$LUNArray | foreach {
$LUNArray = $_ -split ': '
Write-Host $LUNArray[0]
Write-Host $LUNArray[1]
}
The only interesting data is stored in $LUNArray[1], so I can just leave out Write-Host $LUNArray[0] which gives me the following output:
TEST-DATASTORE-1
1536.000
955.112
CV Snapshot Mountpoint
1024.000
955.112
Now the tricky part, I would like to put the data into a multi dimensional array. So I would get the following array layout:
LUN Usercap ConsCap
TEST-DATASTORE-1 1536.000 955.112
CV Snapshot Mountpoint 1024.000 955.112
The input file looks like this:
LOGICAL UNIT NUMBER 201
Name: TEST-DATASTORE-1
UID: 60:06:E4:E3:11:50:E4:E3:11:20:A4:D0:C6:E4:E3:11
Current Owner: SP B
Default Owner: SP B
Allocation Owner: SP B
User Capacity (Blocks): 3221225472
User Capacity (GBs): 1536.000
Consumed Capacity (Blocks): 2005641216
Consumed Capacity (GBs): 956.364
Pool Name: Pool HB Hasselt
Raid Type: Mixed
Offset: 0
Auto-Assign Enabled: DISABLED
Auto-Trespass Enabled: DISABLED
Current State: Ready
Status: OK(0x0)
Is Faulted: false
Is Transitioning: false
Current Operation: None
Current Operation State: N/A
Current Operation Status: N/A
Current Operation Percent Completed: 0
Is Pool LUN: Yes
Is Thin LUN: Yes
Is Private: No
Is Compressed: No
Tiering Policy: Lowest Available
Initial Tier: Lowest Available
Tier Distribution:
Capacity: 100.00%
LOGICAL UNIT NUMBER 63920
Name: CV Snapshot Mountpoint
UID: 60:50:38:00:14:50:38:00:C6:64:50:38:00:50:38:00
Current Owner: SP B
Default Owner: SP B
Allocation Owner: SP B
User Capacity (Blocks): 2147483648
User Capacity (GBs): 1024.000
Consumed Capacity (Blocks): 2005641216
Consumed Capacity (GBs): 956.364
Pool Name: Pool HB Hasselt
Raid Type: Mixed
Offset: 0
Auto-Assign Enabled: DISABLED
Auto-Trespass Enabled: DISABLED
Current State: Ready
Status: OK(0x0)
Is Faulted: false
Is Transitioning: false
Current Operation: None
Current Operation State: N/A
Current Operation Status: N/A
Current Operation Percent Completed: 0
Is Pool LUN: Yes
Is Thin LUN: Yes
Is Private: No
Is Compressed: No
Tiering Policy: Lowest Available
Initial Tier: Lowest Available
Tier Distribution:
Capacity: 100.00%
...
$filePath = 'absolute path'
$content = [IO.File]::ReadAllText($filePath)
[regex]::Matches(
$content,
'(?x)
Name: [ ]* ([^\n]+) # name
\n User [ ] (Capacity) [^:]+: [ ]* ([^\n]+) # capacity
\n Consumed [ ] \2 [^:]+:[ ]* ([^\n]+)' # Consumed
) |
ForEach-Object {
$LUN = $_.groups[1].value
$Usercap = $_.groups[3].value
$ConsCap = $_.groups[4].value
# process $Lun, $Usercap and $ConsCap
}
Build a list of custom objects, like this:
& "NaviSecCli.exe" -user xxxx -password xxxx -h host -f "C:\LUNstate.txt" lun -list
$datafile = 'C:\LUNstate.txt'
$pattern = 'Name:\s+(.*)[\s\S]+(User Capacity).*?:\s+(.*)\s+(Consumed Capacity).*?:\s+(.*)'
$LUNArray = (Get-Content $datafile | Out-String) -split '\r\n(\r\n)+' |
Select-String $pattern -AllMatches |
Select-Object -Expand Matches |
% {
New-Object -Type PSObject -Property @{
'LUN' = $_.Groups[1].Value
$_.Groups[2].Value = $_.Groups[3].Value
$_.Groups[4].Value = $_.Groups[5].Value
}
}
The data can be displayed for instance like this:
"{0}: {1}" -f $LUNArray[1].LUN, $LUNArray[1].'Consumed Capacity'