How do I remove a filesystem from a Solaris Live Upgrade BE?

I have a spare disk on my T5440 Solaris 10 box that I want to use for extra ZFS filesystems.
The problem is that this disk was mounted in my original OS installation, and when I carried out a Live Upgrade its mount point was carried over into the new boot environment (BE).
So when I try to create a zpool on this disk, Solaris complains that it is in use.
How can I get c0t0d0 into a state where I can newfs it or create a zpool on it?
root@solaris>zpool create -f spare_pool c0t0d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c0t0d0s7 is in use for live upgrade /export/home.
Please see ludelete(1M).
root@solaris>
root@solaris>lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
new_zfs_BE                 yes      yes    yes       no     -
root@solaris>lufslist new_zfs_BE
               boot environment name: new_zfs_BE
               This boot environment is currently active.
               This boot environment will be active on next system boot.

Filesystem                fstype    device size Mounted on    Mount Options
------------------------- -------- ------------ ------------- --------------
/dev/zvol/dsk/rpool2/swap swap      34359738368 -             -
rpool2/ROOT/new_zfs_BE    zfs        5213962240 /             -
/dev/dsk/c0t0d0s7         ufs      121010061312 /export/home  -
rpool2                    zfs       42872619520 /rpool2       -

In your case /export/home is already mounted from rpool2 and the BE is also trying to mount it from /dev/dsk/c0t0d0s7; because of this you will not be able to delete or patch the BE.
To recover from this issue, hand-edit the /etc/lu/ICF.* file and delete the line below:
/dev/dsk/c0t0d0s7 ufs 121010061312 /export/home -
Then try to create your pool again.
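A minimal sketch of that recovery, assuming the stale entry lives in /etc/lu/ICF.1 (check which ICF.* file actually mentions the slice first) and keeping a backup before editing:
grep c0t0d0s7 /etc/lu/ICF.*                          # find the ICF file with the stale entry
cp /etc/lu/ICF.1 /etc/lu/ICF.1.bak                   # back it up (ICF.1 is an assumption)
grep -v c0t0d0s7 /etc/lu/ICF.1.bak > /etc/lu/ICF.1   # drop the /export/home line
zpool create spare_pool c0t0d0                       # the disk should now be free for ZFS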

Service Fabric local machine deployment fails with unclear error

When trying to debug a Service Fabric application locally, it fails during deployment:
1>------ Build started: Project: Project.TestServer.Contracts, Configuration: Debug Any CPU ------
1>Project.TestServer.Contracts -> D:\Projects\Project.Test\Project.TestServer.Contracts\bin\Debug\netstandard2.1\Project.TestServer.Contracts.dll
2>------ Build started: Project: Project.TestServer, Configuration: Debug Any CPU ------
2>Waiting for output folder cleanup...
2>Output folder cleanup has been completed.
2>Project.TestServer -> D:\Projects\Project.Test\Project.TestServer\bin\Debug\netcoreapp3.1\win7-x64\Project.TestServer.dll
2>Project.TestServer -> D:\Projects\Project.Test\Project.TestServer\bin\Debug\netcoreapp3.1\win7-x64\Project.TestServer.Views.dll
3>------ Build started: Project: Project.TestServer.ServiceFabric, Configuration: Debug x64 ------
4>------ Deploy started: Project: Project.TestServer.ServiceFabric, Configuration: Debug x64 ------
4>C:\ProgramData\Microsoft\Crypto\Keys\33c99d3358d005d142e356b6d*******_8f15e82c-1deb-4d62-b94a-196c3a******
========== Build: 3 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
========== Deploy: 0 succeeded, 1 failed, 0 skipped ==========
What could this line mean?
C:\ProgramData\Microsoft\Crypto\Keys\33c99d3358d005d142e356b6d*******_8f15e82c-1deb-4d62-b94a-196c3a******
I had this same issue for the past day or so, and I was able to resolve it by searching my OS (C:\) drive for the first part of the key name, {first part}_{the rest}.
I found a copy of the original key in "C:\Users\youruser\AppData\Roaming\Microsoft\Crypto\Keys" and copied it over to "C:\ProgramData\Microsoft\Crypto\Keys".
After doing this, the app was able to run and deploy again on my local machine.
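Roughly, that search-and-copy can be done from an elevated command prompt like this (a sketch only; the wildcard matches the visible part of the key name from the deploy output, and the exact file name will differ on your machine):
dir /b "%APPDATA%\Microsoft\Crypto\Keys\33c99d3358d005d142e356b6d*"         :: locate the key file under your user profile
copy "%APPDATA%\Microsoft\Crypto\Keys\<matched key file>" "C:\ProgramData\Microsoft\Crypto\Keys\"   :: copy it to the machine-wide key store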
This solution by ravipal worked for me:
The issue is that the ASP.NET development certificate being imported to the Local Computer store was incomplete. We are working on addressing this issue in the VS tooling. Meanwhile, please use the following workaround, which is needed only once per machine.
Export the ASP.NET development certificate:
dotnet dev-certs https -ep "%TEMP%\aspcert.pfx" -p <password> (choose any password)
Launch the local machine certificate manager.
Import the certificate that was exported in step 1 (%TEMP%\aspcert.pfx) into both 'Personal' and 'Trusted Root Certification Authorities' of the Local Computer. Use all the default options while importing the certificate.
Now the deployment of the SF application will work.
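If you would rather script the import than use the certificate manager UI, a rough command-line equivalent from an elevated prompt looks like this (MyDevCertPass is a placeholder for whatever password you chose; certutil without -user targets the Local Computer stores):
dotnet dev-certs https -ep "%TEMP%\aspcert.pfx" -p MyDevCertPass
certutil -f -p MyDevCertPass -importpfx My "%TEMP%\aspcert.pfx"     :: Personal store
certutil -f -p MyDevCertPass -importpfx Root "%TEMP%\aspcert.pfx"   :: Trusted Root Certification Authorities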

Mounting a bucket with fstab not working (newbie)

I'm new to GCP and to Linux, and I'm trying to mount a bucket on my CentOS instance using gcsfuse.
I first tried a script running at boot, but it was not working, so I switched to fstab (people told me it is much better).
But I get this error when I try to ls my mount point:
ls: reading directory .: Input/output error
Here is my fstab file:
#
# /etc/fstab
# Created by anaconda on Tue Mar 26 23:07:36 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=de2d3dce-cce3-47a8-a0fa-5bfe54e611ab / xfs defaults 0 0
mybucket /mount/to/point gcsfuse rw,allow_other,uid=1001,gid=1001
I followed: https://github.com/GoogleCloudPlatform/gcsfuse/blob/master/docs/mounting.md
Thanks for your time.
Okay, so I just had to wait two minutes for Google auth to grant my key. Basically, it works.
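For anyone hitting the same Input/output error at boot, an fstab entry along these lines is a reasonable sketch: _netdev delays the mount until the network is up, and key_file (the path here is only an example) points gcsfuse at a service-account key instead of relying on the instance's default credentials:
mybucket /mount/to/point gcsfuse rw,_netdev,allow_other,uid=1001,gid=1001,key_file=/etc/gcsfuse/sa-key.json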

hadoop fs -copyFromLocal localfile.txt cos://remotefile.txt => Failed to create /disk2/s3a

I'm trying to upload a file to cloud object storage from IBM Analytics Engine:
$ hadoop fs -copyFromLocal LICENSE-2.0.txt \
cos://xxxxx/LICENSE-2.0.txt
However, I'm receiving warnings about failure to create disks:
18/01/26 17:47:47 WARN fs.LocalDirAllocator$AllocatorPerContext: Failed to create /disk1/s3a
18/01/26 17:47:47 WARN fs.LocalDirAllocator$AllocatorPerContext: Failed to create /disk2/s3a
Note that even though I receive this warning, the file is still uploaded:
$ hadoop fs -ls cos://xxxxx/LICENSE-2.0.txt
-rw-rw-rw- 1 clsadmin clsadmin 11358 2018-01-26 17:49 cos://xxxxx/LICENSE-2.0.txt
The problem seems to be:
$ grep -B2 -C1 'disk' /etc/hadoop/conf/core-site.xml
<property>
<name>fs.s3a.buffer.dir</name>
<value>/disk1/s3a,/disk2/s3a,/tmp/s3a</value>
</property>
$ ls -lh /disk1 /disk2
ls: cannot access /disk1: No such file or directory
ls: cannot access /disk2: No such file or directory
What are the implications of these warnings? The /tmp/s3a folder does exist, so can we ignore the warnings about these other folders?
The Hadoop property fs.s3a.buffer.dir supports a list (comma-separated values) and points to local paths. When a path is missing, the warnings do appear, but they can be safely ignored since they are harmless. If the same command had been run from within a data node, the warning would not show up. Regardless of the warning, the file is still copied to Cloud Object Storage, so there is no other impact.
The idea behind having multiple values in fs.s3a.buffer.dir ('/disk1/s3a,/disk2/s3a,/tmp/s3a') is that when Hadoop jobs are run on a cluster with Cloud Object Storage, the map-reduce tasks are scheduled on data nodes, which have the additional disks /disk1 and /disk2 and more disk capacity than the management nodes.
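If you want to silence the warning on a management node anyway, two rough options (sketches, assuming root access and the clsadmin user shown in the listing above) are to create the missing directories, or to point the buffer dir at /tmp for a single command:
sudo mkdir -p /disk1/s3a /disk2/s3a && sudo chown clsadmin: /disk1/s3a /disk2/s3a
hadoop fs -Dfs.s3a.buffer.dir=/tmp/s3a -copyFromLocal LICENSE-2.0.txt cos://xxxxx/LICENSE-2.0.txt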

Can't mount DRBD device to directory

I installed DRBD to replicate data between two hosts. After the installation succeeded, I checked the DRBD status:
root@host3:~# cat /proc/drbd
version: 8.3.13 (api:88/proto:86-96)
GIT-hash: 83ca112086600faacab2f157bc5a9324f7bd7f77 build by root@sighted, 2012-10-09 12:47:51
0: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
ns:105400 nr:0 dw:0 dr:106396 al:0 bm:20 lo:0 pe:0 ua:0 ap:0 ep:1 wo:d oos:0
But when I try to run mount /dev/sdb1 /mnt (/dev/sdb1 is the DRBD backing device), it does not work. This is the error:
root@host3:~# mount /dev/sdb1 /mnt/
mount: unknown filesystem type 'drbd'
So, what can I do to mount the DRBD device?
You're in Primary/Primary, so I assume you want to format the DRBD device with a filesystem that can be mounted and accessed by more than one system simultaneously, like GFS2 or OCFS2. You'll need to do this before you can mount the device.
This type of configuration has a lot of requirements and is probably too much to cover in a single post. However, you should be able to follow the GFS2 primer in LINBIT's DRBD user's guide here:
https://www.linbit.com/drbd-user-guide/users-guide-drbd-8-4/#s-gfs-primer
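Note also that you mount the DRBD device node, not the backing disk /dev/sdb1. As a rough single-primary sketch, assuming the resource is minor number 0 and using ext4 just to verify the device works (dual-primary needs GFS2/OCFS2 plus the cluster stack described in the guide):
mkfs.ext4 /dev/drbd0     # format the DRBD device itself, never the backing /dev/sdb1
mount /dev/drbd0 /mnt    # mount on one node only; never mount ext4 on both primaries at once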

Postgres Streaming Replication Disk Usage Discrepancy

Just a quick question - apologies if it's been asked before, I couldn't find it.
We are using asynchronous streaming replication with postgres and have noticed that the disk usage for the database can vary between the master and the replica, even though the databases appear to be synchronising correctly.
At the moment the discrepancy is quite small, but it has on occasion been in the region of several GB.
At present, this is the sync status:
Master:
master=# SELECT pg_current_xlog_location();
pg_current_xlog_location
--------------------------
35C/F142C98
(1 row)
Slave:
slave=# select pg_last_xlog_receive_location();
pg_last_xlog_receive_location
-------------------------------
35C/F142C98
(1 row)
The disk usage is as follows. Again, I realise the discrepancy is currently quite small (~1.5GiB), but yesterday it was several GB.
Master
-bash-4.1$ df -m /var/lib/pgsql/
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/mapper/lv_pgsql
401158 302898 98261 76% /var/lib/pgsql
Slave:
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/mapper/lv_pgsql
401158 301263 99895 76% /var/lib/pgsql
I should clarify that the archive command is set to archive to a different partition on the master. I guess what I'm asking is:
- Is the current disk usage discrepancy normal?
- How can it be explained?
- How much of a discrepancy should there be before I get worried?
Thanks in advance for any help.
Edit: I'm specifically interested in the discrepancy within the "data/base/*" directories, which house the actual DB content, as follows:
Master:
7 /var/lib/pgsql/master/9.3/data/base/1
7 /var/lib/pgsql/master/9.3/data/base/12891
7 /var/lib/pgsql/master/9.3/data/base/12896
57904 /var/lib/pgsql/master/9.3/data/base/16385
180 /var/lib/pgsql/master/9.3/data/base/16387
11 /var/lib/pgsql/master/9.3/data/base/16389
203588 /var/lib/pgsql/master/9.3/data/base/48448446
7 /var/lib/pgsql/master/9.3/data/base/534138292
1 /var/lib/pgsql/master/9.3/data/base/pgsql_tmp
Slave:
7 /var/lib/pgsql/slave/9.3/data/base/1
7 /var/lib/pgsql/slave/9.3/data/base/12891
7 /var/lib/pgsql/slave/9.3/data/base/12896
57634 /var/lib/pgsql/slave/9.3/data/base/16385
180 /var/lib/pgsql/slave/9.3/data/base/16387
10 /var/lib/pgsql/slave/9.3/data/base/16389
202945 /var/lib/pgsql/slave/9.3/data/base/48448446
7 /var/lib/pgsql/slave/9.3/data/base/534138292
1 /var/lib/pgsql/slave/9.3/data/base/pgsql_tmp
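For reference, the base/<oid> directory names above are database OIDs; they can be mapped to database names and logical sizes with a query along these lines, run on either node (a sketch, nothing version-specific beyond 9.x):
psql -c "SELECT oid, datname, pg_size_pretty(pg_database_size(oid)) AS size FROM pg_database ORDER BY pg_database_size(oid) DESC;"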