Upload private key to Rundeck key storage

I am trying to upload a private key to the Rundeck key storage. It works with the UI, but I want to do the same via REST. Any suggestions?

Use the new CLI tool. See https://rundeck.github.io/rundeck-cli/
The usage is as follows:
rd keys create help
Create a new key entry.
Usage: create options PATH
[--file -f value] : File path for reading the upload contents.
[--path -p value] : Storage path, default: keys/
[--prompt -p] : (password type only) prompt on console for the password value, if -f is not specified.
--type -t value : Type of key to store: publicKey,privateKey,password.
For example:
rd keys create --file id_rsa --path keys/myuser/id_rsa --type privateKey
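Since the question asks about REST specifically: the CLI is a wrapper over Rundeck's Key Storage API, so the same upload can be done with a plain HTTP POST. A minimal sketch, assuming an API token in $RD_TOKEN and a server at rundeck.example.com (both placeholders; the storage endpoint exists since API version 11):
curl -X POST \
     -H "X-Rundeck-Auth-Token: $RD_TOKEN" \
     -H "Content-Type: application/octet-stream" \
     --data-binary @id_rsa \
     "https://rundeck.example.com/api/11/storage/keys/myuser/id_rsa"
POST creates a new entry (use PUT to overwrite an existing one). The Content-Type tells Rundeck what kind of entry it is: application/octet-stream for a private key, application/pgp-keys for a public key, application/x-rundeck-data-password for a password.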

Related

db2 how to configure external tables using extbl_location, extbl_strict_io

db2 how to configure external tables using extbl_location, extbl_strict_io. Could you please give an example of how to set up these parameters? I need to create an external table and upload data to it.
I need to know how to configure the extbl_location and extbl_strict_io parameters.
I created the table like this:
CREATE EXTERNAL TABLE textteacher(ID int, Name char(50), email varchar(255)) USING ( DATAOBJECT 'teacher.csv' FORMAT TEXT CCSID 1208 DELIMITER '|' REMOTESOURCE 'LOCAL' SOCKETBUFSIZE 30000 LOGDIR '/tmp/logs' );
and tried to upload data to it.
insert into textteacher (ID,Name,email) select id,name,email from teacher;
and got this exception:
[428IB][-20569] The external table operation failed due to a problem with the corresponding data file or diagnostic files. File name: "teacher.csv". Reason code: "1".. SQLCODE=-20569, SQLSTATE=428IB, DRIVER=4.26.14
If I understand the documentation correctly, the extbl_location parameter should point to the directory where the data will be saved. I suppose the full path will look like:
$extbl_location+'/'+teacher.csv
I found some documentation about the error:
https://www.ibm.com/support/pages/how-resolve-sql20569n-error-external-table-operation
I tried to run this command in the Docker command line:
/opt/ibm/db2/V11.5/bin/db2 get db cfg | grep -i external
but it returned no information about external tables.
From the CREATE EXTERNAL TABLE statement documentation, under file-name:
When both the REMOTESOURCE option is set to LOCAL (this is its default value) and the extbl_strict_io configuration parameter is set to NO, the path to the external table file is an absolute path and must be one of the paths specified by the extbl_location configuration parameter. Otherwise, the path to the external table file is relative to the path that is specified by the extbl_location configuration parameter followed by the authorization ID of the table definer. For example, if extbl_location is set to /home/xyz and the authorization ID of the table definer is user1, the path to the external table file is relative to /home/xyz/user1/.
So, if you use a relative path to a file such as teacher.csv, you must set extbl_strict_io to YES.
For an unload operation, the following conditions apply:
If the file exists, it is overwritten.
Required permissions:
If the external table is a named external table, the owner must have read and write permission for the directory of this file.
If the external table is transient, the authorization ID of the statement must have read and write permission for the directory of this file.
Moreover, you must create a sub-directory named after the username (in lowercase) of the table owner inside the directory specified by extbl_location, and ensure that this user (not the instance owner) has read/write permission on this sub-directory.
Update:
Setup, presuming that user1 runs this INSERT statement:
sudo mkdir -p /home/xyz/user1
# user1 must have an ability to cd to this directory
sudo chown user1:$(id -gn user1) /home/xyz/user1
db2 connect to mydb
db2 update db cfg using extbl_location /home/xyz extbl_strict_io YES
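To verify that the values took effect, one option is to grep for the parameter token rather than the word "external". A minimal check, assuming the database is named mydb (note that `db2 get db cfg` without `for <db>` requires an active connection):
db2 connect to mydb
db2 get db cfg for mydb | grep -i extbl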

GPG - import a .key file containing only the private key in gpg

I am trying to restore my backup but somehow the pubring file only shows one key pair.
I did import my old (most important) public key from the key server.
So far, so good.
Now I have a lot of keys inside ~/.gnupg/private-keys-v1.d/
They are all named "longcombinationOfLettersAndNumbers.key"
Can I somehow use these private keys to decrypt my backup?
I tried
gpg --import < fileOfAKey.key
but I get:
gpg: no valid OpenPGP data found.
Any tips/help appreciated.
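For context (an assumption not stated in the thread): the files in ~/.gnupg/private-keys-v1.d/ are gpg-agent's internal key store, not OpenPGP packets, which is why gpg --import reports "no valid OpenPGP data found". One possible approach is to import the matching public keys and let gpg pair them with the private parts already in that directory; the backup file name below is hypothetical:
# Hypothetical: pubring-backup.gpg is a backup of your public keyring.
gpg --import pubring-backup.gpg
# If the private halves in ~/.gnupg/private-keys-v1.d/ match the imported
# public keys, the pairs should now be listed with a "sec" prefix:
gpg --list-secret-keys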

Why do I still need to input a password when using fastlane match nuke

I forgot the fastlane match password and have no way to find out what it is. So I want to reset the password using this command (I got this approach from https://github.com/fastlane/fastlane/issues/6297):
fastlane match nuke distribution
but it still asks me to input the Passphrase for Match storage:
$ fastlane match nuke distribution ‹ruby-2.7.2›
[✔] 🚀
[12:21:55]: fastlane detected a Gemfile in the current directory
[12:21:55]: However, it seems like you didn't use `bundle exec`
[12:21:55]: To launch fastlane faster, please use
[12:21:55]:
[12:21:55]: $ bundle exec fastlane match nuke distribution
[12:21:55]:
[12:21:55]: Get started using a Gemfile for fastlane https://docs.fastlane.tools/getting-started/ios/setup/#use-a-gemfile
[12:21:56]: In the config file './fastlane/Matchfile' you have the line git_url, but didn't provide any value. Make sure to append a value right after the option name. Make sure to check the docs for more information
[12:21:56]: In the config file './fastlane/Matchfile' you have the line username, but didn't provide any value. Make sure to append a value right after the option name. Make sure to check the docs for more information
[12:21:56]: Successfully loaded '/Users/dolphin/Documents/GitHub/flutter-netease-music/ios/fastlane/Matchfile' 📄
+-----------------+---------------------------+
| Detected Values from './fastlane/Matchfile' |
+-----------------+---------------------------+
| git_branch | master |
| storage_mode | git |
| type | adhoc |
| app_identifier | ["com.reddwarf.musicapp"] |
+-----------------+---------------------------+
Available session is not valid any more. Continuing with normal login.
[12:21:59]: To not be asked about this value, you can specify it using 'git_url'
[12:21:59]: URL to the git repo containing all the certificates: https://github.com/jiangxiaoqiang/music-certificate.git
[12:22:19]: Cloning remote git repo...
[12:22:19]: If cloning the repo takes too long, you can use the `clone_branch_directly` option in match.
[12:22:21]: Checking out branch master...
[12:22:21]: Enter the passphrase that should be used to encrypt/decrypt your certificates
[12:22:21]: This passphrase is specific per repository and will be stored in your local keychain
[12:22:21]: Make sure to remember the password, as you'll need it when you run match on a different machine
[12:22:21]: Passphrase for Match storage: ******
[12:22:31]: Type passphrase again: ******
[12:22:33]: wrong final block length
[12:22:33]: Couldn't decrypt the repo, please make sure you enter the right password!
keychain: "/Users/dolphin/Library/Keychains/jiangxiaoqiang-db"
version: 512
class: "inet"
attributes:
0x00000007 <blob>="match_https://github.com/jiangxiaoqiang/music-certificate.git"
0x00000008 <blob>=<NULL>
"acct"<blob>=<NULL>
"atyp"<blob>="dflt"
"cdat"<timedate>=0x32303231303831383034323233335A00 "20210818042233Z\000"
"crtr"<uint32>=<NULL>
"cusi"<sint32>=<NULL>
"desc"<blob>=<NULL>
"icmt"<blob>=<NULL>
"invi"<sint32>=<NULL>
"mdat"<timedate>=0x32303231303831383034323233335A00 "20210818042233Z\000"
"nega"<sint32>=<NULL>
"path"<blob>=<NULL>
"port"<uint32>=0x00000000
"prot"<blob>=<NULL>
"ptcl"<uint32>=0x00000000
"scrp"<sint32>=<NULL>
"sdmn"<blob>=<NULL>
"srvr"<blob>="match_https://github.com/jiangxiaoqiang/music-certificate.git"
"type"<uint32>=<NULL>
password has been deleted.
[12:22:33]: Enter the passphrase that should be used to encrypt/decrypt your certificates
[12:22:33]: This passphrase is specific per repository and will be stored in your local keychain
[12:22:33]: Make sure to remember the password, as you'll need it when you run match on a different machine
[12:22:33]: Passphrase for Match storage:
I really do not remember the password. I only remember that the password I set was very simple, but after I input it, I'm told it is incorrect. What should I do to reset or recover the password? I have tried deleting all certificate files to regenerate the certificate info, but it still asks for the Passphrase for Match storage.
You need to create a new git repo and update your Matchfile with the newly created repo URL.
Then you should be able to run the following without entering any passphrase:
bundle exec fastlane match nuke distribution
Please feel free to open a discussion here if you're still having issues:
https://github.com/fastlane/fastlane/discussions
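For reference, the log above also warns that git_url and username in './fastlane/Matchfile' have no values. A minimal sketch of a Matchfile pointing at a newly created repo might look like this (the repo URL and Apple ID below are placeholders, not values from the question):
# fastlane/Matchfile — each option needs its value on the same line
git_url("https://github.com/jiangxiaoqiang/new-certificate-repo.git") # hypothetical new repo
git_branch("master")
storage_mode("git")
type("adhoc")
app_identifier(["com.reddwarf.musicapp"])
username("your-apple-id@example.com") # placeholder Apple ID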

AzCopy - how to specify metadata when copying a file to a blob storage

I'm trying to upload a file to an Azure Blob storage using AzCopy, but I want to include metadata.
According to the documentation, "AzCopy copy" has a metadata parameter where I have to provide key/value pairs as a string.
How does this string have to be formatted? I can't get it to work and can't find any examples...
AzCopy.exe copy .\testfile2.txt "https://storageaccount.blob.core.windows.net/upload/testfile4.txt?sastoken" --metadata ?what_here?
Thanks!
Documentation:
https://learn.microsoft.com/en-us/azure/storage/common/storage-ref-azcopy-copy#options
The string should be in this format: --metadata "name=ivan".
If you want to add multiple metadata pairs, use this format: --metadata "name=ivan;city=tokyo"
This is the command I'm using, and the version of azcopy is 10.3.4:
azcopy copy "file_path" "https://xxx.blob.core.windows.net/test1/aaa1.txt?sasToken" --metadata "name=ivan"
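To confirm the metadata was applied, one option is the Azure CLI (a sketch, assuming it is installed; the account, container, and blob names below are the placeholders from the question):
az storage blob metadata show \
    --account-name storageaccount \
    --container-name upload \
    --name testfile4.txt \
    --sas-token "<sastoken>"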

How to change the metadata of all files of a specific type among existing objects in Google Cloud Storage?

I have uploaded thousands of files to Google Storage, and I found out that all the files are missing a content-type, so my website cannot serve them correctly.
I wonder if I can set some kind of policy, like changing the content-type of all the files at the same time. For example, I have a bunch of .html files inside the bucket:
a/b/index.html
a/c/a.html
a/c/a/b.html
a/a.html
.
.
.
Is it possible to set the content-type of all the .html files, in different places, with one command?
You could do:
gsutil -m setmeta -h Content-Type:text/html gs://your-bucket/**.html
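If the .html files live only under certain prefixes, the same recursive wildcard can be scoped to a sub-path, for example (your-bucket and the prefix a/ are placeholders):
gsutil -m setmeta -h "Content-Type:text/html" "gs://your-bucket/a/**.html"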
There's no single command to achieve exactly the behavior you are looking for (one command to edit all the objects' metadata). However, there's a gsutil command to edit metadata, which you could use in a bash script to loop through all the objects inside the bucket.
Option 1 is to use the gsutil command setmeta in a bash script:
# List every object in the bucket and update each one's metadata.
for OBJECT in $(gsutil ls "gs://[BUCKET_NAME]/**")
do
    gsutil setmeta -h "[METADATA_KEY]:[METADATA_VALUE]" "$OBJECT"
done
Option 2 is to do the same thing in C++ with the google-cloud-cpp storage client:
namespace gcs = google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string bucket_name, std::string key,
   std::string value) {
  // List all the objects in the bucket; while on the loop, edit the
  // metadata of each object.
  for (auto&& object_metadata : client.ListObjects(bucket_name)) {
    if (!object_metadata) {
      throw std::runtime_error(object_metadata.status().message());
    }
    gcs::ObjectMetadata desired = *object_metadata;
    // emplace() sets a custom metadata entry; to fix the Content-Type
    // specifically, call desired.set_content_type("text/html") instead.
    desired.mutable_metadata().emplace(key, value);
    StatusOr<gcs::ObjectMetadata> updated = client.UpdateObject(
        bucket_name, object_metadata->name(), desired,
        gcs::Generation(object_metadata->generation()));
    if (!updated) {
      throw std::runtime_error(updated.status().message());
    }
  }
}