Remove a keychain from the search list?

This is asked as a follow-up to this SO question:
Add a keychain to search list?
We know how to add a newly created keychain to the search list with:
security list-keychains -s `security list-keychains | xargs` $NEW_KEYCHAIN
However, how do we remove it afterwards? After calling this multiple times (intentionally), we end up with this:
$ security list-keychains
"/Users/jenkins/Library/Keychains/login.keychain-db"
"/Users/jenkins/Library/Keychains/foo.keychain-db"
"/Users/jenkins/Library/Keychains/foo.keychain-db"
"/Library/Keychains/System.keychain"
Notice that we have multiple entries of foo.keychain-db there.

This may not be ideal, but we can simply call it again with only the entries that we want to keep. For example, in this case:
$ security list-keychains -s /Users/jenkins/Library/Keychains/login.keychain-db /Library/Keychains/System.keychain
And the result is what we want:
$ security list-keychains
"/Users/jenkins/Library/Keychains/login.keychain-db"
"/Library/Keychains/System.keychain"


Tekton: How to delete successful PipelineRuns?

My desired Tekton use case is simple:
successful pipelineruns should be removed after x days
failed pipelineruns shouldn't be removed automatically.
I plan to do the cleanup in an initial cleanup task. That seems better to me than annotation- or cronjob-based approaches. As long as nothing new is built, nothing has to be deleted.
Direct approaches:
Failed: tkn delete doesn't seem very helpful because it doesn't discriminate between successful and failed runs.
Failed: oc delete --field-selector ... doesn't support the well-hidden but highly expressive field status.conditions[0].type==Succeeded.
Indirect approaches (first filtering a list of pod names and then deleting them - not elegant at all):
Failed: Filtering the output with -o=jsonpath... seems costly, and the condition array seems to break the statement, so that (why ever?!) everything is returned... not viable.
My last attempt is tkn pipelineruns list --show-managed-fields, parsed with sed/awk... which is gross... but at least it does what I want it to do, and quite efficiently at that. But it might prove brittle if the design of the output changes in future releases...
Do you have any better more elegant approaches?
Thanks a lot!
Until a better solution comes along, I'll post my current solution (and its drawbacks):
Our cleanup task is now built around the following solution, evaluating the table returned by tkn pipelineruns list:
tkn pipelineruns list --show-managed-fields -n e-dodo-tmgr --label tekton.dev/pipeline=deploy-pipeline | awk '$6~/Succeeded/ && $3~/day|week|month/ {print $1}'
Advantages:
It does what it should without extensive calls or additional calculation.
Disadvantages:
The age filter is limited to "older than an hour / a day / a week ...". But that's acceptable, since only successful builds are concerned.
I guess the design is quite brittle: if the format of the table changes with a new tkn client, awk will pick the wrong columns, or the patterns will no longer match.
All in all, I hope the solution will hold until there are more helpful client features that make the desired info directly filterable. Actually, I'd hope for something like tkn pipelineruns delete --state successful --period P1D.
The notation for the time period is from ISO 8601.
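For completeness, the deletion step on top of that filter could look like this (a sketch, untested; namespace and label as above, and -f just skips tkn's confirmation prompt):
tkn pipelineruns list --show-managed-fields -n e-dodo-tmgr --label tekton.dev/pipeline=deploy-pipeline \
  | awk '$6~/Succeeded/ && $3~/day|week|month/ {print $1}' \
  | xargs -r -n 1 tkn pipelinerun delete -n e-dodo-tmgr -f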

Redis Hashes: store with newline-separated key-value pairs

I want to store data in Redis Hashes. The data is as below (Key = Value):
30.2.25=REF_IP
30.2.24=MY_HOST_IP
30.2.32=PEER_IP
30.2.32=IM_USER_MY_HOST
30.2.2=23992
The easy way to store this info in Redis is:
hmset info 30.2.25 REF_IP 30.2.24 MY_HOST_IP 30.2.32 PEER_IP 30.2.32 IM_USER_MY_HOST 30.2.2 23992
Considering I have thousands of key-value pairs and want to change a few (actually quite many) values in one go, searching and editing values in the command above is too painful.
I want some way to execute the command in the manner below, i.e. a nicely formatted command with a newline after every key-value pair:
hmset info
30.2.25 REF_IP
30.2.24 MY_HOST_IP
30.2.32 PEER_IP
30.2.32 IM_USER_MY_HOST
30.2.2 23992
Is it possible to do so?
Currently, when I copy the formatted command above and paste it, everything after the first newline is ignored, and I get the error below; this is expected, because the newline cuts off the arguments.
hmset info
(error) ERR wrong number of arguments for 'hmset' command
Can anyone help, please? Thanks.
Assuming you are talking about using redis-cli, there is no way to support this at the moment. There is an open issue for this. See https://github.com/antirez/redis/issues/3474
As of Redis 4.0.0, HMSET is considered deprecated. You should use HSET instead. https://redis.io/commands/hset
You can use a transaction if you want to ensure all HSETs are done at the same time, and still enter them one line at a time.
MULTI
HSET info 30.2.25 REF_IP
HSET info 30.2.24 MY_HOST_IP
...
EXEC
The commands will be sent to the server one line at a time, but they are queued and only executed at the EXEC command.
You may use another client, say in Python, and then do something fancier to condense your field-value pairs into a single HSET command.
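If you want to stay in the shell, redis-cli also reads one command per line from stdin when it is not attached to a terminal. A minimal sketch (the file name commands.txt is hypothetical):
# commands.txt contains one complete command per line, e.g.:
#   HSET info 30.2.25 REF_IP
#   HSET info 30.2.24 MY_HOST_IP
redis-cli < commands.txt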

How to create an element from the API with a name from the appropriate sequence?

I'd like to use EA to generate Requirement elements programmatically. I need to use the same sequence numbering (REQ00000xy) as the GUI uses when pressing the "Auto" button in the "Add Element ..." dialog, in order to keep numbering consistent for Requirement elements created either from the GUI or from the API.
Selecting the last used sequence number from already existing Requirement elements won't help, as it doesn't advance the sequence counter, so the next Requirement created from the GUI would clash with it.
Is there a way to get (and properly use) the sequence number via EA API or EA SQL?
The table you're looking for is t_trxtypes. It contains something like (EA's raw output):
Description;NumericWeight;Notes;TRX;TRX_ID;Style;
Autocount;1,00;prefix=bla;suffix=x;active=1;active_a=0;counter=126;;Class;1; ;
You're interested in the column Notes, which holds a semicolon-separated list like
prefix=bla;suffix=x;active=1;active_a=0;counter=126;
This is a test setting for a class which currently has the number 126. So the next created class would be named bla126x and the entry would change to
prefix=bla;suffix=x;active=1;active_a=0;counter=127;
Just keep the column t_trxtypes.Notes in sync with your creations.
Note that EA does not (seem to) allow direct DB access. However, it has a proven back door:
Repository.Execute("UPDATE t_trxtypes SET Notes='prefix=bla;suffix=x;active=1;active_a=0;counter=127;' WHERE TRX_ID=<your id>")
will do the update (replace <your id> with the appropriate key). Though Execute is undocumented, it has worked ever since its introduction, and Sparx will surely not remove it, as nowadays everyone relies on it.
As a side note: this counter is not safe. There are lots of ways (the easiest is a simple rename) to break it. You'd need some script or add-in to run regular checks that your numbering is still consistent. If you rely on requirement numbering, you'd better use an external system like, I dare to say, DOORS.
Finally, RTFM....
For elements where a sequence is defined, if you pass an empty name to AddNew(), EA generates the sequence upon Update(), not earlier. So if you plan to use the generated sequence and add some description, you need to create the element with an empty name first, then Update() it, and after that append your description to the content of the Name field.
As easy as this.
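Putting that into automation-script terms, the workflow might look like this (a VBScript-flavoured sketch; pkg is assumed to be a Package object you already hold, and the appended description is hypothetical):
Set req = pkg.Elements.AddNew("", "Requirement") ' empty name on purpose
req.Update ' EA assigns the sequence name (e.g. REQ0000127) during this Update
req.Name = req.Name & " - my description" ' now append your text to the generated name
req.Update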

Augeas - Partial control of sshd_config - Match entries

In the config file /etc/ssh/sshd_config I want to manage PasswordAuthentication entries for a few specific users (or groups), like:
Match Group xyz_admin, xyz_support
PasswordAuthentication no
Match User yvonne,yvette
PasswordAuthentication yes
I don't want to interfere with or have any control over similar but unrelated entries which may or may not be present like:
Match User xavier
X11Forwarding yes
Match Group alice
AllowTcpForwarding yes
The following Augeas expressions create the entries I need but could corrupt existing configuration entries.
set /files/etc/ssh/sshd_config/Match[1]/Condition/Group "xyz_admin,xyz_support"
set /files/etc/ssh/sshd_config/Match[1]/Settings/PasswordAuthentication "no"
set /files/etc/ssh/sshd_config/Match[2]/Condition/User "yvonne,yvette"
set /files/etc/ssh/sshd_config/Match[2]/Settings/PasswordAuthentication "yes"
Any idea how I can make these expressions more specific so they avoid messing with any existing and unrelated "Match" entries?
You can use the Condition/* subnodes to filter out the Match nodes.
For an example, you can see how it is done in the puppet sshd_config provider (in Ruby). Note that all keys in sshd_config are case-insensitive, so you need to use regular expressions to be sure to match them regardless of their case.
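In augtool terms, one way to key each Match block on its own condition is defnode, which creates the node only when the path expression matches nothing (a sketch, untested; it assumes the canonical spelling of the keys rather than handling the case-insensitivity):
defnode m1 /files/etc/ssh/sshd_config/Match[Condition/Group="xyz_admin,xyz_support"]
set $m1/Condition/Group "xyz_admin,xyz_support"
set $m1/Settings/PasswordAuthentication "no"
defnode m2 /files/etc/ssh/sshd_config/Match[Condition/User="yvonne,yvette"]
set $m2/Condition/User "yvonne,yvette"
set $m2/Settings/PasswordAuthentication "yes"
save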

bash/curl: two-step web form submission

I'd like to submit two forms on the same page in sequence with curl in bash. http://en.wikipedia.org/w/index.php?title=Special:Export contains two forms: one to populate a list of pages given a Wikipedia category, and another to fetch XML data for that list.
Using curl in bash, I can submit the first form on its own; it returns an HTML file with the pages field populated (though I can't use that file directly, since it's local rather than on the Wikipedia server):
curl -d "addcat=1&catname=Works_by_Leonardo_da_Vinci&curonly=1&action=submit" http://en.wikipedia.org/w/index.php?title=Special:Export -o "somefile.html"
And I can submit the second form while specifying a page, to get the XML:
curl -d "pages=Mona_Lisa&curonly=1&action=submit" http://en.wikipedia.org/w/index.php?title=Special:Export -o "output.xml"
...but I can't figure out how to combine the two steps, or pipe the one into the other, to return XML for all the pages in a category, like I get when I perform the two steps manually. http://www.mediawiki.org/wiki/Manual:Parameters_to_Special:Export seems to suggest that this is possible; any ideas? I don't have to use curl or bash.
Special:Export is not meant for fully automatic retrieval. The API is. For example, to get the current text of all pages in Category:Works by Leonardo da Vinci in XML format, you can use this URL:
http://en.wikipedia.org/w/api.php?format=xml&action=query&generator=categorymembers&gcmtitle=Category:Works_by_Leonardo_da_Vinci&prop=revisions&rvprop=content&gcmlimit=max
This won't return pages in subcategories and is limited to the first 500 pages (although that's not a problem in this case, and there is a way to access the rest).
Assuming you can parse the output from the first HTML file and generate a list of pages such as
Mona Lisa
The Last Supper
you can pipe that list into a bash loop using read. As a simple example:
$ seq 1 5 | while read x; do echo "I read $x"; done
I read 1
I read 2
I read 3
I read 4
I read 5
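Applied to your case, the same pattern drives the export form once per page (a sketch; pages.txt is a hypothetical file holding one title per line, as parsed out of the first response):
while read page; do
  page=${page// /_} # Special:Export expects underscores instead of spaces
  curl -s -d "pages=${page}&curonly=1&action=submit" "http://en.wikipedia.org/w/index.php?title=Special:Export" -o "${page}.xml"
done < pages.txt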