Saving environment variables in gopass - 1password

How can I save sensitive environment variables in gopass, then retrieve them and set them in my bash terminal from the command line?
I know this can be done in 1Password, where the passwords are stored:
myaccount
key1 - value1
key2 - value2
key3 - value3
Internally this is in JSON format and can be pulled with a command like:
op get item "myaccount" | jq .
source <(op get item "${1}" | jq -r --arg title "${2}" '.details.sections[] |
select(.title==$title) | .fields[0].v' | base64 -D)
The whole idea is that my environment variables must be set in an automated way from a secure vault, rather than me exporting them like this:
export key1=value1
export key2=value2
export key3=value3

This is a bit of a late answer, but you could save them into gopass like this:
echo $key1 | gopass insert store/environment/key1
Or, if you need them to be base64 encoded:
echo $key1 | base64 -w 0 | gopass insert store/environment/key1
Then you can still set them with export, but source the values from gopass rather than hard-coding them:
export key1=$(gopass store/environment/key1)
export key1=$(gopass store/environment/key1 | base64 -d)
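If there are several such entries, a small loop can export them all at once. A minimal sketch, assuming the secrets live under store/environment and each variable should take the name of its entry (key1, key2, key3 are from the question):
# export every secret stored under store/environment, naming each variable after its entry
for name in key1 key2 key3; do
    export "$name=$(gopass show "store/environment/$name")"
done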


On the consumer-end of a named pipe (FIFO), is there a way to distinguish each item and redirect it to its own processes?

The best way I can explain is by example.
Create named pipe: mkfifo pipe
Create 5 text files, a.txt, b.txt, c.txt, d.txt, e.txt (they can hold any contents for this example)
cat [a-e].txt > pipe
Of course, because the pipe is not open at the consumer side, the terminal will seem to be busy.
In another terminal, tail -fn +1 pipe
All content is fed through the pipe (consumed and printed out by tail) as expected.
But instead of simply printing out content consumed, I would like each piped text file to be redirected to a command (5 separate processes) that can only handle one at a time:
Something like python some-script.py < pipe but where it would create 5 different instances (one instance per text file content).
Is there any way for the consumer to differentiate between objects coming in? Or does the data get appended and read all as one stream?
Here is a potential solution that might be generally applicable (I am looking forward to hearing if there are more efficient alternatives).
First, an example python script that the question describes:
some-script.py:
import sys
lines = sys.stdin.readlines()
print('>>>START-OF-STDIN<<<')
print(''.join(lines))
print('>>>END-OF-STDIN<<<')
The goal is for the stream of text coming from the pipe to be differentiable.
An example of the producers:
cat a.txt | echo $(base64 -w 0) | cat > pipe &
cat b.txt | echo $(base64 -w 0) | cat > pipe &
cat c.txt | echo $(base64 -w 0) | cat > pipe &
cat d.txt | echo $(base64 -w 0) | cat > pipe &
cat e.txt | echo $(base64 -w 0) | cat > pipe &
A description of the producers:
cat concatenates the entire file and pipes it to echo
echo prints the text produced by the sub-command $(base64 -w 0) and pipes it on to cat
base64 -w 0 encodes the full file contents into a single line
cat, in this case, collects the full line before redirecting the output to pipe. Without it, the consumer doesn't work properly (try it for yourself)
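As an aside, an equivalent but simpler producer should also work (a sketch, assuming GNU base64, which can read a named file directly; the trailing echo supplies the newline that read in the consumer relies on):
{ base64 -w 0 a.txt; echo; } > pipe &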
An example of the consumer:
tail -fn +1 pipe | while read line ; do (echo $line | base64 -d | cat | python some-script.py) ; done
A description of the consumer:
tail -fn +1 pipe follows (-f) the pipe from the beginning (-n +1) without exiting, and pipes content to read inside a while loop
while there are lines to be read (assuming base64-encoded single lines coming from the producers), each line is passed to a subshell
In each subshell:
echo pipes the line to base64 -d (-d stands for decode)
base64 -d pipes the decoded line (which may now span multiple lines) to cat
cat concatenates the lines and pipes them as one stream to python some-script.py
Finally, the example python script is able to read line by line in exactly the same way as cat example.txt | python some-script.py
The above was useful to me when a host process did not have Docker permissions but could write to a FIFO (named pipe) mounted as a volume into a container. Multiple instances of the consumer could potentially run in parallel. I think the above successfully differentiates the incoming content so that the isolated process can handle each item coming from the named pipe.
An example of the Docker command involving pipe symbols, etc:
"bash -c 'tail -fn +1 pipe | while read line ; do (echo $line | base64 -d | cat | python some-script.py) ; done'"

Set VSTS output variable to be result from bash command

I'm running a task in VSTS which performs some operations on a variable from a previous step and I then need to output the result to be used in future tasks. I have the following in a command line task running on a linux build host but am having no luck when trying to use the result later with $(podName3).
COMMAND="$(echo '$(pods)' | grep -oh -P '[^ ]*' | grep schema)"
##vso[task.setvariable variable=podName3]"$COMMAND"
I have tried several variations on this to no avail and need some direction, as this has stumped me for too long now.
It seems the syntax is incorrect.
Just try the format below:
COMMAND="$(echo '$pods' | grep -oh -P '[^ ]*' | grep schema)"
echo "##vso[task.setvariable variable=podName3]$COMMAND"
Or add a powershell task and run below commands to set the variable:
$COMMAND="$(echo '$env:pods' | grep -oh -P '[^ ]*' | grep schema)"
Write-Host "##vso[task.setvariable variable=podName3]$COMMAND"
For more information, please see Define and modify your variables in a script.
I created a command line tool & an Azure DevOps task for this: https://marketplace.visualstudio.com/items?itemName=riezebosch.setvar
It just lets you pipe the output of a command into the tool, which outputs it as the magic variable string. It's written in Go and cross-compiled, so it works on all major platforms and all different shells.
Your example:
echo '$pods' | grep -oh -P '[^ ]*' | grep schema | setvar -name podName3
You only need to include the setvar task prior to this script task in order to get the tool on the agent.
Adding a PSA to this post: it looks like they changed the way variables are accessed now; you have to access variables like this: $(variable)
COMMAND=$(echo '$pods' | grep -oh -P '[^ ]*' | grep schema)
echo "##vso[task.setvariable variable=podName3]$(COMMAND)"

Version Control in Big Query [duplicate]

I am working with BigQuery, and there have been a few hundred views created. Most of these are not used and should be deleted. However, there is a chance that some are used, and I cannot just blindly delete them all. Therefore, I need to back up all the view definitions somehow before deleting them.
Does anyone know of a good way? I am not trying to save the data, just the view definition queries and their names.
Thanks for reading!
Building off the existing answer, you can automate the backing up of all views by parsing the output of bq with jq:
#!/bin/bash
DATASETS=$(bq ls --format=sparse | tail -n+3)
for d in $DATASETS; do
    TABLES=$(bq ls --format=prettyjson "$d" | jq '.[] | "\(.id), \(.type)"')
    IFS=$'\n'
    for table in $TABLES; do
        [[ ! "$table" == *VIEW* ]] && continue
        view=$(echo "$table" | sed -e 's/"//g' | cut -d , -f 1)
        query=$(bq show --format=prettyjson "$view" | jq -r '.view.query')
        echo -e "$query" > "$view.sql"
    done
done
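If one of the deleted views later turns out to be needed, its definition can be recreated from the saved file. A rough sketch (the file name is illustrative, and --use_legacy_sql=false assumes the saved query is standard SQL):
bq mk --use_legacy_sql=false --view "$(cat mydataset.myview.sql)" mydataset.myview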
Part 1.
Issue the bq ls command. The --format flag can be used to control the output. If you are listing views in a project other than your default project, add the project ID to the dataset in the following format: [PROJECT_ID]:[DATASET].
bq ls --format=pretty [PROJECT_ID]:[DATASET]
Where:
[PROJECT_ID] is your project ID.
[DATASET] is the name of the dataset.
When you run the command, the Type field displays either TABLE or VIEW. For example:
+-------------------------+-------+----------------------+-------------------+
|         tableId         | Type  |        Labels        | Time Partitioning |
+-------------------------+-------+----------------------+-------------------+
| mytable                 | TABLE | department:shipping  |                   |
| myview                  | VIEW  |                      |                   |
+-------------------------+-------+----------------------+-------------------+
Part 2.
Issue the bq show command. The --format flag can be used to control the output. If you are getting information about a view in a project other than your default project, add the project ID to the dataset in the following format: [PROJECT_ID]:[DATASET]. To write the view properties to a file, add > [PATH_TO_FILE] to the command.
bq show --format=prettyjson [PROJECT_ID]:[DATASET].[VIEW] > [PATH_TO_FILE]
Where:
[PROJECT_ID] is your project ID.
[DATASET] is the name of the dataset.
[VIEW] is the name of the view.
[PATH_TO_FILE] is the path to the output file on your local machine.
Examples:
Enter the following command to display information about myview in mydataset. mydataset is in your default project.
bq show --format=prettyjson mydataset.myview
Enter the following command to display information about myview in mydataset. mydataset is in myotherproject, not your default project. The view properties are written to a local file, /tmp/myview.json.
bq show --format=prettyjson myotherproject:mydataset.myview > /tmp/myview.json
You could try using bqup, which is a Python script that some colleagues and I have been using to regularly back up BigQuery views and table schemata.

How to filter json data based on key value pair using jq?

Suppose I have some json data given below:
{"name":"alon","department":"abc","id":"ss12sd"}
{"name":"kate","department":"xyz","id":"ajsj3" }
{"name":"sam","department":"abc","id":"xx1d2"}
I want to filter the data based on a particular department and save it in a different json file. From the above data, suppose I want to filter all the records whose department is 'abc' and save them in a new json file. How can I do this using jq? I have been checking its manual but did not understand much from it.
jq -s 'map(select(.department == "abc"))' yourfile.json
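Since the input shown in the question is a stream of separate objects rather than an array, the -s (slurp) flag gathers them into one array first so that map works. To save the result to a different json file, just redirect the output, for example:
jq -s 'map(select(.department == "abc"))' yourfile.json > abc.json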
A flexible template might be like this:
jq -s --arg key department --arg value abc \
'.[] | select(.[$key] == $value)' input_file.json > output_file.json
This way you can change the criteria at the arguments stage rather than the expression.
Implementing that into a shell script might look like this:
myscript.sh
#!/usr/bin/env bash
key="$1"
value="$2"
file="$3"
outfile="$4"
jq --arg key "$1" --arg value "$2" \
'.[] | select(.[$key] == $value)' "$3" > "$4"
Which you would invoke like so:
./myscript.sh department abc input.json output.json
Edit: Changed ."\($key)" to .[$key] - thanks @peak

Extracting a string from a file name

My script takes a file name in the form R#TYPE.TXT (# is a number and TYPE is two or three characters).
I want my script to give me TYPE. What should I do to get it? I guess I need to use awk and sed.
I'm using /bin/sh (which is a requirement).
You can use awk:
$ echo R1CcC.TXT | awk '{sub(/.*[0-9]/,"");sub(".TXT","")}{print}'
CcC
or
$ echo R1CcC.TXT | awk '{gsub(/.*[0-9]|\.TXT$/,"");print}'
CcC
And if sed is really what you want:
$ echo R9XXX.TXT | sed 's/R[0-9]\(.*\)\.TXT/\1/'
XXX
I think this is what you are looking for.
$ echo R3cf.txt | sed "s/.[0-9]\(.*\)\..*/\1/"
cf
If TXT is always upper case and the filename always starts with R, you could do something like:
$ echo R3cf.TXT | sed "s/R[0-9]\(.*\)\.TXT/\1/"
cf
You can use just the shell (depending on what shell your /bin/sh is):
f=R9ABC.TXT
f="${f%.TXT}" # remove the extension
type="${f#R[0-9]}" # remove the first bit
echo "$type" # ==> ABC