Is there a way to format the output of get-statement-result into a tabular format in the Redshift Data API?
I am executing SQL statements and fetching results using the statements below:
Statement 1:
aws redshift-data execute-statement --region us-east-2 --cluster-identifier test --database dev --db-user admin --profile test --sql "select * from stl_query limit 1"
Statement 2:
aws redshift-data get-statement-result --profile test --region us-east-2 --id "<id from statement 1>"
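The get-statement-result response is nested JSON (ColumnMetadata plus Records of typed values), so one option is to reshape it yourself. Below is a minimal Python sketch, assuming boto3 credentials are configured and that STATEMENT_ID is a placeholder for the Id returned by Statement 1:
# Minimal sketch: reshape a Redshift Data API result into an aligned text table.
# Assumes boto3 credentials/region are configured; STATEMENT_ID is a placeholder
# for the Id returned by execute-statement (Statement 1).
import boto3

STATEMENT_ID = "<id from statement 1>"

client = boto3.client("redshift-data", region_name="us-east-2")
result = client.get_statement_result(Id=STATEMENT_ID)

# Column names come from ColumnMetadata; each record is a list of typed value
# dicts such as {"stringValue": ...}, {"longValue": ...} or {"isNull": true}.
columns = [col["name"] for col in result["ColumnMetadata"]]
rows = [[next(iter(value.values())) for value in record] for record in result["Records"]]

# Work out a width per column and print a simple grid.
widths = [
    max([len(str(col))] + [len(str(row[i])) for row in rows])
    for i, col in enumerate(columns)
]
print(" | ".join(str(col).ljust(w) for col, w in zip(columns, widths)))
print("-+-".join("-" * w for w in widths))
for row in rows:
    print(" | ".join(str(value).ljust(w) for value, w in zip(row, widths)))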
I have an ARM template that creates a Cosmos DB account, database, and collections through a pipeline. Since there are multiple applications using the database, I want to seed initial data for testing. I was looking for Cosmos DB import tasks in DevOps and found https://marketplace.visualstudio.com/items?itemName=winvision-bv.winvisionbv-cosmosdb-tasks, but right now the Mongo API is not supported, so it is not able to import the data from the JSON file I have in a storage account.
My question is: is there any other way I can add data from a JSON file to Cosmos DB through DevOps, such as PowerShell or an API?
The answer is yes.
You could use the Azure PowerShell task to execute the following PowerShell script:
param([string]$cosmosDbName
    ,[string]$resourceGroup
    ,[string]$databaseName
    ,[string[]]$collectionNames
    ,[string]$principalUser
    ,[string]$principalPassword
    ,[string]$principalTenant)

# Log in with the service principal supplied by the pipeline
Write-Output "Logging in with service principal $principalUser"
az login --service-principal -u $principalUser -p $principalPassword -t $principalTenant

# Create the database if it does not exist yet
Write-Output "Check if database exists: $databaseName"
if ((az cosmosdb database exists -d $databaseName -n $cosmosDbName -g $resourceGroup) -ne "true")
{
    Write-Output "Creating database: $databaseName"
    az cosmosdb database create -d $databaseName -n $cosmosDbName -g $resourceGroup
}

# Create each collection if it does not exist yet
foreach ($collectionName in $collectionNames)
{
    Write-Output "Check if collection exists: $collectionName"
    if ((az cosmosdb collection exists -c $collectionName -d $databaseName -n $cosmosDbName -g $resourceGroup) -ne "true")
    {
        Write-Output "Creating collection: $collectionName"
        az cosmosdb collection create -c $collectionName -d $databaseName -n $cosmosDbName -g $resourceGroup
    }
}

Write-Output "List collections"
az cosmosdb collection list -d $databaseName -n $cosmosDbName -g $resourceGroup
Then press the three dots next to Script Arguments and add the parameters defined in the PowerShell script (put all the parameter values in pipeline Variables).
You could check this great document for some more details.
I was running into the same scenario where I needed to maintain configuration documents in source control and update various instances of Cosmos as they changed. What I ended up doing is writing a python script to read a directory structure, one folder for every collection I needed to update, and then read every json file in the folder and upsert it into Cosmos. From there I ran the python script as part of a multi-stage pipeline in Azure DevOps.
This is a link to my proof of concept code https://github.com/tanzencoder/CosmosDBSeedDataExample.
And this is a link to the Python task for the pipelines https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/python-script?view=azure-devops
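For reference, here is a minimal sketch of that kind of seeding script. It assumes the azure-cosmos package (i.e. a Core/SQL API account; a Mongo API account would use pymongo instead), a seed-data/<container>/ folder layout, and COSMOS_URL/COSMOS_KEY environment variables, none of which are taken from the linked repo:
# Minimal sketch: upsert every JSON file under seed-data/<container>/ into Cosmos DB.
# COSMOS_URL, COSMOS_KEY, DATABASE_NAME and the folder layout are assumed names,
# not details taken from the linked proof-of-concept repo.
import json
import os
from pathlib import Path

from azure.cosmos import CosmosClient  # pip install azure-cosmos

client = CosmosClient(os.environ["COSMOS_URL"], credential=os.environ["COSMOS_KEY"])
database = client.get_database_client(os.environ.get("DATABASE_NAME", "dev"))

# One sub-folder per container; every *.json file inside it is one document.
for container_dir in Path("seed-data").iterdir():
    if not container_dir.is_dir():
        continue
    container = database.get_container_client(container_dir.name)
    for doc_file in sorted(container_dir.glob("*.json")):
        document = json.loads(doc_file.read_text())
        container.upsert_item(document)  # inserts, or replaces the document with the same id
        print(f"Upserted {doc_file} into {container_dir.name}")
A script like this can then run from the Python script task (linked above) as one stage of the multi-stage pipeline.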
I am trying to execute a .sql file with Heroku psql and want to pass dynamic parameter values to the .sql file.
Below is the script I am using:
heroku pg:psql --app application_name <./somepath/file_to_execute.sql --param1="'$File_name'" --param2="'$Tag_id'" --param3="'$job_name'" --param4="$id"
The SQL file contains an insert script:
INSERT INTO version_table (col1, col2, col3, col4)
VALUES (:param1,:param2,:param3,:param4);
I get the below error message from Heroku:
Error: Unexpected arguments: --param2='1.1.1', --param3='test-name', --param4=12
How can I execute this SQL file with dynamic values in Heroku psql?
I also tried the below command:
heroku pg:psql --app application_name <./somepath/file_to_execute.sql --v param1="'$File_name'" --v param2="'$Tag_id'" --v param3="'$job_name'" --v param4="$id"
And got the below error message:
Error: Unexpected arguments: param1='file_name.sql', --v, param2='1.1.1', --v, param3='test-name', --v, param4=12
You can get the URL to the database with the following heroku command:
$ heroku pg:credentials:url --app application_name
This will print something like:
Connection information for default credential.
Connection info string:
"dbname=xyz host=something.compute.amazonaws.com port=1234 user=foobar password=p422w0rd sslmode=require"
Connection URL:
postgres://foobar:p422w0rd@something.compute.amazonaws.com:1234/xyz
The URL (last line) can be used directly with psql. With grep we can get that line and pass it to psql:
$ psql $(heroku pg:credentials:url --app application_name | grep 'postgres://') \
< ./somepath/file_to_execute.sql \
-v param1="$File_name" \
-v param2="$Tag_id" \
-v param3="$job_name" \
-v param4="$id"
Note that rather than putting single quotes in the parameter on the command line, you should access the parameter in the SQL as :'param1'.
I'm executing gcloud composer commands:
gcloud composer environments run airflow-composer \
--location europe-west1 --user-output-enabled=true \
backfill -- -s 20171201 -e 20171208 dags.my_dag_name
which prints:
kubeconfig entry generated for europe-west1-airflow-compos-007-gke.
It's a regular Airflow backfill. The command above prints the results only at the end of the whole backfill range; is there any way to get the output in a streaming manner, so that each time a DAG run gets backfilled it is printed to standard output, like with the regular Airflow CLI?
I am trying to query the output of an AWS CLI command using an environment variable as the query string. This works fine for me using the AWS CLI in Linux, but in PowerShell I am having trouble getting the CLI to use the variable.
For example - this works for me in Linux:
SECGRP="RDP from Home"
aws ec2 describe-security-groups --query \
'SecurityGroups[?GroupName==`'"$SECGRP"'`].GroupId' --output text
If I run this in PowerShell:
$SECGRP="RDP from Home"
aws ec2 describe-security-groups --query \
'SecurityGroups[?GroupName==`'"$SECGRP"'`].GroupId' --output text
Error Details:
Bad value for --query SecurityGroups[?GroupName==`: Bad jmespath
expression: Unclosed ` delimiter:
SecurityGroups[?GroupName==`
^
I have tried a few combinations of quotes inside the query expression but either get errors or no output.
I have also run the following to demonstrate I can get the correct output using PowerShell (but not using a variable):
aws ec2 describe-security-groups --query \
'SecurityGroups[?GroupName==`RDP from Home`].GroupId' --output text
Try this:
$SECGRP="RDP from Home"
aws ec2 describe-security-groups --query "SecurityGroups[?GroupName=='$SECGRP'].GroupId" --output text
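As an aside (this is an alternative, not part of the answer above), the shell quoting can be avoided entirely by calling the API from Python with boto3 and filtering on the group name server-side; a minimal sketch:
# Alternative sketch using boto3 instead of the aws CLI, so no shell quoting of
# the JMESPath backticks is needed; credentials/region come from the usual AWS config.
import boto3

sec_grp_name = "RDP from Home"

ec2 = boto3.client("ec2")
response = ec2.describe_security_groups(
    Filters=[{"Name": "group-name", "Values": [sec_grp_name]}]
)
for group in response["SecurityGroups"]:
    print(group["GroupId"])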
I am trying to import a PostgreSQL data file into Amazon Redshift from the command line. I did import the schema file, but I cannot import the data file. It seems that data insertion in Amazon Redshift works a bit differently.
I want to know the ways of importing a data file into Redshift using the command line.
UPDATE
My data file looks like:
COPY actor (actor_id, first_name, last_name, last_update) FROM stdin;
0 Chad Murazik 2014-12-03 10:54:44
1 Nelle Sauer 2014-12-03 10:54:44
2 Damien Ritchie 2014-12-03 10:54:44
3 Casimer Wiza 2014-12-03 10:54:44
4 Dana Crist 2014-12-03 10:54:44
....
I typed the following command from the CLI:
PGPASSWORD=**** psql -h testredshift.cudmvpnjzyyy.us-west-2.redshift.amazonaws.com -p 5439 -U abcd -d pagila -f /home/jamy/Desktop/pag_data.sql
And then got an error like:
ERROR: LOAD source is not supported. (Hint: only S3 or DynamoDB or EMR based load is allowed)
Dump your table using a CSV format:
\copy <your_table_name> TO 'dump_filename.csv' csv header NULL AS '\N'
Upload it to S3, and read it from Redshift using:
COPY schema.table FROM 's3://...' WITH CREDENTIALS '...' CSV;
Source: Importing Data into Redshift from MySQL and Postgres
You can't load a pg_dump file directly; this is a common mistake. Unload all your data to S3 and use a COPY command to load it into Redshift.
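For illustration, here is a rough Python sketch of that S3-plus-COPY flow. It reuses the cluster details from the question, while the bucket, key and IAM role are placeholders, not known values:
# Rough sketch of the S3 + COPY flow, assuming the table was already dumped to
# actor.csv with \copy; the bucket, key and IAM role below are placeholders.
import boto3
import psycopg2  # Redshift speaks the Postgres wire protocol

bucket = "my-redshift-staging-bucket"                            # placeholder
key = "seed/actor.csv"                                           # placeholder
iam_role = "arn:aws:iam::123456789012:role/myRedshiftCopyRole"   # placeholder

# 1. Upload the CSV dump to S3.
boto3.client("s3").upload_file("actor.csv", bucket, key)

# 2. Ask Redshift to pull the file from S3 with COPY.
conn = psycopg2.connect(
    host="testredshift.cudmvpnjzyyy.us-west-2.redshift.amazonaws.com",
    port=5439,
    dbname="pagila",
    user="abcd",
    password="****",
)
with conn, conn.cursor() as cur:
    cur.execute(
        f"COPY actor FROM 's3://{bucket}/{key}' "
        f"IAM_ROLE '{iam_role}' CSV IGNOREHEADER 1 NULL AS '\\N'"
    )
COPY executes inside the cluster and reads from S3 (or DynamoDB/EMR), which is why the psql-style load from stdin in the question is rejected.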