Formatting gcloud spanner queries - gcloud

Up until recently my gcloud spanner query results were nicely presented as columns across the screen, with each output line representing a single row from the query. Recently, however, for some unknown reason, the output is displayed as row data presented in column:value pairs down the screen, e.g.
PKey: 9moVr4HmSy6GGIYJyVGu3A==
Ty: Pf
IsNode: Y
P: None
IX: X
I have tried various --format command-line options but, alas, have had no success in reproducing the original row-per-line output format, i.e. with columns presented across the screen as follows:
PKey Ty IsNode P IX <-- column names
9moVr4HmSy6GGIYJyVGu3A== Pf Y None X <-- row data
What format option should I use?
Example of gcloud query:
gcloud spanner databases execute-sql test-sdk-db --instance=test-instance --sql="Select * from Block "
Thanks

gcloud formats the results as a table if they're being written to a file; otherwise, the usual formatting rules apply.
So one easy way to see the table in the shell is to tee it somewhere:
gcloud spanner databases execute-sql test-sdk-db --instance=test-instance --sql="Select * from Block " \
| tee /dev/null
If you can't do that for some reason you can always get the same result with some --format surgery. To print the column names:
gcloud spanner databases execute-sql test-sdk-db --instance=test-instance --sql="Select * from Block " \
--format 'csv[no-heading, delimiter=" "](metadata.rowType.fields.name)'
And to print the rows:
gcloud spanner databases execute-sql test-sdk-db --instance=test-instance --sql="Select * from Block " \
--format 'csv[no-heading, delimiter="\n"](rows.map().flatten(separator=" "))'

The current format of gcloud spanner databases execute-sql output is the result of broader formatting changes made to better support accessibility standards for screen readers.
There are two methods to receive results in the tabular format:
Set the accessibility/screen_reader configuration property to false.
gcloud config set accessibility/screen_reader false
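If you want to confirm what the property is currently set to before changing it, you can check it with:
gcloud config get-value accessibility/screen_reader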
Similar to the other suggestion about using formatting, you can use a --format=multi(...) option in your gcloud command:
gcloud spanner databases execute-sql test-sdk-db \
--instance=test-instance --sql="Select * from Block " \
--format="multi(metadata:format='value[delimiter='\t'](rowType.fields.name)', \
rows:format='value[delimiter='\t']([])')"
The caveat of this second method is that column names and values may not align due to differences in length.
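If the misalignment is a problem, one workaround (a generic shell trick using the standard column utility, not something specific to gcloud) is to pipe the tab-delimited output of the command above through column, which pads each field to a common width:
gcloud spanner databases execute-sql test-sdk-db \
--instance=test-instance --sql="Select * from Block " \
--format="multi(metadata:format='value[delimiter='\t'](rowType.fields.name)', \
rows:format='value[delimiter='\t']([])')" \
| column -t -s $'\t'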


How to change the metadata of all existing objects of a specific file type in Google Cloud Storage?

I have uploaded thousands of files to Google Storage, and I found out all the files are missing a content-type, so my website cannot serve them correctly.
I wonder if I can set some kind of policy, like changing all the files' content-type at the same time. For example, I have a bunch of .html files inside the bucket:
a/b/index.html
a/c/a.html
a/c/a/b.html
a/a.html
.
.
.
Is it possible to set the content-type of all the .html files, in different places, with one command?
You could do:
gsutil -m setmeta -h Content-Type:text/html gs://your-bucket/**.html
The ** wildcard matches .html objects at any depth in the bucket, and -m applies the updates in parallel.
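To verify the change afterwards, you can inspect one of the objects (the path here is just the first example path from the question, under the bucket name used above):
gsutil stat gs://your-bucket/a/b/index.html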
There's no single command to achieve exactly the behavior you are looking for (one command to edit all the objects' metadata); however, there's a gsutil command to edit metadata which you could use in a bash script to loop through all the objects inside the bucket.
1.- Option one is to use the gsutil "setmeta" command in a bash script:
# Get the list of all object names in the bucket and iterate over them,
# editing each object's metadata (assumes object names contain no spaces).
for OBJECT in $(gsutil ls "gs://[BUCKET_NAME]/**")
do
  gsutil setmeta -h "[METADATA_KEY]:[METADATA_VALUE]" "$OBJECT"
done
2.- You could also write a C++ program to achieve the same thing:
#include "google/cloud/storage/client.h"
namespace gcs = google::cloud::storage;
using ::google::cloud::StatusOr;
[](gcs::Client client, std::string bucket_name, std::string key,
   std::string value) {
  // List all the objects in the bucket and, inside the loop, edit each
  // object's custom metadata. (To change Content-Type specifically you
  // would set it on `desired` with set_content_type instead.)
  for (auto&& list_entry : client.ListObjects(bucket_name)) {
    if (!list_entry) continue;  // skip entries that failed to list
    std::string object_name = list_entry->name();
    StatusOr<gcs::ObjectMetadata> object_metadata =
        client.GetObjectMetadata(bucket_name, object_name);
    if (!object_metadata) continue;
    gcs::ObjectMetadata desired = *object_metadata;
    desired.mutable_metadata().emplace(key, value);
    StatusOr<gcs::ObjectMetadata> updated =
        client.UpdateObject(bucket_name, object_name, desired,
                            gcs::Generation(object_metadata->generation()));
  }
}

Google Cloud AI Platform: I can't create a model version using the "--accelerator" parameter

In order to get online predictions, I'm creating a model version on the ai-platform. It works fine unless I want to use the --accelerator parameter.
Here is the command that works:
gcloud alpha ai-platform versions create [...] --model [...] --origin=[...] --python-version=3.5 --runtime-version=1.14 --package-uris=[...] --machine-type=mls1-c4-m4 --prediction-class=[...]
Here is the parameter that makes it not work:
--accelerator=^:^count=1:type=nvidia-tesla-k80
This is the error message I get:
ERROR: (gcloud.alpha.ai-platform.versions.create) INVALID_ARGUMENT: Request contains an invalid argument.
I expect it to work, since 1) the parameter exists and uses these two keys (count and type), 2) I use the correct syntax for the parameter (any other syntax would return a syntax error), and 3) the "nvidia-tesla-k80" value exists (it is described in --help) and is available in the region in which the model is deployed.
Make sure you are using a recent version of the Google Cloud SDK.
Then you can use the following command:
gcloud beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--origin gs://$MODEL_DIRECTORY_URI \
--runtime-version 1.15 \
--python-version 3.7 \
--framework tensorflow \
--machine-type n1-standard-4 \
--accelerator count=1,type=nvidia-tesla-t4
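Once the version is created, you can confirm the configuration took effect by describing it (using the same variable names as above); the response should include the machine type and accelerator settings you requested:
gcloud beta ai-platform versions describe $VERSION_NAME --model $MODEL_NAME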
For reference, you can enable logging during model creation:
gcloud beta ai-platform models create {MODEL_NAME} \
--regions {REGION} \
--enable-logging \
--enable-console-logging
The format for the --accelerator parameter, as you can check in the official documentation, is:
--accelerator=count=1,type=nvidia-tesla-k80
I think that might be causing your issue; let me know.

passing parameters via dataproc workflow-templates

I understand that dataproc workflow-templates is still in beta, but how do you pass parameters via add-job into the executable SQL? Here is a basic example:
#!/bin/bash
DATE_PARTITION=$1
echo DatePartition: $DATE_PARTITION
# sample job
gcloud beta dataproc workflow-templates add-job hive \
--step-id=0_first-job \
--workflow-template=my-template \
--file='gs://mybucket/first-job.sql' \
--params="DATE_PARTITION=$DATE_PARTITION"
gcloud beta dataproc workflow-templates run $WORK_FLOW
gcloud beta dataproc workflow-templates remove-job $WORK_FLOW --step-id=0_first-job
echo `date`
Here is my first-job.sql file called from the shell:
SET hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
SET mapred.output.compress=true;
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec;
USE mydb;
CREATE EXTERNAL TABLE if not exists data_raw (
field1 string,
field2 string
)
PARTITIONED BY (dt String)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 'gs://data/first-job/';
ALTER TABLE data_raw ADD IF NOT EXISTS PARTITION(dt="${hivevar:DATE_PARTITION}");
In the ALTER TABLE statement, what is the correct syntax? I’ve tried what feels like over 15 variations but nothing works. If I hard code it like this (ALTER TABLE data_raw ADD IF NOT EXISTS PARTITION(dt="2017-10-31");) the partition gets created, but unfortunately it needs to be parameterized.
BTW – The error I receive is consistently like this:
Error: Error while compiling statement: FAILED: ParseException line 1:48 cannot recognize input near '${DATE_PARTITION}' ')' '' in constant
I am probably close but not sure what I am missing.
TIA,
Melissa
Update: Dataproc now has workflow template parameterization, a beta feature:
https://cloud.google.com/dataproc/docs/concepts/workflows/workflow-parameters
For your specific case, you can do the following:
Create an empty template
gcloud beta dataproc workflow-templates create my-template
Add a job with a placeholder for the value you want to parameterize
gcloud beta dataproc workflow-templates add-job hive \
--step-id=0_first-job \
--workflow-template=my-template \
--file='gs://mybucket/first-job.sql' \
--params="DATE_PARTITION=PLACEHOLDER"
Export the template configuration to a file
gcloud beta dataproc workflow-templates export my-template \
--destination=hive-template.yaml
Edit the file to add a parameter
jobs:
- hiveJob:
    queryFileUri: gs://mybucket/first-job.sql
    scriptVariables:
      DATE_PARTITION: PLACEHOLDER
  stepId: 0_first-job
parameters:
- name: DATE_PARTITION
  fields:
  - jobs['0_first-job'].hiveJob.scriptVariables['DATE_PARTITION']
Import the changes
gcloud beta dataproc workflow-templates import my-template \
--source=hive-template.yaml
Add a managed cluster or cluster selector
gcloud beta dataproc workflow-templates set-managed-cluster my-template \
--cluster-name=my-cluster \
--zone=us-central1-a
Run your template with parameters
gcloud beta dataproc workflow-templates instantiate my-template \
--parameters="DATE_PARTITION=${DATE_PARTITION}"
Thanks for trying out Workflows! First-class support for parameterization is part of our roadmap. However for now your remove-job/add-job trick is the best way to go.
Regarding your specific question:
Values passed via --params are accessed as ${hivevar:PARAM} (see [1]). Alternatively, you can set --properties, which are accessed as ${PARAM} (see the sketch after these notes).
The brackets around params are not needed. If the intent is to handle spaces in parameter values, use quotes like: --params="FOO=a b c,BAR=X"
Finally, I noticed an errant space here, DATE_PARTITION =$1, which probably results in an empty DATE_PARTITION value.
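To make the first point concrete, here is a minimal sketch reusing the add-job command from the question (the date value is just an illustration): a value passed with --params is read inside first-job.sql as ${hivevar:DATE_PARTITION}, whereas a value passed with --properties would instead be read as ${DATE_PARTITION}.
gcloud beta dataproc workflow-templates add-job hive \
--step-id=0_first-job \
--workflow-template=my-template \
--file='gs://mybucket/first-job.sql' \
--params="DATE_PARTITION=2017-10-31"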
Hope this helps!
[1] How to use params/properties flag values when executing hive job on google dataproc

Batch Filtering with Multi-Filter throws a 'Class attribute not set' exception

We have a data set of 15k classified tweets with which we need to perform sentiment analysis. I would like to test against a test set of 5k classified tweets. Because Weka needs the same attributes in the header of the test set as exist in the header of the training set, I will have to use batch filtering if I want to be able to run my classifier against this 5k test set.
However, there are several filters that I need to run my training set through, so I figured that running a MultiFilter against the training set would be a good idea. The MultiFilter works fine when not using the batch argument, but when I try to batch filter I get an error from the CLI as it tries to execute the first filter within the MultiFilter:
CLI multiFilter command w/batch argument:
java weka.filters.MultiFilter -F "weka.filters.supervised.instance.Resample -B 1.0 -S 1 -Z 15.0 -no-replacement" \
-F "weka.filters.unsupervised.attribute.StringToWordVector -R first-last -W 100000 -prune-rate -1.0 -N 0 -S -stemmer weka.core.stemmers.NullStemmer -M 2 -tokenizer weka.core.tokenizers.AlphabeticTokenizer" \
-F "weka.filters.unsupervised.attribute.Reorder -R 2-last,first"\
-F "weka.filters.supervised.attribute.AttributeSelection -E \"weka.attributeSelection.InfoGainAttributeEval \" -S \"weka.attributeSelection.Ranker -T 0.0 -N -1\"" \
-F weka.filters.AllFilter \
-b -i input\Train.arff -o output\Train_b_out.arff -r input\Test.arff -s output\Test_b_out.arff
Here is the resultant error from the CLI:
weka.core.UnassignedClassException: weka.filters.supervised.instance.Resample: Class attribute not set!
at weka.core.Capabilities.test(Capabilities.java:1091)
at weka.core.Capabilities.test(Capabilities.java:1023)
at weka.core.Capabilities.testWithFail(Capabilities.java:1302)
at weka.filters.Filter.testInputFormat(Filter.java:434)
at weka.filters.Filter.setInputFormat(Filter.java:452)
at weka.filters.SimpleFilter.setInputFormat(SimpleFilter.java:195)
at weka.filters.Filter.batchFilterFile(Filter.java:1243)
at weka.filters.Filter.runFilter(Filter.java:1319)
at weka.filters.MultiFilter.main(MultiFilter.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at weka.gui.SimpleCLIPanel$ClassRunner.run(SimpleCLIPanel.java:265)
And here are the headers with a portion of data for both the training and test input arffs:
Training:
@RELATION classifiedTweets
@ATTRIBUTE ##sentence## string
@ATTRIBUTE ##class## {1,-1,0}
@DATA
"Conditioning be very important for curly dry hair",0
"Combine with Sunday paper coupon and",0
"Price may vary by store",0
"Oil be not really moisturizers",-1
Testing:
@RELATION classifiedTweets
@ATTRIBUTE ##sentence## string
@ATTRIBUTE ##class## {1,-1,0}
@DATA
"5",0
"I give the curl a good form and discipline",1
"I have be cowashing every day",0
"LOL",0
"TITLETITLE Walgreens Weekly and Midweek Deal",0
"And then they walk away",0
Am I doing something wrong here? I know that supervised resampling requires the class attribute to be at the bottom of the attribute list within the header, and it is... within both the test and training input files.
EDIT:
Further testing reveals that this error does not occur in relation to the batch filtering; it occurs whenever I run the supervised Resample filter from the CLI... The data that I use works with every other filter I've tried within the CLI, so I don't understand why this filter is any different... resampling the data in the GUI works fine as well...
Update:
This also happens with the SMOTE filter instead of the resample filter
We could not get the batch filter to work with any resampling filter. However, our workaround was to simply resample (and then randomize) the training data as step 1. From this reduced set we ran batch filters for everything else we wanted on the test set. This seemed to work fine.
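A rough sketch of that two-step approach, reusing the file names and Resample options from the question (the -c last option, which tells a filter run from the command line which attribute is the class, and the resampled output file name are my additions for illustration):
# Step 1: resample (and then randomize) the training set on its own, with the class set explicitly.
java weka.filters.supervised.instance.Resample -B 1.0 -S 1 -Z 15.0 -no-replacement \
-c last -i input\Train.arff -o input\Train_resampled.arff
# Step 2: run the remaining filters in batch mode as in the question, but with the Resample
# filter removed from the MultiFilter and -i pointed at input\Train_resampled.arff.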
You could have used the MultiFilter along with the ClassAssigner filter to make it work:
java -classpath $jcp weka.filters.MultiFilter \
-F "weka.filters.unsupervised.attribute.ClassAssigner -C last" \
-F "weka.filters.supervised.instance.Resample -B 1.0 -S 1 -Z 66.0"

Tell Sphinx (or thinking Sphinx) to ignore periods when indexing

I have a strange issue with Sphinx, I am trying to be able to match things like:
L.A. Confidential
So people can search for "LA Confidential" and still get that title. Similarly for "P.M."
to be able to match "PM", etc.
I tried putting the period (full stop character U+002E) in the ignore_chars list. This didn't make any difference.
So then I tried setting index_sp = 1. This did not solve the issue either.
According to what I understand of the documentation, either of these should have solved this issue, correct?
I wonder if it has something to do with our match mode, which is set to extended2, using Sphinx 2.0.3.
Any help would be greatly appreciated.
Edit: here is my thinking_sphinx.yml config:
Note that the period character (U+002E) is not used anywhere else in my config except in the ignore_chars line.
production:
mem_limit: 512M
morphology: stem_en
wordforms: "db/sphinx/wordforms.txt"
stopwords: "db/sphinx/stopwords.txt"
ngram_chars: "U+4E00..U+9FBB, U+3400..U+4DB5, U+20000..U+2A6D6, U+FA0E, U+FA0F, U+FA11, \
U+FA13, U+FA14, U+FA1F, U+FA21, U+FA23, U+FA24, U+FA27, U+FA28, U+FA29, U+3105..U+312C, \
U+31A0..U+31B7, U+3041, U+3043, U+3045, U+3047, U+3049, U+304B, U+304D, U+304F, U+3051, \
U+3053, U+3055, U+3057, U+3059, U+305B, U+305D, U+305F, U+3061, U+3063, U+3066, U+3068, \
U+306A..U+306F, U+3072, U+3075, U+3078, U+307B, U+307E..U+3083, U+3085, U+3087, \
U+3089..U+308E, U+3090..U+3093, U+30A1, U+30A3, U+30A5, U+30A7, U+30A9, U+30AD, \
U+30AF, U+30B3, U+30B5, U+30BB, U+30BD, U+30BF, U+30C1, U+30C3, U+30C4, U+30C6, \
U+30CA, U+30CB, U+30CD, U+30CE, U+30DE, U+30DF, U+30E1, U+30E2, U+30E3, U+30E5, \/
U+30E7, U+30EE, U+30F0..U+30F3, U+30F5, U+30F6, U+31F0, U+31F1, U+31F2, U+31F3, \
U+31F4, U+31F5, U+31F6, U+31F7, U+31F8, U+31F9, U+31FA, U+31FB, U+31FC, U+31FD, \
U+31FE, U+31FF, U+AC00..U+D7A3, U+1100..U+1159, U+1161..U+11A2, U+11A8..U+11F9, \
U+A000..U+A48C, U+A492..U+A4C6"
ngram_len: 1
ignore_chars: "U+0027, U+2013, U+2014, U+0026, U+002E, ., &"
(huge char_set entry here for different languages, omitted.)
I ran a test locally with the following in my thinking_sphinx.yml for Thinking Sphinx v3.0.4 and it worked:
development:
ignore_chars: U+002E
The same in sphinx.yml for Thinking Sphinx v2.0.14 worked too. I am using Sphinx 2.0.8, but I'll be a little surprised if that's the problem. It's certainly unrelated to your match mode.
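Also worth double-checking: changes to thinking_sphinx.yml only take effect after the Sphinx configuration is regenerated and the index is rebuilt, which with Thinking Sphinx is typically done with:
rake ts:rebuild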