Pattern matching with gcloud --filter

I want to get a list of tags matching a certain pattern and to do that I have been using the command:
gcloud container images list-tags gcr.io/project/repository --format="json" --filter="tags:*_master"
but I'm getting this warning:
WARNING: --filter : operator evaluation is changing for consistency across Google APIs. tags:*_master currently matches but will not match in the near future. Run `gcloud topic filters` for details.
I've searched around but can't find how I'm supposed to do this going forwards. Does anyone know how I can match a pattern in this way i.e. all tags that end with "_master"?

As mentioned in the documentation, you could use a regular expression.
To use a regular expression, you would use --filter="key ~ value". In this case, the key would be 'tags'. For the value, you want to match anything that ends with '_master'; a $ in a regular expression anchors the match to the end of the string.
Combining all this, your new filter would be: --filter="tags ~ _master$"
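Outside of gcloud, the anchoring behavior of that regex is easy to check; here is a minimal Python sketch (the tag list is made up) showing that `_master$` matches only strings that end in `_master`:

```python
import re

# The same pattern the new --filter uses: anchored to the end of the string.
pattern = re.compile(r"_master$")

# Hypothetical tag names for illustration.
tags = ["v1.2_master", "master_build", "feature_master", "v1_master_old"]

matching = [t for t in tags if pattern.search(t)]
print(matching)  # only the tags ending in "_master"
```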

From the documentation, your warning message is caused by the following:
The operator evaluation is changing for consistency across Google APIs. The current default is deprecated and will be dropped shortly. A warning will be displayed when a --filter expression would return different matches using both the deprecated and new implementations.
Please refer to the previously mentioned documentation to make sure that you are using the correct matches.


az appconfig kv export throwing an error when the key contains a ":" (colon)

The Microsoft guide lists 4 methods of deploying App Configurations (in my case to App Services). https://learn.microsoft.com/en-us/azure/azure-app-configuration/howto-best-practices
We currently use last method (push configuration) in combination with labels, specifically:
az appconfig kv export
This works well, including for hierarchical keys, which require a double-underscore separator to represent the nesting. However, the development team is transitioning to the second method, which is to reference the keys from the App Service. To do that, hierarchical keys require a colon as the separator.
The plan was to simply "change" (strictly, recreate) the key from:
first__second to first:second. When doing this, however, I notice that the export fails and it is the presence of the colon causing the issue. The error is:
Failed to write key-values to appservice: Operation returned an invalid status 'Bad Request'
This error appears even when the separator is specified:
--separator ":"
In answer to the question "why export values if you have decided to read the App Configuration from the App Service?" the answer is twofold:
Because the pointer to the App Configuration store (the primary key) still needs to be "pushed".
Because we had hoped to avoid a hard linkage between the code change and the App Config key changes, so we were effectively going to have each key represented at both first__second and first:second at the same time for a short transition period to de-link the two changes.
Does anyone know if there is a way to export keys that have a colon in them? (Or, indeed, if this is just a CLI bug and it should just work?)
If you are using the --separator property, you have to use a single colon ':', as described in the MS docs:
The separator is used for flattening key-value pairs into a JSON or YAML file. It is required for exporting hierarchical structures. For property files and feature flags, the separator is ignored.
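Conceptually, the separator controls how nested structures are flattened into delimited keys. Here is a minimal Python sketch of that idea (this is an illustration of the flattening concept, not the az CLI's actual implementation):

```python
def flatten(d, sep=":", prefix=""):
    """Flatten nested dicts into separator-delimited keys, e.g. first:second."""
    out = {}
    for k, v in d.items():
        key = f"{prefix}{sep}{k}" if prefix else k
        if isinstance(v, dict):
            out.update(flatten(v, sep, key))
        else:
            out[key] = v
    return out

# Hypothetical hierarchical config for illustration.
config = {"first": {"second": "value"}}
print(flatten(config, sep=":"))   # {'first:second': 'value'}
print(flatten(config, sep="__"))  # {'first__second': 'value'}
```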

Reading CSV file with Spring batch and map to Domain objects based on the the first field and then insert them in DB accordingly [duplicate]

How can we implement pattern matching in Spring Batch? I am using org.springframework.batch.item.file.mapping.PatternMatchingCompositeLineMapper.
I understand that I can only use ? or * here to create my pattern.
My requirement is like below:
I have a fixed-length record file, and in each record the fields at the 35th and 36th positions give the record type.
For example, in the record below, "05" is the record type at positions 35-36, and the total record length is 400.
0000001131444444444444445589868444050MarketsABNAKKAAAAKKKA05568551456...........
I tried to write a regular expression, but it does not work; apparently only two special characters can be used, * and ?.
In that case I can only write something like this:
??????????????????????????????????05?????????????..................
but that does not seem like a good solution.
Please suggest how I can solve this. Thanks a lot in advance for your help.
The PatternMatchingCompositeLineMapper uses an instance of org.springframework.batch.support.PatternMatcher to do the matching. It's important to note that PatternMatcher does not use true regular expressions. It uses something closer to ant patterns (the code is actually lifted from AntPathMatcher in Spring Core).
That being said, you have two options:
Use a pattern like the one you are referring to (since there is no shorthand way to specify how many ? characters should be checked, as there is in regular expressions).
Create your own composite LineMapper implementation that uses regular expressions to do the mapping.
For the record, if you choose option 2, contributing it back would be appreciated!
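If you go the regex route (option 2), the core check is straightforward. Here is a minimal Python sketch of matching the record type at positions 35-36 (the record is the sample from the question, truncated; a Java implementation inside a custom LineMapper would use the same regex):

```python
import re

# Sample record from the question (truncated); positions 35-36 (1-based) hold "05".
record = "0000001131444444444444445589868444050MarketsABNAKKAAAAKKKA05568551456"

# Regex alternative to the ?-wildcard pattern: skip 34 characters, then require "05".
type_05 = re.compile(r"^.{34}05")

print(bool(type_05.match(record)))  # True for a type-05 record
print(record[34:36])                # "05" -- the record type via slicing
```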

Using UIMA Ruta: How do I annotate the first token of a text and use that annotation further?

I would like to annotate the first token of a text and use that annotation in following rules. I have tried different patterns:
Token.begin == 0 (doesn't work, although there definitely is a token that begins at 0)
Token{STARTSWITH(DocumentMetaData)}; (also doesn't work)
The only pattern that works is:
Document{->MARKFIRST(First)};
But if I try to use that annotation e.g. in the following way:
First{->MARK(FirstAgain)};
it doesn't work again. This makes absolutely no sense to me. There seems to be a really weird behaviour with annotations that start at 0.
This trivial task can be a bit tricky indeed, mainly because of the visibility settings. I do not know why your rules in the question do not work without having a look at the text that should be processed.
As for UIMA Ruta 2.7.0, I prefer a rule like:
# Token{->First};
Here some additional thoughts about the rules in the question:
Token.begin == 0;
Normally, there is no token beginning at 0, since the document starts with some whitespace or line breaks. If there actually is a token that starts at offset 0 and the rule does not match, then something invisible is covering the begin or the end of the token. This depends of course on the filtering settings, but in case you did not change them, it could be a BOM (byte order mark).
Token{STARTSWITH(DocumentMetaData)};
Here, either the problem above applies, or the begin offset is not identical. If the DocumentMetaData covers the complete document, then I would bet on the leading whitespaces. Another reason could be that the internal indexing is broken, e.g., the tokens or the DocumentMetaData are created by an external analysis engine which was called with EXEC and no reindexing was configured in the action. This situation could also occur with unfortunate optimizations using the config params.
Document{->MARKFIRST(First)};
First{->MARK(FirstAgain)};
MARKFIRST creates an annotation using the offset of the first RutaBasic in the matched context IIRC. If the document starts with something invisible, e.g., a line break, then the second rule cannot match.
As general advice in situations like this, when some obvious simple rules do not work as expected, I recommend adding some additional rules and using the debugging config with the explanation view. A rule like Token; can directly highlight whether the visibility settings are problematic for the given tokens.
DISCLAIMER: I am a developer of UIMA Ruta

IBM Watson Assistant: Regular expressions with context variables

I am gathering some context variables with slots, and they work just fine.
So I decided to do in another node of the conversation, check if one of these context variables is a specific number:
I was thinking of enabling multi-responses and checking if, for example, $dni is 1 (it is an integer; a pattern of one digit only), or if it is 2 or 3:
But this is not working. I have been trying to solve it for some days with different approaches, but I really cannot find a way through it.
My guess is that a context variable has a value, and you can print it, e.g. responding with the user's name and the like (which indeed is useful!), but that comparing values is not possible.
Any insights on this would be appreciated.
Watson Assistant uses a short-hand syntax but also supports the more complex expressions. What you could do is to edit the condition in the JSON editor. There, for the condition, use a function like matches() on the value of the context variable.
Note that it is not recommended to check for context variables in the slot conditions. You can use multi-responses. An alternative way is to put the check into the response itself. There, you can use predicates to generate the answer.
<? context.dni==1 ? 'Very well' : 'Your number is not 1' ?>
You can nest the evaluation to have three different answers. Another way is to build an array of responses and use dni as key.
Instead of matching to specific integers, you could consider using the Numbers system entity. Watson Assistant supports several languages. As a benefit, users could answer "the first one", "the 2nd option", etc., and the bot still would understand and your logic could still route to the correct answer.
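The keyed-array idea from the answer can be sketched outside Watson Assistant. This Python sketch (the response texts are made up) mirrors mapping the $dni value to a response, with a fallback when the value is not 1, 2, or 3:

```python
# Hypothetical mapping of the $dni context variable to responses,
# mirroring the nested-ternary / keyed-array idea from the answer.
responses = {
    1: "Very well",
    2: "Option two it is",
    3: "Third choice noted",
}

def answer_for(dni):
    """Return the response for a given dni value, with a fallback."""
    return responses.get(dni, "Your number is not 1, 2 or 3")

print(answer_for(1))  # "Very well"
print(answer_for(7))  # falls back to the default response
```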

How to call google map api using swift 4?

My issue is that I need to fetch only restaurants and bars using the Google Places API.
If you need my code, I can send it.
This is my base URL: "https://maps.googleapis.com/maps/api/place/"
nearbyURLFragment: let nearbyURLFragment = "nearbysearch/json?key=%@&location=%f,%f&rankby=distance&type=restaurant,bar"
A single type works properly, but with both types I'm not getting a proper result.
Quoting the Google API Docs,
Restricts the results to places matching the specified type. Only one
type may be specified (if more than one type is provided, all types
following the first entry are ignored).
The only workaround for this would be to specify one of the types in the keyword attribute; however, this will return very inconsistent results.
The best way to do this is to do two separate API calls with differing types.
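A sketch of that two-call approach in Python (the helper names and coordinates are made up; the endpoint and parameters mirror the question): issue one Nearby Search request per type, then de-duplicate the combined results on place_id:

```python
from urllib.parse import urlencode

BASE = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"

def nearby_url(api_key, lat, lng, place_type):
    """Build one Nearby Search URL; only a single type is honored per call."""
    params = {
        "key": api_key,
        "location": f"{lat},{lng}",
        "rankby": "distance",
        "type": place_type,
    }
    return f"{BASE}?{urlencode(params)}"

# One request per type instead of "type=restaurant,bar".
urls = [nearby_url("API_KEY", 40.4168, -3.7038, t) for t in ("restaurant", "bar")]
for u in urls:
    print(u)

def merge_results(restaurant_results, bar_results):
    """Merge the two result lists, de-duplicating on place_id."""
    seen, merged = set(), []
    for place in restaurant_results + bar_results:
        if place["place_id"] not in seen:
            seen.add(place["place_id"])
            merged.append(place)
    return merged
```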