Tekton: How to delete successful PipelineRuns?

My desired Tekton use case is simple:
successful pipelineruns should be removed after x days
failed pipelineruns shouldn't be removed automatically.
I plan to do the cleanup in an initial cleanup task. That seems better to me than annotation- or CronJob-based approaches: as long as nothing new is built, nothing has to be deleted.
Direct approaches:
Failed: tkn delete doesn't seem very helpful because it doesn't discriminate between successful and failed runs.
Failed: oc delete --field-selector ... doesn't support the well-hidden but highly expressive field status.conditions[0].type==Succeeded
Indirect approaches (first filtering a list of pod names and then deleting them - not elegant at all):
Failed: Filtering the output with -o=jsonpath... seems costly, and the conditions array seems to break the statement, so that (why ever?!) everything is returned... not viable.
My last attempt is tkn pipelineruns list --show-managed-fields, parsed with sed/awk... which is gross... but at least it does what I want, and quite efficiently at that. But it might turn out brittle if the design of the output changes in future releases...
Do you have any better, more elegant approaches?
Thanks a lot!

Until a better solution comes along, I'll post my current solution (and its drawbacks):
Our cleanup-task is now built around the following solution, evaluating the table returned by tkn pipelineruns list:
tkn pipelineruns list --show-managed-fields -n e-dodo-tmgr --label tekton.dev/pipeline=deploy-pipeline | awk '$6~/Succeeded/ && $3~/day|week|month/ {print $1}'
Advantages:
It does what it should without extensive calls or additional calculation.
Disadvantages:
The age filter is limited to "older than an hour / a day / a week ...". But that's acceptable, since only successful builds are concerned.
I guess the design is quite brittle, because with changes in the tkn client the format of the table might change, which means awk would pick the wrong columns, or run into similar pattern problems.
All in all I hope the solution will hold until there are some more helpful client-features that make the desired info directly filterable. Actually I'd hope for something like tkn pipelineruns delete --state successful --period P1D.
The notation for the time period is from ISO8601.
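In the meantime, a jq-based variant that filters on the Succeeded condition directly (instead of on the table layout) is also possible. This is just a sketch: the namespace and label are the ones from above, the one-day threshold (86400 seconds) is only an example, and it assumes kubectl and jq are available inside the cleanup task:
kubectl get pipelineruns -n e-dodo-tmgr -l tekton.dev/pipeline=deploy-pipeline -o json \
  | jq -r --argjson maxage 86400 '
      .items[]
      # keep only runs whose Succeeded condition is True
      | select(any(.status.conditions[]?; .type == "Succeeded" and .status == "True"))
      # keep only runs that completed more than $maxage seconds ago
      | select(.status.completionTime != null
               and (now - (.status.completionTime | fromdateiso8601)) > $maxage)
      | .metadata.name' \
  | xargs -r kubectl delete pipelinerun -n e-dodo-tmgr
That avoids the dependency on column positions, at the cost of needing jq in the image.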

Related

patch endpoint naming and realisation

We have a REST resource
/tasks/{task-type}
and only GET methods available.
GET /tasks/{task-type}
GET /tasks/{task-type}/{id}
The Task entity contains meta info like created, finished, status, ref key, and try count for scheduled tasks.
Now we are faced with a problem where a task may contain incorrect data, so its execution always fails.
Since the scheduler invokes tasks every 5 minutes, there are a lot of errors in the logs, and the largest try counts are around 500k. The solution I found is to limit try_count to five (for example). Now we need a way to manually reset the try count to zero. I found two solutions:
1.
PATCH /tasks/{task-type}/{id}/discard-try-count - no response body
This solution looks pretty simple, but it violates the REST convention because we use an action (verb) in the naming. And if we need to change other fields, we will end up with a lot of endpoints in this style.
2a.
PATCH /tasks/{task-type}/{id}
body:
{
"tryCounts": int
}
This is how REST wants to see it, and we can easily add new fields to modify, but now the client can set any value for tryCounts.
2b.
PATCH /tasks/{task-type}/{id}
body:
{
"tryCounts": int // validate that try count can be only zero
}
Differs from the previous one by the presence of validation.
This looks like the most reliable solution. Is it really the best fit?
The non-verb convention is not a standard; you can violate it if you want to. It can also be worked around with something very simple: just convert the verb into a noun and you will be okay, something like:
POST /tasks/{task-type}/{id}/try-count-discarding
Another way is setting the try count to zero:
PUT /tasks/{task-type}/{id}/try-count 0
Yet another solution is combining the two, which I like the most:
PATCH /tasks/{task-type}/{id}/try-count {"op": "reset"}
Or another variant:
PATCH /tasks/{task-type}/{id} {"op": "discard-try-count"}
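If it helps to see the variant I like the most as an actual HTTP call, a hypothetical request (host, task type and id are made up) would be:
curl -X PATCH "https://api.example.com/tasks/report-generation/42/try-count" \
  -H "Content-Type: application/json" \
  -d '{"op": "reset"}'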

How to filter pods based on status in k9s

Often there is more than one search result returned from the / filter command, which is confirmed by pressing Enter. Then I can navigate the results, but I have no way to filter based on the data displayed, e.g. Status or CPU.
QUESTION: Is this possible? If so, what do I need to do to achieve it?
I don't think it's possible to filter search results, but you can sort them, which can be helpful in most cases. For example:
SHIFT+C sorts by CPU
SHIFT+M sorts by MEMORY
SHIFT+S sorts by STATUS
...
You can filter results using / just like with :
As @dgebert said, for sorting use SHIFT + the first letter of the keyword you want to sort by.
You can try something like this, replacing ImagePullBackOff with the filter you need:
kubectl get pod -o=json | jq '.items[]|select(any( .status.containerStatuses[]; .state.waiting.reason=="ImagePullBackOff"))|.metadata.name'
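If the status you care about is just the pod phase (Running, Pending, Succeeded, Failed), kubectl's built-in field selectors work too, without jq; for example, to list pods that are not Running:
kubectl get pods --field-selector=status.phase!=Running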

Grok filter for a time counter HH:MM

I'm quite new to ELK and Grok-filtering, and I'm struggling with parsing this particular pattern in my grok filter.
I've used the grok debugger to try and solve this, but although I like the tool, I just get confused by the custom patterns.
Eventually, I hope to parse lots of log files sent by filebeat to logstash, then send the parsed logs to elasticsearch and display with kibana or some similar visualization tool.
The lines that I need to parse follow the following pattern:
1310 2017-01-01 16:48:54 [325:51] [326:49] [359:57] Some log info text
The first four digits are a log type identifier and will be used for grouping. I've called the field "LogLineID".
The date is formatted YYYY-MM-DD HH:MM:SS, and is parsed ok. I called the field "LogDate".
But now the problem begins. Within the square brackets, I have counters, formatted as MM:SS if you like. I cannot for the life of me find a way to sort these out, but I need to compare these times, hence I want to store them as minutes and seconds, not just numbers.
The first is a counter "TimeSpent",
the second is a counter "TimeStarted" and
the third is a counter "TimeSinceDown".
Then, last, comes the info text, which I've managed to grok with simply applying %{GREEDYDATA:LogInfo}.
I notice that the amount of minutes could be far higher than the standard 60 minutes within an hour, so I may be barking up the wrong tree here trying to parse it with date patterns such as TIMESTAMP_ISO8601, but then, I don't really know how else to do this.
So, I came this far:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate}
and was, as mentioned, able (by cutting away the square-bracket parts) to parse the log info text with
%{GREEDYDATA:LogInfo}
to create a field LogInfo.
But that's where I'm stuck. Could someone please help me figure out the rest?
Massive thanks in advance.
PS! I also found %{NUMBER:duration}, but as far as I could tell it only parses timestamps with a dot, not a colon.
A grok regex expression can help you solve the problem.
But first I want to make sure: do you mean that [325:51] [326:49] [359:57] are the three components you want to fetch, and that the result should look like:
TimeSpent: 325:51
TimeStarted: 326:49
TimeSinceDown: 359:57
If I got the point right, you can go one of the following ways:
Define your own custom pattern file and add the pattern there.
Just use the expression in the filter part of your Logstash conf file.
Hope it helps.
Ah, there was a space.. Actually, I was misleading myself and everybody in my question, as it was not actually that log line that was causing problems. I just took the first one, not realizing where the problem really was; the one causing problems had a space within the brackets, like this: [ 42:31]. There are also some parts with two spaces, so the way I managed to solve this was to include a %{SPACE} between the \[ and the %{NUMBER}:
%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{SPACE}%{NUMBER:TimeSpentMinutes}\:%{NUMBER:TimeSpentSeconds}\] \[%{SPACE}%{NUMBER:TimeStartedMinutes}\:%{NUMBER:TimeStartedSeconds}\] \[%{SPACE}%{NUMBER:TimeSinceDownMinutes}\:%{NUMBER:TimeSinceDownSeconds}\] %{GREEDYDATA:LogText}
I still haven't solved the merging of minutes and seconds, but this I can also handle in a later stage.
Thanks to Lin Don for showing an interest in my problem, and sorry for not replying sooner.
Hope the solution will help others (or even myself) if they're stuck on the same kind of problem.
Note to myself: Read the logs more carefully before grok'ing.. :)
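For anyone wondering where the expression goes (the earlier suggestion about the filter part of the conf file), a rough sketch of a Logstash filter block using the final pattern above could look like this; the field names come from the pattern, and the mutate step is just one possible way to make the counters comparable as integers:
filter {
  grok {
    match => {
      "message" => "%{NUMBER:LogLineID} %{TIMESTAMP_ISO8601:LogDate} \[%{SPACE}%{NUMBER:TimeSpentMinutes}\:%{NUMBER:TimeSpentSeconds}\] \[%{SPACE}%{NUMBER:TimeStartedMinutes}\:%{NUMBER:TimeStartedSeconds}\] \[%{SPACE}%{NUMBER:TimeSinceDownMinutes}\:%{NUMBER:TimeSinceDownSeconds}\] %{GREEDYDATA:LogText}"
    }
  }
  # Convert the captured counters from strings to integers so they can be compared numerically.
  mutate {
    convert => {
      "TimeSpentMinutes"     => "integer"
      "TimeSpentSeconds"     => "integer"
      "TimeStartedMinutes"   => "integer"
      "TimeStartedSeconds"   => "integer"
      "TimeSinceDownMinutes" => "integer"
      "TimeSinceDownSeconds" => "integer"
    }
  }
}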

In k8s reflector, DeltaFiFo.Replace function

In the reflector framework, it will first execute a LIST request and then watch for changes. It takes the list result and puts the items into a queue (such as DeltaFIFO) by calling the Replace function. However, when computing a key fails, Replace just returns the error, ListAndWatch ends, and the next round of ListAndWatch is executed.
(source on Github)
One drop of poison infects the whole tun of wine.
I don't know why it is handled this way. When one of the items is in a wrong state (so computing its key fails every time), ListAndWatch will always fail. I think logging the error instead of returning it would be a better approach.
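To make the point concrete, here is a small, self-contained sketch (not the actual client-go source; names and types are simplified) contrasting the current fail-fast behaviour with the log-and-continue behaviour I have in mind:
package main

import (
	"fmt"
	"log"
)

// item stands in for an object received from the initial LIST call.
type item struct {
	namespace, name string
}

// keyOf stands in for the queue's key function; it fails for items in a
// "wrong state" (here: a missing name), mimicking the compute-key error.
func keyOf(it item) (string, error) {
	if it.name == "" {
		return "", fmt.Errorf("object has no name")
	}
	return it.namespace + "/" + it.name, nil
}

// replaceStrict models the current behaviour: the first key error aborts
// the whole Replace call, so every round of ListAndWatch fails the same way.
func replaceStrict(list []item) error {
	for _, it := range list {
		if _, err := keyOf(it); err != nil {
			return err
		}
		// ... enqueue the item under its key ...
	}
	return nil
}

// replaceLenient models the suggested behaviour: log the bad item and keep
// processing the rest of the list.
func replaceLenient(list []item) {
	for _, it := range list {
		if _, err := keyOf(it); err != nil {
			log.Printf("skipping item %+v: %v", it, err)
			continue
		}
		// ... enqueue the item under its key ...
	}
}

func main() {
	list := []item{{"default", "pod-a"}, {"default", ""}, {"default", "pod-b"}}
	if err := replaceStrict(list); err != nil {
		fmt.Println("strict Replace failed:", err)
	}
	replaceLenient(list)
}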
Thanks.

PHPStorm: unable to resolve column '$1' in PostgreSQL prepared queries

How to get rid of this warning in statements like this:
pg_prepare('select * from table where id=$1', array($id));
It's not an error, but it is a little annoying.
You are using v9.5 -- you really should state such info in your question -- its settings screen is a bit different from the current stable v9.0.2.
Edit your 2nd Parameters Pattern: make sure it has \ in front of ? -- somehow it interferes with other patterns.
It should be \?\d+ at least.
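For reference, a complete pg_prepare()/pg_execute() call where the inspection fires looks roughly like this; the connection string, statement name, table and column are made up:
<?php
// Hypothetical connection; adjust for your environment.
$conn = pg_connect('host=localhost dbname=mydb user=myuser password=secret');

// PhpStorm flags the $1 placeholder inside the SQL string as
// "unable to resolve column '$1'", although it is valid PostgreSQL syntax.
pg_prepare($conn, 'find_by_id', 'SELECT * FROM my_table WHERE id = $1');

$id = 1;
$result = pg_execute($conn, 'find_by_id', array($id));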