Binary operators in Loki - Grafana

Do binary operators for Loki queries exist?
Is there a workaround to achieve something like this?
{job="myService"}
|~"AI Unable to update the session state in"
OR
|~"Database Write Error"

No, you can't do that: chained line filters are ANDed together, so there is no |~ ... OR |~ ... syntax. But you can, for example, express the OR inside a single regex filter:
{job="myService"} |~ "AI Unable to update the session state in|Database Write Error"
Or the following:
count_over_time({job="myService"} |~ "AI Unable to update the session state in" [1h])
or
count_over_time({job="myService"} |~ "Database Write Error" [1h])
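For completeness: LogQL has no OR between line filters, but it does support binary operators between metric queries. So if you want both counts combined into one result, a sketch like this (same selector and filters as above) should work:
sum(count_over_time({job="myService"} |~ "AI Unable to update the session state in" [1h]))
+
sum(count_over_time({job="myService"} |~ "Database Write Error" [1h]))
Note that if either side returns no data for the window, the whole expression returns nothing, so the single-regex filter above is usually the simpler option.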

Related

Updating selectors after delegate addition failed with: Error Domain=NSCocoaErrorDomain Code=4099

Is anyone able to explain this error, and what might have caused it? I've been trying to create a messaging app using Firebase's Realtime Database in SwiftUI, but am unable to read from the database and have been getting this error.
Updating selectors after delegate addition failed with: Error
Domain=NSCocoaErrorDomain Code=4099 "The connection to service named
com.apple.commcenter.coretelephony.xpc was invalidated: failed at
lookup with error 3 - No such process."
I'm able to write to the database just fine, but reads fail, so I'm assuming this error is the cause. Thanks

How to resolve "Invalid Sequence Token" when using cloudwatch agent?

I'm seeing the following warning in the /var/log/amazon/amazon-cloudwatch-agent/amazon-cloudwatch-agent.log:
2021-10-06T06:39:23Z W! [outputs.cloudwatchlogs] Invalid SequenceToken used, will use new token and retry: The given sequenceToken is invalid. The next expected sequenceToken is: 49619410836690261519535138406911035003981074860446093650
But there is no mention of which file is actually the one failing. Not even when I add "debug": true to /opt/aws/amazon-cloudwatch-agent/bin/config.json.
cat /opt/aws/amazon-cloudwatch-agent/bin/config.json|jq .agent
{
  "metrics_collection_interval": 60,
  "debug": true,
  "run_as_user": "root"
}
I have many (28) files in the .logs.logs_collected.files.collect_list section of config.json, so how can I find out exactly which file is causing trouble?
As of 2021-11-29, a PR to improve the log messages has been merged into the cloudwatch-agent, but a new version of the agent has not been released yet; the first version after v1.247349.0 will likely include the improved messages.
The fix changes the log statements to:
INFO: "First time sending logs to %v/%v since startup so sequenceToken is nil, learned new token: xxxx: yyyy" - an INFO message, as this behaviour is expected at startup, for example.
WARN: "Invalid SequenceToken used (%v) while sending logs to %v/%v, will use new token and retry: xxxxxv" - this, on the other hand, is not expected and may mean that someone else is writing to the log group/log stream concurrently.
If those warnings come right after a restart of the cloudwatch agent (cwagent), you can safely ignore them; it's expected behaviour. The cloudwatch agent does not save the next sequence token in its persistent state, so on restart it "learns" the correct sequence number by issuing a PutLogEvents call with no sequence token at all; that call fails with an InvalidSequenceTokenException that carries the next sequence token to use. So it's expected to see those messages at startup, which is also why I proposed a PR to amazon-cloudwatch-agent to improve them.
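You can reproduce the same handshake by hand with the AWS CLI. The log group, stream, and timestamp below are made-up placeholders, and this reflects the PutLogEvents behaviour current at the time of this answer, where non-empty streams reject calls without a token:
# First call on a stream that already has events, deliberately without --sequence-token:
aws logs put-log-events \
  --log-group-name /my/app \
  --log-stream-name my-stream \
  --log-events timestamp=1633500000000,message=hello
# -> InvalidSequenceTokenException: The next expected sequenceToken is: 49619...
# Retry with the token the exception handed back:
aws logs put-log-events \
  --log-group-name /my/app \
  --log-stream-name my-stream \
  --log-events timestamp=1633500000000,message=hello \
  --sequence-token 49619...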
If the "Invalid SequenceToken used" is seen long after the restart then you may have other issues.
The "Invalid SequenceToken used" error usually means that two entities/sources are trying to write to the same log group/log stream as mentioned in 2 (which is really for the old awslogs agent but still useful):
Caught exception: An error occurred (InvalidSequenceTokenException) when calling the PutLogEvents operation: The given sequenceToken is invalid[…] -or- Multiple agents might be sending log events to log stream[…] – You can't push logs from multiple log files to a single log stream. Update your configuration to push each log to a log stream-log group combination.
It could be that the amazon-cloudwatch-agent itself is trying to upload the same file twice because you have duplicates in your config.json, as sketched below.
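As a made-up illustration, a collect_list like this would trigger the warning, because two entries map different files to the same log group/log stream pair:
"collect_list": [
  {
    "file_path": "/var/log/app/app.log",
    "log_group_name": "/myapp/logs",
    "log_stream_name": "app_node1"
  },
  {
    "file_path": "/var/log/app/app-old.log",
    "log_group_name": "/myapp/logs",
    "log_stream_name": "app_node1"
  }
]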
So first print all your log group / log stream pairs in your config.json with:
cat /opt/aws/amazon-cloudwatch-agent/bin/config.json|jq -r '.logs.logs_collected.files.collect_list[]|"\(.log_group_name) \(.log_stream_name)"'|sort
which should give an output similar to:
/tableauserver/apigateway apigateway_node5-0.log
/tableauserver/apigateway control_apigateway_node5-0.log
/tableauserver/appzookeeper appzookeeper-discovery_node5-1.log
...
/tableauserver/vizqlserver vizqlserver_node5-3.log
Then you can use uniq -d to find the duplicates in that list with:
cat /opt/aws/amazon-cloudwatch-agent/bin/config.json|jq -r '.logs.logs_collected.files.collect_list[]|"\(.log_group_name) \(.log_stream_name)"'|sort|uniq -d
# The list should be empty otherwise you have duplicates
If that command produces any output it means that you have duplicates in your config.json collect_list.
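Once uniq -d reports a duplicated pair, you can print the file_path of each entry next to its pair to see exactly which files are colliding (file_path is the standard key in each collect_list entry):
cat /opt/aws/amazon-cloudwatch-agent/bin/config.json|jq -r '.logs.logs_collected.files.collect_list[]|"\(.log_group_name) \(.log_stream_name) \(.file_path)"'|sort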
I personally think that cwagent itself should print the "offending" loggroup/logstream in the logs, so I opened an issue on the amazon-cloudwatch-agent GitHub page.

Cloudwatch logs "AND NOT" search

I'm searching Cloudwatch log events for errors with the following criteria:
?"error" ?"ERROR" ?"Error:"
How can I exclude specific terms from the result? For example, if I don't care about specific_error, how can I specify not to match on it?
I'm expecting to be able to do something like:
(?"error" AND -"specific_error") ?"ERROR" ?"Error:"
In the CloudWatch console, this can be accomplished with the - operator before the term you wish to exclude:
"error" -"something minor happened"
This is from the AWS docs for "Matching terms in log events".
Similarly, using aws logs tail, you can pass this to the --filter-pattern argument:
$ aws logs tail --format short /aws/lambda/my_lambda --filter-pattern '"error" -"something minor happened"' --since 3h
2021-07-09T19:28:47 error: something bad happened
2021-07-09T19:28:51 error: something bad happened
2021-07-09T19:29:52 error: something REALLY bad happened
2021-07-09T19:30:15 error: something CATASTROPHIC happened! Aiee!
2021-07-09T19:30:36 error: something bad happened
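The same filter pattern string also works with the lower-level filter-log-events command if you don't have aws logs tail available (the log group name is just the one from the example above):
aws logs filter-log-events \
  --log-group-name /aws/lambda/my_lambda \
  --filter-pattern '"error" -"something minor happened"'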

Error while loading items: no deployed process definition found

In the dashlet "My tasks" there are two items: "Current tasks" and "Completed tasks".
When I click on the "Completed tasks" I see the following error on a red background:
Error while loading items
When this error occurs, I see the following in the logs.
catalina.out:
...
Caused by: org.activiti.engine.ActivitiObjectNotFoundException: no deployed process definition found with id 'publishWhitepaper:1:1115'
at org.activiti.engine.impl.persistence.deploy.DeploymentManager.findDeployedProcessDefinitionById(DeploymentManager.java:75)
at org.activiti.engine.impl.cmd.GetDeploymentProcessDefinitionCmd.execute(GetDeploymentProcessDefinitionCmd.java:39)
at org.activiti.engine.impl.cmd.GetDeploymentProcessDefinitionCmd.execute(GetDeploymentProcessDefinitionCmd.java:26)
at org.activiti.engine.impl.interceptor.CommandInvoker.execute(CommandInvoker.java:24)
at org.activiti.engine.impl.interceptor.CommandContextInterceptor.execute(CommandContextInterceptor.java:57)
at org.activiti.spring.SpringTransactionInterceptor$1.doInTransaction(SpringTransactionInterceptor.java:47)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:131)
at org.activiti.spring.SpringTransactionInterceptor.execute(SpringTransactionInterceptor.java:45)
at org.activiti.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:31)
at org.activiti.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:40)
at org.activiti.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:35)
at org.activiti.engine.impl.RepositoryServiceImpl.getDeployedProcessDefinition(RepositoryServiceImpl.java:138)
at org.alfresco.repo.workflow.activiti.ActivitiUtil.getDeployedProcessDefinition(ActivitiUtil.java:133)
at org.alfresco.repo.workflow.activiti.ActivitiTypeConverter.getTaskDefinition(ActivitiTypeConverter.java:223)
at org.alfresco.service.cmr.workflow.LazyActivitiWorkflowTask.<init>(LazyActivitiWorkflowTask.java:93)
at org.alfresco.repo.workflow.activiti.ActivitiWorkflowEngine.getAssignedTasks(ActivitiWorkflowEngine.java:1543)
... 92 more
Before that, I installed and looked at some example business processes, but then deleted them (also via the workflow console); most likely, I didn't do that correctly...
I can't understand why this error appears:
no deployed process definition found with id 'publishWhitepaper:1:1115'
Maybe something is cached somewhere?
Axel Faust gave an exhaustive answer:
Is there enough functionality of the workflow admin console?
Now I understand the cause of the error: as Axel Faust said, "..the tables for historic information do require referential integrity in their relation to the process definition and are not automatically cascade-deleted when you undeploy a process."
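If you want to see which leftover historic rows still point at the deleted definition, a query along these lines against the Activiti history tables should show them (ACT_HI_TASKINST and its PROC_DEF_ID_ column are standard Activiti schema; adjust the id for your case):
SELECT ID_, NAME_, PROC_DEF_ID_
FROM ACT_HI_TASKINST
WHERE PROC_DEF_ID_ = 'publishWhitepaper:1:1115';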
Thanks to all for the assistance!
Put this configuration in the application.yml of your Spring Boot app. Basically, Activiti couldn't find your .bpmn file; pointing it to the correct location should solve the issue:
spring:
  activiti:
    database-schema-update: true
    db-history-used: true
    check-process-definitions: true
    process-definition-location-prefix: file:/opt/try-uploads/
    # process-definition-location-prefix: classpath:/processes/
    process-definition-location-suffixes: '*.bpmn, *.bpmn20.xml'
    history-level: audit

Configure MongoDB with Nutch 2.3: an error in the indexer job?

I have successfully configured MongoDB (5.3.1) and Nutch (2.3). When I run the command ./bin/nutch index -all, some errors are printed after the inject/generate/fetch/parse/updatedb commands have worked. The error details look like:
SolrIndexerJob: java.lang.RuntimeException: job failed: name=apache-nutch-2.3.1.jar, jobid=job_local140530148_0001
at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:120)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:154)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:176)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:202)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:211)
I have configured the file $NUTCH_HOME/runtime/local/conf/nutch-site.xml.
Details:
If all the other steps ran fine, the problem is probably not with MongoDB but with Solr (your nutch-site.xml suggests that you wanted to index your data into Solr). As far as I remember, when I used Solr I had to specify the core name, so the URL would be something like:
http://localhost:8983/solr/mycore/
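In Nutch's case the Solr URL normally comes from the solr.server.url property in nutch-site.xml, so pointing it at the core should look roughly like this (mycore is a placeholder for your actual core name):
<property>
  <name>solr.server.url</name>
  <value>http://localhost:8983/solr/mycore/</value>
</property>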