How do I suppress all logging messages in Sails.js/Waterline?

I have the following, but am still seeing a ton of Waterline logging messages.
log: {
  level: 'silent',
}

There are several files in which logging can be configured (in order of precedence):
config/log.js
config/env/*.js
config/local.js
Check what you have in these files.
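For example, a minimal config/log.js that silences everything (a sketch, assuming the standard file layout a new Sails app ships with):

module.exports.log = {
  // 'silent' turns off all output; other levels include
  // 'error', 'warn', 'debug', 'info' and 'verbose'
  level: 'silent'
};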

How to resolve "Invalid Sequence Token" when using the CloudWatch agent?

I'm seeing the following warning in the /var/log/amazon/amazon-cloudwatch-agent/amazon-cloudwatch-agent.log:
2021-10-06T06:39:23Z W! [outputs.cloudwatchlogs] Invalid SequenceToken used, will use new token and retry: The given sequenceToken is invalid. The next expected sequenceToken is: 49619410836690261519535138406911035003981074860446093650
But there is no mention of which file is actually the one that's failing, not even when I add "debug": true to the /opt/aws/amazon-cloudwatch-agent/bin/config.json:
cat /opt/aws/amazon-cloudwatch-agent/bin/config.json | jq .agent
{
  "metrics_collection_interval": 60,
  "debug": true,
  "run_as_user": "root"
}
I have many (28) files in the .logs.logs_collected.files.collect_list section of my config.json, so how can I find exactly which file is causing trouble?
As of 2021-11-29, a PR to improve the log messages has been merged into the cloudwatch-agent, but a new version has not been released yet; the next version after v1.247349.0 will likely include the fix.
The fix will change the log statements to say:
INFO: "First time sending logs to %v/%v since startup so sequenceToken is nil, learned new token: xxxx: yyyy". This is an INFO message, as this behaviour is expected, for example at startup.
WARN: "Invalid SequenceToken used (%v) while sending logs to %v/%v, will use new token and retry: xxxxxv". This, on the other hand, is not expected and may mean that someone else is writing to the log group/log stream concurrently.
If those warnings come right after a restart of the CloudWatch agent (cwagent), you can safely ignore them; it's expected behaviour. The agent does not save the next sequence token in its persistent state, so on restart it "learns" the correct sequence number by issuing a PutLogEvents call with no sequence token at all; that call fails with an InvalidSequenceTokenException that carries the next sequence token to use. So it's expected to see those at startup. In any case, I proposed a PR to amazon-cloudwatch-agent to improve those log messages.
If the "Invalid SequenceToken used" is seen long after the restart then you may have other issues.
The "Invalid SequenceToken used" error usually means that two entities/sources are trying to write to the same log group/log stream as mentioned in 2 (which is really for the old awslogs agent but still useful):
Caught exception: An error occurred (InvalidSequenceTokenException) when calling the PutLogEvents operation: The given sequenceToken is invalid[…] -or- Multiple agents might be sending log events to log stream[…] – You can't push logs from multiple log files to a single log stream. Update your configuration to push each log to a log stream-log group combination.
It could be that the CloudWatch agent itself is trying to upload the same file twice because you have duplicates in your config.json.
So first print all your log group / log stream pairs in your config.json with:
cat /opt/aws/amazon-cloudwatch-agent/bin/config.json | jq -r '.logs.logs_collected.files.collect_list[] | "\(.log_group_name) \(.log_stream_name)"' | sort
which should give an output similar to:
/tableauserver/apigateway apigateway_node5-0.log
/tableauserver/apigateway control_apigateway_node5-0.log
/tableauserver/appzookeeper appzookeeper-discovery_node5-1.log
...
/tableauserver/vizqlserver vizqlserver_node5-3.log
Then you can use uniq -d to find the duplicates in that list with:
cat /opt/aws/amazon-cloudwatch-agent/bin/config.json | jq -r '.logs.logs_collected.files.collect_list[] | "\(.log_group_name) \(.log_stream_name)"' | sort | uniq -d
# The list should be empty otherwise you have duplicates
If that command produces any output it means that you have duplicates in your config.json collect_list.
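Once you know a duplicated pair, you can list the file paths that map to it (a sketch; GROUP and STREAM are placeholders for the pair reported by uniq -d):

cat /opt/aws/amazon-cloudwatch-agent/bin/config.json | jq -r '.logs.logs_collected.files.collect_list[] | select(.log_group_name == "GROUP" and .log_stream_name == "STREAM") | .file_path'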
I personally think that cwagent itself should print the "offending" log group/log stream in the logs, so I opened an issue on the amazon-cloudwatch-agent GitHub page.

Camel DefaultShutdownStrategy logging full URL

I'm having an issue where I cannot prevent the shutdown strategy in Camel from logging my full URL. This is a problem because the URL has a password in it.
Neither of the .logMask() calls suppress this log line. How can I go about preventing this from being logged?
context.addRoutes(new RouteBuilder() {
    @Override
    public void configure() {
        from(uriString)
            .logMask()
            .process(exchange -> {
                Message in = exchange.getIn();
                // Doing some business logic here
            })
            .toD("direct:someOtherRoute")
            .logMask();
    }
});
The line being logged:
kafka://MY-TOPIC-NAME?saslJaasConfig=passwordThatShouldNotBeLogged&otherParams...
Edit: The full URL is being logged both on startup and shutdown.
Okay, this information is logged by Camel, but we have fixed this for the next releases (2.25.0 onwards, and the 3.x branches).
For your current version you cannot prevent it entirely. However, you can configure the logging level for that shutdown class to WARN.
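For example, with a log4j.properties backend that could look like this (a sketch; it assumes the stock Camel 2.x class name):

# Suppress the INFO lines from the shutdown strategy that contain the URI
log4j.logger.org.apache.camel.impl.DefaultShutdownStrategy=WARN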
You can also patch your Camel version yourself; this is the commit:
https://github.com/apache/camel/commit/f60e4a73935bea211eec38823698d73bd1d0bd62
If your sensitive endpoint parameters are for SSL configuration, you can register the SSL configuration in the Camel Context and only reference it in the endpoint parameters.
kafka://MY-TOPIC-NAME?sslContextParameters=#ssl
See the Camel-Kafka Docs (section SSL Configuration) for details.
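Something along these lines could work (a sketch for Camel 2.x; the truststore path and password are placeholders):

import org.apache.camel.CamelContext;
import org.apache.camel.impl.DefaultCamelContext;
import org.apache.camel.impl.SimpleRegistry;
import org.apache.camel.util.jsse.KeyStoreParameters;
import org.apache.camel.util.jsse.SSLContextParameters;
import org.apache.camel.util.jsse.TrustManagersParameters;

// Build the SSL configuration once, outside any endpoint URI
KeyStoreParameters truststore = new KeyStoreParameters();
truststore.setResource("/path/to/truststore.jks"); // placeholder path
truststore.setPassword("truststorePassword");      // never appears in a URI

TrustManagersParameters trustManagers = new TrustManagersParameters();
trustManagers.setKeyStore(truststore);

SSLContextParameters ssl = new SSLContextParameters();
ssl.setTrustManagers(trustManagers);

// Register it as "ssl" so endpoints can reference it as #ssl
SimpleRegistry registry = new SimpleRegistry();
registry.put("ssl", ssl);
CamelContext context = new DefaultCamelContext(registry);

// The endpoint URI now carries only a bean reference, not the secret:
// kafka://MY-TOPIC-NAME?sslContextParameters=#ssl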

Error while loading items: no deployed process definition found

In the dashlet "My tasks" there are two items: "Current tasks" and "Completed tasks".
When I click on the "Completed tasks" I see the following error on a red background:
Error while loading items
When this error occurs, I see the following in the logs.
catalina.out:
...
Caused by: org.activiti.engine.ActivitiObjectNotFoundException: no deployed process definition found with id 'publishWhitepaper:1:1115'
at org.activiti.engine.impl.persistence.deploy.DeploymentManager.findDeployedProcessDefinitionById(DeploymentManager.java:75)
at org.activiti.engine.impl.cmd.GetDeploymentProcessDefinitionCmd.execute(GetDeploymentProcessDefinitionCmd.java:39)
at org.activiti.engine.impl.cmd.GetDeploymentProcessDefinitionCmd.execute(GetDeploymentProcessDefinitionCmd.java:26)
at org.activiti.engine.impl.interceptor.CommandInvoker.execute(CommandInvoker.java:24)
at org.activiti.engine.impl.interceptor.CommandContextInterceptor.execute(CommandContextInterceptor.java:57)
at org.activiti.spring.SpringTransactionInterceptor$1.doInTransaction(SpringTransactionInterceptor.java:47)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:131)
at org.activiti.spring.SpringTransactionInterceptor.execute(SpringTransactionInterceptor.java:45)
at org.activiti.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:31)
at org.activiti.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:40)
at org.activiti.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:35)
at org.activiti.engine.impl.RepositoryServiceImpl.getDeployedProcessDefinition(RepositoryServiceImpl.java:138)
at org.alfresco.repo.workflow.activiti.ActivitiUtil.getDeployedProcessDefinition(ActivitiUtil.java:133)
at org.alfresco.repo.workflow.activiti.ActivitiTypeConverter.getTaskDefinition(ActivitiTypeConverter.java:223)
at org.alfresco.service.cmr.workflow.LazyActivitiWorkflowTask.<init>(LazyActivitiWorkflowTask.java:93)
at org.alfresco.repo.workflow.activiti.ActivitiWorkflowEngine.getAssignedTasks(ActivitiWorkflowEngine.java:1543)
... 92 more
Before that, I installed and looked at some example business processes, but then deleted them (via the workflow console). Most likely I didn't do it correctly...
I can't understand why this error appears:
no deployed process definition found with id 'publishWhitepaper:1:1115'
Maybe something is cached somewhere?
Axel Faust gave an exhaustive answer in the thread "Is there enough functionality of the workflow admin console?".
Now I understand the cause of the error: as Axel Faust said, "...the tables for historic information do require referential integrity in their relation to the process definition and are not automatically cascade-deleted when you undeploy a process."
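If you want to see the orphaned rows yourself, a query along these lines shows them (a sketch, assuming Activiti's standard table names; the id comes from the error above):

-- Historic task rows still pointing at the undeployed process definition
SELECT ID_, NAME_, PROC_INST_ID_
FROM ACT_HI_TASKINST
WHERE PROC_DEF_ID_ = 'publishWhitepaper:1:1115';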
Thanks to all for the assistance!
Put this configuration in the application.yml of your Spring Boot app. Basically, Activiti couldn't find your .bpmn file; pointing it to the correct location solves the issue:
spring:
  activiti:
    database-schema-update: true
    db-history-used: true
    check-process-definitions: true
    process-definition-location-prefix: file:/opt/try-uploads/
    # process-definition-location-prefix: classpath:/processes/
    process-definition-location-suffixes: '*.bpmn, *.bpmn20.xml'
    history-level: audit

JBoss shows error with datasource on startup

On starting JBoss I am getting the following error:
--- MBEANS THAT ARE THE ROOT CAUSE OF THE PROBLEM ---
ObjectName: jboss.jca:service=DataSourceBinding,name=DefaultDS
State: NOTYETINSTALLED
Depends On Me:
jboss.ejb:service=EJBTimerService,persistencePolicy=database
jboss:service=KeyGeneratorFactory,type=HiLo
jboss.mq:service=StateManager
jboss.mq:service=PersistenceManager
And for all database connections in the servlet I get the following exception:
org.postgresql.util.PSQLException: FATAL: password authentication failed for user "poll"
It was working fine, and all of a sudden I started getting these errors. My password is correct. I even tried changing the password and trying again; it showed the same exception. What is happening here?
The DefaultDS data source is what the name suggests: the default datasource. It ships with JBoss and is configured to use the Hypersonic (i.e. in-memory) database. JBoss uses the DefaultDS datasource to read/write internal queues, timed events, etc.
Check the file ../conf/standardjbosscmp-jdbc.xml to see what you've got configured for the DefaultDS datasource. It sounds like you've edited that file unintentionally. Unless you need to persist internal queues etc. across boots, just leave it as shipped, using Hypersonic.
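For reference, the shipped DefaultDS definition (found in deploy/hsqldb-ds.xml on JBoss 4.x) looks roughly like this; treat it as a sketch, details vary by version:

<datasources>
  <local-tx-datasource>
    <jndi-name>DefaultDS</jndi-name>
    <!-- Hypersonic database used for JBoss-internal services -->
    <connection-url>jdbc:hsqldb:${jboss.server.data.dir}${/}hypersonic${/}localDB</connection-url>
    <driver-class>org.hsqldb.jdbcDriver</driver-class>
    <user-name>sa</user-name>
    <password></password>
  </local-tx-datasource>
</datasources>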
See the JBoss doc for more.

Can Perl's Log::Log4perl's log levels be changed dynamically without updating config?

I have a Mason template running under mod_perl, which is using Log::Log4perl.
I want to change the log level of a particular appender, but changing the config is too awkward, as it would have to pass through our deployment process to go live.
Is there a way to change the log level of an appender at run-time, after Apache has started, without changing the config file, and then have that change affect any new Apache threads?
If you've imported the log level constants from Log::Log4perl::Level, then you can do things like:
$logger->level($ERROR);        # one of DEBUG, INFO, WARN, ERROR, FATAL
$logger->more_logging($delta); # increase the log level by $delta levels ($delta is a positive integer)
$logger->less_logging($delta); # decrease the log level by $delta levels
This is in the "Changing the Log Level on a Logger" section of the Log::Log4perl docs.
It seems kinda hacky to me, but it works:
$Log::Log4perl::Logger::APPENDER_BY_NAME{SCREEN}->threshold($DEBUG);
And to make it more dynamic, you could pass in a variable for the Appender name and level.
my %LOG4PERL_LEVELS = (
    OFF   => $OFF,
    FATAL => $FATAL,
    ERROR => $ERROR,
    WARN  => $WARN,
    INFO  => $INFO,
    DEBUG => $DEBUG,
    TRACE => $TRACE,
    ALL   => $ALL,
);
$Log::Log4perl::Logger::APPENDER_BY_NAME{$appender_name}->threshold($LOG4PERL_LEVELS{$new_level});
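Wrapped up as a helper, that could look like this (a sketch; set_appender_threshold is a hypothetical name, and %LOG4PERL_LEVELS is the hash defined above):

use Log::Log4perl;
use Log::Log4perl::Level;

# Hypothetical helper: change an appender's threshold by name at run-time
sub set_appender_threshold {
    my ($appender_name, $new_level) = @_;
    my $appender = $Log::Log4perl::Logger::APPENDER_BY_NAME{$appender_name}
        or die "No appender named '$appender_name'";
    $appender->threshold($LOG4PERL_LEVELS{uc $new_level});
}

set_appender_threshold('SCREEN', 'debug');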