Kubernetes audit log showing 404 Not Found on event

I'm seeing the following log entry continuously in the Kubernetes audit log file.
Can anyone explain what this error is and what causes it?
{
"kind":"Event",
"apiVersion":"audit.k8s.io/v1beta1",
"metadata":{"creationTimestamp":"2018-08-29T06:59:04Z"},
"level":"Request",
"timestamp":"2018-08-29T06:59:04Z",
"auditID":"97187fc8-76c1-42f0-9435-c11928b6ec49",
"stage":"ResponseComplete",
"requestURI":"/apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations",
"verb":"list",
"user":{"username":"system:apiserver","uid":"44rrd678-859a-4f663-bt79-23bar678uj66","groups":["system:masters"]},
"sourceIPs":["X.X.X.X"],
"objectRef":{"resource":"initializerconfigurations","apiGroup":"admissionregistration.k8s.io","apiVersion":"v1alpha1"},
"responseStatus":{"metadata":{},"status":"Failure","reason":"NotFound","code":404},
"requestReceivedTimestamp":"2018-08-29T06:59:04.350346Z",
"stageTimestamp":"2018-08-29T06:59:04.350425Z",
"annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}
}

Related

SAM Deployment failed Error- Waiter StackCreateComplete failed: Waiter encountered a terminal failure state

When I try to deploy a package with SAM, the very first status that appears in the CloudFormation console is ROLLBACK_IN_PROGRESS, and after that it changes to ROLLBACK_COMPLETE.
I have tried deleting the stack and trying again, but the same issue occurs every time.
The error in the terminal looks like this:
Sourcing local options from ./SAMToolkit.devenv
SAM_PARAM_PKG environment variable not set
SAMToolkit will operate in legacy mode.
Please set SAM_PARAM_PKG in your .devenv file to run modern packaging.
Run 'sam help package' for more information
Runtime: java
Attempting to assume role from AWS Identity Broker using account 634668058279
Assumed role from AWS Identity Broker successfully.
Deploying stack sam-dev* from template: /home/***/1.0/runtime/sam/template.yml
sam-additional-artifacts-url.txt was not found, which is fine if there is no additional artifacts uploaded
Replacing BATS::SAM placeholders in template...
Uploading template build/private/tmp/sam-toolkit.yml to s3://***/sam-toolkit.yml
make_bucket failed: s3://sam-dev* An error occurred (BucketAlreadyOwnedByYou) when calling the CreateBucket operation: Your previous request to create the named bucket succeeded and you already own it.
upload: build/private/tmp/sam-toolkit.yml to s3://sam-dev*/sam-toolkit.yml
An error occurred (ValidationError) when calling the DescribeStacks operation: Stack with id sam-dev* does not exist
sam-dev* will be created.
Creating ChangeSet ChangeSet-2020-01-20T12-25-56Z
Deploying stack sam-dev*. Follow in console: https://aws-identity-broker.amazon.com/federation/634668058279/CloudFormation
ChangeSet ChangeSet-2020-01-20T12-25-56Z in sam-dev* succeeded
"StackStatus": "REVIEW_IN_PROGRESS",
sam-dev* reached REVIEW_IN_PROGRESS
Deploying stack sam-dev*. Follow in console: https://console.aws.amazon.com/cloudformation/home?region=us-west-2
Waiting for stack-create-complete
Waiter StackCreateComplete failed: Waiter encountered a terminal failure state
Command failed.
Please see the logs above.
I had set SQS as the event source for the Lambda, but didn't provide permissions like this:
- Effect: Allow
  Action:
    - sqs:ReceiveMessage
    - sqs:DeleteMessage
    - sqs:GetQueueAttributes
  Resource: "*"
in the Lambda's policies.
I found this error in the "Events" tab of the CloudFormation service.
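For reference, a minimal sketch of where those permissions can sit in the SAM template; the resource names (MyQueue, MyFunction), handler, runtime, and artifact path below are hypothetical placeholders, not taken from the failing stack:

Resources:
  MyQueue:
    Type: AWS::SQS::Queue

  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: com.example.App::handleRequest   # placeholder handler
      Runtime: java8                            # placeholder runtime
      CodeUri: build/distributions/app.zip      # placeholder artifact
      Events:
        SqsTrigger:
          Type: SQS
          Properties:
            Queue: !GetAtt MyQueue.Arn
            BatchSize: 10
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - sqs:ReceiveMessage
                - sqs:DeleteMessage
                - sqs:GetQueueAttributes
              Resource: !GetAtt MyQueue.Arn     # the answer used Resource: "*"; scoping to the queue ARN also works

SAM also ships an SQSPollerPolicy policy template that grants the same three actions and can be used in Policies instead of the inline statement.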

Splunk with ECS

I am having a problem configuring Splunk to send logs on an ECS cluster.
From the event tab of the service, this error was shown:
Problem Statement: unable to place a task because no container instance met all of its requirements. The closest matching container-instance xxxx is missing an attribute required by your task.
After doing a deep dive I found that I have to update /etc/ecs/ecs.config and add the entry ECS_AVAILABLE_LOGGING_DRIVERS='["splunk","awslogs"]'.
But this didn't help; I'm still getting the same error.
Can anyone please help?
If you are looking to send container logs to Splunk, you need to have a logConfiguration section in the task definition JSON with all the required details, like below:
{
  "logConfiguration": {
    "logDriver": "splunk",
    "options": {
      "splunk-token": "",
      "splunk-url": "",
      ...
    }
  }
}
AWS task definition parameters
splunk logging options
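Note that the ECS_AVAILABLE_LOGGING_DRIVERS entry mentioned in the question only takes effect after the ECS agent on each container instance is restarted, so the instance re-registers with the splunk logging attribute the task requires. A rough sketch of the steps (the restart command depends on which ECS-optimized AMI you run):

# /etc/ecs/ecs.config on every container instance in the cluster
ECS_AVAILABLE_LOGGING_DRIVERS=["splunk","awslogs"]

# restart the ECS agent so the new attribute is registered
sudo systemctl restart ecs             # Amazon Linux 2 ECS-optimized AMI
# or: sudo stop ecs && sudo start ecs  # Amazon Linux 1 ECS-optimized AMI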

Google cloud sql instance unknown error

I have a Cloud SQL instance that restarted by itself for no reason. In any case, the restart failed with the following error:
2018-02-08 16:33:22.552 CST
+ exec /usr/sbin/mysqld --defaults-file=/mysql/my.cnf
{
  insertId: "s=1eb5f90cdd6e4332b0bfd1260e067581;i=21ee;b=4ff35c4064f348848019b0498c04fcfd;m=50ef121;t=564baffd724ea;x=3528d562989af59-0#b1a"
  logName: "projects/xxxxxxx/logs/cloudsql.googleapis.com%2Fmysql.err"
  receiveTimestamp: "2018-02-08T22:33:31.058969560Z"
  resource: {
    labels: {
      database_id: "xxxxxx:yyyyyyyy"
      project_id: "yyyyyyy"
      region: "us-central"
    }
    type: "cloudsql_database"
  }
  severity: "ERROR"
  textPayload: "+ exec /usr/sbin/mysqld --defaults-file=/mysql/my.cnf"
  timestamp: "2018-02-08T22:33:22.552734Z"
}
Looking at the Cloud SQL instance console, all action links were greyed out and my instance was showing a yellow warning sign. The operations and logs on the console displayed:
Feb 8, 2018, 3:50:48 PM Restart An unknown error occurred.
Clicking on the Users and Database tabs, I got this:
Users/Database cannot be loaded from MySQL at this time. Make sure your instance is runnable.
I am unable to restart the instance via the console or the gcloud CLI:
$ gcloud sql instances restart xxxxxxxx
The instance will shut down and start up again immediately if its
activation policy is "always." If "on demand," the instance will start
up again when a new connection request is made.
Do you want to continue (Y/n)? y
ERROR: (gcloud.sql.instances.restart) HTTPError 409: The instance or operation is not in an appropriate state to handle the request.
Querying via IP using MySQL Workbench still works, but my Firebase Cloud Function is not able to access the MySQL DB via the socket path.
Not sure what to do to get my instance back; shall I just create another instance and try to restore?
With reference to the Google Issue Tracker:
It has been fixed.
If any issue persists, please report it at the Google Issue Tracker and they will re-open it to examine.
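Regarding the fallback of creating another instance and restoring into it: a rough gcloud sketch, with the backup ID and instance names as placeholders (flag spellings may differ by SDK version, see gcloud sql backups restore --help):

# list backups taken from the broken instance
gcloud sql backups list --instance=xxxxxxxx

# restore one of them into a new instance
gcloud sql backups restore BACKUP_ID --restore-instance=new-instance --backup-instance=xxxxxxxx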

Error while loading items: no deployed process definition found

In the dashlet "My tasks" there are two items: "Current tasks" and "Completed tasks".
When I click on the "Completed tasks" I see the following error on a red background:
Error while loading items
When this error occurs, I see the following in the logs.
catalina.out:
...
Caused by: org.activiti.engine.ActivitiObjectNotFoundException: no deployed process definition found with id 'publishWhitepaper:1:1115'
at org.activiti.engine.impl.persistence.deploy.DeploymentManager.findDeployedProcessDefinitionById(DeploymentManager.java:75)
at org.activiti.engine.impl.cmd.GetDeploymentProcessDefinitionCmd.execute(GetDeploymentProcessDefinitionCmd.java:39)
at org.activiti.engine.impl.cmd.GetDeploymentProcessDefinitionCmd.execute(GetDeploymentProcessDefinitionCmd.java:26)
at org.activiti.engine.impl.interceptor.CommandInvoker.execute(CommandInvoker.java:24)
at org.activiti.engine.impl.interceptor.CommandContextInterceptor.execute(CommandContextInterceptor.java:57)
at org.activiti.spring.SpringTransactionInterceptor$1.doInTransaction(SpringTransactionInterceptor.java:47)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:131)
at org.activiti.spring.SpringTransactionInterceptor.execute(SpringTransactionInterceptor.java:45)
at org.activiti.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:31)
at org.activiti.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:40)
at org.activiti.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:35)
at org.activiti.engine.impl.RepositoryServiceImpl.getDeployedProcessDefinition(RepositoryServiceImpl.java:138)
at org.alfresco.repo.workflow.activiti.ActivitiUtil.getDeployedProcessDefinition(ActivitiUtil.java:133)
at org.alfresco.repo.workflow.activiti.ActivitiTypeConverter.getTaskDefinition(ActivitiTypeConverter.java:223)
at org.alfresco.service.cmr.workflow.LazyActivitiWorkflowTask.<init>(LazyActivitiWorkflowTask.java:93)
at org.alfresco.repo.workflow.activiti.ActivitiWorkflowEngine.getAssignedTasks(ActivitiWorkflowEngine.java:1543)
... 92 more
Before that, I installed and looked at some example business processes, but then deleted them (also via the workflow console); most likely, I didn't do it correctly...
I can't understand why this error appears:
no deployed process definition found with id 'publishWhitepaper:1:1115'
Maybe something is cached somewhere?
Axel Faust gave an exhaustive answer:
Is there enough functionality of the workflow admin console?
Now I understand the cause of the error: as Axel Faust said, "..the tables for historic information do require referential integrity in their relation to the process definition and are not automatically cascade-deleted when you undeploy a process."
Thanks to all for the assistance!
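To see which historic rows still reference the undeployed definition (and therefore trigger the failing lookup), a hedged SQL sketch against the standard Activiti history tables; verify the table names in your own schema, and treat any manual delete as a last resort:

-- historic process instances and tasks that still point at the deleted definition
SELECT ID_, PROC_DEF_ID_ FROM ACT_HI_PROCINST WHERE PROC_DEF_ID_ = 'publishWhitepaper:1:1115';
SELECT ID_, PROC_DEF_ID_ FROM ACT_HI_TASKINST WHERE PROC_DEF_ID_ = 'publishWhitepaper:1:1115';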
Put this configuration in the application.yml of your Spring Boot application. Basically, it couldn't find your .bpmn file; pointing it to the correct location should solve the issue:
spring:
  activiti:
    database-schema-update: true
    db-history-used: true
    check-process-definitions: true
    process-definition-location-prefix: file:/opt/try-uploads/
    # process-definition-location-prefix: classpath:/processes/
    process-definition-location-suffixes: '*.bpmn, *.bpmn20.xml'
    history-level: audit

Message hub service going down everyday

I am using the Bluemix Message Hub service in my Node app, which is in production. Now we have run into an issue where the Message Hub service goes down every day, and then the app needs to be restarted. This can't continue.
We are getting the following logs:
2016-10-16T17:41:42.66+0100 [App/0] OUT Unable to consume topic: Error: Request returned
status code 404 but it was not in the accepted list. The REST API responded with the
following message: Consumer instance not found.
2016-10-16T17:41:46.66+0100 [App/0] OUT got error: { [Error: Request returned status code
404 but it was not in the accepted list. The REST API responded with the following message:
Consumer instance not found.] statusCode: 404, errorCode: 40403 }
Is there any way we can handle this? It is failing here:
run: function(callback) {
  var that = this;
  consumerInstance.get(topic)
    .then(function(data) {
      that.consume(data);
      return callback();
    })
    .fail(function(error) {
      console.log("got error: ", error);
      return callback(error);
    })
}
This is the code we are using for reference:
https://github.com/ibm-cds-labs/Spark-Twitter-Watson-Dashboard/blob/master/server/messageHubBridge.js?s_tact=C43301PW
Any thoughts on how to resolve this issue?
Thanks,
Harish.
Hi, the REST endpoint for Message Hub gets recycled every 24 hours.
Clients are expected to handle this by creating a new consumer instance.
HTH,
Edo
As per the Message Hub documentation, the REST service restarts daily. After the REST API has restarted, you will have to recreate your Kafka consumer instances.
Thanks,
Simon.
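Putting both answers together, a hedged sketch of how the run() above could recover from the daily restart: on the 404 / errorCode 40403 shown in the logs, recreate the consumer instance and retry. recreateConsumerInstance() is a hypothetical helper standing in for whatever code created consumerInstance in the first place:

run: function(callback) {
  var that = this;
  consumerInstance.get(topic)
    .then(function(data) {
      that.consume(data);
      return callback();
    })
    .fail(function(error) {
      // the daily REST restart surfaces as a 404 "Consumer instance not found"
      if (error.statusCode === 404) {
        // hypothetical helper: repeats the original consumer-instance setup
        return recreateConsumerInstance()
          .then(function(newInstance) {
            consumerInstance = newInstance;
            return that.run(callback);   // retry with the fresh consumer
          })
          .fail(callback);
      }
      console.log("got error: ", error);
      return callback(error);
    });
}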