Loki logs: storing logs in a GCS bucket - Grafana

I am trying to configure storage in Loki and have configured a GCS bucket.
But when I try to view the Loki logs, I am getting a 403 error as follows:
2022-10-21 10:43:52 level=error ts=2022-10-21T05:13:52.864742722Z caller=flush.go:146 org_id=fake msg="failed to flush user" err="store put chunk: googleapi: Error 403: Access denied., forbidden"
What might be the reason?
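
For context, a 403 on "store put chunk" usually means the identity Loki runs as is not allowed to write objects to the bucket, e.g. the service account is missing roles/storage.objectAdmin (or at least storage.objects.create) on it. A minimal sketch to test this, assuming a placeholder bucket name my-loki-chunks and that it is run with the same credentials the Loki pods use (GOOGLE_APPLICATION_CREDENTIALS key file or Workload Identity):

# Minimal sketch: verify that the credentials Loki runs with can write objects
# to the chunk bucket. "my-loki-chunks" is a placeholder bucket name.
from google.cloud import storage

def check_write_access(bucket_name: str) -> None:
    client = storage.Client()  # picks up GOOGLE_APPLICATION_CREDENTIALS / Workload Identity
    bucket = client.bucket(bucket_name)
    blob = bucket.blob("loki-access-check.tmp")
    blob.upload_from_string(b"ok")  # requires storage.objects.create
    blob.delete()                   # requires storage.objects.delete
    print(f"write access to gs://{bucket_name} looks fine")

if __name__ == "__main__":
    check_write_access("my-loki-chunks")

If this upload fails with the same 403, the problem is the IAM binding on the bucket rather than the Loki configuration itself.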

Related

Azure AKS fluxconfig-agent 401 causing unhealthy

I have an AKS environment based on the AKS-Construction templates.
At some point fluxconfig-agent started reporting unhealthy. I checked the logs, and it looks like there is a 401 when it tries to fetch config from https://eastus.dp.kubernetesconfiguration.azure.com:
{"Message":"2022/10/03 17:09:01 URL:\u003e https://eastus.dp.kubernetesconfiguration.azure.com/subscriptions/xxx/resourceGroups/my-aks/provider/Microsoft.ContainerService-managedclusters/clusters/my-aks/configurations/getPendingConfigs?api-version=2021-11-01","LogType":"ConfigAgentTrace","LogLevel":"Information","Environment":"prod","Role":"ClusterConfigAgent","Location":"eastus","ArmId":"/subscriptions/xxx/resourceGroups/my-aks/providers/Microsoft.ContainerService/managedclusters/my-aks","CorrelationId":"","AgentName":"FluxConfigAgent","AgentVersion":"1.6.0","AgentTimestamp":"2022/10/03 17:09:01"}
{"Message":"2022/10/03 17:09:01 GET configurations returned response code {401}","LogType":"ConfigAgentTrace","LogLevel":"Information","Environment":"prod","Role":"ClusterConfigAgent","Location":"eastus","ArmId":"/subscriptions/xxx/resourceGroups/my-aks/providers/Microsoft.ContainerService/managedclusters/my-aks","CorrelationId":"","AgentName":"FluxConfigAgent","AgentVersion":"1.6.0","AgentTimestamp":"2022/10/03 17:09:01"}
{"Message":"2022/10/03 17:09:01 Failed to GET configurations with ResponseCode : {401}","LogType":"ConfigAgentTrace","LogLevel":"Information","Environment":"prod","Role":"ClusterConfigAgent","Location":"eastus","ArmId":"/subscriptions/xxx/resourceGroups/my-aks/providers/Microsoft.ContainerService/managedclusters/my-aks","CorrelationId":"","AgentName":"FluxConfigAgent","AgentVersion":"1.6.0","AgentTimestamp":"2022/10/03 17:09:01"}
{"Message":"Error in the getting the Configurations: error {%!s(\u003cnil\u003e)}","LogType":"ConfigAgentTrace","LogLevel":"Error","Environment":"prod","Role":"ClusterConfigAgent","Location":"eastus","ArmId":"/subscriptions/xxx/resourceGroups/my-aks/providers/Microsoft.ContainerService/managedclusters/my-aks","CorrelationId":"","AgentName":"FluxConfigAgent","AgentVersion":"1.6.0","AgentTimestamp":"2022/10/03 17:09:01"}
{"Message":"2022/10/03 17:09:01 \"Errorcode: 401, Message Unauthorized client credentials., Target /subscriptions/xxx/resourceGroups/my-aks/provider/Microsoft.ContainerService-managedclusters/clusters/my-aks/configurations/getPendingConfigs\"","LogType":"ConfigAgentTrace","LogLevel":"Information","Environment":"prod","Role":"ClusterConfigAgent","Location":"eastus","ArmId":"/subscriptions/xxx/resourceGroups/my-aks/providers/Microsoft.ContainerService/managedclusters/my-aks","CorrelationId":"","AgentName":"FluxConfigAgent","AgentVersion":"1.6.0","AgentTimestamp":"2022/10/03 17:09:01"}
Is anyone here familiar with how fluxconfig-agent authenticates and what might cause a 401 here?
It seems to have gone away for now after upgrading my AKS cluster and nodes to the latest Kubernetes version.

OpenSearch error "Rejecting request because cold storage is not enabled on the domain"

I have an AWS OpenSearch domain with UltraWarm/cold storage disabled.
My application/error log is spammed (new entries every 2-5 minutes) with the following error:
[2022-09-05T09:29:13,909][ERROR][o.o.b.OpenSearchUncaughtExceptionHandler] [eaab83ab8ab8fd53a42b42e694708350] uncaught exception in thread [DefaultDispatcher-worker-1]
OpenSearchStatusException[Rejecting request because cold storage is not enabled on the domain. Enabling cold storage for the first time can take several hours. Please try again later.]
__AMAZON_INTERNAL__
__AMAZON_INTERNAL__
__AMAZON_INTERNAL__
at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:195)
at org.opensearch.indexmanagement.rollup.actionfilter.FieldCapsFilter.apply(FieldCapsFilter.kt:120)
at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:193)
at org.opensearch.performanceanalyzer.action.PerformanceAnalyzerActionFilter.apply(PerformanceAnalyzerActionFilter.java:99)
at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:193)
at org.opensearch.security.filter.SecurityFilter.apply0(SecurityFilter.java:266)
at org.opensearch.security.filter.SecurityFilter.apply(SecurityFilter.java:154)
at org.opensearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:193)
at org.opensearch.action.support.TransportAction.execute(TransportAction.java:170)
at org.opensearch.action.support.TransportAction.execute(TransportAction.java:98)
at org.opensearch.client.node.NodeClient.executeLocally(NodeClient.java:108)
at org.opensearch.client.node.NodeClient.doExecute(NodeClient.java:95)
at org.opensearch.client.support.AbstractClient.execute(AbstractClient.java:433)
at org.opensearch.indexmanagement.kraken.ColdIndexMetadataService$getMetadata$response$1.invoke(ColdIndexMetadataService.kt:31)
at org.opensearch.indexmanagement.kraken.ColdIndexMetadataService$getMetadata$response$1.invoke(ColdIndexMetadataService.kt:17)
at org.opensearch.indexmanagement.kraken.OpenSearchExtensionsKt.suspendUntil(OpenSearchExtensions.kt:30)
at org.opensearch.indexmanagement.kraken.ColdIndexMetadataService.getMetadata(ColdIndexMetadataService.kt:31)
at org.opensearch.indexmanagement.indexstatemanagement.IndexMetadataProvider.getISMIndexMetadataByType(IndexMetadataProvider.kt:46)
at org.opensearch.indexmanagement.indexstatemanagement.IndexMetadataProvider$getMultiTypeISMIndexMetadata$2$invokeSuspend$$inlined$forEach$lambda$1.invokeSuspend(IndexMetadataProvider.kt:66)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(DispatchedTask.kt:56)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:571)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.executeTask(CoroutineScheduler.kt:738)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.runWorker(CoroutineScheduler.kt:678)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:665)
I've tried stopping all applications that interact with OpenSearch and tried to delete my ISM policy, but these errors keep filling the logs.
Any ideas?
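
For context, the stack trace comes from the Index State Management plugin probing cold index metadata, so it is worth checking whether any remaining ISM policy or managed index still references warm/cold actions. A rough sketch against the OpenSearch REST API, with placeholder endpoint and credentials, assuming the current _plugins paths (older releases used _opendistro/_ism instead):

# Rough sketch: dump the ISM policies and the managed-index status to see what
# is still driving cold-storage lookups. Endpoint and credentials are placeholders.
import requests

ENDPOINT = "https://my-domain.eu-west-1.es.amazonaws.com"  # placeholder
AUTH = ("admin", "admin")                                   # placeholder

def dump(path: str) -> None:
    resp = requests.get(f"{ENDPOINT}{path}", auth=AUTH, timeout=30)
    resp.raise_for_status()
    print(path, "->", resp.json())

if __name__ == "__main__":
    dump("/_plugins/_ism/policies")     # all ISM policies still defined
    dump("/_plugins/_ism/explain/*")    # status of every index ISM still manages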

EC2 metadata upgrade from IMDSv1 to IMDSv2 causes 403 and 401 errors - kube2iam

I recently updated my EC2 instances to use IMDSv2 but had to roll back because of the following issue:
After the upgrade my init containers started failing and I saw the following in the logs:
time="2022-01-11T14:25:01Z" level=info msg="PUT /latest/api/token (403) took 0.753220 ms" req.method=PUT req.path=/latest/api/token req.remote=XXXXX res.duration=0.75322 res.status=403
time="2022-01-11T14:25:37Z" level=error msg="Error getting instance id, got status: 401 Unauthorized"
We are using kube2iam for this. Any advice on what changes need to be made on the kube2iam side to support IMDSv2? Below is some info from my kube2iam DaemonSet:
EKS = 1.21
image = "jtblin/kube2iam:0.10.9"
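
For context, two separate failures show up in that log: callers' PUT /latest/api/token requests are answered with 403 by the metadata proxy, and kube2iam itself gets a 401 because it still queries IMDS without a session token once IMDSv2 is required. While testing a newer kube2iam version, the IMDS settings can be flipped per instance through the EC2 API instead of rolling back the whole fleet; a hedged boto3 sketch, with placeholder instance ID and region (HttpPutResponseHopLimit=2 also matters once tokens are required, since containers sit one network hop behind the host):

# Hedged sketch: adjust the instance metadata options on a single instance.
# Instance ID and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",  # placeholder
    HttpEndpoint="enabled",
    HttpTokens="optional",             # temporary rollback: allow IMDSv1-style calls again
    HttpPutResponseHopLimit=2,         # keep >= 2 so containerized callers can get a token
)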

How to remove unwanted characters from Fluentd logs

Currently I am sending my Kubernetes logs to CloudWatch using Fluentd, but when I check the logs in CloudWatch, they contain extra Unicode characters. I tried different approaches and regexps to solve this, but with no luck. Here is a sample of how my log appears in CloudWatch:
Log in Cloudwatch: "log": "\u001b[2m2021-10-13 20:07:10.351\u001b[0;39m \u001b[32m INFO\u001b[0;39m \u001b[35m1\u001b[0;39m \u001b[2m---\u001b[0;39m \u001b[2m[trap-executor-0]\u001b[0;39m \u001b[36mc.n.d.s.r.aws.ConfigClusterResolver \u001b[0;39m \u001b[2m:\u001b[0;39m Resolving eureka endpoints via configuration\n"
Actual log : 2021-10-13 20:07:10.351 INFO 1 --- [trap-executor-0] c.n.d.s.r.aws.ConfigClusterResolver : Resolving eureka endpoints via configuration
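
For context, the \u001b[...m sequences are ANSI color codes emitted by the application's console appender (the sample looks like Spring Boot output), so the options are to disable colors at the source or to strip the escape codes somewhere in the pipeline. A small Python sketch of the regex such a filter would need, using the sample line above:

# Sketch: strip ANSI escape sequences (ESC[...m) from a log line. The same
# pattern can then be applied wherever the logs are cleaned up in the pipeline.
import re

ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")

raw = ("\x1b[2m2021-10-13 20:07:10.351\x1b[0;39m \x1b[32m INFO\x1b[0;39m "
       "\x1b[35m1\x1b[0;39m \x1b[2m---\x1b[0;39m \x1b[2m[trap-executor-0]\x1b[0;39m "
       "\x1b[36mc.n.d.s.r.aws.ConfigClusterResolver \x1b[0;39m \x1b[2m:\x1b[0;39m "
       "Resolving eureka endpoints via configuration")

print(ANSI_ESCAPE.sub("", raw))
# -> 2021-10-13 20:07:10.351  INFO 1 --- [trap-executor-0] c.n.d.s.r.aws.ConfigClusterResolver  : Resolving eureka endpoints via configuration

If the application really is Spring Boot, setting spring.output.ansi.enabled=never turns the colors off at the source and avoids the need for any stripping.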

Kubernetes audit log showing 404 not found on event

I'm seeing the following log continuously in the Kubernetes audit log file.
Can anyone explain what this error is and what causes it?
{
"kind":"Event",
"apiVersion":"audit.k8s.io/v1beta1",
"metadata":{"creationTimestamp":"2018-08-29T06:59:04Z"},
"level":"Request",
"timestamp":"2018-08-29T06:59:04Z",
"auditID":"97187fc8-76c1-42f0-9435-c11928b6ec49",
"stage":"ResponseComplete",
"requestURI":"/apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations",
"verb":"list",
"user":{"username":"system:apiserver","uid":"44rrd678-859a-4f663-bt79-23bar678uj66","groups":["system:masters"]},
"sourceIPs":["X.X.X.X"],
"objectRef":{"resource":"initializerconfigurations","apiGroup":"admissionregistration.k8s.io","apiVersion":"v1alpha1"},
"responseStatus":{"metadata":{},"status":"Failure","reason":"NotFound","code":404},
"requestReceivedTimestamp":"2018-08-29T06:59:04.350346Z",
"stageTimestamp":"2018-08-29T06:59:04.350425Z",
"annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}
}
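
For context, that event is the API server itself (user system:apiserver) listing initializerconfigurations from the alpha admissionregistration.k8s.io/v1alpha1 group; the 404 most likely just means that alpha API is not served on the cluster, so this is control-plane noise rather than a failing client. A small sketch, assuming a placeholder audit.log file with one JSON event per line, to confirm that all of the 404s come from this single request:

# Sketch: group 404 audit events by user and requestURI to confirm they all
# come from the apiserver's own initializerconfigurations list call.
# "audit.log" is a placeholder path to a JSON-lines audit log.
import json
from collections import Counter

counts = Counter()
with open("audit.log") as f:
    for line in f:
        event = json.loads(line)
        if event.get("responseStatus", {}).get("code") == 404:
            user = event.get("user", {}).get("username", "?")
            counts[(user, event.get("requestURI", "?"))] += 1

for (user, uri), n in counts.most_common():
    print(f"{n:6d}  {user}  {uri}")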