Tomcat 9 throwing - org.apache.catalina.webresources.Cache.getResource Unable to add the resource

02-Feb-2022 16:35:23.846 WARNING [http-nio-9020-exec-442] org.apache.catalina.webresources.Cache.getResource Unable to add the resource at [/] to the cache for web application [] because there was insufficient free space available after evicting expired cache entries - consider increasing the maximum size of the cache
02-Feb-2022 16:35:23.870 WARNING [http-nio-9020-exec-442] org.apache.catalina.webresources.Cache.getResource Unable to add the resource at [/] to the cache for web application [] because there was insufficient free space available after evicting expired cache entries - consider increasing the maximum size of the cache
02-Feb-2022 16:35:23.975 WARNING [http-nio-9020-exec-442] org.apache.catalina.webresources.Cache.getResource Unable to add the resource at [/] to the cache for web application [] because there was insufficient free space available after evicting expired cache entries - consider increasing the maximum size of the cache
02-Feb-2022 16:35:27.468 WARNING [mysql-cj-abandoned-connection-cleanup] org.apache.catalina.webresources.Cache.getResource Unable to add the resource at [/WEB-INF/classes/] to the cache for web application [] because there was insufficient free space available after evicting expired cache entries - consider increasing the maximum size of the cache
02-Feb-2022 16:35:27.518 WARNING [http-nio-9020-exec-452] org.apache.catalina.webresources.Cache.getResource Unable to add the resource at [/audio-cassettes-to-cd/index.html] to the cache for web application [] because there was insufficient free space available after evicting expired cache entries - consider increasing the maximum size of the cache
02-Feb-2022 16:35:27.518 WARNING [http-nio-9020-exec-452] org.apache.catalina.webresources.Cache.getResource Unable to add the resource at [/audio-cassettes-to-cd/index.htm] to the cache for web application [] because there was insufficient free space available after evicting expired cache entries - consider increasing the maximum size of the cache
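The warning itself points to the fix: Tomcat's web resource cache (10240 KiB, i.e. 10 MB, by default) is too small for this application. A minimal sketch of raising it, assuming a default Tomcat layout where conf/context.xml applies to every web application (40960 is an illustrative value to tune, not a verified size for this deployment):

<!-- conf/context.xml: cacheMaxSize is in kibibytes; this quadruples the 10240 default -->
<Context>
    <Resources cachingAllowed="true" cacheMaxSize="40960" />
</Context>

Tomcat must be restarted for the change to take effect; if the warnings persist, keep increasing the value until the application's resources fit.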

Related

containerd - cannot update memory of a running container lower than its current memory

I am using the crictl tool to work with containerd runtime containers (under Kubernetes) in a managed cluster.
I'm trying to set the memory limit (in bytes) to 16MB with the command:
crictl -r unix:///run/containerd/containerd.sock update --memory 16777216 c60df9ef3381e
And get the following error:
E1219 11:10:11.616194 1241 remote_runtime.go:640] "UpdateContainerResources from runtime service failed" err=<
rpc error: code = Unknown desc = failed to update resources: failed to update resources: /usr/bin/runc did not terminate successfully: exit status 1: unable to set memory limit to 16777216 (current usage: 97058816, peak usage: 126517248)
: unknown
> containerID="c60df9ef3381e"
FATA[0000] updating container resources for "c60df9ef3381e": rpc error: code = Unknown desc = failed to update resources: failed to update resources: /usr/bin/runc did not terminate successfully: exit status 1: unable to set memory limit to 16777216 (current usage: 97058816, peak usage: 126517248)
: unknown
At first I thought that maybe I cannot directly set a memory limit on a running container lower than the limit that appears in the Kubernetes YAML.
Here are the limits from K8s:
Requests:{"cpu":"100m","memory":"64Mi"} Limits:{"cpu":"200m","memory":"128Mi"}
But no: even setting a memory limit above the K8s request (e.g. 65MB) gives this same error!
This works on the Docker runtime - I'm able to limit the memory of the container. Yes, it might crash, but the operation works.
Then I tried to set a memory limit higher than the current usage, and it succeeded...
Can anyone help me understand this error and what might be causing it on the containerd runtime? Is this indeed a limitation, i.e. that I cannot set a limit lower than the memory the container currently uses? Is there a way to overcome it?
Thanks a lot for your time!
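For reference, the runc error itself names the blocker: the requested limit (16777216) is below the container's current usage (97058816), and the memory cgroup cannot be shrunk below what is already in use if the kernel cannot reclaim enough. A hedged sketch of the workaround, checking usage before updating (134217728, i.e. 128MB, is an illustrative value above the reported usage, not a recommendation):

# Inspect the container's current memory usage first
crictl -r unix:///run/containerd/containerd.sock stats c60df9ef3381e
# Then set a limit above the reported usage
crictl -r unix:///run/containerd/containerd.sock update --memory 134217728 c60df9ef3381e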

How to optimize Keycloak configuration

I have deployed Keycloak on Kubernetes and am having a performance issue with it, as follows:
I run 6 Keycloak pods in standalone-HA mode using KUBE_PING on Kubernetes, with an HPA that auto-scales when CPU exceeds 80%. When I load-test logins against Keycloak, the threshold is only 150 concurrent users (CCU); above that threshold, errors occur. The Keycloak pod logs show timeouts as below:
ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (timeout-thread--p16-t1) ISPN000136: Error executing command RemoveCommand on Cache 'authenticationSessions', writing keys [f85ac151-6196-48e9-977c-048fc8bcd975]: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 2312955 from keycloak-9b6486f7-bgw8s"
ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (timeout-thread--p16-t1) ISPN000136: Error executing command ReplaceCommand on Cache 'loginFailures', writing keys [LoginFailureKey [ realmId=etc-internal. userId=a76c3578-32fa-42cb-88d7-fcfdccc5f5c6 ]]: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 2201111 from keycloak-9b6486f7-bgw8s"
ERROR [org.infinispan.interceptors.impl.InvocationContextInterceptor] (timeout-thread--p20-t1) ISPN000136: Error executing command PutKeyValueCommand on Cache 'sessions', writing keys [6a0e8cde-82b7-4824-8082-109ccfc984b4]: org.infinispan.util.concurrent.TimeoutException: ISPN000476: Timed out waiting for responses for request 2296440 from keycloak-9b6486f7-9l9lq"
I see that Keycloak's RAM and CPU usage stays very low, under 20%, so the HPA never scales out. So I think the current Keycloak configuration is not optimized with respect to the number of CACHE_OWNERS, Access Token Lifespan, SSO Session Idle, SSO Session Max, etc.
I want to know which settings to tune so that Keycloak can handle 500 CCU with a response time of ~3s. Please help if you know about this!
In the standalone-ha.xml config, I have only updated the datasource configuration, as in the image below.
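Since the question mentions CACHE_OWNERS: in the WildFly-based Keycloak distribution, the owners attribute on the Infinispan distributed caches controls how many nodes keep a copy of each entry (the default in standalone-ha.xml is 1). A minimal sketch of raising it, where owners="2" is an assumed starting point rather than a verified tuning for this load:

<!-- standalone-ha.xml, Infinispan subsystem, "keycloak" cache container -->
<cache-container name="keycloak">
    <distributed-cache name="sessions" owners="2"/>
    <distributed-cache name="authenticationSessions" owners="2"/>
    <distributed-cache name="loginFailures" owners="2"/>
</cache-container>

Note that more owners means more replication traffic between pods, which is also the kind of inter-node communication timing out in the ISPN000476 errors above.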

Ceph libRBD cache control

So Ceph has a user-space page cache implementation in librbd. Does it allow users to specify how much page cache to allocate to each pod? If yes, can we dynamically change the allocations?
There is no reference to page cache allocation at the pod level in the documentation or in the project's GitHub issues.
Ceph supports write-back caching for RBD. To enable it, add rbd cache = true to the [client] section of your ceph.conf file. By default librbd does not perform any caching. Writes and reads go directly to the storage cluster, and writes return only when the data is on disk on all replicas. With caching enabled, writes return immediately, unless there are more than rbd cache max dirty unflushed bytes. In this case, the write triggers writeback and blocks until enough bytes are flushed.
These are the currently supported RBD cache parameters; they must be inserted in the [client] section of your ceph.conf file:
rbd cache = Enable caching for RADOS Block Device (RBD). | Type: Boolean, Required: No, Default: false
rbd cache size = The RBD cache size in bytes. | Type: 64-bit Integer, Required: No, Default: 32 MiB
rbd cache max dirty = The dirty limit in bytes at which the cache triggers write-back. If 0, uses write-through caching. | Type: 64-bit Integer, Required: No, Constraint: Must be less than rbd cache size, Default: 24 MiB
rbd cache target dirty = The dirty target before the cache begins writing data to the data storage. Does not block writes to the cache. | Type: 64-bit Integer, Required: No, Constraint: Must be less than rbd cache max dirty, Default: 16 MiB
rbd cache max dirty age = The number of seconds dirty data is in the cache before writeback starts. | Type: Float, Required: No, Default: 1.0
rbd cache writethrough until flush = Start out in write-through mode, and switch to write-back after the first flush request is received. Enabling this is a conservative but safe setting in case VMs running on rbd are too old to send flushes, like the virtio driver in Linux before 2.6.32. | Type: Boolean, Required: No, Default: false
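Putting the parameters together, a minimal ceph.conf sketch; the byte values simply spell out the defaults listed above and are a starting point, not tuned recommendations:

[client]
    rbd cache = true
    rbd cache size = 33554432               # 32 MiB
    rbd cache max dirty = 25165824          # 24 MiB (0 = write-through)
    rbd cache target dirty = 16777216       # 16 MiB
    rbd cache max dirty age = 1.0           # seconds
    # Conservative choice for guests that may never send flushes:
    rbd cache writethrough until flush = true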

Dynamodb - Eclipse Error

Firstly, I am using:
Titan Graph 1.0.0
Backend Storage = DynamoDB_Local
I am deleting a graph created and stored on my local PC.
I have the code below:
BaseConfiguration conf = new BaseConfiguration();
conf.setProperty("storage.backend", "com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager");
conf.setProperty("storage.dynamodb.client.endpoint", "http://localhost:4567");
conf.setProperty("storage.dynamodb.enable-parallel-scan", "true");
conf.setProperty("ids.flush", "false");
conf.setProperty("faunus.output.titan.storage.index.search.backend", "elasticsearch");
conf.setProperty("faunus.graph.output.titan.storage.index.search.hostname", "/tmp/searchindex");
conf.setProperty("faunus.graph.output.titan.storage.index.search.elasticsearch.client-only", "false");
conf.setProperty("faunus.graph.output.titan.storage.index.search.elasticsearch.local-mode", "true");
TitanGraph graph = TitanFactory.open(conf);
graph.close();               // TitanCleanup.clear() requires a closed graph
TitanCleanup.clear(graph);
System.out.println("graph delete");
System.exit(0);
And when I run this code, it gives me an error like this:
Exception in thread "main" com.thinkaurelius.titan.core.TitanException: Could not initialize backend
at com.thinkaurelius.titan.diskstorage.Backend.initialize(Backend.java:301)
at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:1806)
at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:123)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:94)
at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:74)
at deleteGraph.main(deleteGraph.java:56)
Caused by: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: CreateTable_titan_graphindex Cannot increase provisioned throughput to more than 80,000 units per account (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: 34d7515c-d628-4224-bca3-b75acb936c71)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.processDynamoDBAPIException(DynamoDBDelegate.java:215)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.createTable(DynamoDBDelegate.java:702)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.createTableAndWaitForActive(DynamoDBDelegate.java:838)
at com.amazon.titan.diskstorage.dynamodb.AbstractDynamoDBStore.ensureStore(AbstractDynamoDBStore.java:92)
at com.amazon.titan.diskstorage.dynamodb.MetricStore.ensureStore(MetricStore.java:47)
at com.amazon.titan.diskstorage.dynamodb.TableNameDynamoDBStoreFactory.create(TableNameDynamoDBStoreFactory.java:52)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.openDatabase(DynamoDBStoreManager.java:202)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBStoreManager.openDatabase(DynamoDBStoreManager.java:57)
at com.thinkaurelius.titan.diskstorage.Backend.initialize(Backend.java:235)
... 5 more
Caused by: com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: Cannot increase provisioned throughput to more than 80,000 units per account (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException; Request ID: 34d7515c-d628-4224-bca3-b75acb936c71)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1579)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1249)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:1835)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:1811)
at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.createTable(AmazonDynamoDBClient.java:640)
at com.amazon.titan.diskstorage.dynamodb.DynamoDBDelegate.createTable(DynamoDBDelegate.java:700)
... 12 more
I don't know how to solve this error, and no reference has been given for this exception.
Can anyone help?
You appear to be running the stack in us-east-1, as the maximum provisioned throughput there is 80,000 units per account. You omitted the edgestore/graphindex provisioning configuration, but I suspect you tried to provision more than 80,000 units of provisioned throughput. To get more throughput, you need to file a request through AWS support to have the per-table and per-region throughput limits raised.
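If the goal is to stay under the limit rather than raise it, the per-store capacity can be dialed down in the same BaseConfiguration. A hedged sketch; the property names follow the dynamodb-titan-storage-backend sample configuration and should be verified against your version, and the values are illustrative:

// Lower per-table provisioned throughput so CreateTable stays under the account cap
conf.setProperty("storage.dynamodb.stores.edgestore.capacity-read", "10");
conf.setProperty("storage.dynamodb.stores.edgestore.capacity-write", "10");
conf.setProperty("storage.dynamodb.stores.graphindex.capacity-read", "10");
conf.setProperty("storage.dynamodb.stores.graphindex.capacity-write", "10");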

Mesos Kafka task failed memory limit

I am going to set up a Kafka cluster on Apache Mesos.
I followed the instructions for kafka-mesos on GitHub. I installed a Mesos cluster (using Mesosphere, without Marathon) with 3 nodes, each with 2 CPUs and 4GB of memory. I tested the cluster with hello-world examples successfully.
I can run the kafka-mesos scheduler on it and can add brokers to it.
But when I want to start the broker, a memory limit issue appears:
broker-191-.... TASK_FAILED slave:#c3-S1 reason:REASON_MEMORY_LIMIT
Although the cluster has 12GB of memory, the broker needs just 3GB with a 1GB heap. (I tested it with various configurations from 512M to 3GB, but none worked.)
What is the problem, and what is the solution?
The complete story is here:
2015-10-17 17:39:24,748 [Jetty-17] INFO ly.stealth.mesos.kafka.HttpServer$ - handling - http://192.168.11.191:7000/api/broker/start
2015-10-17 17:39:28,202 [Thread-605] INFO ly.stealth.mesos.kafka.Scheduler$ - [resourceOffers]
mesos-2#O1160 cpus:2.00 mem:4098.00 disk:9869.00 ports:[31000..32000]
mesos-3#O1161 cpus:2.00 mem:4098.00 disk:9869.00 ports:[31000..32000]
mesos-1#O1162 cpus:2.00 mem:4098.00 disk:9869.00 ports:[31000..32000]
2015-10-17 17:39:28,204 [Thread-605] INFO ly.stealth.mesos.kafka.Scheduler$ - Starting broker 191: launching task broker-191-0abe9e57-b0fb-4d87-a1b4-529acb111940 by offer mesos-2#O1160
broker-191-0abe9e57-b0fb-4d87-a1b4-529acb111940 slave:#c6-S3 cpus:1.00 mem:3096.00 ports:[31000..31000] data:defaults=broker.id\=191\,log.dirs\=kafka-logs\,port\=31000\,zookeeper.connect\=192.168.11.191:2181\\\,192.168.11.192:2181\\\,192.168.11.193:2181\,host.name\=mesos-2\,log.retention.bytes\=10737418240,broker={"stickiness" : {"period" : "10m"\, "stopTime" : "2015-10-17 13:43:29.278"}\, "id" : "191"\, "mem" : 3096\, "cpus" : 1.0\, "heap" : 1024\, "failover" : {"delay" : "1m"\, "maxDelay" : "10m"}\, "active" : true}
2015-10-17 17:39:28,417 [Thread-606] INFO ly.stealth.mesos.kafka.Scheduler$ - [statusUpdate] broker-191-0abe9e57-b0fb-4d87-a1b4-529acb111940 TASK_FAILED slave:#c6-S3 reason:REASON_MEMORY_LIMIT
2015-10-17 17:39:28,418 [Thread-606] INFO ly.stealth.mesos.kafka.Scheduler$ - Broker 191 failed 1, waiting 1m, next start ~ 2015-10-17 17:40:28+03
2015-10-17 17:39:29,202 [Thread-607] INFO ly.stealth.mesos.kafka.Scheduler$ - [resourceOffers]
I found the following in the Mesos master log:
...validation.cpp:422] Executor broker-191-... for task broker-191-... uses less CPUs (None) than the minimum required (0.01). Please update your executor, as this will be mandatory in future releases.
...validation.cpp:434] Executor broker-191-... for task broker-191-... uses less memory (None) than the minimum required (32MB). Please update your executor, as this will be mandatory in future releases.
But I did set the CPU and MEM for brokers via broker add (update):
broker updated:
id: 191
active: false
state: stopped
resources: cpus:1.00, mem:2048, heap:1024, port:auto
failover: delay:1m, max-delay:10m
stickiness: period:10m, expires:2015-10-19 11:15:53+03
The executor doesn't get the heap setting, just the broker. I opened an issue for this: https://github.com/mesos/kafka/issues/137. Please increase the mem until a patch is available.
I suspect this hasn't been seen as a problem before because mem usually gets set to a larger value (the size of the data set you don't want to have to read from disk), so there is page cache for maximum efficiency: http://kafka.apache.org/documentation.html#maximizingefficiency
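In practice the workaround is to pad mem well above heap when updating the broker, leaving room for the executor's overhead. A hedged sketch using the kafka-mesos CLI; the subcommands mirror the broker add/update flow shown in the question, but verify the flags for your version, and the 3072/1024 split is illustrative:

# Give the broker more mem than its heap so the executor overhead fits
./kafka-mesos.sh broker update 191 --mem 3072 --heap 1024
./kafka-mesos.sh broker start 191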