Varnish restarting suddenly

Does varnish keep a crash / restart log?
I am currently monitoring a Varnish server and it seems to restart every week or so, when CPU usage reaches about 100% (load gets a bit high, about 6-7 on a 2-core machine) and I/O wait averages about 45% of CPU time.
Am I missing some configuration or predefined behavior? Does it mean I have a hardware bottleneck that is causing the Varnish failures?
Thanks!

When the child dies you should see a message in syslog. It will say something like Child exited.... Varnish is good about keeping track of the child process, so when it does crash it will be restarted immediately, and the restart should be logged.
A load of 6-7 on two cores seems high. If you are using file-backed storage, I suggest switching to malloc. If you need more cache space, get a box with more memory. Use the nuking behavior as your guide (varnishstat -1 | grep nuke): if the value Varnish reports there is 0, your cache size is sufficient.
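For example (the log path, counter name, and cache size here are illustrative; adjust them to your distro, Varnish version, and available RAM):

grep -i child /var/log/syslog       # look for the Child ... exited / died messages around each restart
varnishstat -1 | grep -i nuke       # a nuke counter greater than 0 means objects are being evicted for space
varnishd ... -s malloc,1G           # malloc storage sized to stay comfortably within RAM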

Related

Kubernetes pod high cache memory usage

I have a Java process running on k8s.
I set Xms and Xmx for the process:
java -Xms512M -Xmx1G -XX:SurvivorRatio=8 -XX:NewRatio=6 -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled -jar automation.jar
My expectation is that the pod should consume 1.5 to 2 GB of memory, but it consumes much more, nearly 3.5 GB. That is too much.
If I run my process on a virtual machine, it consumes much less memory.
When I check the memory stats for the pod, I realise that the pod allocates a lot of cache memory.
RSS of nearly 1.5 GB is OK, because Xmx is 1 GB. But why is the cache nearly 3 GB?
Is there any way to tune or control this usage?
/app $ cat /sys/fs/cgroup/memory/memory.stat
cache 2881228800
rss 1069154304
rss_huge 446693376
mapped_file 1060864
swap 831488
pgpgin 1821674
pgpgout 966068
pgfault 467261
pgmajfault 47
inactive_anon 532504576
active_anon 536588288
inactive_file 426450944
active_file 2454777856
unevictable 0
hierarchical_memory_limit 16657932288
hierarchical_memsw_limit 9223372036854771712
total_cache 2881228800
total_rss 1069154304
total_rss_huge 446693376
total_mapped_file 1060864
total_swap 831488
total_pgpgin 1821674
total_pgpgout 966068
total_pgfault 467261
total_pgmajfault 47
total_inactive_anon 532504576
total_active_anon 536588288
total_inactive_file 426450944
total_active_file 2454777856
total_unevictable 0
A Java process may consume much more physical memory than specified in -Xmx - I explained it in this answer.
However, in your case it is not even the memory of the Java process, but rather the OS-level page cache. Typically you don't need to care about the page cache, since it is shared, reclaimable memory: when an application wants to allocate more memory and there are not enough free pages immediately available, the OS will free part of the page cache automatically. In this sense the page cache should not be counted as "used" memory - it is more like spare memory that the OS puts to good use while applications do not need it.
The page cache often grows when an application does a lot of file I/O, and this is fine.
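For instance, a quick sketch against the cgroup v1 stats shown above (paths and field names differ under cgroup v2) that compares the reclaimable file cache with the anonymous RSS from inside the pod:

awk '$1 == "cache" || $1 == "rss" || $1 == "active_file" || $1 == "inactive_file" {printf "%-14s %6.0f MB\n", $1, $2/1048576}' /sys/fs/cgroup/memory/memory.stat

In the output above, active_file plus inactive_file adds up to exactly the cache value (about 2.7 GB), and that is the portion the kernel can reclaim under memory pressure.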
Async-profiler may help to find the exact source of the growth: run it with -e filemap:mm_filemap_add_to_page_cache. I demonstrated this approach in my presentation.
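A rough invocation sketch (the install path, duration, output file name, and <PID> are placeholders; tracing kernel tracepoints from inside a container also requires perf_events access):

/path/to/async-profiler/profiler.sh -d 60 -e filemap:mm_filemap_add_to_page_cache -f pagecache.html <PID>

The resulting flame graph shows which code paths are adding pages to the page cache.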

HAProxy reverse ssl termination: Memory keeps growing. Memory leak?

I have HAProxy 2.5.1 in an SSL-termination config running in a container of a Kubernetes pod; the backend is a Scala app that runs in another container of the same pod.
I have seen that I can put 500K connections on the setup and the RSS memory usage of HAProxy is 20GB. If I remove the traffic and wait 15 minutes, the RSS drops to 15GB, but if I repeat the same exercise one or two more times, the RSS for HAProxy will hit 30GB and HAProxy will be killed, as I have a 30GB limit for HAProxy on the pod.
The question here is whether this behavior of continuous memory growth is expected.
Here is the incoming traffic:
And here is the memory usage chart, which shows how after 3 cycles of placing load and removing load the RSS memory reached 30GB and the process got killed (as an observation, the two charts have different timezones, but they belong to the same execution).
We switched from an Alpine-based image (musl) to a glibc-based image and that solved the problem. We got a 5x increase in connection rate, and the memory growth is gone too.
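For anyone who wants to check which libc an HAProxy image is built against, a quick sketch (the image tags are illustrative, not necessarily the exact ones we used):

docker run --rm --entrypoint cat haproxy:2.5-alpine /etc/os-release | head -1   # prints NAME="Alpine Linux" (musl)
docker run --rm --entrypoint cat haproxy:2.5 /etc/os-release | head -1          # prints a Debian PRETTY_NAME (glibc)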

K8s cluster memory decreases when running an Apache Flink Job

We are trying to deploy an Apache Flink job on a K8s cluster, but we are noticing an odd behavior: when we start our job, the task manager memory starts at the amount assigned, in our case 3 GB.
taskmanager.memory.process.size: 3g
Eventually, the memory starts decreasing until it reaches about 160 MB; at that point it recovers a little memory, so it never quite runs out.
That very low memory often causes the job to be terminated due to a task manager heartbeat exception, even when we are just trying to watch the logs on the Flink dashboard or while the job is doing its processing.
Why is it going so low on memory? We expected this behavior, but in the range of GB, because we assigned those 3 GB to the task manager; even if we change the task manager memory size, we see the same behavior.
Our Flink conf looks like this:
flink-conf.yaml: |+
taskmanager.numberOfTaskSlots: 1
blob.server.port: 6124
taskmanager.rpc.port: 6122
taskmanager.memory.process.size: 3g
metrics.reporters: prom
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
metrics.reporter.prom.port: 9999
metrics.system-resource: true
metrics.system-resource-probing-interval: 5000
jobmanager.rpc.address: flink-jobmanager
jobmanager.rpc.port: 6123
Is there a recommended configuration on K8s for memory, or something that we are missing in our flink-conf.yaml?
Thanks.
Your configuration looks fine. It's most likely an issue with your code and some kind of memory leak. This is a very good answer describing what may be the problem.
You can try setting a limit on the JVM heap with taskmanager.memory.task.heap.size so that you give the JVM some extra room to do GC, etc. But in the end, if you keep allocating objects that are never released, you will still run into this situation.
Presumably you are using your memory to store your state, in which case you can also try RocksDB as a state backend in case you are storing large objects.
What are the requests/limits in your deployment templates? If no request sizes are specified, you may be seeing your cluster resources get eaten; see the sketch below for both settings.
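As a rough sketch (the heap size, the RocksDB line, and the Kubernetes resources below are illustrative assumptions, not values taken from this setup):

taskmanager.memory.process.size: 3g
taskmanager.memory.task.heap.size: 1536m
state.backend: rocksdb

and, in the TaskManager deployment template, request what the process actually needs:

resources:
  requests:
    memory: "3Gi"
  limits:
    memory: "3Gi"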

BorPred in local iisexpress - is it a memory leak?

I've installed BorPred in local iisexpress on a clean Server 2019 Core. Debug in web.config is disabled, and the log4net setup was changed to show only ERROR/FATAL.
BorPred started with memory usage of less than 20 MB; when I connect to it, memory usage starts growing, and this is OK.
If I leave BorPred alone for 1 hour it keeps running, which is normal too, due to the periodic api/admin_WebApi/GetChangesSince calls.
But the memory usage after 1 hour increased up to 600 MB.
I use the TASKLIST command to check it.
Question: is this normal behavior, or could it be a memory leak?
Are there some settings to change or check that could help decrease the memory usage?
Thank you
The new name for this product is MDrivenServer.
The MDrivenServer has client synchronization - this builds up a list of changed identities, so it is expected to see memory build up as update operations keep the recently changed objects in memory.
The MDrivenServer also has internal EcoSpaces to handle its own administration and server-side jobs - these are garbage-collected and recreated after they have been in use for a certain period of time.
.NET does not necessarily release memory from processes that have shown a need for that memory in the past - this causes the reported memory to reflect the worst-case need: if you have a server-side job that pushes memory usage and runs once a day, the memory usage may still reflect that maximum.

CPU usage of JBoss JVM goes up to 99% and stays there

I am doing load testing on my application using JMeter, and I have a situation where the CPU usage of the application's JVM goes to 99% and stays there. The application still works; I am able to log in and do some activity, but it is understandably slower.
Details of environment:
Server: AMD Opteron, 2.20 GHz, 8 cores, 64-bit, 24 GB RAM. Windows Server 2008 R2 Standard
Application server: jboss-4.0.4.GA
JAVA: jdk1.6.0_25, Java HotSpot(TM) 64-Bit Server VM
JVM settings:
-Xms1G -Xmx10G -XX:MaxNewSize=3G -XX:MaxPermSize=12G -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+UseCompressedOops -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000
Database: MySql 5.6 (in a different machine)
Jmeter: 2.13
My scenario is that I have 20 users of my application log in and perform normal activity that should not create a huge load. A few minutes into the process, the CPU usage of the JBoss JVM goes up and never comes back; it stays like that until the JVM is killed.
To help better understand, here are a few screenshots.
I found a few posts where the CPU was at 100%, but nothing there was the same as my situation and I could not find a solution.
Any suggestion on what's to be done would be great.
Regards,
Sreekanth.
To understand the root cause of the high CPU utilization, we need to look at the CPU data and thread dumps captured at the same time.
Capture 5-6 thread dumps at the time of the issue, and capture CPU consumption on a thread-by-thread basis over the same window.
Generally the root cause of a CPU issue like this is a problem with threads: BLOCKED threads, long-running threads, deadlocks, long-running loops, etc. These can be identified by going through the thread stacks.
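A minimal capture sketch, assuming jstack from the same JDK (jdk1.6.0_25) is on the PATH and <PID> is a placeholder for the JBoss Java process id:

jstack -l <PID> > threaddump_1.txt
jstack -l <PID> > threaddump_2.txt
...and so on, 5-6 dumps roughly 10 seconds apart.

Each dump tags every thread with a hexadecimal nid; matching those ids against per-thread CPU (for example Process Explorer's Threads tab on Windows, or top -H -p <PID> on Linux, converting the thread id to hex) shows which stacks are actually burning the CPU.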