I would like to set the value of terminationMessagePolicy to FallbackToLogsOnError by default for all my pods.
Is there any way to do that?
I am running Kubernetes 1.21.
terminationMessagePolicy is a field in the container spec. Currently, besides setting it in your spec, there is no cluster-level setting that could change the default value ("File").
Community wiki answer to summarise the topic.
The answer provided by gohm'c is correct: it is not possible to change this value at the cluster level. You can find more information about it in the official documentation:
Moreover, users can set the terminationMessagePolicy field of a Container for further customization. This field defaults to "File" which means the termination messages are retrieved only from the termination message file. By setting the terminationMessagePolicy to "FallbackToLogsOnError", you can tell Kubernetes to use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller.
See also the Container v1 core API page for version 1.21, where you can find information about terminationMessagePolicy:
Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.
This can be set only at the Container level.
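For reference, here is a minimal Pod spec that sets the field on a container (the pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: example-pod          # illustrative name
spec:
  containers:
  - name: app
    image: busybox           # illustrative image
    terminationMessagePolicy: FallbackToLogsOnError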
I use the WildFly application server. When deploying a WAR file using the Command-Line Interface (CLI), the process requires a JVM heap size greater than 10 times the WAR file size.
How can I reduce the memory consumed by jboss-cli during deployment?
Problem detail:
I have to deploy 8 WAR files of 100 MB each. The deployment is applied in one transaction using "batch" and "batch.run", and the memory consumed by this process exceeds 8 GB.
I'm using the batch behavior because I have remote injections between the WARs and I don't know the deployment order.
My question is: how can I reduce the memory consumed by WildFly when using jboss-cli? And if there is no way to reduce it, how can I determine the deployment order of the WARs? (E.g. if app1 injects a remote session bean from app2, then app2 must be deployed before app1.)
You can define JVM options in the $JAVA_OPTS environment variable, which is read by WildFly.
For the default JVM settings, take a brief look at bin/standalone.conf or bin/domain.conf.
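For example, to cap the heap of the CLI process you could set the variable before launching it (a sketch with illustrative values, assuming your jboss-cli.sh honors $JAVA_OPTS as the standard launcher scripts do):

export JAVA_OPTS="-Xms512m -Xmx2g"   # illustrative sizes, tune to your workload
./bin/jboss-cli.sh --connect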
I am running varnish with nginx as proxy on ubuntu and I am getting (24: Too many open files) error every few days.
Restarting nginx solves the problem.
After researching about this error I found that the common solution is to increase worker_rlimit_nofile in nginx.conf.
I feel like this is not a real solution, since whatever limit I set might be reached as well.
Why does nginx keep these files (I believe they are sockets) open, and what would be a solution for my situation?
UPDATE:
I just noticed there are hundreds of varnish sockets open when I run lsof. I believe my issue is that these sockets don't get closed.
It's a good practice to increase the standard max number of files open on your server when it is a web server, the same goes for the number of ephemeral ports.
I think the default number of open files is 1024, which is way too small for varnish.
I am setting it to 131072:
ulimit -n 131072
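To make the change persistent across reboots and apply it to the nginx workers too, one common approach is the following (values illustrative; if nginx is started by systemd, you may need LimitNOFILE= in the unit file instead of limits.conf):

# /etc/security/limits.conf
www-data  soft  nofile  131072
www-data  hard  nofile  131072

# nginx.conf
worker_rlimit_nofile 131072;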
I am running Redis 2.8.19 on Windows Server 2008.
I get an error saying that I have insufficient disk space for my Redis heap (the Windows port uses a memory-mapped file instead of fork()).
I can only get Redis running if I set 'maxheap 1024M' in the config, even though I have ~50 GB of free space in the directory I have set 'heapdir' to.
If I try to run it with higher maxheap, or with no maxheap, I get this error (PowerShell):
PS C:\Users\admasgve> cd D:\redis-2.8.19
PS D:\redis-2.8.19> .\redis-server.exe
[7476] 25 Feb 09:32:38.419 #
The Windows version of Redis allocates a large memory mapped file for sharing
the heap with the forked process used in persistence operations. This file
will be created in the current working directory or the directory specified by
the 'heapdir' directive in the .conf file. Windows is reporting that there is
insufficient disk space available for this file (Windows error 0x70).
You may fix this problem by either reducing the size of the Redis heap with
the --maxheap flag, or by moving the heap file to a local drive with sufficient
space.
Please see the documentation included with the binary distributions for more
details on the --maxheap and --heapdir flags.
Redis can not continue. Exiting.
Screenshot: http://i.stack.imgur.com/Xae0f.jpg
Free space on D: 49.4 GB
Free space on C: 2,71 GB
Total RAM: 16 GB
Free RAM: ~9 GB
redis.windows.conf:
# Generated by CONFIG REWRITE
loglevel verbose
logfile "stdout"
save 900 1
save 300 10
save 60 10000
dir "D:\\redis-2.8.19"
maxmemory 1024M
# maxheap 2048M
heapdir "D:\\redis-2.8.19"
Everything besides the last three lines was generated by Redis with the 'CONFIG REWRITE' command. I have tried various combinations of maxmemory, maxheap and heapdir.
From Redis documentation:
maxmemory / maxheap - the maxheap flag controls the maximum size of this memory mapped file, as well as the total usable space for the Redis heap. Running Redis without either maxheap or maxmemory will result in a memory mapped file being created that is equal to the size of physical memory; The Redis heap must be larger than the value specified by the maxmemory
Has anybody encountered this problem before? What am I doing wrong?
Redis doesn't use the conf file in its home directory by default. You have to pass the file in on the command line:
.\redis-server.exe redis.windows.conf
This is what is in my conf file:
maxheap 2048M
heapdir D:\\redisheap
These settings resolved my issue.
This is how to use the maxheap flag, which is more convenient than using a config file:
redis-server --maxheap 2gb
To back up Michael's response, I've had the same problem.
I had ~40 GB of free space, and the paging file set to 4-8 GB.
Redis did not want to start until I set the paging file to the amount recommended by Windows itself, which was 12 GB.
Really odd behaviour.
.\redis-server.exe redis.windows.conf
This is what is in my conf file:
maxheap 2048M
heapdir D:\\redisheap
After passing the above parameters to redis-server.exe redis.windows.conf, the service started for me. Thanks for the solution.
maxheap 2048M
heapdir "D:\\<directory where your server is located>"

This should solve the problem. Please ping me if you have the same question.
FINAL
Further testing revealed that in a newer version of G-WAN everything works as expected.
ORIGINAL
I'm working with large files and G-WAN seems perfect for my use case, but I can't seem to wrap my head around streaming content to the client.
I would like to avoid buffered responses as memory will be consumed very fast.
Source code is published now.
Thanks. The value you got is obviously wrong and this is likely to come from a mismatch in the gwan.h file where the CLIENT_SOCKET enum is defined. Wait for the next release for a file in sync with the executable.
Note that, as explained below, you won't have to deal with CLIENT_SOCKET for streaming files - either local or remote files - as local files are served streamed by G-WAN and remote files will be better served using G-WAN's reverse proxy.
Copying to disk and serving from G-WAN is inefficient, and buffering the file in memory is also inefficient.
G-WAN, like Nginx and many others, is already using sendfile() so you don't have anything to do in order to "stream large files to the client".
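For reference, this is roughly what zero-copy serving of a local file looks like at the syscall level (a minimal sketch for illustration, not G-WAN's actual code):

#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Send a local file to a connected socket without any userspace
   buffer: the kernel moves the file pages directly to the socket. */
static int send_whole_file(int client_fd, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat st;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return -1;
    }

    off_t offset = 0;
    while (offset < st.st_size) {
        ssize_t n = sendfile(client_fd, fd, &offset, st.st_size - offset);
        if (n <= 0)
            break; /* real code would retry on EINTR/EAGAIN */
    }
    close(fd);
    return offset == st.st_size ? 0 : -1;
}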
I've looked at sendfile() but I couldn't find where gwan stores the client socket. I've tried to use CLIENT_SOCKET but it didn't work
The only way for CLIENT_SOCKET to fail to return the client socket is to use a gwan.h header that does not match the version of your gwan executable.
By using a G-WAN connection handler, you can bypass G-WAN's default behavior (I assume that's what you tried)... but again, that's unnecessary as G-WAN already does what you are trying to achieve (as explained above).
This in mind, here are a few points regarding G-WAN and sendfile():
an old release of G-WAN accidentally disabled sendfile() - don't use it; make sure you are using a more recent release.
the April public release was too careful at closing connections (slowing down non-keep-alive connections) and used sendfile() only for files greater than a certain size.
more recent development releases use sendfile() for all static files. By default we have disabled caching, as it confused too many users; it can be explicitly restored either globally, per-connection, or for a specific resource.
As a result, for large files test loads, G-WAN is now faster than all the other servers that we have tested.
We have also enormously reworked memory consumption to reach unparalleled levels (a small fraction of Nginx's memory consumption) - even with large files served with sendfile().
G-WAN at startup on a 6-Core Xeon takes 2.2 MB of RAM (without compiled and loaded scripts like servlets and handlers):
> Server 'gwan' process topology:
---------------------------------------------
6] pid:4843 Thread
5] pid:4842 Thread
4] pid:4841 Thread
3] pid:4840 Thread
2] pid:4839 Thread
1] pid:4838 Thread
0] pid:4714 Process RAM: 2.19 MB
---------------------------------------------
Total 'gwan' server footprint: 2.19 MB
In contrast, Nginx with worker_connections 4096; eats 15.39 MB at startup:
> Server 'nginx' process topology:
---------------------------------------------
6] pid:4703 Process RAM: 2.44 MB
5] pid:4702 Process RAM: 2.44 MB
4] pid:4701 Process RAM: 2.44 MB
3] pid:4700 Process RAM: 2.44 MB
2] pid:4699 Process RAM: 2.44 MB
1] pid:4698 Process RAM: 2.44 MB
0] pid:4697 Process RAM: 0.77 MB
---------------------------------------------
Total 'nginx' server footprint: 15.39 MB
And, unlike Nginx, G-WAN can handle more than 1 million concurrent connections without reserving the memory upfront (and without any configuration, by the way).
If you configure Nginx with worker_connections 1000000; then you have:
> Server 'nginx' process topology:
---------------------------------------------
6] pid:4568 Process RAM: 374.71 MB
5] pid:4567 Process RAM: 374.71 MB
4] pid:4566 Process RAM: 374.71 MB
3] pid:4565 Process RAM: 374.71 MB
2] pid:4564 Process RAM: 374.71 MB
1] pid:4563 Process RAM: 374.71 MB
0] pid:4562 Process RAM: 0.77 MB
---------------------------------------------
Total 'nginx' server footprint: 2249.05 MB
Nginx is eating 2.2 GB of RAM even before receiving any connection!
Under the same scenario, G-WAN needs only 2.2 MB of RAM (1024x less).
And G-WAN is now faster than Nginx for large files.
I want to stream large files from a remote source
sendfile() might not be what you are looking for as you state: "I want to stream large files from a remote source".
Here, if I correctly understand your question, you would like to RELAY large files from a remote repository, using G-WAN as a reverse-proxy, which is a totally different game (as opposed to serving local files).
The latest G-WAN development release has a generic TCP reverse-proxy feature which can be personalized with a G-WAN connection handler.
But in your case, you would just need a blind relay (without traffic rewrite) to go as fast as possible instead of allowing you to filter and alter the backend server replies.
The splice() syscall mentioned by Griffin is the (zero-copy) way to go - and G-WAN's (efficient, event-based and multi-threaded) architecture will do marvels - especially with its low RAM usage.
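For illustration, a blind relay built on splice() looks roughly like this (a minimal Linux-only sketch, not G-WAN code; most error handling is elided):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Relay bytes from one socket to another through a pipe:
   the payload stays in the kernel and never enters userspace. */
static void relay(int from_fd, int to_fd)
{
    int p[2];
    if (pipe(p) < 0)
        return;

    for (;;) {
        ssize_t in = splice(from_fd, NULL, p[1], NULL, 65536,
                            SPLICE_F_MOVE | SPLICE_F_MORE);
        if (in <= 0)
            break; /* EOF or error */

        ssize_t sent = 0;
        while (sent < in) {
            ssize_t out = splice(p[0], NULL, to_fd, NULL, in - sent,
                                 SPLICE_F_MOVE | SPLICE_F_MORE);
            if (out <= 0)
                goto cleanup;
            sent += out;
        }
    }
cleanup:
    close(p[0]);
    close(p[1]);
}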
G-WAN can do this in a future release (this is simpler than altering the traffic), but that's a pretty vertical application as opposed to G-WAN's primary target which is to let Web/Cloud developers write applications.
Anyway, if you need this level of efficiency, G-WAN can help to reach new levels of performance. Contact us at G-WAN's website.
There is a nice example of the required functionality, also included with the gwan application:
http://gwan.com/source/comet.c
Hope this helps.
I think you probably mean HTTP streaming, not Comet - in that case, there is an flv.c connection handler example provided with gwan. Also, you can use the C sendfile() call for zero-copy transfer of files, or the splice() syscall, depending on what you need.