How to Set Max User Processes on Kubernetes

I built a Docker container for my JBoss application (WildFly 11 is used). The container runs on AWS EKS Fargate. After the container had been running for several minutes, "java.lang.OutOfMemoryError: unable to create new native thread" occurred.
After reading the article "How to solve java.lang.OutOfMemoryError: unable to create new native thread", I would like to change the max user processes from 1024 to 4096. However, I can't find any way to change it in the Kubernetes documentation.
I have tried the methods in the article "How do I set ulimit for containers in Kubernetes?", but they don't seem to help.
I have also edited /etc/security/limits.conf in my Dockerfile, but the number still hasn't changed.
Anyone have an idea about this?
Thank you.
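One thing to check: /etc/security/limits.conf is applied by PAM at login, and container processes never go through a PAM login, so editing that file in the Dockerfile has no effect. A workaround that may help (a sketch only, not a verified Fargate solution, since Fargate restricts privileged options) is to raise the soft nproc limit in the container's entrypoint before WildFly starts. The script name and the WildFly path below are assumptions:

```sh
#!/bin/bash
# entrypoint.sh - a sketch; the file name and the WildFly path are assumptions.

# Raise the soft "max user processes" limit for this shell and everything it
# spawns. This only succeeds up to the hard limit the runtime gave the
# container; if the hard limit itself is 1024, this call fails and the limit
# has to be raised at the container-runtime level instead.
ulimit -u 4096 || echo "could not raise the nproc soft limit" >&2

# Hand off to WildFly as PID 1.
exec /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0
```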

Related

Writing to neo4j pod takes much more time than writing to local neo4j

I have Python code where I process some data, write neo4j queries, and then commit these queries to neo4j. When I run the code on my local machine and write the output to a local neo4j instance, it doesn't take more than 15 minutes. However, when I run my code locally and write the output to a neo4j pod in k8s, it takes double the time, and when I build my code, deploy it to k8s, run that pod, and write the output to the neo4j pod, it takes around 3 hours. Since I'm new to k8s deployment, it might be something in the pod configuration or settings, so I'd appreciate any hints.
There could be a few reasons for that.
First, I would check how many resources your pod consumes while you are processing data; you can do that using kubectl top pod.
Second, I would check whether there are any limits set on the pod. You can read a great deal about them in Managing Compute Resources for Containers.
If a limit is set, it might be too low, and that could be causing the extended processing time.
If limits are not set, it might be down to how you installed microk8s. I think by default it is installed with 4 GB of memory; you can look at alternative installation methods. With multipass you can specify more memory to allocate.
There can also be an issue with page cache sizing, heap sizing, or the number of open files. Please read the Neo4j Performance Tuning guide.
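For reference, the first two checks could look like this (the pod name and namespace are placeholders):

```sh
# Live CPU/memory usage of the pod (requires metrics-server to be installed).
kubectl top pod <neo4j-pod> -n <namespace>

# Look for the "Limits:" and "Requests:" sections in the output to see
# whether any resource limits are configured on the pod.
kubectl describe pod <neo4j-pod> -n <namespace>
```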

How to overcome the IllegalAccessError during start-up of a connector in Kafka

I am writing a connector for Kafka Connect. The error I see during start-up of the connector is
java.lang.IllegalAccessError: tried to access field org.apache.kafka.common.config.ConfigTransformer.DEFAULT_PATTERN from class org.apache.kafka.connect.runtime.AbstractHerder
The error seems to happen at https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractHerder.java#L449
Do I need to set this DEFAULT_PATTERN manually? Is it not set by default?
I am using the docker image confluentinc/cp-kafka:5.0.1. The version of connect-api I am using in my connector app is org.apache.kafka:connect-api:2.0.0. I am running my set up inside Kubernetes.
The issue was resolved when I changed the image to confluentinc/cp-kafka:5.0.0-2.
I had already tried this option before posting the question, but the pod was stuck in a Pending state and refused to start. I thought it could have been an issue with the image. Upon doing some more research later, I learned that sometimes Kubernetes is unable to allocate enough resources, and hence pods can stay in the Pending state.
I tried the image confluentinc/cp-kafka:5.0.0-2 and it works fine.
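For anyone hitting the same Pending state, these commands usually reveal whether the scheduler is short on resources (the pod name and namespace are placeholders):

```sh
# The Events section at the bottom typically shows a FailedScheduling
# message such as "Insufficient cpu" or "Insufficient memory".
kubectl describe pod <connect-pod> -n <namespace>

# The same events for the whole namespace, sorted by time.
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp
```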

Why would running a container on GCE get stuck with "Metadata request unsuccessful: Forbidden (403)"?

I'm trying to run a container in a custom VM on Google Compute Engine. This is to perform a heavy ETL process so I need a large machine but only for a couple of hours a month. I have two versions of my container with small startup changes. Both versions were built and pushed to the same google container registry by the same computer using the same Google login. The older one works fine but the newer one fails by getting stuck in an endless list of the following error:
E0927 09:10:13 7f5be3fff700 api_server.cc:184 Metadata request unsuccessful: Server responded with 'Forbidden' (403): Transport endpoint is not connected
Can anyone tell me exactly what's going on here? Can anyone explain why one of my images doesn't have this problem (well, it gives a few of these messages but gets past them) while the other does (thousands of these messages, and it ran for over 24 hours before I killed it)?
If I ssh into a GCE instance, both versions of the container pull and run just fine. I suspect the INTEGRITY_RULE checking from the logs, but I know nothing about how that works.
MORE INFO: this comes down to "restart policy: never". Even a simple centos:7 container that says "hello world", deployed from the console, triggers this if the restart policy is never. At least in the short term I can fix this in the entrypoint script, as the instance will be destroyed when the monitor realises that the process has finished.
I suggest you try creating a third container that's focused on the metadata service functionality, to isolate the issue. It may be that there's a timing difference between the two containers that's not being overcome.
Make sure you can ‘curl’ the metadata service from the VM and that the request to the metadata service is using the VM's service account.
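For example, from inside the VM (these are the standard GCE metadata endpoints; the Metadata-Flavor header is mandatory, and leaving it out is itself one way to get a 403):

```sh
# Identify which service account the VM is using.
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"

# Request an access token for that service account; a 403 here points at
# the service account or its scopes rather than at your container image.
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"
```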

OpenShift says "Quota limit reached"

In OpenShift I have 4 projects, with 25 GB of space allocated to the projects.
The DB I use is MongoDB (version 3.2).
In OpenShift I am getting the message "Quota limit reached", and if I check, all 25 GB has been used according to OpenShift.
But if I check in MongoDB using db.stats() across all the projects, I have only used 5.7 GB.
I want to know where the remaining space is used, or how to find the exact amount of space I am using.
I think you should double-check a few things about your resource issue.
Check which resource limit was reached. Is it storage?
Check the event logs, which provide more details.
Check which quota limits were configured for your cluster or project.
Have you experienced any trouble since the message appeared, such as the DB hanging or pods not responding?
These are just troubleshooting steps, but I hope they help you.
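For a storage quota specifically, these commands show what is configured and consumed in each project. Note that db.stats() reports logical database size, while the quota counts the whole volume, so journal files, preallocated space, and anything else on the PVC may account for the gap (that last point is an assumption worth verifying):

```sh
# Show the quotas defined in the project and current usage against them.
oc get quota -n <project>
oc describe quota -n <project>

# List the persistent volume claims and their requested sizes.
oc get pvc -n <project>

# Recent events often name the exact quota that was exceeded.
oc get events -n <project>
```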

Which way to run PostgreSQL in Docker?

Which of these methods is correct?
One db container for each app
One db container for all apps
Install db without docker
I tried to find information on this, but found nothing. Or did I search badly?
It is immature, but that doesn't seem to be stopping a lot of people from using Docker for persistence.
The official Postgres image has 4.5 million pulls. OK, this doesn't mean that all those images/containers are being used, but it does suggest that it is a popular solution.
If you have already decided that you would like to use Docker, because of what containers can offer your architecture, then I don't think you will have trouble using it for persistence - assuming you are happy learning Docker.
I'm using Postgres and MySql in several projects quite successfully on Docker.
In choosing option 1 or 2, I would say that unless your apps are related to the same problem domain/company/project I would go with option 1. Of course, running costs will possibly factor in as well.
I generally go with option 1.
All 3 options can be valid, but it depends on what you need to do.
On my server I have one container for each of the main PostgreSQL releases I currently use.
I run all of them on different ports (don't use random port numbers, but ones that are easy to remember, because one problem with Docker is remembering all the port numbers and other details for every container):
pg84 (port 8432), pg93 (port 9332), pg94 (port 9432)
I link the pgXX container to whichever container needs it, and that's perfect for me; see the sketch below.
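A minimal sketch of that scheme (container names, ports, and the password are illustrative; --link matches the linking described above, though user-defined networks are the modern alternative):

```sh
# One container per PostgreSQL release, each on an easy-to-remember host port.
docker run -d --name pg93 -p 9332:5432 -e POSTGRES_PASSWORD=secret postgres:9.3
docker run -d --name pg94 -p 9432:5432 -e POSTGRES_PASSWORD=secret postgres:9.4

# Link an application container to the release it needs.
docker run -d --name myapp --link pg94:db myapp-image
```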
So, from my experience, I prefer option 2.