Quarkus + Vert.x not performing as well as Vert.x alone [closed]

I have two projects that retrieve data from the same PostgreSQL database, running in a Docker container based on the image at https://hub.docker.com/_/postgres
1 - Full Vert.x - https://github.com/lurumbo/vertx-demo
2 - Quarkus + Vert.x (essentially the same Vert.x project, plus some adjustments to make it work with Quarkus) - https://github.com/lurumbo/quarkus-reactive-pg-vertx-demo
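For context, both projects expose an HTTP endpoint that queries PostgreSQL through the reactive client. The exact handlers live in the linked repositories; the following is only an illustrative Vert.x 4-style sketch of that shape (route, table, and connection details are made up, not copied from the repos):

import io.vertx.core.Vertx;
import io.vertx.ext.web.Router;
import io.vertx.pgclient.PgConnectOptions;
import io.vertx.pgclient.PgPool;
import io.vertx.sqlclient.PoolOptions;
import io.vertx.sqlclient.Row;

public class Main {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Connection settings are placeholders; the real projects read them from configuration.
    PgConnectOptions connectOptions = new PgConnectOptions()
        .setHost("localhost").setPort(5432)
        .setDatabase("demo").setUser("postgres").setPassword("postgres");
    PgPool client = PgPool.pool(vertx, connectOptions, new PoolOptions().setMaxSize(5));

    Router router = Router.router(vertx);
    router.get("/items").handler(ctx ->
        client.query("SELECT id, name FROM items").execute(ar -> {
          if (ar.succeeded()) {
            StringBuilder body = new StringBuilder();
            for (Row row : ar.result()) {
              body.append(row.getLong("id")).append(' ').append(row.getString("name")).append('\n');
            }
            ctx.response().putHeader("content-type", "text/plain").end(body.toString());
          } else {
            ctx.fail(ar.cause());
          }
        }));

    vertx.createHttpServer().requestHandler(router).listen(8080);
  }
}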
I performed some stress tests on them with JMeter, and I can't wrap my head around the results.
Vertx Report
Quarkus Report
The tests were executed with these params:
Number of Threads (users): 1000
Ramp-Up Period (in seconds): 0
Loop Count: 10
What troubles me the most is the difference in the throughput values.
Vertx: 1776.5/sec
Quarkus: 1032.0/sec
Can you please help me understand the results?
Shouldn't the metrics be roughly the same?
Why am I getting such a difference in throughput?
Some notes:
I tested it using the runner.jar
The tests were run in JVM mode
Technical info:
Processor: Intel® Core™ i7-8565U CPU @ 1.80GHz × 8
Memory: 15.5 GiB
Thank you very much!

Related

How to add nodes if "kubectl get nodes" shows an empty list? [closed]

I am trying to run some installation instructions for a software development environment built on top of K3S.
I am getting the error "no nodes available to schedule pods", which, when Googled, leads me to the question "no nodes available to schedule pods - Running Kubernetes Locally with No VM".
The answer to that question tells me to run kubectl get nodes.
When I do that, it shows, perhaps not surprisingly, that I don't have any nodes running.
Without having to learn how Kubernetes actually works, how can I start some nodes and get past this error?
This is a local environment running on a single VM (just like the linked question).
It depends on how your Kubernetes was installed. Kubernetes is a complex system requiring multiple nodes, all configured correctly, in order to function.
If there are no nodes available for scheduling, my first thought would be that you only have a single node and it's a master node (which runs the control-plane services but not workloads), and that you have not attached any worker nodes. You would need to add another node to the cluster, running as a worker, for it to schedule workloads.
If you want to get up and running without understanding it, there are distributions such as minikube or k3s, which will set it up out of the box and are designed to run on a single machine.

GCP Cloud Run with PostgreSQL - how to do migrations? [closed]

We are starting our first Cloud Run project. In the past we used PostgreSQL in combination with Spring Boot, and we had our migrations run via Flyway (similar to Liquibase) when the app started.
Now, with Cloud Run, this approach may hit its limits because of the following corner cases:
Multiple incoming requests (HTTP, messages) routed to parallel instances could run the same migration in parallel while bootstrapping the container. That would result in exceptions and retries of failed messages, or HTTP errors.
The Flyway check on bootstrap would additionally slow down cold-start times every time a container is started, which could happen often if we do not have constant traffic keeping instances warm.
What would be a good approach with Spring Boot/Flyway and PostgreSQL as the backing database shared across the instances? A similar problem arises when you replace PostgreSQL with a NoSQL datastore, I guess, if you want or need to migrate to new structures.
Right now I can think of:
Doing a migration of the PostgreSQL schema as part of the deployment pipeline, before the Cloud Run revision gets replaced, which would introduce new challenges (rollbacks, etc.).
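For illustration, such a pipeline step could be a small standalone program that applies any pending Flyway migrations before traffic is shifted to the new revision. This is only a sketch; the class name and environment variable names are placeholders:

import org.flywaydb.core.Flyway;

public class MigrateMain {
  public static void main(String[] args) {
    // Runs as a dedicated pipeline step, before the new Cloud Run revision receives traffic.
    // Connection details are illustrative; in practice they would come from environment
    // variables or a secret manager.
    Flyway flyway = Flyway.configure()
        .dataSource(
            System.getenv("JDBC_URL"),      // e.g. jdbc:postgresql://host:5432/app
            System.getenv("DB_USER"),
            System.getenv("DB_PASSWORD"))
        .load();
    flyway.migrate();   // applies any pending versioned migrations and exits
  }
}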
Please share your ideas; I am looking forward to your answers.
Marcel
For migrations that introduce breaking changes, either on commit or on rollback, it's mandatory to have a full stop and, of course, a rollback planned accordingly.
Also note that a commit/push should not immediately trigger the new migrations. These are often not part of the regular CI/CD pipeline that goes to production.
After you deploy a service, you can create a new revision and assign a tag that allows you to access the revision at a specific URL without serving traffic.
A common use case for this is to run and control the first request to this container. You can then use that tag to gradually migrate traffic to the tagged revision, and to roll back a tagged revision.
To deploy a new revision of an existing service to production:
gcloud beta run deploy myservice --image IMAGE_URL --no-traffic --tag TAG_NAME
The tag allows you to directly test the new revision (or, in this case, trigger the migration with the very first request) at a specific URL, without serving traffic. The URL starts with the tag name you provided: for example, if you used the tag name green on the service myservice, you would test the tagged revision at the URL https://green---myservice-abcdef.a.run.app
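If the migration is triggered by that very first request to the tagged revision, as suggested above, one hedged way to do it in Spring Boot is a dedicated endpoint that applies the migrations on demand. The endpoint path is hypothetical, and this assumes the automatic startup migration is switched off (spring.flyway.enabled=false):

import javax.sql.DataSource;
import org.flywaydb.core.Flyway;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MigrationController {

  private final Flyway flyway;

  public MigrationController(DataSource dataSource) {
    // Built manually so nothing runs at application startup;
    // the auto-configured Flyway bootstrap is assumed to be disabled.
    this.flyway = Flyway.configure().dataSource(dataSource).load();
  }

  // Called once against the tagged, no-traffic revision before traffic is shifted,
  // e.g. POST https://green---myservice-abcdef.a.run.app/admin/migrate
  @PostMapping("/admin/migrate")
  public String migrate() {
    flyway.migrate();
    return "migrations applied";
  }
}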

Log entries are split in StackDriver using GKE [closed]

I am having some issues with log entries in Stackdriver using GKE: when a log entry is greater than 20 KB, it is split into several chunks. According to the GCP documentation, the size limit for log entries is 256 KB (https://cloud.google.com/logging/quotas). I have tested several configurations and found something very curious: when the cluster is set up using Ubuntu nodes, the issue occurs. When I use the default node type, Container-Optimized OS (cos), Stackdriver captures the log entries correctly.
Can somebody explain the cause of this behavior? I have checked the question "Logging with Docker and Kubernetes. Logs more than 16k split up"; I think it could be related.
Additional information:
GKE static version: v1.14.10-gke.50
Kernel version (nodes): 4.15.0-1069-gke
OS image (nodes): Ubuntu 18.04.5 LTS
Docker version (nodes): 18.9.7
Cloud Operations for GKE: Legacy Logging and Monitoring
Update: I have created more clusters using different GKE versions and another "Cloud Operations for GKE" implementation (System and Workload Logging and Monitoring), and the issue is the same. Current steps to reproduce the issue:
Create a GKE cluster using Ubuntu as the node image (the GKE version does not matter).
Deploy an application that logs an entry greater than 16 KB. I am using a Spring Boot application + Log4j 1.x; a minimal reproducer is sketched below the list.
Look for the log entry in the Stackdriver web console. The log entry is split into multiple chunks.
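A minimal standalone reproducer for step 2 might look like the following. It only assumes Log4j 1.x on the classpath; the payload size and content are arbitrary:

import org.apache.log4j.Logger;

public class LargeLogEntry {

  private static final Logger LOG = Logger.getLogger(LargeLogEntry.class);

  public static void main(String[] args) {
    // Build a single log line comfortably larger than the 16 KB chunk size
    // at which the splitting is observed.
    StringBuilder payload = new StringBuilder();
    while (payload.length() < 20 * 1024) {
      payload.append("0123456789abcdef");
    }
    LOG.info(payload.toString());
  }
}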
I see something similar happening in my GCP project when the output of one log entry is large (17 KB). The difference is that the first log entry contains 0-40% of the full log output, the second contains 0-70%, and the third/last contains 0-100%. My app is a Spring Boot reactive application, and I use a reactive log filter.

Why Kubernetes system requirements are so high? [closed]

I tried different implementations of Kubernetes and realized that a master node requires approximately 2 GB of RAM with 2 CPU cores, and a worker node 700 MB with 1 core. No individual component of Kubernetes seems to be heavily loaded, yet it still requires a lot of resources.
What is the bottleneck, and is it configurable?
Have you tried Lightweight Kubernetes (K3s)?
A K3s cluster in a single-node configuration can run with 1 CPU and 512 MB of RAM.
Hardware requirements scale based on the size of your deployments. Minimum recommendations are outlined here.
RAM: 512 MB minimum
CPU: 1 core minimum
If K3s can manage with that, the resource requirements are evidently configurable down to lower values.

Spring Batch and Pivotal Cloud Foundry [closed]

We are evaluating the Spring Batch framework to replace our home-grown batch framework in our organization, and we need to be able to deploy the batch jobs on Pivotal Cloud Foundry (PCF). In this regard, can you let us know your thoughts on the questions below:
Let's say we use the Remote Partitioning strategy to process a large volume of records. Can the batch job auto-scale slave nodes in the cloud based on the amount of data that the batch job processes, or do we have to scale the appropriate number of slave nodes and keep them in place before the batch job kicks off?
How does the "grid size" parameter fit into the configuration in the scenario above?
You have a few questions here. Before getting into them, let me take a minute to walk through where batch processing on PCF stands right now, and then get to your questions.
Current state of CF
As of PCF 1.6, Diego (the dynamic runtime within CF) provided a new primitive called Tasks. Traditionally, all applications running on CF were expected to be long-running processes. Because of this, in order to run a batch job on CF, you'd need to package it up as a long-running process (usually a web app) and then deploy that. If you wanted to use remote partitioning, you'd need to deploy and scale slaves as you saw fit, but it was all external to CF. With Tasks, Diego now supports short-lived processes, that is, processes that won't be restarted when they complete. This means that you can run a batch job as a Spring Boot über jar, and once it completes, CF won't try to restart it (that's a good thing). The issue with 1.6 is that an API exposing Tasks was not available, so it was only an internal construct.
With PCF 1.7, a new API is being released to expose Tasks for general use. As part of the v3 API, you'll be able to deploy your own apps as Tasks. This allows you to launch a batch job as a task knowing it will execute, then be cleaned up by PCF. With that in mind...
Can the batch job auto-scale slave nodes in the cloud based on the amount of data that the batch job processes?
When using Spring Batch's partitioning capabilities, there are two key components: the Partitioner and the PartitionHandler. The Partitioner is responsible for understanding the data and how it can be divided up. The PartitionHandler is responsible for understanding the fabric used to distribute the partitions to the slaves.
For Spring Cloud Data Flow, we plan on creating a PartitionHandler implementation that will allow users to execute slave partitions as Tasks on CF. Essentially, what we'd expect is that the PartitionHandler would launch the slaves as tasks and once they are complete, they would be cleaned up.
This approach allows the number of slaves launched to be determined dynamically by the number of partitions (configurable up to a max).
We plan on doing this work for Spring Cloud Data Flow but the PartitionHandler should be available for users outside of that workflow as well.
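To make that split of responsibilities concrete, here is a hedged sketch of how a partitioned master step is typically wired in Spring Batch. Bean and step names are illustrative, and the PartitionHandler could be any implementation, including the task-launching one described above:

import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.core.partition.PartitionHandler;
import org.springframework.batch.core.partition.support.Partitioner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PartitionedJobConfig {

  @Bean
  public Step masterStep(StepBuilderFactory steps, Partitioner partitioner,
                         PartitionHandler partitionHandler) {
    // The Partitioner decides how the data is split; the PartitionHandler
    // decides where the resulting partitions run (threads, messages, or CF tasks).
    return steps.get("masterStep")
        .partitioner("workerStep", partitioner)
        .partitionHandler(partitionHandler)
        .build();
  }
}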
How does the "grid size" parameter configuration in the scenario above?
The grid size parameter is really used by the Partitioner and not the PartitionHandler and is intended to be a hint on how many workers there may be. In this case, it could be used to configure how many partitions you want to create, but that really is up to the Partitioner implementation.
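As an illustration of that hint, a simple range-based Partitioner might use gridSize as the number of partitions it creates. This is a sketch only; the key names and the id-range scheme are made up for the example:

import java.util.HashMap;
import java.util.Map;
import org.springframework.batch.core.partition.support.Partitioner;
import org.springframework.batch.item.ExecutionContext;

public class RangePartitioner implements Partitioner {

  private final long minId;
  private final long maxId;

  public RangePartitioner(long minId, long maxId) {
    this.minId = minId;
    this.maxId = maxId;
  }

  @Override
  public Map<String, ExecutionContext> partition(int gridSize) {
    // gridSize is the hint described above: here it simply becomes
    // the number of id ranges, one per slave partition.
    long rangeSize = (maxId - minId) / gridSize + 1;
    Map<String, ExecutionContext> partitions = new HashMap<>();
    for (int i = 0; i < gridSize; i++) {
      ExecutionContext context = new ExecutionContext();
      context.putLong("minId", minId + i * rangeSize);
      context.putLong("maxId", Math.min(maxId, minId + (i + 1) * rangeSize - 1));
      partitions.put("partition" + i, context);
    }
    return partitions;
  }
}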
Conclusion
This is a description of how a batch workflow on CF would look. It's important to note that CF 1.7 is not out as of the writing of this answer. It is scheduled to be released in Q1 of 2016, and this functionality will follow shortly afterwards.