How to configure Spring Gateway for correct routing - spring-cloud

I created a Spring Eureka Registry, a Spring Gateway and a Customer-Service.
On my local machine, everything works fine. I open my Registry and see an entry like:
host.docker.internal:customer-management-ws:c390c86d4ab279fb780ad324347681b0
When I click on it, I see a prepared info JSON with useful information about this service (actuator/info).
My Gateway runs on port 8011. If I send a GET request to
localhost:8011/customer-management-ws/customer/all
the Gateway with its integrated load balancer delegates my request to my running customer-ws... perfect.
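For reference, this path-based routing by service ID is typically enabled via the Gateway's discovery locator; a minimal sketch of what the Gateway's application.yml presumably contains (the file itself is an assumption, the properties are standard Spring Cloud Gateway properties):

spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true                 # build routes from the service IDs registered in Eureka
          lower-case-service-id: true   # lets /customer-management-ws/** match the registered service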
=========
Today I deployed all three services to an external Linux server. The Registry, Gateway and customer-ws each run inside a Docker container, so three Docker containers in total.
When I go to my new Eureka Registry URL, I see something slightly different from my localhost setup:
1ee684bc4d4e:customer-management-ws:8fef838e6471c0f72ce610d3c9b960a3
When I click on it, nothing happens (page not found).
My Docker Gateway runs on port 8011. If I call
myserver.org:8011/customer-management-ws/customer/all
the GET request returns Error 500, and the Docker logs of my Gateway show me this:
java.net.UnknownHostException: failed to resolve '1ee684bc4d4e' after 2 queries
at io.netty.resolver.dns.DnsResolveContext.finishResolve(DnsResolveContext.java:1046) ~[netty-resolver-dns-4.1.67.Final.jar!/:4.1.67.Final]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
|_ checkpoint ⇢ org.springframework.cloud.gateway.filter.WeightCalculatorWebFilter [DefaultWebFilterChain]
|_ checkpoint ⇢ org.springframework.boot.actuate.metrics.web.reactive.server.MetricsWebFilter [DefaultWebFilterChain]
|_ checkpoint ⇢ HTTP GET "/customer-management-ws/api/swagger-ui.html" [ExceptionHandlingWebHandler]
Any ideas how to fix that? What is 1ee684bc4d4e, and why can't the load balancer resolve it to the Docker customer-ws container behind it?
My Docker customer-ws runs on port 8020. If I make a direct call to
myserver.org:8020/customer/all
the GET request is successful.

1ee684bc4d4e is the hostname the customer-ws container registered with Eureka (Docker uses the short container ID as the default hostname); the Gateway could not resolve it because the three containers were not attached to a common Docker network.
The solution was to run all three Docker containers inside the same network:
Create a bridge network.
Run all containers in this network.
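Concretely, that amounts to something like the following (network, container and image names are placeholders, not taken from the question):

# create a user-defined bridge network
docker network create gateway-net

# attach all three containers to that network
docker run -d --network gateway-net --name registry    -p 8010:8010 my-eureka-registry
docker run -d --network gateway-net --name gateway     -p 8011:8011 my-gateway
docker run -d --network gateway-net --name customer-ws -p 8020:8020 my-customer-management-ws

On a user-defined network, Docker's embedded DNS lets the Gateway resolve the name the customer-ws registered with Eureka.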

Related

Kubernetes get log within container

Background: I use glog to register a signal handler, but it cannot kill the init process (PID=1) with a kill syscall. As a result, even when a fatal signal such as SIGABRT is raised, the Kubernetes controller manager cannot tell that the pod is no longer functioning, and therefore will not kill the pod and restart a new one.
My idea is to add logic to my readiness/liveness probe: check the log content of the current container to see whether it is in a healthy state.
I'm trying to look into the logs on the container's local filesystem under /var/log, but haven't found anything useful.
I'm wondering if it's possible to issue an HTTP request to somewhere to get the complete log? I assume it's stored somewhere.
You can find the Kubernetes logs on the node that runs the pod at:
/var/log/pods
and, if you are using Docker containers, at:
/var/lib/docker/containers
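For example (the directory layout under /var/log/pods differs between Kubernetes versions, so treat the path components as placeholders):

# on the node that runs the pod
ls /var/log/pods/
tail -f /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/0.log

# when Docker is the container runtime
tail -f /var/lib/docker/containers/<container-id>/<container-id>-json.log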
Containers are Ephemeral
Docker containers emit logs to the stdout and stderr output streams. Because containers are stateless, the logs are stored on the Docker host in JSON files by default.
The default logging driver is json-file. The logs are then annotated with the log origin, either stdout or stderr, and a timestamp. Each log file contains information about only one container.
As @Uri Loya said, you can find these JSON log files in the /var/lib/docker/containers/ directory on a Linux Docker host. Here's how you can access them:
/var/lib/docker/containers/<container id>/<container id>-json.log
You can collect the logs with a log aggregator and store them in a place where they'll be available forever. It's dangerous to keep logs on the Docker host because they can build up over time and eat into your disk space. That's why you should use a central location for your logs and enable log rotation for your Docker containers.
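For the default json-file driver, rotation can be enabled on the Docker host in /etc/docker/daemon.json; the limits below are only example values:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}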

Jenkins Kubernetes slaves are offline

I'm currently trying to run a Jenkins build on top of a Kubernetes minikube 2-node cluster. This is the code that I am using: https://github.com/rsingla2012/docker-development-youtube-series-youtube-series/tree/main/jenkins. Every time I run the build, I get an error that the slave is offline. This is the output of "kubectl get all -o wide -n jenkinsonkubernetes2" after I apply the files:
(screenshot: cmd line logs)
Looking at the Jenkins logs below, Jenkins is able to spin up and provision a slave pod, but as soon as the container is run (in this case I'm using the inbound-agent image, although it's named jnlp), the pod is terminated and deleted and another one is created.
(screenshot: Jenkins logs, https://i.stack.imgur.com/mudPi.png)
I also added a new Jenkins logger for org.csanchez.jenkins.plugins.kubernetes at all levels, the log of which is shown below.
(screenshot: kubernetes logs)
This led me to believe that it might be a network issue or a firewall blocking the port, so I checked with netstat: although Jenkins was listening on 0.0.0.0:8080, port 50000 was not. So I opened port 50000 with an inbound rule for Windows 10, but after running the build it's still not listening. For reference, I also created a NodePort for the service and port-forwarded the master pod to port 32767, so the Jenkins UI is accessible at 127.0.0.1:32767. I believed opening the port should fix the issue, but upon using Microsoft Telnet to double check, I received the error "Connecting To 127.0.0.1...Could not open connection to the host, on port 50000: Connect failed" with the command "open 127.0.0.1 50000". One thing I thought was causing the problem was the lack of a server certificate when accessing the Kubernetes API from Jenkins, so I added the Kubernetes server certificate key to the Kubernetes cloud configuration, but I'm still receiving the same error. My Kubernetes URL is set to https://kubernetes.default:443, the Jenkins URL is http://jenkins, and I'm using the Jenkins tunnel jenkins:50000 with no concurrency limit.
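For context, a setup like the one described above usually exposes both the web UI and the inbound agent port through one Service; a rough sketch of such a manifest (the labels and values are assumptions, not taken from the linked repository):

apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkinsonkubernetes2
spec:
  type: NodePort
  selector:
    app: jenkins            # assumed label on the Jenkins master pod
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      nodePort: 32767       # UI reachable as described above
    - name: agent
      port: 50000           # inbound-agent / tunnel port (jenkins:50000)
      targetPort: 50000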

Fabric v2.0 in kubernetes (minikube) - problem running docker inside peer for running chaincode

I am trying to run the Fabric 2.0 test-network in Kubernetes (locally, in minikube) and am facing an issue with installing or running the chaincode on the peers (in a Docker container, it seems).
I created Kubernetes manifests based on the docker-compose-test-net.yaml and successfully deployed the network, generated the crypto material, created and joined the channel, installed the chaincode on the peers, and committed its definition. But when I try to invoke it, I get the following error:
Error: endorsement failure during invoke. response: status:500 message:"error in simulation:
failed to execute transaction 68e996b0d17c210af9837a78c0480bc7ba0c7c0f84eec7da359a47cd1f5c704a:
could not launch chaincode fabcar_01:bb76beb676a23a9be9eb377a452baa4b756cb1dc3a27acf02ecb265e1a7fd3df:
chaincode registration failed: container exited with 0"
I included the logs of the peer in this pastebin: https://pastebin.com/yrMwG8Nd. We can see there that it starts the container, but then I don't understand what happens to it.
I then tried what is explained here: https://github.com/IBM/blockchain-network-on-kubernetes/issues/27, where they say that
IKS v1.11 and onwards now use containerd as its container runtime instead of the docker engine therefore using docker.sock is no longer possible.
They propose to deploy a Docker pod (dind) with that file and that file, and to change the occurrences of unix:///host/var/run/docker.sock to tcp://docker:2375.
But then I have the following error when I try to install the chaincode:
Error: chaincode install failed with status:
500 - failed to invoke backing implementation of 'InstallChaincode':
could not build chaincode:
docker build failed:
docker image inspection failed:
cannot connect to Docker endpoint
So it seems it cannot connect to the Docker endpoint. But I cannot find how to fix this.
If you have an idea, it would help a lot!
I found my issue:
For the peers, I was setting:
- name: CORE_PEER_CHAINCODEADDRESS
  value: peer0-org1-example-com:7052
- name: CORE_PEER_CHAINCODELISTENADDRESS
  value: 0.0.0.0:7052
like they do for the test-network with docker-compose.
Removing those made it work. I guess they were important for the docker-compose setup, but not adequate for Kubernetes.
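In other words, the peer container's env section in the Kubernetes manifest keeps the usual addresses and simply omits the two chaincode address entries; a sketch (values are assumptions based on the test-network naming, not a verified manifest):

env:
  - name: CORE_PEER_ID
    value: peer0-org1-example-com
  - name: CORE_PEER_ADDRESS
    value: peer0-org1-example-com:7051
  - name: CORE_PEER_LISTENADDRESS
    value: 0.0.0.0:7051
  # CORE_PEER_CHAINCODEADDRESS and CORE_PEER_CHAINCODELISTENADDRESS removed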

Gitlab CI cannot resolve service hostname on kubernetes runner

I have a GitLab CI pipeline configured to run on a Kubernetes runner. Everything worked great until I tried to add services (https://docs.gitlab.com/ee/ci/services/mysql.html) for the test job. The service hostname (e.g. mysql) cannot be resolved on Kubernetes, resulting in the following error: dial tcp: lookup mysql on 10.96.0.10:53: no such host. However, it works on the Docker runner, but that's just not what I want. Is there any way to make the service hostnames resolvable on the Kubernetes runner?
The job definition from .gitlab-ci.yml:
test:
  stage: test
  variables:
    MYSQL_ROOT_PASSWORD: --top-secret--
    MYSQL_DATABASE: --top-secret--
    MYSQL_USER: --top-secret--
    MYSQL_PASSWORD: --top-secret--
  services:
    - mysql:latest
    - nats:latest
  script:
    - ping -c 2 mysql
    - go test -cover -coverprofile=coverage.prof.tmp ./...
Edit:
Logs from the runner-jd6sxcl7-project-430-concurrent-0g5bm8 pod show that the services started. There are 4 containers in total inside the pod: build, helper, svc-0 (mysql), svc-1 (nats).
svc-0 logs show the mysql service started successfully:
2019-12-09 21:52:07+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.18-1debian9 started.
2019-12-09 21:52:07+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2019-12-09 21:52:08+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.18-1debian9 started.
2019-12-09 21:52:08+00:00 [Note] [Entrypoint]: Initializing database files
2019-12-09T21:52:08.226747Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
2019-12-09T21:52:08.233097Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.18) initializing of server in progress as process 46
svc-1 logs show the nats service started successfully as well:
[1] 2019/12/09 21:52:12.876121 [INF] Starting nats-server version 2.1.2
[1] 2019/12/09 21:52:12.876193 [INF] Git commit [679beda]
[1] 2019/12/09 21:52:12.876393 [INF] Starting http monitor on 0.0.0.0:8222
[1] 2019/12/09 21:52:12.876522 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2019/12/09 21:52:12.876548 [INF] Server id is NCPAQNFKKWPI67DZHSWN5EWOCQSRACFG2FXNGTLMW2NNRBAMLSDY4IYQ
[1] 2019/12/09 21:52:12.876552 [INF] Server is ready
[1] 2019/12/09 21:52:12.876881 [INF] Listening for route connections on 0.0.0.0:6222
This was a known issue with older versions of GitLab Runner (< 12.8), or when running the Kubernetes executor on older versions of Kubernetes (< 1.7).
From the Kubernetes executor documentation of GitLab Runner:
Since GitLab Runner 12.8 and Kubernetes 1.7, the services are accessible via their DNS names. If you are using an older version you will have to use localhost.
(emphasis mine)
It's important to keep in mind the other restrictions and implications of the Kubernetes executor. From the same document:
Note that when services and containers are running in the same Kubernetes pod, they are all sharing the same localhost address.
So, even if you're able to use the service specific hostname to talk to your service, it's all really localhost (127.0.0.1) underneath.
As such, keep in mind the other important restriction from the same document:
You cannot use several services using the same port
(Thanks to @user3154003 for the link to the GitLab Runner issue in a currently deleted answer that pointed me in the right direction for this answer.)
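So on an affected (older) runner, the workaround is to point the job at localhost instead of the service hostnames; a sketch based on the job from the question (DB_HOST and NATS_URL are hypothetical variables your test code would have to read):

test:
  stage: test
  services:
    - mysql:latest
    - nats:latest
  variables:
    MYSQL_ROOT_PASSWORD: --top-secret--
    # on older runners the service containers share the job pod's network
    # namespace, so MySQL is reachable at 127.0.0.1:3306 and NATS at 127.0.0.1:4222
    DB_HOST: "127.0.0.1"
    NATS_URL: "nats://127.0.0.1:4222"
  script:
    - go test -cover -coverprofile=coverage.prof.tmp ./...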
From the documentation link you provided, I see that the mysql hostname should be accessible.
How services are linked to the job
To better understand how the container linking works, read Linking containers together.
To summarize, if you add mysql as service to your application, the image will then be used to create a container that is linked to the job container.
The service container for MySQL will be accessible under the hostname mysql. So, in order to access your database service you have to connect to the host named mysql instead of a socket or localhost. Read more in accessing the services.
Also from the doc
If you don’t specify a service alias, when the job is run, service will be started and you will have access to it from your build container
So can you check whether there were errors when the mysql service started?
I provided this as an answer since I could not fit it in a comment.
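For completeness, an explicit service alias can also be set in the job definition, which makes the hostname you should use unambiguous (the alias db is just an example name):

services:
  - name: mysql:latest
    alias: db
  - nats:latest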

Forwarding logs from kubernetes to splunk

I'm pretty much new to Kubernetes and don't have hands-on experience with it.
My team is facing an issue with the log format pushed by Kubernetes to Splunk.
The application is pushing logs to stdout in this format:
{"logname" : "app-log", "level" : "INFO"}
Splunk eventually gets this format (splunkforwarder is used):
{
  "log": "{\"logname\": \"app-log\", \"level\": \"INFO \"}",
  "stream": "stdout",
  "time": "2018-06-01T23:33:26.556356926Z"
}
This format makes it harder to query based on properties in Splunk.
Are there any options in Kubernetes to forward the raw logs from the app rather than wrapping them in another JSON?
I came across this post in Splunk, but the configuration is done on the Splunk side.
Please let me know if there is any option on the Kubernetes side to send raw logs from the application.
Kubernetes architecture provides three ways to gather logs:
1. Use a node-level logging agent that runs on every node.
You can implement cluster-level logging by including a node-level logging agent on each node. The logging agent is a dedicated tool that exposes logs or pushes logs to a backend. Commonly, the logging agent is a container that has access to a directory with log files from all of the application containers on that node.
The log format depends on the Docker settings. You need to set the log-driver parameter in /etc/docker/daemon.json on every node.
For example,
{
  "log-driver": "syslog"
}
or
{
  "log-driver": "json-file"
}
none - no logs are available for the container and docker logs does not return any output.
json-file - the logs are formatted as JSON. The default logging driver for Docker.
syslog - writes logging messages to the syslog facility.
For more options, check the link
2. Include a dedicated sidecar container for logging in an application pod.
You can use a sidecar container in one of the following ways:
The sidecar container streams application logs to its own stdout.
The sidecar container runs a logging agent, which is configured to pick up logs from an application container.
By having your sidecar containers stream to their own stdout and stderr streams, you can take advantage of the kubelet and the logging agent that already run on each node. The sidecar containers read logs from a file, a socket, or the journald. Each individual sidecar container prints log to its own stdout or stderr stream.
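A minimal sketch of the streaming-sidecar pattern (image, file name and paths are illustrative, following the pattern from the Kubernetes logging documentation rather than a specific setup):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}
  containers:
    - name: app
      image: my-app:latest              # assumed application image that writes to a log file
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-streamer                # sidecar: re-emits the file on its own stdout
      image: busybox
      args: ["/bin/sh", "-c", "tail -n+1 -F /var/log/app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app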
3. Push logs directly to a backend from within an application.
You can implement cluster-level logging by exposing or pushing logs directly from every application.
For more information, you can check the official Kubernetes documentation.
This week we had the same issue. We are using the Splunk forwarder DaemonSet. Installing this plugin on Splunk will solve your issue: https://splunkbase.splunk.com/app/3743/
Just want to update with the solution we tried; this worked for our log structure:
SEDCMD-1_unjsonify = s/{"log":"(?:\\u[0-9]+)?(.*?)\\n","stream.*/\1/g
SEDCMD-2_unescapequotes = s/\\"/"/g
BREAK_ONLY_BEFORE={"logname":