Jenkins-X: How to link an external service in a preview environment

From a preview environment I want to access a database located in the staging environment (namespace jx-staging).
I am trying to follow Service Linking from the Jenkins-X documentation, with no success. The documentation is not clear about where to put the service link definition.
I created a service file charts/preview/resources/mysql.yaml with the following content, but the service link is not created.
kind: Service
apiVersion: v1
metadata:
  name: mysql
spec:
  type: ExternalName
  externalName: mysql.jx-staging.svc.cluster.local
  ports:
  - port: 3306
JX environment (output of jx version):
NAME                 VERSION
jx                   1.3.688
jenkins x platform   0.0.3125
Kubernetes cluster   v1.10.9-gke.5
kubectl              v1.10.7
helm client          v2.12.1+g02a47c7
helm server          v2.12.0+gd325d2a
git                  git version 2.11.0
Operating System     Debian GNU/Linux 9.6 (stretch)
Where and how to define a service link?
GitHub issue: How to link external service in preview environment

The solution is to move mysql.yaml from the resources to the templates sub-folder:
charts/preview/templates/mysql.yaml
The issue was caused by a typo in the Service Linking documentation, which has since been corrected.
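Once that Service renders into the preview namespace, the application can reach the database by the short name mysql on port 3306. A minimal sketch of wiring that in through the preview Deployment's environment (the variable names here are assumptions, not from the original):
env:
- name: MYSQL_HOST
  value: mysql          # resolves via the ExternalName Service to mysql.jx-staging.svc.cluster.local
- name: MYSQL_PORT
  value: "3306"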

BTW there is also a FAQ entry on adding more resources to a preview.
Your Service YAML looks good to me. Do you see the Service created when you create a preview environment?
You can find the namespace by typing jx get preview; then, to see whether there is a Service in your environment, try kubectl get service -n jx-myuser-myapp-pr-1

Related

Minikube Ingress Stuck In "Scheduled for sync"

Summary
Trying to get minikube-test-ifs.com to map to my deployment using minikube.
What I Did
minikube start
minikube addons enable ingress
kubectl apply -f <path-to-yaml-below>
kubectl get ingress
Added ingress ip mapping to /etc/hosts file in form <ip> minikube-test-ifs.com
I go to Chrome and enter minikube-test-ifs.com, but the page doesn't load:
I get "site can't be reached, took too long to respond"
YAML file
Note: it's all in the default namespace; I don't know if that's a problem.
There may be a problem in this YAML, but I checked and double-checked and see no potential error... unless I'm missing something.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: nginx
        ports:
        - name: client
          containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  selector:
    app: test
  ports:
  - name: client
    protocol: TCP
    port: 3000
    targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: minikube-test-ifs.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 3000
OS
Windows 10
Other Stuff
I checked Minikube with ingress example not working but I already added the entry to my /etc/hosts, and I also tried removing spec.host, but that still doesn't work...
I also checked Minikube Ingress (Nginx Controller) not working, but that person's page was already loading, so it's not really relevant to me from what I can tell.
Any ideas?
I've watched so many YouTube tutorials on this and followed everything exactly. I'm still new to this, but I don't see a reason for it not working.
Edit
When I run kubectl describe ingress <ingress> I get:
Type    Reason  Age               From                      Message
----    ------  ----              ----                      -------
Normal  Sync    8s (x5 over 19m)  nginx-ingress-controller  Scheduled for sync
How do I get it to sync? Is there a problem, given that it's been "Scheduled for sync" for a long time?
Overview
The ingress addon for minikube using the docker driver only works on Linux.
Docker for Windows uses Hyper-V; therefore, if the Docker daemon is running, you will not be able to use VM platforms such as VirtualBox or VMware.
If you have Windows Pro, Enterprise, or Education, you may be able to get it working by using Hyper-V for your minikube cluster (see Solution 1).
If you don't want to upgrade Windows, you can run a minikube cluster on a Linux virtual machine and do all your tests there. This will require you to change some Windows VM settings to get your VMs to run (see Solution 2). Note that you can run either Docker or a VM platform (other than Hyper-V), but not both (see The Second Problem for why this is the case).
The Problem
For those of you who are in the same situation I was, the problem lies in the fact that the minikube ingress addon only works on Linux when using the docker driver (thanks to @rriovall for showing me this documentation).
The Second Problem
So the solution should be simple, right? Just use a different driver and it should work. The problem here is that when Docker is installed on Windows, it uses the built-in Hyper-V virtualization technology, which by default seems to disable all other virtualization tech.
I have tested this hypothesis, and it seems to be the case. When the Docker daemon is running, I am unable to boot any virtual machine that I have. For instance, I got an error when I tried to run my VMs on VirtualBox and on VMware.
Furthermore, when I attempt to start a minikube cluster using the virtualbox driver, it gets stuck "booting the kernel" and then I get a "This computer doesn't have VT-X/AMD-v enabled" error. This error is false, as I do have VT-X enabled (I checked my BIOS). It is most likely due to the fact that when Hyper-V is enabled, all other virtualization tech seems to be disabled.
On my personal machine, when I search for "Turn Windows features on or off", I can see that Docker enabled "Virtual Machine Platform" and then asked me to restart my computer. This happened when I installed Docker. As a test, I turned off both the "Virtual Machine Platform" and "Windows Hypervisor Platform" features and restarted my computer.
What happened when I did that? The Docker daemon stopped running and I could no longer work with Docker; however, I was able to open my VMs, and I was able to start my minikube cluster with virtualbox as the driver. The problem? Docker doesn't work, so when my cluster tries to pull the Docker image I am using, it won't be able to.
So here lies the problem: either you have VM tech enabled and Docker disabled, or you have VM tech (other than Hyper-V, I'll touch on that soon) disabled and Docker enabled. But you can't have both.
Solution 1 (Untested)
The simplest solution would probably be upgrading to Windows Pro, Enterprise, or Education. The Hyper-V platform is not accessible on standard Windows editions. Once you have upgraded, you should be able to use Hyper-V as your driver concurrently with the Docker daemon. This, in theory, should make the ingress work.
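For reference, selecting the driver on such an edition would look something like this (untested here, as noted above):
minikube start --driver=hyperv
minikube addons enable ingress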
Solution 2 (Tested)
If you're like me and don't want to do a system upgrade for something so minuscule, there's another solution.
First, search your computer for "Turn Windows features on or off", disable "Virtual Machine Platform" and "Windows Hypervisor Platform", and restart your computer. (See you in a bit :D)
After that, install a virtual machine platform on your computer. I prefer VirtualBox, but you can also use others such as VMware.
Once you have a VM platform installed, add a new Linux VM. I would recommend either Debian or Ubuntu. If you are unfamiliar with how to set up a VM, this video will show you how; the setup is much the same for most ISO images.
After you have your VM up and running, install minikube and Docker on it. Be sure to install the correct version for your VM (for Debian, install Debian versions; for Ubuntu, install Ubuntu versions; some downloads may just be generic Linux, which should work on most distributions).
Once you have everything installed, create a minikube cluster with docker as the driver, apply your Kubernetes configurations (deployment, service, and ingress), configure your /etc/hosts file, go to your browser, and it should work, as sketched below. If you don't know how to set up an ingress, you can watch this video for an explanation of what an ingress is, how it works, and an example of how to set one up.
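A condensed sketch of those steps inside the Linux VM (the manifest filename is a placeholder for your own deployment/service/ingress file):
minikube start --driver=docker
minikube addons enable ingress
kubectl apply -f ingress-demo.yaml
echo "$(minikube ip) minikube-test-ifs.com" | sudo tee -a /etc/hosts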

Application not showing in ArgoCD when applying yaml

I am trying to set up ArgoCD for GitOps. I used the ArgoCD Helm chart to deploy it to my local Docker Desktop Kubernetes cluster. I am trying to use the app-of-apps pattern for ArgoCD.
The problem is that when I apply the YAML to create the root app, nothing happens.
Here is the YAML (created by the command helm template apps/ -n argocd from my public repo https://github.com/gajewa/gitops):
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: http://kubernetes.default.svc
    namespace: argocd
  project: default
  source:
    path: apps/
    repoURL: https://github.com/gajewa/gitops.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
The resource is created, but nothing actually happens in the Argo UI. No application is visible. So I tried to create the app via the web UI, even pasting the YAML in there. The application is created in the web UI, and it seems to synchronise and see the repo with the YAML templates of prometheus and argo, but it doesn't actually create the prometheus application in ArgoCD. And the prometheus part of the root app is forever "Progressing".
(Screenshots omitted: the main page with the root application, where argo-cd and prometheus should also be visible but aren't; and the root app view, where something is created for each template but Argo seems unable to create Kubernetes deployments/pods etc. from it.)
I thought maybe the CRD definitions were not present in the k8s cluster, but I checked and they're there:
λ kubectl get crd
NAME                       CREATED AT
applications.argoproj.io   2021-10-30T16:27:07Z
appprojects.argoproj.io    2021-10-30T16:27:07Z
I've run out of things to check for why the apps aren't actually deployed. I was going by this tutorial: https://www.arthurkoziel.com/setting-up-argocd-with-helm/
The problem is that you have to add the following to the metadata in your manifest file; just change the namespace to the one ArgoCD was deployed in (the default is argocd):
metadata:
  namespace: argocd
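Putting it together, the root Application from the question with only the missing namespace added:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd    # the namespace ArgoCD was deployed in
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    server: http://kubernetes.default.svc   # scheme as in the question; the in-cluster API server is conventionally https://kubernetes.default.svc
    namespace: argocd
  project: default
  source:
    path: apps/
    repoURL: https://github.com/gajewa/gitops.git
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true
      selfHeal: true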
From another SO post:
https://stackoverflow.com/a/70276193/13641680
It turns out that, at the moment, ArgoCD can only recognize Application declarations made in the ArgoCD namespace.
Related GitHub Issue

Hazelcast doesn't connect to pods when using Headless Service

The Hazelcast members are able to communicate with each other in non-Kubernetes environments. However, the same is not happening in Kubernetes environments: the pod is not able to resolve its own domain.
I am working with Spring Boot 2.1.8 and Hazelcast 3.11.4 (TcpIp config)
Hazelcast configuration:
Config config = new Config();
config.setInstanceName("hazelcast-instance")
    .setGroupConfig(new GroupConfig(hazelcastGroupName, hazelcastGroupPassword))
    .setProperties(hzProps)
    .setNetworkConfig(
        new NetworkConfig()
            .setPort(hazelcastNetworkPort)
            .setPortAutoIncrement(hazelcastNetworkPortAutoIncrement)
            .setJoin(
                new JoinConfig()
                    .setMulticastConfig(new MulticastConfig().setEnabled(false))
                    .setAwsConfig(new AwsConfig().setEnabled(false))
                    .setTcpIpConfig(new TcpIpConfig().setEnabled(true).setMembers(memberList))))
    .addMapConfig(initReferenceDataMapConfig());
return config;
Members definition in the StatefulSet config file:
- name: HC_NETWORK_MEMBERS
  value: project-0.project-ha, project-1.project-ha
Headless service config:
---
apiVersion: v1
kind: Service
metadata:
  namespace: {{ kuber_namespace }}
  labels:
    app: project
  name: project-ha
spec:
  clusterIP: None
  selector:
    app: project
Errors:
Resolving domain name 'project-0.project-ha' to address(es): [10.42.30.215]
Cannot resolve hostname: 'project-1.project-ha'
Members {size:1, ver:1} [
Member [10.42.11.173]:5001 - d928de2c-b4ff-4f6d-a324-5487e33ca037 this
]
The other pod has a similar error.
Fun facts:
It works fine when I set the full hostname in the HC_NETWORK_MEMBERS var. For example: project-0.project-ha.namespace.svc.cluster.local, project-1.project-ha.namespace.svc.cluster.local
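In other words, one workaround that keeps plain TCP-IP discovery is to put the fully qualified names into the StatefulSet env (a sketch based on the observation above; the templated namespace follows the question's convention):
- name: HC_NETWORK_MEMBERS
  value: project-0.project-ha.{{ kuber_namespace }}.svc.cluster.local, project-1.project-ha.{{ kuber_namespace }}.svc.cluster.local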
There is a discovery plugin for Kubernetes (see here) and various how-to guides for Kubernetes here.
If you use that, most of your problems should go away. The plugin looks up the member list from Kubernetes DNS.
If you can, it's worth upgrading to Hazelcast 4.2, as 3.11 is not the latest. Make sure to get a matching version of the plugin.
On 4.2, discovery has auto-detection, so it will do its best to help.
Please check the related guides:
Hazelcast Guide: Hazelcast for Kubernetes
Hazelcast Guide: Embedded Hazelcast on Kubernetes
For embedded Hazelcast, you need to use the Hazelcast Kubernetes plugin. For client-server, you can use TCP-IP configuration on the member side and use the Hazelcast service name as a static DNS name; it will be resolved automatically.
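For illustration, a minimal declarative sketch of Kubernetes discovery (assumes Hazelcast 4.x with the bundled Kubernetes discovery; the namespace value is an assumption, the service name is taken from this question):
hazelcast:
  network:
    join:
      multicast:
        enabled: false
      kubernetes:
        enabled: true
        namespace: my-namespace   # assumption: the namespace the members run in
        service-name: project-ha  # the headless service defined above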

Is it possible to remote debug a Java program in Kubernetes using a service name?

Now I am remote debugging my Java program in Kubernetes (v1.15.2) using kubectl port-forward, like this:
kubectl port-forward soa-report-analysis 5018:5018 -n dabai-fat
I can use IntelliJ IDEA to connect to my localhost port 5018 to remotely debug my pod in a Kubernetes cluster in a remote datacenter. But now I am facing a problem: every time a pod is upgraded, I must change the pod name and re-attach the debugger. Is there any way to keep a stable channel for debugging?
I can suggest, for anyone looking for ways to debug Java (and Go, NodeJS, Python, .NET Core) applications in Kubernetes, taking a look at skaffold.
It is a simple CLI tool that uses the build and deploy configuration you already have.
There is no need for additional installation in the cluster, modification of existing deployment configuration, etc.
Install CLI: https://skaffold.dev/docs/install/
Open your project, and try:
skaffold init
This will make skaffold create skaffold.yaml (the only config file needed by skaffold).
And then
skaffold debug
This uses your existing build and deploy config to build a container and deploy it. If needed, the necessary arguments are injected into the container, and port forwarding starts automatically.
For more info look at:
https://skaffold.dev/docs/workflows/debug/
This provides a consistent way to debug your application without having to keep track of the current pod or deployment state.
I use this script to improve my workflow:
#!/usr/bin/env bash
set -u
set -e
set -x
# List matching pods (informational, so the chosen pod is visible in the trace)
kubectl get pods -n dabai-fat | grep "soa-illidan-service"
# Grab the name of the first pod carrying the service's label
POD=$(kubectl get pod -n dabai-fat -l k8s-app=soa-illidan-service -o jsonpath="{.items[0].metadata.name}")
# Forward the debug port to localhost
kubectl port-forward -n dabai-fat ${POD} 11014:11014
This script automatically gets the pod name and opens remote debugging.
We can use a Service of type NodePort to resolve your issue. Here is a sample YAML file:
apiVersion: v1
kind: Service
metadata:
  name: debug-service
spec:
  type: NodePort
  selector:
    app: demoapp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 8001        # the port exposed in the Dockerfile for debugging purposes
      targetPort: 8001
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30019
In IntelliJ, you will be able to connect to
Host: localhost
Port: 30019
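One caveat worth adding: the JVM in the pod must be started with the JDWP agent listening on that target port, or there is nothing to attach to. A hedged sketch via the Deployment's environment (the env-var approach and port are assumptions; adjust to your image):
env:
- name: JAVA_TOOL_OPTIONS
  # JDWP agent, Java 9+ syntax; on Java 8 use address=8001 without the "*:" prefix
  value: "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8001"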

How to edit code in kubernetes pod containers using VS Code?

Typically, if I have a remote server, I can access it using SSH, and VS Code provides a beautiful extension for editing and debugging code on the remote server. But when I create pods in Kubernetes, I can't really SSH into the container, and so I cannot edit the code inside the pod. And the Kubernetes plugin in VS Code does not really help, because that plugin is used to deploy the code. So I was wondering whether there is a way to edit code inside a pod using VS Code.
P.S. Alternatively, if there is a way to SSH into a pod in Kubernetes, that will do too.
If your requirement is for kubectl edit xxx to use VS Code, the solution is:
For Linux,macos: export EDITOR='code --wait'
For Windows: set EDITOR=code --wait
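kubectl also honors the more specific KUBE_EDITOR variable, which takes precedence over EDITOR; for example (the deployment name is a placeholder):
KUBE_EDITOR="code --wait" kubectl edit deployment my-app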
Kubernetes + Remote Development extensions now allow:
attaching to k8s pods
opening remote folders
executing remotely
debugging remotely
an integrated terminal into the remote container
must have:
kubectl
docker (at minimum the docker CLI; see "Is it possible to install only the docker CLI and not the daemon")
Required VS Code extensions:
Kubernetes. https://marketplace.visualstudio.com/items?itemName=ms-kubernetes-tools.vscode-kubernetes-tools
Remote Development - https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack
Well, a pod is just a unit of deployment in Kubernetes, which means you can tune the containers inside it to accept an SSH connection.
Let's start by getting a Docker image that allows SSH connections. The rastasheep/ubuntu-sshd:18.04 image is quite nice for this. Create a deployment with it:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: debugger
  name: debugger
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: debugger
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: debugger
    spec:
      containers:
      - name: debugger
        image: rastasheep/ubuntu-sshd:18.04
        imagePullPolicy: "Always"
      hostname: debugger
      restartPolicy: Always
Now let's create a Service of type LoadBalancer so that we can access the pod remotely.
---
apiVersion: v1
kind: Service
metadata:
  namespace: default
  labels:
    app: debugger
  name: debugger
spec:
  type: LoadBalancer
  ports:
  - name: "22"
    port: 22
    targetPort: 22
  selector:
    app: debugger
status:
  loadBalancer: {}
Finally, get the external IP address by running kubectl get svc | grep debugger and use it to test the SSH connection: ssh root@external_ip_address
Note that the user/pass of this Docker image is root/root.
UPDATE
NodePort example. I tested this and it worked when running ssh -p 30036 root@<node-ip>, BUT I had to enable a firewall rule to make it work. So the nmap command that I gave you has the answer: evidently the machines that run Kubernetes don't allow inbound traffic on unusual ports. Talk to your administrators so they can give you an external IP address, or at least a port on a node.
---
apiVersion: v1
kind: Service
metadata:
  name: debugger
  namespace: default
  labels:
    app: debugger
spec:
  type: NodePort
  ports:
  - name: "ssh"
    port: 22
    nodePort: 30036
  selector:
    app: debugger
status:
  loadBalancer: {}
As mentioned in some of the other answers, you can do this, although it is fraught with danger: the cluster can and will replace pods regularly, and when it does, it starts a new pod idempotently from the configuration, which will not have your changes.
The command below will get you a shell session in your pod, which can sometimes be helpful for debugging if you don't have adequate monitoring or local testing facilities to recreate an issue.
kubectl --namespace=example exec -it my-cool-pod-here -- /bin/bash
Note: you can replace the command with any tool that is installed in your container (python3, sh, bash, etc.). Also be aware that some base images, like alpine, won't have bash installed by default.
This will open a bash session in the running container on the cluster, assuming you have the correct k8s RBAC permissions.
There is a Cloud Code extension available for VS Code that will serve your purpose.
You can install it in Visual Studio Code to interact with your Kubernetes cluster.
It allows you to create minikube, Google GKE, Amazon EKS, or Azure AKS clusters and manage them from VS Code (you can access cluster information, stream/view logs from pods, and open an interactive terminal into a container).
You can also enable continuous deployment, so it will continuously watch for changes in your files, rebuild the container, and redeploy the application to the cluster.
It is well explained in the documentation.
Hope it will be useful for your use case.