Creating a JBoss node apart from the default node - jboss

How to create a new JBoss node.
I am using the JBoss application server and need to configure another node apart from the default node.
How can this be done?

There are two ways to create a JBoss node. You can use the admin console to add a node, or you can manually edit the XML in domain.xml and host.xml to add one.
http://blog.akquinet.de/2012/06/29/managing-cluster-nodes-in-domain-mode-of-jboss-as-7-eap-6/
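If you'd rather script it than click through the console or hand-edit host.xml, the management CLI can add the same server entry; a rough sketch, assuming the default host name master, the stock other-server-group, and a new server called server-four (adjust the names and port offset to your setup):
# register a new server instance on the host controller
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=localhost:9999 \
  --command="/host=master/server-config=server-four:add(group=other-server-group,socket-binding-port-offset=350)"
# start the new server once it has been registered
$JBOSS_HOME/bin/jboss-cli.sh --connect --controller=localhost:9999 \
  --command="/host=master/server-config=server-four:start(blocking=true)"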

Related

K3S HA installation issue - adding floating ip to cluster

I am trying to create an HA k3s cluster using HAProxy and Keepalived.
For new installations (where the --tls-san param is added at first install) everything works great.
I am encountering an issue when I have an existing cluster and I try to update the configuration to add --tls-san <floating_IP>.
I can see that the service's unit file is updated correctly and that the service does restart, but editing the kubeconfig file to point at the new floating IP results in a TLS error.
Any ideas?
Thanks
Salmon
EDIT:
It seems as if no new listener is created (i.e. in the k3s-serving secret)
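For reference, a rough way to check whether the new SAN actually reached the serving certificate (a sketch, assuming a default systemd-based install; paths may differ):
# confirm the flag made it into the unit file
grep -- '--tls-san' /etc/systemd/system/k3s.service
# the SANs k3s is tracking are recorded in the k3s-serving secret's annotations
kubectl -n kube-system get secret k3s-serving -o jsonpath='{.metadata.annotations}'
# check what the API server actually presents on the floating IP
echo | openssl s_client -connect <floating_IP>:6443 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'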

Migrating from the Docker runtime to the containerd runtime on a specific node - kubernetes

I would like to know if it's possible to change the param --image-type COS_CONTAINERD with the Google Cloud command line, but only on a specific node.
I found the command below:
gcloud container clusters upgrade CLUSTER_NAME --image-type COS_CONTAINERD [--node-pool POOL_NAME]
But it will update my entire node pool, which will disrupt my web services.
Every node in my pool will be updated at the same time (destroyed & re-created).
Is there another way?
I would like to know if you have run into the same issue; it would help me during this migration.
Thanks for sharing your experience.
The command you are mentioning will update all the nodes within the node pool.
I would suggest creating a new node pool on your cluster with the COS_CONTAINERD image type, and once the new node pool with cos-containerd is up, migrating your workload to the new node pool's node(s) by following this process. This will also help you manage the downtime.
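A rough outline of that process (a sketch; the pool, cluster, and node names are placeholders, and the drain flag names vary slightly between kubectl versions):
# 1. create a new node pool that uses the containerd image
gcloud container node-pools create containerd-pool \
  --cluster=CLUSTER_NAME --image-type=COS_CONTAINERD --num-nodes=3
# 2. list the old pool's nodes and cordon them so no new pods land there
kubectl get nodes -l cloud.google.com/gke-nodepool=OLD_POOL_NAME
kubectl cordon NODE_NAME
# 3. drain each old node in turn; its pods are rescheduled onto the new pool
kubectl drain NODE_NAME --ignore-daemonsets --delete-emptydir-data
# 4. once everything runs on the new pool, delete the old one
gcloud container node-pools delete OLD_POOL_NAME --cluster=CLUSTER_NAME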

How to change OpenShift console URL and API URL

My company runs an OpenShift v3.10 cluster consisting of 3 masters and 4 nodes. We would like to change the URL of the OpenShift API and also the URL of the OpenShift web console. Which steps do we need to take to do this successfully?
We have already tried updating the openshift_master_cluster_hostname and openshift_master_cluster_public_hostname variables to new DNS names, which resolve to our F5 virtual hosts that load balance the traffic between our masters, and then started the upgrade Ansible playbook, but the upgrade fails. We have also tried running the Ansible playbook which redeploys the cluster certificates, but after that step the status of the OpenShift nodes changes to NotReady.
We have solved this issue. What we had to do was change the URLs defined in those variables in the inventory file and then run the Ansible playbook that updates the master configuration. The process of running that playbook is described in the official documentation.
After that we also had to update the OpenShift web console configuration map with the new URLs and then scale the web-console deployment down and back up. The process for updating the web console configuration is described here.
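Roughly, the steps looked like this (a sketch; the hostnames are placeholders and the exact playbook path and console object names can differ between openshift-ansible releases):
# 1. point the inventory variables at the new DNS names, e.g. in /etc/ansible/hosts:
#      openshift_master_cluster_hostname=api-internal.example.com
#      openshift_master_cluster_public_hostname=api.example.com
# 2. re-run the master configuration playbook
ansible-playbook -i /etc/ansible/hosts \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-master/config.yml
# 3. update the web console config map with the new URLs, then bounce the console
oc -n openshift-web-console edit configmap webconsole-config
oc -n openshift-web-console scale deployment webconsole --replicas=0
oc -n openshift-web-console scale deployment webconsole --replicas=1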

Spinnaker server group labelling

I am creating a server group and I want to add a label to the deployment. I can't find any option in the Spinnaker UI to add one. Any help on this?
The current version of the Kubernetes cloud provider (v1) does not support configuring labels on Server Groups.
The new Kubernetes Provider (v2), which is manifest-based, allows you to configure labels. This version, however, is still in alpha.
Sources
https://github.com/spinnaker/spinnaker/issues/1624
https://www.spinnaker.io/reference/providers/kubernetes-v2/

How to update services in Kubernetes?

Background:
Let's say I have a replication controller with some pods. When these pods were first deployed they were configured to expose port 8080. A service (of type LoadBalancer) was also created to expose port 8080 publicly. Later we decide that we want to expose an additional port from the pods (port 8081). We change the pod definition and do a rolling-update with no downtime, great! But we want this port to be publicly accessible as well.
Question:
Is there a good way to update a service without downtime (for example by adding an additional port to expose)? If I just do:
kubectl replace -f my-service-with-an-additional-port.json
I get the following error message:
Replace failed: spec.clusterIP: invalid value '': field is immutable
If you name the ports in the pods, you can specify the target ports by name in the service rather than by number, and then the same service can direct traffic to pods using different port numbers.
Or, as Yu-Ju suggested, you can do a read-modify-write of the live state of the service, such as via kubectl edit. The error message was due to not specifying the clusterIP that had already been set.
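A minimal sketch of both suggestions, assuming a hypothetical service called my-service and pod containers that name their ports (e.g. web for 8080 and web-alt for 8081):
# read-modify-write: export the live object (which already carries the assigned clusterIP),
# add the new port entry, then put it back
kubectl get service my-service -o yaml > my-service.yaml
# edit my-service.yaml and append a second entry under spec.ports, e.g.:
#   - name: http-alt
#     port: 8081
#     targetPort: web-alt    # target port referenced by name rather than by number
kubectl replace -f my-service.yaml
# or do the read-modify-write interactively in one step:
kubectl edit service my-service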
In such a case you can create a second service to expose the second port; it won't conflict with the first one and you'll have no downtime.
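For example (a sketch, assuming the replication controller is named my-rc; kubectl expose builds the second service from its selector):
kubectl expose rc my-rc --name=my-service-8081 \
  --port=8081 --target-port=8081 --type=LoadBalancer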
If you have more than one pod running for the same service, you may use Kubernetes Engine within the Google Cloud Console as follows:
Under "Workloads", select your Replication Controller. Within that screen, click "EDIT", then update and save your replication controller details.
Under "Discover & Load Balancing", select your Service. Within that screen, click "EDIT", then update and save your service details. If you changed ports, you should see them reflected under the "Endpoints" column when you've finished editing the details.
Assuming you have at least two pods running on a machine (and a restart policy of Always), if you wanted to update the pods with the new configuration or container image:
Under "Workloads", select your Replication Controller. Within that screen, scroll down to "Managed pods". Select a pod, then in that screen click "KUBECTL" -> "Delete". Note, you can do the same with the command line: kubectl delete pod <podname>. This would delete and restart it with the newly downloaded configuration and container image. Delete each pod one at a time, making sure to wait until a pod has fully restarted and working (i.e. check logs, debug) etc, before deleting the next.