Azure Service Fabric endpoint after container deployment - azure-service-fabric

This was a container deployment.
When run locally, it used the machine name: http://joemachine:8229/account/Index.
"Application URL" property of Service Fabric project is "http://{MachineName}:8229"
After deployment, under the All Applications menu, it shows
fabric:/Application1
Cluster URL: http://abcf12.uksouth.cloudapp.azure.com:19000
The endpoint section from the manifest is given below.
There is a reverse proxy port (19081), and there is a port given in the manifest below. Which port affects the URL?
Also, since the machine name is used in the properties (http://{MachineName}:8229), how do I find the machine name of the container, as this is used in the URL?
What is the full URL to access the application?
<Resources>
  <Endpoints>
    <!-- This endpoint is used by the communication listener to obtain the port on which to
         listen. Please note that if your service is partitioned, this port is shared with
         replicas of different partitions that are placed in your code. -->
    <Endpoint Protocol="http" Name="AST.XyTypeEndpoint" Type="Input" Port="8229" />
  </Endpoints>
</Resources>
Edit:
In the innermost node in Service Fabric Explorer, under Address -> Endpoints, an address of 10.0.0.2:8229 is given. I tried this, but it's not working.
Also, is the node name the machine name? I tried putting that in the URL, and it is not working either.

Partial answer: I verified that the container is working successfully inside the cluster.
I RDP-ed to a cluster node (mstsc /v:abcf12.uksouth.cloudapp.azure.com:3389) with the admin username and password, opened a command prompt and ran:
set DOCKER_HOST=localhost:2375
(exactly this port), and then
docker ps --no-trunc
which lists the containers. Then
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" s1233
(where s1233 is the first few characters of the container id) gives the container's IP. Browsing http://IP/account/Index from the node proved that the service is loading.
The question still remains: how do I expose it to the outside world?
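Not part of the original post, but for anyone stuck at the same point: on an Azure-hosted cluster, outside traffic only reaches a node port if the cluster's Azure Load Balancer has a rule (and health probe) for that port and the Network Security Group allows it. A rough CLI sketch for the endpoint port 8229 from the manifest above; the resource group, load balancer and NSG names are placeholders you would replace with your cluster's own resources:

# Placeholders: adjust to your cluster's resource group, load balancer and NSG names.
RG=my-sf-resource-group
LB=LB-abcf12
NSG=NSG-abcf12

# Health probe so the load balancer knows which nodes answer on 8229
az network lb probe create --resource-group $RG --lb-name $LB \
  --name AppProbe8229 --protocol Tcp --port 8229

# Forward public port 8229 to the same port on the nodes
az network lb rule create --resource-group $RG --lb-name $LB \
  --name AppRule8229 --protocol Tcp \
  --frontend-port 8229 --backend-port 8229 --probe-name AppProbe8229

# Allow inbound 8229 through the network security group
az network nsg rule create --resource-group $RG --nsg-name $NSG \
  --name Allow8229 --priority 1010 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 8229

With a rule like that in place, the application should be reachable at http://abcf12.uksouth.cloudapp.azure.com:8229/account/Index, i.e. the cluster's DNS name plus the endpoint port, rather than any machine name.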

Related

dynamic container hostname injection to application properties and /etc/hosts

I am trying to create a deployment of 3 replicas of containers of the same image.
Each container needs to include its unique hostname in two places:
/etc/hosts, or resolvable by DNS
an application properties file and the command line
However, the hostname is not known until after the container is started, and even after it's started, I found that from one container I can ping the IP of another container but NOT its container name.
I understand that I can write a script to update these, but is there a more generic way to get this done? This is not an uncommon task/requirement.
Thanks.
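No answer was recorded here, but for illustration: in Kubernetes one common way to get predictable, DNS-resolvable per-replica hostnames is a StatefulSet with a headless Service, which gives each pod a stable name of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local. A minimal entrypoint sketch under that assumption; the service name myapp-headless, the paths and the binary are all hypothetical:

#!/bin/sh
# Derive the pod's stable DNS name and inject it before starting the app.
FQDN="$(hostname).myapp-headless.default.svc.cluster.local"

# 1) application properties file
echo "app.hostname=${FQDN}" >> /app/config/application.properties

# 2) /etc/hosts entry (often unnecessary, since the headless Service
#    already makes ${FQDN} resolvable from the other replicas)
echo "$(hostname -i) ${FQDN}" >> /etc/hosts

# Hand over to the real process, passing the name on the command line too
exec /app/bin/myapp --hostname "${FQDN}"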

Jenkins Kubernetes slaves are offline

I'm currently trying to run a Jenkins build on top of a Kubernetes minikube 2-node cluster. This is the code that I am using: https://github.com/rsingla2012/docker-development-youtube-series-youtube-series/tree/main/jenkins. Every time I run the build, I get an error that the slave is offline. This is the output of "kubectl get all -o wide -n jenkinsonkubernetes2" after I apply the files:
cmd line logs
Looking at the Jenkins logs below, Jenkins is able to spin up and provision a slave pod but as soon as the container is run (in this case, I'm using the inbound-agent image although it's named jnlp), the pod is terminated and deleted and another is created. Jenkins logs
(screenshot: https://i.stack.imgur.com/mudPi.png)
I also added a new Jenkins logger for org.csanchez.jenkins.plugins.kubernetes at all levels, the log of which is shown below.
kubernetes logs
This led me to believe that it might be a network issue or a firewall blocking the port, so I checked with netstat: although Jenkins was listening on 0.0.0.0:8080, port 50000 was not. So I opened port 50000 with an inbound rule for Windows 10, but after running the build, it's still not listening. For reference, I also created a NodePort for the service and port-forwarded the master pod to port 32767, so that the Jenkins UI is accessible at 127.0.0.1:32767. I believed opening the port should fix the issue, but upon using Microsoft Telnet to double check, I received the error "Connecting To 127.0.0.1...Could not open connection to the host, on port 50000: Connect failed" with the command "open 127.0.0.1 50000". One thing I thought was causing the problem was the lack of a server certificate when accessing the Kubernetes API from Jenkins, so I added the Kubernetes server certificate key to the Kubernetes cloud configuration, but I am still receiving the same error. My Kubernetes URL is set to https://kubernetes.default:443, the Jenkins URL is http://jenkins, and I'm using the Jenkins tunnel jenkins:50000 with no concurrency limit.
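No answer was posted, but one thing worth checking, since inbound (jnlp) agents dial back to the controller over the agent port: the tunnel jenkins:50000 is resolved inside the cluster, so the jenkins Service itself must expose port 50000 in addition to 8080; a Windows host firewall rule does not affect that in-cluster traffic. A hedged check/fix sketch (the service and namespace names are taken from the question, everything else is a placeholder):

# Does the jenkins Service expose 50000 as well as 8080?
kubectl get svc jenkins -n jenkinsonkubernetes2 -o yaml

# If not, add a second port to the Service, e.g. via
kubectl edit svc jenkins -n jenkinsonkubernetes2
#   ports:
#   - name: http
#     port: 8080
#     targetPort: 8080
#   - name: agent
#     port: 50000
#     targetPort: 50000

# Then confirm the agent port is reachable from inside the cluster
kubectl run jnlp-test --rm -it --image=busybox --restart=Never \
  -n jenkinsonkubernetes2 -- nc -zv jenkins 50000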

Openshift Kafka Connect Project access through Postman

I wanted to create a Kafka Connect connector on an OpenShift project through Postman. But when sending the POST request through Postman, I get the error below. In OpenShift, is there any specific command we need to run to expose a pod as a service (so it can be reached through Postman)? Please advise.
Possible reasons you are seeing this page:
The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc) have at least one pod running.
The error you are getting is because you haven't created any route to interact with the required service.
In OpenShift, a service is exposed to applications outside of the cluster through routes.
Use the following command to expose a service outside of the cluster:
oc expose service <service-name>
You can include many options with the above command. To learn more, use the following command:
oc expose service --help
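For example, with Kafka Connect the REST API normally listens on port 8083, so you expose that service as a route and POST to its /connectors endpoint. All names and the connector payload below are placeholders, not taken from the question:

# Expose the Kafka Connect REST service as a route
oc expose service my-connect-cluster-connect-api -n my-kafka-project

# Find the hostname the router assigned to it
ROUTE_HOST=$(oc get route my-connect-cluster-connect-api -n my-kafka-project -o jsonpath='{.spec.host}')

# Create a connector through the route (the same request works from Postman)
curl -X POST -H "Content-Type: application/json" \
  --data '{"name":"my-connector","config":{"connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector","tasks.max":"1","file":"/tmp/test.txt","topic":"test-topic"}}' \
  "http://${ROUTE_HOST}/connectors"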

openshift does not honor the USER directive in Dockerfile

I'm new to OpenShift/k8s. The Docker image I'm running in OpenShift uses USER blabla. But when I exec into the pod, it uses a different user rather than the one in the Dockerfile.
I'm wondering why, and how can I work around this?
Thanks
For security, cluster administrators have the option to force containers to run with cluster assigned uids. By default, most containers run using a uid from a range assigned to the project.
This is controlled by the configured SecurityContextConstraints.
To allow containers to run as the user declared in their Dockerfile (even though this can expose the cluster, security-wise), grant the pod's service account access to the anyuid SecurityContextConstraint: oadm policy add-scc-to-user anyuid system:serviceaccount:<your ns>:<your service account>
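To make that concrete, a sketch using the newer oc adm spelling of oadm and a dedicated service account; the account, project and deployment names are placeholders:

# Create a service account and allow it to run containers as any uid
oc create serviceaccount useroot -n my-project
oc adm policy add-scc-to-user anyuid -z useroot -n my-project

# Point the workload at that service account; the redeployed pods
# then honor the USER set in the Dockerfile
oc set serviceaccount deployment/my-app useroot -n my-project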

How to update services in Kubernetes?

Background:
Let's say I have a replication controller with some pods. When these pods were first deployed, they were configured to expose port 8080. A service (of type LoadBalancer) was also created to expose port 8080 publicly. Later we decide that we want to expose an additional port from the pods (port 8081). We change the pod definition and do a rolling-update with no downtime, great! But we want this port to be publicly accessible as well.
Question:
Is there a good way to update a service without downtime (for example by adding an additional port to expose)? If I just do:
kubectl replace -f my-service-with-an-additional-port.json
I get the following error message:
Replace failed: spec.clusterIP: invalid value '': field is immutable
If you name the ports in the pods, you can specify the target ports by name in the service rather than by number, and then the same service can target pods that use different port numbers.
Or, as Yu-Ju suggested, you can do a read-modify-write of the live state of the service, such as via kubectl edit. The error message was due to not specifying the clusterIP that had already been set.
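As an illustration of that read-modify-write (the service name is a placeholder): export the live object so the already-assigned clusterIP is preserved, add the new port, and replace it:

# Dump the live Service, including the clusterIP the API server assigned
kubectl get service my-service -o yaml > my-service.yaml

# Edit my-service.yaml: add the extra port under spec.ports,
# leave spec.clusterIP exactly as it is, then apply the change
kubectl replace -f my-service.yaml

# Or do the same in one step, editing the live object directly
kubectl edit service my-service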
In such case you can create a second service to expose the second port, it won't conflict with the other one and you'll have no downtime.
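For instance (names hypothetical), the extra port can get its own LoadBalancer service that selects the same pods as the existing one:

# Second service for the new port, pointing at the same replication controller
kubectl expose rc my-rc --name=my-service-8081 --port=8081 --target-port=8081 --type=LoadBalancer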
If you have more than one pod running for the same service, you may use the Kubernetes Engine within the Google Cloud Console as follows:
Under "Workloads", select your Replication Controller. Within that screen, click "EDIT", then update and save your replication controller details.
Under "Discover & Load Balancing", select your Service. Within that screen, click "EDIT", then update and save your service details. If you changed ports, you should see those reflected under the "Endpoints" column when you've finished editing the details.
Assuming you have at least two pods running on a machine (and a restart policy of Always), if you wanted to update the pods with the new configuration or container image:
Under "Workloads", select your Replication Controller. Within that screen, scroll down to "Managed pods". Select a pod, then in that screen click "KUBECTL" -> "Delete". Note, you can do the same with the command line: kubectl delete pod <podname>. This would delete and restart it with the newly downloaded configuration and container image. Delete each pod one at a time, making sure to wait until a pod has fully restarted and working (i.e. check logs, debug) etc, before deleting the next.