How to stop the Reverse Proxy on a particular node? - azure-service-fabric

Is it possible to stop only the Reverse Proxy service/process without affecting everything else?
As far as I can see, the only way to disable it currently is to stop the whole Windows Service Fabric Host Service, which effectively means taking the node down.

Hi @BulakaievOleksandr,
I am not sure what kind of cluster you are running, so I hope my assumption that you are talking about a standalone cluster is correct.
As far as I know, the reverse proxy is disabled by default, so enabling it requires explicit steps; if we reverse those steps, we should be able to disable it again.
According to this and this, we should check the following:
In ClusterConfig.json, check that every node type has its reverseProxyEndpointPort property removed (the snippet below shows the property you need to delete):
"properties": {
...
"nodeTypes": [
{
"name": "NodeType0",
...
"reverseProxyEndpointPort": "19081",
...
}
],
...
}
Then make sure that, inside fabricSettings, the ApplicationGateway/Http section has IsEnabled set to false:
"fabricSettings": [
...
{
"name": "ApplicationGateway/Http",
"parameters": [
{
"name": "IsEnabled",
"value": "false"
}
]
}
],
...
Hope this helps.

I haven't checked this, but I think it is possible. When you build the cluster configuration you can specify node types:
"nodeTypes": [
{
"name": "PrimaryNodeType",
"clientConnectionEndpointPort": "19000",
"clusterConnectionEndpointPort": "19001",
"leaseDriverEndpointPort": "19002",
"serviceConnectionEndpointPort": "19003",
"httpGatewayEndpointPort": "19080",
"reverseProxyEndpointPort": "19081",
"applicationPorts": {
"startPort": "20001",
"endPort": "20031"
},
"isPrimary": true
}
]
Notice that the name of this node type is "PrimaryNodeType". You can add a secondary node type that does not configure "reverseProxyEndpointPort" (a sketch of such a node type follows the snippet below) and use it for the nodes on which you want the reverse proxy disabled:
"nodes": [
{
"nodeName": "VM01",
"iPAddress": "10.1.0.11",
"nodeTypeRef": "PrimaryNodeType",
"faultDomain": "fd:/dc0/r1",
"upgradeDomain": "UD1"
},
{
"nodeName": "VM02",
"iPAddress": "10.1.0.12",
"nodeTypeRef": "SecondaryNodeType",
"faultDomain": "fd:/dc0/r2",
"upgradeDomain": "UD2"
},
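I haven't verified this either, but the SecondaryNodeType entry in nodeTypes might simply mirror the primary one with the reverseProxyEndpointPort property left out (whether isPrimary needs to be present on non-primary types is an assumption here), roughly:
{
  "name": "SecondaryNodeType",
  "clientConnectionEndpointPort": "19000",
  "clusterConnectionEndpointPort": "19001",
  "leaseDriverEndpointPort": "19002",
  "serviceConnectionEndpointPort": "19003",
  "httpGatewayEndpointPort": "19080",
  "applicationPorts": {
    "startPort": "20001",
    "endPort": "20031"
  },
  "isPrimary": false
}
Since the two node types end up on different machines, they should be able to reuse the same system ports.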
It should work, although with Service Fabric you never know; common-sense solutions often do not work. Please let me know.

Related

Pod is being terminated and created again due to scale-up, and it's running twice

I have an application that runs some code and, at the end, sends an email with a report of the data. When I deploy pods on GKE, certain pods get terminated and a new pod is created due to autoscaling, but the problem is that the termination happens after my code has already finished, so the email is sent twice for the same data.
Here is the JSON file of the deploy API:
{
"apiVersion": "batch/v1",
"kind": "Job",
"metadata": {
"name": "$name",
"namespace": "$namespace"
},
"spec": {
"template": {
"metadata": {
"name": "********"
},
"spec": {
"priorityClassName": "high-priority",
"containers": [
{
"name": "******",
"image": "$dockerScancatalogueImageRepo",
"imagePullPolicy": "IfNotPresent",
"env": $env,
"resources": {
"requests": {
"memory": "2000Mi",
"cpu": "2000m"
},
"limits":{
"memory":"2650Mi",
"cpu":"2650m"
}
}
}
],
"imagePullSecrets": [
{
"name": "docker-secret"
}
],
"restartPolicy": "Never"
}
}
}
}
And here is a screenshot of the pod events:
Any idea how to fix that?
Thank you in advance.
"Perhaps you are affected by this "Note that even if you specify .spec.parallelism = 1 and .spec.completions = 1 and .spec.template.spec.restartPolicy = "Never", the same program may sometimes be started twice." from doc. What happens if you increase terminationgraceperiodseconds in your yaml file? – "
@danyL my problem was that I had other jobs deploying pods on my nodes with higher priority, so Kubernetes was trying to terminate my running pods even though the job was already done and the email had already been sent. I fixed the problem by adjusting the request and limit resources in all my JSON files (see the sketch below); I don't know if it's the perfect solution, but for now it solved my problem.
Thank you all for your help.
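For reference, a hedged sketch of that kind of change, using illustrative numbers taken from the manifest above: giving the container explicit, equal requests and limits puts the pod in the Guaranteed QoS class, so the kubelet will not evict it under node pressure, and right-sizing requests across the other jobs makes it less likely that the scheduler has to preempt this pod for the higher-priority ones.
"resources": {
  "requests": {
    "memory": "2650Mi",
    "cpu": "2650m"
  },
  "limits": {
    "memory": "2650Mi",
    "cpu": "2650m"
  }
}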

Getting Kafka Connect JMX metrics reporting into Datadog

I am working on a project involving Kafka Connect. We have a Kafka Connect cluster running on Kubernetes with some Snowflake connectors already spun up and working. The part we are having issues with now is getting the JMX metrics from the Kafka Connect cluster to report into Datadog. From my understanding of the docs (https://docs.confluent.io/home/connect/monitoring.html#using-jmx-to-monitor-kconnect), the workers already emit metrics by default and we just need to find a way to get them reported to Datadog.
In our Kubernetes ConfigMap we have these values set:
CONNECT_KAFKA_JMX_PORT: "9095"
KAFKA_JMX_PORT: "9095"
JMX_PORT: "9095"
I have also included the line from our launch script where we set KAFKA_JMX_OPTS (using the JMX port from above):
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=<redacted> -Dcom.sun.management.jmxremote.rmi.port=${JMX_PORT}"
I've been looking online and all over Stack Overflow and haven't actually seen an example of people getting JMX metrics reported to Datadog and standing up a dashboard there, so I was wondering if anyone has experience with this.
Firstly, your Datadog agents need to have the Java/JMX integration.
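The usual way to get that is to run one of the JMX-enabled agent images, which carry a -jmx tag suffix. As a minimal sketch of the relevant container entry in the agent's DaemonSet, in the same JSON style as the manifests above (the container name here is illustrative):
{
  "name": "datadog-agent",
  "image": "datadog/agent:latest-jmx"
}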
Secondly, use the Datadog JMX integration with Autodiscovery, where kafka-connect in the annotation names must match the container name:
annotations:
  ad.datadoghq.com/kafka-connect.check_names: '["jmx"]'
  ad.datadoghq.com/kafka-connect.init_configs: '[{}]'
  ad.datadoghq.com/kafka-connect.instances: |
    [
      {
        "host": "%%host%%",
        "port": 9095,
        "conf": [
          {
            "include": {
              "domain": "kafka.connect",
              "type": "connector-task-metrics",
              "bean_regex": [
                "kafka.connect:type=connector-task-metrics,connector=.*,task=.*"
              ],
              "attribute": {
                "batch-size-max": {
                  "alias": "jmx.kafka.connect.connector.batch_size_max"
                },
                "status": {
                  "metric_type": "gauge",
                  "alias": "jmx.kafka.connect.connector.status",
                  "values": {
                    "running": 0,
                    "paused": 1,
                    "failed": 2,
                    "destroyed": 3,
                    "unassigned": -1
                  }
                },
                "batch-size-avg": {
                  "alias": "jmx.kafka.connect.connector.batch_size_avg"
                },
                "offset-commit-avg-time-ms": {
                  "alias": "jmx.kafka.connect.connector.offset_commit_avg_time"
                },
                "offset-commit-max-time-ms": {
                  "alias": "jmx.kafka.connect.connector.offset_commit_max_time"
                },
                "offset-commit-failure-percentage": {
                  "alias": "jmx.kafka.connect.connector.offset_commit_failure_percentage"
                }
              }
            }
          },
          {
            "include": {
              "domain": "kafka.connect",
              "type": "source-task-metrics",
              "bean_regex": [
                "kafka.connect:type=source-task-metrics,connector=.*,task=.*"
              ],
              "attribute": {
                "source-record-poll-rate": {
                  "alias": "jmx.kafka.connect.task.source_record_poll_rate"
                },
                "source-record-write-rate": {
                  "alias": "jmx.kafka.connect.task.source_record_write_rate"
                },
                "poll-batch-avg-time-ms": {
                  "alias": "jmx.kafka.connect.task.poll_batch_avg_time"
                },
                "source-record-active-count-avg": {
                  "alias": "jmx.kafka.connect.task.source_record_active_count_avg"
                },
                "source-record-write-total": {
                  "alias": "jmx.kafka.connect.task.source_record_write_total"
                },
                "source-record-poll-total": {
                  "alias": "jmx.kafka.connect.task.source_record_poll_total"
                }
              }
            }
          }
        ]
      }
    ]

Kubernetes nginx ingress 0.22 not respecting cookie affinity annotation?

We recently upgraded to nginx-ingress 0.22. Before this upgrade, my service was using the old annotation prefix ingress.kubernetes.io (e.g. ingress.kubernetes.io/affinity: cookie) and everything was working as I expected. However, upon upgrading to 0.22, affinity stopped being applied to my service (I don't see sticky anywhere in the nginx.conf).
I looked at the docs and changed the prefix to nginx.ingress.kubernetes.io as shown in this example, but it didn't help.
Is there some debug log I can look at that will show the configuration parsing/building process? My guess is that some other setting is preventing this from working (I can't imagine the k8s team shipped a release with this feature completely broken), but I'm not sure what that could be.
My ingress config as shown by the k8s dashboard follows:
"kind": "Ingress",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "example-ingress",
"namespace": "master",
"selfLink": "/apis/extensions/v1beta1/namespaces/master/ingresses/example-ingress",
"uid": "01e81627-3b90-11e9-bb5a-f6bc944a4132",
"resourceVersion": "23345275",
"generation": 1,
"creationTimestamp": "2019-02-28T19:35:30Z",
"labels": {
},
"annotations": {
"ingress.kubernetes.io/backend-protocol": "HTTPS",
"ingress.kubernetes.io/limit-rps": "100",
"ingress.kubernetes.io/proxy-body-size": "100m",
"ingress.kubernetes.io/proxy-read-timeout": "60",
"ingress.kubernetes.io/proxy-send-timeout": "60",
"ingress.kubernetes.io/secure-backends": "true",
"ingress.kubernetes.io/secure-verify-ca-secret": "example-ingress-ssl",
"kubernetes.io/ingress.class": "nginx",
"nginx.ingress.kubernetes.io/affinity": "cookie",
"nginx.ingress.kubernetes.io/backend-protocol": "HTTPS",
"nginx.ingress.kubernetes.io/limit-rps": "100",
"nginx.ingress.kubernetes.io/proxy-body-size": "100m",
"nginx.ingress.kubernetes.io/proxy-buffer-size": "8k",
"nginx.ingress.kubernetes.io/proxy-read-timeout": "60",
"nginx.ingress.kubernetes.io/proxy-send-timeout": "60",
"nginx.ingress.kubernetes.io/secure-verify-ca-secret": "example-ingress-ssl",
"nginx.ingress.kubernetes.io/session-cookie-expires": "172800",
"nginx.ingress.kubernetes.io/session-cookie-max-age": "172800",
"nginx.ingress.kubernetes.io/session-cookie-name": "route",
"nginx.org/websocket-services": "example"
}
},
"spec": {
"tls": [
{
"hosts": [
"*.example.net"
],
"secretName": "example-ingress-ssl"
}
],
"rules": [
{
"host": "*.example.net",
"http": {
"paths": [
{
"path": "/",
"backend": {
"serviceName": "example",
"servicePort": 443
}
}
]
}
}
]
},
"status": {
"loadBalancer": {
"ingress": [
{}
]
}
}
}
I tested sticky session affinity with NGINX Ingress version 0.22 and can confirm that it works just fine. When I reproduced your configuration, I replaced the wildcard host host: "*.example.net" with a concrete one, e.g. host: "stickyingress.example.net", just to rule out the wildcard, and affinity worked fine again.
After some searching I found, from this issue, that:
"Wildcard hostnames are not supported by the Ingress spec (only SSL wildcard certificates are)."
Even though that issue was opened against NGINX Ingress controller version 0.21.0, it explains the behavior you are seeing.
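For example, the relevant part of the spec above would look like this with the wildcard host replaced by the concrete host from my test (substitute your real hostname); the tls.hosts entry can keep the wildcard, since wildcard SSL certificates are supported:
"rules": [
  {
    "host": "stickyingress.example.net",
    "http": {
      "paths": [
        {
          "path": "/",
          "backend": {
            "serviceName": "example",
            "servicePort": 443
          }
        }
      ]
    }
  }
]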

Service Fabric Networking - Is it possible to have two nodes of different Node types on one VM?

I'm asking this in relation to my previous post. I've done some reading but haven't seen a clear answer to my comment on Diego's answer:
Service Fabric - How to reserve or protect my hardcoded Port
UPDATE: As I flesh this out, I think the question really becomes whether you can have multiple nodes on one VM. It's not really about node types but about the nodes themselves. So the question is: can I have a VM with multiple IPs that hosts Service Fabric, and then host two nodes of different node types on it?
That way I could solve the above problem by having one node type for external access and a second node type for internal access, instead of hard-coding a port that is outside the range used during cluster setup.
I'll get greedy and ask the follow-up question here:
If I cannot have multiple IPs, is there anything to be concerned about when using a port outside of the range? Will Service Fabric use that port number for anything else?
For example, I wouldn't want Service Fabric to stop managing the microservice like all the other microservices just because its port is outside of the range.
For example, in my on-prem ClusterConfig.Windows.MultiMachine.json file I currently have:
"nodes": [
{
"nodeName": "vm0",
"iPAddress": "Server1.DomainName.net",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r0",
"upgradeDomain": "UD0"
},
{
"nodeName": "vm1",
"iPAddress": "Server2.DomainName.net",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r1",
"upgradeDomain": "UD1"
},
{
"nodeName": "vm2",
"iPAddress": "Server3.DomainName.net",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r2",
"upgradeDomain": "UD2"
}
],
"nodeTypes": [
{
"name": "NodeType0",
"clientConnectionEndpointPort": "19000",
"clusterConnectionEndpointPort": "19001",
"leaseDriverEndpointPort": "19002",
"serviceConnectionEndpointPort": "19003",
"httpGatewayEndpointPort": "19080",
"reverseProxyEndpointPort": "19081",
"applicationPorts": {
"startPort": "20001",
"endPort": "20100"
},
"isPrimary": true
}
],
Can I instead do something like this, where IPs 1 and 4, 2 and 5, and 3 and 6 are each on the same VM? Notice that the start and end port ranges for the two node types are now split, to allow for the hard-coded WebAPI endpoints.
"nodes": [
{
"nodeName": "vm0",
"iPAddress": "IPAddress_1",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r0",
"upgradeDomain": "UD0"
},
{
"nodeName": "vm1",
"iPAddress": "IPAddress_2",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r1",
"upgradeDomain": "UD1"
},
{
"nodeName": "vm2",
"iPAddress": "IPAddress_3",
"nodeTypeRef": "NodeType0",
"faultDomain": "fd:/dc1/r2",
"upgradeDomain": "UD2"
}
{
"nodeName": "vm3",
"iPAddress": "IPAddress_4",
"nodeTypeRef": "NodeType1",
"faultDomain": "fd:/dc1/r0",
"upgradeDomain": "UD0"
},
{
"nodeName": "vm4",
"iPAddress": "IPAddress_5",
"nodeTypeRef": "NodeType1",
"faultDomain": "fd:/dc1/r1",
"upgradeDomain": "UD1"
},
{
"nodeName": "vm5",
"iPAddress": "IPAddress_6",
"nodeTypeRef": "NodeType1",
"faultDomain": "fd:/dc1/r2",
"upgradeDomain": "UD2"
}
],
"nodeTypes": [
{
"name": "NodeType0",
"clientConnectionEndpointPort": "19000",
"clusterConnectionEndpointPort": "19001",
"leaseDriverEndpointPort": "19002",
"serviceConnectionEndpointPort": "19003",
"httpGatewayEndpointPort": "19080",
"reverseProxyEndpointPort": "19081",
"applicationPorts": {
"startPort": "20001",
"endPort": "20500"
},
"isPrimary": true
},
{
"name": "NodeType1",
"clientConnectionEndpointPort": "19000",
"clusterConnectionEndpointPort": "19001",
"leaseDriverEndpointPort": "19002",
"serviceConnectionEndpointPort": "19003",
"httpGatewayEndpointPort": "19080",
"reverseProxyEndpointPort": "19081",
"applicationPorts": {
"startPort": "20501",
"endPort": "21000"
}
}
],
Thanks in advance,
Greg
Yes and No.
When you install an SF cluster on a development machine, it simulates a five-node cluster on the same machine; by default you can't do that when you provision from the Azure portal.
Service Fabric is nothing more than Windows services running on top of virtual machines.
The question here should be:
If I have two node types in the same machine, would it solve my port
conflict problem?
The answer is no, because both would be competing for the ports, the same way as when you try to run a single service bound to port 80 on all nodes of your local development machine.
As I suggested on the other question, if creating separate node types is not an option, you should use hard-coded ports outside the application port range.
For example:
- ServiceA is an API that exposes operations on port 80
- ServiceB is a background worker service that uses a random port (taken from the application ports range)
Designing your service this way, you won't have issues with ports.
The other option would be to let all services use random ports and use the reverse proxy to contact them; check it here.
In general, you keep one public-facing service (e.g. an ASP.NET Core API) to be consumed by external clients (anyone accessing from outside the SF cluster). You use this API service to invoke the other microservices hosted in the same cluster, and you can use remoting as the communication stack within the cluster. This way you don't have to worry about ports inside the cluster.
The only port needed is 80 (or some other port) on which the API service listens for external clients. This is the preferred approach per the documentation as well.
As noted in the Standalone Deployment Preparation steps, you cannot have two nodes on one VM:
"In a production environment however, Service Fabric supports only one node per physical or virtual machine."
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-standalone-deployment-preparation#determine-the-initial-cluster-size

How could a spring-boot application determine if it is running on cloud foundry?

I'm writing a microservice with spring-boot. The DB is MongoDB. The service works perfectly in my local environment, but after I deployed it to Cloud Foundry it doesn't work; the reason is that the MongoDB connection times out.
I think the root cause is that the application doesn't know it is running in the cloud, because it is still connecting to 127.0.0.1:27017 rather than the redirected port.
How can it know it is running in the cloud? Thank you!
EDIT:
There is a MongoDB instance bound to the service, and when I checked the environment information I got the following:
{
"VCAP_SERVICES": {
"mongodb": [
{
"credentials": {
"hostname": "10.11.241.1",
"ports": {
"27017/tcp": "43417",
"28017/tcp": "43135"
},
"port": "43417",
"username": "xxxxxxxxxx",
"password": "xxxxxxxxxx",
"dbname": "gwkp7glhw9tq9cwp",
"uri": "xxxxxxxxxx"
},
"syslog_drain_url": null,
"volume_mounts": [],
"label": "mongodb",
"provider": null,
"plan": "v3.0-container",
"name": "mongodb-business-configuration",
"tags": [
"mongodb",
"document"
]
}
]
}
}
{
"VCAP_APPLICATION": {
"cf_api": "xxxxxxxxxx",
"limits": {
"fds": 16384,
"mem": 1024,
"disk": 1024
},
"application_name": "mock-service",
"application_uris": [
"xxxxxxxxxx"
],
"name": "mock-service",
"space_name": "xxxxxxxxxx",
"space_id": "xxxxxxxxxx",
"uris": [
"xxxxxxxxxx"
],
"users": null,
"application_id": "xxxxxxxxxx",
"version": "c7569d23-f3ee-49d0-9875-8e595ee76522",
"application_version": "c7569d23-f3ee-49d0-9875-8e595ee76522"
}
}
From my understanding, my spring-boot service should try to connect to port 43417 rather than 27017, right? Thank you!
Finally I found that the reason was that I hadn't specified the profile. After adding the following to my manifest.yml, it works:
env:
  SPRING_PROFILES_ACTIVE: cloud