How to determine the type of OS a Service Fabric cluster is running on - azure-service-fabric

I'm trying to find out which OS (Windows or Linux) my Service Fabric cluster is running on once I'm connected to it. I need to determine the OS type so I can adjust the application endpoint names in ServiceManifest.xml accordingly: on Windows the entry point needs the '.exe' extension, but if you deploy your service to a Linux cluster it is omitted.
Windows
<EntryPoint>
  <ExeHost>
    <Program>MyApp.exe</Program>
  </ExeHost>
</EntryPoint>
Linux
<EntryPoint>
  <ExeHost>
    <Program>MyApp</Program>
  </ExeHost>
</EntryPoint>
I want to have only one ServiceManifest.xml in the project and modify it accordingly.
I've looked at the sfctl and PowerShell CLI utilities, but I can't find any info about which OS the cluster is running on.
Any idea how to determine the OS type once you connect to the cluster?
Update:
I've found that if your cluster is running in the Azure cloud you can use az sf cluster list, and in its output you can find the vmImage="Windows" property. But you can't use this on localhost.
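For example, the Azure CLI's global --query flag (standard JMESPath) can pull out just that property; the resource group name below is a placeholder:

az sf cluster list --resource-group my-rg --query "[].vmImage"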

On localhost you can use regular PowerShell Core to find out what OS the script is running on:
[System.Environment]::OSVersion.Platform
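Building on that, here is a minimal sketch of patching the manifest before deployment. It assumes PowerShell Core 6+ (for the built-in $IsWindows automatic variable), the manifest layout shown above, and a placeholder file path:

# Assumption: run from the folder containing the service's ServiceManifest.xml.
[xml]$manifest = Get-Content '.\ServiceManifest.xml'
$exeHost = $manifest.ServiceManifest.CodePackage.EntryPoint.ExeHost
# $IsWindows is built into PowerShell Core 6+; append '.exe' only on Windows.
if ($IsWindows -and -not $exeHost.Program.EndsWith('.exe')) {
    $exeHost.Program += '.exe'
}
$manifest.Save((Resolve-Path '.\ServiceManifest.xml').Path)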

Related

Is Linux and Windows in one cluster supported on Azure Service Fabric?

Is there a possibility to have mixed windows and linux work nodes on Azure Service Fabric, or the cluster must be homogeneous?
No, this is still not possible.
This was also asked before:
Service Fabric: Is it possible to run both Linux and Windows nodes

How to submit a Dask job to a remote Kubernetes cluster from local machine

I have a Kubernetes cluster set up using Kubernetes Engine on GCP, and I have installed Dask using the Helm package manager. My data are stored in a Google Cloud Storage bucket.
Running kubectl get services on my local machine shows the exposed services and their external IPs.
I can open the dashboard and jupyter notebook using the external IP without any problems. However, I'd like to develop a workflow where I write code in my local machine and submit the script to the remote cluster and run it there.
How can I do this?
I tried following the instructions in Submitting Applications using dask-remote. I also tried exposing the scheduler using kubectl expose deployment with type LoadBalancer, though I do not know if I did this correctly. Suggestions are greatly appreciated.
Yes, if your client and workers share the same software environment then you should be able to connect a client to a remote scheduler using the publicly visible IP.
from dask.distributed import Client

# The default dask-scheduler port is 8786; use the external IP from kubectl.
client = Client('REDACTED_EXTERNAL_SCHEDULER_IP:8786')
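Note that, per the caveat above, the dask and distributed versions on your local client should match those running in the scheduler and worker pods; a mismatched software environment is a common reason such a connection fails or misbehaves.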

How to add a Windows node while creating a cluster using Kubernetes on Google Cloud Platform?

I have tried creating a Kubernetes cluster, but all the nodes run a Linux-based OS. I have a Windows-based image stored on Docker Hub and need to deploy this app to the Kubernetes cluster. I am using https://console.cloud.google.com/kubernetes/ to create the cluster.
While creating nodes, the settings offer only two image options: Container-Optimized OS (cos) (default) and Ubuntu.
Windows nodes are not supported by Google Kubernetes Engine. There is a feature request that you can track: Feature request: Support for Windows Server Containers in GKE
You can launch your own Google Compute Engine VM and run Windows containers there. This article provides more information.
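A hedged one-line example of creating such a VM (the instance name is a placeholder, and the Windows image family may differ in your project; list current ones with gcloud compute images list --project windows-cloud):

gcloud compute instances create my-windows-host --image-project=windows-cloud --image-family=windows-2019-core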
I don't think you can run Windows nodes in GKE, even though Kubernetes itself supports Windows nodes (https://kubernetes.io/docs/getting-started-guides/windows/).
In my opinion, the other options you have are:
Run an on-prem Kubernetes cluster with your Windows licenses (the control plane would still run on Linux; only the nodes would be Windows-based)
Use GCE instead of GKE to run your containers: https://cloud.google.com/compute/docs/containers/ and https://cloud.google.com/blog/products/gcp/how-to-run-windows-containers-on-compute-engine
Hope that helps!

Service Fabric: Is it possible to run both Linux and Windows nodes

Is it possible to run both Linux and Windows nodes within the same cluster on Azure Service Fabric?
No, that is currently not possible.

How do you deploy from Visual Studio to a remote on-prem Service Fabric cluster?

I have installed an unsecured Service Fabric development cluster on a shared, on-premises VM with the firewall turned off. I can connect to it locally (on the same VM) via PowerShell, and deploy locally via Visual Studio. However, I am unable to connect or deploy to the cluster from any other box on our network, getting the following error message from PowerShell:
Connect-ServiceFabricCluster : No cluster endpoint is reachable, please check if there is connectivity/firewall/DNS issue.
As I said, the firewall is turned off on the machine hosting the cluster. What am I doing wrong?
OneBox deployment of Service Fabric (installed via the SDK) does not support remote publishing.
A template for configuring a shared dev/test cluster consisting of three nodes can be found here: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-creation-for-windows-server/#download-the-service-fabric-standalone-package
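Once a standalone cluster is configured, connecting from another machine looks roughly like this (the host name is a placeholder; 19000 is the default client connection port on an unsecured cluster):

Connect-ServiceFabricCluster -ConnectionEndpoint 'your-cluster-host:19000'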
/Mikkel