I am in the process of migrating our containers from m5.large instances to a1.large instances in our production ECS cluster.
But on this instance type, when the user data script runs the start ecs command, I noticed it is no longer available:
[ec2-user@ip-10-1-1-90 ~]$ start ecs
-bash: start: command not found
I am using the latest recommended AMI (ami-0c812cd5f7b956092):
aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/arm64/recommended
What am I missing?
My guess is that you're using the older "Amazon Linux AMI" ECS AMI on your m5 nodes. ARM instances are only supported on Amazon Linux 2, which uses systemd.
For any Amazon Linux 2 based AMI (ARM or x86), you'll want to run systemctl start ecs instead. For compatibility, the service-style invocation also works on either Amazon Linux AMI or Amazon Linux 2: service ecs start.
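For example, on an Amazon Linux 2 ECS-optimized instance you should see an ecs systemd unit; a minimal check looks like this (the host name is just the one from your prompt, used as a placeholder):
[ec2-user@ip-10-1-1-90 ~]$ sudo systemctl status ecs   # confirm the unit exists and see its state
[ec2-user@ip-10-1-1-90 ~]$ sudo systemctl start ecs    # start the agent now
[ec2-user@ip-10-1-1-90 ~]$ sudo systemctl enable ecs   # start it at boot (usually already enabled in the AMI)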
Related
I have successfully created an ECS cluster (EC2 Linux + Networking). Is it possible to log in to the cluster to perform some administrative tasks? I have not deployed any containers or tasks to it yet. I can't find any hints for this in the AWS console or the AWS documentation.
The "cluster" is just a logical grouping of resources. The "cluster" itself isn't a server you can log into or anything. You would perform actions on the cluster via the AWS console or the AWS API. You can connect to the EC2 servers managed by the ECS cluster individually. You would do that via the standard ssh method you would use to connect to any other EC2 Linux server.
ECS will take care of most of the administrative work for you. You simply have to deploy and manage your applications on ECS. If you set up ECS correctly, you will never have to connect to the instances.
Follow these instructions to deploy your service (docker image): https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service.html
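As a rough sketch, once your task definition is registered you can create the service from the CLI like this (the cluster, service, and task definition names are placeholders):
aws ecs create-service \
    --cluster my-cluster \
    --service-name my-web-service \
    --task-definition my-web-task:1 \
    --desired-count 2 \
    --launch-type EC2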
Also, you can use CloudWatch to store container logs so that you don't have to connect to the instances to check them: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_awslogs.html
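That is configured per container in the task definition with the awslogs log driver; a minimal fragment looks something like this (the log group name, region, and prefix are placeholders):
"logConfiguration": {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-web-task",
        "awslogs-region": "us-east-1",
        "awslogs-stream-prefix": "web"
    }
}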
I know Kind needs Docker and Minikube needs VirtualBox, but for learning Kubernetes features, are they the same?
Thank you.
In terms of learning Kubernetes features, they are the same. You will get the same Kubernetes and the same Kubernetes resources in both: Pods, Deployments, ConfigMaps, StatefulSets, Secrets, etc., assuming they both run the same Kubernetes version.
Under the hood they are very similar too, with some implementation differences; a quick sketch of how you start each one follows the lists below.
Minikube
Runs K8s in a VM (version 1.7.0 now supports running minikube on Docker)
Supports multiple hypervisors (VirtualBox, HyperKit, Parallels, etc.)
You need to ssh into the VM to run docker (minikube ssh).
On the positive side, if you are using VMs, you get the VM isolation which is 'more secure' per se.
Update: It does support running in docker with --driver=docker
Kind
Runs Docker in a VM (part of the Docker Desktop installation on Mac or Windows)
Runs Kubernetes in that "Docker" VM
Supports Hyperkit (Mac) or Hyper-V (Windows) hypervisors.
Has the convenience that you can run the docker client from your Mac or Windows.
You can actually run it on Linux with no need for a VM (it's a native Docker installation on Linux)
It runs all K8s components in a single container.
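For a quick feel of both, the start-up commands are roughly these (the cluster name and driver choice are just examples; each tool points kubectl at the cluster it creates):
minikube start --driver=virtualbox   # or --driver=docker, --driver=hyperkit, ...
kind create cluster --name dev       # the "node" is a Docker container on the host
kubectl get nodes                    # works against whichever cluster was created last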
Is there a possibility to have mixed Windows and Linux worker nodes on Azure Service Fabric, or must the cluster be homogeneous?
No, this is still not possible.
This was also asked before:
Service Fabric: Is it possible to run both Linux and Windows nodes
I have tried creating a Kubernetes cluster, but all the nodes run a Linux-based OS (Container-Optimized OS (cos) (default) and Ubuntu). I have a Windows-based image stored on Docker Hub and need to deploy this app to the Kubernetes cluster. I am using https://console.cloud.google.com/kubernetes/ to create the cluster.
While creating the nodes, the settings offer only two options: Container-Optimized OS (cos) (default) and Ubuntu.
Windows nodes are not supported by Google Kubernetes Engine. There is a feature request that you can track: Feature request: Support for Windows Server Containers in GKE
You can launch your own Google Compute Engine VM and run Windows containers on it. This article provides more information.
I don't think you can run Windows nodes in GKE, even though Kubernetes itself supports Windows nodes (https://kubernetes.io/docs/getting-started-guides/windows/).
In my opinion, the other options you have are the following (a rough gcloud sketch for the second one comes after the list):
Run an on-prem Kubernetes cluster with your Windows licenses (the control plane would still run on Linux; only the nodes would be Windows-based)
Use GCE instead of GKE to run your containers: https://cloud.google.com/compute/docs/containers/ and https://cloud.google.com/blog/products/gcp/how-to-run-windows-containers-on-compute-engine
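If you go the GCE route, creating a Windows VM that can run Windows containers looks roughly like this (the instance name and machine type are arbitrary, and the image family is an assumption, so check gcloud compute images list --project windows-cloud for what is currently available):
gcloud compute instances create my-windows-host \
    --machine-type n1-standard-2 \
    --image-project windows-cloud \
    --image-family windows-2019-core-for-containers   # image family is an assumption; verify with the list command above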
Hope that helps!
Is it possible to run both Linux and Windows nodes within the same cluster on Azure Service Fabric?
No, that is currently not possible.