service XXX was unable to place a task because no container instance met all of its requirements. Instance XXX is already using a port required by your task - amazon-ecs

service crm was unable to place a task because no container instance met all of its requirements. The closest matching container-instance e45856e4821149XXXXXXXXX is already using a port required by your task.
Is there any way to resolve this? I am currently trying to run 4 task definitions. I have referred to the AWS documents below, but I am not sure which solution is ideal for this issue. How do I set up dynamic port mapping?
Registered ports: ["22","4000","2376","2375","51678","51679"]
https://aws.amazon.com/premiumsupport/knowledge-center/dynamic-port-mapping-ecs/
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-instance-requirement-error/
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-event-messages.html#service-event-messages-1
I have tried the AWS docs above for this issue, but I am still not sure how to resolve the port conflict.

If you specify host ports in your task definition's port mappings, those ports are occupied on the host. If you leave the host port out (and only specify the container port), a dynamically allocated host port is assigned automatically.
So: don't specify the host port in the task definition.
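As a sketch, with the EC2 launch type and bridge network mode, a port mapping like the following requests a dynamic host port. The family, container name, and image below are placeholders; port 4000 is taken from your registered-ports list:

{
  "family": "crm",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "crm-app",
      "image": "<your-image>",
      "portMappings": [
        { "containerPort": 4000, "hostPort": 0 }
      ]
    }
  ]
}

With hostPort set to 0 (or omitted), Docker picks a free ephemeral port on the instance for each task, so several copies of the same task can share one instance.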
The target group associated with your service keeps track of those dynamically assigned ports, so a load balancer (or any other resource that supports target groups) can route traffic to the tasks.
Alternatively, you can add more instances to your Auto Scaling group so that the task can be placed on an instance where the port is not in use; capacity providers can create new instances automatically when needed. This is usually far less efficient than dynamic port mapping, though, depending on the performance characteristics of your workloads.
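If you take the extra-capacity route, a capacity provider can be attached to your Auto Scaling group. A rough sketch with the AWS CLI; the provider name, cluster name, and ASG ARN are placeholders:

aws ecs create-capacity-provider \
  --name crm-capacity-provider \
  --auto-scaling-group-provider "autoScalingGroupArn=<your-asg-arn>,managedScaling={status=ENABLED,targetCapacity=100}"
aws ecs put-cluster-capacity-providers \
  --cluster <your-cluster> \
  --capacity-providers crm-capacity-provider \
  --default-capacity-provider-strategy capacityProvider=crm-capacity-provider,weight=1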

Related

Dynamic port mapping for ECS tasks

I want to run a socket program in AWS ECS with the client and server in one task definition. I am able to run it when I use awsvpc network mode and connect to the server on localhost every time. This is good, since I don't need to know the IP address of the server. The issue is that the server has to start on some port, and if I run 10 of these tasks, only 3 tasks (= the number of running instances) run at a time. This is clearly because 10 tasks cannot open the same port. I could manually check for open ports before starting the server and somehow write the port to a shared Docker volume where the client can read it and connect, but this seems complicated and leaves unnecessary code in my server. For services there is dynamic port mapping using an Application Load Balancer, but there isn't anything for simply running tasks.
How can I run multiple socket programs in AWS ECS without having to manage the port numbers myself?
If you're using awsvpc mode, each task gets its own ENI, so there shouldn't be any port conflict. However, each instance type has a limited number of ENIs available. You can raise that limit by enabling ENI trunking, which, however, is supported by only a handful of instance types:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-instance-eni.html#eni-trunking-supported-instance-types
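For reference, the account-level opt-in for ENI trunking can be done with the AWS CLI; note that only container instances launched after opting in receive a trunk interface. A sketch, assuming you have the required permissions:

aws ecs put-account-setting-default --name awsvpcTrunking --value enabled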

MongoDB MMS monitoring agent does not find group members

I have installed the latest MongoDB MMS agent (6.5.0.456) on Ubuntu 16.04 and initialised the replica set, so I am running a single-node replica set with the monitoring agent enabled. The agent works fine; however, it does not seem to actually find the replica set member:
[2018/05/26 18:30:30.222] [agent.info] [components/agent.go:Iterate:170] Received new configuration: Primary agent, Assigned 0 out of 0 plus 0 chunk monitor(s)
[2018/05/26 18:30:30.222] [agent.info] [components/agent.go:Iterate:182] Nothing to do. Either the server detected the possibility of another monitoring agent running, or no Hosts are configured on the Group.
[2018/05/26 18:30:30.222] [agent.info] [components/agent.go:Run:199] Done. Sleeping for 55s...
[2018/05/26 18:30:30.222] [discovery.monitor.info] [components/discovery.go:discover:746] Performing discovery with 0 hosts
[2018/05/26 18:30:30.222] [discovery.monitor.info] [components/discovery.go:discover:803] Received discovery responses from 0/0 requests after 891ns
I can see two monitoring agent processes:
/bin/sh -c /usr/bin/mongodb-mms-monitoring-agent -conf /etc/mongodb-mms/monitoring-agent.config >> /var/log/mongodb-mms/monitoring-agent.log 2>&1
/usr/bin/mongodb-mms-monitoring-agent -conf /etc/mongodb-mms/monitoring-agent.config
However, if I terminate one, it also tears down the other, so I do not think that is the problem.
So, the question is: what is the Group the agent is referring to? Where is that configured? How do I find out which Group the agent refers to, and how do I check whether that group is configured correctly?
The rs.config() output looks fine, with one replica set member whose host field looks correct; I can use that value to connect to the instance with the mongo command. No auth is configured.
EDIT
It looks as if Cloud Manager now needs to be configured with a seed host; it then discovers all the other nodes in the replica set. This seems different from the pre-Cloud-Manager days, when the agent was able to track the replica set on its own, if I remember correctly... There is probably still an easier way to get this done, so I am leaving this question open for now...
So, the question is: what is the Group the agent is referring to? Where is that configured? How do I find out which Group the agent refers to, and how do I check whether that group is configured correctly?
Configuration values for the Cloud Manager agent (such as mmsGroupId and mmsApiKey) are set in the config file, which is /etc/mongodb-mms/monitoring-agent.config by default. The agent needs this information in order to communicate with the Cloud Manager servers.
For more details, see Install or Update the Monitoring Agent and Monitoring Agent Configuration in the Cloud Manager documentation.
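As a sketch, the config file is a plain key=value properties file. The Group ID and API key below are placeholders you would copy from the Cloud Manager UI:

mmsGroupId=<your-group-id>
mmsApiKey=<your-agent-api-key>
# mmsBaseUrl usually only needs changing for Ops Manager installations;
# Cloud Manager agents use the default endpoint.
#mmsBaseUrl=https://cloud.mongodb.com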
It looks as if Cloud Manager now needs to be configured with a seed host; it then discovers all the other nodes in the replica set.
Unless a MongoDB process is already managed by Cloud Manager automation, I believe it has always been the case that you need to add an existing MongoDB process to monitoring to start the process of initial topology discovery. Once a deployment is monitored, any changes in deployment membership should automatically be discovered by the Cloud Manager agent.
Production deployments should have authentication and access control enabled, so in addition to adding a seed hostname and port via the Cloud Manager UI, you usually need to provide appropriate credentials.

Service instance count in Azure Service Fabric

Is there a way to find out, through code, the number of instances of a service type that are running in a Service Fabric cluster at any given time? One way is to look at the ApplicationManifest file and read the instance count set there, but that value can be overridden by a parameter file. Any ideas?
If you want to examine your services programmatically, look at FabricClient, which exposes a number of operations that can show you the status of deployed services. For your specific question, getting the number of running instances, have a look at FabricClient.QueryClient.GetReplicaList...(...); it will give you a list of replicas (in the case of stateless services, that is the same as instances).
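A minimal sketch, assuming the async variant GetReplicaListAsync and a placeholder service URI; a stateless service reports its instances as replicas:

using System;
using System.Fabric;
using System.Threading.Tasks;

class InstanceCounter
{
    static async Task Main()
    {
        var client = new FabricClient();
        var serviceName = new Uri("fabric:/MyApp/MyService"); // placeholder

        // A service may span several partitions; stateless services often have one.
        var partitions = await client.QueryClient.GetPartitionListAsync(serviceName);
        int count = 0;
        foreach (var partition in partitions)
        {
            var replicas = await client.QueryClient.GetReplicaListAsync(
                partition.PartitionInformation.Id);
            count += replicas.Count; // for a stateless service these are its instances
        }
        Console.WriteLine($"Running instances: {count}");
    }
}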

How do I deploy an entire environment (group of servers) using Chef?

I have an environment (Graphite) that looks like the following:
N worker servers
1 relay server that forwards work to these worker servers
1 web server that can query the relay server.
I would like to use Chef to set up and deploy this environment in EC2 without having to create each worker server individually, get their IPs and set them as attributes in the relay cookbook, create that relay, get its IP, set it as an attribute in the web server cookbook, and so on.
Is there a way, using Chef, to make sure that the environment is properly deployed, configured, and running without having to set the IPs manually? In particular, I would like to be able to add a worker server and have the relay update its worker list, or swap the relay server for another one and have the web server update its reference accordingly.
Perhaps this is not what Chef is intended for, and it is more suited to per-server configuration and deployment; if that is the case, what technology would facilitate this?
Things you will need are:
knife-ec2 - used to start/stop Amazon EC2 instances (see the sketch after this list).
chef-server - so that you can use search in your recipes; it should also be accessible from your EC2 instances.
search - with this you can find, among the nodes provisioned by Chef, exactly the ones you need using different queries.
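For illustration, creating a worker node and giving it the role it will later be found by might look like this; the AMI ID, flavor, and node name are made up:

knife ec2 server create -I ami-xxxxxxxx -f m1.small -r "role[lr-node]" -N lr-node-01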
I recently wrote an article, How to Run Dynamic Cloud Tests with 800 Tomcats, Amazon EC2, Jenkins and LiveRebel. It involves installing a load balancer, and the load balancer must know the IP addresses of all the servers it balances. Here is how the recipe for a balanced node looks up the load balancer:
search(:node, "roles:lr-loadbalancer").first
And here is how the load balancer recipe finds all the balanced nodes and updates the Apache config file:
# Find every node that carries the lr-node role
lr_nodes = search(:node, "role:lr-node")

# Render the Apache proxy-balancer config with the current node list
template ::File.join(node[:apache2][:home], 'conf.d', 'httpd-proxy-balancer.conf') do
  mode 0644
  variables(:lr_nodes => lr_nodes)
  notifies :restart, 'service[apache2]'
end
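The template itself can then iterate over the node list. A minimal sketch of what httpd-proxy-balancer.conf.erb might contain; the balancer name and port 8080 are assumptions for illustration:

<Proxy balancer://lr-cluster>
<% @lr_nodes.each do |lr_node| -%>
  BalancerMember http://<%= lr_node['ipaddress'] %>:8080
<% end -%>
</Proxy>
ProxyPass / balancer://lr-cluster/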
Perhaps you are looking for this?
http://www.infochimps.com/platform/ironfan

JBoss: multiple instances of a server on multiple ports not recommended in a production environment?

The following document says:
This is easier to do and does not require a sysadmin. However, it is not the preferred approach for production systems for the reasons listed above. This approach is usually used in development to try out clustering behavior.
What are the risks of this approach in a production environment? In WebLogic it is pretty common, and I have seen a few production environments running with multiple ports (managed servers).
https://community.jboss.org/wiki/ConfiguringMultipleJBossInstancesOnOnemachine
The wiki answers that question clearly. Here is the text from the wiki for your reference:
Where possible, it is advised to use a different IP address for each instance of JBoss, rather than changing the ports or using the Service Binding Manager, for the following reasons:
When you have a port conflict, it is very difficult to troubleshoot, given a large number of ports and app servers.
Too many ports make firewall rules too difficult to maintain.
Isolating the IP addresses gives you a guarantee that no other app server will be using the ports.
Each upgrade requires that you go in and reset the binding manager again. Most upgrades will replace the conf/jboss-service.xml file, which contains the Service Binding Manager configuration.
The configuration is much simpler. When defining new ports (either through the Service Binding Manager or by going in and changing all the ports in the configuration), it is always a headache to figure out which ports are not already taken. If you use a NIC per JBoss instance, all you have to change is the IP address binding argument when executing run.sh or run.bat (-b).
Once you have 3 or 4 applications using different ports, the chances really increase that you will step on another application's ports. It just gets more difficult to keep ports from conflicting.
JGroups will pick random ports within a cluster to communicate. Sometimes when clustering, if you are using the same IP address, two random ports may get picked in two different app servers (using the binding manager) that conflict. You can configure around this, but it is better not to run into this situation at all.
On the whole, having an individual IP address for each instance of an app server causes fewer problems (some of those problems are mentioned here, some are not).
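For example, with two IP addresses bound on the machine, each instance can keep the default ports on its own address; the addresses and server configuration names below are made up:

./run.sh -c node1 -b 192.168.1.10
./run.sh -c node2 -b 192.168.1.11

Each instance then uses the stock port numbers on its own address, so nothing needs to be changed in the Service Binding Manager.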