Forcing Re-Initialization of 2 machines in a stack - aws-cloudformation

I have a large stack which has about 10 machines. Of those, 2 are being used for development and so are constantly being changed.
What I would like to be able to do is terminate those 2 instances and then have them recreated.
Is there a way to easily rebuild only those 2 instances without having to take down and rebuild the whole stack?

Typically an Auto Scaling group will do that work for you: terminate an instance in the group and the ASG launches a replacement. I've used CloudFormation to create a "dev" stack that includes an ASG and the other EC2 bits to make this kind of work easier.
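A minimal sketch of that shape (the AMI, instance type, and subnet ID below are placeholders, not values from the original stack):

Resources:
  DevLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: ami-12345678        # placeholder AMI
        InstanceType: t3.micro       # placeholder instance type
  DevGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: '2'
      MaxSize: '2'
      DesiredCapacity: '2'
      VPCZoneIdentifier:
        - subnet-12345678            # placeholder subnet
      LaunchTemplate:
        LaunchTemplateId: !Ref DevLaunchTemplate
        Version: !GetAtt DevLaunchTemplate.LatestVersionNumber

With this in place, terminating either dev instance makes the group launch a fresh one from the launch template, leaving the rest of the stack untouched.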

Increase ECS Fargate memory using EFS

One of my applications running on ECS (with Fargate) needs more storage; the 20 GB of ephemeral storage is not sufficient for my application, so I am planning to use EFS:
volume {
  name = "efs-test-space"

  efs_volume_configuration {
    file_system_id     = aws_efs_file_system.efs_apache.id
    root_directory     = "/"
    transit_encryption = "ENABLED"
    # note: container_path is not an argument of efs_volume_configuration;
    # the mount path goes in the container definition's mountPoints (see below)

    authorization_config {
      access_point_id = aws_efs_access_point.efs-access-point.id
      iam             = "ENABLED"
    }
  }
}
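For reference, the containerPath ("/home/user/efs/") is set in the JSON passed to container_definitions rather than on the volume itself; a minimal sketch (container name and image are assumed):

[
  {
    "name": "app",
    "image": "httpd:2.4",
    "mountPoints": [
      {
        "sourceVolume": "efs-test-space",
        "containerPath": "/home/user/efs/"
      }
    ]
  }
]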
I can see it is mounted and my application is able to access the mounted folder, but for HA and parallelism my ECS task count is 6. Since I am using one EFS filesystem, the same volume is shared by all tasks. The problem I am stuck on is providing a unique mounted EFS filepath for each task.
I added something like /home/user/efs/{random_id}, but I want to make this part of the task lifecycle; that is, the folder should get deleted when my task is stopped or destroyed.
So is there a way to mount EFS as a bind mount, or to enable deletion of the folder during the task destroy stage?
You can now increase your ephemeral storage size up to 200 GiB; all you need to do is set the ephemeralStorage parameter in the Fargate task definition:
"ephemeralStorage": {
"sizeInGiB": 100
}
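Since the question's task definition is written in Terraform, the equivalent there is the ephemeral_storage block on aws_ecs_task_definition; a minimal sketch (family, sizing, and file name are assumptions):

resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 1024
  memory                   = 4096
  container_definitions    = file("containers.json")

  # Fargate ephemeral storage is configurable from 21 to 200 GiB
  ephemeral_storage {
    size_in_gib = 100
  }
}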
While this could be achieved in theory, there are a lot of moving parts to it, because the lifecycle of EFS (and of its Access Points) is decoupled from the lifecycle of tasks. This means you would need to create Access Points out of band and on the fly, AND these Access Points (and their data) are not automatically deleted when you tear down your tasks. The Fargate/EFS integration did not have this as a primary use case. The primary use cases were more around sharing the same data among different tasks (which is what you are observing, but it doesn't serve your use case!), in addition to providing persistence for a single task.
What would solve your problem easily is a new feature the Fargate team is working on right now that will allow you to expand the local ephemeral storage as a property of the task. I can't say more about the timing, but the feature is actively being developed, so you may want to wait for it rather than build a complex workflow to achieve the same result.

Enterprise Architect: Setting run state from initial attribute values when creating instance

I am on Enterprise Architect 13.5, creating a deployment diagram. I am defining our servers as nodes, and using attributes on them so that I can specify their details, such as Disk Controller = RAID 5 or Disks = 4 x 80 GB.
When dragging instances of these nodes onto a diagram, I can select "Set Run State" on them and set values for all the attributes I have defined, just like it is done in the deployment diagram in the EAExample project.
Since our design will have several servers using the same configuration, my plan was to use the "Initial Value" column in the attribute definition on the node to specify the default configuration, so that all instances I create automatically come up with reasonable values; when the default changes, I would only have to change the initial values on the original node instead of going to every instance.
My problem is that even though I define initial values, the instances I create do not show any values when I drag them onto the diagram. Only by setting the Run State on each instance can I get them to show the values I want.
Is this expected behavior? Btw, I can reproduce the same using classes and instances of them, so this is not merely a deployment diagram issue.
Any ideas are greatly appreciated! I would also be thankful if you could describe a better way to achieve the same result with EA, in case I am doing it wrong.
What you could do is either write a script to assist with it or create an add-in to bring in more automation. Scripting is easier to implement, but you need to run the script manually (which, however, can add the values in a batch for newly created diagram objects). An add-in could do this on element creation if you hook into EA_OnPostNewElement.
What you need to do is first get the classifier of the object. Using
Repository.GetElementByID(object.ClassifierID)
will return it. You can then check the attributes of that class and make a list of those with an initial value. Finally, you add the run states to the object by assigning object.RunState a crude string. E.g. for a != 33 it would be
@VAR;Variable=a;Value=33;Op=!=;@ENDVAR;
Just join as many of these as you need for multiple run states.
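A minimal VBScript sketch of that approach (untested; property names are per the EA automation API, and it assumes the script is run with an instance selected in the diagram):

' Build a run state for the selected object from its classifier's initial values
dim obj, classifier, runState, attr, i
set obj = Repository.GetContextObject()
set classifier = Repository.GetElementByID(obj.ClassifierID)
runState = ""
for i = 0 to classifier.Attributes.Count - 1
    set attr = classifier.Attributes.GetAt(i)
    if Len(attr.Default) > 0 then  ' Default holds the attribute's initial value
        runState = runState & "@VAR;Variable=" & attr.Name & _
                   ";Value=" & attr.Default & ";Op==;@ENDVAR;"
    end if
next
obj.RunState = runState
obj.Update()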

AnyLogic: limiting used resources

I am currently modelling the entire production process of a company with limited human resources.
Part of the model is visualized here:
The model: in the example there are multiple blocks, but my focus is on the resource-using blocks. The assemblers use 2 resources; the service, seize, and rackstore blocks use 1 resource each. As you can imagine, they are all fully utilized, as I only have a resource pool of 6 people (and there are more processes beyond this).
Question: because of this full utilization my entire process is blocked, as there are no free resources. I would therefore like to ask whether it is possible to limit, e.g., the blue part of the example flow to 3 employees from the same resource pool. That way I can set priorities between the processes and make the process work again.
Use hold blocks to stop products from flowing once the number of resources in use reaches 3.
The code:
On enter delay (when the resource is seized):
resourcesInAssembler++;              // one more resource in use in this part of the flow
if (resourcesInAssembler == 3) {     // limit reached: block all entry points
    hold.block();
    hold1.block();
    hold2.block();
}
On exit (when the resource is released):
resourcesInAssembler--;              // one resource freed again
hold.unblock();                      // reopen the entry points
hold1.unblock();
hold2.unblock();

SSIS Child Packages not starting at the same time

I have a Database Project inside of SSDT 2012 that contains an SSIS project using the package deployment model. The goal of the project is to load a lot of information at once that would normally take too much time for one package to do. So I divided the work between 15 children, each doing its own separate part and loading data into various SQL tables. Inside this project there is one parent package and 15 child packages. Because of the type of data being loaded, I have to use a script task to insert it all. Each child package is the same, differing only in the parameters that divide the data up between the children. Each child package is executed using an External Reference through the file system.
The problem I'm having is that while the parent package is supposed to start all the child packages at once, not all of the children are starting. It's as if there is a limit to how many packages can start at one time (it looks like about 10 or 11). Once it hits this limit, the rest don't start; but as soon as one package finishes, another immediately starts.
Is there a property I'm missing that limits how many packages can run at the same time? Based on what others are able to run simultaneously, there seems to be something I'm overlooking. I read somewhere that memory can be a factor, but when I look at Task Manager, I don't see anything above 15% of my memory in use.
The problem is solved by looking at the property MaxConcurrentExecutables on the parent package. In my parent package this property had its default value of -1, which means the number of tasks that run in parallel (in this case, child packages) is calculated as the number of cores on your PC plus 2.
In my case I have 8 cores on my laptop; plus 2, that put me at 10 packages running at the same time. You can override this value by putting a higher positive number in its place to allow more children to run. After putting in 20, all tasks started at once.
More information about this can be found here:
https://andrewbutenko.wordpress.com/2012/02/15/ssis-package-maxconcurrentexecutables/

Managing instances of a PowerCLI script

I wrote a PowerCLI script that can automatically deploy a new VM with some given parameters.
In a few words, the script connects to a given vCenter and starts the deployment from an existing template.
Can I regulate the number of instances of my script that will run on the same computer?
Can I regulate the number of instances of my script that will run on different computers when both instances are connected to the same vCenter?
To resolve the issue I thought of developing a server-side application that each instance of my script would connect to, with the server then handling all the instances, but I am not sure whether such a thing is possible in PowerCLI/PowerShell.
Virtually anything is poshable, or so they say. What you're describing may be overkill, however, depending on your scenario. Multiple instances of the same script will each run in their own PowerShell process, and vCenter allows hundreds of simultaneous connections. Of course, the content or context of your script might dictate that it shouldn't run in simultaneous instances. I haven't experimented, but it seems there are ways to determine the names of running PowerShell scripts; so if you keep the script name consistent on each computer, you could probably build in some checks along the lines of the linked answer.
But depending on your particulars, it might be easier to go a different way. For example, if you don't want the script to run simultaneously because you have hard-coded the name of a New-OSCustomizationSpec, a simple/kludgey solution might be to check for that spec and disconnect/exit/roll back if it exists; a sketch follows below. A better solution might be to give the new spec a unique name. But the devil is in the details. Hope that helps a bit.
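A rough sketch of that check (the spec name and exit behavior here are illustrative assumptions, not from the original script):

# Assumes Connect-VIServer has already been run against the target vCenter
$specName = 'dev-deploy-spec'   # the hypothetical hard-coded spec name
if (Get-OSCustomizationSpec -Name $specName -ErrorAction SilentlyContinue) {
    # Another instance has presumably created the spec and is still running
    Write-Warning "Spec '$specName' already exists; exiting to avoid a collision."
    Disconnect-VIServer -Confirm:$false
    exit 1
}

# The alternative: sidestep the collision entirely with a per-run unique name
$uniqueSpecName = "dev-deploy-spec-$([guid]::NewGuid().ToString('N'))"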