ECS: I need to create different services from the same task but using different environment variables... is it possible? - amazon-ecs

My problem is the following:
I need to run a Docker container for every user on a platform. All these containers use the same Docker image, but each one receives environment variables such as user_id, queue_id, etc. so it knows which user it belongs to.
I know that I can create task definitions and pass environment variables to them, but it doesn't seem right to have thousands of identical task definitions using the same Docker image but different variables... suppose I want to update the Docker image or make some other change; I would have to iterate over and change every single task definition.
My idea was to create a single task definition and then run different services from it, passing different environment variables, so I would have something like "username_x_service_task1" for username_x running task1.
But this doesn't seem possible, at least not using boto3; the only custom things I can define on the service are tags.
My questions are:
Is this approach correct, or must I create different task definitions for different users even though the image is the same?
Would it be possible, inside my running service, to access its custom tags, so I could use the tags to pass user_id, queue_id and other required information?
Is there any better way to do this? :sweat_smile:
Thank you, guys.

Yes, this is possible. You can use container overrides when launching tasks to override the environment variables set in the task definition, or to add additional environment variables, so every user shares the same task definition and only the overrides differ.
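For illustration, here is a minimal boto3 sketch that launches a task from the shared task definition with per-user container overrides; the cluster, task definition and container names (and the user/queue values) are placeholders to adapt:

import boto3

ecs = boto3.client("ecs")

# run one task from the shared task definition, overriding only the
# environment variables for this particular user
response = ecs.run_task(
    cluster="my-cluster",              # placeholder cluster name
    taskDefinition="worker-task",      # the single shared task definition
    count=1,
    overrides={
        "containerOverrides": [
            {
                "name": "worker",      # container name from the task definition
                "environment": [
                    {"name": "USER_ID", "value": "42"},
                    {"name": "QUEUE_ID", "value": "user-42-queue"},
                ],
            }
        ]
    },
)

Updating the image then only means registering a new revision of that one task definition.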

Related

Assigning Testplan from Code when using Locust as a Library and Outputting to Timescale

When using Locust as a library and redirecting the output to TimescaleDB, all requests are logged with the same run_id and testplan. I want a way to distinguish between tests so that I can group different tests into different dashboards.
I tried changing request headers but was not able to modify the testplan variable in any way. I want to dynamically assign test plans from code when using Locust as a library, so that different test runs end up on different dashboards.
You can override the test plan name by passing --override-plan-name <yourname> or using env var LOCUST_OVERRIDE_PLAN_NAME.
A new run_id should be generated whenever a new test is started; are you sure this is not the case? Not that it would help differentiate the runs very much in Grafana...
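If you are wiring everything up in code, one option is to set that environment variable before Locust and the Timescale listener read their configuration. The sketch below is only an illustration under that assumption; MyUser, the host and the plan name are placeholders, and attaching the Timescale listener itself is omitted:

import os
# set this as early as possible, before the Timescale listener reads its options
os.environ["LOCUST_OVERRIDE_PLAN_NAME"] = "checkout-flow-nightly"

import gevent
from locust import HttpUser, task, between
from locust.env import Environment

class MyUser(HttpUser):
    host = "https://example.test"   # placeholder host
    wait_time = between(1, 2)

    @task
    def index(self):
        self.client.get("/")

env = Environment(user_classes=[MyUser])
runner = env.create_local_runner()
# attach your Timescale listener here, as you already do when running as a library
runner.start(user_count=5, spawn_rate=1)
gevent.spawn_later(60, runner.quit)   # stop the run after 60 seconds
runner.greenlet.join()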

How to provide dynamic values for approvals and checks in yaml pipelines?

I'm working on an integration between Azure Pipelines and ServiceNow's change management module. To achieve that the ServiceNow Change Management extension has been installed and configured according to this documentation page. In Azure DevOps we are using multistage yaml pipelines, which should create standard preapproved changes in ServiceNow.
The connection itself between the two applications works fine, I managed to put together a pipeline that creates change requests, waits until their status changes and then closes them. However, I'd like to pass some values set in the pipeline runs to the created change requests and I couldn't find a way to do it.
First I added a service connection to our Azure DevOps project, and created the ServiceNow check for it. I experimented a little with adding different expressions to it, like setting the short description to ${{ parameters.shortDescription }}, or defining a variable in the pipeline as ShortDescription: ${{ parameters.shortDescription }} and using that variable in the check as $(ShortDescription) or $[ variables.ShortDescription ]. Unfortunately, none of these expressions got resolved. I also realized it is possible to use the predefined variables, but the values I'd like to set cannot be expressed with predefined variables. For example, selecting an assignment group would be pretty straightforward from a parameter defined as a list, but impossible to select from predefined variables.
So as a next idea, I tried to link a variable group to the check and update the variables through logging commands. Even though the variables from the group got resolved, they only showed the values I had set through the UI as static defaults. The dynamic values set via the logging commands were not visible. I played around for some time and verified that I can update the definition of the variable groups through the Azure CLI or REST API, so I can add new variables or update existing ones. Thus I tried to add a new variable to the linked group during the pipeline run, named ShortDescription_$(Build.BuildId). Even though it got added properly, I could not use it within the check, because it required double variable resolution, like $(ShortDescription_$(Build.BuildId)), and this expression was not resolved, not even partially. It remained $(ShortDescription_$(Build.BuildId)).
Then I started thinking about using only one variable from the group with a static name (e.g. ShortDescription) for all pipeline runs. However, I feel it would create a race condition and could cause some inconsistencies.
So as a last resort, I tried to put together an extension with an Agent task and a ServerGate task, which are capable of storing the values I want to pass to the change request and reading the stored values in an agentless environment. The problem here is that the second task is not visible as a check for service connections. It is there as a release pipeline gate and looks good there, but I can't utilize it that way. Based on a question I found, this does not seem to be a problem with my task specifically. To verify it, I copied the content of the same ServiceNow check I used before and added it to my extension as a contribution with a different task id, and it did not show up, just as the question stated.
Which means that now I can either:
- create a change request through my custom server task (as the ServerGate task can be used properly in yaml if it is changed to a Server task), but that way I can't wait for the state change of the ServiceNow ticket, or
- create the change request in a separate stage from the one where I want to use it, update it first in the same stage where I created it via the first-party check, and wait for the state change in the stage where I would normally create it.
The second can work, but it has its own problems, like having misleading values stored in the change request for the stage id field, or not having multiple change requests created for multiple run attempts of the deployment stage. Also, I feel like it's not how the extension's task and check are meant to be used.
Unfortunately, I'm out of ideas as to how this dynamic value passing could be achieved, if it's possible at all. Could you please help by sharing ideas or pointing out errors in my attempts?

Enterprise Architect: Setting run state from initial attribute values when creating instance

I am on Enterprise Architect 13.5, creating a deployment diagram. I am defining our servers as nodes, and using attributes on them so that I can specify their details, such as Disk Controller = RAID 5 or Disks = 4 x 80 GB.
When dragging instances of these nodes onto a diagram, I can select "Set Run State" on them and set values for all the attributes I have defined, just like it is done in the deployment diagram in the EAExample project.
Since our design will have several servers using the same configuration, my plan was to use the "Initial Value" column in the attribute definition on the node to specify the default configuration, so that all instances I create automatically come up with reasonable values, and when the default changes, I would only change the initial values on the original node instead of having to go to all instances.
My problem is that even though I define initial values, the instances I create do not show any values when I drag them onto the diagram. Only by setting the Run State on each instance can I get them to show the values I want.
Is this expected behavior? Btw, I can reproduce the same using classes and instances of them, so this is not merely a deployment diagram issue.
Any ideas are greatly appreciated! I'm also thankful if you can describe a better way to achieve the same result with EA, in case I am doing it wrong.
What you could do is either write a script to assist with this or create an add-in to bring in more automation. Scripting is easier to implement, but you need to run the script manually (it can, however, add the values in a batch for newly created diagram objects). An add-in could do this on element creation if you hook into EA_OnPostNewElement.
What you need to do first is get the classifier of the object. Using
Repository.GetElementByID(object.ClassifierID)
will return it. You can then check the attributes of that class and make a list of those that have an initial value. Finally, you set the run states of the object by assigning a crude string to object.RunState. E.g. for a != 33 it would be
#VAR;Variable=a;Value=33;Op=!=;#ENDVAR;
Just join as many as you need for multiple run states.
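A minimal sketch of such a script, driving EA's automation interface from Python via COM (pywin32); the element ID is a placeholder, and the run-state string simply follows the format shown above, so check it against what your EA version actually stores:

import win32com.client

# attach to the running Enterprise Architect instance
ea = win32com.client.GetActiveObject("EA.App")
repo = ea.Repository

instance = repo.GetElementByID(1234)                      # hypothetical ID of the instance object
classifier = repo.GetElementByID(instance.ClassifierID)   # the node/class it is based on

# collect every classifier attribute that has an initial value
parts = []
for i in range(classifier.Attributes.Count):
    attr = classifier.Attributes.GetAt(i)
    if attr.Default:                                      # Default holds the initial value
        parts.append("#VAR;Variable=%s;Value=%s;Op==;#ENDVAR;" % (attr.Name, attr.Default))

# join the variables into one run-state string and save the instance
instance.RunState = "".join(parts)
instance.Update()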

Managing instances of a PowerCLI script

I wrote a PowerCLI script that can automatically deploy a new VM with some given parameters.
In a few words, the script connects to a given VC and starts the deployment from an existing template.
Can I regulate the number of instances of my script that will run on the same computer?
Can I regulate the number of instances of my script that will run on different computers when both instances are connected to the same VC?
To resolve the issue, I thought of developing a server-side application that each instance of my script would connect to, and the server would then handle all the instances, but I am not sure if such a thing is possible in PowerCLI/PowerShell.
Virtually anything is poshable, or so they say. What you're describing may be overkill, however, depending on your scenario. Multiple instances of the same script will each run in their own PowerShell process. Virtual Center allows hundreds of simultaneous connections. Of course, the content or context of your script might dictate that it shouldn't run in simultaneous instances. I haven't experimented, but it seems like there are ways to determine the names of running PowerShell scripts. So if you keep the script name consistent on each computer, you could probably build in some checks along the lines of the linked answer.
But depending on your particulars, it might be easier to go a different way. For example, if you don't want the script to run simultaneously because you have hard-coded the name of a new OSCustomizationSpec, a simple/kludgey solution might be to check for that new spec and disconnect/exit/roll back if it exists. A better solution might be to give the new spec a unique name. But the devil is in the details. Hope that helps a bit.

How to make sequential HTTP GET calls from Locust

In a Locust load test environment, tasks are defined and called randomly.
But if I want a task to be performed just after a specific task, how do I do it?
For example: after every 'X' URL call I want the 'Y' URL to be called, based on the response of 'X'.
In my experience, I found that it's better to model Locust tasks as completely independent of each other, with each of them covering a user scenario or behavior (e.g. a customer logs in, searches for a book and adds it to the cart). This is mostly because that's a closer simulation of real user behavior.
Have you tried just having the multiple requests on the same task, and just if / else based on your responses? This slide from Carl Byström's talk follows said approach.
You just have to make sequential GETs or POSTs. When you define your task, do something like this:
@task(10)
def my_task(l):
    l.client.get('/X')
    l.client.get('/Y')
There's an option to create a custom task set that inherits from the TaskSequence class.
Then you add seq_task decorators to all of the task set's methods to run its tasks sequentially.
https://docs.locust.io/en/latest/writing-a-locustfile.html#tasksequence-class
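A small sketch of that approach, using the TaskSequence/seq_task API linked above (in newer Locust releases this is replaced by SequentialTaskSet); the /X and /Y endpoints, the host and the branching on the response are placeholders taken from the question:

from locust import HttpLocust, TaskSequence, seq_task

class SequentialBehaviour(TaskSequence):
    @seq_task(1)
    def call_x(self):
        # keep the response so the next task can branch on it
        self.x_response = self.client.get("/X")

    @seq_task(2)
    def call_y(self):
        # only call /Y when /X succeeded, as the question requires
        if self.x_response.status_code == 200:
            self.client.get("/Y")

class WebsiteUser(HttpLocust):
    host = "https://example.test"   # placeholder host
    task_set = SequentialBehaviour
    min_wait = 1000
    max_wait = 2000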