How to add tags to all ECS task definitions in one go - amazon-ecs

I would like to tag all 9000+ task definitions in one go; please help me with the best way to do this.
I tried the command aws ecs tag-resource --resource-arn, but it only accepts one ARN at a time.

You can only do this programmatically. Use the AWS SDK to list all the ARNs, loop through them, and call the tag-resource API on each one. As an example, this is how you would tag a bunch of DynamoDB tables in Python:
import boto3

session = boto3.Session(profile_name="my-region")
client = session.client('dynamodb')
# Get all DDB tables
tables = client.list_tables()
# Loop through tables and tag each one
for table in tables['TableNames']:
    print(f'Tagging table: {table}')
    client.tag_resource(
        ResourceArn=f'arn:aws:dynamodb:us-east-1:xxx:table/{table}',
        Tags=[{'Key': 'my_tag', 'Value': 'my_tag_value'}]
    )
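The same pattern applies to ECS task definitions. Below is a minimal, untested sketch along those lines; the profile name and the tag key/value are placeholders to replace with your own (note that the ECS tagging API uses lowercase key/value field names):

import boto3

session = boto3.Session(profile_name="my-profile")  # placeholder profile
client = session.client('ecs')

# Page through all task definition ARNs (there can be thousands)
paginator = client.get_paginator('list_task_definitions')
for page in paginator.paginate():
    for arn in page['taskDefinitionArns']:
        print(f'Tagging task definition: {arn}')
        client.tag_resource(
            resourceArn=arn,
            tags=[{'key': 'my_tag', 'value': 'my_tag_value'}]  # placeholder tag
        )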

Related

How to copy a dashboard to a different Azure DevOps instance?

I have a requirement to copy an existing dashboard from one org (source org) to a different org (target org) under a different ADO instance, by any means possible. A dashboard can have widgets, and widgets can be linked to:
a pipeline
a query - a query can reference a user, a team, a project, custom values of a standard field, or custom fields
some other things that I have not encountered so far
So far my steps are as follows:
1. get dashboard details using the Get Dashboard REST API
2. identify the widgets in the dashboard details API response
3. get any pipeline; if there is no pipeline, create a dummy one and use its details for any widget that uses a pipeline
4. identify the distinct queries present in all widgets
5. create an equivalent query in the target org and save its ID
6. replace the query ID in the widget settings with the equivalent query ID created in the target org
7. create the dashboard in the target org
I am facing an issue in step 5.
There are a lot of moving variables in a query. A query might reference things that do not exist in the target org, like a particular user, team, custom values of a standard field, or custom fields. In order to create a query successfully, I need to know the possible values of a field in the target org. When creating a new query from the UI, it shows the possible values for a field in a dropdown, so I am wondering: is there any REST API that gives the possible values of a field, and throws an error if no such field exists in the target org?
Looking forward to suggestions for a simpler or alternative approach to replicate a dashboard across different ADO instances, and/or a better approach for step 5.
If you are looking for a REST API to query the fields in the target process of your organization, you could refer to this doc: Fields - List.
GET https://dev.azure.com/{organization}/_apis/work/processes/{processId}/workItemTypes/{witRefName}/fields?api-version=6.0-preview.2
After that, you could create the missing fields in your target process; refer to this REST API: Fields - Create.
POST https://dev.azure.com/{organization}/_apis/work/processdefinitions/{processId}/fields?api-version=4.1-preview.1
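As a rough illustration, here is a minimal Python sketch for calling the fields-list endpoint above with a personal access token; the organization, process ID, work item type reference name, and PAT are placeholders you would substitute with your own values:

import requests

# Placeholders - replace with your own values
organization = "my-org"
process_id = "00000000-0000-0000-0000-000000000000"
wit_ref_name = "MyProcess.Bug"
pat = "my-personal-access-token"

url = (f"https://dev.azure.com/{organization}/_apis/work/processes/"
       f"{process_id}/workItemTypes/{wit_ref_name}/fields"
       "?api-version=6.0-preview.2")

# Azure DevOps REST APIs accept basic auth with an empty username and the PAT
response = requests.get(url, auth=("", pat))
response.raise_for_status()

for field in response.json().get("value", []):
    print(field.get("referenceName"), "-", field.get("name"))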
Or could you share more details of your requirement, like screenshots and the widget definition or dashboard configuration, for an update?

Is there any way to query components in an Azure Pipeline?

I know one can use the REST API to query pipeline activities, but is there any way to query pipeline components (i.e., a listing of linked services, sources, sinks, parameters, etc.)?
Right now I'm manually recording all the components I see in the pipeline, and querying these components would make this listing faster and more accurate.
Thanks, Jeannie
I haven't found a way to get all the components of a pipeline directly. If the information you want is defined in the YAML file, you cannot get it directly; you may need to parse the YAML file to obtain it.
If the information you want is defined in the UI, such as variables, you can get it through the REST API Definitions - Get. And you can get the resources of the pipeline through Resources - List.
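For the YAML case, a rough sketch of what that parsing might look like in Python (the file name and the top-level keys you care about are assumptions; adjust them to your pipeline definition):

import yaml  # pip install pyyaml

# Assumed pipeline file name - adjust to your repo layout
with open("azure-pipelines.yml") as f:
    pipeline = yaml.safe_load(f)

# Print whichever top-level sections you are interested in, if present
for section in ("resources", "parameters", "variables", "stages"):
    if section in pipeline:
        print(section, "->", pipeline[section])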

Dynamically generate CloudFormation resources using CDK

I'm trying to dynamically generate SNS subscriptions in CDK based on what we have in mappings. What's the best way to do this here? I have mappings that essentially map the SNS topic ARNs my queue wants to subscribe to in each region/stage. The mapping looks something like this:
"Mappings":
"SomeArnMap":
"eu-west-1":
"beta":
- "arn:aws:sns:us-west-2:0123456789:topic1"
"gamma":
- "arn:aws:sns:us-west-2:0123456789:topic2"
- "arn:aws:sns:us-west-2:0123456789:topic3"
How do I write code in CDK that creates a subscription for each element in the list here? I can't get a regular loop to work because we don't know the size of the list until deployment. After cdk synth, it just gives me tokens like #{Token[TOKEN.264]} for my topic ARN.
Is this even doable in CDK/CloudFormation? Thanks.
Since tokens aren't resolved while the aws-cdk code runs, you usually have to use CloudFormation intrinsic functions, which declare some sort of operation on the token in your template. These are accessible as Fn in @aws-cdk/core. However, CloudFormation doesn't have intrinsics for looping over values, only for selecting values from a list/map.
If your CDK app has these mappings in its output template and you just want to extract a value for reference when building another construct, Fn.findInMap should, I believe, do that.
const importedTopic = Sns.Topic.fromTopicArn(this, "ImportedTopicId", Fn.findInMap("SomeArnMap", "eu-west-1", "beta"));
importedTopic.addSubscription(SomeSqsQueueOrSomething);
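If it helps to see the whole thing wired up, here is a rough Python CDK (v2) sketch of the same idea. It assumes the "SomeArnMap" mapping already ends up in the synthesized template, that the value you look up is a single ARN string (the question's list values would still need something like Fn.select), and the queue and stack names are invented:

from aws_cdk import App, Stack, Fn
from aws_cdk import aws_sns as sns
from aws_cdk import aws_sqs as sqs
from aws_cdk import aws_sns_subscriptions as subs
from constructs import Construct

class SubscriptionStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Hypothetical queue that should receive the topic's messages
        queue = sqs.Queue(self, "MyQueue")

        # Fn::FindInMap resolves against the mapping at deploy time;
        # the mapping itself is assumed to exist in the template already
        topic_arn = Fn.find_in_map("SomeArnMap", "eu-west-1", "beta")
        imported_topic = sns.Topic.from_topic_arn(self, "ImportedTopicId", topic_arn)
        imported_topic.add_subscription(subs.SqsSubscription(queue))

app = App()
SubscriptionStack(app, "SubscriptionStack")
app.synth()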

Talend - Stats and Logs - On database - error

I have a job that inserts data from SQL Server into MySQL. I have set the project settings as follows:
Checked the check boxes for Use statistics (tStatCatcher), Use logs (tLogCatcher), and Use volumetrics (tFlowMeterCatcher).
Selected 'On Databases' and put in the table names (stats_table, logs_table, flowmeter_table) as well. These tables were created beforehand; their schemas were determined using the tCreateTable component.
The problem is that when I run the job, data is inserted into stats_table but not into flowmeter_table.
My job is as follows:
tMSSqlInput --> tMap --> tMysqlOutput
I have not included tStatCatcher, tLogCatcher or tFlowMeterCatcher. The stats and logs for this job are taken from the project settings.
My question: why is no data entered in flowmeter_table? Should I include tStatCatcher, tLogCatcher and tFlowMeterCatcher explicitly in the job for it to work?
I am using TOS.
Thanks in advance
Rathi
Using the flow meter requires you to manually configure the flows you want to monitor.
On every flow you want to monitor, right-click on the row > Parameters > Advanced settings > Monitor connection.
Then you should be able to see data in your flow meter table.
If you are using the project settings, you don't need to add the *Catcher components to your job.
Alternatively, you can use the tStatCatcher, tLogCatcher and tFlowMeterCatcher components in the job directly. The components already have their schemas defined, so you just need to add a tMap and redirect the data into the table you want.
Moreover, in order to use the tLogCatcher you need to put some tDie or tWarn components in your job.

Using luigi to update Postgres table

I've just started using the luigi library. I am regularly scraping a website and inserting any new records into a Postgres database. As I'm trying to rewrite parts of my scripts to use luigi, it's not clear to me how the "marker table" is supposed to be used.
Workflow:
Scrape data
Query DB to check if new data differs from old data.
If so, store the new data in the same table.
However, using luigi's postgres.CopyToTable, if the table already exists, no new data will be inserted. I guess I should be using the inserted column in the table_updates table to figure out what new data should be inserted, but it's unclear to me what that process looks like and I can't find any clear examples online.
You don't have to worry about the marker table much: it's an internal table luigi uses to track which tasks have already been executed successfully. In order to do so, luigi uses the update_id property of your task. If you didn't declare one, then luigi will use the task_id, as shown here. That task_id is a concatenation of the task family name and the first three parameters of your task.
The key here is to override the update_id property of your task and return a custom string that you know will be unique for each run of your task. Usually you should use the significant parameters of your task, something like:
@property
def update_id(self):
    return ":".join([self.param1, self.param2, self.param3])
By significant I mean parameters that change the output of your task. I imagine parameters like the website URL or ID and the scraping date. Parameters like the hostname, port, username or password of your database will be the same for any of these tasks, so they shouldn't be considered significant.
Notice that without details about your tables and the data you're trying to save, it's pretty hard to say how you should build that update_id string, so please be careful.
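For illustration only, here is a minimal sketch of what such a task could look like with luigi.contrib.postgres; the connection settings, table, columns, and the scrape_date parameter are all made-up placeholders you would adapt to your own pipeline:

import luigi
from luigi.contrib import postgres

class LoadScrapedData(postgres.CopyToTable):
    scrape_date = luigi.DateParameter()

    # Placeholder connection settings and schema
    host = "localhost"
    database = "mydb"
    user = "myuser"
    password = "mypassword"
    table = "scraped_records"
    columns = [("id", "TEXT"), ("value", "TEXT")]

    @property
    def update_id(self):
        # One marker-table entry per scrape date: re-running the same date
        # is skipped, while a new date inserts again into the same table.
        return f"{self.task_family}:{self.scrape_date}"

    def rows(self):
        # Replace with your real scraping / diffing logic
        for record in [("1", "example")]:
            yield record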