Pipeline scheduling questions - azure-devops

I have several pipelines in my project, and I need to schedule them to run on specific dates.
Is it possible to execute a pipeline on every 11th day of the month?
Is it possible to visualize all the pipelines in the project and see which one will run next, or a table or something with all the pipeline schedules? I'm working in the "classic view", not YAML.
I'm sorry if these questions seem dumb, but I've been searching and I can't find what I'm looking for.

Scheduled triggers in YAML cause a pipeline to run on a schedule defined using cron syntax:
mm HH DD MM DW
 \  \  \  \  \__ Days of week
  \  \  \  \____ Months
   \  \  \______ Days
    \  \________ Hours
     \__________ Minutes
So, if you want it to run on the 11th of every month, you can express that with this syntax:
0 0 11 * *
In your YAML:
schedules:
- cron: "0 0 11 * *" # replace with your schedule
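For reference, a fuller schedules block might look like the following sketch (the branch name and the always flag are assumptions; adjust them to your repository):

schedules:
- cron: "0 0 11 * *"   # 00:00 UTC on the 11th of every month
  displayName: Run on the 11th of each month
  branches:
    include:
    - main
  always: true   # run even when there are no code changes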
You can use this site to check the syntax.
Regarding your second question, I'm afraid that's not possible with YAML pipelines; there is no built-in "table" or overview of all pipeline schedules :/
But it's a good idea, so I suggest you create a feature request here.

Related

Flutter sharded tests still take the same time as serial

I am creating parallel Flutter unit test jobs on GitLab CI to avoid the 60-minute run-time limit.
The initial command was:
flutter test --machine --coverage ./lib > test-report.jsonl
The process took around 59 minutes because we have a lot of unit tests.
In order to reduce the CI pipeline time, I modified the pipeline to run in parallel using Flutter's shard options and GitLab CI's parallel feature.
The command looks like this:
flutter test \
--total-shards $CI_NODE_TOTAL \
--shard-index $( expr $CI_NODE_INDEX - 1 ) \
--machine \
--coverage \
--coverage-path coverage/lcov.info-$( expr $CI_NODE_INDEX - 1 ) \
./lib \
> test-report.jsonl-$( expr $CI_NODE_INDEX - 1 )
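For reference, the GitLab CI side of this setup looks roughly like the following .gitlab-ci.yml job (a sketch; the job name and the shard count of 4 are assumptions, and CI_NODE_INDEX/CI_NODE_TOTAL are populated automatically by the parallel keyword):

flutter-test:
  parallel: 4
  script:
    - >
      flutter test --total-shards $CI_NODE_TOTAL
      --shard-index $(expr $CI_NODE_INDEX - 1)
      --machine --coverage ./lib
      > test-report.jsonl-$(expr $CI_NODE_INDEX - 1)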
However, all of the parallel jobs still run past the 60-minute time limit.
What did I miss, and how can I debug it?

Azure DevOps pipeline: improperly formed cron syntax

I am trying to set up a cron schedule on an Azure DevOps pipeline, but I am getting the error message below. I looked at the documentation but am not sure what is out of line with the doc. Could someone let me know what is wrong with my cron syntax? Thank you.
Error while validating cron input. Improperly formed cron syntax: '0 21 * * 1-7'.
Here is the entire YAML file.
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- master

schedules:
- cron: "0 21 * * 1-7"
  displayName: "pipeline cron test"
  branches:
    include:
    - master
  always: true

pool:
  vmImage: ubuntu-latest

steps:
- script: echo Hello, world!
  displayName: Run a one-line script, changed
- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
    echo more info
  displayName: 'Run a multi-line script'
Here's the relevant part of the doc.
Building Cron Syntax
Each cron expression consists of 5 values separated by a space character:
mm HH DD MM DW
 \  \  \  \  \__ Days of week
  \  \  \  \____ Months
   \  \  \______ Days
    \  \________ Hours
     \__________ Minutes
We can use the following table to understand the syntax:

Syntax  Meaning           Accepted values
mm      Minutes           0 through 59
HH      Hours             0 through 23
DD      Days              1 through 31
MM      Months            1 through 12, full English names, first three letters of English names
DW      Days of the week  0 through 6 (starting with Sunday), full English names, first three letters of English names
Values can be provided in the following formats:

Format           Example         Description
Wildcard         *               Matches all values for this field
Single value     5               Specifies a single value for this field
Comma delimited  3,5,6           Specifies multiple values for this field. Multiple formats can be combined, like 1,3-6
Ranges           1-3             The inclusive range of values for this field
Intervals        */4 or 1-5/2    Intervals to match for this field, such as every 4th value or the range 1-5 with a step interval of 2
You should specify your cron expression like "0 21 * * *", or "0 21 * * 0-6" if you want to spell out all days of the week. The 1-7 range is what fails validation: days of the week run from 0 through 6 (starting with Sunday), or full English names, or the first three letters of the English names.
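Applied to your file, the corrected schedules block would be (only the cron value changes; 0-6 covers all seven days):

schedules:
- cron: "0 21 * * 0-6"   # 21:00 UTC every day; days of week run 0-6, not 1-7
  displayName: "pipeline cron test"
  branches:
    include:
    - master
  always: true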

Annotating Kubernetes resource with expiry time

I want to add an annotation holding an expiry time to a Kubernetes resource (an RBACDefinition object).
How do I add an annotation with the expiry time?
Pseudocode is something like this:
annotations:
  expiry-time: {{ current date + 1 hour }}
How do I add this custom annotation, and what language or tool do I need to generate the value?
If you are using a *nix shell like bash, you can use the date command 🔧 and the kubectl patch command 🧰.
kubectl patch <k8s-resource> <resource-name> -p \
"{\"metadata\":{\"annotations\":{\"expiry-time\":\"`date -d '1 hour' '+%m-%d-%Y-%H:%M:%S'`\"}}}"
If you are on macOS, you can substitute the date invocation with the BSD syntax (note -v+1H for one hour, not -v+1d, which would add a day):
date -v+1H '+%m-%d-%Y-%H:%M:%S'
✌️☮️
This worked:
kubectl annotate rbacdefinition joe-access "expires-at=$(date -v+1H '+%m/%d/%Y -%H:%M:%S')"
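A quick way to confirm the annotation landed (a sketch, reusing the resource name from the command above):

kubectl get rbacdefinition joe-access -o jsonpath='{.metadata.annotations.expires-at}'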

passing parameters via dataproc workflow-templates

I understand that dataproc workflow-templates is still in beta, but how do you pass parameters via add-job into the executable SQL? Here is a basic example:
#!/bin/bash
DATE_PARTITION=$1
echo DatePartition: $DATE_PARTITION
# sample job
gcloud beta dataproc workflow-templates add-job hive \
--step-id=0_first-job \
--workflow-template=my-template \
--file='gs://mybucket/first-job.sql' \
--params="DATE_PARTITION=$DATE_PARTITION"
gcloud beta dataproc workflow-templates run $WORK_FLOW
gcloud beta dataproc workflow-templates remove-job $WORK_FLOW \
--step-id=0_first-job
echo `date`
Here is my first-job.sql file called from the shell:
SET hive.input.format=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
SET mapred.output.compress=true;
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
SET io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec;
USE mydb;
CREATE EXTERNAL TABLE IF NOT EXISTS data_raw (
  field1 string,
  field2 string
)
PARTITIONED BY (dt string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 'gs://data/first-job/';
ALTER TABLE data_raw ADD IF NOT EXISTS PARTITION(dt="${hivevar:DATE_PARTITION}");
In the ALTER TABLE statement, what is the correct syntax? I’ve tried what feels like over 15 variations but nothing works. If I hard code it like this (ALTER TABLE data_raw ADD IF NOT EXISTS PARTITION(dt="2017-10-31");) the partition gets created, but unfortunately it needs to be parameterized.
BTW – The error I receive is consistently like this:
Error: Error while compiling statement: FAILED: ParseException line 1:48 cannot recognize input near '${DATE_PARTITION}' ')' '' in constant
I am probably close but not sure what I am missing.
TIA,
Melissa
Update: Dataproc now has workflow template parameterization, a beta feature:
https://cloud.google.com/dataproc/docs/concepts/workflows/workflow-parameters
For your specific case, you can do the following:
Create an empty template
gcloud beta dataproc workflow-templates create my-template
Add a job with a placeholder for the value you want to parameterize
gcloud beta dataproc workflow-templates add-job hive \
--step-id=0_first-job \
--workflow-template=my-template \
--file='gs://mybucket/first-job.sql' \
--params="DATE_PARTITION=PLACEHOLDER"
Export the template configuration to a file
gcloud beta dataproc workflow-templates export my-template \
--destination=hive-template.yaml
Edit the file to add a parameter
jobs:
- hiveJob:
    queryFileUri: gs://mybucket/first-job.sql
    scriptVariables:
      DATE_PARTITION: PLACEHOLDER
  stepId: 0_first-job
parameters:
- name: DATE_PARTITION
  fields:
  - jobs['0_first-job'].hiveJob.scriptVariables['DATE_PARTITION']
Import the changes
gcloud beta dataproc workflow-templates import my-template \
--source=hive-template.yaml
Add a managed cluster or cluster selector
gcloud beta dataproc workflow-templates set-managed-cluster my-template \
--cluster-name=my-cluster \
--zone=us-central1-a
Run your template with parameters
gcloud beta dataproc workflow-templates instantiate my-template \
--parameters="DATE_PARTITION=${DATE_PARTITION}"
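Putting it together, the wrapper script from the question shrinks to a single instantiate call (a sketch, assuming the parameterized my-template built in the steps above):

#!/bin/bash
DATE_PARTITION=$1
gcloud beta dataproc workflow-templates instantiate my-template \
--parameters="DATE_PARTITION=${DATE_PARTITION}"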
Thanks for trying out Workflows! First-class support for parameterization is part of our roadmap. However for now your remove-job/add-job trick is the best way to go.
Regarding your specific question:
Values passed via --params are accessed as ${hivevar:PARAM} (see [1]). Alternatively, you can set --properties, which are accessed as ${PARAM}.
The brackets around the params are not needed. If they were intended to handle spaces in parameter values, use quotes instead, like --params="FOO=a b c,BAR=X".
Finally, I noticed an errant space here: DATE_PARTITION =$1, which probably results in an empty DATE_PARTITION value.
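To illustrate the first point, here is how the two styles would look in first-job.sql, using the hard-coded date from your question as the example value:

-- with --params="DATE_PARTITION=2017-10-31":
ALTER TABLE data_raw ADD IF NOT EXISTS PARTITION(dt="${hivevar:DATE_PARTITION}");
-- with --properties="DATE_PARTITION=2017-10-31":
ALTER TABLE data_raw ADD IF NOT EXISTS PARTITION(dt="${DATE_PARTITION}");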
Hope this helps!
[1] How to use params/properties flag values when executing hive job on google dataproc

How to schedule a console app in AutoSys

I have a console/executable app which I need to configure in AutoSys to run every week on Sunday at 12 AM. I have never used AutoSys; any help or links related to the JIL commands would be appreciated.
insert_job: JOB_NAME job_type: c
box_name: BOX_NAME /(if you use one)/
machine: servername
owner: account /(which will run the job; must be permissioned on the machine)/
command: whateveryouwanttorun
condition: s(previousjob) /(means run only if previousjob is in success state)/
description: "Description Here"
date_conditions: 1 /(1 or y for yes, 0 or n for no)/
start_times: 00:00 /("00:00" also accepted)/
days_of_week: su
alarm_if_fail: 1 /(1 or y for yes, 0 or n for no)/
std_out_file: >>path\filename.log
std_err_file: >>path\errorfile.log
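A filled-in sketch for your case (every Sunday at midnight); the job name, machine, owner, and paths are placeholders to replace with your own:

insert_job: weekly_console_app job_type: c
machine: myserver01
owner: svc_account
command: D:\apps\MyConsoleApp.exe
description: "Run the console app every Sunday at 12 AM"
date_conditions: 1
days_of_week: su
start_times: "00:00"
alarm_if_fail: 1
std_out_file: >>D:\logs\weekly_console_app.log
std_err_file: >>D:\logs\weekly_console_app.err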
There are lots of other options you can invoke, like use of a profile, or adding variables or dates to log names, etc. For an explanation of any of these you can ask back here, or check Google or ca.com for the user guide for the version of AutoSys in use at your place (commonly 11.0 or 11.3.n, or possibly 4.5).
Good Luck!