Google Dataproc jobs tab not listing the jobs - google-cloud-dataproc

I have created a Dataproc cluster and run Dataproc jobs on it. When I select the Jobs tab, it doesn't list the jobs I created, even when I select all regions.

There was a recently identified bug where jobs fail to list if you have no jobs in the older "global" region; the fix is already in the codebase, but it will take some time to be released everywhere, possibly up to the middle of next week or so.
In the meantime, if you run any job in the "global" region and don't delete it, you should be able to see all of your jobs in other regions as well.
Once the fix is rolled out, this workaround will no longer be necessary.
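For example, assuming you already have a cluster in the global region and google-cloud-dataproc >= 2.0 (the project and cluster names below are placeholders), a trivial keep-around job could be submitted along these lines:

    # Rough sketch of the workaround: submit a throwaway job to the "global"
    # region and leave it in place so the Jobs tab lists jobs from all regions.
    # "my-project" and "my-global-cluster" are placeholders.
    from google.cloud import dataproc_v1

    job_client = dataproc_v1.JobControllerClient()  # default endpoint serves "global"

    job = {
        "placement": {"cluster_name": "my-global-cluster"},
        "pig_job": {"query_list": {"queries": ["sh echo keep-this-job"]}},
    }

    job_client.submit_job(
        request={"project_id": "my-project", "region": "global", "job": job}
    )

Per the note above, just don't delete that job until the fix has rolled out.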

Related

Use Airflow to run parametrized jobs on-demand and with a schedule

I have a reporting application that uses Celery to process thousands of jobs per day. There is a Python module for each report type that encapsulates all of the job steps. Jobs take customer-specific parameters and typically complete within a few minutes. Currently, jobs are triggered by customers on demand when they create a new report or request a refresh of an existing one.
Now, I would like to add scheduling, so the jobs run daily, and reports get refreshed automatically. I understand that Airflow shines at task orchestration and scheduling. I also like the idea of expressing my jobs as DAGs and getting the benefit of task retries. I can see how I can use Airflow to run scheduled batch-processing jobs, but I am unsure about my use case.
If I express my jobs as Airflow DAGs, I will still need to run them parametrized for each customer. That means, if a customer creates a new report, I will need a way to trigger a DAG with that customer's configuration. And for scheduled execution, I will need to enumerate all customers and create a parametrized (sub-)DAG for each of them. My understanding is that this should be possible, since Airflow supports dynamically created DAGs; however, I am not sure whether this is an efficient and correct way to use Airflow.
I wonder if anyone has considered using Airflow for a scenario similar to mine.
Celery workflows do essentially the same thing, and you can create and run them at any point in time. Also, Celery has a pretty good scheduler, Celery Beat (I have never seen it fail in 5 years of using Celery).
Sure, Airflow can be used to do what you need without any problems.
You can use Airflow to create DAGs dynamically; I am not sure if this will work at a scale of thousands of DAGs, though. There are some good examples on astronomer.io on Dynamically Generating DAGs in Airflow.
I have some DAGs and tasks that are dynamically generated from a YAML configuration, with different schedules and settings. It all works without any issues.
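For what it's worth, a bare-bones version of that pattern looks roughly like this (the customer list, IDs and the generate_report callable are made-up placeholders; imports are Airflow 1.10-style):

    # Sketch: generate one DAG per customer from an in-code config list and
    # register each DAG in globals() so the scheduler discovers it.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python_operator import PythonOperator

    CUSTOMERS = [
        {"id": "acme", "report_type": "sales"},
        {"id": "globex", "report_type": "inventory"},
    ]

    def generate_report(customer_id, report_type):
        # Call into the existing per-report-type Python module here.
        print(f"Refreshing {report_type} report for {customer_id}")

    def build_dag(customer):
        dag = DAG(
            dag_id=f"report_refresh_{customer['id']}",
            start_date=datetime(2021, 1, 1),
            schedule_interval="@daily",
            catchup=False,
        )
        PythonOperator(
            task_id="refresh_report",
            python_callable=generate_report,
            op_kwargs={"customer_id": customer["id"],
                       "report_type": customer["report_type"]},
            dag=dag,
        )
        return dag

    # One DAG object per customer, registered at module level.
    for customer in CUSTOMERS:
        globals()[f"report_refresh_{customer['id']}"] = build_dag(customer)

The same loop can just as well read the customer list from a YAML file or a database instead of a hard-coded list.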
The only thing that might be challenging is the "jobs are triggered by customers on-demand" part - I guess you could trigger any DAG with Airflow's REST API, but it's still in an experimental state.
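For that on-demand part, a rough sketch against the pre-2.0 experimental endpoint (the webserver URL and DAG id below are placeholders):

    # Sketch: trigger a customer-specific DAG run, with a config payload, via
    # Airflow's experimental REST API when a customer requests a refresh.
    import requests

    AIRFLOW_URL = "http://localhost:8080"   # placeholder webserver URL
    dag_id = "report_refresh_acme"          # placeholder DAG id

    response = requests.post(
        f"{AIRFLOW_URL}/api/experimental/dags/{dag_id}/dag_runs",
        json={"conf": {"report_type": "sales", "force_refresh": True}},
    )
    response.raise_for_status()
    print(response.json())

In Airflow 2.x this moved to the stable REST API (POST /api/v1/dags/{dag_id}/dagRuns), so the experimental caveat mainly applies to 1.10.x.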

Pause Scheduled tasks in SCDF

Hi, I'm running batch jobs via SCDF in an OpenShift environment. All the jobs have been scheduled through the scheduling option in SCDF. Is there a way to pause or hold those jobs from executing instead of destroying the schedules? Since the number of jobs is large, we otherwise have to recreate the schedules for all of them every time.
Thanks.
We have an open issue: spring-cloud/spring-cloud-dataflow#3276 to add support for it.
Feel free to update the issue with your use-case requirements and the acceptance criteria. Better yet, it'd be great if you can contribute adding support for it in a PR; we would love to collaborate and release it.

Kubernetes CronJob - Do ConcurrencyPolicy and manual job execution/creation communicate with one another?

I have a Kube CronJob that has a concurrencyPolicy of Replace. As I'd have expected, the documentation suggests this means that if a job from the previous cycle is still running when the next cycle in the schedule comes around, the previous job will be killed off / cancelled.
What I want to know is: if I manually kick off a job with kubectl create job --from, does the concurrencyPolicy still play a part? From the testing I've been doing it seems the answer is no (and then I'll have multiple concurrent jobs), but I would like to confirm.
If I'm correct and they don't work together, is there a way to get this functionality? Basically, I want to be able to deploy a job and then test it without having to wait around for it to kick off, but I also don't want two jobs running at the same time.
Thanks!
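One way to approximate this yourself is sketched below with the Python Kubernetes client (namespace and CronJob name are placeholders; it assumes a client/cluster where CronJob is served under batch/v1, otherwise swap in BatchV1beta1Api): create the Job manually from the CronJob's jobTemplate, but only after checking that no Job owned by that CronJob is still active.

    # Hypothetical sketch: mimic "no concurrent runs" for a manually triggered
    # Job, since concurrencyPolicy only governs Jobs the CronJob controller creates.
    from kubernetes import client, config

    config.load_kube_config()
    batch = client.BatchV1Api()

    namespace = "default"            # placeholder
    cronjob_name = "my-cronjob"      # placeholder

    cronjob = batch.read_namespaced_cron_job(cronjob_name, namespace)

    # Refuse to start if any Job owned by this CronJob still has active pods.
    for job in batch.list_namespaced_job(namespace).items:
        owners = job.metadata.owner_references or []
        owned = any(o.kind == "CronJob" and o.name == cronjob_name for o in owners)
        if owned and (job.status.active or 0) > 0:
            raise SystemExit(f"{job.metadata.name} is still running; not starting another")

    # Create a one-off Job from the CronJob's job template.
    manual_job = client.V1Job(
        metadata=client.V1ObjectMeta(name=f"{cronjob_name}-manual-test",
                                     namespace=namespace),
        spec=cronjob.spec.job_template.spec,
    )
    batch.create_namespaced_job(namespace, manual_job)

As far as I can tell, the controller ignores manually created Jobs because they carry no ownerReference back to the CronJob, which would match the behaviour observed in the question.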

Workflow scheduling in Informatica

My requirement is:
The workflow should run daily at 2 PM, and it has been scheduled to run at 2 PM.
We have lookups on master tables. Records with IDs that are not present in the master tables get rejected.
These new IDs have to be loaded into the master tables manually, and then the workflow has to be re-run.
The same thing happens every day.
My question is -
Is it possible to schedule a workflow to run twice every day (once for the first run, and again after the master tables are updated)?
If not, can I manually start a scheduled workflow? Will that make the workflow unscheduled?
Can anyone help me with this?
Informatica's scheduler is a weak spot. I guess using two copies of the same workflow with different schedules would be the easiest solution.
Got a solution for my problem.
Once a workflow is scheduled, even if only a particular session has to be re-run manually, the whole workflow has to be run from the Workflow Manager.
If that particular session is run manually on its own, the schedule is lost.
So always run the whole workflow instead of a single session, so that the schedule remains in place.

Problem submitting jobs in Oracle

A job has been submitted and there is an entry for it in DBA_JOBS, but the job never enters the running state, so there is no entry for it in DBA_JOBS_RUNNING. The parameter 'JOB_QUEUE_PROCESS' has the value 10, and there are no jobs in the running state. Please suggest how to solve this problem.
SELECT NEXT_DATE, NEXT_SEC, BROKEN, FAILURES, WHAT
FROM DBA_JOBS
WHERE JOB = :JOB_ID
What does that return? A BROKEN job won't kick off, and a job whose NEXT_DATE/NEXT_SEC is still in the future won't kick off until that time either.
I hope you have the name of that database parameter right, i.e. 'JOB_QUEUE_PROCESSES=10'.
This is typically why a job won't run.
Also check that the user/schema running the job is correct.
An alternative is to use a different scheduling tool to run the job (e.g. cron on Linux).