In SSMS is there a way to export a list of job steps in a SQL Server Agent job?

The reason I'm asking is that I've been asked to organize a long list of job steps placed in three different jobs. This is something I'd like to share with my team in an Excel document so that we can make notes, demonstrate grouping and sorting ideas, argue about it, etc.
Is there a way I can export this list, or at least print it? I could take a screenshot, except that the list extends past the screen. I'm not looking forward to keying this into Excel by hand.
Thank you for any ideas.

You can query the server:
select
    job.name,
    steps.step_id,
    steps.step_name,
    steps.command
from msdb.dbo.sysjobs as job
inner join msdb.dbo.sysjobsteps as steps
    on steps.job_id = job.job_id
where job.enabled = 1
order by job.name, steps.step_id;
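If you only want the three jobs in question, you can filter by name instead; the job names below are placeholders for your own:
select
    job.name,
    steps.step_id,
    steps.step_name,
    steps.command
from msdb.dbo.sysjobs as job
inner join msdb.dbo.sysjobsteps as steps
    on steps.job_id = job.job_id
where job.name in ('Job A', 'Job B', 'Job C')  -- placeholder job names
order by job.name, steps.step_id;
From there you can right-click the SSMS results grid, choose "Copy with Headers", and paste the list straight into Excel.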

Related

AnyLogic - job shop scheduling

I am trying to do job shop scheduling in AnyLogic. I have 20 jobs and 5 machines (resources), and each job has a specific order in which it must visit the machines. Each job also has a different processing time on each machine.
This is what I have right now: a Jobs agent with a DB table of the machine sequence associated with it.
In the Jobs agent I created the collections col_machinesequence (an ArrayList of Strings op1, op2, ... where the op values are the columns of my DB table) and enterblock (an ArrayList of the class Enter holding my 5 Enter blocks).
In each Exit block I call the function nextmachine; you can read about it here: How to send agents through exit and enter blocks?.
Right now, when I run my project I don't get any errors, but the jobs do not move through the machines the way I expect. I guess something in my nextmachine function or in the collections is wrong, so this is where I need your help, if anyone can see what the problem is.
I also want to order the jobs at each machine by shortest processing time. I have a DB table for this that is currently not associated with any agent. Does anyone know how to do this?
Thank you in advance

Job IDs owned by SSRS cannot be matched to any subscriptions on SSRS

I have a problem identifying which SQL Server Agent job runs which Reporting Services subscription: a few of the jobs owned by SSRS cannot be matched to any subscription. For instance, I have 16 jobs in the SQL Server Agent, but I could only identify 13 of them.
Does anyone have any ideas about this situation? Is there any way to figure out where the unexpected jobs come from and to trace them?
Appreciate it!!
It takes a bit of footwork, but you can figure this all out by looking in the ReportServer database that you specified at install time or in the SSRS Configuration tool.
The key tables to look at are ReportSchedule and Subscriptions. Both will create jobs in your SQL Server Agent, and the ScheduleID should match the job name. You can match ReportID with ItemID in the Catalog table to get the name of the report.
Here's a query you can run to get more info on subscriptions. I made this into a report in SSRS that I review daily. Note: I probably ripped this off from another Stack Overflow answer.
select c.Name, s.LastRunTime, s.LastStatus, s.Description, rs.ScheduleID
from ReportServer.dbo.Subscriptions as s
left join ReportServer.dbo.Catalog as c
    on c.ItemID = s.Report_OID
left join ReportServer.dbo.ReportSchedule as rs
    on rs.SubscriptionID = s.SubscriptionID
order by s.LastRunTime desc;
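To hunt down the jobs that don't match anything, one approach (a sketch, assuming the default ReportServer database name) is to list the Agent jobs in the Report Server category whose names have no matching ScheduleID:
select j.name as job_name
from msdb.dbo.sysjobs as j
inner join msdb.dbo.syscategories as c
    on c.category_id = j.category_id
where c.name = 'Report Server'
  and not exists (
      select 1
      from ReportServer.dbo.Schedule as s
      where cast(s.ScheduleID as nvarchar(128)) = j.name
  );
Jobs that turn up here are often leftovers from subscriptions that were deleted without their Agent job being cleaned up.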

How to take data from 2 databases (with the same schema) and copy it into 1 database using Data Factory

I want to take data from 2 databases and copy (coalesce) it into 1 using Data Factory.
The issue is that multiple inputs are not allowed for copy activities.
So I resorted to having 2 different datasets which are exact copies but with different names, and then putting 2 different activities into the 1 pipeline, each using its own output dataset.
It just seems odd and wrong to do it this way.
Can I have some help?
This is what my diagram currently looks like:
Is there no way of just copying data from 2 separate databases (which have the same structure but different data) to the 1 database?
The short answer is yes. But you need to work within the constraints of how ADF handles this.
A couple of things to help...
You'll always need at least 2 activities to do this when using the copy activity. Microsoft charges per activity execution in ADF, so they aren't going to let you take the shortcut of many inputs and outputs per single copy activity (a single charge).
The approach you show above is OK; as you've found, to pass ADF validation you simply need the output datasets to be created separately and named differently, even if they refer to the same underlying target table. This is really only a problem for the copy activity. What you could do is first land the data in separate staging tables in the Azure target database, one per copy (1:1), then have a third downstream activity that executes a stored procedure to union the staging tables into the final table. In that case you could have 2 inputs and 1 output on that activity if you want that level of control in ADF.
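As a rough sketch of that stored procedure step (the table and column names here are placeholders, not from the question):
create procedure dbo.usp_CoalesceStagedData
as
begin
    set nocount on;

    -- rebuild the combined table from the two staging tables
    truncate table dbo.CombinedTarget;

    insert into dbo.CombinedTarget (Id, SomeColumn)
    select Id, SomeColumn from dbo.Staging_DatabaseA
    union all
    select Id, SomeColumn from dbo.Staging_DatabaseB;
end;
You'd invoke this with a stored procedure activity chained after both copy activities, so it only runs once both staging tables are loaded.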
Final point: if you don't want the activities to execute in parallel, you could chain the datasets to enforce a fake dependency, or add a simple 'delay' to one of the copy operations. A delay on an activity would be simpler than provisioning a time-slice offset.
Hope this helps

Can Tableau return non-UI results programmatically?

Tableau is an excellent tool for visualizing data. However, it is designed to be the final stop in a data (ETL) pipeline.
My Tableau workbook uses a bunch of Table Calcs to generate a list of "recommended orders". Rather than view these, I want to automate and execute them. This would make Tableau the engine of a quasi-ML process.
In other words, I would like to make Tableau a part of my ETL pipeline and send data to another tier. How can I write a back-end program that executes my Tableau workbook and receives a results dataset?
See the end of this article for example data I want to automate:
http://robm26.blogspot.com/2015/10/keep-your-factory-humming-with-tableau.html
Any ideas?
You're not going to like the answer I'm going to give you: "Don't do this".
Tableau isn't meant to be a task in a larger ETL pipeline, and the reason you're having problems making it behave the way you want is that it isn't meant to be used that way.
Above and beyond the fact that you've figured out how to get the result you want in Tableau ("the work is done"), Tableau isn't offering you any real value in the scenario you're describing. Use a tool (like Alteryx) that is purpose-built for this sort of work.
The answer below is correct that tabcmd is the way to pull the results out. We use a function in Python to generate the tabcmd requests so that they can be batched.
import subprocess

def runTabCmd(cmd, run_tabcmd='yes'):
    # run a tabcmd command and display its output
    print(cmd)
    if run_tabcmd == 'yes':
        p = subprocess.Popen(
            cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        for line in p.stdout.readlines():
            print(line.decode().rstrip())
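# Example usage: a sketch assuming you've already authenticated with
# "tabcmd login"; the workbook/view path and filename are placeholders.
runTabCmd('tabcmd export "MyWorkbook/RecommendedOrders" --csv -f recommended_orders.csv')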
You probably already knew that, but for us it was a way to completely automate pulling the results and loading them into another Python package, like scikit-learn, for a streamlined ML solution.
I'm editing this answer to agree with Russell's answer. Tableau is not an ETL tool and should not be used as such. If you absolutely have to do something, you can use what I provided. Otherwise, the best practice is to use a tool designed for the job.
You can easily use tabcmd to get the results of a view in CSV, which can be used later in your ETL process. If you need to automate it, you can write a script and execute it with a cron job. I myself have a few views that are exported to CSV and used later in my ETL stream to feed our CRM.
Just remember to create the view exactly as you want it exported to CSV, usually including the order of the fields. Another tip: I don't use the default "Measure Names" and "Measure Values"; to make sure everything comes out right in my CSV, I add the fields manually in the rows/columns section.

Split a file into multiple files in Talend

I'm looking for a way to split job execution in Talend Studio according to the content of each file row: I'd like to process rows starting with "DEBUG" in one branch of the job and all other rows in another branch. Is that possible?
To do this, use a tMap component. Your job will look like this:
t*Input --row--> tMap --out1--> tFileOutput*
                      --out2--> tFileOutput*
In the tMap component, you have inputs on the left and outputs on the right. In your output table, select "Activate expression filter" and use the text box to define your filter, for example row1.line.startsWith("DEBUG") (assuming your input flow is row1 and the column is named line); only rows that match the filter will be output from that connection. You can have as many output tables and filters as you need.
Using tMap works, but if the number of output streams is not fixed and known in advance, tMap is not a good choice.
In that case an iterate link or tJavaFlex can help.
Have a look at this tutorial on "how to split a file into many files based on a key on each record", which explains how to solve this kind of task; it is unfortunately only available in French. The tutorial shows 3 different techniques for achieving it.
In the end I used the tExtractRegexFields component and simply defined a regex to match the lines. The most important thing (which I didn't know before) is that you can connect components with different types of connections: I right-clicked the component and chose Row > Reject to create the second branch of the job, as described in the question.
You can also do it with tFileOutputDelimited and tFileInputDelimited: tFileOutputDelimited has an option in its Advanced settings, "Split output in several files", that you can check.