Deletion of multiple PS files using JCL

I have a requirement to delete the datasets below in one go:
TEST.D210101.FILE
TEST.D210102.FILE
I have written an IDCAMS step, but it does not work when I code the DELETE the following way:
DELETE (TEST.D.*) FORCE
It works only if I include the second qualifier:
DELETE (TEST.D210101.*) FORCE
Could you please let us know how to solve the issue?

Try:
DELETE TEST.D*.FILE
For an IDCAMS DELETE you can only use one *. For more complex deletes, use the Data Facility (DF*) utilities.
Also, Google is your friend: try searching for "idcams wildcard delete".
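For completeness, a minimal sketch of a full IDCAMS step (the step name is hypothetical, and the MASK keyword and LASTCC handling are assumptions: MASK enables filter-style wildcards only on z/OS levels that support it, and the IF/SET just keeps a not-found condition from failing the job):
//DELSTEP  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DELETE TEST.D*.FILE MASK
  /* Treat "entry not found" (RC 8) as success */
  IF LASTCC = 8 THEN SET MAXCC = 0
/*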

In Power Query, when duplicating the source query, should I duplicate the Transform File folder as well?

My apologies in advance if this question has already been asked; if so, I could not find it.
So, I have this huge database divided by country, and I need to import each country's database individually and then, in Power Query, append the queries into one.
When I imported the US files, Power Query automatically generated a Transform File folder with four helper queries.
Then I just duplicated the query US - Sales, named it UK - Sales, and pointed it to the UK sales folder.
The Transform File folder didn't duplicate, though.
Everything seems to be working just fine right now; however, I'd like to know if this could be a problem in the near future, because I still have several countries to go. Should I manually import new queries as new connections instead of just duplicating them, or does it just not matter?
Many thanks!
The Transform File folder group contains the code that is called to transform a list of files. It is reusable code. You can see the Sample File, which serves as the template for the transform actions.
As long as the file selected as the Sample File has the same structure as the files you are feeding in, you can use any query with any list of files.
One thing you need to make sure of is that the Sample File is not removed from your data source. You may want to create a dummy file just for that purpose, make sure it won't be deleted, and then point the Sample File query at that file.
The Transform helper queries are special: you may edit them, but you cannot delete them and recreate your own manually. They are created automatically by Power Query when combining a list of file contents and are inherently linked to the parent query.
That said, you cannot replicate them; you must use the Combine function provided by Power Query to create the helper queries.
You may, however, avoid duplicating the queries: replicate your steps in the parent query and use a table union to join the file lists before combining the contents with the same helper queries, as sketched below.
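A minimal M sketch of that approach (the folder paths are hypothetical, and #"Transform File" is the helper-function name Power Query generates by default):
let
    // List the files from each country's folder
    UsFiles = Folder.Files("C:\Sales\US"),
    UkFiles = Folder.Files("C:\Sales\UK"),
    // Union the file lists so one set of helper queries serves every country
    AllFiles = Table.Combine({UsFiles, UkFiles}),
    // Apply the auto-generated helper to each file's binary content
    WithData = Table.AddColumn(AllFiles, "Data", each #"Transform File"([Content])),
    Combined = Table.Combine(WithData[Data])
in
    Combined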

How to call a SAS dataset by its label, or where to check its name

I have a problem with SAS Enterprise Guide running on my client's server.
I do not have access to the libraries, so the only way to use the datasets is to store them on the local C: drive of the computer and drag them into SAS.
We cannot create libraries because the server does not read local paths.
Once you drag a table into SAS, let's call it "mydata", the table is automatically renamed to something like "mydata9865", with random numbers at the end, and "mydata" becomes its label.
If you right-click the table and go to Properties, you can't find the name of the table, just the label.
The only way I have found to check the real name of the dataset is to open the Query Builder and look at the name in the code preview.
The problem is that I am dealing with tables of millions of records on a very slow machine, so opening the Query Builder just to check a table's name can take as long as an hour.
I am not a SAS expert, so I am sure there is a smarter way. Is it possible, for instance, to use the table by calling it with its label?
data mydata2;
set mydata;
run;
instead of
set mydata9865?
Or is there some place where I can quickly check the name of the table without going through the Query Builder?
I tried to Google it but couldn't find anything; I hope someone will be able to help me!
Thank you in advance.
Hover the mouse pointer over a data node to see its attributes. The dataset name is the File name: value.
For example, I once renamed the nodes created by two different queries to be the same (doable: yes; smart: maybe not). NOTE: a data node's Label: is not necessarily the same as its underlying dataset's label metadata.
Regarding "use the table by calling it with its label?": two nodes can have the same label, which is a situation that defeats this approach.
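If you only need to look up the generated name quickly, a sketch like this may help (an assumption on my part: it relies on the uploaded dataset landing in WORK and on the node label being stored as the dataset label):
proc sql;
    /* List WORK datasets whose label matches, to recover the real
       member name (e.g. mydata9865) without opening the Query Builder */
    select memname, memlabel
        from dictionary.tables
        where libname = 'WORK'
          and upcase(memlabel) = 'MYDATA';
quit;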
Use the Copy task to upload your data explicitly. It sounds like you're not adding your data to the project properly, so SAS automatically assigns a name; that doesn't happen when you explicitly import or load your data.
Problem solved! I should have simply uploaded the data to the server with Tasks -> Data -> Upload Data Sets to Server, but I didn't know about this task, so I didn't know it was possible at all!
https://communities.sas.com/t5/SAS-Enterprise-Guide/Importing-sas-data-sets-from-C-drive-into-SAS-EG-not-possible/td-p/135184
Thank you everybody for your help!

OpenWhisk editor renaming a sequence produces duplicates

If I click Rename for a particular sequence in the Bluemix editor, it creates a new sequence with the new name, but the "old" sequence still exists.
Thanks very much for the bug report. We have replicated this issue: under certain conditions, the rename fails to delete the old sequence. We'll get this fixed as soon as possible!
In the meantime, a workaround is to manually delete the sequence with the old name. There is no harm in doing so.
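If you prefer the command line, the same cleanup can be done with the OpenWhisk wsk CLI (the sequence name below is a placeholder); sequences are actions, so:
wsk action list                    # confirm which sequences exist
wsk action delete oldSequenceName  # remove the leftover sequence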
Nick Mitchell
IBM OpenWhisk Team

SugarCRM: migrating the Leads module

I am trying to move SugarCRM data (Leads, Opportunities, and Applicants; these are modules in SugarCRM). I have the .csv files but no SQL. It's hosted by a company, and they won't give me the SQL.
The issue is that Leads, for example, has 212 columns (fields), while the stock SugarCRM has far fewer fields.
I am trying to figure out the best way to import all the data without having to use Studio to create each field individually.
The Opportunities module has 110 fields on the hosted version, while the stock SugarCRM only has about 27.
So my question is: how do I create all the fields so I can import the data?
I already created a file that gets the column names, and I imported all the data into a table called Leads1. When I rename it to leads and check ... the data doesn't show on the page.
Any ideas? (Please don't answer by saying "ask the company to send you the SQL"; they will not send it. They know I want to move out of their hosted environment, and I already spent an hour on the phone without success.)
Any ideas or suggestions would be greatly appreciated. Thank you.
With or without the SQL, you'll need to recreate the fields in Studio, because you need the views to include the fields as well. It's tedious, but the only real viable option in this case. It is important that the fields are named exactly the same when doing this so that the import works correctly; a quick way to check that is sketched below.
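A hedged way to verify the names line up (this assumes a MySQL backend and SugarCRM's usual convention of putting Studio-created custom fields in a leads_cstm table; adjust the names to your schema):
-- Columns present in the imported staging table but missing from
-- the tables SugarCRM actually reads (leads and leads_cstm)
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'Leads1'
  AND column_name NOT IN (
      SELECT column_name
      FROM information_schema.columns
      WHERE table_name IN ('leads', 'leads_cstm')
  );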
If you can hack some code, another option is to create a module that exports the SQL for the whole database from within SugarCRM, along with the whole file structure as a zip, so that you don't have to recreate anything.
BTW, make sure the SugarCRM instance you are moving to is exactly the same version. Once you have done the import, you can upgrade to your desired version. This guarantees that the DB structure will be the same (provided the custom fields get created appropriately).
Good luck!

Given a code base hosted on TFS, which command can tell me which file has changed the most?

I want to find the files under a given directory that have been updated the most. Is there a command that can display this information? Or is there a way to get the version count for a given file, so I can write a script to collect it for all files and then sort descending?
Do you mean changed the greatest number of times, or having undergone the most code churn?
Either way, looking at the report data might be the easiest option for you. Take a look at the following blog post I wrote explaining how to use Excel to look at TFS data; it uses churn as an example and lets you drill down into folders and files, so you should be able to get the data you are looking for.
Getting Started with the TFS Data Warehouse
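If you'd rather script it, a rough sketch (assumptions: a TFVC local workspace, Team Explorer's tf.exe on the PATH, and PowerShell; tf history's brief output includes a couple of header lines, so the counts are slightly inflated but fine for ranking):
# Rank files under the current directory by number of changesets
Get-ChildItem -Recurse -File | ForEach-Object {
    $lines = (tf history $_.FullName /noprompt /format:brief | Measure-Object -Line).Lines
    [pscustomobject]@{ File = $_.FullName; Changes = $lines }
} | Sort-Object Changes -Descending | Select-Object -First 20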