Keeping track of refreshes in Crystal Reports 2008

I am curious to know if there is a way to tell whether a report has been printed or run. For example, the user enters an inspection number, hits Apply, then clicks Print and prints the report. Can I know if the report has been printed? Is there a way to use local variables to track that, some sort of loop?

I've never tested this, but here's a theory you can try.
In your Database Expert, go to your Current Connections and Add Command. Use this to write a SQL query that saves the usage data to a table in your data source. (If your data source is read-only, just add a delimited text file as an additional data source and output your usage data to that instead.)
The best example I have of this is http://www.scribd.com/doc/2190438/20-Secrets-of-Crystal-Reports. On page 39, you'll see a method for creating a table of contents that more or less uses this approach.
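As a rough sketch of the kind of command that could do the logging (the report_usage table and the {?InspectionNumber} parameter are made-up names, and you would need to verify that your database driver accepts an INSERT inside a Crystal command):

-- hypothetical usage-logging command added via Add Command
INSERT INTO report_usage (report_name, inspection_number, run_at)
VALUES ('InspectionReport', '{?InspectionNumber}', CURRENT_TIMESTAMP)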

Related

Best Practice to Store Simulation Results

Dear AnyLogic community,
I am struggling to find the right approach for storing my simulation results. I have datasets created that keep track of every value I am interested in. They live in Main (see below).
My aim is to do a parameter variation experiment. In every run, I change the value of p_nDrones (see below).
After the experiment, I would like to store all the datasets in one Excel sheet.
However, when I do the parameter variation experiment and afterwards check the log of the dataset (datasets_log), the changed values do not even show up (2 is the value I set in the normal simulation).
Now my question: do I need to create another type of dataset if I want to track the values that are produced in the experiments? Why are they not stored after executing the experiment?
I would really appreciate it if someone could share the best way to set up this export of experiment results. I would like to store the whole time series for every dataset.
Thank you!
Best option would be to write the outputs to some external file at the end of each model run.
If you want to use Excel, which I personally would not advise even though it has a nice excelFile.writeDataSet() function, you can.
I would rather write the data to a text file, as you will have much more control over the writing and the file itself; it is thread-safe and usable on many more platforms than Microsoft Excel.
See my example below:
Set up parameters of type TextFile in your model that you will write the data to at the end of the model run. Here I used the model's On destroy code to write out the data from the datasets.
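A minimal sketch of what that On destroy code could look like, assuming a TextFile parameter named p_outputFile, the datasets_log dataset from the question, and AnyLogic's getX()/getY() dataset accessors:

// Main's "On destroy" action -- assumed names, adapt to your model
for (int i = 0; i < datasets_log.size(); i++) {
    // one row per sample: scenario parameter, then time and value, tab-separated
    p_outputFile.println(p_nDrones + "\t" + datasets_log.getX(i) + "\t" + datasets_log.getY(i));
}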
Here you can immediately see the benefit of using the text file! You can add the number of drones we are simulating (or scenario name or any other parameter) in a column, whereas with Excel this would be a pain...
Now you can pass your specific text file to the model to use by adding it to the parameter variation page, providing it to the model through the parameters.
You will see that I also set up some headers for the text file in the Initial Experiment setup part, and then at the very end of the experiment, I close the text files in the After experiment section so that the text files can be used.
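The experiment-side code can be as simple as this (file_nDrones is an assumed experiment-level TextFile object):

// "Initial experiment setup": write the header row once
file_nDrones.println("nDrones\ttime\tvalue");

// "After experiment": close the file so other programs can open it
file_nDrones.close();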
Here is the result if you simply right-click on the text files and open them in Excel. (Excel will always have a purpose, even if it is just to open text files ;-) )

How to call a SAS dataset by its label, or where to check its name

I have a problem with SAS Enterprise Guide running on my client's server.
I do not have access to the libraries, so the only way we can use the datasets is to store them on the computer's local C: drive and drag them into SAS.
We cannot create libraries because the server does not read local paths.
Once you drag a table, let's call it "mydata", into SAS, the table is automatically renamed "mydata9865", with random numbers at the end, and "mydata" becomes its label.
If you right-click the table and go to Properties, you can't find the name of the table, just the label.
The only way I have found to check the real name of the dataset is to open the Query Builder and check the name in the code preview.
The problem is that I am dealing with tables of millions of records and the machine I am using is very slow, so whenever I want to open the Query Builder just to check the table's name, it can take up to an hour.
I am not a SAS expert, so I am sure there is a smarter way to do so. Is it possible for instance to use the table by calling it with its label?
data mydata2;
set mydata;
run;
instead of
set mydata9865?
Or is there some place I can rapidly check the name of the table without going through the query builder?
I tried to Google it but I can't find anything; I hope someone will be able to help me!
Thank you in advance.
Hover the mouse pointer over a data node to see its attributes. The data set name is the File name: value.
For example:
In this example I had renamed the nodes created by two different queries to be the same (doable: yes; smart: maybe not). NOTE: a data node's Label: is not necessarily the same as its underlying data set's label metadata.
Regarding
use the table by calling it with its label?
Two nodes can have the same label, which is a situation that defeats this approach.
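If you only need a quick way to see the real dataset names next to their labels, a query against the SAS dictionary tables may help (a sketch; WORK is assumed to be the library your dragged-in tables land in):

proc sql;
  select memname, memlabel
  from dictionary.tables
  where libname = 'WORK';
quit;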
Use the COPY task to upload your data explicitly. It sounds like you're not adding your data to the project properly, so SAS automatically assigns a name; that wouldn't happen if you explicitly imported or loaded your data.
Problem solved! I should have simply uploaded the data to the server with Tasks->Data->Upload Data Sets to Server, but I didn't know about this task, so I didn't realize it was possible at all!
https://communities.sas.com/t5/SAS-Enterprise-Guide/Importing-sas-data-sets-from-C-drive-into-SAS-EG-not-possible/td-p/135184
Thank you everybody for your help!

Can Tableau return non-UI results programmatically?

Tableau is an excellent tool for visualizing data. However, it is designed to be the final stop in a data (ETL) pipeline.
My Tableau workbook uses a bunch of Table Calcs to generate a list of "recommended orders". Rather than view these, I want to automate and execute them. This would make Tableau the engine of a quasi-ML process.
In other words, I would like to make Tableau a part of my ETL pipeline and send data to another tier. How can I write a back-end program that executes my Tableau workbook and receives a results dataset?
See the end of this article for example data I want to automate:
http://robm26.blogspot.com/2015/10/keep-your-factory-humming-with-tableau.html
Any ideas?
You're not going to like the answer I'm going to give you: "Don't do this".
Tableau isn't meant to be a task in a larger ETL pipeline, and the reason you're having problems making it behave the way you want is that it's not meant to be used that way.
Above and beyond the fact that you've figured out how to get a result that you want in Tableau ("the work is done"), Tableau isn't offering you any real value in the scenario you're describing. Use a tool (like Alteryx) that is really purpose built for this sort of work.
tabcmd is the way to pull the results out, as the other answer here notes. We use a function in Python to generate the tabcmd requests so that they can be batched.
import subprocess

# safety flag: set to 'yes' to actually execute the tabcmd commands
run_tabcmd = 'yes'

def runTabCmd(cmd):
    # run a tableau tabcmd command and display its output
    print(cmd)
    if run_tabcmd == 'yes':
        p = subprocess.Popen(
            cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        for line in p.stdout:
            print(line.decode().rstrip())
You probably already knew that, but for us it was a way to completely automate the pulling and loading into another python package like scikit-learn for a streamlined ML solution
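For example, exporting a view to CSV could look like the following call (the workbook/view path and file name are placeholders, and tabcmd must already be logged in to your server):

# hypothetical export of a view to CSV for downstream processing
runTabCmd('tabcmd export "MyWorkbook/RecommendedOrders" --csv -f orders.csv')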
I'm editing this answer to agree with Russell's answer. Tableau is not an ETL tool and should not be used as such. If you absolutely have to do something, you can use what I provided. Otherwise, the best practice is to use a tool designed for the job.
You can easily use tabcmd to get the results of a view in CSV, which can be used later in your ETL process. If you need to automate it, you can write a script and execute it with a cron job. I, myself, have a few views that are exported to CSV and used later in my ETL stream to feed our CRM.
Just remember to create the view exactly as you want it to be exported to CSV - usually including the order of the fields. Another tip is that I don't let it use the default "Measure Names" and "Measure Values" - to make sure everything is good on my CSV, I have the fields added manually in the row/columns section.

How to put more than one record in an Oracle APEX form?

I have a problem with Oracle APEX forms.
The problem is that I want to add more than one record at the same time in one form. I have already read that the best way to do that is to use a CSV file, but there is no tutorial on how to do that.
Oracle is a database, and combining files with databases is always tricky and not extensively supported, for obvious reasons. Storing files and presenting them for download is one thing; getting an Oracle database to open a file and read and process its contents is another. It surely is possible, but especially combining this with an APEX application, I think you are going to run into a lot of challenges, such as security restrictions.
However, stepping away from files does not necessarily mean stepping away from CSV. You could simply offer a large text input on your page into which a user can copy-paste a large CSV string. This can then be submitted and processed by the database. To do this you would probably need to create a process that fires after you submit the page; a sketch of such a process is below. From this process you can parse the CSV data and insert multiple rows into a table. The same can be done for formats like XML or JSON.
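A rough sketch of that after-submit page process in PL/SQL, assuming a textarea item P1_CSV_INPUT, a MACHINES table with two columns, and the APEX_STRING API (available since APEX 5.0):

-- hypothetical after-submit process; all names are assumptions
declare
  l_rows apex_t_varchar2;
  l_cols apex_t_varchar2;
begin
  l_rows := apex_string.split(:P1_CSV_INPUT, chr(10));  -- one line per record
  for i in 1 .. l_rows.count loop
    l_cols := apex_string.split(l_rows(i), ',');        -- split each line on commas
    insert into machines (name, serial_no)
    values (l_cols(1), l_cols(2));
  end loop;
end;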
However, who is generating this CSV? Requiring a user to construct CSV is not very user-friendly. It can be complicated and error prone. If the CSV is generated by another application, isn't there a way to circumvent Apex and pass the CSV to the database directly?
If a single text-based data carrier is not required, which I doubt it is from reading your description, why not simply keep your form but allow the user to submit it multiple times? Would it be sufficient to insert one record per submit, and later use a batch process to query these records and handle them one at a time?
If you simply want the user to be able to enter multiple machines without having to submit the page for each machine, this is also possible, but you will have to leave some standard APEX functionality behind and implement some more custom JavaScript and PL/SQL functionality. APEX only allows a static number of page items, which must be defined at design time. So if you want to dynamically add fields such as text boxes and select lists to your page, you will have to resort to JavaScript. You could start by defining a region that renders one row of input fields at page load, and create a link under it saying 'add another row', which renders a new row of input fields under the existing one, and repeat this as many times as the user needs.
That takes care of the UI. Now, when the user has entered all the data he wants, we need to submit all this data and get it into the database. So yes, at this point we would probably have to get all this data from the input fields and turn it into one single string. This would all have to be done client-side in your JavaScript code; a sketch follows below. You can then use the APEX page item API to assign this generated string to a single page item, using the $x(...) or $v(...) functions. Then submit the page, at which point the page processes will fire. You then define a page process that parses the data in your page item and uses that data to insert multiple rows in the database.
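A minimal client-side sketch of that serialization step (the CSS classes and the P1_CSV_INPUT item are made-up names; $s(...) is APEX's JavaScript setter for page items):

// collect each dynamically added row into one CSV string (assumed markup)
var lines = [];
document.querySelectorAll('.machine-row').forEach(function (row) {
  var name   = row.querySelector('.machine-name').value;
  var serial = row.querySelector('.machine-serial').value;
  lines.push(name + ',' + serial);
});
$s('P1_CSV_INPUT', lines.join('\n'));  // store in one page item, then submit the page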

Where does the stored procedure sys.xp_readerrorlog read its contents from, specifically?

I have been working with the stored procedure sys.xp_readerrorlog for around a week now, and what I have learned is that it accepts seven parameters to fully refine how it should display its data. Easy enough to understand.
The question I now have is: where exactly does this stored procedure get its data from? I know you can also preview the data in the SSMS Object Explorer, under Management, in the SQL Server Logs folder, although my theory is that the dialog that opens when you read the logs also uses this procedure to display the data to the user in a grid.
I am baffled. I scouted through the system databases and found nothing (no table) that looks remotely like the output you get from this procedure:
exec sys.xp_readerrorlog 1,0,'','',null,null,N'Desc';
Can any expert tell me where the actual log data is stored, and whether it is queryable through a SELECT statement if you have admin rights?
It reads from the SQL Server error log file, which is a plain text file. There is no built-in interface to the file from T-SQL; xp_readerrorlog is widely known, but it is also undocumented, so relying on it is risky, although of course you can use it if you don't mind that risk.
Using SMO you can find the file location, but there is no special API for reading it, because it's just a text file.
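If you just want the file's location from T-SQL rather than SMO, this server property may help (a sketch; the property is available on recent SQL Server versions):

-- returns the path of the current SQL Server error log file
SELECT SERVERPROPERTY('ErrorLogFileName') AS error_log_path;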