Output result directly onto the current screen field without C# etc - tsql

Is there a way to output my SQL result directly to wherever the screen cursor currently is? Example: if I execute "select 1", I want the result 1 to immediately populate where my cursor is (like a form fill). So if I place my cursor in an open Notepad file and execute the query, it populates the 1 directly, as though it were typed in. I need to achieve this entirely through a SQL query, without C# or anything else.
Pretty sure many won't approve (if it's even possible), but I do have my reasons for wanting to achieve this.

In short: no.
Long: I am sure you can't do this without using other programming languages.
A database's purpose is to help application developers store and efficiently retrieve data. So it's a kind of "utility for developers" that doesn't do anything useful without a separate (let's call it "front-end") program that inserts data into it and requests data from it.
My guess is you need a standalone program that waits for a special key combination, asks the db to retrieve the data, detects the active window and element, and pastes the data in there.
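For what it's worth, here is a rough sketch of such a standalone helper in Perl. DBI, DBD::ODBC and Win32::GuiTest are real modules, but the connection string and the crude sleep-instead-of-a-hotkey flow are assumptions, not a finished tool:

use strict;
use warnings;
use DBI;
use Win32::GuiTest qw(SendKeys);

# Connect to SQL Server over ODBC; driver/server settings are assumptions.
my $dbh = DBI->connect(
    'dbi:ODBC:Driver={SQL Server};Server=localhost;Trusted_Connection=yes',
    undef, undef, { RaiseError => 1 }
);

my ($result) = $dbh->selectrow_array('select 1');

# Give yourself a few seconds to click into the target field (Notepad,
# a form, ...); a real tool would register a global hotkey instead.
sleep 5;

# "Type" the result into whichever control currently has focus.
SendKeys($result);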


Best practices for editing data using forms in ms-access

So I've started learning Access out of necessity, as the person who was in charge of it passed away and someone had to keep it going.
I noticed a very bad (at least IMO) behavior in all the databases he created: every single form was bound directly to a table or saved query. This way, if the user opened a form, he had to complete all the steps he was supposed to do, because if he closed the form mid-process (or the computer froze, or anything of the sort), the actual data would be compromised, as it would be half complete. This often broke everything in the process chain, rendering subsequent steps impossible to perform and forcing me to correct data manually, directly in the tables.
As I've started upgrading his stuff and developing my own, I've been trying to learn ways to allow the data to be edited in the form only, making it possible to cancel the process at any time or save the changes all at once at the end.
If the edits were simple, I discovered that I could create a recordset, copy the relevant data to unbound fields in the form and, at the end, if the user chose to, copy the data from the form fields back to the recordset.
Other times more complex solutions were required, as I would need to edit several pieces of data at once in continuous forms, "save" them, run more code, maybe add fields to hold the information originating from that processing, and so on. I then learned about using temporary tables, but did not like them, since they tended to bloat the db. I even went as far as creating temporary databases during code execution that would host the temporary tables and be destroyed at the end, but that added too much unnecessary complexity.
Nowadays I'm using disconnected ADO recordsets to hold the temporary data and fields. It works, but has its limitations.
So I'm wondering: what is the best way you (much more experienced than me) folks approach this kind of scenario? Is using in-memory ADO recordsets really the best way around?
I think you are mixing up two things a form does that have completely different requirements: editing existing records (bound forms are great for that) and creating new records (where a straight bound form can result in creating incomplete records). How to approach it depends on many things, but mainly on how much data is necessary for a new record to be considered "complete".
I usually do one of the following things:
Create an unbound popup modal form for adding new records where only the necessary fields are present. Once complete, it loads the new record into the main form for further editing.
Use the above method, except the form is not a popup but a set of unbound fields in the footer or header of the main form.
Let the user create new records, but attach validation to the form's OnClose event (and/or other events appropriate to your situation) that deletes the half-filled record if it does not validate.
Let users create new records in the bound form, but have a 'cleanup' routine called either on a schedule or based on user actions.
Ultimately, if your business process requires the majority of fields to be manually added/edited every time a record is added or edited, you are better off using an unbound form.
"This way, if the user opened a form, he had to complete all the steps he was supposed to do, because if he closed the form mid-process (or the computer froze, or anything of the sort), the actual data would be compromised, as it would be half complete."
No: if the computer freezes, then no data is saved to the table. This is the same as if you used a disconnected recordset and an unbound form.
If you use the form's BeforeUpdate event with some verification code and a simple Cancel = True, then the form's data is not saved and the table is not updated. Again, if you used a disconnected recordset and the user closes the form, you have to test the data, and again you can choose whether to write out the data or not. In this respect there is ZERO difference between a form bound to a table and a disconnected one.
"If the edits were simple, I discovered that I could create a recordset, copy the relevant data to unbound fields in the form and, at the end, if the user chose to, copy the data from the form fields back to the recordset."
No, you don't need to do the above. The above achieves nothing and only racks up additional development hours and increases the cost of the application. In nearly all cases unbound forms increase development costs over those of a simple form bound to a table. So the original developer had the right idea. You can control the update of the underlying table in nearly all cases to achieve the required verification. Forms only save and write the data out if the developer allows it.
So bound Access forms are no more and no less prone to writing incomplete data to a table, provided you place verification code in the form's BeforeUpdate event. A half-filled bound form and a half-filled unbound form with a disconnected recordset BOTH will not write their data if the computer freezes.
And BOTH types of forms will not write data out to the table until your verification code has completed.
Access is not designed for unbound forms; tools like VB.NET, or even VB6, had a whole bunch of cool wizards and support for unbound forms. In Access we don't have those wizards, and when you use unbound forms you lose tons of form events. You in effect get the worst of both worlds, since you lose the form events and have no wizards or support for unbound forms. Even just the several delete-record events we have are rather amazing.
You lose the use of Me.Dirty, the On Insert event, Me.NewRecord, the form's AfterUpdate event - the list of features you toss out and lose is huge. And if you want a button to write data to the table (such as a save button on the form), then just go:
If Me.Dirty = True Then
    Me.Dirty = False ' this forces your verification code to run
End If
There are FEW use cases in which unbound forms will benefit you, and they will cost you rather a lot in terms of development time.

How to put more than 1 record in an Oracle APEX form?

I have a problem with Oracle APEX forms.
The problem is that I want to add more than one record at the same time in one form. I have read that the best way to do that is to use a CSV file, but there is no tutorial on how to do that.
Oracle is a database, and combining files with databases is always tricky and not extensively supported, for obvious reasons. Storing files and presenting them for download is one thing; getting an Oracle database to open a file and read and process its contents is another. It surely is possible, but especially combined with an APEX application I think you are going to run into a lot of challenges, such as security restrictions.
However, stepping away from files does not necessarily mean stepping away from CSV. You could simply offer a large text input on your page into which a user can copy-paste a large CSV string. This can then be submitted and processed by the database. To do this you would probably need to create a process that fires after you submit the page. In this process you can parse the CSV data and insert multiple rows into a table. The same can be done for formats like XML or JSON.
However, who is generating this CSV? Requiring a user to construct CSV is not very user-friendly; it can be complicated and error-prone. If the CSV is generated by another application, isn't there a way to circumvent APEX and pass the CSV to the database directly?
If a single text-based data carrier is not required (which I doubt, reading your description), why not simply keep your form but allow the user to submit it multiple times? Would it be sufficient to insert one record per submit, and later use a batch process to query these records and handle them one at a time?
If you simply want the user to be able to enter multiple machines without having to submit the page for each one, this is also possible, but you will have to leave some standard APEX functionality behind and implement more custom JavaScript and PL/SQL functionality. APEX only allows a static set of page items, which must be defined at design time. So if you want to dynamically add fields such as text boxes and select lists to your page, you will have to resort to JavaScript. You could start by defining a region that renders one row of input fields at page load, and create a link under it saying 'add another row' that renders a new row of input fields under the existing one; this can be repeated as many times as the user needs.
That takes care of the UI. Now, when the user has entered all the data, you need to submit it all and get it into the database. So yes, at this point you would have to gather all the data from the input fields and turn it into one single string. This would all have to be done client-side in your JavaScript code. You can then use the APEX page item API to assign this generated string to a single page item, using the $x(...) or $v(...) functions. Then submit the page, at which point the page processes fire. You then define a page process that parses the data in your page item and uses it to insert multiple rows into the database.

What are some options for keeping track of temporary results and re-using them after a restart, in case the program dies while running?

(Suggestions for improving the title of this question are welcome.)
I have a Perl script that uses web APIs to fetch a user's "liked" posts on various sites (Tumblr, Reddit, etc.), then downloads some portion of each post (for example, an image that's linked from the post).
Right now, I have a JSON-encoded file that keeps track of the posts that have already been fetched (for Tumblr, it just records the total number of likes; for Reddit, it records the "id" of the last post fetched) so that the script can pick up with the newly "liked" items the next time it runs. This means that after the program has finished archiving a new batch of links, the new "stopping point" is recorded in the JSON file.
However, if the program croaks for some reason (or is killed with Ctrl+C, say), the progress is not recorded (since the progress is only recorded at the end of the fetching). So the next time the program runs, it looks in the tracking file, gets the last recorded stopping point (the last time it successfully completed fetching and recorded its progress), and picks up there again, downloading duplicates up to the point where it croaked the last time.
My question is: what's the best (i.e. simplest, most efficient - take your pick, I'm open to options here) way to record progress with each incremental archived item, so that if the program dies for some reason, it always knows exactly where to pick up where it left off? Adapting the current method (literally print-ing to the tracking file at the end of each fetch) to do the same thing after each individual item is definitely not the best solution, because it's got to be pretty inefficient.
Edited for clarity
Let me be clearer: the file used to track the downloaded posts is not large, and does not grow appreciably with each "fetch" operation. There is only one element for each API (Tumblr, etc.) that contains either the total number of likes for the account (in other words, the number we have already downloaded; we query the API for the current total, subtract the number in the file, and know how many new items to fetch), or the ID of the last item fetched (Reddit uses this, so we can ask the API for all items "after" the one in the file and get only the new stuff).
My problem is not an ever-growing list of fetched posts; rather, it is writing to the tracking file every time a single post is downloaded (and there could be thousands of posts downloaded in a single run).
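For concreteness, the tracking logic described above amounts to something like the following sketch; the file layout and field names are my guesses, not the script's actual code:

use JSON::PP qw(decode_json encode_json);

# tracking.json might look like:
#   {"tumblr":{"total_likes":1234},"reddit":{"last_id":"t3_abc123"}}
open my $in, '<', 'tracking.json' or die $!;
my $state = decode_json(do { local $/; <$in> });
close $in;

# ... fetch everything newer than $state->{reddit}{last_id} ...

$state->{reddit}{last_id} = 't3_newest';   # hypothetical new stopping point
open my $out, '>', 'tracking.json' or die $!;
print $out encode_json($state);
close $out;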
Some ideas I would consider:
Write to the file more often, or use an interrupt handler to 'safely' handle the interrupt signal: when it's called, let the script write to your file so it's as current as possible and quit gracefully (see the first sketch below).
Use a better storage mechanism than writing to a flat file. I would consider, depending on the need, using a database to store the ids. I groan when a database comes into play because of the complexity it adds, but it doesn't have to be that way. I've used SQLite for queuing, but also consider DBD::CSV, which just writes to a CSV file while allowing SQL syntax (I haven't used it myself). In your code you could then check whether the id is already in the database and know to skip it (see the second sketch below). I would imagine that SQLite is also more 'efficient' than reading/writing a flat file and, IMO, easier to code than writing file-parsing code yourself.
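A minimal sketch of the first idea (download() and write_tracking_file() are stand-ins for the script's existing logic, not real APIs):

use strict;
use warnings;

my $last_id;   # id of the most recently completed item

# On Ctrl+C, flush progress before quitting so the tracking file
# reflects everything downloaded so far.
$SIG{INT} = sub {
    write_tracking_file($last_id) if defined $last_id;
    exit 1;
};

for my $post (@posts) {           # @posts: the new items to archive
    download($post);              # existing fetch/save logic
    $last_id = $post->{id};       # remember how far we got
}
write_tracking_file($last_id);    # the normal end-of-run save

And a sketch of the SQLite idea using DBI/DBD::SQLite (real modules; the table and column names are made up). Each insert is durable on its own, so a crash loses at most the item currently in flight:

use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=progress.db', '', '',
                       { RaiseError => 1 });
$dbh->do('CREATE TABLE IF NOT EXISTS fetched (id TEXT PRIMARY KEY)');

my $seen = $dbh->prepare('SELECT 1 FROM fetched WHERE id = ?');
my $mark = $dbh->prepare('INSERT OR IGNORE INTO fetched (id) VALUES (?)');

for my $post (@posts) {
    $seen->execute($post->{id});
    next if $seen->fetchrow_array;    # already downloaded, skip it
    download($post);
    $mark->execute($post->{id});      # record it the moment it's done
}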
I'd just use a hash, tied to an NDBM file, to keep track of what is loaded and what isn't.
When you start a new batch of URLs, you delete the NDBM file.
Then, in your code, at the start of the program, you do
use NDBM_File; use Fcntl;   # Fcntl provides O_RDWR and O_CREAT
tie(%visited, 'NDBM_File', 'visitedurls', O_RDWR|O_CREAT, 0666);
(don't worry about the O_CREAT, the file will remain intact if it exists unless you pass O_TRUNC as well)
Assuming your main loop looks like this:
while (my $id = <INFILE>) {
    my $url = id_to_url($id);
    my $results = fetch($url);
    save_results($url, $results);
}
you change that to
while (my $id = <INFILE>) {
    my $url = id_to_url($id);
    my $results;
    if ($visited{$url}) {
        $results = $visited{$url};
    } else {
        $results = fetch($url);
        $visited{$url} = $results;
    }
    save_results($url, $results);
}
So whenever you fetch a new URL, you write the results to the NDBM file, and whenever you restart your program, anything that has already been fetched will be in the NDBM file and will be retrieved from there instead of fetching the URL again.
This assumes $results is a scalar, otherwise you won't be able to store/retrieve it this way. But as you're producing JSON anyway, the "partial JSON" for each URL is probably what you want to store.
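If $results is a reference rather than a plain scalar, one workaround (my sketch, not part of the answer above) is to serialize it with the core JSON::PP module on the way in and out:

use JSON::PP qw(encode_json decode_json);

# NDBM values must be plain scalars, so flatten the structure to a
# JSON string when storing and rebuild it when retrieving.
$visited{$url} = encode_json($results);          # store
my $restored   = decode_json($visited{$url});    # retrieve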

Where does the stored procedure sys.xp_readerrorlog read its contents from, specifically?

I have been working with the stored procedure sys.xp_readerrorlog for around a week now, and what I have learned is that it accepts 7 parameters to fully refine how it should display its data. Easy enough to understand.
The question I now have is: where exactly does this stored procedure get its data from? I know you can also preview the data in the SSMS Object Explorer, under Management, in the SQL Server Logs folder, though my theory is that the dialog that opens when you read the logs also uses this procedure to display the data to the user in a grid.
I am baffled. I scouted through the system databases and found nothing (no table) that looks remotely like the output you get from this procedure:
exec sys.xp_readerrorlog 1,0,'','',null,null,N'Desc';
Can any expert tell me where the actual log data is stored, and whether it is queryable through a SELECT statement if you have admin rights?
It reads from the SQL Server error log file, which is a plain text file. There is no built-in interface to the file from T-SQL; xp_readerrorlog is widely known, but it's also undocumented, so relying on it is risky, although of course you can use it if you don't mind that risk.
Using SMO you can find the file's location, but there is no special API for reading it, because it's just a text file.
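Since it's just a text file, anything that reads text can read it. A minimal Perl sketch (the path is an assumption based on a default SQL Server 2008 instance; check SSMS or SMO for your instance's actual log directory):

use strict;
use warnings;

# Default path for a SQL Server 2008 default instance; yours will differ.
my $log = 'C:/Program Files/Microsoft SQL Server/MSSQL10.MSSQLSERVER/MSSQL/Log/ERRORLOG';

# Recent SQL Server versions write the log as Unicode (UTF-16LE);
# drop the encoding layer if your log turns out to be plain ANSI.
open my $fh, '<:encoding(UTF-16LE)', $log or die "Cannot open $log: $!";
print while <$fh>;
close $fh;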

Keeping track of refreshes in Crystal Reports 2008

I am curious to know if there is a way to tell whether a report has been printed or run. For example, the user enters an inspection number, hits apply, then clicks print and prints the report. Can I know if the report has been printed? Is there a way to use local variables to track that, some sort of loop?
I've never tested this, but here's a theory you can try.
In your Database Expert, go to your Current Connections and Add Command. Use this to write a SQL query that saves the usage data to a table in your data source. (If your data source is read-only, just add a delimited text file as an additional data source and output your usage data to that instead.)
The best example I have of this is http://www.scribd.com/doc/2190438/20-Secrets-of-Crystal-Reports. On page 39, you'll see a method for creating a table of contents that more or less uses this approach.