TSQL Naming conventions ~ What's this naming convention called?

Given that I'm using TSQL, what's the name of this style of naming?
\\servername\instance.database.schema.table
And what other items can be inserted in place of .dbo. in the previous naming instance? How does one go about creating those alternatives? Links to answers welcome.
Also, how would one refer to a job on the server instead of a table or a sproc?
My intention is that when I write up my work for documentation (say, when I'm closing out my FogBugz ticket or something), I want to at least sound like I know what I'm doing ;)
(updated per links and comments)

Three-part and four-part names are the most common terms I've come across, but as you can see, there are lots of shorthand alternatives.

dbo is the schema. You can create new schemas and assign database objects to them.
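For example, a minimal T-SQL sketch of creating a schema and putting objects in it (Customers and Addresses are placeholder names, not anything from the question):

-- Create a new schema owned by dbo
CREATE SCHEMA Customers AUTHORIZATION dbo;
GO

-- Create a table directly in the new schema...
CREATE TABLE Customers.Addresses (
    AddressID int IDENTITY(1,1) PRIMARY KEY,
    Street    nvarchar(200) NOT NULL
);
GO

-- ...or move an existing dbo table into it instead:
-- ALTER SCHEMA Customers TRANSFER dbo.Addresses;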

That is the entire path. If you're hitting a table within your default schema (dbo by default), all you need is the table name. (SELECT * FROM Addresses)
If you're hitting a table in a schema other than the user's default (or you want to protect yourself from future changes), then you'll use Schema.Table. (SELECT * FROM Customers.Addresses)
If you're looking to hit a table in a different database on the same server, you will need DatabaseName.Schema.Table. (SELECT * FROM ProductionDB.Customers.Addresses)
Finally, if you're looking to hit a table on a different server altogether, you need the full four-part path, and the two servers must also be set up as linked servers, AFAIK.
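As a rough sketch of that four-part case (ReportServer and ProductionDB are placeholder names; the remote server has to be registered as a linked server first):

-- Register the remote server as a linked server
EXEC sp_addlinkedserver
    @server     = N'ReportServer',
    @srvproduct = N'SQL Server';
GO

-- Four-part name: Server.Database.Schema.Table
SELECT *
FROM ReportServer.ProductionDB.Customers.Addresses;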

Related

Cannot repopulate ElectrodeGroup datajoint table

I'm a researcher in Loren Frank's lab at UCSF using datajoint and files in the nwb format. I made some changes to our code for defining entries in our ElectrodeGroup table, and was hoping to test those by deleting an entry in the table and regenerating it with the new code. I was able to delete the entry, but cannot repopulate it. In particular, when I run ElectrodeGroup.populate() or ElectrodeGroup.populate({"nwb_file_name": my_file_name}), no changes are made to the table. I confirmed that the electrode group I deleted and am trying to regenerate is defined in the original nwb file. I am seeking input on why the populate command seems to not be working here. Thanks in advance for any help!
This user also contacted our team through another channel. Sharing the solution below for future users, in reference to this schema. In short, the populate process is reserved for unique upstream primary keys.
Since the ElectrodeGroup's only upstream table dependency is Session, the make method will only be called if there are no electrode groups for that session. This is because, from the perspective of DataJoint, the only 'guaranteed' knowledge about what should exist for this table is defined solely by the presence/absence of related upstream records. Since the 'new' primary 'electrode_group_name' attribute is defined by the ElectrodeGroup table itself, DataJoint doesn't know how many copies will be created by make, so it simply invokes make once per Session, expecting that single make invocation to fully define all possible electrode_group_name values the table will use. If there is already at least one value for that session, no work appears to be needed, so no make() invocation occurs.
There are a couple of possible solutions:
1. Model the electrode group explicitly, with a table that defines the existence of an electrode group (e.g., ElectrodeGroupConfiguration). ElectrodeGroup would then inherit primary keys from both Session and ElectrodeGroupConfiguration, and its make function would be adjusted to load the unique keys across the upstream tables.
2. Adjust the make function to handle the partial insert/update case, and call the make function directly with the desired primary key when these kinds of 'abnormal' updates need to occur.
Method #1 is 'cleanest' with respect to the DataJoint data model (explicitly modeled data dependencies using make/populate), whereas #2 'escapes' the DataJoint data model slightly, in a controlled way, to achieve the desired schema/data result.

How to call a sas dataset by its label or where to check its name

I have a problem dealing with SAS Enterprise Guide, which runs on my client's server.
I do not have access to the libraries, so in order to use the datasets the only thing we can do is store them on the local C: drive of the computer and drag them into SAS.
We can not create libraries because the server does not read local paths.
Once you drag a table, let's call it "mydata", into SAS, the table is automatically renamed "mydata9865", with random numbers at the end, and "mydata" becomes its label.
If you right-click the table and go to properties, you can't find the name of the table, just the label.
The only way I found to check the real name of the dataset is to open the Query Builder and check the name in the code preview.
The problem is that I am dealing with tables of millions of records and the machine I am using is very slow, so whenever I want to open the Query Builder, just to check the table's name, it sometimes takes up to an hour.
I am not a SAS expert, so I am sure there is a smarter way to do so. Is it possible for instance to use the table by calling it with its label?
data mydata2;
set mydata;
run;
instead of
set mydata9865?
Or is there some place I can rapidly check the name of the table without going through the query builder?
I tried to google it but I can't find anything; I hope someone will be able to help me!
Thank you in advance
Hover the mouse pointer over a data node to see its attributes. The data set name is the File name: value.
For example, in one case I had renamed the nodes created by two different queries to be the same (doable: yes; smart: maybe not). NOTE: A data node's Label: is not necessarily the same as its underlying data set's label metadata.
Regarding
use the table by calling it with its label?
Two nodes can have the same label, which is a situation that defeats this approach.
Use the COPY task to upload your data explicitly. It sounds like you're not adding your data to the project properly, so SAS automatically assigns a name; that wouldn't happen if you explicitly imported or loaded your data.
Problem solved! I should have simply uploaded the data to the server with Tasks->Data->Upload Data Sets to Server, but I didn't know about this task, so I didn't know it was possible to do it at all!
https://communities.sas.com/t5/SAS-Enterprise-Guide/Importing-sas-data-sets-from-C-drive-into-SAS-EG-not-possible/td-p/135184
Thank you everybody for your help!

Can we add a comment column next to the Change column in Audit View of EA?

Is there a provision to store a user-entered comment during modification to the “Model” which can be shown in the “Audit View” along with the “Original” and “Change” columns of EA?
Can we add a comment column next to the Change column in the Audit View of EA, where the user-entered comment can be stored? Please suggest the EA API to do the same.
You can not easily do that. You might modify the underlying database and add columns to existing tables (or even add your own private tables), but that would break XMI export, as these columns would not be ex-/imported, and you're on your own to maintain this. An alternative in general cases is to use tagged values, but here I doubt it's feasible. So probably your own table, with a foreign key referring to the audit, would be the choice. However, it rather sounds like you're trying to re-build a check-in mechanism. FWIW: in practice I found this mechanism counter-productive, as people tend to comment either nothing or trivialities, so it hinders more than it helps.
You can not modify the standard dialogs (e.g. for the audit view shown). That means you have to write an add-in to create your own dialog.
The table that contains the audit is t_snapshot.
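If you do go down the route of your own comment table, a rough sketch might look like the following (my_audit_comment and its columns are made-up names, not part of EA's schema or API, and t_snapshot's exact columns vary by EA version):

-- Hypothetical side table holding user comments, keyed to the audit rows
CREATE TABLE my_audit_comment (
    snapshot_id  VARCHAR(50) NOT NULL,   -- assumed key referring to the row in t_snapshot
    comment_text VARCHAR(255)
);

-- Inspect what EA actually stores for the audit
SELECT * FROM t_snapshot;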

PostgreSQL transaction variables

This question is sort of a follow-up to this question, but it's a different enough topic that I feel it merits its own discussion. For a bit of background, you can refer to it.
As a part of a new file importing system, I am building an audit system based on this wiki page. But, one of the things that I would like to include in the audit trail is the file name of the file that the data came from (these files are archived for long term storage so if there are questions, I can always go back).
One way I could go is to create an import_batch record, record the name of the file there, and then just stamp records when they update. That is the path I'm going down. But it feels a bit clunky in a way. I've been pondering the idea of trying to have the audit trigger be able to get the import_batch_id without it having to be in the NEW.* record. It seems to me there are at least a couple of ways I might be able to accomplish this.
I could have a function that creates a temp table and stores any information in it that I want (such as batch # or file name or whatever). This seems pretty clean, and as I understand it the table would only live for the duration of the transaction, and I wouldn't have to worry about naming collisions. Each transaction would have a temp table named "tmp_import_info".
If I only care about the import_batch_id (which has a sequence), I could probably just get the current value of the sequence. I'm not 100% sure how this would behave in a multi-user setting. I would think it would be possible for trans #1 to create import_batch_id #222 and then trans #2 to start and get #223, and then my audit trail would record the wrong data.
Are there other options that I'm not seeing here? Is there a way to add a transaction/session variable? Basically, something like pg_settings, but one that allows inserts, updates and deletes of values.
It feels like the best option might be the temp table.
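To make that concrete, here is a rough sketch of option 1 (tmp_import_info and the values are placeholders), plus the transaction-local settings that PostgreSQL's set_config/current_setting provide, which look close to the kind of variable I'm asking about:

-- Option 1: transaction-scoped temp table that the audit trigger can read
CREATE TEMP TABLE tmp_import_info (
    import_batch_id bigint,
    file_name       text
) ON COMMIT DROP;

INSERT INTO tmp_import_info VALUES (222, 'some_archived_file.csv');

-- inside the audit trigger function, something like:
--   SELECT import_batch_id FROM tmp_import_info;

-- Alternative: a custom, transaction-local setting ('myapp.batch_id' is a made-up name)
SELECT set_config('myapp.batch_id', '222', true);   -- true = local to this transaction
SELECT current_setting('myapp.batch_id');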
The main good news for variant 2 is, quoting the manual here:
currval
Return the value most recently obtained by nextval for this sequence in the current session. (An error is reported if nextval has never been called for this sequence in this session.) Because this is returning a session-local value, it gives a predictable answer whether or not other sessions have executed nextval since the current session did.
Store your import file names in a table with a serial primary key. You can refer to your last value from the sequence with currval or lastval. Concurrent users cannot interfere. As long as you don't foil this path inside your own transaction yourself, this is safe.
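A minimal sketch of that pattern (table and file names are placeholders; the sequence name shown is the default one PostgreSQL generates for a serial column):

-- Batch table with a serial primary key
CREATE TABLE import_batch (
    import_batch_id serial PRIMARY KEY,
    file_name       text NOT NULL
);

-- In the importing transaction:
INSERT INTO import_batch (file_name) VALUES ('some_archived_file.csv');

-- Later in the same session, e.g. inside the audit trigger:
SELECT currval('import_batch_import_batch_id_seq');  -- session-local, safe with concurrent users
-- or lastval(), if no other sequence has been touched since the insert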

Oracle Global Temporary Tables and using stored procedures and functions

We recently changed one of the databases I develop on from Oracle accounts to LDAP login accounts, and all went well for the front end used by the staff that access the system. However, we have a second method of entry restricted to admin staff, who load the data onto the database, and a lot of processing is called using dbms_scheduler.
Most of the database tables have a created_by column which defaults to picking up the user name from a sys_context, but when the data loads are run from dbms_scheduler this information is not available, and hence the created_by columns all get populated with APP_GLOBAL.
I have managed to populate a Global Temporary Table (GTT) with the sys_context value and use this to populate created_by from a stored procedure called by dbms_scheduler, so my next logical step was to put this in a function and call it so it could be used throughout the system, or even be referenced from a before-insert trigger.
The problem is, when putting the code into a function the data from the GTT is not found. The table is set to preserve rows.
I have trawled many a site for an answer but have found nothing to help me. Can anyone here provide a solution?
The scheduler will be using a different session than the session that created the job - preserve rows will not make the GTT data visible in a different session.
I am assuming the created_by columns have a default value like nvl(sys_context(...),'APP_GLOBAL'). Consider passing the user name as a parameter to the job and set the context as the first step in the job.
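A hedged sketch of that idea (the context, package, procedure and job names are all placeholders; my_data_load stands in for whatever the existing load procedure is, and the created_by defaults would read the context via sys_context as they already do):

-- A context can only be set through its trusted package
CREATE CONTEXT app_ctx USING app_ctx_pkg;

CREATE OR REPLACE PACKAGE app_ctx_pkg AS
  PROCEDURE set_user(p_user IN VARCHAR2);
END;
/
CREATE OR REPLACE PACKAGE BODY app_ctx_pkg AS
  PROCEDURE set_user(p_user IN VARCHAR2) IS
  BEGIN
    DBMS_SESSION.SET_CONTEXT('app_ctx', 'user_name', p_user);
  END;
END;
/

-- Wrapper run by the scheduler: set the context first, then run the load
CREATE OR REPLACE PROCEDURE run_load(p_user IN VARCHAR2) IS
BEGIN
  app_ctx_pkg.set_user(p_user);
  my_data_load;  -- placeholder for the existing load procedure
END;
/

-- Pass the user name into the job as an argument
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name            => 'LOAD_JOB',
    job_type            => 'STORED_PROCEDURE',
    job_action          => 'RUN_LOAD',
    number_of_arguments => 1);
  DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('LOAD_JOB', 1, 'REAL_USER_NAME');
  DBMS_SCHEDULER.ENABLE('LOAD_JOB');
END;
/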
A weekend off and a closer look at my code showed a fatal flaw in my syntax where the selection of data from the GTT would never happen. A quick tweak and recompile and all is well.
Jack, thanks for your help.