Why do we need table-valued parameters? - T-SQL

We have access to actual tables inside stored procedures, so what is the need to pass a table through a parameter? Is there any special advantage?

Table-valued parameters are necessary to pass tabular data to a stored procedure or function in a way that's "safe", especially from client code (e.g. SqlCommand and SqlParameter).
The main alternative technique is to create and INSERT into a #temporaryTable before calling the sproc, but temporary tables are not exactly temporary: they live in tempdb, which introduces namespacing and concurrency issues. You will also have to use dynamic SQL, because table names cannot be parameterised. The same issues apply to non-temporary tables.
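To illustrate the "table names cannot be parameterised" point: when dynamic SQL is unavoidable, the identifier has to be validated before concatenation. The sketch below is a minimal, hypothetical client-side precaution; the table names in the whitelist are invented for illustration, and the quoting rule mimics T-SQL's QUOTENAME but is not a complete defence on its own.

```java
import java.util.Set;

class DynamicSql {
    // Hypothetical whitelist: the tables this application is allowed to target.
    private static final Set<String> ALLOWED_TABLES = Set.of("Orders", "Customers");

    // Bracket-quote an identifier the way T-SQL's QUOTENAME does:
    // wrap it in [ ] and double any embedded closing bracket.
    static String quoteName(String identifier) {
        return "[" + identifier.replace("]", "]]") + "]";
    }

    // Build dynamic SQL only for a known table name; reject anything else.
    static String buildSelect(String tableName) {
        if (!ALLOWED_TABLES.contains(tableName)) {
            throw new IllegalArgumentException("Unknown table: " + tableName);
        }
        return "SELECT * FROM " + quoteName(tableName);
    }
}
```

A TVP avoids this entirely, because the tabular data travels as a typed parameter rather than as text spliced into a statement.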
Additionally, if you want to pass data to a FUNCTION and throw it away afterwards, you cannot use a temporary table because FUNCTION code is strictly read-only: it cannot drop the temporary table when it's done with it, whereas a table-valued parameter magically disappears after it falls out of scope.
It's also absolutely required if one FUNCTION wants to pass tabular data to another function, for the same read-only reason: a function is not allowed to create a #temporaryTable, but it can create and populate a table-valued parameter.
By analogy, it's like passing variables on the stack vs. on the heap: using the stack gives you automatic lifetime management and ownership semantics, and you don't need to worry about concurrency as much, whereas using the heap introduces a whole load of issues.
An example
Suppose your application code needs to pass a list of tuples (or a list of primary keys) to a stored procedure or FUNCTION, or an existing sproc or function needs to pass data on to another function.
Using temporary tables, your code has to do this:
1. Create and open a SqlConnection.
2. Create and begin a TRANSACTION.
3. Create a new #temporaryTable. Note that temporary tables are scoped to the current database session; this is okay for most purposes, but it means you cannot perform multiple concurrent database operations on that temporary table in the same session.
4. LOCK any normal tables you'll be using, if necessary, because your operation will span several SqlCommand executions.
5. From the client side or the originating stored procedure, execute an INSERT statement to fill the temporary table. If you're inserting data from a client application you may need to execute a single-row INSERT operation many times - this is very inefficient, especially over high-latency connections, because the TDS protocol used by SQL Server is very chatty (ironically, you can perform a single multi-row INSERT operation using SqlCommand, but you have to use a table-valued parameter to contain the multiple rows of data).
6. Call the sproc or FUNCTION that will use the temporary table.
7. Tear down the #temporaryTable if you'll be keeping the session alive, or end the session immediately to prevent wasting memory.
But if you use a table-valued parameter, it's much simpler:
1. Create and open a SqlConnection.
2. Create and begin a TRANSACTION.
3. Create the SqlCommand object that will call your sproc or FUNCTION, using a table-valued parameter populated entirely on the client in a single pass. The client then pushes the table data to the server as a single stream, which is far more efficient.
4. The sproc then runs. No teardown or session-ending is necessary.

Related

How to get column name and data type returned by a custom query in postgres?

How do I get the column names and data types returned by a custom query in Postgres? We have built-in functions for tables/views, but not for custom queries. To clarify: I need a Postgres function that takes an SQL string as a parameter and returns the column names and their data types.
I don't think there's any built-in SQL function which does this for you.
If you want to do this purely at the SQL level, the simplest and cheapest way is probably to CREATE TEMP VIEW AS (<your_query>), dig the column definitions out of the catalog tables, and drop the view when you're done. However, this can have a non-trivial overhead depending on how often you do it (as it needs to write view definitions to the catalogs), can't be run in a read-only transaction, and can't be done on a standby server.
The ideal solution, if it fits your use case, is to build a prepared query on the client side, and make use of the metadata returned by the server (in the form of a RowDescription message passed as part of the query protocol). Unfortunately, this depends very much on which client library you're using, and how much of this information it chooses to expose. For example, libpq will give you access to everything, whereas the JDBC driver limits you to the public methods on its ResultSetMetaData object (though you could probably pull more information from its private fields via reflection, if you're determined enough).
If you want a read-only, low-overhead, client-independent solution, then you could also write a server-side C function to prepare and describe the query via SPI. Writing and building C functions comes with a bit of a learning curve, but you can find numerous examples on PGXN, or within Postgres' own contrib modules.

Applying updates to a KDB table in thread safe manner in C

I need to update a KDB table with new/updated/deleted rows while it is being read by other threads. Since writing to K structures while other threads access them is not thread-safe, the only way I can think of is to clone the whole table and apply the changes to the clone. Even to do that, I need to first clone the table, then find a way to insert/update/delete rows in it.
I'd like to know if there are functions in C to:
1. Clone the whole table
2. Delete existing rows
3. Insert new rows easily
4. Update existing rows
I'd appreciate suggestions on different approaches to the same problem as well.
Based on the comments...
You need to do a set of operations on the KDB database "atomically"
You don't have "control" of the database, so you can't set functions (though you don't actually need to be an admin to do this, but that's a different story)
You have a separate C process that is connecting to the database to do the operations you require. (Given you said you don't have "access" to the database as admin, you can't get KDB to load your C binary to use within-process anyway).
Firstly, I'm going to assume you know how to connect to KDB+ and issue queries via the C API (found here).
All you need to do then is concatenate your "atomic" operation into a set of statements that you issue in one call from C. For example, say you want to update a table and then delete some entries. Your call might look like this:
{update name:`me from table where name=`you; delete from `table where name=`other;}[]
(Caution: this is just a dummy example, I've assumed your table is in-memory so that the delete operation here would work just fine, and not saved to disk, etc. If you need specific help with the actual statements you require in your use case then that's a different question for this forum).
Notice that this is an anonymous function that will get called immediately on issue ([]). There is the assumption that your operations within the function will succeed. Again, if you need actual q query help it's a different question for this forum.
Even if your KDB database is multithreaded (started with -s or a negative port number), it will not let you update global variables inside a peach thread. Therefore your operation should work just fine. But just in case something else could interfere with your new anonymous function, you can wrap the function in protected evaluation.

variable table or column names in a function

I'm trying to search all tables and columns in a database, a la here. The suggested technique is to construct SQL query strings and then EXEC them. This works well, as a stored procedure. (Another example of variable table/column names is here. Again, EXEC is used to execute "dynamic SQL".)
However, my app requires that I do this in a function, not an SP. (Our development framework has trouble obtaining results from an SP.) But in a function, at least on SQL Server 2008 R2, you can't use EXEC; I get this error:
Invalid use of a side-effecting operator 'INSERT EXEC' within a function.
According to the answer to this post, apparently by a Microsoft developer, this is by design; it has nothing to do with the INSERT, only the fact that when you execute dynamically-constructed SQL code, the parser cannot guarantee a lack of side effects. Therefore it won't allow you to create such a function.
So... is there any way to iterate over many tables/columns within a function?
I see from BOL that
The following statements are valid in a function: ...
EXECUTE
statements calling extended stored procedures.
Huh - how could extended SPs be guaranteed side-effect free?
But that doesn't help me anyway:
The extended stored procedure, when it is called from inside a
function, cannot return result sets to the client. Any ODS APIs that
return result sets to the client will return FAIL. The extended stored
procedure could connect back to an instance of SQL Server; however, it
should not try to join the same transaction as the function that
invoked the extended stored procedure.
Since we need the function to return the results of the search, an ESP won't help.
I don't really want to get into extended SP's anyway: incrementing the number of programming languages in the environment would complicate our development environment more than it's worth.
I can think of a few solutions right now, none of which is very satisfactory:
First call an SP that produces the needed data and puts it in a table, then have the function merely read the result from that table; this could be trouble if the search takes a while and two users' searches overlap. Or,
Have the application (not the function) generate a long query naming every table and column name from the db. I wonder if the JDBC driver can handle a query that long. Or,
Have the application (not the function) generate a long series of short queries naming every table and column name from the db. This will make the overall search a lot slower.
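On option 2, generating that long query from the application is mostly string assembly. Below is a minimal, hypothetical sketch: it assumes the application has already fetched the (table, column) pairs from the catalog (so the identifiers are trusted), simplifies to one searchable column per table, and leaves the search term as a ? placeholder so it can still be bound as a parameter.

```java
import java.util.Map;
import java.util.StringJoiner;

class SearchQueryBuilder {
    // Build one long UNION ALL query that searches every listed column for a
    // term. Table and column names come from the application's own catalog
    // query; only the search term remains a bound parameter.
    static String build(Map<String, String> tableToColumn) {
        StringJoiner union = new StringJoiner("\nUNION ALL\n");
        for (Map.Entry<String, String> e : tableToColumn.entrySet()) {
            union.add("SELECT '" + e.getKey() + "' AS TableName, ["
                    + e.getValue() + "] AS MatchValue FROM [" + e.getKey()
                    + "] WHERE [" + e.getValue() + "] LIKE ?");
        }
        return union.toString();
    }
}
```

Whether the JDBC driver copes with the resulting statement length would still need to be tested against the real schema, as the question notes.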
Thanks for any suggestions.
P.S. Upon further searching, I stumbled across this question which is closely related. It has no answers.
Update: No longer needed
I think this question is still valid, and we may again have a situation where we need it. However, I don't need an answer anymore for the present problem. After much trial-and-error I managed to get our application framework to retrieve row results from the RDBMS via the JDBC driver from the stored procedure. Therefore getting the thing to work as a function is unnecessary.
But if anyone posts an answer here that helps with the stated problem, I will be happy to upvote and/or accept it as appropriate.
An SP is basically a predefined SQL statement with some add-ons.
So if you had:
PSEUDOCODE
CREATE PROCEDURE SP_DoSomething AS
    SELECT * FROM MyTable
END
and you can't use the SP, then you just execute the SQL directly, as in "SELECT * FROM MyTable".
As for that naff SQL code:
For a start, you could join the table listing to the column listing with a WHERE clause, which would get rid of that line-by-line stuff.
Ask another question, like "How could this be improved?" - there's lots of scope for more attempts than mine.

When to truncate strings longer than the storage location allows?

Let's say I have a function that inserts records into a database table with string fields of limited length. In general, at what point should I be truncating strings that are too long for the storage location, in the insert function itself, or at every point in the code where it's called?
(I'm assuming here that truncation of strings that are too long is more desirable than having an exception thrown.)
I think it depends on where the function is and how accessible it is.
If it's a private function that just makes up your own SQL library then you can probably get away with truncating it in the function.
If it's in a library that, say, your whole team at work uses, then perhaps you need to at least validate the string before attempting to insert it.
If it's a public API, then you shouldn't be silently truncating anything - throw a meaningful exception instead.
This should sit in the insert function - it's specific to the database implementation rather than the calling application. If you manage to change your data structure, you don't want to have to go back through all the client code to ensure the full string is used.
As per Widor, but may I also add:
Your application should ideally be structured so that there is a distinct data layer that separates the rest of your code from the database and its implementation logic.
In high traffic systems you will ideally want to limit the amount of data passing back and forth between the database and your code, hence data validation should be performed by your data layer BEFORE passing it on to your database. It is here that you can raise a meaningful exception for your business logic to handle.
The object data presented by the data layer need bear no relation to what is actually stored in or by the database. For instance it may present a data object class that is actually a composite of data stored in several tables.
The data layer itself can be structured in such a way that it can handle different database implementations.
I have used a factory pattern in the past that has allowed me to switch between SQL, MySQL databases, XML file storage and compiled test data as required at runtime without the need for recompilation.
edit
Your application data layer is the interface between your application code e.g. business logic and GUI, and your database.
The business logic will trigger the data layer to update the database with your string.
In your example the data layer contains your update function.
You can validate the string, truncate it if too long, and then update the database (through stored procedure call or direct write for instance) within that function if you wish.
In reality you'll have many strings that have to be restricted to the same length, so it is advisable to have the validation performed by a separate function to avoid duplicating code.
Also you may wish to validate/truncate the string and notify the user/calling code of this without writing the data to the database.
Essentially though this is performed by your application data layer code, which may be encapsulated within a class library/dll for instance and not left to the database to handle nor the business logic (other than to react to any error event/response fed back).
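The two policies described above (silent truncation inside a private data layer vs. failing loudly at a public boundary) can be sketched as a pair of small helpers. This is a minimal illustration, not a real data layer: the 50-character limit is an assumed column width, not something read from an actual schema.

```java
class DataLayer {
    // Assumed column width for illustration only.
    static final int MAX_NAME_LENGTH = 50;

    // Silent truncation: acceptable inside a private, internal data layer
    // where the caller has agreed to lossy behaviour.
    static String truncate(String value, int maxLength) {
        return value.length() <= maxLength ? value : value.substring(0, maxLength);
    }

    // Fail loudly: appropriate at a public API boundary, so the business
    // logic can handle a meaningful exception instead of losing data.
    static String validate(String value, int maxLength) {
        if (value.length() > maxLength) {
            throw new IllegalArgumentException(
                    "Value exceeds " + maxLength + " characters");
        }
        return value;
    }
}
```

Keeping both behind one data-layer function means a future schema change (say, widening the column) touches a single place rather than every call site.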

stored procedures as queries: CallableStatement vs. PreparedStatement

PostgreSQL documentation recommends using a CallableStatement to call stored procedures.
In the case of a stored procedure that returns a rowset, what are the differences between using CallableStatement:
String callString = "{ call rankFoos(?, ?) }";
CallableStatement callableStatement = con.prepareCall(callString);
callableStatement.setString(1, fooCategory);
callableStatement.setInt(2, minimumRank);
ResultSet results = callableStatement.executeQuery();
And using a regular PreparedStatement:
String queryString = "SELECT FooUID, Rank FROM rankFoos(?, ?);";
PreparedStatement preparedStatement = connection.prepareStatement(queryString);
preparedStatement.setString(1, fooCategory);
preparedStatement.setInt(2, minimumRank);
ResultSet results = preparedStatement.executeQuery();
As I understand it, CallableStatement offers a database-agnostic way of calling stored procedures. This doesn't matter to me, though, since I know I'm using PostgreSQL. As far as I can see, the obvious advantage of using the PreparedStatement is a more versatile query, treating the stored procedure as a table on which I can use WHERE, JOIN, ORDER BY, etc.
Are there aspects or differences between the methods that I'm missing? In the case of a stored procedure used as a query, which is recommended?
I'm pretty sure the second approach does not work at all with some RDBMSs, but since you are only going to use PostgreSQL, that shouldn't matter too much. For your simple case, there really isn't much of a downside. There are two issues I can see popping up:
Depending on how the stored procedures are written, they may require you to register out parameters in order to execute the procedure. That's not going to be possible at all with prepared statements. If you control both the creation of the stored procedure and the calling code, you probably don't have to worry about this.
It limits the effectiveness of calling stored procedures in the first place. One of the main advantages of a stored procedure is to encapsulate the querying logic at the database level. This allows you to tune the query, or in some cases add functionality, without having to make changes to your code. If you're planning on adding WHERE clauses or joins to the results of the stored procedure call, why not just put the original query in your Java layer?
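The composability the question is after amounts to wrapping the function call in a plain SELECT before preparing it. The helper below is a hedged sketch of that idea: rankFoos comes from the question, but the clause arguments and method name are invented for illustration, and the WHERE/ORDER BY fragments are assumed to be trusted application constants, not user input.

```java
class QueryComposer {
    // Treat a set-returning function as a table by wrapping it in a SELECT,
    // so WHERE and ORDER BY clauses can be appended before preparation.
    static String wrap(String functionCall, String where, String orderBy) {
        StringBuilder sql = new StringBuilder("SELECT * FROM " + functionCall);
        if (where != null && !where.isEmpty()) {
            sql.append(" WHERE ").append(where);
        }
        if (orderBy != null && !orderBy.isEmpty()) {
            sql.append(" ORDER BY ").append(orderBy);
        }
        return sql.toString();
    }
}
```

The resulting string would then be passed to connection.prepareStatement as in the question, with the ? placeholders still bound via setString/setInt.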
The main difference is independence and an encapsulated way of programming.
Imagine that you are a Java programmer who doesn't know the database well: in that case you cannot use the second method, and you will run into problems with it.
The first method lets you write your Java code and ask somebody else to write the queries for you as stored procedures, so you can use them easily.
I do agree with @dlawrence