I have a funny issue with an Entity Framework function import. I am importing a stored procedure from MS SQL Server 2008 R2 that returns just one string. However, it is too complex for EF to infer the return type, so when defining the function import I had to specify manually that the function returns a collection of scalars (ObjectResult<global::System.String>, as generated).
Sometimes the procedure returns a string containing only digits and starting with a zero (e.g., 01234). When I access this result in code, it turns out the leading zero is gone (1234)!
I know several workarounds, so that is not the question. I want to understand what's going on.
My wild guess is that in my case - when the SP is too complex for EF to "predict" its result - EF first tries to "guess" the type of the returned data from the format of the data itself. That is, it sees a number (01234) and converts it to an integer type (int, short, whatever). Then it sees that I want a string and converts the number back to a string, but during these conversions the leading zero is of course lost. Is that true, or is there a better explanation?
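The kind of silent conversion I am guessing at is easy to reproduce in plain T-SQL - this is only an illustration of the suspected mechanism, not proof of what EF actually does:
SELECT CAST('01234' AS int);                        -- 1234: the leading zero cannot survive as a number
SELECT CAST(CAST('01234' AS int) AS varchar(10));   -- '1234': converting back does not restore it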
UPDATE: Here is a screenshot of the function import definition (screenshot not reproduced).
I'm currently improving a client library for PostgreSQL; the library already has a working communication protocol, including the DataRow and RowDescription messages.
The problem I'm facing right now is how to deal with values.
Returning a plain string for, say, an array of integers is kind of pointless.
From my research I found that other libraries (for Python, for example) either return the value as an unmodified string or convert primitive types, including arrays.
What I mean by conversion is turning the raw Postgres DataRow data into a native value: a Postgres integer is parsed as a Python number, a Postgres boolean as a Python boolean, and so on.
Should I make a second query to get the column type information and use its converters, or should I leave the values as plain strings?
You could opt to get the array values in the internal format by setting the corresponding "result-column format code" in the Bind message to 1, but that is typically a bad choice, since the internal format varies from type to type and may even depend on the server's architecture.
So your best option is probably to parse the string representation of the array on the client side, including all the escape characters.
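As a minimal sketch (in C#, assuming the simple case of a one-dimensional integer array with no NULLs, quoted elements, or nested dimensions - a full parser must also handle all of those plus the escape characters):
// Parse the text form of a one-dimensional integer array, e.g. "{1,2,3}".
static int[] ParseIntArray(string text)
{
    var inner = text.Trim().TrimStart('{').TrimEnd('}');
    if (inner.Length == 0)
        return new int[0];                 // "{}" is an empty array
    var parts = inner.Split(',');
    var result = new int[parts.Length];
    for (int i = 0; i < parts.Length; i++)
        result[i] = int.Parse(parts[i]);   // each element is plain digits in this simple case
    return result;
}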
When it comes to finding the base type for an array type, there is no option other than querying pg_type, like this:
SELECT typelem::regtype FROM pg_type WHERE oid = 1007;
typelem
---------
integer
(1 row)
You could cache these values on the client side so that you don't have to query more than once per type and database session.
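For example, a minimal cache sketch in C# (the resolver delegate is a stand-in for however your client runs the pg_type query above):
using System;
using System.Collections.Generic;

// Remember typelem lookups for the lifetime of the session so each
// array OID costs at most one round trip to the server.
class ElementTypeCache
{
    private readonly Dictionary<uint, string> _byArrayOid = new Dictionary<uint, string>();

    public string Get(uint arrayOid, Func<uint, string> resolveFromServer)
    {
        string name;
        if (!_byArrayOid.TryGetValue(arrayOid, out name))
        {
            name = resolveFromServer(arrayOid);   // SELECT typelem::regtype FROM pg_type WHERE oid = ...
            _byArrayOid[arrayOid] = name;
        }
        return name;
    }
}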
I'm relatively new to DB2 for IBM i and am wondering how to properly cleanse data for a dynamically generated query in PHP.
For example, if writing a PHP class which handles all database interactions, one would have to pass table names and such, some of which cannot be passed in using db2_bind_param(). Does db2_prepare() cleanse the query on its own? Or is it possible for a malformed query to be "executed" within a db2_prepare() call? I know there is db2_execute(), but the DB is doing something in db2_prepare() and I'm not sure what (just syntax validation?).
I know that if the passed values are in no way affected by user input there shouldn't be much of an issue, but if one wanted to cleanse data before using it in a query (without using db2_prepare()/db2_execute()), what is the checklist for DB2? The only thing I can find is to escape single quotes by prefixing them with another single quote. Is that really all there is to watch out for?
There is no magic "cleansing" happening when you call db2_prepare() -- it will simply attempt to compile the string you pass as a single SQL statement. If it is not a valid DB2 SQL statement, an error is returned. The same goes for db2_exec(), only it does in one call what db2_prepare() and db2_execute() do separately.
EDIT (to address further questions from the OP).
Execution of every SQL statement has three stages:
Compilation (or preparation), when the statement is parsed, syntactically and semantically analyzed, the user's privileges are determined, and the statement execution plan is created.
Parameter binding -- an optional step that is only necessary when the statement contains parameter markers. At this stage each parameter data type is verified to match what the statement text expects based on the preparation.
Execution proper, when the query plan generated at step 1 is performed by the database engine, optionally using the parameter (variable) values provided at step 2. The statement results, if any, are then returned to the client.
db2_prepare(), db2_bind_param(), and db2_execute() correspond to steps 1, 2 and 3 respectively. db2_exec() combines steps 1 and 3, skipping step 2 and assuming the absence of parameter markers.
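A minimal sketch of the three steps using the ibm_db2 functions mentioned above (assuming an existing connection $conn; the table and column names are illustrative):
$sql = 'SELECT * FROM MyTab WHERE MyIntCol = ?';
$stmt = db2_prepare($conn, $sql);                 // step 1: compile the statement
$id = 42;
db2_bind_param($stmt, 1, 'id', DB2_PARAM_IN);     // step 2: bind the PHP variable $id to marker 1
db2_execute($stmt);                               // step 3: run the prepared plan
while ($row = db2_fetch_assoc($stmt)) {
    // process $row
}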
Now, speaking about parameter safety, the binding step ensures that the supplied parameter values correspond to the expected data type constraints. For example, in the query containing something like ...WHERE MyIntCol = ?, if I attempt to bind a character value to that parameter it will generate an error.
If instead I were to use db2_exec() and compose a statement like so:
$stmt = "SELECT * FROM MyTab WHERE MyIntCol=" . $parm
I could easily pass something like "0 or 1=1" as the value of $parm, which would produce a perfectly valid SQL statement that db2_exec() would then happily parse, prepare, and execute.
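In other words, the string that actually reaches db2_exec() would be
SELECT * FROM MyTab WHERE MyIntCol=0 or 1=1
which returns every row in the table instead of the one you intended.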
I have been implementing user-defined types in PostgreSQL 9.2 and got confused.
In the PostgreSQL 9.2 documentation, there is a section (35.11) on user defined types. In the third paragraph of that section, the documentation refers to input and output functions that are used to construct a type. I am confused about the purpose of these functions. Are they concerned with on-disk representation or only in-memory representation? In the section referred to above, after defining the input and output functions, it states that:
If we want to do anything more with the type than merely store it,
we must provide additional functions to implement whatever operations
we'd like to have for the type.
Do the input and output functions deal with serialization?
As I understand it, the input function is the one used when performing an INSERT INTO, and the output function is the one used when performing a SELECT on the type. So basically, if we want to perform an INSERT INTO, then we need a serialization function embedded in, or invoked by, the input or output function. Can anyone help explain this to me?
Types must have a text representation, so that values of this type can be expressed as literals in a SQL query, and returned as results in output columns.
For example, '2013-01-20' is a text representation of a date. It's possible to write VALUES('2013-01-20'::date) in a SQL statement, because the input function of the date type recognizes this string as a date and transforms it into an internal representation (used both in memory and for storing to disk).
Conversely, when client code issues SELECT date_field FROM table, the values inside date_field are returned in their text representation, which is produced by the type's output function from the internal representation (unless the client requested a binary format for this column).
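For reference, this is the shape of the type declaration from the manual's complex-number example in section 35.11 (abridged; complex_in and complex_out are C functions that must be created beforehand):
-- The input function turns the text form into the internal form; the
-- output function does the reverse. The internal form is what is kept
-- in memory and written to disk.
CREATE TYPE complex (
   internallength = 16,
   input = complex_in,
   output = complex_out
);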
I have a field in a table in the following format: 1_2..1_10|1_6|1_8|, where the range 1_2..1_10 includes 1_2, 1_3, and so on up to 1_10.
How can I select the rows whose field includes the number 1_3?
1st suggestion: Get rights to modify the db structure and figure out how to better store the Navision string.
2nd suggestion: CLR
I'll assume you are relatively comfortable with each of these concepts. If you aren't, they are very well documented all over the web.
My approach would be to use a CLR function, as there are some high-level things that are awkward in SQL but that C# takes care of quite easily. The pseudo walkthrough would go something like this.
Implementation
Create a CLR function and deploy it on the SQL Server instance.
Change the SQL query to search the value returned by the CLR function (the expanded filter) for the Navision filter value "1_3".
CLR Function Logic
Create a C# function that takes in the value of the filter field and returns a string value (a sketch follows after this list).
The CLR function splits the filter field on the | character into a list.
Inside the CLR function, create a second list and iterate over the first one. When you find a ranged string, split it on ".." and add every value in the range to the second list. When you find a value that isn't ranged, simply add it to the second list.
Join the contents of the second list together on the "|" character.
Return the joined value.
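A hedged sketch of that function (the name ExpandNavisionFilter is illustrative; it assumes each ranged bound is a prefix such as "1_" followed by an integer):
using System;
using System.Collections.Generic;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public class NavisionFilter
{
    // Expands "1_2..1_10|1_6|1_8|" into "1_2|1_3|...|1_10|1_6|1_8".
    [SqlFunction(IsDeterministic = true)]
    public static SqlString ExpandNavisionFilter(SqlString filter)
    {
        if (filter.IsNull) return SqlString.Null;
        var expanded = new List<string>();
        foreach (var part in filter.Value.Split(new[] { '|' }, StringSplitOptions.RemoveEmptyEntries))
        {
            var bounds = part.Split(new[] { ".." }, StringSplitOptions.None);
            if (bounds.Length == 2)
            {
                // "1_2..1_10" -> prefix "1_", numeric bounds 2 and 10
                int us = bounds[0].LastIndexOf('_');
                string prefix = bounds[0].Substring(0, us + 1);
                int lo = int.Parse(bounds[0].Substring(us + 1));
                int hi = int.Parse(bounds[1].Substring(bounds[1].LastIndexOf('_') + 1));
                for (int i = lo; i <= hi; i++)
                    expanded.Add(prefix + i);
            }
            else
            {
                expanded.Add(part);   // plain, non-ranged value
            }
        }
        return new SqlString(string.Join("|", expanded));
    }
}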
SQL Logic
SELECT Field1, Field2, dbo.CLRFunctionName(FilterValue) AS FixedFilterValue
FROM SomeTable
WHERE '|' + dbo.CLRFunctionName(FilterValue) + '|' LIKE '%|1[_]3|%';
(The column alias cannot be referenced in the WHERE clause, so the function call is repeated there; [_] escapes LIKE's single-character wildcard, and the surrounding pipes keep values such as 1_30 from matching.)
I am attempting to add a scalar query to a dataset. The query is pretty straightforward; it just adds up some decimal values in a few columns and returns the total. I am 100% confident that only one row and one column are returned, and that it is of decimal type (SQL money type). The problem is that for some reason the generated method (in the .designer.cs code file) returns a value of type object, when it should be decimal. What's strange is that another scalar query with the exact same SQL returns decimal like it should.
How does the dataset designer determine the data type, and how can I tell it to return decimal?
Instead of using a scalar stored procedure, use a scalar function instead. The dataset designer will correctly detect the data type for the scalar function. You only need to use a scalar stored procedure if you are making changes to data during the query. A scalar function is read-only. You can also very conveniently drag the function into your dataset as a query instead of having to go through the wizard.
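For example (a hypothetical function; the explicit RETURNS clause is what gives the designer a concrete type to detect):
-- Scalar function: the designer reads Decimal from RETURNS money.
CREATE FUNCTION dbo.GetOrderTotal (@OrderId int)
RETURNS money
AS
BEGIN
    RETURN (SELECT SUM(Price) FROM OrderLines WHERE OrderId = @OrderId);
END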
If you insist on using a stored procedure, or a regular query, you can always cast your result like so (in VB)...
Dim ta As New DataSet1TableAdapters.QueriesTableAdapter()
' The generated method returns Object, so unbox it explicitly:
Dim result As Decimal = DirectCast(ta.StoredProcedure1(), Decimal)
or, with Option Infer On:
Dim resultInfer = DirectCast(ta.StoredProcedure1(), Decimal)
First of all, fill the schema of the DataSet table from the source using the data adapter's FillSchema method, e.g. sqlAdp.FillSchema(ds.Tables(0), SchemaType.Source).
Then fill the DataSet:
sqlAdp.Fill(ds.Tables(0))
The table should then return the data types of the source table.
Is that what you were looking for?