Dapper not throwing an exception when a select query does not contain an expected column

When using Dapper as follows:
connection.Query("select a from Foo").Select(x => new Foo(x.a, x.b));
My preference would be for an exception to be thrown when trying to access x.b, rather than it simply returning null.
I understand that just returning null may be by design (e.g. for performance reasons), and ideally tests would exist that flag up the missing data, but if I could have both, an exception seems like an additional layer of safety that may be worth having.
Alternatively, is there another way to instantiate an object using a constructor in a way that will throw an exception when a needed column is missing?
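For illustration, this is the kind of manual guard I'd rather not have to write everywhere (a rough sketch; Foo's constructor parameters and the column types are assumed). Dapper's dynamic rows also implement IDictionary<string, object>, so the presence of a column can be checked explicitly:
var foos = connection.Query("select a from Foo")
    .Cast<IDictionary<string, object>>()
    .Select(row =>
    {
        // Fail fast if the expected column is missing from the result set.
        if (!row.ContainsKey("b"))
            throw new InvalidOperationException("Result set is missing expected column 'b'.");
        return new Foo((int)row["a"], (string)row["b"]);
    });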

Yup. Dapper often seems to err on the side of optimism. Every error is a runtime error, and as in this case, some things that you would like to be errors fly straight through.
Were you to use QueryFirst, you wouldn't even need to run the program. This would be caught as you type. You access your query results via generated POCOs, so if a column isn't in the query, it isn't in the POCO. If you use select *, then later delete a column from the DB, you can regenerate all your queries and the compile errors will point to the line in your code that tries to access the deleted column.
Download here.
Short video presentation here.
Disclaimer: I wrote QueryFirst.

Related

How can I use ormlite to escape my insert?

I have ormlite integrated into an application I'm working on. Right now I'm trying to build in functionality to easily switch from automatically inserting data to the database to outputting the equivalent collection of insert statements to a file for later use. The data isn't user input but still requires proper escaping to handle basic gotchas like apostrophes.
Ideas I've burned through:
Dao.create() writes to the database directly, so that's a no-go.
QueryBuilder can't handle inserts.
JdbcDatabaseConnection.compileStatement() might work but the amount of setup required is inappropriate.
Using a java.sql.PreparedStatement has a reasonable enough interface (if toString() returns the SQL like I would hope) but it's not compatible with ormlite's connection types.
This should be very easy and if it is, I can't find the right combination of method calls to make it happen.
Right now I'm trying to build in functionality to easily switch from automatically inserting data to the database to outputting the equivalent collection of insert statements to a file for later use.
Interesting. So one hack would be to use the MappedCreate class. The MappedCreate.build(...) method takes a DatabaseType and a TableInfo, which is available from dao.getTableInfo().
The mappedCreate.toString() method exposes the generated INSERT statement (with a prefix), which might help, but you would still need to convert the ? arguments into the actual values with escaped quotes. That you would have to do in your own code.
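For what it's worth, a rough sketch of that route using only the calls named above (the Account entity, the cast to BaseDaoImpl, and the exact build(...) overload are assumptions to verify against your ORMLite version):
// Sketch only: build the mapped INSERT for a placeholder Account entity
// and inspect the generated statement text.
BaseDaoImpl<Account, Integer> dao =
        (BaseDaoImpl<Account, Integer>) DaoManager.createDao(connectionSource, Account.class);
MappedCreate<Account, Integer> mappedCreate =
        MappedCreate.build(connectionSource.getDatabaseType(), dao.getTableInfo());
// toString() exposes the generated INSERT (with a prefix); the ? placeholders
// still have to be replaced with properly escaped literal values in your own code.
String insertTemplate = mappedCreate.toString();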
Hope this helps somewhat.

Which exception to throw when I find my data in inconsistent state in Scala?

I have a small Scala program which reads data from a data source. This data source is currently a .csv file, so it can contain data inconsistencies.
When implementing a repository pattern for my data, I implemented a method which will return an object by a specific field which should be unique. However, I can't guarantee that it will really be unique, as in a .csv file, I can't enforce data quality in a way I could in a real database.
So, the method checks whether there are one or zero objects with the requested field value in the repository, and that goes well. But I don't know Scala well (or Java for that matter), and the charts of the Java exception hierarchy which I found were not very helpful. Which would be the appropriate exception to throw if there are two objects with the same supposedly unique value? What should I use?
There are two handy exceptions for such cases: IllegalStateException and IllegalArgumentException. The first is used when an object's internal state is illegal (say, calling connect twice); the second (which seems more suitable to your case) is used when data coming from the outside world does not satisfy some prescribed condition: e.g. a negative value passed to a function that only works with zero and positive values.
Neither is something that should be handled programmatically on the caller side (with try/catch); they signify illegal usage of an API and/or logical errors in the program flow, and such errors have to be fixed during development (in your case, they tell the developer supplying the data that the specific field has to contain only unique values).
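For example, a minimal sketch of failing fast in the repository method (the Record type, the records collection, and the field name are assumptions for illustration):
// Sketch only: `records` and `code` stand in for the repository's backing
// collection and the supposedly unique field. IllegalStateException is used here
// because the repository's data is in an inconsistent state; if you view the CSV
// as outside input, IllegalArgumentException fits, as described above.
def findByCode(code: String): Option[Record] = {
  val matches = records.filter(_.code == code)
  if (matches.size > 1)
    throw new IllegalStateException(
      s"Expected at most one Record with code '$code' but found ${matches.size}")
  matches.headOption
}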
You can always use a custom exception, and if this is a web API you might want to map your exception to a Bad Request (400) response code.

How to check if a result contains rows? (FbDataReader.HasRows always returns true!)

I am using the Firebird ADO.NET Data Provider and before I pass the reader on to a consuming service I would like to determine whether any rows were returned. Consider the following snippet:
FbCommand cmd = GetSomeCommandFromTheEther();
FbDataReader reader = cmd.ExecuteReader();
if (reader.HasRows)
DoSomethingWith(reader);
else
TellTheUserWeGotNothing();
What I've now learned is that FbDataReader.HasRows always returns true. In fact, looking at the source code, it appears to be just a wrapper around FbDataReader.command.IsSelectCommand; not only is it useless, it makes the property name "HasRows" a complete misnomer.
In any event, how can I find out whether a given query has rows, without advancing the record pointer? Note that I want to pass the reader off to an external service; if I call FbDataReader.Read() to check its result, I will consume a row and DoSomethingWith() will not get this first row.
I am afraid you have stumbled on a Firebird limitation. As stated in the following Firebird FAQ entry:
Why does FbDataReader.HasRows always return true?
The FbDataReader.HasRows property is implemented for compatibility only. It always returns true because Firebird doesn't have a way of knowing whether a query returns rows without fetching the data.
There is already a mention of this in the Firebird Tracker. Check the issue DNET-305.
On the other hand, in .NET, it seems OleDbDataReader and SqlDataReader, which inherit from DbDataReader, have the same problem, as stated in this MSDN link.
Since FbDataReader inherits from the same class as those, you might want to consider one of the workarounds that Microsoft suggests in its MSDN article, which is to first perform a select count(*). Granted, that is inelegant and a waste of time and resources, but at least it could help you out.
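For what it's worth, a rough sketch of that workaround (the count query text and the connection variable are assumptions mirroring whatever the real command selects):
// Sketch: run a cheap count first so the reader handed to the consumer
// is still positioned before the first row.
using (var countCmd = new FbCommand("select count(*) from Foo", connection))
{
    long rowCount = Convert.ToInt64(countCmd.ExecuteScalar());
    if (rowCount == 0)
    {
        TellTheUserWeGotNothing();
        return;
    }
}

FbCommand cmd = GetSomeCommandFromTheEther();
using (FbDataReader reader = cmd.ExecuteReader())
{
    DoSomethingWith(reader);
}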

Variable table or column names in a function

I'm trying to search all tables and columns in a database, a la here. The suggested technique is to construct SQL query strings and then EXEC them. This works well as a stored procedure. (Another example of variable table/column names is here; again, EXEC is used to execute "dynamic SQL".)
However, my app requires that I do this in a function, not an SP. (Our development framework has trouble obtaining results from an SP.) But in a function, at least on SQL Server 2008 R2, you can't use EXEC; I get this error:
Invalid use of a side-effecting operator 'INSERT EXEC' within a function.
According to the answer to this post, apparently by a Microsoft developer, this is by design; it has nothing to do with the INSERT, only the fact that when you execute dynamically-constructed SQL code, the parser cannot guarantee a lack of side effects. Therefore it won't allow you to create such a function.
So... is there any way to iterate over many tables/columns within a function?
I see from BOL that:
The following statements are valid in a function: ...
EXECUTE statements calling extended stored procedures.
Huh - how could extended SPs be guaranteed to be side-effect free?
But that doesn't help me anyway:
The extended stored procedure, when it is called from inside a function, cannot return result sets to the client. Any ODS APIs that return result sets to the client will return FAIL. The extended stored procedure could connect back to an instance of SQL Server; however, it should not try to join the same transaction as the function that invoked the extended stored procedure.
Since we need the function to return the results of the search, an ESP won't help.
I don't really want to get into extended SPs anyway: adding another programming language would complicate our development environment more than it's worth.
I can think of a few solutions right now, none of which is very satisfactory:
First call an SP that produces the needed data and puts it in a table, then select from the function which merely reads the result from the table; this could be trouble if the search takes a while and two users' searches overlap. Or,
Have the application (not the function) generate a long query naming every table and column name from the db. I wonder if the JDBC driver can handle a query that long. Or,
Have the application (not the function) generate a long series of short queries naming every table and column name from the db. This will make the overall search a lot slower.
Thanks for any suggestions.
P.S. Upon further searching, I stumbled across this question which is closely related. It has no answers.
Update: No longer needed
I think this question is still valid, and we may again have a situation where we need it. However, I don't need an answer anymore for the present problem. After much trial and error I managed to get our application framework to retrieve row results from the stored procedure via the JDBC driver, so getting this to work as a function is unnecessary.
But if anyone posts an answer here that helps with the stated problem, I will be happy to upvote and/or accept it as appropriate.
An SP is basically a predefined SQL statement with some add-ons.
So if you had (pseudocode):
Create SP_DoSomething As
Select * From MyTable
END
and you can't use the SP, then you just execute the SQL directly, as in "Select * From MyTable".
As for that naff SQL code: for a start, you could join the table metadata to the column metadata with a WHERE clause, which would get rid of that line-by-line IF stuff.
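For instance, a set-based sketch over the catalog views (a rough illustration; filter it however your search requires) that produces every table/column pair in a single statement:
-- Sketch: list every user table and its columns in one query,
-- rather than testing tables and columns one by one.
SELECT t.name AS TableName, c.name AS ColumnName
FROM sys.tables AS t
JOIN sys.columns AS c ON c.object_id = t.object_id
ORDER BY t.name, c.name;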
Ask another question, such as "How could this be improved?"; there's lots of scope for more attempts than mine.

NEventStore CommonDomain: Why does EventStoreRepository.GetById open a new stream when an aggregate doesn't exist?

Please explain the reasoning behind making EventStoreRepository.GetById create a new stream when an aggregate doesn't exist. This behavior appears to give GetById a dual responsibility, which in my case could lead to undesirable results. For example, my aggregates always implement a factory method for their creation so that their first event results in the setting of the aggregate's Id. If a command was handled prior to the existence of the aggregate, it may succeed even though the aggregate doesn't have its Id set or other state initialized (a crash-and-burn with a null reference exception is equally likely).
I would much rather have GetById throw an exception if an aggregate doesn't exist to prevent premature creation and command handling.
That was a dumb design decision on my part. I went back and forth on this one. To be honest, I'm still back and forth on it.
The issue is that exceptions should really be used to indicate exceptional or unexpected scenarios. The problem is that a stream not being found can be a common operation and even an expected operation in some regards. I tinkered with the idea of throwing an exception, returning null, or returning an empty stream.
The way to determine if the aggregate is empty is to check the Revision property to see if it's zero.
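For example, a minimal sketch of that check at the call site (the aggregate type, repository variable, and exception are assumptions for illustration; depending on the CommonDomain version the member is exposed as Version on the aggregate or Revision on the underlying stream):
// Sketch only: treat a zero version/revision as "aggregate does not exist"
// and fail fast instead of handling a command against an empty aggregate.
var aggregate = repository.GetById<MyAggregate>(aggregateId);
if (aggregate.Version == 0)
    throw new InvalidOperationException(
        "Aggregate " + aggregateId + " was not found in the event store.");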