Column of Binary Type -1 for MAX value - biml

I am building a VS solution using Biml, with tiered Biml files and C# code files.
When I run each individual Biml file, it compiles and generates output in the viewer.
When I check for errors, it throws this one:
"Column of binary type must specify positive Length or -1 to represent the MAX value."
In one of my C# code files I have a case statement that maps source data types to SQL/Biml data types.
In that code I specify that the length of a binary column is -1, but I still get the error.
Any help would be appreciated.
I have tried changing the -1 to 10 and also to 1, but I still get the same error.
DataRow.cs FILE CONTENTS
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Varigence.Biml.Extensions;
public static class DataRow
{
    public static string GetBiml(this System.Data.DataRow dataRow)
    {
        StringBuilder biml = new StringBuilder("");
        biml.Append("<Column Name=\"")
            .Append(dataRow["ColumnName"])
            .Append("\" DataType=\"")
            .Append(dataRow["DataTypeBiml"])
            .Append("\"");

        if (dataRow["DataTypeBiml"].ToString().Contains("String"))
            biml.Append(" Length=\"").Append(dataRow["CharLength"]).Append("\"");
        else if (dataRow["DataTypeBiml"] == "Decimal")
            biml.Append(" Precision=\"").Append(dataRow["NumericPrecision"]).Append("\" Scale=\"").Append(dataRow["NumericScale"]).Append("\"");
        else if (dataRow["DataTypeBiml"] == "Binary")
            biml.Append(" Length=\"-1 \" ");

        if (dataRow["IsNullable"] != "NO")
            biml.Append(" IsNullable=\"true\"");
        else
            biml.Append(" IsNullable=\"false\"");

        biml.Append(" />");
        return biml.ToString();
    }
}
1-ReadMetaData.biml
<#@ template tier="10" #>
<#@ import namespace="System.Data" #>
<#@ import namespace="System.Data.SqlClient" #>
<#@ code file="Helper.cs" #>
<#@ code file="DataRow.cs" #>
<#
string targetConnection = @"Data Source=SERVER;Initial Catalog=DATABASE;Integrated Security=SSPI;";
#>
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
<Tables>
<#
var sourceTables = Helper.GetIncludedSourceTablesList();
// Loop through each source table in the included source tables list
foreach (Table sourceTable in sourceTables)
{
#>
<Table Name="<#=sourceTable.Name#>" SchemaName="schema">
<#
string targetQuery = @"SELECT OrdinalPosition = col.ORDINAL_POSITION,
ColumnName = col.COLUMN_NAME,
DataType = col.DATA_TYPE,
CharLength = ISNULL(col.CHARACTER_MAXIMUM_LENGTH, 0),
NumericPrecision = col.NUMERIC_PRECISION,
NumericScale = col.NUMERIC_SCALE,
IsNullable = col.IS_NULLABLE,
DataTypeBiml = CASE col.DATA_TYPE
WHEN 'bigint' THEN 'Int64'
WHEN 'bit' THEN 'Boolean'
WHEN 'char' THEN 'AnsiStringFixedLength'
WHEN 'datetime' THEN 'DateTime'
WHEN 'decimal' THEN 'Decimal'
WHEN 'float' THEN 'Double'
WHEN 'int' THEN 'Int32'
WHEN 'nchar' THEN 'StringFixedLength'
WHEN 'nvarchar' THEN 'String'
WHEN 'smallint' THEN 'Int16'
WHEN 'timestamp' THEN 'Binary'
WHEN 'tinyint' THEN 'Byte'
WHEN 'varchar' THEN 'AnsiString'
WHEN 'uniqueidentifier' THEN 'Guid'
ELSE 'Unknown'
END
FROM (
SELECT lkup.TABLE_SCHEMA,
lkup.TABLE_NAME,
ORDINAL_POSITION_MAX = MAX(lkup.ORDINAL_POSITION)
FROM INFORMATION_SCHEMA.COLUMNS AS lkup
WHERE lkup.TABLE_SCHEMA = 'dbo'
AND lkup.TABLE_NAME = '" + sourceTable.Name + @"'
GROUP BY lkup.TABLE_SCHEMA,
lkup.TABLE_NAME
) AS maxord
INNER JOIN INFORMATION_SCHEMA.COLUMNS AS col ON (maxord.TABLE_SCHEMA = col.TABLE_SCHEMA
AND maxord.TABLE_NAME = col.TABLE_NAME)
ORDER BY col.ORDINAL_POSITION;";
DataTable targetTable = new DataTable();
SqlDataAdapter targetAdapter = new SqlDataAdapter(targetQuery,targetConnection);
targetAdapter.Fill(targetTable);
#>
<Columns>
<# foreach (DataRow targetRow in targetTable.Rows) {#>
<#=targetRow.GetBiml()#>
<# } #>
</Columns>
</Table>
<# } #>
</Tables>
</Biml>
TableList.cs FILE CONTENTS
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Varigence.Biml.Extensions;
public class Helper
{
    public static List<Table> GetIncludedSourceTablesList()
    {
        var tablesList = new List<Table>
        {
            new Table() { Name = "Tab1" },
            new Table() { Name = "Tab2" },
            new Table() { Name = "Tab3" },
            new Table() { Name = "Tab4" },
            new Table() { Name = "Tab5" },
            new Table() { Name = "Tab6" }
        };
        return tablesList;
    }
}
public class Table
{
    public string Name { get; set; }
}
This is the relevant part of the ReadMetaData.biml output in the viewer; note that no Length is emitted for the binary column:
<Column Name="RowVers" DataType="Binary" IsNullable="true" />

Looking at your code, it may be as simple as a trailing space after your -1, which could cause the Length property to be discarded as invalid:
So this:
biml.Append(" Length=\"-1 \" ");
Should become this:
biml.Append(" Length=\"-1\" ");

I figured out the mistake: I had not converted DataTypeBiml to a string before comparing it. Once I did, the length was emitted correctly.
else if (dataRow["DataTypeBiml"].ToString() == "Binary")
biml.Append(" Length=\"-1\"");
Thanks for your suggestions.
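For anyone wondering why the original comparison compiled but never matched: the DataRow indexer returns object, so == against a string literal is reference equality, not value equality. A minimal stand-alone sketch (string.Concat stands in for a value read from SQL; hypothetical value, not the poster's data):

```csharp
using System;

class WhyItNeverMatched
{
    static void Main()
    {
        // A DataRow indexer returns object, and a string read from SQL is a
        // runtime instance, not the interned literal. string.Concat stands in
        // for that here (hypothetical value, not the poster's data).
        object fromDb = string.Concat("Bin", "ary");

        // With an object operand, == is reference equality (the compiler
        // warns CS0252 about exactly this pattern), so the texts match but
        // the references do not:
        Console.WriteLine(fromDb == (object)"Binary");    // False

        // Converting to string first gives value equality, as in the fix:
        Console.WriteLine(fromDb.ToString() == "Binary"); // True
    }
}
```

This is why calling .ToString() before comparing (or using Equals) makes the Length attribute appear.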

Related

I can't display parameter value in report header in crystal reports

I'm developing a small application in C# and I'm using Crystal Reports for reporting. I want to display parameter values in the report header, but I can't. How can I display parameter values in the report header?
ClassParams.EMANET_KITAP_ID = txtKitapID.Text;
ParameterFields From = new ParameterFields();
ParameterField KID = new ParameterField();
KID.Name = "EMANET_KITAP_ID";
ParameterDiscreteValue val = new ParameterDiscreteValue();
val.Value = ClassParams.EMANET_KITAP_ID;
KID.CurrentValues.Add(val);
From.Add(KID);
crystalReportViewer1.ParameterFieldInfo = From;
class ClassParams
{
    public static string KID;
    public static string EMANET_KITAP_ID
    {
        get { return KID; }
        set { KID = value; }
    }
}
Most likely, you have a multi-value parameter.
Since such a parameter stores the values as an array, you need to "flatten" the array.
If it's a String parameter, you can simply create a formula like this:
Join({?yourStringParameter}, ', ' );
If the data type is not string, loop through the array and concatenate the values to a string variable.
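The Crystal formula above is the report-side fix; the same flattening idea can be expressed in C# for clarity (hypothetical values, not tied to any particular report):

```csharp
using System;

class FlattenDemo
{
    static void Main()
    {
        // A multi-value parameter arrives as an array of values;
        // join them into one display string before showing it in a header.
        string[] values = { "101", "205", "317" };
        string flattened = string.Join(", ", values);
        Console.WriteLine(flattened); // 101, 205, 317
    }
}
```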

Joining two datatables in Powershell?

I am looking for a solution to join two large DataTables in PowerShell. Each final table will have ca. 1 million rows.
I spent some time to create a workaround via LINQ and an embedded C# code, but I am not sure if this is the most efficient way.
Here a demo code showing my custom datatable-join in action:
cls
Remove-Variable * -ea 0
$ErrorActionPreference = 'stop'
add-type -TypeDefinition @"
using System;
using System.Data;
using System.Linq;
namespace Linq {
    public class DataTables {
        public static DataTable Join(DataTable dt1, string column1, DataTable dt2, string column2) {
            // Build the schema of the joined table: every column of both
            // inputs, prefixed with its table name.
            DataTable dt3 = new DataTable();
            DataTable[] dta = { dt1, dt2 };
            for (int i = 0; i < dta.Length; i++) {
                string tableName = dta[i].TableName;
                if (string.IsNullOrEmpty(tableName)) { tableName = "table" + i.ToString(); }
                foreach (DataColumn column in dta[i].Columns) {
                    string columnName = tableName + '/' + column.ColumnName;
                    dt3.Columns.Add(columnName, column.DataType);
                }
            }
            // Inner join via LINQ, concatenating the matching rows' values.
            return (
                dt1.AsEnumerable().Join(
                    dt2.AsEnumerable(),
                    a => a[column1],
                    b => b[column2],
                    (a, b) => {
                        DataRow row = dt3.NewRow();
                        row.ItemArray = a.ItemArray.Concat(b.ItemArray).ToArray();
                        return row;
                    }
                )
            ).CopyToDataTable();
        }
    }
}
"@ -ReferencedAssemblies 'System.Xml','System.Data','System.Data.DataSetExtensions'
# define customer table:
$table_customer = [System.Data.DataTable]::new('customer')
[void]$table_customer.Columns.Add('id', 'int')
[void]$table_customer.Columns.Add('name', 'string')
# define order table:
$table_order = [System.Data.DataTable]::new('order')
[void]$table_order.Columns.Add('id', 'int')
[void]$table_order.Columns.Add('customer_id', 'int')
[void]$table_order.Columns.Add('name', 'string')
# fill both tables:
$oId = 0
foreach($cId in (1..3)) {
[void]$table_customer.rows.Add($cId, "customer_$cId")
foreach($o in 1..3) {
$oId++
[void]$table_order.rows.Add($oId, $cId, "customer_$cId order_$o")
}
}
# join the tables:
$table_joined = [Linq.DataTables]::Join($table_customer, 'id', $table_order, 'customer_id')
$table_customer | ft -AutoSize
$table_order | ft -AutoSize
$table_joined | ft -AutoSize
Is there any built-in function for this that I have missed? I was playing with System.Data.DataRelation, but that seems to be more of a filter for a second table based on a single criterion from the first table.
In case there is no better alternative, I am happy to share the above code with everyone.

FunctionImport in entity framework 4 issue

I'm using entity framework 4.
I have a stored procedure that just updates one value in my table, namely the application state ID. So I created a stored procedure that looks like this:
ALTER PROCEDURE [dbo].[UpdateApplicationState]
(
    @ApplicationID INT,
    @ApplicationStateID INT
)
AS
BEGIN
    UPDATE
        [Application]
    SET
        ApplicationStateID = @ApplicationStateID
    WHERE
        ApplicationID = @ApplicationID;
END
I created a function import called UpdateApplicationState. I had initially set its return type to null, but then it wasn't created in the context. So I changed its return type to int. Now it was created in the context. Is it wise to return something from my stored procedure?
Here is my method in my ApplicationRepository class:
public void UpdateApplicationState(int applicationID, int applicationStateID)
{
var result = context.UpdateApplicationState(applicationID, applicationStateID);
}
Here is my calling code to this method in my view:
applicationRepository.UpdateApplicationState(id, newApplicationStateID);
When I run it, I get the following error:
"The data reader returned by the store data provider does not have enough columns for the query requested."
Any idea/advice on what I can do to get this to work?
Thanks
To get POCO to work with function imports that return null, you can customize the .Context.tt file like this.
Find the "Function Imports" named region (the section that starts with region.Begin("Function Imports"); and ends with region.End();) in the .Context.tt file and replace that whole section with the following:
region.Begin("Function Imports");
foreach (EdmFunction edmFunction in container.FunctionImports)
{
var parameters = FunctionImportParameter.Create(edmFunction.Parameters, code, ef);
string paramList = String.Join(", ", parameters.Select(p => p.FunctionParameterType + " " + p.FunctionParameterName).ToArray());
var isReturnTypeVoid = edmFunction.ReturnParameter == null;
string returnTypeElement = String.Empty;
if (!isReturnTypeVoid)
returnTypeElement = code.Escape(ef.GetElementType(edmFunction.ReturnParameter.TypeUsage));
#>
<# if (isReturnTypeVoid) { #>
<#=Accessibility.ForMethod(edmFunction)#> void <#=code.Escape(edmFunction)#>(<#=paramList#>)
<# } else { #>
<#=Accessibility.ForMethod(edmFunction)#> ObjectResult<<#=returnTypeElement#>> <#=code.Escape(edmFunction)#>(<#=paramList#>)
<# } #>
{
<#
foreach (var parameter in parameters)
{
if (!parameter.NeedsLocalVariable)
{
continue;
}
#>
ObjectParameter <#=parameter.LocalVariableName#>;
if (<#=parameter.IsNullableOfT ? parameter.FunctionParameterName + ".HasValue" : parameter.FunctionParameterName + " != null"#>)
{
<#=parameter.LocalVariableName#> = new ObjectParameter("<#=parameter.EsqlParameterName#>", <#=parameter.FunctionParameterName#>);
}
else
{
<#=parameter.LocalVariableName#> = new ObjectParameter("<#=parameter.EsqlParameterName#>", typeof(<#=parameter.RawClrTypeName#>));
}
<#
}
#>
<# if (isReturnTypeVoid) { #>
base.ExecuteFunction("<#=edmFunction.Name#>"<#=code.StringBefore(", ", String.Join(", ", parameters.Select(p => p.ExecuteParameterName).ToArray()))#>);
<# } else { #>
return base.ExecuteFunction<<#=returnTypeElement#>>("<#=edmFunction.Name#>"<#=code.StringBefore(", ", String.Join(", ", parameters.Select(p => p.ExecuteParameterName).ToArray()))#>);
<# } #>
}
<#
}
region.End();
What I'm doing here: instead of ignoring function imports that have no return type, the template now generates a void method for them. I hope this is helpful.
It is because you are not actually returning anything from your stored procedure. Add a line like the one below (SELECT @@ROWCOUNT) to your SP, and it will execute properly.
BEGIN
    ...
    SELECT @@ROWCOUNT
END
While this solution addresses your issue and returns the number of rows affected by your SP, I am not clear on why this is an issue for you:
I had initially set its return type to null, but then it wasn't created in the context.
When doing a Function Import, you can select "None" as return type and it will generate a new method on your ObjectContext with a return type of int. This method basically executes a stored procedure that is defined in the data source; discards any results returned from the function; and returns the number of rows affected by the execution.
EDIT: Why a Function without return value is ignored in a POCO Scenario:
Drilling into ObjectContext T4 template file coming with ADO.NET C# POCO Entity Generator reveals why you cannot see your Function in your ObjectContext class: Simply it's ignored! They escape to the next iteration in the foreach loop that generates the functions.
The workaround for this is to change the T4 template to actually generate a method for Functions without return type or just returning something based on the first solution.
region.Begin("Function Imports");
foreach (EdmFunction edmFunction in container.FunctionImports)
{
var parameters = FunctionImportParameter.Create(edmFunction.Parameters, code, ef);
string paramList = String.Join(", ", parameters.Select(p => p.FunctionParameterType + " " + p.FunctionParameterName).ToArray());
// Here is why a Function without return value is ignored:
if (edmFunction.ReturnParameter == null)
{
continue;
}
string returnTypeElement = code.Escape(ef.GetElementType(edmFunction.ReturnParameter.TypeUsage));
...

Cannot WriteXML for DataTable because Windows Search Returns String Array for Authors Property

The System.Author Windows property is a multiple value string. Windows Search returns this value as an array of strings in a DataColumn. (The column's data-type is string[] or String().) When I call the WriteXML method on the resulting data-table, I get the following InvalidOperationException exception.
Is there a way to specify the data-table's xml-serializer to use for specific columns or specific data-types?
Basically, how can I make WriteXML work with this data-table?
System.InvalidOperationException: Type System.String[] does not implement IXmlSerializable interface therefore can not proceed with serialization.
You could easily copy your DataTable, changing the offending Authors column to a String and joining the string[] data with a proper delimiter like "|" or "; ".
DataTable xmlFriendlyTable = oldTable.Clone();
xmlFriendlyTable.Columns["Author"].DataType = typeof(String);
xmlFriendlyTable.Columns["Author"].ColumnMapping = MappingType.Element;
foreach (DataRow row in oldTable.Rows)
{
    object[] rowData = row.ItemArray;
    object[] cpyRowData = new object[rowData.Length];
    for (int i = 0; i < rowData.Length; i++)
    {
        // Flatten any string[] cell into a single delimited string.
        if (rowData[i] != null && rowData[i].GetType() == typeof(String[]))
        {
            cpyRowData[i] = String.Join("; ", (rowData[i] as String[]));
        }
        else
        {
            cpyRowData[i] = rowData[i];
        }
    }
    xmlFriendlyTable.Rows.Add(cpyRowData);
}
xmlFriendlyTable.WriteXml( ... );
NOTE: I wrote the above in the web browser, so there may be syntax errors.

using the TSqlParser

I'm attempting to parse SQL using the TSql100Parser provided by microsoft. Right now I'm having a little trouble using it the way it seems to be intended to be used. Also, the lack of documentation doesn't help. (example: http://msdn.microsoft.com/en-us/library/microsoft.data.schema.scriptdom.sql.tsql100parser.aspx )
When I run a simple SELECT statement through the parser it returns a collection of TSqlStatements which contains a SELECT statement.
Trouble is, the TSqlSelect statement doesn't contain attributes such as a WHERE clause, even though the clause is implemented as a class. http://msdn.microsoft.com/en-us/library/microsoft.data.schema.scriptdom.sql.whereclause.aspx
The parser does recognise the WHERE clause as such, looking at the token stream.
So, my question is, am I using the parser correctly? Right now the token stream seems to be the most useful feature of the parser...
My Test project:
public static void Main(string[] args)
{
    var parser = new TSql100Parser(false);
    IList<ParseError> Errors;
    IScriptFragment result = parser.Parse(
        new StringReader("Select col from T1 where 1 = 1 group by 1;" +
                         "select col2 from T2;" +
                         "select col1 from tbl1 where id in (select id from tbl);"),
        out Errors);
    var Script = result as TSqlScript;
    foreach (var ts in Script.Batches)
    {
        Console.WriteLine("new batch");
        foreach (var st in ts.Statements)
        {
            IterateStatement(st);
        }
    }
}

static void IterateStatement(TSqlStatement statement)
{
    Console.WriteLine("New Statement");
    if (statement is SelectStatement)
    {
        PrintStatement(statement);
    }
}
Yes, you are using the parser correctly.
As Damien_The_Unbeliever points out, within the SelectStatement there is a QueryExpression property which will be a QuerySpecification object for your third select statement (with the WHERE clause).
This represents the 'real' SELECT bit of the query (whereas the outer SelectStatement object you are looking at has just got the 'WITH' clause (for CTEs), 'FOR' clause (for XML), 'ORDER BY' and other bits)
The QuerySpecification object is the object with the FromClauses, WhereClause, GroupByClause etc.
So you can get to your WHERE Clause by using:
((QuerySpecification)((SelectStatement)statement).QueryExpression).WhereClause
which has a SearchCondition property etc. etc.
Quick glance around would indicate that it contains a QueryExpression, which could be a QuerySpecification, which does have the Where clause attached to it.
If someone lands here wanting to know how to get all the elements of a SELECT statement, the following code shows how:
QuerySpecification spec = (QuerySpecification)(((SelectStatement)st).QueryExpression);
StringBuilder sb = new StringBuilder();

sb.AppendLine("Select Elements");
foreach (var elm in spec.SelectElements)
    sb.Append(((Identifier)((Column)((SelectColumn)elm).Expression).Identifiers[0]).Value);
sb.AppendLine();

sb.AppendLine("From Elements");
foreach (var elm in spec.FromClauses)
    sb.Append(((SchemaObjectTableSource)elm).SchemaObject.BaseIdentifier.Value);
sb.AppendLine();

sb.AppendLine("Where Elements");
BinaryExpression binaryexp = (BinaryExpression)spec.WhereClause.SearchCondition;
sb.Append("operator is " + binaryexp.BinaryExpressionType);
if (binaryexp.FirstExpression is Column)
    sb.Append(" First exp is " + ((Identifier)((Column)binaryexp.FirstExpression).Identifiers[0]).Value);
if (binaryexp.SecondExpression is Literal)
    sb.Append(" Second exp is " + ((Literal)binaryexp.SecondExpression).Value);
I had to split a SELECT statement into pieces. My goal was to COUNT how many records a query will return. My first solution was to build a sub-query such as
SELECT COUNT(*) FROM (select id, name from T where cat='A' order by id) as QUERY
The problem was that in this case the order clause raises the error "The ORDER BY clause is not valid in views, inline functions, derived tables, sub-queries, and common table expressions, unless TOP or FOR XML is also specified"
So I built a parser that splits a SELECT statement into fragments using the TSql100Parser class.
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.Data.Schema.ScriptDom;
using Microsoft.Data.Schema.ScriptDom.Sql;
...
public class SelectParser
{
    public string Parse(string sqlSelect, out string fields, out string from, out string groupby, out string where, out string having, out string orderby)
    {
        TSql100Parser parser = new TSql100Parser(false);
        TextReader rd = new StringReader(sqlSelect);
        IList<ParseError> errors;
        var fragments = parser.Parse(rd, out errors);

        fields = string.Empty;
        from = string.Empty;
        groupby = string.Empty;
        where = string.Empty;
        orderby = string.Empty;
        having = string.Empty;

        if (errors.Count > 0)
        {
            var retMessage = string.Empty;
            foreach (var error in errors)
            {
                retMessage += error.Identifier + " - " + error.Message + " - position: " + error.Offset + "; ";
            }
            return retMessage;
        }

        try
        {
            // Extract the query assuming it is a SelectStatement
            var query = ((fragments as TSqlScript).Batches[0].Statements[0] as SelectStatement).QueryExpression;
            // Construct the From clause with the optional joins
            from = (query as QuerySpecification).FromClauses[0].GetString();
            // Extract the where clause
            where = (query as QuerySpecification).WhereClause.GetString();
            // Get the field list
            var fieldList = new List<string>();
            foreach (var f in (query as QuerySpecification).SelectElements)
                fieldList.Add((f as SelectColumn).GetString());
            fields = string.Join(", ", fieldList.ToArray());
            // Get the group by clause
            groupby = (query as QuerySpecification).GroupByClause.GetString();
            // Get the having clause of the query
            having = (query as QuerySpecification).HavingClause.GetString();
            // Get the order by clause
            orderby = ((fragments as TSqlScript).Batches[0].Statements[0] as SelectStatement).OrderByClause.GetString();
        }
        catch (Exception ex)
        {
            return ex.ToString();
        }
        return string.Empty;
    }
}
public static class Extension
{
    /// <summary>
    /// Get a string representing the SQL source fragment
    /// </summary>
    /// <param name="statement">The SQL statement to get the string from; can be any derived class</param>
    /// <returns>The SQL that represents the object</returns>
    public static string GetString(this TSqlFragment statement)
    {
        if (statement == null) return string.Empty;
        string s = string.Empty;
        for (int i = statement.FirstTokenIndex; i <= statement.LastTokenIndex; i++)
        {
            s += statement.ScriptTokenStream[i].Text;
        }
        return s;
    }
}
And to use this class simply:
string fields, from, groupby, where, having, orderby;
SelectParser selectParser = new SelectParser();
var retMessage = selectParser.Parse("SELECT * FROM T where cat='A' Order by Id desc",
out fields, out from, out groupby, out where, out having, out orderby);