How to insert data into my SQLDB service instance (Bluemix) - ibm-cloud

I have created an SQLDB service instance and bound it to my application. I have created some tables and need to load data into them. If I write an INSERT statement in Run DDL, I receive a SQL -104 error. How can I run INSERT statements against my SQLDB service instance?

If you need to run your SQL from an application, there are several examples (sample code included) of how to accomplish this at the site listed below:
http://www.ng.bluemix.net/docs/services/SQLDB/index.html#run-a-query-in-java
Additionally, you can execute SQL in the SQL Database Console by navigating to Manage -> Work with Database Objects. More information can be found here:
http://www.ng.bluemix.net/docs/services/SQLDB/index.html#sqldb_005

s.executeUpdate("CREATE TABLE MYLIBRARY.MYTABLE (NAME VARCHAR(20), ID INTEGER)");
s.executeUpdate("INSERT INTO MYLIBRARY.MYTABLE (NAME, ID) VALUES ('BlueMix', 123)");
Full code for this snippet is included with the documentation linked above.
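As a rough sketch, the complete JDBC flow looks something like this (the connection URL, user, and password are illustrative placeholders; on Bluemix you would read the real values from the VCAP_SERVICES environment variable):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqlDbExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- substitute your service credentials.
        String url = "jdbc:db2://<host>:50000/SQLDB";
        try (Connection con = DriverManager.getConnection(url, "<user>", "<password>");
             Statement s = con.createStatement()) {
            s.executeUpdate("CREATE TABLE MYLIBRARY.MYTABLE (NAME VARCHAR(20), ID INTEGER)");
            s.executeUpdate("INSERT INTO MYLIBRARY.MYTABLE (NAME, ID) VALUES ('BlueMix', 123)");
            // Read the row back to confirm the insert worked.
            try (ResultSet rs = s.executeQuery("SELECT NAME, ID FROM MYLIBRARY.MYTABLE")) {
                while (rs.next()) {
                    System.out.println(rs.getString("NAME") + " " + rs.getInt("ID"));
                }
            }
        }
    }
}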

Most people do initial database population or migrations when they deploy their application. These database commands are often specific to the programming language, which the poster didn't include. You can accomplish this in two ways.
Append a bash script that calls the database scripts you uploaded. This project shows how you can call that bash script from within your manifest file as part of doing a cf push.
Some frameworks offer a file type or service that is automatically used to populate the database on initial deploy or when you migrate/sync the database. For example, Django (Python) offers "fixtures": JSON files whose contents are automatically loaded into your database tables.
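For illustration, a minimal Django fixture might look like the following (the app, model, and field names are invented; such a file would be loaded with python manage.py loaddata initial_data):
[
  {
    "model": "myapp.customer",
    "pk": 1,
    "fields": { "name": "BlueMix" }
  }
]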

Related

List Analysis Services Tabular instance tables in PowerShell

I need to list Tabular SSAS (Compatibility version 1500) tables from PowerShell.
The Invoke-ASCmd command in the SQL Server PowerShell module looks promising; however, I'm a bit lost in the documentation.
I can see that the following query from the examples lists the data sources of a tabular instance:
Invoke-ASCmd -Database:"Adventure Works DW 2008R2" -Query:
"<Discover xmlns='urn:schemas-microsoft-com:xml-analysis'>
<RequestType>DISCOVER_DATASOURCES</RequestType>
<Restrictions></Restrictions><Properties></Properties>
</Discover>"
It looks like the RequestType parameter is what I'm after; I didn't find any documentation on it, so I tried guessing DISCOVER_TABLES, LIST_TABLES, and TABLES, all of which were rejected.
TMSL (which is what compatibility level 1500 supports, according to this link) has commands for altering and deleting tables; however, I cannot find anything about querying or listing them.
Dynamic Management Views sound like a possible solution, but I cannot figure out the syntax.
From "Script Administrative Tasks in Analysis Services":
You can create a standalone MDX script file that queries data or system information. For example, Dynamic Management Views (DMV) that expose information about local server operations and server health are accessed via the MDX Select statement.
I found this discussion and tried
Invoke-ASCmd -Server "localhost" -Database:"database" -Query:"SELECT * FROM DBSCHEMA_TABLES"
However, I am getting an error:
-1055522771 "Either the user X does not have permission to access the referenced mining model, DBSCHEMA_TABLES, or the object does not exist."
I use this to show all tables in a tabular model database:
<Discover xmlns='urn:schemas-microsoft-com:xml-analysis'>
<RequestType>TMSCHEMA_TABLES</RequestType>
<Restrictions>
<RestrictionList>
<SystemFlags>0</SystemFlags>
</RestrictionList>
</Restrictions>
<Properties>
<PropertyList>
<CATALOG>YOUR_TABULAR_MODEL_DATABASE_NAME</CATALOG>
</PropertyList>
</Properties>
</Discover>
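As a sketch, you can wrap that request in a here-string and run it through Invoke-ASCmd (the server name below is a placeholder; substitute your own instance and database):
# Placeholder server and database names -- adjust to your instance.
$query = @"
<Discover xmlns='urn:schemas-microsoft-com:xml-analysis'>
  <RequestType>TMSCHEMA_TABLES</RequestType>
  <Restrictions><RestrictionList><SystemFlags>0</SystemFlags></RestrictionList></Restrictions>
  <Properties><PropertyList><CATALOG>YOUR_TABULAR_MODEL_DATABASE_NAME</CATALOG></PropertyList></Properties>
</Discover>
"@
Invoke-ASCmd -Server "localhost" -Database "YOUR_TABULAR_MODEL_DATABASE_NAME" -Query $query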
Hope this helps. For full reference see here or here.

MDS import data queue

I am following this guidance: https://www.mssqltips.com/sqlservertutorial/3806/sql-server-master-data-services-importing-data/
The instructions say after we load data into the staging tables, we go into the MDS integration screen and select "START BATCHES".
Is this a manual override to begin the process, or how do I automatically queue up a batch to begin?
Thanks!
Alternative way to run the staging process
After you load the staging table with the required data, call/execute the staging UDP.
Basically, Staging UDPs are different Stored Procedures for every entity in the MDS database (automatically created by MDS) that follow the naming convention:
stg.udp_<EntityName>_Leaf
You have to provide values for some of its parameters. Here is sample code showing how to call one:
USE [MDS_DATABASE_NAME]
GO
EXEC [stg].[udp_entityname_Leaf]
    @VersionName = N'VERSION_1',
    @LogFlag = 1,
    @BatchTag = N'batch1',
    @UserName = N'domain\user'
GO
For more details look at:
Staging Stored Procedure (Master Data Services).
Do remember that the @BatchTag value has to match the value that you initially populated in the staging table.
Automating the Staging process
The simplest way for you to do that would be to schedule a job in SQL Agent which executes something like the code above to call the staging UDP (a rough sketch follows below).
Please note that you would need to get creative about how the job will know the correct batch tag.
That said, a lot of developers just create a single SSIS package which loads the data into the staging table (as step 1) and then executes the staging UDP (as the final step).
This SSIS package is then executed through a scheduled SQL Agent job.
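As a rough sketch, creating such a job from T-SQL might look like this (the job name is illustrative, and the database, entity, and batch names are the same placeholders as above):
USE msdb
GO
EXEC dbo.sp_add_job @job_name = N'MDS staging load';
EXEC dbo.sp_add_jobstep
    @job_name = N'MDS staging load',
    @step_name = N'Run staging UDP',
    @subsystem = N'TSQL',
    @database_name = N'MDS_DATABASE_NAME',
    -- Doubled single quotes escape the literals inside the command string.
    @command = N'EXEC [stg].[udp_entityname_Leaf] @VersionName = N''VERSION_1'', @LogFlag = 1, @BatchTag = N''batch1'', @UserName = N''domain\user''';
EXEC dbo.sp_add_jobserver @job_name = N'MDS staging load';
GO
A schedule can then be attached with sp_add_jobschedule.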

Internal Server Error (500) AWS Elasticbeanstalk deployed flask app

I have deployed various versions of the app previously with no errors.
However, I had to make some modifications to two tables in my AWS RDS instance running PostgreSQL:
alter table NAME alter column NAME type date using to_date(...)
I did that directly on AWS RDS and modified the SQLAlchemy column accordingly:
date = Column(Date)
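For context, the model change amounts to something like this minimal sketch (the class and table names are invented; only the date column mirrors the question):
from sqlalchemy import Column, Date, Integer
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Record(Base):
    # Hypothetical table standing in for the two altered tables.
    __tablename__ = 'records'
    id = Column(Integer, primary_key=True)
    date = Column(Date)  # column type changed to DATE directly in RDS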
Any API call that queries these two tables returns an Internal Server Error. Meanwhile, there are no errors when deploying this version of the app, and any SQLAlchemy code that queries these two tables from my Python interpreter returns what's expected.
I have tried several ways to fix this. The only one that worked was to remove the line date = Column(Date) from the SQLAlchemy setup file; then there was no 500 on any API call, but of course that doesn't help, as I need that column!
Any help on this will be highly appreciated...

Entity Framework set database schema depending on deployment environment

The application I develop is deployed to several environments (development, test, staging, production).
While developing, I created the entity model from the existing development database. Everything works fine, but when I wanted to deploy the application to the test environment, I ran into the following problem:
The structure of the database is identical in all environments, but the database schema changes from environment to environment. For example there's a Customers table in every database. On my local dev machine it has the schema dbo ([dbo].[Customers]), but in the test environment the schema is test ([test].[Customers]), whilst the schema is stag in the staging environment ([stag].[Customers]) and so forth.
So when I deploy the application in the test environment, it gets no data from the database, because the entity framework expects the data to be found in [dbo].[Customers] but there is no such table, there is just a [test].[Customers].
I know that I can define a schema other than dbo, but this doesn't help me, because I need a different schema depending on the deployment environment.
Any suggestions?
Somehow I think I'll end up asking my DB admin to change the schema to dbo in every database in each environment...
If you are using code first, you have to use the fluent API approach from the linked question and load the current schema from a configuration file (you will have to modify the configuration for each deployment).
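A minimal code-first sketch, assuming the schema name is kept in an appSettings entry (the key name and the Customer entity are illustrative):
using System.Configuration;
using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class MyContext : DbContext
{
    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // e.g. <add key="DbSchema" value="test" /> in each environment's config file
        string schema = ConfigurationManager.AppSettings["DbSchema"] ?? "dbo";
        modelBuilder.Entity<Customer>().ToTable("Customers", schema);
    }
}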
If you are using ObjectContext with EDMX, you can use a model adapter. Another approach, which works with DbContext as well, is storing the EF metadata in files and executing some code at application startup that changes the schema in the SSDL file.

Is the Entity Framework compatible with SQL Azure?

I cannot get the Entity Framework to work against SQL Azure. Is this just me or is it not intended to be compatible? (I have tried the original release of EF with VS2008 as well as the more recent VS2010 Beta 2 version)
To check this I created the simplest scenario possible. Add a single table to a local SQL Server 2008 instance. The table has two columns, a primary key of type integer and a string column. I add a single row to the table with values of (1, foobar). I then added exactly the same setup to my SQL Azure database.
I created a console application and generated an EF model from the local database. Running the application, all is good: the single row can be returned by a trivial query. I then updated the connection string to connect to SQL Azure, and now it fails. It connects to the SQL Azure database without problems, but the query fails when processing the result.
I tracked the initial problem down using the exception information. The conceptual model had the attribute Schema="dbo" set for the entity set of my single defined entity. I removed this attribute and now it fails with another error...
"Invalid object name 'testModelStoreContainer.Test'."
Here 'Test' is of course the name of the entity I have defined, so it looks like it's trying to create the entity from the returned result, but for some unknown reason it cannot work out this trivial scenario.
So either I am making a really fundamental error or SQL Azure is not compatible with the EF? And that seems just crazy to me. I want to use the EF in my WebRole and then RIA Services to the Silverlight client end.
While I haven't done this myself I'm pretty sure that members on the EF team have, as has Kevin Hoffman.
So it is probably just that you went astray with one step in your porting process.
It sounds like you tried to update the EDMX (XML) by hand, from one that works against a local database.
If you do this, most of the changes will be required in the StorageModel element of the EDMX (aka the SSDL), but it sounds like you've been making changes in the ConceptualModel (aka CSDL) element.
My guess is you simply need to replace all references to the dbo schema in the SSDL with whatever schema your SQL Azure database uses.
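For illustration, the change to the SSDL fragment would look roughly like this (the entity set name comes from the question; the target schema is a placeholder):
<!-- Before: generated against the local database -->
<EntitySet Name="Test" EntityType="testModel.Store.Test" Schema="dbo" />
<!-- After: pointed at the schema your SQL Azure database actually uses -->
<EntitySet Name="Test" EntityType="testModel.Store.Test" Schema="your_azure_schema" />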
Hope this helps
Alex
To answer the main question: yes, at least Entity Framework v4 can be used with SQL Azure. I honestly haven't tried with the initial version (from .NET Framework 3.5 SP1).
A little while back I did a complete project and blogged about the experience: http://www.sanderstechnology.com/?p=9961 Hopefully this might help a little bit!
Microsoft's Windows Azure documentation contains How to: Connect to Windows Azure SQL Database Using the ADO.NET Entity Framework.
After creating your model, these instructions describe how to use SQL Azure with the Entity Framework:
Migrate the School database to SQL Database by following the instructions in How to: Migrate a Database by Using the Generate Scripts Wizard (Windows Azure SQL Database).
In the SchoolEFApplication project, open the App.Config file. Change the connection string so that it connects to your SQL Database.
<connectionStrings>
<add name="SchoolEntities"
connectionString="metadata=res://*/SchoolDataModel.csdl|res://*/SchoolDataModel.ssdl|res://*/SchoolDataModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=<provideServerName>.database.windows.net;Initial Catalog=School;Integrated Security=False;User ID=<provideUserID>;Password=<providePassword>;MultipleActiveResultSets=True;Encrypt=True;TrustServerCertificate=False&quot;"
providerName="System.Data.EntityClient"/>
</connectionStrings>
Press F5 to run the application against your SQL Database.