Table doesn't show up in ADO object - ado.net

I used SQL Server Management Studio to create 3 tables using SQL statements.
Then I attempted to create an ADO object in Visual Studio. The ADO wizard saw my database, but not the tables I created.
Is this because I need to somehow commit the changes I've made in Management Studio?
Or do I need to add some kind of prefix to the table names like:
CREATE TABLE mydb.Table1 ... ?
Or could this be a permissions thing?
Or am I just not waiting long enough (a minute or so) for the ADO wizard to pick up the table names from the database?

Ah, I guess when you script the creation of your tables (or anything else you're adding) you need to add this:
USE [Interview_MicahHoover]
GO
So much for relying on the ETL guy to set things up ;)
I figured this out by using the GUI to create a table, then right-clicking on the new table in the left menu, choosing 'CREATE To ... Clipboard', pasting into Notepad, and looking at how SQL Server Management Studio did it.
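For reference, a minimal sketch of what the corrected script might look like (the database, schema, and table names here are just examples). An alternative to USE is fully qualifying the name as database.schema.table, e.g. CREATE TABLE mydb.dbo.Table1 (...).
-- Without USE, the CREATE TABLE runs against whatever database the
-- connection opened by default (often master), so the new tables never
-- show up in the database the ADO wizard is pointed at.
USE [Interview_MicahHoover]
GO
CREATE TABLE dbo.Table1 (
    Id   INT IDENTITY(1,1) PRIMARY KEY,
    Name NVARCHAR(100) NOT NULL
)
GO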

Related

How to execute a SQL script in DBeaver?

I have a number of .sql files that I wish to execute through DBeaver. Traditional database development programs let the user edit and run SQL scripts (fully or partially) in the same window, but this is not obvious with DBeaver.
When I open a .sql script some drop down boxes in the button bar appear, that seem to serve as connection selectors. But none of the connections I have defined appear in these drop down boxes. It is possible to open a SQL console on database objects in the Database Navigation view, but not on SQL scripts.
How can I execute a SQL script, totally or partially, against a particular database connection with DBeaver?
For larger files, it is more efficient to edit the .sql file in an external editor and then, in DBeaver:
right-click on your DB
choose Tools / Execute script
load your .sql file
click Start.
This approach is generally more convenient and faster for large files.
To do this without an external editor, you must:
set your DB in Active datasource select
load your file File / Open file ...
run the whole script e.g. using a shortcut Alt+X.
I believe I figured out how to do this. First of all, the desired script must be open in the SQL editor. Then one must select the Auto-synch connection with navigator option that is available from the down arrow menu for the Set active connection from database navigator connection button:
In certain cases, this immediately activates the SQL console within the SQL editor panel. If that is not the case then one must go through the Database Navigator and select the desired schema on which to work.
It is then possible to execute a segment of a SQL script (e.g. a query) by selecting it and pressing Ctrl+Enter.
I hope all is well! Great question! I had a similar question when I started working with DBeaver. Here is what I have figured out so far:
run an entire single script with ALT+X;
run the selected part of the code, or anything before a semicolon (;), with CTRL+ENTER (see the example script below);
run multiple files with Tasks; in DBeaver 21.0:
Click menu Database -> Tasks -> Create new task
Specify task name;
Choose the connection;
Select files to run (I had two to exclude, underlined with red);
Run the task from the same menu and watch DBeaver iterate through the code for you.
(Screenshot: running a Database Task in DBeaver 21.0.)
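To illustrate the semicolon behaviour, here is a trivial example script (the table names are made up):
-- With the cursor inside one statement, CTRL+ENTER executes just that
-- statement (up to its semicolon); ALT+X executes the whole script.
select count(*) from orders;
select count(*) from customers;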
In DBeaver 21.1.3 Community Ed. I can change the database and schema via drop-down lists on the top toolbar. To have it change automatically according to the database selected in the Database Navigator, tick Window / Preferences / Editors / Auto-sync editor connection with navigator selection.
I just created a New SQL Script (^]) and then, via the context menu, File > Import SQL Script (Shift-Ctrl-Alt-O). And of course executed it (^-Enter). For dummies like me this isn't an obvious way of working.

SSMS 2008 adding ALTER TABLE WITH CHECK ADD CONSTRAINT to my Procs

I have searched Google, BOL, and several forums and can't find the answer:
I have a very small data base application that I write some queries and SPs to extract data. A few days ago I opened an exsiting SP to find that something had added code similar to that below, sometimes multiple lines referring to every table in the database. When I set up a new simple SP like "Select * from TinyTable" and re-open it, the same code has been inserted.
The last thing I remember doing was reviewing the settings for results-to-grid in SSMS 2008 R2. I'm afraid I may have accidentally changed a setting, but I've spent hours reviewing them and can't identify what it might be.
I have considered reinstalling SSMS to get back to the defaults, but I have a linked server set up to solve a collation conflict and don't want to cause problems with that. If anyone can point me in the right direction I'd appreciate it. I may be searching with the wrong terminology, but I can find nothing. As I say, I don't know for sure that a change to the SSMS Tools options is the problem, but I suspect it could be something I have done.
Here's a sample of what gets automatically inserted at the bottom of every one of my procs:
GO
ALTER TABLE [dbo].[tblLot] WITH CHECK ADD CONSTRAINT [FK_tblLot_tblLocation] FOREIGN KEY([LocID])
REFERENCES [dbo].[tblLocation] ([LocID])
You likely have Tools > Options > SQL Server Object Explorer > Scripting > Object Scripting Options > Generate script for dependent objects set to True. Try changing the value to False.

Get error when saving a modified record using LightSwitch on Azure

I am using LightSwitch on Azure.
After I modify a column in a record and click the Save button, I get:
"Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. Refresh ObjectStateManager entries."
When I use VS 2012 on my dev machine to debug this LightSwitch app, it works fine with no errors when I modify the same column on the same records and save.
Does anybody in this forum have an idea what could cause this, and how I should work around it?
I suspect the Azure machine doesn't have the same version of EF as my dev machine, but in the LightSwitch project I could not find EF referenced in either the client or the server references. So I don't know how I can bring the EF DLL from my machine up to the Azure machine.
Anybody could give me some suggestion on this?
Thanks
Chris
Usually it's a side effect of optimistic concurrency. This article can give you an idea of how it works in LightSwitch:
LightSwitch 2012 Concurrency Enhancements
Since it works on your dev machine but not on Azure, I'd guess something is not right in your production database.
You can also take a look at Entity framework: affected an unexpected number of rows(0)
With INSTEAD OF insert/update triggers, SQL Server sometimes does not report back a scope identity for each newly inserted/updated row, so EF cannot determine the number of affected rows.
Normally, any insert into a table with an identity column is immediately followed by a select of scope_identity() to populate the associated value in the Entity Framework. The INSTEAD OF trigger causes this second step to be missed, which leads to the 0-rows-affected error.
You can change your trigger to be an AFTER trigger, or tweak your trigger by adding the following line at the end of it:
select [Id] from [dbo].[TableXXX] where @@ROWCOUNT > 0 and [Id] = scope_identity()
Find more details in this or this thread.
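For illustration, a minimal sketch of such a trigger (the table, column, and trigger names are hypothetical):
create trigger [dbo].[trg_TableXXX_InsteadOfInsert]
on [dbo].[TableXXX]
instead of insert
as
begin
    -- Perform the actual insert that the trigger intercepted.
    insert into [dbo].[TableXXX] ([Name])
    select [Name] from inserted;
    -- Re-emit the new identity value so EF sees the row it expects;
    -- @@ROWCOUNT still refers to the insert statement above.
    select [Id] from [dbo].[TableXXX]
    where @@ROWCOUNT > 0 and [Id] = scope_identity();
end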

Managing database changes

I'm starting to move more logic into the database, using triggers, views, functions, CTEs, etc. When plv8/json comes out for postgres, I can see myself putting lots of logic in there.
I'm having problems with the "standard" way of doing database migrations in Sequel and ActiveRecord. Both Sequel and ActiveRecord let you put arbitrary SQL code into timestamped files. When each file is run, a schema_versions table is updated with the filename (or the timestamp in the filename), which keeps a record of which migrations have been applied to the current database.
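Roughly, the bookkeeping looks like this (table and file names vary by tool; these are illustrative):
-- The migration runner creates a tracking table once...
create table schema_versions (filename text primary key);
-- ...and records each migration file as it applies it:
insert into schema_versions values ('20130401120000_create_foos.sql');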
If a lot of coding is being done at the database level, that means that modifications to existing views, functions, etc. follow the pattern below:
Migration 1 defines a function and a view that uses that function.
-- Migration 1
create function calculate(x int) returns int as $$
  select x + 1;
$$ language sql;
create view foos as (
  select something, calculate(something) from a_table
);
Requirements change, and I need to change the function's argument type. In Migration 2 I have to drop all objects that depend on calculate, and recreate them by copying their entire body -- even if there weren't any changes in most of the other code!
-- Migration 2
-- Have to drop all views and functions that depend on the
-- `calculate(int)` function.
drop view foos;
-- I could do `drop function calculate(int) cascade`,
-- but I might accidentally drop some objects that wouldn't get recreated below.
drop function calculate(int);
create function calculate(x bigint) returns bigint as $$
  select x + 1;
$$ language sql;
-- Now I have to recreate foos.
create view foos as (
  select something, calculate(something) from a_table
);
If I'm building a system based on views and functions and triggers, my migrations would be filled with duplicated code, and it's difficult to find the latest version of the code. You might say "don't do that!", but for my purposes (e-commerce, shipping, transactions), I'm finding it's a lot easier and faster to have the database ensure the integrity of the data by doing the logic inside the database.
You can (of course) dump the current database schema (which includes all the code definitions), but I think you lose comments. And you wouldn't generally want to edit a giant file that contains the whole schema.
Any ideas on how to solve this problem?
My best idea is to have the SQL code live in its own canonical files (app/sql/orders/shipping.sql, app/sql/orders/creation.sql, etc). Everyone develops directly against these. Whenever it's time for a release, you'd make a new migration file, look at all the code that changed since the previous release, figure out the dependency chain of the database objects that need to be dropped and recreated, and then copy the SQL from the canonical files into a new Sequel/ActiveRecord migration file. But it's a pain. :/
Thoughts are very welcome. I hope I explained this well enough; I'm cutting back on my caffeine intake and I'm a little groggy atm.
Oh, I asked a similar question on Stack Overflow: Changing the type of a column used in other views. The answer was a function that let me pass in:
sql code to run
database views to drop and recreate
The function would retrieve the view definitions, drop the views, run the SQL code, then recreate the view definitions (in reverse order of dropping). Perhaps a system of functions like this would help solve the problem of having to copy/paste SQL code into migration files.
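A hypothetical sketch of such a helper (the function name and details are my assumptions, not the actual answer's code):
create or replace function run_with_views_dropped(ddl text, view_names text[])
returns void as $$
declare
  defs text[] := '{}';
  v text;
  i int;
begin
  -- Save each view's definition, then drop it (in the given order).
  foreach v in array view_names loop
    defs := defs || ('create view ' || v || ' as ' || pg_get_viewdef(v::regclass, true));
    execute 'drop view ' || v;
  end loop;
  -- Run the caller's DDL (e.g. changing a function or column type).
  execute ddl;
  -- Recreate the views in reverse order of dropping.
  for i in reverse coalesce(array_length(defs, 1), 0) .. 1 loop
    execute defs[i];
  end loop;
end;
$$ language plpgsql;
-- Hypothetical usage, for the calculate()/foos example above:
-- select run_with_views_dropped(
--   'drop function calculate(int); create function calculate(x bigint) ...',
--   array['foos']);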
I'd recommend Liquibase.
You create files which track the changes to your database, and these are run against the database in the correct migration order.
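Liquibase also accepts changelogs written as plain SQL with special comments, which keeps you in SQL rather than XML. A minimal sketch (the author/id values are placeholders):
--liquibase formatted sql

--changeset micah:1
create table a_table (something int);
--rollback drop table a_table;

--changeset micah:2
create view foos as (select something, calculate(something) from a_table);
--rollback drop view foos;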
You might find David Wheeler's blog posts interesting, starting from here:
http://justatheory.com/computers/databases/simple-sql-change-management.html
My rate of database change is fairly small but I tend to be careless and make small changes to the schema directly, so I've had to come up with a fair bit of infrastructure to catch when I've done so. The basic elements are:
1. A makefile that can rebuild a development database from scratch
2. A set of schema files separated into "modules" (lookups_schema.sql, lookup_data.sql)
3. A set of update files that transition from one revision to the next
4. I don't usually have the corresponding downgrade scripts; some people do
5. A script to populate my database with a plausible amount of test data
6. Crucially, a test suite via pgTAP that checks my various functions, views and also the upgrade scripts. The upgrade tests can be run against a live database too.
If you have a separate instance of PostgreSQL set up with fsync turned off, or on a ramdisk, etc., then rebuilding the whole DB and populating it can take seconds (if you don't have too much test data).
Start with #1, #2, then add #6 (pgTAP is very cool), then the rest. The crucial thing is a test suite that checks your in-database code.
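For a flavour of pgTAP, a minimal test file might look like this (reusing the calculate() function from earlier as the subject under test):
begin;
select plan(2);
-- Check the function exists with the expected signature, then its behaviour.
select has_function('calculate', array['integer']);
select is(calculate(1), 2, 'calculate() adds one');
select * from finish();
rollback;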
There are tools that try to automate schema changes for you, but they are really only good at adding a new column to a table and that sort of thing. Once you have code in your db then they're not much help.

T-SQL Hierarchy to duplicate Dependent Objects tree view in SQL Server 2005

I'd like to map the calling stack from one master stored procedure through its hundreds of siblings. I can see it in the dialog, but cannot copy or print it, and I couldn't trap anything worthwhile in Profiler.
Do you know what sproc fills that treeview? It must be a recursive CTE that reads syscomments or information_schema.routines, but it's beyond my chops, though I can imagine it.
Thanks in advance,
Drew
You might want to look at the source code for sp_depends (Transact-SQL).
In SQL Server Management Studio:
go to the "master" database
and then "Programmability"
then "Stored Procedures"
then "System Stored Procedures"
then "sys.sp_depends"
In sp_depends's code, there are queries on the tables you'd need to hit to build output like you are after.
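If you'd rather query the dependency data yourself, something along these lines could serve as a starting point. This is a sketch, not the code behind the SSMS dialog: it uses sys.sql_expression_dependencies, which is SQL Server 2008+; on 2005 the older sysdepends/sys.sql_dependencies views hold similar data. 'dbo.MasterProc' is a placeholder name.
with tree as (
    -- Anchor: the master procedure to start from.
    select object_id('dbo.MasterProc') as proc_id, 0 as depth
    union all
    -- Recurse into everything each object references.
    select d.referenced_id, t.depth + 1
    from tree t
    join sys.sql_expression_dependencies d
      on d.referencing_id = t.proc_id
    where d.referenced_id is not null
      and t.depth < 32  -- simple guard against cycles
)
select distinct object_name(proc_id) as name, depth
from tree
order by depth, name;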