Is there a free web-based tool to view the database schema and its properties?

Is there a free web-based tool (I prefer Microsoft because I have an MSDN account) to view the database schema and its properties? I'm currently using Visio, but I don't know whether Visio can do the following:
View the database schema that I will upload to a SharePoint workspace.
When I click the name of a database table, a small window or a separate page pops up with its database properties and column descriptions.
For example, I have a schema composed of a profile table and a time table.
THESE ARE JUST EXAMPLES.
ProfileDim table
ProfileID number(10)
FirstName varchar(20)
MiddleName varchar(20)
LastName varchar(20)
DateHired datetime

TimeDim table
EmployeeStartDate datetime
Date_Name nvarchar(50)
Year datetime
If I click the name "TimeDim", it should take me to another page where I can see the table properties written below and the description of each column. I prefer not to use MS Excel or Word. I want a tool in which I can edit and delete, in case I want to add another column, change the data type, change the name, etc.
TimeDim table
COLUMN NAME        DATA TYPE     ALLOW NULLS  DESCRIPTION
EmployeeStartDate  datetime      Unchecked    "START DATE OF THE EMPLOYEE"
Date_Name          nvarchar(50)  Checked      "CURRENT DATE NAME"
Year               datetime      Checked      "NAME OF THE CURRENT YEAR"

There are several tools that generate database documentation, for example as HTML. I don't know of online tools, but the generated docs can of course be placed online. They are not free, but you can use them during the evaluation period to generate the database documentation:
Toad Data Modeler
dbdesc
DeZign for Databases

For Microsoft DB servers, you can use SQL Server Management Studio Express.
Which DB schema management tool to use depends on which database software you are using. There are different tools for MySQL, PostgreSQL, Microsoft SQL Server, etc.

Related

Get information about schema, tables, primary keys

How do I get the names of the schemas, tables, and primary keys? How do I find out my authorizations?
The only information I have is obtained by the command below:
db2 => connect
Database Connection Information
Database server = DB2/AIX64 11.1.3.3
SQL authorization ID = mkrugger
Local database alias = DBRCF
You can use the command line (the interactive command line processor) if you want, but if you are starting out it is easier to use a GUI tool.
One example of a free GUI is IBM Data Studio, and there are many more (any GUI that works with JDBC should work with Db2 on Linux/Unix/Windows). These are easy to find online and download if you are permitted.
To use the Db2 command line processor (CLP), which is what you show in your question, here are some example commands:
list tables for all
list tables for user
list tables for schema ...
describe table ...
describe indexes for table ...
Reference for LIST TABLES command
You can also use plain SQL to read the catalog, which describes the schemas, tables, and primary keys as a series of views.
Look in the free online documentation for details of views like SYSCAT.TABLES, SYSCAT.COLUMNS, SYSCAT.INDEXES, and hundreds of other views.
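For example, a minimal sketch of such catalog queries (view and column names as documented for Db2 on Linux/Unix/Windows; adjust the schema filter to your needs):

-- List user tables, skipping the system schemas
SELECT tabschema, tabname, type
FROM syscat.tables
WHERE tabschema NOT LIKE 'SYS%'
ORDER BY tabschema, tabname;

-- List the primary-key columns of each table
SELECT k.tabschema, k.tabname, k.colname, k.colseq
FROM syscat.keycoluse k
JOIN syscat.tabconst c
  ON  c.constname = k.constname
  AND c.tabschema = k.tabschema
  AND c.tabname   = k.tabname
WHERE c.type = 'P'
ORDER BY k.tabschema, k.tabname, k.colseq;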
Depending on which Db2 product is installed locally, there is a range of other command-line tools. One in particular is db2look, which lets you extract all of the DDL of the database (or a subset of it) into a plain text file if you prefer that.
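For example, a typical invocation against the database alias from your question (-e extracts the DDL statements, -o names the output file) might look like:
db2look -d DBRCF -e -o schema.sql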

Google Cloud: Import data from Cloud Storage fails with "Could not complete the operation"

It is my first time using Google Cloud.
I am setting everything up, and I was going to add a sample database to my Google Cloud account to test a few things. However, when I try to import my sample DB I get this error:
Could not complete the operation.
I have already made a bucket and uploaded my SQL file there.
This is my sample SQL file:
CREATE DATABASE IF NOT EXISTS student
  DEFAULT CHARACTER SET utf8mb4
  COLLATE utf8mb4_0900_ai_ci;
USE student;

CREATE TABLE person (
  id int(11) NOT NULL,
  name varchar(25) NOT NULL,
  age int(3) NOT NULL,
  sex text,
  email text NOT NULL,
  study varchar(20) NOT NULL,
  birthday date NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

INSERT INTO person (id, `name`, age, sex, email, study, birthday)
VALUES (1, 'Saeed', 30, 'M', 'nakafisarrd#gmail.com', 'computer', '1987-04-30');

ALTER TABLE person ADD PRIMARY KEY (id);
This is the tutorial I am following.
Furthermore, I have installed the Google App Engine SDK for Node. It doesn't show me any error, so I can't figure out what is going wrong here!
I think you might be mixing two topics here, so I will guide you through both processes:
· First, you want to create a Cloud SQL instance by importing a *.sql file, as explained in the tutorial you followed. However, bear in mind that this tutorial uses MySQL, so you should create a PostgreSQL instance instead if that fits your environment better. I would also recommend following the official documentation on that topic, as it explains clearly how to achieve what you want. I have tested the script you provided in a 2nd Generation MySQL instance, changing the first line from CREATE DATABASE IF NOT EXISTS student DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci; to CREATE DATABASE IF NOT EXISTS student; and it worked (see the adjusted header after this answer).
· Second, you want to upload your NodeJS application to the Google Cloud Platform, and with that purpose I suggest using App Engine. As of today, NodeJS support is only available in App Engine Flexible, so you can also follow a guided quickstart in the documentation.
So I would recommend that you first decide whether you want to use MySQL or PostgreSQL, create a Cloud SQL instance that matches your requirements, and then import the .sql file you shared. Everything should work fine.
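For reference, a sketch of the adjusted header of the import file as described above, with the collation clause dropped (the rest of the script stays unchanged):

CREATE DATABASE IF NOT EXISTS student;
USE student;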

How can I link a Google spreadsheet to PostgreSQL? [closed]

How can I link a Google spreadsheet to PostgreSQL? I googled and got some sample code for MySQL, and a couple of products which do it. As per this link, support exists for a few databases.
My approach uses R and its googlesheets and DBI libraries. The googlesheets package connects the spreadsheet to R, and DBI connects R to PostgreSQL. You can update the table whenever you want by simply re-running the script. With this approach you can also apply transformations to the data before storing it in PostgreSQL. Another way is to use Python with the pandas and gspread libraries.
More info:
Googlesheets: https://cran.r-project.org/web/packages/googlesheets/vignettes/basic-usage.html
DBI: https://cran.r-project.org/web/packages/DBI/vignettes/DBI-1.html
We have been pulling Google Sheets into QGIS via PostgreSQL Foreign Data Wrappers. We then build a materialized view that connects records to geometry (via School Numbers, for example) and use the materialized view as a standard spatial table.
From there we have also published this 'Google Sheet materialized view' to a web application via node.js and added a button to refresh the map if the Google Sheet data has changed. It works really well.
Method 1: Multicorn FDW and GSheets Extension:
(Note: the gspreadsheet_fdw extension to the Multicorn FDW is questionably maintained, and there is the risk that v4 of the GSheets API may break it... we don't know yet, and are planning for the possibility that we might have to implement Method 2 (below) if it does indeed break.)
To connect to a Google Sheet via a PostgreSQL FDW, use the Multicorn FDW:
https://github.com/Kozea/Multicorn
Then install the gspreadsheet_fdw extension for Multicorn:
https://github.com/lincolnturner/gspreadsheet_fdw
In PostgreSQL, create the multicorn FDW:
CREATE SERVER multicorn_gspreadsheet
FOREIGN DATA WRAPPER multicorn
OPTIONS (wrapper 'gspreadsheet_fdw.GspreadsheetFdw');
Then create the FDW table connecting to the Google Sheet using the gspreadsheet extension of multicorn:
CREATE FOREIGN TABLE speced_fdw.centerprogram_current_gsheet (
column1 integer NULL,
column2 varchar NULL,
column3 varchar NULL
)
SERVER multicorn_gspreadsheet
OPTIONS (keyfile '/usr/pgsql-9.5/share/credential.json', gskey 'example_d33lk2kdislids');
You now have a foreign table pointing directly to your Google Sheet, from which you can build a materialized view.
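For example, a minimal sketch of such a materialized view over the foreign table defined above (names reused from that example); refresh it whenever you want to pull the latest sheet data:

CREATE MATERIALIZED VIEW speced_fdw.centerprogram_current AS
SELECT column1, column2, column3
FROM speced_fdw.centerprogram_current_gsheet;

-- Re-pull the sheet data on demand:
REFRESH MATERIALIZED VIEW speced_fdw.centerprogram_current;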
Method 2: Using FILE_FDW and CSV GSheet Export:
The other option is to use FILE_FDW directly in PostgreSQL and connect to the CSV export of the GSheet using wget.
First, create the server:
CREATE SERVER fdw_files
FOREIGN DATA WRAPPER file_fdw;
Then create the FDW table:
CREATE FOREIGN TABLE public.test_file_fdw (
"name" varchar NOT NULL,
"date" varchar NULL,
"address" varchar NULL
)
SERVER fdw_files
OPTIONS (program 'wget -q -O - "https://docs.google.com/spreadsheets/d/2343randomurlcharacters_r0/export?gid=969976&format=csv"', format 'csv', header 'true');
The above options for wget are listed here: https://www.gnu.org/software/wget/manual/html_node/HTTP-Options.html
You can do this using Google Apps Script and a database connector like psycopg2 if you're using Python, or node-postgres if you're using Node. Run your DB connector on some server, front it with a REST API, and call that API from Apps Script in the Google Sheet to fetch the data.
One thing you'd need to figure out is permissions (Google calls them scopes).
You can build an addon and publish it internally. Here is a good tutorial on how to do this.
If you don't want to build your own addon, you can use database connectors like Castodia, Kpibees and Seekwell from Google marketplace. Most of them support Postgres.
Wow, I forgot about this question. I got this working by adding an intermediate MySQL database between the two. The MySQL database had a single table which was used to talk to the Google Sheets API. In turn, this MySQL table was a foreign table in the Postgres database.
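A rough sketch of how that last hop might look with the mysql_fdw extension (mysql_fdw is an assumption here, as the answer does not say which FDW was used, and all names and credentials below are hypothetical):

-- Assumes the mysql_fdw extension is installed on the Postgres side
CREATE EXTENSION IF NOT EXISTS mysql_fdw;

CREATE SERVER mysql_sheets
FOREIGN DATA WRAPPER mysql_fdw
OPTIONS (host '127.0.0.1', port '3306');

CREATE USER MAPPING FOR CURRENT_USER
SERVER mysql_sheets
OPTIONS (username 'sheets_user', password 'secret');  -- hypothetical credentials

-- Mirror the single MySQL table that talks to the Sheets API
CREATE FOREIGN TABLE sheet_data (
  id integer,
  payload text
)
SERVER mysql_sheets
OPTIONS (dbname 'sheet_sync', table_name 'sheet_data');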

Revision control of data inside LightSwitch

I'm developing a LightSwitch application that will be accessed by different users. Some background info:
When a user makes some changes to one or multiple rows, he/she should be able to save those changes to a "temp file", without the main data being affected. Like if you're working with an Excel document and choose "Save as", the original file will still be there. The app should be able to handle multiple of those "savings". The user can then open one of these "savings" and apply it to the main database.
My plan to accomplish this is to have multiple rows for the same data, with columns for user data, revision, etc. Though my main concern here is how to let the user choose which "saving" to open when entering the application and then filter out the correct data. Do I need to build a custom control to accomplish this? Could anyone give me some opinions? I'm kinda new to the LightSwitch area.
Thanks
I'm using LightSwitch to develop a quoting interface that implements revision control. The way I do it is to have a parent table that contains a list of all the quotes (similar to an Explorer window full of Excel spreadsheets, i.e. data.xls, data(1).xls, data(2).xls, etc.), each of which has a unique ID and a revision number. The details of each revision of each quote are held in a child table that has a foreign key relationship linking it to the unique ID of a particular revision of a particular quote.
When a user logs in, they are presented with a grid view of all revisions of their quotes. When they select a particular quote revision, the unique ID of that entry is used as a parameter in all of my filter queries on the details of that quote, which are presented on a different screen.
My tables are created like this:
create table Quotes (
"QuoteID" uniqueidentifier
not null primary key,
"QuoteNumber" nvarchar(8)
not null,
"QuoteRevStart" date
not null,
"QuoteRevEnd" date,
"QuoteRevNumber" tinyint
not null,
"QuoteRevCurrent" bit
not null
)
create table QuoteDetails (
"QuoteDetailsID" uniqueidentifier default newid()
not null primary key,
"QuoteNo" uniqueidentifier
not null foreign key references Quotes(QuoteID),
"ItemNo" smallint
not null,
"ProductQty" smallint
not null
)
This is based on the Type 6 Slowly Changing Dimension database design. All of this is done with standard LightSwitch controls.
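To illustrate, a sketch of the kind of filter query the details screen runs against these tables (the quote number 'Q0001' is made up for the example):

-- Details for the currently active revision of one quote
SELECT d.ItemNo, d.ProductQty
FROM Quotes q
JOIN QuoteDetails d ON d.QuoteNo = q.QuoteID
WHERE q.QuoteNumber = 'Q0001'
  AND q.QuoteRevCurrent = 1
ORDER BY d.ItemNo;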

Creating a "table of tables" in PostgreSQL or achieving similar functionality?

I'm just getting started with PostgreSQL, and I'm new to database design.
I'm writing software in which I have various plugins that update a database. Each plugin periodically updates its own designated table in the database. So a plugin named 'KeyboardPlugin' will update the 'KeyboardTable', and 'MousePlugin' will update the 'MouseTable'. I'd like for my database to store these 'plugin-table' relationships while enforcing referential integrity. So ideally, I'd like a configuration table with the following columns:
Plugin-Name (type 'text')
Table-Name (type ?)
My software will read from this configuration table to help the plugins determine which table to update. Originally, my idea was to have the second column (Table-Name) be of type 'text'. But then, if someone mistypes the table name, or an existing relationship becomes invalid because of someone deleting a table, we have problems. I'd like for the 'Table-Name' column to act as a reference to another table, while enforcing referential integrity.
What is the best way to do this in PostgreSQL? Feel free to suggest an entirely new way to set up my database, different from what I'm currently exploring. Also, if it helps you answer my question, I'm using the pgAdmin tool to set up my database.
I appreciate your help.
I would go with your original plan to store the name as text. Possibly enhanced by additionally storing the schema name:
addin text
,sch text
,tbl text
Tables have an OID in the system catalog (pg_catalog.pg_class). You can get those with a nifty special cast:
SELECT 'myschema.mytable'::regclass
But the OID can change over a dump/restore, so just store the names as text and verify that the table is there at application time by casting it as demonstrated.
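For example, a small sketch of that verification (to_regclass, available since PostgreSQL 9.4, returns NULL instead of raising an error when the table is missing):

-- Raises an error if the table does not exist:
SELECT 'myschema.mytable'::regclass;

-- Returns true/false without raising an error:
SELECT to_regclass('myschema.mytable') IS NOT NULL AS table_exists;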
Of course, if you use each table for multiple addins, it might pay to make a separate table
CREATE TABLE tbl (
  tbl_id serial PRIMARY KEY
, sch text
, name text
);
and reference it in ...
CREATE TABLE addin (
  addin_id serial PRIMARY KEY
, addin text
, tbl_id integer REFERENCES tbl(tbl_id) ON UPDATE CASCADE ON DELETE CASCADE
);
Or even make it an n:m relationship if addins have multiple tables, as sketched below. But be aware, as @OMG_Ponies commented, that a setup like this will require you to execute a lot of dynamic SQL, because you don't know the identifiers beforehand.
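A minimal sketch of that n:m variant, assuming the two tables above (a junction table takes the place of the tbl_id column on addin):

CREATE TABLE addin_tbl (
  addin_id integer REFERENCES addin(addin_id) ON UPDATE CASCADE ON DELETE CASCADE
, tbl_id   integer REFERENCES tbl(tbl_id)     ON UPDATE CASCADE ON DELETE CASCADE
, PRIMARY KEY (addin_id, tbl_id)
);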
I guess all plugins have a set of basic attributes and then each plugin will have a set of plugin-specific attributes. If this is the case you can use a single table together with the hstore datatype (a standard extension that just needs to be installed).
Something like this:
CREATE TABLE plugins
(
  plugin_name text not null primary key,
  common_int_attribute integer not null,
  common_text_attribute text not null,
  plugin_attributes hstore
);
Then you can do something like this:
INSERT INTO plugins
  (plugin_name, common_int_attribute, common_text_attribute, plugin_attributes)
VALUES
  ('plugin_1', 42, 'foobar', 'some_key => "the fish", other_key => 24'),
  ('plugin_2', 100, 'foobar', 'weird_key => 12345, more_info => "10.2.4"');
This creates two plugins named plugin_1 and plugin_2.
Plugin_1 has the additional attributes "some_key" and "other_key", while plugin_2 stores the keys "weird_key" and "more_info".
You can index those hstore columns and query them very efficiently.
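For instance, a sketch of a GIN index on the hstore column (the index name is illustrative):

CREATE INDEX plugins_attributes_gin ON plugins USING gin (plugin_attributes);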
The following will select all plugins that have a key "weird_key" defined.
SELECT *
FROM plugins
WHERE plugin_attributes ? 'weird_key';
The following statement will select all plugins that have a key some_key with the value the fish:
SELECT *
FROM plugins
WHERE plugin_attributes @> 'some_key => "the fish"'::hstore;
Much more convenient than using an EAV model in my opinion (and most probably a lot faster as well).
The only drawback is that you lose type-safety with this approach (but usually you'd lose that with the EAV concept as well).
You don't need an application catalog. Just add the application name to the key of the table. This of course assumes that all the tables have the same structure. If not, use the application name for a table name or, as others have suggested, as a schema name (which would also allow multiple tables per application).
EDIT:
But the real issue is of course that you should first model your data, and then build the applications to manipulate it. The data should not serve the code; the code should serve the data.