How to export image of relational model in Oracle SQL Data Modeler? - oracle-sqldeveloper

I have a relational model in Oracle SQL Developer Data Modeler with tables and relationships. Is it possible to export that relational model to an image file?

It is possible, but it's not very intuitive.
Zoom your relational model to the desired resolution (at least enough to be readable).
Go to File, then Data Modeler, then Print Diagram, and select the desired format.
This will generate an image file of your relational model, with the current zoom level determining the resolution.

The current path for printing the model in SQL Developer is:
File -> Data Modeler -> Print Diagram -> To ... File

I know this question is old, but I figured out how to make it work using a different thread on Stack Overflow:
How to generate an entity-relationship (ER) diagram using Oracle SQL Developer

Related

Tables in Data Modeler's Browser, not showing in Oracle Connection

I created an entity-relationship diagram (logical model) and engineered it to a relational model.
The tables were generated. Now I need to use them from the connection XE, as you see in the picture.
The tables I made can only be seen in the data modeler design view in the "Browser". How do I get them into the connection "XE" so I can generate the data dictionary, etc.?
There are three possibilities:
1. You just need to expand the Tables item in the tree to see your tables.
2. You are looking in the wrong schema/user - go down to Other Users and find the schema those tables belong to.
3. The tables do not exist in the current database.
If #3 is the issue, you would need to create them, possibly using the information in your data model - that is, you can generate the DDL/SQL scripts for those tables. Then take those scripts and run them while connected to the appropriate database/schema.
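As a sketch, the generated DDL script is just ordinary CREATE TABLE statements that you can run against the target schema; the table and column names below are made up for illustration (the real script comes from Data Modeler's DDL generation, typically under File -> Export -> DDL File):

```sql
-- Illustrative only: Data Modeler generates statements like these
-- from your relational model, including the foreign keys.
CREATE TABLE departments (
    department_id NUMBER NOT NULL,
    name          VARCHAR2(100),
    CONSTRAINT departments_pk PRIMARY KEY (department_id)
);

CREATE TABLE employees (
    employee_id   NUMBER NOT NULL,
    department_id NUMBER,
    CONSTRAINT employees_pk PRIMARY KEY (employee_id),
    CONSTRAINT employees_dept_fk FOREIGN KEY (department_id)
        REFERENCES departments (department_id)
);
```

Running the generated script while connected as the XE user creates the tables, after which they appear under the connection's Tables node.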
Disclaimer: I'm an Oracle employee and the product manager for these tools.

How to update tabular data from source tables

I have a simple test setup:
A SQL Server (2017) with one database, with one table
A SQL Server Analysis Server (2017, with compatibility level 1400)
I have created a simple tabular model in Visual Studio with one data source (the database above) and one table.
This is my power query:
let
    Source = #"SQL/MYCOMPUTER\SQLDEV;SampleDatabase",
    dbo_testTable = Source{[Schema="dbo",Item="testTable"]}[Data]
in
    dbo_testTable
I have deployed this tabular model to my SSAS instance...
Now my question: if the table in my SQL Server is updated (added records), how can I see these updates reflected in the Tabular Model? Do I have to rerun the Tabular Model somehow?
I have tried "Process Table" in SSMS on the Tabular model table, but it does not get the new records...
Processing a table processes whichever dimension or fact table you selected, and it only reads data from the database objects used by that table. What processing is actually performed depends on the processing type you chose.
As for the question in the answer you posted: Process Full on an entire Tabular model removes all data from the deployed model, then reloads everything and processes the hierarchies and measures as well. So yes, after processing with this option, the new data from the underlying tables will be in the model for all tables within it.
There are multiple processing types that can be run at the database, table, or partition level. You can view additional details on these in the Microsoft reference.
I have found that at the Database level in the SSAS instance there is a "Process Database" command with a "Process Full" option, which does update all the underlying tables.
But maybe there is a better way to do this?
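For what it's worth, the same Process Full can be scripted instead of clicked: in SSMS you can run a TMSL refresh command in an XMLA query window against the SSAS instance (the database name below is a placeholder; add a "table" property to the object to process a single table instead):

```json
{
  "refresh": {
    "type": "full",
    "objects": [
      { "database": "MyTabularModel" }
    ]
  }
}
```

This is also the usual route for automating the refresh, e.g. from a SQL Agent job, rather than reprocessing manually after every load.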

How to document a NoSQL database model?

When developing a relational database model for an application, one can create, for example, an entity-relationship model, e.g. using MySQL Workbench. Documenting those models using UML class diagrams is also very common. Both methods can be seen as the standard way to go when drawing up an architecture for a project.
Question: what is the standard way to go for NoSQL database models?
I want to design a model for MongoDB and/or ElasticSearch which are basically arbitrary JSON stores. But I need to document at least the fields and the "relations" to each other to give a reference structure.
Is there any existing tool (or diagram language)? I looked into the documentation and found no hint of an answer. I'm aware of text files and know that one could simply write a text file with an example JSON document, but I'm looking for a somewhat more sophisticated and polished solution.
Any ideas or standards here?
Moon Modeler for MongoDB is a tool for visual definition of NoSQL structures. It allows you to draw diagrams similar to entity-relationship diagrams, with nested types/embedded documents.
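Another lightweight option, assuming MongoDB 3.6 or later: attach a $jsonSchema validator to each collection. The server enforces it, and it doubles as field-level documentation, including the "relations" (references to other collections). The collection and fields below are hypothetical:

```json
{
  "$jsonSchema": {
    "bsonType": "object",
    "required": ["title", "authorId"],
    "properties": {
      "title":    { "bsonType": "string",   "description": "Post title" },
      "authorId": { "bsonType": "objectId", "description": "Reference to users._id" },
      "tags":     { "bsonType": "array",    "items": { "bsonType": "string" } }
    }
  }
}
```

You would pass this as the validator when creating the collection (db.createCollection("posts", { validator: { $jsonSchema: ... } })), and the schema file itself can live in version control as the reference structure.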

Populate Core Data structure for iPhone/iPad with Sqlite3

I have a SQLite database. Should I move the data into a Core Data store? How can I do that? My problem is the "Z" relations between tables.
Is it possible?
Core Data isn't SQL even when it employs an SQLite store. Although it is theoretically possible to convert a standard SQLite file to the schema Core Data uses, that is difficult and risky especially given that Apple doesn't document the schema and can therefore change it without warning. You really need to translate the SQL data into Core Data objects.
The best way is to write a utility app containing your Core Data model. Read in the SQL data with the standard functions, then use that data and its relationships to create the appropriate managed objects and object relationships in Core Data.
Usually you have code anyway for creating managed objects, populating attributes and setting relationships. Just use that code but instead of providing the data from the UI or a feed, provide it from the data provided by SQL.
I found a solution. In the future I should probably use SQLite directly, but for anyone with a similar problem this solution works well.
Step 1: In your Core Data model, add temporary columns to hold the relation IDs from the original table.
Step 2: In the CSV data, add two columns. The first contains the value 1 and corresponds to Core Data's P_OPT; the second contains the entity identifier, P_ENT, retrieved by reading the Z_PRIMARYKEY table in the SQLite file that Core Data generated.
Step 3: With any Mac SQL editor, transfer your data into the SQLite file generated by Core Data. Remember to carry the relation IDs over into the temporary columns.
Step 4: Using the SQL UPDATE command (any SQL editor on the Mac works), update all the relation ID columns in the Core Data tables with the corresponding Z_PK values, retrieved by querying against the temporary columns.
I hope the explanation is not too convoluted and is useful to others.
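A sketch of Step 4 above, with hypothetical entity names (ZBOOK, ZAUTHOR) and a hypothetical temporary column ZOLDAUTHORID holding the legacy IDs from the original database:

```sql
-- Illustrative only: resolve each legacy foreign key to the Z_PK that
-- Core Data assigned, using the temporary legacy-ID columns.
UPDATE ZBOOK
SET ZAUTHOR = (SELECT a.Z_PK
               FROM ZAUTHOR a
               WHERE a.ZOLDID = ZBOOK.ZOLDAUTHORID);
```

After the relation columns are populated, the temporary legacy-ID columns can be dropped from the model.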

Data Type for storing document in SQL Server 2008 via Entity Framework

I'm trying to store a document in SQL Server 2008 using the Entity Framework.
I believe I have the code for doing this completed. The problem I'm now facing is which Data Type to use in SQL Server and in my entity model.
'Image' was my first choice, but this causes an "Invalid mapping" error when I update the model. I see that there's no equivalent of 'Image' (going by the Type drop-down in the entity's properties).
So then I tried 'varbinary(MAX)', and I see that this maps to 'binary' in the entity model. However, when I run the code it tells me that the data would be truncated, so it stopped. Upon investigation I see that the SQL Server data type 'binary' is limited to 8000 bytes - which is why I chose 'varbinary(MAX)' - so the entity model seems to be reducing 'varbinary(MAX)' to 'binary'.
Is this right?
If so, what should my Data Types be (in both SQL Server 2008 and in my entity model) please? Any suggestions?
Change the data type in your model to byte[] and it will work. If you need more explanation, please leave a comment.
EDIT:
I had tried this before in LINQ to SQL, and this time I tried it in EF. In the conceptual model of your .edmx file (say, Foo.edmx) the type is Binary (you can open it via the Open With context menu in Visual Studio and select XML Editor, or any other text editor such as Notepad), but in the generated Foo.designer.cs file the data type is Byte[].
And there is no limit like the one you mentioned above.
I tried it with 10,000 bytes and it inserted successfully without truncating my array. As for benchmarks on saving documents in the database versus the file system, I read an article saying that in SQL Server 7 the file system performed better at retrieving stored data, but in later versions SQL Server overtook the file system, so the article suggested saving documents in SQL Server.
IMHO, if the documents are not too large, I prefer to store them in the DB (NoSQL DBs have great performance here, as far as I know), because of:
First: the integrity of my data.
Second: better performance (if a folder holds a large number of files, reading and writing them slows down more and more unless you organize them into multiple folders, preferably in a tree of folders).
Third: security policies that you can apply through your application more easily (you can do this with the file system approach too, but I think it's easier here).
Fourth: you can benefit from the facilities your DBMS provides for querying and manipulating those files.
and much more ... :-)
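A minimal sketch of the byte[] mapping, assuming a code-first-style entity (names here are made up; in a model-first .edmx the property is generated for you as described above):

```csharp
public class Document
{
    public int Id { get; set; }

    public string FileName { get; set; }

    // Stored as varbinary(max) in SQL Server; EF surfaces it as byte[].
    public byte[] Content { get; set; }
}
```

Populating it is then just a matter of something like doc.Content = File.ReadAllBytes(path) before calling SaveChanges.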
Ideally you should not store documents in the database, instead store the path to the document in the database, which then points to the physical document on the web server itself (or some other storage, CDN, etc).
However, if you must store it in SQL Server (and seeing as you're on SQL Server 2008),
you should be using FILESTREAM.
It is supported by EF4 (I believe). It maps to binary.
As I said though, I'm not sure how well this will perform; run some benchmarks, and if it's not performing well, try using the regular ADO.NET/FileStream API.
I still think you should put it on the file system, not in the database (my opinion, of course).
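For reference, a FILESTREAM column as suggested above looks like this in DDL. It requires a FILESTREAM-enabled filegroup on the instance, and FILESTREAM tables need a unique ROWGUIDCOL column; the table and column names here are illustrative:

```sql
CREATE TABLE Documents
(
    Id      UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    Name    NVARCHAR(255) NOT NULL,
    Content VARBINARY(MAX) FILESTREAM NULL
);
```

The data is still declared as varbinary(max), so the EF mapping stays byte[]; SQL Server just keeps the blob bytes on the NTFS file system behind the scenes.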