How do you check your database in and out of svn (or git)?

Currently I go into phpMyAdmin, export my database as a text file and then save it with the application files before I commit things to svn (or git). Then of course, I've got to import it to production.
Is there a better way?
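One small improvement over hand-exporting from phpMyAdmin is to script the dump so it runs the same way before every commit. A minimal sketch, assuming MySQL, a database named myapp, and a db/ folder in the project (all names invented here):

```shell
# write a small dump script; it snapshots structure and data into
# plain-text files that can be committed like any other source file
mkdir -p db
cat > db/dump.sh <<'EOF'
#!/bin/sh
mysqldump --no-data myapp > db/schema.sql        # table structure only
mysqldump --no-create-info myapp > db/data.sql   # rows only
EOF
chmod +x db/dump.sh
```

Running db/dump.sh before svn commit or git commit replaces the manual export; on production you would import with mysql myapp < db/schema.sql followed by the data file.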

It depends on the language you use; RoR has it built in. For a project I'm currently doing in ASP.NET MVC, I keep two files in a folder named database in the project: one file contains the structure of the database, and the other some dummy data for testing. I must say it is a cumbersome way of sharing your database, since whenever you update something you have to let the others know they have to rerun the (updated) SQL structure script.
The structure script drops tables if they exist, re-creates them, and adds any new tables.
I could not find a better way, like db:migrate in Ruby on Rails.
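The drop-and-recreate pattern of such a structure script can be sketched in a few lines (table names invented for illustration):

```shell
# generate a structure script of the kind described above; dependent
# tables are dropped first so the script can be rerun at any time
mkdir -p database
cat > database/structure.sql <<'SQL'
DROP TABLE IF EXISTS orders;   -- drop child tables before their parents
DROP TABLE IF EXISTS users;

CREATE TABLE users  (id INT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE orders (id INT PRIMARY KEY, user_id INT REFERENCES users(id));
SQL
```

Because every run starts by dropping what exists, teammates only ever need to rerun the one script after pulling an update.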

If you don't have something like Rails' migrations, or are in a Java environment or anything else, check out Liquibase. It's pretty cool if you need that much flexibility. We just track .sql files which set up the entire database.

Generally, I would create one script that generates the database (i.e., all the tables, users, views, indexes, etc.) and another that populates the DB with data. Then, use DBDeploy (similar to RoR's migrations) to handle all DB modifications, and create build targets for all these scripts in Ant, NAnt, Buildr, etc. This way everything is versioned, and since it's all text files it works with any SCM.
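The DBDeploy model boils down to numbered change scripts applied in filename order; a rough sketch of what a build target would do (paths and the commented-out mysql call are assumptions):

```shell
# create two numbered delta scripts and apply them in order
mkdir -p db/deltas
printf 'CREATE TABLE users (id INT);\n'            > db/deltas/001_create_users.sql
printf 'ALTER TABLE users ADD name VARCHAR(50);\n' > db/deltas/002_add_name.sql

for f in db/deltas/*.sql; do   # glob expansion is sorted, so 001 runs first
  echo "applying $f"
  # a real target would pipe each file into the client: mysql myapp < "$f"
done
```

DBDeploy additionally records which deltas have already been applied in a changelog table, so reruns skip them.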

If you're looking for migrations similar to db:migrate in Rails, but you're not in rails, there are other options. There's migrate4j which is similar to db:migrate, but written in/for Java. There's also liquibase, which is very flexible and (AFAIK) language independent, but does make you write everything in XML (which is kind of the opposite of "the Rails way").
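For a taste of that XML, a minimal Liquibase changelog with one changeset looks roughly like this (table and author names invented):

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">

  <!-- each changeSet is applied once and recorded in Liquibase's changelog table -->
  <changeSet id="1" author="alice">
    <createTable tableName="users">
      <column name="id" type="int">
        <constraints primaryKey="true" nullable="false"/>
      </column>
      <column name="name" type="varchar(100)"/>
    </createTable>
  </changeSet>
</databaseChangeLog>
```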

If you look at Apache ODE, they have an h2.rake task for Buildr that builds a database for testing automatically.

Related

Is there a way to refactor all table-names and column-names of a PowerBI (PBIX) dashboard through Visual Studio Code or any other IDE?

I have about 25-30 tables in the Power BI app, reading from a SQL database. The application is towards the end stages of development, and now the business wants things renamed using new conventions, which technically means a lot of rework, going one table at a time to refactor in PBI. Is there a way in PBI, like refactoring a C#/Java codebase in an IDE such as Visual Studio or IntelliJ, to rename something in one place and have it auto-refactor wherever that dependency exists?

EntityFramework Codefirst Updating from Database

I know this isn't the intended way of using EntityFramework, but I have a much easier time designing the database in SQL Management Studio, where I can create indexes, link foreign keys, etc. visually. But I really want to use the automatic database updates in the future when deploying updates to customers.
I designed the original database in SQL Management Studio, then created the EntityFramework code-first model from that database. But so far, if I want to add another table, I end up dropping everything and regenerating the DB from SQL Management Studio. This was OK for the first part of my framework development, but now things are starting to get a little more complex.
I've tried to learn the code-first mechanism, but some of the more complex items have bitten me, and I don't have a lot of time to allot to this. I was hoping there was a hybrid way of designing in SQL Management Studio while utilizing the database deployment functionality built into EntityFramework.
I'm working on getting the POCO reverse generator running here but having some issues. Ultimately, I think that is the way I'll want to go. But I did find a temporary workaround with existing tools built into VS2019 until I get that tool working.
1. Make a note of the model's name (I had a custom initializer for the connection string).
2. Delete the model file.
3. Right-click the folder where the code-first files are placed in the solution, then Add > New Item.
4. Select ADO.NET Entity Data Model and name it the same as the one in step 1.
5. Select Code First from Database.
6. Deselect saving the connection string (it won't overwrite, and the needed one is already in app.config).
7. Select the tables to import.
8. In my case, deselect "Pluralize object names".
9. Finish.
10. Update the master model file with the custom initializer.
11. In the Package Manager Console, run add-migration (I'm assuming you already have migrations set up).
12. Name the migration.
13. Run update-database (this applies the migration from step 12).
It might seem like a lot of steps, but it's actually not. And so far with a couple of days of playing around with it, I'm not running into any issues.

Does Oracle SQL Developer require a dedicated git repository? [duplicate]

We have a big database with a lot of stuff, and I want to use version control (Git) to manage changes.
There are a lot of articles on how to do it step by step, but one piece is missing for me.
Is there a standard or recommended file structure for the whole database (data excluded), and how can it be obtained from an existing database?
There are a lot of sources: procedures, functions, packages, etc.
Version control articles show how to manage a few files from a version control perspective, but they suggest that each file should be selected and saved to the file system separately.
Is there a way to export/import all of it into some pre-organized structure?
Good IDEs have such structures defined by languages or products, but it looks to me like SQL Developer doesn't have one.
It also looks to me like SQL Developer can have only one repository; there is no concept of projects which could combine different databases into separate units.
Should I invent my own structure and use something like
project/Abc/DB1/Packages/packageXyz/source1.sql
for each source? Sure, I can do this, but I worry that I may miss something.
Any advice?
Yes, SQL Developer can unload a schema to files for you, and then you can take those files into your SVN or Git projects.
Tools > Database Export.
I set the output to multiple directories, so there is one directory per schema object type.
Then I set my application schema and proceed to Finish/OK.
I talk about this in more detail here.

How can I version control my little framework better?

I have a little PHP framework for HTML development. I basically made HTML semi-functional by writing functions that return HTML markup. For example:
P('Some, text' . a('Link', '#'), 'my_class');
returns something like this:
<p class="my_class">Some, text<a href="#">Link</a></p>
Apart from being shorter to write, it is also a lot faster to type, because there are fewer awkward character combinations in the function calls than in the equivalent markup.
This framework is still under development, but I use it across different projects. My current layout is this:
~/public_html/
->core/
->project_1/
->core/
->project_2/
->core/
->project_3/
->core/
As you can see, the framework is called core; I have one copy of it in the top directory, and each project has its own copy. What I currently do, which is not any form of version control, is copy a project's core back to the top-level core when I make changes to it.
For example, let's imagine I'm working on project_1 and realize something needs to be added to core, so I add it. Then I copy project_1's core to the top core with this:
cp -rvu ~/public_html/project_1/core/ ~/public_html/
I do a recursive update; I like verbose because I can see what is being copied.
When I need to work on another project, let's say project_2, I do the opposite, updating project_2/core/ with the contents of ~/public_html/core.
As you can see, this is a problem. Not only is it annoying, it is also very error-prone. For example, I could forget to do an update after changing a core that is local to a project, and changing the same file in two different projects without updating will likely result in changes being overwritten and lost.
How can I manage this in a more efficient and safe way? Not to mention a saner one!
I was looking at git, but it seems that I would just end up with many different version-controlled copies of core.
You can (really, you must) use an SCM mechanism that allows code to be reused and shared instead of cloned (and I don't know of any VCS which does NOT have such a feature):
Subversion: externals
Mercurial: subrepos (or guestrepo)
Git: submodules (or git-subtree)
PS: Git as a first VCS is the worst choice.
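Concretely, with Git the layout above becomes one repository for core plus a submodule reference inside each project. A local sketch (all paths and names invented; the same idea works with hosted repositories):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# core gets a repository of its own instead of being copied around
git init -q core
(cd core \
  && echo '<?php // framework code' > core.php \
  && git add core.php \
  && git -c user.email=dev@example.com -c user.name=dev commit -qm 'initial core')

# each project then references that repository as a submodule
git init -q project_1
cd project_1
git -c protocol.file.allow=always submodule --quiet add ../core core
```

After changing core inside a project, you commit and push once from within the submodule, and every other project picks the change up with git submodule update --remote instead of cp -rvu.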

How to update zfproject.xml file after deleting some controllers, dbtables, etc. in Zend Framework?

I am using the Netbeans IDE to work with Zend Framework. When I create a new controller, action, etc. using the Netbeans Zend Command Window, the zfproject.xml file is updated automatically. However, when I delete some of them, the file is not updated and still keeps the names I deleted.
Is there a way (apart from manual way) to update this file?
Is it needed to update zfproject.xml to run the project properly or is it just an organized schema of the project?
Thanks a lot
This is a very good question. zfproject.xml often gets out of sync when you use both Zend Tool and manual creation of files.
Is there a way (apart from manual way) to update this file?
I don't know a good answer for this part. You might try iterating over the application directory structure.
Is it needed to update zfproject.xml to run the project properly or is it just an organized schema of the project?
It is just a schema, which is not parsed during the normal application life cycle; it is used only by the tooling.
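The iteration idea can be sketched as a one-liner that lists what actually exists on disk, for comparison against zfproject.xml (the layout below is a fake minimal ZF1 skeleton created just for the demo):

```shell
# fake a minimal Zend Framework application layout
mkdir -p application/controllers
touch application/controllers/IndexController.php
touch application/controllers/ErrorController.php

# list the controllers that really exist; anything named in zfproject.xml
# but missing from this list was deleted by hand
find application/controllers -name '*Controller.php' | sort
```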