I'm supporting some legacy PostgreSQL 8.3/8.4 databases and migrating them onto newer Windows Server 2008 hardware.
I've been informed that the NAMEDATALEN value needs to be higher than the default.
As far as I understand, NAMEDATALEN is not set in a config file, but rather has to be set when compiling PostgreSQL.
Having already installed PostgreSQL 9.0 on the new box, I'm wondering if it's possible to alter this setting after the fact?
It's not possible to alter this option after installation - it has to be changed in the source file src/include/pg_config_manual.h, after which Postgres has to be recompiled, the data directory reinitialized with initdb, and the data restored. Every security and bugfix minor release will then have to be patched and recompiled as well. This is a bad thing to do.
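If you do decide to rebuild anyway, the process would look roughly like this - a sketch for a Unix-style source build (on Windows you would use the MSVC build scripts instead), with example paths and an example value of 128:
# edit src/include/pg_config_manual.h in the PostgreSQL 9.0 source tree:
#   #define NAMEDATALEN 64   ->   #define NAMEDATALEN 128
./configure --prefix=/usr/local/pgsql-namedatalen128
make && make install
/usr/local/pgsql-namedatalen128/bin/initdb -D /usr/local/pgsql-namedatalen128/data
# then restore your data into the rebuilt cluster, e.g. pg_dump from the old server piped into psql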
It is much easier and more sensible to patch the application source to use shorter table/function/etc. names. The maximum is 63 characters, which is enough for insanely_stupid_and_totally_impractical_table_or_function_name0
Maybe your schema does not really need longer names, and this requirement is just an artifact from a long-gone version of your client application. Check this - try to import the schema and functions into the new database.
And this question should probably be migrated to serverfault.com.
In testing an upgrade to our Postgres database, we've discovered that one of our oldest versioned migration files is no longer valid SQL. This isn't an issue for the production database which (of course) has those migrations already in the schema_history_table, but standing up any new sandboxes is now made impossible by this broken V file.
What's the best way to bring an old V file into the modern world without forever orphaning our production database?
Off the top of my head I can think of a few possible options.
Configure postgres to enable previous version compatibility. I'm no expert at this, but I think there are some options here.
Just modify the historic migration scripts so they now work with the new version. This means you can't stand up old versions any longer, but does that matter to you? I think you'll need to run flyway repair after you do this, as Flyway will detect that the files have been tampered with.
Create a parallel set of scripts, one for each version, putting them in different folders. Then use the flyway.locations option to specify different folders depending on the version of the target.
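For option 3, the configuration could look something like this in flyway.conf (the folder names are just examples; the same value can also be passed as the -locations command-line flag):
# old targets keep using the original scripts:
#   flyway.locations=filesystem:sql/legacy
# new targets use the rewritten set:
flyway.locations=filesystem:sql/modern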
I have installed the Firebird server from the zip kit using instsvc.exe. This works well with the Inno Setup Exec function:
instsvc install -auto -name 'FireBird2_5'
My question is: what are the minimum files necessary to install the Firebird server?
The installer is too slow due to unnecessary files. I found this link and I'm looking for something similar.
The total size of Firebird 2.5.8 is about 230 files and +/- 30 MB unzipped, so I doubt this is really a problem, but if you really want to minimize things, you can remove the following.
Using Firebird-2.5.8.27089-0_x64.zip as the basis, you can get rid of the following files or folders because they are just examples and documentation, or files for specific purposes (if you know you need them, don't delete them):
doc
examples
help
include
lib
misc
system32
udf (most have been replaced by built-in functions anyway)
Readme.txt
In theory you can remove the intl folder, but that will severely limit character set support in Firebird which can cause a lot of problems, so I'd advise against that.
If I'm not mistaken it should also be possible to remove plugin\fbtrace.dll and fbtrace.conf, but you may want to double check that.
From the bin folder, you can get rid of the following files:
fbguard.exe (make sure you don't enable use of Firebird Guardian using instsvc)
gdef.exe (tool for deprecated GDL DDL language)
gpre.exe (preprocessor for compiling embedded SQL, unlikely you need this)
gsplit.exe (tool for splitting backup files)
install_classic.bat
install_super.bat
install_superclassic.bat
qli.exe (tool for a deprecated query language)
uninstall.bat
If you don't need the administrative tools (though this might not be a good idea, because managing, fixing, or diagnosing database problems gets harder), you can also remove from bin:
fb_lock_print.exe
fbsvmgr.exe
fbtracemgr.exe
gbak.exe
gfix.exe
gsec.exe
gstat.exe
isql.exe
nbackup.exe
In theory you could also get rid of fb_inet_server.exe or fbserver.exe, depending on whether you use Classic, SuperServer or SuperClassic. Classic and SuperClassic use fb_inet_server.exe, and SuperServer uses fbserver.exe; you can delete the one you don't use.
The other files are either technically necessary or legally necessary (the license notices).
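If you want to automate the pruning after unzipping, a batch script along these lines should do it (a sketch only; it assumes the zip was extracted to C:\Firebird and that you run SuperServer, so fbserver.exe is kept):
rem prune-firebird.bat - remove optional folders and tools from an unzipped Firebird 2.5.8
cd /d C:\Firebird
for %%D in (doc examples help include lib misc system32 udf) do rd /s /q "%%D"
del Readme.txt
del bin\fbguard.exe bin\gdef.exe bin\gpre.exe bin\gsplit.exe bin\qli.exe
del bin\install_classic.bat bin\install_super.bat bin\install_superclassic.bat bin\uninstall.bat
rem only delete bin\fb_inet_server.exe if you are certain you don't use Classic or SuperClassic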
We are in the process of upgrading Sitecore 6.6 to 7.2. Part of the upgrade is to migrate all the media items from 6.6 to 7.2.
I tried creating a package, but the package is too large and times out on installation.
I found the link below about the Sitecore PowerShell Console, which shows the copy-item command:
http://blog.najmanowicz.com/2011/11/18/sample-scripts-for-sitecore-powershell-console
I attached the 6.6 database to the 7.2 instance so that I can access the 6.6 DB. However, copy-item doesn't seem to support different databases.
Could someone please explain how I can use Sitecore PowerShell or similar to migrate media items from 6.6 to 7.2?
I had a similar issue with a (very large) media library during a similar migration. Packages seem to bomb out around the 2 GB mark, so serialize the items instead:
Delete everything from /Data/Serialization
Open the media library. Make sure you have the Developer tab showing (right-click somewhere on the toolbar and enable it otherwise)
Select your root media item, then Serialize Tree
Wait...
Copy the serialized files from /Data/Serialization to your new server
From the toolbar, select Update or Revert Tree depending on your requirements
Profit.
You can find more info in the Sitecore Serialization Guide and this post by Brian Pedersen.
You should be able to do this in PowerShell too (from my understanding). You need to:
Add the database to your connectionString.config (sketched below)
Add that database to your web.config under <sitecore><databases><database>. You can copy the existing master node and rename the id attribute to match your connection name
Your legacy database should now be connected to the Sitecore interface; you can check that it is present in the database selector list at the right of the desktop
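For illustration, the two config additions from the first two steps would look roughly like this (the legacy_master name and the connection details are placeholders):
<!-- App_Config/ConnectionStrings.config -->
<add name="legacy_master" connectionString="user id=YOUR_USER;password=YOUR_PASSWORD;Data Source=YOUR_SQL_SERVER;Database=Sitecore66_Master" />
<!-- web.config: inside <sitecore><databases>, duplicate the existing <database id="master"> node and rename its id -->
<database id="legacy_master" singleInstance="true" type="Sitecore.Data.Database, Sitecore.Kernel">
  <!-- children identical to the "master" node -->
</database>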
The PowerShell command now needs a "from" and a "to" location. Assuming your database is called "legacy_master", the following should work:
copy-item "master:\media library\*" "legacy_master:\media library\"
I've found Hedgehog TDS (and sometimes Razl) quite useful for doing this.
Create a new TDS project (don't version control it), and download all the items you need to your local machine. You can for example connect the "Debug" build to your source 6.6 instance, and a "Release" build to your target 7.2 instance. Then you can just synchronize the items to your target machine. It's sometimes good to synchronize one or a few branches at a time if you have long latency connections.
The good thing about this is that you're in total control of your content and can see what fields are updated etc. During an update process, it's sometimes useful to compare other parts of the db as well, just to ensure you don't miss any changes you've made to the platform.
Since I mentioned Razl as well: I've found Razl quite good if you have a whole branch that you know should be transferred from one db to another (such as the case you describe). TDS is a bit slower, but more universal - and you may have a TDS license already so it may not be worth an additional Razl license.
I've just added item transfer from one DB to another so you can Copy-item between databases starting with Sitecore PowerShell Extensions 3.0. Thanks for the great idea!
Just to add another option: you can perform tasks like this using Revolver.
WARNING: Try this in a test environment first
If we assume that:
the context item is the media library item
the current database is master
the target database is called master72
then something like this should work:
cp -r -n master72/sitecore/
We are using OrientDB in its embedded Java mode (not as a separate server process), and would like to avoid having Snappy executed from /tmp (for security reasons).
My understanding is that Snappy is for compression. I have found a couple references to disabling compression in the XML config file for an OrientDB server, but that doesn't apply to us. Glancing through the source code, it looked like there might be an ALTER command that might change the compression setting, but a) I couldn't see what that command would be, and b) running it at that point might be too late, as snappy might already have been loaded.
The other option would be if we could just install the snappy.so library permanently on the server, and have OrientDB use that copy. I suspect that's not possible, but figured I would mention it in case it is.
We are using OrientDB 1.7.4.
Start the JVM with this option:
-Dstorage.compressionMethod=nothing
The important thing is to create the database with this mode. Before 2.0 (still in snapshot status at the time of writing) you have to both create and use the database with this setting.
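In an embedded setup that just means adding the flag to the JVM invocation of the application that creates (and later opens) the database; the jar and class names below are placeholders:
java -Dstorage.compressionMethod=nothing -cp your-app.jar com.example.YourApp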
I'm part of a development team that works on many CMS based projects, using systems like Joomla and Drupal.
In our development process, all of our code changes are managed in Git. At the end of a sprint, we create a diff that we can apply via patch to the live site.
The problem is that most of the time, the changes include
Database Schema Changes
Database Data Changes
Source Code changes
Binary file changes (like images)
Git diff handles source code changes beautifully. Binary files are not included in the diff, except for a reference to the fact that the files have changed.
Database Schema Changes and Database Data Changes are a mess.
I was wondering if anything like a unified patch system exists that could be used to deploy all of these changes in one patch.
So the question is, "Is there a system that can be used to deploy all of these changes in one shot?"
Ideally, this system would allow a dry run (like patch does), but for all four of the change types.
Edit:
Thank you everyone for the feedback that you provided, it was a starting point for my research in this area.
Here is what I found so far:
It's difficult to deploy PHP-based applications using a Linux packaging system, because changes to the project happen iteratively rather than as releases.
It would be possible to use dbconfig to deploy changes to a project, but the problem is generating MySQL DB diffs (schema and data).
What is really missing for deployment of PHP-based applications is a deployment manager that would be installed on the server and act as the interface for deploying the patches.
I started a Google Wave on this topic and produced a lot of information as a result.
If anyone is interested in reading this wave, please let me know and I will add you.
For handling installation and upgrades of our application, we use the Debian packaging system (.deb packages).
Context:
We are building a J2EE + Flex application, shipped and administered through a VPN.
So not so far from your situation.
Fresh installs and upgrades from one version to another are done through Puppet (a system for automating system administration tasks: it installs our .deb).
In the .deb we have
our compiled source code
the schema of the database (handled by dbconfig)
binary stuff
instructions to install through apt all the other applications needed (MySQL, Tomcat, ...)
=> All the stuff for a fresh install
We also add the information needed to go from one version to another:
the scripts for upgrading the database (one for each version)
new binaries
new stuff to launch at machine start (e.g. some weeks ago we added an ActiveMQ server)
=> Once the .deb is made correctly, we can install or upgrade seamlessly in one operation (it's done automatically, without any prompt).
There is one .deb per release; each .deb has a version number and a signature.
You can pick any of our .debs and do a fresh install, or upgrade from the current version to the version it holds.
The .deb is built by our continuous integration system (we build a .deb every hour, as if we were about to release a new version).
What are the benefits?
Install/upgrade automatically, with confidence.
Roll back a version.
Dry runs are natively supported.
In your precise case
* Database Schema Changes
* Database Data Changes
* Source Code changes
* Binary file changes (like images)
Database => you will have to write migration scripts, one for each version (e.g. 1.2-update.sql, 1.3-update.sql). See the sketch below.
Source code and binaries => add them, and say in which version they have to be copied/used.
Edit: I'm not sure about source code; we are doing this with compiled code...
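As a sketch of the database part, the package's postinst script can apply only the migration scripts that are newer than the previously installed version (the package name, paths and the mysql call are examples):
#!/bin/sh
# debian/postinst (sketch)
set -e
case "$1" in
  configure)
    previous="$2"   # empty on a fresh install
    # assumes the lexical order of the file names matches the version order
    for script in /usr/share/myapp/sql/*-update.sql; do
      version=$(basename "$script" -update.sql)
      if [ -z "$previous" ] || dpkg --compare-versions "$version" gt "$previous"; then
        mysql myapp < "$script"
      fi
    done
    ;;
esac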
Some links to start:
https://wiki.ubuntu.com/PackagingGuide/Complete
http://www.debian.org/doc/manuals/maint-guide/index.fr.html#contents (in French)
http://pwet.fr/man/linux/formats/dbconfig (dbconfig)
http://www.debian.org/doc/FAQ/ch-pkg_basics.en.html (Debian package basics)
I don't think you'll find a fail-safe mechanism.
I recommend that, when possible, you take into account compatibility with the current published source when making schema/data changes.
This way you can make a very simple tool that runs database scripts committed to a particular SVN location (you don't want diffs of database changes, because if you need further modifications you need different statements).
With the above done, you can have a simple command that runs the database changes, then the binary & source code changes.
For the database there is also the option of schema & data comparison tools; these could be used to compare environments and make sure nothing unexpected is missing from the change scripts. They could also generate the change scripts, but as I said, you really want to make sure they won't break the current source.
You can create a tool to do the migrations painlessly -- something similar to Peoplesoft's Patch Upgrade Assistant.
It is basically a standalone executable that reads an "upgrade template" and carries out tasks. The upgrade template declaratively describes the upgrade tasks or "steps". The steps could be: copy (for backing up or moving precompiled objects like classes and other binaries), database (for altering schema elements), SQL scripts (for loading or transforming current data). The steps can have some predicate logic: if it is this, do this, else skip it and go to the next, etc.
The template is usually an XML file. It also provides for manual steps with instructions for manual actions. Each step also specifies whether it is recoverable or not. It would also validate whether the step succeeded.
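A toy illustration of what such a template could look like (an invented format, not PeopleSoft's actual one):
<upgrade from="1.3" to="1.4">
  <step type="copy" recoverable="true">
    <from>build/images</from>
    <to>webroot/images</to>
  </step>
  <step type="database" recoverable="false" if="schema-version &lt; 1.4">
    <script>sql/1.4-schema.sql</script>
  </step>
  <step type="manual">
    <instructions>Clear the CMS cache from the admin panel.</instructions>
  </step>
</upgrade>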
It may be possible to build an open source project around this requirement, which is quite common.
You need to save the git commit objects to a local file and then import them into the other repo/branch.
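If the goal is just to move commits between repositories without a direct connection, git bundle does exactly this (the branch and file names are examples):
git bundle create changes.bundle master
# in the other repository:
git pull changes.bundle master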