Pull data from multiple files into an overview file - QGIS

My current project is split into sections, with each section having a set of layers that share the same setup and structure but, of course, contain different data depending on the location.
Is there a way to pull all the information into one file without resorting to copy and paste or combining the files with QGIS merge?
My thinking is a live, linked overview, so that if you update sections 1.1 and 1.2, the overview for project 1 also picks up all the changes.
Does something like this exist?

Related

Managing a ClearCase Snapshot View with Remote Server

I have a snapshot view of a project of tens of thousands of files. I work remotely and a live view wouldn't be practical. I am only testing with these files, so I never have to put something back, but I do want to be able to get any files that have changed.
The way it has been explained to me is that there is no mechanism in ClearCase to identify my out-of-date files or to automatically update them when I would request an update of just those files.
The only option I have is to replace the entire snapshot, which could mean waiting a very long time for it to download (even when I am on the local network and not working remotely). Even then, I wouldn't know which files were updated since my existing snapshot was made.
I'm new to ClearCase, but have used SVN. SVN has the ability to see which files are out of date and to update just those files.
Is there a way, with ClearCase, to get what I want? I feel (or want to think) that I may be misinformed about how it works.
The cleartool update command has a -print option:
-print Produces a preview of the update operation: instead of copying or removing files, update prints a report to standard output of the actions it would take for each specified element.
That should suffice to know what's changed and if you need to update.
btw: the update may analyze the entire view, but only actually downloads files that have changed.
update
Updates elements in a snapshot view
[...]
Updating Loaded Elements
For one or more loaded elements, the update command does the following:
* Reevaluates the config spec to select versions of loaded elements in the VOB and loads them if they differ from the currently loaded versions
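In practice, a dry run from the root of the snapshot view could look like this (the view path is illustrative):

    cd /views/myproject_snapshot
    cleartool update -print .

Nothing is copied or removed; the report just lists what a real update would load.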
You could also work more effectively by using labels or baselines. If you only update after a particular baseline, you could run cleartool diffbl to find the differences between the current baseline and the latest one; you could then just monitor for a new baseline. Or you can use cleartool lsact -l to examine the element versions in the new activity.
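A sketch of both commands; the baseline, activity, and project-VOB names here are made up:

    # list the versions that differ between two baselines
    cleartool diffbl -versions baseline:REL_1.1@/vobs/my_pvob baseline:REL_1.2@/vobs/my_pvob
    # long listing of an activity, including its change set
    cleartool lsact -l my_activity@/vobs/my_pvob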
Do you have the option of using the ClearCase Remote Client (CCRC)? It is designed to efficiently support high-latency (i.e. WAN) connections to the ClearCase servers. See the ClearCase Knowledge Center:
Developing software with Rational ClearTeam Explorer
CCRC supports both Web views (similar to snapshot views) and automatic views (similar to dynamic views) and provides much better performance than CCLC (the "ClearCase Local Client" that supports snapshot and dynamic views) over a high-latency network.
The command line interface for CCRC (rcleartool) supports the 'update' operation as does the ClearTeam Explorer GUI. The update operation evaluates which versioned files have changed and only updates that subset.
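For example, from the root of a web view, and after authenticating against the CCRC server, the equivalent of the snapshot-view update above should be something like:

    rcleartool update .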

ModX: how to update the database without overwriting content

I am working on a ModX website (mainly templates, but also system settings, user management, etc.) while a development website is already online and the customer has started to input content.
I haven't found a practical solution for pushing my work online (layouts are stored in the database) without overwriting the content entered by the customer (which is in the database as well).
My present workflow consists of first replacing my local modx_site_content table with the one extracted from the online database, then pushing this hybrid database online. Not practical, plus I am not sure user changes are confined to modx_site_content only.
There must be a better workflow though! How do you handle this?
I don't think it gets any easier than selecting the tables you need and exporting only those into the live environment. Assuming you only work on templating, the template, snippet & chunk tables are all you need to export.
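As a sketch with mysqldump, assuming MODX's default modx_ table prefix (the database names and users here are placeholders):

    # dump only the element tables from the development database
    # (site_htmlsnippets is where MODX stores chunks)
    mysqldump -u devuser -p modx_dev \
        modx_site_templates modx_site_snippets modx_site_htmlsnippets > elements.sql
    # load them into the live database, leaving modx_site_content untouched
    mysql -u liveuser -p modx_live < elements.sql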
We usually take a copy, develop, and merge only once, when the new features are supposed to go live; this minimizes the trouble. The client can also continue working normally until D-day.
If you're doing a lot of snippet work, you could always include an actual PHP file instead and work with your editor directly on those files, connect them to Git, and so on.
If your project is not really big, you can store your chunks, resources, etc. in separate files (there is an option called "Static Resource") and then manage your changes with Git. Otherwise, you need to store user data in separate tables and deploy the whole database with Fabric, for example.
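A minimal sketch of that file-based workflow, where the elements directory is whatever path your static elements actually point at:

    cd /var/www/mysite
    git init
    git add elements/        # the directory holding the static chunk/snippet files
    git commit -m "Track MODX elements as files"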

Subclipse / Subversive: Any way to filter out files not modified locally

Is there any way using Subclipse or Subversive to apply some kind of filter on Package Explorer that will hide all files that weren't modified locally?
It would sometimes be very useful when I just want to focus on my local changes (for example, to review them). I know that locally modified files are marked in Package Explorer (in Subclipse with a "star" symbol), but in big projects with hundreds of files that doesn't help much (it would be much easier and clearer if only the modified files were visible).
Of course packages containing modified files should be visible as well.
Have you tried the Synchronize view? It shows all your changes in one place, which makes it easy to work with the items. You can also create changesets and group items by changeset in this view.
Using Subclipse, I set the Synchronize view so that all SVN projects in my workspace are synchronized. I then pin it and set a schedule to refresh every hour. Local changes refresh immediately; the hourly schedule only controls how often the repository is checked for incoming changes.
You can put the view in Outgoing mode if you only want to look at your local changes.

Choosing a version control package for document control

We have a legal requirement to ensure the latest version of documents (mainly Word and Excel) are readily accessible. We currently implement document control by manually updating the page footer with a new version number but want a better system.
I've played around with TortoiseSVN and the functionality is good; but the problem is that, unless I've missed a configuration variable somewhere, Subversion applies version numbers to the whole repository (i.e. to every file in it), not to individual files. What I want is to be able to create a folder in the repository where all our documents go, with a file's version number changing only when that particular document changes. As it stands, if we had 30 files, each printed and put on display with its version number, anyone going to the repository would almost certainly find a different version number even when the document contents were identical. Not ideal.
The alternative would be to create a new repository for each and every document, but the administrative overhead of that would be prohibitive. I'm essentially looking for something that does much of what TortoiseSVN does but treats each file as an individual project with its own independent version number.
Whatever solution we come up with, we would want the document's version to be shown automatically in its page footer. TortoiseSVN can do this with a macro: http://insights.oetiker.ch/windows/SvnProperties4MSOffice/.
Appreciate any help, thanks.
Greg
I think what you're actually looking for is a document management system.
Version Control Systems (VCS), such as Subversion (the underlying technology behind TortoiseSVN), are fundamentally unsuited for your task due to their focus on tracking changesets among all files within a given project (i.e. one changeset/version can involve changes to many files within the project).
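To illustrate with Subversion: svn info on any file reports both the working-copy revision and a per-file "Last Changed Rev", and it is the former that keeps moving even when a given document is untouched (file name and numbers below are made up, output abridged):

    svn info contracts/nda-template.docx
        Revision: 1482            <- repository-wide, moves with every update
        Last Changed Rev: 1201    <- last commit that actually touched this file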
Another advantage of using a document management system is that they typically allow you to attach extensible metadata attributes to your documents in a much richer way than version control systems, as well as providing search capabilities.

Daily Build and SQL Server Changes

I am about to try to automate a daily build, which will involve database changes, code generation, and of course a build, a commit, and later on a deployment. At the moment, each developer on the team puts their structure and data changes for the DB in two files, one for structure and one for data, e.g. 6.029_Brady_Data.sql. Each file includes all changes for a version, and all changes are repeatable (i.e. guarded with EXISTS checks etc.), so they can be run every day if needed.
What can I do to bring more order to this process, which currently amounts to: concatenate all structure change files, run them repeatedly until all dependencies are resolved, then repeat with the data change files?
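For reference, the current process could at least be scripted as a blunt retry loop. A sketch using sqlcmd, where the server and database names and the *_Structure.sql naming convention are all assumptions:

    #!/bin/sh
    # Re-run every structure script until one full pass succeeds;
    # the scripts are written to be repeatable, so extra runs are harmless.
    for pass in 1 2 3 4 5; do
        failures=0
        for f in *_Structure.sql; do
            sqlcmd -S BUILDSRV -d ProjectDB -b -i "$f" || failures=$((failures + 1))
        done
        [ "$failures" -eq 0 ] && break
    done
    # ...then the same loop over the *_Data.sql files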
Create a database project using Visual Studio Database Edition, put it into source control, and let the developers check in their code. I have done this and it works well with daily builds, and it offers a lot of support for structuring your database code. See this blog post for the features:
http://www.vitalygorn.com/blog/post/2008/01/Handling-Database-easily-with-Visual-Studio-2008.aspx