Access 2013 + Source Control (Team Foundation Server) workflow

Since Access 2013 no longer offers direct source-control integration: what is your workflow for integrating a source-control system with MS Access, especially TFS?
Edit: a workflow from Access 2013 to ANY source-control system is appreciated.
The first thing I think of is exporting all objects into text files with the built-in function SaveAsText, which is available for almost every item in your database.
Application.SaveAsText acModule, d.Name, sExportLocation & "Module_" & d.Name & ".txt"
I would load, save and maybe even check in the plain files with VBA functions. The question is: is there a better workflow for this task? I really doubt that this is the best way to integrate Access 2013 projects into source control.
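For illustration, here is a minimal sketch of such an export routine (the naming convention, the parameter name and the set of object types covered are just my own assumptions; Application.LoadFromText would be the counterpart for re-importing):

' Rough sketch: export all major object types as text files.
' Assumes sExportLocation already ends with a trailing backslash.
Public Sub ExportAllObjectsAsText(ByVal sExportLocation As String)
    Dim obj As AccessObject
    Dim qdf As DAO.QueryDef

    For Each obj In CurrentProject.AllModules
        Application.SaveAsText acModule, obj.Name, sExportLocation & "Module_" & obj.Name & ".txt"
    Next obj

    For Each obj In CurrentProject.AllForms
        Application.SaveAsText acForm, obj.Name, sExportLocation & "Form_" & obj.Name & ".txt"
    Next obj

    For Each obj In CurrentProject.AllReports
        Application.SaveAsText acReport, obj.Name, sExportLocation & "Report_" & obj.Name & ".txt"
    Next obj

    For Each obj In CurrentProject.AllMacros
        Application.SaveAsText acMacro, obj.Name, sExportLocation & "Macro_" & obj.Name & ".txt"
    Next obj

    For Each qdf In CurrentDb.QueryDefs
        If Left$(qdf.Name, 1) <> "~" Then   ' skip hidden system queries
            Application.SaveAsText acQuery, qdf.Name, sExportLocation & "Query_" & qdf.Name & ".txt"
        End If
    Next qdf
End Sub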
I have heard of OASIS-SVN, but I think it uses basically the same mechanism I would use.
Please tell me how you manage your Access projects.

I use OASIS-SVN here to export all objects in my Access database file as text files.
I then use git, SourceTree, etc.
It has worked well for me and has a number of useful settings (e.g. you can choose whether to export data, table links, etc.).
It is not ideal, but it is manageable and better than nothing!
As you might expect, I use a git repository and a separate local directory for every Access file.
Another option that has recently come on the market is "entAscc", now known as Ivercy.
It looks very promising, as the source control is integrated into the development environment. I've not used it, but would like to!

Related

Does Oracle SQL Developer require a dedicated git repository? [duplicate]

We have a big database with a lot of stuff in it, and I want to use version control (Git) to manage changes.
There are a lot of articles on how to do it step by step, but one piece is missing for me.
Is there a standard or recommended file structure for the whole database (data excluded), and how can it be obtained from an existing database?
There are a lot of sources: procedures, functions, packages, etc.
Version-control articles show how to manage a few files from a version-control perspective, but they suggest that each file should be selected and saved to the file system separately.
Is there a way to export/import all of this into some pre-organized structure?
Good IDEs have such structures defined by languages or products, but it looks to me like SQL Developer doesn't have one.
It also looks to me like SQL Developer may have only one repository, and no concept of projects which can combine different databases into separate units.
Should I invent my whole structure and use something like
**project/Abc/DB1/Packages/packageXyz/source1.sql**
for each source? Sure, I can do this, but I worry that I may miss something.
Any advice?
Yes, SQL Developer can unload a schema to files for you, and you can then add those files to your SVN or Git projects.
Tools > Database Export.
I set the output to multiple directories, so there is one directory per schema object type.
Then I select my application schema and proceed to Finish/OK.
The output looks something like this:
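As a rough sketch only (the schema, directory and file names below are hypothetical, assuming the one-directory-per-object-type setting described above):

HR_EXPORT/
    TABLES/
        EMPLOYEES.sql
        DEPARTMENTS.sql
    INDEXES/
        EMP_NAME_IX.sql
    PACKAGES/
        EMP_MGMT_PKG.sql
    PROCEDURES/
        ADD_EMPLOYEE.sql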
I talk about this in more detail here.

Is TFS Source Control really feasible for Dynamics CRM?

I'd really like to get a CRM solution under source control, but there are a lot of issues. I was excited to see the SolutionPackager tool - thinking MS finally gave us a way to do this. However, the tools to export the solution, extract it to files and check it into source control are not integrated. I'm working on a C# project that ties everything together, because it's easier to work with the APIs in a single C# solution than to deal with a combination of command-line utilities such as tf.exe, PowerShell cmdlets and plain old .cmd files.
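For reference, the extract/repackage round trip I'm scripting boils down to calls along these lines (the file and folder names are placeholders, and the exact switches should be checked against the SDK documentation):

SolutionPackager.exe /action:Extract /zipfile:MySolution.zip /folder:src\MySolution
SolutionPackager.exe /action:Pack /zipfile:MySolution_repacked.zip /folder:src\MySolution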
Source files for plugins and Silverlight pages are easy to deal with but I'm looking to get all of the customizations under source control. SolutionPackager works well for tracking customizations made in the CRM interface, but fails in a lot of other areas. I want to create VS solutions for my web resources and reports but I have issues with the VS project and solution structures. SolutionPackager expects to find things where it puts them for repackaging and I'm sure it would not like to see a bunch of .sln, .csproj and .vspscc files interspersed with them.
I figured putting the VS solutions in a separate folder would be the answer, but it's not easy. If I create a project for my web resources and try to put my existing .html, .css and .js files into it, it wants to copy them into the project folder. I have to remember to use "Add As Link" each time. Worse yet, if I try to do the same with SSRS reports, the "Add As Link" feature isn't even available.
Has anyone done this successfully? I'm open to any suggestions.
I have seen the link below, but I have not had a chance to implement it yet. When I have tried it, I will post more information.
http://blogs.msdn.com/b/crm/archive/2013/05/17/release-alm-for-microsoft-dynamics-crm-2011-crm-solution-lifecycle-management.aspx

Synergy CM equivalent of SVN Externals?

Trying to determine if there is any behavior analogous to SVN:Externals in IBM Rational Synergy CM.
The ultimate objective is to find a more integrated way for Synergy projects existing in one database to contain files originating from another database, without manually copying them to the second database and manually propagating changes. SVN of course supports this through the concept of externals, which allow you to point to directories or files even in a different repository.
Search has revealed nothing so far, and unfortunately use of Synergy is mandated in this situation so switching revision control tools is not an option. If manual copying is the answer, so be it, just wanted to confirm.
The only way to share projects and files between databases is the DCM concept (see the manual). However, it's quite complicated to use and requires some scripting in order to function properly. If you have simple needs, you can always do it "outside" Synergy with a simple file/directory repository.

Automate build and development pattern with Visual Studio

I'm currently working on a project that's been going on for several years straight. The development team is small (fewer than 5 programmers), source control is virtually non-existent, and the deployment process is just based on manually moving files from one server to another. The project is in classic ASP, so building isn't an issue, as both deployment and testing are just about getting the files to where they need to be and pointing the browser at the correct location.
Currently all development is done on a network drive which is also the test server. The test server is only available from inside the local network (it can be accessed through VPN), and is reachable at the address 'site.test' in the browser (this requires editing the hosts file on all the clients, but since there are so few of us that hasn't proven to be any problem at all). All development is done in Visual Studio.
Whenever a file is changed, the developer who changed it is required to write the file name into a Word document and include a small description of what was changed and why. Then, whenever there's supposed to be a version bump (deployment), our lead developer goes through the Word document and copies every file (file by file) that has changed over to the production server. Now, I don't think I need to tell you that this method is very error prone (a developer might, for instance, forget to note that he changed some dependency, and that might cause problems when deployed), and there's a lot of work involved with deploying.
And here comes the main question. I've been asked by the lead developer to spend some time seeing if I can come up with a simple solution that can simplify and automate the "version control" and the deployment. The important thing is that it's as easy as possible for the developers to use. Two of the existing developers have worked with computers for a long time and are pretty stuck up in their routines, so for instance switching to something like git bash wouldn't work at all. Don't get me wrong, I love git, but the first time one of them got a merge conflict they wouldn't know what to do at all. Also, it would be ideal to change to a more distributed development process where the developers wouldn't need to be logged into the VPN (or need internet at all) to develop, and the changes they made offline could be synced up when they were done with them.
Now, I've looked at Team Foundation Server from Microsoft because of its strong integration with Visual Studio. As far as I've tested, it seems possible to make Visual Studio prompt the user to check in changes whenever the user closes Visual Studio. Using TFS for source control would probably eliminate most of the problems with development, but how about deployment? Not to mention versioning? As far as I've understood (I've only looked briefly at TFS), TFS has a running number for every check-in, but is it possible to tell TFS that this check-in should be version 2.0.1 of the system (for example), and then have it deploy that to the web server? And another problem: the whole solution consists of about 10 directories with hundreds of files in them, though the system itself (without images and such) is only 5 directories, and only these 5 should be deployed to the server. Is this possible to automate?
I know there are a lot of questions here, but what is most important is that I want to automate the development process (not the coding, but the managing of the code) and the deployment process, and I want to make it as simple as possible to use. I don't care if the setup is a bit of work, because I've got enough time on hand to set up whatever system fits our needs, but the other devs should not have to do a lot of setup. If all of the machines that will use the system need to be set up once, that's no problem at all, because I can do that, but there shouldn't be any need to do config and setup as we go.
Now, do any of you have any suggestions for what systems to use and how to use them in order to simplify the processes described above? I've worked with several types of SCM systems before (Git, Hg and Subversion), but I don't have any experience with build systems at all (if that is needed). Articles and discussion on how to efficiently set up systems like this would be greatly appreciated. Thanks in advance.
This is pretty subjective territory, but I think you need to get some easy wins first. The developers who are "stuck up in their ways" are the main roadblock here. They are going to see change as disruptive and not worth it. You need to slowly and carefully go for the easy wins.
First, TFS is probably not going to be a good choice. It's expensive, heavy, and the source control in TFS is pretty lousy. Go for Subversion: it's easy to set up, easy to use, and free. Get that in place first, and get the devs using it. Much easier said than done.
Later (possibly much later), once the devs are using it and couldn't imagine life without a VCS, then you could switch to Hg or Git if you need first class branching and all those other nice features.
Once you have Subversion in place, you can use something like JetBrains TeamCity or Jenkins, both of which are free and easy to use. However, I'm just assuming you don't have a lot of tests and build scripts that the CI server is really going to be running, so it's far more important that you get VCS first. In all things: keep it as simple as possible. Baby steps. Get some wins, build trust, repeat.
I can't even begin to think where to start with this! No offense intended, but apart from the mention of git and Hg, this post could have been written 10 years ago.
1) Source control - How can a team of developers possibly work effectively without some form of source control? Hell, even if it's Visual Source Safe (* shudder *) at least it would be something. You have to insist that the team implement source control. You know what's available so I won't get into preaching about that. (However, Subversion with TortoiseSVN has worked quite well for me.)
2) "write the file he changed into a word-document and include a small description as of what was changed and why"
You have got to be kidding... What happens if two developers change the same file? Does the lead then have to manually merge two changes that s/he extracts from the word doc? Please see #1 and explain to them how commit comments work.
Since you don't really need to "build" (i.e. compile, etc.) anything, you should be able to solve most of your problems with some simple tools. First and foremost, you need to use a source-control solution. Yes, the developers would have to learn how to use another tool (EEEK!). You could do the initial legwork of getting the code into the repository. If you have file access to the other developers' machines, you could even copy a checked-out working copy to their machines so they wouldn't have to do the checkout themselves (not really that hard).
You could then use all the creamy goodness of version control to create version branches when each deployment needs to be done. You could write simple scripts using the command-line SVN tools to check out said branches and automatically copy the files to the target server(s). Using a tool like Beyond Compare, the copy process could be restricted to only the files that are different (plus BC can handle an FTP target if that is an issue). By enforcing commit comments on the SVN repo, you'll guarantee that the developers provide comments, and for each set of changes between releases you can very easily generate a list of all those comments using the SCM log-retrieval features.
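As a sketch of what those scripts might look like (the repository URL, branch name, paths and revision numbers are placeholders):

svn copy http://svnserver/repo/trunk http://svnserver/repo/branches/release-2.0.1 -m "Branch for release 2.0.1"
svn export http://svnserver/repo/branches/release-2.0.1 C:\deploy\release-2.0.1
xcopy C:\deploy\release-2.0.1\site \\webserver\wwwroot /E /Y
svn log -r 1500:1575 http://svnserver/repo/trunk > release-2.0.1-changes.txt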

Is there anyone out there using ClearCase with Sybase PowerBuilder?

Word has come from upon high to standardize our SCM system. And upon the clay tablets was written Clear Case.
I am reaching out to anyone who is actually using this configuration - to get best practices, hints and tips, war stories, anything...
The Sybase Source Control newsgroup only gives back the sound of crickets.
We currently have a boatload of actively maintained PowerBuilder 11.5 and EAServer 5.5 systems - so versioning at the PBL library-file level is NOT an option.
And it will be a long, long time before we go to the newest version, 12, which drops the PBL file in favor of text files and works as a Visual Studio plug-in.
I've always used the following pattern
_work.pbl
_last_minute_changes.pbl
1.pbl
2.pbl
3.pbl
...
I export the objects from 1, 2, 3... and check them into ClearCase. I set up a nightly build using PowerGen to do a bootstrap import to a network share. I use a script to pull those PBLs down into my view. I check an object out of ClearCase and import it into my _work.pbl, make my changes, export it and check it into ClearCase. A trigger then fires a CI build that imports the object into the _last_minute_changes.pbl, regenerates it against the previous night's PBLs and then archives it to a network share.
I then refresh my view from the share using the script and delete the object from my _work.pbl. When it comes time to deploy, we run a script that takes the synced PBLs and turns them into PBDs.
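In concrete terms, the per-object round trip is just the usual cleartool checkout/checkin (the object name and comment here are made-up examples):

cleartool checkout -nc w_customer_search.srw
rem ... import into _work.pbl, edit in the painter, export the .srw again ...
cleartool checkin -c "fix retrieval argument on w_customer_search" w_customer_search.srw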
I used this process for a team of 100+ PowerBuilder developers in 4 states and it worked really well for us. Our application had over 12,000 objects and we never had any problems.
I do use ClearCase, but not directly with PowerBuilder projects.
The ClearCase manual has:
an extensive section on PowerBuilder integration,
and a couple of technotes, including a "Getting started with PowerBuilder and ClearCase Integration" document.
The Sybase infocenter (11.5) mentions settings affecting source control.
PowerBuilder projects or not, I recommend:
snapshot views for all development activities
dynamic views for consultation purposes (you can very well have both: one dynamic view to test your config spec, and one snapshot view to reuse the same tested config spec and actually copy the files locally; see the minimal config spec sketched after this list)
CC Vob servers (for hosting the repositories) should be on a LAN. If they are on a WAN, then use CCRC (an RCP client communicating over the web with a ClearCase Web server which, in turn, will communicate with the Vob servers on the same LAN)
CC View servers on a LAN (each client should manage its own view server)
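For instance, a minimal config spec of the kind you would validate in the dynamic view and then reuse in the snapshot view could look like this (the branch name, VOB tag and load path are placeholders):

element * CHECKEDOUT
element * .../my_dev_branch/LATEST
element * /main/LATEST -mkbranch my_dev_branch
load /my_vob/powerbuilder_src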
I used ClearCase and PowerBuilder at a previous job.
We were using the IDE-integrated source control, and had it set up so that the individual objects were saved in ClearCase as raw text objects (.sro, .srw, etc.). I was not the one who exported the objects, so unfortunately I can't give details, but I think PB can do at least some of that for you. Anyway, with this configuration, when we checked in a file from PB, the IDE would automatically check the .srX file into ClearCase. This is the configuration you need, so that you can view the history of your changes using the ClearCase tools.
We also used PowerGen to automatically create PBLs using the source files in ClearCase. This is also a process you want to set up. Prior to this process we had to manually check the PBLs into source control (!!). I strongly advise against doing this - otherwise you cannot truly guarantee that the .srX files and the PBLs are in sync.
Anyway, that's a brief summary. Let me know if there's anything you would like me to clarify, and I'll do my best. Good luck!
I am the Source Code Control Administrator and I have been using ClearCase and PowerBuilder together (using the IDE integration) for about 7 years. We have the PBL objects (.srw, .sru, etc.) exported and in ClearCase. The PBLs themselves are not in ClearCase. We also use PowerGen for regeneration instead of GLV because of GLV's issues with more complex systems.
ClearCase integrates beautifully with PowerBuilder (we are using 9 and we are doing an ROI on the upgrade to 12).
Search IBM's website for "Getting started with PowerBuilder and ClearCase.pdf". That contains some very good information.