HTTP-based "mirror" - PowerShell

I am looking to implement a PS based system to manage a local library of assets, specifically a library of Revit Family files. There is a "vetted library" that acts as the source library, to which items can be added, removed or revised. This library then needs to be mirrored on the local machine.
I do this now with the vetted library on the network, running a Robocopy /MIR at every user logon. This works great for a traditional office environment with laptops that sometimes leave the office, ensuring they always have the current library. However, with work from home now a major issue, I want to implement similar functionality with a web-hosted library, either on my own server or in an Amazon S3 bucket. My thinking is to make this a two-stage process:
1: When the vetted library is updated, an XML file is regenerated that maps the entire folder structure and file data for the library, including each file's size and hash.
2: On the local machine, I download the vetted library map and compare it with the previous map. Missing and extraneous files are easy, though moved files are a bit more complex. Files with different sizes are easy too. If files are the same size, the already-computed hashes are compared. In this way I can build a list of files to be deleted locally, as well as new files to be downloaded (both stages are sketched below).
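A minimal sketch of both stages, assuming a manifest of relative path, size, and SHA-256 hash serialized with Export-Clixml; the function names, property names, and paths are illustrative, not a finished design:

```powershell
# Stage 1 (run server-side when the vetted library changes): build a
# manifest of every file's relative path, size, and SHA-256 hash.
# Assumes $Root has no trailing backslash.
function New-LibraryManifest {
    param([string]$Root, [string]$ManifestPath)
    Get-ChildItem -Path $Root -File -Recurse | ForEach-Object {
        [pscustomobject]@{
            RelativePath = $_.FullName.Substring($Root.Length).TrimStart('\')
            Size         = $_.Length
            Hash         = (Get-FileHash -Path $_.FullName -Algorithm SHA256).Hash
        }
    } | Export-Clixml -Path $ManifestPath
}

# Stage 2 (run on the local machine): compare the downloaded manifest with
# the previous one to build delete/download lists. Same size + same hash
# means unchanged; the hashes were already computed server-side.
function Compare-LibraryManifest {
    param([string]$OldManifest, [string]$NewManifest)
    $old = @{}; Import-Clixml $OldManifest | ForEach-Object { $old[$_.RelativePath] = $_ }
    $new = @{}; Import-Clixml $NewManifest | ForEach-Object { $new[$_.RelativePath] = $_ }

    $toDownload = $new.Values | Where-Object {
        -not $old.ContainsKey($_.RelativePath) -or
        $old[$_.RelativePath].Size -ne $_.Size -or
        $old[$_.RelativePath].Hash -ne $_.Hash
    }
    $toDelete = $old.Values | Where-Object { -not $new.ContainsKey($_.RelativePath) }

    [pscustomobject]@{ Download = $toDownload; Delete = $toDelete }
}
```

Moved files could then be detected as hash matches between the delete and download lists, turning a download into a local rename.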
These libraries can easily reach 5 GB and 10k files per library, and every year a new library is required. Firms often have as many as five yearly versions of the software installed. So, LOTS of files and lots of size.
This seems like the most performant way to handle regular updates, but I wonder if there is a cmdlet already available that handles this kind of thing better?
I know I COULD do this with Dropbox or the like, but there are a number of arguments against it, from the size of the libraries to security and access control (which I will eventually need to address in my solution as well). These libraries can cost tens of thousands of dollars to purchase, and folks aren't going to want to manage them via Dropbox or OneDrive.
And... the fact that Microsoft has OneDrive has me thinking there isn't a built-in PS way to do this, since they want to push OneDrive. In which case, is my file-map-compare approach viable, or is there a better approach I should consider?
I know there is no code here, so maybe I am running afoul of some Stack Overflow rule, but hopefully program specification and planning is seen as an appropriate avenue for questions, as well as simple code solutions.

Related

PLC Version Control

I need to come up with a CM process for PLC code.
Currently, the system is developed using RSLogix 5000. The build product is a monolithic file that can be loaded onto a PLC for execution and edited directly in the development environment. With multiple developers, this has become a problem: they're stepping on each other's changes.
As an analogy, it's as if, when doing Java development, the only way to edit and save the source were to load a *.jar file into your IDE, make the change, and then save it back to the jar file. This is less than ideal.
How can I coordinate changes between multiple developers working with PLC's?
If we are talking about one big binary file, then a VCS (centralized or decentralized) is not the best tool for the job.
An external repository (a shared disk, for instance) where a batch job copies and labels the current PLC state is better.
See "Tracking Software History"
To avert discontinuities in the historical record of revisions, old versions of programs must be stored.
“We take it a step further, though. Using our MDT AutoSave, we actually go out and interrogate the equipment. Overnight or at whatever frequency is specified, the software reads the programs in the PLCs and then compares that information to the last known program. The version-control software will copy the new program and store it and [then] compare it to the last one.
Launching version control is fairly simple. Required is software installation and then hardware configuration. “You would need a server and a couple of weeks of engineering and you’re good to go,” Perysyn says. However, his company uses a “shrink-wrap approach” that involves installing the software and then customization by users filling in the blanks.
That being said, when you have multiple changes from multiple developers, you need an integration environment where a first delivery can be done and validated, before pushing it to the actual server.
See also this post.
I use Unity Pro, so this may not apply for other brands.
Unity can export an "archive" file which is XML which describes the PLC program and IO setup in its entirety. After commissioning changes, I create an export and check it in to my local Git repo. This gets me an annotated history of changes, but no visual comparison. I can always use UnityDiff for comparison.
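The check-in step itself can be scripted once the export exists (the export still happens in Unity Pro); a minimal sketch, with hypothetical paths and file names:

```powershell
# Copy the latest Unity Pro XML export into a local Git repo and commit it
# with a timestamped message. Repo location and file name are hypothetical.
Set-Location 'C:\PlcRepo'
Copy-Item 'C:\Exports\Station01.xef' -Destination '.\Station01.xef' -Force
git add Station01.xef
git commit -m ("Commissioning changes {0:yyyy-MM-dd HH:mm}" -f (Get-Date))
```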
Check out http://www.mdtsoft.com/ also
You need specialized versioning system for PLCs like VersionDog.
From the manufacturer:
"Special support with Smart Compares for SIMATIC S5, SIMATIC S7,
SIMATIC PCS 7, WinCC, WinCC flexible, InTouch, CoDeSys, TwinCAT,
Phoenix PC WORX, RSLogix, Schneider Modsoft, Schneider Concept,
Schneider Unity, SINUMERIK 840D, Bosch IndraWorks and more. Also robot
programs from ABB and Kuka and office related data formats like
Microsoft Word, Microsoft Excel and Adobe PDF are perfectly supported
by versiondog.
Update: Here is a screenshot showing a ladder version compare. I guess that's what most PLC folks are interested in. We also use it to schedule an e-mail report when the PLC's offline and online application versions do not match, as an alarm that something has been changed in the PLC but not put into the version control server.
About RSLogix5000 specifically, I have seen developers use an emulated PLC and make their changes online. The final product once developed is then put together with all the comments (as they are not contained in the PLC) and then commissioned. There are issues with changes that cannot be done online, such as AOIs. There are tools in place to stop two people editing the same logic online at once and to take ownership of sections. Backups can be done in the form of uploads, but there isn't any way to track changes.
It is a messy problem, messier still when you are maintaining a system, since you want an .ACD that you can go online with; unless you are somehow doing a diff with the RSLogix compare tool, you just see unreadable machine code like "+|Éû³´¬ÙÆW×晵‚>Ù,"
The most common revision control I have seen (sadly) is just saving the latest file, then taking a copy and adding the current date to the file name, like the recommended control.com post described.
RSLogix5000 has always prohibited multiple users from opening and editing the same .ACD simultaneously. However, if multiple users have identical .ACD files, open them, and all connect to the same target controller, they can each edit on the controller simultaneously, but only if they are working on different routines. Others' edits appear automatically if they look at another programmer's routine.
Note that working online like this is usually done with the PLC running, sometimes even with the target system (some kind of machine) operating. This kind of arrangement is used to complete work faster, or in some cases because the system is huge. No one develops like this; it is really a debug tool and impractical for significant changes.
If one programmer finishes, and another is not done, the unfinished work of the other will be saved to the first programmer's .ACD when they save. Whoever saves last will have everyone's work.
Like others have mentioned in this thread, using file date is fairly reasonable. Some companies use a version control variable that is usually displayed on a connected HMI. Other companies use a separate document that documents who and what changes. Sometimes version notes are placed in a lengthy rung comment in the main routine.
My company uses a separate change log, and dated archive copies are maintained. Multiple programmers are only used in the most extreme cases. Someone is always designated to maintain the offline file integrity, usually the person who will be working the longest, or the project manager.
It is important to note that rung comments are not carried from one user to another before RSLogix5000 v21 because previous versions didn't store comments on the controller.
All this said, you might be trying to manage offline development. I haven't seen any sophisticated methods for this. Usually programmers write the needed routines separately, and a project manager will assemble them into a single project. The cleanest approach I've seen is where a project manager will create an architecture with global functionality, and assign routine work to others, giving them a copy of the .ACD to work with. They return the .ACD with changes, and the project manager copies and pastes their routines into the "master" project.
This is a very good question and it really depends on what you want it to do.
If you are only using Rockwell equipment, it might be helpful to look at their solution; I think it's called FactoryTalk AssetCentre.
Currently I am looking into using Bazaar from Canonical.
One thing that VonC pointed out is that a piece of software that can interrogate the PLC is a definite plus. Not a must in my opinion, but it sure as hell helps.
Am I reading your question properly: you have multiple developers working on the same PLC code at the same time? It's a scary thought, but I know it sometimes needs to happen. Siemens PLCs are a bit easier to program with multiple developers, but I would assign one person to consolidate and test all the changes before committing to the PLC. Any VCS will let you create branches for every developer, but how you get them to consolidate their changes is the million dollar question.
Bart.
A simple thing to do would be to do a text diff on the .l5k files so you can easily see whether a developer has been messing with part of the file that is outside of their scope.
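A quick way to do that diff from PowerShell, assuming two exported .L5K files (file names are hypothetical):

```powershell
# Line-by-line comparison of two RSLogix .L5K text exports.
# '<=' marks lines only in the first export, '=>' lines only in the second.
$old = Get-Content '.\Project_v01.L5K'
$new = Get-Content '.\Project_v02.L5K'
Compare-Object -ReferenceObject $old -DifferenceObject $new |
    Format-Table SideIndicator, InputObject -AutoSize
```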
I saw this question just now from a link at stack exchange: Are There Realistic/Useful Solutions for Source Control for Ladder Logic Programs. Rather than have a link only answer, I'll dupe my answer here:
There is actually a canned solution - from GE-IP of all places. Check out Proficy Change Management. This product does version control from a PLC control systems point of view, rather than a pure version control of files point of view - it works as a layer sitting on top of a VCS (the scary part is that originally this VCS was Visual SourceSafe) and handles rights management, reporting and checkout/checkin.
While the product is from GE-IP, it is designed to support a variety of PLC and HMI systems out of the box.
Full disclosure: I used to work for a company selling and installing PCM (but that was 7 years ago). So if you ask me what it was like back then, I'm likely to tell you where it all went wrong!
In my company we just started a trial with Copia.io
Check it out. Our first tests look very promising!
It brings branching, merging, ladder diff, etc. for multiple PLC platforms (Rockwell, Siemens, Codesys).
PS: I work for a company that builds machines; we were looking for versiondog-like solutions with a bit more power in collaboration and diffing capabilities. I have used tools like Mercurial, Git and Tortoise in past companies (not for PLCs though).

Load CMS core files from one server onto multiple servers

I'm almost done with our custom CMS system. Now we want to install it for different websites (and more in the future), but every time I change the core files I will need to update each server/website separately.
What I really want is to load the core files from our server, so if I install a CMS I only define the needed config files (on that server) and the rest is loaded from our server. This way I can push changes to the core very simply, and only once.
How can I do this, or is this completely the wrong way? If so, what is the right way? Things I need to look out for? Is it secure (without paying thousands for an HTTPS connection)?
I have completely no idea how to start or where to begin, and couldn't find anything helpful (maybe a wrong search), so everything is helpful!
Thanks in advance!
Note: My application is built using the Zend Framework
You can't load the required files from a remote server at runtime (or really don't want to ;). This problem comes down to proper release & configuration management, where you update all of your servers. But this can mostly be done automatically.
Depending on how much time you want to spend on this mechanism, there are some things you have to be aware of. The general idea is that you have one central server which holds the releases, and all other servers check it for updates, then download and install them. There are lots of possibilities (svn, archives, ...), and the check/update can be done manually at the frontend or by crons in the background. Usually you'll update all changed files except the config files and the database, as they can't simply be replaced but have to be modified in a certain way (this is where update scripts come into play).
This could look like this:
A cron job runs on each server and checks for updates via svn
If there is a new revision, it does an svn update
This is a very easy to implement mechanism, which has some drawbacks: you can't change the config files or the database. In fact that would be possible, but it's quite difficult to achieve.
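A minimal sketch of that cron-style check, assuming the webroot is an svn working copy (the path is hypothetical):

```powershell
# Ask the repository whether the working copy is out of date; update if so.
# 'svn status -u' contacts the server and flags out-of-date items with '*'.
Set-Location 'C:\inetpub\mysite'
if (svn status -u | Select-String '\*') {
    svn update --non-interactive
}
```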
Maybe this could be easier with an archive-based solution:
Cronjob checks the update server for a new version. This could be done by reading the contents of a file on the update server and comparing it to a local copy
If there is a new version, download the related archive
Unpack the archive and copy the files
With that approach you might be able to include update scripts in the updates to modify configs/databases.
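A sketch of that archive-based check; the update server URL, the version-file format, and the archive naming are all assumptions:

```powershell
# Compare a remote version string to the local copy; if they differ,
# download the release archive and unpack it over the webroot.
# Config files and the database are deliberately left alone; an update
# script shipped inside the archive would have to handle those.
$updateServer  = 'https://updates.example.com/cms'    # hypothetical
$localVersion  = Get-Content '.\VERSION' -ErrorAction SilentlyContinue
$remoteVersion = (Invoke-WebRequest "$updateServer/VERSION").Content.Trim()

if ($remoteVersion -ne $localVersion) {
    Invoke-WebRequest "$updateServer/cms-$remoteVersion.zip" -OutFile 'update.zip'
    Expand-Archive -Path 'update.zip' -DestinationPath '.' -Force
    Set-Content -Path '.\VERSION' -Value $remoteVersion
}
```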
Automatic update distribution is a very, very complex topic, and these are only two very simple approaches. There are probably many different solutions out there, and selecting the right one is not an easy task (it gets even more complex if you have different versions of a product with dependencies :) and there is no "this is the way it has to be done".

Need to implement versioning in an online backup tool

I am working on the development of an application that will perform online backup of the files and folders on a PC, automatically or manually. Currently I keep only the latest version of each file at the server. Now I have to implement versioning, so that only the changes are transferred to the online server and the user is able to download any of the available versions of a file from the backup server.
I need to perform deduplication for this. Though I am able to do it using a fixed block size, I am facing the overhead of transferring a file of CRC information with each version's backup.
I have never worked on such technology, so I lack experience. I am eager to know whether there is any feasible method to embed this functionality in the application without much pain. Would any third-party tool help to do the same thing? Please let me know.
Note: I am using FTP protocol to transfer the data.
There's a program called dump that does something similar, but it operates on filesystem blocks rather than files. rsync also may be of interest.
You will need to keep track of a large number of blocks with multiple versions and how they fit into the various versions of the original files, so you will need some kind of database to track this information, and an efficient way to query it to determine which blocks in a given file need to be transferred. Also note that adding something to the beginning of a file will cause all your blocks to be "new" if you use a naive blocking and diff scheme.
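To make the bookkeeping concrete, here is a sketch of the fixed-block hashing half in PowerShell; the block size and record shape are arbitrary choices, and note this is exactly the naive scheme that breaks on insertions, which is why rsync uses rolling checksums:

```powershell
# Hash a file in fixed-size blocks; blocks whose hashes are already known
# to the server need no upload, only a reference in the version's block list.
function Get-FileBlockHashes {
    param(
        [string]$Path,
        [int]$BlockSize = 64KB
    )
    $sha    = [System.Security.Cryptography.SHA256]::Create()
    $stream = [System.IO.File]::OpenRead($Path)
    try {
        $buffer = New-Object byte[] $BlockSize
        $index  = 0
        while (($read = $stream.Read($buffer, 0, $BlockSize)) -gt 0) {
            [pscustomobject]@{
                Index  = $index
                Length = $read
                Hash   = [BitConverter]::ToString($sha.ComputeHash($buffer, 0, $read)) -replace '-', ''
            }
            $index++
        }
    }
    finally {
        $stream.Dispose()
        $sha.Dispose()
    }
}

# Usage sketch: $KnownBlocks would be a hashtable loaded from the server's
# block database; only blocks it lacks get transferred.
# $newBlocks = Get-FileBlockHashes -Path 'C:\data\file.bin' |
#     Where-Object { -not $KnownBlocks.ContainsKey($_.Hash) }
```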
To do this well will be very complex. I highly recommend you thoroughly research already-available solutions, and if you decide you need to write your own, consider the benefits of their designs carefully.

Is there any form of Version Control for LSL?

Is there any form of version control for Linden Scripting Language?
I can't see it being worth putting all the effort into programming something in Second Life if when a database goes down over there I lose all of my hard work.
Unfortunately there is no source control in-world. I would agree with giggy. I am currently moving my projects over to a Subversion (SVN) system to get them under control. Really should have done this a while ago.
There are many free & paid SVN services available on the net.
Just two free examples:
http://www.sourceforge.net
http://code.google.com
You also have the option to set one up locally so you have more control over it.
Do a search on here for 'subversion' or 'svn' to learn more about how to set one up.
[edit 5/18/09]
You added in a comment you want to backup entire objects. There are various programs to do that. One I came across in a quick Google search was: Second Inventory
I cannot recommend this or any other program as I have not used them. But that should give you a start.
[/edit]
-cb
You can use the Meerkat viewer to back up complete objects, or use some of the test programs of libopenmetaverse to back up in a text environment. I think you can back up scripts from the inventory with them.
Jon Brouchoud, an architect working in SL, developed an in-world collaborative versioning system called Wikitree. It's a visual SVN without the delta-differencing that occurs in typical source code control systems. He announced that it was being open sourced in http://archvirtual.com/2009/10/28/wiki-tree-goes-open-source/#.VQRqDeEyhzM
Check out the video in the blog post to see how it's used.
Can you save it to a file? If so then you can use just about anything, SVN, Git, VSS...
There is no good source control in game. I keep meticulous version information on the names of my scripts and I have a pile of old versions of things in folders.
I keep my source out of game for the most part and use SVN. LSLEditor is a decent app for working with the scripts, and if you create a solution with objects, it can emulate a lot of the in-game environment (giving objects, reading notecards, etc.).
I personally keep any code snippets that I feel are worth keeping around on github.com (http://github.com/cylence/slscripts).
Git is a very good source code manager for LSL since its comparisons work line-by-line. The reason this is so crucial is that most Second Life scripts live in ONE FILE (since they can't call each other... grrr), so having the comparison done at the line level rather than the file level is perfect for LSL. With that said, it also (like SourceForge and Google Code) allows you to make your code publicly viewable (if you so choose) and available for download in a compressed file for easier distribution.
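For anyone starting out, the setup is minimal; a sketch with hypothetical paths, assuming the scripts are saved out of game as .lsl files:

```powershell
# Put a folder of exported .lsl scripts under Git version control.
Set-Location 'C:\SecondLife\Scripts'
git init
git add *.lsl
git commit -m 'Initial import of LSL scripts'
```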
Late reply, I know, but some things have changed in SecondLife, and some things, well, have not. Since the Third Party Viewer policy still keeps a hard wall up against saving and loading objects between viewer and system, I was thinking about another possibility so far completely overlooked: Bots!
Scripted agents, AKA Bots, have all usual avatar actions available to them. Although I have never seen one used as an object repository, there is no reason you couldn't create one. Logged in as a separate account the agent can be wherever you want automatically or by command, then collect any or all objects you are working on at set intervals or by command, and anything they have collected may be given to you or collaborators.
I won't say it's easy to script an agent, and couldn't even speak for making an extension to a scripted agent myself, but if you don't want to start from scratch there is an extensive open source framework to build on, Corrade. Other bot services don't seem to list 'object repository' among their abilities either but any that support CasperVend must already provide the ability to receive items on request.
Of course the lo-fi route, just regularly taking a copy and sending the objects to a backup avatar, may still be a simple backup solution for one user, although that does necessitate logging in as the other account, either in parallel or once every 20 or so items, to be sure they are being received and not capped by the server. This process cannot rename the items or sort them automatically like a bot can. Identically named items are listed in inventory with the most recent at the top, but this is a mess when working with multiples of various items.
Finally, there is a Coalesce feature for managing several items as one in inventory. This is currently not supported for sending or receiving objects, but in the absence of a bot it can make it easier to keep track of projects you don't wish to actually link as one item. (Caveat: don't rez 'no-copy' coalesced items near 'no-build' land parcels; any that cannot be rezzed are completely lost.)

What is the best way to handle files for a small office?

I'm currently working at a small web development company; we mostly do campaign sites and other promotional stuff. For our first year we've been using a "server" for sharing project files, a plain Windows machine with a network share. But this isn't exactly future-proof.
SVN is great for code (it's what we use now), but I want to have the comfort of versioning (or at least some form of syncing) for all or most of our files.
What I essentially want is something that does what subversion does for code, but for our documents/psd/pdf files.
I realize subversion handles binary files too, but I feel it might be a bit overkill for our purposes.
It doesn't necessarily need all the bells and whistles of a full version control system, but something that removes the need for incremental naming (Notes_1.23.doc) and lessens the chance of overwriting something by mistake.
It also needs to be multiplatform, handle large files (100 MB+) and be usable by somewhat non-technical people.
SVN is great for binaries, too. If you're afraid you can't compare revisions, I can tell you that it is possible for Word docs, using Tortoise.
But I do not know what you mean by "expanding the versioning". SVN is not a document management system.
Edit:
but I feel it might be a bit overkill for our purposes
If you are already using SVN and it fulfils your purposes, why bother with a second system?
If you have a Windows 2003 server, you can have a look at SharePoint Services 3.0 (http://technet.microsoft.com/en-us/windowsserver/sharepoint/bb684453.aspx).
It can do version control for documents and has nice integration with Office, starting with Office XP, but Office 2003 and 2007 are better. Office and PDF files can be indexed (via Adobe IFilter) and searched. You can also add IFilters to search metadata in your documents.
Regarding large files, by default the max file size is 50 MB, but it can be configured.
We've just moved over to Perforce and have been really happy with it. It's a commercial product, but it's so powerful and easy to use that it's worth the price per seat IMHO.
A decent folder structure and naming scheme?
VCSs don't really handle images and such very well. Would it be possible to have the code in a VCS (SVN/Git/Mercurial etc.), alongside a sensible folder structure for the binary assets (source photos, Photoshop PSD files, Illustrator files and so on)?
It wouldn't handle syncing, but a central file-server would achieve the same thing.
It would require some enforcing and kitten-herding to get people to name things properly, but I think having a version folder for each asset (like someproject/asset/header_logo/v01/header_logo_v01.psd) will basically be like a VCS, but easier to move between different revisions (no vcs checkout blah -r 234 when a client decides they preferred v02 to v03).
Your question is interesting because you're specifying that it be suitable for a small office. At the enterprise level, I would recommend something along the lines of EMC Documentum's eRoom, but obviously that's going to be way more than you need, and more than you want cost-wise as well. I'm not sure of the licensing details, but I've heard that if your office has MS Office, you have access to SharePoint, which might work well for you. I'm also sure there are a lot of SaaS implementations of this kind of thing, so you may want to look at that, keeping in mind that the servers will not be hosted by you, so if the material is extremely sensitive, that's obviously not the proper route.
You might want to consider using a Mac as your server and using Time Machine to back up your shared folders. Doing this gives you automatic backups and allows you to share through Samba so everyone can have a network drive on their computer. A dedicated Mac server is probably overkill; a Mac mini or a repurposed desktop machine would do for a small office.
You might also consider Amazon's S3 service to do offline backups. Since it's a pay-as-you-go service this can scale with use, and if you feel you want to move to something else you can always download your data and take it somewhere else.
Windows Vista features local file versioning in its file system, which can be useful but is limited in terms of teamwork. Still, if somebody overwrites somebody else's file, a new version is stored, as it should be.
Also consider KnowledgeTree. Have a look at it, some demos/screenshots are available at
http://www.knowledgetree.com/
It has a free open-source Community Edition, so it's cost-effective. We haven't tried it yet, but we chose it over other systems for a small business looking for a document versioning solution.