We are using a Variscite VAR-SOM-AM33 platform for our project, and the software platform is based on OpenEmbedded/Yocto.
To keep the hardware running the current software, the devices are connected to the internet. So far we have been following the OE recipes, generating ipk packages, and applying software updates via opkg.
However, the process is not satisfactory, as some of the recipes are poorly written (they fail to uninstall/install during the upgrade process). What robust techniques/solutions are available for OE/Yocto-based systems?
Thanks in advance.
I'd like to add SWUpdate to the list of packages that you should consider. It was recommended in a 2016 paper by the Konsulko Group for Automotive-Grade Linux. That paper mentions a few other options, and provides an analysis of the various tools, so it's probably worth a read. Quoting from the paper:
It is our recommendation that the reference AGL software update strategy make use of SWUpdate in a dual copy configuration and integrate OSTree support. This allows recovery from a corrupt partition for the exception case, but also optimizes the common case where small, incremental updates can be quickly applied or rolled back as needed to [meet] OEM policy.
I don't completely agree with the paper. For example, they wrote off Mender.io because it lacks community support, but IMO the Automotive-Grade Linux group is influential enough to create popularity from scratch. Still, it's a good paper, and the fact that they settled on SWUpdate was interesting to me. I was already leaning toward it because the author, sbabic, is involved in U-Boot software development, and we use U-Boot to burn new images into our device.
At the moment I'm unsatisfied with all of the current options, but mostly because I want extra functionality. I'll probably settle on a custom system which incorporates one or more of the aforementioned packages. Unfortunately that's not the kind of definitive answer that SO prefers, but I hope that it was helpful.
I'm working on a metadata layer to integrate the Software Updater (swupd) from Clear Linux with the Yocto Project / OpenEmbedded Core.
swupd performs whole-of-OS updates rather than package-based updates, using binary deltas to update only the files which have changed, and to do so in an efficient manner.
I recently wrote some documentation (within the docs/Guide.md file in the meta-swupd repo) about adopting the "Clear Linux Way" to utilise meta-swupd from an OE/YP based distro. A wikified version of that guide, including a link to the layer git repository, is available on the Yocto Project wiki:
https://wiki.yoctoproject.org/wiki/Meta-swupd
I also have a sample layer on Github which demonstrates use of the layer (this is also the distro layer I test much of meta-swupd with):
https://github.com/incandescant/meta-myhouse
Regarding mender.io, I have recently talked to them about their open-source updater.
Currently, the client side is already developed, and they are working on the server side. They use HTTP and JSON. This is their git repository; it only supports BeagleBone and QEMU at the moment.
The way mender.io works is: there is one persistent data partition plus U-Boot, and two rootfs partitions (active and backup) for updates. When there is an update on the server, users are notified to pull it down and issue a mender -rootfs image update command. If the upgrade is successful, the user issues a mender -commit command. If there is no mender -commit, the rootfs is rolled back to the previous rootfs on the next reboot. Mender currently only supports updating the kernel and rootfs. (A minimal sketch of this flow follows below.)
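Assuming the CLI invocations quoted above, a minimal sketch of that update-then-commit flow might look like the following; the update URL, flags, and timing are illustrative assumptions, not the official client interface:

```python
# Hypothetical wrapper around the mender CLI flow described above.
# URL, flags, and timing are assumptions for illustration, not the official client API.
import subprocess
import sys

UPDATE_IMAGE = "https://updates.example.com/core-image.mender"  # placeholder URL


def apply_update(image_url: str) -> None:
    # Stream the new image into the inactive rootfs partition.
    subprocess.run(["mender", "-rootfs", image_url], check=True)
    # Rebooting boots into the new rootfs; the previous one is kept for rollback.
    subprocess.run(["reboot"], check=True)


def commit_update() -> None:
    # Run after the reboot, once the application has verified itself.
    # Without this commit, the bootloader falls back to the old rootfs on the next reboot.
    subprocess.run(["mender", "-commit"], check=True)


if __name__ == "__main__":
    if sys.argv[1:] == ["commit"]:
        commit_update()
    else:
        apply_update(UPDATE_IMAGE)
```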
The main role of mender.io is to ensure that the mass-distributed image upgrade process is recoverable from errors. On the server side, mender.io has developed a management server that tracks the mass-distributed devices by UUID.
Not to advertise, but please try out mender.io and give feedback so that the software can mature.
Mender Introduction pdf
Well, you can either use package-based upgrades, like you do. In that case, you'll need to test and verify everything locally before you push any updates to the field. Obviously, you'll likely need to improve a number of recipes. (And I assume that you upstream those improvements, right?)
The alternative is to use image-based upgrades, either with full images (see for instance the discussion at Stack Overflow: Embedded Linux mechanism for deploying firmware updates) or with swupd.
Note: I got distracted while writing this answer, so look at the answer from joshuagi; he explains a lot more about swupd.
I think there are two problems here. We (OpenEmbedded) do need to be careful that we do not break package-based updates.
Also, there are image-based updates like swupd (mentioned above) and SWUpdate, described at https://sbabic.github.io/swupdate/swupdate.html.
meta-updater provides support for OSTree-based updates to OE systems.
OSTree is interesting because it provides a half-way house between full image updates (which are large and tricky to handle correctly) and package based updates (which are tricky to make robust). It has a 'git-like' object representation of a root filesystem, and uses chroot and hard links to atomically switch between file system images.
(Disclosure: I'm a contributor to meta-updater)
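As a rough illustration of why that model is attractive, here is a toy sketch, emphatically not OSTree itself, of a content-addressed object store with hard-linked checkouts and an atomic switch between trees; all paths and function names are invented for illustration:

```python
# Toy illustration of the OSTree idea: files are stored once, keyed by content hash,
# checkouts are built from hard links, and "switching" a deployment is a single
# atomic rename of a symlink. This is a sketch, not how OSTree is implemented.
import hashlib
import os

OBJECT_STORE = "repo/objects"


def store_object(path: str) -> str:
    """Add a file to the object store, keyed by its SHA-256; return the key."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    os.makedirs(OBJECT_STORE, exist_ok=True)
    obj = os.path.join(OBJECT_STORE, digest)
    if not os.path.exists(obj):
        os.link(path, obj)  # hard link: no extra copy of the data
    return digest


def checkout(manifest: dict, target: str) -> None:
    """Materialise a tree from {relative_path: digest}; identical files share storage."""
    for relpath, digest in manifest.items():
        dest = os.path.join(target, relpath)
        os.makedirs(os.path.dirname(dest) or target, exist_ok=True)
        os.link(os.path.join(OBJECT_STORE, digest), dest)


def switch_deployment(current_symlink: str, new_tree: str) -> None:
    """Atomically repoint the 'current' symlink to the new tree."""
    tmp = current_symlink + ".tmp"
    os.symlink(new_tree, tmp)
    os.replace(tmp, current_symlink)  # rename() is atomic on POSIX
```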
The posts here were written years ago. Nowadays, RAUC also seems to be a promising alternative to mender.io.
This is not a "why should I use version control" question :-)
I have always used version control from the first line of code of every project I've written so far. However, yesterday I came up with a question (maybe a stupid one) to which I can find no answer: when should version control really start during the software development process? Should it start from the first line of code, as I've been doing all my life, or should it start when you really have an operational version of your code? Put another way: should version control be used before the first version of your software? (I mean version control, not source backup, of course!)
Pre-development you don't need version control; but what you do need is some form of collaboration mechanism to keep track of changes to the specifications and documentation.
Some teams deploy version control at this stage. Personally I don't find much value in it here; a wiki, Trello, or similar is more valuable and makes more sense, as you are tracking a lot of abstract ideas.
As soon as you start writing code, you should start the version control process, and throughout the development phase, before you have deployed, you should continue to use version control; this is where you start getting value out of it, especially if you are developing with others. If you are a solo developer, version control may seem like extra work for no benefit; this is debatable, but when you are working in a team it is essential.
Once the project has been deployed, revision control is critical and mandatory. You simply cannot afford not to have it: version control offers you lots of benefits for the type of work you undertake after deployment. Bugfixes, automated testing, deployment: these can easily be automated from your version control system. If you didn't use version control during development, now is the best time to adopt it, since you have a solid codebase as your reference point.
Version control is so simple these days with Mercurial/Git and their online hosting services that it costs nothing to get started, and the benefits far outweigh any drawbacks.
The question is quite abstract. So, an equally abstract answer.
I think you should use version control on a specific project as soon as it starts to add value.
If you can distinguish between two phases - proof of concept/prototype etc., and product code, I think you should separate the code bases for the two. And you can use version control tools for both (source backup first, then real version control), just avoiding cluttering the production repository with early stuff.
If you are using version control just for the code, you could ask that question.
But ideally, version control should help you reproduce a build, which means the configuration files and other settings can be as important as your first line of code.
See for instance ".classpath and .project - check into version control or not?".
That is the kind of data which will facilitate collaboration, as other developers will be up and running (ie able to build your program) very quickly.
I am working on an RSS reader application, and I need to find a backend database. I want the database to be embedded because I don't want the users to have to install a database server.
I know SQLite is a good choice, but I am wondering if there are any other NoSQL choices?
(I don't yet have 50 rep points to comment on, and build upon, the accepted answer; otherwise I would, sorry!)
You can embed MongoDB in your OEM solution but there are two things to consider:
It is written in C++, so if you are coding in a different language you might need to write a wrapper that launches the database process separately (see the sketch at the end of this answer).
MongoDB is licensed under the GNU AGPL-3.0, which is a copyleft server license. The accepted answer and the Google Groups quote both correctly state that this would normally force you to also be AGPL licensed. However, MongoDB states that the intention of the license is to allow refinements to their code to be submitted back, and that your product remains a separate work. This makes me think that the normal copyleft rules don't apply.
The goal of the server license is to require that enhancements to MongoDB be released to the community. Traditional GPL often does not achieve this anymore as a huge amount of software runs in the cloud. For example, Google has no obligation to release their improvements to the MySQL kernel – if they do they are being nice.
To make the above practical, we promise that your client application which uses the database is a separate work. To facilitate this, the mongodb.org supported drivers (the part you link with your application) are released under Apache license, which is copyleft free. Note: if you would like a signed letter asserting the above promise please request via email.
Source: http://www.mongodb.org/display/DOCS/Licensing
According to the Google Groups thread, yes it can, but the thread doesn't cover exactly how.
Yes, but it isn't pretty and will force your app to be AGPL licensed. If you are interested take a look at how the tools handle the --dbpath option.
Source: http://groups.google.com/group/mongodb-user/browse_thread/thread/463956a93d3fb734?pli=1
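A hedged sketch of the wrapper approach mentioned above: launch mongod as a private child process against its own --dbpath and talk to it through the Apache-licensed driver. The paths, port, and crude start-up wait are assumptions for illustration; this is not an officially supported embedding API.

```python
# Hypothetical "embedded" MongoDB: run mongod as a child process with a private
# --dbpath and connect through the official Python driver (pymongo).
import os
import subprocess
import time

from pymongo import MongoClient


def start_private_mongod(dbpath: str = "./appdata/mongo", port: int = 27027):
    os.makedirs(dbpath, exist_ok=True)
    proc = subprocess.Popen(
        ["mongod", "--dbpath", dbpath, "--port", str(port), "--bind_ip", "127.0.0.1"]
    )
    time.sleep(2)  # crude wait; a real wrapper would poll until the port accepts connections
    return proc, MongoClient("127.0.0.1", port)


if __name__ == "__main__":
    mongod, client = start_private_mongod()
    client.rssreader.feeds.insert_one({"url": "https://example.com/feed.xml"})
    mongod.terminate()
```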
If you're using .NET, one option might be RavenDB, which is a document database, and can be embedded.
Please check out https://github.com/Softmotions/ejdb
This project is being developed to resolve this issue.
How about Couchbase Lite? It's an open source, embeddable document database. While it can function as a standalone document database, its real value is in its ability to synchronize with remote document databases. It may be aimed at iOS / Android, but it can run on anything with a JVM.
https://github.com/couchbase/couchbase-lite-java
There is no straightforward way to use MongoDB as an embedded library, in the sense of a well-reusable library. Eliot (head of 10gen) said "it would be nice to have one", but there is nothing available that could be reused in a sane way.
It looks like a lot of OEMs are trying to get Mongo onto their hardware and devices for real-time processing. A link from MongoDB's website
I usually use Buildroot to create a cross-compiled Embedded Linux root file-system along with all the user space packages.
I noticed that MongoDB is one of the packages that's already integrated as one of the Buildroot builtin packages.
You may check out the MongoDB make file for some hints on how to build it for embedded Linux.
I need to come up with a CM process for PLC code.
Currently, the system is developed using RSLogix 5000. The build product is a monolithic file that can be loaded onto a PLC for execution and edited directly in the development environment. With multiple developers, this has become a problem. They're stepping on each others changes.
As an analogy, it's as if, when doing Java development, the only way to edit and save the source would be to load up a *.jar file into your IDE, make the change, and then save it back to the jar file. This is less than ideal.
How can I coordinate changes between multiple developers working with PLC's?
If we are talking about one big binary file, then a VCS (centralized or decentralized) is not the best tool for the job.
An external repository (a shared disk, for instance) where a batch job will copy and label the current PLC state is better.
See "Tracking Software History"
To avert discontinuities in the historical record of revisions, old versions of programs must be stored.
“We take it a step further, though. Using our MDT AutoSave, we actually go out and interrogate the equipment. Overnight or at whatever frequency is specified, the software reads the programs in the PLCs and then compares that information to the last known program. The version-control software will copy the new program and store it and [then] compare it to the last one.
Launching version control is fairly simple. Required is software installation and then hardware configuration. “You would need a server and a couple of weeks of engineering and you’re good to go,” Perysyn says. However, his company uses a “shrink-wrap approach” that involves installing the software and then customization by users filling in the blanks.
That being said, when you have multiple changes from multiple developers, you need an integration environment where a first delivery can be done and validated, before pushing it to the actual server.
See also this post.
I use Unity Pro, so this may not apply for other brands.
Unity can export an "archive" file which is XML which describes the PLC program and IO setup in its entirety. After commissioning changes, I create an export and check it in to my local Git repo. This gets me an annotated history of changes, but no visual comparison. I can always use UnityDiff for comparison.
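If it helps, that commissioning workflow could be scripted along these lines; the repository path, archive name, and commit message are placeholders of mine, not anything produced by Unity Pro:

```python
# Sketch: copy the exported Unity archive (XML) into a local Git repo and commit it,
# giving an annotated history of commissioning changes. Paths and names are illustrative.
import datetime
import shutil
import subprocess


def archive_export(export_path: str, repo: str = "plc-history") -> None:
    dest = f"{repo}/station01.xef"  # hypothetical archive name inside the repo
    shutil.copy(export_path, dest)
    subprocess.run(["git", "-C", repo, "add", "station01.xef"], check=True)
    message = f"Commissioning changes {datetime.date.today().isoformat()}"
    subprocess.run(["git", "-C", repo, "commit", "-m", message], check=True)
```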
Check out http://www.mdtsoft.com/ also
You need specialized versioning system for PLCs like VersionDog.
From the manufacturer:
"Special support with Smart Compares for SIMATIC S5, SIMATIC S7,
SIMATIC PCS 7, WinCC, WinCC flexible, InTouch, CoDeSys, TwinCAT,
Phoenix PC WORX, RSLogix, Schneider Modsoft, Schneider Concept,
Schneider Unity, SINUMERIK 840D, Bosch IndraWorks and more. Also robot
programs from ABB and Kuka and office related data formats like
Microsoft Word, Microsoft Excel and Adobe PDF are perfectly supported
by versiondog.
Update: Here is a screenshot showing ladder version compare. I guess that's what most PLC folks are interested in. We also use it to schedule an e-mail report if the PLC offline and online application versions do not match, as an alarm that something has been changed in the PLC but not put into the version control server.
About RSLogix5000 specifically, I have seen developers use an emulated PLC and make their changes online. The final product once developed is then put together with all the comments (as they are not contained in the PLC) and then commissioned. There are issues with changes that cannot be done online, such as AOIs. There are tools in place to stop two people editing the same logic online at once and to take ownership of sections. Backups can be done in the form of uploads, but there isn't any way to track changes.
It is a messy problem, and messier still when you are maintaining a system, because you want an .ACD that you can go online with; unless you are somehow doing a diff with the RSLogix compare tool, you just see unreadable machine code like "+|Éû³´¬ÙÆW×晵‚>Ù".
The most common revision control I have seen (sadly) is just saving the latest file, then taking a copy and adding the current date to the file name, like the recommended control.com post described.
RSLogix5000 has always prohibited multiple users from opening and editing the same .ACD simultaneously. However, if multiple users have identical .ACD files, open them, and all make connections to the same target controller, they can each edit on the controller simultaneously, but only if they are working on different routines. Others' edits appear automatically if they look at another programmer's routine.
Note that working online like this is usually done with the PLC running, sometimes even with the target system (some kind of machine) operating. This kind of arrangement is for the purpose of completing work faster, or in some cases because the system is huge. No one develops like this, as it is really a debug tool and impractical for significant changes.
If one programmer finishes, and another is not done, the unfinished work of the other will be saved to the first programmer's .ACD when they save. Whoever saves last will have everyone's work.
Like others have mentioned in this thread, using file date is fairly reasonable. Some companies use a version control variable that is usually displayed on a connected HMI. Other companies use a separate document that documents who and what changes. Sometimes version notes are placed in a lengthy rung comment in the main routine.
My company uses a separate change log, and dated archive copies are maintained. Multiple programmers are only used in the most extreme cases. Someone is always designated to maintain the offline file integrity, usually the person who will be working the longest, or the project manager.
It is important to note that rung comments are not carried from one user to another before RSLogix5000 v21 because previous versions didn't store comments on the controller.
All this said, you might be trying to manage offline development. I haven't seen any sophisticated methods for this. Usually programmers write the needed routines separately, and a project manager will assemble them into a single project. The cleanest approach I've seen is where a project manager will create an architecture with global functionality, and assign routine work to others, giving them a copy of the .ACD to work with. They return the .ACD with changes, and the project manager copies and pastes their routines into the "master" project.
This is a very good question and it really depends on what you want it to do.
If you are only using Rockwell equipment it might be helpful to look at their solution; I think it's called FactoryTalk AssetCentre.
Currently I am looking into using Bazaar from Canonical.
One thing that VonC pointed out is that a piece of software that can interrogate the PLC is a definite plus; not a must in my opinion, but it sure as hell helps.
Am I reading your question properly, and you have multiple developers working on the same PLC code at the same time? It's a scary thought, but I know it sometimes needs to happen. Siemens PLCs are a bit easier to program with multiple developers, but I would assign one person to consolidate and test all the changes before committing to the PLC. Any VCS will let you create branches for every developer, but how you get them to consolidate their changes is the million dollar question.
Bart.
A simple thing to do would be to do a text diff on the .l5k files so you can easily see whether a developer has been messing with part of the file that is outside of their scope.
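For instance, a small script along these lines (file names are placeholders) would show exactly which lines of an exported .L5K changed between two check-ins:

```python
# Sketch: compare two exported .L5K files with difflib to see whether a developer
# touched logic outside their assigned scope. File names are placeholders.
import difflib


def l5k_diff(old_path: str, new_path: str) -> list:
    with open(old_path) as old, open(new_path) as new:
        return list(difflib.unified_diff(
            old.readlines(), new.readlines(),
            fromfile=old_path, tofile=new_path,
        ))


if __name__ == "__main__":
    for line in l5k_diff("baseline/Main.L5K", "checkin/Main.L5K"):
        print(line, end="")
```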
I saw this question just now from a link at Stack Exchange: Are There Realistic/Useful Solutions for Source Control for Ladder Logic Programs. Rather than have a link-only answer, I'll dupe my answer here:
There is actually a canned solution - from GE-IP of all places. Check out Proficy Change Management. This product does version control from a PLC control systems point of view, rather than a pure version control of files point of view - it works as a layer sitting on top of a VCS (the scary part is that originally this VCS was Visual SourceSafe) and handles rights management, reporting and checkout/checkin.
While the product is from GE-IP, it is designed to support a variety of PLC and HMI systems out of the box.
Full disclosure: I used to work for a company selling and installing PCM (but that was 7 years ago). So if you ask me what it was like back then, I'm likely to tell you where it all went wrong!
In my company we just started a trial with Copia.io
Check it out. Our first tests look very promising!
It brings branching, merging, ladder diff, etc. for multiple PLC platforms (Rockwell, Siemens, CODESYS).
PS: I work for a company that builds machines; we were looking for versiondog-like solutions with a bit more power in collaboration and diffing capabilities. I have used tools like Mercurial, Git, and Tortoise in past companies (not for PLCs, though).
Can anyone explain in simple terms what the difference is between configuration management and version control? From the descriptions I've been able to find on various websites, it seems like configuration management is just a fancy term for putting your config files in a source control repository. But others lead me to believe there is a more involved explanation.
Version control is necessary but not sufficient for configuration management. Version control happens in some central or distributed repository, but says nothing about where any particular version is deployed or used.
Configuration management worries about how to take what is in version control and deploy that consistently to the appropriate places, primarily QA and production, but in a large enough development operation developers as well.
For example, you may keep all of your SQL queries in version control, including your table modification scripts, but that doesn't control when those scripts are deployed to the appropriate database server and kept in sync with the deployment of any other code that relies on that database structure.
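A minimal sketch of that gap, with invented table and file names: version control holds the scripts, while a small bookkeeping record in each target database tracks which of them have actually been applied there:

```python
# Illustrative only: the scripts live in version control; a tiny bookkeeping table
# in each target database records which scripts have been deployed to it.
import pathlib
import sqlite3  # stand-in for whatever database server you actually deploy to


def applied_scripts(conn) -> set:
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)")
    return {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}


def deploy(conn, scripts_dir: str = "sql") -> None:
    done = applied_scripts(conn)
    for script in sorted(pathlib.Path(scripts_dir).glob("*.sql")):
        if script.name not in done:
            conn.executescript(script.read_text())
            conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (script.name,))
    conn.commit()


if __name__ == "__main__":
    deploy(sqlite3.connect("app.db"))
```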
Configuration management includes, but is not limited to, version control.
Configuration management is everything that you need to manage in terms of a project. This includes software, hardware, tests, documentation, release management, and more. It identifies every end-user component and tracks every proposed and approved change to it from Day 1 of the project to the day the project ends.
Version control is specifically applied to computer files. This includes documents, spreadsheets, emails, source code, and more.
Version control is saving files and keeping different versions of them, so you can see the change over time.
Configuration management generally refers to the overall process which keeps track of what version of the code is on what server and how the servers are set up (including the install scripts to do so in many places). It is the process of what happens after the code goes into source control and how it gets deployed to the servers/desktops, etc.
Configuration management is an ambiguous term.
In software, it tends to be a superset of version control, with emphasis on the entire process of producing a result in a repeatable and predictable manner.
In computing maintenance, it is related to the maintenance of the configuration settings and hardware/firmware/software versions of entire networks and sets of attached computing machines (including servers, clients, routers...).
In hardware manufacturing, it represents an even broader superset of the two above, including the hardware pieces and software modules needed to obtain a product, the description of the process to manufacture them, and sometimes even the entire schemas and configurations of the production lines themselves.
In addition to everything said above I'd like to recommend Bob Aiello's book named "Configuration Management Best Practices" - http://www.amazon.com/dp/0321685865 .
It covers all aspects of Software Configuration Management including version control.
Version control is the control of deliverables whereas configuration management is managing the entire process leading to produce the deliverables. Configuration management involves change management, project management, etc., which generally are not managed by simple version control.
Roughly speaking, version control means you can check out the source for any particular version. Configuration management means you can build and deploy and probably test any particular version.
This can be helpful.
Versions and configurations
Versions:
Ability to maintain several versions of an object.
Commonly found in many software engineering and concurrent engineering environments.
Merging and reconciliation of various versions is left to the application program
Some systems maintain a version graph
Configuration:
A configuration is a collection of compatible versions of the modules of a software system (one version per module).
Version control is one of the features of an SCM system.
From the subversion user guide:
http://svnbook.red-bean.com/en/1.7/svn-book.html
"Some version control systems are also software configuration management (SCM) systems. These systems are specifically tailored to manage trees of source code and have many features that are specific to software development—such as natively understanding programming languages, or supplying tools for building software. Subversion, however, is not one of these systems. It is a general system that can be used to manage any collection of files. For you, those files might be source code—for others, anything from grocery shopping lists to digital video mixdowns and beyond."
I am looking into improving the backup process a group of animators use. Currently they back up their work into external hard drives or DVDs manually, taking full copies of everything. The data consists of thousands of high resolution images, project files of various video editing software and sound files. Basically everything is binary data and nothing should ever be merged on checkin.
Should I investigate version control systems that I would use as a software developer (Subversion, Git, etc.), or is there a class of version control systems intended for non-software data that would suit these needs better?
You could also check out AlienBrain. It's a project asset management system designed for artists.
If your scope is just "backup" then I'd say stick to backup solutions.
But if you are thinking about the whole lifecycle of the animator's work, then the type of use typically falls into the "Digital Asset Management" category for the very reasons you mention: huge data volumes; binary formats.
Since version control (SCM) software is usually designed for text files that can be diff'd and merged, they tend not to do so well with binary formats in high volume. While your average web graphics are not going to be an issue for (software) version control tools, you mention video, which puts you in another league.
The bad news (maybe; it depends on your business) is that DAM is dominated by the big end of town. #Atmospherian has mentioned AlienBrain, which is a good representative of a niche offering for artists. At the other end of the spectrum you have more general-purpose offerings like Oracle's UCM (formerly Stellent). Make sure you check the price tags, though.
There must be open source or lower cost alternatives available - but I don't know them, sorry.
What does seem to be very common are custom in-house solutions. Unlike managing code, where changes to the files themselves have their own significance, managing digital assets tends to focus on the metadata (the image/video is just an associated blob). And since many shops have their own particular production workflow, it makes the territory ripe for some skunkworks programming (if that's your bent, go for it!).
So while I'm not recommending any particular products, I suggest if you think "digital asset management" rather than "version control" when scouting for solutions you will probably find answers more suited to your needs.
Your question is a little unclear - you seem to have conflated version control and backup.
If what you want is version control, then take a look at the list on Wikipedia: Comparison of revision control software. That shows most of the widely known version control systems and their basic features. You're looking for something where you can set it up to force users to check out before they edit. Be aware that commercial solutions range in price from moderately expensive up to 'You want HOW much?'
If what you want is backup software, then I'd start at List of backup software in wikipedia. There's a lot more choices in the backup software arena, and there are a lot of price points.
Either way, figure in the creation of an admin position (either as part of someone's job or a new person altogether, if you're big enough). I've worked with backup and version control systems that didn't have an admin, and it's a problem. Either no one takes care of problems, or everyone gets their fingers in there and really screws things up. Either way, making it part of someone's job (officially) is the best way to limit damage.
I think ClearCase would work for you. The reason is that everything is a VOB (versioned object), no matter what it is! Check it out.
From your description, it sounds like you would do pretty well with some basic backup software such as Retrospect. Using daily backups of workstations, only changed data would be backed up and it would be easy to roll back to an earlier version of a file if needed.
What you don't get from such a setup is the ability to check out / check in files and get warnings about conflicts.
Vidyatel has editing software that can compare video content and find the difference between video versions based on the video alone.
The result is an EDL/TC.
It might help.
You should take a look at boar. It is exactly what you want, "version control and backup for photos, videos and other binary files". It is version control designed for large binary files.