Is there any configuration management tool on the market that can automate property files for different environments? Let's say I have one complete configuration file with keys and values, but for the other environments the values will differ. I'm just trying to find out whether this can be done easily, for example with a template, a CSV, or some other worthwhile approach.
You can use the Sparrow6 automation DSL to populate configuration files using its embedded templater. You can go even further and use the Tomtit task runner, which integrates with Sparrow6 and provides an environments feature for exactly this type of task.
You can try https://microconfig.io
It is designed to solve the configuration problem you’re describing.
It supports both the properties and YAML formats.
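Whichever tool you pick, the underlying pattern is usually the same: keep one complete base properties file plus a small override file per environment, then merge them when building that environment's config. A minimal sketch of that pattern (the file names and keys here are made up for illustration, not tied to any particular tool):

```typescript
// merge-config.ts -- illustrative sketch only; file names and keys are hypothetical.
import * as fs from "fs";

// Parse a simple key=value properties file into a map.
function parseProperties(path: string): Map<string, string> {
  const entries = new Map<string, string>();
  for (const line of fs.readFileSync(path, "utf8").split(/\r?\n/)) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks and comments
    const idx = trimmed.indexOf("=");
    if (idx > 0) {
      entries.set(trimmed.slice(0, idx).trim(), trimmed.slice(idx + 1).trim());
    }
  }
  return entries;
}

// Base values first, then the environment-specific file overrides them.
function renderConfig(basePath: string, envPath: string, outPath: string): void {
  const merged = new Map([...parseProperties(basePath), ...parseProperties(envPath)]);
  const body = [...merged].map(([key, value]) => `${key}=${value}`).join("\n");
  fs.writeFileSync(outPath, body + "\n");
}

// Example: base application.properties plus prod.properties -> application-prod.properties
renderConfig("application.properties", "prod.properties", "application-prod.properties");
```

Tools like the ones mentioned above layer templating, placeholders and per-environment directories on top of this basic idea, so you only have to maintain the values that actually differ.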
I have a configuration file and want to standardize the keys.
So when other developers create their own configurations, they will be validated.
When using programming languages, you would use interfaces for this.
I also know that some YAML-based tools like Ansible have defaults/definitions to ensure you are writing your configuration correctly. But as far as I know, they also rely on add-ons and such...
So is there a way to achieve this kind of effect in YAML files?
I mean I could programmatically validate it somehow, but maybe there is something better.
You have two options:
Use JSON Schema. Some editors and validators offer direct YAML support; for others you need to transform the YAML to JSON first in order to validate it (which can be done automatically). See the list of implementations. This restricts you to the features available in JSON (e.g. no complex keys, and anchors & aliases are resolved before validation).
Use the target type as schema. YAML is a serialization language, which means the native type in the programming language of your choice serves as the schema. This obviously doesn't work well in dynamically typed languages like Python or JavaScript, but it does work in Java (via SnakeYAML), C# (via YamlDotNet) and others. You don't get automatic editor support, but you can write a small validator tool, and even writing a validator for your favorite editor is simple today thanks to LSP.
Ansible (and lots of other YAML-based frameworks) isn't really a good example because it uses Jinja on top of YAML and therefore is far more complex to validate.
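To make option 1 concrete, here is a minimal sketch of programmatic validation in TypeScript, assuming the js-yaml and ajv packages; the schema and the config.yaml file name are placeholders for whatever keys you want to standardize:

```typescript
// validate-config.ts -- sketch of option 1 (JSON Schema); schema and file name are placeholders.
import * as fs from "fs";
import * as yaml from "js-yaml";
import Ajv from "ajv";

// Hypothetical schema: every config must have a string "name" and an integer "port",
// and no keys outside the agreed set are allowed.
const schema = {
  type: "object",
  required: ["name", "port"],
  properties: {
    name: { type: "string" },
    port: { type: "integer" },
  },
  additionalProperties: false,
};

const data = yaml.load(fs.readFileSync("config.yaml", "utf8")); // YAML -> plain JS object
const ajv = new Ajv();
const validate = ajv.compile(schema);

if (!validate(data)) {
  console.error("config.yaml is invalid:", validate.errors);
  process.exit(1);
}
console.log("config.yaml is valid");
```

The same check can run in CI or as a pre-commit hook, so configurations created by other developers are validated automatically.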
I'm developing an extension in VS Code to add language support for OpenSCAD (Script-based 3D modeling program). Currently, I have been working on a way to open / preview a file in OpenSCAD from VS Code, which I have been able to do successfully using my own preview manager.
My issue is that I want to add configurable naming formats when exporting an OpenSCAD file, using variables similar to those used in the tasks.json file. More info can be found here: https://code.visualstudio.com/docs/editor/variables-reference. As an example, taking the file test.scad and the export configuration ${fileBasenameNoExtension}.stl would export to the file test.stl.
Additionally, I want to add a custom variable, ${#} that would evaluate after all other variables as a unique version number to avoid duplicate exported files. Using the example file: test.scad and the export configuration, ${fileBasenameNoExtension}_${#}.stl, the extension would export to the file test_1.stl for the first time. Then, seeing that test_1.stl exists, it would export to test_2.stl, and so on. I implement similar functionality in all of my exporting utilities, so it is important I can implement it here.
Now that the intro is done, on to the actual question: to anyone who knows more about the VS Code API than I do, in order to best get the functionality described above, should I implement variable evaluation in my custom preview manager, or re-implement my preview manager using tasks? Because I have already implemented my own preview manager that I am happy with, I would prefer the former. However, I have been unable to find any function in the VS Code API that will evaluate these variables in a string. Is there a TypeScript function in the API that I have missed?
If re-implementing this functionality using tasks is a better way to achieve my goal, would I have to sacrifice the control I have in my preview manager, such as being able to selectively kill open previews and display active exports?
Or, is there a compromise that could use all of the power from tasks without losing any functionality I've already developed?
Link to branch of my extension's repository: https://github.com/Antyos/vscode-openscad/tree/PreviewModel
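For context, here is a minimal sketch of how the substitution and the ${#} counter could be hand-rolled inside the extension if no API function for this exists; the variable mapping is only a partial, assumed subset of the tasks.json variables, and findUniqueExportName is a hypothetical helper, not part of the VS Code API:

```typescript
// exportVariables.ts -- illustrative sketch, not part of the VS Code API.
import * as fs from "fs";
import * as path from "path";
import * as vscode from "vscode";

// Substitute a (partial, assumed) subset of the tasks.json-style variables.
function substituteVariables(pattern: string, file: vscode.Uri): string {
  const fileBasename = path.basename(file.fsPath);
  const replacements: Record<string, string> = {
    file: file.fsPath,
    fileBasename: fileBasename,
    fileBasenameNoExtension: fileBasename.replace(/\.[^.]+$/, ""),
    fileDirname: path.dirname(file.fsPath),
  };
  // ${#} is left untouched here because '#' does not match \w.
  return pattern.replace(/\$\{(\w+)\}/g, (match, name) =>
    name in replacements ? replacements[name] : match
  );
}

// Hypothetical helper: resolve ${#} to the first version number not already on disk.
function findUniqueExportName(patternWithHash: string, dir: string): string {
  for (let n = 1; ; n++) {
    const candidate = patternWithHash.replace("${#}", String(n));
    if (!fs.existsSync(path.join(dir, candidate))) {
      return candidate;
    }
  }
}

// Example: "${fileBasenameNoExtension}_${#}.stl" on test.scad -> test_1.stl, then test_2.stl, ...
export function resolveExportName(pattern: string, file: vscode.Uri): string {
  const substituted = substituteVariables(pattern, file);
  return substituted.includes("${#}")
    ? findUniqueExportName(substituted, path.dirname(file.fsPath))
    : substituted;
}
```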
Does anyone know how you version / source control changes in RedDot CMS (OpenText)? Also, is there any best-practice advice for release management of changes from one RedDot environment to another RedDot instance? Any help or advice would be greatly appreciated.
There is best practice, but as you have probably realised, there aren't too many practitioners of RedDot these days. In case you should come back to this thread (or for someone else's benefit): versioning is built into the Template Manager, but it has to be enabled. There's no source control integration, last time I checked, but we developed a prototype system that allows templates to be created in Visual Studio. The project to complete it has since died due to lack of commercial support, but some of the ideas may be useful to you if you want them.
I'll split the answer into two parts: versioning and migration between stages.
Versioning can only be done with the template history, or via an external service that grabs the templates on a regular basis or when triggered manually. At least for the Management Server, there is no built-in service for "real" versioning, or for releasing more than single templates/content classes, let alone pages.
There are three ways of moving changes from dev to test or prod that I have seen often:
Two templates: Using two templates on one server, one called "Development" and the other "Production". All new development is done on the "Development" template and moved to the other template as soon as it is finished. If elements differ between those templates, they need to be duplicated. This is typical of small installations without staging areas. Nowadays you will find very few of these.
Partial tree export: Development is done on a dev server and the changes are exported as a partial tree. There is a special area in the project tree where pages are created whose templates are to be moved over. These are exported, including the templates, and imported on the target server to override the existing ones.
Tool support: There are external tools for moving templates and content classes to other servers, e.g. SitePort (http://siteport.net, which afaik can also move whole templates between RedDot servers) and the Sync Tool (http://www.erminas.de/en/products#synctool, which can compare and move single element attributes and/or single lines of templates; please note that this is not meant as advertisement, as the tool is made by us, but I do not know of any other like it). Some companies also have custom-developed tools for this.
I have a local Sitecore instance where I made changes involving both code and the creation of a new sublayout.
After deploying the code, I can see the user control (.ascx) file associated with the sublayout on the new environment, but the corresponding item does not appear and cannot be used.
If I attempt to recreate the user control, it tells me that the file already exists, and due to my lack of experience with the platform I have been unable to import it.
What would be the optimal way to proceed?
To deploy your new sublayout correctly you should create a Sitecore Package. This is basically a zip file that allows you to move both items and disk files between Sitecore instances in a controlled manner. For basic installs of Sitecore, where you have not added any specialised tools, it is generally the preferred way to move resources between servers.
The "Package Designer Guide" on the Sitecore Developer Network will give you information about how to use the Sitecore UI on your development site to create a package containing both the Item(s) and the file(s) for your sublayout:
http://sdn.sitecore.net/upload/sitecore6/65/package_designer_admin_guide-a4.pdf
Once created, this package can then be imported onto whatever other servers you want to deploy your sublayout to.
-- Edited to add --
Derek Hunziker's answer makes a good point: As well as the basic Sitecore behaviour there are third party tools available which can enhance and extend the deployment experience if you wish. As well as Hedgehog TDS, you might also consider:
The "Sitecore Rocks" extension for Visual Studio allows the creation of packages from within the
Visual Studio UI. This tool is free to use. (https://visualstudiogallery.msdn.microsoft.com/44a26c88-83a7-46f6-903c-5c59bcd3d35b/)
There are also a variety of open source tools; Sitecore Courier is one example (https://github.com/adoprog/Sitecore-Courier), which is designed to help automate deployment between Sitecore instances.
Both TDS and Courier are most suited to regular deployments, such as those during ongoing development cycles, since they both include automation to help decide what gets deployed. The standard Sitecore UI and the Sitecore Rocks extensions for package creation are better suited to ad-hoc deployments, since you generally pick the things to deploy manually.
A common best practice is to deploy your items along with your code using Team Development for Sitecore. This eliminates the need to create Sitecore packages every time you want to move items between environments, which in turn reduces issues caused by human error. As an added bonus, the items that you own as a developer (such as Templates and SubLayouts) can be checked into source control.
Full disclosure: I work for Hedgehog Development :)
We're looking into setting up a proper deployment process.
From what I've read, there seem to be four methods of doing this.
Copy & Paste -- We don't want to do this
Using the "Package" mechanism built into the Salesforce Web Interface
Eclipse Force IDE "Deploy to Server" option
Ant Script (haven't tried this one yet)
Does anyone have advice on the limitations of the various methods?
Can you include everything in a Web Interface package?
We're looking to deploy the following items:
Apex Classes
Apex Triggers
WorkFlows
Email Templates
MailMerge Templates -- Can't seem to find these in Eclipse
Custom Fields
Page Layout
RecordTypes (can't seem to find these in Website or Eclipse)
PickList items?
SControls
I recommend the Force.com Migration Tool.
For reference:
Force.com Migration Tool Documentation
Migration Tool Guide
The Migration Tool allows you to use Ant targets to move your metadata between salesforce.com organizations.
I can speak to this from recent painful experience.
Packaging: this is a very old method that predates the metadata API on which both Ant and Eclipse rely. In our experience, packaging's only benefit is in defining your project. If you're using Eclipse (which we do, and I recommend), you can define your project as being based on a particular package. As long as you remember to add new components to your package, your project hangs together.
One thing that baffled us for a while, by the way, is the many uses of the word "package". We've noted the following:
Installed packages: these come in managed and unmanaged flavors and are really, in the words of a recent post on the SFDC boards, for ISVs to deploy their stuff into various unknown orgs "out there". Both managed and unmanaged packages have limitations that make them unsuitable and unneeded for deployment from development to production within an org, or in any case where you're doing custom development and don't intend to distribute code to a large anonymous base.
Non-installed packages: this is what you see when you click "Packages" in the web UI. These, that we sometimes call "development packages", seem to be just a convenient way to keep a project definition together.
Anyway, the conclusion I'm coming to is that our team (custom development, not an ISV) does not need packages in any form.
The other forms of deployment, both Eclipse and Ant, rely on the Metadata API. In theory they are capable of exactly the same things; in reality they appear to be complementary. The Force.com IDE for Eclipse makes deployment as easy as it can be (which is not very) and gives you a nice look at what it intends to deploy. On the other hand, we've seen Ant do some things the IDE could not. So it's probably worthwhile to learn both.
The process we're leaning toward is to keep all our projects in SVN and use the SVN structure as the project definition (Eclipse will work with this and respect it), and to use Eclipse, and sometimes Ant, for migration. There is no apparent need for packages anywhere.
By the way, one more thing to be aware of: not all components are migratable. Some things must be reconfigured by hand in the target environment. One example would be time-based workflows. Queues and Groups also need to be hand-created, I think. Likewise, the metadata API can't directly process field deletions, so if you deleted a field in your source, you need to delete it by hand in the target. There are other cases as well.
Hope that's useful --
-- Steve Lane
As of Spring '09, mail merge templates are not supported in metadata but record types are. You will find record types as an XML element in the file for the object they belong to. Everything else on your list is supported with a small exception. Picklist values for standard fields cannot be edited in Spring '09. Stay tuned for news on Summer '09 feature announcements.
Update: Standard picklists on standard objects are now metadata exposed (as of API v16):
http://www.salesforce.com/us/developer/docs/api_meta/Content/meta_picklist.htm
Otherwise, Steve Lane's response is pretty accurate. The advantage of using unmanaged packages (what Steve calls non-installed packages) is that when you add metadata to a package, the metadata it depends on will automatically be added. So it's easier to grab a full set of metadata containing all its dependencies. If you are repeatedly moving metadata from one org (sandbox) to another (production), Steve's approach is probably the best way to go and certainly the most common today. I frequently use unmanaged "developer" packages to move something I've developed in one org to another unrelated org. For my purpose, I like to have the package defined in the org as opposed to an Eclipse project / SVN. But that probably doesn't make sense if you are doing team development across many dev/sandbox orgs and are using SVN already.
Jesper
Another option is to use Change Sets if you want to move metadata from a sandbox to production.
There are currently some limitations on how change sets can be used:
Sending a change set between two organizations requires a deployment connection. Currently, change sets can only be sent between organizations that are affiliated with a production organization, for example, a production organization and a sandbox, or two sandboxes created from the same organization.
From the docs:
A package must be managed for it to be published publicly on AppExchange, and for it to support upgrades. An organization can create a single managed package that can be downloaded and installed by many different organizations. They differ from unmanaged packages in that some components are locked, allowing the managed package to be upgraded later. Unmanaged packages do not include locked components and cannot be upgraded. In addition, managed packages obfuscate certain components (like Apex) on subscribing organizations, so as to protect the intellectual property of the developer.
The advantage of a managed package is that it allows you to easily version and distribute things across multiple SFDC organizations.