We are doing a minor upgrade of our product and want to confirm the following about component handling in a minor upgrade:
I do not want some components installed on the system after the minor upgrade, so I removed them from the “Setup Design” view. However, I can still see the components in the Components view with a red exclamation mark indicating that they are no longer part of any feature.
InstallShield recommends that components not be removed from the product in a minor upgrade.
Is this fine, or will it add risk to the upgrade or uninstall?
The components are still visible with a red exclamation mark in the Components view. Does that mean my product still has the components, and that I am safe to remove them from the Setup Design view?
There are multiple layers here. Fundamentally, removing components in a minor upgrade is not allowed by Windows Installer (see MSIENFORCEMINORUPGRADERULES or Major Upgrade vs. Minor Upgrade vs. Small Update). The result is typically that the data in that component is orphaned on the machine; the minor upgrade doesn't remove the data, but it does remove Windows Installer's record of the data.
Some people recommend getting the desired effect of removing a component in a minor upgrade by using the component setting Reevaluate Condition. The default is No, but if you set this to Yes and provide a false Condition such as 0, the minor upgrade can remove the component's data. (You can further couple this with an empty file replacing the contents of any files that were in the component to cut down on your installation's size. Change only the files' contents, or Windows Installer will be unable to remove the obsoleted files.)
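For reference, a minor upgrade shipped as a full .msi is typically applied with a command line like the following (the package name here is a placeholder); it is during this REINSTALL pass that a component with Reevaluate Condition set to Yes has its now-false condition reevaluated, which is what lets Windows Installer remove the component's data:

    :: apply the minor upgrade over the installed product; the 'v' flag
    :: recaches the updated package, and REINSTALL=ALL drives the
    :: reevaluation of transitive component conditions
    msiexec /i MyProduct-1.1.msi REINSTALL=ALL REINSTALLMODE=vomus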
Finally, removing a component from a feature does not remove it from your project, although when the component is no longer part of any features, the build will exclude it from the resulting .msi file. So when you're ready to change to a major upgrade, you should remove the component from your project entirely; until then you should keep the component.
I am using Jib to pull a base image, add my wrapper Java code to it, and build my image on top of that. Due to the widely known log4j CVE from December 2021, we are looking for a way to remove the vulnerable classes. (More CVEs have since been found in 2022; one of them has a score of 10.0, the highest possible. See https://www.cvedetails.com/vulnerability-list/vendor_id-45/product_id-37215/Apache-Log4j.html)
The base image is near EOL, so the provider said they will not release a new version; log4j 1.x itself reached EOL long before that. We currently have no plan to upgrade the base image to the next version, so removing the classes seems to be the only way forward.
The base image uses /opt/amq/bin/launch.sh as its entrypoint. I have found that I can use a customized entrypoint to run a script beforehand that removes the classes: <entrypoint>/opt/amq/bin/my_script.sh</entrypoint>, where my_script.sh runs run_fix.sh && /opt/amq/bin/launch.sh.
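For illustration, a minimal sketch of a run_fix.sh along these lines, assuming the log4j 1.x jars sit under /opt/amq/lib and that zip is available in the image (the jar location and the class list are assumptions to adjust):

    #!/bin/sh
    # run_fix.sh: strip known-vulnerable classes from the log4j 1.x jars
    # (jar location and class list are assumptions; adjust to your image)
    for jar in /opt/amq/lib/log4j*.jar; do
      zip -q -d "$jar" \
        'org/apache/log4j/net/JMSAppender.class' \
        'org/apache/log4j/net/SocketServer.class' \
        'org/apache/log4j/jdbc/JDBCAppender.class' || true
    done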
Then I realized that even though this works by mitigating the risk when the application is actually running, the vulnerability scan (part of our security process) will still raise alarms when examining the image binary: the scan is a static step done before the image is uploaded to the Docker registry for production, well before the application runs. The classes can only be removed at the moment the application runs, i.e., at runtime.
Can Jib pre-process the base image during the Maven build (mvn clean install -Pdocker-build) instead of only allowing this at runtime? From what I have read, the answer is a big no, and there is no plugin for it yet.
By the design of container images, it is impossible for anyone or any tool to physically remove files from an existing container image: images are immutable. The best you can do is mark files as deleted with special "whiteout" files (.wh.xyz), which cause a container runtime to hide the target files at runtime.
However, I am not sure if your vulnerability scanner will take the effect of whiteout files into account during scanning. Hopefully it does. If it doesn't, the only option I can think of is to re-create your own base image.
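A sketch of that last option: flatten a cleaned-up container back into a single-layer image, which drops the whiteout files entirely. Image names and the jar path below are placeholders, and note that docker export/import discards image metadata such as the entrypoint, which you would have to set again (e.g., via docker import --change):

    # delete the vulnerable classes inside a throwaway container ...
    docker run --name cleaned vendor/amq-base:7 \
        sh -c 'zip -q -d /opt/amq/lib/log4j-1.2.17.jar org/apache/log4j/net/JMSAppender.class'
    # ... then flatten its filesystem into a new single-layer base image
    docker export cleaned | docker import - my-registry/amq-base:7-log4j-fix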
Take a look at this Stack Overflow answer for more details.
I need to add, modify, and remove files and change the folder structure, and send it all as an upgrade file. Can I achieve this with the minor upgrade feature in the InstallShield software? Please explain the difference between small, minor, and major upgrades.
Installation projects usually do not operate on bare "files" and "folders"; instead they operate on "features" and "components". This is exactly what you should be looking at in your existing project. The short answer is that you can remove or add folders with a minor upgrade, but this is not a good idea. You should be working with features and components to restructure your project and add or remove folders and files.
The first Google result for your question "Explain the difference between small, minor and major upgrades" is Major Upgrade vs. Minor Upgrade vs. Small Update. Please consider doing more reading before posting a question.
The problem: we have dozens of Maven sub-projects (managed by m2eclipse) in our 3-level POM tree, and people keep adding and removing some of them on a bi-weekly basis. The problem is further complicated by the fact that not all newly added projects cause a compile-time error when they are missing. They could end up not being dropped into the OSGi container, since people forget to import them properly and Eclipse for some reason doesn't automatically know about their existence.
Currently, people have to watch a mailing list, and whenever such an event occurs they must either manually invoke the import wizard on the root POM to add the missing projects, or manually remove the ones no longer needed. Moving/renaming is a combination of removing and adding.
All of that is very error-prone, and we would like to automate or simplify the process somehow.
Ideally, we would like to have the following workflow:
1) sync
2) fire up Eclipse
3) some hook triggers and analyzes the developer's workspace against the latest POM tree (the root POM is fixed and known)
4) There should be a button somewhere that is:
- green, if everything is all right
- red, if not
Clicking it should automatically remove the projects that are no longer needed (and update Eclipse internals) and add the new ones (some way of invoking the import wizard in a silent mode).
Is this possible with existing functionality, or would we have to extend m2e somehow? Any other solutions?
Any help would be much appreciated!
P.S.
We're aware that this type of problem is probably due to a badly designed project structure. However, that is not easy to fix while running on tight release cycles, so we need an interim solution.
This smells to me like you're fixing the wrong problem. I doubt something like this would be supported out of the box in m2e, unless one day it becomes best practice to put each type in its own module. After some time, project modules should stabilize and reflect an architecture that can change, but not frequently (major versions only). If it changes too frequently, not enough thought has been put into the design decisions. Consider splitting the project into multiple sub-projects that one can check out/clone and work on independently.
When syncing changes, just check whether modules were added or deleted; if so, after the sync, logically remove and then re-import the existing Maven projects, as sketched below.
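For the added/deleted check, a cheap signal right after the sync (this assumes Git; adapt the idea to your VCS):

    # any pom.xml that changed in the last pull hints that the module
    # set changed and the workspace needs a re-import
    git diff --name-only ORIG_HEAD HEAD -- '*pom.xml'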
I will start building libraries on different operating systems, since they will run on different mobile platforms.
Is there a way to version control a whole development environment, including both the operating system version and all the development tools, etc.?
I need this to be able to rebuild a library from a source freeze with the same result, which might not happen if the environment is not the same.
We actually use VMware to achieve that. For every important version, we keep a snapshot of the complete environment. Then, when we have to go back, it is easy to fire up the image, and it is right where you left it.
This is a true "Configuration Management" issue (as opposed to simple "Source Code Management", which includes RCS and VCS).
Most of the time, "configuration" means recording your code and its dependencies on third-party libraries, but to really respect the reproducibility principle behind SCM, the tools necessary to rebuild a product ought to be versioned as well.
That means you are not working with just one "component" (your source repository), but with several components, each at a certain label, which together define your configuration:
the IDE (for example you could version a full Eclipse directory)
the language (you can version a JDK directory)
the source code and project information (since you can add some IDE-specific files with your code)
Then, depending on your VCS tool:
you define your workspace with specific versions of each component (ClearCase UCM, SVN externals, Git submodules, ...); see the sketch after this list
you work in only one of those components (your source code), the others being read-only
The specifics depend on the language, tool and code you are using, and you may need to combine that approach with branches per platform to isolate the (hopefully small) part which is platform-specific.
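As a concrete sketch of the workspace definition with Git submodules (URLs and paths here are illustrative only):

    # record the IDE and the JDK as read-only, pinned components
    git submodule add https://example.com/tools/eclipse.git tools/eclipse
    git submodule add https://example.com/tools/jdk.git tools/jdk
    git commit -m "Pin toolchain versions for reproducible builds"
    # later, materialize exactly the toolchain recorded at any revision
    git submodule update --init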
Using online interfaces to a version control system is a nice way to have a published location for the most recent versions of code. For example, I have a LaTeX package here (which is released to CTAN whenever changes are verified to actually work):
http://github.com/wspr/pstool/tree/master
The package itself is derived from a single file (in this case, pstool.tex) which, when processed, produces the documentation, the readme, the installer file, and the actual files that make up the package as it is used by LaTeX.
In order to make it easy for users who want to download this stuff, I include all of the derived files mentioned above in the repository itself as well as the master file pstool.tex. This means that I'll have double the number of changes every time I commit because the package file pstool.sty is a generated subset of the master file.
Is this a perversion of version control?
@Jon Limjap raised a good point:
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
That's really the crux of the matter in this case. Yes, released versions of the package can be obtained from elsewhere. So it really does make more sense to version only the non-generated files.
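In Git terms, that switch is small (a sketch; pstool.sty stands for any generated file):

    # stop tracking the generated file while keeping the local copy
    git rm --cached pstool.sty
    echo pstool.sty >> .gitignore
    git commit -m "Version only the master file; pstool.sty is generated"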
On the other hand, @Madir's comment that:
the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes
is also rather pertinent in that if a user finds a bug and I fix it immediately, they can then head over to the repository and grab the file that's necessary for them to continue working without having to run any "installation" steps.
And this, I think, is the more important use case for my particular set of projects.
We don't version files that can be automatically generated using scripts included in the repository itself. The reason is that after a checkout, these files can be rebuilt with a single click or command. In our projects we always try to make this as easy as possible, thus avoiding the need to version these files.
One scenario I can imagine where this could be useful is when 'tagging' specific releases of a product for use in a production environment (or any non-development environment) where the tools required for generating the output might not be available.
We also use targets in our build scripts that create and upload archives with a released version of our products. These can be uploaded to a production server, or to an HTTP server for download by the users of your products.
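A sketch of such a target (the script name, version, and host are made up for illustration):

    # release target: regenerate derived files, archive, and upload
    ./build-derived-files.sh
    tar czf product-1.2.3.tar.gz dist/
    scp product-1.2.3.tar.gz deploy@downloads.example.com:/srv/releases/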
I am using Tortoise SVN for small-system ASP.NET development. Most code is interpreted ASPX, but there are around a dozen binary DLLs generated by a manual compile step. While in theory it doesn't make a lot of sense to keep these under source control, it certainly makes it convenient to ensure they are correctly mirrored from the development environment onto the production system (one click). Also, in case of disaster, the rollback to the previous step is again one click in SVN.
So I bit the bullet and included them in the SVN archive - the convenience, which is real and repeated, outweighs cost, which is borne behind the scenes.
Not necessarily, although best practices for source control advise that you do not include generated files, for obvious reasons.
Is there another way for you to publish your generated files elsewhere for download, instead of relying on your version control to be your download server?
Normally, derived files should not be stored in version control. In your case, you could build a release procedure that creates a tarball including the derived files.
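For example (a sketch; assume "make derived" stands for whatever regenerates your derived files, and the file names follow the question's example):

    # release procedure: rebuild the derived files, then ship them in
    # the tarball while keeping them out of version control
    make derived
    tar czf pstool-release.tar.gz pstool.tex pstool.sty doc/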
As you say, keeping the derived files in version control only increases the amount of noise you have to deal with.
In some cases we do, but it's more of a sysadmin type of use case, where the generated files (say, DNS zone files built from a script) have intrinsic interest in their own right, and the revision control is more of a linear audit trail than branching-and-tagging source control.