Rancher doesn't pick up changes in Catalog - kubernetes-helm

I had a catalog at version 1.0.0.
I made changes to the catalog while keeping the same version; Rancher still showed the old catalog.
I have read that this is because of Rancher's caching mechanism.
OK, so I incremented the version to 1.0.1.
Rancher still shows the old one.
Then I incremented to 1.0.2, 1.0.4, etc. Still the old one.
How can I force Rancher to use the current catalog instead of showing a stale one?
How do I clear the catalog cache?
Rancher v2.4.3
Edit: the only workaround I have found so far is to increment the version in bigger steps, e.g. 1.1.x.

Here is what I have found:
Rancher doesn't provide any information about the catalog cache or any way to reset it. But it seems it clears the cache for a catalog once the corresponding repository is deleted from the Catalog page.
Clearing the cache:
Delete the Helm repository, then wait for the cache-clearing mechanism to take effect; it takes about a minute. Then re-add the repository, and it seems to work.
Whether the new catalog is in effect can be checked in the Preview section of the App Launch/Upgrade form.
To be more specific:
The caches are stored here (at least in the case of dockerized Rancher):
/var/lib/rancher/management-state/catalog-cache
So, after the deletion, once you see that the corresponding cache folder has been removed, you are good to go.
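If you want to script that wait, here is a minimal sketch (PHP CLI, to match the code examples later on this page; any scripting language would do). The folder name is an assumption: list the catalog-cache directory first to find the one belonging to your repository.

    <?php
    // Sketch: poll until Rancher's cleanup removes the cache folder of a
    // deleted catalog. 'my-repo' is a hypothetical name -- inspect the
    // catalog-cache directory to find the real one.
    $repoCache = '/var/lib/rancher/management-state/catalog-cache/my-repo';
    while (is_dir($repoCache)) {
        sleep(5);          // the cleanup takes roughly a minute
        clearstatcache();  // PHP caches stat results, so refresh them
    }
    echo "Cache folder gone -- safe to re-add the repository.\n";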

We have a setup running Rancher 2.5.9, where deleting the repository is not needed. A catalog refresh works fine (select the catalog, open its menu, and click Refresh); versions pushed a few minutes earlier become available in the dropdown.
Unfortunately, the same does not work on another setup running Rancher 2.6.3. I need to debug further to see whether the issue is in that setup or in the Rancher version.

Related

How do I update the keycloak client?

I have a 12.0.1 instance that I would like to upgrade to 12.0.2. I followed the guide here https://www.keycloak.org/docs/latest/upgrading/, but I do not seem to have the new option that was added here (https://github.com/keycloak/keycloak/commit/91a51c2dbeb770e0202e89b438fc16963a5eed21) in my UI. If I go to Server Info in the admin panel, it shows 12.0.2 as the version.
What am I missing? Do I need to update the client separately? I downloaded the latest archive, copied the files from my old KEYCLOAK_HOME/standalone/ into the new folder, and restarted Keycloak. The DB was migrated from what I can tell, but that UI option is simply missing.
Do I need to do something to my existing realm to activate this feature?
Keycloak release 12.0.2 doesn't contain the linked KEYCLOAK-16606 change. See the commit history of the file: https://github.com/keycloak/keycloak/blob/12.0.2/federation/ldap/src/main/java/org/keycloak/storage/ldap/mappers/UserAttributeLDAPStorageMapper.java You need to build your own custom Keycloak release from master, or wait until the change is released.

How to clear Eclipse p2 repository cache

I am facing the puzzling fact that the information for update sites fails to be updated despite my forcing a reload in Preferences > Install/Updates > Available Software Sites.
I have a local update site (file:/ protocol, on Windows) and an online update site (https://) that I use as staging/test update sites for an open source project that I am maintaining.
I build the update site using an update site project that is stored locally and wiped clean each time I build it. When I have tested the new release in a different Eclipse instance and I have validated my changes, I then upload the entire update site to my server. Then, just to simulate what a user would do, I update the plugin in another Eclipse instance that runs on a different physical machine.
Yesterday I built another version, 2.2.0.201702052007, and uploaded it to my server. The previous version was 2.2.0.201702042059.
The problem that I have is that the Eclipse instances (Mars.2 and Neon) on my development machine keep reporting the previous to last version, despite my reloading the update site information. However, the other machine sees the new version without a problem.
This is what I've tried:
Reloading the information of the update site: each time, I get a confirmation message saying "information for [...] has been reloaded from the server" but it turns out that it hasn't been reloaded: I see the older feature version.
Accessing the update site from a different Eclipse instance on a different machine: I see the new version.
Loading the update site's site.xml file from a browser: I see the new version.
Using FileZilla to download the entire update site to a local folder and unzipping content.jar and artifacts.jar so that I can read the XML files embedded in those JAR files: I see no trace of the older version.
Removing the update site, restarting Eclipse and adding the update site again: the problem was still there.
As a last resort, I removed all files of the update site from the server: Eclipse still reported successfully reloading the information from the server.
I shut down the httpd service on the VPS. Eclipse kept reporting success until I restarted Eclipse, at which point the reload failed. But once the web server was online again, Eclipse failed to actually send it a request and kept saying there was no update site. As a consequence, the online update site now appears empty, and restarting Eclipse does not change that.
[EDIT] Even more incomprehensible, the Reload button reports success even when there's no network connection to the update site (network interface disabled).[/EDIT]
There seems to be, somewhere in the provisioning framework between the UI and my server, a cache that reports outdated information and an outdated feature version in spite of explicit requests to reload that very information.
Is there any file or folder that I can delete to have the provisioning framework reset itself? If possible, I would altogether disable its cache.
I've found out that Oomph apparently hooks into the process that retrieves update site information.
Anyway, I could recover normal operation (for now) and have the information properly reloaded by first deleting the appropriate files in C:\Users\...\.eclipse\org.eclipse.oomph.p2\cache.
By "the appropriate files", I mean that the files in that folder are named after the URLs of the repositories known to your Eclipse instances.
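If you need to do this repeatedly, a small script can find and delete the entries for one update site. Here is a minimal sketch (PHP CLI; the host name is hypothetical, and the substring match is an assumption: inspect the folder first to confirm how your repository URLs are encoded in the file names).

    <?php
    // Sketch: delete Oomph p2 cache entries whose file names mention a
    // given update-site host. File names encode repository URLs, so a
    // substring match on the host is usually enough.
    $cacheDir = getenv('USERPROFILE') . '/.eclipse/org.eclipse.oomph.p2/cache';
    $host     = 'my-update-site.example.org'; // hypothetical stale repository
    foreach (glob($cacheDir . '/*') as $file) {
        if (is_file($file) && strpos(basename($file), $host) !== false) {
            unlink($file);
            echo "Deleted $file\n";
        }
    }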

Google App Engine: Deployed Source doesn't have Local updates

I'm working with Google App Engine in Eclipse w/ JSP pages in Windows 7.
I already have an app deployed and working, but I am unable to make changes to it for some reason.
If I make changes and debug locally, my localhost page is showing the changes that I implement.
While I am not getting any errors during deployment, the same changes that work in my local debug session do not show up on the deployed app, so I can't update it.
I thought updating the version number might help, but I had no luck with this.
Any ideas? Thanks.
Are you deploying the same version (as specified in appengine-web.xml) as the default version that is running for your app? If not, you'll have to access your new deployment at http://newversion.appname.appspot.com, or change your default version in App Engine to the newly deployed one.
I have had the same problems too, especially when the changes concerned static pages. Some little things to check:
If you have set an expiration date in your app.yaml, your browser cache could be holding on to the file.
If it's specific to the online content, it could be an intermediary cache (such as a Squid server) serving the outdated content, in which case you'd have to flush that cache to get the new version.
You could start by checking the logs in the GAE console to see whether the request is received by the server; that would help you debug.
Another trick: if you're being served an outdated version of http://yourapp.appspot.com/index, try passing a dummy argument to force a fresh copy, for instance: http://yourapp.appspot.com/index?p=1
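To check whether such a cache is the culprit, you can compare a plain fetch against a cache-busted one. A minimal sketch (PHP CLI, reusing the hypothetical URL from above):

    <?php
    // Sketch: fetch the page twice, the second time with a dummy query
    // string that caches treat as a different resource. If the bodies
    // differ, an intermediary cache is serving a stale copy.
    $base  = 'http://yourapp.appspot.com/index';
    $plain = file_get_contents($base);
    $fresh = file_get_contents($base . '?p=' . time());
    echo $plain === $fresh
        ? "Same content -- probably no stale cache\n"
        : "Different content -- something is caching the old version\n";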

How to upgrade Wordpress and plugins when deploying using Capistrano?

I'm hoping someone can confirm whether or not the following scenario is an issue when deploying updates to WordPress sites and, if so, how best to manage it.
The basics:
I have a local development WordPress Multisite project for which I use Git and Capistrano to deploy to remote staging and production servers.
Everything EXCEPT the uploads and blogs.dir directories (in wp-content) is under version control. Yes, the WordPress core, themes, plugins, etc. are updated locally, committed, pushed and deployed. This means that I have to log in and activate plugins initially; they are simply installed via the Capistrano deploy.
The databases on development, staging and production are different, and I'm not concerned about trying to sync them up.
My Concern:
Many updates to plugins and to the WordPress core also perform updates to the database when auto-updating via the admin. I update the WordPress core and plugins locally on my development install, and the code for these updates ends up being committed, pushed and deployed. However, the deploy simply adds/deletes/replaces changed files on the staging and production servers. Production and staging therefore miss any updates to the database, since these usually happen as part of the auto-update process - e.g., deactivate, update, activate (run any database updates).
My Questions:
Is my concern accurate that the production and staging servers will have the latest code but be missing any database updates required by that code?
If so, does anyone have thoughts on how I can modify the Capistrano deploy code to deactivate/reactivate plugins? What about changes in WordPress itself, e.g., 3.2 to 3.3?
If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance-mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so that any updates upon activation are triggered?
Many Thanks,
Matt
It's important to note that you don't need to activate and deactivate plugins when upgrading the WordPress core from version to version. Here is an explanation from Ryan Boren on why. Depending on the plugin, though, some may have an upgrade process built into their own upgrade - that is, the upgrade of the plugin, not of WordPress. Nonetheless, I'll go through your three questions and answer them as directly as I can.
1. Is my concern about the production and staging servers having the latest code but missing any database updates required for the latest code accurate?
Yes. When updating, if there is a change to the database schema, WordPress will not function properly unless the new schema exists. When you try to access the admin side of WordPress with a db version lower than the one your WordPress version expects, it redirects you to a database upgrade page.
WordPress sets a global called $wp_db_version in the /wp-includes/version.php file and maintains the migration scripts that upgrade the database incrementally from each previous version to the next until the version number is up to date, as seen here. There is a simpler list in the FAQ showing how the revision numbers correlate to WordPress versions.
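To illustrate, the check works roughly like this. This is a sketch of what core does (see wp-admin/admin.php), not literal code, and it only runs inside a WordPress context:

    <?php
    // Rough sketch of WordPress's own check: compare the schema version
    // stored in the database with the one the code expects, and push the
    // user to the upgrade page if the database is stale.
    global $wp_db_version;                      // set in wp-includes/version.php
    if ((int) get_option('db_version') !== $wp_db_version) {
        wp_redirect(admin_url('upgrade.php'));  // runs the pending migrations
        exit;
    }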
2. If so, does anyone have thoughts on how I can modify Capistrano deploy code to deactivate/reactivate of plugins?
As I said above, you don't typically need to activate/deactivate plugins after core upgrades, unless I suppose a plugin specifically requires that you do so. If schema changes in WordPress break a plugin, the plugin developers will need to release a new version; when you upgrade that plugin, it will be shut off and restarted, and it's those developers' responsibility to make sure everything that needs to take place does so.
However, you may need to deactivate/activate separately in deployed environments such as yours, since the actual upgrade process takes place on a different machine, and thus probably against a different database from the one it will ultimately be used with.
Perhaps the best thing to do would be to have your deployment script hit a URI of a plugin within WordPress - a plugin you would write to deactivate/activate plugins, or an existing one that already does this.
It's possible some existing plugins handle parts of what you're looking for, but I take the key component of your question to be automation, and an avoidance of having to log into each environment and upgrade plugins one by one, so developing a plugin yourself that does exactly what you need might be the way to go. That is quite feasible if you make use of the tools WordPress already provides:
activate_plugin()
activate_plugins()
deactivate_plugins()
validate_plugin()
Plugin_Upgrader class (maybe)
Look through the whole /wp-admin/includes/plugin.php file to see what you might find useful, and check out the code that actually handles plugins on the admin side in /wp-admin/plugins.php - just to see how it's done. You may want to stop the deactivation hooks from wiping out the configuration of plugins that clean up after themselves, so consider passing $silent as true when deactivating a plugin.
To make this really slick, you'll probably want to grab get_option('active_plugins') to see which plugins are currently activated, and only run your script on those (making sure the plugin excludes itself from the process), as in the sketch below.
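Here is a minimal sketch of such a maintenance plugin, built from the functions listed above. The refresh_plugins query variable and its shared secret are assumptions for illustration; pick your own trigger and lock it down properly.

    <?php
    /*
    Plugin Name: Deploy Plugin Refresher (sketch)
    Description: Hypothetical helper - hit /?refresh_plugins=SECRET after a
    deploy to bounce every active plugin so activation hooks run again.
    */
    add_action('init', function () {
        // The query variable and shared secret are assumptions; use
        // something stronger (nonce, capability check) in production.
        if (!isset($_GET['refresh_plugins']) || $_GET['refresh_plugins'] !== 'SECRET') {
            return;
        }
        require_once ABSPATH . 'wp-admin/includes/plugin.php';

        $self = plugin_basename(__FILE__);
        foreach ((array) get_option('active_plugins') as $plugin) {
            if ($plugin === $self) {
                continue; // the plugin excludes itself from the process
            }
            // $silent = true skips deactivation hooks, so plugins that
            // clean up after themselves keep their configuration.
            deactivate_plugins($plugin, true);
            activate_plugin($plugin); // fires activation hooks and updates
        }
        wp_die('Active plugins reactivated.');
    });

Your Capistrano deploy task would then only need to request that URL (e.g. with curl) as its final step.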
3. What about changes in WordPress, eg, 3.2 to 3.3?
Changes from 3.2 to 3.3 should be thought of as no different from any other set of changes, so everything said here applies.
4. If Capistrano isn't the tool for this - and I need to do it more "manually" by logging into the admin - is there a maintenance mode tool/plugin that will somewhat automate the deactivation/activation of the plugins so any updates upon activation are triggered?
I don't think Capistrano will be doing any of the heavy lifting here - but it's certainly not in the way either. You should just need to hit a URI within the plugin, and that should get things rolling within the application. The important thing is that all of those WordPress functions need to be available, so you can't just run the code as an independent script.

ClickOnce upgrades leaving early versions on disk

There seems to be a problem with ClickOnce deployments.
The manifest file is executed on the client machine, and there's a check to see if a new version is available. If a new version is available, it gets copied over to the client machine. BUT the old version remains.
This can be a problem. If the application is upgraded on a regular basis, it will end up occupying a large and continually growing amount of disk space. This could be an issue at a workplace where multiple users are all logged on to the same Citrix server.
Is there a straightforward solution to ClickOnce not cleaning up after itself? Is there some setting that I'm missing?
Later Edit
This question actually states something that's incorrect. In reality ClickOnce upgrades only leave the previous version behind, and versions before that are cleaned up. I'll leave the question here (as opposed to deleting it) as this is a misunderstanding that others could have as well.
According to Microsoft, ClickOnce does clean up after itself; however, it will always leave the previous version behind to enable rollback functionality.
See http://www.sayedhashimi.com/PermaLink,guid,520010a7-6ce7-47ec-af0f-a57694bf3d41.aspx for more info.