Can someone share the procedure to upgrade Rundeck 3.3.5 to 3.4.10, in order to overcome the Log4j security vulnerability?
The process is described here. Since your instance is 3.3.5, you don't need to follow the database migration process; you can test in a non-prod environment by launching the new instance over the old one.
In any case, as good practice, please back up all your instance data and test in a non-prod environment first.
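A minimal sketch of that flow for a WAR/launcher-based install (paths, filenames, and memory settings are illustrative; stop the running 3.3.5 instance first):

# Back up the whole base directory before anything else.
cp -a /opt/rundeck /opt/rundeck-backup-3.3.5

# Launch the new WAR over the existing base directory.
export RDECK_BASE=/opt/rundeck
java -Xmx2g -jar rundeck-3.4.10.war

For deb/rpm installs the equivalent is upgrading the package; either way, point the new version at the same data and verify your jobs and execution history afterwards.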
UPDATE 04/22/2022:
Rundeck 4.1.0 uses H2 v2 as the default testing backend; please take a look at this if you're using H2 as your backend.
We've just upgraded from Airflow 2.2.4 to 2.4.2, which we deploy via k8s and Helm, using the base apache-airflow==2.4.2 Docker image with some other Python packages installed with pip. Part of the Airflow 2.4.2 upgrade replaces Tree View with Grid View in the web UI. When we go to Grid View, we see the following generic Airflow error:
Ooops!
Something bad has happened.
Airflow is used by many users, and it is very likely that others had similar problems and you can easily find a solution to your problem.
Consider following these steps:
- gather the relevant information (detailed logs with errors, reproduction steps, details of your deployment)
- find similar issues using:
  - GitHub Discussions
  - GitHub Issues
  - Stack Overflow
  - the usual search engine you use on a daily basis
- if you run Airflow on a Managed Service, consider opening an issue using the service support channels
- if you tried and have difficulty with diagnosing and fixing the problem yourself, consider creating a bug report. Make sure however, to include all relevant details and results of your investigation so far.
Python version: 3.9.15
Airflow version: 2.4.2
Node: {{deprecated by author}}
-------------------------------------------------------------------------------
Error! Please contact server admin.
Looking in our web logs only yields the following text:
[2022-12-15 01:31:52,943] {app.py:1741} ERROR - Exception on /object/grid_data [GET]
I can't find any other instance online of another Airflow user experiencing this problem, despite searching for hours.
What could be the problem here? It appears to be an internal Airflow UI issue, so it may be related to our infrastructure or setup, but I can't see why.
Also of note: some users had trouble with Grid View when the Operators in their DAGs had a parameter named params. I can rule this out as a cause, as our DAGs and Operators definitely do not have that.
How exactly did you make the update? Did you run helm uninstall airflow and then helm install apache/airflow?
Did you check that the database migration job succeeded, or even whether it was executed?
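For reference, a chart-based update that preserves the release (rather than uninstall/install) might look like this, assuming the chart repo is registered as apache-airflow and the release and namespace are both named airflow:

helm repo update
helm upgrade airflow apache-airflow/airflow -n airflow -f values.yaml
# The chart runs a database migration job; its exact name depends on your release name.
kubectl get jobs -n airflow
kubectl logs -n airflow job/airflow-run-airflow-migrations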
Looks like the issue was to do with dbt-snowflake==1.0.0 being installed as well, and it had some sort of Python package constraint conflict with apache-airflow==2.4.2. Upgrading dbt-snowflake to v1.3.0 solved the issue.
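For anyone diagnosing a similar conflict, pip's own resolver can surface it; the version pins below are the ones from this thread:

pip install "apache-airflow==2.4.2" "dbt-snowflake==1.3.0"
pip check   # reports installed packages with incompatible dependency requirements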
I need to connect Kafka with Airflow: send data from Kafka to Airflow and save it to a local file. What is the best way to do this? I think airflow-provider-kafka is the correct approach; is there anything else?
You can install a Kafka SDK and connect via a PythonOperator, but airflow-provider-kafka appears to wrap exactly that, so it seems good to use it and extend it if needed.
The only reason not to use airflow-provider-kafka is that it is developed and maintained outside of the Airflow repo, so in the future you could face compatibility issues and breaking changes that would prevent you from upgrading your Airflow version.
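A minimal sketch of the PythonOperator route, assuming the kafka-python client; the topic, broker address, and output path are illustrative:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from kafka import KafkaConsumer


def dump_kafka_to_file():
    # Read whatever is currently on the topic and append it to a local file.
    consumer = KafkaConsumer(
        "my-topic",                       # illustrative topic name
        bootstrap_servers="kafka:9092",   # illustrative broker address
        auto_offset_reset="earliest",
        consumer_timeout_ms=10_000,       # stop iterating when no new messages arrive
    )
    with open("/tmp/kafka_dump.txt", "a") as f:
        for message in consumer:
            f.write(message.value.decode("utf-8") + "\n")


with DAG(
    dag_id="kafka_to_file",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    PythonOperator(task_id="dump_kafka_to_file", python_callable=dump_kafka_to_file)

The provider package wraps this same pattern in dedicated operators and hooks, which is what the answer below recommends building on.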
Use the provider.
The Kafka provider in Airflow will not break when you upgrade your Airflow version; this is guaranteed by the Airflow project using and adhering to SemVer. Furthermore, the provider is maintained by Astronomer, which has a vested commercial interest in these providers.
(Unless/Until Airflow 3 comes out, at which time it will probably be upgraded)
We have an AWS Elasticsearch domain we created through CloudFormation running version 6.3 of ES. When we update the ElasticsearchVersion property in the template, it replaces the Elasticsearch domain with a new one running the new version instead of updating the existing one.
How does anyone upgrade their Elasticsearch domains that were deployed with CF if it doesn't do an in-place upgrade? I am almost thinking at this point I need to create and manage my ES domains through boto3.
Any insight or ideas would be greatly appreciated.
This is now possible (as of 25/11/2019) by setting an UpdatePolicy with EnableVersionUpgrade: True.
For example:
ElasticSearchDomain:
  Type: AWS::Elasticsearch::Domain
  Properties:
    ...
  UpdatePolicy:
    EnableVersionUpgrade: true
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-updatepolicy.html#cfn-attributes-updatepolicy-upgradeelasticsearchdomain
Received correspondence back from AWS Support regarding an ES in-place upgrade through CloudFormation.
tl;dr It is currently not supported but a feature request is already active for this functionality.
You are correct in saying that an ES in-place upgrade is not supported by CFN at this moment. Upgrading ES from 6.3 to 6.4 via the CLI or AWS Console will keep the existing domain, but CloudFormation will launch a new domain and discard the existing one.
I see that there is already an active feature request for this. I will go ahead and pass your feedback about this matter to our internal team as well.
Unfortunately, AWS Support does not have visibility into the service enhancement roadmap, so I would not be able to provide you with an exact time frame.
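Until CloudFormation supports the upgrade, the CLI/Console route the support engineer mentions can also be scripted. A sketch with boto3; the domain name and target version are illustrative:

import boto3

client = boto3.client("es")

# Dry-run the upgrade eligibility check first.
client.upgrade_elasticsearch_domain(
    DomainName="my-domain",
    TargetVersion="6.4",
    PerformCheckOnly=True,
)

# Then perform the actual in-place upgrade.
client.upgrade_elasticsearch_domain(DomainName="my-domain", TargetVersion="6.4")

Note that an out-of-band upgrade will drift from the template, so the stack needs to be reconciled with the new version afterwards.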
At the latest since this check-in, the service-connector cf plugin seems to be gone for SwisscomDev.
The official link to the plugin simply returns a 404.
Isn't it supported anymore? What's the alternative? And did I miss communication about it?
Isn't it supported anymore? What's the alternative?
Yes, our proprietary CF CLI client plugin has been phased out. The alternative is cf ssh (from upstream). See Accessing Services with SSH on docs.developer.swisscom.com.
The Migrate from legacy MariaDB to MariaDB Ent guide has a step-by-step howto for cf ssh. Please adapt it for your service and port.
cf ssh proxy-app -L 13000:<old-db-host>:<old-db-port>
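With the tunnel up, the old database is then reachable on localhost; for a MariaDB migration that could look like this (credentials come from your service binding):

mysql -h 127.0.0.1 -P 13000 -u <user> -p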
Here on #swisscomdev there is also a posting, Alternative to Swisscom CF plugin named Service Connector, with a MongoDB Ops Manager example.
We have an edge case for an enterprise customer that may still need a cf sc feature (not available in cf ssh). Investigation ongoing.
And did I miss communication about it?
Sorry, we failed in our communication.
Sorry that you noticed this change in our GitHub repo first. We wanted to update the docs first, then communicate the EOL. We somehow forgot it in yesterday's newsletter.
I have a Service Fabric application with a few services underneath it. They are all currently sitting at version 1.0.0.
I deploy an update out to the cluster for version 2.0.0. Everything is running fine and the deployment succeeds. Then I notice a very large bug in the new version. Is there a way to manually roll back to version 1.0.0? The only thing I have found is automatic rollback during an upgrade.
Matt's answer is correct, but I'll elaborate a bit on it here.
The key is in understanding the different steps during application deployment:
Copy
Register
Create
Upgrade
Visual Studio rolls these up into single "publish" and "upgrade" operations to make it easy and convenient. But these are actually individual commands in the Service Fabric management API (through PowerShell, C# or HTTP). Let's take a quick look at what these steps are:
Copy:
This just takes your compiled application package and copies it up to the cluster. No big deal.
Register:
This is the important step in your case. Register basically tells the cluster that it can now create instances of your application. Most importantly, you can register multiple versions of the same application. At this point, your applications aren't running yet.
Create:
This is where instances of your registered applications are created and start running.
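As a sketch, these steps map to individual cmdlets in the Service Fabric PowerShell module (cluster endpoint, package path, and names are illustrative):

# Connect to the cluster.
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster:19000"

# Copy: upload the compiled package to the cluster's image store.
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath ".\pkg\FooType" `
    -ImageStoreConnectionString "fabric:ImageStore" `
    -ApplicationPackagePathInImageStore "FooType"

# Register: make this version of the type available for instantiation.
Register-ServiceFabricApplicationType -ApplicationPathInImageStore "FooType"

# Create: start a running instance of the registered type.
New-ServiceFabricApplication -ApplicationName "fabric:/FooApp" `
    -ApplicationTypeName "FooType" -ApplicationTypeVersion "1.0.0"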
Before we get to upgrade, let's look at what's on your cluster. The first time you go through this deployment process with version 1.0.0 of your application (call it FooType), you'll have just that one type registered:
FooType 1.0.0
Now you're ready to upgrade. You first copy your new application package with a new version (2.0.0) up to the cluster. Then, you register the new version of your application. Now you have two versions of that type registered:
FooType 1.0.0
FooType 2.0.0
Then when you run the upgrade command, Service Fabric takes your instance of 1.0.0 and upgrades it to 2.0.0. If you need to roll it back once the upgrade is complete, you simply use the same upgrade command to "upgrade" the application instance from 2.0.0 back to 1.0.0. You can do this because 1.0.0 is still registered in the cluster. Note that the version numbers are in fact not meaningful to Service Fabric other than that they are different strings. I can use "orange" and "banana" as my version strings if I want.
So the key here is that when you do a "publish" from Visual Studio to upgrade your application, it's doing all of these steps: it's copying, registering, and upgrading. In your case, you don't actually want to re-register 1.0.0 because it's already registered on the cluster. You just want to issue the upgrade command again.
For an even longer explanation, see: Blue/Green Deployments with Azure ServiceFabric
Just follow the same upgrade procedure, but targeting your 1.0.0 version instead. "Rollback" is just an "upgrade" to your older version.
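Concretely, because 1.0.0 is still registered, that "upgrade back" might look like this in PowerShell (application and cluster names are illustrative):

Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster:19000"

# No Copy/Register step needed: 1.0.0 is still registered on the cluster.
Start-ServiceFabricApplicationUpgrade -ApplicationName "fabric:/FooApp" `
    -ApplicationTypeVersion "1.0.0" -Monitored -FailureAction Rollback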